package runtime
Import Path
runtime (on go.dev)
Dependency Relation
imports 6 packages, and is imported by 14 packages
Involved Source Files
alg.go
atomic_pointer.go
cgo.go
cgocall.go
cgocallback.go
cgocheck.go
chan.go
checkptr.go
compiler.go
complex.go
cpuflags.go
cpuflags_amd64.go
cpuprof.go
cputicks.go
debug.go
debugcall.go
debuglog.go
debuglog_off.go
defs_darwin_amd64.go
env_posix.go
error.go
Package runtime contains operations that interact with Go's runtime system,
such as functions to control goroutines. It also includes the low-level type information
used by the reflect package; see reflect's documentation for the programmable
interface to the run-time type system.
Environment Variables
The following environment variables ($name or %name%, depending on the host
operating system) control the run-time behavior of Go programs. The meanings
and use may change from release to release.
The GOGC variable sets the initial garbage collection target percentage.
A collection is triggered when the ratio of freshly allocated data to live data
remaining after the previous collection reaches this percentage. The default
is GOGC=100. Setting GOGC=off disables the garbage collector entirely.
The runtime/debug package's SetGCPercent function allows changing this
percentage at run time. See https://golang.org/pkg/runtime/debug/#SetGCPercent.
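A minimal sketch of adjusting the percentage at run time through runtime/debug (the value 200 is only illustrative; SetGCPercent returns the previous setting, which reflects GOGC):

package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// SetGCPercent returns the previous setting: 100 by default,
	// or a negative value when GOGC=off.
	old := debug.SetGCPercent(200) // trade more heap for fewer collections
	fmt.Println("previous GC percent:", old)

	// Restore the original setting.
	debug.SetGCPercent(old)
}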
The GODEBUG variable controls debugging variables within the runtime.
It is a comma-separated list of name=val pairs setting these named variables:
allocfreetrace: setting allocfreetrace=1 causes every allocation to be
profiled and a stack trace printed on each object's allocation and free.
clobberfree: setting clobberfree=1 causes the garbage collector to
clobber the memory content of an object with bad content when it frees
the object.
cgocheck: setting cgocheck=0 disables all checks for packages
using cgo to incorrectly pass Go pointers to non-Go code.
Setting cgocheck=1 (the default) enables relatively cheap
checks that may miss some errors. Setting cgocheck=2 enables
expensive checks that should not miss any errors, but will
cause your program to run slower.
efence: setting efence=1 causes the allocator to run in a mode
where each object is allocated on a unique page and addresses are
never recycled.
gccheckmark: setting gccheckmark=1 enables verification of the
garbage collector's concurrent mark phase by performing a
second mark pass while the world is stopped. If the second
pass finds a reachable object that was not found by concurrent
mark, the garbage collector will panic.
gcpacertrace: setting gcpacertrace=1 causes the garbage collector to
print information about the internal state of the concurrent pacer.
gcshrinkstackoff: setting gcshrinkstackoff=1 disables moving goroutines
onto smaller stacks. In this mode, a goroutine's stack can only grow.
gcstoptheworld: setting gcstoptheworld=1 disables concurrent garbage collection,
making every garbage collection a stop-the-world event. Setting gcstoptheworld=2
also disables concurrent sweeping after the garbage collection finishes.
gctrace: setting gctrace=1 causes the garbage collector to emit a single line to standard
error at each collection, summarizing the amount of memory collected and the
length of the pause. The format of this line is subject to change.
Currently, it is:
gc # @#s #%: #+#+# ms clock, #+#/#/#+# ms cpu, #->#-># MB, # MB goal, # P
where the fields are as follows:
gc # the GC number, incremented at each GC
@#s time in seconds since program start
#% percentage of time spent in GC since program start
#+...+# wall-clock/CPU times for the phases of the GC
#->#-># MB heap size at GC start, at GC end, and live heap
# MB goal goal heap size
# P number of processors used
The phases are stop-the-world (STW) sweep termination, concurrent
mark and scan, and STW mark termination. The CPU times
for mark/scan are broken down into assist time (GC performed in
line with allocation), background GC time, and idle GC time.
If the line ends with "(forced)", this GC was forced by a
runtime.GC() call.
inittrace: setting inittrace=1 causes the runtime to emit a single line to standard
error for each package with init work, summarizing the execution time and memory
allocation. No information is printed for inits executed as part of plugin loading
and for packages without both user defined and compiler generated init work.
The format of this line is subject to change. Currently, it is:
init # @#ms, # ms clock, # bytes, # allocs
where the fields are as follows:
init # the package name
@# ms time in milliseconds when the init started since program start
# clock wall-clock time for package initialization work
# bytes memory allocated on the heap
# allocs number of heap allocations
madvdontneed: setting madvdontneed=0 will use MADV_FREE
instead of MADV_DONTNEED on Linux when returning memory to the
kernel. This is more efficient, but means RSS numbers will
drop only when the OS is under memory pressure.
memprofilerate: setting memprofilerate=X will update the value of runtime.MemProfileRate.
When set to 0 memory profiling is disabled. Refer to the description of
MemProfileRate for the default value.
invalidptr: invalidptr=1 (the default) causes the garbage collector and stack
copier to crash the program if an invalid pointer value (for example, 1)
is found in a pointer-typed location. Setting invalidptr=0 disables this check.
This should only be used as a temporary workaround to diagnose buggy code.
The real fix is to not store integers in pointer-typed locations.
sbrk: setting sbrk=1 replaces the memory allocator and garbage collector
with a trivial allocator that obtains memory from the operating system and
never reclaims any memory.
scavenge: scavenge=1 enables debugging mode of heap scavenger.
scavtrace: setting scavtrace=1 causes the runtime to emit a single line to standard
error, roughly once per GC cycle, summarizing the amount of work done by the
scavenger as well as the total amount of memory returned to the operating system
and an estimate of physical memory utilization. The format of this line is subject
to change, but currently it is:
scav # # KiB work, # KiB total, #% util
where the fields are as follows:
scav # the scavenge cycle number
# KiB work the amount of memory returned to the OS since the last line
# KiB total the total amount of memory returned to the OS
#% util the fraction of all unscavenged memory which is in-use
If the line ends with "(forced)", then scavenging was forced by a
debug.FreeOSMemory() call.
scheddetail: setting schedtrace=X and scheddetail=1 causes the scheduler to emit
detailed multiline info every X milliseconds, describing state of the scheduler,
processors, threads and goroutines.
schedtrace: setting schedtrace=X causes the scheduler to emit a single line to standard
error every X milliseconds, summarizing the scheduler state.
tracebackancestors: setting tracebackancestors=N extends tracebacks with the stacks at
which goroutines were created, where N limits the number of ancestor goroutines to
report. This also extends the information returned by runtime.Stack. Ancestor's goroutine
IDs will refer to the ID of the goroutine at the time of creation; it's possible for this
ID to be reused for another goroutine. Setting N to 0 will report no ancestry information.
asyncpreemptoff: asyncpreemptoff=1 disables signal-based
asynchronous goroutine preemption. This makes some loops
non-preemptible for long periods, which may delay GC and
goroutine scheduling. This is useful for debugging GC issues
because it also disables the conservative stack scanning used
for asynchronously preempted goroutines.
The net, net/http, and crypto/tls packages also refer to debugging variables in GODEBUG.
See the documentation for those packages for details.
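The gctrace and scavtrace entries above mention a "(forced)" suffix. A small sketch, assuming the program is started with GODEBUG=gctrace=1,scavtrace=1 set in the environment, that forces both kinds of output:

package main

import (
	"runtime"
	"runtime/debug"
)

func main() {
	// Give the collector something to do.
	data := make([][]byte, 0, 256)
	for i := 0; i < 256; i++ {
		data = append(data, make([]byte, 1<<16))
	}
	_ = data

	// With gctrace=1, this emits a gc line ending in "(forced)".
	runtime.GC()

	// With scavtrace=1, this emits a scav line ending in "(forced)".
	debug.FreeOSMemory()
}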
The GOMAXPROCS variable limits the number of operating system threads that
can execute user-level Go code simultaneously. There is no limit to the number of threads
that can be blocked in system calls on behalf of Go code; those do not count against
the GOMAXPROCS limit. This package's GOMAXPROCS function queries and changes
the limit.
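A minimal sketch of querying and changing the limit with this package's GOMAXPROCS function (the value 2 is arbitrary):

package main

import (
	"fmt"
	"runtime"
)

func main() {
	// An argument of 0 queries the current limit without changing it.
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
	fmt.Println("NumCPU:", runtime.NumCPU())

	// A positive argument sets a new limit and returns the previous one.
	prev := runtime.GOMAXPROCS(2)
	runtime.GOMAXPROCS(prev) // restore the original limit
}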
The GORACE variable configures the race detector, for programs built using -race.
See https://golang.org/doc/articles/race_detector.html for details.
The GOTRACEBACK variable controls the amount of output generated when a Go
program fails due to an unrecovered panic or an unexpected runtime condition.
By default, a failure prints a stack trace for the current goroutine,
eliding functions internal to the run-time system, and then exits with exit code 2.
The failure prints stack traces for all goroutines if there is no current goroutine
or the failure is internal to the run-time.
GOTRACEBACK=none omits the goroutine stack traces entirely.
GOTRACEBACK=single (the default) behaves as described above.
GOTRACEBACK=all adds stack traces for all user-created goroutines.
GOTRACEBACK=system is like "all" but adds stack frames for run-time functions
and shows goroutines created internally by the run-time.
GOTRACEBACK=crash is like "system" but crashes in an operating system-specific
manner instead of exiting. For example, on Unix systems, the crash raises
SIGABRT to trigger a core dump.
For historical reasons, the GOTRACEBACK settings 0, 1, and 2 are synonyms for
none, all, and system, respectively.
The runtime/debug package's SetTraceback function allows increasing the
amount of output at run time, but it cannot reduce the amount below that
specified by the environment variable.
See https://golang.org/pkg/runtime/debug/#SetTraceback.
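A minimal sketch of raising the traceback level at run time, as described above (it cannot lower the level below the GOTRACEBACK setting):

package main

import "runtime/debug"

func main() {
	// Equivalent to running with at least GOTRACEBACK=all.
	debug.SetTraceback("all")

	// An unrecovered panic now prints stack traces for all
	// user-created goroutines before exiting.
	panic("example failure")
}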
The GOARCH, GOOS, GOPATH, and GOROOT environment variables complete
the set of Go environment variables. They influence the building of Go programs
(see https://golang.org/cmd/go and https://golang.org/pkg/go/build).
GOARCH, GOOS, and GOROOT are recorded at compile time and made available by
constants or functions in this package, but they do not influence the execution
of the run-time system.
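A minimal sketch of reading the recorded values from this package:

package main

import (
	"fmt"
	"runtime"
)

func main() {
	fmt.Println("GOOS:", runtime.GOOS)         // compile-time constant
	fmt.Println("GOARCH:", runtime.GOARCH)     // compile-time constant
	fmt.Println("GOROOT:", runtime.GOROOT())   // recorded at compile time
	fmt.Println("version:", runtime.Version()) // Go version used to build
	fmt.Println("compiler:", runtime.Compiler)
}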
fastlog2.go
fastlog2table.go
float.go
hash64.go
heapdump.go
histogram.go
iface.go
lfstack.go
lfstack_64bit.go
lock_sema.go
lockrank.go
lockrank_off.go
malloc.go
map.go
map_fast32.go
map_fast64.go
map_faststr.go
mbarrier.go
mbitmap.go
mcache.go
mcentral.go
mcheckmark.go
mem_darwin.go
metrics.go
mfinal.go
mfixalloc.go
mgc.go
mgcmark.go
mgcscavenge.go
mgcstack.go
mgcsweep.go
mgcwork.go
mheap.go
mpagealloc.go
mpagealloc_64bit.go
mpagecache.go
mpallocbits.go
mprof.go
mranges.go
msan0.go
msize.go
mspanset.go
mstats.go
mwbbuf.go
nbpipe_pipe.go
netpoll.go
netpoll_kqueue.go
os_darwin.go
os_nonopenbsd.go
panic.go
plugin.go
preempt.go
preempt_nonwindows.go
print.go
proc.go
profbuf.go
proflabel.go
race0.go
rdebug.go
relax_stub.go
runtime.go
runtime1.go
runtime2.go
rwmutex.go
select.go
sema.go
signal_amd64.go
signal_darwin.go
signal_darwin_amd64.go
signal_unix.go
sigqueue.go
sizeclasses.go
slice.go
softfloat64.go
stack.go
string.go
stubs.go
stubs_amd64.go
stubs_nonlinux.go
symtab.go
sys_darwin.go
sys_libc.go
sys_nonppc64x.go
sys_x86.go
time.go
time_nofake.go
timestub.go
trace.go
traceback.go
type.go
typekind.go
utf8.go
vdso_in_none.go
write_err.go
asm_ppc64x.h
funcdata.h
go_tls.h
textflag.h
asm.s
asm_amd64.s
duff_amd64.s
memclr_amd64.s
memmove_amd64.s
preempt_amd64.s
rt0_darwin_amd64.s
sys_darwin_amd64.s
Code Examples
package main

import (
	"fmt"
	"runtime"
	"strings"
)

func main() {
	c := func() {
		// Ask runtime.Callers for up to 10 pcs, including runtime.Callers itself.
		pc := make([]uintptr, 10)
		n := runtime.Callers(0, pc)
		if n == 0 {
			// No pcs available. Stop now.
			// This can happen if the first argument to runtime.Callers is large.
			return
		}

		pc = pc[:n] // pass only valid pcs to runtime.CallersFrames
		frames := runtime.CallersFrames(pc)

		// Loop to get frames.
		// A fixed number of pcs can expand to an indefinite number of Frames.
		for {
			frame, more := frames.Next()

			// To keep this example's output stable
			// even if there are changes in the testing package,
			// stop unwinding when we leave package runtime.
			if !strings.Contains(frame.File, "runtime/") {
				break
			}
			fmt.Printf("- more:%v | %s\n", more, frame.Function)
			if !more {
				break
			}
		}
	}

	b := func() { c() }
	a := func() { b() }

	a()
}
Package-Level Type Names (total 268, of which 9 are exported)
BlockProfileRecord describes blocking events originated
at a particular call sequence (stack trace).
Count int64
Cycles int64
StackRecord StackRecord
Stack0 [32]uintptr // stack trace for this record; ends at first 0 entry
Stack returns the stack trace associated with the record,
a prefix of r.Stack0.
func BlockProfile(p []BlockProfileRecord) (n int, ok bool)
func MutexProfile(p []BlockProfileRecord) (n int, ok bool)
The Error interface identifies a run time error.
( T) Error() builtin.string
RuntimeError is a no-op function but
serves to distinguish types that are run time
errors from ordinary errors: a type is a
run time error if it has a RuntimeError method.
*TypeAssertionError
boundsError
errorAddressString
errorString
plainError
T : error
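A minimal sketch of distinguishing run-time errors from other panic values with a type assertion (the nil-map write is only one way to trigger such an error):

package main

import (
	"fmt"
	"runtime"
)

func main() {
	defer func() {
		if r := recover(); r != nil {
			// Failures detected by the runtime panic with a value
			// that implements runtime.Error.
			if re, ok := r.(runtime.Error); ok {
				fmt.Println("runtime error:", re.Error())
				return
			}
			fmt.Println("other panic:", r)
		}
	}()

	var m map[string]int
	m["x"] = 1 // writing to a nil map is a run-time error
}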
Frame is the information returned by Frames for each call frame.
Entry point program counter for the function; may be zero
if not known. If Func is not nil then Entry ==
Func.Entry().
File and Line are the file name and line number of the
location in this frame. For non-leaf frames, this will be
the location of a call. These may be the empty string and
zero, respectively, if not known.
Func is the Func value of this call frame. This may be nil
for non-Go code or fully inlined functions.
Function is the package path-qualified function name of
this call frame. If non-empty, this string uniquely
identifies a single function in the program.
This may be the empty string if not known.
If Func is not nil then Function == Func.Name().
Line int
PC is the program counter for the location in this frame.
For a frame that calls another frame, this will be the
program counter of a call instruction. Because of inlining,
multiple frames may have the same PC value, but different
symbolic information.
The runtime's internal view of the function. This field
is set (funcInfo.valid() returns true) only for Go functions,
not for C functions.
func (*Frames).Next() (frame Frame, more bool)
func allFrames(pcs []uintptr) []Frame
func expandCgoFrames(pc uintptr) []Frame
func traceFrameForPC(buf traceBufPtr, pid int32, f Frame) (traceFrame, traceBufPtr)
Frames may be used to get function/file/line information for a
slice of PC values returned by Callers.
callers is a slice of PCs that have not yet been expanded to frames.
frameStore [2]Frame
frames is a slice of Frames that have yet to be returned.
Next returns frame information for the next caller.
If more is false, there are no more callers (the Frame value is valid).
func CallersFrames(callers []uintptr) *Frames
A Func represents a Go function in the running binary.
// unexported field to disallow conversions
Entry returns the entry address of the function.
FileLine returns the file name and line number of the
source code corresponding to the program counter pc.
The result will not be accurate if pc is not a program
counter within f.
Name returns the name of the function.
(*T) funcInfo() funcInfo
(*T) raw() *_func
*T : github.com/neo4j/neo4j-go-driver/v4/neo4j.DatabaseInfo
func FuncForPC(pc uintptr) *Func
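A minimal sketch of resolving symbol information for the current call site with FuncForPC (runtime.Caller supplies the pc; the printed output is illustrative):

package main

import (
	"fmt"
	"runtime"
)

func main() {
	pc, _, _, ok := runtime.Caller(0) // pc of this call site
	if !ok {
		return
	}
	f := runtime.FuncForPC(pc)
	if f == nil {
		return
	}
	file, line := f.FileLine(pc)
	fmt.Printf("%s (entry %#x) at %s:%d\n", f.Name(), f.Entry(), file, line)
}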
A MemProfileRecord describes the live objects allocated
by a particular call sequence (stack trace).
AllocBytes int64 // number of bytes allocated, freed
AllocObjects int64 // number of objects allocated, freed
FreeBytes int64 // number of bytes allocated, freed
FreeObjects int64 // number of objects allocated, freed
Stack0 [32]uintptr // stack trace for this record; ends at first 0 entry
InUseBytes returns the number of bytes in use (AllocBytes - FreeBytes).
InUseObjects returns the number of objects in use (AllocObjects - FreeObjects).
Stack returns the stack trace associated with the record,
a prefix of r.Stack0.
func MemProfile(p []MemProfileRecord, inuseZero bool) (n int, ok bool)
func record(r *MemProfileRecord, b *bucket)
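A sketch of the usual grow-and-retry pattern for the raw MemProfile API: when the slice is too small, MemProfile reports the required record count without filling it (the headroom of 50 records is arbitrary):

package main

import (
	"fmt"
	"runtime"
)

func main() {
	var records []runtime.MemProfileRecord
	for {
		n, ok := runtime.MemProfile(records, false)
		if ok {
			records = records[:n]
			break
		}
		// Too small: allocate with headroom and try again, since new
		// records may appear between calls.
		records = make([]runtime.MemProfileRecord, n+50)
	}

	var inUse int64
	for i := range records {
		inUse += records[i].InUseBytes() // AllocBytes - FreeBytes
	}
	fmt.Println("records:", len(records), "in-use bytes:", inUse)
}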
A MemStats records statistics about the memory allocator.
Alloc is bytes of allocated heap objects.
This is the same as HeapAlloc (see below).
BuckHashSys is bytes of memory in profiling bucket hash tables.
BySize reports per-size class allocation statistics.
BySize[N] gives statistics for allocations of size S where
BySize[N-1].Size < S ≤ BySize[N].Size.
This does not report allocations larger than BySize[60].Size.
DebugGC is currently unused.
EnableGC indicates that GC is enabled. It is always true,
even if GOGC=off.
Frees is the cumulative count of heap objects freed.
GCCPUFraction is the fraction of this program's available
CPU time used by the GC since the program started.
GCCPUFraction is expressed as a number between 0 and 1,
where 0 means GC has consumed none of this program's CPU. A
program's available CPU time is defined as the integral of
GOMAXPROCS since the program started. That is, if
GOMAXPROCS is 2 and a program has been running for 10
seconds, its "available CPU" is 20 seconds. GCCPUFraction
does not include CPU time used for write barrier activity.
This is the same as the fraction of CPU reported by
GODEBUG=gctrace=1.
GCSys is bytes of memory in garbage collection metadata.
HeapAlloc is bytes of allocated heap objects.
"Allocated" heap objects include all reachable objects, as
well as unreachable objects that the garbage collector has
not yet freed. Specifically, HeapAlloc increases as heap
objects are allocated and decreases as the heap is swept
and unreachable objects are freed. Sweeping occurs
incrementally between GC cycles, so these two processes
occur simultaneously, and as a result HeapAlloc tends to
change smoothly (in contrast with the sawtooth that is
typical of stop-the-world garbage collectors).
HeapIdle is bytes in idle (unused) spans.
Idle spans have no objects in them. These spans could be
(and may already have been) returned to the OS, or they can
be reused for heap allocations, or they can be reused as
stack memory.
HeapIdle minus HeapReleased estimates the amount of memory
that could be returned to the OS, but is being retained by
the runtime so it can grow the heap without requesting more
memory from the OS. If this difference is significantly
larger than the heap size, it indicates there was a recent
transient spike in live heap size.
HeapInuse is bytes in in-use spans.
In-use spans have at least one object in them. These spans
can only be used for other objects of roughly the same
size.
HeapInuse minus HeapAlloc estimates the amount of memory
that has been dedicated to particular size classes, but is
not currently being used. This is an upper bound on
fragmentation, but in general this memory can be reused
efficiently.
HeapObjects is the number of allocated heap objects.
Like HeapAlloc, this increases as objects are allocated and
decreases as the heap is swept and unreachable objects are
freed.
HeapReleased is bytes of physical memory returned to the OS.
This counts heap memory from idle spans that was returned
to the OS and has not yet been reacquired for the heap.
HeapSys is bytes of heap memory obtained from the OS.
HeapSys measures the amount of virtual address space
reserved for the heap. This includes virtual address space
that has been reserved but not yet used, which consumes no
physical memory, but tends to be small, as well as virtual
address space for which the physical memory has been
returned to the OS after it became unused (see HeapReleased
for a measure of the latter).
HeapSys estimates the largest size the heap has had.
LastGC is the time the last garbage collection finished, as
nanoseconds since 1970 (the UNIX epoch).
Lookups is the number of pointer lookups performed by the
runtime.
This is primarily useful for debugging runtime internals.
MCacheInuse is bytes of allocated mcache structures.
MCacheSys is bytes of memory obtained from the OS for
mcache structures.
MSpanInuse is bytes of allocated mspan structures.
MSpanSys is bytes of memory obtained from the OS for mspan
structures.
Mallocs is the cumulative count of heap objects allocated.
The number of live objects is Mallocs - Frees.
NextGC is the target heap size of the next GC cycle.
The garbage collector's goal is to keep HeapAlloc ≤ NextGC.
At the end of each GC cycle, the target for the next cycle
is computed based on the amount of reachable data and the
value of GOGC.
NumForcedGC is the number of GC cycles that were forced by
the application calling the GC function.
NumGC is the number of completed GC cycles.
OtherSys is bytes of memory in miscellaneous off-heap
runtime allocations.
PauseEnd is a circular buffer of recent GC pause end times,
as nanoseconds since 1970 (the UNIX epoch).
This buffer is filled the same way as PauseNs. There may be
multiple pauses per GC cycle; this records the end of the
last pause in a cycle.
PauseNs is a circular buffer of recent GC stop-the-world
pause times in nanoseconds.
The most recent pause is at PauseNs[(NumGC+255)%256]. In
general, PauseNs[N%256] records the time paused in the most
recent N%256th GC cycle. There may be multiple pauses per
GC cycle; this is the sum of all pauses during a cycle.
PauseTotalNs is the cumulative nanoseconds in GC
stop-the-world pauses since the program started.
During a stop-the-world pause, all goroutines are paused
and only the garbage collector can run.
StackInuse is bytes in stack spans.
In-use stack spans have at least one stack in them. These
spans can only be used for other stacks of the same size.
There is no StackIdle because unused stack spans are
returned to the heap (and hence counted toward HeapIdle).
StackSys is bytes of stack memory obtained from the OS.
StackSys is StackInuse, plus any memory obtained directly
from the OS for OS thread stacks (which should be minimal).
Sys is the total bytes of memory obtained from the OS.
Sys is the sum of the XSys fields below. Sys measures the
virtual address space reserved by the Go runtime for the
heap, stacks, and other internal data structures. It's
likely that not all of the virtual address space is backed
by physical memory at any given moment, though in general
it all was at some point.
TotalAlloc is cumulative bytes allocated for heap objects.
TotalAlloc increases as heap objects are allocated, but
unlike Alloc and HeapAlloc, it does not decrease when
objects are freed.
func ReadMemStats(m *MemStats)
func dumpmemstats(m *MemStats)
func mdump(m *MemStats)
func readmemstats_m(stats *MemStats)
func writeheapdump_m(fd uintptr, m *MemStats)
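A minimal sketch of reading a snapshot with ReadMemStats and a few of the fields documented above:

package main

import (
	"fmt"
	"runtime"
)

func main() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)

	fmt.Println("HeapAlloc bytes:", m.HeapAlloc)
	fmt.Println("Sys bytes:", m.Sys)
	fmt.Println("completed GC cycles:", m.NumGC)
	// As noted above, the number of live objects is Mallocs - Frees.
	fmt.Println("live objects:", m.Mallocs-m.Frees)
}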
A StackRecord describes a single execution stack.
Stack0 [32]uintptr // stack trace for this record; ends at first 0 entry
Stack returns the stack trace associated with the record,
a prefix of r.Stack0.
func GoroutineProfile(p []StackRecord) (n int, ok bool)
func ThreadCreateProfile(p []StackRecord) (n int, ok bool)
func goroutineProfileWithLabels(p []StackRecord, labels []unsafe.Pointer) (n int, ok bool)
func runtime_goroutineProfileWithLabels(p []StackRecord, labels []unsafe.Pointer) (n int, ok bool)
func saveg(pc, sp uintptr, gp *g, r *StackRecord)
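A sketch of using GoroutineProfile with the same grow-and-retry pattern as MemProfile, since goroutines may be created between calls (the headroom of 10 is arbitrary):

package main

import (
	"fmt"
	"runtime"
)

func main() {
	var recs []runtime.StackRecord
	for {
		n, ok := runtime.GoroutineProfile(recs)
		if ok {
			recs = recs[:n]
			break
		}
		recs = make([]runtime.StackRecord, n+10)
	}

	for i := range recs {
		// Stack returns the non-zero prefix of Stack0.
		fmt.Println("goroutine stack pcs:", recs[i].Stack())
	}
}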
A _defer holds an entry on the list of deferred calls.
If you add a field here, add code to clear it in freedefer and deferProcStack
This struct must match the code in cmd/compile/internal/gc/reflect.go:deferstruct
and cmd/compile/internal/gc/ssa.go:(*state).call.
Some defers will be allocated on the stack and some on the heap.
All defers are logically part of the stack, so write barriers to
initialize them are not required. All defers must be manually scanned,
and for heap defers, marked.
// panic that is running defer
If openDefer is true, the fields below record values about the stack
frame and associated function that has the open-coded defer(s). sp
above will be the sp for the frame, and pc will be address of the
deferreturn call in the function.
// funcdata for the function associated with the frame
// can be nil for open-coded defers
framepc is the current pc associated with the stack frame. Together,
with sp above (which is the sp associated with the stack frame),
framepc/sp can be used as pc/sp pair to continue a stack trace via
gentraceback().
heap bool
link *_defer
openDefer indicates that this _defer is for a frame with open-coded
defers. We have only one defer record for the entire frame (which may
currently have 0, 1, or more defers active).
// pc at time of defer
// includes both arguments and results
// sp at time of defer
started bool
// value of varp for the stack frame
func newdefer(siz int32) *_defer
func deferArgs(d *_defer) unsafe.Pointer
func deferprocStack(d *_defer)
func freedefer(d *_defer)
func runOpenDeferFrame(gp *g, d *_defer) bool
Layout of in-memory per-function information prepared by linker
See https://golang.org/s/go12symtab.
Keep in sync with linker (../cmd/link/internal/ld/pcln.go:/pclntab)
and with package debug/gosym and with symtab.go in package runtime.
// in/out args size
// runtime.cutab offset of this function's CU
// offset of start of a deferreturn call instruction from entry, if any.
// start pc
// set for certain special runtime functions
// function name
// must be last
npcdata uint32
pcfile uint32
pcln uint32
pcsp uint32
func (*Func).raw() *_func
A _panic holds information about an active panic.
A _panic value must only ever live on the stack.
The argp and link fields are stack pointers, but don't need special
handling during stack growth: because they are pointer-typed and
_panic values only live on the stack, regular stack pointer
adjustment takes care of them.
// the panic was aborted
// argument to panic
// pointer to arguments of deferred call run during panic; cannot move - known to liblink
goexit bool
// link to earlier panic
// where to return to in runtime if this panic is bypassed
// whether this panic is over
// where to return to in runtime if this panic is bypassed
func fatalpanic(msgs *_panic)
func preprintpanics(p *_panic)
func printpanics(p *_panic)
func reflectcallSave(p *_panic, fn, arg unsafe.Pointer, argsize uint32)
Needs to be in sync with ../cmd/link/internal/ld/decodesym.go:/^func.commonsize,
../cmd/compile/internal/gc/reflect.go:/^func.dcommontype and
../reflect/type.go:/^type.rtype.
../internal/reflectlite/type.go:/^type.rtype.
align uint8
function for comparing objects of this type
(ptr to object A, ptr to object B) -> ==?
fieldAlign uint8
gcdata stores the GC type data for the garbage collector.
If the KindGCProg bit is set in kind, gcdata is a GC program.
Otherwise it is a ptrmask bitmap. See mbitmap.go for details.
hash uint32
kind uint8
ptrToThis typeOff
// size of memory prefix holding all pointers
size uintptr
str nameOff
tflag tflag
(*T) name() string
(*T) nameOff(off nameOff) name
pkgpath returns the path of the package where t was defined, if
available. This is not the same as the reflect package's PkgPath
method, in that it returns the package path for struct and interface
types, not just named types.
(*T) string() string
(*T) textOff(off textOff) unsafe.Pointer
(*T) typeOff(off typeOff) *_type
(*T) uncommon() *uncommontype
func resolveTypeOff(ptrInModule unsafe.Pointer, off typeOff) *_type
func addfinalizer(p unsafe.Pointer, f *funcval, nret uintptr, fint *_type, ot *ptrtype) bool
func cgoCheckArg(t *_type, p unsafe.Pointer, indir, top bool, msg string)
func cgoCheckMemmove(typ *_type, dst, src unsafe.Pointer, off, size uintptr)
func cgoCheckSliceCopy(typ *_type, dst, src unsafe.Pointer, n int)
func cgoCheckTypedBlock(typ *_type, src unsafe.Pointer, off, size uintptr)
func cgoCheckUsingType(typ *_type, src unsafe.Pointer, off, size uintptr)
func checkptrAlignment(p unsafe.Pointer, elem *_type, n uintptr)
func convT2E(t *_type, elem unsafe.Pointer) (e eface)
func convT2Enoptr(t *_type, elem unsafe.Pointer) (e eface)
func dumpfinalizer(obj unsafe.Pointer, fn *funcval, fint *_type, ot *ptrtype)
func dumptype(t *_type)
func efaceeq(t *_type, x, y unsafe.Pointer) bool
func finq_callback(fn *funcval, obj unsafe.Pointer, nret uintptr, fint *_type, ot *ptrtype)
func getitab(inter *interfacetype, typ *_type, canfail bool) *itab
func growslice(et *_type, old slice, cap int) slice
func heapBitsSetType(x, size, dataSize uintptr, typ *_type)
func isDirectIface(t *_type) bool
func itabHashFunc(inter *interfacetype, typ *_type) uintptr
func makeslice(et *_type, len, cap int) unsafe.Pointer
func makeslice64(et *_type, len64, cap64 int64) unsafe.Pointer
func makeslicecopy(et *_type, tolen int, fromlen int, from unsafe.Pointer) unsafe.Pointer
func mallocgc(size uintptr, typ *_type, needzero bool) unsafe.Pointer
func newarray(typ *_type, n int) unsafe.Pointer
func newobject(typ *_type) unsafe.Pointer
func panicdottypeE(have, want, iface *_type)
func panicdottypeI(have *itab, want, iface *_type)
func panicnildottype(want *_type)
func queuefinalizer(p unsafe.Pointer, fn *funcval, nret uintptr, fint *_type, ot *ptrtype)
func raceReadObjectPC(t *_type, addr unsafe.Pointer, callerpc, pc uintptr)
func raceWriteObjectPC(t *_type, addr unsafe.Pointer, callerpc, pc uintptr)
func recvDirect(t *_type, sg *sudog, dst unsafe.Pointer)
func reflect_typedmemclr(typ *_type, ptr unsafe.Pointer)
func reflect_typedmemclrpartial(typ *_type, ptr unsafe.Pointer, off, size uintptr)
func reflect_typedmemmove(typ *_type, dst, src unsafe.Pointer)
func reflect_typedmemmovepartial(typ *_type, dst, src unsafe.Pointer, off, size uintptr)
func reflect_typedslicecopy(elemType *_type, dst, src slice) int
func reflect_typehash(t *_type, p unsafe.Pointer, h uintptr) uintptr
func reflect_unsafe_New(typ *_type) unsafe.Pointer
func reflect_unsafe_NewArray(typ *_type, n int) unsafe.Pointer
func reflectcall(argtype *_type, fn, arg unsafe.Pointer, argsize uint32, retoffset uint32)
func reflectcallmove(typ *_type, dst, src unsafe.Pointer, size uintptr)
func reflectlite_typedmemmove(typ *_type, dst, src unsafe.Pointer)
func reflectlite_unsafe_New(typ *_type) unsafe.Pointer
func sendDirect(t *_type, sg *sudog, src unsafe.Pointer)
func tracealloc(p unsafe.Pointer, size uintptr, typ *_type)
func typeBitsBulkBarrier(typ *_type, dst, src, size uintptr)
func typedmemclr(typ *_type, ptr unsafe.Pointer)
func typedmemmove(typ *_type, dst, src unsafe.Pointer)
func typedslicecopy(typ *_type, dstPtr unsafe.Pointer, dstLen int, srcPtr unsafe.Pointer, srcLen int) int
func typehash(t *_type, p unsafe.Pointer, h uintptr) uintptr
func typesEqual(t, v *_type, seen map[_typePair]struct{}) bool
var deferType *_type
var pdType *_type
var sliceType *_type
var stringType *_type
var uint16Type *_type
var uint32Type *_type
var uint64Type *_type
addrRange represents a region of address space.
An addrRange must never span a gap in the address space.
base and limit together represent the region of address space
[base, limit). That is, base is inclusive, limit is exclusive.
These are addresses over an offset view of the address space on
platforms with a segmented address space, that is, on platforms
where arenaBaseOffset != 0.
base and limit together represent the region of address space
[base, limit). That is, base is inclusive, limit is exclusive.
These are addresses over an offset view of the address space on
platforms with a segmented address space, that is, on platforms
where arenaBaseOffset != 0.
contains returns whether or not the range contains a given address.
removeGreaterEqual removes all addresses in a that are greater than or
equal to addr and returns the new range.
size returns the size of the range represented in bytes.
subtract takes the addrRange toPrune and cuts out any overlap with
from, then returns the new range. subtract assumes that a and b
either don't overlap at all, only overlap on one side, or are equal.
If b is strictly contained in a, thus forcing a split, it will throw.
func makeAddrRange(base, limit uintptr) addrRange
addrRanges is a data structure holding a collection of ranges of
address space.
The ranges are coalesced eagerly to reduce the
number of ranges it holds.
The slice backing store for this field is persistentalloc'd
and thus there is no way to free it.
addrRanges is not thread-safe.
ranges is a slice of ranges sorted by base.
sysStat is the stat to track allocations by this type
totalBytes is the total amount of address space in bytes counted by
this addrRanges.
add inserts a new address range to a.
r must not overlap with any address range in a and r.size() must be > 0.
cloneInto makes a deep clone of a's state into b, re-using
b's ranges if able.
contains returns true if a covers the address addr.
findAddrGreaterEqual returns the smallest address represented by a
that is >= addr. Thus, if the address is represented by a,
then it returns addr. The second return value indicates whether
such an address exists for addr in a. That is, if addr is larger than
any address known to a, the second return value will be false.
findSucc returns the first index in a such that addr is
less than the base of the addrRange at that index.
(*T) init(sysStat *sysMemStat)
removeGreaterEqual removes the ranges of a which are above addr, and additionally
splits any range containing addr.
removeLast removes and returns the highest-addressed contiguous range
of a, or the last nBytes of that range, whichever is smaller. If a is
empty, it returns an empty range.
cache pcvalueCache
// ptr distance from old to new stack (newbase - oldbase)
old stack
sghi is the highest sudog.elem on the stack.
func adjustctxt(gp *g, adjinfo *adjustinfo)
func adjustdefers(gp *g, adjinfo *adjustinfo)
func adjustpanics(gp *g, adjinfo *adjustinfo)
func adjustpointer(adjinfo *adjustinfo, vpp unsafe.Pointer)
func adjustpointers(scanp unsafe.Pointer, bv *bitvector, adjinfo *adjustinfo, f funcInfo)
func adjustsudogs(gp *g, adjinfo *adjustinfo)
func syncadjustsudogs(gp *g, used uintptr, adjinfo *adjustinfo) uintptr
ancestorInfo records details of where a goroutine was started.
// goroutine id of this goroutine; original goroutine possibly dead
// pc of go statement that created this goroutine
// pcs from the stack of this goroutine
func saveAncestors(callergp *g) *[]ancestorInfo
func printAncestorTraceback(ancestor ancestorInfo)
arenaHint is a hint for where to grow the heap arenas. See
mheap_.arenaHints.
addr uintptr
down bool
next *arenaHint
( T) l1() uint
( T) l2() uint
func arenaIndex(p uintptr) arenaIdx
func arenaBase(i arenaIdx) uintptr
Information from the compiler about the layout of stack frames.
Note: this type must agree with reflect.bitVector.
bytedata *uint8
// # of bits
ptrbit returns the i'th bit in bv.
ptrbit is less efficient than iterating directly over bitvector bits,
and should only be used in non-performance-critical code.
See adjustpointers for an example of a high-efficiency walk of a bitvector.
func getArgInfo(frame *stkframe, f funcInfo, needArgMap bool, ctxt *funcval) (arglen uintptr, argmap *bitvector)
func getArgInfoFast(f funcInfo, needArgMap bool) (arglen uintptr, argmap *bitvector, ok bool)
func getStackMap(frame *stkframe, cache *pcvalueCache, debug bool) (locals, args bitvector, objs []stackObjectRecord)
func makeheapobjbv(p uintptr, size uintptr) bitvector
func progToPointerMask(prog *byte, size uintptr) bitvector
func stackmapdata(stkmap *stackmap, n int32) bitvector
func adjustpointers(scanp unsafe.Pointer, bv *bitvector, adjinfo *adjustinfo, f funcInfo)
func dumpbv(cbv *bitvector, offset uintptr)
func dumpfields(bv bitvector)
func dumpobj(obj unsafe.Pointer, size uintptr, bv bitvector)
A blockRecord is the bucket data for a bucket of type blockProfile,
which is used in blocking and mutex profiles.
count int64
cycles int64
A bucket for a Go map.
tophash generally contains the top byte of the hash value
for each key in this bucket. If tophash[0] < minTopHash,
tophash[0] is a bucket evacuation state instead.
(*T) keys() unsafe.Pointer
(*T) overflow(t *maptype) *bmap
(*T) setoverflow(t *maptype, ovf *bmap)
func makeBucketArray(t *maptype, b uint8, dirtyalloc unsafe.Pointer) (buckets unsafe.Pointer, nextOverflow *bmap)
func evacuated(b *bmap) bool
A boundsError represents an indexing or slicing operation gone wrong.
code boundsErrorCode
Values in an index or slice expression can be signed or unsigned.
That means we'd need 65 bits to encode all possible indexes, from -2^63 to 2^64-1.
Instead, we keep track of whether x should be interpreted as signed or unsigned.
y is known to be nonnegative and to fit in an int.
x int64
y int
( T) Error() string
( T) RuntimeError()
T : Error
T : error
const boundsIndex
const boundsSlice3Acap
const boundsSlice3Alen
const boundsSlice3B
const boundsSlice3C
const boundsSliceAcap
const boundsSliceAlen
const boundsSliceB
A bucket holds per-call-stack profiling information.
The representation is a bit sleazy, inherited from C.
This struct defines the bucket header. It is followed in
memory by the stack words and then the actual record
data, either a memRecord or a blockRecord.
Per-call-stack profiling information.
Lookup by hashing call stack into a linked-list hash table.
No heap pointers.
allnext *bucket
hash uintptr
next *bucket
nstk uintptr
size uintptr
// memBucket or blockBucket (includes mutexProfile)
bp returns the blockRecord associated with the blockProfile bucket b.
mp returns the memRecord associated with the memProfile bucket b.
stk returns the slice in b holding the stack.
func newBucket(typ bucketType, nstk int) *bucket
func stkbucket(typ bucketType, size uintptr, stk []uintptr, alloc bool) *bucket
func dumpmemprof_callback(b *bucket, nstk uintptr, pstk *uintptr, size, allocs, frees uintptr)
func mProf_Free(b *bucket, size uintptr)
func record(r *MemProfileRecord, b *bucket)
func setprofilebucket(p unsafe.Pointer, b *bucket)
var bbuckets *bucket
var mbuckets *bucket
var xbuckets *bucket
func newBucket(typ bucketType, nstk int) *bucket
func saveblockevent(cycles int64, skip int, which bucketType)
func stkbucket(typ bucketType, size uintptr, stk []uintptr, alloc bool) *bucket
const blockProfile
const memProfile
const mutexProfile
Addresses collected in a cgo backtrace when crashing.
Length must match arg.Max in x_cgo_callers in runtime/cgo/gcc_traceback.c.
func printCgoTraceback(callers *cgoCallers)
var sigprofCallers
cgoSymbolizerArg is the type passed to cgoSymbolizer.
data uintptr
entry uintptr
file *byte
funcName *byte
lineno uintptr
more uintptr
pc uintptr
func callCgoSymbolizer(arg *cgoSymbolizerArg)
func printOneCgoTraceback(pc uintptr, max int, arg *cgoSymbolizerArg) int
cgoTracebackArg is the type passed to cgoTraceback.
buf *uintptr
context uintptr
max uintptr
sigContext uintptr
dir uintptr
elem *_type
typ _type
func makechan(t *chantype, size int) *hchan
func makechan64(t *chantype, size int64) *hchan
func reflect_makechan(t *chantype, size int) *hchan
A checkmarksMap stores the GC marks in "checkmarks" mode. It is a
per-arena bitmap with a bit for every word in the arena. The mark
is stored on the bit corresponding to the first word of the marked
allocation.
// size of args region
Information passed up from the callee frame about
the layout of the outargs region.
// where the arguments start in the frame
// if args.n >= 0, pointer map of args region
// depth in call stack (0 == most recent)
// callee sp
Global chunk index.
Represents an index into the leaf level of the radix tree.
Similar to arenaIndex, except instead of arenas, it divides the address
space into chunks.
l1 returns the index into the first level of (*pageAlloc).chunks.
l2 returns the index into the second level of (*pageAlloc).chunks.
func chunkIndex(p uintptr) chunkIdx
func chunkBase(ci chunkIdx) uintptr
consistentHeapStats represents a set of various memory statistics
whose updates must be viewed completely to get a consistent
state of the world.
To write updates to memory stats use the acquire and release
methods. To obtain a consistent global snapshot of these statistics,
use read.
gen represents the current index into which writers
are writing, and can take on the value of 0, 1, or 2.
This value is updated atomically.
noPLock is intended to provide mutual exclusion for updating
stats when no P is available. It does not block other writers
with a P, only other writers without a P and the reader. Because
stats are usually updated when a P is available, contention on
this lock should be minimal.
stats is a ring buffer of heapStatsDelta values.
Writers always atomically update the delta at index gen.
Readers operate by rotating gen (0 -> 1 -> 2 -> 0 -> ...)
and synchronizing with writers by observing each P's
statsSeq field. If the reader observes a P not writing,
it can be sure that it will pick up the new gen value the
next time it writes.
The reader then takes responsibility by clearing space
in the ring buffer for the next reader to rotate gen to
that space (i.e. it merges in values from index (gen-2) mod 3
to index (gen-1) mod 3, then clears the former).
Note that this means only one reader can be reading at a time.
There is no way for readers to synchronize.
This process is why we need a ring buffer of size 3 instead
of 2: one is for the writers, one contains the most recent
data, and the last one is clear so writers can begin writing
to it the moment gen is updated.
acquire returns a heapStatsDelta to be updated. In effect,
it acquires the shard for writing. release must be called
as soon as the relevant deltas are updated.
The returned heapStatsDelta must be updated atomically.
The caller's P must not change between acquire and
release. This also means that the caller should not
acquire a P or release its P in between.
read takes a globally consistent snapshot of m
and puts the aggregated value in out. Even though out is a
heapStatsDelta, the resulting values should be complete and
valid statistic values.
Not safe to call concurrently. The world must be stopped
or metricsSema must be held.
release indicates that the writer is done modifying
the delta. The value returned by the corresponding
acquire must no longer be accessed or modified after
release is called.
The caller's P must not change between acquire and
release. This also means that the caller should not
acquire a P or release its P in between.
unsafeClear clears the shard.
Unsafe because the world must be stopped and values should
be donated elsewhere before clearing.
unsafeRead aggregates the delta for this shard into out.
Unsafe because it does so without any synchronization. The
world must be stopped.
extra holds extra stacks accumulated in addNonGo
corresponding to profiling signals arriving on
non-Go-created threads. Those stacks are written
to log the next time a normal Go thread gets the
signal handler.
Assuming the stacks are 2 words each (we don't get
a full traceback from those threads), plus one word
size for framing, 100 Hz profiling would generate
300 words per second.
Hopefully a normal Go thread will get the profiling
signal at least once every few seconds.
lock mutex
// profile events written here
// count of frames lost because of being in atomic64 on mips/arm; updated racily
// count of frames lost because extra is full
numExtra int
// profiling is on
add adds the stack trace to the profile.
It is called from signal handlers and other limited environments
and cannot allocate memory or acquire locks that might be
held at the time of the signal, nor can it use substantial amounts
of stack.
addExtra adds the "extra" profiling events,
queued by addNonGo, to the profile log.
addExtra is called either from a signal handler on a Go thread
or from an ordinary goroutine; either way it can use stack
and has a g. The world may be stopped, though.
addNonGo adds the non-Go stack trace to the profile.
It is called from a non-Go thread, so we cannot use much stack at all,
nor do anything that needs a g or an m.
In particular, we can't call cpuprof.log.write.
Instead, we copy the stack into cpuprof.extra,
which will be drained the next time a Go thread
gets the signal handling event.
var cpuprof
type debugLogBuf ([...]T)
begin and end are the positions in the log of the beginning
and end of the log data, modulo len(data).
data *debugLogBuf
begin and end are the positions in the log of the beginning
and end of the log data, modulo len(data).
tick and nano are the current time base at begin.
tick and nano are the current time base at begin.
(*T) header() (end, tick, nano uint64, p int)
(*T) peek() (tick uint64)
(*T) printVal() bool
(*T) readUint16LEAt(pos uint64) uint16
(*T) readUint64LEAt(pos uint64) uint64
(*T) skip() uint64
(*T) uvarint() uint64
(*T) varint() int64
A debugLogWriter is a ring buffer of binary debug log records.
A log record consists of a 2-byte framing header and a sequence of
fields. The framing header gives the size of the record as a little
endian 16-bit value. Each field starts with a byte indicating its
type, followed by type-specific data. If the size in the framing
header is 0, it's a sync record consisting of two little endian
64-bit values giving a new time base.
Because this is a ring buffer, new records will eventually
overwrite old records. Hence, it maintains a reader that consumes
the log as it gets overwritten. That reader state is where an
actual log reader would start.
buf is a scratch buffer for encoding. This is here to
reduce stack usage.
data debugLogBuf
tick and nano are the time bases from the most recently
written sync record.
r is a reader that consumes records as they get overwritten
by the writer. It also acts as the initial reader state
when printing the log.
tick and nano are the time bases from the most recently
written sync record.
write uint64
(*T) byte(x byte)
(*T) bytes(x []byte)
(*T) ensure(n uint64)
(*T) uvarint(u uint64)
(*T) varint(x int64)
(*T) writeFrameAt(pos, size uint64) bool
(*T) writeSync(tick, nano uint64)
(*T) writeUint64LE(x uint64)
A dlogger writes to the debug log.
To obtain a dlogger, call dlog(). When done with the dlogger, call
end().
allLink is the next dlogger in the allDloggers list.
owned indicates that this dlogger is owned by an M. This is
accessed atomically.
w debugLogWriter
(*T) b(x bool) *dlogger
(*T) end()
(*T) hex(x uint64) *dlogger
(*T) i(x int) *dlogger
(*T) i16(x int16) *dlogger
(*T) i32(x int32) *dlogger
(*T) i64(x int64) *dlogger
(*T) i8(x int8) *dlogger
(*T) p(x interface{}) *dlogger
(*T) pc(x uintptr) *dlogger
(*T) s(x string) *dlogger
(*T) traceback(x []uintptr) *dlogger
(*T) u(x uint) *dlogger
(*T) u16(x uint16) *dlogger
(*T) u32(x uint32) *dlogger
(*T) u64(x uint64) *dlogger
(*T) u8(x uint8) *dlogger
(*T) uptr(x uintptr) *dlogger
func dlog() *dlogger
func getCachedDlogger() *dlogger
func putCachedDlogger(l *dlogger) bool
var allDloggers *dlogger
type dlogPerM (struct)
_type *_type
data unsafe.Pointer
func convT2E(t *_type, elem unsafe.Pointer) (e eface)
func convT2Enoptr(t *_type, elem unsafe.Pointer) (e eface)
func efaceOf(ep *interface{}) *eface
func assertE2I(inter *interfacetype, e eface) (r iface)
func assertE2I2(inter *interfacetype, e eface) (r iface, b bool)
func printeface(e eface)
func reflect_ifaceE2I(inter *interfacetype, e eface, dst *iface)
func reflectlite_ifaceE2I(inter *interfacetype, e eface, dst *iface)
// memory address where the error occurred
// error message
Addr returns the memory address where a fault occurred.
The address provided is best-effort.
The veracity of the result may depend on the platform.
Errors providing this method will only be returned as
a result of using runtime/debug.SetPanicOnFault.
( T) Error() string
( T) RuntimeError()
T : Error
T : error
An errorString represents a runtime error described by a single string.
( T) Error() string
( T) RuntimeError()
T : Error
T : error
evacDst is an evacuation destination.
// current destination bucket
// pointer to current elem storage
// key/elem index into b
// pointer to current key storage
NOTE: Layout known to queuefinalizer.
// ptr to object (may be a heap pointer)
// type of first argument of fn
// function to call (may be a heap pointer)
// bytes of return values from fn
// type of ptr to object (may be a heap pointer)
finblock is an array of finalizers to be executed. finblocks are
arranged in a linked list for the finalizer queue.
finblock is allocated from non-GC'd memory, so any heap pointers
must be specially handled. GC currently assumes that the finalizer
queue does not grow during marking (but it can shrink).
alllink *finblock
cnt uint32
fin [101]finalizer
next *finblock
var allfin *finblock
var finc *finblock
var finq *finblock
findfunctab is an array of these structures.
Each bucket represents 4096 bytes of the text segment.
Each subbucket represents 256 bytes of the text segment.
To find a function given a pc, locate the bucket and subbucket for
that pc. Add together the idx and subbucket value to obtain a
function index. Then scan the functab array starting at that
index to find the target function.
This table uses 20 bytes for every 4096 bytes of code, or ~0.5% overhead.
idx uint32
subbuckets [16]byte
FixAlloc is a simple free-list allocator for fixed size objects.
Malloc uses a FixAlloc wrapped around sysAlloc to manage its
mcache and mspan objects.
Memory returned by fixalloc.alloc is zeroed by default, but the
caller may take responsibility for zeroing allocations by setting
the zero flag to false. This is only safe if the memory never
contains heap pointers.
The caller is responsible for locking around FixAlloc calls.
Callers can keep state in the object but the first word is
smashed by freeing and reallocating.
Consider marking fixalloc'd types go:notinheap.
arg unsafe.Pointer
// use uintptr instead of unsafe.Pointer to avoid write barriers
// called first time p is returned
// in-use bytes now
list *mlink
nchunk uint32
size uintptr
stat *sysMemStat
// zero allocations
(*T) alloc() unsafe.Pointer
(*T) free(p unsafe.Pointer)
Initialize f to allocate objects of the given size,
using the allocator to obtain chunks of memory.
fpu_cs uint16
fpu_dp uint32
fpu_ds uint16
fpu_fcw fpcontrol
fpu_fop uint16
fpu_fsw fpstatus
fpu_ftw uint8
fpu_ip uint32
fpu_mxcsr uint32
fpu_mxcsrmask uint32
fpu_reserved [2]int32
fpu_reserved1 int32
fpu_rsrv1 uint8
fpu_rsrv2 uint16
fpu_rsrv3 uint16
fpu_rsrv4 [224]int8
fpu_stmm0 regmmst
fpu_stmm1 regmmst
fpu_stmm2 regmmst
fpu_stmm3 regmmst
fpu_stmm4 regmmst
fpu_stmm5 regmmst
fpu_stmm6 regmmst
fpu_stmm7 regmmst
fpu_xmm0 regxmm
fpu_xmm1 regxmm
fpu_xmm2 regxmm
fpu_xmm3 regxmm
fpu_xmm4 regxmm
fpu_xmm5 regxmm
fpu_xmm6 regxmm
fpu_xmm7 regxmm
fpu_cs uint16
fpu_dp uint32
fpu_ds uint16
fpu_fcw fpcontrol
fpu_fop uint16
fpu_fsw fpstatus
fpu_ftw uint8
fpu_ip uint32
fpu_mxcsr uint32
fpu_mxcsrmask uint32
fpu_reserved [2]int32
fpu_reserved1 int32
fpu_rsrv1 uint8
fpu_rsrv2 uint16
fpu_rsrv3 uint16
fpu_rsrv4 [96]int8
fpu_stmm0 regmmst
fpu_stmm1 regmmst
fpu_stmm2 regmmst
fpu_stmm3 regmmst
fpu_stmm4 regmmst
fpu_stmm5 regmmst
fpu_stmm6 regmmst
fpu_stmm7 regmmst
fpu_xmm0 regxmm
fpu_xmm1 regxmm
fpu_xmm10 regxmm
fpu_xmm11 regxmm
fpu_xmm12 regxmm
fpu_xmm13 regxmm
fpu_xmm14 regxmm
fpu_xmm15 regxmm
fpu_xmm2 regxmm
fpu_xmm3 regxmm
fpu_xmm4 regxmm
fpu_xmm5 regxmm
fpu_xmm6 regxmm
fpu_xmm7 regxmm
fpu_xmm8 regxmm
fpu_xmm9 regxmm
A FuncID identifies particular functions that need to be treated
specially by the runtime.
Note that in some situations involving plugins, there may be multiple
copies of a particular special runtime function.
Note: this list must match the list in cmd/internal/objabi/funcid.go.
func elideWrapperCalling(id funcID) bool
func showframe(f funcInfo, gp *g, firstFrame bool, funcID, childID funcID) bool
func showfuncinfo(f funcInfo, firstFrame bool, funcID, childID funcID) bool
const funcID_asmcgocall
const funcID_asyncPreempt
const funcID_cgocallback
const funcID_debugCallV1
const funcID_externalthreadhandler
const funcID_gcBgMarkWorker
const funcID_goexit
const funcID_gogo
const funcID_gopanic
const funcID_handleAsyncEvent
const funcID_jmpdefer
const funcID_mcall
const funcID_morestack
const funcID_mstart
const funcID_normal
const funcID_panicwrap
const funcID_rt0_go
const funcID_runfinq
const funcID_runtime_main
const funcID_sigpanic
const funcID_systemstack
const funcID_systemstack_switch
const funcID_wrapper
_func *_func
// in/out args size
// runtime.cutab offset of this function's CU
// offset of start of a deferreturn call instruction from entry, if any.
// start pc
// set for certain special runtime functions
// function name
// must be last
_func.npcdata uint32
_func.pcfile uint32
_func.pcln uint32
_func.pcsp uint32
datap *moduledata
( T) _Func() *Func
( T) valid() bool
func findfunc(pc uintptr) funcInfo
func (*Func).funcInfo() funcInfo
func adjustpointers(scanp unsafe.Pointer, bv *bitvector, adjinfo *adjustinfo, f funcInfo)
func cfuncname(f funcInfo) *byte
func cfuncnameFromNameoff(f funcInfo, nameoff int32) *byte
func funcdata(f funcInfo, i uint8) unsafe.Pointer
func funcfile(f funcInfo, fileno int32) string
func funcline(f funcInfo, targetpc uintptr) (file string, line int32)
func funcline1(f funcInfo, targetpc uintptr, strict bool) (file string, line int32)
func funcMaxSPDelta(f funcInfo) int32
func funcname(f funcInfo) string
func funcnameFromNameoff(f funcInfo, nameoff int32) string
func funcpkgpath(f funcInfo) string
func funcspdelta(f funcInfo, targetpc uintptr, cache *pcvalueCache) int32
func getArgInfo(frame *stkframe, f funcInfo, needArgMap bool, ctxt *funcval) (arglen uintptr, argmap *bitvector)
func getArgInfoFast(f funcInfo, needArgMap bool) (arglen uintptr, argmap *bitvector, ok bool)
func pcdatastart(f funcInfo, table uint32) uint32
func pcdatavalue(f funcInfo, table uint32, targetpc uintptr, cache *pcvalueCache) int32
func pcdatavalue1(f funcInfo, table uint32, targetpc uintptr, cache *pcvalueCache, strict bool) int32
func pcdatavalue2(f funcInfo, table uint32, targetpc uintptr) (int32, uintptr)
func pcvalue(f funcInfo, off uint32, targetpc uintptr, cache *pcvalueCache, strict bool) (int32, uintptr)
func printAncestorTracebackFuncInfo(f funcInfo, pc uintptr)
func printcreatedby1(f funcInfo, pc uintptr)
func showframe(f funcInfo, gp *g, firstFrame bool, funcID, childID funcID) bool
func showfuncinfo(f funcInfo, firstFrame bool, funcID, childID funcID) bool
func topofstack(f funcInfo, g0 bool) bool
Pseudo-Func that is returned for PCs that occur in inlined code.
A *Func can be either a *_func or a *funcinl, and they are distinguished
by the first uintptr.
// entry of the real (the "outermost") frame.
file string
line int
name string
// set to 0 to distinguish from _func
inCount uint16
outCount uint16
typ _type
(*T) dotdotdot() bool
(*T) in() []*_type
(*T) out() []*_type
fn uintptr
func addfinalizer(p unsafe.Pointer, f *funcval, nret uintptr, fint *_type, ot *ptrtype) bool
func deferproc(siz int32, fn *funcval)
func dumpfinalizer(obj unsafe.Pointer, fn *funcval, fint *_type, ot *ptrtype)
func finq_callback(fn *funcval, obj unsafe.Pointer, nret uintptr, fint *_type, ot *ptrtype)
func getArgInfo(frame *stkframe, f funcInfo, needArgMap bool, ctxt *funcval) (arglen uintptr, argmap *bitvector)
func gostartcallfn(gobuf *gobuf, fv *funcval)
func jmpdefer(fv *funcval, argp uintptr)
func newproc(siz int32, fn *funcval)
func newproc1(fn *funcval, argp unsafe.Pointer, narg int32, callergp *g, callerpc uintptr) *g
func queuefinalizer(p unsafe.Pointer, fn *funcval, nret uintptr, fint *_type, ot *ptrtype)
// innermost defer
// innermost panic - offset known to liblink
activeStackChans indicates that there are unlocked channels
pointing into this goroutine's stack. If true, stack
copying needs to acquire channel locks to protect these
areas of the stack.
// ancestor information goroutine(s) that created this goroutine (only used if debug.tracebackancestors)
asyncSafePoint is set if g is stopped at an asynchronous
safe point. This means there are frames on the stack
without precise pointer information.
atomicstatus uint32
// cgo traceback context
gcAssistBytes is this G's GC assist credit in terms of
bytes allocated. If this is positive, then the G has credit
to allocate gcAssistBytes bytes without assisting. If this
is negative, then the G must correct this by performing
scan work. We track this in bytes to make it fast to update
and check for debt in the malloc hot path. The assist ratio
determines how this corresponds to scan work debt.
// g has scanned stack; protected by _Gscan bit in status
goid int64
// pc of go statement that created this goroutine
// profiler labels
lockedm muintptr
// current m; offset known to arm liblink
// panic (instead of crash) on unexpected fault address
// passed parameter on wakeup
parkingOnChan indicates that the goroutine is about to
park on a chansend or chanrecv. Used to signal an unsafe point
for stack shrinking. It's a boolean value, but is updated atomically.
// preemption signal, duplicates stackguard0 = stackpreempt
// shrink stack at synchronous safe point
// transition to _Gpreempted on preemption; otherwise, just deschedule
racectx uintptr
// ignore race detection events
sched gobuf
schedlink guintptr
// are we participating in a select and did someone win the race?
sig uint32
sigcode0 uintptr
sigcode1 uintptr
sigpc uintptr
Stack parameters.
stack describes the actual stack memory: [stack.lo, stack.hi).
stackguard0 is the stack pointer compared in the Go stack growth prologue.
It is stack.lo+StackGuard normally, but can be StackPreempt to trigger a preemption.
stackguard1 is the stack pointer compared in the C stack growth prologue.
It is stack.lo+StackGuard on g0 and gsignal stacks.
It is ~0 on other goroutine stacks, to trigger a call to morestackc (and crash).
// offset known to runtime/cgo
// sigprof/scang lock; TODO: fold in to atomicstatus
// offset known to liblink
// offset known to liblink
// pc of goroutine function
// expected sp at top of stack, to check in traceback
// StartTrace has emitted EvGoInSyscall about this goroutine
// if status==Gsyscall, syscallpc = sched.pc to use during gc
// if status==Gsyscall, syscallsp = sched.sp to use during gc
// cputicks when syscall has returned (for tracing)
// must not split stack
// cached timer for time.Sleep
// last P emitted an event for this goroutine
// trace event sequencer
// sudog structures this g is waiting on (that have a valid elem ptr); in lock order
// if status==Gwaiting
// approx time when the g became blocked
writebuf []byte
func atomicAllG() (**g, uintptr)
func atomicAllGIndex(ptr **g, i uintptr) *g
func beforeIdle(int64) (*g, bool)
func findrunnable() (gp *g, inheritTime bool)
func getg() *g
func gfget(_p_ *p) *g
func globrunqget(_p_ *p, max int32) *g
func malg(stacksize int32) *g
func netpollunblock(pd *pollDesc, mode int32, ioready bool) *g
func newproc1(fn *funcval, argp unsafe.Pointer, narg int32, callergp *g, callerpc uintptr) *g
func runqget(_p_ *p) (gp *g, inheritTime bool)
func runqsteal(_p_, p2 *p, stealRunNextG bool) *g
func sigFetchG(c *sigctxt) *g
func traceReader() *g
func wakefing() *g
func addOneOpenDeferFrame(gp *g, pc uintptr, sp unsafe.Pointer)
func adjustctxt(gp *g, adjinfo *adjustinfo)
func adjustdefers(gp *g, adjinfo *adjustinfo)
func adjustpanics(gp *g, adjinfo *adjustinfo)
func adjustsudogs(gp *g, adjinfo *adjustinfo)
func allgadd(gp *g)
func atomicAllGIndex(ptr **g, i uintptr) *g
func canpanic(gp *g) bool
func casfrom_Gscanstatus(gp *g, oldval, newval uint32)
func casgcopystack(gp *g) uint32
func casGFromPreempted(gp *g, old, new uint32) bool
func casgstatus(gp *g, oldval, newval uint32)
func casGToPreemptScan(gp *g, old, new uint32)
func castogscanstatus(gp *g, oldval, newval uint32) bool
func chanparkcommit(gp *g, chanLock unsafe.Pointer) bool
func copystack(gp *g, newsize uintptr)
func debugCallWrap1(dispatch uintptr, callingG *g)
func dopanic_m(gp *g, pc, sp uintptr) bool
func doSigPreempt(gp *g, ctxt *sigctxt)
func dumpgoroutine(gp *g)
func dumpgstatus(gp *g)
func execute(gp *g, inheritTime bool)
func exitsyscall0(gp *g)
func findsghi(gp *g, stk stack) uintptr
func gcallers(gp *g, skip int, pcbuf []uintptr) int
func gcAssistAlloc(gp *g)
func gcAssistAlloc1(gp *g, scanWork int64)
func gentraceback(pc0, sp0, lr0 uintptr, gp *g, skip int, pcbuf *uintptr, max int, callback func(*stkframe, unsafe.Pointer) bool, v unsafe.Pointer, flags uint) int
func gfput(_p_ *p, gp *g)
func globrunqput(gp *g)
func globrunqputhead(gp *g)
func goexit0(gp *g)
func gopreempt_m(gp *g)
func goready(gp *g, traceskip int)
func goroutineheader(gp *g)
func gosched_m(gp *g)
func goschedguarded_m(gp *g)
func goschedImpl(gp *g)
func goyield_m(gp *g)
func isAsyncSafePoint(gp *g, pc, sp, lr uintptr) (bool, uintptr)
func isShrinkStackSafe(gp *g) bool
func isSystemGoroutine(gp *g, fixed bool) bool
func netpollblockcommit(gp *g, gpp unsafe.Pointer) bool
func netpollgoready(gp *g, traceskip int)
func newproc1(fn *funcval, argp unsafe.Pointer, narg int32, callergp *g, callerpc uintptr) *g
func notetsleep_internal(n *note, ns int64, gp *g, deadline int64) bool
func park_m(gp *g)
func parkunlock_c(gp *g, lock unsafe.Pointer) bool
func preemptPark(gp *g)
func printcreatedby(gp *g)
func raceacquireg(gp *g, addr unsafe.Pointer)
func racereleaseacquireg(gp *g, addr unsafe.Pointer)
func racereleaseg(gp *g, addr unsafe.Pointer)
func racereleasemergeg(gp *g, addr unsafe.Pointer)
func readgstatus(gp *g) uint32
func ready(gp *g, traceskip int, next bool)
func recovery(gp *g)
func resetForSleep(gp *g, ut unsafe.Pointer) bool
func runOpenDeferFrame(gp *g, d *_defer) bool
func runqput(_p_ *p, gp *g, next bool)
func runqputslow(_p_ *p, gp *g, h, t uint32) bool
func saveAncestors(callergp *g) *[]ancestorInfo
func saveg(pc, sp uintptr, gp *g, r *StackRecord)
func scanstack(gp *g, gcw *gcWork)
func schedEnabled(gp *g) bool
func selparkcommit(gp *g, _ unsafe.Pointer) bool
func setg(gg *g)
func setGNoWB(gp **g, new *g)
func shouldPushSigpanic(gp *g, pc, lr uintptr) bool
func showframe(f funcInfo, gp *g, firstFrame bool, funcID, childID funcID) bool
func shrinkstack(gp *g)
func sighandler(sig uint32, info *siginfo, ctxt unsafe.Pointer, gp *g)
func sigprof(pc, sp, lr uintptr, gp *g, mp *m)
func startlockedm(gp *g)
func suspendG(gp *g) suspendGState
func syncadjustsudogs(gp *g, used uintptr, adjinfo *adjustinfo) uintptr
func traceback(pc, sp, lr uintptr, gp *g)
func traceback1(pc, sp, lr uintptr, gp *g, flags uint)
func tracebackdefers(gp *g, callback func(*stkframe, unsafe.Pointer) bool, v unsafe.Pointer)
func tracebackothers(me *g)
func tracebacktrap(pc, sp, lr uintptr, gp *g)
func traceGoCreate(newg *g, pc uintptr)
func traceGoUnpark(gp *g, skip int)
func wantAsyncPreempt(gp *g) bool
var fing *g
var g0
gcBgMarkWorker is an entry in the gcBgMarkWorkerPool. It points to a single
gcBgMarkWorker goroutine.
The g of this worker.
Release this m on park. This is used to communicate with the unlock
function, which cannot access the G's stack. It is unused outside of
gcBgMarkWorker().
Unused workers are managed in a lock-free stack. This field must be first.
gcBits is an alloc/mark bitmap. This is always used as *gcBits.
bitp returns a pointer to the byte containing bit n and a mask for
selecting that bit from *bytep.
bytep returns a pointer to the n'th byte of b.
func newAllocBits(nelems uintptr) *gcBits
func newMarkBits(nelems uintptr) *gcBits
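The bitp/bytep accessors above reduce to plain byte-and-mask arithmetic: bit n lives in byte n/8 under mask 1<<(n%8). A minimal standalone sketch of that indexing (the bitmap type and its methods here are illustrative, not the runtime's gcBits):

    package main

    import "fmt"

    // bitmap is a toy alloc/mark bitmap, one bit per object slot.
    type bitmap []uint8

    // bytep returns a pointer to the n'th byte of b.
    func (b bitmap) bytep(n uintptr) *uint8 { return &b[n] }

    // bitp returns a pointer to the byte containing bit n and a mask for
    // selecting that bit from *bytep.
    func (b bitmap) bitp(n uintptr) (bytep *uint8, mask uint8) {
        return b.bytep(n / 8), 1 << (n % 8)
    }

    func main() {
        b := make(bitmap, 4) // room for 32 bits
        p, mask := b.bitp(10)
        *p |= mask // set (mark) bit 10
        p2, _ := b.bitp(10)
        fmt.Println(*p2&mask != 0) // true
    }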
bits [65520]gcBits
gcBitsHeader // side step recursive type bug (issue 14620) by including fields by hand.
// free is the index into bits of the next free byte; read/write atomically
next *gcBitsArena
tryAlloc allocates from b or returns nil if b does not have enough room.
This is safe to call concurrently.
func newArenaMayUnlock() *gcBitsArena
// free is the index into bits of the next free byte.
// *gcBits triggers recursive type bug. (issue 14620)
assistBytesPerWork is 1/assistWorkPerByte.
Stored as a uint64, but it's actually a float64. Use
float64frombits to get the value.
Read and written atomically.
Note that because this is read and written independently
from assistWorkPerByte users may notice a skew between
the two values, and such a state should be safe.
assistTime is the nanoseconds spent in mutator assists
during this cycle. This is updated atomically. Updates
occur in bounded batches, since it is both written and read
throughout the cycle.
assistWorkPerByte is the ratio of scan work to allocated
bytes that should be performed by mutator assists. This is
computed at the beginning of each cycle and updated every
time heap_scan is updated.
Stored as a uint64, but it's actually a float64. Use
float64frombits to get the value.
Read and written atomically.
bgScanCredit is the scan work credit accumulated by the
concurrent background scan. This credit is accumulated by
the background scan and stolen by mutator assists. This is
updated atomically. Updates occur in bounded batches, since
it is both written and read throughout the cycle.
dedicatedMarkTime is the nanoseconds spent in dedicated
mark workers during this cycle. This is updated atomically
at the end of the concurrent mark phase.
dedicatedMarkWorkersNeeded is the number of dedicated mark
workers that need to be started. This is computed at the
beginning of each cycle and decremented atomically as
dedicated mark workers get started.
fractionalMarkTime is the nanoseconds spent in the
fractional mark worker during this cycle. This is updated
atomically throughout the cycle and will be up-to-date if
the fractional mark worker is not currently running.
fractionalUtilizationGoal is the fraction of wall clock
time that should be spent in the fractional mark worker on
each P that isn't running a dedicated worker.
For example, if the utilization goal is 25% and there are
no dedicated workers, this will be 0.25. If the goal is
25%, there is one dedicated worker, and GOMAXPROCS is 5,
this will be 0.05 to make up the missing 5%.
If this is zero, no fractional workers are needed.
idleMarkTime is the nanoseconds spent in idle marking
during this cycle. This is updated atomically throughout
the cycle.
markStartTime is the absolute start time in nanoseconds
that assists and background mark workers started.
scanWork is the total scan work performed this cycle. This
is updated atomically during the cycle. Updates occur in
bounded batches, since it is both written and read
throughout the cycle. At the end of the cycle, this is how
much of the retained heap is scannable.
Currently this is the bytes of heap scanned. For most uses,
this is an opaque unit of work, but for estimation the
definition is important.
endCycle computes the trigger ratio for the next cycle.
enlistWorker encourages another dedicated mark worker to start on
another P if there are spare worker slots. It is used by putfull
when more work is made available.
findRunnableGCWorker returns a background mark worker for _p_ if it
should be run. This must only be called when gcBlackenEnabled != 0.
revise updates the assist ratio during the GC cycle to account for
improved estimates. This should be called whenever memstats.heap_scan,
memstats.heap_live, or memstats.next_gc is updated. It is safe to
call concurrently, but it may race with other calls to revise.
The result of this race is that the two assist ratio values may not line
up or may be stale. In practice this is OK because the assist ratio
moves slowly throughout a GC cycle, and the assist ratio is a best-effort
heuristic anyway. Furthermore, no part of the heuristic depends on
the two assist ratio values being exact reciprocals of one another, since
the two values are used to convert values from different sources.
The worst case result of this raciness is that we may miss a larger shift
in the ratio (say, if we decide to pace more aggressively against the
hard heap goal) but even this "hard goal" is best-effort (see #40460).
The dedicated GC should ensure we don't exceed the hard goal by too much
in the rare case we do exceed it.
It should only be called when gcBlackenEnabled != 0 (because this
is when assists are enabled and the necessary statistics are
available).
startCycle resets the GC controller's state and computes estimates
for a new GC cycle. The caller must hold worldsema and the world
must be stopped.
var gcController
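assistWorkPerByte and assistBytesPerWork above are float64 values published as uint64 bit patterns so they can be read and written atomically. A minimal standalone sketch of that pattern (the atomicFloat64 type and the values used are illustrative, not the runtime's):

    package main

    import (
        "fmt"
        "math"
        "sync/atomic"
    )

    // atomicFloat64 publishes a float64 as its IEEE-754 bit pattern so it can
    // be loaded and stored atomically, mirroring the description above.
    type atomicFloat64 struct{ bits uint64 }

    func (f *atomicFloat64) Store(v float64) { atomic.StoreUint64(&f.bits, math.Float64bits(v)) }
    func (f *atomicFloat64) Load() float64   { return math.Float64frombits(atomic.LoadUint64(&f.bits)) }

    func main() {
        var assistWorkPerByte, assistBytesPerWork atomicFloat64
        assistWorkPerByte.Store(0.5)      // 0.5 units of scan work owed per allocated byte
        assistBytesPerWork.Store(1 / 0.5) // its reciprocal, published independently
        fmt.Println(assistWorkPerByte.Load(), assistBytesPerWork.Load()) // 0.5 2
    }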
func gcDrain(gcw *gcWork, flags gcDrainFlags)
const gcDrainFlushBgCredit
const gcDrainFractional
const gcDrainIdle
const gcDrainUntilPreempt
A gclink is a node in a linked list of blocks, like mlink,
but it is opaque to the garbage collector.
The GC does not trace the pointers during collection,
and the compiler does not emit write barriers for assignments
of gclinkptr values. Code should store references to gclinks
as gclinkptr, not as *gclink.
next gclinkptr
A gclinkptr is a pointer to a gclink, but it is opaque
to the garbage collector.
ptr returns the *gclink form of p.
The result should be used for accessing fields, not stored
in other data structures.
func nextFreeFast(s *mspan) gclinkptr
func stackpoolalloc(order uint8) gclinkptr
func stackpoolfree(x gclinkptr, order uint8)
gcMarkWorkerMode represents the mode that a concurrent mark worker
should operate in.
Concurrent marking happens through four different mechanisms. One
is mutator assists, which happen in response to allocations and are
not scheduled. The other three are variations in the per-P mark
workers and are distinguished by gcMarkWorkerMode.
const gcMarkWorkerDedicatedMode
const gcMarkWorkerFractionalMode
const gcMarkWorkerIdleMode
const gcMarkWorkerNotWorker
gcMode indicates how concurrent a GC cycle should be.
func gcSweep(mode gcMode)
const gcBackgroundMode
const gcForceBlockMode
const gcForceMode
A gcTrigger is a predicate for starting a GC cycle. Specifically,
it is an exit condition for the _GCoff phase.
kind gcTriggerKind
// gcTriggerCycle: cycle number to start
// gcTriggerTime: current time
test reports whether the trigger condition is satisfied, meaning
that the exit condition for the _GCoff phase has been met. The exit
condition should be tested when allocating.
func gcStart(trigger gcTrigger)
const gcTriggerCycle
const gcTriggerHeap
const gcTriggerTime
A gcWork provides the interface to produce and consume work for the
garbage collector.
A gcWork can be used on the stack as follows:
(preemption must be disabled)
gcw := &getg().m.p.ptr().gcw
.. call gcw.put() to produce and gcw.tryGet() to consume ..
It's important that any use of gcWork during the mark phase prevent
the garbage collector from transitioning to mark termination since
gcWork may locally hold GC work buffers. This can be done by
disabling preemption (systemstack or acquirem).
Bytes marked (blackened) on this gcWork. This is aggregated
into work.bytesMarked by dispose.
flushedWork indicates that a non-empty work buffer was
flushed to the global work list since the last gcMarkDone
termination check. Specifically, this indicates that this
gcWork may have communicated work to another gcWork.
Scan work performed on this gcWork. This is aggregated into
gcController by dispose and may also be flushed by callers.
wbuf1 and wbuf2 are the primary and secondary work buffers.
This can be thought of as a stack of both work buffers'
pointers concatenated. When we pop the last pointer, we
shift the stack up by one work buffer by bringing in a new
full buffer and discarding an empty one. When we fill both
buffers, we shift the stack down by one work buffer by
bringing in a new empty buffer and discarding a full one.
This way we have one buffer's worth of hysteresis, which
amortizes the cost of getting or putting a work buffer over
at least one buffer of work and reduces contention on the
global work lists.
wbuf1 is always the buffer we're currently pushing to and
popping from and wbuf2 is the buffer that will be discarded
next.
Invariant: Both wbuf1 and wbuf2 are nil or neither are.
balance moves some work that's cached in this gcWork back on the
global queue.
dispose returns any cached pointers to the global queue.
The buffers are being put on the full queue so that the
write barriers will not simply reacquire them before the
GC can inspect them. This helps reduce the mutator's
ability to hide pointers during the concurrent mark phase.
empty reports whether w has no mark work available.
(*T) init()
put enqueues a pointer for the garbage collector to trace.
obj must point to the beginning of a heap object or an oblet.
putBatch performs a put on every pointer in obj. See put for
constraints on these pointers.
putFast does a put and reports whether it can be done quickly
otherwise it returns false and the caller needs to call put.
tryGet dequeues a pointer for the garbage collector to trace.
If there are no pointers remaining in this gcWork or in the global
queue, tryGet returns 0. Note that there may still be pointers in
other gcWork instances or other caches.
tryGetFast dequeues a pointer for the garbage collector to trace
if one is readily available. Otherwise it returns 0 and
the caller is expected to call tryGet().
func gcDrain(gcw *gcWork, flags gcDrainFlags)
func gcDrainN(gcw *gcWork, scanWork int64) int64
func greyobject(obj, base, off uintptr, span *mspan, gcw *gcWork, objIndex uintptr)
func markroot(gcw *gcWork, i uint32)
func markrootBlock(b0, n0 uintptr, ptrmask0 *uint8, gcw *gcWork, shard int)
func markrootSpans(gcw *gcWork, shard int)
func scanblock(b0, n0 uintptr, ptrmask *uint8, gcw *gcWork, stk *stackScanState)
func scanConservative(b, n uintptr, ptrmask *uint8, gcw *gcWork, state *stackScanState)
func scanframeworker(frame *stkframe, state *stackScanState, gcw *gcWork)
func scanobject(b uintptr, gcw *gcWork)
func scanstack(gp *g, gcw *gcWork)
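The wbuf1/wbuf2 hysteresis described above can be sketched in a standalone way: keep two small buffers, swap them when the active one runs dry or fills up, and only then touch the shared lists. Everything here (the twoBufWork type, bufCap, the full slice standing in for the global work lists) is illustrative, not the runtime's gcWork:

    package main

    import "fmt"

    const bufCap = 4

    type workbuf struct{ obj []uintptr }

    // twoBufWork mimics the wbuf1/wbuf2 hysteresis: two local buffers amortize
    // trips to the shared full-buffer list over at least one buffer of work.
    type twoBufWork struct {
        wbuf1, wbuf2 *workbuf
        full         [][]uintptr // stand-in for the global full-buffer list
    }

    func (w *twoBufWork) put(p uintptr) {
        if len(w.wbuf1.obj) == bufCap {
            w.wbuf1, w.wbuf2 = w.wbuf2, w.wbuf1 // try the secondary buffer first
            if len(w.wbuf1.obj) == bufCap {
                w.full = append(w.full, w.wbuf1.obj) // flush one full buffer globally
                w.wbuf1.obj = nil
            }
        }
        w.wbuf1.obj = append(w.wbuf1.obj, p)
    }

    func (w *twoBufWork) tryGet() (uintptr, bool) {
        if len(w.wbuf1.obj) == 0 {
            w.wbuf1, w.wbuf2 = w.wbuf2, w.wbuf1 // try the secondary buffer first
            if len(w.wbuf1.obj) == 0 {
                if n := len(w.full); n > 0 { // refill from the global list
                    w.wbuf1.obj, w.full = w.full[n-1], w.full[:n-1]
                } else {
                    return 0, false
                }
            }
        }
        p := w.wbuf1.obj[len(w.wbuf1.obj)-1]
        w.wbuf1.obj = w.wbuf1.obj[:len(w.wbuf1.obj)-1]
        return p, true
    }

    func main() {
        w := &twoBufWork{wbuf1: new(workbuf), wbuf2: new(workbuf)}
        for i := uintptr(1); i <= 10; i++ {
            w.put(i)
        }
        for {
            p, ok := w.tryGet()
            if !ok {
                break
            }
            fmt.Println(p)
        }
    }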
A gList is a list of Gs linked through g.schedlink. A G can only be
on one gQueue or gList at a time.
head guintptr
empty reports whether l is empty.
pop removes and returns the head of l. If l is empty, it returns nil.
push adds gp to the head of l.
pushAll prepends all Gs in q to l.
func netpoll(delay int64) gList
func injectglist(glist *gList)
func netpollready(toRun *gList, pd *pollDesc, mode int32)
// for framepointer-enabled architectures
ctxt unsafe.Pointer
g guintptr
lr uintptr
pc uintptr
ret sys.Uintreg
The offsets of sp, pc, and g are known to (hard-coded in) libmach.
ctxt is unusual with respect to GC: it may be a
heap-allocated funcval, so GC needs to track it, but it
needs to be set and cleared from assembly, where it's
difficult to have write barriers. However, ctxt is really a
saved, live register, and we only ever exchange it between
the real register and the gobuf. Hence, we treat it as a
root during stack scanning, which means assembly that saves
and restores it doesn't need write barriers. It's still
typed as a pointer so that any other writes from Go get
write barriers.
func gogo(buf *gobuf)
func gosave(buf *gobuf)
func gostartcall(buf *gobuf, fn, ctxt unsafe.Pointer)
func gostartcallfn(gobuf *gobuf, fv *funcval)
A gQueue is a deque (double-ended queue) of Gs linked through g.schedlink. A G can only
be on one gQueue or gList at a time.
head guintptr
tail guintptr
empty reports whether q is empty.
pop removes and returns the head of queue q. It returns nil if
q is empty.
popList takes all Gs in q and returns them as a gList.
push adds gp to the head of q.
pushBack adds gp to the tail of q.
pushBackAll adds all Gs in q2 to the tail of q. After this q2 must
not be used.
func globrunqputbatch(batch *gQueue, n int32)
func runqputbatch(pp *p, q *gQueue, qsize int)
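A standalone sketch of an intrusive FIFO in the style of gQueue, with a toy node type standing in for g and its next field standing in for g.schedlink (illustrative only; the runtime versions use guintptr links and perform no allocation):

    package main

    import "fmt"

    // node plays the role of a g; next plays the role of g.schedlink.
    // As the text notes, a node may be on only one queue or list at a time.
    type node struct {
        id   int
        next *node
    }

    // queue is a FIFO of nodes linked through node.next, analogous to gQueue.
    type queue struct{ head, tail *node }

    func (q *queue) empty() bool { return q.head == nil }

    // push adds n to the head of q.
    func (q *queue) push(n *node) {
        n.next = q.head
        q.head = n
        if q.tail == nil {
            q.tail = n
        }
    }

    // pushBack adds n to the tail of q.
    func (q *queue) pushBack(n *node) {
        n.next = nil
        if q.tail != nil {
            q.tail.next = n
        } else {
            q.head = n
        }
        q.tail = n
    }

    // pop removes and returns the head of q, or nil if q is empty.
    func (q *queue) pop() *node {
        n := q.head
        if n != nil {
            q.head = n.next
            if q.head == nil {
                q.tail = nil
            }
        }
        return n
    }

    func main() {
        var q queue
        q.pushBack(&node{id: 1})
        q.pushBack(&node{id: 2})
        q.push(&node{id: 0})
        for !q.empty() {
            fmt.Println(q.pop().id) // 0, 1, 2
        }
    }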
gsignalStack saves the fields of the gsignal stack changed by
setGsignalStack.
stack stack
stackguard0 uintptr
stackguard1 uintptr
stktopsp uintptr
func adjustSignalStack(sig uint32, mp *m, gsigStack *gsignalStack) bool
func restoreGsignalStack(st *gsignalStack)
func setGsignalStack(st *stackt, old *gsignalStack)
A guintptr holds a goroutine pointer, but typed as a uintptr
to bypass write barriers. It is used in the Gobuf goroutine state
and in scheduling lists that are manipulated without a P.
The Gobuf.g goroutine pointer is almost always updated by assembly code.
In one of the few places it is updated by Go code - func save - it must be
treated as a uintptr to avoid a write barrier being emitted at a bad time.
Instead of figuring out how to emit the write barriers missing in the
assembly manipulation, we change the type of the field to uintptr,
so that it does not require write barriers at all.
Goroutine structs are published in the allg list and never freed.
That will keep the goroutine structs from being collected.
There is never a time that Gobuf.g's contain the only references
to a goroutine: the publishing of the goroutine in allg comes first.
Goroutine pointers are also kept in non-GC-visible places like TLS,
so I can't see them ever moving. If we did want to start moving data
in the GC, we'd need to allocate the goroutine structs from an
alternate arena. Using guintptr doesn't make that problem any worse.
(*T) cas(old, new guintptr) bool
( T) ptr() *g
(*T) set(g *g)
func runqgrab(_p_ *p, batch *[256]guintptr, batchHead uint32, stealRunNextG bool) uint32
// points to an array of dataqsiz elements
closed uint32
// size of the circular queue
elemsize uint16
// element type
lock protects all fields in hchan, as well as several
fields in sudogs blocked on this channel.
Do not change another G's status while holding this lock
(in particular, do not ready a G), as this can deadlock
with stack shrinking.
// total data in the queue
// list of recv waiters
// receive index
// list of send waiters
// send index
(*T) raceaddr() unsafe.Pointer
(*T) sortkey() uintptr
func makechan(t *chantype, size int) *hchan
func makechan64(t *chantype, size int64) *hchan
func reflect_makechan(t *chantype, size int) *hchan
func chanbuf(c *hchan, i uint) unsafe.Pointer
func chanrecv(c *hchan, ep unsafe.Pointer, block bool) (selected, received bool)
func chanrecv1(c *hchan, elem unsafe.Pointer)
func chanrecv2(c *hchan, elem unsafe.Pointer) (received bool)
func chansend(c *hchan, ep unsafe.Pointer, block bool, callerpc uintptr) bool
func chansend1(c *hchan, elem unsafe.Pointer)
func closechan(c *hchan)
func empty(c *hchan) bool
func full(c *hchan) bool
func racenotify(c *hchan, idx uint, sg *sudog)
func racesync(c *hchan, sg *sudog)
func recv(c *hchan, sg *sudog, ep unsafe.Pointer, unlockf func(), skip int)
func reflect_chancap(c *hchan) int
func reflect_chanclose(c *hchan)
func reflect_chanlen(c *hchan) int
func reflect_chanrecv(c *hchan, nb bool, elem unsafe.Pointer) (selected bool, received bool)
func reflect_chansend(c *hchan, elem unsafe.Pointer, nb bool) (selected bool)
func reflectlite_chanlen(c *hchan) int
func selectnbrecv(elem unsafe.Pointer, c *hchan) (selected bool)
func selectnbrecv2(elem unsafe.Pointer, received *bool, c *hchan) (selected bool)
func selectnbsend(c *hchan, elem unsafe.Pointer) (selected bool)
func send(c *hchan, sg *sudog, ep unsafe.Pointer, unlockf func(), skip int)
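The buf, dataqsiz, qcount, sendx, and recvx fields above describe a circular buffer. A standalone, non-concurrent sketch of just that bookkeeping (the ring type is illustrative; a real hchan also has the lock and the sendq/recvq waiter lists, and parks goroutines instead of returning false):

    package main

    import "fmt"

    // ring is a toy model of hchan's circular queue: buf holds the elements,
    // sendx/recvx are the send and receive indexes, qcount is total data queued.
    type ring struct {
        buf          []int
        sendx, recvx uint
        qcount       uint
    }

    func (r *ring) send(v int) bool {
        if r.qcount == uint(len(r.buf)) {
            return false // full: a real channel would park the sender
        }
        r.buf[r.sendx] = v
        r.sendx++
        if r.sendx == uint(len(r.buf)) {
            r.sendx = 0 // wrap around
        }
        r.qcount++
        return true
    }

    func (r *ring) recv() (int, bool) {
        if r.qcount == 0 {
            return 0, false // empty: a real channel would park the receiver
        }
        v := r.buf[r.recvx]
        r.recvx++
        if r.recvx == uint(len(r.buf)) {
            r.recvx = 0 // wrap around
        }
        r.qcount--
        return v, true
    }

    func main() {
        r := &ring{buf: make([]int, 3)}
        r.send(1)
        r.send(2)
        fmt.Println(r.recv()) // 1 true
        r.send(3)
        r.send(4)
        fmt.Println(r.recv()) // 2 true
        fmt.Println(r.recv()) // 3 true
        fmt.Println(r.recv()) // 4 true
    }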
headTailIndex packs a combined 32-bit head and 32-bit tail
of a queue into a single 64-bit value.
cas atomically compares-and-swaps a headTailIndex value.
decHead atomically decrements the head of a headTailIndex.
head returns the head of a headTailIndex value.
incHead atomically increments the head of a headTailIndex.
incTail atomically increments the tail of a headTailIndex.
load atomically reads a headTailIndex value.
reset clears the headTailIndex to (0, 0).
split splits the headTailIndex value into its parts.
tail returns the tail of a headTailIndex value.
func makeHeadTailIndex(head, tail uint32) headTailIndex
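A standalone sketch of the head/tail packing described above (the headTail type is illustrative; the runtime updates the real headTailIndex with atomic operations):

    package main

    import "fmt"

    // headTail packs a 32-bit head in the upper half and a 32-bit tail in the
    // lower half of a single uint64.
    type headTail uint64

    func makeHeadTail(head, tail uint32) headTail {
        return headTail(uint64(head)<<32 | uint64(tail))
    }

    func (h headTail) head() uint32 { return uint32(h >> 32) }
    func (h headTail) tail() uint32 { return uint32(h) }

    // incTail increments the tail; the runtime does this with an atomic add.
    func (h headTail) incTail() headTail { return h + 1 }

    // incHead increments the head.
    func (h headTail) incHead() headTail { return h + (1 << 32) }

    func main() {
        v := makeHeadTail(3, 7)
        v = v.incTail().incHead()
        fmt.Println(v.head(), v.tail()) // 4 8
    }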
A heapArena stores metadata for a heap arena. heapArenas are stored
outside of the Go heap and accessed via the mheap_.arenas index.
bitmap stores the pointer/scalar bitmap for the words in
this arena. See mbitmap.go for a description. Use the
heapBits type to access this.
checkmarks stores the debug.gccheckmark state. It is only
used if debug.gccheckmark > 0.
pageInUse is a bitmap that indicates which spans are in
state mSpanInUse. This bitmap is indexed by page number,
but only the bit corresponding to the first page in each
span is used.
Reads and writes are atomic.
pageMarks is a bitmap that indicates which spans have any
marked objects on them. Like pageInUse, only the bit
corresponding to the first page in each span is used.
Writes are done atomically during marking. Reads are
non-atomic and lock-free since they only occur during
sweeping (and hence never race with writes).
This is used to quickly find whole spans that can be freed.
TODO(austin): It would be nice if this was uint64 for
faster scanning, but we don't have 64-bit atomic bit
operations.
pageSpecials is a bitmap that indicates which spans have
specials (finalizers or other). Like pageInUse, only the bit
corresponding to the first page in each span is used.
Writes are done atomically whenever a special is added to
a span and whenever the last special is removed from a span.
Reads are done atomically to find spans containing specials
during marking.
spans maps from virtual address page ID within this arena to *mspan.
For allocated spans, their pages map to the span itself.
For free spans, only the lowest and highest pages map to the span itself.
Internal pages map to an arbitrary span.
For pages that have never been allocated, spans entries are nil.
Modifications are protected by mheap.lock. Reads can be
performed without locking, but ONLY from indexes that are
known to contain in-use or stack spans. This means there
must not be a safe-point between establishing that an
address is live and looking it up in the spans array.
zeroedBase marks the first byte of the first page in this
arena which hasn't been used yet and is therefore already
zero. zeroedBase is relative to the arena base.
Increases monotonically until it hits heapArenaBytes.
This field is sufficient to determine if an allocation
needs to be zeroed because the page allocator follows an
address-ordered first-fit policy.
Read atomically and written with an atomic CAS.
func pageIndexOf(p uintptr) (arena *heapArena, pageIdx uintptr, pageMask uint8)
heapBits provides access to the bitmap bits for a single heap word.
The methods on heapBits take value receivers so that the compiler
can more easily inline calls to those methods and registerize the
struct fields independently.
// Index of heap arena containing bitp
bitp *uint8
// Last byte of the arena's bitmap
shift uint32
bits returns the heap bits for the current word.
The caller can test morePointers and isPointer by &-ing with bitScan and bitPointer.
The result includes in its higher bits the bits for subsequent words
described by the same bitmap byte.
nosplit because it is used during write barriers and must not be preempted.
forward returns the heapBits describing n pointer-sized words ahead of h in memory.
That is, if h describes address p, h.forward(n) describes p+n*ptrSize.
h.forward(1) is equivalent to h.next(), just slower.
Note that forward does not modify h. The caller must record the result.
forwardOrBoundary is like forward, but stops at boundaries between
contiguous sections of the bitmap. It returns the number of words
advanced over, which will be <= n.
initSpan initializes the heap bitmap for a span.
If this is a span of pointer-sized objects, it initializes all
words to pointer/scan.
Otherwise, it initializes all words to scalar/dead.
isPointer reports whether the heap bits describe a pointer word.
nosplit because it is used during write barriers and must not be preempted.
morePointers reports whether this word or any remaining word in this object
may still contain pointers; if it returns false, this word and all remaining
words in this object are scalars.
h must not describe the second word of the object.
next returns the heapBits describing the next pointer-sized word in memory.
That is, if h describes address p, h.next() describes p+ptrSize.
Note that next does not modify h. The caller must record the result.
nosplit because it is used during write barriers and must not be preempted.
nextArena advances h to the beginning of the next heap arena.
This is a slow-path helper to next. gc's inliner knows that
heapBits.next can be inlined even though it calls this. This is
marked noinline so it doesn't get inlined into next and cause next
to be too big to inline.
func heapBitsForAddr(addr uintptr) (h heapBits)
func heapBitsSetTypeGCProg(h heapBits, progSize, elemSize, dataSize, allocSize uintptr, prog *byte)
heapStatsAggregate represents memory stats obtained from the
runtime. This set of stats is grouped together because they
depend on each other in some way to make sense of the runtime's
current heap memory use. They're also sharded across Ps, so it
makes sense to grab them all at once.
heapStatsDelta heapStatsDelta
Memory stats.
// byte delta of memory committed
// byte delta of memory placed in the heap
// byte delta of memory reserved for unrolled GC prog bits
// byte delta of memory reserved for stacks
// byte delta of memory reserved for work bufs
Allocator stats.
// bytes allocated for large objects
// number of large object allocations
// bytes freed for large objects (>maxSmallSize)
// number of frees for large objects (>maxSmallSize)
// byte delta of released memory generated
// number of allocs for small objects
// number of frees for small objects (<=maxSmallSize)
inObjects is the bytes of memory occupied by objects,
numObjects is the number of live objects in the heap.
compute populates the heapStatsAggregate with values from the runtime.
merge adds in the deltas from b into a.
heapStatsDelta contains deltas of various runtime memory statistics
that need to be updated together in order for them to be kept
consistent with one another.
Memory stats.
// byte delta of memory committed
// byte delta of memory placed in the heap
// byte delta of memory reserved for unrolled GC prog bits
// byte delta of memory reserved for stacks
// byte delta of memory reserved for work bufs
Allocator stats.
// bytes allocated for large objects
// number of large object allocations
// bytes freed for large objects (>maxSmallSize)
// number of frees for large objects (>maxSmallSize)
// byte delta of released memory generated
// number of allocs for small objects
// number of frees for small objects (<=maxSmallSize)
merge adds in the deltas from b into a.
The compiler knows that a print of a value of this type
should use printhex instead of printuint (decimal).
A hash iteration structure.
If you modify hiter, also change cmd/compile/internal/gc/reflect.go to indicate
the layout of this structure.
B uint8
// current bucket
bucket uintptr
// bucket ptr at hash_iter initialization time
checkBucket uintptr
// Must be in second position (see cmd/compile/internal/gc/range.go).
h *hmap
i uint8
// Must be in first position. Write nil to indicate iteration end (see cmd/compile/internal/gc/range.go).
// intra-bucket offset to start from during iteration (should be big enough to hold bucketCnt-1)
// keeps overflow buckets of hmap.oldbuckets alive
// keeps overflow buckets of hmap.buckets alive
// bucket iteration started at
t *maptype
// already wrapped around from end of bucket array to beginning
func reflect_mapiterinit(t *maptype, h *hmap) *hiter
func mapiterinit(t *maptype, h *hmap, it *hiter)
func mapiternext(it *hiter)
func reflect_mapiterelem(it *hiter) unsafe.Pointer
func reflect_mapiterkey(it *hiter) unsafe.Pointer
func reflect_mapiternext(it *hiter)
A header for a Go map.
// log_2 of # of buckets (can hold up to loadFactor * 2^B items)
// array of 2^B buckets. May be nil if count==0.
Note: the format of the hmap is also encoded in cmd/compile/internal/gc/reflect.go.
Make sure this stays in sync with the compiler's definition.
// # live cells == size of map. Must be first (used by len() builtin)
// optional fields
flags uint8
// hash seed
// progress counter for evacuation (buckets less than this have been evacuated)
// approximate number of overflow buckets; see incrnoverflow for details
// previous bucket array of half the size, non-nil only when growing
(*T) createOverflow()
growing reports whether h is growing. The growth may be to the same size or bigger.
incrnoverflow increments h.noverflow.
noverflow counts the number of overflow buckets.
This is used to trigger same-size map growth.
See also tooManyOverflowBuckets.
To keep hmap small, noverflow is a uint16.
When there are few buckets, noverflow is an exact count.
When there are many buckets, noverflow is an approximate count.
(*T) newoverflow(t *maptype, b *bmap) *bmap
noldbuckets calculates the number of buckets prior to the current map growth.
oldbucketmask provides a mask that can be applied to calculate n % noldbuckets().
sameSizeGrow reports whether the current growth is to a map of the same size.
func makemap(t *maptype, hint int, h *hmap) *hmap
func makemap64(t *maptype, hint int64, h *hmap) *hmap
func makemap_small() *hmap
func reflect_makemap(t *maptype, cap int) *hmap
func advanceEvacuationMark(h *hmap, t *maptype, newbit uintptr)
func bucketEvacuated(t *maptype, h *hmap, bucket uintptr) bool
func evacuate(t *maptype, h *hmap, oldbucket uintptr)
func evacuate_fast32(t *maptype, h *hmap, oldbucket uintptr)
func evacuate_fast64(t *maptype, h *hmap, oldbucket uintptr)
func evacuate_faststr(t *maptype, h *hmap, oldbucket uintptr)
func growWork(t *maptype, h *hmap, bucket uintptr)
func growWork_fast32(t *maptype, h *hmap, bucket uintptr)
func growWork_fast64(t *maptype, h *hmap, bucket uintptr)
func growWork_faststr(t *maptype, h *hmap, bucket uintptr)
func hashGrow(t *maptype, h *hmap)
func makemap(t *maptype, hint int, h *hmap) *hmap
func makemap64(t *maptype, hint int64, h *hmap) *hmap
func mapaccess1(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer
func mapaccess1_fast32(t *maptype, h *hmap, key uint32) unsafe.Pointer
func mapaccess1_fast64(t *maptype, h *hmap, key uint64) unsafe.Pointer
func mapaccess1_faststr(t *maptype, h *hmap, ky string) unsafe.Pointer
func mapaccess1_fat(t *maptype, h *hmap, key, zero unsafe.Pointer) unsafe.Pointer
func mapaccess2(t *maptype, h *hmap, key unsafe.Pointer) (unsafe.Pointer, bool)
func mapaccess2_fast32(t *maptype, h *hmap, key uint32) (unsafe.Pointer, bool)
func mapaccess2_fast64(t *maptype, h *hmap, key uint64) (unsafe.Pointer, bool)
func mapaccess2_faststr(t *maptype, h *hmap, ky string) (unsafe.Pointer, bool)
func mapaccess2_fat(t *maptype, h *hmap, key, zero unsafe.Pointer) (unsafe.Pointer, bool)
func mapaccessK(t *maptype, h *hmap, key unsafe.Pointer) (unsafe.Pointer, unsafe.Pointer)
func mapassign(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer
func mapassign_fast32(t *maptype, h *hmap, key uint32) unsafe.Pointer
func mapassign_fast32ptr(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer
func mapassign_fast64(t *maptype, h *hmap, key uint64) unsafe.Pointer
func mapassign_fast64ptr(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer
func mapassign_faststr(t *maptype, h *hmap, s string) unsafe.Pointer
func mapclear(t *maptype, h *hmap)
func mapdelete(t *maptype, h *hmap, key unsafe.Pointer)
func mapdelete_fast32(t *maptype, h *hmap, key uint32)
func mapdelete_fast64(t *maptype, h *hmap, key uint64)
func mapdelete_faststr(t *maptype, h *hmap, ky string)
func mapiterinit(t *maptype, h *hmap, it *hiter)
func reflect_mapaccess(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer
func reflect_mapassign(t *maptype, h *hmap, key unsafe.Pointer, elem unsafe.Pointer)
func reflect_mapdelete(t *maptype, h *hmap, key unsafe.Pointer)
func reflect_mapiterinit(t *maptype, h *hmap) *hiter
func reflect_maplen(h *hmap) int
func reflectlite_maplen(h *hmap) int
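The B field above implies the usual power-of-two bucket arithmetic: a key's bucket is the low B bits of its hash, i.e. hash & (1<<B - 1). A standalone sketch of that selection, using hash/maphash as a stand-in for the runtime's per-map hash function and seed:

    package main

    import (
        "fmt"
        "hash/maphash"
    )

    // bucketIndex selects a bucket for key in a table of 2^B buckets by
    // masking the hash with 2^B - 1, as described for hmap.B above.
    func bucketIndex(h *maphash.Hash, key string, B uint8) uint64 {
        h.Reset()
        h.WriteString(key)
        mask := uint64(1)<<B - 1
        return h.Sum64() & mask
    }

    func main() {
        var h maphash.Hash // seeded randomly, playing the role of hmap.hash0
        for _, k := range []string{"alpha", "beta", "gamma"} {
            fmt.Printf("%s -> bucket %d of %d\n", k, bucketIndex(&h, k, 3), 1<<3)
        }
    }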
data unsafe.Pointer
tab *itab
func assertE2I(inter *interfacetype, e eface) (r iface)
func assertE2I2(inter *interfacetype, e eface) (r iface, b bool)
func assertI2I(inter *interfacetype, i iface) (r iface)
func assertI2I2(inter *interfacetype, i iface) (r iface, b bool)
func convI2I(inter *interfacetype, i iface) (r iface)
func convT2I(tab *itab, elem unsafe.Pointer) (i iface)
func convT2Inoptr(tab *itab, elem unsafe.Pointer) (i iface)
func assertI2I(inter *interfacetype, i iface) (r iface)
func assertI2I2(inter *interfacetype, i iface) (r iface, b bool)
func convI2I(inter *interfacetype, i iface) (r iface)
func printiface(i iface)
func reflect_ifaceE2I(inter *interfacetype, e eface, dst *iface)
func reflectlite_ifaceE2I(inter *interfacetype, e eface, dst *iface)
An initTask represents the set of initializations that need to be done for a package.
Keep in sync with ../../test/initempty.go:initTask
ndeps uintptr
nfns uintptr
TODO: pack the first 3 fields more tightly?
// 0 = uninitialized, 1 = in progress, 2 = done
func doInit(t *initTask)
var main_inittask
var runtime_inittask
inlinedCall is the encoding of entries in the FUNCDATA_InlTree table.
// per-CU file index for inlined call. See cmd/link:pcln.go
// type of the called function
// offset into pclntab for name of called function
// line number of the call site
// index of parent in the inltree, or < 0
// position of an instruction whose source position is the call site (offset from entry)
mhdr []imethod
pkgpath name
typ _type
func assertE2I(inter *interfacetype, e eface) (r iface)
func assertE2I2(inter *interfacetype, e eface) (r iface, b bool)
func assertI2I(inter *interfacetype, i iface) (r iface)
func assertI2I2(inter *interfacetype, i iface) (r iface, b bool)
func convI2I(inter *interfacetype, i iface) (r iface)
func getitab(inter *interfacetype, typ *_type, canfail bool) *itab
func itabHashFunc(inter *interfacetype, typ *_type) uintptr
func reflect_ifaceE2I(inter *interfacetype, e eface, dst *iface)
func reflectlite_ifaceE2I(inter *interfacetype, e eface, dst *iface)
layout of Itab known to compilers
allocated in non-garbage-collected memory
Needs to be in sync with
../cmd/compile/internal/gc/reflect.go:/^func.dumptabs.
_type *_type
// variable sized. fun[0]==0 means _type does not implement inter.
// copy of _type.hash. Used for type switches.
inter *interfacetype
init fills in the m.fun array with all the code pointers for
the m.inter/m._type pair. If the type does not implement the interface,
it sets m.fun[0] to 0 and returns the name of an interface function that is missing.
It is ok to call this multiple times on the same m, even concurrently.
func getitab(inter *interfacetype, typ *_type, canfail bool) *itab
func convT2I(tab *itab, elem unsafe.Pointer) (i iface)
func convT2Inoptr(tab *itab, elem unsafe.Pointer) (i iface)
func ifaceeq(tab *itab, x, y unsafe.Pointer) bool
func itab_callback(tab *itab)
func itabAdd(m *itab)
func panicdottypeI(have *itab, want, iface *_type)
Note: change the formula in the mallocgc call in itabAdd if you change these fields.
// current number of filled entries.
// really [size] large
// length of entries array. Always a power of 2.
add adds the given itab to itab table t.
itabLock must be held.
find finds the given interface/type pair in t.
Returns nil if the given interface/type pair isn't present.
var itabTable *itabTableType
var itabTableInit
data int64
fflags uint32
filter int16
flags uint16
ident uint64
udata *byte
func kevent(kq int32, ch *keventt, nch int32, ev *keventt, nev int32, ts *timespec) int32
Lock-free stack node.
Also known to export_test.go.
next uint64
pushcnt uintptr
func lfstackUnpack(val uint64) *lfnode
func lfnodeValidate(node *lfnode)
func lfstackPack(node *lfnode, cnt uintptr) uint64
lfstack is the head of a lock-free stack.
The zero value of lfstack is an empty list.
This stack is intrusive. Nodes must embed lfnode as the first field.
The stack does not keep GC-visible pointers to nodes, so the caller
is responsible for ensuring the nodes are not garbage collected
(typically by allocating them from manually-managed memory).
(*T) empty() bool
(*T) pop() unsafe.Pointer
(*T) push(node *lfnode)
var gcBgMarkWorkerPool
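A standalone sketch of an intrusive CAS-based stack in the spirit of lfstack. It is illustrative only: it uses ordinary GC-managed pointers and omits the generation count (pushcnt) that the real lfstack packs into its 64-bit head, so unlike the runtime's version it is not robust against ABA reuse when nodes are popped and re-pushed concurrently:

    package main

    import (
        "fmt"
        "sync/atomic"
        "unsafe"
    )

    // node embeds its link as the first field, mirroring the intrusive
    // lfnode requirement described above.
    type node struct {
        next unsafe.Pointer // *node
        val  int
    }

    // stack is a Treiber-style stack: push and pop CAS the head pointer.
    type stack struct {
        head unsafe.Pointer // *node
    }

    func (s *stack) push(n *node) {
        for {
            old := atomic.LoadPointer(&s.head)
            n.next = old
            if atomic.CompareAndSwapPointer(&s.head, old, unsafe.Pointer(n)) {
                return
            }
        }
    }

    func (s *stack) pop() *node {
        for {
            old := atomic.LoadPointer(&s.head)
            if old == nil {
                return nil
            }
            n := (*node)(old)
            // Without a packed generation count, this CAS is vulnerable to ABA
            // if another goroutine pops and re-pushes n concurrently.
            if atomic.CompareAndSwapPointer(&s.head, old, n.next) {
                return n
            }
        }
    }

    func main() {
        var s stack
        for i := 1; i <= 3; i++ {
            s.push(&node{val: i})
        }
        for n := s.pop(); n != nil; n = s.pop() {
            fmt.Println(n.val) // 3, 2, 1
        }
    }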
linearAlloc is a simple linear allocator that pre-reserves a region
of memory and then maps that region into the Ready state as needed. The
caller is responsible for locking.
// end of reserved space
// one byte past end of mapped space
// next free byte
(*T) alloc(size, align uintptr, sysStat *sysMemStat) unsafe.Pointer
(*T) init(base, size uintptr)
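A standalone sketch of a linear (bump-pointer) allocator in the spirit of linearAlloc, using a byte slice in place of reserved-then-mapped memory; as with linearAlloc, the caller would be responsible for locking (illustrative only):

    package main

    import "fmt"

    // bump hands out aligned chunks from a pre-reserved buffer by advancing a
    // single "next free" offset, the same bookkeeping described above.
    type bump struct {
        buf  []byte
        next uintptr // next free offset
    }

    func (b *bump) alloc(size, align uintptr) []byte {
        p := (b.next + align - 1) &^ (align - 1) // round up to alignment
        if p+size > uintptr(len(b.buf)) {
            return nil // out of reserved space
        }
        b.next = p + size
        return b.buf[p : p+size : p+size]
    }

    func main() {
        b := &bump{buf: make([]byte, 64)}
        fmt.Println(len(b.alloc(10, 8)), b.next) // 10 10
        fmt.Println(len(b.alloc(1, 16)), b.next) // 1 17
    }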
( T) String() string
T : fmt.Stringer
T : stringer
T : context.stringer
func getLockRank(l *mutex) lockRank
func acquireLockRank(rank lockRank)
func assertRankHeld(r lockRank)
func lockInit(l *mutex, rank lockRank)
func lockWithRank(l *mutex, rank lockRank)
func lockWithRankMayAcquire(l *mutex, rank lockRank)
func releaseLockRank(rank lockRank)
const lockRankAllg
const lockRankAllp
const lockRankAssistQueue
const lockRankCpuprof
const lockRankDeadlock
const lockRankDebug
const lockRankDebugPtrmask
const lockRankDefer
const lockRankDummy
const lockRankFaketimeState
const lockRankFin
const lockRankForcegc
const lockRankGcBitsArenas
const lockRankGFree
const lockRankGlobalAlloc
const lockRankGscan
const lockRankHchan
const lockRankHchanLeaf
const lockRankItab
const lockRankLeafRank
const lockRankMheap
const lockRankMheapSpecial
const lockRankMspanSpecial
const lockRankNetpollInit
const lockRankNewmHandoff
const lockRankNotifyList
const lockRankPanic
const lockRankPollCache
const lockRankPollDesc
const lockRankProf
const lockRankRaceFini
const lockRankReflectOffs
const lockRankRoot
const lockRankRwmutexR
const lockRankRwmutexW
const lockRankScavenge
const lockRankSched
const lockRankSpanSetSpine
const lockRankStackLarge
const lockRankStackpool
const lockRankSudog
const lockRankSweep
const lockRankSweepWaiters
const lockRankSysmon
const lockRankTicks
const lockRankTimers
const lockRankTrace
const lockRankTraceBuf
const lockRankTraceStackTab
const lockRankTraceStrings
const lockRankWbufSpans
lockRankStruct is embedded in mutex, but is empty when static lock ranking is disabled (the default).
// on allm
// m is blocked on a note
// goroutine running during fatal signal
// cgo traceback if crashing in cgo call
// if non-zero, cgoCallers in use temporarily
// stack that created this thread.
// current running goroutine
// div/mod denominator for arm - known to liblink
dlogPerM dlogPerM
// non-P running threads: sysmon and newmHandoff never use .park
dying int32
fastrand [2]uint32
// if == 0, safe to free g0 and delete m (atomic)
// on sched.freem
// goroutine with scheduling stack
// Go-allocated signal handling stack
// signal-handling g
id int64
// m is executing a cgo call
these are here because they are too large to be on the stack
of low-level NOSPLIT functions.
libcallg guintptr
// for cpu profiler
libcallsp uintptr
// tracking for external LockOSThread
// tracking for internal lockOSThread
lockedg guintptr
locks int32
locksHeld [10]heldLockInfo
Up to 10 locks held by this m, maintained by the lock ranking code.
mFixup is used to synchronize OS-related m state
(credentials etc.); use the mutex to access it. To avoid deadlocks,
an atomic.Load() of used being zero in mDoFixupFn()
guarantees fn is nil.
mOS mOS
mallocing int32
// gobuf arg to morestack
mOS.cond pthreadcond
mOS.count int
mOS.initialized bool
mOS.mutex pthreadmutex
mstartfn func()
// number of cgo calls currently in progress
// number of cgo calls in total
needextram bool
// minit on C thread called sigaltstack
nextp puintptr
// next m waiting for lock
// the p that was attached before executing a syscall
// attached p for executing go code (nil if not executing go code)
park note
preemptGen counts the number of completed preemption
signals. This is used to detect when a preemption is
requested, but fails. Accessed atomically.
// if != "", keep curg running on this m
printlock int8
Fields not known to debuggers.
// for debuggers, but offset not hard-coded
profilehz int32
schedlink muintptr
// storage for saved signal mask
Whether this is a pending preemption signal on this M.
Accessed atomically.
// m is out of work and is actively looking for work
startingtrace bool
// stores syscall parameters on windows
syscalltick uint32
throwing int32
// thread-local storage (for x86 extern register)
traceback uint8
// PC for traceback while in VDSO call
// SP for traceback while in VDSO call (0 if not in call)
waitlock unsafe.Pointer
waittraceev byte
waittraceskip int
waitunlockf func(*g, unsafe.Pointer) bool
func acquirem() *m
func allocm(_p_ *p, fn func(), id int64) *m
func lockextra(nilokay bool) *m
func mget() *m
func traceAcquireBuffer() (mp *m, pid int32, bufp *traceBufPtr)
func adjustSignalStack(sig uint32, mp *m, gsigStack *gsignalStack) bool
func canPreemptM(mp *m) bool
func mcommoninit(mp *m, id int64)
func mdestroy(mp *m)
func mpreinit(mp *m)
func mput(mp *m)
func newm1(mp *m)
func newosproc(mp *m)
func osPreemptExtEnter(mp *m)
func osPreemptExtExit(mp *m)
func preemptM(mp *m)
func profilealloc(mp *m, x unsafe.Pointer, size uintptr)
func releasem(mp *m)
func semacreate(mp *m)
func semawakeup(mp *m)
func setMNoWB(mp **m, new *m)
func signalM(mp *m, sig int)
func sigprof(pc, sp, lr uintptr, gp *g, mp *m)
func traceEventLocked(extraBytes int, mp *m, pid int32, bufp *traceBufPtr, ev byte, skip int, args ...uint64)
func traceStackID(mp *m, buf []uintptr, skip int) uint64
func unlockextra(mp *m)
var allm *m
var m0
mapextra holds fields that are not present on all maps.
nextOverflow holds a pointer to a free overflow bucket.
oldoverflow *[]*bmap
If both key and elem do not contain pointers and are inline, then we mark bucket
type as containing no pointers. This avoids scanning such maps.
However, bmap.overflow is a pointer. In order to keep overflow buckets
alive, we store pointers to all overflow buckets in hmap.extra.overflow and hmap.extra.oldoverflow.
overflow and oldoverflow are only used if key and elem do not contain pointers.
overflow contains overflow buckets for hmap.buckets.
oldoverflow contains overflow buckets for hmap.oldbuckets.
The indirection allows storing a pointer to the slice in hiter.
// internal type representing a hash bucket
// size of bucket
elem *_type
// size of elem slot
flags uint32
function for hashing keys (ptr to key, seed) -> hash
key *_type
// size of key slot
typ _type
(*T) hashMightPanic() bool
(*T) indirectelem() bool
Note: flag values must match those used in the TMAP case
in ../cmd/compile/internal/gc/reflect.go:dtypesym.
(*T) needkeyupdate() bool
(*T) reflexivekey() bool
func advanceEvacuationMark(h *hmap, t *maptype, newbit uintptr)
func bucketEvacuated(t *maptype, h *hmap, bucket uintptr) bool
func evacuate(t *maptype, h *hmap, oldbucket uintptr)
func evacuate_fast32(t *maptype, h *hmap, oldbucket uintptr)
func evacuate_fast64(t *maptype, h *hmap, oldbucket uintptr)
func evacuate_faststr(t *maptype, h *hmap, oldbucket uintptr)
func growWork(t *maptype, h *hmap, bucket uintptr)
func growWork_fast32(t *maptype, h *hmap, bucket uintptr)
func growWork_fast64(t *maptype, h *hmap, bucket uintptr)
func growWork_faststr(t *maptype, h *hmap, bucket uintptr)
func hashGrow(t *maptype, h *hmap)
func makeBucketArray(t *maptype, b uint8, dirtyalloc unsafe.Pointer) (buckets unsafe.Pointer, nextOverflow *bmap)
func makemap(t *maptype, hint int, h *hmap) *hmap
func makemap64(t *maptype, hint int64, h *hmap) *hmap
func mapaccess1(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer
func mapaccess1_fast32(t *maptype, h *hmap, key uint32) unsafe.Pointer
func mapaccess1_fast64(t *maptype, h *hmap, key uint64) unsafe.Pointer
func mapaccess1_faststr(t *maptype, h *hmap, ky string) unsafe.Pointer
func mapaccess1_fat(t *maptype, h *hmap, key, zero unsafe.Pointer) unsafe.Pointer
func mapaccess2(t *maptype, h *hmap, key unsafe.Pointer) (unsafe.Pointer, bool)
func mapaccess2_fast32(t *maptype, h *hmap, key uint32) (unsafe.Pointer, bool)
func mapaccess2_fast64(t *maptype, h *hmap, key uint64) (unsafe.Pointer, bool)
func mapaccess2_faststr(t *maptype, h *hmap, ky string) (unsafe.Pointer, bool)
func mapaccess2_fat(t *maptype, h *hmap, key, zero unsafe.Pointer) (unsafe.Pointer, bool)
func mapaccessK(t *maptype, h *hmap, key unsafe.Pointer) (unsafe.Pointer, unsafe.Pointer)
func mapassign(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer
func mapassign_fast32(t *maptype, h *hmap, key uint32) unsafe.Pointer
func mapassign_fast32ptr(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer
func mapassign_fast64(t *maptype, h *hmap, key uint64) unsafe.Pointer
func mapassign_fast64ptr(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer
func mapassign_faststr(t *maptype, h *hmap, s string) unsafe.Pointer
func mapclear(t *maptype, h *hmap)
func mapdelete(t *maptype, h *hmap, key unsafe.Pointer)
func mapdelete_fast32(t *maptype, h *hmap, key uint32)
func mapdelete_fast64(t *maptype, h *hmap, key uint64)
func mapdelete_faststr(t *maptype, h *hmap, ky string)
func mapiterinit(t *maptype, h *hmap, it *hiter)
func reflect_makemap(t *maptype, cap int) *hmap
func reflect_mapaccess(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer
func reflect_mapassign(t *maptype, h *hmap, key unsafe.Pointer, elem unsafe.Pointer)
func reflect_mapdelete(t *maptype, h *hmap, key unsafe.Pointer)
func reflect_mapiterinit(t *maptype, h *hmap) *hiter
markBits provides access to the mark bit for an object in the heap.
bytep points to the byte holding the mark bit.
mask is a byte with a single bit set that can be &ed with *bytep
to see if the bit has been set.
*m.bytep&m.mask != 0 indicates the mark bit is set.
index can be used along with span information to generate
the address of the object in the heap.
We maintain one set of mark bits for allocation and one for
marking purposes.
bytep *uint8
index uintptr
mask uint8
advance advances the markBits to the next object in the span.
clearMarked clears the marked bit in the markbits, atomically.
isMarked reports whether mark bit m is set.
setMarked sets the marked bit in the markbits, atomically.
setMarkedNonAtomic sets the marked bit in the markbits, non-atomically.
func markBitsForAddr(p uintptr) markBits
func markBitsForSpan(base uintptr) (mbits markBits)
func setCheckmark(obj, base, off uintptr, mbits markBits) bool
Per-thread (in Go, per-P) cache for small objects.
This includes a small object cache and local allocation stats.
No locking needed because it is per-thread (per-P).
mcaches are allocated from non-GC'd memory, so any heap pointers
must be specially handled.
// spans to allocate from, indexed by spanClass
flushGen indicates the sweepgen during which this mcache
was last flushed. If flushGen != mheap_.sweepgen, the spans
in this mcache are stale and need to be flushed so they
can be swept. This is done in acquirep.
The following members are accessed on every malloc,
so they are grouped here for better caching.
// trigger heap sample after allocating this many bytes
// bytes of scannable heap allocated
stackcache [4]stackfreelist
tiny points to the beginning of the current tiny block, or
nil if there is no current tiny block.
tiny is a heap pointer. Since mcache is in non-GC'd memory,
we handle it by clearing it in releaseAll during mark
termination.
tinyAllocs is the number of tiny allocations performed
by the P that owns this mcache.
tinyAllocs uintptr
tinyoffset uintptr
allocLarge allocates a span for a large object.
nextFree returns the next free object from the cached span if one is available.
Otherwise it refills the cache with a span that has an available object and
returns that object along with a flag indicating whether this was a
heavyweight allocation. If it was a heavyweight allocation, the caller must
determine whether a new GC cycle needs to be started or, if the GC is active,
whether this goroutine needs to assist the GC.
Must run in a non-preemptible context since otherwise the owner of
c could change.
prepareForSweep flushes c if the system has entered a new sweep phase
since c was populated. This must happen between the sweep phase
starting and the first allocation from c.
refill acquires a new span of span class spc for c. This span will
have at least one free object. The current span in c must be full.
Must run in a non-preemptible context since otherwise the owner of
c could change.
(*T) releaseAll()
func allocmcache() *mcache
func getMCache() *mcache
func freemcache(c *mcache)
func stackcache_clear(c *mcache)
func stackcacherefill(c *mcache, order uint8)
func stackcacherelease(c *mcache, order uint8)
var mcache0 *mcache
Central list of free objects of a given size.
// list of spans with no free objects
partial and full contain two mspan sets: one of swept in-use
spans, and one of unswept in-use spans. These two trade
roles on each GC cycle. The unswept set is drained either by
allocation or by the background sweeper in every GC cycle,
so only two roles are necessary.
sweepgen is increased by 2 on each GC cycle, so the swept
spans are in partial[sweepgen/2%2] and the unswept spans are in
partial[1-sweepgen/2%2]. Sweeping pops spans from the
unswept set and pushes spans that are still in-use on the
swept set. Likewise, allocating an in-use span pushes it
on the swept set.
Some parts of the sweeper can sweep arbitrary spans, and hence
can't remove them from the unswept set, but will add the span
to the appropriate swept list. As a result, the parts of the
sweeper and mcentral that do consume from the unswept list may
encounter swept spans, and these should be ignored.
// list of spans with a free object
spanclass spanClass
Allocate a span to use in an mcache.
fullSwept returns the spanSet which holds swept spans without any
free slots for this sweepgen.
fullUnswept returns the spanSet which holds unswept spans without any
free slots for this sweepgen.
grow allocates a new empty span from the heap and initializes it for c's size class.
Initialize a single central free list.
partialSwept returns the spanSet which holds partially-filled
swept spans for this sweepgen.
partialUnswept returns the spanSet which holds partially-filled
unswept spans for this sweepgen.
Return span from an mcache.
s must have a span class corresponding to this
mcentral and it must not be empty.
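The sweepgen arithmetic above can be made concrete with a small standalone sketch (illustrative only):

    package main

    import "fmt"

    // partialIndex mirrors the arithmetic described for mcentral: sweepgen
    // advances by 2 each GC cycle, so swept spans live in set sweepgen/2%2 and
    // unswept spans in set 1-sweepgen/2%2; the two sets swap roles each cycle.
    func partialIndex(sweepgen uint32) (swept, unswept uint32) {
        swept = sweepgen / 2 % 2
        return swept, 1 - swept
    }

    func main() {
        for _, sg := range []uint32{4, 6, 8} { // three successive GC cycles
            s, u := partialIndex(sg)
            fmt.Printf("sweepgen=%d swept set=%d unswept set=%d\n", sg, s, u)
        }
    }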
A memRecord is the bucket data for a bucket of type memProfile,
part of the memory profile.
active is the currently published profile. A profiling
cycle can be accumulated into active once it's complete.
future records the profile events we're counting for cycles
that have not yet been published. This is a ring buffer
indexed by the global heap profile cycle C and stores
cycles C, C+1, and C+2. Unlike active, these counts are
only for a single cycle; they are not cumulative across
cycles.
We store cycle C here because there's a window between when
C becomes the active cycle and when we've flushed it to
active.
memRecordCycle
alloc_bytes uintptr
allocs uintptr
free_bytes uintptr
frees uintptr
add accumulates b into a. It does not zero b.
compute is a function that populates a metricValue
given a populated statAggregate structure.
deps is the set of runtime statistics that this metric
depends on. Before compute is called, the statAggregate
which will be passed must ensure() these dependencies.
metricFloat64Histogram is a runtime copy of runtime/metrics.Float64Histogram
and must be kept structurally identical to that type.
buckets []float64
counts []uint64
metricKind is a runtime copy of runtime/metrics.ValueKind and
must be kept structurally identical to that type.
const metricKindBad
const metricKindFloat64
const metricKindFloat64Histogram
const metricKindUint64
metricSample is a runtime copy of runtime/metrics.Sample and
must be kept structurally identical to that type.
name string
value metricValue
metricValue is a runtime copy of runtime/metrics.Value and
must be kept structurally identical to that type.
kind metricKind
// contains non-scalar values.
// contains scalar values for scalar Kinds.
float64HistOrInit tries to pull out an existing float64Histogram
from the value, but if none exists, then it allocates one with
the given buckets.
Main malloc heap.
The heap itself is the "free" and "scav" treaps,
but all the other global data is here too.
mheap must not be heap-allocated because it contains mSpanLists,
which must not be heap-allocated.
allArenas is the arenaIndex of every mapped arena. This can
be used to iterate through the address space.
Access is protected by mheap_.lock. However, since this is
append-only and old backing arrays are never freed, it is
safe to acquire mheap_.lock, copy the slice header, and
then release mheap_.lock.
allspans is a slice of all mspans ever created. Each mspan
appears exactly once.
The memory for allspans is manually managed and can be
reallocated and moved as the heap grows.
In general, allspans is protected by mheap_.lock, which
prevents concurrent access as well as freeing the backing
store. Accesses during STW might not hold the lock, but
must ensure that allocation cannot happen around the
access (since that may free the backing store).
// all spans out there
arena is a pre-reserved space for allocating heap arenas
(the actual arenas). This is only used on 32-bit.
// allocator for arenaHints
arenaHints is a list of addresses at which to attempt to
add more heap arenas. This is initially populated with a
set of general hint addresses, and grown with the bounds of
actual heap arena ranges.
arenas is the heap arena map. It points to the metadata for
the heap for every arena frame of the entire usable virtual
address space.
Use arenaIndex to compute indexes into this array.
For regions of the address space that are not backed by the
Go heap, the arena map contains nil.
Modifications are protected by mheap_.lock. Reads can be
performed without locking; however, a given entry can
transition from nil to non-nil at any time when the lock
isn't held. (Entries never transition back to nil.)
In general, this is a two-level mapping consisting of an L1
map and possibly many L2 maps. This saves space when there
are a huge number of arena frames. However, on many
platforms (even 64-bit), arenaL1Bits is 0, making this
effectively a single-level map. In this case, arenas[0]
will never be nil.
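To illustrate the two-level lookup described above, here is a standalone sketch; the constants (arenaL1Bits, arenaL2Bits, the arena size) are illustrative rather than the real platform-dependent values, heapArena is a placeholder type, and arenaOf is a hypothetical helper:
	package main

	import "fmt"

	// Illustrative parameters; the real values are platform-dependent.
	const (
		arenaL1Bits   = 6 // on many 64-bit platforms this is actually 0
		arenaL2Bits   = 16
		logArenaBytes = 26 // 64 MiB arenas
	)

	type heapArena struct{} // placeholder for per-arena metadata

	// arenas is the two-level map: a small L1 array of pointers to L2
	// arrays. A nil L1 entry means no arena in that region is mapped.
	var arenas [1 << arenaL1Bits]*[1 << arenaL2Bits]*heapArena

	// arenaIndex splits an address into its L1 and L2 indices.
	func arenaIndex(p uintptr) (l1, l2 uintptr) {
		ai := p >> logArenaBytes
		return ai >> arenaL2Bits, ai & (1<<arenaL2Bits - 1)
	}

	// arenaOf returns the metadata for the arena frame containing p, or
	// nil if that part of the address space is not backed by the heap.
	func arenaOf(p uintptr) *heapArena {
		l1, l2 := arenaIndex(p)
		if l2Map := arenas[l1]; l2Map != nil {
			return l2Map[l2]
		}
		return nil
	}

	func main() {
		fmt.Println(arenaOf(0x1234567890) == nil) // true: nothing is mapped in this sketch
	}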
// allocator for mcache*
central free lists for small size classes.
the padding makes sure that the mcentrals are
spaced CacheLinePadSize bytes apart, so that each mcentral.lock
gets its own cache line.
central is indexed by spanClass.
curArena is the arena that the heap is currently growing
into. This should always be physPageSize-aligned.
heapArenaAlloc is pre-reserved space for allocating heapArena
objects. This is only used on 32-bit, where we pre-reserve
this space to avoid interleaving it with the heap itself.
lock must only be acquired on the system stack, otherwise a g
could self-deadlock if its stack grows with the lock held.
markArenas is a snapshot of allArenas taken at the beginning
of the mark cycle. Because allArenas is append-only, neither
this slice nor its contents will change during the mark, so
it can be read safely.
// page allocation data structure
Proportional sweep
These parameters represent a linear function from heap_live
to page sweep count. The proportional sweep system works to
stay in the black by keeping the current page sweep count
above this line at the current heap_live.
The line has slope sweepPagesPerByte and passes through a
basis point at (sweepHeapLiveBasis, pagesSweptBasis). At
any given time, the system is at (memstats.heap_live,
pagesSwept) in this space.
It's important that the line pass through a point we
control rather than simply starting at a (0,0) origin
because that lets us adjust sweep pacing at any time while
accounting for current progress. If we could only adjust
the slope, it would create a discontinuity in debt if any
progress has already been made.
// pages of spans in stats mSpanInUse; updated atomically
// pages swept this cycle; updated atomically
// pagesSwept to use as the origin of the sweep ratio; updated atomically
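The pacing line above can be made concrete with a small sketch; sweepDebt is a hypothetical helper, not the runtime's code. The target sweep count at the current heap_live is the line through the basis point, and anything below it is debt that must be paid by sweeping before allocating.
	package main

	import "fmt"

	// sweepDebt computes the pacing line: the target page-sweep count at
	// the current heap_live is the line through the basis point
	// (sweepHeapLiveBasis, pagesSweptBasis) with slope sweepPagesPerByte.
	// Anything below the target is debt. Assumes heapLive >= sweepHeapLiveBasis.
	func sweepDebt(heapLive, sweepHeapLiveBasis, pagesSweptBasis, pagesSwept uint64, sweepPagesPerByte float64) int64 {
		target := float64(pagesSweptBasis) + sweepPagesPerByte*float64(heapLive-sweepHeapLiveBasis)
		return int64(target) - int64(pagesSwept)
	}

	func main() {
		// Hypothetical numbers: 1 MiB allocated since the basis point, a
		// slope of 0.01 pages per byte, and 5000 pages swept so far.
		fmt.Println(sweepDebt(11<<20, 10<<20, 0, 5000, 0.01)) // 5485 pages still owed
	}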
reclaimCredit is spare credit for extra pages swept. Since
the page reclaimer works in large chunks, it may reclaim
more than requested. Any spare pages released go to this
credit pool.
This is accessed atomically.
reclaimIndex is the page index in allArenas of next page to
reclaim. Specifically, it refers to page (i %
pagesPerArena) of arena allArenas[i / pagesPerArena].
If this is >= 1<<63, the page reclaimer is done scanning
the page marks.
This is accessed atomically.
scavengeGoal is the amount of total retained heap memory (measured by
heapRetained) that the runtime will try to maintain by returning memory
to the OS.
// allocator for span*
// allocator for specialfinalizer*
// lock for special record allocators.
// allocator for specialprofile*
sweepArenas is a snapshot of allArenas taken at the
beginning of the sweep cycle. This can be read safely by
simply blocking GC (by disabling preemption).
// value of heap_live to use as the origin of sweep ratio; written with lock, read without
// proportional sweep ratio; written with lock, read without
// all spans are swept
// number of active sweepone calls
// sweep generation, see comment in mspan; written during STW
// never set, just here to force the specialfinalizer type into DWARF
alloc allocates a new span of npage pages from the GC'd heap.
spanclass indicates the span's size class and scannability.
If needzero is true, the memory for the returned span will be zeroed.
allocMSpanLocked allocates an mspan object.
h.lock must be held.
allocMSpanLocked must be called on the system stack because
its caller holds the heap lock. See mheap for details.
Running on the system stack also ensures that we won't
switch Ps during this function. See tryAllocMSpan for details.
allocManual allocates a manually-managed span of npage pages.
allocManual returns nil if allocation fails.
allocManual adds the bytes used to *stat, which should be a
memstats in-use field. Unlike allocations in the GC'd heap, the
allocation does *not* count toward heap_inuse or heap_sys.
The memory backing the returned span may not be zeroed if
span.needzero is set.
allocManual must be called on the system stack because it may
acquire the heap lock via allocSpan. See mheap for details.
If new code is written to call allocManual, do NOT use an
existing spanAllocType value and instead declare a new one.
allocNeedsZero checks if the region of address space [base, base+npage*pageSize),
assumed to be allocated, needs to be zeroed, updating heap arena metadata for
future allocations.
This must be called each time pages are allocated from the heap, even if the page
allocator can otherwise prove the memory it's allocating is already zero because
they're fresh from the operating system. It updates heapArena metadata that is
critical for future page allocations.
There are no locking constraints on this method.
allocSpan allocates an mspan which owns npages worth of memory.
If typ.manual() == false, allocSpan allocates a heap span of class spanclass
and updates heap accounting. If typ.manual() == true, allocSpan allocates a
manually-managed span (spanclass is ignored), and the caller is
responsible for any accounting related to its use of the span. Either
way, allocSpan will atomically add the bytes in the newly allocated
span to *sysStat.
The returned span is fully initialized.
h.lock must not be held.
allocSpan must be called on the system stack both because it acquires
the heap lock and because it must block GC transitions.
freeMSpanLocked frees an mspan object.
h.lock must be held.
freeMSpanLocked must be called on the system stack because
its caller holds the heap lock. See mheap for details.
Running on the system stack also ensures that we won't
switch Ps during this function. See tryAllocMSpan for details.
freeManual frees a manually-managed span returned by allocManual.
typ must be the same as the spanAllocType passed to the allocManual that
allocated s.
This must only be called when gcphase == _GCoff. See mSpanState for
an explanation.
freeManual must be called on the system stack because it acquires
the heap lock. See mheap for details.
Free the span back into the heap.
(*T) freeSpanLocked(s *mspan, typ spanAllocType)
Try to add at least npage pages of memory to the heap,
returning whether it worked.
h.lock must be held.
Initialize the heap.
nextSpanForSweep finds and pops the next span for sweeping from the
central sweep buffers. It returns ownership of the span to the caller.
Returns nil if no such span exists.
reclaim sweeps and reclaims at least npage pages into the heap.
It is called before allocating npage pages to keep growth in check.
reclaim implements the page-reclaimer half of the sweeper.
h.lock must NOT be held.
reclaimChunk sweeps unmarked spans that start at page indexes [pageIdx, pageIdx+n).
It returns the number of pages returned to the heap.
h.lock must be held and the caller must be non-preemptible. Note: h.lock may be
temporarily unlocked and re-locked in order to do sweeping or if tracing is
enabled.
scavengeAll acquires the heap lock (blocking any additional
manipulation of the page allocator) and iterates over the whole
heap, scavenging every free page available.
setSpans modifies the span map so [spanOf(base), spanOf(base+npage*pageSize))
is s.
sysAlloc allocates heap arena space for at least n bytes. The
returned pointer is always heapArenaBytes-aligned and backed by
h.arenas metadata. The returned size is always a multiple of
heapArenaBytes. sysAlloc returns nil on failure.
There is no corresponding free function.
sysAlloc returns a memory region in the Prepared state. This region must
be transitioned to Ready before use.
h must be locked.
tryAllocMSpan attempts to allocate an mspan object from
the P-local cache, but may fail.
h.lock need not be held.
The caller must ensure that its P won't change underneath
it during this function. Currently, to ensure that, we require
that the function run on the system stack, because that's
the only place it is used now. In the future, this requirement
may be relaxed if its use is necessary elsewhere.
var mheap_
A generic linked list of blocks. (Typically the block is bigger than sizeof(MLink).)
Since assignments to mlink.next will result in a write barrier being performed,
this cannot be used by some of the internal GC structures. For example, when
the sweeper is placing an unmarked object on the free list it does not want the
write barrier to be called since that could result in the object being reachable.
next *mlink
moduledata records information about the layout of the executable
image. It is written by the linker. Any changes here must be
matched by changes to the code in cmd/internal/ld/symtab.go:symtab.
moduledata is stored in statically allocated non-pointer memory;
none of the pointers here are visible to the garbage collector.
// module failed to load and should be ignored
bss uintptr
cutab []uint32
data uintptr
ebss uintptr
edata uintptr
end uintptr
enoptrbss uintptr
enoptrdata uintptr
etext uintptr
etypes uintptr
filetab []byte
findfunctab uintptr
ftab []functab
funcnametab []byte
gcbss uintptr
gcbssmask bitvector
gcdata uintptr
gcdatamask bitvector
// 1 if module contains the main function, 0 otherwise
itablinks []*itab
maxpc uintptr
minpc uintptr
modulehashes []modulehash
modulename string
next *moduledata
noptrbss uintptr
noptrdata uintptr
pcHeader *pcHeader
pclntable []byte
pctab []byte
pkghashes []modulehash
pluginpath string
ptab []ptabEntry
text uintptr
textsectmap []textsect
// offsets from types
// offset to *_rtype in previous module
types uintptr
func activeModules() []*moduledata
func findmoduledatap(pc uintptr) *moduledata
func moduledataverify1(datap *moduledata)
func pluginftabverify(md *moduledata)
var firstmoduledata
var lastmoduledatap *moduledata
A modulehash is used to compare the ABI of a new module or a
package in a new module with the loaded program.
For each shared library a module links against, the linker creates an entry in the
moduledata.modulehashes slice containing the name of the module, the abi hash seen
at link time and a pointer to the runtime abi hash. These are checked in
moduledataverify1 below.
For each loaded plugin, the pkghashes slice has a modulehash of the
newly loaded package that can be used to check the plugin's version of
a package against any previously loaded version of the package.
This is done in plugin.lastmoduleinit.
linktimehash string
modulename string
runtimehash *string
allocBits and gcmarkBits hold pointers to a span's mark and
allocation bits. The pointers are 8 byte aligned.
There are four arenas where this data is held.
free: Dirty arenas that are no longer accessed
and can be reused.
next: Holds information to be used in the next GC cycle.
current: Information being used during this GC cycle.
previous: Information being used during the last GC cycle.
A new GC cycle starts with the call to finishsweep_m.
finishsweep_m moves the previous arena to the free arena,
the current arena to the previous arena, and
the next arena to the current arena.
The next arena is populated as the spans request
memory to hold gcmarkBits for the next GC cycle as well
as allocBits for newly allocated spans.
The pointer arithmetic is done "by hand" instead of using
arrays to avoid bounds checks along critical performance
paths.
The sweep will free the old allocBits and set allocBits to the
gcmarkBits. The gcmarkBits are replaced with a fresh zeroed
out memory.
Cache of the allocBits at freeindex. allocCache is shifted
such that the lowest bit corresponds to the bit freeindex.
allocCache holds the complement of allocBits, thus allowing
ctz (count trailing zero) to use it directly.
allocCache may contain bits beyond s.nelems; the caller must ignore
these.
// number of allocated objects
// if non-0, elemsize is a power of 2, & this will get object allocation base
// for divide by elemsize - divMagic.mul
// for divide by elemsize - divMagic.shift
// for divide by elemsize - divMagic.shift2
// computed from sizeclass or from npages
freeindex is the slot index between 0 and nelems at which to begin scanning
for the next free object in this span.
Each allocation scans allocBits starting at freeindex until it encounters a 0
indicating a free object. freeindex is then adjusted so that subsequent scans begin
just past the newly discovered free object.
If freeindex == nelem, this span has no free objects.
allocBits is a bitmap of objects in this span.
If n >= freeindex and allocBits[n/8] & (1<<(n%8)) is 0
then object n is free;
otherwise, object n is allocated. Bits starting at nelem are
undefined and should never be referenced.
Object n starts at address n*elemsize + (start << pageShift).
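A minimal sketch of that ctz-based scan, assuming a single 64-bit window of allocBits rather than the real span machinery (nextFree is a hypothetical helper, and the base address and element size in main are made up):
	package main

	import (
		"fmt"
		"math/bits"
	)

	// nextFree scans a 64-bit window of an allocation bitmap. allocBits
	// has a 1 for every allocated slot; the cache holds its complement so
	// that counting trailing zeros lands directly on the next free slot.
	func nextFree(allocBits uint64, freeindex, nelems uintptr) (uintptr, bool) {
		cache := ^allocBits >> freeindex // complement, shifted so bit 0 is freeindex
		idx := freeindex + uintptr(bits.TrailingZeros64(cache))
		if idx >= nelems {
			return 0, false // bits beyond nelems must be ignored
		}
		return idx, true
	}

	func main() {
		// Slots 0-2 allocated, slot 3 free, in a span of 8 objects.
		idx, ok := nextFree(0b0000_0111, 0, 8)
		fmt.Println(idx, ok) // 3 true

		// The object's address is then base + idx*elemsize, e.g. for a
		// hypothetical span at base 0x1000 with 64-byte objects:
		fmt.Printf("%#x\n", 0x1000+idx*64) // 0x10c0
	}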
gcmarkBits *gcBits
// end of data in span
// For debugging. TODO: Remove.
// list of free objects in mSpanManual spans
// needs to be zeroed before allocation
TODO: Look up nelems from sizeclass and remove this field if it
helps performance.
// number of objects in the span.
// next span in list, or nil if none
// number of pages in span
// previous span in list, or nil if none
// size class and noscan (uint8)
// guards specials list
// linked list of special records sorted by offset.
// address of first byte of span aka s.base()
// mSpanInUse etc; accessed atomically (get/set methods)
sweepgen uint32
(*T) allocBitsForIndex(allocBitIndex uintptr) markBits
(*T) base() uintptr
countAlloc returns the number of objects allocated in span s by
scanning the allocation bitmap.
Returns only when span s has been swept.
(*T) inList() bool
Initialize a new span with the given start and npages.
isFree reports whether the index'th object in s is unallocated.
The caller must ensure s.state is mSpanInUse, and there must have
been no preemption points since ensuring this (which could allow a
GC transition, which would allow the state to change).
(*T) layout() (size, n, total uintptr)
(*T) markBitsForBase() markBits
(*T) markBitsForIndex(objIndex uintptr) markBits
nextFreeIndex returns the index of the next free object in s at
or after s.freeindex.
There are hardware instructions that can be used to make this
faster if profiling warrants it.
(*T) objIndex(p uintptr) uintptr
refillAllocCache takes 8 bytes of s.allocBits starting at whichByte
and negates them so that ctz (count trailing zeros) instructions
can be used. It then places these 8 bytes into the cached 64 bit
s.allocCache.
reportZombies reports any marked but free objects in s and throws.
This generally means one of the following:
1. User code converted a pointer to a uintptr and then back
unsafely, and a GC ran while the uintptr was the only reference to
an object.
2. User code (or a compiler bug) constructed a bad pointer that
points to a free slot, often a past-the-end pointer.
3. The GC two cycles ago missed a pointer and freed a live object,
but it was still live in the last cycle, so this GC cycle found a
pointer to that object and marked it.
Sweep frees or collects finalizers for blocks not marked in the mark phase.
It clears the mark bits in preparation for the next GC round.
Returns true if the span was returned to heap.
If preserve=true, don't return it to heap nor relink in mcentral lists;
caller takes care of it.
func findObject(p, refBase, refOff uintptr) (base uintptr, s *mspan, objIndex uintptr)
func materializeGCProg(ptrdata uintptr, prog *byte) *mspan
func spanOf(p uintptr) *mspan
func spanOfHeap(p uintptr) *mspan
func spanOfUnchecked(p uintptr) *mspan
func badPointer(s *mspan, p, refBase, refOff uintptr)
func dematerializeGCProg(s *mspan)
func gcmarknewobject(span *mspan, obj, size, scanSize uintptr)
func greyobject(obj, base, off uintptr, span *mspan, gcw *gcWork, objIndex uintptr)
func nextFreeFast(s *mspan) gclinkptr
func osStackAlloc(s *mspan)
func osStackFree(s *mspan)
func spanHasNoSpecials(s *mspan)
func spanHasSpecials(s *mspan)
var emptymspan
mSpanList heads a linked list of spans.
// first span in list, or nil if none
// last span in list, or nil if none
Initialize an empty doubly-linked list.
(*T) insert(span *mspan)
(*T) insertBack(span *mspan)
(*T) isEmpty() bool
(*T) remove(span *mspan)
takeAll removes all spans from other and inserts them at the front
of list.
An mspan representing actual memory has state mSpanInUse,
mSpanManual, or mSpanFree. Transitions between these states are
constrained as follows:
* A span may transition from free to in-use or manual during any GC
phase.
* During sweeping (gcphase == _GCoff), a span may transition from
in-use to free (as a result of sweeping) or manual to free (as a
result of stacks being freed).
* During GC (gcphase != _GCoff), a span *must not* transition from
manual or in-use to free. Because concurrent GC may read a pointer
and then look up its span, the span state must be monotonic.
Setting mspan.state to mSpanInUse or mSpanManual must be done
atomically and only after all other span fields are valid.
Likewise, if inspecting a span is contingent on it being
mSpanInUse, the state should be loaded atomically and checked
before depending on other fields. This allows the garbage collector
to safely deal with potentially invalid pointers, since resolving
such pointers may race with a span being allocated.
const mSpanDead
const mSpanInUse
const mSpanManual
mSpanStateBox holds an mSpanState and provides atomic operations on
it. This is a separate type to disallow accidental comparison or
assignment with mSpanState.
s mSpanState
(*T) get() mSpanState
(*T) set(s mSpanState)
Statistics.
For detailed descriptions see the documentation for MemStats.
Fields that differ from MemStats are further documented here.
Many of these fields are updated on the fly, while others are only
updated when updatememstats is called.
General statistics.
// bytes allocated and not yet freed
// profiling bucket hash table
by_size [68]struct{size uint32; nmalloc uint64; nfree uint64}
debuggc bool
enablegc bool
// updated atomically or during STW
gcPauseDist represents the distribution of all GC-related
application pauses in the runtime.
Each individual pause is counted separately, unlike pause_ns.
// computed by updatememstats
Statistics about GC overhead.
// computed by updatememstats
// fraction of CPU time used by GC
gc_trigger is the heap size that triggers marking.
When heap_live ≥ gc_trigger, the mark phase will start.
This is also the heap size by which proportional sweeping
must be complete.
This is computed from triggerRatio during mark termination
for the next cycle's trigger.
heapStats is a set of statistics
// bytes in mSpanInUse spans
heap_live is the number of bytes considered live by the GC.
That is: retained by the most recent GC plus allocated
since then. heap_live <= alloc, since alloc includes unmarked
objects that have not yet been swept (and hence goes up as we
allocate and down as we sweep) while heap_live excludes these
objects (and hence only goes up between GCs).
This is updated atomically without locking. To reduce
contention, this is updated only when obtaining a span from
an mcentral and at this point it counts all of the
unallocated slots in that span (which will be allocated
before that mcache obtains another span from that
mcentral). Hence, it slightly overestimates the "true" live
heap size. It's better to overestimate than to
underestimate because 1) this triggers the GC earlier than
necessary rather than potentially too late and 2) this
leads to a conservative GC rate rather than a GC rate that
is potentially too low.
Reads should likewise be atomic (or during STW).
Whenever this is updated, call traceHeapAlloc() and
gcController.revise().
heap_marked is the number of bytes marked by the previous
GC. After mark termination, heap_live == heap_marked, but
unlike heap_live, heap_marked does not change until the
next mark termination.
heap_objects is not used by the runtime directly and instead
computed on the fly by updatememstats.
// total number of allocated objects
// bytes released to the os
heap_scan is the number of bytes of "scannable" heap. This
is the live heap (as counted by heap_live), but omitting
no-scan objects and no-scan tails of objects.
Whenever this is updated, call gcController.revise().
Read and written atomically or with the world stopped.
Statistics about malloc heap.
Updated atomically, or with the world stopped.
Like MemStats, heap_sys and heap_inuse do not count memory
in manually-managed spans.
// virtual address space obtained from system for GC'd heap
// last gc (monotonic time)
Protected by mheap or stopping the world during GC.
// last gc (in unix time)
// heap_inuse at mark termination of the previous GC
// next_gc for the previous GC
// mcache structures
mcache_sys sysMemStat
Statistics about allocation of low-level fixed-size structures.
Protected by FixAlloc locks.
// mspan structures
mspan_sys sysMemStat
next_gc is the goal heap_live for when next GC ends.
Set to ^uint64(0) if disabled.
Read and written atomically, unless the world is stopped.
// number of frees
// number of pointer lookups (unused)
// number of mallocs
// number of user-forced GCs
numgc uint32
Miscellaneous statistics.
// updated atomically or during STW
// circular buffer of recent gc end times (nanoseconds since 1970)
// circular buffer of recent gc pause lengths
pause_total_ns uint64
Statistics about stacks.
// bytes in manually-managed stack spans; computed by updatememstats
// only counts newosproc0 stack in mstats; differs from MemStats.StackSys
// bytes obtained from system (should be sum of xxx_sys below, no locking, approximate)
// number of tiny allocations that didn't cause actual allocation; not exported to go directly
// bytes allocated (even if freed)
triggerRatio is the heap growth ratio that triggers marking.
E.g., if this is 0.6, then GC should start when the live
heap has reached 1.6 times the heap size marked by the
previous cycle. This should be ≤ GOGC/100 so the trigger
heap size is less than the goal heap size. This is set
during mark termination for the next cycle's trigger.
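For a concrete feel of those two quantities, here is a back-of-the-envelope sketch that ignores the bounds and adjustments the real pacer applies; gcTrigger is a hypothetical helper.
	package main

	import "fmt"

	// gcTrigger sketches the relationship described above: marking starts
	// once the live heap passes heap_marked*(1+triggerRatio), while the
	// goal (next_gc) is heap_marked*(1+GOGC/100).
	func gcTrigger(heapMarked uint64, triggerRatio float64) uint64 {
		return uint64(float64(heapMarked) * (1 + triggerRatio))
	}

	func main() {
		const heapMarked = 100 << 20 // 100 MiB marked by the previous GC
		fmt.Println(gcTrigger(heapMarked, 0.6)>>20, "MiB trigger for triggerRatio=0.6") // 160
		fmt.Println(gcTrigger(heapMarked, 1.0)>>20, "MiB goal for GOGC=100")            // 200
	}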
var memstats
muintptr is a *m that is not tracked by the garbage collector.
Because we do free Ms, there are some additional constraints on
muintptrs:
1. Never hold an muintptr locally across a safe point.
2. Any muintptr in the heap must be owned by the M itself so it can
ensure it is not in use when the last true *m is released.
( T) ptr() *m
(*T) set(m *m)
Mutual exclusion locks. In the uncontended case,
as fast as spin locks (just a few user-level instructions),
but on the contention path they sleep in the kernel.
A zeroed Mutex is unlocked (no need to initialize each lock).
Initialization is helpful for static lock ranking, but not required.
Futex-based impl treats it as uint32 key,
while sema-based impl as M* waitm.
Used to be a union, but unions break precise GC.
Empty struct if lock ranking is disabled, otherwise includes the lock rank
func assertLockHeld(l *mutex)
func assertWorldStoppedOrLockHeld(l *mutex)
func getLockRank(l *mutex) lockRank
func goparkunlock(lock *mutex, reason waitReason, traceEv byte, traceskip int)
func lock(l *mutex)
func lock2(l *mutex)
func lockInit(l *mutex, rank lockRank)
func lockWithRank(l *mutex, rank lockRank)
func lockWithRankMayAcquire(l *mutex, rank lockRank)
func unlock(l *mutex)
func unlock2(l *mutex)
func unlockWithRank(l *mutex)
var allglock
var allpLock
var deadlock
var debuglock
var finlock
var itabLock
var netpollInitLock
var paniclk
var proflock
var tracelock
name is an encoded type name with optional extra data.
See reflect/type.go for details.
bytes *byte
( T) data(off int) *byte
( T) isBlank() bool
( T) isExported() bool
( T) name() (s string)
( T) nameLen() int
( T) pkgPath() string
( T) tag() (s string)
( T) tagLen() int
func resolveNameOff(ptrInModule unsafe.Pointer, off nameOff) name
func goexit(neverCallThisFunction)
sleep and wakeup on one-time events.
before any calls to notesleep or notewakeup,
must call noteclear to initialize the Note.
then, exactly one thread can call notesleep
and exactly one thread can call notewakeup (once).
once notewakeup has been called, the notesleep
will return. future notesleep will return immediately.
subsequent noteclear must be called only after
previous notesleep has returned, e.g. it's disallowed
to call noteclear straight after notewakeup.
notetsleep is like notesleep but wakes up after
a given number of nanoseconds even if the event
has not yet happened. if a goroutine uses notetsleep to
wake up early, it must wait to call noteclear until it
can be sure that no other goroutine is calling
notewakeup.
notesleep/notetsleep are generally called on g0;
notetsleepg is similar to notetsleep but is called on user g.
Futex-based impl treats it as uint32 key,
while sema-based impl as M* waitm.
Used to be a union, but unions break precise GC.
func noteclear(n *note)
func notesleep(n *note)
func notetsleep(n *note, ns int64) bool
func notetsleep_internal(n *note, ns int64, gp *g, deadline int64) bool
func notetsleepg(n *note, ns int64) bool
func notewakeup(n *note)
func sigNoteSetup(*note)
func sigNoteSleep(*note)
func sigNoteWakeup(*note)
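For intuition only, the one-time event protocol above behaves roughly like closing a channel; this is a user-level analogue (the event type and its methods are hypothetical), not how the futex- or semaphore-based implementations actually work:
	package main

	import "fmt"

	// event is a user-level stand-in for a note: noteclear allocates a
	// fresh event, notewakeup fires it exactly once, and notesleep waits
	// for it. Sleeps that happen after the wakeup return immediately.
	type event struct{ ch chan struct{} }

	func noteclear() *event      { return &event{ch: make(chan struct{})} }
	func (e *event) notewakeup() { close(e.ch) }
	func (e *event) notesleep()  { <-e.ch }

	func main() {
		e := noteclear()
		go e.notewakeup()
		e.notesleep()
		e.notesleep() // returns immediately: the event already happened
		fmt.Println("woken")
	}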
notifyList is a ticket-based notification list used to implement sync.Cond.
It must be kept in sync with the sync package.
head *sudog
List of parked waiters.
notify is the ticket number of the next waiter to be notified. It can
be read outside the lock, but is only written to with lock held.
Both wait & notify can wrap around, and such cases will be correctly
handled as long as their "unwrapped" difference is bounded by 2^31.
For this not to be the case, we'd need to have 2^31+ goroutines
blocked on the same condvar, which is currently not possible.
tail *sudog
wait is the ticket number of the next waiter. It is atomically
incremented outside the lock.
func notifyListAdd(l *notifyList) uint32
func notifyListNotifyAll(l *notifyList)
func notifyListNotifyOne(l *notifyList)
func notifyListWait(l *notifyList, t uint32)
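The wraparound tolerance described above relies on the usual unsigned-ticket comparison trick; a sketch (the less helper here is illustrative):
	package main

	import "fmt"

	// less reports whether ticket a was issued before ticket b, treating
	// the uint32 tickets as a sequence that may wrap around. It is correct
	// as long as the "unwrapped" distance between a and b is below 2^31.
	func less(a, b uint32) bool {
		return int32(a-b) < 0
	}

	func main() {
		fmt.Println(less(1, 2))          // true
		fmt.Println(less(0xFFFFFFFF, 2)) // true: 2 was issued after the counter wrapped
		fmt.Println(less(2, 0xFFFFFFFF)) // false
	}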
notInHeap is off-heap memory allocated by a lower-level allocator
like sysAlloc or persistentAlloc.
In general, it's better to use real types marked as go:notinheap,
but this serves as a generic type for situations where that isn't
possible (like in the allocators).
TODO: Use this as the return type of sysAlloc, persistentAlloc, etc?
(*T) add(bytes uintptr) *notInHeap
func persistentalloc1(size, align uintptr, sysStat *sysMemStat) *notInHeap
var persistentChunks *notInHeap
offAddr represents an address in a contiguous view
of the address space on systems where the address space is
segmented. On other systems, it's just a normal address.
a is just the virtual address, but should never be used
directly. Call addr() to get this value instead.
add adds a uintptr offset to the offAddr.
addr returns the virtual address for this offset address.
diff returns the amount of bytes in between the
two offAddrs.
equal returns true if the two offAddr values are equal.
lessEqual returns true if l1 is less than or equal to l2 in
the offset address space.
lessThan returns true if l1 is less than l2 in the offset
address space.
sub subtracts a uintptr offset from the offAddr.
func levelIndexToOffAddr(level, idx int) offAddr
func offAddrToLevelIndex(level int, addr offAddr) int
var maxOffAddr
var maxSearchAddr
var minOffAddr
Number of timerModifiedEarlier timers on P's heap.
This should only be modified while holding timersLock,
or while the timer status is in a transient state
such as timerModifying.
// pool of available defer structs of different sizes (see panic.go)
deferpoolbuf [5][32]*_defer
Number of timerDeleted timers in P's heap.
Modified using atomic instructions.
Available G's (status == Gdead)
Per-P GC state
// Nanoseconds in assistAlloc
// Nanoseconds in fractional mark worker (atomic)
gcMarkWorkerMode is the mode for the next mark worker to run in.
That is, this is used to communicate with the worker goroutine
selected for immediate execution by
gcController.findRunnableGCWorker. When scheduling other goroutines,
this field must be set to gcMarkWorkerNotWorker.
gcMarkWorkerStartTime is the nanotime() at which the most recent
mark worker started.
gcw is this P's GC work buffer cache. The work buffer is
filled by write barriers, drained by mutator assists, and
disposed on certain GC state transitions.
Cache of goroutine ids, amortizes accesses to runtime·sched.goidgen.
goidcacheend uint64
id int32
link puintptr
// back-link to associated m (nil if idle)
mcache *mcache
Cache of mspan objects from the heap.
Number of timers in P's heap.
Modified using atomic instructions.
pad cpu.CacheLinePad
// per-P to avoid mutex
pcache pageCache
preempt is set to indicate that this P should enter the
scheduler ASAP (regardless of what G is running on it).
raceprocctx uintptr
// if 1, run sched.safePointFn at next safe point
runnext, if non-nil, is a runnable G that was ready'd by
the current G and should be run next instead of what's in
runq if there's time remaining in the running G's time
slice. It will inherit the time left in the current time
slice. If a set of goroutines is locked in a
communicate-and-wait pattern, this schedules that set as a
unit and eliminates the (potentially large) scheduling
latency that otherwise arises from adding the ready'd
goroutines to the end of the run queue.
runq [256]guintptr
Queue of runnable goroutines. Accessed without lock.
runqtail uint32
// incremented on every scheduler call
statsSeq is a counter indicating whether this P is currently
writing any stats. Its value is even when not, odd when it is.
// one of pidle/prunning/...
sudogbuf [128]*sudog
sudogcache []*sudog
// incremented on every system call
// last tick observed by sysmon
The when field of the first entry on the timer heap.
This is updated using atomic functions.
This is 0 if the timer heap is empty.
The earliest known nextwhen field of a timer with
timerModifiedEarlier status. Because the timer may have been
modified again, there need not be any timer with this value.
This is updated using atomic functions.
This is 0 if the value is unknown.
Race context used while executing timer functions.
Actions to take at some time. This is used to implement the
standard library's time package.
Must hold timersLock to access.
Lock for timers. We normally access the timers while running
on this P, but the scheduler can also do it from a different P.
traceSwept and traceReclaimed track the number of bytes
swept and reclaimed by sweeping in the current sweep loop.
traceSweep indicates the sweep events should be traced.
This is used to defer the sweep start event until a span
has actually been swept.
traceSwept and traceReclaimed track the number of bytes
swept and reclaimed by sweeping in the current sweep loop.
tracebuf traceBufPtr
wbBuf is this P's GC write barrier buffer.
TODO: Consider caching this in the running G.
destroy releases all of the resources associated with pp and
transitions it to status _Pdead.
sched.lock must be held and the world must be stopped.
init initializes pp, which may be a freshly allocated p or a
previously destroyed p, and transitions it to status _Pgcstop.
func pidleget() *p
func procresize(nprocs int32) *p
func releasep() *p
func timeSleepUntil() (int64, *p)
func acquirep(_p_ *p)
func addAdjustedTimers(pp *p, moved []*timer)
func adjusttimers(pp *p, now int64)
func allocm(_p_ *p, fn func(), id int64) *m
func checkTimers(pp *p, now int64) (rnow, pollUntil int64, ran bool)
func cleantimers(pp *p)
func clearDeletedTimers(pp *p)
func doaddtimer(pp *p, t *timer)
func dodeltimer(pp *p, i int)
func dodeltimer0(pp *p)
func exitsyscallfast(oldp *p) bool
func gcMarkWorkAvailable(p *p) bool
func gfget(_p_ *p) *g
func gfpurge(_p_ *p)
func gfput(_p_ *p, gp *g)
func globrunqget(_p_ *p, max int32) *g
func handoffp(_p_ *p)
func moveTimers(pp *p, timers []*timer)
func newm(fn func(), _p_ *p, id int64)
func nobarrierWakeTime(pp *p) int64
func pidleput(_p_ *p)
func preemptone(_p_ *p) bool
func runOneTimer(pp *p, t *timer, now int64)
func runqempty(_p_ *p) bool
func runqget(_p_ *p) (gp *g, inheritTime bool)
func runqgrab(_p_ *p, batch *[256]guintptr, batchHead uint32, stealRunNextG bool) uint32
func runqput(_p_ *p, gp *g, next bool)
func runqputbatch(pp *p, q *gQueue, qsize int)
func runqputslow(_p_ *p, gp *g, h, t uint32) bool
func runqsteal(_p_, p2 *p, stealRunNextG bool) *g
func runtimer(pp *p, now int64) int64
func startm(_p_ *p, spinning bool)
func traceGoSysBlock(pp *p)
func traceProcFree(pp *p)
func traceProcStop(pp *p)
func updateTimer0When(pp *p)
func updateTimerModifiedEarliest(pp *p, nextwhen int64)
func updateTimerPMask(pp *p)
func verifyTimerHeap(pp *p)
func wbBufFlush1(_p_ *p)
func wirep(_p_ *p)
chunks is a slice of bitmap chunks.
The total size of chunks is quite large on most 64-bit platforms
(O(GiB) or more) if flattened, so rather than making one large mapping
(which has problems on some platforms, even when PROT_NONE) we use a
two-level sparse array approach similar to the arena index in mheap.
To find the chunk containing a memory address `a`, do:
chunkOf(chunkIndex(a))
Below is a table describing the configuration for chunks for various
heapAddrBits supported by the runtime.
heapAddrBits | L1 Bits | L2 Bits | L2 Entry Size
------------------------------------------------
32 | 0 | 10 | 128 KiB
33 (iOS) | 0 | 11 | 256 KiB
48 | 13 | 13 | 1 MiB
There's no reason to use the L1 part of chunks on 32-bit: the
address space is small, so the L2 is small. For platforms with a
48-bit address space, we pick the L1 such that the L2 is 1 MiB
in size, which strikes a good balance between fine granularity and
keeping the impact on BSS low (note the L1 is stored directly
in pageAlloc).
To iterate over the bitmap, use inUse to determine which ranges
are currently available. Otherwise one might iterate over unused
ranges.
TODO(mknyszek): Consider changing the definition of the bitmap
such that 1 means free and 0 means in-use so that summaries and
the bitmaps align better on zero-values.
start and end represent the chunk indices
which pageAlloc knows about. It assumes
chunks in the range [start, end) are
currently ready to use.
inUse is a slice of ranges of address space which are
known by the page allocator to be currently in-use (passed
to grow).
This field is currently unused on 32-bit architectures but
is harmless to track. We care much more about having a
contiguous heap in these cases and take additional measures
to ensure that, so in nearly all cases this should have just
1 element.
All access is protected by the mheapLock.
mheapLock is a pointer to mheap_.lock. This level of indirection makes it
possible to test pageAlloc independently of the runtime allocator.
scav stores the scavenger state.
All fields are protected by mheapLock.
The address to start an allocation search with. It must never
point to any memory that is not contained in inUse, i.e.
inUse.contains(searchAddr.addr()) must always be true. The one
exception to this rule is that it may take on the value of
maxOffAddr to indicate that the heap is exhausted.
We guarantee that all valid heap addresses below this value
are allocated and not worth searching.
start and end represent the chunk indices
which pageAlloc knows about. It assumes
chunks in the range [start, end) are
currently ready to use.
Radix tree of summaries.
Each slice's cap represents the whole memory reservation.
Each slice's len reflects the allocator's maximum known
mapped heap address for that level.
The backing store of each summary level is reserved in init
and may or may not be committed in grow (small address spaces
may commit all the memory in init).
The purpose of keeping len <= cap is to enforce bounds checks
on the top end of the slice so that instead of an unknown
runtime segmentation fault, we get a much friendlier out-of-bounds
error.
To iterate over a summary level, use inUse to determine which ranges
are currently available. Otherwise one might try to access
memory which is only Reserved which may result in a hard fault.
We may still get segmentation faults < len since some of that
memory may not be committed yet.
sysStat is the runtime memstat to update when new system
memory is committed by the pageAlloc for allocation metadata.
Whether or not this struct is being used in tests.
alloc allocates npages worth of memory from the page heap, returning the base
address for the allocation and the amount of scavenged memory in bytes
contained in the region [base address, base address + npages*pageSize).
Returns a 0 base address on failure, in which case other returned values
should be ignored.
p.mheapLock must be held.
Must run on the system stack because p.mheapLock must be held.
allocRange marks the range of memory [base, base+npages*pageSize) as
allocated. It also updates the summaries to reflect the newly-updated
bitmap.
Returns the amount of scavenged memory in bytes present in the
allocated range.
p.mheapLock must be held.
allocToCache acquires a pageCachePages-aligned chunk of free pages which
may not be contiguous, and returns a pageCache structure which owns the
chunk.
p.mheapLock must be held.
Must run on the system stack because p.mheapLock must be held.
chunkOf returns the chunk at the given chunk index.
The chunk index must be valid or this method may throw.
find searches for the first (address-ordered) contiguous free region of
npages in size and returns a base address for that region.
It uses p.searchAddr to prune its search and assumes that no palloc chunks
below chunkIndex(p.searchAddr) contain any free memory at all.
find also computes and returns a candidate p.searchAddr, which may or
may not prune more of the address space than p.searchAddr already does.
This candidate is always a valid p.searchAddr.
find represents the slow path and the full radix tree search.
Returns a base address of 0 on failure, in which case the candidate
searchAddr returned is invalid and must be ignored.
p.mheapLock must be held.
findMappedAddr returns the smallest mapped offAddr that is
>= addr. That is, if addr refers to mapped memory, then it is
returned. If addr is higher than any mapped region, then
it returns maxOffAddr.
p.mheapLock must be held.
free returns npages worth of memory starting at base back to the page heap.
p.mheapLock must be held.
Must run on the system stack because p.mheapLock must be held.
grow sets up the metadata for the address range [base, base+size).
It may allocate metadata, in which case *p.sysStat will be updated.
p.mheapLock must be held.
(*T) init(mheapLock *mutex, sysStat *sysMemStat)
scavenge scavenges nbytes worth of free pages, starting with the
highest address first. Successive calls continue from where it left
off until the heap is exhausted. Call scavengeStartGen to bring it
back to the top of the heap.
Returns the amount of memory scavenged in bytes.
p.mheapLock must be held, but may be temporarily released if
mayUnlock == true.
Must run on the system stack because p.mheapLock must be held.
scavengeOne walks over address range work until it finds
a contiguous run of pages to scavenge. It will try to scavenge
at most max bytes at once, but may scavenge more to avoid
breaking huge pages. Once it scavenges some memory it returns
how much it scavenged in bytes.
Returns the number of bytes scavenged and the part of work
which was not yet searched.
work's base address must be aligned to pallocChunkBytes.
p.mheapLock must be held, but may be temporarily released if
mayUnlock == true.
Must run on the system stack because p.mheapLock must be held.
scavengeRangeLocked scavenges the given region of memory.
The region of memory is described by its chunk index (ci),
the starting page index of the region relative to that
chunk (base), and the length of the region in pages (npages).
Returns the base address of the scavenged region.
p.mheapLock must be held.
scavengeReserve reserves a contiguous range of the address space
for scavenging. The maximum amount of space it reserves is proportional
to the size of the heap. The ranges are reserved from the high addresses
first.
Returns the reserved range and the scavenge generation number for it.
p.mheapLock must be held.
Must run on the system stack because p.mheapLock must be held.
scavengeStartGen starts a new scavenge generation, resetting
the scavenger's search space to the full in-use address space.
p.mheapLock must be held.
Must run on the system stack because p.mheapLock must be held.
scavengeUnreserve returns an unscavenged portion of a range that was
previously reserved with scavengeReserve.
p.mheapLock must be held.
Must run on the system stack because p.mheapLock must be held.
sysGrow performs architecture-dependent operations on heap
growth for the page allocator, such as mapping in new memory
for summaries. It also updates the length of the slices in
p.summary.
base is the base of the newly-added heap memory and limit is
the first address past the end of the newly-added heap memory.
Both must be aligned to pallocChunkBytes.
The caller must update p.start and p.end after calling sysGrow.
sysInit performs architecture-dependent initialization of fields
in pageAlloc. pageAlloc should be uninitialized except for sysStat
if any runtime statistic should be updated.
tryChunkOf returns the bitmap data for the given chunk.
Returns nil if the chunk data has not been mapped.
update updates heap metadata. It must be called each time the bitmap
is updated.
If contig is true, update does some optimizations assuming that there was
a contiguous allocation or free between addr and addr+npages. alloc indicates
whether the operation performed was an allocation or a free.
p.mheapLock must be held.
pageBits is a bitmap representing one bit per page in a palloc chunk.
block64 returns the 64-bit aligned block of bits containing the i'th bit.
clear clears bit i of pageBits.
clearAll frees all the bits of b.
clearRange clears bits in the range [i, i+n).
get returns the value of the i'th bit in the bitmap.
popcntRange counts the number of set bits in the
range [i, i+n).
set sets bit i of pageBits.
setAll sets all the bits of b.
setRange sets bits in the range [i, i+n).
pageCache represents a per-p cache of pages the allocator can
allocate from without a lock. More specifically, it represents
a pageCachePages*pageSize chunk of memory with 0 or more free
pages in it.
// base address of the chunk
// 64-bit bitmap representing free pages (1 means free)
// 64-bit bitmap representing scavenged pages (1 means scavenged)
alloc allocates npages from the page cache and is the main entry
point for allocation.
Returns a base address and the amount of scavenged memory in the
allocated region in bytes.
Returns a base address of zero on failure, in which case the
amount of scavenged memory should be ignored.
allocN is a helper which attempts to allocate npages worth of pages
from the cache. It represents the general case for allocating from
the page cache.
Returns a base address and the amount of scavenged memory in the
allocated region in bytes.
empty returns true if the pageCache has no free pages, and false
otherwise.
flush empties out unallocated free pages in the given cache
into s. Then, it clears the cache, such that empty returns
true.
p.mheapLock must be held.
Must run on the system stack because p.mheapLock must be held.
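A rough sketch of carving n contiguous pages out of a 64-bit free bitmap like the cache field above, where a 1 bit means free; allocN here is a simplified stand-in, not the runtime's implementation:
	package main

	import "fmt"

	// allocN finds a run of n contiguous set bits (free pages) in cache,
	// clears them (marking the pages allocated), and returns the index of
	// the first page, or -1 if no such run exists. n must be in [1, 64].
	func allocN(cache *uint64, n int) int {
		run := uint64(1)<<n - 1
		for i := 0; i+n <= 64; i++ {
			if *cache>>i&run == run {
				*cache &^= run << i
				return i
			}
		}
		return -1
	}

	func main() {
		cache := ^uint64(0) &^ 0b111   // pages 0-2 allocated, pages 3-63 free
		fmt.Println(allocN(&cache, 4)) // 3
		fmt.Println(allocN(&cache, 4)) // 7: the next run starts after the first allocation
	}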
pallocBits is a bitmap that tracks page allocations for at most one
palloc chunk.
The precise representation is an implementation detail, but for the
sake of documentation, 0s are free pages and 1s are allocated pages.
allocAll allocates all the bits of b.
allocRange allocates the range [i, i+n).
find searches for npages contiguous free pages in pallocBits and returns
the index where that run starts, as well as the index of the first free page
it found in the search. searchIdx represents the first known free page and
where to begin the next search from.
If find fails to find any free space, it returns an index of ^uint(0) and
the new searchIdx should be ignored.
Note that if npages == 1, the two returned values will always be identical.
find1 is a helper for find which searches for a single free page
in the pallocBits and returns the index.
See find for an explanation of the searchIdx parameter.
findLargeN is a helper for find which searches for npages contiguous free pages
in this pallocBits and returns the index where that run starts, as well as the
index of the first free page it found in its search.
See alloc for an explanation of the searchIdx parameter.
Returns a ^uint(0) index on failure and the new searchIdx should be ignored.
findLargeN assumes npages > 64, where any such run of free pages
crosses at least one aligned 64-bit boundary in the bits.
findSmallN is a helper for find which searches for npages contiguous free pages
in this pallocBits and returns the index where that run of contiguous pages
starts as well as the index of the first free page it finds in its search.
See find for an explanation of the searchIdx parameter.
Returns a ^uint(0) index on failure and the new searchIdx should be ignored.
findSmallN assumes npages <= 64, where any such contiguous run of pages
crosses at most one aligned 64-bit boundary in the bits.
free frees the range [i, i+n) of pages in the pallocBits.
free1 frees a single page in the pallocBits at i.
freeAll frees all the bits of b.
pages64 returns a 64-bit bitmap representing a block of 64 pages aligned
to 64 pages. The returned block of pages is the one containing the i'th
page in this pallocBits. Each bit represents whether the page is in-use.
summarize returns a packed summary of the bitmap in pallocBits.
pallocData encapsulates pallocBits and a bitmap for
whether or not a given page is scavenged in a single
structure. It's effectively a pallocBits with
additional functionality.
Update the comment on (*pageAlloc).chunks should this
structure change.
pallocBits pallocBits
scavenged pageBits
allocAll sets every bit in the bitmap to 1 and updates
the scavenged bits appropriately.
allocRange sets bits [i, i+n) in the bitmap to 1 and
updates the scavenged bits appropriately.
find searches for npages contiguous free pages in pallocBits and returns
the index where that run starts, as well as the index of the first free page
it found in the search. searchIdx represents the first known free page and
where to begin the next search from.
If find fails to find any free space, it returns an index of ^uint(0) and
the new searchIdx should be ignored.
Note that if npages == 1, the two returned values will always be identical.
find1 is a helper for find which searches for a single free page
in the pallocBits and returns the index.
See find for an explanation of the searchIdx parameter.
findLargeN is a helper for find which searches for npages contiguous free pages
in this pallocBits and returns the index where that run starts, as well as the
index of the first free page it found in its search.
See alloc for an explanation of the searchIdx parameter.
Returns a ^uint(0) index on failure and the new searchIdx should be ignored.
findLargeN assumes npages > 64, where any such run of free pages
crosses at least one aligned 64-bit boundary in the bits.
findScavengeCandidate returns a start index and a size for this pallocData
segment which represents a contiguous region of free and unscavenged memory.
searchIdx indicates the page index within this chunk to start the search, but
note that findScavengeCandidate searches backwards through the pallocData. As a
result, it will return the highest scavenge candidate in address order.
min indicates a hard minimum size and alignment for runs of pages. That is,
findScavengeCandidate will not return a region smaller than min pages in size,
or that is min pages or greater in size but not aligned to min. min must be
a non-zero power of 2 <= maxPagesPerPhysPage.
max is a hint for how big of a region is desired. If max >= pallocChunkPages, then
findScavengeCandidate effectively returns entire free and unscavenged regions.
If max < pallocChunkPages, it may truncate the returned region such that size is
max. However, findScavengeCandidate may still return a larger region if, for
example, it chooses to preserve huge pages, or if max is not aligned to min (it
will round up). That is, even if max is small, the returned size is not guaranteed
to be equal to max. max is allowed to be less than min, in which case it is as if
max == min.
findSmallN is a helper for find which searches for npages contiguous free pages
in this pallocBits and returns the index where that run of contiguous pages
starts as well as the index of the first free page it finds in its search.
See find for an explanation of the searchIdx parameter.
Returns a ^uint(0) index on failure and the new searchIdx should be ignored.
findSmallN assumes npages <= 64, where any such contiguous run of pages
crosses at most one aligned 64-bit boundary in the bits.
free frees the range [i, i+n) of pages in the pallocBits.
free1 frees a single page in the pallocBits at i.
freeAll frees all the bits of b.
hasScavengeCandidate returns true if there's any min-page-aligned groups of
min pages of free-and-unscavenged memory in the region represented by this
pallocData.
min must be a non-zero power of 2 <= maxPagesPerPhysPage.
pages64 returns a 64-bit bitmap representing a block of 64 pages aligned
to 64 pages. The returned block of pages is the one containing the i'th
page in this pallocBits. Each bit represents whether the page is in-use.
summarize returns a packed summary of the bitmap in pallocBits.
pallocSum is a packed summary type which packs three numbers: start, max,
and end into a single 8-byte value. Each of these values are a summary of
a bitmap and are thus counts, each of which may have a maximum value of
2^21 - 1, or all three may be equal to 2^21. The latter case is represented
by just setting the 64th bit.
end extracts the end value from a packed sum.
max extracts the max value from a packed sum.
start extracts the start value from a packed sum.
unpack unpacks all three values from the summary.
func mergeSummaries(sums []pallocSum, logMaxPagesPerSum uint) pallocSum
func packPallocSum(start, max, end uint) pallocSum
func mergeSummaries(sums []pallocSum, logMaxPagesPerSum uint) pallocSum
const freeChunkSum
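A sketch of that packing scheme, three 21-bit counts in one 8-byte word with the top bit marking the all-2^21 case; it mirrors the description above rather than the exact runtime code, and pack/unpack are written out here for illustration:
	package main

	import "fmt"

	const (
		logMaxPackedValue = 21
		maxPackedValue    = 1 << logMaxPackedValue // 2^21
	)

	type pallocSum uint64

	// pack stores start, max, and end (each <= 2^21) in one word. If all
	// three equal 2^21 they can't fit in 21 bits each, so that case is
	// encoded by setting the top bit instead.
	func pack(start, max, end uint) pallocSum {
		if max == maxPackedValue {
			return pallocSum(uint64(1) << 63)
		}
		return pallocSum(uint64(start) |
			uint64(max)<<logMaxPackedValue |
			uint64(end)<<(2*logMaxPackedValue))
	}

	// unpack recovers all three values from the packed word.
	func (p pallocSum) unpack() (start, max, end uint) {
		if p&(1<<63) != 0 {
			return maxPackedValue, maxPackedValue, maxPackedValue
		}
		mask := uint64(maxPackedValue - 1)
		return uint(uint64(p) & mask),
			uint(uint64(p) >> logMaxPackedValue & mask),
			uint(uint64(p) >> (2 * logMaxPackedValue) & mask)
	}

	func main() {
		fmt.Println(pack(3, 10, 7).unpack())                                       // 3 10 7
		fmt.Println(pack(maxPackedValue, maxPackedValue, maxPackedValue).unpack()) // 2097152 2097152 2097152
	}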
pcHeader holds data used by the pclntab lookups.
// offset to the cutab variable from pcHeader
// offset to the filetab variable from pcHeader
// offset to the funcnametab variable from pcHeader
// 0xFFFFFFFA
// min instruction size
// number of entries in the file tab.
// number of functions in the module
// 0,0
// 0,0
// offset to the pclntab variable from pcHeader
// offset to the pctab variable from pcHeader
// size of a ptr in bytes
entries [2][8]pcvalueCacheEnt
func funcspdelta(f funcInfo, targetpc uintptr, cache *pcvalueCache) int32
func getStackMap(frame *stkframe, cache *pcvalueCache, debug bool) (locals, args bitvector, objs []stackObjectRecord)
func pcdatavalue(f funcInfo, table uint32, targetpc uintptr, cache *pcvalueCache) int32
func pcdatavalue1(f funcInfo, table uint32, targetpc uintptr, cache *pcvalueCache, strict bool) int32
func pcvalue(f funcInfo, off uint32, targetpc uintptr, cache *pcvalueCache, strict bool) (int32, uintptr)
off uint32
targetpc and off together are the key of this cache entry.
val is the value of this cached pcvalue entry.
plainError represents a runtime error described by a string without
the prefix "runtime error: " after invoking errorString.Error().
See Issue #14965.
( T) Error() string
( T) RuntimeError()
T : Error
T : error
pMask is an atomic bitstring with one bit per P.
clear clears P id's bit.
read returns true if P id's bit is set.
set sets P id's bit.
var idlepMask
var timerpMask
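An illustrative user-level equivalent of such a bitstring; the runtime relies on its internal atomic primitives, so this sketch falls back to a compare-and-swap loop with sync/atomic (the pMask type and methods here are stand-ins, not the runtime's):
	package main

	import (
		"fmt"
		"sync/atomic"
	)

	// pMask is one bit per P, packed into uint32 words.
	type pMask []uint32

	// read returns true if P id's bit is set.
	func (m pMask) read(id uint32) bool {
		word, mask := id/32, uint32(1)<<(id%32)
		return atomic.LoadUint32(&m[word])&mask != 0
	}

	// set sets P id's bit via a CAS loop.
	func (m pMask) set(id uint32) {
		word, mask := id/32, uint32(1)<<(id%32)
		for {
			old := atomic.LoadUint32(&m[word])
			if atomic.CompareAndSwapUint32(&m[word], old, old|mask) {
				return
			}
		}
	}

	// clear clears P id's bit via a CAS loop.
	func (m pMask) clear(id uint32) {
		word, mask := id/32, uint32(1)<<(id%32)
		for {
			old := atomic.LoadUint32(&m[word])
			if atomic.CompareAndSwapUint32(&m[word], old, old&^mask) {
				return
			}
		}
	}

	func main() {
		m := make(pMask, (64+31)/32) // room for 64 Ps
		m.set(37)
		fmt.Println(m.read(37), m.read(5)) // true false
		m.clear(37)
		fmt.Println(m.read(37)) // false
	}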
Network poller descriptor.
No heap pointers.
closing bool
// marks event scanning error happened
fd uintptr
// in pollcache, protected by pollcache.lock
The lock protects pollOpen, pollSetDeadline, pollUnblock and deadlineimpl operations.
This fully covers seq, rt and wt variables. fd is constant throughout the PollDesc lifetime.
pollReset, pollWait, pollWaitCanceled and runtime·netpollready (IO readiness notification)
proceed w/o taking the lock. So closing, everr, rg, rd, wg and wd are manipulated
in a lock-free way by all operations.
NOTE(dvyukov): the following code uses uintptr to store *g (rg/wg),
that will blow up when GC starts moving objects.
// protects the following fields
// read deadline
// pdReady, pdWait, G waiting for read or nil
// protects from stale read timers
// read deadline timer (set if rt.f != nil)
// storage for indirect interface. See (*pollDesc).makeArg.
// user settable cookie
// write deadline
// pdReady, pdWait, G waiting for write or nil
// protects from stale write timers
// write deadline timer
makeArg converts pd to an interface{}.
makeArg does not do any allocation. Normally, such
a conversion requires an allocation because pointers to
go:notinheap types (which pollDesc is) must be stored
in interfaces indirectly. See issue 42076.
func poll_runtime_pollOpen(fd uintptr) (*pollDesc, int)
func netpollarm(pd *pollDesc, mode int)
func netpollblock(pd *pollDesc, mode int32, waitio bool) bool
func netpollcheckerr(pd *pollDesc, mode int32) int
func netpolldeadlineimpl(pd *pollDesc, seq uintptr, read, write bool)
func netpollopen(fd uintptr, pd *pollDesc) int32
func netpollready(toRun *gList, pd *pollDesc, mode int32)
func netpollunblock(pd *pollDesc, mode int32, ioready bool) *g
func poll_runtime_pollClose(pd *pollDesc)
func poll_runtime_pollReset(pd *pollDesc, mode int) int
func poll_runtime_pollSetDeadline(pd *pollDesc, d int64, mode int)
func poll_runtime_pollUnblock(pd *pollDesc)
func poll_runtime_pollWait(pd *pollDesc, mode int) int
func poll_runtime_pollWaitCanceled(pd *pollDesc, mode int)
A profAtomic is the atomically-accessed word holding a profIndex.
(*T) cas(old, new profIndex) bool
(*T) load() profIndex
(*T) store(new profIndex)
A profBuf is a lock-free buffer for profiling events,
safe for concurrent use by one reader and one writer.
The writer may be a signal handler running without a user g.
The reader is assumed to be a user g.
Each logged event corresponds to a fixed size header, a list of
uintptrs (typically a stack), and exactly one unsafe.Pointer tag.
The header and uintptrs are stored in the circular buffer data and the
tag is stored in a circular buffer tags, running in parallel.
In the circular buffer data, each event takes 2+hdrsize+len(stk)
words: the value 2+hdrsize+len(stk), then the time of the event, then
hdrsize words giving the fixed-size header, and then len(stk) words
for the stack.
The current effective offsets into the tags and data circular buffers
for reading and writing are stored in the high 30 and low 32 bits of r and w.
The bottom bits of the high 32 are additional flag bits in w, unused in r.
"Effective" offsets means the total number of reads or writes, mod 2^length.
The offset in the buffer is the effective offset mod the length of the buffer.
To make wraparound mod 2^length match wraparound mod length of the buffer,
the length of the buffer must be a power of two.
If the reader catches up to the writer, a flag passed to read controls
whether the read blocks until more data is available. A read returns a
pointer to the buffer data itself; the caller is assumed to be done with
that data at the next read. The read offset rNext tracks the next offset to
be returned by read. By definition, r ≤ rNext ≤ w (before wraparound),
and rNext is only used by the reader, so it can be accessed without atomics.
If the writer gets ahead of the reader, so that the buffer fills,
future writes are discarded and replaced in the output stream by an
overflow entry, which has size 2+hdrsize+1, time set to the time of
the first discarded write, a header of all zeroed words, and a "stack"
containing one word, the number of discarded writes.
Between the time the buffer fills and the buffer becomes empty enough
to hold more data, the overflow entry is stored as a pending overflow
entry in the fields overflow and overflowTime. The pending overflow
entry can be turned into a real record by either the writer or the
reader. If the writer is called to write a new record and finds that
the output buffer has room for both the pending overflow entry and the
new record, the writer emits the pending overflow entry and the new
record into the buffer. If the reader is called to read data and finds
that the output buffer is empty but that there is a pending overflow
entry, the reader will return a synthesized record for the pending
overflow entry.
Only the writer can create or add to a pending overflow entry, but
either the reader or the writer can clear the pending overflow entry.
A pending overflow entry is indicated by the low 32 bits of 'overflow'
holding the number of discarded writes, and overflowTime holding the
time of the first discarded write. The high 32 bits of 'overflow'
increment each time the low 32 bits transition from zero to non-zero
or vice versa. This sequence number avoids ABA problems in the use of
compare-and-swap to coordinate between reader and writer.
The overflowTime is only written when the low 32 bits of overflow are
zero, that is, only when there is no pending overflow entry, in
preparation for creating a new one. The reader can therefore fetch and
clear the entry atomically using
	for {
		overflow = load(&b.overflow)
		if uint32(overflow) == 0 {
			// no pending entry
			break
		}
		time = load(&b.overflowTime)
		if cas(&b.overflow, overflow, ((overflow>>32)+1)<<32) {
			// pending entry cleared
			break
		}
	}
	if uint32(overflow) > 0 {
		emit entry for uint32(overflow), time
	}
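The fetch-and-clear pattern above can be written as ordinary Go using sync/atomic. The following is a self-contained sketch of the same idea on a hypothetical pendingOverflow type (the names and field layout are illustrative, not the runtime's actual fields):
	package main

	import (
		"fmt"
		"sync/atomic"
	)

	// pendingOverflow mimics the overflow bookkeeping described above:
	// the low 32 bits of count hold the number of discarded writes, the
	// high 32 bits hold a sequence number that avoids ABA problems, and
	// firstTime holds the time of the first discarded write.
	type pendingOverflow struct {
		count     uint64
		firstTime uint64
	}

	// take atomically fetches and clears the pending entry, returning the
	// discarded-write count and the time of the first discard (0, 0 if none).
	func (p *pendingOverflow) take() (n uint32, t uint64) {
		for {
			overflow := atomic.LoadUint64(&p.count)
			if uint32(overflow) == 0 {
				return 0, 0 // no pending entry
			}
			t = atomic.LoadUint64(&p.firstTime)
			// Clear the low 32 bits and bump the sequence number in the high 32.
			if atomic.CompareAndSwapUint64(&p.count, overflow, ((overflow>>32)+1)<<32) {
				return uint32(overflow), t
			}
		}
	}

	func main() {
		var p pendingOverflow
		atomic.StoreUint64(&p.firstTime, 12345)
		atomic.StoreUint64(&p.count, 7) // 7 discarded writes, sequence number 0
		fmt.Println(p.take())           // 7 12345
	}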
data []uint64
eof uint32
immutable (excluding slice content)
overflow uint64
// for use by reader to return overflow record
overflowTime uint64
accessed atomically
owned by reader
tags []unsafe.Pointer
accessed atomically
wait note
canWriteRecord reports whether the buffer has room
for a single contiguous record with a stack of length nstk.
canWriteTwoRecords reports whether the buffer has room
for two records with stack lengths nstk1, nstk2, in that order.
Each record must be contiguous on its own, but the two
records need not be contiguous (one can be at the end of the buffer
and the other can wrap around and start at the beginning of the buffer).
close signals that there will be no more writes on the buffer.
Once all the data has been read from the buffer, reads will return eof=true.
hasOverflow reports whether b has any overflow records pending.
incrementOverflow records a single overflow at time now.
It is racing against a possible takeOverflow in the reader.
(*T) read(mode profBufReadMode) (data []uint64, tags []unsafe.Pointer, eof bool)
takeOverflow consumes the pending overflow records, returning the overflow count
and the time of the first overflow.
When called by the reader, it is racing against incrementOverflow.
wakeupExtra must be called after setting one of the "extra"
atomic fields b.overflow or b.eof.
It records the change in b.w and wakes up the reader if needed.
write writes an entry to the profiling buffer b.
The entry begins with a fixed hdr, which must have
length b.hdrsize, followed by a variable-sized stack
and a single tag pointer *tagPtr (or nil if tagPtr is nil).
No write barriers allowed because this might be called from a signal handler.
func newProfBuf(hdrsize, bufwords, tags int) *profBuf
profBufReadMode specifies whether to block when no data is available to read.
const profBufBlocking
const profBufNonBlocking
A profIndex is the packed tag and data counts and flag bits, described above.
addCountsAndClearFlags returns the packed form of "x + (data, tag) - all flags".
( T) dataCount() uint32
( T) tagCount() uint32
const profReaderSleeping
const profWriteExtra
A ptabEntry is generated by the compiler for each exported function
and global variable in the main package of a plugin. It is used to
initialize the plugin module's symbol map.
name nameOff
typ typeOff
func pthread_self() (t pthread)
func pthread_kill(t pthread, sig uint32)
X__opaque [56]int8
X__sig int64
func pthread_attr_getstacksize(attr *pthreadattr, size *uintptr) int32
func pthread_attr_init(attr *pthreadattr) int32
func pthread_attr_setdetachstate(attr *pthreadattr, state int) int32
func pthread_create(attr *pthreadattr, start uintptr, arg unsafe.Pointer) int32
X__opaque [40]int8
X__sig int64
func pthread_cond_init(c *pthreadcond, attr *pthreadcondattr) int32
func pthread_cond_signal(c *pthreadcond) int32
func pthread_cond_timedwait_relative_np(c *pthreadcond, m *pthreadmutex, t *timespec) int32
func pthread_cond_wait(c *pthreadcond, m *pthreadmutex) int32
X__opaque [8]int8
X__sig int64
func pthread_cond_init(c *pthreadcond, attr *pthreadcondattr) int32
X__opaque [56]int8
X__sig int64
func pthread_cond_timedwait_relative_np(c *pthreadcond, m *pthreadmutex, t *timespec) int32
func pthread_cond_wait(c *pthreadcond, m *pthreadmutex) int32
func pthread_mutex_init(m *pthreadmutex, attr *pthreadmutexattr) int32
func pthread_mutex_lock(m *pthreadmutex) int32
func pthread_mutex_unlock(m *pthreadmutex) int32
X__opaque [8]int8
X__sig int64
func pthread_mutex_init(m *pthreadmutex, attr *pthreadmutexattr) int32
elem *_type
typ _type
func addfinalizer(p unsafe.Pointer, f *funcval, nret uintptr, fint *_type, ot *ptrtype) bool
func dumpfinalizer(obj unsafe.Pointer, fn *funcval, fint *_type, ot *ptrtype)
func finq_callback(fn *funcval, obj unsafe.Pointer, nret uintptr, fint *_type, ot *ptrtype)
func queuefinalizer(p unsafe.Pointer, fn *funcval, nret uintptr, fint *_type, ot *ptrtype)
randomOrder/randomEnum are helper types for randomized work stealing.
They allow enumerating all Ps in different pseudo-random orders without repetition.
The algorithm is based on the fact that if we have X such that X and GOMAXPROCS
are coprime, then the sequence (i + X) % GOMAXPROCS gives the required enumeration.
coprimes []uint32
count uint32
(*T) reset(count uint32)
(*T) start(i uint32) randomEnum
var stealOrder
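As a standalone illustration of the coprime-stride trick described for randomOrder above, the sketch below visits 0..count-1 exactly once using a stride that is coprime with count (this is not the runtime's implementation, just the underlying idea):
	package main

	import "fmt"

	// gcd computes the greatest common divisor of a and b.
	func gcd(a, b uint32) uint32 {
		for b != 0 {
			a, b = b, a%b
		}
		return a
	}

	// enumerate visits 0..count-1 exactly once in a pseudo-random order,
	// using a stride that is coprime with count, starting from position i.
	func enumerate(count, stride, i uint32) []uint32 {
		if gcd(count, stride) != 1 {
			panic("stride must be coprime with count")
		}
		order := make([]uint32, 0, count)
		pos := i % count
		for n := uint32(0); n < count; n++ {
			order = append(order, pos)
			pos = (pos + stride) % count
		}
		return order
	}

	func main() {
		// With count=8 and stride=3 (coprime), all of 0..7 appear exactly once.
		fmt.Println(enumerate(8, 3, 5)) // [5 0 3 6 1 4 7 2]
	}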
reflectMethodValue is a partial duplicate of reflect.makeFuncImpl
and reflect.methodValue.
// just args
fn uintptr
// ptrmap for both args and results
cs uint32
ds uint32
eax uint32
ebp uint32
ebx uint32
ecx uint32
edi uint32
edx uint32
eflags uint32
eip uint32
es uint32
esi uint32
esp uint32
fs uint32
gs uint32
ss uint32
cs uint64
fs uint64
gs uint64
r10 uint64
r11 uint64
r12 uint64
r13 uint64
r14 uint64
r15 uint64
r8 uint64
r9 uint64
rax uint64
rbp uint64
rbx uint64
rcx uint64
rdi uint64
rdx uint64
rflags uint64
rip uint64
rsi uint64
rsp uint64
A runtimeSelect is a single case passed to rselect.
This must match ../reflect/value.go:/runtimeSelect
// channel
dir selectDir
// channel type (not used here)
// ptr to data (SendDir) or ptr to receive buffer (RecvDir)
func reflect_rselect(cases []runtimeSelect) (int, bool)
A rwmutex is a reader/writer mutual exclusion lock.
The lock can be held by an arbitrary number of readers or a single writer.
This is a variant of sync.RWMutex, for the runtime package.
Like mutex, rwmutex blocks the calling M.
It does not interact with the goroutine scheduler.
// protects readers, readerPass, writer
// number of pending readers
// number of pending readers to skip readers list
// number of departing readers
// list of pending readers
// serializes writers
// pending writer waiting for completing readers
lock locks rw for writing.
rlock locks rw for reading.
runlock undoes a single rlock call on rw.
unlock unlocks rw for writing.
var execLock
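rwmutex is internal to the runtime and blocks the calling M rather than the goroutine. For ordinary Go code, the analogous type is sync.RWMutex; a minimal sketch of the same read/write locking pattern:
	package main

	import (
		"fmt"
		"sync"
	)

	func main() {
		var mu sync.RWMutex
		data := map[string]int{}

		// Writer: exclusive access.
		mu.Lock()
		data["hits"]++
		mu.Unlock()

		// Readers: any number may hold the read lock concurrently.
		mu.RLock()
		fmt.Println(data["hits"])
		mu.RUnlock()
	}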
Select case descriptor.
Known to compiler.
Changes here must also be made in src/cmd/internal/gc/select.go's scasetype.
// chan
// data element
func selectgo(cas0 *scase, order0 *uint16, pc0 *uintptr, nsends, nrecvs int, block bool) (int, bool)
func sellock(scases []scase, lockorder []uint16)
func selunlock(scases []scase, lockorder []uint16)
Central pool of available defer structs of different sizes.
deferpool [5]*_defer
disable controls selective disabling of the scheduler.
Use schedEnableUser to control this.
disable is protected by sched.lock.
freem is the list of m's waiting to be freed when their
m.exited is set. Linked through m.freelink.
Global cache of dead G's.
// gc is waiting to run
accessed atomically. keep at top to ensure alignment on 32-bit systems.
// time of last network poll, 0 if currently polling
lock mutex
// maximum number of m's allowed (or die)
// idle m's waiting for work
// number of m's that have been created and next M ID
// number of system goroutines; updated atomically
// cumulative number of freed m's
// number of idle m's waiting for work
// number of locked m's waiting for work
// See "Worker thread parking/unparking" comment in proc.go.
// number of system m's not counted for deadlock
npidle uint32
// idle p's
// time to which current poll is sleeping
// nanotime() of last change to gomaxprocs
// cpu profiling rate
Global runnable queue.
runqsize int32
safepointFn should be called on each P at the next GC
safepoint if p.runSafePointFn is set.
safePointNote note
safePointWait int32
stopnote note
stopwait int32
sudogcache *sudog
Central cache of sudog structs.
While true, sysmon is not ready for mFixup calls.
Accessed atomically.
sysmonlock protects sysmon's actions on the runtime.
Acquire and hold this mutex to block sysmon from interacting
with the rest of the runtime.
sysmonnote note
sysmonwait uint32
// ∫gomaxprocs dt up to procresizetime
var sched
These values must match ../reflect/value.go:/SelectDir.
const selectDefault
const selectRecv
const selectSend
func semacquire1(addr *uint32, lifo bool, profile semaProfileFlags, skipframes int)
const semaBlockProfile
const semaMutexProfile
A semaRoot holds a balanced tree of sudog with distinct addresses (s.elem).
Each of those sudog may in turn point (through s.waitlink) to a list
of other sudogs waiting on the same address.
The operations on the inner lists of sudogs with the same address
are all O(1). The scanning of the top-level semaRoot list is O(log n),
where n is the number of distinct addresses with goroutines blocked
on them that hash to the given semaRoot.
See golang.org/issue/17953 for a program that worked badly
before we introduced the second level of list, and test/locklinear.go
for a test that exercises this.
lock mutex
// Number of waiters. Read w/o the lock.
// root of balanced tree of unique waiters.
dequeue searches for and finds the first goroutine
in semaRoot blocked on addr.
If the sudog was being profiled, dequeue returns the time
at which it was woken up as now. Otherwise now is 0.
queue adds s to the blocked goroutines in semaRoot.
rotateLeft rotates the tree rooted at node x.
turning (x a (y b c)) into (y (x a b) c).
rotateRight rotates the tree rooted at node y.
turning (y (x a b) c) into (x a (y b c)).
func semroot(addr *uint32) *semaRoot
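A much-simplified illustration of the two-level structure described above, using a map keyed by address instead of the balanced tree (the waiter and waitTable names are hypothetical; the point is only that operations on the per-address list are O(1)):
	package main

	import "fmt"

	// waiter is a hypothetical stand-in for a sudog blocked on an address.
	type waiter struct {
		id   int
		next *waiter // other waiters blocked on the same address
	}

	// waitTable groups waiters by the address they block on, so queueing
	// another waiter for an address that already has waiters is O(1).
	// (The runtime uses a balanced tree keyed by address, not a map.)
	type waitTable struct {
		byAddr map[uintptr]*waiter
	}

	func (t *waitTable) queue(addr uintptr, w *waiter) {
		w.next = t.byAddr[addr] // O(1): push onto the per-address list
		t.byAddr[addr] = w
	}

	func (t *waitTable) dequeue(addr uintptr) *waiter {
		w := t.byAddr[addr]
		if w == nil {
			return nil
		}
		if w.next != nil {
			t.byAddr[addr] = w.next
		} else {
			delete(t.byAddr, addr)
		}
		return w
	}

	func main() {
		t := &waitTable{byAddr: map[uintptr]*waiter{}}
		t.queue(0x100, &waiter{id: 1})
		t.queue(0x100, &waiter{id: 2})
		t.queue(0x200, &waiter{id: 3})
		fmt.Println(t.dequeue(0x100).id, t.dequeue(0x100).id, t.dequeue(0x200).id) // 2 1 3
	}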
ctxt unsafe.Pointer
info *siginfo
(*T) cs() uint64
(*T) fault() uintptr
(*T) fixsigcode(sig uint32)
(*T) fs() uint64
(*T) gs() uint64
preparePanic sets up the stack to look like a call to sigpanic.
(*T) pushCall(targetPC, resumePC uintptr)
(*T) r10() uint64
(*T) r11() uint64
(*T) r12() uint64
(*T) r13() uint64
(*T) r14() uint64
(*T) r15() uint64
(*T) r8() uint64
(*T) r9() uint64
(*T) rax() uint64
(*T) rbp() uint64
(*T) rbx() uint64
(*T) rcx() uint64
(*T) rdi() uint64
(*T) rdx() uint64
(*T) regs() *regs64
(*T) rflags() uint64
(*T) rip() uint64
(*T) rsi() uint64
(*T) rsp() uint64
(*T) set_rip(x uint64)
(*T) set_rsp(x uint64)
(*T) set_sigaddr(x uint64)
(*T) set_sigcode(x uint64)
(*T) sigaddr() uint64
(*T) sigcode() uint64
(*T) siglr() uintptr
(*T) sigpc() uintptr
(*T) sigsp() uintptr
func badsignal(sig uintptr, c *sigctxt)
func doSigPreempt(gp *g, ctxt *sigctxt)
func dumpregs(c *sigctxt)
func raisebadsignal(sig uint32, c *sigctxt)
func sigFetchG(c *sigctxt) *g
__pad [7]uint64
si_addr uint64
si_band int64
si_code int32
si_errno int32
si_pid int32
si_signo int32
si_status int32
si_uid uint32
si_value [8]byte
func sigfwd(fn uintptr, sig uint32, info *siginfo, ctx unsafe.Pointer)
func sigfwdgo(sig uint32, info *siginfo, ctx unsafe.Pointer) bool
func sighandler(sig uint32, info *siginfo, ctxt unsafe.Pointer, gp *g)
func sigtrampgo(sig uint32, info *siginfo, ctx unsafe.Pointer)
func msigrestore(sigmask sigset)
func sigaddset(mask *sigset, i int)
func sigdelset(mask *sigset, i int)
func sigprocmask(how uint32, new *sigset, old *sigset)
func sigprocmask(how uint32, new *sigset, old *sigset)
func sigsave(p *sigset)
var initSigmask
var sigset_all
var sigsetAllExiting
sigTabT is the type of an entry in the global sigtable array.
sigtable is inherently system dependent, and appears in OS-specific files,
but sigTabT is the same for all Unixy systems.
The sigtable array is indexed by a system signal number to get the flags
and printable name of each signal.
flags int32
name string
array unsafe.Pointer
cap int
len int
func growslice(et *_type, old slice, cap int) slice
func growslice(et *_type, old slice, cap int) slice
func reflect_typedslicecopy(elemType *_type, dst, src slice) int
The specialized convTx routines need a type descriptor to use when calling mallocgc.
We don't need the type to be exact, just to have the correct size, alignment, and pointer-ness.
However, when debugging, it'd be nice to have some indication in mallocgc where the types came from,
so we use named types here.
We then construct interface values of these types,
and then extract the type word to use as needed.
spanAllocType represents the type of allocation to make, or
the type of allocation to be freed.
manual returns true if the span allocation is manually managed.
const spanAllocHeap
const spanAllocPtrScalarBits
const spanAllocStack
const spanAllocWorkBuf
A spanClass represents the size class and noscan-ness of a span.
Each size class has a noscan spanClass and a scan spanClass. The
noscan spanClass contains only noscan objects, which do not contain
pointers and thus do not need to be scanned by the garbage
collector.
( T) noscan() bool
( T) sizeclass() int8
func makeSpanClass(sizeclass uint8, noscan bool) spanClass
const tinySpanClass
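A sketch of how a size class and a noscan bit can be packed into a single byte, in the spirit of the description above (the exact encoding used by the runtime may differ):
	package main

	import "fmt"

	// spanClass packs a size class and a noscan bit into one byte:
	// the size class in the upper bits and the noscan flag in bit 0.
	type spanClass uint8

	func makeSpanClass(sizeclass uint8, noscan bool) spanClass {
		sc := spanClass(sizeclass) << 1
		if noscan {
			sc |= 1
		}
		return sc
	}

	func (sc spanClass) sizeclass() int8 { return int8(sc >> 1) }
	func (sc spanClass) noscan() bool    { return sc&1 != 0 }

	func main() {
		sc := makeSpanClass(5, true)
		fmt.Println(sc.sizeclass(), sc.noscan()) // 5 true
	}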
A spanSet is a set of *mspans.
spanSet is safe for concurrent push and pop operations.
index is the head and tail of the spanSet in a single field.
The head and the tail both represent an index into the logical
concatenation of all blocks, with the head always behind or
equal to the tail (indicating an empty set). This field is
always accessed atomically.
The head and the tail are only 32 bits wide, which means we
can only support up to 2^32 pushes before a reset. If every
span in the heap were stored in this set, and each span were
the minimum size (1 runtime page, 8 KiB), then roughly the
smallest heap which would be unrepresentable is 32 TiB in size.
// *[N]*spanSetBlock, accessed atomically
// Spine array cap, accessed under lock
// Spine array length, accessed atomically
spineLock mutex
pop removes and returns a span from buffer b, or nil if b is empty.
pop is safe to call concurrently with other pop and push operations.
push adds span s to buffer b. push is safe to call concurrently
with other push and pop operations.
reset resets a spanSet which is empty. It will also clean up
any left over blocks.
Throws if the buf is not empty.
reset may not be called concurrently with any other operations
on the span set.
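The head/tail packing described above can be illustrated with a standalone type that keeps both 32-bit indices in one atomically-accessed 64-bit word (a sketch; the bit layout and wraparound handling are illustrative only):
	package main

	import (
		"fmt"
		"sync/atomic"
	)

	// headTail packs a 32-bit head (high bits) and a 32-bit tail (low bits)
	// into one uint64 so both can be read in a single atomic load.
	type headTail struct{ u uint64 }

	func (ht *headTail) load() (head, tail uint32) {
		v := atomic.LoadUint64(&ht.u)
		return uint32(v >> 32), uint32(v)
	}

	// incrTail atomically advances the tail (a push). It ignores 32-bit
	// wraparound into the head bits for brevity.
	func (ht *headTail) incrTail() (head, tail uint32) {
		v := atomic.AddUint64(&ht.u, 1)
		return uint32(v >> 32), uint32(v)
	}

	// incrHead atomically advances the head (a pop).
	func (ht *headTail) incrHead() (head, tail uint32) {
		v := atomic.AddUint64(&ht.u, 1<<32)
		return uint32(v >> 32), uint32(v)
	}

	func main() {
		var ht headTail
		ht.incrTail()
		ht.incrTail()
		ht.incrHead()
		h, t := ht.load()
		fmt.Println(h, t, t-h) // 1 2 1: one element logically remains
	}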
Free spanSetBlocks are managed via a lock-free stack.
lfnode.next uint64
lfnode.pushcnt uintptr
popped is the number of pop operations that have occurred on
this block. This number is used to help determine when a block
may be safely recycled.
spans is the set of spans in this block.
spanSetBlockAlloc represents a concurrent pool of spanSetBlocks.
stack lfstack
alloc tries to grab a spanSetBlock out of the pool, and if it fails
persistentallocs a new one and returns it.
free returns a spanSetBlock back to the pool.
var spanSetBlockPool
// kind of special
// linked list in span
// span offset of object
func removespecial(p unsafe.Pointer, kind uint8) *special
func addspecial(p unsafe.Pointer, s *special) bool
func freespecial(s *special, p unsafe.Pointer, size uintptr)
The described object has a finalizer set for it.
specialfinalizer is allocated from non-GC'd memory, so any heap
pointers must be specially handled.
// May be a heap pointer, but always live.
// May be a heap pointer.
nret uintptr
// May be a heap pointer, but always live.
special special
Stack describes a Go execution stack.
The bounds of the stack are exactly [lo, hi),
with no implicit data structures on either side.
hi uintptr
lo uintptr
func stackalloc(n uint32) stack
func fillstack(stk stack, b byte)
func findsghi(gp *g, stk stack) uintptr
func signalstack(s *stack)
func stackfree(stk stack)
func tracebackHexdump(stk stack, frame *stkframe, bad uintptr)
// linked list of free stacks
// total size of stacks in list
// bitmaps, each starting on a byte boundary
// number of bitmaps
// number of bits in each bitmap
func stackmapdata(stkmap *stackmap, n int32) bitvector
A stackObject represents a variable on the stack that has had
its address taken.
// objects with lower addresses
// offset above stack.lo
// objects with higher addresses
// size of object
// type info (for ptr/nonptr bits). nil if object has been scanned.
obj.typ = typ, but with no write barrier.
func binarySearchTree(x *stackObjectBuf, idx int, n int) (root *stackObject, restBuf *stackObjectBuf, restIdx int)
Buffer for stack objects found on a goroutine stack.
Must be smaller than or equal to workbuf.
obj [63]stackObject
stackObjectBufHdr stackObjectBufHdr
stackObjectBufHdr.next *stackObjectBuf
stackObjectBufHdr.workbufhdr workbufhdr
stackObjectBufHdr.workbufhdr.nobj int
// must be first
func binarySearchTree(x *stackObjectBuf, idx int, n int) (root *stackObject, restBuf *stackObjectBuf, restIdx int)
func binarySearchTree(x *stackObjectBuf, idx int, n int) (root *stackObject, restBuf *stackObjectBuf, restIdx int)
next *stackObjectBuf
workbufhdr workbufhdr
workbufhdr.nobj int
// must be first
A stackObjectRecord is generated by the compiler for each stack object in a stack frame.
This record must match the generator code in cmd/compile/internal/gc/ssa.go:emitStackObjects.
offset in frame
if negative, offset from varp
if non-negative, offset from argp
typ *_type
func getStackMap(frame *stkframe, cache *pcvalueCache, debug bool) (locals, args bitvector, objs []stackObjectRecord)
A stackScanState keeps track of the state used during the GC walk
of a goroutine.
buf contains the set of possible pointers to stack objects.
Organized as a LIFO linked list of buffers.
All buffers except possibly the head buffer are full.
cache pcvalueCache
cbuf contains conservative pointers to stack objects. If
all pointers to a stack object are obtained via
conservative scanning, then the stack object may be dead
and may contain dead pointers, so it must be scanned
defensively.
conservative indicates that the next frame must be scanned conservatively.
This applies only to the innermost frame at an async safe-point.
// keep around one free buffer for allocation hysteresis
list of stack objects
Objects are in increasing address order.
nobjs int
root of binary tree for fast object lookup by address
Initialized by buildIndex.
stack limits
tail *stackObjectBuf
addObject adds a stack object at addr of type typ to the set of stack objects.
buildIndex initializes s.root to a binary search tree.
It should be called after all addObject calls but before
any call of findObject.
findObject returns the stack object containing address a, if any.
Must have called buildIndex previously.
Remove and return a potential pointer to a stack object.
Returns 0 if there are no more pointers available.
This prefers non-conservative pointers so we scan stack objects
precisely if there are any non-conservative pointers to them.
Add p as a potential pointer to a stack object.
p must be a stack address.
func scanblock(b0, n0 uintptr, ptrmask *uint8, gcw *gcWork, stk *stackScanState)
func scanConservative(b, n uintptr, ptrmask *uint8, gcw *gcWork, state *stackScanState)
func scanframeworker(frame *stkframe, state *stackScanState, gcw *gcWork)
pad_cgo_0 [4]byte
ss_flags int32
ss_size uintptr
ss_sp *byte
func setGsignalStack(st *stackt, old *gsignalStack)
func setSignalstackSP(s *stackt, sp uintptr)
func sigaltstack(new *stackt, old *stackt)
func sigaltstack(new *stackt, old *stackt)
Buffer for pointers found during stack tracing.
Must be smaller than or equal to workbuf.
obj [252]uintptr
stackWorkBufHdr stackWorkBufHdr
// linked list of workbufs
stackWorkBufHdr.workbufhdr workbufhdr
stackWorkBufHdr.workbufhdr.nobj int
// must be first
Header declaration must come after the buf declaration above, because of issue #14620.
// linked list of workbufs
workbufhdr workbufhdr
workbufhdr.nobj int
// must be first
statAggregate is the main driver of the metrics implementation.
It contains multiple aggregates of runtime statistics, as well
as a set of these aggregates that it has populated. The aggregates
are populated lazily by its ensure method.
ensured statDepSet
heapStats heapStatsAggregate
sysStats sysStatsAggregate
ensure populates statistics aggregates determined by deps if they
haven't yet been populated.
var agg
statDep is a dependency on a group of statistics
that a metric might have.
func makeStatDepSet(deps ...statDep) statDepSet
const heapStatsDep
const numStatsDeps
const sysStatsDep
statDepSet represents a set of statDeps.
Under the hood, it's a bitmap.
difference returns the set difference of s from b as a new set.
empty returns true if there are no dependencies in the set.
has returns true if the set contains a given statDep.
union returns the union of the two sets as a new set.
func makeStatDepSet(deps ...statDep) statDepSet
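Since statDepSet is just a bitmap, its operations reduce to bitwise arithmetic. A self-contained sketch of a bitmap-backed dependency set (names are illustrative, not the runtime's):
	package main

	import "fmt"

	// depSet is a bitmap of statistic dependencies; bit i set means dependency i is present.
	type depSet uint64

	func makeDepSet(deps ...uint) depSet {
		var s depSet
		for _, d := range deps {
			s |= 1 << d
		}
		return s
	}

	func (s depSet) has(d uint) bool            { return s&(1<<d) != 0 }
	func (s depSet) union(b depSet) depSet      { return s | b }
	func (s depSet) difference(b depSet) depSet { return s &^ b }
	func (s depSet) empty() bool                { return s == 0 }

	func main() {
		const (
			heapStatsDep = iota
			sysStatsDep
		)
		a := makeDepSet(heapStatsDep)
		b := makeDepSet(heapStatsDep, sysStatsDep)
		fmt.Println(a.has(sysStatsDep), a.union(b) == b, b.difference(a).has(sysStatsDep)) // false true true
	}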
stack traces
// number of bytes at argp
// force use of this argmap
// pointer to function arguments
// program counter where execution can continue, or 0 if not
// function being run
// stack pointer at caller aka frame pointer
// program counter at caller aka link register
// program counter within fn
// stack pointer at pc
// top of local variables
func adjustframe(frame *stkframe, arg unsafe.Pointer) bool
func dumpframe(s *stkframe, arg unsafe.Pointer) bool
func getArgInfo(frame *stkframe, f funcInfo, needArgMap bool, ctxt *funcval) (arglen uintptr, argmap *bitvector)
func getgcmaskcb(frame *stkframe, ctxt unsafe.Pointer) bool
func getStackMap(frame *stkframe, cache *pcvalueCache, debug bool) (locals, args bitvector, objs []stackObjectRecord)
func scanframeworker(frame *stkframe, state *stackScanState, gcw *gcWork)
func tracebackHexdump(stk stack, frame *stkframe, bad uintptr)
( T) String() string
*bytes.Buffer
crypto.Hash
crypto/tls.ClientAuthType
crypto/tls.CurveID
crypto/tls.SignatureScheme
crypto/x509.PublicKeyAlgorithm
crypto/x509.SignatureAlgorithm
crypto/x509/pkix.Name
crypto/x509/pkix.RDNSequence
encoding/asn1.ObjectIdentifier
encoding/binary.ByteOrder (interface)
encoding/json.Delim
encoding/json.Number
fmt.Stringer (interface)
github.com/neo4j/neo4j-go-driver/v4/neo4j/dbtype.Duration
github.com/neo4j/neo4j-go-driver/v4/neo4j/dbtype.Point2D
github.com/neo4j/neo4j-go-driver/v4/neo4j/dbtype.Point3D
*github.com/neo4j/neo4j-go-driver/v4/neo4j/internal/packstream.Unpacker
internal/reflectlite.Kind
internal/reflectlite.Type (interface)
io/fs.FileMode
math/big.Accuracy
*math/big.Float
*math/big.Int
*math/big.Rat
math/big.RoundingMode
net.Addr (interface)
net.Flags
net.HardwareAddr
net.IP
*net.IPAddr
net.IPMask
*net.IPNet
*net.TCPAddr
*net.UDPAddr
*net.UnixAddr
*net/url.URL
*net/url.Userinfo
*os.ProcessState
os.Signal (interface)
reflect.ChanDir
reflect.Kind
reflect.Type (interface)
reflect.Value
*regexp.Regexp
regexp/syntax.ErrorCode
*regexp/syntax.Inst
regexp/syntax.InstOp
regexp/syntax.Op
*regexp/syntax.Prog
*regexp/syntax.Regexp
*strings.Builder
syscall.Signal
time.Duration
*time.Location
time.Month
time.Time
time.Weekday
vendor/golang.org/x/net/dns/dnsmessage.Class
vendor/golang.org/x/net/dns/dnsmessage.Name
vendor/golang.org/x/net/dns/dnsmessage.RCode
vendor/golang.org/x/net/dns/dnsmessage.Type
lockRank
waitReason
*context.cancelCtx
*context.emptyCtx
context.stringer (interface)
*context.timerCtx
*context.valueCtx
crypto/tls.alert
encoding/binary.bigEndian
encoding/binary.littleEndian
*encoding/json.encodeState
*github.com/neo4j/neo4j-go-driver/v4/neo4j.profile
github.com/neo4j/neo4j-go-driver/v4/neo4j/internal/bolt.loggableDictionary
github.com/neo4j/neo4j-go-driver/v4/neo4j/internal/bolt.loggableFailure
github.com/neo4j/neo4j-go-driver/v4/neo4j/internal/bolt.loggableList
github.com/neo4j/neo4j-go-driver/v4/neo4j/internal/bolt.loggableStringDictionary
github.com/neo4j/neo4j-go-driver/v4/neo4j/internal/bolt.loggableStringList
github.com/neo4j/neo4j-go-driver/v4/neo4j/internal/bolt.loggableSuccess
*github.com/neo4j/neo4j-go-driver/v4/neo4j/internal/bolt.success
*internal/reflectlite.arrayType
*internal/reflectlite.chanType
*internal/reflectlite.funcType
*internal/reflectlite.interfaceType
*internal/reflectlite.mapType
*internal/reflectlite.ptrType
*internal/reflectlite.rtype
*internal/reflectlite.sliceType
*internal/reflectlite.structType
*internal/reflectlite.structTypeUncommon
*math/big.decimal
net.fileAddr
net.hostLookupOrder
net.pipeAddr
net.sockaddr (interface)
*reflect.arrayType
*reflect.chanType
*reflect.funcType
*reflect.funcTypeFixed128
*reflect.funcTypeFixed16
*reflect.funcTypeFixed32
*reflect.funcTypeFixed4
*reflect.funcTypeFixed64
*reflect.funcTypeFixed8
*reflect.interfaceType
*reflect.mapType
*reflect.ptrType
*reflect.rtype
*reflect.sliceType
*reflect.structType
*reflect.structTypeUncommon
*regexp.onePassInst
*strconv.decimal
T : fmt.Stringer
T : context.stringer
The specialized convTx routines need a type descriptor to use when calling mallocgc.
We don't need the type to be exact, just to have the correct size, alignment, and pointer-ness.
However, when debugging, it'd be nice to have some indication in mallocgc where the types came from,
so we use named types here.
We then construct interface values of these types,
and then extract the type word to use as needed.
sudog represents a g in a wait list, such as for sending/receiving
on a channel.
sudog is necessary because the g ↔ synchronization object relation
is many-to-many. A g can be on many wait lists, so there may be
many sudogs for one g; and many gs may be waiting on the same
synchronization object, so there may be many sudogs for one object.
sudogs are allocated from a special pool. Use acquireSudog and
releaseSudog to allocate and free them.
acquiretime int64
// channel
// data element (may point to stack)
g *g
isSelect indicates g is participating in a select, so
g.selectDone must be CAS'd to win the wake-up race.
next *sudog
// semaRoot binary tree
prev *sudog
releasetime int64
success indicates whether communication over channel c
succeeded. It is true if the goroutine was awoken because a
value was delivered over channel c, and false if awoken
because c was closed.
ticket uint32
// g.waiting list or semaRoot
// semaRoot
func acquireSudog() *sudog
func racenotify(c *hchan, idx uint, sg *sudog)
func racesync(c *hchan, sg *sudog)
func readyWithTime(s *sudog, traceskip int)
func recv(c *hchan, sg *sudog, ep unsafe.Pointer, unlockf func(), skip int)
func recvDirect(t *_type, sg *sudog, dst unsafe.Pointer)
func releaseSudog(s *sudog)
func send(c *hchan, sg *sudog, ep unsafe.Pointer, unlockf func(), skip int)
func sendDirect(t *_type, sg *sudog, src unsafe.Pointer)
dead indicates the goroutine was not suspended because it
is dead. This goroutine could be reused after the dead
state was observed, so the caller must not assume that it
remains dead.
g *g
stopped indicates that this suspendG transitioned the G to
_Gwaiting via g.preemptStop and thus is responsible for
readying it when done.
func suspendG(gp *g) suspendGState
func resumeG(state suspendGState)
sweepClass is a spanClass and one bit to represent whether we're currently
sweeping partial or full spans.
(*T) clear()
(*T) load() sweepClass
split returns the underlying span class as well as
whether we're interested in the full or partial
unswept lists for that class, indicated as a boolean
(true means "full").
(*T) update(sNew sweepClass)
const sweepClassDone
State of background sweep.
centralIndex is the current unswept span class.
It represents an index into the mcentral span
sets. Accessed and updated via its load and
update methods. Not protected by a lock.
Reset at mark termination.
Used by mheap.nextSpanForSweep.
g *g
lock mutex
nbgsweep uint32
npausesweep uint32
parked bool
started bool
var sweep
sysMemStat represents a global system statistic that is managed atomically.
This type must structurally be a uint64 so that mstats aligns with MemStats.
add atomically adds n to the sysMemStat.
Must be nosplit as it is called in runtime initialization, e.g. newosproc0.
load atomically reads the value of the stat.
Must be nosplit as it is called in runtime initialization, e.g. newosproc0.
func persistentalloc(size, align uintptr, sysStat *sysMemStat) unsafe.Pointer
func persistentalloc1(size, align uintptr, sysStat *sysMemStat) *notInHeap
func sysAlloc(n uintptr, sysStat *sysMemStat) unsafe.Pointer
func sysFree(v unsafe.Pointer, n uintptr, sysStat *sysMemStat)
func sysMap(v unsafe.Pointer, n uintptr, sysStat *sysMemStat)
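A standalone sketch of an atomically-managed 64-bit stat with load and add, mirroring the description of sysMemStat (the memStat name is hypothetical; negative deltas rely on two's-complement addition):
	package main

	import (
		"fmt"
		"sync/atomic"
	)

	// memStat is a 64-bit statistic updated and read atomically.
	type memStat uint64

	// load atomically reads the current value of the stat.
	func (s *memStat) load() uint64 {
		return atomic.LoadUint64((*uint64)(s))
	}

	// add atomically adds n (which may be negative) to the stat. Adding the
	// two's-complement bit pattern of a negative int64 acts as subtraction
	// on the uint64 value.
	func (s *memStat) add(n int64) {
		atomic.AddUint64((*uint64)(s), uint64(n))
	}

	func main() {
		var inUse memStat
		inUse.add(4096)
		inUse.add(-1024)
		fmt.Println(inUse.load()) // 3072
	}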
sysStatsAggregate represents system memory stats obtained
from the runtime. This set of stats is grouped together because
they're all relatively cheap to acquire and generally independent
of one another and other runtime memory stats. The fact that they
may be acquired at different times, especially with respect to
heapStatsAggregate, means there could be some skew, but because
these stats are independent, there's no real consistency issue here.
buckHashSys uint64
gcCyclesDone uint64
gcCyclesForced uint64
gcMiscSys uint64
heapGoal uint64
mCacheInUse uint64
mCacheSys uint64
mSpanInUse uint64
mSpanSys uint64
otherSys uint64
stacksSys uint64
compute populates the sysStatsAggregate with values from the runtime.
// relocated section address
// section length
// prelinked section vaddr
tflag is documented in reflect/type.go.
tflag values must be kept in sync with copies in:
cmd/compile/internal/gc/reflect.go
cmd/link/internal/ld/decodesym.go
reflect/type.go
internal/reflectlite/type.go
const tflagExtraStar
const tflagNamed
const tflagRegularMemory
const tflagUncommon
timeHistogram represents a distribution of durations in
nanoseconds.
The accuracy and range of the histogram is defined by the
timeHistSubBucketBits and timeHistNumSuperBuckets constants.
It is an HDR histogram with exponentially-distributed
buckets and linearly distributed sub-buckets.
Counts in the histogram are updated atomically, so it is safe
for concurrent use. It is also safe to read all the values
atomically.
counts [720]uint64
underflow counts all the times we got a negative duration
sample. Because of how time works on some platforms, it's
possible to measure negative durations. We could ignore them,
but we record them anyway because it's better to have some
signal that it's happening than just missing samples.
record adds the given duration to the distribution.
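One common way to derive an exponential super-bucket and a linear sub-bucket from a duration, in the spirit of the description above (the constants and the exact mapping here are illustrative, not the runtime's):
	package main

	import (
		"fmt"
		"math/bits"
	)

	const (
		subBucketBits = 4 // 16 linear sub-buckets per super-bucket
		subBuckets    = 1 << subBucketBits
	)

	// bucketIndex maps a non-negative duration in nanoseconds to a
	// (super-bucket, sub-bucket) pair: the super-bucket grows with the
	// magnitude of d, and the sub-bucket is read linearly from the bits
	// just below the leading bit.
	func bucketIndex(d int64) (super, sub int) {
		if d < subBuckets {
			// Small values get one sub-bucket per nanosecond in super-bucket 0.
			return 0, int(d)
		}
		super = bits.Len64(uint64(d)) - subBucketBits
		sub = int(d>>uint(super)) & (subBuckets - 1)
		return super, sub
	}

	func main() {
		for _, d := range []int64{3, 17, 250, 4096, 1000000} {
			s, b := bucketIndex(d)
			fmt.Printf("%8d ns -> super %2d, sub %2d\n", d, s, b)
		}
	}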
Package time knows the layout of this structure.
If this struct changes, adjust ../time/sleep.go:/runtimeTimer.
arg interface{}
f func(interface{}, uintptr)
What to set the when field to in timerModifiedXX status.
period int64
If this timer is on a heap, which P's heap it is on.
puintptr rather than *p to match uintptr in the versions
of this struct defined in other packages.
seq uintptr
The status field holds one of the values below.
Timer wakes up at when, and then at when+period, ... (period > 0 only)
each time calling f(arg, now) in the timer goroutine, so f must be
a well-behaved function and not block.
when must be positive on an active timer.
func addAdjustedTimers(pp *p, moved []*timer)
func addtimer(t *timer)
func deltimer(t *timer) bool
func doaddtimer(pp *p, t *timer)
func modtimer(t *timer, when, period int64, f func(interface{}, uintptr), arg interface{}, seq uintptr) bool
func modTimer(t *timer, when, period int64, f func(interface{}, uintptr), arg interface{}, seq uintptr)
func moveTimers(pp *p, timers []*timer)
func resetTimer(t *timer, when int64) bool
func resettimer(t *timer, when int64) bool
func runOneTimer(pp *p, t *timer, now int64)
func siftdownTimer(t []*timer, i int)
func siftupTimer(t []*timer, i int)
func startTimer(t *timer)
func stopTimer(t *timer) bool
tv_nsec int64
tv_sec int64
(*T) setNsec(ns int64)
func kevent(kq int32, ch *keventt, nch int32, ev *keventt, nev int32, ts *timespec) int32
func pthread_cond_timedwait_relative_np(c *pthreadcond, m *pthreadmutex, t *timespec) int32
func concatstring2(buf *tmpBuf, a [2]string) string
func concatstring3(buf *tmpBuf, a [3]string) string
func concatstring4(buf *tmpBuf, a [4]string) string
func concatstring5(buf *tmpBuf, a [5]string) string
func concatstrings(buf *tmpBuf, a []string) string
func rawstringtmp(buf *tmpBuf, l int) (s string, b []byte)
func slicebytetostring(buf *tmpBuf, ptr *byte, n int) (str string)
func slicerunetostring(buf *tmpBuf, a []rune) string
func stringtoslicebyte(buf *tmpBuf, s string) []byte
traceAlloc is a non-thread-safe region allocator.
It holds a linked list of traceAllocBlock.
head traceAllocBlockPtr
off uintptr
alloc allocates n-byte block.
drop frees all previously allocated memory and resets the allocator.
traceAllocBlock is a block in traceAlloc.
traceAllocBlock is allocated from non-GC'd memory, so it must not
contain heap pointers. Writes to pointers to traceAllocBlocks do
not need write barriers.
data [65528]byte
next traceAllocBlockPtr
TODO: Since traceAllocBlock is now go:notinheap, this isn't necessary.
( T) ptr() *traceAllocBlock
(*T) set(x *traceAllocBlock)
traceBuf is per-P tracing buffer.
// underlying buffer for traceBufHeader.buf
traceBufHeader traceBufHeader
// when we wrote the last event
// in trace.empty/full
// next write offset in arr
// scratch buffer for traceback
byte appends v to buf.
varint appends v to buf in little-endian-base-128 encoding.
func traceBufPtrOf(b *traceBuf) traceBufPtr
traceBufHeader is per-P tracing buffer.
// when we wrote the last event
// in trace.empty/full
// next write offset in arr
// scratch buffer for traceback
traceBufPtr is a *traceBuf that is not traced by the garbage
collector and doesn't have write barriers. traceBufs are not
allocated from the GC'd heap, so this is safe, and are often
manipulated in contexts where write barriers are not allowed, so
this is necessary.
TODO: Since traceBuf is now go:notinheap, this isn't necessary.
( T) ptr() *traceBuf
(*T) set(b *traceBuf)
func traceAcquireBuffer() (mp *m, pid int32, bufp *traceBufPtr)
func traceBufPtrOf(b *traceBuf) traceBufPtr
func traceFlush(buf traceBufPtr, pid int32) traceBufPtr
func traceFrameForPC(buf traceBufPtr, pid int32, f Frame) (traceFrame, traceBufPtr)
func traceFullDequeue() traceBufPtr
func traceString(bufp *traceBufPtr, pid int32, s string) (uint64, *traceBufPtr)
func traceEventLocked(extraBytes int, mp *m, pid int32, bufp *traceBufPtr, ev byte, skip int, args ...uint64)
func traceFlush(buf traceBufPtr, pid int32) traceBufPtr
func traceFrameForPC(buf traceBufPtr, pid int32, f Frame) (traceFrame, traceBufPtr)
func traceFullQueue(buf traceBufPtr)
func traceString(bufp *traceBufPtr, pid int32, s string) (uint64, *traceBufPtr)
fileID uint64
funcID uint64
line uint64
func traceFrameForPC(buf traceBufPtr, pid int32, f Frame) (traceFrame, traceBufPtr)
traceStack is a single stack in traceStackTable.
hash uintptr
id uint32
link traceStackPtr
n int
// real type [n]uintptr
stack returns slice of PCs.
( T) ptr() *traceStack
traceStackTable maps stack traces (arrays of PC's) to unique uint32 ids.
It is lock-free for reading.
lock mutex
mem traceAlloc
seq uint32
tab [8192]traceStackPtr
dump writes all previously cached stacks to trace buffers,
releases all memory and resets state.
find checks if the stack trace pcs is already present in the table.
newStack allocates a new stack of size n.
put returns a unique id for the stack trace pcs and caches it in the table,
if it sees the trace for the first time.
// init tracing activation status
// heap allocations
// heap allocated bytes
// init go routine id
var inittrace
func resolveTypeOff(ptrInModule unsafe.Pointer, off typeOff) *_type
uc_link *ucontext
uc_mcontext *mcontext64
uc_mcsize uint64
uc_onstack int32
uc_sigmask uint32
uc_stack stackt
The specialized convTx routines need a type descriptor to use when calling mallocgc.
We don't need the type to be exact, just to have the correct size, alignment, and pointer-ness.
However, when debugging, it'd be nice to have some indication in mallocgc where the types came from,
so we use named types here.
We then construct interface values of these types,
and then extract the type word to use as needed.
// number of methods
// offset from this uncommontype to [mcount]method
pkgpath nameOff
// number of exported methods
__sigaction_u [8]byte
sa_flags int32
sa_mask uint32
func sigaction(sig uint32, new *usigactiont, old *usigactiont)
func sigaction(sig uint32, new *usigactiont, old *usigactiont)
first *sudog
last *sudog
(*T) dequeue() *sudog
(*T) dequeueSudoG(sgp *sudog)
(*T) enqueue(sgp *sudog)
A waitReason explains why a goroutine has been stopped.
See gopark. Do not re-use waitReasons, add new ones.
( T) String() string
T : fmt.Stringer
T : stringer
T : context.stringer
func gopark(unlockf func(*g, unsafe.Pointer) bool, lock unsafe.Pointer, reason waitReason, traceEv byte, traceskip int)
func goparkunlock(lock *mutex, reason waitReason, traceEv byte, traceskip int)
const waitReasonChanReceive
const waitReasonChanReceiveNilChan
const waitReasonChanSend
const waitReasonChanSendNilChan
const waitReasonDebugCall
const waitReasonDumpingHeap
const waitReasonFinalizerWait
const waitReasonForceGCIdle
const waitReasonGarbageCollection
const waitReasonGarbageCollectionScan
const waitReasonGCAssistMarking
const waitReasonGCAssistWait
const waitReasonGCScavengeWait
const waitReasonGCSweepWait
const waitReasonGCWorkerIdle
const waitReasonIOWait
const waitReasonPanicWait
const waitReasonPreempted
const waitReasonSelect
const waitReasonSelectNoCases
const waitReasonSemacquire
const waitReasonSleep
const waitReasonSyncCondWait
const waitReasonTimerGoroutineIdle
const waitReasonTraceReaderBlocked
const waitReasonWaitForGCCycle
const waitReasonZero
wbBuf is a per-P buffer of pointers queued by the write barrier.
This buffer is flushed to the GC workbufs when it fills up and on
various GC transitions.
This is closely related to a "sequential store buffer" (SSB),
except that SSBs are usually used for maintaining remembered sets,
while this is used for marking.
buf stores a series of pointers to execute write barriers
on. This must be a multiple of wbBufEntryPointers because
the write barrier only checks for overflow once per entry.
end points to just past the end of buf. It must not be a
pointer type because it points past the end of buf and must
be updated without write barriers.
next points to the next slot in buf. It must not be a
pointer type because it can point past the end of buf and
must be updated without write barriers.
This is a pointer rather than an index to optimize the
write barrier assembly.
discard resets b's next pointer, but not its end pointer.
This must be nosplit because it's called by wbBufFlush.
empty reports whether b contains no pointers.
putFast adds old and new to the write barrier buffer and returns
false if a flush is necessary. Callers should use this as:
	buf := &getg().m.p.ptr().wbBuf
	if !buf.putFast(old, new) {
		wbBufFlush(...)
	}
	... actual memory write ...
The arguments to wbBufFlush depend on whether the caller is doing
its own cgo pointer checks. If it is, then this can be
wbBufFlush(nil, 0). Otherwise, it must pass the slot address and
new.
The caller must ensure there are no preemption points during the
above sequence. There must be no preemption points while buf is in
use because it is a per-P resource. There must be no preemption
points between the buffer put and the write to memory because this
could allow a GC phase change, which could result in missed write
barriers.
putFast must be nowritebarrierrec because write barriers here would
corrupt the write barrier buffer. It (and everything it calls, if
it called anything) has to be nosplit to avoid scheduling on to a
different P and a different buffer.
reset empties b by resetting its next and end pointers.
account for the above fields
workbufhdr workbufhdr
workbufhdr.nobj int
// must be first
(*T) checkempty()
(*T) checknonempty()
func getempty() *workbuf
func handoff(b *workbuf) *workbuf
func trygetfull() *workbuf
func handoff(b *workbuf) *workbuf
func putempty(b *workbuf)
func putfull(b *workbuf)
Package-Level Functions (total 1458, in which 33 are exported)
BlockProfile returns n, the number of records in the current blocking profile.
If len(p) >= n, BlockProfile copies the profile into p and returns n, true.
If len(p) < n, BlockProfile does not change p and returns n, false.
Most clients should use the runtime/pprof package or
the testing package's -test.blockprofile flag instead
of calling BlockProfile directly.
Breakpoint executes a breakpoint trap.
Caller reports file and line number information about function invocations on
the calling goroutine's stack. The argument skip is the number of stack frames
to ascend, with 0 identifying the caller of Caller. (For historical reasons the
meaning of skip differs between Caller and Callers.) The return values report the
program counter, file name, and line number within the file of the corresponding
call. The boolean ok is false if it was not possible to recover the information.
Callers fills the slice pc with the return program counters of function invocations
on the calling goroutine's stack. The argument skip is the number of stack frames
to skip before recording in pc, with 0 identifying the frame for Callers itself and
1 identifying the caller of Callers.
It returns the number of entries written to pc.
To translate these PCs into symbolic information such as function
names and line numbers, use CallersFrames. CallersFrames accounts
for inlined functions and adjusts the return program counters into
call program counters. Iterating over the returned slice of PCs
directly is discouraged, as is using FuncForPC on any of the
returned PCs, since these cannot account for inlining or return
program counter adjustment.
CallersFrames takes a slice of PC values returned by Callers and
prepares to return function/file/line information.
Do not change the slice until you are done with the Frames.
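A small, self-contained example of the recommended Callers + CallersFrames pattern, printing the caller chain of the function that captures the stack:
	package main

	import (
		"fmt"
		"runtime"
	)

	func captureStack() {
		pc := make([]uintptr, 16)
		// skip=2 skips runtime.Callers itself and captureStack,
		// so the first recorded frame is captureStack's caller.
		n := runtime.Callers(2, pc)
		frames := runtime.CallersFrames(pc[:n])
		for {
			frame, more := frames.Next()
			fmt.Printf("%s\n\t%s:%d\n", frame.Function, frame.File, frame.Line)
			if !more {
				break
			}
		}
	}

	func middle() { captureStack() }

	func main() { middle() }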
CPUProfile panics.
It formerly provided raw access to chunks of
a pprof-format profile generated by the runtime.
The details of generating that format have changed,
so this functionality has been removed.
Deprecated: Use the runtime/pprof package,
or the handlers in the net/http/pprof package,
or the testing package's -test.cpuprofile flag instead.
FuncForPC returns a *Func describing the function that contains the
given program counter address, or else nil.
If pc represents multiple functions because of inlining, it returns
the *Func describing the innermost function, but with an entry of
the outermost function.
GC runs a garbage collection and blocks the caller until the
garbage collection is complete. It may also block the entire
program.
Goexit terminates the goroutine that calls it. No other goroutine is affected.
Goexit runs all deferred calls before terminating the goroutine. Because Goexit
is not a panic, any recover calls in those deferred functions will return nil.
Calling Goexit from the main goroutine terminates that goroutine
without func main returning. Since func main has not returned,
the program continues execution of other goroutines.
If all other goroutines exit, the program crashes.
GOMAXPROCS sets the maximum number of CPUs that can be executing
simultaneously and returns the previous setting. It defaults to
the value of runtime.NumCPU. If n < 1, it does not change the current setting.
This call will go away when the scheduler improves.
GOROOT returns the root of the Go tree. It uses the
GOROOT environment variable, if set at process start,
or else the root used during the Go build.
GoroutineProfile returns n, the number of records in the active goroutine stack profile.
If len(p) >= n, GoroutineProfile copies the profile into p and returns n, true.
If len(p) < n, GoroutineProfile does not change p and returns n, false.
Most clients should use the runtime/pprof package instead
of calling GoroutineProfile directly.
Gosched yields the processor, allowing other goroutines to run. It does not
suspend the current goroutine, so execution resumes automatically.
KeepAlive marks its argument as currently reachable.
This ensures that the object is not freed, and its finalizer is not run,
before the point in the program where KeepAlive is called.
A very simplified example showing where KeepAlive is required:
	type File struct { d int }
	d, err := syscall.Open("/file/path", syscall.O_RDONLY, 0)
	// ... do something if err != nil ...
	p := &File{d}
	runtime.SetFinalizer(p, func(p *File) { syscall.Close(p.d) })
	var buf [10]byte
	n, err := syscall.Read(p.d, buf[:])
	// Ensure p is not finalized until Read returns.
	runtime.KeepAlive(p)
	// No more uses of p after this point.
Without the KeepAlive call, the finalizer could run at the start of
syscall.Read, closing the file descriptor before syscall.Read makes
the actual system call.
LockOSThread wires the calling goroutine to its current operating system thread.
The calling goroutine will always execute in that thread,
and no other goroutine will execute in it,
until the calling goroutine has made as many calls to
UnlockOSThread as to LockOSThread.
If the calling goroutine exits without unlocking the thread,
the thread will be terminated.
All init functions are run on the startup thread. Calling LockOSThread
from an init function will cause the main function to be invoked on
that thread.
A goroutine should call LockOSThread before calling OS services or
non-Go library functions that depend on per-thread state.
MemProfile returns a profile of memory allocated and freed per allocation
site.
MemProfile returns n, the number of records in the current memory profile.
If len(p) >= n, MemProfile copies the profile into p and returns n, true.
If len(p) < n, MemProfile does not change p and returns n, false.
If inuseZero is true, the profile includes allocation records
where r.AllocBytes > 0 but r.AllocBytes == r.FreeBytes.
These are sites where memory was allocated, but it has all
been released back to the runtime.
The returned profile may be up to two garbage collection cycles old.
This is to avoid skewing the profile toward allocations; because
allocations happen in real time but frees are delayed until the garbage
collector performs sweeping, the profile only accounts for allocations
that have had a chance to be freed by the garbage collector.
Most clients should use the runtime/pprof package or
the testing package's -test.memprofile flag instead
of calling MemProfile directly.
MutexProfile returns n, the number of records in the current mutex profile.
If len(p) >= n, MutexProfile copies the profile into p and returns n, true.
Otherwise, MutexProfile does not change p, and returns n, false.
Most clients should use the runtime/pprof package
instead of calling MutexProfile directly.
NumCgoCall returns the number of cgo calls made by the current process.
NumCPU returns the number of logical CPUs usable by the current process.
The set of available CPUs is checked by querying the operating system
at process startup. Changes to operating system CPU allocation after
process startup are not reflected.
NumGoroutine returns the number of goroutines that currently exist.
ReadMemStats populates m with memory allocator statistics.
The returned memory allocator statistics are up to date as of the
call to ReadMemStats. This is in contrast with a heap profile,
which is a snapshot as of the most recently completed garbage
collection cycle.
ReadTrace returns the next chunk of binary tracing data, blocking until data
is available. If tracing is turned off and all the data accumulated while it
was on has been returned, ReadTrace returns nil. The caller must copy the
returned data before calling ReadTrace again.
ReadTrace must be called from one goroutine at a time.
SetBlockProfileRate controls the fraction of goroutine blocking events
that are reported in the blocking profile. The profiler aims to sample
an average of one blocking event per rate nanoseconds spent blocked.
To include every blocking event in the profile, pass rate = 1.
To turn off profiling entirely, pass rate <= 0.
SetCgoTraceback records three C functions to use to gather
traceback information from C code and to convert that traceback
information into symbolic information. These are used when printing
stack traces for a program that uses cgo.
The traceback and context functions may be called from a signal
handler, and must therefore use only async-signal safe functions.
The symbolizer function may be called while the program is
crashing, and so must be cautious about using memory. None of the
functions may call back into Go.
The context function will be called with a single argument, a
pointer to a struct:
	struct {
		Context uintptr
	}
In C syntax, this struct will be
	struct {
		uintptr_t Context;
	};
If the Context field is 0, the context function is being called to
record the current traceback context. It should record in the
Context field whatever information is needed about the current
point of execution to later produce a stack trace, probably the
stack pointer and PC. In this case the context function will be
called from C code.
If the Context field is not 0, then it is a value returned by a
previous call to the context function. This case is called when the
context is no longer needed; that is, when the Go code is returning
to its C code caller. This permits the context function to release
any associated resources.
While it would be correct for the context function to record a
complete stack trace whenever it is called, and simply copy that
out in the traceback function, in a typical program the context
function will be called many times without ever recording a
traceback for that context. Recording a complete stack trace in a
call to the context function is likely to be inefficient.
The traceback function will be called with a single argument, a
pointer to a struct:
	struct {
		Context    uintptr
		SigContext uintptr
		Buf        *uintptr
		Max        uintptr
	}
In C syntax, this struct will be
	struct {
		uintptr_t  Context;
		uintptr_t  SigContext;
		uintptr_t* Buf;
		uintptr_t  Max;
	};
The Context field will be zero to gather a traceback from the
current program execution point. In this case, the traceback
function will be called from C code.
Otherwise Context will be a value previously returned by a call to
the context function. The traceback function should gather a stack
trace from that saved point in the program execution. The traceback
function may be called from an execution thread other than the one
that recorded the context, but only when the context is known to be
valid and unchanging. The traceback function may also be called
deeper in the call stack on the same thread that recorded the
context. The traceback function may be called multiple times with
the same Context value; it will usually be appropriate to cache the
result, if possible, the first time this is called for a specific
context value.
If the traceback function is called from a signal handler on a Unix
system, SigContext will be the signal context argument passed to
the signal handler (a C ucontext_t* cast to uintptr_t). This may be
used to start tracing at the point where the signal occurred. If
the traceback function is not called from a signal handler,
SigContext will be zero.
Buf is where the traceback information should be stored. It should
be PC values, such that Buf[0] is the PC of the caller, Buf[1] is
the PC of that function's caller, and so on. Max is the maximum
number of entries to store. The function should store a zero to
indicate the top of the stack, or that the caller is on a different
stack, presumably a Go stack.
Unlike runtime.Callers, the PC values returned should, when passed
to the symbolizer function, return the file/line of the call
instruction. No additional subtraction is required or appropriate.
On all platforms, the traceback function is invoked when a call from
Go to C to Go requests a stack trace. On linux/amd64, linux/ppc64le,
and freebsd/amd64, the traceback function is also invoked when a
signal is received by a thread that is executing a cgo call. The
traceback function should not make assumptions about when it is
called, as future versions of Go may make additional calls.
The symbolizer function will be called with a single argument, a
pointer to a struct:
	struct {
		PC     uintptr // program counter to fetch information for
		File   *byte   // file name (NUL terminated)
		Lineno uintptr // line number
		Func   *byte   // function name (NUL terminated)
		Entry  uintptr // function entry point
		More   uintptr // set non-zero if more info for this PC
		Data   uintptr // unused by runtime, available for function
	}
In C syntax, this struct will be
	struct {
		uintptr_t PC;
		char*     File;
		uintptr_t Lineno;
		char*     Func;
		uintptr_t Entry;
		uintptr_t More;
		uintptr_t Data;
	};
The PC field will be a value returned by a call to the traceback
function.
The first time the function is called for a particular traceback,
all the fields except PC will be 0. The function should fill in the
other fields if possible, setting them to 0/nil if the information
is not available. The Data field may be used to store any useful
information across calls. The More field should be set to non-zero
if there is more information for this PC, zero otherwise. If More
is set non-zero, the function will be called again with the same
PC, and may return different information (this is intended for use
with inlined functions). If More is zero, the function will be
called with the next PC value in the traceback. When the traceback
is complete, the function will be called once more with PC set to
zero; this may be used to free any information. Each call will
leave the fields of the struct set to the same values they had upon
return, except for the PC field when the More field is zero. The
function must not keep a copy of the struct pointer between calls.
When calling SetCgoTraceback, the version argument is the version
number of the structs that the functions expect to receive.
Currently this must be zero.
The symbolizer function may be nil, in which case the results of
the traceback function will be displayed as numbers. If the
traceback function is nil, the symbolizer function will never be
called. The context function may be nil, in which case the
traceback function will only be called with the context field set
to zero. If the context function is nil, then calls from Go to C
to Go will not show a traceback for the C portion of the call stack.
SetCgoTraceback should be called only once, ideally from an init function.
SetCPUProfileRate sets the CPU profiling rate to hz samples per second.
If hz <= 0, SetCPUProfileRate turns off profiling.
If the profiler is on, the rate cannot be changed without first turning it off.
Most clients should use the runtime/pprof package or
the testing package's -test.cpuprofile flag instead of calling
SetCPUProfileRate directly.
SetFinalizer sets the finalizer associated with obj to the provided
finalizer function. When the garbage collector finds an unreachable block
with an associated finalizer, it clears the association and runs
finalizer(obj) in a separate goroutine. This makes obj reachable again,
but now without an associated finalizer. Assuming that SetFinalizer
is not called again, the next time the garbage collector sees
that obj is unreachable, it will free obj.
SetFinalizer(obj, nil) clears any finalizer associated with obj.
The argument obj must be a pointer to an object allocated by calling
new, by taking the address of a composite literal, or by taking the
address of a local variable.
The argument finalizer must be a function that takes a single argument
to which obj's type can be assigned, and can have arbitrary ignored return
values. If either of these is not true, SetFinalizer may abort the
program.
Finalizers are run in dependency order: if A points at B, both have
finalizers, and they are otherwise unreachable, only the finalizer
for A runs; once A is freed, the finalizer for B can run.
If a cyclic structure includes a block with a finalizer, that
cycle is not guaranteed to be garbage collected and the finalizer
is not guaranteed to run, because there is no ordering that
respects the dependencies.
The finalizer is scheduled to run at some arbitrary time after the
program can no longer reach the object to which obj points.
There is no guarantee that finalizers will run before a program exits,
so typically they are useful only for releasing non-memory resources
associated with an object during a long-running program.
For example, an os.File object could use a finalizer to close the
associated operating system file descriptor when a program discards
an os.File without calling Close, but it would be a mistake
to depend on a finalizer to flush an in-memory I/O buffer such as a
bufio.Writer, because the buffer would not be flushed at program exit.
It is not guaranteed that a finalizer will run if the size of *obj is
zero bytes.
It is not guaranteed that a finalizer will run for objects allocated
in initializers for package-level variables. Such objects may be
linker-allocated, not heap-allocated.
A finalizer may run as soon as an object becomes unreachable.
In order to use finalizers correctly, the program must ensure that
the object is reachable until it is no longer required.
Objects stored in global variables, or that can be found by tracing
pointers from a global variable, are reachable. For other objects,
pass the object to a call of the KeepAlive function to mark the
last point in the function where the object must be reachable.
For example, if p points to a struct, such as os.File, that contains
a file descriptor d, and p has a finalizer that closes that file
descriptor, and if the last use of p in a function is a call to
syscall.Write(p.d, buf, size), then p may be unreachable as soon as
the program enters syscall.Write. The finalizer may run at that moment,
closing p.d, causing syscall.Write to fail because it is writing to
a closed file descriptor (or, worse, to an entirely different
file descriptor opened by a different goroutine). To avoid this problem,
call runtime.KeepAlive(p) after the call to syscall.Write.
A single goroutine runs all finalizers for a program, sequentially.
If a finalizer must run for a long time, it should do so by starting
a new goroutine.
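A minimal sketch of the SetFinalizer/KeepAlive pattern described above, assuming a Unix-like system; the file wrapper type, the /etc/hosts path, and the direct syscall use are illustrative only:
	package main

	import (
		"fmt"
		"runtime"
		"syscall"
	)

	// file is an illustrative wrapper that owns a raw file descriptor.
	type file struct {
		fd int
	}

	func open(path string) (*file, error) {
		fd, err := syscall.Open(path, syscall.O_RDONLY, 0)
		if err != nil {
			return nil, err
		}
		f := &file{fd: fd}
		// If the wrapper is discarded without Close, release the descriptor.
		runtime.SetFinalizer(f, func(f *file) { syscall.Close(f.fd) })
		return f, nil
	}

	func (f *file) Close() error {
		runtime.SetFinalizer(f, nil) // clear the finalizer; Close releases the fd
		return syscall.Close(f.fd)
	}

	func main() {
		f, err := open("/etc/hosts") // illustrative path
		if err != nil {
			fmt.Println(err)
			return
		}
		var buf [64]byte
		n, _ := syscall.Read(f.fd, buf[:])
		fmt.Println(n, "bytes read")
		// Keep f reachable until the raw descriptor is no longer in use, so the
		// finalizer cannot close f.fd while Read may still be using it.
		runtime.KeepAlive(f)
	}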
SetMutexProfileFraction controls the fraction of mutex contention events
that are reported in the mutex profile. On average 1/rate events are
reported. The previous rate is returned.
To turn off profiling entirely, pass rate 0.
To just read the current rate, pass rate < 0.
(For n>1 the details of sampling may change.)
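A minimal sketch, with an illustrative rate of 5 and the profile written to standard error; the contention loop exists only to generate events:
	package main

	import (
		"os"
		"runtime"
		"runtime/pprof"
		"sync"
	)

	func main() {
		old := runtime.SetMutexProfileFraction(5) // report roughly 1 in 5 events
		defer runtime.SetMutexProfileFraction(old)

		var mu sync.Mutex
		var wg sync.WaitGroup
		for i := 0; i < 8; i++ {
			wg.Add(1)
			go func() {
				defer wg.Done()
				for j := 0; j < 1000; j++ {
					mu.Lock()
					mu.Unlock()
				}
			}()
		}
		wg.Wait()

		// debug=1 produces a human-readable dump of the "mutex" profile.
		pprof.Lookup("mutex").WriteTo(os.Stderr, 1)
	}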
Stack formats a stack trace of the calling goroutine into buf
and returns the number of bytes written to buf.
If all is true, Stack formats stack traces of all other goroutines
into buf after the trace for the current goroutine.
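A minimal sketch; the 64 KiB buffer size is an illustrative choice and may need to be larger to capture every goroutine:
	package main

	import (
		"fmt"
		"runtime"
	)

	func main() {
		buf := make([]byte, 1<<16)    // illustrative buffer size
		n := runtime.Stack(buf, true) // true: include all other goroutines
		fmt.Printf("%s\n", buf[:n])
	}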
StartTrace enables tracing for the current process.
While tracing, the data will be buffered and available via ReadTrace.
StartTrace returns an error if tracing is already enabled.
Most clients should use the runtime/trace package or the testing package's
-test.trace flag instead of calling StartTrace directly.
StopTrace stops tracing, if it was previously enabled.
StopTrace only returns after all the reads for the trace have completed.
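A minimal sketch of the recommended runtime/trace route, which drives StartTrace, ReadTrace, and StopTrace internally; the output file name is illustrative:
	package main

	import (
		"log"
		"os"
		"runtime/trace"
	)

	func main() {
		f, err := os.Create("trace.out") // illustrative output path
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		if err := trace.Start(f); err != nil { // fails if tracing is already enabled
			log.Fatal(err)
		}
		defer trace.Stop() // returns only after all trace reads have completed
		// ... traced workload ...
	}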
ThreadCreateProfile returns n, the number of records in the thread creation profile.
If len(p) >= n, ThreadCreateProfile copies the profile into p and returns n, true.
If len(p) < n, ThreadCreateProfile does not change p and returns n, false.
Most clients should use the runtime/pprof package instead
of calling ThreadCreateProfile directly.
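A minimal sketch of the recommended runtime/pprof route; the debug level of 1 is an illustrative choice that produces a human-readable dump:
	package main

	import (
		"os"
		"runtime/pprof"
	)

	func main() {
		pprof.Lookup("threadcreate").WriteTo(os.Stdout, 1)
	}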
UnlockOSThread undoes an earlier call to LockOSThread.
If this drops the number of active LockOSThread calls on the
calling goroutine to zero, it unwires the calling goroutine from
its fixed operating system thread.
If there are no active LockOSThread calls, this is a no-op.
Before calling UnlockOSThread, the caller must ensure that the OS
thread is suitable for running other goroutines. If the caller made
any permanent changes to the state of the thread that would affect
other goroutines, it should not call this function and thus leave
the goroutine locked to the OS thread until the goroutine (and
hence the thread) exits.
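A minimal sketch of pairing LockOSThread with UnlockOSThread; the helper name and the nature of the thread-affine work are illustrative:
	package main

	import "runtime"

	// withLockedThread is an illustrative helper that wires the calling goroutine
	// to its OS thread for the duration of work.
	func withLockedThread(work func()) {
		runtime.LockOSThread()
		// Unlock only because work leaves the thread in a reusable state; if it
		// made permanent thread-local changes, the goroutine should instead exit
		// while still locked.
		defer runtime.UnlockOSThread()
		work()
	}

	func main() {
		done := make(chan struct{})
		go func() {
			defer close(done)
			withLockedThread(func() {
				// ... thread-affine work, e.g. a C API with thread-local state ...
			})
		}()
		<-done
	}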
Version returns the Go tree's version string.
It is either the commit hash and date at the time of the build or,
when possible, a release tag like "go1.3".
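A minimal usage sketch printing the version alongside the target platform:
	package main

	import (
		"fmt"
		"runtime"
	)

	func main() {
		fmt.Println(runtime.Version(), runtime.GOOS+"/"+runtime.GOARCH)
	}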
func _cgo_panic_internal(p *byte)
func _ExternalCode()
func _GC()
func _LostExternalCode()
func _LostSIGPROFDuringAtomic64()
func _System()
func _VDSO()
abort crashes the runtime in situations where even throw might not
work. In general it should do something a debugger will recognize
(e.g., an INT3 on x86). A crash in abort is recognized by the
signal handler, which will attempt to tear down the runtime
immediately.
Abs returns the absolute value of x.
Special cases are:
Abs(±Inf) = +Inf
Abs(NaN) = NaN
This function may be called in nosplit context and thus must be nosplit.
Associate p and the current m.
This function is allowed to have write barriers even if the caller
isn't because it immediately acquires _p_.
func acquireSudog() *sudog
activeModules returns a slice of active modules.
A module is active once its gcdatamask and gcbssmask have been
assembled and it is usable by the GC.
This is nosplit/nowritebarrier because it is called by the
cgo pointer checking code.
Should be a built-in for unsafe.Pointer?
add1 returns the byte pointer p+1.
addAdjustedTimers adds any timers we adjusted in adjusttimers
back to the timer heap.
addb returns the byte pointer p+n.
Adds a finalizer to the object p. Returns true if it succeeded.
Called from linker-generated .initarray; declared for go vet; do NOT call from Go.
addOneOpenDeferFrame scans the stack for the first frame (if any) with
open-coded defers and if it finds one, adds a single record to the defer chain
for that frame. If sp is non-nil, it starts the stack scan from the frame
specified by sp. If sp is nil, it uses the sp from the current defer record
(which has just been finished). Hence, it continues the stack scan from the
frame of the defer that just finished. It skips any frame that already has an
open-coded _defer record, which would have been created from a previous
(unrecovered) panic.
Note: All entries of the defer chain (including this new open-coded entry) have
their pointers (including sp) adjusted properly if the stack moves while
running deferred functions. Also, it is safe to pass in the sp arg (which is
the direct result of calling getcallersp()), because all pointer variables
(including arguments) are adjusted as needed during stack copies.
addrsToSummaryRange converts base and limit pointers into a range
of entries for the given summary level.
The returned range is inclusive on the lower bound and exclusive on
the upper bound.
Adds the special record s to the list of special records for
the object p. All fields of s should be filled in except for
offset & next, which this routine will fill in.
Returns true if the special was successfully added, false otherwise.
(The add will fail only if a record with the same p and s->kind
already exists.)
addtimer adds a timer to the current P.
This should only be called with a newly created timer.
That avoids the risk of changing the when field of a timer in some P's heap,
which could cause the heap to become unsorted.
func adjustctxt(gp *g, adjinfo *adjustinfo) func adjustdefers(gp *g, adjinfo *adjustinfo)
Note: the argument/return area is adjusted by the callee.
func adjustpanics(gp *g, adjinfo *adjustinfo)
Adjustpointer checks whether *vpp is in the old stack described by adjinfo.
If so, it rewrites *vpp to point into the new stack.
bv describes the memory starting at address scanp.
Adjust any pointers contained therein.
adjustSignalStack adjusts the current stack guard based on the
stack pointer that is actually in use while handling a signal.
We do this in case some non-Go code called sigaltstack.
This reports whether the stack was adjusted, and if so stores the old
signal stack in *gsigstack.
func adjustsudogs(gp *g, adjinfo *adjustinfo)
adjusttimers looks through the timers in the current P's heap for
any timers that have been modified to run earlier, and puts them in
the correct place in the heap. While looking for those timers,
it also moves timers that have been modified to run later,
and removes deleted timers. The caller must have locked the timers for pp.
func advanceEvacuationMark(h *hmap, t *maptype, newbit uintptr)
func afterfork()
func alginit()
alignDown rounds n down to a multiple of a. a must be a power of 2.
alignUp rounds n up to a multiple of a. a must be a power of 2.
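A sketch of the standard power-of-two alignment tricks these helpers describe (not a copy of the runtime's source); both assume a is a power of 2:
	package main

	import "fmt"

	func alignDown(n, a uintptr) uintptr { return n &^ (a - 1) }
	func alignUp(n, a uintptr) uintptr   { return (n + a - 1) &^ (a - 1) }

	func main() {
		fmt.Println(alignDown(4097, 4096), alignUp(4097, 4096)) // 4096 8192
	}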
allFrames returns all of the Frames corresponding to pcs.
Allocate a new m unassociated with any thread.
Can use p for allocation context if needed.
fn is recorded as the new m's m.mstartfn.
id is optional pre-allocated m ID. Omit by passing -1.
This function is allowed to have write barriers even if the caller
isn't because it borrows _p_.
func allocmcache() *mcache func appendIntStr(b []byte, v int64, signed bool) []byte
arenaBase returns the low address of the region covered by heap
arena i.
arenaIndex returns the index into mheap_.arenas of the arena
containing metadata for p. This index combines an index into the
L1 map and an index into the L2 map and should be used as
mheap_.arenas[ai.l1()][ai.l2()].
If p is outside the range of valid heap addresses, either l1() or
l2() will be out of bounds.
It is nosplit because it's called by spanOf and several other
nosplit functions.
nosplit for use in linux startup sysargs
func asmcgocall(fn, arg unsafe.Pointer) int32
func asminit()
func assertE2I(inter *interfacetype, e eface) (r iface)
func assertE2I2(inter *interfacetype, e eface) (r iface, b bool)
func assertI2I(inter *interfacetype, i iface) (r iface)
func assertI2I2(inter *interfacetype, i iface) (r iface, b bool)
func assertLockHeld(l *mutex)
func assertRankHeld(r lockRank)
func assertWorldStopped()
func assertWorldStoppedOrLockHeld(l *mutex)
asyncPreempt saves all user registers and calls asyncPreempt2.
When stack scanning encounters an asyncPreempt frame, it scans that
frame and its parent frame conservatively.
asyncPreempt is implemented in assembly.
func asyncPreempt2()
atoi parses an int from a string s.
The bool result reports whether s is a number
representable by a value of type int.
atoi32 is like atoi but for integers
that fit into an int32.
atomicAllG returns &allgs[0] and len(allgs) for use with atomicAllGIndex.
atomicAllGIndex returns ptr[i] with the allgptr returned from atomicAllG.
atomicstorep performs *ptr = new atomically and invokes a write barrier.
atomicwb performs a write barrier before an atomic pointer write.
The caller should guard the call with "if writeBarrier.enabled".
called from assembly
func badctxt()
called from assembly
func badmorestackg0() func badmorestackgsignal()
badPointer throws bad pointer in heap panic.
func badreflectcall()
This runs on a foreign stack, without an m or a g. No stack split.
func badsystemstack()
badTimer is called if the timer data structures have been corrupted,
presumably due to racy use by the program. We panic here rather than
panicking due to invalid slice access while holding locks.
See issue #25686.
func badunlockosthread()
func beforefork()
func beforeIdle(int64) (*g, bool)
Background scavenger.
The background scavenger maintains the RSS of the application below
the line described by the proportional scavenging statistics in
the mheap struct.
Build a binary search tree with the n objects in the list
x.obj[idx], x.obj[idx+1], ..., x.next.obj[0], ...
Returns the root of that tree, and the buf+idx of the nth object after x.obj[idx].
(The first object that was not included in the binary search tree.)
If n == 0, returns nil, x.
func block()
blockableSig reports whether sig may be blocked by the signal mask.
We never want to block the signals marked _SigUnblock;
these are the synchronous signals that turn into a Go panic.
In a Go program--not a c-archive/c-shared--we never want to block
the signals marked _SigKill or _SigThrow, as otherwise it's possible
for all running threads to block them and delay their delivery until
we start a new thread. When linked into a C program we let the C code
decide on the disposition of those signals.
blockAlignSummaryRange aligns indices into the given level to that
level's block width (1 << levelBits[level]). It assumes lo is inclusive
and hi is exclusive, and so aligns them down and up respectively.
func blockevent(cycles int64, skip int) func blocksampled(cycles int64) bool
bool2int returns 0 if x is false or 1 if x is true.
func breakpoint() func bucketEvacuated(t *maptype, h *hmap, bucket uintptr) bool
bucketMask returns 1<<b - 1, optimized for code generation.
bucketShift returns 1<<b, optimized for code generation.
bulkBarrierBitmap executes write barriers for copying from [src,
src+size) to [dst, dst+size) using a 1-bit pointer bitmap. src is
assumed to start maskOffset bytes into the data covered by the
bitmap in bits (which may not be a multiple of 8).
This is used by bulkBarrierPreWrite for writes to data and BSS.
bulkBarrierPreWrite executes a write barrier
for every pointer slot in the memory range [src, src+size),
using pointer/scalar information from [dst, dst+size).
This executes the write barriers necessary before a memmove.
src, dst, and size must be pointer-aligned.
The range [dst, dst+size) must lie within a single object.
It does not perform the actual writes.
As a special case, src == 0 indicates that this is being used for a
memclr. bulkBarrierPreWrite will pass 0 for the src of each write
barrier.
Callers should call bulkBarrierPreWrite immediately before
calling memmove(dst, src, size). This function is marked nosplit
to avoid being preempted; the GC must not stop the goroutine
between the memmove and the execution of the barriers.
The caller is also responsible for cgo pointer checks if this
may be writing Go pointers into non-Go memory.
The pointer bitmap is not maintained for allocations containing
no pointers at all; any caller of bulkBarrierPreWrite must first
make sure the underlying allocation contains pointers, usually
by checking typ.ptrdata.
Callers must perform cgo checks if writeBarrier.cgo.
bulkBarrierPreWriteSrcOnly is like bulkBarrierPreWrite but
does not execute write barriers for [dst, dst+size).
In addition to the requirements of bulkBarrierPreWrite
callers need to ensure [dst, dst+size) is zeroed.
This is used for special cases where e.g. dst was just
created and zeroed with malloc.
func call1048576(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call1073741824(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call131072(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call134217728(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
in asm_*.s
not called directly; definitions here supply type information for traceback.
func call16777216(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call2097152(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call262144(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call268435456(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call33554432(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call4194304(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call524288(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call536870912(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call67108864(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call8388608(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
callCgoSymbolizer calls the cgoSymbolizer function.
canpanic returns false if a signal should throw instead of
panicking.
canPreemptM reports whether mp is in a state that is safe to preempt.
It is nosplit because it has nosplit callers.
func cansemacquire(addr *uint32) bool
The Gscanstatuses are acting like locks and this releases them.
If it proves to be a performance hit we should be able to make these
simple atomic stores but for now we are going to throw if
we see an inconsistent state.
casgstatus(gp, oldstatus, Gcopystack), assuming oldstatus is Gwaiting or Grunnable.
Returns old status. Cannot call casgstatus directly, because we are racing with an
async wakeup that might come in from netpoll. If we see Gwaiting from the readgstatus,
it might have become Grunnable by the time we get to the cas. If we called casgstatus,
it would loop waiting for the status to go back to Gwaiting, which it never will.
casGFromPreempted attempts to transition gp from _Gpreempted to
_Gwaiting. If successful, the caller is responsible for
re-scheduling gp.
If asked to move to or from a Gscanstatus this will throw. Use the castogscanstatus
and casfrom_Gscanstatus instead.
casgstatus will loop if the g->atomicstatus is in a Gscan status until the routine that
put it in the Gscan state is finished.
casGToPreemptScan transitions gp from _Grunning to _Gscan|_Gpreempted.
TODO(austin): This is the only status operation that both changes
the status and locks the _Gscan bit. Rethink this.
This will return false if the gp is not in the expected status and the cas fails.
This acts like a lock acquire while the casfromgstatus acts like a lock release.
func cfuncnameFromNameoff(f funcInfo, nameoff int32) *byte
Call from Go to C.
This must be nosplit because it's used for syscalls on some
platforms. Syscalls may have untyped arguments on the stack, so
it's not safe to grow or scan the stack.
Not all cgocallback frames are actually cgocallback,
so not all have these arguments. Mark them uintptr so that the GC
does not misinterpret memory when the arguments are not present.
cgocallback is not called from Go, only from crosscall2.
This in turn calls cgocallbackg, which is where we'll find
pointer-declared arguments.
Call from C back to Go.
func cgocallbackg1(fn, frame unsafe.Pointer, ctxt uintptr)
cgoCheckArg is the real work of cgoCheckPointer. The argument p
is either a pointer to the value (of type t), or the value itself,
depending on indir. The top parameter is whether we are at the top
level, where Go pointers are allowed.
cgoCheckBits checks the block of memory at src, for up to size
bytes, and throws if it finds a Go pointer. The gcbits mark each
pointer value. The src pointer is off bytes into the gcbits.
cgoCheckMemmove is called when moving a block of memory.
dst and src point off bytes into the value to copy.
size is the number of bytes to copy.
It throws if the program is copying a block that contains a Go pointer
into non-Go memory.
cgoCheckPointer checks if the argument contains a Go pointer that
points to a Go pointer, and panics if it does.
cgoCheckResult is called to check the result parameter of an
exported Go function. It panics if the result is or contains a Go
pointer.
cgoCheckSliceCopy is called when copying n elements of a slice.
src and dst are pointers to the first element of the slice.
typ is the element type of the slice.
It throws if the program is copying slice elements that contain Go pointers
into non-Go memory.
cgoCheckTypedBlock checks the block of memory at src, for up to size bytes,
and throws if it finds a Go pointer. The type of the memory is typ,
and src is off bytes into that type.
cgoCheckUnknownPointer is called for an arbitrary pointer into Go
memory. It checks whether that Go memory contains any other
pointer into Go memory. If it does, we panic.
The return values are unused but useful to see in panic tracebacks.
cgoCheckUsingType is like cgoCheckTypedBlock, but is a last ditch
fall back to look for pointers in src using the type information.
We only use this when looking at a value on the stack when the type
uses a GC program, because otherwise it's more efficient to use the
GC bits. This is called on the system stack.
cgoCheckWriteBarrier is called whenever a pointer is stored into memory.
It throws if the program is storing a Go pointer into non-Go memory.
This is called from the write barrier, so its entire call tree must
be nosplit.
cgoContextPCs gets the PC values from a cgo traceback.
cgoInRange reports whether p is between start and end.
cgoIsGoPointer reports whether the pointer is a Go pointer--a
pointer to Go memory. We only care about Go memory that might
contain pointers.
func cgoSigtramp()
called from (incomplete) assembly
cgoUse is called by cgo-generated code (using go:linkname to get at
an unexported name). The calls serve two purposes:
1) they are opaque to escape analysis, so the argument is considered to
escape to the heap.
2) they keep the argument alive until the call site; the call is emitted after
the end of the (presumed) use of the argument by C.
cgoUse should not actually be called (see cgoAlwaysFalse).
chanbuf(c, i) is a pointer to the i'th slot in the buffer.
func chanparkcommit(gp *g, chanLock unsafe.Pointer) bool
chanrecv receives on channel c and writes the received data to ep.
ep may be nil, in which case received data is ignored.
If block == false and no elements are available, returns (false, false).
Otherwise, if c is closed, zeros *ep and returns (true, false).
Otherwise, fills in *ep with an element and returns (true, true).
A non-nil ep must point to the heap or the caller's stack.
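The (selected, received) pairs above correspond to language-level behavior that can be observed directly; a minimal sketch, not the runtime internals:
	package main

	import "fmt"

	func main() {
		c := make(chan int, 1)

		// Non-blocking receive on an empty channel: the default case runs,
		// mirroring chanrecv's (false, false) result when block == false.
		select {
		case v := <-c:
			fmt.Println("received", v)
		default:
			fmt.Println("no element available")
		}

		close(c)

		// Receive on a closed, drained channel: zero value and ok == false,
		// mirroring chanrecv's (true, false) result.
		v, ok := <-c
		fmt.Println(v, ok) // 0 false
	}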
entry points for <- c from compiled code
Generic single channel send/recv. If block is not nil, then the protocol
will not sleep but return if it could not complete.
A sleep can wake up with g.param == nil when a channel involved in the
sleep has been closed. It is easiest to loop and re-run the operation;
we'll see that it's now closed.
entry point for c <- x from compiled code
func check()
checkASM reports whether assembly runtime checks have passed.
Check for a deadlock situation.
The check is based on the number of running M's; if that number is 0, the program is deadlocked.
sched.lock must be held.
sched.lock must be held.
func checkptrAlignment(p unsafe.Pointer, elem *_type, n uintptr) func checkptrArithmetic(p unsafe.Pointer, originals []unsafe.Pointer)
checkptrBase returns the base address for the allocation containing
the address p.
Importantly, if p1 and p2 point into the same variable, then
checkptrBase(p1) == checkptrBase(p2). However, the converse/inverse
is not necessarily true as allocations can have trailing padding,
and multiple variables may be packed into a single allocation.
func checkTimeouts()
checkTimers runs any timers for the P that are ready.
If now is not 0 it is the current time.
It returns the current time or 0 if it is not known,
and the time when the next timer should run or 0 if there is no next timer,
and reports whether it ran any timers.
If the time when the next timer should run is not 0,
it is always larger than the returned time.
We pass now in and out to avoid extra calls of nanotime.
chunkBase returns the base address of the palloc chunk at index ci.
chunkIndex returns the global index of the palloc chunk containing the
pointer p.
chunkPageIndex computes the index of the page that contains p,
relative to the chunk which contains p.
cleantimers cleans up the head of the timer queue. This speeds up
programs that create and delete timers; leaving them in the heap
slows down addtimer. Reports whether no timer problems were found.
The caller must have locked the timers for pp.
clearDeletedTimers removes all deleted timers from the P's timer heap.
This is used to avoid clogging up the heap if the program
starts a lot of long-running timers and then stops them.
For example, this can happen via context.WithTimeout.
This is the only function that walks through the entire timer heap,
other than moveTimers which only runs when the world is stopped.
The caller must have locked the timers for pp.
func clearpools()
clearSignalHandlers clears all signal handlers that are not ignored
back to the default. This is called by the child after a fork, so that
we can enable the signal mask for the exec without worrying about
running a signal handler in the child.
clobberfree sets the memory content at x to bad content, for debugging
purposes.
func close_trampoline()
func closeonexec(fd int32)
func complex128div(n complex128, m complex128) complex128
func concatstring2(buf *tmpBuf, a [2]string) string
func concatstring3(buf *tmpBuf, a [3]string) string
func concatstring4(buf *tmpBuf, a [4]string) string
func concatstring5(buf *tmpBuf, a [5]string) string
concatstrings implements a Go string concatenation x+y+z+...
The operands are passed in the slice a.
If buf != nil, the compiler has determined that the result does not
escape the calling function, so the string data can be stored in buf
if small enough.
func convI2I(inter *interfacetype, i iface) (r iface)
func convT2Enoptr(t *_type, elem unsafe.Pointer) (e eface)
func convT2Inoptr(tab *itab, elem unsafe.Pointer) (i iface)
func convTslice(val []byte) (x unsafe.Pointer)
func convTstring(val string) (x unsafe.Pointer)
copysign returns a value with the magnitude
of x and the sign of y.
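A sketch of a sign copy built from the float bit layout, in the spirit of this helper; it uses math.Float64bits rather than the runtime's internal equivalents:
	package main

	import (
		"fmt"
		"math"
	)

	const signMask = 1 << 63 // sign bit of an IEEE 754 float64

	func copysign(x, y float64) float64 {
		bits := math.Float64bits(x)&^signMask | math.Float64bits(y)&signMask
		return math.Float64frombits(bits)
	}

	func main() {
		fmt.Println(copysign(3.5, -1), copysign(-2, 1)) // -3.5 2
	}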
Copies gp's stack to a new stack of a different size.
Caller must have changed gp status to Gcopystack.
countrunes returns the number of runes in s.
countSub subtracts two counts obtained from profIndex.dataCount or profIndex.tagCount,
assuming that they are no more than 2^29 apart (guaranteed since they are never more than
len(data) or len(tags) apart, respectively).
tagCount wraps at 2^30, while dataCount wraps at 2^32.
This function works for both.
cpuinit extracts the environment variable GODEBUG from the environment on
Unix-like operating systems and calls internal/cpu.Initialize.
careful: cputicks is not guaranteed to be monotonic! In particular, we have
noticed drift between cpus on certain os/arch combinations. See issue 8976.
func crash()
func createfing()
func crypto_x509_syscall(fn, a1, a2, a3, a4, a5, a6 uintptr) (r1 uintptr)
func debug_modinfo() string
debugCallCheck checks whether it is safe to inject a debugger
function call with return PC pc. If not, it returns a string
explaining why.
func debugCallPanicked(val interface{}) func debugCallV1()
debugCallWrap starts a new goroutine to run a debug call and blocks
the calling goroutine. On the goroutine, it prepares to recover
panics from the debug call, and then calls the call dispatching
function at PC dispatch.
This must be deeply nosplit because there are untyped values on the
stack from debugCallV1.
debugCallWrap1 is the continuation of debugCallWrap on the callee
goroutine.
func debugCallWrap2(dispatch uintptr)
decoderune returns the non-ASCII rune at the start of
s[k:] and the index after the rune in s.
decoderune assumes that caller has checked that
the to be decoded rune is a non-ASCII rune.
If the string appears to be incomplete or decoding problems
are encountered (runeerror, k + 1) is returned to ensure
progress when decoderune is used to iterate over a string.
deductSweepCredit deducts sweep credit for allocating a span of
size spanBytes. This must be performed *before* the span is
allocated to ensure the system has enough credit. If necessary, it
performs sweeping to prevent going in to debt. If the caller will
also sweep pages (e.g., for a large allocation), it can pass a
non-zero callerSweepPages to leave that many pages unswept.
deductSweepCredit makes a worst-case assumption that all spanBytes
bytes of the ultimately allocated span will be available for object
allocation.
deductSweepCredit is the core of the "proportional sweep" system.
It uses statistics gathered by the garbage collector to perform
enough sweeping so that all pages are swept during the concurrent
sweep phase between GC cycles.
mheap_ must NOT be locked.
The arguments associated with a deferred call are stored
immediately after the _defer header in memory.
defer size class for arg size sz
Create a new deferred function fn with siz bytes of arguments.
The compiler turns a defer statement into a call to this.
deferprocStack queues a new deferred function with a defer record on the stack.
The defer record must have its siz and fn fields initialized.
All other fields can contain junk.
The defer record must be immediately followed in memory by
the arguments of the defer.
Nosplit because the arguments on the stack won't be scanned
until the defer record is spliced into the gp._defer list.
Run a deferred function if there is one.
The compiler inserts a call to this at the end of any
function which calls defer.
If there is a deferred function, this will call runtime·jmpdefer,
which will jump to the deferred function such that it appears
to have been called by the caller of deferreturn at the point
just before deferreturn was called. The effect is that deferreturn
is called again and again until there are no more deferred functions.
Declared as nosplit, because the function should not be preempted once we start
modifying the caller's frame in order to reuse the frame to call the deferred
function.
The single argument isn't actually used - it just has its address
taken so it can be matched against pending defers.
deltimer deletes the timer t. It may be on some other P, so we can't
actually remove it from the timers heap. We can only mark it as deleted.
It will be removed in due course by the P whose heap it is on.
Reports whether the timer was removed before it was run.
func dematerializeGCProg(s *mspan)
dieFromSignal kills the program with a signal.
This provides the expected exit status for the shell.
This is only called with fatal signals expected to kill the process.
128-bit / 64-bit division producing a 64-bit quotient and a 64-bit remainder.
Adapted from Hacker's Delight.
divRoundUp returns ceil(n / a).
dlog returns a debug logger. The caller can use methods on the
returned logger to add values, which will be space-separated in the
final output, much like println. The caller must call end() to
finish the message.
dlog can be used from highly-constrained corners of the runtime: it
is safe to use in the signal handler, from within the write
barrier, from within the stack implementation, and in places that
must be recursively nosplit.
This will be compiled away if built without the debuglog build tag.
However, argument construction may not be. If any of the arguments
are not literals or trivial expressions, consider protecting the
call with "if dlogEnabled".
doaddtimer adds t to the current P's heap.
The caller must have locked the timers for pp.
dodeltimer removes timer i from the current P's heap.
We are locked on the P when this is called.
It reports whether it saw no problems due to races.
The caller must have locked the timers for pp.
dodeltimer0 removes timer 0 from the current P's heap.
We are locked on the P when this is called.
It reports whether it saw no problems due to races.
The caller must have locked the timers for pp.
dolockOSThread is called by LockOSThread and lockOSThread below
after they modify m.locked. Do not allow preemption during this call,
or else the m might be different in this function than in the caller.
doSigPreempt handles a preemption signal on gp.
dounlockOSThread is called by UnlockOSThread and unlockOSThread below
after they update m->locked. Do not allow preemption during this call,
or else the m might be different in this function than in the caller.
dropg removes the association between m and the current goroutine m->curg (gp for short).
Typically a caller sets gp's status away from Grunning and then
immediately calls dropg to finish the job. The caller is also responsible
for arranging that gp will be restarted using ready at an
appropriate time. After calling dropg and arranging for gp to be
readied later, the caller can do other work but eventually should
call schedule to restart the scheduling of goroutines on this m.
dropm is called when a cgo callback has called needm but is now
done with the callback and returning back into the non-Go thread.
It puts the current m back onto the extra list.
The main expense here is the call to signalstack to release the
m's signal stack, and then the call to needm on the next callback
from this thread. It is tempting to try to save the m for next time,
which would eliminate both these costs, but there might not be
a next time: the current thread (which Go does not control) might exit.
If we saved the m for that thread, there would be an m leak each time
such a thread exited. Instead, we acquire and release an m on each
call. These should typically not be scheduling operations, just a few
atomics, so the cost should be small.
TODO(rsc): An alternative would be to allocate a dummy pthread per-thread
variable using pthread_key_create. Unlike the pthread keys we already use
on OS X, this dummy key would never be read by Go code. It would exist
only so that we could register at thread-exit-time destructor.
That destructor would put the m back onto the extra list.
This is purely a performance optimization. The current version,
in which dropm happens on each cgo call, is still correct too.
We may have to keep the current version on systems with cgo
but without pthreads, like Windows.
func duffcopy() func duffzero()
dump kinds & offsets of interesting fields in bv
dumpint() the kind & offset of each field in an object.
func dumpGCProg(p *byte)
func dumpgoroutine(gp *g)
func dumpgs()
func dumpgstatus(gp *g)
dump a uint64 in a varint format parseable by encoding/binary
func dumpitabs()
func dumpmemprof()
func dumpmemprof_callback(b *bucket, nstk uintptr, pstk *uintptr, size, allocs, frees uintptr)
dump varint uint64 length followed by memory contents
func dumpmemstats(m *MemStats) func dumpms()
dump an object
func dumpobjs()
func dumpotherroot(description string, to unsafe.Pointer)
func dumpparams()
func dumproots()
dump information for a type
func dwritebyte(b byte)
elideWrapperCalling reports whether a wrapper function that called
function id should be elided from stack traces.
empty reports whether a read from c would block (that is, the channel is
empty). It uses a single atomic read of mutable state.
encoderune writes into p (which must be large enough) the UTF-8 encoding of the rune.
It returns the number of bytes written.
endCheckmarks ends the checkmarks phase.
ensureSigM starts one global, sleeping thread to make sure at least one thread
is available to catch signals enabled for os/signal.
Standard syscall entry used by the go syscall library and normal cgo calls.
This is exported via linkname to assembly in the syscall package.
func entersyscall_gcwait() func entersyscall_sysmon()
The same as entersyscall(), but with a hint that the syscall is blocking.
func entersyscallblock_handoff()
envKeyEqual reports whether a == b, with ASCII-only case insensitivity
on Windows. The two strings must have the same length.
func evacuate_fast32(t *maptype, h *hmap, oldbucket uintptr)
func evacuate_fast64(t *maptype, h *hmap, oldbucket uintptr)
func evacuate_faststr(t *maptype, h *hmap, oldbucket uintptr)
Schedules gp to run on the current M.
If inheritTime is true, gp inherits the remaining time in the
current time slice. Otherwise, it starts a new time slice.
Never returns.
Write barriers are allowed because this is called immediately after
acquiring a P in several places.
This is exported via linkname to assembly in runtime/cgo.
func exit_trampoline()
The goroutine g exited its system call.
Arrange for it to run on a cpu again.
This is called only from the go syscall library, not
from the low-level system calls used by the runtime.
Write barriers are not allowed because our P may have been stolen.
This is exported via linkname to assembly in the syscall package.
exitsyscall slow path on g0.
Failed to acquire P, enqueue gp as runnable.
func exitsyscallfast(oldp *p) bool func exitsyscallfast_pidle() bool
exitsyscallfast_reacquired is the exitsyscall path on which this G
has successfully reacquired the P it was running on before the
syscall.
Not used on Darwin, but must be defined.
expandCgoFrames expands frame information for pc, known to be
a non-Go function, using the cgoSymbolizer hook. expandCgoFrames
returns nil if pc could not be expanded.
extendRandom extends the random numbers in r[:n] to the whole slice r.
Treats n<0 as n==0.
func f32toint32(x uint32) int32
func f32toint64(x uint32) int64
func f32touint64(x float32) uint64
func f64toint32(x uint64) int32
func f64toint64(x uint64) int64
func f64touint64(x float64) uint64
fastexprand returns a random number from an exponential distribution with
the specified mean.
fastlog2 implements a fast approximation to the base 2 log of a
float64. This is used to compute a geometric distribution for heap
sampling, without introducing dependencies into package math. This
uses a very rough approximation using the float64 exponent and the
first 25 bits of the mantissa. The top 5 bits of the mantissa are
used to load limits from a table of constants and the rest are used
to scale linearly between them.
func fastrandinit()
fatalpanic implements an unrecoverable panic. It is like fatalthrow, except
that if msgs != nil, fatalpanic also prints panic messages and decrements
runningPanicDefers once main is blocked from exiting.
fatalthrow implements an unrecoverable runtime throw. It freezes the
system, prints stack traces starting from its caller, and terminates the
process.
func fcntl_trampoline()
fillAligned returns x but with all zeroes in m-aligned
groups of m bits set to 1 if any bit in the group is non-zero.
For example, fillAligned(0x0100a3, 8) == 0xff00ff.
Note that if m == 1, this is a no-op.
m must be a power of 2 <= maxPagesPerPhysPage.
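A naive, loop-based sketch that matches the specification above (the runtime itself uses a branch-free bit trick); m is assumed to be a power of 2 no larger than 64:
	package main

	import "fmt"

	func fillAligned(x uint64, m uint) uint64 {
		if m == 1 {
			return x
		}
		group := uint64(1)<<m - 1 // mask covering one m-bit group
		for i := uint(0); i < 64; i += m {
			if x&(group<<i) != 0 {
				x |= group << i // any set bit fills its whole group
			}
		}
		return x
	}

	func main() {
		fmt.Printf("%#x\n", fillAligned(0x0100a3, 8)) // 0xff00ff
	}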
findBitRange64 returns the bit index of the first set of
n consecutive 1 bits. If no consecutive set of 1 bits of
size n may be found in c, then it returns an integer >= 64.
n must be > 0.
func findmoduledatap(pc uintptr) *moduledata
findObject returns the base address for the heap object containing
the address p, the object's span, and the index of the object in s.
If p does not point into a heap object, it returns base == 0.
If p is an invalid heap pointer and debug.invalidptr != 0,
findObject panics.
refBase and refOff optionally give the base address of the object
in which the pointer p was found and the byte offset at which it
was found. These are used for error reporting.
It is nosplit so it is safe for p to be a pointer to the current goroutine's stack.
Since p is a uintptr, it would not be adjusted if the stack were to move.
Finds a runnable goroutine to execute.
Tries to steal from other P's, get g from local or global queue, poll network.
finishsweep_m ensures that all spans are swept.
The world must be stopped. This ensures there are no sweeps in
progress.
func fint32to32(x int32) uint32
func fint32to64(x int32) uint64
func fint64to32(x int64) uint32
func fint64to64(x int64) uint64
Float64bits returns the IEEE 754 binary representation of f.
Float64frombits returns the floating point number corresponding to
the IEEE 754 binary representation b.
func float64Inf() float64
func float64NegInf() float64
func flush()
flushallmcaches flushes the mcaches of all Ps.
The world must be stopped.
flushmcache flushes the mcache of allp[i].
The world must be stopped.
fmtNSAsMS nicely formats ns nanoseconds as milliseconds.
func forcegchelper()
forEachP calls fn(p) for every P p when p reaches a GC safe point.
If a P is currently executing code, this will bring the P to a GC
safe point and execute fn on that P. If the P is not executing code
(it is idle or in a syscall), this will call fn(p) directly while
preventing the P from exiting its state. This does not ensure that
fn will run on every CPU executing Go code, but it acts as a global
memory barrier. GC uses this as a "ragged barrier."
The caller must hold worldsema.
Free the given defer.
The defer cannot be used after this call.
This must not grow the stack because there may be a frame without a
stack map when this is called.
func freedeferfn()
Separate function so that it can split stack.
Windows otherwise runs out of stack space.
freemcache releases resources associated with this
mcache and puts the object onto a free list.
In some cases there is no way to simply release
resources, such as statistics, so donate them to
a different mcache (the recipient).
freeSomeWbufs frees some workbufs back to the heap and returns
true if it should be called again to free more.
Do whatever cleanup needs to be done to deallocate s. It has
already been unlinked from the mspan specials list.
freeStackSpans frees unused stack spans at the end of GC.
Similar to stopTheWorld but best-effort and can be called several times.
There is no reverse operation, used during crashing.
This function must not lock any mutexes.
func fuint64to32(x uint64) float32 func fuint64to64(x uint64) float64
full reports whether a send on c would block (that is, the channel is full).
It uses a single word-sized read of mutable state, so although
the answer is instantaneously true, the correct answer may have changed
by the time the calling function receives the return value.
funcMaxSPDelta returns the maximum spdelta at any point in f.
func funcnameFromNameoff(f funcInfo, nameoff int32) string
funcPC returns the entry PC of the function f.
It assumes that f is a func value. Otherwise the behavior is undefined.
CAREFUL: In programs with plugins, funcPC can return different values
for the same function (because there are actually multiple copies of
the same function in the address space). To be safe, don't use the
results of this function in any == expression. It is only safe to
use the result as an address at which to start executing code.
func funcpkgpath(f funcInfo) string func funcspdelta(f funcInfo, targetpc uintptr, cache *pcvalueCache) int32
gcAssistAlloc performs GC work to make gp's assist debt positive.
gp must be the calling user goroutine.
This must be called with preemption enabled.
gcAssistAlloc1 is the part of gcAssistAlloc that runs on the system
stack. This is a separate function to make it easier to see that
we're not capturing anything from the user stack, since the user
stack may move while we're in this function.
gcAssistAlloc1 indicates whether this assist completed the mark
phase by setting gp.param to non-nil. This can't be communicated on
the stack since it may move.
gcBgMarkPrepare sets up state for background marking.
Mutator assists must not yet be enabled.
gcBgMarkStartWorkers prepares background mark worker goroutines. These
goroutines will not run until the mark phase, but they must be started while
the work is not stopped and from a regular G stack. The caller must hold
worldsema.
func gcBgMarkWorker()
gcDrain scans roots and objects in work buffers, blackening grey
objects until it is unable to get more work. It may return before
GC is done; it's the caller's responsibility to balance work from
other Ps.
If flags&gcDrainUntilPreempt != 0, gcDrain returns when g.preempt
is set.
If flags&gcDrainIdle != 0, gcDrain returns when there is other work
to do.
If flags&gcDrainFractional != 0, gcDrain self-preempts when
pollFractionalWorkerExit() returns true. This implies
gcDrainNoBlock.
If flags&gcDrainFlushBgCredit != 0, gcDrain flushes scan work
credit to gcController.bgScanCredit every gcCreditSlack units of
scan work.
gcDrain will always return if there is a pending STW.
gcDrainN blackens grey objects until it has performed roughly
scanWork units of scan work or the G is preempted. This is
best-effort, so it may perform less work if it fails to get a work
buffer. Otherwise, it will perform at least n units of work, but
may perform more because scanning is always done in whole object
increments. It returns the amount of scan work performed.
The caller goroutine must be in a preemptible state (e.g.,
_Gwaiting) to prevent deadlocks during stack scanning. As a
consequence, this must be called on the system stack.
gcDumpObject dumps the contents of obj for debugging and marks the
field at byte offset off in obj.
gcEffectiveGrowthRatio returns the current effective heap growth
ratio (GOGC/100) based on heap_marked from the previous GC and
next_gc for the current GC.
This may differ from gcpercent/100 because of various upper and
lower bounds on gcpercent. For example, if the heap is smaller than
heapminimum, this can be higher than gcpercent/100.
mheap_.lock must be held or the world must be stopped.
gcenable is called after the bulk of the runtime initialization,
just before we're about to start letting user code run.
It kicks off the background sweeper goroutine, the background
scavenger goroutine, and enables GC.
gcFlushBgCredit flushes scanWork units of background scan work
credit. This first satisfies blocked assists on the
work.assistQueue and then flushes any remaining credit to
gcController.bgScanCredit.
Write barriers are disallowed because this is used by gcDrain after
it has ensured that all work is drained and this must preserve that
condition.
func gcinit()
gcMark runs the mark phase (or, for concurrent GC, the mark termination phase).
All gcWork caches must be empty.
STW is in effect at this point.
gcMarkDone transitions the GC from mark to mark termination if all
reachable objects have been marked (that is, there are no grey
objects and can be no more in the future). Otherwise, it flushes
all local work to the global queues where it can be discovered by
other workers.
This should be called when all local mark work has been drained and
there are no remaining workers. Specifically, when
work.nwait == work.nproc && !gcMarkWorkAvailable(p)
The calling context must be preemptible.
Flushing local work is important because idle Ps may have local
work queued. This is the only way to make that work visible and
drive GC to completion.
It is explicitly okay to have write barriers in this function. If
it does transition to mark termination, then all reachable objects
have been marked, so the write barrier cannot shade any more
objects.
gcmarknewobject marks a newly allocated object black. obj must
not contain any non-nil pointers.
This is nosplit so it can manipulate a gcWork without preemption.
gcMarkRootCheck checks that all roots have been scanned. It is
purely for debugging.
gcMarkRootPrepare queues root scanning jobs (stacks, globals, and
some miscellany) and initializes scanning-related state.
The world must be stopped.
World must be stopped and mark assists and background workers must be
disabled.
gcMarkTinyAllocs greys all active tiny alloc blocks.
The world must be stopped.
gcMarkWorkAvailable reports whether executing a mark worker
on p is potentially useful. p may be nil, in which case it only
checks the global sources of work.
gcPaceScavenger updates the scavenger's pacing, particularly
its rate and RSS goal.
The RSS goal is based on the current heap goal with a small overhead
to accommodate non-determinism in the allocator.
The pacing is based on scavengePageRate, which applies to both regular and
huge pages. See that constant for more information.
mheap_.lock must be held or the world must be stopped.
gcParkAssist puts the current goroutine on the assist queue and parks.
gcParkAssist reports whether the assist is now satisfied. If it
returns false, the caller must retry the assist.
gcResetMarkState resets global state prior to marking (concurrent
or STW) and resets the stack scan state of all Gs.
This is safe to do without the world stopped because any Gs created
during or after this will start out in the reset state.
gcResetMarkState must be called on the system stack because it acquires
the heap lock. See mheap for details.
gcSetTriggerRatio sets the trigger ratio and updates everything
derived from it: the absolute trigger, the heap goal, mark pacing,
and sweep pacing.
This can be called any time. If GC is in the middle of a
concurrent phase, it will adjust the pacing of that phase.
This depends on gcpercent, memstats.heap_marked, and
memstats.heap_live. These must be up to date.
mheap_.lock must be held or the world must be stopped.
gcStart starts the GC. It transitions from _GCoff to _GCmark (if
debug.gcstoptheworld == 0) or performs all of GC (if
debug.gcstoptheworld != 0).
This may return without performing this transition in some cases,
such as when called on a system stack or with locks held.
Stops the current m for stopTheWorld.
Returns when the world is restarted.
gcSweep must be called on the system stack because it acquires the heap
lock. See mheap for details.
The world must be stopped.
gcWaitOnMark blocks until GC finishes the Nth mark phase. If GC has
already completed this mark phase, it returns immediately.
gcWakeAllAssists wakes all currently blocked assists. This is used
at the end of a GC cycle. gcBlackenEnabled must be false to prevent
new assists from going to sleep after this point.
Called from compiled code; declared for vet; do NOT call from Go.
func gcWriteBarrierBP() func gcWriteBarrierBX()
Called from compiled code; declared for vet; do NOT call from Go.
func gcWriteBarrierDX()
func gcWriteBarrierR8()
func gcWriteBarrierR9()
func gcWriteBarrierSI()
Generic traceback. Handles runtime stack prints (pcbuf == nil),
the runtime.Callers function (pcbuf != nil), as well as the garbage
collector (callback != nil). A little clunky to merge these, but avoids
duplicating the code and all its subtlety.
The skip argument is only valid with pcbuf != nil and counts the number
of logical frames to skip rather than physical frames (with inlining, a
PC in pcbuf can represent multiple calls). If a PC is partially skipped
and max > 1, pcbuf[1] will be runtime.skipPleaseUseCallersFrames+N where
N indicates the number of logical frames to skip in pcbuf[0].
getArgInfo returns the argument frame information for a call to f
with call frame frame.
This is used for both actual calls with active stack frames and for
deferred calls or goroutines that are not yet executing. If this is an actual
call, ctxt must be nil (getArgInfo will retrieve what it needs from
the active stack frame). If this is a deferred call or unstarted goroutine,
ctxt must be the function object that was deferred or go'd.
getArgInfoFast returns the argument frame information for a call to f.
It is short and inlineable. However, it does not handle all functions.
If ok reports false, you must call getArgInfo instead.
TODO(josharian): once we do mid-stack inlining,
call getArgInfo directly from getArgInfoFast and stop returning an ok bool.
getargp returns the location where the caller
writes outgoing function call arguments.
func getCachedDlogger() *dlogger
func getcallerpc() uintptr
func getcallersp() uintptr
getclosureptr returns the pointer to the current closure.
getclosureptr can only be used in an assignment statement
at the entry of a function. Moreover, go:nosplit directive
must be specified at the declaration of caller function,
so that the function prolog does not clobber the closure register.
for example:
//go:nosplit
func f(arg1, arg2, arg3 int) {
dx := getclosureptr()
}
The compiler rewrites calls to this function into instructions that fetch the
pointer from a well-known register (DX on x86 architecture, etc.) directly.
getempty pops an empty work buffer off the work.empty list,
allocating new buffers if none are available.
getg returns the pointer to the current g.
The compiler rewrites calls to this function into instructions
that fetch the g directly (from TLS or from the dedicated register).
Returns GC type info for the pointer stored in ep for testing.
If ep points to the stack, only static live information will be returned
(i.e. not for objects which are only dynamically live stack objects).
func getgcmaskcb(frame *stkframe, ctxt unsafe.Pointer) bool
func getitab(inter *interfacetype, typ *_type, canfail bool) *itab
func getLockRank(l *mutex) lockRank
A helper function for EnsureDropM.
getMCache is a convenience function which tries to obtain an mcache.
Returns nil if we're not bootstrapping or we don't have a P. The caller's
P must not change, so we must be in a non-preemptible state.
func getPageSize() uintptr func getRandomData(r []byte)
getStackMap returns the locals and arguments live pointer maps, and
stack object list for frame.
Get from gfree list.
If local list is empty, grab a batch from global list.
Purge all cached G's from gfree list to the global list.
Put on gfree list.
If local list is too long, transfer a batch to the global list.
Try get a batch of G's from the global runnable queue.
sched.lock must be held.
Put gp on the global runnable queue.
sched.lock must be held.
May run during STW, so write barriers are not allowed.
Put a batch of runnable goroutines on the global runnable queue.
This clears *batch.
sched.lock must be held.
Put gp at the head of the global runnable queue.
sched.lock must be held.
May run during STW, so write barriers are not allowed.
func goargs()
used by cmd/cgo
func goenvs() func goenvs_unix()
goexit is the return stub at the top of every goroutine call stack.
Each goroutine stack is constructed as if goexit called the
goroutine's entry point function, so that when the entry point
function returns, it will return to goexit, which will call goexit1
to perform the actual exit.
This function must never be called directly. Call goexit1 instead.
gentraceback assumes that goexit terminates the stack. A direct
call on the stack will cause gentraceback to stop walking the stack
prematurely and if there is leftover state it may panic.
goexit continuation on g0.
Finishes execution of the current goroutine.
The implementation of the predeclared function panic.
failures in the comparisons for s[x], 0 <= x < y (y == len(s))
func goPanicIndexU(x uint, y int)
func goPanicSlice3Acap(x int, y int)
func goPanicSlice3AcapU(x uint, y int)
failures in the comparisons for s[::x], 0 <= x <= y (y == len(s) or cap(s))
func goPanicSlice3AlenU(x uint, y int)
failures in the comparisons for s[:x:y], 0 <= x <= y
func goPanicSlice3BU(x uint, y int)
failures in the comparisons for s[x:y:], 0 <= x <= y
func goPanicSlice3CU(x uint, y int)
func goPanicSliceAcap(x int, y int)
func goPanicSliceAcapU(x uint, y int)
failures in the comparisons for s[:x], 0 <= x <= y (y == len(s) or cap(s))
func goPanicSliceAlenU(x uint, y int)
failures in the comparisons for s[x:y], 0 <= x <= y
func goPanicSliceBU(x uint, y int)
Puts the current goroutine into a waiting state and calls unlockf on the
system stack.
If unlockf returns false, the goroutine is resumed.
unlockf must not access this G's stack, as it may be moved between
the call to gopark and the call to unlockf.
Note that because unlockf is called after putting the G into a waiting
state, the G may have already been readied by the time unlockf is called
unless there is external synchronization preventing the G from being
readied. If unlockf returns false, it must guarantee that the G cannot be
externally readied.
Reason explains why the goroutine has been parked. It is displayed in stack
traces and heap dumps. Reasons should be unique and descriptive. Do not
re-use reasons, add new ones.
Puts the current goroutine into a waiting state and unlocks the lock.
The goroutine can be made runnable again by calling goready(gp).
func gopreempt_m(gp *g)
The implementation of the predeclared function recover.
Cannot split the stack because it needs to reliably
find the stack segment of its caller.
TODO(rsc): Once we commit to CopyStackAlways,
this doesn't need to be nosplit.
func goroutineheader(gp *g)
labels may be nil. If labels is non-nil, it must have the same length as p.
Ready the goroutine arg.
Gosched continuation on g0.
goschedguarded yields the processor like gosched, but also checks
for forbidden states and opts out of the yield in those cases.
goschedguarded is a forbidden-states-avoided version of gosched_m
func goschedImpl(gp *g)
adjust Gobuf as if it executed a call to fn with context ctxt
and then did an immediate gosave.
adjust Gobuf as if it executed a call to fn
and then did an immediate gosave.
This is exported via linkname to assembly in syscall (for Plan9).
func gostringnocopy(str *byte) string
gotraceback returns the current traceback settings.
If level is 0, suppress all tracebacks.
If level is 1, show tracebacks, but exclude runtime frames.
If level is 2, show tracebacks including runtime frames.
If all is set, print all goroutine stacks. Otherwise, print just the current goroutine.
If crash is set, crash (core dump, etc) after tracebacking.
goyield is like Gosched, but it:
- emits a GoPreempt trace event instead of a GoSched trace event
- puts the current G on the runq of the current P instead of the globrunq
obj is the start of an object with mark mbits.
If it isn't already marked, mark it and enqueue into gcw.
base and off are for debugging only and could be removed.
See also wbBufFlush1, which partially duplicates this logic.
growslice handles slice growth during append.
It is passed the slice element type, the old slice, and the desired new minimum capacity,
and it returns a new slice with at least that capacity, with the old data
copied into it.
The new slice's length is set to the old slice's length,
NOT to the new requested capacity.
This is for codegen convenience. The old slice's length is used immediately
to calculate where to write new values during an append.
TODO: When the old backend is gone, reconsider this decision.
The SSA backend might prefer the new length or to return only ptr/cap and save stack space.
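At the language level growslice is reached through append; a small illustration of the growth behaviour (the exact growth factors are an implementation detail and vary between releases):

    package main

    import "fmt"

    func main() {
        s := make([]int, 0, 1)
        for i := 0; i < 10; i++ {
            s = append(s, i) // calls growslice when cap is exhausted
            // len counts the appended elements; cap jumps in larger steps
            // because growslice over-allocates to amortize future appends.
            fmt.Println(len(s), cap(s))
        }
    }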
func growWork_fast32(t *maptype, h *hmap, bucket uintptr) func growWork_fast64(t *maptype, h *hmap, bucket uintptr) func growWork_faststr(t *maptype, h *hmap, bucket uintptr)
write to goroutine-local buffer if diverting output,
or else standard error.
Hands off P from syscall or locked M.
Always runs without a P, so write barriers are not allowed.
func haveexperiment(name string) bool
heapBitsForAddr returns the heapBits for the address addr.
The caller must ensure addr is in an allocated span.
In particular, be careful not to point past the end of an object.
nosplit because it is used during write barriers and must not be preempted.
heapBitsSetType records that the new allocation [x, x+size)
holds in [x, x+dataSize) one or more values of type typ.
(The number of values is given by dataSize / typ.size.)
If dataSize < size, the fragment [x+dataSize, x+size) is
recorded as non-pointer data.
It is known that the type has pointers somewhere;
malloc does not call heapBitsSetType when there are no pointers,
because all free objects are marked as noscan during
heapBitsSweepSpan.
There can only be one allocation from a given span active at a time,
and the bitmap for a span always falls on byte boundaries,
so there are no write-write races for access to the heap bitmap.
Hence, heapBitsSetType can access the bitmap without atomics.
There can be read-write races between heapBitsSetType and things
that read the heap bitmap like scanobject. However, since
heapBitsSetType is only used for objects that have not yet been
made reachable, readers will ignore bits being modified by this
function. This does mean this function cannot transiently modify
bits that belong to neighboring objects. Also, on weakly-ordered
machines, callers must execute a store/store (publication) barrier
between calling this function and making the object reachable.
heapBitsSetTypeGCProg implements heapBitsSetType using a GC program.
progSize is the size of the memory described by the program.
elemSize is the size of the element that the GC program describes (a prefix of).
dataSize is the total size of the intended data, a multiple of elemSize.
allocSize is the total size of the allocated memory.
GC programs are only used for large allocations.
heapBitsSetType requires that allocSize is a multiple of 4 words,
so that the relevant bitmap bytes are not shared with surrounding
objects.
heapRetained returns an estimate of the current heap RSS.
hexdumpWords prints a word-oriented hex dump of [p, end).
If mark != nil, it will be called with each printed word's address
and should return a character mark to appear just before that
word's value. It can return 0 to indicate no mark.
func incidlelocked(v int32)
inf2one returns a signed 1 if f is an infinity and a signed 0 otherwise.
The sign of the result is the sign of f.
inheap reports whether b is a pointer into a (potentially dead) heap object.
It returns false for pointers into mSpanManual spans.
Non-preemptible because it is used by write barriers.
inHeapOrStack is a variant of inheap that returns true for pointers
into any allocated heap span.
func init()
start forcegc helper goroutine
func init() func init() func init() func init() func init() func init() func initAlgAES()
initMetrics initializes the metrics map if it hasn't been yet.
metricsSema must be held.
Initialize signals.
Called by libpreinit so runtime may not be initialized.
injectglist adds each runnable G on the list to some run queue,
and clears glist. If there is no current P, they are added to the
global queue, and up to npidle M's are started to run them.
Otherwise, for each idle P, this adds a G to the global queue
and starts an M. Any remaining G's are added to the current P's
local run queue.
This may temporarily acquire sched.lock.
Can run concurrently with GC.
inPersistentAlloc reports whether p points to memory allocated by
persistentalloc. This must be nosplit because it is called by the
cgo checker code, which is called by the write barrier code.
inRange reports whether v0 or v1 are in the range [r0, r1].
func interequal(p, q unsafe.Pointer) bool func internal_cpu_getsysctlbyname(name []byte) (int32, int32) func inVDSOPage(pc uintptr) bool
isAbortPC reports whether pc is the program counter at which
runtime.abort raises a signal.
It is nosplit because it's part of the isgoexception
implementation.
isAsyncSafePoint reports whether gp at instruction PC is an
asynchronous safe point. This indicates that:
1. It's safe to suspend gp and conservatively scan its stack and
registers. There are no potentially hidden pointer values and it's
not in the middle of an atomic sequence like a write barrier.
2. gp has enough stack space to inject the asyncPreempt call.
3. It's generally safe to interact with the runtime, even if we're
in a signal handler stopped here. For example, there are no runtime
locks held, so acquiring a runtime lock won't self-deadlock.
In some cases the PC is safe for asynchronous preemption but it
also needs to adjust the resumption PC. The new PC is returned in
the second result.
isDirectIface reports whether t is stored directly in an interface value.
isEmpty reports whether the given tophash array entry represents an empty bucket entry.
isExportedRuntime reports whether name is an exported runtime function.
It is only for runtime functions, so ASCII A-Z is fine.
isFinite reports whether f is neither NaN nor an infinity.
isInf reports whether f is an infinity.
isNaN reports whether f is an IEEE 754 "not-a-number" value.
func isPowerOfTwo(x uintptr) bool
isShrinkStackSafe returns whether it's safe to attempt to shrink
gp's stack. Shrinking the stack is only safe when we have precise
pointer maps for all frames on the stack.
isSweepDone reports whether all spans are swept or currently being swept.
Note that this condition may transition from false to true at any
time as the sweeper runs. It may transition from true to false if a
GC runs; to prevent that the caller must be non-preemptible or must
somehow block GC progress.
isSystemGoroutine reports whether the goroutine g must be omitted
in stack dumps and deadlock detector. This is any goroutine that
starts at a runtime.* entry point, except for runtime.main,
runtime.handleAsyncEvent (wasm only) and sometimes runtime.runfinq.
If fixed is true, any goroutine that can vary between user and
system (that is, the finalizer goroutine) is considered a user
goroutine.
func itab_callback(tab *itab)
itabAdd adds the given itab to the itab hash table.
itabLock must be held.
func itabHashFunc(inter *interfacetype, typ *_type) uintptr func itabsinit() func iterate_itabs(fn func(*itab))
itoa converts val to a decimal representation. The result is
written somewhere within buf and the location of the result is returned.
buf must be at least 20 bytes.
itoaDiv formats val/(10**dec) into buf.
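A user-level sketch of the itoa contract described above (a hypothetical reimplementation, not the runtime's code): digits are written backwards into the tail of buf, which is why buf must be at least 20 bytes, the width of the largest uint64.

    package main

    import "fmt"

    // itoaSketch writes val in decimal somewhere inside buf and returns the
    // sub-slice of buf holding the result.
    func itoaSketch(buf []byte, val uint64) []byte {
        i := len(buf) - 1
        for val >= 10 {
            buf[i] = byte(val%10 + '0')
            i--
            val /= 10
        }
        buf[i] = byte(val + '0')
        return buf[i:]
    }

    func main() {
        var buf [20]byte
        fmt.Println(string(itoaSketch(buf[:], 18446744073709551615))) // max uint64
    }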
func kevent_trampoline() func kqueue_trampoline()
less checks if a < b, considering a & b running counts that may overflow the
32-bit range, and that their "unwrapped" difference is always less than 2^31.
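The usual way to implement such a wrap-safe comparison is to look at the sign of the difference; a minimal sketch (hypothetical helper, shown only to illustrate the idea):

    package main

    import "fmt"

    // lessWrapped reports whether a precedes b, assuming the two counters never
    // drift apart by 2^31 or more, so a-b cannot wrap past the sign bit.
    func lessWrapped(a, b uint32) bool {
        return int32(a-b) < 0
    }

    func main() {
        fmt.Println(lessWrapped(0xFFFFFFFF, 2)) // true: the counter wrapped, so 0xFFFFFFFF still precedes 2
    }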
levelIndexToOffAddr converts an index into summary[level] into
the corresponding address in the offset address space.
lfnodeValidate panics if node is not a valid address for use with
lfstack.push. This only needs to be called when node is allocated.
func lfstackPack(node *lfnode, cnt uintptr) uint64 func lfstackUnpack(val uint64) *lfnode
Call fn with arg as its argument. Return what fn returns.
fn is the raw pc value of the entry point of the desired function.
Switches to the system stack, if not already there.
Preserves the calling point as the location where a profiler traceback will begin.
Called to do synchronous initialization of Go code built with
-buildmode=c-archive or -buildmode=c-shared.
None of the Go runtime is initialized.
func lockedOSThread() bool
lockextra locks the extra list and returns the list head.
The caller must unlock the list by storing a new list head
to extram. If nilokay is true, then lockextra will
return a nil list head if that's what it finds. If nilokay is false,
lockextra will keep waiting until the list head is no longer nil.
func lockOSThread()
func lockWithRank(l *mutex, rank lockRank)
func lockWithRankMayAcquire(l *mutex, rank lockRank)
func lowerASCII(c byte) byte
func madvise_trampoline()
The main goroutine.
func main_main()
makeAddrRange creates a new address range from two virtual addresses.
Throws if the base and limit are not in the same memory segment.
makeBucketArray initializes a backing array for map buckets.
1<<b is the minimum number of buckets to allocate.
dirtyalloc should either be nil or a bucket array previously
allocated by makeBucketArray with the same t and b parameters.
If dirtyalloc is nil, a new backing array will be allocated; otherwise
dirtyalloc will be cleared and reused as the backing array.
func makechan64(t *chantype, size int64) *hchan
makeHeadTailIndex creates a headTailIndex value from a separate
head and tail.
func makeheapobjbv(p uintptr, size uintptr) bitvector
makemap implements Go map creation for make(map[k]v, hint).
If the compiler has determined that the map or the first bucket
can be created on the stack, h and/or bucket may be non-nil.
If h != nil, the map can be created directly in h.
If h.buckets != nil, bucket pointed to can be used as the first bucket.
makemap_small implements Go map creation for make(map[k]v) and
make(map[k]v, hint) when hint is known to be at most bucketCnt
at compile time and the map needs to be allocated on the heap.
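From user code these functions are reached through make; which variant the compiler picks is its own business, but the capacity hint is the part the caller controls. A small example:

    package main

    import "fmt"

    func main() {
        // The hint is forwarded to map creation so enough buckets can be
        // allocated up front, avoiding incremental growth during the loop.
        const n = 10000
        m := make(map[int]string, n)
        for i := 0; i < n; i++ {
            m[i] = "x"
        }
        fmt.Println(len(m))
    }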
func makeslice64(et *_type, len64, cap64 int64) unsafe.Pointer
makeslicecopy allocates a slice of "tolen" elements of type "et",
then copies "fromlen" elements of type "et" into that new allocation from "from".
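This corresponds to the common make-then-copy pattern in source code, which the compiler may lower to a single makeslicecopy call; for example:

    package main

    import "fmt"

    func main() {
        src := []byte("hello")
        dst := make([]byte, len(src)) // make + copy of the same element type:
        copy(dst, src)                // a candidate for the makeslicecopy lowering
        fmt.Println(string(dst))
    }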
func makeSpanClass(sizeclass uint8, noscan bool) spanClass
makeStatDepSet creates a new statDepSet from a list of statDeps.
Allocate a new g, with a stack big enough for stacksize bytes.
Allocate an object of size bytes.
Small objects are allocated from the per-P cache's free lists.
Large objects (> 32 kB) are allocated straight from the heap.
func mallocinit()
mapaccess1 returns a pointer to h[key]. Never returns nil, instead
it will return a reference to the zero object for the elem type if
the key is not in the map.
NOTE: The returned pointer may keep the whole map live, so don't
hold onto it for very long.
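This is why a plain map index expression never fails at the language level; the comma-ok form is what distinguishes a missing key from a stored zero value:

    package main

    import "fmt"

    func main() {
        m := map[string]int{"a": 1}

        v := m["missing"] // zero value, courtesy of mapaccess1's zero object
        v2, ok := m["missing"]

        fmt.Println(v, v2, ok) // 0 0 false
    }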
returns both key and elem. Used by map iterator
Like mapaccess, but allocates a slot for the key if it is not present in the map.
mapclear deletes all keys from a map.
func mapdelete_fast32(t *maptype, h *hmap, key uint32) func mapdelete_fast64(t *maptype, h *hmap, key uint64) func mapdelete_faststr(t *maptype, h *hmap, ky string)
mapiterinit initializes the hiter struct used for ranging over maps.
The hiter struct pointed to by 'it' is allocated on the stack
by the compiler's order pass or on the heap by reflect_mapiterinit.
Both need to have zeroed hiter since the struct contains pointers.
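Every range statement over a map goes through this pair of functions; a trivial example (the order in which keys appear is unspecified):

    package main

    import "fmt"

    func main() {
        m := map[string]int{"a": 1, "b": 2, "c": 3}
        for k, v := range m { // compiled into mapiterinit / mapiternext calls
            fmt.Println(k, v)
        }
    }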
func mapiternext(it *hiter) func markBitsForAddr(p uintptr) markBits
markBitsForSpan returns the markBits for the span base address base.
markroot scans the i'th root.
Preemption must be disabled (because this uses a gcWork).
nowritebarrier is only advisory here.
markrootBlock scans the shard'th shard of the block of memory [b0,
b0+n0), with the given pointer mask.
markrootFreeGStacks frees stacks of dead Gs.
This does not free stacks of dead Gs cached on Ps, but having a few
cached stacks around isn't a problem.
markrootSpans marks roots for one shard of markArenas.
materializeGCProg allocates space for the (1-bit) pointer bitmask
for an object of size ptrdata. Then it fills that space with the
pointer bitmask specified by the program prog.
The bitmask starts at s.startAddr.
The result must be deallocated with dematerializeGCProg.
mcall switches from the g to the g0 stack and invokes fn(g),
where g is the goroutine that made the call.
mcall saves g's current PC/SP in g->sched so that it can be restored later.
It is up to fn to arrange for that later execution, typically by recording
g in a data structure, causing something to call ready(g) later.
mcall returns to the original goroutine g later, when g has been rescheduled.
fn must not return at all; typically it ends by calling schedule, to let the m
run other goroutines.
mcall can only be called from g stacks (not g0, not gsignal).
This must NOT be go:noescape: if fn is a stack-allocated closure,
fn puts g on a run queue, and g executes before fn returns, the
closure will be invalidated while it is still executing.
Pre-allocated ID may be passed as 'id', or omitted by passing -1.
Called from exitm, but not from drop, to undo the effect of thread-owned
resources in minit, semacreate, or elsewhere. Do not take locks after calling this.
mDoFixup runs any outstanding fixup function for the running m.
Returns true if a fixup was outstanding and actually executed.
Note: to avoid deadlocks, and the need for the fixup function
itself to be async safe, signals are blocked for the working m
while it holds the mFixup lock. (See golang.org/issue/44193)
mDoFixupAndOSYield is called when an m is unable to send a signal
because the allThreadsSyscall mechanism is in progress. That is, an
mPark() has been interrupted with this signal handler so we need to
ensure the fixup is executed from this context.
memclrHasPointers clears n bytes of typed memory starting at ptr.
The caller must ensure that the type of the object at ptr has
pointers, usually by checking typ.ptrdata. However, ptr
does not have to point to the start of the allocation.
memclrNoHeapPointers clears n bytes starting at ptr.
Usually you should use typedmemclr. memclrNoHeapPointers should be
used only when the caller knows that *ptr contains no heap pointers
because either:
*ptr is initialized memory and its type is pointer-free, or
*ptr is uninitialized memory (e.g., memory that's being reused
for a new allocation) and hence contains only "junk".
memclrNoHeapPointers ensures that if ptr is pointer-aligned, and n
is a multiple of the pointer size, then any pointer-aligned,
pointer-sized portion is cleared atomically. Despite the function
name, this is necessary because this function is the underlying
implementation of typedmemclr and memclrHasPointers. See the doc of
memmove for more details.
The (CPU-specific) implementations of this function are in memclr_*.s.
in internal/bytealg/equal_*.s
func memequal128(p, q unsafe.Pointer) bool
func memequal16(p, q unsafe.Pointer) bool
func memequal32(p, q unsafe.Pointer) bool
func memequal64(p, q unsafe.Pointer) bool
func memequal_varlen(a, b unsafe.Pointer) bool
in asm_*.s
func memhash128(p unsafe.Pointer, h uintptr) uintptr
func memhash32Fallback(p unsafe.Pointer, seed uintptr) uintptr
func memhash64Fallback(p unsafe.Pointer, seed uintptr) uintptr
func memhash_varlen(p unsafe.Pointer, h uintptr) uintptr
func memhashFallback(p unsafe.Pointer, seed, s uintptr) uintptr
memmove copies n bytes from "from" to "to".
memmove ensures that any pointer in "from" is written to "to" with
an indivisible write, so that racy reads cannot observe a
half-written pointer. This is necessary to prevent the garbage
collector from observing invalid pointers, and differs from memmove
in unmanaged languages. However, memmove is only required to do
this if "from" and "to" may contain pointers, which can only be the
case if "from", "to", and "n" are all word-aligned.
Implementations are in memmove_*.s.
mergeSummaries merges consecutive summaries, each of which may represent at
most 1 << logMaxPagesPerSum pages, into one.
mexit tears down and exits the current thread.
Don't call this directly to exit the thread, since it must run at
the top of the thread stack. Instead, use gogo(&_g_.m.g0.sched) to
unwind the stack to the point that exits the thread.
It is entered with m.p != nil, so write barriers are allowed. It
will release the P before exiting.
Try to get an m from midle list.
sched.lock must be held.
May run during STW, so write barriers are not allowed.
Called to initialize a new m (including the bootstrap m).
Called on the new thread, cannot allocate memory.
minitSignalMask is called when initializing a new m to set the
thread's signal mask. When this is called all signals have been
blocked for the thread. This starts with m.sigmask, which was set
either from initSigmask for a newly created thread or by calling
sigsave if this is a non-Go thread calling a Go function. It
removes all essential signals from the mask, thus causing those
signals to not be blocked. Then it sets the thread's signal mask.
After this is called the thread can receive signals.
minitSignals is called when initializing a new m to set the
thread's alternate signal stack and signal mask.
minitSignalStack is called when initializing a new m to set the
alternate signal stack. If the alternate signal stack is not set
for the thread (the normal case) then set the alternate signal
stack to the gsignal stack. If the alternate signal stack is set
for the thread (the case when a non-Go thread sets the alternate
signal stack and then calls a Go function) then set the gsignal
stack to the alternate signal stack. We also set the alternate
signal stack to the gsignal stack if cgo is not used (regardless
of whether it is already set). Record which choice was made in
newSigstack, so that it can be undone in unminit.
func mlock_trampoline()
mmap is used to do low-level memory allocation via mmap. Don't allow stack
splits, since this function (used by sysAlloc) is called in a lot of low-level
parts of the runtime and callers often assume it won't acquire any locks.
go:nosplit
func mmap_trampoline()
modtimer modifies an existing timer.
This is called by the netpoll code or time.Ticker.Reset or time.Timer.Reset.
Reports whether the timer was modified before it was run.
modTimer modifies an existing timer.
func moduledataverify() func moduledataverify1(datap *moduledata)
modulesinit creates the active modules slice out of all loaded modules.
When a module is first loaded by the dynamic linker, an .init_array
function (written by cmd/link) is invoked to call addmoduledata,
appending the module to the linked list that starts with
firstmoduledata.
There are two times this can happen in the lifecycle of a Go
program. First, if compiled with -linkshared, a number of modules
built with -buildmode=shared can be loaded at program initialization.
Second, a Go program can load a module while running that was built
with -buildmode=plugin.
After loading, this function is called which initializes the
moduledata so it is usable by the GC and creates a new activeModules
list.
Only one goroutine may call modulesinit at a time.
func morestack() func morestack_noctxt()
This is exported as ABI0 via linkname so obj can call it.
moveTimers moves a slice of timers to pp. The slice has been taken
from a different P.
This is currently called when the world is stopped, but the caller
is expected to have locked the timers for pp.
mPark causes a thread to park itself - temporarily waking for
fixups but otherwise waiting to be fully woken. This is the
only way that m's should park themselves.
Called to initialize a new m (including the bootstrap m).
Called on the parent thread (main thread in case of bootstrap), can allocate memory.
mProf_Flush flushes the events from the current heap profiling
cycle into the active profile. After this it is safe to start a new
heap profiling cycle with mProf_NextCycle.
This is called by GC after mark termination starts the world. In
contrast with mProf_NextCycle, this is somewhat expensive, but safe
to do concurrently.
func mProf_FlushLocked()
Called when freeing a profiled block.
Called by malloc to record a profiled block.
mProf_NextCycle publishes the next heap profile cycle and creates a
fresh heap profile cycle. This operation is fast and can be done
during STW. The caller must call mProf_Flush before calling
mProf_NextCycle again.
This is called by mark termination during STW so allocations and
frees after the world is started again count towards a new heap
profiling cycle.
mProf_PostSweep records that all sweep frees for this GC cycle have
completed. This has the effect of publishing the heap profile
snapshot as of the last mark termination without advancing the heap
profile cycle.
Put mp on midle list.
sched.lock must be held.
May run during STW, so write barriers are not allowed.
mReserveID returns the next ID to use for a new m. This new m is immediately
considered 'running' by checkdead.
sched.lock must be held.
func msanmalloc(addr unsafe.Pointer, sz uintptr)
msigrestore sets the current thread's signal mask to sigmask.
This is used to restore the non-Go signal mask when a non-Go thread
calls a Go function.
This is nosplit and nowritebarrierrec because it is called by dropm
after g has been cleared.
func mspinning()
mStackIsSystemAllocated indicates whether this runtime starts on a
system-allocated stack.
mstart is the entry-point for new Ms.
This must not split the stack because we may not even have stack
bounds set up yet.
May run during STW (because it doesn't have a P yet), so write
barriers are not allowed.
func mstart1()
glue code to call mstart from pthread_create.
mstartm0 implements part of mstart1 that only runs on the m0.
Write barriers are allowed here because we know the GC can't be
running yet, so they'll be no-ops.
64x64 -> 128 multiply.
adapted from Hacker's Delight.
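User code that needs the same full-width product can use math/bits, which exposes an equivalent operation; a short example:

    package main

    import (
        "fmt"
        "math/bits"
    )

    func main() {
        // 128-bit product of two 64-bit values, returned as (hi, lo).
        hi, lo := bits.Mul64(1<<63, 6)
        fmt.Println(hi, lo) // 3 0
    }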
func munmap_trampoline() func mutexevent(cycles int64, skip int) func nanotime_trampoline()
needm is called when a cgo callback happens on a
thread without an m (a thread not created by Go).
In this case, needm is expected to find an m to use
and return with m, g initialized correctly.
Since m and g are not set now (likely nil, but see below)
needm is limited in what routines it can call. In particular
it can only call nosplit functions (textflag 7) and cannot
do any scheduling that requires an m.
In order to avoid needing heavy lifting here, we adopt
the following strategy: there is a stack of available m's
that can be stolen. Using compare-and-swap
to pop from the stack has ABA races, so we simulate
a lock by doing an exchange (via Casuintptr) to steal the stack
head and replace the top pointer with MLOCKED (1).
This serves as a simple spin lock that we can use even
without an m. The thread that locks the stack in this way
unlocks the stack by storing a valid stack head pointer.
In order to make sure that there is always an m structure
available to be stolen, we maintain the invariant that there
is always one more than needed. At the beginning of the
program (if cgo is in use) the list is seeded with a single m.
If needm finds that it has taken the last m off the list, its job
is - once it has installed its own m so that it can do things like
allocate memory - to create a spare m and put it on the list.
Each of these extra m's also has a g0 and a curg that are
pressed into service as the scheduling stack and current
goroutine for the duration of the cgo callback.
When the callback is done with the m, it calls dropm to
put the m back on the list.
func net_fastrand() uint32
netpoll checks for ready network connections.
Returns list of goroutines that become runnable.
delay < 0: blocks indefinitely
delay == 0: does not block, just polls
delay > 0: block for up to that many nanoseconds
func netpollarm(pd *pollDesc, mode int)
returns true if IO is ready, or false if it timed out or was closed
waitio - wait only for completed IO, ignore errors
func netpollblockcommit(gp *g, gpp unsafe.Pointer) bool
netpollBreak interrupts a kevent.
func netpollcheckerr(pd *pollDesc, mode int32) int
func netpollclose(fd uintptr) int32
func netpollDeadline(arg interface{}, seq uintptr)
func netpolldeadlineimpl(pd *pollDesc, seq uintptr, read, write bool)
func netpollGenericInit()
func netpollgoready(gp *g, traceskip int)
func netpollinit()
func netpollinited() bool
func netpollIsPollDescriptor(fd uintptr) bool
func netpollopen(fd uintptr, pd *pollDesc) int32
func netpollReadDeadline(arg interface{}, seq uintptr)
netpollready is called by the platform-specific netpoll function.
It declares that the fd associated with pd is ready for I/O.
The toRun argument is used to build a list of goroutines to return
from netpoll. The mode argument is 'r', 'w', or 'r'+'w' to indicate
whether the fd is ready for reading or writing or both.
This may run while the world is stopped, so write barriers are not allowed.
func netpollunblock(pd *pollDesc, mode int32, ioready bool) *g func netpollWriteDeadline(arg interface{}, seq uintptr)
newAllocBits returns a pointer to 8 byte aligned bytes
to be used for this span's alloc bits.
newAllocBits is used to provide newly initialized spans
allocation bits. For spans not being initialized the
mark bits are repurposed as allocation bits when
the span is swept.
newArenaMayUnlock allocates and zeroes a gcBits arena.
The caller must hold gcBitsArena.lock. This may temporarily release it.
newarray allocates an array of n elements of type typ.
newBucket allocates a bucket with the given type and number of stack entries.
Allocate a Defer, usually using per-P pool.
Each defer must be released with freedefer. The defer is not
added to any defer chain yet.
This must not grow the stack because there may be a frame without
stack map information when this is called.
newextram allocates m's and puts them on the extra list.
It is called with a working local m, so that it can do things
like call schedlock and allocate.
Create a new m. It will start off with a call to fn, or else the scheduler.
fn needs to be static and not a heap allocated closure.
May run with m.p==nil, so write barriers are not allowed.
id is optional pre-allocated m ID. Omit by passing -1.
newMarkBits returns a pointer to 8 byte aligned bytes
to be used for a span's mark bits.
implementation of new builtin
compiler (both frontend and SSA backend) knows the signature
of this function
May run with m.p==nil, so write barriers are not allowed.
newosproc0 is a version of newosproc that can be called before the runtime
is initialized.
This function is not safe to use after initialization as it does not pass an M as fnarg.
Create a new g running fn with siz bytes of arguments.
Put it on the queue of g's waiting to run.
The compiler turns a go statement into a call to this.
The stack layout of this call is unusual: it assumes that the
arguments to pass to fn are on the stack sequentially immediately
after &fn. Hence, they are logically part of newproc's argument
frame, even though they don't appear in its signature (and can't
because their types differ between call sites).
This must be nosplit because this stack layout means there are
untyped arguments in newproc's argument frame. Stack copies won't
be able to adjust them and stack splits won't be able to copy them.
Create a new g in state _Grunnable, starting at fn, with narg bytes
of arguments starting at argp. callerpc is the address of the go
statement that created this. The caller is responsible for adding
the new g to the scheduler.
This must run on the system stack because it's the continuation of
newproc, which cannot split the stack.
newProfBuf returns a new profiling buffer with room for
a header of hdrsize words and a buffer of at least bufwords words.
Called from runtime·morestack when more stack is needed.
Allocate larger stack and relocate to new stack.
Stack growth is multiplicative, for constant amortized cost.
g->atomicstatus will be Grunning or Gscanrunning upon entry.
If the scheduler is trying to stop this g, then it will set preemptStop.
This must be nowritebarrierrec because it can be called as part of
stack growth from other nowritebarrierrec functions, but the
compiler doesn't check this.
nextFreeFast returns the next free object if one is quickly available.
Otherwise it returns 0.
nextMarkBitArenaEpoch establishes a new epoch for the arenas
holding the mark bits. The arenas are named relative to the
current GC cycle which is demarcated by the call to finishweep_m.
All current spans have been swept.
During that sweep each span allocated room for its gcmarkBits in
gcBitsArenas.next block. gcBitsArenas.next becomes the gcBitsArenas.current
where the GC will mark objects and after each span is swept these bits
will be used to allocate objects.
gcBitsArenas.current becomes gcBitsArenas.previous where the span's
gcAllocBits live until all the spans have been swept during this GC cycle.
The span's sweep extinguishes all the references to gcBitsArenas.previous
by pointing gcAllocBits into the gcBitsArenas.current.
The gcBitsArenas.previous is released to the gcBitsArenas.free list.
nextSample returns the next sampling point for heap profiling. The goal is
to sample allocations on average every MemProfileRate bytes, but with a
completely random distribution over the allocation timeline; this
corresponds to a Poisson process with parameter MemProfileRate. In Poisson
processes, the distance between two samples follows the exponential
distribution (exp(MemProfileRate)), so the best return value is a random
number taken from an exponential distribution whose mean is MemProfileRate.
nextSampleNoFP is similar to nextSample, but uses older,
simpler code to avoid floating point.
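A user-level sketch of drawing such a sampling interval with the standard library (the runtime uses its own RNG and fixed-point math, but the distribution is the same exponential with mean MemProfileRate):

    package main

    import (
        "fmt"
        "math/rand"
        "runtime"
    )

    // nextSampleSketch returns a random byte distance to the next heap profile
    // sample, exponentially distributed around the configured mean.
    func nextSampleSketch() int64 {
        mean := float64(runtime.MemProfileRate)
        return int64(rand.ExpFloat64() * mean) // ExpFloat64 has mean 1
    }

    func main() {
        fmt.Println(nextSampleSketch())
    }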
func nilfunc() func nilinterequal(p, q unsafe.Pointer) bool func nilinterhash(p unsafe.Pointer, h uintptr) uintptr
nobarrierWakeTime looks at P's timers and returns the time when we
should wake up the netpoller. It returns 0 if there are no timers.
This function is invoked when dropping a P, and must run without
any write barriers.
noescape hides a pointer from escape analysis. noescape is
the identity function but escape analysis doesn't think the
output depends on the input. noescape is inlined and currently
compiles down to zero instructions.
USE CAREFULLY!
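The body is essentially a single XOR; a sketch of the well-known idiom (reproduced here for illustration only, since hiding pointers from escape analysis outside the runtime forfeits its safety guarantees):

    package main

    import "unsafe"

    // noescape hides p from escape analysis. The XOR with 0 costs nothing at
    // run time but severs the data-flow edge the analysis would follow.
    func noescape(p unsafe.Pointer) unsafe.Pointer {
        x := uintptr(p)
        return unsafe.Pointer(x ^ 0)
    }

    func main() {
        v := 42
        _ = noescape(unsafe.Pointer(&v))
    }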
func nonblockingPipe() (r, w int32, errno int32)
This is called when we receive a signal when there is no signal stack.
This can only happen if non-Go code calls sigaltstack to disable the
signal stack.
One-time notifications.
func notetsleep(n *note, ns int64) bool
same as runtime·notetsleep, but called on user g (not g0)
calls only nosplit functions between entersyscallblock/exitsyscall
func notewakeup(n *note)
notifyListAdd adds the caller to a notify list such that it can receive
notifications. The caller must eventually call notifyListWait to wait for
such a notification, passing the returned ticket number.
func notifyListCheck(sz uintptr)
notifyListNotifyAll notifies all entries in the list.
notifyListNotifyOne notifies one entry in the list.
notifyListWait waits for a notification. If one has been sent since
notifyListAdd was called, it returns immediately. Otherwise, it blocks.
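This add/wait/notify protocol is what backs sync.Cond; the same sequence expressed at user level looks like:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var mu sync.Mutex
        cond := sync.NewCond(&mu)
        done := false

        go func() {
            mu.Lock()
            done = true
            mu.Unlock()
            cond.Signal() // notifyListNotifyOne underneath
        }()

        mu.Lock()
        for !done {
            cond.Wait() // notifyListAdd + notifyListWait underneath
        }
        mu.Unlock()
        fmt.Println("signalled")
    }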
offAddrToLevelIndex converts an address in the offset address space
to the index into summary[level] containing addr.
oneNewExtraM allocates an m and puts it on the extra list.
func open_trampoline()
os_beforeExit is called from os.Exit(0).
func os_fastrand() uint32 func os_runtime_args() []string func os_sigpipe()
BSD interface for threading.
func osPreemptExtEnter(mp *m) func osPreemptExtExit(mp *m)
osRelax is called by the scheduler when transitioning to and from
all Ps being idle.
osStackAlloc performs OS-specific initialization before s is used
as stack memory.
osStackFree undoes the effect of osStackAlloc before s is returned
to the heap.
func osyield()
overLoadFactor reports whether count items placed in 1<<B buckets is over loadFactor.
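A hedged sketch of the check, assuming the usual constants of 8 entries per bucket and a load factor of 6.5 (written as 13/2 to stay in integer arithmetic); the real constants live in the map implementation:

    package main

    import "fmt"

    const (
        bucketCnt     = 8  // entries per bucket (assumed)
        loadFactorNum = 13 // load factor 6.5 == 13/2 (assumed)
        loadFactorDen = 2
    )

    // overLoadFactorSketch reports whether count items in 1<<B buckets exceed
    // the load factor, i.e. whether the map should grow.
    func overLoadFactorSketch(count int, B uint8) bool {
        return count > bucketCnt &&
            uint64(count) > loadFactorNum*(uint64(1)<<B)/loadFactorDen
    }

    func main() {
        fmt.Println(overLoadFactorSketch(14, 1)) // true: 14 > 6.5*2
        fmt.Println(overLoadFactorSketch(14, 2)) // false: 14 <= 6.5*4
    }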
packPallocSum takes a start, max, and end value and produces a pallocSum.
pageIndexOf returns the arena, page index, and page mask for pointer p.
The caller must ensure p is in the heap.
Check to make sure we can really generate a panic. If the panic
was generated from the runtime, or from inside malloc, then convert
to a throw of msg.
pc should be the program counter of the compiler-generated code that
triggered this panic.
Same as above, but calling from the runtime is allowed.
Using this function is necessary for any panic that may be
generated by runtime.sigpanic, since those are always called by the
runtime.
func panicdivide()
panicdottypeE is called when doing an e.(T) conversion and the conversion fails.
have = the dynamic type we have.
want = the static type we're trying to convert to.
iface = the static type we're converting from.
panicdottypeI is called when doing an i.(T) conversion and the conversion fails.
Same args as panicdottypeE, but "have" is the dynamic itab we have.
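From ordinary code these panics come from a failed single-result type assertion; the two-result form avoids them. For example:

    package main

    import "fmt"

    func main() {
        var e interface{} = "hello"

        if n, ok := e.(int); ok { // comma-ok form: no panic on mismatch
            fmt.Println(n)
        } else {
            fmt.Println("not an int")
        }

        _ = e.(int) // single-result form: panics (via the panicdottype path)
    }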
func panicfloat()
Implemented in assembly, as they take arguments in registers.
Declared here to mark them as ABIInternal.
func panicIndexU(x uint, y int)
func panicmakeslicecap()
func panicmakeslicelen()
func panicmem()
func panicmemAddr(addr uintptr)
panicnildottype is called when doing a i.(T) conversion and the interface i is nil.
want = the static type we're trying to convert to.
func panicoverflow()
func panicshift()
func panicSlice3Acap(x int, y int)
func panicSlice3AcapU(x uint, y int)
func panicSlice3Alen(x int, y int)
func panicSlice3AlenU(x uint, y int)
func panicSlice3B(x int, y int)
func panicSlice3BU(x uint, y int)
func panicSlice3C(x int, y int)
func panicSlice3CU(x uint, y int)
func panicSliceAcap(x int, y int)
func panicSliceAcapU(x uint, y int)
func panicSliceAlen(x int, y int)
func panicSliceAlenU(x uint, y int)
func panicSliceB(x int, y int)
func panicSliceBU(x uint, y int)
panicwrap generates a panic for a call to a wrapped value method
with a nil pointer receiver.
It is called from the generated wrapper code.
park continuation on g0.
func parkunlock_c(gp *g, lock unsafe.Pointer) bool
func parsedebugvars()
func pcdatastart(f funcInfo, table uint32) uint32
func pcdatavalue(f funcInfo, table uint32, targetpc uintptr, cache *pcvalueCache) int32
func pcdatavalue1(f funcInfo, table uint32, targetpc uintptr, cache *pcvalueCache, strict bool) int32
Like pcdatavalue, but also return the start PC of this PCData value.
It doesn't take a cache.
Returns the PCData value, and the PC where this value starts.
TODO: the start PC is returned only when cache is nil.
pcvalueCacheKey returns the outermost index in a pcvalueCache to use for targetpc.
It must be very cheap to calculate.
For now, align to sys.PtrSize and reduce mod the number of entries.
In practice, this appears to be fairly randomly and evenly distributed.
Wrapper around sysAlloc that can allocate small chunks.
There is no associated free operation.
Intended for things like function/type/debug-related persistent data.
If align is 0, uses default align (currently 8).
The returned memory will be zeroed.
Consider marking persistentalloc'd types go:notinheap.
Must run on system stack because stack growth can (re)invoke it.
See issue 9174.
pidleget tries to get a p from the _Pidle list, acquiring ownership.
sched.lock must be held.
May run during STW, so write barriers are not allowed.
pidleput puts p on the _Pidle list.
This releases ownership of p. Once sched.lock is released it is no longer
safe to use p.
sched.lock must be held.
May run during STW, so write barriers are not allowed.
func pipe_trampoline() func plugin_lastmoduleinit() (path string, syms map[string]interface{}, errstr string) func pluginftabverify(md *moduledata)
poll_runtime_isPollServerDescriptor reports whether fd is a
descriptor being used by netpoll.
func poll_runtime_pollClose(pd *pollDesc) func poll_runtime_pollOpen(fd uintptr) (*pollDesc, int)
poll_runtime_pollReset, which is internal/poll.runtime_pollReset,
prepares a descriptor for polling in mode, which is 'r' or 'w'.
This returns an error code; the codes are defined above.
func poll_runtime_pollServerInit() func poll_runtime_pollSetDeadline(pd *pollDesc, d int64, mode int) func poll_runtime_pollUnblock(pd *pollDesc)
poll_runtime_pollWait, which is internal/poll.runtime_pollWait,
waits for a descriptor to be ready for reading or writing,
according to mode, which is 'r' or 'w'.
This returns an error code; the codes are defined above.
func poll_runtime_pollWaitCanceled(pd *pollDesc, mode int) func poll_runtime_Semacquire(addr *uint32) func poll_runtime_Semrelease(addr *uint32)
pollFractionalWorkerExit reports whether a fractional mark worker
should self-preempt. It assumes it is called from the fractional
worker.
pollWork reports whether there is non-background work this P could
be doing. This is a fairly lightweight check to be used for
background work loops, like idle GC. It checks a subset of the
conditions checked by the actual scheduler.
Tell all goroutines that they have been preempted and they should stop.
This function is purely best-effort. It can fail to inform a goroutine if a
processor just started running it.
No locks need to be held.
Returns true if preemption request was issued to at least one goroutine.
preemptM sends a preemption request to mp. This request may be
handled asynchronously and may be coalesced with other requests to
the M. When the request is received, if the running G or P are
marked for preemption and the goroutine is at an asynchronous
safe-point, it will preempt the goroutine. It always atomically
increments mp.preemptGen after handling a preemption request.
Tell the goroutine running on processor P to stop.
This function is purely best-effort. It can incorrectly fail to inform the
goroutine. It can inform the wrong goroutine. Even if it informs the
correct goroutine, that goroutine might ignore the request if it is
simultaneously executing newstack.
No lock needs to be held.
Returns true if preemption request was issued.
The actual preemption will happen at some point in the future
and will be indicated by the gp->status no longer being
Grunning
preemptPark parks gp and puts it in _Gpreempted.
prepareFreeWorkbufs moves busy workbuf spans to free list so they
can be freed to the heap. This must only be called when all
workbufs are on the empty list.
func prepGoExitFrame(sp uintptr)
Call all Error and String methods before freezing the world.
Used when crashing with panicking.
printAncestorTraceback prints the traceback of the given ancestor.
TODO: Unify this with gentraceback and CallersFrames.
printAncestorTracebackFuncInfo prints the given function info at a given pc
within an ancestor traceback. The precision of this info is reduced
because it only has access to the pcs recorded when the caller
goroutine was created.
printany prints an argument passed to panic.
If panic is called with a value that has a String or Error method,
it has already been converted into a string by preprintpanics.
func printanycustomtype(i interface{})
printCgoTraceback prints a traceback of callers.
func printcomplex(c complex128) func printcreatedby(gp *g) func printcreatedby1(f funcInfo, pc uintptr)
printDebugLog prints the debug log.
printDebugLogPC prints a single symbolized PC. If returnPC is true,
pc is a return PC that must first be converted to a call PC.
func printeface(e eface)
func printfloat(v float64)
func printiface(i iface)
func printlock()
func printnl()
printOneCgoTraceback prints the traceback of a single cgo caller.
This can print more than one line because of inlining.
Returns the number of frames printed.
Print all currently active panics. Used when crashing.
Should only be called after preprintpanics.
func printpointer(p unsafe.Pointer)
printScavTrace prints a scavenge trace line to standard error.
released should be the amount of memory released since the last time this
was called, and forced indicates whether the scavenge was forced by the
application.
func printslice(s []byte)
func printsp()
func printstring(s string)
func printuintptr(p uintptr)
func printunlock()
Change number of processors.
sched.lock must be held, and the world must be stopped.
gcworkbufs must not be being modified by either the GC or the write barrier
code, so the GC must not be running if the number of Ps actually changes.
Returns list of Ps with local work, they need to be scheduled by the caller.
func procUnpin() func profilealloc(mp *m, x unsafe.Pointer, size uintptr)
progToPointerMask returns the 1-bit pointer mask output by the GC program prog.
size the size of the region described by prog, in bytes.
The resulting bitvector will have no more than size/sys.PtrSize bits.
func pthread_attr_getstacksize(attr *pthreadattr, size *uintptr) int32
func pthread_attr_init(attr *pthreadattr) int32
func pthread_attr_init_trampoline()
func pthread_attr_setdetachstate(attr *pthreadattr, state int) int32
func pthread_cond_init(c *pthreadcond, attr *pthreadcondattr) int32
func pthread_cond_init_trampoline()
func pthread_cond_signal(c *pthreadcond) int32
func pthread_cond_timedwait_relative_np(c *pthreadcond, m *pthreadmutex, t *timespec) int32
func pthread_cond_wait(c *pthreadcond, m *pthreadmutex) int32
func pthread_cond_wait_trampoline()
func pthread_create(attr *pthreadattr, start uintptr, arg unsafe.Pointer) int32
func pthread_create_trampoline()
func pthread_kill(t pthread, sig uint32)
func pthread_kill_trampoline()
func pthread_mutex_init(m *pthreadmutex, attr *pthreadmutexattr) int32
func pthread_mutex_lock(m *pthreadmutex) int32
func pthread_mutex_unlock(m *pthreadmutex) int32
func pthread_self() (t pthread)
func pthread_self_trampoline()
publicationBarrier performs a store/store barrier (a "publication"
or "export" barrier). Some form of synchronization is required
between initializing an object and making that object accessible to
another processor. Without synchronization, the initialization
writes and the "publication" write may be reordered, allowing the
other processor to follow the pointer and observe an uninitialized
object. In general, higher-level synchronization should be used,
such as locking or an atomic pointer write. publicationBarrier is
for when those aren't an option, such as in the implementation of
the memory manager.
There's no corresponding barrier for the read side because the read
side naturally has a data dependency order. All architectures that
Go supports or seems likely to ever support automatically enforce
data dependency ordering.
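Outside the runtime, the same initialize-then-publish pattern is normally expressed with the sync/atomic package, whose operations provide the ordering the barrier supplies here; a sketch:

    package main

    import (
        "fmt"
        "sync/atomic"
        "unsafe"
    )

    type config struct{ n int }

    var current unsafe.Pointer // *config, published atomically

    func publish(c *config) {
        // The writes that initialize *c are ordered before the atomic store,
        // so a reader that observes the pointer sees a fully built object.
        atomic.StorePointer(&current, unsafe.Pointer(c))
    }

    func main() {
        publish(&config{n: 1})
        fmt.Println((*config)(atomic.LoadPointer(&current)).n)
    }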
func putCachedDlogger(l *dlogger) bool
putempty puts a workbuf onto the work.empty list.
Upon entry this goroutine owns b. The lfstack.push relinquishes ownership.
putfull puts the workbuf on the work.full list for the GC.
putfull accepts partially full buffers so the GC can avoid competing
with the mutators for ownership of partially full buffers.
func raceacquire(addr unsafe.Pointer)
func raceacquirectx(racectx uintptr, addr unsafe.Pointer)
func raceacquireg(gp *g, addr unsafe.Pointer)
func racectxend(racectx uintptr)
func racefingo()
func racefini()
func racegoend()
func racegostart(pc uintptr) uintptr
func racemalloc(p unsafe.Pointer, sz uintptr)
func racemapshadow(addr unsafe.Pointer, size uintptr)
Notify the race detector of a send or receive involving buffer entry idx
and a channel c or its communicating partner sg.
This function handles the special case of c.elemsize==0.
func raceproccreate() uintptr
func raceprocdestroy(ctx uintptr)
func raceReadObjectPC(t *_type, addr unsafe.Pointer, callerpc, pc uintptr)
func racereadpc(addr unsafe.Pointer, callerpc, pc uintptr)
func racereadrangepc(addr unsafe.Pointer, sz, callerpc, pc uintptr)
func racerelease(addr unsafe.Pointer)
func racereleaseacquire(addr unsafe.Pointer)
func racereleaseacquireg(gp *g, addr unsafe.Pointer)
func racereleaseg(gp *g, addr unsafe.Pointer)
func racereleasemerge(addr unsafe.Pointer)
func racereleasemergeg(gp *g, addr unsafe.Pointer)
func raceWriteObjectPC(t *_type, addr unsafe.Pointer, callerpc, pc uintptr)
func racewritepc(addr unsafe.Pointer, callerpc, pc uintptr)
func racewriterangepc(addr unsafe.Pointer, sz, callerpc, pc uintptr)
func raise_trampoline()
raisebadsignal is called when a signal is received on a non-Go
thread, and the Go program does not want to handle it (that is, the
program has not called os/signal.Notify for the signal).
func raiseproc_trampoline()
rawbyteslice allocates a new byte slice. The byte slice is not zeroed.
rawruneslice allocates a new rune slice. The rune slice is not zeroed.
rawstring allocates storage for a new string. The returned
string and byte slice both refer to the same storage.
The storage is not zeroed. Callers should use
b to set the string contents and then drop b.
func rawstringtmp(buf *tmpBuf, l int) (s string, b []byte) func read_trampoline() func readGCStats(pauses *[]uint64)
readGCStats_m must be called on the system stack because it acquires the heap
lock. See mheap for details.
All reads and writes of g's status go through readgstatus, casgstatus,
castogscanstatus, and casfrom_Gscanstatus.
func readmemstats_m(stats *MemStats)
readMetrics is the implementation of runtime/metrics.Read.
Note: These routines perform the read with a native endianness.
func readUnaligned64(p unsafe.Pointer) uint64
readvarint reads a varint from p.
readvarintUnsafe reads the uint32 in varint format starting at fd, and returns the
uint32 and a pointer to the byte following the varint.
There is a similar function runtime.readvarint, which takes a slice of bytes,
rather than an unsafe pointer. These functions are duplicated, because one of
the two use cases for the functions would get slower if the functions were
combined.
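The encoding is essentially the same little-endian base-128 varint scheme exposed by encoding/binary; a user-level example:

    package main

    import (
        "encoding/binary"
        "fmt"
    )

    func main() {
        buf := make([]byte, binary.MaxVarintLen32)
        n := binary.PutUvarint(buf, 300) // encode 300 as a varint

        v, read := binary.Uvarint(buf[:n]) // decode it back
        fmt.Println(v, read)               // 300 2
    }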
Mark gp ready to run.
readyForScavenger signals sysmon to wake the scavenger because
there may be new work to do.
There may be a significant delay between when this function runs
and when the scavenger is kicked awake, but it may be safely invoked
in contexts where wakeScavenger is unsafe to call directly.
func readyWithTime(s *sudog, traceskip int)
Write b's data to r.
recordForPanic maintains a circular buffer of messages written by the
runtime leading up to a process crash, allowing the messages to be
extracted from a core dump.
The text written during a process crash (following "panic" or "fatal
error") is not saved, since the goroutine stacks will generally be readable
from the runtime data structures in the core file.
recordspan adds a newly allocated span to h.allspans.
This only happens the first time a span is allocated from
mheap.spanalloc (it is not called when a span is reused).
Write barriers are disallowed here because it can be called from
gcWork when allocating new workbufs. However, because it's an
indirect call from the fixalloc initializer, the compiler can't see
this.
The heap lock must be held.
Unwind the stack after a deferred function calls recover
after a panic. Then arrange to continue running as though
the caller of the deferred function returned normally.
recv processes a receive operation on a full channel c.
There are 2 parts:
1) The value sent by the sender sg is put into the channel
and the sender is woken up to go on its merry way.
2) The value received by the receiver (the current G) is
written to ep.
For synchronous channels, both values are the same.
For asynchronous channels, the receiver gets its data from
the channel buffer and the sender's data is put in the
channel buffer.
Channel c must be full and locked. recv unlocks c with unlockf.
sg must already be dequeued from c.
A non-nil ep must point to the heap or the caller's stack.
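The buffered case is observable from ordinary channel code: receiving from a full channel hands the buffered value to the receiver and lets a blocked sender's value slide into the buffer. A small demonstration:

    package main

    import "fmt"

    func main() {
        c := make(chan int, 1)
        c <- 1 // fill the buffer

        done := make(chan struct{})
        go func() {
            c <- 2 // blocks until the receive below frees a slot
            close(done)
        }()

        fmt.Println(<-c) // 1: taken from the buffer; the blocked send completes
        <-done
        fmt.Println(<-c) // 2: the value the sender placed into the buffer
    }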
func recvDirect(t *_type, sg *sudog, dst unsafe.Pointer)
The goroutine g is about to enter a system call.
Record that it's not using the cpu anymore.
This is called only from the go syscall library and cgocall,
not from the low-level system calls used by the runtime.
Entersyscall cannot split the stack: the gosave must
make g->sched refer to the caller's stack segment, because
entersyscall is going to return immediately after.
Nothing entersyscall calls can split the stack either.
We cannot safely move the stack during an active call to syscall,
because we do not know which of the uintptr arguments are
really pointers (back into the stack).
In practice, this means that we make the fast path run through
entersyscall doing no-split things, and the slow path has to use systemstack
to run bigger things on the system stack.
reentersyscall is the entry point used by cgo callbacks, where explicitly
saved SP and PC are restored. This is needed when exitsyscall will be called
from a function further up in the call stack than the parent, as g->syscallsp
must always point to a valid stack frame. entersyscall below is the normal
entry point for syscalls, which obtains the SP and PC from the caller.
Syscall tracing:
At the start of a syscall we emit traceGoSysCall to capture the stack trace.
If the syscall does not block, that is it, we do not emit any other events.
If the syscall blocks (that is, P is retaken), retaker emits traceGoSysBlock;
when syscall returns we emit traceGoSysExit and when the goroutine starts running
(potentially instantly, if exitsyscallfast returns true) we emit traceGoStart.
To ensure that traceGoSysExit is emitted strictly after traceGoSysBlock,
we remember current value of syscalltick in m (_g_.m.syscalltick = _g_.m.p.ptr().syscalltick),
whoever emits traceGoSysBlock increments p.syscalltick afterwards;
and we wait for the increment before emitting traceGoSysExit.
Note that the increment is done even if tracing is not enabled,
because tracing can be enabled in the middle of syscall. We don't want the wait to hang.
reflect_addReflectOff adds a pointer to the reflection offset lookup map.
func reflect_chancap(c *hchan) int func reflect_chanclose(c *hchan) func reflect_chanlen(c *hchan) int
gcbits returns the GC type info for x, for testing.
The result is the bitmap entries (0 or 1), one entry per byte.
func reflect_ifaceE2I(inter *interfacetype, e eface, dst *iface)
func reflect_makechan(t *chantype, size int) *hchan
func reflect_makemap(t *maptype, cap int) *hmap
func reflect_mapdelete(t *maptype, h *hmap, key unsafe.Pointer)
func reflect_mapiterelem(it *hiter) unsafe.Pointer
func reflect_mapiterinit(t *maptype, h *hmap) *hiter
func reflect_mapiterkey(it *hiter) unsafe.Pointer
func reflect_mapiternext(it *hiter)
func reflect_maplen(h *hmap) int
func reflect_memclrNoHeapPointers(ptr unsafe.Pointer, n uintptr)
func reflect_memmove(to, from unsafe.Pointer, n uintptr)
reflect_resolveNameOff resolves a name offset from a base pointer.
reflect_resolveTextOff resolves a function pointer offset from a base type.
reflect_resolveTypeOff resolves an *rtype offset from a base type.
func reflect_rselect(cases []runtimeSelect) (int, bool)
func reflect_typedmemclr(typ *_type, ptr unsafe.Pointer)
func reflect_typedmemclrpartial(typ *_type, ptr unsafe.Pointer, off, size uintptr)
func reflect_typedmemmove(typ *_type, dst, src unsafe.Pointer)
typedmemmovepartial is like typedmemmove but assumes that
dst and src point off bytes into the value and only copies size bytes.
off must be a multiple of sys.PtrSize.
func reflect_typedslicecopy(elemType *_type, dst, src slice) int
func reflect_typelinks() ([]unsafe.Pointer, [][]int32)
func reflect_unsafe_New(typ *_type) unsafe.Pointer
func reflect_unsafe_NewArray(typ *_type, n int) unsafe.Pointer
reflectcall calls fn with a copy of the n argument bytes pointed at by arg.
After fn returns, reflectcall copies n-retoffset result bytes
back into arg+retoffset before returning. If copying result bytes back,
the caller should pass the argument frame type as argtype, so that
call can execute appropriate write barriers during the copy.
Package reflect always passes a frame type. In package runtime,
Windows callbacks are the only use of this that copies results
back, and those cannot have pointers in their results, so runtime
passes nil for the frame type.
Package reflect accesses this symbol through a linkname.
reflectcallmove is invoked by reflectcall to copy the return values
out of the stack and into the heap, invoking the necessary write
barriers. dst, src, and size describe the return value area to
copy. typ describes the entire frame (not just the return values).
typ may be nil, which indicates write barriers are not needed.
It must be nosplit and must only call nosplit functions because the
stack map of reflectcall is wrong.
reflectcallSave calls reflectcall after saving the caller's pc and sp in the
panic record. This allows the runtime to return to the Goexit defer processing
loop, in the unusual case where the Goexit may be bypassed by a successful
recover.
func reflectlite_chanlen(c *hchan) int func reflectlite_ifaceE2I(inter *interfacetype, e eface, dst *iface) func reflectlite_maplen(h *hmap) int
reflectlite_resolveNameOff resolves a name offset from a base pointer.
reflectlite_resolveTypeOff resolves an *rtype offset from a base type.
func reflectlite_typedmemmove(typ *_type, dst, src unsafe.Pointer)
func reflectlite_unsafe_New(typ *_type) unsafe.Pointer
func reflectOffsLock()
func reflectOffsUnlock()
This function may be called in nosplit context and thus must be nosplit.
Disassociate p and the current m.
func releaseSudog(s *sudog)
Removes the finalizer (if any) from the object p.
Removes the Special record of the given kind for the object p.
Returns the record if the record existed, nil otherwise.
The caller must FixAlloc_Free the result.
resetForSleep is called after the goroutine is parked for timeSleep.
We can't call resettimer in timeSleep itself because if this is a short
sleep and there are many goroutines then the P can wind up running the
timer function, goroutineReady, before the goroutine has been parked.
func resetspinning()
resettimer resets the time when a timer should fire.
If used for an inactive timer, the timer will become active.
This should be called instead of addtimer if the timer value has been,
or may have been, used previously.
Reports whether the timer was modified before it was run.
resetTimer resets an inactive timer, adding it to the heap.
Reports whether the timer was modified before it was run.
func resolveNameOff(ptrInModule unsafe.Pointer, off nameOff) name func resolveTypeOff(ptrInModule unsafe.Pointer, off typeOff) *_type
restoreGsignalStack restores the gsignal stack to the value it had
before entering the signal handler.
resumeG undoes the effects of suspendG, allowing the suspended
goroutine to continue from its current safe-point.
Retpolines, used by -spectre=ret flag in cmd/asm, cmd/compile.
func retpolineBP()
func retpolineBX()
func retpolineCX()
func retpolineDI()
func retpolineDX()
func retpolineR10()
func retpolineR11()
func retpolineR12()
func retpolineR13()
func retpolineR14()
func retpolineR15()
func retpolineR8()
func retpolineR9()
func retpolineSI()
return0 is a stub used to return 0 from deferproc.
It is called at the very end of deferproc to signal
the calling Go function that it should not jump
to deferreturn.
in asm_*.s
Note: in order to get the compiler to issue rotl instructions, we
need to constant fold the shift amount by hand.
TODO: convince the compiler to issue rotl instructions after inlining.
round x up to a power of 2.
Returns size of the memory block that mallocgc will allocate if you ask for the size.
func rt0_go()
This is the goroutine that runs all of the finalizers
runGCProg executes the GC program prog, and then trailer if non-nil,
writing to dst with entries of the given size.
If size == 1, dst is a 1-bit pointer mask laid out moving forward from dst.
If size == 2, dst is the 2-bit heap bitmap, and writes move backward
starting at dst (because the heap bitmap does). In this case, the caller guarantees
that only whole bytes in dst need to be written.
runGCProg returns the number of 1- or 2-bit entries written to memory.
runOneTimer runs a single timer.
The caller must have locked the timers for pp.
This will temporarily unlock the timers while running the timer function.
runOpenDeferFrame runs the active open-coded defers in the frame specified by
d. It normally processes all active defers in the frame, but stops immediately
if a defer does a successful recover. It returns true if there are no
remaining defers to run in the frame.
runqempty reports whether _p_ has no Gs on its local run queue.
It never returns true spuriously.
Get g from local runnable queue.
If inheritTime is true, gp should inherit the remaining time in the
current time slice. Otherwise, it should start a new time slice.
Executed only by the owner P.
Grabs a batch of goroutines from _p_'s runnable queue into batch.
Batch is a ring buffer starting at batchHead.
Returns number of grabbed goroutines.
Can be executed by any P.
runqput tries to put g on the local runnable queue.
If next is false, runqput adds g to the tail of the runnable queue.
If next is true, runqput puts g in the _p_.runnext slot.
If the run queue is full, runqput puts g on the global queue.
Executed only by the owner P.
runqputbatch tries to put all the G's on q on the local runnable queue.
If the queue is full, they are put on the global queue; in that case
this will temporarily acquire the scheduler lock.
Executed only by the owner P.
Put g and a batch of work from local runnable queue on global queue.
Executed only by the owner P.
Steal half of elements from local runnable queue of p2
and put onto local runnable queue of p.
Returns one of the stolen elements (or nil if failed).
runSafePointFn runs the safe point function, if any, for this P.
This should be called like
if getg().m.p.runSafePointFn != 0 {
runSafePointFn()
}
runSafePointFn must be checked on any transition in to _Pidle or
_Psyscall to avoid a race where forEachP sees that the P is running
just before the P goes into _Pidle/_Psyscall and neither forEachP
nor the P run the safe-point function.
func runtime_debug_freeOSMemory() func runtime_debug_WriteHeapDump(fd uintptr)
runtime_expandFinalInlineFrame expands the final pc in stk to include all
"callers" if pc is inline.
func runtime_getProfLabel() unsafe.Pointer func runtime_goroutineProfileWithLabels(p []StackRecord, labels []unsafe.Pointer) (n int, ok bool)
readProfile, provided to runtime/pprof, returns the next chunk of
binary CPU profiling stack trace data, blocking until data is available.
If profiling is turned off and all the profile data accumulated while it was
on has been returned, readProfile returns eof=true.
The caller must save the returned data and tags before calling readProfile again.
func runtime_setProfLabel(labels unsafe.Pointer)
runtimer examines the first timer in timers. If it is ready based on now,
it runs the timer and removes or updates it.
Returns 0 if it ran a timer, -1 if there are no more timers, or the time
when the first timer should run.
The caller must have locked the timers for pp.
If a timer is run, this will temporarily unlock the timers.
save updates getg().sched to refer to pc and sp so that a following
gogo will restore pc and sp.
save must not have write barriers because invoking a write barrier
can clobber getg().sched.
saveAncestors copies previous ancestors of the given caller g and
includes info for the current caller into a new set of tracebacks for
a g being created.
func saveblockevent(cycles int64, skip int, which bucketType) func saveg(pc, sp uintptr, gp *g, r *StackRecord)
sbrk0 returns the current process brk, or 0 if not implemented.
scanblock scans b as scanobject would, but using an explicit
pointer bitmap instead of the heap bitmap.
This is used to scan non-heap roots, so it does not update
gcw.bytesMarked or gcw.scanWork.
If stk != nil, possible stack pointers are also reported to stk.putPtr.
scanConservative scans block [b, b+n) conservatively, treating any
pointer-like value in the block as a pointer.
If ptrmask != nil, only words that are marked in ptrmask are
considered as potential pointers.
If state != nil, it's assumed that [b, b+n) is a block in the stack
and may contain pointers to stack objects.
Scan a stack frame: local variables and function arguments/results.
scanobject scans the object starting at b, adding pointers to gcw.
b must point to the beginning of a heap object or an oblet.
scanobject consults the GC bitmap for the pointer mask and the
spans for the size of the object.
scanstack scans gp's stack, greying all pointers found on the stack.
scanstack will also shrink the stack if it is safe to do so. If it
is not, it schedules a stack shrink for the next synchronous safe
point.
scanstack is marked go:systemstack because it must not be preempted
while using a workbuf.
scavengeSleep attempts to put the scavenger to sleep for ns.
Note that this function should only be called by the scavenger.
The scavenger may be woken up earlier by a pacing change, and it may not go
to sleep at all if there's a pending pacing change.
Returns the amount of time actually slept.
schedEnabled reports whether gp should be scheduled. It returns
false if scheduling of gp is disabled.
sched.lock must be held.
schedEnableUser enables or disables the scheduling of user
goroutines.
This does not stop already running user goroutines, so the caller
should first stop the world when disabling user goroutines.
The bootstrap sequence is:
call osinit
call schedinit
make & queue new G
call runtime·mstart
The new G calls runtime·main.
func schedtrace(detailed bool)
One round of scheduler: find a runnable goroutine and execute it.
Never returns.
selectgo implements the select statement.
cas0 points to an array of type [ncases]scase, and order0 points to
an array of type [2*ncases]uint16 where ncases must be <= 65536.
Both reside on the goroutine's stack (regardless of any escaping in
selectgo).
For race detector builds, pc0 points to an array of type
[ncases]uintptr (also on the stack); for other builds, it's set to
nil.
selectgo returns the index of the chosen scase, which matches the
ordinal position of its respective select{recv,send,default} call.
Also, if the chosen scase was a receive operation, it reports whether
a value was received.
compiler implements
select {
case v = <-c:
... foo
default:
... bar
}
as
if selectnbrecv(&v, c) {
... foo
} else {
... bar
}
compiler implements
select {
case v, ok = <-c:
... foo
default:
... bar
}
as
if c != nil && selectnbrecv2(&v, &ok, c) {
... foo
} else {
... bar
}
compiler implements
select {
case c <- v:
... foo
default:
... bar
}
as
if selectnbsend(c, v) {
... foo
} else {
... bar
}
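For reference, the user-level source that the compiler lowers into the selectnbsend/selectnbrecv forms above is an ordinary select with a default case. A minimal runnable sketch:
	package main

	import "fmt"

	func main() {
		c := make(chan int, 1)

		// Non-blocking send: lowered to a selectnbsend call.
		select {
		case c <- 42:
			fmt.Println("sent")
		default:
			fmt.Println("channel full")
		}

		// Non-blocking receive with ok: lowered to the selectnbrecv2 form.
		select {
		case v, ok := <-c:
			fmt.Println("received", v, ok)
		default:
			fmt.Println("nothing ready")
		}
	}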
func selectsetpc(pc *uintptr) func selparkcommit(gp *g, _ unsafe.Pointer) bool
Called from runtime.
func semacquire1(addr *uint32, lifo bool, profile semaProfileFlags, skipframes int) func semacreate(mp *m) func semawakeup(mp *m) func semrelease(addr *uint32) func semrelease1(addr *uint32, handoff bool, skipframes int)
send processes a send operation on an empty channel c.
The value ep sent by the sender is copied to the receiver sg.
The receiver is then woken up to go on its merry way.
Channel c must be empty and locked. send unlocks c with unlockf.
sg must already be dequeued from c.
ep must be non-nil and point to the heap or the caller's stack.
func sendDirect(t *_type, sg *sudog, src unsafe.Pointer)
setCheckmark throws if marking object is a checkmarks violation,
and otherwise sets obj's checkmark. It returns true if obj was
already checkmarked.
setcpuprofilerate sets the CPU profiling rate to hz times per second.
If hz <= 0, setcpuprofilerate turns off CPU profiling.
func setGCPercent(in int32) (out int32) func setGCPhase(x uint32)
setGNoWB performs *gp = new without a write barrier.
For times when it's impractical to use a guintptr.
setGsignalStack sets the gsignal stack of the current m to an
alternate signal stack returned from the sigaltstack system call.
It saves the old values in *old for use by restoreGsignalStack.
This is used when handling a signal if non-Go code has set the
alternate signal stack.
func setitimer_trampoline() func setMaxStack(in int) (out int) func setMaxThreads(in int) (out int)
setMNoWB performs *mp = new without a write barrier.
For times when it's impractical to use an muintptr.
func setNonblock(fd int32) func setPanicOnFault(new bool) (old bool)
setProcessCPUProfiler is called when the profiling timer changes.
It is called with prof.lock held. hz is the new timer, and is 0 if
profiling is being disabled. Enable or disable the signal as
required for -buildmode=c-archive.
Set the heap profile bucket associated with addr to b.
setSignalstackSP sets the ss_sp field of a stackt.
setsigsegv is used on darwin/arm64 to fake a segmentation fault.
This is exported via linkname to assembly in runtime/cgo.
func setsigstack(i uint32)
Reports whether a function will set the SP
to an absolute value. Important that
we don't traceback when these are at the bottom
of the stack since we can't be sure that we will
find the caller.
If the function is not on the bottom of the stack
we assume that it will have set it up so that traceback will be consistent,
either by being a traceback terminating function
or putting one on the stack at the right offset.
setThreadCPUProfiler makes any thread-specific changes required to
implement profiling at a rate of hz.
No changes required on Unix systems.
Called from assembly only; declared for go vet.
func setTraceback(level string)
Shade the object if it isn't already.
The object is not nil and known to be in the heap.
Preemption must be disabled.
shouldPushSigpanic reports whether pc should be used as sigpanic's
return PC (pushing a frame for the call). Otherwise, it should be
left alone so that LR is used as sigpanic's return PC, effectively
replacing the top-most frame with sigpanic. This is used by
preparePanic.
showframe reports whether the frame with the given characteristics should
be printed during a traceback.
showfuncinfo reports whether a function with the given characteristics should
be printed during a traceback.
Maybe shrink the stack being used by gp.
gp must be stopped and we must own its stack. It may be in
_Grunning, but only if this is our own user G.
func siftdownTimer(t []*timer, i int) func siftupTimer(t []*timer, i int) func sigaction(sig uint32, new *usigactiont, old *usigactiont) func sigaction_trampoline() func sigaltstack(new *stackt, old *stackt) func sigaltstack_trampoline()
sigblock blocks signals in the current thread's signal mask.
This is used to block signals while setting up and tearing down g
when a non-Go thread calls a Go function. When a thread is exiting
we use the sigsetAllExiting value, otherwise the OS specific
definition of sigset_all is used.
This is nosplit and nowritebarrierrec because it is called by needm
which may be called on a non-Go thread with no g available.
sigdisable disables the Go signal handler for the signal sig.
It is only called while holding the os/signal.handlers lock,
via os/signal.disableSignal and signal_disable.
sigenable enables the Go signal handler to catch the signal sig.
It is only called while holding the os/signal.handlers lock,
via os/signal.enableSignal and signal_enable.
sigFetchG fetches the value of G safely when running in a signal handler.
On some architectures, the g value may be clobbered when running in a VDSO.
See issue #32912.
Determines if the signal should be handled by Go and if not, forwards the
signal to the handler that was installed before Go's. Returns whether the
signal was forwarded.
This is called by the signal handler, and the world may be stopped.
sighandler is invoked when a signal occurs. The global g will be
set to a gsignal goroutine and we will be running on the alternate
signal stack. The parameter g will be the value of the global g
when the signal occurred. The sig, info, and ctxt parameters are
from the system signal handler: they are the parameters passed when
the SA is passed to the sigaction system call.
The garbage collector may have stopped the world, so write barriers
are not allowed.
sigignore ignores the signal sig.
It is only called while holding the os/signal.handlers lock,
via os/signal.ignoreSignal and signal_ignore.
sigInitIgnored marks the signal as already ignored. This is called at
program start by initsig. In a shared library initsig is called by
libpreinit, so the runtime may not be initialized yet.
func sigInstallGoHandler(sig uint32) bool
Must only be called from a single goroutine at a time.
Must only be called from a single goroutine at a time.
Must only be called from a single goroutine at a time.
Checked by signal handlers.
Called to receive the next queued signal.
Must only be called from a single goroutine at a time.
signalDuringFork is called if we receive a signal while doing a fork.
We do not want signals at that time, as a signal sent to the process
group may be delivered to the child process, causing confusion.
This should never be called, because we block signals across the fork;
this function is just a safety check. See issue 18600 for background.
signalstack sets the current thread's alternate signal stack to s.
signalWaitUntilIdle waits until the signal delivery mechanism is idle.
This is used to ensure that we do not drop a signal notification due
to a race between disabling a signal and receiving a signal.
This assumes that signal delivery has already been disabled for
the signal(s) in question, and here we are just waiting to make sure
that all the signals have been delivered to the user channels
by the os/signal package.
sigNoteSetup initializes an async-signal-safe note.
The current implementation of notes on Darwin is not async-signal-safe,
because the functions pthread_mutex_lock, pthread_cond_signal, and
pthread_mutex_unlock, called by semawakeup, are not async-signal-safe.
There is only one case where we need to wake up a note from a signal
handler: the sigsend function. The signal handler code does not require
all the features of notes: it does not need to do a timed wait.
This is a separate implementation of notes, based on a pipe, that does
not support timed waits but is async-signal-safe.
sigNoteSleep waits for a note created by sigNoteSetup to be woken.
sigNoteWakeup wakes up a thread sleeping on a note created by sigNoteSetup.
This is called if we receive a signal when there is a signal stack
but we are not on it. This can only happen if non-Go code called
sigaction without setting the SS_ONSTACK flag.
sigpanic turns a synchronous signal into a run-time panic.
If the signal handler sees a synchronous panic, it arranges the
stack to look like the function where the signal occurred called
sigpanic, sets the signal's PC value to sigpanic, and returns from
the signal handler. The effect is that the program will act as
though the function that got the signal simply called sigpanic
instead.
This must NOT be nosplit because the linker doesn't know where
sigpanic calls can be injected.
The signal handler must not inject a call to sigpanic if
getg().throwsplit, since sigpanic may need to grow the stack.
This is exported via linkname to assembly in runtime/cgo.
func sigpipe() func sigprocmask(how uint32, new *sigset, old *sigset) func sigprocmask_trampoline()
Called if we receive a SIGPROF signal.
Called by the signal handler, may run during STW.
sigprofNonGo is called if we receive a SIGPROF signal on a non-Go thread,
and the signal handler collected a stack trace in sigprofCallers.
When this is called, sigprofCallersUse will be non-zero.
g is nil, and what we can do is very limited.
sigprofNonGoPC is called when a profiling signal arrived on a
non-Go thread and we have a single PC value, not a stack trace.
g is nil, and what we can do is very limited.
sigRecvPrepareForFixup is used to temporarily wake up the
signal_recv() running thread while it is blocked waiting for the
arrival of a signal. If it causes the thread to wake up, the
sig.state travels through this sequence: sigReceiving -> sigFixup
-> sigIdle -> sigReceiving and resumes. (This is only called while
GC is disabled.)
sigsave saves the current thread's signal mask into *p.
This is used to preserve the non-Go signal mask when a non-Go
thread calls a Go function.
This is nosplit and nowritebarrierrec because it is called by needm
which may be called on a non-Go thread with no g available.
sigsend delivers a signal from sighandler to the internal signal delivery queue.
It reports whether the signal was sent. If not, the caller typically crashes the program.
It runs from the signal handler, so it's limited in what it can do.
sigtramp is the callback from libc when a signal is received.
It is called with the C calling convention.
sigtrampgo is called from the signal handler function, sigtramp,
written in assembly code.
This is called by the signal handler, and the world may be stopped.
It must be nosplit because getg() is still the G that was running
(if any) when the signal was delivered, but it's (usually) called
on the gsignal stack. Until this switches the G to gsignal, the
stack bounds check won't work.
slicebytetostring converts a byte slice to a string.
It is inserted by the compiler into generated code.
ptr is a pointer to the first element of the slice;
n is the length of the slice.
buf is a fixed-size buffer for the result;
it is non-nil if the result does not escape.
slicebytetostringtmp returns a "string" referring to the actual []byte bytes.
Callers need to ensure that the returned string will not be used after
the calling goroutine modifies the original slice or synchronizes with
another goroutine.
The function is only called when instrumenting
and otherwise intrinsified by the compiler.
Some internal compiler optimizations use this function.
- Used for m[T1{... Tn{..., string(k), ...} ...}] and m[string(k)]
where k is []byte, T1 to Tn is a nesting of struct and array literals.
- Used for "<"+string(b)+">" concatenation where b is []byte.
- Used for string(b)=="foo" comparison where b is []byte.
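A short runnable sketch of the patterns listed above, where the conversion can use a temporary string that aliases the byte slice because the string does not outlive the expression:
	package main

	import "fmt"

	func main() {
		m := map[string]int{"foo": 1}
		k := []byte("foo")

		// Map index m[string(k)]: the conversion may alias k's bytes
		// instead of allocating a copy of them.
		v := m[string(k)]

		// Comparison string(b) == "foo": the same temporary conversion applies.
		if string(k) == "foo" {
			fmt.Println("match:", v)
		}

		// Concatenation "<" + string(b) + ">": the intermediate conversion
		// need not allocate its own copy.
		b := []byte("bar")
		fmt.Println("<" + string(b) + ">")
	}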
slicecopy is used to copy from a string or slice of pointerless elements into a slice.
func slicerunetostring(buf *tmpBuf, a []rune) string
spanHasNoSpecials marks a span as having no specials in the arena bitmap.
spanHasSpecials marks a span as having specials in the arena bitmap.
spanOf returns the span of p. If p does not point into the heap
arena or no span has ever contained p, spanOf returns nil.
If p does not point to allocated memory, this may return a non-nil
span that does *not* contain p. If this is a possibility, the
caller should either call spanOfHeap or check the span bounds
explicitly.
Must be nosplit because it has callers that are nosplit.
spanOfHeap is like spanOf, but returns nil if p does not point to a
heap object.
Must be nosplit because it has callers that are nosplit.
spanOfUnchecked is equivalent to spanOf, but the caller must ensure
that p points into an allocated heap arena.
Must be nosplit because it has callers that are nosplit.
stackalloc allocates an n byte stack.
stackalloc must run on the system stack because it uses per-P
resources and must not split the stack.
func stackcache_clear(c *mcache)
stackcacherefill/stackcacherelease implement a global pool of stack segments.
The pool is required to prevent unlimited growth of per-thread caches.
func stackcacherelease(c *mcache, order uint8)
stackcheck checks that SP is in range [g->stack.lo, g->stack.hi).
stackfree frees an n byte stack allocation at stk.
stackfree must run on the system stack because it uses per-P
resources and must not split the stack.
func stackinit()
stacklog2 returns ⌊log_2(n)⌋.
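A minimal sketch of a floor-log2 helper of the kind described here (the name and loop are illustrative):
	package main

	import "fmt"

	// floorLog2 returns the largest n such that 1<<n <= x, i.e. ⌊log2(x)⌋,
	// computed with a simple shift loop.
	func floorLog2(x uintptr) int {
		n := 0
		for x > 1 {
			x >>= 1
			n++
		}
		return n
	}

	func main() {
		fmt.Println(floorLog2(1), floorLog2(8), floorLog2(9)) // 0 3 3
	}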
func stackmapdata(stkmap *stackmap, n int32) bitvector
Allocates a stack from the free pool. Must be called with
stackpool[order].item.mu held.
Adds stack x to the free pool. Must be called with stackpool[order].item.mu held.
startCheckmarks prepares for the checkmarks phase.
The world must be stopped.
Schedules the locked m to run the locked gp.
May run during STW, so write barriers are not allowed.
Schedules some M to run the p (creates an M if necessary).
If p==nil, tries to get an idle P, if no idle P's does nothing.
May run with m.p==nil, so write barriers are not allowed.
If spinning is set, the caller has incremented nmspinning and startm will
either decrement nmspinning or set m.spinning in the newly started M.
Callers passing a non-nil P must call from a non-preemptible context. See
comment on acquirem below.
Must not have write barriers because this may be called without a P.
startpanic_m prepares for an unrecoverable panic.
It returns true if panic messages should be printed, or false if
the runtime is in bad shape and should just print stacks.
It must not have write barriers even though the write barrier
explicitly ignores writes once dying > 0. Write barriers still
assume that g.m.p != nil, and this function may not have P
in some contexts (e.g. a panic in a signal handler for a signal
sent to an M with no P).
startTemplateThread starts the template thread if it is not already
running.
The calling thread must itself be in a known-good state.
startTheWorld undoes the effects of stopTheWorld.
startTheWorldGC undoes the effects of stopTheWorldGC.
func startTheWorldWithSema(emitTraceEvent bool) int64
startTimer adds t to the timer heap.
step advances to the next pc, value pair in the encoded table.
Return the bucket for stk[0:nstk], allocating new bucket if needed.
Stops execution of the current m that is locked to a g until the g is runnable again.
Returns with acquired P.
Stops execution of the current m until new work is available.
Returns with acquired P.
stopTheWorld stops all P's from executing goroutines, interrupting
all goroutines at GC safe points and records reason as the reason
for the stop. On return, only the current goroutine's P is running.
stopTheWorld must not be called from a system stack and the caller
must not hold worldsema. The caller must call startTheWorld when
other P's should resume execution.
stopTheWorld is safe for multiple goroutines to call at the
same time. Each will execute its own stop, and the stops will
be serialized.
This is also used by routines that do stack dumps. If the system is
in panic or being exited, this may not reliably stop all
goroutines.
stopTheWorldGC has the same effect as stopTheWorld, but blocks
until the GC is not running. It also blocks a GC from starting
until startTheWorldGC is called.
stopTheWorldWithSema is the core implementation of stopTheWorld.
The caller is responsible for acquiring worldsema and disabling
preemption first and then should stopTheWorldWithSema on the system
stack:
semacquire(&worldsema, 0)
m.preemptoff = "reason"
systemstack(stopTheWorldWithSema)
When finished, the caller must either call startTheWorld or undo
these three operations separately:
m.preemptoff = ""
systemstack(startTheWorldWithSema)
semrelease(&worldsema)
It is allowed to acquire worldsema once and then execute multiple
startTheWorldWithSema/stopTheWorldWithSema pairs.
Other P's are able to execute between successive calls to
startTheWorldWithSema and stopTheWorldWithSema.
Holding worldsema causes any other goroutines invoking
stopTheWorld to block.
stopTimer stops a timer.
It reports whether t was stopped before being run.
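The user-visible counterpart of this behavior is time.Timer.Stop, which likewise reports whether the timer was stopped before it fired. A small usage sketch:
	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		t := time.NewTimer(time.Hour)
		if t.Stop() {
			fmt.Println("stopped before firing")
		} else {
			// Already fired (or already stopped): drain the channel so a
			// later receive does not pick up a stale value.
			select {
			case <-t.C:
			default:
			}
			fmt.Println("timer had already fired")
		}
	}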
func strhashFallback(a unsafe.Pointer, h uintptr) uintptr
stringDataOnStack reports whether the string's data is
stored on the current goroutine's stack.
Testing adapters for hash quality tests (see hash_test.go)
func stringStructOf(sp *string) *stringStruct func stringtoslicebyte(buf *tmpBuf, s string) []byte func stringtoslicerune(buf *[32]rune, s string) []rune
subtract1 returns the byte pointer p-1.
nosplit because it is used during write barriers and must not be preempted.
subtractb returns the byte pointer p-n.
suspendG suspends goroutine gp at a safe-point and returns the
state of the suspended goroutine. The caller gets read access to
the goroutine until it calls resumeG.
It is safe for multiple callers to attempt to suspend the same
goroutine at the same time. The goroutine may execute between
subsequent successful suspend operations. The current
implementation grants exclusive access to the goroutine, and hence
multiple callers will serialize. However, the intent is to grant
shared read access, so please don't depend on exclusive access.
This must be called from the system stack and the user goroutine on
the current M (if any) must be in a preemptible state. This
prevents deadlocks where two goroutines attempt to suspend each
other and both are in non-preemptible states. There are other ways
to resolve this deadlock, but this seems simplest.
TODO(austin): What if we instead required this to be called from a
user goroutine? Then we could deschedule the goroutine while
waiting instead of blocking the thread. If two goroutines tried to
suspend each other, one of them would win and the other wouldn't
complete the suspend until it was resumed. We would have to be
careful that they couldn't actually queue up suspend for each other
and then both be suspended. This would also avoid the need for a
kernel context switch in the synchronous case because we could just
directly schedule the waiter. The context switch is unavoidable in
the signal case.
sweepone sweeps some unswept heap span and returns the number of pages returned
to the heap, or ^uintptr(0) if there was nothing to sweep.
func sync_atomic_CompareAndSwapUintptr(ptr *uintptr, old, new uintptr) bool func sync_atomic_runtime_procPin() int func sync_atomic_StorePointer(ptr *unsafe.Pointer, new unsafe.Pointer) func sync_atomic_StoreUintptr(ptr *uintptr, new uintptr) func sync_atomic_SwapUintptr(ptr *uintptr, new uintptr) uintptr func sync_fastrand() uint32 func sync_nanotime() int64
Active spinning for sync.Mutex.
func sync_runtime_doSpin() func sync_runtime_procPin() int func sync_runtime_procUnpin() func sync_runtime_registerPoolCleanup(f func()) func sync_runtime_Semacquire(addr *uint32) func sync_runtime_SemacquireMutex(addr *uint32, lifo bool, skipframes int) func sync_runtime_Semrelease(addr *uint32, handoff bool, skipframes int) func sync_throw(s string)
syncadjustsudogs adjusts gp's sudogs and copies the part of gp's
stack they refer to while synchronizing with concurrent channel
operations. It returns the number of bytes of stack copied.
Don't split the stack as this function may be invoked without a valid G,
which prevents us from allocating more stack.
func syscall() func syscall6() func syscall6X()
wrapper for syscall package to call cgocall for libc (cgo) calls.
func syscall_Exit(code int) func syscall_Getpagesize() int func syscall_rawSyscall(fn, a1, a2, a3 uintptr) (r1, r2, err uintptr) func syscall_rawSyscall6(fn, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2, err uintptr)
Called from syscall package after Exec.
Called from syscall package after fork in parent.
Called from syscall package after fork in child.
It resets non-sigignored signals to the default handler, and
restores the signal mask in preparation for the exec.
Because this might be called during a vfork, and therefore may be
temporarily sharing address space with the parent process, this must
not change any global variables or call into C code that may do so.
Called from syscall package before Exec.
Called from syscall package before fork.
syscall_runtime_doAllThreadsSyscall serializes Go execution and
executes a specified fn() call on all m's.
The boolean argument to fn() indicates whether the function's
return value will be consulted or not. That is, fn(true) should
return true if fn() succeeds, and fn(true) should return false if
it failed. When fn(false) is called, its return status will be
ignored.
syscall_runtime_doAllThreadsSyscall first invokes fn(true) on a
single, coordinating, m, and only if it returns true does it go on
to invoke fn(false) on all of the other m's known to the process.
func syscall_runtime_envs() []string
Update the C environment if cgo is loaded.
Called from syscall.Setenv.
func syscall_syscall(fn, a1, a2, a3 uintptr) (r1, r2, err uintptr) func syscall_syscall6(fn, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2, err uintptr) func syscall_syscall6X(fn, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2, err uintptr) func syscall_syscallPtr(fn, a1, a2, a3 uintptr) (r1, r2, err uintptr) func syscall_syscallX(fn, a1, a2, a3 uintptr) (r1, r2, err uintptr)
Update the C environment if cgo is loaded.
Called from syscall.unsetenv.
func syscallNoErr() func syscallPtr() func syscallX() func sysctl(mib *uint32, miblen uint32, oldp *byte, oldlenp *uintptr, newp *byte, newlen uintptr) int32 func sysctl_trampoline() func sysctlbyname_trampoline() func sysctlbynameInt32(name []byte) (int32, int32)
Don't split the stack as this function may be invoked without a valid G,
which prevents us from allocating more stack.
func sysHugePage(v unsafe.Pointer, n uintptr) func sysMap(v unsafe.Pointer, n uintptr, sysStat *sysMemStat)
Always runs without a P, so write barriers are not allowed.
sysReserveAligned is like sysReserve, but the returned pointer is
aligned to align bytes. It may reserve either n or n+align bytes,
so it returns the size that was reserved.
systemstack runs fn on a system stack.
If systemstack is called from the per-OS-thread (g0) stack, or
if systemstack is called from the signal handling (gsignal) stack,
systemstack calls fn directly and returns.
Otherwise, systemstack is being called from the limited stack
of an ordinary goroutine. In this case, systemstack switches
to the per-OS-thread stack, calls fn, and switches back.
It is common to use a func literal as the argument, in order
to share inputs and outputs with the code around the call
to systemstack:
... set up y ...
systemstack(func() {
x = bigcall(y)
})
... use x ...
func systemstack_switch()
templateThread is a thread in a known-good state that exists solely
to start new threads in known-good states when the calling thread
may not be in a good state.
Many programs never need this, so templateThread is started lazily
when we first enter a state that might lead to running on a thread
in an unknown state.
templateThread runs on an M without a P, so it must not have write
barriers.
func testAtomic64()
Ensure that defer arg sizes that map to the same defer size class
also map to the same malloc size class.
Note: Called by runtime/pprof in addition to runtime code.
Poor man's 64-bit division.
This is a very special function, do not use it if you are not sure what you are doing.
int64 division is lowered into _divv() call on 386, which does not fit into nosplit functions.
Handles overflow in a time-specific manner.
This keeps us within no-split stack limits on 32-bit processors.
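A hedged sketch of the shift-and-subtract approach such a helper can take to divide a 64-bit value by a 32-bit divisor without a 64-bit divide instruction (the name and the overflow clamp below are illustrative, not the runtime's exact code):
	package main

	import "fmt"

	// div64by32 computes v / div and v % div using only shifts, compares,
	// and subtraction. It clamps the quotient to the int32 range on overflow.
	func div64by32(v int64, div int32) (quo int32, rem int32) {
		res := int32(0)
		for bit := 30; bit >= 0; bit-- {
			if v >= int64(div)<<uint(bit) {
				v -= int64(div) << uint(bit)
				res |= 1 << uint(bit)
			}
		}
		if v >= int64(div) { // quotient does not fit in 31 bits
			return 1<<31 - 1, 0
		}
		return res, int32(v)
	}

	func main() {
		q, r := div64by32(1e12, 1e6)
		fmt.Println(q, r) // 1000000 0
	}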
timeHistogramMetricsBuckets generates a slice of boundaries for
the timeHistogram. These boundaries are represented in seconds,
not nanoseconds like the timeHistogram represents durations.
timeSleep puts the current goroutine to sleep for at least ns nanoseconds.
timeSleepUntil returns the time when the next timer should fire,
and the P that holds the timer heap that that timer is on.
This is only called by sysmon and checkdead.
tooManyOverflowBuckets reports whether noverflow overflow buckets are too many for a map with 1<<B buckets.
Note that most of these overflow buckets must be in sparse use;
if use was dense, then we'd have already triggered regular map growth.
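A hedged sketch of the kind of threshold check this describes: the overflow-bucket count is compared against roughly the number of regular buckets, 1<<B, with the exponent capped for very large maps (the function name and cap are illustrative):
	package main

	import "fmt"

	// tooManyOverflow reports whether the overflow-bucket count has reached
	// (approximately) the number of regular buckets, 1<<B. The exponent is
	// capped so the threshold stays fixed for very large maps.
	func tooManyOverflow(noverflow uint16, B uint8) bool {
		if B > 15 {
			B = 15
		}
		return noverflow >= uint16(1)<<(B&15)
	}

	func main() {
		fmt.Println(tooManyOverflow(3, 2)) // false: 3 < 1<<2
		fmt.Println(tooManyOverflow(4, 2)) // true: 4 >= 1<<2
	}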
tophash calculates the tophash value for hash.
Does f mark the top of a goroutine stack?
total size of memory block for defer with arg size sz
func trace_userLog(id uint64, category, message string) func trace_userRegion(id, mode uint64, name string) func trace_userTaskCreate(id, parentID uint64, taskType string) func trace_userTaskEnd(id uint64)
traceAcquireBuffer returns trace buffer to use and, if necessary, locks it.
func tracealloc(p unsafe.Pointer, size uintptr, typ *_type)
traceAppend appends v to buf in little-endian-base-128 encoding.
func traceback1(pc, sp, lr uintptr, gp *g, flags uint)
tracebackCgoContext handles tracing back a cgo context value, from
the context argument to setCgoTraceback, for the gentraceback
function. It returns the new value of n.
Traceback over the deferred function calls.
Report them like calls that have been invoked but not started executing yet.
tracebackHexdump hexdumps part of stk around frame.sp and frame.fp
for debugging purposes. If the address bad is included in the
hexdumped range, it will mark it as well.
func tracebackothers(me *g)
tracebacktrap is like traceback but expects that the PC and SP were obtained
from a trap, not from gp->sched or gp->syscallpc/gp->syscallsp or getcallerpc/getcallersp.
Because they are from a trap instead of from a saved pair,
the initial PC must not be rewound to the previous instruction.
(All the saved pairs record a PC that is a return address, so we
rewind it into the CALL instruction.)
If gp.m.libcall{g,pc,sp} information is available, it uses that information in preference to
the pc/sp/lr passed in.
func traceBufPtrOf(b *traceBuf) traceBufPtr
traceEvent writes a single event to trace buffer, flushing the buffer if necessary.
ev is event type.
If skip > 0, write current stack id as the last argument (skipping skip top frames).
If skip = 0, this event type should contain a stack, but we don't want
to collect and remember it for this particular call.
func traceEventLocked(extraBytes int, mp *m, pid int32, bufp *traceBufPtr, ev byte, skip int, args ...uint64)
traceFlush puts buf onto stack of full buffers and returns an empty buffer.
traceFrameForPC records the frame information.
It may allocate memory.
traceFullDequeue dequeues from queue of full buffers.
traceFullQueue queues buf into queue of full buffers.
func tracegc() func traceGCDone() func traceGCMarkAssistDone() func traceGCMarkAssistStart() func traceGCStart() func traceGCSTWDone() func traceGCSTWStart(kind int) func traceGCSweepDone()
traceGCSweepSpan traces the sweep of a single page.
This may be called outside a traceGCSweepStart/traceGCSweepDone
pair; however, it will not emit any trace events in this case.
traceGCSweepStart prepares to trace a sweep loop. This does not
emit any events until traceGCSweepSpan is called.
traceGCSweepStart must be paired with traceGCSweepDone and there
must be no preemption points between these two calls.
func traceGoCreate(newg *g, pc uintptr) func traceGoEnd() func traceGomaxprocs(procs int32) func traceGoPark(traceEv byte, skip int) func traceGoPreempt() func traceGoSched() func traceGoStart() func traceGoSysBlock(pp *p) func traceGoSysCall() func traceGoSysExit(ts int64) func traceGoUnpark(gp *g, skip int) func traceHeapAlloc() func traceNextGC()
traceProcFree frees trace buffer associated with pp.
func traceProcStart() func traceProcStop(pp *p)
traceReader returns the trace reader that should be woken up, if any.
traceReleaseBuffer releases a buffer previously acquired with traceAcquireBuffer.
func traceStackID(mp *m, buf []uintptr, skip int) uint64
traceString adds a string to the trace.strings and returns the id.
trygetfull tries to get a full or partially empty workbuffer.
If one is not immediately available, it returns nil.
typeBitsBulkBarrier executes a write barrier for every
pointer that would be copied from [src, src+size) to [dst,
dst+size) by a memmove using the type bitmap to locate those
pointer slots.
The type typ must correspond exactly to [src, src+size) and [dst, dst+size).
dst, src, and size must be pointer-aligned.
The type typ must have a plain bitmap, not a GC program.
The only use of this function is in channel sends, and the
64 kB channel element limit takes care of this for us.
Must not be preempted because it typically runs right before memmove,
and the GC must observe them as an atomic action.
Callers must perform cgo checks if writeBarrier.cgo.
typedmemclr clears the typed memory at ptr with type typ. The
memory at ptr must already be initialized (and hence in type-safe
state). If the memory is being initialized for the first time, see
memclrNoHeapPointers.
If the caller knows that typ has pointers, it can alternatively
call memclrHasPointers.
typedmemmove copies a value of type t to dst from src.
Must be nosplit, see #16026.
TODO: Perfect for go:nosplitrec since we can't have a safe point
anywhere in the bulk barrier or memmove.
func typedslicecopy(typ *_type, dstPtr unsafe.Pointer, dstLen int, srcPtr unsafe.Pointer, srcLen int) int
typehash computes the hash of the object of type t at address p.
h is the seed.
This function is seldom used. Most maps use for hashing either
fixed functions (e.g. f32hash) or compiler-generated functions
(e.g. for a type like struct { x, y string }). This implementation
is slower but more general and is used for hashing interface types
(called from interhash or nilinterhash, above) or for hashing in
maps generated by reflect.MapOf (reflect_typehash, below).
Note: this function must match the compiler generated
functions exactly. See issue 37716.
typelinksinit scans the types from extra modules and builds the
moduledata typemap used to de-duplicate type pointers.
typesEqual reports whether two types are equal.
Everywhere in the runtime and reflect packages, it is assumed that
there is exactly one *_type per Go type, so that pointer equality
can be used to test if types are equal. There is one place that
breaks this assumption: buildmode=shared. In this case a type can
appear as two different pieces of memory. This is hidden from the
runtime and reflect package by the per-module typemap built in
typelinksinit. It uses typesEqual to map types from later modules
back into earlier ones.
Only typelinksinit needs this function.
unblocksig removes sig from the current thread's signal mask.
This is nosplit and nowritebarrierrec because it is called from
dieFromSignal, which can be called by sigfwdgo while running in the
signal handler, on the signal stack, with no g available.
func unimplemented(name string)
We might not be holding a p in this code.
func unlockextra(mp *m) func unlockOSThread() func unlockWithRank(l *mutex)
Called from dropm to undo the effect of an minit.
unminitSignals is called from dropm, via unminit, to undo the
effect of calling minit on a non-Go thread.
Updates the memstats structure.
The world must be stopped.
updateTimer0When sets the P's timer0When field.
The caller must have locked the timers for pp.
updateTimerModifiedEarliest updates the recorded nextwhen field of the
earlier timerModifiedEarliest value.
The timers for pp will not be locked.
updateTimerPMask clears pp's timer mask if it has no timers on its heap.
Ideally, the timer mask would be kept immediately consistent on any timer
operations. Unfortunately, updating a shared global data structure in the
timer hot path adds too much overhead in applications frequently switching
between no timers and some timers.
As a compromise, the timer mask is updated only on pidleget / pidleput. A
running P (returned by pidleget) may add a timer at any time, so its mask
must be set. An idle P (passed to pidleput) cannot add new timers while
idle, so if it has no timers at that time, its mask may be cleared.
Thus, we get the following effects on timer-stealing in findrunnable:
* Idle Ps with no timers when they go idle are never checked in findrunnable
(for work- or timer-stealing; this is the ideal case).
* Running Ps must always be checked.
* Idle Ps whose timers are stolen must continue to be checked until they run
again, even after timer expiration.
When the P starts running again, the mask should be set, as a timer may be
added at any time.
TODO(prattmic): Additional targeted updates may improve the above cases.
e.g., updating the mask when stealing a timer.
usesLibcall indicates whether this runtime performs system calls
via libcall.
func usleep_trampoline()
verifyTimerHeap verifies that the timer heap is in a valid state.
This is only for debugging, and is only called if verifyTimers is true.
The caller must have locked the timers.
wakeNetPoller wakes up the thread sleeping in the network poller if it isn't
going to wake up before the when argument; or it wakes an idle P to service
timers and the network poller if there isn't one already.
Tries to add one more P to execute G's.
Called when a G is made runnable (newproc, ready).
wakeScavenger immediately unparks the scavenger if necessary.
May run without a P, but it may allocate, so it must not be called
on any allocation path.
mheap_.lock, scavenge.lock, and sched.lock must not be held.
func walltime_trampoline()
wantAsyncPreempt returns whether an asynchronous preemption is
queued for gp.
wbBufFlush flushes the current P's write barrier buffer to the GC
workbufs. It is passed the slot and value of the write barrier that
caused the flush so that it can implement cgocheck.
This must not have write barriers because it is part of the write
barrier implementation.
This and everything it calls must be nosplit because 1) the stack
contains untyped slots from gcWriteBarrier and 2) there must not be
a GC safe point between the write barrier test in the caller and
flushing the buffer.
TODO: A "go:nosplitrec" annotation would be perfect for this.
wbBufFlush1 flushes p's write barrier buffer to the GC work queue.
This must not have write barriers because it is part of the write
barrier implementation, so this may lead to infinite loops or
buffer corruption.
This must be non-preemptible because it uses the P's workbuf.
wirep is the first step of acquirep, which actually associates the
current M to _p_. This is broken out so we can disallow write
barriers for this part, since we don't yet have a P.
func worldStarted() func worldStopped()
write must be nosplit on Windows (see write1)
func write_trampoline() func writeheapdump_m(fd uintptr, m *MemStats)
Package-Level Variables (total 257, in which 1 are exported)
MemProfileRate controls the fraction of memory allocations
that are recorded and reported in the memory profile.
The profiler aims to sample an average of
one allocation per MemProfileRate bytes allocated.
To include every allocated block in the profile, set MemProfileRate to 1.
To turn off profiling entirely, set MemProfileRate to 0.
The tools that process the memory profiles assume that the
profile rate is constant across the lifetime of the program
and equal to the current value. Programs that change the
memory profiling rate should do so just once, as early as
possible in the execution of the program (for example,
at the beginning of main).
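A minimal sketch of the recommended usage: set the rate once at the top of main, before significant allocation, then write a heap profile later (the file name and rate here are arbitrary):
	package main

	import (
		"log"
		"os"
		"runtime"
		"runtime/pprof"
	)

	func main() {
		// Change the sampling rate exactly once, before any significant
		// allocation, so the profile's assumed-constant rate holds.
		runtime.MemProfileRate = 4096 // sample ~1 allocation per 4 KiB allocated

		// ... program work that allocates ...

		f, err := os.Create("mem.prof")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		if err := pprof.WriteHeapProfile(f); err != nil {
			log.Fatal(err)
		}
	}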
var _cgo_setenv unsafe.Pointer // pointer to C function
var _cgo_unsetenv unsafe.Pointer // pointer to C function
var _cgo_yield unsafe.Pointer
used in asm_{386,amd64,arm64}.s to seed the hash function
agg is used by readMetrics, and is protected by metricsSema.
Managed as a global variable because its pointer will be
an argument to a dynamically-defined function, and we'd
like to avoid it escaping to the heap.
allDloggers is a list of all dloggers, linked through
dlogger.allLink. This is accessed atomically. This is prepend only,
so it doesn't need to protect against ABA races.
allglen and allgptr are atomic variables that contain len(allg) and
&allg[0] respectively. Proper ordering depends on totally-ordered
loads and stores. Writes are protected by allglock.
allgptr is updated before allglen. Readers should read allglen
before allgptr to ensure that allglen is always <= len(allgptr). New
Gs appended during the race can be missed. For a consistent view of
all Gs, allglock must be held.
allgptr copies should always be stored as a concrete type or
unsafe.Pointer, not uintptr, to ensure that GC can still reach it
even if it points to a stale array.
allgs contains all Gs ever created (including dead Gs), and thus
never shrinks.
Access via the slice is protected by allglock or stop-the-world.
Readers that cannot take the lock may (carefully!) use the atomic
variables below.
len(allp) == gomaxprocs; may change at safe points, otherwise
immutable.
allpLock protects P-less reads and size changes of allp, idlepMask,
and timerpMask, and all writes to allp.
var arm64HasATOMICS bool var armHasVFPv4 bool
asyncPreemptStack is the bytes of stack space required to inject an
asyncPreempt call.
var blockprofilerate uint64 // in CPU ticks
boundsErrorFmts provide error text for various out-of-bounds panics.
Note: if you change these strings, you should adjust the size of the buffer
in boundsError.Error below as well.
boundsNegErrorFmts are overriding formats if x is negative. In this case there's no need to report y.
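The text produced from these formats is what appears in an index-out-of-range panic. A small runnable sketch that recovers such a panic and prints its message:
	package main

	import "fmt"

	func main() {
		defer func() {
			// Prints something like:
			// recovered: runtime error: index out of range [5] with length 3
			if r := recover(); r != nil {
				fmt.Println("recovered:", r)
			}
		}()
		s := []int{1, 2, 3}
		i := 5
		_ = s[i] // out-of-range index triggers a boundsError panic
	}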
var buildVersion string
cgoAlwaysFalse is a boolean value that is always false.
The cgo-generated code says if cgoAlwaysFalse { cgoUse(p) }.
The compiler cannot see that cgoAlwaysFalse is always false,
so it emits the test and keeps the call, giving the desired
escape analysis result. The test is cheaper than the call.
var cgoContext unsafe.Pointer
cgoHasExtraM is set on startup when an extra M is created for cgo.
The extra M must be created before any C/C++ code calls cgocallback.
When running with cgo, we call _cgo_thread_start
to start threads for us so that we can play nicely with
foreign code.
var chanrecvpc uintptr var chansendpc uintptr var class_to_allocnpages [68]uint8 var class_to_divmagic [68]divMagic var class_to_size [68]uint16 var cpuprof cpuProfile
crashing is the number of m's we have waited for when implementing
GOTRACEBACK=crash when a signal is received.
Holds variables parsed from GODEBUG env var,
except for "memprofilerate" since there is an
existing int var for that value, which may
already have an initial value.
var debugPtrmask struct{lock mutex; data *byte}
channels for synchronizing signal mask updates with the signal mask
thread
var divideError error var earlycgocallback []byte
dummy mspan that contains no free objects.
channels for synchronizing signal mask updates with the signal mask
thread
execLock serializes exec and clone to avoid bugs or unspecified behaviour
around exec'ing while creating/destroying threads. See issue #19546.
var executablePath string
var extraMCount uint32 // Protected by lockextra
var extraMWaiters uint32
var failallocatestack []byte
var failthreadcreate []byte
faketime is the simulated time in nanoseconds since 1970 for the
playground.
Zero means not to use faketime.
var fastlog2Table [33]float64
var fastrandseed uintptr
var finalizer1 [5]byte
var fingCreate uint32
var fingRunning bool
var finptrmask [64]byte
var firstmoduledata moduledata // linker symbol
var floatError error
var forcegc forcegcstate
forcegcperiod is the maximum time in nanoseconds between garbage
collections. If we go this long without a garbage collection, one
is forced to run.
This is a variable for testing purposes. It normally doesn't change.
Bit vector of free marks.
Needs to be as big as the largest number of objects per span.
freezing is set to non-zero if the runtime is trying to freeze the
world.
Stores the signal handlers registered before Go installed its own.
These signal handlers will be invoked in cases where Go doesn't want to
handle a particular signal (e.g., signal occurred on a non-Go thread).
See sigfwdgo for more information on when the signals are forwarded.
This is read by the signal handler; accesses should use
atomic.Loaduintptr and atomic.Storeuintptr.
Total number of gcBgMarkWorker goroutines. Protected by worldsema.
Pool of GC parked background workers. Entries are type
*gcBgMarkWorkerNode.
var gcBitsArenas struct{lock mutex; free *gcBitsArena; next *gcBitsArena; current *gcBitsArena; previous *gcBitsArena}
gcBlackenEnabled is 1 if mutator assists and background mark
workers are allowed to blacken objects. This must only be set when
gcphase == _GCmark.
gcController implements the GC pacing controller that determines
when to trigger concurrent garbage collection and how much marking
work to do in mutator assists and background marking.
It uses a feedback control algorithm to adjust the memstats.gc_trigger
trigger based on the heap growth and GC CPU utilization each cycle.
This algorithm optimizes for heap growth to match GOGC and for CPU
utilization between assist and background marking to be 25% of
GOMAXPROCS. The high-level design of this algorithm is documented
at https://golang.org/s/go15gcpacing.
All fields of gcController are used only during a single mark
cycle.
gcMarkDoneFlushed counts the number of P's with flushed work.
Ideally this would be a captured local in gcMarkDone, but forEachP
escapes its callback closure, so it can't capture anything.
This is protected by markDoneSema.
gcMarkWorkerModeStrings are the string labels of gcMarkWorkerModes
to use in execution traces.
Initialized from $GOGC. GOGC=off means no GC.
Garbage collector phase.
Indicates to write barrier and synchronization task to perform.
Holding gcsema grants the M the right to block a GC, and blocks
until the current GC is done. In particular, it prevents gomaxprocs
from changing concurrently.
TODO(mknyszek): Once gomaxprocs and the execution tracer can handle
being changed/enabled during a GC, remove this.
var globalAlloc struct{mutex; persistentAlloc} var gomaxprocs int32 var gStatusStrings [10]string
handlingSig is indexed by signal number and is non-zero if we are
currently handling the signal. Or, to put it another way, whether
the signal handler is currently set to the Go signal handler or not.
This is uint32 rather than bool so that we can use atomic instructions.
used in hash{32,64}.go to seed the hash function
exported value for testing
heapminimum is the minimum heap size at which to trigger GC.
For small heaps, this overrides the usual GOGC*live set rule.
When there is a very small live set but a lot of allocation, simply
collecting when the heap reaches GOGC*live results in many GC
cycles and high total per-GC overhead. This minimum amortizes this
per-GC overhead while keeping the heap reasonably small.
During initialization this is set to 4MB*GOGC/100. In the case of
GOGC==0, this will set heapminimum to 0, resulting in constant
collection even when the heap size is small, which is useful for
debugging.
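A one-line worked example of that formula, assuming the stated 4MB default (the constant and function names are illustrative):
	package main

	import "fmt"

	const defaultHeapMinimum = 4 << 20 // 4MB, per the description above

	// heapMinimumFor scales the default minimum by GOGC/100.
	func heapMinimumFor(gogc int) uint64 {
		return defaultHeapMinimum * uint64(gogc) / 100
	}

	func main() {
		fmt.Println(heapMinimumFor(100)) // 4194304 (4MB)
		fmt.Println(heapMinimumFor(50))  // 2097152 (2MB)
		fmt.Println(heapMinimumFor(0))   // 0: constant collection, as noted above
	}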
Bitmask of Ps in _Pidle list, one bit per P. Reads and writes must
be atomic. Length may change at safe points.
Each P must update only its own bit. In order to maintain
consistency, a P going idle must update the idle mask simultaneously with
updates to the idle P list under the sched.lock, otherwise a racing
pidleget may clear the mask before pidleput sets the mask,
corrupting the bitmap.
N.B., procresize takes ownership of all Ps in stopTheWorldWithSema.
inForkedChild is true while manipulating signals in the child process.
This is used to avoid calling libc functions in case we are using vfork.
Value to use for signal mask for newly created M's.
inittrace stores statistics for init functions which are
updated by malloc and newproc when active is true.
Set by the linker so the runtime can determine the buildmode.
iscgo is set to true by the runtime/cgo package
Set by the linker so the runtime can determine the buildmode.
var itabTable *itabTableType // pointer to current table var itabTableInit itabTableType // starter table var lastmoduledatap *moduledata // linker symbol
levelBits is the number of bits in the radix for a given level in the super summary
structure.
The sum of all the entries of levelBits should equal heapAddrBits.
levelLogPages is log2 the maximum number of runtime pages in the address space
a summary in the given level represents.
The leaf level always represents exactly log2 of 1 chunk's worth of pages.
levelShift is the number of bits to shift to acquire the radix for a given level
in the super summary structure.
With levelShift, one can compute the index of the summary at level l related to a
pointer p by doing:
p >> levelShift[l]
lockNames gives the names associated with each of the above ranks
lockPartialOrder is a partial order among the various lock types, listing the
immediate ordering that has actually been observed in the runtime. Each entry
(which corresponds to a particular lock rank) specifies the list of locks
that can already be held immediately "above" it.
So, for example, the lockRankSched entry shows that all the locks preceding
it in rank can actually be held. The allp lock shows that only the sysmon or
sched lock can be held immediately above it when it is acquired.
main_init_done is a signal used by cgocallbackg that initialization
has been completed. It is made before _cgo_notify_runtime_init_done,
so all cgo calls can rely on it existing. When main_init is complete,
it is closed, meaning cgocallbackg can reliably receive from it.
mainStarted indicates that the main M has started.
channels for synchronizing signal mask updates with the signal mask
thread
maxOffAddr is the maximum address in the offset address
space. It corresponds to the highest virtual address representable
by the page alloc chunk and heap arena maps.
Maximum searchAddr value, which indicates that the heap has no free space.
We alias maxOffAddr just to make it clear that this is the maximum address
for the page allocator's search space. See maxOffAddr for details.
var maxstacksize uintptr // enough until runtime.main sets it for real
var memoryError error
var metrics map[string]metricData
var metricsInit bool
metrics is a map of runtime/metrics keys to
data used by the runtime to sample each metric's
value.
mFixupRace is used to temporarily borrow the race context from the
coordinating m during a syscall_runtime_doAllThreadsSyscall and
loan it out to each of the m's of the runtime so they can execute a
mFixup.fn in that context.
minOffAddr is the minimum address in the offset space, and
it corresponds to the virtual address arenaBaseOffset.
set using cmd/go/internal/modload.ModInfoProg
var modulesSlice *[]*moduledata // see activeModules
mSpanStateNames are the names of the span states, indexed by
mSpanState.
var mutexprofilerate uint64 // fraction sampled
var netpollBreakRd uintptr // for netpollBreak
var netpollBreakWr uintptr // for netpollBreak
var netpollInited uint32
var netpollInitLock mutex
var netpollWaiters uint32
var netpollWakeSig uint32 // used to avoid duplicate calls of netpollBreak
newmHandoff contains a list of m structures that need new OS threads.
This is used by newm in situations where newm itself can't safely
start an OS thread.
var no_pointers_stackmap uint64 // defined in assembly, for NO_LOCAL_POINTERS macro
ptrmask for an allocation containing a single pointer.
var overflowError error var overflowTag [1]unsafe.Pointer // always nil
panicking is non-zero when crashing the program for an unrecovered panic.
panicking is incremented and decremented atomically.
paniclk is held while printing the panic information and stack trace,
so that two concurrent panics don't overlap their output.
var pdEface interface{}
pendingPreemptSignals is the number of preemption signals
that have been sent but not received. This is only used on Darwin.
For #41702.
persistentChunks is a list of all the persistent chunks we have
allocated. The list is maintained through the first word in the
persistent chunk. This is updated atomically.
physHugePageSize is the size in bytes of the OS's default physical huge
page size whose allocation is opaque to the application. It is assumed
and verified to be a power of two.
If set, this must be set by the OS init code (typically in osinit) before
mallocinit. However, setting it at all is optional, and leaving the default
value is always safe (though potentially less efficient).
Since physHugePageSize is always assumed to be a power of two,
physHugePageShift is defined as physHugePageSize == 1 << physHugePageShift.
The purpose of physHugePageShift is to avoid doing divisions in
performance critical functions.
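A tiny sketch of why the shift form matters: with a power-of-two page size, division and modulus in hot paths reduce to shifts and masks (the names and sizes below are hypothetical):
	package main

	import "fmt"

	const (
		hugePageShift = 21                 // hypothetical: 2 MiB huge pages
		hugePageSize  = 1 << hugePageShift // so size == 1 << shift, as described above
	)

	// hugePagesFor computes n / hugePageSize without a division.
	func hugePagesFor(n uintptr) uintptr { return n >> hugePageShift }

	// hugePageOffset computes addr % hugePageSize without a division.
	func hugePageOffset(addr uintptr) uintptr { return addr & (hugePageSize - 1) }

	func main() {
		fmt.Println(hugePagesFor(5 << 20))   // 2 (two full 2 MiB pages in 5 MiB)
		fmt.Println(hugePageOffset(5 << 20)) // 1048576 (5 MiB mod 2 MiB)
	}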
physPageSize is the size in bytes of the OS's physical pages.
Mapping and unmapping operations must be done at multiples of
physPageSize.
This must be set by the OS init code (typically in osinit) before
mallocinit.
pinnedTypemaps are the map[typeOff]*_type from the moduledata objects.
These typemap objects are allocated at run time on the heap, but the
only direct reference to them is in the moduledata, created by the
linker and marked SNOPTRDATA so it is ignored by the GC.
To make sure the map isn't collected, we keep a second reference here.
var poolcleanup func()
printBacklog is a circular buffer of messages written with the builtin
print* functions, for use in postmortem analysis of core dumps.
var printBacklogIndex int
Information about what cpu features are available.
Packages outside the runtime should not use these
as they are not an external api.
Set on startup in asm_{386,amd64}.s
NOTE(rsc): Everything here could use cas if contention became an issue.
var racecgosync uint64 // represents possible synchronization in C code
var raceprocctx0 uintptr
reflectOffs holds type offsets defined at run time by the reflect package.
When a type is defined at run time, its *rtype data lives on the heap.
There are a wide range of possible addresses the heap may use, that
may not be representable as a 32-bit offset. Moreover the GC may
one day start moving heap memory, in which case there is no stable
offset that can be defined.
To provide stable offsets, we pin *rtype objects in a global map
and treat the offset as an identifier. We use negative offsets that
do not overlap with any compile-time module offsets.
Entries are created by reflect.addReflectOff.
runningPanicDefers is non-zero while running deferred functions for panic.
runningPanicDefers is incremented and decremented atomically.
This is used to try hard to get a panic stack trace out when exiting.
runtimeInitTime is the nanotime() at which the runtime started.
Sleep/wait state of the background scavenger.
var shiftError error
sig handles communication between the signal handler and os/signal.
Other than the inuse and recv fields, the fields are accessed atomically.
The wanted and ignored fields are only written by one goroutine at
a time; access is controlled by the handlers Mutex in os/signal.
The fields are only read by that one goroutine and by the signal handler.
We access them atomically to minimize the race between setting them
in the goroutine calling os/signal and the signal handler,
which may be running in a different thread. That race is unavoidable,
as there is no connection between handling a signal and receiving one,
but atomic instructions should minimize it.
The read and write file descriptors used by the sigNote functions.
If the signal handler receives a SIGPROF signal on a non-Go thread,
it tries to collect a traceback into sigprofCallers.
sigprofCallersUse is set to non-zero while sigprofCallers holds a traceback.
var sigset_all sigset
sigsetAllExiting is used by sigblock(true) when a thread is
exiting. sigset_all is defined in OS specific code, and per GOOS
behavior may override this default for sigsetAllExiting: see
osinit().
var size_to_class128 [249]uint8
var size_to_class8 [129]uint8
var sizeClassBuckets []float64
var sliceEface interface{}
spanSetBlockPool is a global pool of spanSetBlocks.
Global pool of large stack spans.
Global pool of spans that have free stacks.
Stacks are assigned an order according to size.
order = log_2(size/FixedStack)
There is a free list for each order.
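For example, assuming the 2 KB FixedStack used on linux/darwin/bsd (see the NumStackOrders table further below), a sketch of the order computation looks like this:
    // A sketch only: fixedStack is assumed to be 2048 (2 KB) here.
    const fixedStack = 2048

    // stackOrder returns log2(size/fixedStack) by doubling until size is reached:
    // 2 KB -> 0, 4 KB -> 1, 8 KB -> 2, 16 KB -> 3.
    func stackOrder(size uintptr) int {
        order := 0
        for s := uintptr(fixedStack); s < size; s <<= 1 {
            order++
        }
        return order
    }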
staticuint64s is used to avoid allocating in convTx for small integer values.
var stringEface interface{}
var stringType *_type
TODO: These should be locals in testAtomic64, but we don't 8-byte
align stack variables on 386.
testSigtrap and testSigusr1 are used by the runtime tests. If
non-nil, they are called on SIGTRAP/SIGUSR1 respectively. If the
handler returns true, the normal behavior on that signal is suppressed.
var testSigusr1 func(gp *g) bool
var timeHistBuckets []float64
Bitmask of Ps that may have a timer, one bit per P. Reads and writes
must be atomic. Length may change at safe points.
trace is global tracing context.
var traceback_env uint32
var typecache [256]typeCacheBucket
var uint16Eface interface{}
var uint16Type *_type
var uint32Eface interface{}
var uint32Type *_type
var uint64Eface interface{}
var uint64Type *_type
var urandom_dev []byte
runtime variable to check if the processor we're running on
actually supports the instructions used by the AES-based
hash implementation.
var useAVXmemmove bool
If useCheckmark is true, marking of an object uses the checkmark
bits instead of the standard mark bits.
var waitReasonStrings [27]string
var work struct{full lfstack; empty lfstack; pad0 cpu.CacheLinePad; wbufSpans struct{lock mutex; free mSpanList; busy mSpanList}; _ uint32; bytesMarked uint64; markrootNext uint32; markrootJobs uint32; nproc uint32; tstart int64; nwait uint32; nFlushCacheRoots int; nDataRoots, nBSSRoots, nSpanRoots, nStackRoots int; startSema uint32; markDoneSema uint32; bgMarkReady note; bgMarkDone uint32; mode gcMode; userForced bool; totaltime int64; initialHeapLive uint64; assistQueue struct{lock mutex; q gQueue}; sweepWaiters struct{lock mutex; list gList}; cycles uint32; stwprocs, maxprocs int32; tSweepTerm, tMark, tMarkTerm, tEnd int64; pauseNS int64; pauseStart int64; heap0, heap1, heap2, heapGoal uint64}
Holding worldsema grants an M the right to try to stop the world.
The compiler knows about this variable.
If you change it, you must change builtin/runtime.go, too.
If you change the first four bytes, you must also change the write
barrier insertion code.
Set in runtime.cpuinit.
TODO: deprecate these; use internal/cpu directly.
var x86HasSSE41 bool
base address for all 0-byte allocations
Package-Level Constants (total 699, in which 3 are exported)
Compiler is the name of the compiler toolchain that built the
running binary. Known toolchains are:
gc Also known as cmd/compile.
gccgo The gccgo front end, part of the GCC compiler suite.
GOARCH is the running program's architecture target:
one of 386, amd64, arm, s390x, and so on.
GOOS is the running program's operating system target:
one of darwin, freebsd, linux, and so on.
To view possible combinations of GOOS and GOARCH, run "go tool dist list".
_64bit = 1 on 64-bit systems, 0 on 32-bit systems
PCDATA and FUNCDATA table indexes.
See funcdata.h and ../cmd/internal/objabi/funcdata.go.
const _BUS_ADRALN = 1 const _BUS_ADRERR = 2 const _BUS_OBJERR = 3 const _ConcurrentSweep = true const _CTL_HW = 6 const _DebugGC = 0 const _EAGAIN = 35 const _EFAULT = 14 const _EINTR = 4 const _ENOMEM = 12 const _ETIMEDOUT = 60 const _EV_ADD = 1 const _EV_CLEAR = 32 const _EV_DELETE = 2 const _EV_EOF = 32768 const _EV_ERROR = 16384 const _EV_RECEIPT = 64 const _EVFILT_READ = -1 const _EVFILT_WRITE = -2 const _F_GETFL = 3 const _F_SETFD = 2 const _F_SETFL = 4 const _FD_CLOEXEC = 1 const _FinBlockSize = 4096 const _FixAllocChunk = 16384 // Chunk size for FixAlloc
const _FixedStack = 2048
The minimum stack size to allocate.
The hackery here rounds FixedStack0 up to a power of 2.
const _FixedStack1 = 2047 const _FixedStack2 = 2047 const _FixedStack3 = 2047 const _FixedStack4 = 2047 const _FixedStack5 = 2047 const _FixedStack6 = 2047 const _FPE_FLTDIV = 1 const _FPE_FLTINV = 5 const _FPE_FLTOVF = 2 const _FPE_FLTRES = 4 const _FPE_FLTSUB = 6 const _FPE_FLTUND = 3 const _FPE_INTDIV = 7 const _FPE_INTOVF = 8
const _GCmark = 1 // GC marking roots and workbufs: allocate black, write barrier ENABLED
const _GCmarktermination = 2 // GC mark termination: allocate black, P's help GC, write barrier ENABLED
const _GCoff = 0 // GC not running; sweeping in background, write barrier disabled
_Gcopystack means this goroutine's stack is being moved. It
is not executing user code and is not on a run queue. The
stack is owned by the goroutine that put it in _Gcopystack.
_Gdead means this goroutine is currently unused. It may be
just exited, on a free list, or just being initialized. It
is not executing user code. It may or may not have a stack
allocated. The G and its stack (if any) are owned by the M
that is exiting the G or that obtained the G from the free
list.
_Genqueue_unused is currently unused.
_Gidle means this goroutine was just allocated and has not
yet been initialized.
_Gmoribund_unused is currently unused, but hardcoded in gdb
scripts.
Number of goroutine ids to grab from sched.goidgen to local per-P cache at once.
16 seems to provide enough amortization, but other than that it's a mostly arbitrary number.
_Gpreempted means this goroutine stopped itself for a
suspendG preemption. It is like _Gwaiting, but nothing is
yet responsible for ready()ing it. Some suspendG must CAS
the status to _Gwaiting to take responsibility for
ready()ing this G.
_Grunnable means this goroutine is on a run queue. It is
not currently executing user code. The stack is not owned.
_Grunning means this goroutine may execute user code. The
stack is owned by this goroutine. It is not on a run queue.
It is assigned an M and a P (g.m and g.m.p are valid).
_Gscan combined with one of the above states other than
_Grunning indicates that GC is scanning the stack. The
goroutine is not executing user code and the stack is owned
by the goroutine that set the _Gscan bit.
_Gscanrunning is different: it is used to briefly block
state transitions while GC signals the G to scan its own
stack. This is otherwise like _Grunning.
atomicstatus&~Gscan gives the state the goroutine will
return to when the scan completes.
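A minimal sketch of that bit-combination convention; the numeric values below are placeholders chosen for illustration, not taken from this package:
    // Placeholder values: a scan bit OR'd onto a base status.
    const (
        _Grunnable     = 1
        _Gscan         = 0x1000
        _Gscanrunnable = _Gscan + _Grunnable
    )

    // baseStatus clears the scan bit, yielding the state the goroutine
    // returns to when the scan completes (atomicstatus&^_Gscan).
    func baseStatus(atomicstatus uint32) uint32 {
        return atomicstatus &^ _Gscan
    }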
defined constants
_Gsyscall means this goroutine is executing a system call.
It is not executing user code. The stack is owned by this
goroutine. It is not on a run queue. It is assigned an M.
_Gwaiting means this goroutine is blocked in the runtime.
It is not executing user code. It is not on a run queue,
but should be recorded somewhere (e.g., a channel wait
queue) so it can be ready()d when necessary. The stack is
not owned *except* that a channel operation may read or
write parts of the stack under the appropriate channel
lock. Otherwise, it is not safe to access the stack after a
goroutine enters _Gwaiting (e.g., it may get moved).
const _HW_NCPU = 3 const _HW_PAGESIZE = 7 const _ITIMER_PROF = 2 const _ITIMER_REAL = 0 const _ITIMER_VIRTUAL = 1 const _KindSpecialFinalizer = 1 const _KindSpecialProfile = 2 const _MADV_DONTNEED = 4 const _MADV_FREE = 5 const _MADV_FREE_REUSABLE = 7 const _MADV_FREE_REUSE = 8 const _MAP_ANON = 4096 const _MAP_FIXED = 16 const _MAP_PRIVATE = 2
Max number of threads to run garbage collection.
2, 3, and 4 are all plausible maximums depending
on the hardware details of the machine. The garbage
collector scales well to 32 cpus.
const _MaxSmallSize = 32768 const _NSIG = 32 const _NumSizeClasses = 68
Number of orders that get caching. Order 0 is FixedStack
and each successive order is twice as large.
We want to cache 2KB, 4KB, 8KB, and 16KB stacks. Larger stacks
will be allocated directly.
Since FixedStack is different on different systems, we
must vary NumStackOrders to keep the same maximum cached size.
OS | FixedStack | NumStackOrders
-----------------+------------+---------------
linux/darwin/bsd | 2KB | 4
windows/32 | 4KB | 3
windows/64 | 8KB | 2
plan9 | 4KB | 3
const _O_NONBLOCK = 4 const _PageMask = 8191 const _PageShift = 13 const _PageSize = 8192
PCDATA and FUNCDATA table indexes.
See funcdata.h and ../cmd/internal/objabi/funcdata.go.
_PCDATA_Restart1(2) apply to a sequence of instructions; if an async
preemption happens within such a sequence, we should back the PC off
to the start of the sequence when resuming.
We need two so we can distinguish the start/end of the sequence
in case two sequences are next to each other.
const _PCDATA_Restart2 = -4
Like _PCDATA_RestartAtEntry, but back to function entry if async
preempted.
PCDATA and FUNCDATA table indexes.
See funcdata.h and ../cmd/internal/objabi/funcdata.go.
PCDATA_UnsafePoint values.
const _PCDATA_UnsafePointUnsafe = -2 // Unsafe for async preemption
_Pdead means a P is no longer used (GOMAXPROCS shrank). We
reuse Ps if GOMAXPROCS increases. A dead P is mostly
stripped of its resources, though a few things remain
(e.g., trace buffers).
_Pgcstop means a P is halted for STW and owned by the M
that stopped the world. The M that stopped the world
continues to use its P, even in _Pgcstop. Transitioning
from _Prunning to _Pgcstop causes an M to release its P and
park.
The P retains its run queue and startTheWorld will restart
the scheduler on Ps with non-empty run queues.
_Pidle means a P is not being used to run user code or the
scheduler. Typically, it's on the idle P list and available
to the scheduler, but it may just be transitioning between
other states.
The P is owned by the idle list or by whatever is
transitioning its state. Its run queue is empty.
const _PROT_EXEC = 4 const _PROT_NONE = 0 const _PROT_READ = 1 const _PROT_WRITE = 2
_Prunning means a P is owned by an M and is being used to
run user code or the scheduler. Only the M that owns this P
is allowed to change the P's status from _Prunning. The M
may transition the P to _Pidle (if it has no more work to
do), _Psyscall (when entering a syscall), or _Pgcstop (to
halt for the GC). The M may also hand ownership of the P
off directly to another M (e.g., to schedule a locked G).
_Psyscall means a P is not running user code. It has
affinity to an M in a syscall but is not owned by it and
may be stolen by another M. This is similar to _Pidle but
uses lightweight transitions and maintains M affinity.
Leaving _Psyscall must be done with a CAS, either to steal
or retake the P. Note that there's an ABA hazard: even if
an M successfully CASes its original P back to _Prunning
after a syscall, it must understand the P may have been
used by another M in the interim.
const _PTHREAD_CREATE_DETACHED = 2 const _SA_64REGSET = 512 const _SA_ONSTACK = 1 const _SA_RESTART = 2 const _SA_SIGINFO = 64 const _SA_USERTRAMP = 256 const _SEGV_ACCERR = 2 const _SEGV_MAPERR = 1 const _SI_USER = 0 // empirically true, but not what headers say
const _SIG_BLOCK = 1 const _SIG_SETMASK = 3 const _SIG_UNBLOCK = 2 const _SIGABRT = 6 const _SIGALRM = 14 const _SIGBUS = 10 const _SIGCHLD = 20 const _SIGCONT = 19
Values for the flags field of a sigTabT.
const _SIGEMT = 7 const _SIGFPE = 8
const _SIGHUP = 1
const _SIGILL = 4 const _SIGINFO = 29 const _SIGINT = 2 const _SIGIO = 23
const _SIGKILL = 9
const _SIGPIPE = 13 const _SIGPROF = 27 const _SIGQUIT = 3 const _SIGSEGV = 11
const _SIGSTOP = 17 const _SIGSYS = 12 const _SIGTERM = 15
const _SIGTRAP = 5 const _SIGTSTP = 18 const _SIGTTIN = 21 const _SIGTTOU = 22
const _SIGURG = 16 const _SIGUSR1 = 30 const _SIGUSR2 = 31 const _SIGVTALRM = 26 const _SIGWINCH = 28 const _SIGXCPU = 24 const _SIGXFSZ = 25 const _SS_DISABLE = 4
Functions that need frames bigger than this use an extra
instruction to do the stack split check, to avoid overflow
in case SP - framesize wraps below zero.
This value can be no bigger than the size of the unmapped
space at zero.
Per-P, per order stack segment cache size.
The stack guard is a pointer this many bytes above the
bottom of the stack.
The maximum number of bytes that a chain of NOSPLIT
functions can use.
The minimum size of stack used by Go code
After a stack split check the SP is allowed to be this
many bytes below the stack guard. This saves an instruction
in the checking sequence for tiny frames.
StackSystem is a number of additional bytes to add
to each stack below the usual guard area for OS-specific
purposes like signal handling. Used on Windows, Plan 9,
and iOS because they do not use a separate stack.
Tiny allocator parameters, see "Tiny allocator" comment in malloc.go.
const _TinySizeClass int8 = 2
The maximum number of frames we print for a traceback
const _TraceJumpStack = 4 // if traceback is on a systemstack, resume trace at g that called into it
const _TraceRuntimeFrames = 1 // include frames for internal runtime functions.
const _TraceTrap = 2 // the initial PC, SP are from a trap, not a return PC from a call
const _WorkbufSize = 2048 // in bytes; larger values result in less contention
This implementation depends on OS-specific implementations of
func semacreate(mp *m)
Create a semaphore for mp, if it does not already have one.
func semasleep(ns int64) int32
If ns < 0, acquire m's semaphore and return 0.
If ns >= 0, try to acquire m's semaphore for at most ns nanoseconds.
Return 0 if the semaphore was acquired, -1 if interrupted or timed out.
func semawakeup(mp *m)
Wake up mp, which is or will soon be sleeping on its semaphore.
addrBits is the number of bits needed to represent a virtual address.
See heapAddrBits for a table of address space sizes on
various architectures. 48 bits is enough for all
architectures except s390x.
On AMD64, virtual addresses are 48-bit (or 57-bit) numbers sign extended to 64.
We shift the address left 16 to eliminate the sign extended part and make
room in the bottom for the count.
On s390x, virtual addresses are 64-bit. There's not much we
can do about this, so we just hope that the kernel doesn't
get to really high addresses and panic if it does.
On AIX, 64-bit addresses are split into a 36-bit segment number and a
28-bit offset within the segment. Segment numbers in the range
0x0A0000000-0x0AFFFFFFF (LSA) are available for mmap.
We assume all lfnode addresses are from memory allocated with mmap.
We use one bit to distinguish between the two ranges.
const aixCntBits = 10
arenaBaseOffset is the pointer value that corresponds to
index 0 in the heap arena map.
On amd64, the address space is 48 bits, sign extended to 64
bits. This offset lets us handle "negative" addresses (or
high addresses if viewed as unsigned).
On aix/ppc64, this offset allows keeping heapAddrBits at 48.
Otherwise, it would have to be 60 in order to handle mmap addresses
(in the range 0x0a00000000000000 - 0x0afffffffffffff), but in that
case the memory reserved in (s *pageAlloc).init for chunks causes
significant slowdowns.
On other platforms, the user address space is contiguous
and starts at 0, so no offset is necessary.
A typed version of this constant that will make it into DWARF (for viewcore).
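A sketch of the amd64 case described above, with assumed values (a 1<<47 offset and 64 MB arenas): adding the offset before shifting turns sign-extended "negative" addresses into small unsigned indexes.
    // Assumed amd64-style values, for illustration only.
    const (
        arenaBaseOffset   = 1 << 47
        logHeapArenaBytes = 26 // 64 MB arenas
    )

    // arenaIndexSketch adds the offset before dividing by the arena size,
    // so the most negative canonical address maps to index 0.
    func arenaIndexSketch(p uintptr) uintptr {
        return (p + arenaBaseOffset) >> logHeapArenaBytes
    }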
arenaBits is the total bits in a combined arena map index.
This is split between the index into the L1 arena map and
the L2 arena map.
arenaL1Bits is the number of bits of the arena number
covered by the first level arena map.
This number should be small, since the first level arena
map requires PtrSize*(1<<arenaL1Bits) of space in the
binary's BSS. It can be zero, in which case the first level
index is effectively unused. There is a performance benefit
to this, since the generated code can be more efficient,
but comes at the cost of having a large L2 mapping.
We use the L1 map on 64-bit Windows because the arena size
is small, but the address space is still 48 bits, and
there's a high cost to having a large L2.
arenaL1Shift is the number of bits to shift an arena frame
number by to compute an index into the first level arena map.
arenaL2Bits is the number of bits of the arena number
covered by the second level arena index.
The size of each arena map allocation is proportional to
1<<arenaL2Bits, so it's important that this not be too
large. 48 bits leads to 32MB arena index allocations, which
is about the practical threshold.
const bias32 = -127 const bias64 = -1023 const bitPointer = 1 const bitPointerAll = 15 const bitScan = 16
all scan/pointer bits in a byte
const blockProfile bucketType = 2 const boundsIndex boundsErrorCode = 0 // s[x], 0 <= x < len(s) failed
const boundsSlice3Acap boundsErrorCode = 5 // s[?:?:x], 0 <= x <= cap(s) failed
const boundsSlice3Alen boundsErrorCode = 4 // s[?:?:x], 0 <= x <= len(s) failed
const boundsSlice3B boundsErrorCode = 6 // s[?:x:y], 0 <= x <= y failed (but boundsSlice3A didn't happen)
const boundsSlice3C boundsErrorCode = 7 // s[x:y:?], 0 <= x <= y failed (but boundsSlice3A/B didn't happen)
const boundsSliceAcap boundsErrorCode = 2 // s[?:x], 0 <= x <= cap(s) failed
const boundsSliceAlen boundsErrorCode = 1 // s[?:x], 0 <= x <= len(s) failed
const boundsSliceB boundsErrorCode = 3 // s[x:y], 0 <= x <= y failed (but boundsSliceA didn't happen)
const bucketCnt = 8
Maximum number of key/elem pairs a bucket can hold.
size of bucket hash table
buffer of pending write data
const cgoCheckPointerFail = "cgo argument has Go pointer to Go pointer" const cgoResultFail = "cgo result has Go pointer" const cgoWriteBarrierFail = "Go pointer stored into non-Go memory"
In addition to the 16 bits taken from the top, we can take 3 from the
bottom, because node must be pointer-aligned, giving a total of 19 bits
of count.
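A sketch of that packing, assuming 48-bit addresses, the 16 spare top bits, and the 3 alignment bits, for 19 bits of count (the names here are illustrative):
    const cntBits = 64 - 48 + 3 // 16 top bits + 3 alignment bits = 19 bits of count

    // pack shifts the 48-bit, 8-byte-aligned address into the top of the word
    // and stores the counter in the low 19 bits; unpack reverses it.
    func pack(addr, cnt uint64) uint64 {
        return addr<<16 | cnt&(1<<cntBits-1)
    }

    func unpack(v uint64) (addr, cnt uint64) {
        return v >> cntBits << 3, v & (1<<cntBits - 1)
    }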
const concurrentSweep = true
data offset should be the size of the bmap struct, but needs to be
aligned correctly. For amd64p32 this means 64-bit alignment
even though pointers are 32 bit.
const debugCallRuntime = "call from within the Go runtime" const debugCallSystemStack = "executing on Go runtime stack" const debugCallUnknownFunc = "call from unknown function" const debugCallUnsafePoint = "call not at safe point" const debugChan = false
check the BP links during traceback.
const debugLogBoolFalse = 3 const debugLogBoolTrue = 2
debugLogBytes is the size of each per-M ring buffer. This is
allocated off-heap to avoid blowing up the M and hence the GC'd
heap size.
const debugLogConstString = 9
debugLogHeaderSize is the number of bytes in the framing
header of every dlog record.
const debugLogHex = 6 const debugLogInt = 4 const debugLogPC = 11 const debugLogPtr = 7 const debugLogString = 8
debugLogStringLimit is the maximum number of bytes in a string.
Above this, the string will be truncated with "..(n more bytes).."
const debugLogStringOverflow = 10
debugLogSyncSize is the number of bytes in a sync record.
const debugLogTraceback = 12 const debugLogUint = 5 const debugLogUnknown = 1 const debugMalloc = false const debugPcln = false
debugScanConservative enables debug logging for stack
frames that are scanned conservatively.
const debugSelect = false
defaultHeapMinimum is the value of heapminimum for GOGC==100.
const deferHeaderSize uintptr = 72 const dlogEnabled = false
drainCheckThreshold specifies how many units of work to do
between self-preemption checks in gcDrain. Assuming a scan
rate of 1 MB/ms, this is ~100 µs. Lower values have higher
overhead in the scan loop (the scheduler check may perform
a syscall, so its overhead is nontrivial). Higher values
make the system less responsive to incoming work.
const emptyOne = 1 // this cell is empty
Possible tophash values. We reserve a few possibilities for special marks.
Each bucket (including its overflow buckets, if any) will have either all or none of its
entries in the evacuated* states (except during the evacuate() method, which only happens
during map writes and thus no one else can observe the map during that time).
const evacuatedEmpty = 4 // cell is empty, bucket is evacuated.
const evacuatedX = 2 // key/elem is valid. Entry has been evacuated to first half of larger table.
const evacuatedY = 3 // same as above, but evacuated to second half of larger table.
const fastlogNumBits = 5 const fieldKindEface = 3 const fieldKindEol = 0 const fieldKindIface = 2 const fieldKindPtr = 1 const fInf = 9218868437227405312 const fixedRootCount = 2 const fixedRootFinalizers = 0 const fixedRootFreeGStacks = 1 const fNegInf = 18442240474082181120
forcePreemptNS is the time slice given to a G before it is
preempted.
Must agree with cmd/internal/objabi.Framepointer_enabled.
const freeChunkSum pallocSum = 2251800887427584
freezeStopWait is a large value that freezetheworld sets
sched.stopwait to in order to request that all Gs permanently stop.
const funcID_asmcgocall funcID = 8 const funcID_asyncPreempt funcID = 21 const funcID_cgocallback funcID = 14 const funcID_debugCallV1 funcID = 17 const funcID_externalthreadhandler funcID = 16 const funcID_gcBgMarkWorker funcID = 11 const funcID_goexit funcID = 2 const funcID_gogo funcID = 15 const funcID_gopanic funcID = 18 const funcID_handleAsyncEvent funcID = 20 const funcID_jmpdefer funcID = 3 const funcID_mcall funcID = 4 const funcID_morestack funcID = 5 const funcID_mstart funcID = 6 const funcID_normal funcID = 0 // not a special function
const funcID_panicwrap funcID = 19 const funcID_rt0_go funcID = 7 const funcID_runfinq funcID = 10 const funcID_runtime_main funcID = 1 const funcID_sigpanic funcID = 9 const funcID_systemstack funcID = 13 const funcID_systemstack_switch funcID = 12 const funcID_wrapper funcID = 22 // any autogenerated code (hash/eq algorithms, method wrappers, etc.)
gcAssistTimeSlack is the nanoseconds of mutator assist time that
can accumulate on a P before updating gcController.assistTime.
const gcBackgroundMode gcMode = 0 // concurrent GC and sweep
gcBackgroundUtilization is the fixed CPU utilization for background
marking. It must be <= gcGoalUtilization. The difference between
gcGoalUtilization and gcBackgroundUtilization will be made up by
mark assists. The scheduler will aim to use within 50% of this
goal.
Setting this to < gcGoalUtilization avoids saturating the trigger
feedback controller when there are no assists, which allows it to
better control CPU and heap growth. However, the larger the gap,
the more mutator assists are expected to happen, which impact
mutator latency.
const gcBitsChunkBytes uintptr = 65536 const gcBitsHeaderBytes uintptr = 16
gcCreditSlack is the amount of scan work credit that can
accumulate locally before updating gcController.scanWork and,
optionally, gcController.bgScanCredit. Lower values give a more
accurate assist ratio and make it more likely that assists will
successfully steal background credit. Higher values reduce memory
contention.
const gcDrainFlushBgCredit gcDrainFlags = 2 const gcDrainFractional gcDrainFlags = 8 const gcDrainIdle gcDrainFlags = 4 const gcDrainUntilPreempt gcDrainFlags = 1 const gcForceBlockMode gcMode = 2 // stop-the-world GC now and STW sweep (forced by user)
const gcForceMode gcMode = 1 // stop-the-world GC now, concurrent sweep
gcGoalUtilization is the goal CPU utilization for
marking as a fraction of GOMAXPROCS.
gcMarkWorkerDedicatedMode indicates that the P of a mark
worker is dedicated to running that mark worker. The mark
worker should run without preemption.
gcMarkWorkerFractionalMode indicates that a P is currently
running the "fractional" mark worker. The fractional worker
is necessary when GOMAXPROCS*gcBackgroundUtilization is not
an integer. The fractional worker should run until it is
preempted and will be scheduled to pick up the fractional
part of GOMAXPROCS*gcBackgroundUtilization.
gcMarkWorkerIdleMode indicates that a P is running the mark
worker because it has nothing else to do. The idle worker
should run until it is preempted and account its time
against gcController.idleMarkTime.
gcMarkWorkerNotWorker indicates that the next scheduled G is not
starting work and the mode should be ignored.
gcOverAssistWork determines how many extra units of scan work a GC
assist does when an assist happens. This amortizes the cost of an
assist by pre-paying for this many bytes of future allocations.
gcTriggerCycle indicates that a cycle should be started if
we have not yet started cycle number gcTrigger.n (relative
to work.cycles).
gcTriggerHeap indicates that a cycle should be started when
the heap size reaches the trigger heap size computed by the
controller.
gcTriggerTime indicates that a cycle should be started when
it's been more than forcegcperiod nanoseconds since the
previous GC cycle.
const hashRandomBytes = 128 const hashWriting = 4 // a goroutine is writing to the map
heapAddrBits is the number of bits in a heap address. On
amd64, addresses are sign-extended beyond heapAddrBits. On
other arches, they are zero-extended.
On most 64-bit platforms, we limit this to 48 bits based on a
combination of hardware and OS limitations.
amd64 hardware limits addresses to 48 bits, sign-extended
to 64 bits. Addresses where the top 16 bits are not either
all 0 or all 1 are "non-canonical" and invalid. Because of
these "negative" addresses, we offset addresses by 1<<47
(arenaBaseOffset) on amd64 before computing indexes into
the heap arenas index. In 2017, amd64 hardware added
support for 57 bit addresses; however, currently only Linux
supports this extension and the kernel will never choose an
address above 1<<47 unless mmap is called with a hint
address above 1<<47 (which we never do).
arm64 hardware (as of ARMv8) limits user addresses to 48
bits, in the range [0, 1<<48).
ppc64, mips64, and s390x support arbitrary 64 bit addresses
in hardware. On Linux, Go leans on stricter OS limits. Based
on Linux's processor.h, the user address space is limited as
follows on 64-bit architectures:
Architecture Name Maximum Value (exclusive)
---------------------------------------------------------------------
amd64 TASK_SIZE_MAX 0x007ffffffff000 (47 bit addresses)
arm64 TASK_SIZE_64 0x01000000000000 (48 bit addresses)
ppc64{,le} TASK_SIZE_USER64 0x00400000000000 (46 bit addresses)
mips64{,le} TASK_SIZE64 0x00010000000000 (40 bit addresses)
s390x TASK_SIZE 1<<64 (64 bit addresses)
These limits may increase over time, but are currently at
most 48 bits except on s390x. On all architectures, Linux
starts placing mmap'd regions at addresses that are
significantly below 48 bits, so even if it's possible to
exceed Go's 48 bit limit, it's extremely unlikely in
practice.
On 32-bit platforms, we accept the full 32-bit address
space because doing so is cheap.
mips32 only has access to the low 2GB of virtual memory, so
we further limit it to 31 bits.
On ios/arm64, although 64-bit pointers are presumably
available, pointers are truncated to 33 bits. Furthermore,
only the top 4 GiB of the address space are actually available
to the application, but we allow the whole 33 bits anyway for
simplicity.
TODO(mknyszek): Consider limiting it to 32 bits and using
arenaBaseOffset to offset into the top 4 GiB.
WebAssembly currently has a limit of 4GB linear memory.
heapArenaBitmapBytes is the size of each heap arena's bitmap.
heapArenaBytes is the size of a heap arena. The heap
consists of mappings of size heapArenaBytes, aligned to
heapArenaBytes. The initial heap mapping is one arena.
This is currently 64MB on 64-bit non-Windows and 4MB on
32-bit and on Windows. We use smaller arenas on Windows
because all committed memory is charged to the process,
even if it's not touched. Hence, for processes with small
heaps, the mapped arena space needs to be commensurate.
This is particularly important with the race detector,
since it significantly amplifies the cost of committed
memory.
const heapBitsShift = 1 // shift offset between successive bitPointer or bitScan entries
const heapStatsDep statDep = 0 // corresponds to heapStatsAggregate
const hicb = 191 // 1011 1111
const itabInitSize = 512
flags
const kindArray = 17 const kindBool = 1 const kindChan = 18 const kindComplex128 = 16 const kindComplex64 = 15 const kindDirectIface = 32 const kindFloat32 = 13 const kindFloat64 = 14 const kindFunc = 19 const kindGCProg = 64 const kindInt = 2 const kindInt16 = 4 const kindInt32 = 5 const kindInt64 = 6 const kindInt8 = 3 const kindInterface = 20 const kindMap = 21 const kindMask = 31 const kindPtr = 22 const kindSlice = 23 const kindString = 24 const kindStruct = 25 const kindUint = 7 const kindUint16 = 9 const kindUint32 = 10 const kindUint64 = 11 const kindUint8 = 8 const kindUintptr = 12 const kindUnsafePointer = 26 const largeSizeDiv = 128 const loadFactorDen = 2
Maximum average load of a bucket that triggers growth is 6.5.
Represent as loadFactorNum/loadFactorDen, to allow integer math.
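A sketch of how that integer-math form is used in an overload check; the 13/2 split and the helper name are assumptions here, not the package's code:
    const (
        bucketCnt     = 8
        loadFactorNum = 13 // assumed numerator: 13/2 == 6.5
        loadFactorDen = 2
    )

    // overLoadFactorSketch reports whether count items spread over 1<<B buckets
    // exceed an average load of 6.5 per bucket, using only integer arithmetic.
    func overLoadFactorSketch(count int, B uint8) bool {
        return count > bucketCnt && uintptr(count) > loadFactorNum*((uintptr(1)<<B)/loadFactorDen)
    }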
The default lowest and highest continuation byte.
Constants representing the lock rank of the architecture-independent locks in
the runtime. Locks with lower rank must be taken before locks with higher
rank.
Other leaf locks
Memory-related leaf locks
Generally, hchan must be acquired before gscan. But in one specific
case (in syncadjustsudogs from markroot after the g has been suspended
by suspendG), we allow gscan to be acquired, and then an hchan lock. To
allow this case, we get this lockRankHchanLeaf rank in
syncadjustsudogs(), rather than lockRankHchan. By using this special
rank, we don't allow any further locks to be acquired other than more
hchan locks.
lockRankLeafRank is the rank of lock that does not have a declared rank, and hence is
a leaf lock.
Leaf locks with no dependencies, so these constants are not actually used anywhere.
There are other architecture-dependent leaf locks as well.
Locks held above sched
Memory-related non-leaf locks
logHeapArenaBytes is log_2 of heapArenaBytes. For clarity,
prefer using heapArenaBytes where possible (we need the
constant to compute some other constants).
const logMaxPackedValue = 21 const logPallocChunkBytes = 22 const logPallocChunkPages = 9
Constants for multiplication: four random odd 64-bit numbers.
const m2 = 2820277070424839065 const m3 = 9497967016996688599 const m4 = 15839092249703872147 const mantbits32 uint = 23 const mantbits64 uint = 52 const mask2 = 31 // 0001 1111
const mask3 = 15 // 0000 1111
const mask4 = 7 // 0000 0111
const maskx = 63 // 0011 1111
const maxAlign = 8
maxAlloc is the maximum size of an allocation. On 64-bit,
it's theoretically possible to allocate 1<<heapAddrBits bytes. On
32-bit, however, this is one less than 1<<32 because the
number of bytes in the address space doesn't actually fit
in a uintptr.
const maxCPUProfStack = 64 const maxElemSize = 128
Maximum key or elem size to keep inline (instead of mallocing per element).
Must fit in a uint8.
Fast versions cannot handle big elems - the cutoff size for
fast versions in cmd/compile/internal/gc/walk.go must be at most this elem.
By construction, single page spans of the smallest object class
have the most objects per span.
maxObletBytes is the maximum bytes of an object to scan at
once. Larger objects will be split up into "oblets" of at
most this size. Since we can scan 1–2 MB/ms, 128 KB bounds
scan preemption at ~100 µs.
This must be > _MaxSmallSize so that the object base is the
span base.
maxPackedValue is the maximum value that any of the three fields in
the pallocSum may take on.
maxPagesPerPhysPage is the maximum number of supported runtime pages per
physical page, based on maxPhysPageSize.
maxPhysHugePageSize sets an upper-bound on the maximum huge page size
that the runtime supports.
maxPhysPageSize is the maximum page size the runtime supports.
Numbers fundamental to the encoding.
const maxSmallSize = 32768
max depth of stack to record in bucket
const maxTinySize = 16
maxWhen is the maximum value for timer's when field.
const maxZero = 1024 // must match value in reflect/value.go:maxZero cmd/compile/internal/gc/walk.go:zeroValSize
profile types
These values must be kept identical to their corresponding Kind* values
in the runtime/metrics package.
const metricKindFloat64 metricKind = 2 const metricKindFloat64Histogram metricKind = 3 const metricKindUint64 metricKind = 1 const minDeferAlloc uintptr = 80 const minDeferArgs uintptr = 8 const minfunc = 16 // minimum function size
minLegalPointer is the smallest possible legal pointer.
This is the smallest possible architectural page size,
since we assume that the first page is never mapped.
This should agree with minZeroPage in the compiler.
minPhysPageSize is a lower-bound on the physical page size. The
true physical page size may be larger than this. In contrast,
sys.PhysPageSize is an upper-bound on the physical page size.
const minTopHash = 5 // minimum tophash for a normal filled cell.
const mProfCycleWrap uint32 = 100663296 const msanenabled = false const mSpanDead mSpanState = 0 const mSpanInUse mSpanState = 1 // allocated for garbage collected heap
const mSpanManual mSpanState = 2 // allocated for manual management (e.g., stack allocator)
const mutexProfile bucketType = 3
sentinel bucket ID for iterator checks
const numSpanClasses = 136 const numStatsDeps statDep = 2 const numSweepClasses = 272
Offsets into internal/cpu records for use in assembly.
const oldIterator = 2 // there may be an iterator using oldbuckets
osRelaxMinNS is the number of nanoseconds of idleness to tolerate
without performing an osRelax. Since osRelax may reduce the
precision of timers, this should be enough larger than the relaxed
timer precision to keep the timer error acceptable.
Constants for testing.
const pageAlloc64Bit = 1 const pageCachePages uintptr = 64 const pageMask = 8191 const pageShift = 13 const pageSize = 8192 const pagesPerArena = 8192
pagesPerReclaimerChunk indicates how many pages to scan from the
pageInUse bitmap at a time. Used by the page reclaimer.
Higher values reduce contention on scanning indexes (such as
h.reclaimIndex), but increase the minimum latency of the
operation.
The time required to scan this many pages can vary a lot depending
on how many spans are actually freed. Experimentally, it can
scan for pages at ~300 GB/ms on a 2.6GHz Core i7, but can only
free spans at ~32 MB/ms. Using 512 pages bounds this at
roughly 100µs.
Must be a multiple of the pageInUse bitmap element size and
must also evenly divide pagesPerArena.
pagesPerSpanRoot indicates how many pages to scan from a span root
at a time. Used by special root marking.
Higher values improve throughput by increasing locality, but
increase the minimum latency of a marking operation.
Must be a multiple of the pageInUse bitmap element size and
must also evenly divide pagesPerArena.
const pallocChunkBytes = 4194304
The size of a bitmap chunk, i.e. the amount of bits (that is, pages) to consider
in the bitmap at once.
Number of bits needed to represent all indices into the L1 of the
chunks map.
See (*pageAlloc).chunks for more details. Update the documentation
there should this number change.
const pallocChunksL1Shift = 13
pallocChunksL2Bits is the number of bits of the chunk index number
covered by the second level of the chunks map.
See (*pageAlloc).chunks for more details. Update the documentation
there should this change.
const pallocSumBytes uintptr = 8
const pcbucketsize = 4096 // size of bucket in the pc->func lookup table
pollDesc contains 2 binary semaphores, rg and wg, to park reader and writer
goroutines respectively. The semaphore can be in the following states:
pdReady - io readiness notification is pending;
a goroutine consumes the notification by changing the state to nil.
pdWait - a goroutine prepares to park on the semaphore, but not yet parked;
the goroutine commits to park by changing the state to G pointer,
or, alternatively, concurrent io notification changes the state to pdReady,
or, alternatively, concurrent timeout/close changes the state to nil.
G pointer - the goroutine is blocked on the semaphore;
io notification or timeout/close changes the state to pdReady or nil respectively
and unparks the goroutine.
nil - none of the above.
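A hedged sketch of the notification side of that state machine using an atomic word; the sentinel values and helper name are assumptions, and any other value stands in for the G pointer:
    // Assumed sentinels; any other non-zero value plays the "G pointer" role.
    const (
        pdNil   uintptr = 0
        pdReady uintptr = 1
        pdWait  uintptr = 2
    )

    // notify models an io readiness notification: it moves the semaphore to
    // pdReady and returns a parked-goroutine token for the caller to unpark.
    func notify(state *uintptr) (parked uintptr) {
        for {
            old := atomic.LoadUintptr(state) // import "sync/atomic"
            if old == pdReady {
                return 0 // notification already pending
            }
            if atomic.CompareAndSwapUintptr(state, old, pdReady) {
                if old != pdNil && old != pdWait {
                    return old // a goroutine was blocked here; unpark it
                }
                return 0
            }
        }
    }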
persistentChunkSize is the number of bytes we allocate when we grow
a persistentAlloc.
physPageAlignedStacks indicates whether stack allocations must be
physical page aligned. This is a requirement for MAP_STACK on
OpenBSD.
const pollBlockSize = 4096
Error codes returned by runtime_pollReset and runtime_pollWait.
These must match the values in internal/poll/fd_poll_runtime.go.
const preemptMSupported = true const profBufBlocking profBufReadMode = 0 const profBufNonBlocking profBufReadMode = 1 const profReaderSleeping profIndex = 4294967296 // reader is sleeping and must be woken up
const profWriteExtra profIndex = 8589934592 // overflow or eof waiting
const raceenabled = false
To shake out latent assumptions about scheduling order,
we introduce some randomness into scheduling decisions
when running with the race detector.
The need for this was made obvious by changing the
(deterministic) scheduling order in Go 1.5 and breaking
many poorly-written tests.
With the randomness here, as long as the tests pass
consistently with -race, they shouldn't have latent scheduling
assumptions.
retainExtraPercent represents the amount of memory over the heap goal
that the scavenger should keep as a buffer space for the allocator.
The purpose of maintaining this overhead is to have a greater pool of
unscavenged memory available for allocation (since using scavenged memory
incurs an additional cost), to account for heap fragmentation and
the ever-changing layout of the heap.
rootBlockBytes is the number of bytes to scan per data or
BSS root.
const rune1Max = 127 const rune2Max = 2047 const rune3Max = 65535
Numbers fundamental to the encoding.
const rwmutexMaxReaders = 1073741824 const sameSizeGrow = 8 // the current map growth is to a new map of the same size
scavengeCostRatio is the approximate ratio between the costs of using previously
scavenged memory and scavenging memory.
For most systems the cost of scavenging greatly outweighs the costs
associated with using scavenged memory, making this constant 0. On other systems
(especially ones where "sysUsed" is not just a no-op) this cost is non-trivial.
This ratio is used as part of a multiplicative factor to help the scavenger account
for the additional costs of using scavenged memory in its pacing.
The background scavenger is paced according to these parameters.
scavengePercent represents the portion of mutator time we're willing
to spend on scavenging in percent.
scavengeReservationShards determines the amount of memory the scavenger
should reserve for scavenging at a time. Specifically, the amount of
memory reserved is (heap size in bytes) / scavengeReservationShards.
const selectDefault selectDir = 3 // default
const selectRecv selectDir = 2 // case <-Chan:
const selectSend selectDir = 1 // case Chan <- Send
const semaBlockProfile semaProfileFlags = 1 const semaMutexProfile semaProfileFlags = 2
Prime to not correlate with any user patterns.
const sigFixup = 3 const sigIdle = 0
sigPreempt is the signal used for non-cooperative preemption.
There's no good way to choose this signal, but there are some
heuristics:
1. It should be a signal that's passed-through by debuggers by
default. On Linux, this is SIGALRM, SIGURG, SIGCHLD, SIGIO,
SIGVTALRM, SIGPROF, and SIGWINCH, plus some glibc-internal signals.
2. It shouldn't be used internally by libc in mixed Go/C binaries
because libc may assume it's the only thing that can handle these
signals. For example SIGCANCEL or SIGSETXID.
3. It should be a signal that can happen spuriously without
consequences. For example, SIGALRM is a bad choice because the
signal handler can't tell if it was caused by the real process
alarm or not (arguably this means the signal is broken, but I
digress). SIGUSR1 and SIGUSR2 are also bad because those are often
used in meaningful ways by applications.
4. We need to deal with platforms without real-time signals (like
macOS), so those are out.
We use SIGURG because it meets all of these criteria, is extremely
unlikely to be used by an application for its "real" meaning (both
because out-of-band data is basically unused and because SIGURG
doesn't report which socket has the condition, making it pretty
useless), and even if it is, the application has to be ready for
spurious SIGURG. SIGIO wouldn't be a bad choice either, but is more
likely to be used for real.
const sigReceiving = 1 const sigSending = 2 const sizeofSkipFunction = 256 const smallSizeDiv = 8 const smallSizeMax = 1024 const spanAllocHeap spanAllocType = 0 // heap span
const spanAllocPtrScalarBits spanAllocType = 2 // unrolled GC prog bitmap span
const spanAllocStack spanAllocType = 1 // stack span
const spanAllocWorkBuf spanAllocType = 3 // work buf span
const spanSetBlockEntries = 512 // 4KB on 64-bit
const spanSetInitSpineCap = 256 // Enough for 1GB heap on 64-bit
stackDebug == 0: no logging
== 1: logging of per-stack operations
== 2: logging of per-frame operations
== 3: logging of per-word updates
== 4: logging of per-word reads
const stackFaultOnFree = 0 // old stacks are mapped noaccess to detect use after free
Thread is forking.
Stored into g->stackguard0 to cause split stack check failure.
Must be greater than any real sp.
const stackFromSystem = 0 // allocate stacks from system memory instead of the heap
const stackNoCache = 0 // disable per-P small stack caches
const stackPoisonCopy = 0 // fill stack that should not be accessed with garbage, to detect bad dereferences during copy
Goroutine preemption request.
Stored into g->stackguard0 to cause split stack check failure.
Must be greater than any real sp.
0xfffffade in hex.
const stackTraceDebug = false const summaryL0Bits = 14
The number of radix bits for each level.
The value of 3 is chosen such that the block of summaries we need to scan at
each level fits in 64 bytes (2^3 summaries * 8 bytes per summary), which is
close to the L1 cache line width on many systems. Also, a value of 3 fits 4 tree
levels perfectly into the 21-bit pallocBits summary field at the root level.
The following equation explains how each of the constants relate:
summaryL0Bits + (summaryLevels-1)*summaryLevelBits + logPallocChunkBytes = heapAddrBits
summaryLevels is an architecture-dependent value defined in mpagealloc_*.go.
The number of levels in the radix tree.
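Plugging in the values quoted elsewhere in this listing for a 48-bit address space (summaryL0Bits = 14, logPallocChunkBytes = 22, 3-bit levels) with an assumed summaryLevels of 5, the equation balances; the array trick below fails to compile if it does not:
    const (
        heapAddrBits        = 48
        summaryL0Bits       = 14
        summaryLevelBits    = 3 // the "value of 3" described above
        summaryLevels       = 5 // assumed 64-bit value
        logPallocChunkBytes = 22
    )

    // 14 + (5-1)*3 + 22 == 48; a nonzero remainder would be an out-of-range index.
    var _ = [1]struct{}{}[summaryL0Bits+(summaryLevels-1)*summaryLevelBits+logPallocChunkBytes-heapAddrBits]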
Code points in the surrogate range are not valid for UTF-8.
const sweepClassDone sweepClass = 4294967295
sweepMinHeapDistance is a lower bound on the heap distance
(in bytes) reserved for concurrent sweeping between GC
cycles.
const sysStatsDep statDep = 1 // corresponds to sysStatsAggregate
const t1 = 0 // 0000 0000
const t2 = 192 // 1100 0000
const t3 = 224 // 1110 0000
const t4 = 240 // 1111 0000
const t5 = 248 // 1111 1000
const tagAllocSample = 17 const tagBSS = 13 const tagData = 12 const tagDefer = 14 const tagEOF = 0 const tagFinalizer = 7 const tagGoroutine = 4 const tagItab = 8 const tagMemProf = 16 const tagMemStats = 10 const tagObject = 1 const tagOSThread = 9 const tagOtherRoot = 2 const tagPanic = 15 const tagParams = 6 const tagQueuedFinalizer = 11 const tagStackFrame = 5 const tagType = 3
testSmallBuf forces a small write barrier buffer to stress write
barrier flushing.
const tflagExtraStar tflag = 2 const tflagNamed tflag = 4 const tflagRegularMemory tflag = 8 // equal and hash can treat values of this type as a single region of t.size bytes
const tflagUncommon tflag = 1 const timeHistNumSubBuckets = 16 const timeHistNumSuperBuckets = 45
For the time histogram type, we use an HDR histogram.
Values are placed in super-buckets based solely on the most
significant set bit. Thus, super-buckets are power-of-2 sized.
Values are then placed into sub-buckets based on the value of
the next timeHistSubBucketBits most significant bits. Thus,
sub-buckets are linear within a super-bucket.
Therefore, the number of sub-buckets (timeHistNumSubBuckets)
defines the error. This error may be computed as
1/timeHistNumSubBuckets*100%. For example, for 16 sub-buckets
per super-bucket the error is approximately 6%.
The number of super-buckets (timeHistNumSuperBuckets), on the
other hand, defines the range. To reserve room for sub-buckets,
bit timeHistSubBucketBits is the first bit considered for
super-buckets, so super-bucket indices are adjusted accordingly.
As an example, consider 45 super-buckets with 16 sub-buckets.
00110
^----
│ ^
│ └---- Lowest 4 bits -> sub-bucket 6
└------- Bit 4 unset -> super-bucket 0
10110
^----
│ ^
│ └---- Next 4 bits -> sub-bucket 6
└------- Bit 4 set -> super-bucket 1
100010
^----^
│ ^ └-- Lower bits ignored
│ └---- Next 4 bits -> sub-bucket 1
└------- Bit 5 set -> super-bucket 2
Following this pattern, bucket 45 will have the bit 48 set. We don't
have any buckets for higher values, so the highest sub-bucket will
contain values of 2^48-1 nanoseconds or approx. 3 days. This range is
more than enough to handle durations produced by the runtime.
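A self-contained sketch of that bucketing (timeHistSubBucketBits and the helper are assumed names); it reproduces the three worked examples above:
    package main

    import (
        "fmt"
        "math/bits"
    )

    const (
        timeHistSubBucketBits = 4                          // log2 of 16 sub-buckets (assumed)
        timeHistNumSubBuckets = 1 << timeHistSubBucketBits // 16
    )

    // bucketFor derives the super-bucket from the most significant set bit and
    // the sub-bucket from the next timeHistSubBucketBits bits, as described above.
    func bucketFor(v uint64) (super, sub uint) {
        l := uint(bits.Len64(v)) // position of the highest set bit; 0 for v == 0
        if l <= timeHistSubBucketBits {
            return 0, uint(v) // small values index super-bucket 0 directly
        }
        super = l - timeHistSubBucketBits
        sub = uint(v>>(l-1-timeHistSubBucketBits)) % timeHistNumSubBuckets
        return super, sub
    }

    func main() {
        for _, v := range []uint64{0b00110, 0b10110, 0b100010} {
            s, b := bucketFor(v)
            fmt.Printf("%06b -> super-bucket %d, sub-bucket %d\n", v, s, b)
        }
    }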
const timeHistTotalBuckets = 721
The timer is deleted and should be removed.
It should not be run, but it is still in some P's heap.
The timer has been modified to an earlier time.
The new when value is in the nextwhen field.
The timer is in some P's heap, possibly in the wrong place.
The timer has been modified to the same or a later time.
The new when value is in the nextwhen field.
The timer is in some P's heap, possibly in the wrong place.
The timer is being modified.
The timer will only have this status briefly.
The timer has been modified and is being moved.
The timer will only have this status briefly.
Timer has no status set yet.
The timer has been stopped.
It is not in any P's heap.
The timer is being removed.
The timer will only have this status briefly.
Running the timer function.
A timer will only have this status briefly.
Waiting for timer to fire.
The timer is in some P's heap.
const tinySizeClass int8 = 2
const tinySpanClass spanClass = 5
The constant is known to the compiler.
There is no fundamental theory behind this number.
Shift of the number of arguments in the first event byte.
Keep a cached value to make gotraceback fast,
since we call it on every call to gentraceback.
The cached value is a uint32 in which the low bits
are the "crash" and "all" settings and the remaining
bits are the traceback value (0 off, 1 on, 2 include system).
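A rough sketch of how such a packed value could be decoded (illustration only; the exact bit layout below, with "crash" in bit 0 and "all" in bit 1, is an assumption rather than a statement of the runtime's encoding):

    package main

    import "fmt"

    // Assumed bit layout for illustration: crash flag in bit 0, all flag
    // in bit 1, and the traceback level (0 off, 1 on, 2 include system)
    // stored in the remaining high bits.
    const (
        tracebackCrash = 1 << iota // low bit: GOTRACEBACK=crash requested
        tracebackAll               // next bit: include all goroutines
        tracebackShift = iota      // level stored above the two flag bits
    )

    // decode unpacks a cached traceback setting.
    func decode(cache uint32) (level int32, all, crash bool) {
        crash = cache&tracebackCrash != 0
        all = cache&tracebackAll != 0
        level = int32(cache >> tracebackShift)
        return
    }

    func main() {
        // A cache value corresponding to level 2 with both flags set.
        cache := uint32(2)<<tracebackShift | tracebackAll | tracebackCrash
        fmt.Println(decode(cache)) // prints: 2 true true
    }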
Maximum number of bytes to encode uint64 in base-128.
Event types in the trace, args are given in square brackets.
Flag passed to traceGoPark to denote that the previous wakeup of this
goroutine was futile. For example, a goroutine was unblocked on a mutex,
but another goroutine got ahead and acquired the mutex before the first
goroutine was scheduled, so the first goroutine had to block again.
Such wakeups happen on buffered channels and sync.Mutex,
but are generally not interesting to the end user.
Identifier of a fake P that is used when we trace without a real P.
Maximum number of PCs in a single stack trace.
Since events contain only stack id rather than whole stack trace,
we can allow quite large values here.
Timestamps in trace are cputicks/traceTickDiv.
This makes absolute values of timestamp diffs smaller,
and so they can be encoded in fewer bytes.
64 on x86 is somewhat arbitrary (one tick is ~20ns on a 3GHz machine).
The suggested increment frequency for PowerPC's time base register is
512 MHz according to Power ISA v2.07 section 6.2, so we use 16 on ppc64
and ppc64le.
Tracing won't work reliably for architectures where cputicks is emulated
by nanotime, so the value doesn't matter for those architectures.
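A small illustration (assumed values, not runtime code) of why the division helps: trace timestamps are written as deltas in base-128 varints, so dividing by traceTickDiv shrinks the deltas and therefore the number of bytes needed to encode them.

    package main

    import (
        "encoding/binary"
        "fmt"
    )

    const traceTickDiv = 64 // assumed x86 value, per the comment above

    func main() {
        // Two cputicks readings roughly 1ms apart on a ~3GHz machine.
        t0, t1 := uint64(1_000_000_000), uint64(1_003_000_000)

        raw := make([]byte, binary.MaxVarintLen64)
        div := make([]byte, binary.MaxVarintLen64)
        nRaw := binary.PutUvarint(raw, t1-t0)
        nDiv := binary.PutUvarint(div, t1/traceTickDiv-t0/traceTickDiv)
        fmt.Println(nRaw, nDiv) // the divided delta needs fewer varint bytes
    }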
const tx = 128 // 1000 0000
Cache of types that have been serialized already.
We use a type's hash field to pick a bucket.
Inside a bucket, we keep a list of types that
have been serialized so far, most recently used first.
Note: when a bucket overflows we may end up
serializing a type more than once. That's ok.
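A minimal sketch of such a hash-bucketed, most-recently-used cache (illustrative only; the names and sizes here are assumptions, not the runtime's actual layout):

    package main

    import "fmt"

    // Illustrative sizes; the runtime's actual values may differ.
    const (
        cacheBuckets = 256 // number of buckets, selected by the type's hash
        cacheAssoc   = 4   // entries kept per bucket, most recently used first
    )

    // typeInfo stands in for the runtime's type descriptor; only a hash
    // and a name are needed for this sketch.
    type typeInfo struct {
        hash uint32
        name string
    }

    var cache [cacheBuckets][cacheAssoc]*typeInfo

    // serializeOnce emits t the first time it is seen (here: prints it)
    // and keeps its bucket ordered most recently used first. When a
    // bucket overflows, an old entry is evicted and that type may be
    // emitted again later, which is harmless for this use.
    func serializeOnce(t *typeInfo) {
        b := &cache[t.hash%cacheBuckets]
        for i, e := range b {
            if e == t {
                // Cache hit: move t to the front of the bucket.
                copy(b[1:i+1], b[:i])
                b[0] = t
                return
            }
        }
        // Cache miss: emit the type and evict the least recently used entry.
        fmt.Println("serialize", t.name)
        copy(b[1:], b[:cacheAssoc-1])
        b[0] = t
    }

    func main() {
        a := &typeInfo{hash: 7, name: "A"}
        serializeOnce(a)
        serializeOnce(a) // hit: not serialized again
    }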
const uintptrMask = 18446744073709551615
const usesLR = false
verifyTimers can be set to true to add debugging checks that the
timer heaps are valid.
const waitReasonChanReceive waitReason = 14 // "chan receive"
const waitReasonChanReceiveNilChan waitReason = 3 // "chan receive (nil chan)"
const waitReasonChanSend waitReason = 15 // "chan send"
const waitReasonChanSendNilChan waitReason = 4 // "chan send (nil chan)"
const waitReasonDebugCall waitReason = 26 // "debug call"
const waitReasonDumpingHeap waitReason = 5 // "dumping heap"
const waitReasonFinalizerWait waitReason = 16 // "finalizer wait"
const waitReasonForceGCIdle waitReason = 17 // "force gc (idle)"
const waitReasonGarbageCollection waitReason = 6 // "garbage collection"
const waitReasonGarbageCollectionScan waitReason = 7 // "garbage collection scan"
const waitReasonGCAssistMarking waitReason = 1 // "GC assist marking"
const waitReasonGCAssistWait waitReason = 11 // "GC assist wait"
const waitReasonGCScavengeWait waitReason = 13 // "GC scavenge wait"
const waitReasonGCSweepWait waitReason = 12 // "GC sweep wait"
const waitReasonGCWorkerIdle waitReason = 24 // "GC worker (idle)"
const waitReasonIOWait waitReason = 2 // "IO wait"
const waitReasonPanicWait waitReason = 8 // "panicwait"
const waitReasonPreempted waitReason = 25 // "preempted"
const waitReasonSelect waitReason = 9 // "select"
const waitReasonSelectNoCases waitReason = 10 // "select (no cases)"
const waitReasonSemacquire waitReason = 18 // "semacquire"
const waitReasonSleep waitReason = 19 // "sleep"
const waitReasonSyncCondWait waitReason = 20 // "sync.Cond.Wait"
const waitReasonTimerGoroutineIdle waitReason = 21 // "timer goroutine (idle)"
const waitReasonTraceReaderBlocked waitReason = 22 // "trace reader (blocked)"
const waitReasonWaitForGCCycle waitReason = 23 // "wait for GC cycle"
const waitReasonZero waitReason = 0 // ""
wbBufEntries is the number of write barriers between
flushes of the write barrier buffer.
This trades latency for throughput amortization. Higher
values amortize flushing overhead more, but increase the
latency of flushing. Higher values also increase the cache
footprint of the buffer.
TODO: What is the latency cost of this? Tune this value.
wbBufEntryPointers is the number of pointers added to the
buffer by each write barrier.
const wordsPerBitmapByte = 4 // heap words described by one bitmap byte
workbufAlloc is the number of bytes to allocate at a time
for new workbufs. This must be a multiple of pageSize and
should be a multiple of _WorkbufSize.
Larger values reduce workbuf allocation overhead. Smaller
values reduce heap fragmentation.
The pages are generated with Golds v0.4.2. (GOOS=darwin GOARCH=amd64) Golds is a Go 101 project developed by Tapir Liu. PR and bug reports are welcome and can be submitted to the issue list.