I'm working on a project in Go with OpenGL and have code that loads an image file via the standard image package. The function keeps no lasting pointers to the decoded image and then returns. I would expect this memory to be reclaimed on the next GC cycle, but it does not appear to be. I'm hoping someone who understands Go a little better can help me see why the image isn't being freed.
Gist of the code: https://gist.github.com/gjh33/62a75ccde6a7d849311804d31d7ee9ff
When not calling this method, the memory footprint is 17 MB; when calling it, it is 40 MB. At no point is this memory GC'd, even after waiting 5 minutes.
Some things to keep in mind if you haven't worked with OpenGL in Go:
> when I leave scope of the function, I would hope this memory gets cleared
This is the chief misconception: Go is a garbage-collected language, which means memory is only freed during so-called garbage collections. These happen periodically and are not in any way triggered by variables in the executing code going out of scope.
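You can see this directly with a minimal sketch (the 23 MB byte slice below merely stands in for your decoded image): the allocation becomes unreachable the moment the last reference is dropped, yet the heap statistics only shrink once a collection actually runs, here forced with runtime.GC:

```go
package main

import (
	"fmt"
	"runtime"
)

// sink keeps the allocation reachable until we drop it explicitly.
var sink []byte

func printHeap(tag string) {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("%-12s HeapAlloc = %d MiB\n", tag, m.HeapAlloc>>20)
}

func main() {
	printHeap("start")
	sink = make([]byte, 23<<20) // stands in for the decoded image
	printHeap("allocated")
	sink = nil // unreachable now, but nothing has freed it yet
	printHeap("unreachable")
	runtime.GC()          // force one collection cycle
	printHeap("after GC") // only now does HeapAlloc drop back down
}
```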
It suffices to say that in the GC algorithm Go implements, each collection cycle consists of two consecutive phases: marking (the scan) and sweeping. During the mark phase, all live objects are traversed (via the pointers they hold to each other, if any), starting from the stacks of the running goroutines and from global variables; those which turn out to be unreachable from these roots are marked for freeing, which happens during the sweep phase.
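If you want to watch the mark phase decide that an object has become unreachable, a finalizer makes the moment visible. A sketch, where blob is just an illustrative type:

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

type blob struct{ pixels []byte }

func main() {
	b := &blob{pixels: make([]byte, 23<<20)}
	runtime.SetFinalizer(b, func(*blob) {
		fmt.Println("blob found unreachable and collected")
	})

	b = nil      // drop the only reference: the blob is no longer reachable
	runtime.GC() // the mark phase can't reach it, so it is queued for freeing
	time.Sleep(100 * time.Millisecond) // give the finalizer goroutine a moment
}
```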
The Go runtime implements a quite sophisticated pacer (an "estimator") which tries to deduce the target heap size at which to start the next GC cycle, in order to strike a balance between heap usage and the CPU cost paid for performing a collection.
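That target is what the GOGC environment variable, and its programmatic equivalent debug.SetGCPercent, controls; a sketch of tuning it to collect more eagerly:

```go
package main

import "runtime/debug"

func main() {
	// GOGC=100 (the default) aims the next collection at roughly
	// twice the heap that was live after the previous one, so a
	// program idling at ~17 MB won't hurry to collect a one-off
	// ~23 MB spike.
	debug.SetGCPercent(50) // trigger collections at ~1.5x the live heap
	// ... the rest of the program now runs with a more eager GC pacer
}
```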
All of this means two things. First, the GC may simply not have run yet: with the default GOGC=100, a new cycle starts only once the heap has roughly doubled relative to what was live after the previous one, so a single ~23 MB allocation in an otherwise small program can linger for a long time before being collected. Second, even after the GC sweeps the memory, the runtime does not return the freed pages to the OS right away; it keeps them for future allocations and releases them only gradually, so the footprint you see in a process monitor stays high long after the Go heap itself has shrunk.
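If you just want the process footprint to reflect what is actually live right after loading the texture, you can force both steps by hand with debug.FreeOSMemory. A sketch, where loadTexture is a hypothetical stand-in for the function in your gist:

```go
package main

import "runtime/debug"

// loadTexture is a hypothetical stand-in for the function in the gist:
// it decodes an image, would hand the pixels to OpenGL, and returns
// without keeping any reference to the decoded data.
func loadTexture() {
	pixels := make([]byte, 23<<20)
	pixels[0] = 1 // touch the buffer so the allocation really happens
}

func main() {
	loadTexture()
	// Runs a garbage collection and then returns as much freed memory
	// to the operating system as the runtime can manage.
	debug.FreeOSMemory()
}
```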
On a side note, the original mental model of how a GC works which you held is not untrue per se: there do exist programming languages with no explicit memory management which actually de-allocate memory in cases like the one you mentioned. This is typical of (at least originally) scripting languages such as Python, Tcl, Perl (≤ 5 at least) and so on. These languages use so-called reference counting for the values they operate on. The logic is basically that every assignment of a value to a variable (including passing it as a function argument) increments the number of references recorded for that value, and when execution leaves the scope of a variable, the refcount of the value held in it is decremented. When a value's refcount drops to 0, the value is freed.
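Here is a toy sketch of that bookkeeping in Go terms; rcValue, retain and release are made-up names standing in for what a refcounting runtime does implicitly on every assignment and every scope exit:

```go
package main

import "fmt"

type rcValue struct {
	refs int
	data []byte
}

func (v *rcValue) retain() { v.refs++ } // a variable starts referring to v

func (v *rcValue) release() { // a variable referring to v leaves scope
	v.refs--
	if v.refs == 0 {
		v.data = nil // freed immediately, no GC cycle involved
		fmt.Println("value freed")
	}
}

func main() {
	v := &rcValue{refs: 1, data: make([]byte, 1<<20)} // bound to v: refcount 1
	v.retain()  // passed as a function argument: refcount 2
	v.release() // the callee returns: refcount 1
	v.release() // v itself goes out of scope: refcount 0, freed right now
}
```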
This approach works, and appears to be natural, but it has certain downsides, such as the constant bookkeeping cost of updating a refcount on every single assignment, and the inability to reclaim cyclic structures: two values referencing each other keep both refcounts above zero forever, even when nothing else can reach them.
I would also add that this scheme does not play well with concurrent access to variables. Once you add concurrency (as in Go) to the mix, every update of those refcount fields has to be made mutually exclusive, and nasty problems follow, such as: what do you do when one thread of execution drops its reference, sees the refcount cross zero and frees the value, and then another thread, which had been waiting on the former in order to increment the reference, gets unblocked and finds that the value it wanted to reference no longer exists?
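To make the toy from above safe for concurrent use, the counter at least has to become atomic, and the resurrection trap just described still has to be designed around (the names remain illustrative, not a real API):

```go
package main

import "sync/atomic"

type rcValue struct {
	refs atomic.Int64
	data []byte
}

func (v *rcValue) retain() { v.refs.Add(1) }

func (v *rcValue) release() {
	if v.refs.Add(-1) == 0 {
		// Last reference is gone, so free the payload. But note the
		// trap described above: if another goroutine holding only a
		// bare pointer called retain() right now, it would resurrect
		// a value that is already being destroyed. Real refcounting
		// runtimes must forbid taking a new reference unless the
		// caller already owns one.
		v.data = nil
	}
}

func main() {
	v := &rcValue{data: make([]byte, 1<<20)}
	v.refs.Store(1) // the creating goroutine owns the first reference
	v.retain()      // handed to another goroutine: refcount 2
	v.release()     // that goroutine is done: refcount 1
	v.release()     // the owner is done: refcount 0, freed
}
```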