Go garbage collection


(Gaurav Gupta) #1

I am running a Go program in a Docker container with a memory limit (2 GB) set on the container, on a host with 16 GB of memory. Eventually, the program consumes all of the 2 GB and is OOMKilled. This is because the program sees the total host memory and keeps allocating until it is killed. Is there a way to pass flags or environment variables when running the program so that it sees only the limited memory available? Also, when the GC runs, is the memory released back to the OS?
I am using Go version 1.10.2.
Recompiling the program might not be possible in my situation, so I am looking for a runtime option or environment variable.

Thanks.
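(For readers with the same question: the one GC knob the Go 1.10 runtime reads from the environment, with no recompile needed, is GOGC, the percentage of heap growth that triggers the next collection, default 100. It shapes how fast the heap grows between collections but is not a hard cap; a true soft memory limit, GOMEMLIMIT, only arrived much later, in Go 1.19. A sketch, with a hypothetical binary name:)

```shell
# Make the collector more aggressive: start a GC cycle after the
# live heap grows 50% (default is 100%). Trades CPU for lower peak RSS.
GOGC=50 ./myprogram
```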


(Johan Dahl) #2

Hi

What happens if you use the -m flag?

-m or --memory=	The maximum amount of memory the container can use.
If you set this option, the minimum allowed value is 4m (4 megabyte).
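For reference, with plain Docker that looks like this (image name is hypothetical):

```shell
# Cap the container at 2 GiB of memory; the kernel's OOM killer
# terminates processes in the container if the limit is exceeded.
docker run --memory=2g my-go-image
```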

(Gaurav Gupta) #3

Thanks for the response, Johan.
I am using Kubernetes to run my containers. Kubernetes allows resource limits to be set for each container, and this limit is used as the value of the --memory flag in the docker run command. The behaviour I described in my question occurs after setting the limit to 2 GB: the container is OOMKilled when memory consumption grows beyond it.


(Wèi Cōngruì) #4

I don’t think the Go runtime can use the memory-limit information to decide how to do GC.
There is a related issue: https://github.com/golang/go/issues/23044

I think this is a simple problem: your program just allocates too much memory.


(Johan Dahl) #5

Here is a discussion from 2014 with possible future solutions. I have hardly used containers, but this is a really interesting problem.


(Gianni Salinetti) #6

Hi ggaurav10,

If you reach the memory limit, the kernel will aggressively drop page caches, which soon leads to an OOM state.
To see the memory allocation on the node, look at the kubelet logs and check for messages confirming that the pod was evicted after the node reached its memory-pressure threshold.
Resource limits are strictly related to the QoS class of pods: if the node comes under excessive memory pressure, a pod exceeding its limit is likely to be evicted first.
If you only set the resource limit and not the resource request, you have created a Burstable pod. You can try creating a Guaranteed pod by defining the same values for resource limits and requests.
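For example, a container spec along these lines (names hypothetical) gets the Guaranteed QoS class, since every resource has its request equal to its limit:

```yaml
    containers:
    - name: app
      image: my-go-image:latest   # hypothetical image
      resources:
        requests:
          memory: "2Gi"           # request == limit on every resource
          cpu: "500m"             # -> Guaranteed QoS, evicted last
        limits:
          memory: "2Gi"
          cpu: "500m"
```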

Gianni