I am running a Go program in a Docker container with a memory limit (2 GB) set on the container, on a host with 16 GB of memory. Eventually, the program consumes all of the limit (2 GB) and is OOMKilled. This is because the program sees the total host memory and keeps allocating until it is OOMKilled. Is there a way to pass flags or environment variables when running the program so that it sees only the limited memory available? Also, when the GC runs, is the memory released back to the OS?
I am using Go version 1.10.2.
Recompiling the program might not be possible in my situation, so I am looking for a runtime option or environment variable.
Thanks for the response, Johan.
I am using Kubernetes to run my containers. Kubernetes allows resource limits to be set for each container, and this limit is used as the value of the --memory flag in the docker run command. The behaviour I have detailed in my question is seen after setting the limit to 2 GB: the container is OOMKilled when memory consumption grows beyond it.
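For reference, a container memory limit in a pod spec looks like this (pod name and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: go-app                        # hypothetical name
spec:
  containers:
  - name: go-app
    image: example/go-app:latest      # hypothetical image
    resources:
      limits:
        memory: "2Gi"                 # maps to docker run --memory
```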
Once you approach the memory limit, the kernel will aggressively drop page caches, and eventually you end up in an OOM state.
To see the memory allocation on the node, try looking at the kubelet logs and check for messages confirming that the pod was evicted after the node hit memory pressure.
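A few commands that can surface this (the pod name is a placeholder, and the journalctl/dmesg commands must be run on the node itself):

```shell
# Why was the container killed? Look for "Reason: OOMKilled" in the output.
kubectl describe pod <pod-name>

# Cluster events mentioning evictions
kubectl get events | grep -i evict

# On the node: kubelet eviction messages and kernel OOM-killer output
journalctl -u kubelet | grep -iE "evict|memory pressure"
dmesg | grep -i "killed process"
```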
The resource limit is closely related to the QoS class of pods. If the node comes under excessive memory pressure, pods exceeding their limits are likely to be evicted first.
If you only set the resource limit and not the resource request, you have created a Burstable pod. You can try creating a Guaranteed pod by defining the same values for resource limits and requests.
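For a Guaranteed pod, every container must have requests equal to limits for both CPU and memory. A sketch (names and values are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: go-app-guaranteed             # hypothetical name
spec:
  containers:
  - name: go-app
    image: example/go-app:latest      # hypothetical image
    resources:
      requests:
        memory: "2Gi"
        cpu: "500m"
      limits:
        memory: "2Gi"                 # equal to the request -> Guaranteed QoS
        cpu: "500m"
```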