SetMemoryLimit but still OOMKilled (Out of memory killed)

I have a Go server that has to serve heavy traffic. The Kubernetes pod is limited to 5 GB and I call SetMemoryLimit with 3.5 GB to avoid the pod being OOMKilled (out-of-memory killed), but it still happens. Maybe I misunderstood how it works; can anyone help me?
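For context, this is roughly how I set it at startup (a simplified sketch; the real server does more, and the 3.5 GB value is the one mentioned above):

```go
package main

import "runtime/debug"

func main() {
	// Soft limit for the Go runtime, in bytes: 3.5 GiB (3584 MiB).
	// The pod's hard limit is 5 GiB, so this leaves ~1.5 GiB of headroom.
	debug.SetMemoryLimit(3584 << 20)

	// ... start the HTTP server, etc.
}
```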


Why is it using so much memory? Can you increase the pod limit?

Read the third paragraph here: debug package - runtime/debug - Go Packages. It gives examples of memory usage that is outside the control of SetMemoryLimit. That's likely what you need to look for.

Your application could have long-lived heap allocations, such as an in-memory cache. If that memory is always live, the GC cannot do much about it.

This is a nice post that explains this a little better.
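To make that concrete, here is a minimal sketch (the cache name and sizes are made up for illustration) of how a long-lived, package-level cache pins heap memory that the GC can never reclaim, regardless of the limit:

```go
package main

import (
	"runtime/debug"
	"strconv"
)

// cache is package-level, so everything stored in it stays reachable for the
// lifetime of the process: the GC cannot free it, whatever the limit is.
var cache = map[string][]byte{}

func main() {
	debug.SetMemoryLimit(3584 << 20) // 3.5 GiB soft limit

	// Pin ~2 GiB of live data (2048 entries of 1 MiB each). This counts
	// against the soft limit but can never be collected; only the remaining
	// headroom is left for per-request allocations.
	for i := 0; i < 2048; i++ {
		cache[strconv.Itoa(i)] = make([]byte, 1<<20)
	}

	// ... serve traffic ...
}
```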

The Go authors explicitly label GOMEMLIMIT a “soft” limit. That means the Go runtime does not guarantee that memory usage will stay below the limit; it only uses it as a target.
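One practical consequence is that you want to leave headroom below the container's hard limit for memory the runtime does not control. A minimal sketch of how the soft limit could be derived at startup, assuming cgroup v2 (where the container's limit appears at /sys/fs/cgroup/memory.max):

```go
package main

import (
	"bytes"
	"os"
	"runtime/debug"
	"strconv"
)

// setLimitFromCgroup sets the Go soft limit to ~90% of the container's hard
// memory limit, leaving headroom for memory outside the runtime's control
// (goroutine stacks, cgo, OS overheads, etc.).
func setLimitFromCgroup() {
	data, err := os.ReadFile("/sys/fs/cgroup/memory.max")
	if err != nil {
		return // not running under cgroup v2; keep the default (no limit)
	}
	s := string(bytes.TrimSpace(data))
	if s == "max" {
		return // no container limit configured
	}
	hard, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		return
	}
	debug.SetMemoryLimit(hard * 9 / 10) // leave ~10% headroom
}

func main() {
	setLimitFromCgroup()
	// ... start the server ...
}
```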


According to the docs you linked, the memory managed by the runtime is runtime.MemStats.Sys - runtime.MemStats.HeapReleased.
But I found this post, Explore Prometheus Go, explaining that the memory managed by the runtime is runtime.MemStats.HeapIdle - runtime.MemStats.HeapReleased.
Can you explain it, please? When I graph the Go Prometheus metrics in Grafana, case 1 is always >= the limit I set, but case 2 is <=, and I think case 2 might be correct :))
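Both quantities can be logged straight from runtime.ReadMemStats and compared with the Grafana graphs. A small sketch (the first expression is the rough approximation given in the runtime/debug docs; the field names are HeapIdle and HeapReleased):

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)

	// What the runtime/debug docs describe the limit as applying to:
	// everything mapped by the runtime, minus memory already returned to the OS.
	managed := m.Sys - m.HeapReleased

	// The other expression: idle heap spans that have not yet been
	// returned to the OS (memory the runtime could still give back).
	idleNotReleased := m.HeapIdle - m.HeapReleased

	fmt.Printf("Sys - HeapReleased      = %d MiB\n", managed>>20)
	fmt.Printf("HeapIdle - HeapReleased = %d MiB\n", idleNotReleased>>20)
}
```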

Hi telo_tade,

Of course I can, but my goal is to use fewer resources. I want my app not to die for any reason; it is acceptable for it to drop some requests or respond with higher latency.

Hi luk4z7,
I read it and installed the Go Prometheus client to graph my application. But I'm still curious: how can I find out exactly how much long-lived heap memory my application uses?

In this case, it’s a good idea to profile your application.

When the GC runs many times in a short period, it may indicate that some operation in the program is allocating too much memory.
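A minimal way to expose a heap profile from a running server, assuming the standard net/http/pprof package (the listen address here is arbitrary):

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on http.DefaultServeMux
)

func main() {
	// Expose the profiling endpoints on a separate, internal-only port.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... the real server runs here ...
	select {}
}
```

Then `go tool pprof http://localhost:6060/debug/pprof/heap` shows which call sites hold the live heap (inuse_space), which is essentially the long-lived allocation picture asked about above.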
