Resolving pod restarts for a k8s-based Go server

Hi,
I’m seeing an issue where the pod for my Go-based server restarts multiple times during a load test with 5k requests and a max CPU usage of 1000m (1 core).
I do not see any panic occurring at the pod level, and the logs suggest there might be an I/O wait (I can see many entries like the trace below).
I also see some goroutine stacks marked with semacquire:

    goroutine 130355 [runnable]:
    internal/poll.runtime_Semacquire(0xc0000d20cc)
    	/usr/local/go/src/runtime/sema.go:61 +0x45
    internal/poll.(*fdMutex).rwlock(0xc0000d20c0, 0x196f200, 0x171b1c0)
    	/usr/local/go/src/internal/poll/fd_mutex.go:154 +0xb3
    internal/poll.(*FD).writeLock(...)
    	/usr/local/go/src/internal/poll/fd_mutex.go:239
    internal/poll.(*FD).Write(0xc0000d20c0, 0xc0023b0000, 0x560, 0x8fe, 0x0, 0x0, 0x0)
    	/usr/local/go/src/internal/poll/fd_unix.go:261 +0x6e
    os.(*File).write(...)
    	/usr/local/go/src/os/file_posix.go:48
    os.(*File).Write(0xc000010018, 0xc0023b0000, 0x560, 0x8fe, 0x0, 0x0, 0x10)
    	/usr/local/go/src/os/file.go:174 +0x8e
    encoding/json.(*Encoder).Encode(0xc000e27068, 0x171b1c0, 0xc001df1e60, 0xd, 0x29bfc80)
    	/usr/local/go/src/encoding/json/stream.go:231 +0x1df
    github.com/krogertechnology/krogo/pkg/log.(*logger).log(0xc00020e580, 0x4, 0x0, 0x0, 0xc000b73b80, 0x1, 0x1)

Can I make some logical assumption from this, e.g. a goroutine leak somewhere in my code, or could it be some other issue?

One important setting to avoid requests idling forever is to set ReadTimeout, WriteTimeout and IdleTimeout on your http.Server; maybe this helps:

    srv := &http.Server{
        Addr:         ":8080", // adjust to your service port
        ReadTimeout:  5 * time.Second,
        WriteTimeout: 10 * time.Second,
        IdleTimeout:  5 * time.Second,
    }
    if err := srv.ListenAndServe(); err != nil {
        log.Fatal(err)
    }

Take a look at these links:

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.