Go program gets stuck


(torpido) #1

Hi all,

I started working with Go recently and tried to integrate it with Redis, using the radix library.
From time to time my program gets stuck, and I can't figure out what causes it to hang like this.
I tried to use pprof to analyze it better, but the results did not point to a cause.

In the goroutine profile, almost 100% of the goroutines are parked in runtime.gopark:

Type: goroutine
Showing nodes accounting for 110, 100% of 110 total
      flat  flat%   sum%        cum   cum%
108 98.18% 98.18% 108 98.18% runtime.gopark
1 0.91% 99.09% 1 0.91% runtime.notetsleepg
1 0.91% 100% 1 0.91% runtime/pprof.writeRuntimeProfile

When I use pprof with the CPU profile, 100% of the samples are in runtime.usleep, although only 10ms of samples were collected over the whole 10s:

Duration: 10s, Total samples = 10ms ( 0.1%)
Showing nodes accounting for 10ms, 100% of 10ms total
      flat  flat%   sum%        cum   cum%
10ms 100% 100% 10ms 100% runtime.usleep
0 0% 100% 10ms 100% runtime.mstart
0 0% 100% 10ms 100% runtime.mstart1
0 0% 100% 10ms 100% runtime.sysmon

The pprof mutex and block profiles did not return anything.

This is the heap output:

File: gw
Type: inuse_space
Time: Jul 7, 2019 at 10:23pm (UTC)
Showing nodes accounting for 5370.73kB, 100% of 5370.73kB total
----------------------------------------------------------+-------------
      flat  flat%   sum%        cum   cum%   calls calls% + context
----------------------------------------------------------+-------------
                                            1028kB 50.00% |   bufio.NewWriter
                                            1028kB 50.00% |   github.com/valyala/fasthttp.acquireWriter
 2056.01kB 38.28% 38.28%  2056.01kB 38.28%                | bufio.NewWriterSize
----------------------------------------------------------+-------------
                                         1778.29kB   100% |   compress/flate.NewWriter
 1195.29kB 22.26% 60.54%  1778.29kB 33.11%                | compress/flate.(*compressor).init
                                          583.01kB 32.78% |   compress/flate.newDeflateFast
----------------------------------------------------------+-------------
                                          583.01kB   100% |   compress/flate.(*compressor).init
  583.01kB 10.86% 71.39%   583.01kB 10.86%                | compress/flate.newDeflateFast
----------------------------------------------------------+-------------

I also tried to rule out Redis itself: I created a separate client that calls it to confirm it is still available, and it responded fine, so the problem does not seem to be there.

Are there any suggestions for how I can analyze this further?


(system) closed #2

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.