I’m running Dgraph from master. If you want to reproduce this bug, you can install it like so:
https://wiki.dgraph.io/Beginners_Guide#Manual_Installation_.28dynamically_linked.29
This is how I’m running it:
$ cd $GOPATH/src/github.com/dgraph-io/dgraph/cmd/dgraph
$ go build . && ./dgraph --debugmode --cpu cpu.prof --shutdown
# Watch the CPU usage go higher and higher.
# You can access http://localhost:8080/debug/pprof at any time.
# At some point, you can shut it down like so:
$ curl localhost:8080/query -XPOST -d "SHUTDOWN"
Those commands leave you with a cpu.prof file. Here is the result I’m seeing:
$ go tool pprof dgraph cpu.prof
(pprof) top20
71.65s of 73.05s total (98.08%)
Dropped 67 nodes (cum <= 0.37s)
Showing top 20 nodes out of 28 (cum >= 0.62s)
flat flat% sum% cum cum%
44.12s 60.40% 60.40% 44.12s 60.40% runtime.siftdownTimer
13.39s 18.33% 78.73% 13.49s 18.47% runtime.chansend
3.06s 4.19% 82.92% 3.06s 4.19% runtime._ExternalCode
2.78s 3.81% 86.72% 2.78s 3.81% runtime/internal/atomic.Xchg
1.97s 2.70% 89.42% 67.31s 92.14% runtime.timerproc
1.92s 2.63% 92.05% 1.92s 2.63% runtime.futex
1.25s 1.71% 93.76% 1.25s 1.71% runtime.usleep
0.93s 1.27% 95.03% 14.69s 20.11% runtime.selectnbsend
0.68s 0.93% 95.96% 2.17s 2.97% runtime.unlock
0.52s 0.71% 96.67% 1.86s 2.55% runtime.lock
0.48s 0.66% 97.33% 15.76s 21.57% time.sendTime
0.24s 0.33% 97.66% 0.59s 0.81% time.Now
0.14s 0.19% 97.85% 2.21s 3.03% runtime.sysmon
0.04s 0.055% 97.91% 0.67s 0.92% runtime.exitsyscall
0.04s 0.055% 97.96% 1.24s 1.70% runtime.notetsleep_internal
0.02s 0.027% 97.99% 0.61s 0.84% runtime.exitsyscallfast_pidle
0.02s 0.027% 98.02% 1.29s 1.77% runtime.futexsleep
0.02s 0.027% 98.04% 1.62s 2.22% runtime.notetsleepg
0.02s 0.027% 98.07% 0.67s 0.92% runtime.notewakeup
0.01s 0.014% 98.08% 0.62s 0.85% runtime.exitsyscallfast
runtime.siftdownTimer alone is consuming 60% of the CPU. Because of this, an idle Dgraph instance easily pegs an entire core at 100%. Nothing in the profile points to Dgraph’s own code, so I’m not sure what’s going on or why this is happening. It looks like a busy-wait loop inside the Go runtime’s timer handling.
Any ideas what could be causing this?