package main

import "testing"

var ch = make(chan struct{})

func init() {
	go func() {
		for range ch {
		}
	}()
}

func BenchmarkChannel(b *testing.B) {
	for i := 0; i < b.N; i++ {
		ch <- struct{}{}
	}
}
That’s on the order of a million channel sends and receives per second. That aside, for tiny amounts of work it’s usually better either to do the work yourself instead of handing it to someone else over a channel, or to batch it up and pass an entire batch over the channel for processing.
In addition to what Jakob said, you should also note that a benchmark makes more sense when it compares something.
You should write two implementations of the same problem, one with channels and one without, and measure the overhead between them.
I don’t really see how you jump to the (partially correct) observation that there is overhead. “Big” is meaningless without a comparison. The overhead itself is unavoidable to make channels work the way they do.
What are you benchmarking against? A more realistic benchmark would compare against, say, using a mutex (but that still requires a second implementation).
You can also increase channel performance by using a buffered channel:
package main

import (
	"sync"
	"testing"
)

var ch = make(chan struct{}, 1000)

func init() {
	var j int
	go func() {
		for range ch {
			j++
		}
	}()
}

func BenchmarkChannel(b *testing.B) {
	for i := 0; i < b.N; i++ {
		ch <- struct{}{}
	}
}

func BenchmarkNormal(b *testing.B) {
	var j int
	var mu sync.Mutex
	for i := 0; i < b.N; i++ {
		mu.Lock()
		j++
		mu.Unlock()
	}
}
Result:
goos: darwin
goarch: amd64
pkg: github.com/mafredri/playgoround/4-chbench
BenchmarkChannel-8    20000000    58.4 ns/op
BenchmarkNormal-8    100000000    14.0 ns/op
PASS
ok github.com/mafredri/playgoround/4-chbench 2.663s