Prettybenchmarks - a tool to format Go benchmark output

Hi all,

I just released prettybenchmarks, a CLI package for formatting the output of Go benchmarks.

Currently it’s still missing automated tests (it has only been tested manually), but those will be added as soon as possible.

Features

  • removes clutter from the output (strips Benchmark_ from function names, removes the -8 CPU suffix, etc.)
  • automatically (or manually) converts each individual benchmark’s runtime into a more readable time unit, so you won’t have to deal with huge ns values
  • if you write benchmarks in the manner Benchmark_fnName_100, Benchmark_fnName_500, etc., to benchmark a function for different numbers of calls, they will be grouped automatically (see the example below this list)
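For example, the grouping relies purely on that naming pattern. A minimal benchmark file written in that style (the function names here are hypothetical, just to illustrate the idea) might look like:

package example

import "testing"

// doWork is a stand-in for whatever function you want to benchmark.
func doWork(n int) {
	sum := 0
	for i := 0; i < n; i++ {
		sum += i
	}
	_ = sum
}

func benchmarkDoWork(b *testing.B, n int) {
	for i := 0; i < b.N; i++ {
		doWork(n)
	}
}

// pb would group these three results under a single "doWork" entry,
// keyed on the numeric suffix.
func Benchmark_doWork_100(b *testing.B)  { benchmarkDoWork(b, 100) }
func Benchmark_doWork_500(b *testing.B)  { benchmarkDoWork(b, 500) }
func Benchmark_doWork_1000(b *testing.B) { benchmarkDoWork(b, 1000) }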

Any feedback is heavily appreciated!

Cheers,
Florian

This is awesome! Much better for copy+pasting into READMEs and such. Can’t wait to try it.

What does the <-- arrow do on the right side of the table?

Hey,

thanks for your comment!

Actually the arrow doesn’t do anything; there just seems to be a bug in the cli-table library I use where the last column cannot be correctly aligned (and left-aligned numbers don’t really look that good).

I’ll try to submit a fix for that alignment bug to the respective library, so this last column with the arrow can be dropped.
Until that’s done, do you think an empty column would be any better?

Hmmm, do you need an arrow in every row? If not, how about pointing to the “best” run (based on some ratio of two columns)?

Also, commas in the numbers might be nice for readability if it doesn’t take up too much horizontal space.

Hi!

I was interested in giving this a shot, and it seems I managed to break it on
the first try:

$ go test -run none -bench=. -benchmem github.com/justinfx/gofileseq/... | pb
panic: runtime error: index out of range

goroutine 1 [running]:
github.com/florianorben/prettybenchmarks/prettybenchmarks.newResult(0xc8200f4cc0, 0x3d, 0x3d, 0xc820047c48)
    /Users/justin/src/go/src/github.com/florianorben/prettybenchmarks/prettybenchmarks/main.go:165 +0x56e
github.com/florianorben/prettybenchmarks/prettybenchmarks.newResults(0xc8200f2000, 0x1e, 0x20, 0xc820001500)
    /Users/justin/src/go/src/github.com/florianorben/prettybenchmarks/prettybenchmarks/main.go:104 +0x16a
github.com/florianorben/prettybenchmarks/prettybenchmarks.newBenchmark(0xc8200f2000, 0x1e, 0x20, 0xc82000a110)
    /Users/justin/src/go/src/github.com/florianorben/prettybenchmarks/prettybenchmarks/main.go:93 +0x39
github.com/florianorben/prettybenchmarks/prettybenchmarks.Main()
    /Users/justin/src/go/src/github.com/florianorben/prettybenchmarks/prettybenchmarks/main.go:81 +0x1ec
main.main()
    /Users/justin/src/go/src/github.com/florianorben/prettybenchmarks/cmd/pb/main.go:15 +0x14

goroutine 17 [syscall, locked to thread]:
runtime.goexit()
    /usr/local/go/src/runtime/asm_amd64.s:1696 +0x1

goroutine 5 [runnable]:
time.NewTicker(0x8f0d180, 0x18)
    /usr/local/go/src/time/tick.go:36 +0x185
time.Tick(0x8f0d180, 0xc8200182a0)
    /usr/local/go/src/time/tick.go:57 +0x35
github.com/florianorben/prettybenchmarks/prettybenchmarks.loading(0xc8200182a0)
    /Users/justin/src/go/src/github.com/florianorben/prettybenchmarks/prettybenchmarks/main.go:364 +0x7f
created by github.com/florianorben/prettybenchmarks/prettybenchmarks.Main
    /Users/justin/src/go/src/github.com/florianorben/prettybenchmarks/prettybenchmarks/main.go:67 +0x104


Also, I didn’t realize the standard naming convention for benchmarks was
Benchmark_FN_XXX. I have iteration tests in my library, but I don’t use the
underscores in the naming.

Justin

Very nice!

Any plans of doing the same to benchcmp output?

Hey guys,

@matt No - there just has to be “something” in the last column; it could even be a whitespace character. Printing “best” in there is a nice idea though, I’ll try to figure out which metric to use to compare two (or more) benchmarks.
Commas shouldn’t be too much of a problem (see the sketch below).
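To sketch what I mean - a minimal, hand-rolled comma formatter, not taken from any particular library:

package main

import (
	"fmt"
	"strconv"
)

// addThousandsSep inserts commas into a non-negative integer,
// e.g. 28934752 -> "28,934,752".
func addThousandsSep(n int64) string {
	s := strconv.FormatInt(n, 10)
	out := make([]byte, 0, len(s)+len(s)/3)
	for i := 0; i < len(s); i++ {
		if i > 0 && (len(s)-i)%3 == 0 {
			out = append(out, ',')
		}
		out = append(out, s[i])
	}
	return string(out)
}

func main() {
	fmt.Println(addThousandsSep(28934752)) // 28,934,752
}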

@Justin_Israel Haha, nice shot :p - would you be so kind as to send me/post a line of what your bench output looks like?
Because I really have no idea how that could panic on that line.
I’m not quite sure whether there are any naming conventions; I’ve just seen it done this way in some packages and adopted it for the benchmarks in my own packages. (Though I think leaving off the underscore could lead to readability problems if you have a function name that contains a number, like parseISO123.) I’ll see what I can do - though I should fix your other bug first :wink:

@klauspost Not yet, but that’s a great idea! I’ve just put it on my todo list!

@All As per Justin_Israel’s question, are there actually any naming conventions for tests and benchmarks?
If not, I think it could be quite beneficial to establish some, regardless of this tool - what do you think?

I sometimes add it without underscore, but sometimes other things make sense, like:

// Benchmark 5 data shards and 2 parity shards with 1MB each.
func BenchmarkEncode5x2x1MB(b *testing.B) {
	benchmarkEncode(b, 5, 2, 1024*1024)
}

Hmm, yes you’re right - I hadn’t thought about that kind of benchmark.
A base function name could probably be derived from the benchmark output by comparing all function names against each other; otherwise I’d have to check the source files. I’ll look into that (rough sketch of the comparison idea below).
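Roughly what I mean by comparing the names against each other - a longest-common-prefix sketch; the helper and the example names are made up for illustration:

package main

import "fmt"

// commonPrefix returns the longest common prefix of a set of benchmark
// names; everything after it would be treated as the per-benchmark
// suffix (e.g. 5x2x1MB, 100, 500, ...).
func commonPrefix(names []string) string {
	if len(names) == 0 {
		return ""
	}
	prefix := names[0]
	for _, name := range names[1:] {
		i := 0
		for i < len(prefix) && i < len(name) && prefix[i] == name[i] {
			i++
		}
		prefix = prefix[:i]
	}
	return prefix
}

func main() {
	names := []string{"BenchmarkEncode5x2x1MB", "BenchmarkEncode10x4x1MB", "BenchmarkEncode50x20x1MB"}
	fmt.Println(commonPrefix(names)) // BenchmarkEncode
}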

@Justin_Israel: Just fixed the bug you were experiencing - stupid me, I could have just cloned your code and looked at the output myself :wink: go get -u and you should be good to go.

@matt Commas for readability added + found a way to remove the arrow column while keeping the alignment intact - just pull an update.

Thanks for your input everyone!

Thanks for that. Works now!

Pretty cool, although I was hoping that this would have something for comparing benchmarks over time. I was trying out Git Ratchet, but that would require parsing benchmark output, and there were no good libraries for that. If I get a chance I’m going to try to mash the two together.
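In case it helps anyone heading down that road, here’s a rough sketch of the kind of parsing involved, against the standard go test -bench output line format (the regex and struct here are my own, not taken from an existing library):

package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// A typical `go test -bench -benchmem` result line looks roughly like:
// BenchmarkEncode-8   1000000   1234 ns/op   56 B/op   2 allocs/op
var benchLine = regexp.MustCompile(`^(Benchmark\S+)\s+(\d+)\s+([\d.]+) ns/op(?:\s+(\d+) B/op\s+(\d+) allocs/op)?`)

type result struct {
	Name        string
	Iterations  int64
	NsPerOp     float64
	BytesPerOp  int64
	AllocsPerOp int64
}

// parseLine extracts one benchmark result; ok is false for non-benchmark lines.
func parseLine(line string) (r result, ok bool) {
	m := benchLine.FindStringSubmatch(line)
	if m == nil {
		return result{}, false
	}
	r.Name = m[1]
	r.Iterations, _ = strconv.ParseInt(m[2], 10, 64)
	r.NsPerOp, _ = strconv.ParseFloat(m[3], 64)
	if m[4] != "" {
		r.BytesPerOp, _ = strconv.ParseInt(m[4], 10, 64)
		r.AllocsPerOp, _ = strconv.ParseInt(m[5], 10, 64)
	}
	return r, true
}

func main() {
	r, ok := parseLine("BenchmarkEncode-8  \t 1000000\t      1234 ns/op\t      56 B/op\t       2 allocs/op")
	fmt.Println(ok, r.Name, r.Iterations, r.NsPerOp, r.BytesPerOp, r.AllocsPerOp)
}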

Also FYI the Go naming convention is mixedCaps for everything as shown here.
