I’m seeing quite a variation between timer ticks on Windows (code at bottom of post). An example run could output: Took 15.6858ms 10.7615ms 541.8µs 500.5µs 499.8µs…
That’s a variation of about 30 times the minimum tick time.
Is this just Go? Or just Windows? (I see much lower readings for Linux on Android.)
I can guess why this is happening, but does anyone know? BTW - sometimes there are no ‘long’ ticks.
I thought this might be down to the way the operating system handles resources for the program - so I tried putting in a 100ms sleep - but it didn’t stop the long ticks…
Best wishes
Andy Stratton
package main

import (
	"fmt"
	"time"
)

func main() {
	fmt.Print("Took\n")
	for i := 0; i < 100; i++ {
		Tstart := time.Now()
		// Spin until the clock reports a non-zero elapsed time,
		// i.e. until the next timer tick.
		took := time.Since(Tstart)
		for took == 0 {
			took = time.Since(Tstart)
		}
		fmt.Print("\t", took)
	}
}
This pattern repeats (mostly): a second, typically shorter, long tick followed (usually) by settling down to about 0.5ms.
For the moment, my only solution is to put in a ‘tick until it settles’ loop… I know there will always be some sporadic values, but this hits the first timing pretty much every time.
This was causing big issues with performance timing. I do know about the benchmark package, but this is for teaching purposes (using the benchmark package would skip the learning I want the students to do).
15.6 ms is the default Windows timer resolution, which is probably not a coincidence. The resolution can be made more granular by increasing it system-wide. I’m guessing that kicks in at some point after your program starts, perhaps requested by the Go runtime, or by Windows detecting the clock polling, or similar - I’m just guessing here.
I don’t know how efficiently time.Now is implemented on Windows, but if it’s a full syscall then that might explain some of the delay.
The benchmark package ameliorates the cost of reading the clock by doing N rounds of work and dividing the total time by N. That might give you more reliable numbers.