I am running the following code: https://play.golang.org/p/5bhXs_QulH
package main

import (
    "fmt"
    "time"
)

func main() {
    startTime := time.Now()
    foo := 0.200
    fmt.Println(int((time.Now().UnixNano() - startTime.UnixNano()) / int64(time.Millisecond)))
    time.Sleep(time.Duration(foo*1000) * time.Millisecond)
    fmt.Println(int((time.Now().UnixNano() - startTime.UnixNano()) / int64(time.Millisecond)))
}
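(As an aside, the elapsed-time measurement above can also be written with time.Since instead of subtracting UnixNano values; the snippet below is just a sketch of that equivalent form.)

package main

import (
    "fmt"
    "time"
)

func main() {
    startTime := time.Now()
    time.Sleep(200 * time.Millisecond)
    // time.Since returns a time.Duration; dividing by time.Millisecond
    // yields the elapsed wall-clock time in milliseconds.
    fmt.Println(int64(time.Since(startTime) / time.Millisecond))
}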
When I run it on my laptop or the playground I get:
0
200
But when I run it on a Virtual Machine (VM) I get:
0
anything between 150 and 250
I am not 100% sure which infrastructure the VM is allocated on, but it is an enterprise infrastructure used by the entire company, and I think it is VMware.
Could anyone explain the difference in measurements between a physical machine and a virtual one, and how to overcome it so I get a consistent sleep time?
I eventually came up with the following:
// SleepMilliseconds sleeps for roughly `duration` milliseconds by repeatedly
// sleeping half of the remaining time, re-checking the wall clock, and
// stopping once the remaining time drops below `limit` milliseconds.
func SleepMilliseconds(duration int, limit int) {
    startTime := time.Now()
    tempDuration := duration
    for {
        // Sleep only half of what is left, then measure how much time
        // actually elapsed and recompute the remainder.
        tempDuration = tempDuration / 2
        time.Sleep(time.Duration(tempDuration) * time.Millisecond)
        elapsed := int((time.Now().UnixNano() - startTime.UnixNano()) / int64(time.Millisecond))
        tempDuration = duration - elapsed
        if tempDuration < limit {
            break
        }
    }
}
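A minimal usage sketch (the 200/5 values are only illustrative), printing how close the result actually gets:

func main() {
    start := time.Now()
    SleepMilliseconds(200, 5) // target ~200 ms, stop correcting once within 5 ms
    fmt.Println(int64(time.Since(start) / time.Millisecond))
}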
Disclaimer: I don't think this is a proper solution, nor am I going to use it in my project. I am only putting it here for the rare case where your life (or production environment) depends on an immediate fix and you don't know what else to do.