There's an issue on Windows where, if you change the timer resolution with timeBeginPeriod() a lot, the clock will drift.

Actually, there is a bug in the Windows implementation of Java's Thread.sleep() (the os::sleep() function) that causes this behaviour: it always sets the timer resolution to 1 ms before waiting, in order to be accurate regardless of sleep length, and restores it immediately upon completion, unless other threads are still sleeping.

If the clock is only slightly off, the SNTP client (the "Windows Time" service) will compensate for the skew by making the clock tick slightly faster or slower, so it trends towards the correct time. This only works if the clock skew is predictable, however.
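The effect of the timer resolution is easy to observe by comparing a requested sleep against how long it actually took. A minimal, platform-dependent sketch (the numbers are illustrative: on Windows with the default ~15.6 ms tick, a 1 ms request can oversleep badly unless something has raised the resolution):

```java
public class SleepGranularity {
    // Returns how long a requested sleep actually took, in milliseconds.
    static long actualSleepMs(long requestedMs) throws InterruptedException {
        long start = System.nanoTime();
        Thread.sleep(requestedMs);
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        // The result depends on the current timer resolution, which any
        // thread in the process (or another process) may have changed.
        System.out.println("requested 1 ms, got ~" + actualSleepMs(1) + " ms");
    }
}
```

Running this in a loop while another thread calls Thread.sleep() with a sub-10 ms argument shows the resolution being silently raised to 1 ms, which is exactly the behaviour that aggravates the drift.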
Windows does allow the number of ticks added to the real-time clock on every interrupt to be adjusted: see SetSystemTimeAdjustment.

Other factors include bus-mastering I/O controllers, which steal memory-bus cycles from the CPU and can starve it of memory bandwidth for significant periods.

As others have said, the clock-generation hardware may also vary its frequency as component values change with temperature.
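The arithmetic behind that adjustment is simple: the kernel adds a fixed number of 100-ns units to the system time on every timer interrupt, and SetSystemTimeAdjustment lets you change that increment, so shrinking it slows the clock and growing it speeds it up. A sketch with hypothetical numbers (156250 units ≈ the default 15.625 ms tick; the method name and the drift figure are illustrative, not a real API):

```java
public class TickAdjustment {
    // nominalIncrement: 100-ns units added to the clock per timer interrupt
    //                   (156250 units ~= a 15.625 ms tick).
    // driftPpm: measured drift in parts per million; positive = clock runs fast.
    // Returns the increment that would cancel out the measured drift.
    static long correctedIncrement(long nominalIncrement, double driftPpm) {
        return Math.round(nominalIncrement * (1.0 - driftPpm / 1_000_000.0));
    }

    public static void main(String[] args) {
        // A clock running 50 ppm fast needs slightly fewer units per tick.
        System.out.println(correctedIncrement(156250, 50.0));
    }
}
```

This is the same correction the "Windows Time" service applies when it disciplines the clock towards an SNTP reference.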
Clock ticks should be predictable, but most PC hardware is not designed for real-time use: other I/O device interrupts can take priority over the clock-tick interrupt, and some drivers do extensive processing in the interrupt service routine instead of deferring it to a deferred procedure call (DPC). As a result, the system sometimes cannot service the clock-tick interrupt until long after it was signalled.
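That interrupt latency is visible from user code as wakeup jitter: a timed wait never fires early, but on a loaded system it can fire arbitrarily late. A rough sketch that measures the worst overshoot (the iteration counts are arbitrary, and the numbers will vary wildly between machines and load levels):

```java
public class TimerJitter {
    // Repeatedly sleeps for sleepMs and records the worst overshoot seen,
    // in milliseconds. Large overshoots reflect delayed servicing of the
    // timer interrupt (or plain scheduler latency).
    static long maxOvershootMs(int iterations, long sleepMs) throws InterruptedException {
        long worst = 0;
        for (int i = 0; i < iterations; i++) {
            long start = System.nanoTime();
            Thread.sleep(sleepMs);
            long overshoot = (System.nanoTime() - start) / 1_000_000 - sleepMs;
            worst = Math.max(worst, overshoot);
        }
        return worst;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("worst overshoot: " + maxOvershootMs(20, 5) + " ms");
    }
}
```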