java - Why do System.nanoTime() and System.currentTimeMillis() drift apart so rapidly?


For diagnostic purposes, I want to be able to detect changes in the system time-of-day clock in a long-running server application. Since System.currentTimeMillis() is based on wall-clock time and System.nanoTime() is based on a system timer that is independent(*) of wall-clock time, I thought I could use changes in the difference between these values to detect system time changes.

I wrote a quick test app to see how stable the difference between these values is, and to my surprise the values diverge for me at a level of several milliseconds per second. A few times I saw much faster divergences. This was on a Win7 64-bit desktop with Java 6. I haven't tried the test program below under Linux (or Solaris or MacOS) to see how it performs there. For some runs of the app the divergence is positive, for some runs it is negative. It appears to depend on what else the desktop is doing, so it's hard to say.

    public class TimeTest {
        private static final int ONE_MILLION  = 1000000;
        private static final int HALF_MILLION =  499999;

        public static void main(String[] args) {
            long start = System.nanoTime();
            long base = System.currentTimeMillis() - (start / ONE_MILLION);

            while (true) {
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    // don't care if we're interrupted
                }
                long now = System.nanoTime();
                long drift = System.currentTimeMillis() - (now / ONE_MILLION) - base;
                long interval = (now - start + HALF_MILLION) / ONE_MILLION;
                System.out.println("Clock drift " + drift + " ms after " + interval
                                   + " ms = " + (drift * 1000 / interval) + " ms/s");
            }
        }
    }

Inaccuracies in the Thread.sleep() time, as well as interruptions, should be entirely irrelevant to timer drift.

Both of these Java "system" calls are intended for use as measurements -- one to measure differences in wall-clock time and the other to measure absolute intervals, so when the real-time clock is not being changed, these values should change at very close to the same speed, right? Is this a bug or a weakness or a failure in Java? Is there something in the OS or hardware that prevents Java from being more accurate?
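For reference, the intended split between the two calls -- nanoTime() for elapsed intervals, currentTimeMillis() for absolute wall-clock stamps -- can be sketched like this (class and method names are my own, for illustration only):

```java
public class IntendedUse {
    // Elapsed milliseconds since startNanos, measured with the monotonic timer.
    static long elapsedMs(long startNanos) {
        return (System.nanoTime() - startNanos) / 1000000;
    }

    public static void main(String[] args) throws InterruptedException {
        long stampMs = System.currentTimeMillis(); // wall clock: absolute date/time
        long startNs = System.nanoTime();          // monotonic: elapsed intervals only
        Thread.sleep(50);
        System.out.println("Started at epoch ms " + stampMs
                           + ", elapsed " + elapsedMs(startNs) + " ms");
    }
}
```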

I expect some drift and jitter(**) between these independent measurements, but I expected well under a minute per day of drift. One msec per second of drift, if monotonic, is almost 90 seconds per day! My worst-case observed drift was perhaps ten times that. Every time I run this program, I see drift on the very first measurement. So far, I have not run the program for more than about 30 minutes.

I expect to see some small randomness in the values printed, due to jitter, but in almost all runs of the program I see a steady increase of the difference, as much as 3 msec per second of increase and a couple of times much more than that.

Does any version of Windows have a mechanism similar to Linux's that adjusts the system clock speed to slowly bring the time-of-day clock into sync with an external clock source? Would such a thing influence both timers, or only the wall-clock timer?
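For the original goal of detecting time-of-day changes despite this drift, one workable sketch is to re-baseline the offset on every poll, so slow drift never accumulates and only a sudden step trips the detector. The class name and the 500 ms threshold below are my own assumptions, not anything from the question:

```java
public class ClockStepDetector {
    private static final long THRESHOLD_MS = 500; // assumed: only steps >= 0.5 s matter

    // Current offset of the wall clock over the monotonic timer, in ms.
    static long offsetMs() {
        return System.currentTimeMillis() - System.nanoTime() / 1000000;
    }

    public static void main(String[] args) throws InterruptedException {
        long baseline = offsetMs();
        while (true) {
            Thread.sleep(1000);
            long current = offsetMs();
            if (Math.abs(current - baseline) > THRESHOLD_MS) {
                System.out.println("Wall clock stepped by about "
                                   + (current - baseline) + " ms");
            }
            baseline = current; // re-baseline: a few ms/s of drift stays under threshold
        }
    }
}
```

Because the baseline is refreshed each second, drift of a few ms/s contributes only a few ms per comparison, well under the threshold, while a clock set-back or set-forward of half a second or more is caught immediately.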

(*) I understand that on some architectures, System.nanoTime() will of necessity use the same mechanism as System.currentTimeMillis(). I also believe it's fair to assume that a modern Windows server is not such a hardware architecture. Is that a bad assumption?

(**) Of course, System.currentTimeMillis() will have much larger jitter than System.nanoTime() since its granularity is not 1 msec on most systems.
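That granularity is easy to measure directly: busy-wait until currentTimeMillis() changes and record the smallest increment seen. A minimal sketch (class and method names are illustrative):

```java
public class MillisGranularity {
    // Smallest observed increment of currentTimeMillis() over the given
    // number of observed ticks (busy-waits, so keep the count small).
    static long smallestTickMs(int ticks) {
        long minStep = Long.MAX_VALUE;
        long prev = System.currentTimeMillis();
        int seen = 0;
        while (seen < ticks) {
            long now = System.currentTimeMillis();
            if (now != prev) {
                minStep = Math.min(minStep, now - prev);
                prev = now;
                seen++;
            }
        }
        return minStep;
    }

    public static void main(String[] args) {
        System.out.println("Smallest observed tick: " + smallestTickMs(20) + " ms");
    }
}
```

On the Windows systems described below this typically reports 10 or 15 ms rather than 1 ms.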

You might find this Sun/Oracle blog post about JVM timers to be of interest.

Here are a couple of paragraphs from that article about JVM timers under Windows:

System.currentTimeMillis() is implemented using the GetSystemTimeAsFileTime method, which essentially just reads the low-resolution time-of-day value that Windows maintains. Reading this global variable is naturally very quick - around 6 cycles according to reported information. This time-of-day value is updated at a constant rate regardless of how the timer interrupt has been programmed - depending on the platform this will either be 10 ms or 15 ms (this value seems tied to the default interrupt period).

System.nanoTime() is implemented using the QueryPerformanceCounter / QueryPerformanceFrequency API (if available, else it returns currentTimeMillis*10^6). QueryPerformanceCounter (QPC) is implemented in different ways depending on the hardware it's running on. Typically it will use either the programmable-interval-timer (PIT), or the ACPI power management timer (PMT), or the CPU-level timestamp-counter (TSC). Accessing the PIT/PMT requires execution of slow I/O port instructions, and as a result the execution time for QPC is in the order of microseconds. In contrast, reading the TSC is on the order of 100 clock cycles (to read the TSC from the chip and convert it to a time value based on the operating frequency). You can tell if your system uses the ACPI PMT by checking if QueryPerformanceFrequency returns the signature value of 3,579,545 (i.e. 3.57 MHz). If you see a value around 1.19 MHz then your system is using the old 8245 PIT chip. Otherwise you should see a value approximately that of your CPU frequency (modulo any speed throttling or power-management that might be in effect).
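You can get a rough hint of which path nanoTime() took without calling the Windows API: sample it in a tight loop and look at the smallest nonzero step. A QPC/TSC-backed implementation shows sub-microsecond steps, while the currentTimeMillis*10^6 fallback would show steps of 10-15 million ns. This is only a heuristic sketch, with names of my own choosing:

```java
public class NanoResolution {
    // Smallest nonzero difference between consecutive nanoTime() readings.
    // Returns Long.MAX_VALUE in the unlikely case no step was observed.
    static long smallestDeltaNs(int iterations) {
        long minDelta = Long.MAX_VALUE;
        long prev = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            long now = System.nanoTime();
            if (now > prev) {
                minDelta = Math.min(minDelta, now - prev);
            }
            prev = now;
        }
        return minDelta;
    }

    public static void main(String[] args) {
        System.out.println("Smallest observed nanoTime step: "
                           + smallestDeltaNs(1000000) + " ns");
    }
}
```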

