Now I know why my precise loops always showed random behaviour.
I used to call timeGetTime() for timing control.
The problem is that the granularity of the system timer can be as high as 10-15
milliseconds, meaning that timeGetTime() is only accurate to 10-15 milliseconds.
There are several approaches we can use for time control; some of them are:
RDTSC:
It stands for "Read Time Stamp Counter." This assembly instruction returns
an unsigned 64-bit integer reading of the processor's internal high-precision
timer. In order to get the frequency of the timer (how much the timer's return
value will increment in 1 second), you can read the registry for the machine's
approximate clock speed:
bool GetPentiumClockEstimateFromRegistry(unsigned __int64 *frequency)
{
    HKEY hKey;
    DWORD cbBuffer, freq_mhz;
    LONG rc;

    *frequency = 0;
    rc = RegOpenKeyEx(HKEY_LOCAL_MACHINE,
            "Hardware\\Description\\System\\CentralProcessor\\0",
            0, KEY_READ, &hKey);
    if (rc == ERROR_SUCCESS)
    {
        cbBuffer = sizeof(DWORD);
        rc = RegQueryValueEx(hKey, "~MHz", NULL, NULL,
                             (LPBYTE)&freq_mhz, &cbBuffer);
        if (rc == ERROR_SUCCESS)
            *frequency = freq_mhz*1024*1024;  // "~MHz" is only an estimate
        RegCloseKey(hKey);
    }
    return (*frequency > 0);
}
double GetPentiumTimeAsDouble(unsigned __int64 frequency)
{
    // returns < 0 on failure; otherwise, returns current cpu time, in seconds.
    // warning: watch out for wraparound!

    if (frequency == 0)
        return -1.0;

    // get high-precision time:
    unsigned __int64 high_perf_time;
    unsigned __int64 *dest = &high_perf_time;
    __asm
    {
        _emit 0xf    // these two bytes form the 'rdtsc' asm instruction,
        _emit 0x31   //  available on Pentium I and later.
        mov esi, dest
        mov [esi  ], eax // lower 32 bits of tsc
        mov [esi+4], edx // upper 32 bits of tsc
    }

    __int64 time_s     = (__int64)(high_perf_time / frequency);
    __int64 time_fract = (__int64)(high_perf_time % frequency);

    // keep the seconds modulo one week so the double retains its precision
    double ret = (time_s % (60*60*24*7)) + (double)time_fract/(double)((__int64)frequency);
    return ret;
}
QueryPerformanceFrequency & QueryPerformanceCounter:
QueryPerformanceFrequency returns the amount that the counter will increment over 1 second; QueryPerformanceCounter
returns a LARGE_INTEGER (a 64-bit *signed* integer) that is the current value of the counter.
This approach is accurate to about 10 microseconds.
If we call timeBeginPeriod(1) at the beginning of our program (and timeEndPeriod(1) at the end), timeGetTime() will usually become
accurate to 1-2 milliseconds.
The amount of time Sleep() sleeps for goes hand-in-hand with the granularity of timeGetTime(), so after calling timeBeginPeriod(1) once, Sleep(1)
will actually sleep for 1-2 milliseconds, Sleep(2) for 2-3, and so on (instead of sleeping in increments as high as 10-15 ms).
For simple timing, both timeGetTime and QueryPerformanceCounter work well, and QueryPerformanceCounter is obviously more accurate.
Parts of this post are adapted from Ryan M. Geiss's article on accurate timing.