Pulkit Gaur

dream - do - deserve

Why my Win32 timer code sucks!

Filed under: Visual C++ at 13:05:24 On 2006-05-17

Now I know why my "precise" loops always showed random behaviour.

I used to call timeGetTime() for timing control.

But the granularity of the system timer can be as high as 10-15
milliseconds, meaning that timeGetTime() is only accurate to 10-15 milliseconds.

There are multiple approaches we can use for time control; some of them are:

"RDTSC":

It stands for "Read Time Stamp Counter." This assembly instruction returns
an unsigned 64-bit integer reading from the processor's internal high-precision
timer.

To get the frequency of the timer (how much the timer's return
value increments in 1 second), you can read the machine's clock speed
from the registry.

bool GetPentiumClockEstimateFromRegistry(unsigned __int64 *frequency)
{
    HKEY hKey;
    DWORD cbBuffer;
    LONG rc;

    *frequency = 0;

    rc = RegOpenKeyEx(
        HKEY_LOCAL_MACHINE,
        "Hardware\\Description\\System\\CentralProcessor\\0",
        0,
        KEY_READ,
        &hKey
    );

    if (rc == ERROR_SUCCESS)
    {
        cbBuffer = sizeof(DWORD);
        DWORD freq_mhz;
        rc = RegQueryValueEx(
            hKey,
            "~MHz",
            NULL,
            NULL,
            (LPBYTE)&freq_mhz,
            &cbBuffer
        );
        if (rc == ERROR_SUCCESS)
            *frequency = (unsigned __int64)freq_mhz * 1000000; // MHz -> Hz
        RegCloseKey(hKey);
    }

    return (*frequency > 0);
}

GetPentiumTimeAsDouble():

double GetPentiumTimeAsDouble(unsigned __int64 frequency)
{
    // Returns < 0 on failure; otherwise, returns the current CPU time, in seconds.
    // Warning: watch out for wraparound!

    if (frequency == 0)
        return -1.0;

    // get high-precision time:
    __try
    {
        unsigned __int64 high_perf_time;
        unsigned __int64 *dest = &high_perf_time;
        __asm
        {
            _emit 0xf   // these two bytes form the 'rdtsc' asm instruction,
            _emit 0x31  // available on Pentium I and later.
            mov esi, dest
            mov [esi  ], eax // lower 32 bits of tsc
            mov [esi+4], edx // upper 32 bits of tsc
        }
        __int64 time_s     = (__int64)(high_perf_time / frequency); // whole seconds
        __int64 time_fract = (__int64)(high_perf_time % frequency); // fractional part
        // wrap the seconds at one week so the double keeps its precision:
        double ret = (double)(time_s % (60*60*24*7)) + (double)time_fract/(double)frequency;
        return ret;
    }
    __except(EXCEPTION_EXECUTE_HANDLER)
    {
        // rdtsc not available on this CPU
        return -1.0;
    }
}

QueryPerformanceFrequency & QueryPerformanceCounter:

QueryPerformanceFrequency returns the amount that the counter will increment over 1 second; QueryPerformanceCounter returns a LARGE_INTEGER (a 64-bit *signed* integer) that is the current value of the counter.

This approach is accurate to about 10 microseconds.



Conclusion

If we call timeBeginPeriod(1) at the beginning of our program (and timeEndPeriod(1) at the end), timeGetTime() will usually become accurate to 1-2 milliseconds.

The amount Sleep() actually sleeps goes hand-in-hand with the granularity of timeGetTime(), so after calling timeBeginPeriod(1) once, Sleep(1) will actually sleep for 1-2 milliseconds, Sleep(2) for 2-3, and so on (instead of sleeping in increments as high as 10-15 ms).

For simple timing, both timeGetTime and QueryPerformanceCounter work well, and QueryPerformanceCounter is obviously more accurate.


Part of this post comes from Ryan M. Geiss's article on Win32 timing.
