High Resolution Timer

Download: timer.zip

Overview

The C standard library provides the clock() function, which can be used to measure elapsed time. It is a system-independent C function declared in time.h (available on most operating systems), but it does not give accurate results; it cannot even guarantee millisecond accuracy.

In order to see the accuracy of clock(), try the following code on your system. The output is the minimum difference that the clock() function can detect. I got about 15 ms resolution on my system.


#include <iostream>
#include <ctime>
using namespace std;

int main()
{
    clock_t t1, t2;
    t1 = t2 = clock();

    // loop until t2 gets a different value
    while(t1 == t2)
        t2 = clock();

    // print resolution of clock()
    cout << (double)(t2 - t1) / CLOCKS_PER_SEC * 1000 << " ms.\n";

    return 0;
}

Therefore, we need a high resolution timer to measure elapsed time with at least 1 millisecond accuracy. The good news is that there are high resolution timer functions; the bad news is that they are system specific, so you have to write different code for different systems. Windows provides the QueryPerformanceCounter() function, and Unix, Linux and Mac OS X systems have gettimeofday(), which is declared in sys/time.h. Both functions can measure at least a 1 microsecond difference.

Windows

The Windows API provides extremely high resolution timer functions: QueryPerformanceCounter() and QueryPerformanceFrequency(). QueryPerformanceCounter() returns the current elapsed tick count, and QueryPerformanceFrequency() returns the number of ticks per second, which is used to convert ticks to actual time.

Here is an example of using QueryPerformanceCounter() to measure elapsed time.


#include <iostream>
#include <windows.h>                // for Windows APIs
using namespace std;

int main()
{
    LARGE_INTEGER frequency;        // ticks per second
    LARGE_INTEGER t1, t2;           // ticks
    double elapsedTime;

    // get ticks per second
    QueryPerformanceFrequency(&frequency);

    // start timer
    QueryPerformanceCounter(&t1);

    // do something
    ...

    // stop timer
    QueryPerformanceCounter(&t2);

    // compute and print the elapsed time in millisec
    elapsedTime = (t2.QuadPart - t1.QuadPart) * 1000.0 / frequency.QuadPart;
    cout << elapsedTime << " ms.\n";

    return 0;
}
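
You can also check the tick period of the performance counter on your machine directly from the value returned by QueryPerformanceFrequency(). The following is a minimal sketch of that idea, using the same Windows headers as above (the exact period you see will depend on your hardware).


#include <iostream>
#include <windows.h>                // for Windows APIs
using namespace std;

int main()
{
    LARGE_INTEGER frequency;        // ticks per second

    // get ticks per second
    QueryPerformanceFrequency(&frequency);

    // the duration of a single tick is 1/frequency seconds
    cout << 1000.0 / frequency.QuadPart << " ms per tick.\n";

    return 0;
}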

Unix, Linux and Mac

gettimeofday() can be used on Unix or Linux based systems. This function is declared in "sys/time.h", so you must include this header before using gettimeofday(). It also provides 1 microsecond resolution. Here is a code snippet.


#include <iostream>
#include <sys/time.h>                // for gettimeofday()
using namespace std;

int main()
{
    timeval t1, t2;
    double elapsedTime;

    // start timer
    gettimeofday(&t1, NULL);

    // do something
    ...

    // stop timer
    gettimeofday(&t2, NULL);

    // compute and print the elapsed time in millisec
    elapsedTime = (t2.tv_sec - t1.tv_sec) * 1000.0;      // sec to ms
    elapsedTime += (t2.tv_usec - t1.tv_usec) / 1000.0;   // us to ms
    cout << elapsedTime << " ms.\n";

    return 0;
}
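
You can probe the resolution of gettimeofday() the same way as the clock() test at the top of this page: read the time once, then loop until the reading changes. The following is a minimal sketch of that test; on most systems it should report a difference in the order of 1 microsecond (0.001 ms).


#include <iostream>
#include <sys/time.h>                // for gettimeofday()
using namespace std;

int main()
{
    timeval t1, t2;
    gettimeofday(&t1, NULL);
    t2 = t1;

    // loop until t2 gets a different value
    while(t1.tv_sec == t2.tv_sec && t1.tv_usec == t2.tv_usec)
        gettimeofday(&t2, NULL);

    // print the smallest detected difference in millisec
    double elapsedTime = (t2.tv_sec - t1.tv_sec) * 1000.0
                       + (t2.tv_usec - t1.tv_usec) / 1000.0;
    cout << elapsedTime << " ms.\n";

    return 0;
}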

C++ Timer Class

This timer class combines QueryPerformanceCounter() and gettimeofday(), so it can be used on both Unix/Linux/Mac and Windows systems. It also provides a simple interface to get the elapsed time easily.
The source is available here: timer.zip

The following code shows the basic usage.


#include <iostream>
#include "Timer.h"
using namespace std;

int main()
{
    Timer timer;

    // start timer
    timer.start();

    // do something
    ...

    // stop timer
    timer.stop();

    // print the elapsed time in millisec
    cout << timer.getElapsedTimeInMilliSec() << " ms.\n";

    return 0;
}
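
For reference, the following is a minimal sketch of how such a class might wrap both APIs with the preprocessor. It is not the actual Timer class from timer.zip; only the method names start(), stop() and getElapsedTimeInMilliSec() are taken from the usage above, and the rest is an assumption for illustration.


#ifdef _WIN32
#include <windows.h>                // for QueryPerformanceCounter()
#else
#include <sys/time.h>               // for gettimeofday()
#endif

// hypothetical minimal wrapper, not the actual Timer class from timer.zip
class Timer
{
public:
    Timer()
    {
#ifdef _WIN32
        QueryPerformanceFrequency(&frequency);  // ticks per second
#endif
    }

    void start()
    {
#ifdef _WIN32
        QueryPerformanceCounter(&startCount);
#else
        gettimeofday(&startCount, NULL);
#endif
    }

    void stop()
    {
#ifdef _WIN32
        QueryPerformanceCounter(&endCount);
#else
        gettimeofday(&endCount, NULL);
#endif
    }

    double getElapsedTimeInMilliSec() const
    {
#ifdef _WIN32
        return (endCount.QuadPart - startCount.QuadPart) * 1000.0 / frequency.QuadPart;
#else
        return (endCount.tv_sec - startCount.tv_sec) * 1000.0
             + (endCount.tv_usec - startCount.tv_usec) / 1000.0;
#endif
    }

private:
#ifdef _WIN32
    LARGE_INTEGER frequency;                // ticks per second
    LARGE_INTEGER startCount, endCount;     // ticks
#else
    timeval startCount, endCount;           // sec and usec
#endif
};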