September 17, 2008

Python Timing - time.clock() vs. time.time()

Dear Lazyweb,

Which is better to use for timing in Python, time.clock() or time.time()? Which one provides more accuracy?

For example:

import time

start = time.clock()
... do something
elapsed = (time.clock() - start)

vs.

start = time.time()
... do something
elapsed = (time.time() - start)

Update:
From the comments I received and from others I have talked with, it looks like the answer is: use time() on Unix/Linux systems and clock() on Windows. However, the 'timeit' module gives you more options and is accurate across platforms.
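For reference, here is a minimal sketch of the timeit approach; the expression being timed and the repeat/number counts are just placeholder values:

import timeit

# Time a small expression with the best clock for the platform
# (time.clock() on Windows, time.time() elsewhere), repeating and keeping the best run.
t = timeit.Timer("sum(range(100))")
print min(t.repeat(repeat=3, number=100000))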

6 comments:

Marjorie The Spam Tree said...

From the python 2.5 manual (emphasis mine):

clock( )

On Unix, return the current processor time as a floating point number expressed in seconds. The precision, and in fact the very definition of the meaning of ``processor time'', depends on that of the C function of the same name, but in any case, this is the function to use for benchmarking Python or timing algorithms.
On Windows, this function returns wall-clock seconds elapsed since the first call to this function, as a floating point number, based on the Win32 function QueryPerformanceCounter(). The resolution is typically better than one microsecond.

I believe time.time() is subject to jumps when e.g. the system clock is adjusted, and so cannot be relied upon.

garylinux said...

import timeit
is what you want (I think)
docs.python.org/lib/module-timeit.html

Anonymous said...

clock() can be horribly imprecise on Unix, since it assigns an entire 10ms clock tick (also known as "jiffy" in Linux) to the process that happens to be running at the end of the tick.

For benchmarking short snippets, I recommend clock() on Windows and time() on Unix, and that you run the tests multiple times on a lightly loaded machine and pick the minimum value (at least if your code is deterministic). This is what the timeit module does, so you can use that one as well, or just use the timeit.default_timer() function (which is either clock or time, depending on the platform).
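A rough sketch of that repeat-and-take-the-minimum approach, using timeit.default_timer() so the right clock is picked per platform (the work() function below is just a placeholder for whatever is being benchmarked):

import timeit

def work():
    # Placeholder for the code being benchmarked.
    sum(range(100000))

timer = timeit.default_timer  # time.clock on Windows, time.time elsewhere
times = []
for _ in range(5):
    start = timer()
    work()
    times.append(timer() - start)

# On a lightly loaded machine, the minimum is the least-disturbed measurement.
print min(times)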

Jim said...

Looking at the timeit module I find this:


if sys.platform == "win32":
    # On Windows, the best timer is time.clock()
    default_timer = time.clock
else:
    # On most other platforms the best timer is time.time()
    default_timer = time.time
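In other words, you can let timeit pick the clock for you instead of hard-coding the platform check yourself; a minimal sketch (the "do something" part is a placeholder):

from timeit import default_timer

start = default_timer()
# ... do something
elapsed = default_timer() - start
print elapsed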

Anonymous said...

Folks,
does anyone else have a Mac? Try the script below.

Also my _limits.h has
#define __DARWIN_CLK_TCK 100 /* ticks per second */
so you'll have a hard time() doing better than that on a mac

cheers
-- denis

#!/usr/bin/env python
""" time.clock() is broken in mac 10.4.11 ppc, Python 2.5.1 ?
"""
from time import *

print ctime()

t0 = clock()
sleep( 10 )
t = clock()
print "clock:", t - t0, t0, t, ctime()

t0 = time()
sleep( 10 )
t = time()
print "time:", t - t0, t0, t, ctime()

# =>
# Fri Oct 24 12:37:47 2008
# clock: 0.0 0.14 0.14 Fri Oct 24 12:37:57 2008
# time: 10.0000991821 1224844677.59 1224844687.59 Fri Oct 24 12:38:07 2008

Anonymous said...

So far the thread seems to focus on timing from time A to time B, but what about getting accurate system time? I've had problems because the processor time seems to be wildly inaccurate, Windoze SNTP only corrects once a week (if you're lucky), and my Python network monitor running 24x7 ends up running independently of the periodically corrected system time. It ends up minutes out within a week.
What I want is for Python to pick up the corrected system clock time. The question is how?
Mike