Antoine Pitrou wrote:
> On Wed, 15 Feb 2012 20:56:26 +0100
> "Martin v. Löwis" <martin at v.loewis.de> wrote:
>> With the quartz in Victor's machine, a single clock takes 0.3ns, so
>> three of them make a nanosecond. As the quartz may not be entirely
>> accurate (and also as the CPU frequency may change) you have to measure
>> the clock rate against an external time source, but Linux has
>> implemented algorithms for that. On my system, dmesg shows
>>
>> [    2.236894] Refined TSC clocksource calibration: 2793.000 MHz.
>> [    2.236900] Switching to clocksource tsc
>
> But that's still not meaningful. By the time clock_gettime() returns,
> an unpredictable number of nanoseconds have elapsed, and even more when
> returning to the Python evaluation loop.
>
> So the nanosecond precision is just an illusion, and a float should
> really be enough to represent durations for any task where Python is
> suitable as a language.

I reckon PyPy might be able to call clock_gettime() in a tight loop
almost as frequently as the C program (although without the overhead of
converting to a decimal).

Cheers,
Mark.
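[A rough sketch, not from the thread: Antoine's point that the nanosecond precision is illusory can be checked empirically by timing clock_gettime() calls from Python in a tight loop. The per-call cost from the interpreter is typically far more than a nanosecond, so the trailing digits of any single reading are swamped by call overhead. This uses time.clock_gettime() (Python 3.3+, Unix only); the loop count is an arbitrary choice.]

```python
import time

# Estimate the per-call overhead of clock_gettime() from Python.
# Each iteration makes one call; the surrounding pair of calls
# brackets the whole loop.
N = 100_000
start = time.clock_gettime(time.CLOCK_MONOTONIC)
for _ in range(N):
    time.clock_gettime(time.CLOCK_MONOTONIC)
end = time.clock_gettime(time.CLOCK_MONOTONIC)

per_call_ns = (end - start) / N * 1e9
print(f"approx. overhead per call: {per_call_ns:.0f} ns")
```

On CPython the figure is typically tens to hundreds of nanoseconds per call, which is the kind of unpredictable elapsed time Antoine describes, and also the baseline any PyPy tight loop would have to beat.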