Tim:
> A lot of things get mixed up here ;-)  The _mean_ is actually useful
> if you're using a poor-resolution timer with a fast test.

In which case discrete probability distributions are a better model
than my assumption of a continuous distribution.

I looked at the distribution of times for 1,000 repeats of

    import time

    times = []
    for _ in range(1000):
        t1 = time.time()
        t2 = time.time()
        times.append(t2 - t1)

The times and counts I found were

    9.53674316406e-07   388
    1.19209289551e-06    95
    1.90734863281e-06   312
    2.14576721191e-06   201
    2.86102294922e-06     2
    1.90734863281e-05     1
    3.00407409668e-05     1

This implies my Mac's time.time() has a resolution of
2.3841857910000015e-07 s (0.2 µs, or about 4.2 MHz), or possibly a
small integer fraction thereof.

The timer overhead takes between 4 and 9 ticks. Ignoring the outliers,
and assuming I have the CPU all to my benchmark for the timeslice, I
expect about +/- 3 ticks of noise per test.

To measure a 1% speedup reliably I'll need to run, what, 300-600
ticks? That's a millisecond, and with a time quantum of 10 ms there's
a 1 in 10 chance that I'll incur that overhead. In other words, I
don't think my high-resolution timer is high enough.

Got a spare Cray I can use, and will you pay for the power bill?

Andrew
dalke at dalkescientific.com
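
[A minimal sketch of the tally described above, plus one way to
estimate the tick size. The smallest-gap heuristic is an assumption;
the message doesn't say how the 0.2 µs figure was actually derived.]

    from collections import Counter
    import time

    # Repeat the measurement from the message: time two back-to-back
    # calls to time.time() and record the difference.
    times = []
    for _ in range(1000):
        t1 = time.time()
        t2 = time.time()
        times.append(t2 - t1)

    # Tally how often each distinct delta occurs.
    counts = Counter(times)
    for delta in sorted(counts):
        print("%.11e  %d" % (delta, counts[delta]))

    # Assumed heuristic: take the smallest positive gap between
    # distinct deltas as the timer's tick size.
    distinct = sorted(counts)
    tick = min(b - a for a, b in zip(distinct, distinct[1:]))
    print("resolution ~ %g s (about %.1f MHz)" % (tick, 1.0 / tick / 1e6))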
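
[And a quick sanity check of the 4-to-9-ticks claim, dividing each
observed delta by the inferred tick; all figures are taken from the
message above.]

    TICK = 2.3841857910000015e-07  # resolution quoted in the message

    observations = [
        (9.53674316406e-07, 388),
        (1.19209289551e-06, 95),
        (1.90734863281e-06, 312),
        (2.14576721191e-06, 201),
        (2.86102294922e-06, 2),
        (1.90734863281e-05, 1),
        (3.00407409668e-05, 1),
    ]

    # Express each delta in ticks: the common cases land on 4, 5, 8
    # and 9 ticks, with the outliers at 12, 80 and 126 ticks.
    for delta, count in observations:
        print("%.11e = %5.1f ticks (%d samples)" % (delta, delta / TICK, count))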