Antoine Pitrou wrote:
>>> In CPython, looking for reference cycles is a parasitic task that
>>> interferes with what you are trying to measure. It is not critical in
>>> any way, and you can schedule it much less often if it takes too much
>>> CPU, without any really adverse consequences. timeit takes the safe way
>>> and disables it completely.
>>>
>>> In PyPy, it doesn't seem gc.disable() should do anything, since you'd
>>> lose all automatic memory management if the GC was disabled.
>>>
>> It disables finalizers, but that is beside the point. The point is
>> that people use the timeit module to compute the absolute time it takes
>> for CPython to do things, among other things comparing it to PyPy. While
>> I do agree that in microbenchmarks you don't lose much by just
>> disabling it, it does affect larger applications. So answering a
>> question like "how much time will json encoding take in my
>> application" should take cyclic GC time into account.
>
> If you are only measuring json encoding of a few select pieces of data
> then it's a microbenchmark.
> If you are measuring the whole application (or a significant part of it)
> then I'm not sure timeit is the right tool for that.

Perhaps timeit should grow a macro-benchmark tool too? I find myself often
using timeit to time macro-benchmarks simply because it's more convenient
at the interactive interpreter than the alternatives.

Something like this idea perhaps?
http://preshing.com/20110924/timing-your-code-using-pythons-with-statement

--
Steven
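For concreteness, the with-statement idea from that link boils down to a
small context manager along these lines (a rough sketch; the class and
attribute names here are illustrative rather than taken from the article).
Unlike timeit, it leaves the cyclic collector alone, so GC pauses show up
in the measured time:

    import time

    class Timer:
        """Measure wall-clock time of the enclosed block.

        Unlike timeit, this does not call gc.disable(), so cyclic GC
        pauses are included in the measurement.
        """
        def __enter__(self):
            self.start = time.perf_counter()
            return self

        def __exit__(self, *exc_info):
            self.interval = time.perf_counter() - self.start
            return False  # let exceptions propagate

    # Usage at the interactive prompt:
    # with Timer() as t:
    #     json.dumps(big_object)
    # print(t.interval)

On the narrower GC point, the timeit documentation also notes that the
collector can be re-enabled for a run by putting gc.enable() in the setup
string, e.g. timeit.Timer(stmt, setup='gc.enable()').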