[Guido]
>> Maybe I'm out of tune, but I thought that optimizations should be
>> turned off by default because most people don't need them and because
>> of the risk that the optimizer might break something.  Haven't there
>> been situations in Python where one optimization or another was found
>> to be unsafe after having been in use (in a release!) for a long
>> time?

[David Abrahams]
> Isn't that a good argument for having them turned on all the time?
> The easiest way to ship code that suffers from an optimizer bug is to
> do all your development and most of your testing with unoptimized
> code.

That's my belief in this case (as I tried to say earlier, perhaps
unsuccessfully).  The current peephole optimizer has been overwhelmingly
successful (the bug report I referenced was about modules with more than
64KB of bytecode, and Python has always had troubles of one kind or
another with those -- so it grew another one in that area, BFD), and I'm
glad everyone runs it all the time.

> In C/C++, there's a good reason to develop code unoptimized:  it's much
> easier to debug.

I'm not sure that applies to Python.  It *used* to apply, and for a
similar reason, when -O also controlled whether SET_LINENO opcodes were
generated, and the Python debugger was blind without them.  That's no
longer the case.  The only reason to avoid -O now is to retain

    if __debug__:

blocks and (same thing) assert statements.  That can make a big speed
difference in Zope and ZEO, for example.  Life would definitely be worse
if -O could introduce new bugs (other than programmer errors of putting
essential code inside __debug__ blocks).
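
To make that last point concrete, here is a minimal sketch (mine, not part
of the original post; the function name and checks are made up) of what -O
compiles away:

    def expensive_check(items):
        # Under -O, __debug__ is False, so the compiler drops this whole
        # block, along with any assert statements elsewhere in the code.
        if __debug__:
            print("running consistency checks")
            assert all(isinstance(i, int) for i in items), "non-int item"
        return sum(items)

    print(expensive_check([1, 2, 3]))

Run it once normally and once with "python -O": the checks (and their cost)
vanish in the second run -- which is also exactly how essential code
mistakenly placed inside a __debug__ block would go missing.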