2011/8/29 Charles-François Natali <neologix at free.fr>:
>> +3 (agreed to Jesse, Antoine and Ask here).
>> The http://bugs.python.org/issue8713 described "non-fork" implementation
>> that always uses subprocesses rather than plain forked processes is the
>> right way forward for multiprocessing.
>
> I see two drawbacks:
> - it will be slower, since the interpreter startup time is
> non-negligible (well, normally you shouldn't spawn a new process for
> every item, but it should be noted)

Yes; but spawning and forking are both slow to begin with - it's
documented (I hope heavily enough) that you should spawn multiprocessing
children early and keep them around, instead of constantly
creating/destroying them.

> - it'll consume more memory, since we lose the COW advantage (even
> though it's already limited by the fact that even treating a variable
> read-only can trigger an incref, as was noted in a previous thread)
>
> cf

Yes, it would consume slightly more memory; but the benefits - making it
consistent across *all* platforms with the *same* restrictions - get us
closer to the principle of least surprise.
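[Editor's note: the "spawn children early and keep them around" advice above can be sketched with today's multiprocessing API, which grew an explicit "spawn" start method after this discussion. The function name `square` and the pool size are illustrative only.]

```python
import multiprocessing as mp

def square(x):
    # Work item executed inside a long-lived worker process.
    return x * x

if __name__ == "__main__":
    # Request the subprocess-based "spawn" start method explicitly,
    # so behaviour is the same on all platforms (no fork/COW).
    ctx = mp.get_context("spawn")
    # Pay the interpreter-startup cost once, up front...
    with ctx.Pool(processes=4) as pool:
        # ...then reuse the same workers for many items, rather than
        # spawning a fresh process per item.
        results = pool.map(square, range(10))
    print(results)
```

The point from the thread holds either way: the per-process startup cost is amortized because the pool's workers are created once and reused.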