On Sat, Nov 29, 2014 at 9:07 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:
> Guido wrote a specific micro-benchmark for that case in one of the
> other threads. On his particular system, the overhead was around 150
> ns per link in the chain at the point the data processing pipeline was
> shut down. In most scenarios where a data processing pipeline is worth
> setting up in the first place, the per-item handling costs (which
> won't change) are likely to overwhelm the shutdown costs (which will
> get marginally slower).

If I hadn't written that benchmark I wouldn't recognize what you're
talking about here. :-)

This is entirely off-topic, but if I didn't know it was about one
generator calling next() to iterate over another generator, I wouldn't
have understood what pattern you refer to as a data processing
pipeline. And I still don't understand how the try/except *setup* cost
became the *shutdown* cost of the pipeline. But that doesn't matter,
since the number of setups equals the number of shutdowns.

-- 
--Guido van Rossum (python.org/~guido)
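[For anyone reading along without the other thread: below is a minimal
sketch, not the actual micro-benchmark, of the kind of pipeline under
discussion. Each stage iterates its source explicitly with next()
inside a try/except StopIteration, so the except clause is *set up*
once per item but only *fires* once per link, at pipeline shutdown.]

    def produce(n):
        # Innermost stage: the data source.
        for i in range(n):
            yield i

    def stage(source):
        # One link in the chain: pulls from its source with next().
        # Under PEP 479 semantics, a StopIteration escaping a generator
        # body is an error, so it is caught and turned into a return.
        while True:
            try:
                item = next(source)
            except StopIteration:
                return  # shutdown: fires once per link in the chain
            yield item * 2  # per-item handling dominates total cost

    pipeline = stage(stage(produce(1000)))
    print(sum(pipeline))  # shutdown raises once through each link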