On Fri, 2003-12-12 at 16:07, Guido van Rossum wrote:
> > > 1. What is the purpose of hiding EINTR?
> >
> > My code uses a lot of interrupts (SIGALRM, SIGIO, SIGCHLD, ...) and I
> > almost always need to trap and restart interrupted system calls. So I
> > made that wrapper. Nearly all I/O uses it. In fact, I think it should be
> > the default behavior....
>
> Here we may have a fundamental disagreement. In my view it is
> impossible to write correct code when signal handlers can run any
> time, and therefore I have given up on using signals at all.
>
> Everything you do with signals can also be done without signals, and
> usually more portably. (You may have to use threads.)

Over the years I have talked to quite a few fellow Python programmers, some new and some seasoned, and I have heard many complaints about the dismal performance of threaded Python programs. Just the other night, at the BayPiggies meeting, there was a small discussion of threads in Python. Comments such as "I don't do threads in Python", "threads suck", and "avoid threads" were heard. That is typical of the conversations I have had in the workplace as well. The common consensus among us lay Python programmers is that threads in Python should be avoided.

I have read Sam Rushing's treatise on threads and co-routines, and have used his "medusa" HTTP server framework. I like that "reactor model" myself; it works well for programs that do a lot of I/O. I have modified it slightly to use a "proactor model" for my own core libraries, and it seems to work well. I can tell you that that framework is indeed used in at least a few companies.

I also often use a module called "proctools", which provides some Unix-shell-like functionality in that it lets you spawn and monitor sub-processes. This module traps SIGCHLD. That is the "traditional" way to detect child exits, and it is necessary in order to avoid zombie processes.
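[The "traditional" SIGCHLD handling described above boils down to reaping finished children with a non-blocking waitpid loop; proctools' actual code is not shown in this thread, so the following is only a minimal sketch of the idea, with names of my own choosing:]

```python
import os
import signal

def reap_children(signum, frame):
    """SIGCHLD handler: collect the exit status of every finished
    child with a non-blocking waitpid loop, so no zombies are left."""
    while True:
        try:
            pid, status = os.waitpid(-1, os.WNOHANG)
        except OSError:
            break            # ECHILD: no children remain
        if pid == 0:
            break            # children exist, but none have exited yet
        # here one would record or report (pid, status) as needed

signal.signal(signal.SIGCHLD, reap_children)
```

[The WNOHANG flag is what keeps the handler from blocking; the loop matters because several children may exit before the handler runs, and a single waitpid call would reap only one of them.]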
Now, once you start trapping even one signal like that, you are faced with the possibility of interrupted system calls during reads and writes (and elsewhere). Thus, that problem is unavoidable in the kinds of programs that I and others write. Granted, you can't guarantee correct behavior with that model, but using threads is a trade-off as well: you then have to deal with thread locks and such, and those are yet another source of subtle bugs and performance problems. Again, the common consensus among the Python programmers that I know is to avoid threads.

One other method I use is to treat forked processes as if they were threads. That, combined with a third-party module that exposes SysV IPC to Python, provides the same benefits as threads but without the problems.

-- 
Keith Dart <mailto:kdart at kdart.com> <http://www.kdart.com/>
Public key ID: B08B9D2C
Public key: <http://www.kdart.com/~kdart/public.key>
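[The restart wrapper referred to in the quoted exchange is not shown in the thread; a minimal sketch of the idea — retrying a system call whenever a signal handler interrupts it with EINTR — could look like the following. The name `restart` is my own; note that Python 3.5+ retries EINTR automatically per PEP 475, so a wrapper like this mattered mainly for the Python of this era:]

```python
import errno

def restart(func, *args):
    """Call func(*args), transparently restarting it whenever a
    signal handler interrupts the underlying system call (EINTR)."""
    while True:
        try:
            return func(*args)
        except OSError as exc:
            if exc.errno == errno.EINTR:
                continue     # interrupted by a signal -- just retry
            raise
```

[Typical use would be something like `data = restart(os.read, fd, 4096)`, so that a SIGCHLD or SIGALRM arriving mid-read does not surface as a spurious error.]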