A different implementation (e.g. one using Windows IOCP) can do timeouts without using select (and must, since select does not work with IOCP). So will a gevent-based implementation: it will time out the accept on each socket individually, not by calling select on each of them.

The reason I'm fretting is latency. There is only one thread accepting connections. If it has to do an extra event loop dance for every socket that it accepts, that adds to the delay in getting a response from the server. Accept() is indeed critical for socket server performance.

Maybe this is all just nonsense; still, it seems odd to jump through extra hoops to emulate functionality that is already supported by the socket spec and can be done in the most appropriate way for each implementation.

K

-----Original Message-----
From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of Antoine Pitrou
Sent: 14. mars 2012 10:23
To: python-dev at python.org
Subject: Re: [Python-Dev] SocketServer issues

On Wed, 14 Mar 2012 16:59:47 +0000
Kristján Valur Jónsson <kristjan at ccpgames.com> wrote:
> It just seems odd to me that it was designed to use the "select" api
> to do timeouts, where timeouts are already part of the socket protocol
> and can be implemented more efficiently there.

How is it more efficient if it uses the exact same system calls?
And why are you worrying exactly? I don't understand why accept()
would be critical for performance.

Thanks

Antoine.
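[For reference, a minimal sketch (not from the original thread) contrasting the two approaches being discussed: waiting on select() with a timeout before accept(), roughly what the stdlib socketserver does, versus putting the timeout on the listening socket itself so accept() can time out on its own. The helper names, port, and standalone usage are illustrative only.]

    import select
    import socket

    # Approach 1: wait on select() with a timeout, then accept() only if
    # the listening socket is reported readable (the select-based style).
    def accept_with_select(listener, timeout):
        ready, _, _ = select.select([listener], [], [], timeout)
        if ready:
            return listener.accept()
        return None  # timed out

    # Approach 2: put the timeout on the socket itself, so accept() raises
    # socket.timeout when nothing arrives.  An IOCP- or gevent-based socket
    # implementation can honour this timeout in whatever way suits it,
    # without requiring select() to work at all.
    def accept_with_timeout(listener, timeout):
        listener.settimeout(timeout)
        try:
            return listener.accept()
        except socket.timeout:
            return None  # timed out

    if __name__ == "__main__":
        # Hypothetical listening socket, for illustration only.
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("127.0.0.1", 0))
        srv.listen(5)
        print(accept_with_select(srv, 0.5))
        print(accept_with_timeout(srv, 0.5))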