On Thu, Jun 14, 2018 at 12:40 PM Tin Tvrtković <tinchester at gmail.com> wrote:
>
> Hi,
>
> I've been using asyncio a lot lately and have encountered this problem
> several times. Imagine you want to do a lot of queries against a database:
> spawning 10000 tasks in parallel will probably cause a lot of them to fail.
> What you need is a task pool of sorts, to limit concurrency and do only 20
> requests in parallel.
>
> If we were doing this synchronously, we wouldn't spawn 10000 threads using
> 10000 connections; we would use a thread pool with a limited number of
> threads and submit the jobs into its queue.
>
> To me, tasks are (somewhat) logically analogous to threads. The solution
> that first comes to mind is to create an AsyncioTaskExecutor with a
> submit(coro, *args, **kwargs) method. Put a reference to the coroutine and
> its arguments into an asyncio queue. Spawn n tasks pulling from this queue
> and awaiting the coroutines.
>
> It'd probably be useful to have this in the stdlib at some point.

Sounds like a good idea!  Feel free to open an issue to prototype the API.

Yury
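
For reference, here is a minimal sketch of the pattern Tin describes,
assuming the executor is constructed while an event loop is available. The
class name AsyncioTaskExecutor and the submit(coro, *args, **kwargs) method
mirror his wording; the future-based result plumbing, the close() method,
and the query_db coroutine in the usage example are illustrative
assumptions, not any actual or proposed stdlib API:

    import asyncio


    class AsyncioTaskExecutor:
        """Sketch of a task pool: at most n_workers jobs run concurrently."""

        def __init__(self, n_workers=20):
            # Assumes a default event loop is available at construction time.
            self._loop = asyncio.get_event_loop()
            self._queue = asyncio.Queue()
            # Spawn n worker tasks; each awaits one job at a time, so overall
            # concurrency is capped at n_workers.
            self._workers = [
                asyncio.ensure_future(self._worker()) for _ in range(n_workers)
            ]

        async def _worker(self):
            while True:
                coro_func, args, kwargs, fut = await self._queue.get()
                try:
                    fut.set_result(await coro_func(*args, **kwargs))
                except Exception as exc:
                    fut.set_exception(exc)
                finally:
                    self._queue.task_done()

        def submit(self, coro_func, *args, **kwargs):
            # Enqueue the coroutine function and its arguments; return a
            # future the caller can await for the result.
            fut = self._loop.create_future()
            self._queue.put_nowait((coro_func, args, kwargs, fut))
            return fut

        async def close(self):
            # Wait for all queued jobs to finish, then cancel the idle workers.
            await self._queue.join()
            for w in self._workers:
                w.cancel()

A caller would then submit all 10000 jobs and await the resulting futures
(query_db here is a hypothetical stand-in for the caller's own coroutine
function):

    async def main():
        executor = AsyncioTaskExecutor(n_workers=20)
        jobs = [executor.submit(query_db, i) for i in range(10000)]
        results = await asyncio.gather(*jobs)
        await executor.close()

Note that an asyncio.Semaphore wrapped around each coroutine achieves the
same concurrency cap without a dedicated queue, at the cost of creating all
10000 tasks up front.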