Source: https://mail.python.org/pipermail/python-dev/2010-March/098169.html

[Python-Dev] [PEP 3148] futures - execute computations asynchronously

Brian Quinlan brian at sweetapp.com
Fri Mar 5 07:03:02 CET 2010
Hi all,

I recently submitted a draft PEP for a package designed to make it
easier to execute Python functions asynchronously using threads and
processes. It lets the user focus on their computational problem
without having to build explicit thread/process pools and work queues.

The package has been discussed on stdlib-sig but now I'd like this  
group's feedback.

The PEP lives here:
http://python.org/dev/peps/pep-3148/
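Before the longer examples, here is a minimal sketch of the core submit/result pattern the PEP describes. It is written against `concurrent.futures`, the stdlib module this package later became in Python 3.2, so it runs on current interpreters; the standalone package in this post is imported as `futures` instead, and the `square` function is just an illustrative placeholder.

```python
import concurrent.futures

def square(x):
    return x * x

with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
    # submit() schedules the call and returns a Future immediately.
    future = executor.submit(square, 7)
    # result() blocks until the call completes, then returns its value
    # (or re-raises the exception the call raised).
    result = future.result()

print(result)  # 49
```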

Here are two examples to whet your appetites:

"""Determine if several numbers are prime."""
import futures
import math

PRIMES = [
     112272535095293,
     112582705942171,
     112272535095293,
     115280095190773,
     115797848077099,
     1099726899285419]

def is_prime(n):
     if n % 2 == 0:
         return False

     sqrt_n = int(math.floor(math.sqrt(n)))
     for i in range(3, sqrt_n + 1, 2):
         if n % i == 0:
             return False
     return True

# Uses as many CPUs as your machine has.
with futures.ProcessPoolExecutor() as executor:
     for number, is_prime in zip(PRIMES, executor.map(is_prime,  
PRIMES)):
         print('%d is prime: %s' % (number, is_prime))

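One property of the example above worth making explicit: `Executor.map` yields results in the order of the inputs, even when the underlying calls finish out of order, which is why the `zip(PRIMES, ...)` pairing is safe. A small sketch (again against the stdlib `concurrent.futures`, with a throwaway `double` function; a thread pool is used so it runs without the `__main__` guard a process pool needs):

```python
import concurrent.futures

def double(n):
    return 2 * n

with concurrent.futures.ThreadPoolExecutor(max_workers=3) as executor:
    # map() distributes the calls across workers but yields results
    # in input order, not completion order.
    results = list(executor.map(double, [1, 2, 3, 4]))

print(results)  # [2, 4, 6, 8]
```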

"""Print out the size of the home pages of various new sites (and Fox  
News)."""
import futures
import urllib.request

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

def load_url(url, timeout):
    return urllib.request.urlopen(url, timeout=timeout).read()

with futures.ThreadPoolExecutor(max_workers=5) as executor:
    # Create a future for each URL load.
    future_to_url = dict((executor.submit(load_url, url, 60), url)
                         for url in URLS)

    # Iterate over the futures in the order that they complete.
    for future in futures.as_completed(future_to_url):
        url = future_to_url[future]
        if future.exception() is not None:
            print('%r generated an exception: %s' % (url,
                                                     future.exception()))
        else:
            print('%r page is %d bytes' % (url, len(future.result())))

Cheers,
Brian