
When to use and when not to use Python 3.5 `await`?

I'm getting the hang of using asyncio in Python 3.5, but I haven't seen a description of which things I should await and which I should not, or where the difference would be negligible. Do I just have to use my best judgment, as in "this is an I/O operation and thus should be awaited"?

Question from: https://stackoverflow.com/questions/33357233/when-to-use-and-when-not-to-use-python-3-5-await


1 Reply


By default, all your code is synchronous. You can make it asynchronous by defining functions with async def and "calling" those functions with await. A more correct question would be "When should I write asynchronous code instead of synchronous code?", and the answer is "When you can benefit from it". As you noted, in cases where you work with I/O operations you usually will:

# Synchronous way:
download(url1)  # takes 5 sec.
download(url2)  # takes 5 sec.
# Total time: 10 sec.

# Asynchronous way:
await asyncio.gather(
    async_download(url1),  # takes 5 sec. 
    async_download(url2)   # takes 5 sec.
)
# Total time: only 5 sec. (plus a little overhead for using asyncio)
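
For reference, here is a minimal runnable sketch of the asynchronous variant; asyncio.sleep stands in for real network I/O, and the URLs and the 5-second duration are placeholder assumptions:

import asyncio
import time

async def async_download(url):
    # asyncio.sleep yields control to the event loop,
    # just as an awaited socket read would
    await asyncio.sleep(5)
    return 'content of %s' % url

async def main():
    start = time.time()
    await asyncio.gather(
        async_download('http://example.com/1'),
        async_download('http://example.com/2'),
    )
    print('Total time: %.1f sec.' % (time.time() - start))  # ~5.0, not 10.0

loop = asyncio.get_event_loop()
loop.run_until_complete(main())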

Of course, any function that uses asynchronous code must itself be asynchronous (defined with async def). But an asynchronous function can freely use synchronous code, and it makes no sense to convert synchronous code to asynchronous without a reason:

# extract_links() must be async because it awaits the async function async_download()
async def extract_links(url):
    # async_download() was made async to benefit from non-blocking I/O
    html = await async_download(url)

    # parse() does no I/O, so there is no sense in making it async
    links = parse(html)

    return links
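
Calling extract_links() then follows the same rule: from another coroutine you write links = await extract_links(url), and at the top level you hand it to the event loop, e.g. loop.run_until_complete(extract_links(url)).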

One very important thing: any long-running synchronous operation (longer than, say, 50 ms; it's hard to give an exact figure) will freeze all your asynchronous operations for that time, because the event loop cannot switch to another task while a synchronous call is executing:

async def extract_links(url):
    data = await download(url)
    links = parse(data)
    # If search_in_very_big_file() takes a long time to run,
    # every other running async function (anywhere else in the code)
    # will be frozen until it returns; you need to avoid this situation
    links_found = search_in_very_big_file(links)
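
To see this freeze in action, here is a small self-contained sketch (the names are hypothetical, and time.sleep stands in for a CPU-heavy call like search_in_very_big_file()); the heartbeat task stops ticking for the three seconds the blocking call runs:

import asyncio
import time

async def heartbeat():
    # Prints a tick roughly every 0.5 sec. while the event loop is free
    for i in range(10):
        print('tick', i)
        await asyncio.sleep(0.5)

async def blocker():
    await asyncio.sleep(1)
    # A synchronous call: the event loop is stuck here for 3 sec.,
    # so heartbeat() cannot run and the ticks stop
    time.sleep(3)

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(heartbeat(), blocker()))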

You can avoid this by running long synchronous functions in a separate process (and awaiting the result):

import asyncio
from concurrent.futures import ProcessPoolExecutor

executor = ProcessPoolExecutor(2)

async def extract_links(url):
    data = await download(url)
    links = parse(data)
    # Now the main process can handle other async functions
    # while the separate process does the heavy search
    loop = asyncio.get_event_loop()
    links_found = await loop.run_in_executor(executor, search_in_very_big_file, links)
    return links_found
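
Note that with ProcessPoolExecutor the function and its arguments are pickled to reach the worker process, so they must be picklable, and on platforms that spawn worker processes (e.g. Windows) the entry-point code should live under an if __name__ == '__main__' guard. A minimal usage sketch, assuming download(), parse(), and search_in_very_big_file() are defined:

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    links = loop.run_until_complete(extract_links('http://example.com'))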

One more example: using requests inside asyncio. requests.get is just a synchronous long-running function, which you shouldn't call inside async code (again, to avoid freezing the event loop). But it runs long because of I/O, not because of long calculations. In that case, you can use a ThreadPoolExecutor instead of a ProcessPoolExecutor to avoid some multiprocessing overhead:

import asyncio
import requests
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(2)

async def download(url):
    # requests.get blocks, so run it in a worker thread and await the result
    loop = asyncio.get_event_loop()
    response = await loop.run_in_executor(executor, requests.get, url)
    return response.text
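
Driving it the same way as before, two requests.get calls now overlap in the thread pool (the URLs are placeholders):

async def main():
    page1, page2 = await asyncio.gather(
        download('http://example.com/a'),
        download('http://example.com/b'),
    )
    print(len(page1), len(page2))

loop = asyncio.get_event_loop()
loop.run_until_complete(main())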
