One of the funny things to me about the way we use languages with coroutines (or "async" if you're nasty) is that we set up the coroutines and then... Immediately await on them.

It's like... We're halfway there. So close. So close to actually taking advantage of concurrent programming. And yes, letting the event loop get in there instead of blocking hard on a main thread is a good thing and a strict improvement in circumstances where it matters.

But I do think sometimes about all the code I've seen where there are three network requests to three separate services that are totally independent of each other and the code is just... Eating the cost of "fully resolve request 1, fully resolve request 2, fully resolve request 3." I do wonder, sometimes, how many people think of these tools as concurrency and not just "the magic pixie-dust syntax I have to use to make half my functions the right color to be called from my other functions."

#python #asyncio

@mark Hm so what would be the advantage of setting up a coroutine and leaving it to be awaited later?

I mean, not that it seems at all far-fetched that such a thing would be useful, I'm just curious since I haven't gone particularly deep into async programming and I can't think of a situation where it would be needed.

I *have* gone deep enough to understand that async functions are not just magic pixie dust though. But I love that name. Eagerly awaiting a PEP to replace the "async" keyword with "magic-pixie-dust" 😂

#Python #asyncio

Concurrency (computer science) - Wikipedia

@parslii @mark This is not a useful response
@diazona @mark while your async function is running, you can do a different thing. When the different thing is done, you can go back to waiting on the async function or perhaps even retrieve its results. Meanwhile, it has presumably made progress, so made good use of that waiting time.
@parslii @mark Sure, but that's just the fundamental premise of concurrent programming, I don't think that's what is being discussed in this thread. (But I may have misunderstood, as per my reply on the other branch of the thread)

@diazona Some operations, like database access, mostly just long-poll while waiting on the database itself. Multiple concurrent requests against the same database may or may not be useful (they'll certainly put more pressure on the db; whether that's good or bad depends on the details of the architecture).

But if your service is requesting data of multiple separate databases and then joining it somehow, and what you're requesting from those DBs doesn't depend on each other (like, for instance, if you have one DB for user auth and one for user contact info, and those are both keyed by user ID), there's no reason to do those as

auth_data = await get_user_auth()
contact_info = await get_contact_info()

That blocks even starting the request to the contact-info database on a full round trip to an unrelated database. You can fire both requests off at once and then await them both before continuing, and save yourself some of that overhead time.
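
Something like this instead, roughly (a sketch using asyncio tasks; the function names are just the made-up ones from above):

auth_task = asyncio.create_task(get_user_auth())
contact_task = asyncio.create_task(get_contact_info())
# both requests are now in flight; the event loop can run them while we wait
auth_data = await auth_task
contact_info = await contact_task

Now the two round trips overlap instead of happening back to back.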

@mark Gotcha, yeah that certainly makes sense. I guess I might have misinterpreted what you said... it sounded like, instead of what I would consider a standard use of concurrency:

auth_data, contact_info = await asyncio.gather(get_user_auth(), get_contact_info())

you were saying there are cases where it'd be better to do this:

auth_data_coro = get_user_auth()
contact_info_coro = get_contact_info()
# do some other things... time passes...
auth_data, contact_info = await asyncio.gather(auth_data_coro, contact_info_coro)

I wasn't sure if there's some advantage to the latter approach over the former that I'm not seeing.
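
(One thing I realize as I type this out: as far as I understand it, a bare coroutine object in asyncio doesn't start running until it's awaited or wrapped in a task, so the second version as I wrote it wouldn't actually get a head start during the "time passes" part anyway. To really kick the work off early it would presumably need something like

auth_task = asyncio.create_task(get_user_auth())
contact_task = asyncio.create_task(get_contact_info())
# do some other things... the tasks make progress in the background...
auth_data, contact_info = await asyncio.gather(auth_task, contact_task)

but the question stands either way.)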

(sorry, no formatting on my instance)

@mark there’s a counterpoint to this: backpressure. MOST async code should probably be written in this naive way, with only a few critical synchronization points written to have arbitrary parallelization. better to start off slow, get the benefit of a naive “if it wants me to back off, the responses will be slow” backpressure, and then advance to “okay this is a performance issue where we are waiting too much, let’s think about how much parallelism we want” when needed
@mark in python, the framework I maintain (Twisted) is much more biased towards the style you suggest, where it eagerly kicks off work all the time, parallelizes by default in a lot of cases, as opposed to the stdlib asyncio, which tends towards the “await immediately” style. neither is perfect, and trio has some ideas we should both copy from to make parallelism more idiomatic and safer, but twisted does tend to get into resource-exhaustion scenarios more easily

@glyph Ah yeah. 😄 So, I have definitely seen the pendulum swung too far in the other direction on a TypeScript codebase once.

That's how I learned that browsers have a maximum number of outstanding requests from a given page... I think the limit for Chrome is somewhere around 8 or 16? 😉

@glyph Agreed. This is not a call for everyone to embrace the Orc programming language; rather, it's mostly a comment that (at least in my experience) a lot of the async code I've peer-reviewed never even gets considered for restructuring so that multiple long-poll operations can be outstanding at the same time.

@mark I feel like this kinda boils down to the double-edged sword of async language support:

Advantage: asynchronous code can look like simple synchronous code and hide the tricky part!

Disadvantage: asynchronous code can look like simple synchronous code and hide the tricky part!

What Color is Your Function? – journal.stuffwithstuff.com