Looking at the Twitch Enhanced Broadcasting rollout (you encode up to five streams on your local system & send them all to Twitch, which then forwards the single requested stream to viewers rather than transcoding on the server), I wonder if we are eventually going to get multi-resolution video encodes. The start of each block decodes to a low-res output, but extra data adds more spatial detail, so rather than multiple encodes there is one encode with a variable payload size.
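The idea above can be sketched as a layered chunk: the base bytes alone decode to a low-res picture, and each optional enhancement adds detail. This is a toy illustration with made-up payloads, not any real codec's bitstream format; `LayeredChunk` and `payload` are hypothetical names.

```python
# Toy sketch of a layered ("scalable") chunk: the base layer alone decodes
# to a low-res output, and each optional enhancement layer adds spatial
# detail. Names and payload sizes are illustrative, not a real codec API.
from dataclasses import dataclass, field

@dataclass
class LayeredChunk:
    base: bytes                                        # always present: low-res decode
    enhancements: list = field(default_factory=list)   # optional extra detail

    def payload(self, layers: int) -> bytes:
        """Bytes a client needs to decode base + the first `layers` enhancements."""
        return self.base + b"".join(self.enhancements[:layers])

chunk = LayeredChunk(base=b"\x00" * 100,
                     enhancements=[b"\x01" * 300, b"\x02" * 600])

# A constrained client fetches only the base; a fast one fetches everything,
# so one encode serves every quality rung with a variable payload size.
assert len(chunk.payload(0)) == 100
assert len(chunk.payload(2)) == 1000
```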
@shivoa there's some experimental stuff like that, usually called progressive video IIRC?

But I think it's usually better to negotiate the bitrate for most applications, because digital data transmission is mostly all-or-nothing.

FPV drones use PAL/NTSC for this reason: even when there are signal integrity issues, you still get some picture.
@ignaloidas @shivoa I don't think it's necessarily all-or-nothing. Sure, on the Web we don't have loss-tolerant protocols with embedded error correction (though isn't that related to TCP using loss as a congestion indicator?), but when the medium is fundamentally very noisy there are various tradeoffs to try before the ultimate fallback of blasting however much of the uncompressed data fits.
@amonakov @shivoa right, but even the more advanced protocols with the ability to specify tolerable loss, like Media over QUIC, essentially fall back to a bandwidth negotiation. You could serve progressive video over them of course, but it wouldn't be that much better than what you have with bandwidth negotiation.

@ignaloidas @amonakov @shivoa
ok but even with bandwidth negotiation, wouldn't it be easier for the server to discard some chunks of the stream, sending only the low-quality part to low-bandwidth clients and all of the chunks to high-bandwidth clients?

Then the server doesn't have to transcode, and the total bandwidth from the source to the ingest server is max(qualities) instead of sum(qualities)
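A minimal sketch of that forwarding logic, assuming a layered stream where each enhancement layer has a known bandwidth cost (the kbps figures and client names below are made up): the ingest server just counts how many layers fit in each client's budget and drops the rest, with no transcoding, and the broadcaster uploads the layer stack only once.

```python
# Toy sketch: the broadcaster uploads one layered stream; the ingest server
# forwards only as many layers as each client's bandwidth allows.
# Layer costs and client bandwidths are illustrative, not real figures.
LAYER_KBPS = [500, 1500, 3000]   # cost of base layer, then each enhancement

def layers_for(client_kbps: int) -> int:
    """How many layers (1 = base only) fit within a client's bandwidth."""
    total, n = 0, 0
    for cost in LAYER_KBPS:
        if total + cost > client_kbps:
            break
        total += cost
        n += 1
    return max(n, 1)  # always forward at least the base layer

clients = {"phone_on_lte": 600, "desktop": 10_000}
plan = {name: layers_for(bw) for name, bw in clients.items()}
# plan: phone gets just the base layer, desktop gets all three.
# Source-to-ingest bandwidth is sum(LAYER_KBPS) paid once -- roughly the
# top quality -- instead of the sum over independent per-quality encodes.
```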