One of the biggest mistakes the Internet made was trying to make HTTP do everything.

What was HTTP supposed to do?

Transfer hypertext.

Transferring hypertext is simple. It has obvious utility.

Then we decided to transfer photos. Still simple. But photos are neither text nor hypertext.

But that wasn't enough either. We added stuff like GIFs, JavaScript and Flash.

Fast forward. Now everything HTTP does is so complicated that a web browser can be more complex than an operating system.

The reason HTTP sucks for *everything* is that you have to pile layers and layers of stuff on top of it.

Web devs already know this.

But if you're not a dev: press <ctrl> and <u> to view a page's source. What do you see at the top?

Almost always <!doctype html>. Which means:

1. It's a document
2. What follows next is Hypertext Markup Language (HTML)

But are you necessarily utilizing the page as a document? How much HTML is actually being used?

This varies. Nonetheless, the <html> tag is required.
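As a sketch of what "view source" reveals to software, Python's stdlib HTML parser can pick out that doctype declaration and the required root tag. The page string below is hypothetical, just for illustration:

```python
from html.parser import HTMLParser

# Hypothetical page content, standing in for what Ctrl+U would show you.
page = "<!doctype html><html><head><title>Hi</title></head><body><p>Hello</p></body></html>"

class DoctypeSniffer(HTMLParser):
    def __init__(self):
        super().__init__()
        self.doctype = None
        self.tags = []

    def handle_decl(self, decl):
        # Called for <!...> declarations; decl is the text inside the brackets.
        self.doctype = decl

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

s = DoctypeSniffer()
s.feed(page)
print(s.doctype)   # the declaration: "doctype html"
print(s.tags[0])   # the required root element: "html"
```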

@atomicpoet HTTP itself is still fairly simple, as protocols go, which makes it really easy to build complex applications on top of it.
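To see how little the core protocol involves, here's a sketch using only Python's standard library: it spins up a throwaway local server and does one request/response round-trip. Underneath, it's just a request line, headers, a blank line, and a body, in both directions:

```python
import http.server
import threading
import urllib.request

# Minimal handler: reply to any GET with a short plain-text body.
class Hello(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello, hypertext"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

# Port 0 asks the OS for any free port; serve in a background thread.
server = http.server.HTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as resp:
    status, body = resp.status, resp.read()
server.shutdown()

print(status, body)  # 200 b'hello, hypertext'
```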
@ramsey Oh yeah, the protocol itself isn't the problem. I just question why the protocol must do everything.
@atomicpoet That question is why we now have HTTP/2 and HTTP/3. 😉
@atomicpoet @ramsey we should make more use of egregious DNS abuse. 😛
@atomicpoet I think about this a lot. I heard a LONG time ago, back when everyone wanted FLASH websites (ugh) that the web was never built for video. And it's not. All the buffering and huge files.. it's nuts.

@atomicpoet And now that it's so popular, that itself is being used to shove more things through it. DoH is an insanely hostile idea, and its sole design purpose is to bypass network tools by hiding DNS inside HTTP traffic, which is harder to block wholesale.

Now every new Internet protocol is shoved through HTTP as a feature for that reason.
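A sketch of what DoH actually transports, per RFC 8484: an ordinary binary DNS query, wrapped in an HTTPS POST so it blends into web traffic. The code below only builds the query bytes; the resolver URL in the comment is illustrative, not a real endpoint:

```python
import struct

def build_dns_query(name: str, qtype: int = 1) -> bytes:
    """Build a DNS wire-format query (qtype 1 = A record)."""
    # Header: ID=0 (RFC 8484 recommends 0 so responses are cacheable),
    # flags=0x0100 (recursion desired), 1 question, 0 other records.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Question: name encoded as length-prefixed labels, ending with a zero byte,
    # followed by QTYPE and QCLASS=IN (1).
    qname = b"".join(bytes([len(l)]) + l.encode() for l in name.split(".")) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, 1)

query = build_dns_query("example.com")

# A DoH client then sends something like (resolver URL is hypothetical):
#   POST https://dns.example/dns-query
#   Content-Type: application/dns-message
#   <query bytes as the request body>
print(len(query), query[:4].hex())  # 29 00000100
```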

@atomicpoet surely introducing other protocols would make browsers more complicated? Currently from a network point of view they only need to talk http/https. If we used different protocols for images, binary downloads, dynamic updates, scripts, etc., wouldn't this make things more complex?
@atomicpoet once we get into the really complex stuff (streaming video, tunnelling dynamic connections over https, etc) I'd argue we aren't using http as a protocol any more - we are using the fact that ports 80 and 443 are likely to be open to send data in whatever format the application needs

@Irongeek Not everything requires a web browser, nor should it.

Maybe from a network point of view, it's easier to shove everything through HTTP/HTTPS.

But now you're just offloading complexity to other areas of development.

Call me crazy, but maybe it would be better to generally download things through a BitTorrent client than through a web browser.

@atomicpoet Skype did. They were one of the earliest services to get into LANs over HTTP, I guess.
@atomicpoet definitely agree with that.
@atomicpoet Gopher was a much better protocol.
@austincnunn I prefer Gemini. It's richer than gopher, but simpler than HTTP.
@atomicpoet On that issue I agree with you.
@atomicpoet one of the best things it did was use a standard TCP protocol for everything.

@atomicpoet it's not the programmers fault. They had no choice. I was one of them.

In the early noughties we tried developing systems that connected on other ports and spoke different protocols.

But our corporate users could not use them. They were on networks locked down by paranoid security engineers.

Only port 80 was allowed. Or port 443.

So we had to rewire it to fit where corporate security firewalls allowed.

And thus, paranoid security engineers forced everyone to make everything look like web traffic. Which of course makes the whole system less secure.

@pre Ha ha. I have a lot of additional thoughts about this.
@atomicpoet @pre something something Deep Packet Inspection. Don't get me started on the open-ssh server I had to run on port 443 because my remote co-worker's hotel would only allow that port outbound.
@atomicpoet @pre correction: it also allowed port 80 but it would inspect those packets and be like "this doesn't look like unsecured HTTP, no cx for you."
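The kind of "this doesn't look like HTTP" check can be sketched as a first-bytes heuristic. This is a simplified, hypothetical version, not any specific middlebox's logic: SSH connections open with an `SSH-` banner (RFC 4253), TLS opens with a handshake record (byte 0x16, then 0x03), and plaintext HTTP opens with a method name:

```python
def sniff(first_bytes: bytes) -> str:
    """Guess a protocol from the first bytes a client sends."""
    if first_bytes.startswith(b"SSH-"):
        return "ssh"  # SSH version banner (RFC 4253)
    if len(first_bytes) >= 2 and first_bytes[0] == 0x16 and first_bytes[1] == 0x03:
        return "tls"  # TLS handshake record header
    if first_bytes.split(b" ")[0] in (b"GET", b"POST", b"HEAD", b"PUT", b"DELETE", b"OPTIONS"):
        return "http"  # plaintext HTTP request line
    return "unknown"

print(sniff(b"SSH-2.0-OpenSSH_9.6\r\n"))  # ssh
print(sniff(b"\x16\x03\x01\x02\x00"))     # tls
print(sniff(b"GET / HTTP/1.1\r\n"))       # http
```

This is also roughly how tools like sslh multiplex SSH and HTTPS on a single port 443 — the same trick, used from the other side.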
@pre @atomicpoet yep - I remember us trying to force stock market data through as HTTP. Made no sense at all!
@pre @atomicpoet As a corporate firewall admin, I'm drastically more comfortable opening up a non-443 port for a well-understood purpose than opening up Internet access to things.
@pre @atomicpoet this is exactly the drive from things like CORBA IIOP and RMI to SOAP and eventually REST. But there were other reasons... these could all be forced onto any port. What they couldn't do is go through a proxy that's going to strip off the TLS and replace it with dynamically generated certs so the SOC can fully observe it.
@atomicpoet the only good argument I heard on this was that it'd be compatible with the oldest devices connecting, and would be magically handled by firewalls, at least, that's what I recall from my time on the HyBi WebSockets mailing list