RE: https://toot.cafe/@nolan/115622706069488954

Those of you who are not web devs (good choice) may not quite appreciate what this means.
You know how websites are loads slower now, and don't work without JavaScript? Well, one of the big (dubious) arguments made in favour of this transition was that you download a bunch more code first, but then it'll make everything faster once you're there. It's front-loading.
What @slightlyoff has shown is that this is not relevant, because people don't stay on the sites long enough to get the benefit.

As @nolan goes on to say, he is very sceptical that this supposed "benefit" was ever actually true (and I am too), but _even if it is true_, it doesn't _happen_; people get all this front-loaded code in the expectation that they'll be hanging around for ages and so it'll be worth it, and then they don't hang around for ages and it wasn't. They get kicked in the teeth up front on the assumption they'll do 100 things, and then they do 1 thing and that's it.
@nolan this is like if all the shops made you buy a kilogram of nutmeg when you only need a teaspoon, because "it's cheaper in the long run". Only if I want a kilo of your nutmeg, ̶w̶e̶b̶s̶i̶t̶e̶s̶ shops, and I don't. I want a teaspoon of your nutmeg. At most.
@sil @nolan
Same as the good old "buy an annual subscription now, it's cheaper than 12 monthly payments" trick.
@brunogirin @sil @nolan When ISPs started demanding 18-month commitments, I gravitated to the ones who had enough confidence in their product to think I wouldn't want to leave them after 12 months. My mobile phone contract is renewable monthly.
@sil @nolan Shhhh! I see an entrepreneurial opportunity for a lucrative secondary nutmeg market.
@sil @nolan This reminds me of how, a few Windows versions ago, Microsoft seemed fixated on optimizing speculative pre-caching, so you never had much free RAM, and every time you started an app it had to swap a lot out to disk. You got a major performance boost just by disabling the pre-caching, but updates kept re-enabling it. I'm not sure whether they stopped doing it or everyone just switched to SSDs so you don't notice it as much.

@sil I need to show you one of my modern web dev rants.

We’re literally (yes, literally) bootstrapping 4,000,000 LOC for a hello world. I would say that this is peak dumbassery, but somehow people keep finding new ways to shoehorn in more accidental complexity, brittleness, and new ways to break the contract of the web into modern frameworks.

@Michaelcarducci four frigging million lines?? yikes

@sil Scaffold a blank React app and run tokei on the node_modules dir.

Fundamentally, HTML is not designed for highly interactive web apps.

To quote Roy Fielding:

“REST is optimized for the common case of the web… large-grain hypermedia transfer…”

But Fielding also gave us the optional code-on-demand constraint for dynamically extending the hypermedia. This is _not_ how SPA frameworks work, but it is exactly why little libraries like htmx are so effective.

https://sufficiently-advanced.technology/post/third-way-web-development-part-i

Third-Way Web Development Part I - History

When I look back on my career in technology, I’ve been seduced, over and over again, by this idea that best practices exist and that I can consider my work “good” so long as I follow those best pra...

Sufficiently Advanced Technology

@Michaelcarducci @sil That doesn't sound like the right metric to use. It'd be like counting all of the source in libc against a C hello world program, rather than just the bits used to implement printf().

Code present in the node_modules directory that is never sent to the client is not going to affect the page load time. Maybe some of the dependencies could be split into smaller parts, but that isn't cost free either.

@jamesh @sil oh, I admit it’s a deliberately hyperbolic statement. I guess it’s more to express my personal (and inevitable) “old man yells at cloud” frustration.

At one time a web app could be built with an editor and an empty dir. Now we need 4M LOC before we write a single line. It's kinda comical when you think about it.

But the progress is valid, too. We’ve come a long way.

My main frustrations are maintenance overhead and that SPAs are overengineering/overkill in many scenarios.

@sil @slightlyoff I remember when this was a desktop problem that nobody wanted to talk about. Add a splash screen to cover the enormous setup time, then any instrumentation shows that the user usually closes the thing in seconds.

Yet another case of the same people who scream "you can't improve what you don't measure" steadfastly refusing to measure anything that might impact how they do things...

@sil @slightlyoff Typically SPAs aren't "sites", though. It's in the name: they're applications. Those are things you DO keep open for long periods: calendars, email clients, forums, etc.

SPAs being shoehorned into sites and landing pages by teams that didn't know better at the time is unfortunate, but mostly mitigated by SSR these days anyway.

Also, let's not kid ourselves: it isn't SPAs that are the problem, it's React specifically. It's the slowest at DOM manipulation, has one of the heaviest runtimes, and seems to almost be designed to cause re-render issues.

Even with SSR you still need to ship everything to the client and hydrate. And while I agree React is a big part of the problem, SPAs also need to ship a client-side router, which in theory is not tied to React.
@soviut @slightlyoff It is not true that SPAs are all long-running applications, not at all. Maybe that was the original goal, but the category is considerably wider than that now.
Definition: a site is an "SPA" if clicking a link doesn't work without JS. (Having the link not even be there to be clicked without JS also counts.)
By this definition, a lot of newspapers are SPAs; so are most restaurant sites and most promotional sites. How are these "sites I keep open for a long period"? All I want is to see the opening hours!

@sil @slightlyoff I acknowledged that some sites are SPAs when they shouldn't be.

Though, I'd argue that a restaurant site with any kind of online ordering capability is an application, just like UberEats, DoorDash, etc.

And even if a site is a SPA when it really shouldn't be, SSR solves that by giving you fast time to first paint before hydrating. So you can glance at the hours of operation. Most sites built as SPAs need SSR anyways for per-page SEO metadata.

That's the true indicator; if your project needs SEO, it's a site and if it doesn't it's an app (since most apps are behind a login anyways).

@soviut @sil Oh man, if only we'd been able to [ checks notes ] display a list of options and [ double checks ] build checkout flows before SPAs.

@slightlyoff @sil I wasn't being rude or sarcastic so there's no need for you to be; keep this civil.

I never said it was impossible to build those flows without a SPA, only that SPAs are intended to be applications, not sites. The examples I gave were just applications where a SPA could shine; not where it was somehow the only option.

Have sites been built as SPAs? Yes. Is that the best use case for them? No. Just because a team chose the wrong tech to build something doesn't invalidate the tech.

I've built ordering systems back in the day as MPAs and their UX was never as nice as a stateful frontend app. And building a stateful frontend app is a lot nicer with Vue than vanilla JS. However, if I'm building a landing page, that's going to be Astro these days.

@soviut @sil But it's those same flows where this technology tends to fail most spectacularly. Go trace your local food delivery system w/ WPT or a low-spec device. Here in the US, the UIs of Grubhub, Caviar, Opentable, and many others have been totally destroyed by this tech.

I've been getting variations of "we need this to be built in a heavyweight JS framework because 'it's an app'" for years and years, and I'm not going to take the theory as valid when ~all of the practice cuts against it.

@soviut @sil That said, I agree that when things come in under specific budgets, this can be a better way to build certain classes of UI. I'm using elk.zone to post this, and it's wonderful. The YT PWA is pretty great. But these are exceptions that prove the rule: the set of teams that can be trusted to do a good job with an SPA stack could deliver a great experience without it.

@slightlyoff @sil I've worked several times with those teams who picked the wrong tech. They did so because they're cargo culting and would fare even worse with vanilla JS or an MPA.

The issue is many of them don't have deep web fundamentals. Not because of frameworks, but because the web is notoriously vast and constantly churning. I happen to like the churn but many find it exhausting and I don't blame them for that.

At the very least, frameworks are well-documented, mostly static targets that let developers focus more on solving problems rather than fighting their browsers. Frameworks solve a people problem more than a technical one.

@soviut @sil In my (now approaching a decade of) consulting around this, I depart from your analysis because the messes low-expertise teams make with vanilla/MPA are clearer and involve less JS. That's a structural advantage in the face of growing device inequality.

@slightlyoff @sil Not only have I seen far worse structural messes in MPAs, but I've seen ones that are terrifyingly insecure as well.

The blast radius of a mistake on the server side is so much worse. You see it now, as server components and mixed client/server state accidentally leak because a convention got messed up.

These teams will exist no matter what and their impact will be worse the closer to the metal you put them.

@soviut @sil Again, from my experience across more than 100 consulting engagements, this is not correct. Closer to the metal creates limiting factors. We saw this clearly in the evidence from SNAP sites:

https://infrequently.org/2024/08/object-lesson/

Reckoning: Part 2 — Object Lesson

SNAP benefits sites for more than 20% of Americans are unusably slow. All of them would be significantly faster if states abandoned client-side-rendering, and along with it, the legacy JavaScript frameworks (React, Angular, etc.) built to enable the SPA model.

Alex Russell

@slightlyoff @sil If being closer to the metal created limiting factors, then PHP sites would be a lot less prone to issues. Instead, it took a framework like Laravel to remove a lot of the PHP "foot guns".

I'm not disputing that showing a loading spinner on a benefits site is ridiculous. If you need one before you can even interact then you've already failed.

Building performant frontends isn't a hard thing to do, just something you have to be aware of. In my experience, this mostly boils down to saying no to "grid" components and no to charts libraries, haha.

@soviut @sil PHP sites *are* less prone to these excesses. You can make messes there, sure, but they *tend* to end up in a lane that is less catastrophic because they are not failing to scale due to client-side resources and link-level constraints. There are counterexamples, but most of the last generation (as I note in that piece) are better than a lot of what current-era frameworkist nonsense produces on the regular. This is a numerate question, and current practices are *measurably* worse.

@slightlyoff @sil Less catastrophic? Every major data breach for nearly a decade was WordPress sites getting hacked. Even popular open-source maintainers couldn't secure their PHP.

All I'm saying is: call out the actual culprit, which in this case is React. Vue and Svelte are both more efficient and better citizens in the space.

@soviut @sil I'm saying "you should do more on the server, less on the client" and you're responding with "PHP bad!", and it *feels* like you're not engaging in good faith.

@slightlyoff @sil What I'm saying is the stakes are even higher in the backend. Those same inexperienced developers who made a bulky frontend are now able to create real security vulnerabilities.

This isn't me speculating; I've encountered it several times. I had to pause a project because I discovered over 5000 SQL injection vulnerabilities in their API. The only reason they hadn't been utterly pwned is that their host had some rudimentary injection-prevention middleware.

Had they used a framework, they wouldn't have been able to produce those vulnerabilities (at least not as easily as they did raw-dogging it).

To summarize, servers are a luxury and should be treated as such. If your team is struggling to ride a bicycle, don't give them a motorcycle.
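The class of bug being described can be sketched in a few lines of JavaScript. This is a hypothetical illustration, not code from the project in question, and no real database driver is involved; it only shows the query text an attacker can influence, and why placeholder-based queries (the approach frameworks and query builders enforce) close the hole.

```javascript
// Unsafe: user input is spliced directly into the SQL string, so
// input containing quotes becomes part of the query's syntax.
function unsafeQuery(username) {
  return `SELECT * FROM users WHERE name = '${username}'`;
}

// Safe: a placeholder plus a separate parameter list; the driver
// passes the value as data, never as SQL text.
function safeQuery(username) {
  return { text: "SELECT * FROM users WHERE name = ?", params: [username] };
}

const evil = "x' OR '1'='1";
console.log(unsafeQuery(evil));
// → SELECT * FROM users WHERE name = 'x' OR '1'='1'
//   (the WHERE clause is now always true)
console.log(safeQuery(evil).text); // placeholder text is unchanged
```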

@soviut @sil This is presenting a completely false choice, and I think you know it.

@slightlyoff @sil I don't believe that at all. Unskilled developers in the frontend are significantly less dangerous than when they're granted access to the backend.

Just look at all the accidental leaks that have happened in Next when those same devs have direct access to the database and secret keys with server actions and such.

That doesn't mean solutions like SSR aren't viable. You get much better time to first contentful paint, avoid loading spinners and don't have to heavily modify your frontend app to accommodate it.

Point being, yes, you can optimize at the server, but it comes with risks that pure frontends don't.

@soviut @sil Then let me say as a former webappsec professional and frequent collaborator with contemporary teams across the spectrum, you are not correct.

@slightlyoff @sil I don't think the "practice" cuts against it. I think the only thing it cuts against is engineering philosophy and an academic mindset.

I believe it's important to be measuring this stuff, but I also treat performance as a feature, and features get cut all the time.

For example, the latest Pokémon games on Switch have terrible performance issues, but they sold 5.8 million copies in their first week. The games were compelling enough that anyone who played them didn't care. The only ones loudly complaining were people who didn't really play them; the video-game academics, so to speak.

My point is that teams are optimizing for their users and sometimes that means performance takes a hit.

@sil @slightlyoff Back in the olden days, colleagues of mine, working on a performance problem in pre-web technology, convinced our customers the problem had gone away just by making the first screen load faster. The users still couldn't do anything except think about what they were going to do next, but it seemed they waited until they could see something on screen before they started thinking.

Slow web performance probably causes fewer clicks.

@sil @slightlyoff People don't realize that you don't have to choose between fast client-side navigation and lean first loads. I have been using #SvelteKit for a while now and it does the right thing: serve the pre-rendered HTML first, then hydrate the page just the minimum amount needed to enable interactivity. Then, as you hover over a link, it starts downloading the minimum amount of data for the page it points to. By the time the click happens, it's mostly loaded and feels really fast. No JavaScript? No problem: everything is progressively enhanced and your site is also an MPA. You only lose interactivity, which most sites don't have a lot of anyway, and which can be designed around.
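The hover-triggered preloading described in that post can be approximated in a few lines of vanilla JavaScript. This is a rough sketch of the idea, not SvelteKit's actual implementation: a pure helper decides whether a hovered link is worth prefetching (same origin, not fetched before), and a listener warms the HTTP cache so the eventual click feels instant.

```javascript
// Decide whether a hovered link should be prefetched:
// same-origin only, and never the same URL twice.
function shouldPreload(href, currentOrigin, preloaded) {
  let url;
  try {
    url = new URL(href, currentOrigin);
  } catch {
    return false; // malformed href
  }
  return url.origin === currentOrigin && !preloaded.has(url.href);
}

// Browser-only wiring: on hover, fetch the link's destination so the
// later navigation is served from cache. Guarded so the helper above
// stays usable (and testable) outside a browser.
if (typeof document !== "undefined") {
  const preloaded = new Set();
  document.addEventListener("mouseover", (e) => {
    const link = e.target.closest && e.target.closest("a[href]");
    if (link && shouldPreload(link.href, location.origin, preloaded)) {
      preloaded.add(new URL(link.href, location.origin).href);
      fetch(link.href).catch(() => {}); // best effort; ignore failures
    }
  });
}
```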
@beeb @slightlyoff sure thing. That sounds OK to me. But it's not me you wanna be evangelising it to :)
@sil @slightlyoff Does generating HTML with a script run from cron, committing it to a git repository, and pushing it to GitHub to be displayed as a GitHub Page count as web development?
@bunny @slightlyoff absolutely. You're making pages that appear on the web: you're part of helping to build the web, along with the rest of us. You are a web developer. Among other things, no doubt! Welcome to the show that never ends.

I’ve been making SPAs for over a decade, but I’ve always held the opinion that it’s only the right tool for web apps, not websites. I’m glad to finally see some evidence in support of this viewpoint!

↬mastodon.social/@sil/115623874367104698