Zooming out, the infra trend I see is gaining flexibility and optionality by decoupling constraints. This lets us efficiently navigate the solution space.

Storage / Compute separation.
Infrastructure via API.
Serverless.
Service meshes.
Workload orchestration.

Innovation by challenging assumptions

https://hachyderm.io/@hazelweakly/113601483324077529

Hazel Weakly (@[email protected])

I'd love to see the infrastructure industry move towards a future where the concept of a runtime, its interface, its associated isolation level, and the packaging are all disaggregated. Containers != ephemeral. VMs != stateful. Containers != CRI.


On the application side, decoupling hasn't happened. Not in the same way, imo.

Micro-frontends, island architecture, cell-based architecture, microservices, event sourcing.

I can't call them innovative in the same way that IaC or k8s were. Excellent? Absolutely. But did we gain optionality? Sorta?

The reason I'm bullish on further decoupling application packaging, runtime, isolation, execution, and other components is that it's the first step towards building infra that enables true decoupling for software.

We've been building infra for infra, but now we need to build infra for everyone.

Here's my wishlist of infrastructure enablement for applications:

1/
Write once, compile to: IPC, RPC, FFI, native libraries.

I'm tired of writing the same "reusable" code in 20 different ways. If it's reusable, why can't I reuse it?

How many times do we have to re-implement config parsing? C'mon
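Point 1 can be sketched in miniature. This is a hypothetical illustration, not a real tool: `parse_config` is the "write once" logic, and `rpc_call` is an invented JSON-envelope wrapper standing in for the IPC/RPC boundaries that today get handwritten per transport.

```python
import json

# Hypothetical sketch of "write once, call across boundaries".
# `parse_config` is the reusable logic; everything else is invented
# scaffolding standing in for generated FFI/IPC/RPC bindings.

def parse_config(text: str) -> dict:
    """A trivial key=value config parser: the logic we keep rewriting."""
    return dict(line.split("=", 1) for line in text.splitlines() if "=" in line)

# Boundary 1: plain in-process call (what a native library / FFI wraps).
direct = parse_config("host=localhost\nport=8080")

# Boundary 2: the same function behind a JSON envelope, standing in for
# an IPC or RPC transport. In the wished-for world this wrapper would be
# compiled from the same definition, not handwritten.
def rpc_call(payload: str) -> str:
    req = json.loads(payload)
    return json.dumps({"result": parse_config(req["params"]["text"])})

remote = json.loads(rpc_call(json.dumps(
    {"method": "parse_config", "params": {"text": "host=localhost\nport=8080"}}
)))["result"]

assert direct == remote  # same logic, two call boundaries
```

The point being: only `parse_config` should ever need writing by hand; everything after it is the 20-ways-of-reuse tax.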

2/
State propagation via: Event sourcing, DBMS, CDC, in memory, ...

Why does choosing an option change the entire architecture of the system, and involve rewriting your codebase?

ReactJS is 95% event sourcing, but somehow redux -> event sourcing is a full rewrite? Would love to see progress here.
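A minimal sketch of the redux-as-event-sourcing point, with invented names (`make_store`, `counter`): application code touches only the reducer and `dispatch`, so in principle swapping the in-memory log for a durable event store shouldn't require a rewrite.

```python
# Hypothetical sketch: a redux-style store IS an event-sourced system.
# The reducer is the fold; the log is the event stream. All names here
# are illustrative.

def counter(state, event):
    # Pure reducer: derives the next state from the current state + event.
    if event["type"] == "inc":
        return state + event["by"]
    return state

def make_store(reducer, initial, log):
    state = {"value": initial}
    def dispatch(event):
        log.append(event)                                # persist the event
        state["value"] = reducer(state["value"], event)  # update derived state
    return dispatch, state

log = []  # swap for Kafka / CDC / a DB table without touching `counter`
dispatch, state = make_store(counter, 0, log)
dispatch({"type": "inc", "by": 2})
dispatch({"type": "inc", "by": 3})

# Replaying the log from scratch rebuilds the same state: the property
# that event sourcing, redux, and CDC consumers all share.
replayed = 0
for event in log:
    replayed = counter(replayed, event)
assert replayed == state["value"]
```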

3/
Deployment model != codebase: Monolith, microservices, serverless, edge, ...

I should be able to pick those at runtime regardless of the codebase structure.

Better yet, why can't my infrastructure do that? Horizontal autoscaling? Nah, I want dynamic per-endpoint scaling.
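One way to picture point 3, with hypothetical names (`endpoint`, `deploy`): handlers are written once, and the deployment topology is a decision layered on top at deploy time rather than baked into the codebase.

```python
# Hypothetical sketch: the codebase defines handlers; the deployment
# model (monolith vs. per-endpoint units) is chosen afterwards.
HANDLERS = {}

def endpoint(path):
    def register(fn):
        HANDLERS[path] = fn
        return fn
    return register

@endpoint("/orders")
def list_orders(req):
    return {"orders": []}

@endpoint("/health")
def health(req):
    return {"ok": True}

def deploy(mode):
    # Returns a deployment plan: unit name -> routes it serves.
    if mode == "monolith":
        return {"app": sorted(HANDLERS)}              # one unit, all routes
    if mode == "per-endpoint":
        return {path: [path] for path in HANDLERS}    # FaaS/edge-style split
    raise ValueError(mode)
```

A real version would hand these plans to k8s, a FaaS platform, etc.; dynamic per-endpoint scaling falls out of the per-endpoint plan for free.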

4/
Modularity that doesn't need to be built in.

Feature flags, versioned APIs, ifdef: it's all solving the same problem that we should be able to enable seamlessly in infrastructure.

I want to patch a 5-year-old software release by checking out main, patching THAT, and not breaking backwards compat.

4 (part 2)/
That applies to specializing software as well.

We have content addressed version control, a million package versioning schemes, bi-temporal databases, service meshes with A/B routing, MVCC for DBs, and yet we can't figure out how to make a codebase maintainable if it supports multiple feature sets?
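A toy sketch of what "modularity without ifdef" might look like, with invented names (`versioned`, `call`): behavior is addressed by (name, version), so old and new feature sets live side by side on main instead of in long-lived branches.

```python
# Hypothetical sketch: version-addressed behavior instead of
# ifdef / branch-per-release. All names are illustrative.
REGISTRY = {}

def versioned(name, version):
    def register(fn):
        REGISTRY[(name, version)] = fn
        return fn
    return register

@versioned("discount", 1)
def discount_v1(price):
    return price * 0.9          # the old behavior, still living on main

@versioned("discount", 2)
def discount_v2(price):
    return price * 0.85 if price > 100 else price * 0.9

def call(name, version, *args):
    # Old callers pin version=1 and keep working; patching v1 is a normal
    # commit to main, not a fork of a release branch.
    return REGISTRY[(name, version)](*args)
```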

There's more, but these all have a trend:

It's "possible"
-> But, playing nice with infra is a PITA
=> Your infra + app + architecture ends up coupled and codebase specific

In other words: apps are building what they need out of duct tape and glue because the infra isn't providing what they want.

Infra used to be like that too! We used to build pipelines out of glue and cronjobs, we used to deploy with rsync and crossed fingers, we autoscaled with bash scripts and rm -rf.

Nothing wrong with doing it that way, but there's a reason we switched to k8s and IaC.

Apps need the same evolution

@hazelweakly I think one core insight is analogous to the bitter lesson, that progress mainly emerges from the underpinning economics, rather than cleverness.

Infrastructure As Code is viable when infrastructure becomes cheap enough to treat as malleably and disposably as we treat code. So, what has to become cheap enough and malleable enough in this space? I think microservices look something like a failed attempt at what you're asking for here, generating complexity instead of reducing it.

@mhoye @hazelweakly +1 on progress emerging from economics.

A good chunk of the microservices mess stems from focusing on technical solutions while mostly ignoring social ones. The opportunity to leverage APIs to divide up the work and share in its benefits remains mostly unrealized.

Sure, APIs have a foot in the technical space, but it is their social potential that interests me.

@hazelweakly Yes! I’ve been quietly making the argument for a while that e.g. k8s vs lambda were the wrong abstraction layer for product engineers. Folks building things should be able to focus primarily on business logic and not have to make decisions based on whether that logic runs in a FaaS, k8s, or Apache. I can totally see expanding that further to where even CLIs and desktop apps are valid targets.
@hazelweakly
QA is always a big stumbling block. With modern TDD and CI/CD pipelines this is much easier to do, and it helps ensure things don't break badly.

@hazelweakly I have a kubecon talk for you

Spoiler it is my kubecon talk

https://www.youtube.com/watch?v=QcYsGytNBe8

What if Kubernetes Was a Compiler Target? - David Morrison & Tim Goodwin

@hazelweakly I think the answer to a lot of these is “leaky abstractions”, but none more so than this. But in this case, because people take advantage of it - event sourcing forces you down a hydrate-validate-notify cycle. And if we wrote all of our applications like that, it wouldn’t matter what persistence underlies it. But if you _know_ you’ve got a DBMS, then the temptation to directly mutate a column for performance is enormous. And then that changes every code path that touches it.