I wonder which practices that are considered reasonable, or at least common and defensible, today will be considered moribund and obsolete in a decade, and which extreme minority practices of today will be widespread in a decade.

We've seen quite a few practices come and go. A lot of people will say it's all basically fads, e.g., when Alan Kay derisively says "programming is a pop culture", but, IMO, that's pretty obviously untrue and it's easy to name sweeping changes that are massive improvements.

One example of something that became semi-mainstream in a decade and very mainstream in two decades is https://mastodon.social/@danluu/110213144744259862. Another is the idea that you can and should be able to build code basically all the time, have CI, etc.

Brooks, writing in 1995, noted that someone from MS wrote him to say that MS can build once a day. Brooks considers that a positive development, but isn't sure it's worth it. He implies that it's reasonable to merge/build once a week.

People at a startup I worked for that was founded in '95 would've considered it laughable to build daily, let alone weekly (they built what would now be called CI infra to allow constant builds), while the world's premier programming thought leader presented once-a-week builds as reasonable.

There were a number of companies that ignored the thought leaders and instead implemented reasonable practices. These companies had a huge productivity advantage (https://mastodon.social/@danluu/110339234955028325).

Another example is MS vs. Google build practices not too long after Brooks noted his uncertainty that building daily was worth it.

MS improved their build system massively and went from being able to build once a day (on a good day — zero times on a bad day) to doing 8 builds a day: https://danluu.com/microsoft-culture/.

Meanwhile, Google built what you'd consider modern CI infra for a monorepo that let people build at any time, because it would've been considered absurd to only be able to build 8 times a day.

Windows: a software engineering odyssey

Another big one was using higher-level languages. Steve Yegge has talked about how Geoworks wrote things in assembly and how they ended up getting destroyed by Microsoft in part because their productivity was crap compared to MS's (it's very hard to make sense of 15M LOC of assembly).

When I was in college (early '00s), most people had moved past thinking that everyone should write assembly to thinking that everyone who was a serious programmer should write C or C++.

Joel Spolsky, the next big programming thought leader after the Brooks era, has an entire essay about how teaching people Java instead of C allows stupid programmers who couldn't hack it in C to get a degree.

At the time, you also often heard that real programmers didn't write "scripting languages" like perl/php/python/ruby/etc., at least not for serious code.

If you look at successful startups from just after Joel's anti-Java anti-HLL polemic, a lot of them were created on ruby or python.

Back to the original question of what tools/practices will become common in the next decade or two: looking at what did become common, the things I can think of generally weren't really "sold".

No one at Centaur needed to be sold on CI or read about it from some thought leader. Compared to having daily or weekly builds, it was such an obviously huge win that people just built it without needing to be sold, and that goes for the other sweeping changes I can think of as well.

I'm sure there are counterexamples, but when I think about the things that have had legions of people selling them, like TDD, etc., these things haven't taken the world by storm in the same way.

Appealing to elitism seems to be another "success smell". A lot of old practices that got replaced by more effective ones appealed to elitism and lost. And new practices that appealed to elitism also haven't succeeded in the same way that CI, automated testing, etc., have.

An example of this would be the legions of people who claimed that FP or Lisp or both were secret weapons that would make you super productive or make your company successful because smart programmers use them (and, BTW, I like FP).

In the wake of the failure of FP to live up to these lofty promises, people moved the goalposts and claimed that FP was a big advantage, but there are other reasons companies are successful, FP still outperforms, etc.

A lot of the revised claims are basically that the tech side doesn't matter that much, so the huge FP advantage isn't apparent. But we've seen multiple waves where companies that don't adopt better technology get lapped and can only survive via other massive advantages, e.g., non-low-level companies using "scripting languages" vs. C/C++ just after Joel's anti-HLL polemic, or assembly in the Geoworks days.

Contrary to the claims, FP is simply not in the same class of productivity gain.

So, two questions:

1. What other historical things were in this class of productivity improvements that are so large that it was basically impossible for them to not get adopted? I can think of maybe 20 off the top of my head, but they're highly biased towards the kinds of things I work on and I'd be interested in other examples.

2. What's being used today that is in the same class? What's the equivalent of having CI instead of building once a week in 1995 or using ruby in 1999?

@danluu

The rise of fast version control with seamless branches and merging and rebasing clearly changed how software development is approached.

As for programming languages, the modern equivalent of Lisp's REPL – e.g., javascript's console, interactive scripting within the JVM, or interactive sessions in jupyter notebooks or interpreted languages like Python or Julia – is in many ways a form of CI (continuous integration), and one that lets one shape the program with the data loaded, i.e., write the software while interacting with loaded data structures.
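A minimal Python sketch of that shape-the-program-with-data-loaded loop (the records here are made up purely for illustration):

```python
# REPL-driven development: poke at live data first, then write the
# function against what you actually saw. (Hypothetical data.)
records = [
    {"user": "alice", "ms": 120},
    {"user": "bob", "ms": 340},
    {"user": "alice", "ms": 95},
]

# Step 1: explore the loaded data interactively, e.g.:
# >>> {r["user"] for r in records}
# {'alice', 'bob'}

# Step 2: only then write the function, shaped by the data on hand.
def mean_latency(rows, user):
    ms = [r["ms"] for r in rows if r["user"] == user]
    return sum(ms) / len(ms) if ms else None

print(mean_latency(records, "alice"))  # 107.5
```

The point isn't the function itself but the order of operations: the data structure was inspected live before any code committed to its shape.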

And IDEs have become essential to explore and fully utilize libraries, in addition to ensuring basic aspects of code correctness on the fly. Most software development nowadays consists of effectively leveraging existing libraries.

@albertcardona @danluu Fast version control with easy branching is a very alluring proposition for many, but it may well have been a net negative versus monorepo + feature flags. I think the same applies to REPLs and microservices.
@julesjacobs @albertcardona @danluu I love monorepos inside big corporate environments but fast version control is absolutely essential to small open source development work that became the bedrock of the Internet. Without fast version control open source would be stuck in the cathedral model, and there would simply be a lot less open source software.

@danluu Not technology wise so much, but architecturally - the first big mover was MVC, which seems old hat now, but it got people thinking about what happens in a section of code.

SOLID and Hexagonal architecture flowed on from that - I'm a big fan of both, though whether they stay the course seems debatable (there's a LOT of pushback happening at the moment from people who don't want "Java" patterns pushed onto them).

The microservices revolution is currently getting a bit of a "why are we doing that when a monolith is faster to stand up?" moment.
It will be interesting to see how that goes (see also AWS/cloud: people might *finally* use it for what it's really good at, acting as burst capacity, but not as general business capacity).

@danluu 1.) Probably remote work; you have companies insisting that "in-person meetings are more productive, so you have to come in to do work", but they have to push pretty hard to make that happen. Chances are, the next biggest startups won't have a physical office. Myspace 2.0 is going to IPO *literally* in a garage, or something noteworthy like that (probably more of an apartment).
@danluu With 2.), I think accessibility might be the core thing to start with; I know a lot of people do still push for it, but I'm thinking more of the idea of "accessibility features first, not last, as afterthoughts to be grafted onto a project" - we spend so much time in the latter, and it feels like people who plan for it during development might see major advantages (think how long Twitter took to add alt text, and how quickly Mastodon took it up relatively, from user content).
@danluu I've been drinking tonight so may have to revisit your questions at a more coherent time, but my hot take is that it depends on whether the goal is to invent or to engineer, which are IMHO two ends of a spectrum. Roughly, if you're inventing then testing/verification is less valuable, but if you're engineering then testing/verification is everything. Given a long enough product life-cycle (provided it sells enough to make staff payroll), the opportunity to add verification becomes evident. But at the start of the life-cycle, especially with demand to demonstrate viability/merit, writing tests, especially at the unit level, by an experienced practitioner is dubious at best.

@danluu Maybe async getting more mainstream (escaping the embedded/kernel world)?

I see its popularity as a response both to the big differences in speed (or latency) between memory/disk/network, and to the realisation that manually programming multithreaded solutions is quite painful.

I think a lot of now popular modes of concurrency (coroutines/fibers/actors/async/you name it) got a lot more popular than the old: let me fork a process, or have a single sync loop with some threads.
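As a toy illustration of why these modes caught on, here's a Python asyncio sketch where three simulated IO waits overlap instead of running back to back (the 0.1s sleeps stand in for network/disk latency):

```python
import asyncio
import time

async def fetch(name: str) -> str:
    await asyncio.sleep(0.1)  # stands in for a network/disk wait
    return f"{name}: done"

async def main() -> list:
    # All three "requests" run concurrently on one thread,
    # with no manual lock or thread management.
    return await asyncio.gather(*(fetch(n) for n in ("a", "b", "c")))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
print(results)  # ['a: done', 'b: done', 'c: done']
# elapsed is ~0.1s, not ~0.3s, because the waits overlap
```

The single sync loop or fork-a-process versions of this would either serialize the waits or pay per-process overhead for what is purely IO waiting.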

@danluu

> What's being used today that is in the same class?

Nix / Nixpkgs / NixOS.

@ruuda @danluu Yeah, in these inevitably polyglot times, tooling that spans multiple languages/ecosystems is a multiplier, so Nix, LSP, containerisation were bound to happen.

@danluu From a system administration perspective, the move from hand-crafting systems to automating their setup (or as much of it as possible) feels both transformative and so obviously compelling to practitioners that you hardly have to sell the idea.

(The end point of that today is containerization and k8s/etc, but I'm not sure today's endpoint will persist.)

@danluu Large scale static analysis of programs seems to me to be the next obvious evolution of continuous integration ... don't just build the code, also be able to check the code for certain kinds of errors without needing to actually build and run tests.

Tools like this can also generate the meta-data you need to support a lot of higher level tools in the IDE ... code navigation, refactoring, etc
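As a toy example of the kind of check that needs no build or test run, here's a hypothetical mini-analyzer in Python that flags mutable default arguments just by walking the AST (the function names in SOURCE are made up):

```python
import ast

# Toy static check: flag mutable default arguments (a classic Python
# footgun) by inspecting the syntax tree -- the code is never executed.
SOURCE = """
def good(x, items=None):
    pass

def bad(x, items=[]):
    pass
"""

def find_mutable_defaults(source: str) -> list:
    flagged = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            for default in node.args.defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    flagged.append(node.name)
    return flagged

print(find_mutable_defaults(SOURCE))  # ['bad']
```

Real large-scale analyzers do vastly more, but the principle is the same: the finding comes from the program text alone, so it can run continuously on every change.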

Also tracing debuggers like this:

https://pernos.co


@danluu

1: cloud hosting. Considering how long it used to take to get a new server and install all the software and so on. Being able to click a button and have a server in a data center in less than ten minutes was a miracle.

2: somewhat more niche, but tools like dask. Taking a single-node python process and magically making it distributed across a cluster with very little thinking overhead is amazing. The things it's allowed our scientists to do by themselves that would have required a bunch of engineering time before are amazing.

@danluu 1. Although you might not think of it as directly software, this period of time was governed by the commercial realization of Moore’s Law (planning features to intercept future compute performance) and Dennard Scaling, which focused on single-threaded performance. It made high level languages particularly productive, such that capturing more programmer intent through fancy language features was less efficient than stacking interpreters. Dennard gave way to multicore, which prepared GPUs.
@danluu 2. Directly dealing with threads. Python and other scripting languages are importing more FP stuff carefully to deal with this. Rust is trying to take the edge off memory safety in threading scenarios. CUDA gives a different execution model that makes threading implicit, but requires more upfront planning. Go blurs the line between coroutines and threads and multinode communication. Beyond memory safety, how can we be productive on highly networked, heterogeneous & multicore machines?
@danluu This is kind of a nitpick and kind of not but CI *as originally defined* isn't especially widespread. Continuous *builds* are pretty common but Github & PRs effectively killed continuous *integration.* Maybe I'm in a bubble but people I've worked with since Pivotal are generally shocked when I describe working without PRs.

Anyway--

I think a big shift that's happening right now is pairing.

Remote work dropped the barrier to entry to doing it "correctly" (both people have a screen and input devices!) and almost overnight it went from "no one does that in the real world" to literally every shop I join has a bit of experience and uses it at least some of the time.

In language design, built-in tooling has also gotten dramatically better in the last decade, and I'm not sure it's possible anymore to achieve widespread adoption as a new language if you don't have good default answers for builds, dependency management, and testing.

Oh here's another one: Postgres.

Amazon's RDS *launched without it* in 2009, and didn't support it until they'd already launched Oracle and Microsoft SQL Server.

I think that'd be basically unthinkable today.

@nat Oh god I do so love not having to worry about our db at all, because we just used Postgres.
@nat
Interesting! I noticed a lack of pairing in my recent experience (after coming off a highly productive team where we did lots of pairing). "Work by yourself and create PRs" seemed to be the unconsidered default. Would be nice if this were a trend I missed.
@faassen Yeah I mean, I've seen both. But I've been pleasantly surprised that in the last three shops I've worked pairing has been pretty uncontroversial and at least some people were already doing it. There were also folks who were like, nah, not for me, but it wasn't like some teams I've been on where people were strongly anti-pairing.
@nat @danluu I'd love to hear more about the significance of working without PRs. I assume you don't just mean lacking the GitHub mechanism as we know it today, which is better described as a "push request".

@seh @danluu I mean a workflow where developers rarely or never commit to feature branches, and instead work on main, pull, rebase locally, commit to main (or, sometimes, a branch named something like "develop" that gets merged automatically) and push. This sometimes gets called "trunk based development" but I've also seen weird definitions for that term too.

And generally everyone commits at least once a day, and usually more than that.

@seh @danluu Github nudges teams away from this style of working. Its review mechanisms assume that you're working on feature branches and occasionally merging, and recently it's even started nudging people to configure their projects such that you *can't* push to main without going through a PR flow.
@nat @danluu I recall working in that “trunk-based” model for many years, and I also recall the complete lack of code review ahead of merging. Instead, you’d hear something like, “Oh, hey, I don’t know if anyone else told you, but hold off on merging anything until Joe says we’re clear again. He’s been working on fixing the stuff that you broke yesterday.”

@seh That was not my experience using the model but I believe that it happened.

Is there a particular point you're getting at or a particular bit of information you're trying to get here? Are you asking whether (or why) I think it's bad that Github pushes people away from trunk-based development with the way it uses PRs?

@nat The latter, yes. I'm not here to argue against you; I'd like to hear what you prefer about this other, less typical (today, anyway) way of doing things, and what you see wrong with the GitHub PR-style approach.

@seh So my main complaint about PRs is that working that way tends to feel *super* slow and involves a lot of waiting for review. While I'm waiting there's a temptation to pick up more work, which means context switching, which makes the work go even slower.

I'm also pretty skeptical of review gates in general as a way to do almost anything that folks say that review does (ensure quality, spread context etc.). I tend to prefer "make it easy to change stuff, and change stuff all the time."

@seh I do think that PR-based workflows are probably a more stable equilibrium than trunk-based dev -- you really have to be disciplined about making small changes and rolling back changes that "break the build" immediately for the latter to work. I also know folks who think that trunk-based development *only* works in a pairing context -- I don't agree but pairing and trunk-based dev do work well together.
@seh I also just kinda... don't like asking for permission to merge all the time. Or being asked permission. It feels really status-game-y.
@seh One big confounder here is that I'm comparing, basically, Pivotal teams to non-Pivotal teams and there are a lot of differences between the two that made Pivotal teams feel super fast & productive in comparison.
@nat Thank you for the explanation. The part about code being easy to change is missing from most of the projects in which I’ve participated over the last few years. The conscientious novice sees an opportunity to fix or improve something, first just touching those crucial few lines, and then gets embroiled in a three-week-long slog through the unit tests and integration tests and generated code and documentation and flaky CI workflows and on and on.

@nat Giving up early would probably be the braver, saner response.

We feel obligated to give back to the open-source projects on which we rely. They attract attention and concentrate effort, but then ossify due to concern for safety, governance, and just trying hard to remain stable. It’s no fun to play along.

Developing “internal”-facing software is now looser and more casual than doing it publicly. The quality bar inverted at some point.

@seh The project that most of my experience comes from was open source.

And I definitely don't consider the non-PR workflow I'm describing "looser" or "more casual." Engineering discipline is largely orthogonal IME to whether a team is using PRs and PR review. The trunk-based organization had much stricter discipline about things like commit messages than I've seen on any PR-driven project, for instance. Certainly *much* more disciplined about tests.

@nat I like the framing from Hintjens on “pessimistic” vs. “optimistic” merging: http://hintjens.wikidot.com/blog:106 (Even though I don’t entirely agree with the blog post, and it’s not an exact parallel for corporate software teams as opposed to OSS work, I think it is a useful viewpoint that most people haven’t been exposed to.)

@nat

@danluu

I also mentioned "async individual devs with PRs" as the default as a strange countertrend that seems to harm productivity in many contexts. If it does, that needs a solid explanation, though.

@faassen @danluu I kinda get it because the PR-based workflows are certainly more convenient in many ways.

And if you've worked the other way it feels *so* *slow* but most people haven't, so they don't even really know what they're missing.

@danluu
- Open Source Software
- Language package managers, particularly their contents, with contributions from the entire ecosystem. LaTeX, Perl, and Python all dramatically benefited from this.

@sdbbp

@danluu

Language package managers are a good one! I remember the days of having to argue with people who *managed the Python package index* that using an automated package manager is a good thing.

@danluu what's the "right" way to measure developer productivity?

My gut: the best lagging indicator is FreeCashFlow/Developer, but I haven't thought about it very hard.

@danluu FCF/Dev is not great for some things.

It doesn't measure external benefits. It mixes up whole-company performance with a single functional unit (i.e., sales and marketing also contribute to FCF). What about non-profits and public-sector devs? What about early-stage startups where you're expecting negative cash flows -- surely the devs aren't creating negative value, right?

@danluu

Fully automated deploys.

The more people and the more services you have the more important this becomes since it saves a lot of coordination and communication overhead between teams.

Ideally all devs in the organization can just queue up CI *and* CD directly from GitHub with a single command.

The CD system should support staged deploys, red/green deploys without downtime, and automatic rollbacks. It should also run health checks and give feedback to devs who queued the release.

Bonus points if the system can also run schema and data migrations in the right order in-between service deployments.
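A hedged sketch of that rollout logic (not modeled on any particular CD product; `deploy`, `health_ok`, and `rollback` are hypothetical stand-ins for whatever hooks your platform provides):

```python
# Staged rollout: deploy stage by stage, health-check each stage,
# and automatically roll back everything deployed so far on failure.
STAGES = ["canary", "25%", "100%"]

def run_rollout(deploy, health_ok, rollback) -> str:
    done = []
    for stage in STAGES:
        deploy(stage)
        done.append(stage)
        if not health_ok(stage):
            # Unwind newest-first so earlier stages see a clean state.
            for s in reversed(done):
                rollback(s)
            return f"rolled back at {stage}"
    return "released"

# Toy wiring: the canary passes, the 25% stage fails its health check.
log = []
result = run_rollout(
    deploy=lambda s: log.append(("deploy", s)),
    health_ok=lambda s: s != "25%",
    rollback=lambda s: log.append(("rollback", s)),
)
print(result)  # rolled back at 25%
```

The value is exactly the coordination the post describes: the dev who queued the release gets the outcome as feedback, and no human has to babysit the unwind.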

@danluu It's also worth thinking of stuff that was thought to be an "obvious good" that proved to be so cumbersome that nobody uses it. The higher-level languages we use today are not at all what the 60s and 70s folks envisioned, because that's too constraining. Ada and Lisp remain very niche, and it's not because "kids these days are too dumb" or whatever.

Similarly, people looked at the i432 memory model. It was impractical at the time, but we could implement it today. And yet we don't. Why not?

@danluu NCrunch and Wallaby, where my tests start running before it occurs to me that now is a good time to run tests.

Combine with millisecond-or-faster tests, and the learning cycle is qualitatively different.
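A minimal sketch of the watch-and-rerun idea (real tools like NCrunch and Wallaby hook the editor/compiler rather than polling; the version counter here is a made-up stand-in for "the source changed"):

```python
def watch_and_run(get_version, run_tests, ticks: int) -> int:
    """Re-run the suite whenever the watched source 'version' changes."""
    last = object()  # sentinel, unequal to any real version
    runs = 0
    for _ in range(ticks):
        current = get_version()
        if current != last:
            run_tests()  # with millisecond tests, this is near-free
            runs += 1
            last = current
    return runs

# Toy wiring: the source "changes" twice across five ticks,
# giving an initial run plus one run per change.
versions = iter([1, 1, 2, 2, 3])
ran = watch_and_run(lambda: next(versions), lambda: None, ticks=5)
print(ran)  # 3
```

The qualitative shift is that "run the tests" stops being a decision the programmer makes at all.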

@JayBazuzi

@danluu

Interesting, I haven't experienced such tools before. I wonder whether they exist for Rust with VS Code. Though, paradoxically, while in Rust tests can be ridiculously fast, compilation time eats up a constant factor.

@faassen @danluu I have only seen reliable refactoring tools for C#, Java, and Kotlin. It should be possible in Rust.

I think most authors (and users) of dev tools aren't aware that this kind of reliability is important.

@JayBazuzi

@danluu

Could you elaborate how a reliable refactoring tool would be helpful in combination with a continuous test runner?

@faassen @danluu no, synergy between them is not what's important.

They are both examples of tools that everyone should have and use, but are unfortunately uncommon and not understood (for now).

@JayBazuzi

@danluu

Ah, yes, I was still focused on the instantaneous test runner bit. I have in fact experienced such tools in the JS world I now recall, which is bigger on rebuilding / rerunning when there's a change.

As to refactoring: I use renaming in Rust and Typescript a lot as a simple refactoring tool. Rust also has linter rules in clippy that can apply automated refactorings. I should explore what else is available.

@faassen @danluu Rename seems pretty safe at first - it's just search/replace, right? - unless the new name already exists in an outer scope and you introduce aliasing/shadowing.
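A concrete Python illustration of that shadowing hazard (made-up names; a scope-aware rename tool would refuse this rename or flag the capture):

```python
# Why rename isn't "just search/replace": textually renaming `total`
# to `count` silently captures a pre-existing outer-scope name.
count = 10  # name that already exists in the outer scope

def before():
    total = 0             # the rename target
    return total + count  # reads the *outer* count -> 0 + 10

def after_textual_rename():
    count = 0             # naive search/replace of `total`
    return count + count  # now shadows the outer count -> 0 + 0

print(before(), after_textual_rename())  # 10 0
```

The behavior changed with no syntax error anywhere, which is exactly why scope-aware rename is the refactoring tool feature worth insisting on.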

@JayBazuzi

With those languages I got safe multi-module rename; it's good to have and I use it a lot when I think of a better name.

@faassen @danluu I don't know my way around Rust, but in C++:

If classes are decoupled, then each file of tests can include only a few files, making compile and link into a standalone executable pretty snappy.

This level of decoupling is very rare.

@JayBazuzi

@danluu

Ah, okay. Rust analyzer can actually recompile parts of code very quickly in response to changes but I don't think the result is usable to actually run the code (or tests). That would be an interesting avenue to explore.

@danluu refactoring tools that work correctly even if we don't have tests.

@danluu Remote dev. Hooking up VSCode to edit on a server. REPLs in browsers. That sort of thing.

"Nah, you don't need to clone the repo. No Docker. Just go to this URL and start typing -- the editor, build, linter, and type checker are all set up for you already." Huge win for onboarding new devs.