Michael Droettboom

@mdboom
76 Followers
149 Following
358 Posts
Software engineering manager at Microsoft, formerly Mozilla, Anaconda, STScI, NIH, JHU. he/him
https://droettboom.com/

I am now being required by my day job to use an AI assistant to write code. I have also been informed that my usage of AI assistants will be monitored and decisions about my career will be based on those metrics.

I gave it an honest shot today, using it as responsibly as I know how: only using it for stuff I already know how to do, so that I can easily verify its output. That part went ok, though I found it much harder to context switch between thinking about code structure and trying to herd a bullshit generator into writing correct code.

One thing I didn't expect, though, is how fucking disruptive its suggestion feature would be. It's like trying to compose a symphony while someone is relentlessly playing a kazoo in your ear. It flustered me really quickly, to the point where I wasn't able to figure out how to turn that "feature" off. I'm noticing physical symptoms of an anxiety attack as a result.

I stopped work early when I noticed I was completely spent. I don't know if I wrote more code today than I would have normally. I don't think I wrote better code, as the vigilance required is extremely hard for my particular brand of neurospicy to maintain.

As far as the "write this function for me" aspect, I've noticed that I tend to use the mental downtime of typing out a function I've designed to let my brain percolate on the solution and internalize it so I have it in my working memory. This doesn't happen when I'm simply reviewing code written by something else. Reviewing code and writing it are completely separate activities for me. But there's nothing to keep my fingers and thoughts busy while I'm coming up with what to write next.

I didn't think we were meant to live like this.

My village only posts important information on Facebook. I don't have a Facebook account.

Many US government offices and departments, especially at the local level, only post official messages on proprietary social media platforms like X (formerly Twitter) and, especially, Facebook. That is the only place they post this information; it doesn't even appear on their own websites.

In times of emergencies or critical events, this means people not on those platforms miss out.

We need a strong law in the US that says if a government office, whether federal, state, or local, posts information to the public primarily online, it must do so in an open fashion. Alternatives could include adding an RSS feed, cross-posting to the office's own website, etc.

Locking citizens out because they choose not to use a platform is bad.

I know #python is used for many amazing things, but Robert Erdmann's keynote at #pyconcolombia2025 was mind-blowing.
I will never admire "The Night Watch" the same way again...

📣 We’re so lucky to have Dr. Malvika Sharan as a keynote at #SciPy2025! 🎤✨ Co-Director of Open Life Science, Malvika empowers inclusive, open science communities through initiatives like The Turing Way.

🔥 She’s also one of the 2024 100 Brilliant Women in AI Ethics!

Don't miss her talk 🙌

https://youtu.be/0PBaKyCe5uw

Dr. Malvika Sharan - SciPy 2025

Hey @ThePSF, this sucks and does not reflect the values of the Python community. We can't stop blockchain shitheads from using Python, but we can definitely avoid featuring them as paragons of our community. Delete this and apologize.
https://fosstodon.org/@ThePSF/114659276227987491
Python Software Foundation (@[email protected])

Lofty is using #Python on Algorand to power instant redemptions for tokenized real estate. Learn how smart contracts written in Python are helping make real estate more liquid and accessible in this Success Story from Algorand Foundation! https://www.python.org/success-stories/using-python-to-build-a-solution-for-instant-tokenized-real-estate-redemptions/

Remember Aaron♥️

Fuck #Meta 

The "faster CPython" team at Microsoft was recently dissolved. My position at Microsoft was eliminated. If anyone has a reason to be disappointed in this, it's me.

So it feels okay for me to ask you to refocus any disappointment you might have about Microsoft's decision. Let's be grateful for a big Python user actually laying down meaningful cash for 4 years in support of a volunteer-driven community project! Let's express our frustration instead in how that doesn't happen more often!

Looking forward to a world where nobody has to petition for the right to exist in dignity and peace or argue that they should participate in society on an equal footing despite not fitting into a slot in a cog that might once have turned the gears in some antiquated machine.

I’m naive enough to still believe we can get there.

So I have a Take:

Imagine if LLM coding assistants had come out when programming required explicit manual memory management.

Everyone is writing all this C code with malloc() and free(). It's a pain, and repetitive, and why spend time thinking about this?

So all the early adopters of LLMs are saying "this is amazing, I don't have to write all this boilerplate malloc() and free() and multiplication of pointer sizes by array length, it auto-generates that for me. This Is The Future! You Will All Be Left Behind if you don't adopt this!"

(I am actually skeptical this is something LLMs would do reliably, but let's just pretend they can.)

And maybe that approach would actually win, and no one would have created garbage-collected (or equivalent) languages, because that's silly, you have a LLM to generate that code for you.

Never mind that garbage collection is vastly superior to LLM-generated malloc():

* The code is _way_ shorter, which makes it easier to reason about the parts you actually care about (the logic)
* You don't have to worry about the LLM generating garbage one time out of N
* Fewer segfaults, memory corruption, etc.
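To make the thought experiment concrete, here is a sketch of the kind of boilerplate the post is talking about (the names `copy_names` and `free_names` are invented for illustration, not from any real codebase): deep-copying an array of strings with manual memory management.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical example: deep-copy an array of C strings.
 * Every step needs size arithmetic, a NULL check, and a matching
 * cleanup on the failure path. */
char **copy_names(const char **src, size_t n) {
    char **dst = malloc(n * sizeof(char *));   /* pointer-size arithmetic */
    if (dst == NULL)
        return NULL;
    for (size_t i = 0; i < n; i++) {
        size_t len = strlen(src[i]) + 1;
        dst[i] = malloc(len);                  /* another allocation... */
        if (dst[i] == NULL) {                  /* ...another failure path */
            while (i > 0)
                free(dst[--i]);
            free(dst);
            return NULL;
        }
        memcpy(dst[i], src[i], len);
    }
    return dst;
}

/* And the caller still has to remember to call this, exactly once. */
void free_names(char **names, size_t n) {
    for (size_t i = 0; i < n; i++)
        free(names[i]);
    free(names);
}
```

In a garbage-collected language this whole thing collapses to roughly `dst = list(src)`: just the logic you care about, with none of the failure paths above.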

Back to our actual present: a lot of what I hear people saying about LLMs is "look, I don't have to write as much boilerplate or repetitive code" and I'm sorry but that's not a benefit, that's just doubling down on a liability. All things being equal (if it's just as understandable, just as fast, etc.), you want to solve a problem with the fewest lines of code possible.

If you have to write the same thing over and over again, that is a failure in your tooling, and you should build better tooling. A better library, a better abstraction, even a better programming language.
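As a hypothetical sketch of what "better tooling" can mean in C terms (the `IntVec` type and its functions are invented for illustration): fold the repeated grow-reallocate-copy dance into one small, reusable helper, so the boilerplate, and its one failure path, lives in exactly one place instead of at every call site.

```c
#include <stdlib.h>

/* A tiny growable-array abstraction. Callers push values; the
 * resizing logic they would otherwise rewrite everywhere lives here. */
typedef struct {
    int   *items;
    size_t len;
    size_t cap;
} IntVec;

int intvec_push(IntVec *v, int value) {
    if (v->len == v->cap) {
        size_t new_cap = v->cap ? v->cap * 2 : 8;
        int *grown = realloc(v->items, new_cap * sizeof(int));
        if (grown == NULL)
            return -1;          /* the one failure path, in one place */
        v->items = grown;
        v->cap = new_cap;
    }
    v->items[v->len++] = value;
    return 0;
}

void intvec_free(IntVec *v) {
    free(v->items);
    v->items = NULL;
    v->len = v->cap = 0;
}
```

Each call site shrinks to `intvec_push(&v, x)`, and the subtle realloc bugs can only exist, and be fixed, once.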

And to be fair, building better tooling sometimes requires significant investment, and the resources available for R&D are being piled into LLMs instead of into reducing how much code we write.

But sometimes better tooling and libraries are just a matter of _thinking_ about a problem. And can you even think if your boss wants you to hit the deadline, "and AI should make you twice as productive, right?"

But also: why think, when you can double down on a bad status quo and have an LLM poop out some more code for you?