In an age where gen AI makes it much cheaper to produce code, the ability to read, comprehend, and review code becomes that much more crucial.

Until and unless businesses and executives recognize this, they won’t actually be able to realize the economic upside of AI because they’ll be too busy creating so many fires which need putting out.
https://cute.is/@keith/112191562540684140

keith kurson (@[email protected])

@[email protected] i just watched out a LLM spit out code that would have brought down a webserver w/ an infinite loop, very nervous for when these c-levels start experimenting with launching insecure features built entirely by llms 🥴


Btw, the insight that reviewing code will become more important than producing code was introduced to me via this paper:

“Taking Flight with Copilot: Early insights and opportunities of AI-powered pair-programming tools” (ACM Queue, Vol. 20, No. 6)

https://dl.acm.org/doi/10.1145/3582083

Over the next five years, AI-powered tools likely will be helping developers in many diverse tasks. For example, such models may be used to improve code review, directing reviewers to parts of a change where review is most needed or even directly ...

My assertion that grappling with the increasing importance of code review is literally a business imperative came to me while reading @grimalkina @KFosterMarks and @CSLee’s excellent work on AI skill threat:

https://www.pluralsight.com/resource-center/guides/new-developer-research-paper

Research Paper | The New Developer

Based on the latest research from the Developer Success Lab, this white paper shares a human-centered, evidence-based framework to help developers thrive during this transition to AI-Assisted coding.

So, why are so many executives and investors overlooking this very basic reality that code produced is code which must be maintained, and that an acceleration of code produced thanks to gen AI means ~more risk~, not just more $$$?

I have friends who are Principal Engineers asking themselves this very question right now.

And in response, I point us to the classic Upton Sinclair quote:
“It is difficult to get a man to understand something, when his salary depends on his not understanding it.”

@anthrocypher I've got this in an upcoming publication:

"Every line of code written today represents a testing, complexity, maintenance and refactoring burden your team will bear tomorrow, and the reality is that none of our customers want code. Our customers want _utility and functionality;_ code is a liability we accept so we can deliver that functionality. GenAI or not, nobody wants or needs an arbitrary quantity of code for its own sake."

Paperclip Maximizer


WebSeitz
@mhoye @anthrocypher Code has no inherent value. Text has no inherent value. Digital paintings have no inherent value. Maybe some of us thought they did, once. An avalanche of form without substance is one hell of a way to correct us.
@mhoye @anthrocypher Don't tell me! I use Enterprise Design Thinking, so I do user research and then design the user experience and UI and work with the team to deliver that user experience in constant conversations with our Sponsor Users. Code is just one means to get there. (Documentation is another)
@mhoye @anthrocypher what I'm implying but should have made explicit is that there's a goal (often a user experience), and optimizing for that carefully and iteratively does quite a natural and effective job of delivering only what's needed to achieve it. Beyond simplifying what's required to code, how unnecessary complexity is expunged from the code itself is down to the coder and not my domain.
@mhoye @anthrocypher Very much this! I keep telling my students that no-one wants to use the software we build. People may still use it, but it is not what they want.
@mhoye @anthrocypher Exactly. My tag line for years for other developers, management, product owners, etc. has been “all code is a liability” for all the reasons you mentioned, as well as expansion of security footprint/risk.

@mhoye @anthrocypher

I agree with your statement. But customers aren't the only stakeholders.

What do shareholders want?

In the case of most Tech firms, they want unfettered, unlimited revenue growth for future rent collection. These firms aren't paying dividends, so you need that share price to go up and you need more features to capture that revenue.

So how do you balance customer needs with shareholder needs? (Even if the latter may be pathological?)

@anthrocypher My philosophy has always been "code is bad and most people should write less of it" for exactly the reason that you have to maintain it.
@anthrocypher
When my colleague said he was using gen-ai to write his code and then he reviewed it, that was an immediate NFT for me (no f*ing thanks). Writing code is fun. Reviewing code is hard. I just don't get the appeal.

@unikitty
YES. There was a presentation at work which said that in future developers would be only reviewing code, this was the first time I seriously thought about changing career. Code review is necessary and important, but all day every day? Fuck me.

@anthrocypher

@econads @anthrocypher
I do all my code reviews on the NFT blockchain in my full-self-driving car. It's the inevitable future!
@unikitty
Otoh I hit 40 and don't understand the slang any more so maybe it's just better I let the world get on with it. I need to get a lawn so I can shout at kids. It's the future I tell you.
@anthrocypher
@econads @unikitty @anthrocypher I saw a post on here where some idiotic AI techbro said in earnest “fixing AI generated code is going to be the biggest part of our profession” and it took all my restraint not to tell them precisely what they could do with such a so-called “profession”.
@crowbriarhexe @unikitty @anthrocypher
Sounds more like a LinkedIn post, maybe he was lost..
@crowbriarhexe @econads @unikitty @anthrocypher
“fixing AI generated code is going to be the biggest part of our profession” is also said, with some horror, by clear-thinking engineers
@anthrocypher @tinker because companies and their executives generally don't bear the costs of their security failures.
@anthrocypher A dirty secret in professional sw dev is that writing obfuscated code that only one person can comprehend often leads to increased job security for them. Maybe it's less of a problem in OSS, but just because the source code is shared doesn't mean it's not riddled with self-aggrandizing cleverness. Which is to say you have a great point about maintaining AI-generated code, which has a similar connection between understandability/clarity and maintainability.
@gregdavis @anthrocypher
This is why we have code reviews, no? To make sure it's readable at least. OK, not everyone does code reviews or does them well, even these days.. But they should.
@gregdavis @anthrocypher The other dirty little secret is that these devs are still expendable to the company.
@anthrocypher @randomgeek Nobody gets that when it comes to transportation infrastructure. They certainly won’t for bits.
@anthrocypher OTOH an oft-overlooked (by engineers) thing is that time to market matters, a lot. and with GenAI you can frequently get to market with a single dev, not having to bring in specialized talent, or hire/scale a team. That can be a truly massive advantage that’s a lot easier to quantify than the cost of code existing

@kellogh absolutely true! Though w/o a well-developed approach for intentionally choosing velocity over maintainability, or vice versa, there’s real risk of what I outlined at the top

https://hachyderm.io/@anthrocypher/112191630639655941

Ana Hevesi (@[email protected])


@anthrocypher as i see it, there’s a slider between fast delivery and maintainability, and i think a lot more execs would prefer the slider be generally closer to fast delivery than a lot of engineers are comfortable with

one big reason is that deleting whole projects is actually very cheap and easy. It’s a hella lot harder to maintain than to delete. With GenAI, you can try out ideas a lot faster, and with that, also delete them faster. Easier to focus on value over longevity

@anthrocypher a nerdy take on this — in GC (garbage collection), short lived objects are cheaper than long lived objects because the longer an object lives, the more GCs (maintenance cycles) it needs to undergo.

The analogy between objects & projects is pretty strong, the complexity to delete is just about the same.

I wish more engineers understood the power of short-lived projects. It’s almost pure focus on customer value with almost none of the overhead
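The GC analogy above can be made concrete in CPython: a short-lived object is reclaimed the instant its last reference disappears and is never looked at again, while a long-lived object stays on the collector's books and gets re-examined on every full collection it survives. A minimal sketch (the `Node` class is illustrative, not from any real project):

```python
import gc
import weakref

class Node:
    """Stand-in for an object (or, by analogy, a project) with some lifetime."""
    pass

# Short-lived: the moment the last reference goes away, CPython's
# reference counting reclaims it. Zero ongoing maintenance cost.
short_lived = Node()
probe = weakref.ref(short_lived)
del short_lived
print(probe() is None)  # True: already gone, never scanned again

# Long-lived: stays reachable, so every full collection re-examines it.
long_lived = Node()
for cycle in range(3):
    gc.collect()  # each pass is another "maintenance cycle" it must survive
print(long_lived is not None)  # True: still alive, still being re-scanned
```

The parallel to projects: a prototype deleted after a week never enters a maintenance cycle at all, while anything that survives keeps being re-scanned (reviewed, patched, upgraded) for as long as it lives.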

@kellogh @anthrocypher

> deleting whole projects is actually very cheap and easy

[citation needed]

IME being able to actually completely "delete" a project is pretty damn rare, and it's almost always driven by engineers concerned with maintenance, security, etc. The business may say "we're done with PROJECT" but until the last dependency has been moved off of it and the code has been removed, that statement is somewhere between a wish and a lie.

@anthrocypher

Which investors and executives have you met, who are interested in building sustainability?

Few think further than 5 or 10 years into the future. They are usually in the business of generating short term returns before the next crash.

Risk is an acceptable part of this equation. That its mitigation is unknown is a problem for someone else to solve, spending their late nights and weekends on it in a year or two.

@anthrocypher also, there's a crap-ton of magical thinking involved. It's terrifyingly clear that a lot of people, especially but not exclusively at decision levels, have a very specific thing in mind when speaking of AI, namely whatever thing that would do what they imagine, with no downsides.

For this topic, that means AI writing perfect code that doesn't need reviewing, or AI capable of reviewing any code perfectly.

It's basically thinkism in yet another wrapper.

@anthrocypher Because that's not how most company assets work.

They buy cars, machines, office equipment, computers, and a key part of this operating logic is that as long as nothing extraordinary happens to them, like a fire or a major accident, they remain static. Nothing about the use pattern or environment changes, so they continue to work.

And they never consider the possibility that software isn't like that.

An oil change for a car is very much unlike a security patch in terms of scope.

@bmaxv exactly this! The range of possibility for software is so much greater than we account for, the opportunity for it to unspool unexpected permutations of branching realities so vast, but we treat it like a car or a printer, because it’s easier to fit that in spreadsheets
@anthrocypher It has been my experience that gen AI is particularly good at generating two types of code: a) boring boilerplate code that devs don't like writing, and so try to abstract into complex DRY machinery that ends up hard to debug and maintain; b) unit-testing code that is in many cases boilerplate but with slight variations.
The question is, how do we teach devs to use gen AI effectively as a coding assistant without creating more risk?
@anthrocypher About a year ago a very senior person at a company that I know a lot about told me that the new gen AI coding tools were going to mean that the SDE job a year later (i.e. now) would be "totally different." I tried to explain to him (as somebody who worked with SDEs every day) that even if these tools made coding 100x faster, it wouldn't move the needle that much on productivity because literally typing code is not where SDEs spend a majority of their time. I encouraged him to make sure he's talking to people who actually do the job and not just other very senior managers. He had no interest in doing that - totally convinced himself that things were about to have this quantum change in productivity because of gen AI coders.

@anthrocypher

I think it's a little more subtle than the quote.

Most Tech firms do not provide dividends. Meta and Alphabet only recently started this.

This inherently means that most Tech firms' job is to grow the business rather than maintain it. In scenarios like this, adding risk and growing the product is literally the corporate design.

If there's a misunderstanding here it's with the principal engineers.

@anthrocypher I love generated code.

But I don't mean AI, I mean code from a code generation tool.

Code generated that way is consistent and errors found can be fixed consistently across the output.

If Gen AI makes, in hindsight, the same error across the codebase what do you do then?
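A quick sketch of why deterministic generation has the property described above: every output flows from one template, so a bug found in the output is fixed once, in the template, and regeneration repairs every instance. LLM output has no such single source, so the same mistake may be phrased differently at each site and must be hunted down one by one. (Toy template code, illustrative only; no real generator tool implied.)

```python
# One shared template: fix it here, and regenerating fixes every output.
ACCESSOR_TEMPLATE = "def get_{name}(self):\n    return self._{name}\n"

def generate_accessors(fields):
    """Emit one accessor method per field, all from the shared template."""
    return "\n".join(ACCESSOR_TEMPLATE.format(name=f) for f in fields)

print(generate_accessors(["width", "height"]))
```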

@anthrocypher exactly this! I had a thought for a blog post recently: why do we keep producing ever more code with AI, when all code increases the cost to maintain it?
We should rather be using AI to reduce code and reduce complexity, not to increase it.
@anthrocypher Often ignored: compression of code’s half-life. What was once 10-15 years of visible horizon for code is now 2-5 years of utility, before the next fad or business event. Raise money quickly, fail fast and often, move on.
@anthrocypher Short term Jack Welch management. Investors love good quarterly results so number go up. When the bill comes due, the investors just break open the company like a piggy bank and take their pennies elsewhere.

@anthrocypher one hope/goal is that AI can make it easier to understand, refactor and maintain code.

Think OpenRewrite without the hardcoded rules.

@anthrocypher There's already a rising interest in designing systems in ways so that if AI gives them stupid commands, they could be turned back with only limited damage.

A CEO's mind is such a weird place.