One reason I think that complex #AI software projects are never going to happen is that the code AI generates has no *intent* behind it.

Senior software devs spend an extraordinarily large amount of time reading existing code and asking not just HOW it works, but WHY it was written that way. Reading long-maintained, complex source code is more than mere reading comprehension; it’s LITERARY CRITIQUE. You’re constantly trying to understand the thought process and motivation of whoever wrote that code, in the hopes of gaining insight into their frame of mind.

Well, AI code has no motivation, thought process, or frame of mind behind it. While the code it generates MIGHT work correctly (a big assumption) at the point it was extruded, there is no plausible way to maintain that code, and at some level of complexity (sooner than you think!) maintainability becomes critical.

#softwareEngineering #softwareDevelopment

But WHY do we need to understand the motivation behind a pile of code? Because it reduces the amount of COMPLEXITY we need to hold in our minds. Understanding an original author’s mindset helps define a direction of development that will very likely yield successful results, ones that are harmonious with the existing code.

And as any senior software dev knows, complexity is the greatest enemy of engineering, and anything that helps constrain the beast increases the likelihood of producing error-free progress.

AI is great at producing code of little consequence: things so basic or throwaway that no deep understanding is needed to maintain them. To me, that constrains its practical use to generating SCAFFOLDING and BOILERPLATE upon which your real code is built, essentially making the (fancy and custom) GRID PAPER on which you will actually inscribe your design. Let it write your for-loops, but don’t let it write the functions it calls.
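To make the division of labor concrete, here’s a minimal sketch (all names and rules invented for illustration, not from any real codebase): the plain loop is the kind of scaffolding an assistant can safely generate, while the domain function it calls is the part a human should write and fully understand.

```python
# Hypothetical example. The loop below is AI-safe boilerplate: routine
# iteration with no hidden decisions. compute_risk_score() is the domain
# logic, where the *intent* lives, and should stay human-written.

def compute_risk_score(record: dict) -> float:
    """Human-written domain logic (the scoring rule here is invented)."""
    base = record.get("late_payments", 0) * 0.2
    penalty = 0.5 if record.get("defaulted") else 0.0
    return min(1.0, base + penalty)

def score_all(records: list[dict]) -> list[float]:
    """Scaffolding: the kind of for-loop you can let an assistant write."""
    scores = []
    for record in records:
        scores.append(compute_risk_score(record))
    return scores

records = [{"late_payments": 1}, {"late_payments": 3, "defaulted": True}]
print(score_all(records))  # [0.2, 1.0]
```

The point of the split: if the loop is wrong, anyone can fix it in seconds; if the scoring rule is wrong, only someone who understands WHY it was written that way can fix it safely.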

#ai #softwareEngineering #softwareDevelopment

#AI coding tools can be time-saving automation in the hands of an already-senior developer, but that’s only because they utterly rely on the motivation and depth of experience already found in the HUMAN using them to shape their output into something worthwhile. Like a seasoned musician toying with analog synthesizers and sequencers, finding serendipity in the semi-random stream of patterns, and molding the suitable ones into actual works, requires a human editor. AI itself cannot offer that wisdom.

AI will never be a replacement for expertise. It does not compensate for a lack of skill. It offers nothing to replace an understanding of the fundamental principles of the craft. It will not turn a junior engineer into a senior one.

And it will not create a complex product, at least not one that won’t crumble the moment you have to maintain it.

#softwareEngineering #softwareDevelopment

Complex, AI-generated software projects will never happen (humancode.us)

“Complex software projects made up of mostly AI-generated code aren’t going to happen. And one reason I say that is that the code AI generates has no *intent* behind it.”
@drahardja This brings to mind Peter Naur's 1985 essay "Programming as Theory Building." He uses the word "theory" for the understanding of the problem and how the software solves it, and emphasizes that the theory is not in the source code or documentation.
Losing the imitation game (Jennifer++)

“AI cannot develop software for you, but that’s not going to stop people from trying to make it happen anyway. And that is going to turn all of the easy software development problems into hard problems.”
@drahardja there's the insidious nature of AI though: all of this ultimately doesn't matter as long as people accept the slop made cheaply, quickly, easily. Is the code, art, music, literature, video and analysis by AI as good as the human one? Probably not. Does it matter to the vast majority of people? Also probably not.

@stooovie It doesn’t matter until it matters.

This too is the insidious lie of AI: an inexperienced user simply DOESN’T KNOW what they’re missing, and they get lulled into a false sense of security and solidity…until it comes crashing down, and they have absolutely no plausible way (AI or otherwise) to fix the pile of slop on which they depend.

@drahardja @stooovie this, right here, is the biggest risk of AI slop.

"I had no idea..." You're right, idiot. You had no idea. Why the fuck were you fucking around with things you knew nothing about?

Something we'll see play out time and time again.

@Laird_Dave @drahardja @stooovie I think in the short term devs will get replaced by AI, because management thinks it is a replacement.

A few months or years later, they will re-hire them, because everything will turn to flames and ashes due to AI “quality” and no one actually understanding what is happening.

Are there practical guides to surviving the AI slop era for devs?

@brahms @Laird_Dave @stooovie The best remedy for this and other ills of automation remains #unionization. But I find software engineers particularly resistant to this idea.

Outside that, to me the best defense against AI is to continue honing your craft and producing work outside of your main line of employment (if you are able). Showing that you are a competent and curious craftsperson is a good way to be recruited by someone else who cares about craftsmanship.

@drahardja I can tell these LLM- and LRM-based AI algorithms are still extremely limited, and nothing at all like a so-called “general intelligence,” because they still can’t write a program in languages like Haskell, Lisp, APL, or Forth. Why can humans learn these languages in weeks, while the AI cannot learn them no matter how much energy we burn trying to teach it?

The reason is obviously that the training datasets are made by humans who already know these things. The training data that AI uses is heavily biased toward Python, JavaScript, C, and C++ (the languages used to build these AIs, what a surprise!), and an LLM, being 100% statistical in nature, will never be able to synthesize new ideas about things it has never seen in its training data.

The limitations of LLMs and LRMs are so obvious to anyone with experience in the field that there is no way in hell these systems could replace people. Anyone who thinks AI could replace people at this point is just plain stupid, or outright lying. As for Sam Altman, I am guessing he is more of a moron than a liar; he seems to have convinced himself that he is a genius so thoroughly that he can fool other wealthy people, and credulous, sycophantic journalists, into also thinking that he is a genius. He seems to me more like a moron who doesn’t even realize he is lying, which is probably why he is so good at convincing people of his bullshit whenever he talks.

I hear about governments now talking about passing initiatives to “improve AI literacy,” but they then let guys like Sam Altman define what “AI literacy” even means, and (surprise!) he ends up defining “AI literacy” as diverting tax money to his corporation for integration into government institutions, and teaching school children how to become completely dependent on the products and services he sells.

I maintain that LLMs are actually very useful if they are used in limited ways to make computers easier for people to use (e.g. as auto-completion tools), a purpose for which, in my experience, they are a very good fit.

So what AI literacy should mean is that AI should not be used to create content for you, and it sure as hell should not be used to think for you. Literacy means understanding that these AIs are trained on data made by humans who know what they are talking about, and that the training data is most useful to the AI when the humans who created it wrote it for other humans to read. AI literacy means understanding that to really learn something, you have to solve problems for yourself; you can’t just ask an AI to do it for you, because you won’t learn anything that way. AI literacy means understanding that if you are using AI to think for you, you are doing something very dangerous, possibly even deadly, especially if the AI is making decisions for you that can affect the lives of other people.

@brahms @Laird_Dave @stooovie @sklrmths @fanf42

#tech #AI #SamAltman #LLM #LRM #AILiteracy #ProgrammingLanguages #ComputerProgramming

@stooovie @drahardja "Always code as if the person who ends up maintaining your code is a violent psychopath who knows where you live."
ref: https://wiki.c2.com/?CodeForTheMaintainer
@drahardja Yes, the crisis of AI is a crisis of intentionality. In coding, art, everything.

@drahardja @Riduidel

#AI #softwareEngineering #softwareDevelopment All of that. Peter Naur explains it well in “Programming as Theory Building” (1985):
https://gwern.net/doc/cs/algorithm/1985-naur.pdf