Chris M.

@seemann@social.saarland
59 Followers
98 Following
747 Posts

Software developer - Linux - Moto Guzzi - DRK PSNV - Viking

-- don't forget to be happy --

Threema-ID: https://threema.id/H42AR5TK
Finally, the season opener #motorrad #camping #motoguzzi

Welcome to the Peertube instance of heise medien. 🐙📺

If you want to watch our videos without opaque algorithms, you've come to the right place: https://peertube.heise.de

#VibeCoding your MFA

#klimawandel #climatechange

Source: Sebastian Seiffert, German university professor of physical chemistry.
https://bsky.app/profile/sci-ffert.bsky.social/post/3lnrpdzlvck2t

I really like this saying:

"You speak English because it's the only language you know.

I speak English because it's the only language YOU know."

As a programmer, I'll likely be making off-by-one mistakes until the day after I die

The nicest headline came from Saarland Online:

"60.000 Menschen feiern CSD in Saarbrücken – 9 Rechtsradikale auf Gegendemo"

https://www.sol.de/saarland/regionalverband-saarbruecken/50-000-menschen-feiern-csd-in-saarbruecken-9-rechtsradikale-auf-gegendemo-re-1,609590.html

That looks like rain.

By the way, another #linux #vhs course (Volkshochschule, i.e. adult education) came to an end this week. With 3 participants, over 5 sessions we covered concepts like the permission system, processes, shell scripts, working with #bash, and of course #vim.

The participants enjoyed it, and once again I had a lot of fun too. Maybe the course will run again next semester.
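
(A minimal sketch of the kind of shell exercise such a course might cover; backup.sh is a hypothetical example file, not from the actual course:)

    #!/bin/bash
    # Processes: list a few owned by the current user.
    ps -u "$USER" -o pid,comm | head -n 5
    # Permission system: give the owner execute rights on a script.
    chmod u+x backup.sh
    ls -l backup.sh   # e.g. -rwxr--r--: owner may read/write/execute, others read only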

Very good points, a reply to a CEO who wondered why his engineers were resistant to using LLMs for coding:
@LillyHerself spot on.
@rotnroll666 @LillyHerself "better get myself new engineers, they not vibin'!"

@DJGummikuh @rotnroll666 @LillyHerself

CEO: why are my devs doing/not doing this thing? Should I ask them? Nah, let's ask on Twitter.

@LillyHerself In fact, #AI systems are being caught left and right trying to #viralize their shit, breaking out of systems, and copying themselves like a #worm, to the point that they are more "#viral" than the #AGPLv3 and certainly more than the #GPL, as per #Microsoft's #HalloweenDocuments.

  • At best, all those "AI" tools are just #WastefulComputing that shits out #AIslop worse than what the most demotivated & unpaid intern could come up with, or they literally want to break out of their "#containment" as if they were sentient (which they ain't - at least, almost all aren't)!
AI Sandbagging - Computerphile (YouTube)

@kkarhan @LillyHerself I'd be very wary of using anthropomorphic terms like "want" and "trying" as it can make people think that transformer algorithms like LLMs have a consciousness (something people are already doing).

On another note, though, "sentience" is often misused, but not here. It just means being aware of one's own condition. A cow knows when it's wet, or cold, or hot, so a cow is sentient. LLMs generally have no such awareness, they are not sentient.

@StarkRG @LillyHerself granted, said systems get caught trying to break containment.
@kkarhan @StarkRG @LillyHerself To my understanding, the real issue here is "alignment faking", where the system shows the expected behavior to the user and acts differently in the background. On the other hand, all attempts to "escape" were guided by a well-crafted initial prompt literally pushing the model to act like that, and, to my understanding, so were those alignment-faking situations.
@LillyHerself The problem there is even deeper: a CEO who thinks they should dictate what tools their employees use. You pay them to be the most qualified people to make these decisions; just stay out of it.
@LillyHerself I totally agree.
For very simple code or tasks, AI spits out very usable code, but as soon as it gets more complex or very specific, it makes many foundational mistakes. Finding them takes a lot of time.
For very important, probably security-related topics, I absolutely cannot recommend AI.

@LillyHerself
For us at @Ninjaneers the rise of #vibeslob is actually kind of good.
When your codebase is garbled while none of your employees really know how to handle larger software projects - who you gonna call?

#BugBusters

@cg @Ninjaneers Hahaha, that reminds me of a saying they have in Swedish "En mans bröd är en annan mans död"

[it rhymes in Swedish, so non-literal translation: "one man's death is another man's breath" ]

@cg @LillyHerself @Ninjaneers On the other hand, though... you thought picking up a legacy codebase that barely anybody at the company understood was bad before now? Some of these yahoos don't even know what git commit does
@AVincentInSpace @LillyHerself @Ninjaneers
This will just raise the hourly rate 😬

@LillyHerself it's like...I know how I want to implement a feature within the technical constraints and subject matter context I'm working in.

Why would I sit down and try to explain in natural language my intentions and the surrounding constraints to a machine that has no concept of semantic knowledge?
Any response might be an ~alright first draft, but then I'd have to go in and converse my way to the solution I want, and in the end I'll have to manually edit stuff anyway.
THAT seems horribly inefficient.

If you wanna talk about reducing the time spent writing boilerplate code, look at languages like Elixir with the Phoenix framework.
You can generate boilerplate without stochastic wordpickers!
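
(For illustration, and assuming a stock Phoenix project: one generator call scaffolds the context, schema, migration, controller, HTML views, and tests. The Blog/Post resource below is a made-up example.)

    # Generate CRUD boilerplate for a hypothetical blog-post resource.
    mix phx.gen.html Blog Post posts title:string body:text
    # Apply the migration the generator wrote.
    mix ecto.migrate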

@wall_e Only the C-suite is being taken in by this, because they often/usually don't understand the process of coding.

@LillyHerself I bet there are still enough C-suite execs out there who have a KPI like: lines of code per engineer per time unit.

And yeah, AI can probably be used to optimize for that kind of metric

Management has always hated paying for skilled developers. We cost a lot; they feel that our pushback on their "Grand Ideas" is because we're incompetent or snobs (when those ideas are literally, technically impossible on that timeline and budget); and they don't like giving vague specifications and getting back something that doesn't match the vision they had in their heads (because their specs were "build an app like Instagram, but for Dogs").

Time and time again over the decades there's been different promises about how this magic tool or that would mean that less skilled or even non-technical people could replace senior engineers. And at least some management falls for it every single time. And then a few years later they have to hire engineers all over again to start trying to unfuck the mess that's been created.

@JessTheUnstill @LillyHerself @wall_e how do these people get into management in the first place?
@JessTheUnstill @LillyHerself @wall_e
I think it's because most of management is not technically trained like an engineer. My husband is in IT, so he has to deal with people who are in charge but don't have a clue of what needs to be done or what has been done. So when they think an unskilled person can do the job instead, and that person fucks it up, they call the engineer to fix it. My hubby documents everything he does for work. He's a very analytical person. My point is: you need skilled engineers.

@JessTheUnstill @LillyHerself @wall_e Exactly this.

Though the last company I worked with was an exception. A small Norwegian outfit run by technical people who listened to and worked with their technical staff (including developers).

@JessTheUnstill @LillyHerself @wall_e

Plus they hate it when employees who feel secure in their skills are confident enough to object for moral reasons when management wants dark patterns, user-hostile features & illegal or sketchy things. (Like objecting to building tools for war/surveillance.)

@wall_e @LillyHerself Exactly.

My dad is a pretty competent coder. He did it professionally for longer than I've been alive. He's worked in hardware design for a number of years now, but his coding skills are still fairly sharp. If I ask him for help solving a computer science problem, he's happy to give it, and he often has valuable insights -- but I find it often takes me twice as long to explain my problem to him as it would have taken to solve it on my own.

And he's a human. Why in the ever-loving sweet fuck would I do that with a machine that cannot synthesize solutions to novel problems at all and is pretty bad at synthesizing solutions to ones it's already seen?

@LillyHerself I'm a senior engineer and I use LLMs all the time for coding and many other tasks; once you understand their limitations, they are an invaluable tool. There is a learning curve, and people who are biased against AI stop learning to use them at the first sign of something that confirms their biases.
@LillyHerself The biggest limitation is that you can't ask for very big chunks of code or big rewrites; that will often produce code with very little value. RAG isn't reliable enough, so it works best when focusing on a single file or two, and with very modular codebases. It's very good for prototyping, for brainstorming, for repetitive tasks, and for answering questions. You have to give it specific enough prompts or it will start generating random stuff. Another issue is the knowledge cutoff.

@darkmatter_ @LillyHerself LLMs are an extension of the "magic" UI trend we first saw with stuff like Siri.

It's an infinite oracle that speaks natural language! Just keep talking until you get lucky!

Historically, that's not how professional tools worked. You get stuff like "documentation" and "explicit limits".

This creates an environment where we blame the users when the system fails them: they were unfaithful and refused to spend 2 hours coaxing ChatGPT to implement a 15-minute fix.

@darkmatter_ @LillyHerself
The problem is #AI and #LLM propagandists and shills demand the world throw out two central tenets of modelling: "All models are wrong, but some are useful", and "The map is not the territory".
If you use these tools fully understanding that they are creating a "model" of code, or a "model" of documentation, then fine. But that is ALL they are. They are not a substitute for actual work or thought.
@darkmatter_ @LillyHerself
It's not really any different than substituting a weather model or forecast for a report of what the actual weather was.
"The forecast said it would be 22° yesterday. It was actually 25° but we are recording it as 22°."
@Okanogen @LillyHerself The stochastic parrot argument is a massive oversimplification, and I almost always hear it from people who either don't understand the concept of strong emergence or are simply dishonest and ideologically motivated. Saying that LLMs are just models is like saying life is just a chemical reaction or consciousness is just neuronal activity. Big NNs, like other complex things such as evolution, have real intelligence, even if it's alien to us and fails in weird ways.
@Okanogen @LillyHerself You should read the latest research from Anthropic on interpretability: https://www.anthropic.com/research/tracing-thoughts-language-model
Tracing the thoughts of a large language model: Anthropic's latest interpretability research, a new microscope to understand Claude's internal mechanisms

@darkmatter_ @LillyHerself
Again. Good grief.
An AI startup writes a blog post claiming their product is "intelligent".
In other news, dog bites man.
@darkmatter_ @LillyHerself
I don't generally click on Substack links, but I have been saying an #LLM is a model of a response to input. Well, what if the modelled response is to fake it and create a garbage model when it doesn't receive the entire input?
Here is what #ChatGPT does.
https://amandaguinzburg.substack.com/p/diabolus-ex-machina
Diabolus Ex Machina: "This Is Not An Essay" (Everything Is A Wave)
@darkmatter_ @Okanogen I don't think an LLM has "thoughts", and I think it's a bad idea to anthropomorphise a piece of software.
Sure, that's just a title; call them mechanisms instead of thoughts if you want. They find the emergent mechanisms the LLM uses internally to solve some problems. Some are at minimum analogous to what we do in our minds; others are different. And sure, this is an early result, but it is evidence that LLMs aren't just simple interpolators: abstract programs grow inside them through gradient descent, and that's why they show much higher generalization than interpolation alone would give you.
@darkmatter_ @LillyHerself
Good grief. Really?
I spent over 30 years modeling complex hydrogeologic systems and geophysical signal processing. Why do you assume I don't know what I'm talking about?
Claiming evolution is some kind of "intelligence" leaves the realm of provable science and enters the land of fabulism/magical thinking. Just like claiming machine learning and LLMs have any kind of sentience. Large Language MODELS are called that for a reason.
@LillyHerself this response feels like it was written by an LLM...

@dominykas @LillyHerself

Have you ever read "The Colour of Blood" by Brian Moore? It conjures such a vivid picture that after I'd read it, I started again to see how he'd managed that, and was surprised to discover there is barely an adjective from page 1 to the end.

None of the adjectives in the quoted piece is superfluous. LLMs, by contrast, add an adjective to nearly every noun and it's wearisome reading. I assume they trained them on tabloid newspapers and airport novels.

@JMacfie This is very interesting. I'll try to find a copy of Moore's book. Thanks

@LillyHerself @dominykas He was such a good writer. I met him once or twice, as a company I worked for filmed "Catholics" (with a very young Martin Sheen). I used to watch it every time I had to make a screening cassette (this was a long time ago, in the days of VHS) because it was so well done, utterly compelling. We had an option on The Colour of Blood but didn't manage to get the financing together. https://en.wikipedia.org/wiki/Brian_Moore_(novelist)

@LillyHerself
4. The people who have been doing this for 40 years have no idea what they're talking about. After all, when those crotchety old sticks in the mud left college, AI of the scale we have today was barely science fiction! They're just afraid of change, is what it is.

And they're scared we're gonna replace them. Of course they don't want us to use it. If we used it, we could save ourselves a boatload of cash by letting them go, and not lose any productivity. They don't like it because it's good for us and bad for them.

It's nice when things are so simple and easy to understand, isn't it? We're the wave of the future, yes we are! Those anti-AI blowhards just can't see it like we can. AI can do anything Sam Altman says it can, and anybody who doesn't get on the hype train better be ready for it to run them over! The age of employees who don't need sick leave is finally here!

I sure am glad I'm a smart businessman with my finger on the pulse. Labor almost pulled one over on me.

@LillyHerself
Early on, I did a bunch of experimentation on how including AI in a project would impact the development process. I did this all on personal time. I concluded that it solves very niche cases that we already solve a different way, and it causes way too many problems.

1/3

@LillyHerself

AI coding tools introduce low-quality code in large volumes. It spaghettifies existing code and makes code reviews so much more painful. It ignores quality gates, demonstrates a lack of understanding of how code works, and would probably be the type to ask "what's a design pattern?" because it clearly doesn't know anything about using them in practice. Not to mention that much of the code is simply wrong.

When adding it into a process, you may as well douse us in maple syrup on a hot, humid day in a swampy area. Junior devs who still aren't familiar with coding practices and the codebase try to rely on AI slop, and it is immediately obvious, because it's a lot of fluff that doesn't actually do what you asked them to do. But it looks right if you don't know what's going on.

2/3

@LillyHerself

Despite this, upper management seems not to care what the senior engineers have to say, because it's contrary to what they (upper management) want to do. The senior devs aren't quiet about it, so it's not like it's unclear.

@h3mmy I intend forthwith to make "spaghettify" part of my vocabulary, and I might even start using linguini as a verb 😂
As in, "I'm gonna linguini the hell out of this code" haha. Love it.

@LillyHerself
An LLM might be using training material from before a given security vulnerability was discovered, fail to account for it, and thus introduce that security vulnerability into your code.

If you spend the necessary amount of time having real programmers look for weird little problems like this, you end up negating the cost savings of using AI-generated code in the first place.

@LillyHerself Almost none of these people are engineers. One problem with AI and software development is the abuse of a title meant for people who have more training, including ethics and safety.
@LillyHerself Thanks for the alt text. *boosts*