@andrewt rather it says a lot about how shitty and #TechIlliterate #Decisionmakers are, and how they'll choose anything that lets them hoard wealth faster while treating and paying #WageWorkers worse...
If it was feasible, #TechSupportScammers would've also replaced their human workers with bots.
https://www.youtube.com/watch?v=xb_rgQ4IDS8
@acute_distress @tanepiper @tomw I think in a way that's my point.
I think the difference between software dev and "real" engineering is the mindset more than the materials.
Building software permits a carefree approach that building a bridge or a plane doesn't, but it doesn't *require* it. Software development totally *can* be engineering but yeah, it totally mostly isn't
@andrewt I think that it can help beginners a lot with very simple programming concepts. Things like declaring and initializing variables, loops, etc.
It’s when you try to get it to write fuller programs that you run into lots of hiccups. Or even programs that straight up won’t run at all.
@ethandoescode @andrewt It's better at helping experts expand their area of expertise.
The problem for beginners is that they can't tell the difference between the 90% very useful answers and 10% absolute crap made-up answers.
Plus, AI tools like CoPilot learn from their "style", which for new coders, essentially means it just helps lock in bad habits.
But for someone with enough generalized experience in the field to be able to spot and avoid that stuff, they're incredible for learning new languages and APIs.
@LouisIngenthron @andrewt
Agreed.
I would say if a beginner used services like chatGPT and then cross referenced with official documentation in a “trust but verify” manner, it would be more effective.
@andrewt I've recently been writing module tests for safety-critical software. Fundamentally, the tests are required to exercise each function, line, condition, and decision in the code.
Maybe ChatGPT could generate such tests. But what it *can't* do is the most important part of test development: analyze the minutiae of the code to make sure it works as intended, that is, that it's safe.
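(For readers unfamiliar with that coverage jargon: a toy illustration of what "exercising each decision" means. The `clamp` function here is hypothetical, not from the poster's safety-critical code.)

```python
def clamp(x, lo, hi):
    """Return x limited to the range [lo, hi]. Contains two decisions."""
    if x < lo:       # decision 1
        return lo
    if x > hi:       # decision 2
        return hi
    return x

# Decision coverage requires tests taking each branch both ways:
assert clamp(-1, 0, 10) == 0    # decision 1 true
assert clamp(5, 0, 10) == 5     # both decisions false
assert clamp(99, 0, 10) == 10   # decision 1 false, decision 2 true
```

Generating calls like these is mechanical; judging whether `clamp` *should* behave this way in every case is the analysis an LLM can't be trusted with.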
@andrewt
Obviously they will be adding more testing to ensure that everything is hunky dory, right?
Right?
@andrewt I just did that actually.
It took workshopping, but with its help I built a Python 3 program that runs a subprocess in-terminal, capturing its stdin/err/out line by line, while letting the user interact with it normally.
I *am* an amateur. But n.b., if ChatGPT couldn't have helped me? I sincerely doubt I could've dug up the correct answer via Google or SO. Not for this lesser-known species of Python I/O. It was ChatGPT or nothing.
Whatever it says about the field, I had nowhere else to go.
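(For the curious: the poster didn't share their code, but a minimal sketch of the capture half of what they describe might look like this. The interactive stdin forwarding they mention is omitted for brevity, and `run_and_capture` is a hypothetical name.)

```python
import subprocess
import sys

def run_and_capture(cmd):
    """Run `cmd` as a subprocess, echoing its output to our terminal
    while also capturing it line by line."""
    captured = []
    proc = subprocess.Popen(
        cmd,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,  # merge stderr into the stdout stream
        text=True,
        bufsize=1,                 # line-buffered in text mode
    )
    for line in proc.stdout:       # yields lines as the child emits them
        sys.stdout.write(line)     # pass through to the user's terminal
        captured.append(line.rstrip("\n"))
    proc.wait()
    return proc.returncode, captured
```

The subtle parts (pseudo-terminals, buffering, interleaving user input) are exactly the "lesser-known species of Python I/O" the poster means; this sketch only covers the easy direction.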
@andrewt Had an AI code vendor call the other day from a large company. None of my concerns were addressed. They didn't seem to have a plan for the future.
First telling question: if your AI mimics our code to generate more code, and our code is horribly out of date, then what will your AI do?
"Uh."
They are just racing to snap up market share today so that they can get money. They'll figure out their evil tricks later. Meanwhile, we'll all be beta testers one way or the other.
That people are taking vaccines seriously for preventing disease tells you more about the state of shamanism than it does about the state of scientific medicine.
@andrewt True story. If you only know how to write code (which is most of them) it’s terrifying, or maybe it makes you happy because you won’t have to pretend to write code anymore. I don’t know.
If you get paid to fix code, it’s very exciting, because code is already bad enough without letting computers write it. It’s a golden age for debuggers.