That people are taking ChatGPT seriously for coding tells you more about the state of software engineering as a discipline than it does about the state of machine learning as a technology.
@andrewt I just spent half the night trying to coax useful info about libdrm out of ChatGPT and… very yes. :F
@andrewt JavaScript becoming the top language heralded the decay, and now it is here

@andrewt rather it says a lot about how shitty and #TechIlliterate #Decisionmakers are, and how they'll choose anything that lets them hoard wealth faster, as well as treat and pay #WageWorkers worse...

If it was feasible, #TechSupportScammers would've also replaced their human workers with bots.
https://www.youtube.com/watch?v=xb_rgQ4IDS8

Inside a scam call center

@andrewt checked it out, and the code it produces ranges from back-of-the-envelope to gibberish. Next I am going to see if it can generate comments or summaries.

@andrewt @tomw Yet it's probably the one thing it's actually not terrible at 🤷‍♂️

(Also I think based on most software these days the quality of software engineers is overinflated)

@tanepiper @andrewt @tomw software development isn’t engineering.

@acute_distress @tanepiper @tomw I think in a way that's my point.

I don't think the difference between software dev and "real" engineering is the materials so much as the mindset.

Building software permits a carefree approach that building a bridge or a plane doesn't, but it doesn't *require* it. Software development totally *can* be engineering but yeah, it totally mostly isn't

@andrewt I think that it can help beginners a lot with very simple programming concepts. Things like declaring and initializing variables, loops, etc.

It’s when you try to get it to write fuller programs that you hit lots of hiccups, or even programs that straight up won’t run at all

@ethandoescode yeah, someone else mentioned using it to interface with weird APIs and these both make sense to me because they're more "digesting documentation" tasks than really "programming" tasks, and digesting documentation is what language models are best at. and for this kind of low-stakes project, the flaws of these models aren't terribly important
@andrewt @ethandoescode I don't know. I've seen people going for the "remove the boilerplate from Seniors" approach because they specifically _don't_ want more junior developers having to muddle through "I thought this was going to generate good code". The seniors at least have the experience to quickly fix its mistakes or spot where it's likely to be buggy.

@ethandoescode @andrewt It's better at helping experts expand their area of expertise.

The problem for beginners is that they can't tell the difference between the 90% very useful answers and 10% absolute crap made-up answers.

Plus, AI tools like CoPilot learn from their "style", which for new coders, essentially means it just helps lock in bad habits.

But for someone with enough generalized experience in the field to be able to spot and avoid that stuff, they're incredible for learning new languages and APIs.

@LouisIngenthron @andrewt
Agreed.

I would say if a beginner used services like ChatGPT and then cross-referenced with official documentation in a “trust but verify” manner, it would be more effective.

@andrewt I've recently been writing module tests for safety-critical software. Fundamentally, the tests are required to exercise each function, line, condition, and decision in the code.

Maybe ChatGPT could generate such tests. But what it *can't* do is the most important part of the test development: analyzing the minutiae of the code to make sure it works as intended, that is, that it's safe.
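For readers unfamiliar with that kind of coverage requirement, here's a minimal illustrative sketch (my own example, not the poster's actual code or tests) of what "exercise each condition and decision" means for a guard with two conditions — each condition must take both truth values, and the decision as a whole must evaluate both ways:

```python
def interlock_open(door_closed: bool, speed_zero: bool) -> bool:
    """One decision with two conditions: open only when both hold."""
    return door_closed and speed_zero

# Condition/decision coverage: each condition is observed True and
# False, and the decision itself evaluates both True and False.
cases = [
    (True,  True,  True),   # decision True
    (False, True,  False),  # first condition drives the False outcome
    (True,  False, False),  # second condition drives the False outcome
]
for closed, zero, expected in cases:
    assert interlock_open(closed, zero) == expected
```

Generating the table of cases is mechanical; the part a language model can't be trusted with is confirming that `True, True → True` is actually the *safe* behavior for the system in question.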

@andrewt I saw a blog post by someone who used ChatGPT as a pair programmer to help implement a basic new feature... and I think that's exactly what it would be useful for: reimplementing something that has been done 100 times before. (Probably with a bunch of new security holes in the process). How often am I faced with a blank page when writing code? Once or twice a year?
@andrewt @mathling I think it tells you a lot about the state of software engineering documentation.
@andrewt @TheSteve0 if you count by number of people with an engineer or developer title, much of real-life software engineering is basically code collage - copy and pasting from the giant repositories of past works.
@andrewt @TheSteve0 it still requires a reasonable amount of intention and critical thinking to make it do something useful. If AI can pull a nice box of parts to pick from, the final picture will be much more useful.

@anca @andrewt while I don't disagree about modern coding, taking it seriously for software development (as opposed to a tool to help developers) is both unrealistic and dangerous

https://thesteve0.blog/2023/03/28/what-chatgpt-is-not/

What ChatGPT is NOT – TheSteve0's Little World

@TheSteve0 that post is a great summary.

@andrewt
Obviously they will be adding more testing to ensure that everything is hunky dory, right?

Right?

@andrewt I've been calling it Google Translate for code every time I see someone try to use it without understanding anything at all and then post its output online and go "Why doesn't it work"
@andrewt Every time I've tried using it for coding problems so far, it has started spewing out nonsense (primarily trying to use non-existent functionality). But for generating boilerplate that has a billion tutorials for it out there, it's not bad.
@andrewt programming is just one giant cargo cult.

@andrewt I just did that actually.

It took workshopping, but with its help I built a Python 3 program that runs a subprocess in the terminal, capturing its stdin/err/out line by line, while letting the user interact with it normally.

I *am* an amateur. But n.b., if ChatGPT couldn't've helped me? I sincerely doubt I could've dug up the correct answer via Google or SO. Not this lesser-known species of python I/O. It was ChatGPT or nothing.

Whatever it says about the field, I had nowhere else to go.
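For the curious, the core of what that toot describes might look something like this — a minimal sketch of my own (not the poster's actual program), showing just the line-by-line capture with output echoed to the terminal. Fully interactive stdin forwarding would need threads or a pty on top of this:

```python
import subprocess
import sys

def run_and_capture(cmd):
    """Run cmd, echoing and capturing its output line by line."""
    captured = []
    proc = subprocess.Popen(
        cmd,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,  # merge stderr into the same stream
        text=True,
        bufsize=1,                 # line-buffered text mode
    )
    for line in proc.stdout:
        captured.append(line.rstrip("\n"))
        sys.stdout.write(line)     # echo so the user still sees it live
    proc.wait()
    return captured
```

Usage would be along the lines of `run_and_capture([sys.executable, "-c", "print('hi')"])`, which returns the child's output as a list of lines while still printing it.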

@andrewt Had an AI code vendor call the other day from a large company. None of my concerns were addressed. They didn't seem to have a plan for the future.

First telling question: if your AI mimics our code to generate more code, and our code is horribly out of date, then what will your AI do?

"Uh."

They are just racing to snap up market share today so that they can get money. They'll figure out their evil tricks later. Meanwhile, we'll all be beta testers one way or the other.

@andrewt The idea of deploying huge blocks of code I didn't write and didn't have time to wrap my head around and understand is frightening. Imagine the security holes and inefficiencies!

@andrewt

That people are taking vaccines seriously for preventing disease tells you more about the state of shamanism than it does about the state of scientific medicine.

@andrewt i contemplated putting on my resume that none of my projects were written using chatgpt but i feel like some screeners/hiring managers might see that as a negative lmao
@andrewt I don't see it as taking it seriously for programming so much as someone I can ask for help when there's boilerplate to do, or an API I don't understand. It can be helpful, but it's not replacing me.
@andrewt software "engineering". I work with SW & systems engineers, and when it's done right it is engineering (functional specs, formal testing etc; Agile or waterfall doesn't matter). But a lot I've seen is also throwing stuff against a wall and seeing what sticks. ML seems to be that on steroids.

@andrewt True story. If you only know how to write code (which is most of them) it’s terrifying, or maybe it makes you happy because you won’t have to pretend to write code anymore. I don’t know.

If you get paid to fix code, it’s very exciting, because code is already bad enough without letting computers write it. It’s a golden age for debuggers.