Unpopular opinion and I expect there will be a lot of pushback on it, but what's a good (polite) debate if not enlightening?

Do you know how your washing machine works? (If yes, keep quiet for those who don't.)
If the answer's no, I suspect you still know one thing: you trust it to wash your clothes because, well, that's what it's designed to do.
If you're not a mechanic and yet you drive, you trust that when you do all the right things and push the right buttons, your vehicle is going to move forward and get you to places. If something breaks, do you attempt to tinker with it and fix it? Maybe, but more likely you go to someone who does know.
What's my point then?
AI coding. Humans made a thing that allows non-programmers to have an idea. They can write that idea out in great detail and, from there, have something returned that they should of course test thoroughly, and if they like it, maybe share it.
The washing machine is similar but not the same. If you put in your powder/detergent and the right colour of clothes and tell it to start, you let it do its thing. It washes your clothes and hopefully, when you're wearing them at an important meeting, they don't suddenly fall apart, because someone beta-tested that machine before you got it and made sure it didn't rip the seams of your clothes silently, deadly, badly.
AI programs need to be tested the same way as your expensive machine was; many probably aren't. That is a problem, but dismissing the underlying idea of AI code itself out of hand seems an odd one, at least to me.
Maybe because there's more scope for badness, maybe because you only ever hear about the bad things going on. Like Amazon reviews: the majority of what you see are people unhappy with the product. For every unhappy person there are probably a thousand who just get on with it.
Same for AI badness. For every bad experience, there are probably a few hundred situations where someone made a thing, it just works, nobody cares, and you'll never know.
Basically I feel that we maybe need to take a step back, review our hate and our personal biases a tiny bit, and stop crapping all over people for doing things a different way that isn't *our* way.
Before automatic washing machines we had manual ones that took a lot more effort, and before that, people washed by hand. They probably felt exactly the same. The cycle (if you'll pardon the pun) repeats throughout the centuries and will continue to do so, likely forever.
New thing comes along, people hate it, old way was better.
New way becomes old way, new thing comes along, people hate it, old way was better.

Shout at me as you wish.
PS. Wasn't written with AI.

@Onj I don't think this is a bad/unpopular opinion at all. Old stuff may be better for older people, and though I do a bit of hobby AI/ML coding myself, I am having so much fun recreating old games I played on DOS PCs in 1984. I haven't shared them because I'm not even sure anyone would care, but for me, they provide so much good entertainment. When I played the originals that I've now recreated with audio cues for myself, I was dependent on sighted assistance (not that it was a problem, because I had a younger sister who loved playing the games as much as I did), but the feeling of being able to do stuff independently, stuff I would have had to ask for help with when I was younger, leads to so much happiness, if happiness is even the right word. Satisfaction, more likely. Yes, there are people that do bad things with AI, but every tool can be used positively or negatively. Then again, computer programming in itself could be used positively or negatively even before AI. If memory serves, the first computer virus was created in 1981. Again: computer program, tool. A tool can be used beneficially or harmfully. AI is a tool, just like everything else; even electricity is a tool that can do both good and bad.
@Sozhami Yep, a good way of looking at it.
@Onj I completely and totally agree with this. No arguments from me on this one.
@Onj The thing is that no one knows how the "AI" works because the processes can't be observed. The reason I'm able to run my washing machine without understanding how it works is because there are a sufficient number of people who do understand how it works and can fix it when it breaks.
@BTowersCoding How did humans make a thing without knowing how it works? Genuine question; this seems odd to me.

@Onj Well, the engineers who build the language models know how their algorithms work, but when a model is used to perform a task, the neural network accomplishes it in a way that is completely opaque.

There is a field known as "explainable artificial intelligence" (interpretability research) which attempts to solve this, but as far as I'm aware it's still mostly theoretical.

@BTowersCoding OK that is fascinating and also kinda weird.
@Onj Yeah, it's really interesting, because I don't think it's ever happened before that it's possible to perform a task without any connection to its history. It's kind of like in Star Trek, where a primitive culture can be contaminated by giving it technology it didn't develop itself, which can lead to drastic consequences; but in this case we are doing it to ourselves.

@Onj @BTowersCoding We know how random number generators work, but we don't know what number a properly made one will spit out next.

We know how LLMs do what they do, and hence we can be certain that, with sampling switched on, their outputs are non-deterministic.
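To make that dice-rolling concrete, here's a minimal sketch in Python of the temperature-sampling step most LLMs use to pick each next token. The logits are made up for a toy three-token vocabulary; real models do this over tens of thousands of tokens:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Pick a token index from raw logits via temperature-scaled softmax.

    With temperature > 0 the pick is random, so the same input can yield
    different outputs on different runs. At temperature 0 we always take
    the argmax, which is repeatable for a fixed input.
    """
    rng = rng or random.Random()
    if temperature <= 0:
        # Greedy decoding: deterministic.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [v / temperature for v in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(v - peak) for v in scaled]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

toy_logits = [2.0, 1.0, 0.5]  # made-up scores for a 3-token vocabulary
print(sample_next_token(toy_logits, temperature=0))    # always 0 (the argmax)
print(sample_next_token(toy_logits, temperature=1.0))  # varies between runs
```

Note that the randomness comes from the sampler, not the network itself: fix the seed (or set the temperature to zero) and the same prompt gives the same answer every time, which is exactly the RNG analogy above.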

@Onj So I agree to a significant extent. The issue is when AI companies openly say: AI will write all your code, so you'll need half the number of devs and half the time to ship the same product, because AI is just so damned good at code. Then bosses say: awesome! Let's make loads of talented coders redundant, and if the team tells me they still need loads of time for testing because AI code still needs oversight...
@bermudianbrit Yeah, that I don't agree with at all, but I don't know where my level of 'this has to stop' really is. I think it's hard to define.
@Onj Then I'll just ignore them and boost the teams' targets anyway. Massive companies have been doing this because people at the top are assured that AI can do the work. So it's not so much a problem of AI itself, but a problem with the salesmen foisting it off on companies, and with those at the top not listening to their teams when they say that testing is still needed.
@Onj I mostly agree with you on the principle of not dismissing LLM code outright, though I do think the analogy might be slightly misguided/mischosen. It's less like using a washing machine and more like using a hypothetical sort of clothes vending machine that puts together and sews your clothes on-demand. Your clothes are already made when you wash them, and presumably you read the washing instructions on the label and set your washing machine to the right settings. So yes, you're trusting it to follow those settings and to not mess up your expensive clothes, but you're not really having it create anything and the settings are quite limited-scope. I do think there is ethical use of AI, I try to make responsible use of it, as much as its background is very problematic and we ought to be conscious of that, so I definitely agree with you. But I also know that the ratio of shitty to decent AI-coded projects is much, much higher than the ratio of disastrous to successful washing machine cycles. Hey, how many tokens worth of water does a washing machine cycle use? Now that's a thought!
@guilevi Yep, all fair points.
@Onj I think you make some good points here. I fully agree AI should be tested always.
@Onj The analogy is a little flawed because you're comparing an end user to a developer. If I create a washing machine, have no idea how it works, give it to people, and things break, and I then have no idea how to fix them, that's on me. Any end user using any program may not know how it's going to work, but they can go to the manufacturer, outline their problems and hopefully get fixes, workarounds or patches.
@JustinMac84 You can go back to your coding agent and outline the problems and, if done right, get fixes too. Not always, and not always well, but that's what testing's for, isn't it?
@Onj So you receive a support ticket and negotiate with your user, while simultaneously submitting a support ticket to your AI of choice and negotiating with that. If you can't duplicate the problem the user is having, what would you do? You wouldn't know how to advise them, and would have to pass on every piece of possibly incorrect, possibly unsafe advice the model gave you and await feedback from the user. Exponentially grow that problem for every bug.
@JustinMac84 Yep, but if there were such a thing as Fiverr for coding instead of music, the same thing would apply there. Humans can be just as devious: make something that looks good and works on the outside, steals your crypto on the inside. Not nice.
@Onj I don't understand the point. There is Fiverr for coding. You can commission people to produce software for you. Thing is, with human-coded software, culpability can be traced back. Imagine my shock, my horror, my outrage, when you told me the software I had my model produce for you introduced vulnerabilities! However did that happen? There's no way for you to prove that I didn't do it on purpose, or that the model didn't mess up.
@JustinMac84 Sure, but I think you're doing what most people do right now: assuming the absolute, absolute worst-case scenario. I don't know why people do this, honestly, other than if it scores points, but OK, point made. It could be terrible. It could be catastrophic but... What if it just isn't? What if it simply does the job it's intended to do?
@Onj I don't think it is worst case scenario. Worst case would be bad acting; second worst case would be unintentionally introduced vulnerabilities that allowed other bad actors; third worst case would be software that messes up your machine; fourth worst case would be incompatibility with certain setups and bugs.
@Onj The point is that the quote-unquote developer, who has outsourced all the skill to a stochastic parrot, has no idea how to fix any of those issues without arduous, lengthy back and forth that may introduce even more problems. Would you do that to a customer? "Andre, I'm having this problem." Knowing that consulting an LLM is like rolling a die, albeit a weighted one, and you wouldn't know if the answer was right or wrong,

@Onj would you still pass on the magic 8-ball solution?

I'm sorry you feel that these arguments are an attempt to point-score. They are not. In fact, your post is very topical: there is an article doing the rounds just today about Amazon holding a high-level meeting about a spate of outages affecting its business due to AI coding. A trillion-dollar company is suffering because of this.

@JustinMac84 And that's on them. If those higher-ups are too stupid to properly test, that is a 'they' problem. I can only speak for myself, but I spend hours, sometimes days, after getting a thing made, testing to the very best of my ability, and I always ask my, as you put it, 'stochastic parrot' to write out a document detailing all steps.
I'm even more than happy to share the chats I have with it, I hide nothing.
I'm not doing this seriously, more for fun and that's it.
I'm just so tired of the massive amount of negativity around a thing. If one lives life like that, I pity them. I can't do it.
There's more to life than hate, than sadness, than negative vibes.
@JustinMac84 You're not even wrong, because clearly you've done all the reading, read all the bad press, and it vindicates your own bias about it (which goes back to my post in itself) and that's absolutely your choice to make. I'm not going to change your mind. I just think it's sad that before we can enjoy a new technology, we have to crap all over it first. It happens in all sectors when a new thing comes on the scene.
@Onj It's all about acceptable risk. If you and your circle are happy with the software that AI can produce, that's great. A business footing is different, and if I were producing a marketable product I would be unhappy feeding back to a user: I don't know what's wrong, I can't replicate your problem, my AI model has suggested this, it might work, it might not. I would also be unhappy with that level of uncertainty and support as a user.
@JustinMac84 I've had people report back to me 'xyz' didn't work, I got it fixed. I'm just making addons though, not software to control military aircraft.
@JustinMac84 Today I even heard that someone was using my addon to make money. Happy for them. Never thought that would be a thing, but why not?
@Onj Opposing tech for the sake of opposing tech is stupid. I would hesitate before describing a breadth of profound and, most importantly, substantiated concerns as negativity and hate, though. I would argue that charging recklessly into the adoption of a technology without considering all the ramifications is equally foolish. For me it's not black and white, no-one should use this stuff; it's about how the stuff is used.

@JustinMac84 Of course it is. If I made a thing, didn't give it a single test, threw it out there and it killed someone's machine, that would be terribly irresponsible. You haven't come up with a single good use case so far, though; your entire response to the thread has been:
Amazon screwed up, you could screw up, people are screwing up, your brand would suck if...

That's putting problems and limits right at the door before you even step out of the house.
Me, I can't live that way. I think trying a thing and seeing if all it does is suck, is better than not knowing at all.
Taking other people's word for it, and again *only* seeing the bad in a thing, well it speaks for itself.

@Onj If Microsoft, a staunch proponent of AI itself, is publishing studies demonstrating that AI causes cognitive atrophy and a reduction in critical thinking skills; if Amazon itself is falling over badly generated AI code; if the BBC is testing chatbots and noting sometimes a 50% failure rate; if proper programmers are noticing the cumulative and, most importantly, hidden errors AI coders are generating...
@JustinMac84 I rest my case. You just made all my points for me right there.
@Onj That is missing the point of my argument. The issue is not that you might screw up. Bugs, with all the permutations, compatibility issues etc., are absolutely and completely inevitable. It's not the screw-ups themselves that worry me; it's how the screw-ups happen and how they are dealt with that concerns me.
@JustinMac84 If you let it concern you. If people don't fix the things they're putting out, if what they're putting out sucks so bad it hurts or kills people, don't go near it, ever. You're absolutely not wrong for that.
@Onj @JustinMac84 Thing is, I think you can have it both ways. If I write code, I make sure I know how it works and how it's created. AI is a tool for me. It saves me time. But I do know how it works; I'm an engineer and I've coded stuff by hand. I play piano, not as well as you. I know you've used Suno or other tools to play with AI and its creativity. Would you accept a piece of music that AI made as yours? When I make things with code, I involve myself with the process, but I know I don't have time to do what I did today and write 5000 lines of it by hand. I give it attribution as my assistant and co-writer in my code and in my application. I do, however, and will always, be able to break it apart, know what code was written, and be able to solve the problems that will inevitably come, because I use AI as a tool to make something in the form of an application work. Without some knowledge of programming, though, I would never release it, because I know that it or some portion of it will break.
What I'm saying is that AI should be used with care. Know what it's making. Understand how it works. And for goodness' sake, don't do what I know you don't do but others often do: rather than going to Google to search whether something already exists that solves your problem, you employ AI to write you a program to do it. For me that's too risky, and if I've found an application that has had thousands of people run it, try to break it, push it to its limits, I'll use it more readily. I can write support for said app. But the person who would rather AI a solution without knowing how their new solution works will eventually get a call, know nothing of why it's breaking on person B's computer, ask AI about it, and be confused because the AI no longer has the background that allowed it to make the thing. To make this ... thing ... shorter: just be careful. Learn about your code and how it works; you'll thank me later.
@ner @JustinMac84 No, because I'm probably a hypocrite. If I prompt Suno to make something based on an idea, it isn't mine, but I could probably learn to play it. Coding feels different. It's not; I know that in my head, but it feels more wholesome. I cannot explain why, and I have zero reason for thinking so.
@Onj @ner Props for the honesty. But then we come to the interesting question of at what point does something become yours. Hans Zimmer, John Williams, they write pieces. They tell the orchestra what to play. At what point is the prompt detailed enough for the same ownership to be legitimate, when you're just telling the AI what to play?
@JustinMac84 @ner Lol that's too deep and I don't know. What I know is that AI coding is fulfilling a mad dream I had as a kid to have a thing made that I wanted made, even if I couldn't do it. People say the same about music with Suno. If it makes them happy, why not?
Musicians often worry they'll be put out of a job. Not me.
I know what I can bring to the table. I know my skills, the way I play is mine, and even if an AI trained on my material I could still switch it up, so I don't fear, honestly. I know many do.
Nothing to do with this discussion really but adjacent.
@Onj @ner I've always wanted to know your take on this. Take live gigging out of the equation completely because AI can't do that yet. Pretend I don't know you. How do you feel about the following attitude and imagine it becoming more prevalent. Why would I want to listen to, much less buy Andre's music, when I can just generate my own? Why should I hire him for a project requiring recorded music for the same reason?

@JustinMac84 @ner Great question and my take on it is this:
You listen to whatever makes you happy, and you buy whatever makes you happy. If that isn't me, that's absolutely fine.
I am one of hundreds of thousands of musicians who learnt to play a particular way, and sometimes it's hard to break out of that way. AI can do what we cannot, because it's trained on those hundreds of thousands of us and can approximate/amalgamate what it learnt into something you want to hear.
You may not like the sound choices, so you can spend time directing it and hope it produces what you want, and if that makes you smile, that's what music should do.

I'm in a very small minority when I say that, but AI music is here to stay. The good, the bad and the terribly ugly. I've heard it all.
Eventually it will become so good you won't be able to tell it apart from real-made stuff, but real-made stuff is still going to get made anyway, because some of us love what we do and will keep doing it.

I'm not angry, I'm not mad, I'm not arguing. I only speak for myself when I say all of that, but truly, having had AI create some stonkingly good bangers from my own uploaded material, I'd be a terrible liar if I said I hated it.

@JustinMac84 @ner I hate the shit stuff. Utterly loathe it. The thing is, a lot of pubs and clubs use AI-generated stuff now because they don't have to pay copyright on it, but it's all generated from what seems to be the same generic 'Muzak in a lift' template, never anything really groundbreaking. That pisses me off. It really does. It could be the good stuff, but it never, ever is.
@Onj @ner Who's Andre? Never got to hear about him, because his one album every 3, 6 or 12 months, whatever, is buried under a hundredweight of people generating an album a day. I've always wondered what your take on that side of things is.
@Onj @JustinMac84 I can't say that about my code. I feel kind of janky using it, but it gets the job done and I'm super transparent that I worked with Claude to make the thing. Big difference, though: I know how it works, down to the last scintilla. That's how I get around feeling like a fraud in a developer costume. I can directly tell Claude that I think the issue is somewhere in x.cs, in the abc method of the blarp object. And we fix it and move on.
@ner @Onj Absolutely this. I think knowing how to code and then using AI as a boost is fine, a big step up. It's putting it in the driver's seat and everyone else being the passengers that I have a problem with.
@Onj To say nothing of the implications for livelihoods and the environment. Don't you think that weight of evidence, already amassing while AI is so young, is worth listening to and being mindful of, rather than dismissing it as negativity and hate? The genie is out of the bottle now. It's never going away; wishing it so is pointless. Doesn't mean we have to accept the genie as is, though. Doesn't mean we can't hold out for better.
@JustinMac84 We won't get better if we stop it in its tracks. Remember the size of computers when we were young? Talk to your grandparents and ask them the size of computers back then. They had to start out as junk before they got down to the smaller-than-a-fingernail sizes we can produce now.
New things have to suck before they don't.
Old things learnt how not to suck by sucking in the beginning.
@Onj But again, you're over-simplifying my argument. No-one said anything about stopping anything in its tracks. I believe I was quite clear that AI isn't going away, nor should it. Carrying on regardless, "yes there are a bunch of nay-sayers and we're already seeing the ways things are going wrong and people losing their jobs etc, but fuck it, they're just pooping the party, bunch of miseries,"
@JustinMac84 I'm not oversimplifying anything; I'm stating a fact. You're taking it as a personal slight. The 'you' in my statement is not personally directed; it is just generic.
I also agree with your last post, so that could hardly be oversimplifying could it?
Ethical AI is important.
Stuff that is going to be used to kill people, autonomous robots controlled by AI: that is not your friend or anybody's friend. We need to push for that to be banned, and quickly.
@Onj I assumed your antepenultimate post was a response to my argument. I don't feel slighted; I just want you to understand that I regard banning all AI ever as impractical, unworkable and not necessarily beneficial. I do, however, feel that we are adopting it at a reckless pace, and that things like cognitive atrophy; acute job loss; inaccurate decisions, information and unreliable products; environmental destruction
@JustinMac84 I personally like that the CEO of Anthropic (the company behind Claude) basically told Trump to 'go fuck yourself', though not in those words, because they didn't want to build no-control robots to spy on the US, and whatever else might come of that. Good for him.
@Onj Absolutely agree with you there. and also, I am glad that people are making money off the tools you create. That is cool. My only intent in replying was to assert that I think there's a difference in requirements and responsibility between someone who owns a thing like a washing machine and the people who make a thing. I also think that we need to listen to the science and not be in such a hurry.
@JustinMac84 You know that thing in the '90s, was it the ESRB, the board for rating video games?
We need a modern something like that for AI: comprehensive, third-party independent testing that comes up with a proper rating standard, some kind of scale for energy use, and many more things that I'm not qualified to think of.
@Onj and people being forced to bear the cost of data centres, etc. Worrying about these things is not negativity or hate; it is common, rational sense.
@Onj or stop everything, ban AI forever, don't have to be the only options. We could appropriately regulate, investigate, move more slowly and get a healthy AI with a more significant net benefit.
@Onj Proliferating the ability to produce software to many, many more people just exponentially increases the possibility of malice, unintentional vulnerabilities and incompetence. At least the hacker in your example is human, and can therefore be blamed, and had to invest a lot of time to get skilled. Do they want to blow that investment on bad acting?
@Onj Whereas an abusive partner could quite happily blow a day's effort to produce a tracker, key logger or other piece of malicious software with which to infect a partner, ex or rival business.
@Onj But anyway, this doesn't address your original point, my answer to which is: it's fine for an end user to have no idea how their product works and not be able to fix it unaided; much less so for a dev or business supplying something they have no idea about.
@JustinMac84 Lol, come on now, businesses supply whatever to people all the time, and how hard is it to get help with whatever it is when all the people you talk to are just there for work experience or something? We've probably all seen it. No excuse, but you know it's true.