I have a bit of a confession to make:

I use #AI when I write #Lisp for #Emacs.

Boo, hiss, yes, I know. But hear me out - I don't actually let the AI write any code for me. I use AI (Gemini, specifically) to TEACH me Lisp. Then I write the code myself. When I make a mistake and my code doesn't work, I debug it myself. But if I get stuck, I ask the AI. So the AI is basically my customized teacher. Sometimes it's wrong and makes mistakes, but human teachers make mistakes too.

To be honest, this has been a lot of fun. I have a #Lisp project I'm working on, and sometimes I ask the AI to give me a challenge - a feature to add to my project - pitched at a suitable difficulty based on what the AI knows of my Lisp abilities. The AI then gives me the task and explains which functions I should look into and learn in order to accomplish it. Sometimes I end up doing that task, and sometimes I end up doing something else.

I think, at least for me, this is the perfect use for #AI in coding. It avoids the problems of #AISlop, because I'm the one writing the actual code, so I know what every single line does. Nothing is #vibecode. But I'm still getting benefits from #AI.

However, there are obviously still the environmental effects and ethical concerns about #GenAI to take into account, and I can't exactly say that my conscience is clear in that regard...

But anyway - while others use #AI to write code for them, I (and many others, of course) use AI to teach me to write code in a new language.

Since I'm an experienced software developer, I sometimes catch the AI making mistakes and contradicting itself. Then I point that out and ask it to explain itself. This is a benefit of experience that a more junior dev might not have, so I'm not saying this is something everyone should be doing from now on. But for me, it seems to work.

@Enfors does it help to ask it to explain itself? Surely the explanation is equally likely to be false.

I tried doing similar some time ago and found it too frustrating to be worthwhile.

@benjamineskola It does help. Sometimes it apologizes for the lack of clarity and demonstrates that it was indeed correct and I misunderstood. When I check, I find that this is indeed so. This has happened many times.

Other times, it says things like "Oh, I'm sorry, you're absolutely correct. I must have hallucinated that part of the answer", etc.