@sigmasternchen @leah I have had problems where the AI was of no (substantial) help to me. I cannot say if this was due to my lack of context engineering skill at that time. But I really like the AI as a sparring partner, given a predefined workflow and some "fixed" requirements.
But in no way do I see this as a replacement for my job; the AI is nothing without me doing the context engineering. As for the other problems, they surely await new solution strategies.
@sigmasternchen @leah I feel we are pretty much on the same page then, and I had misunderstood your initial post as well.
And yeah, vibe coding a small CLI, for example, as a useful tool to handle a certain kind of operational problem is really nice. I have a CLI where I do not care about the code at all, just the tests.
@leah I might be the only one down as "motivating" here.
I do sadly catch "I don't wanna write this code when the bot can maybe do it" energy (even when the bot manifestly cannot, and it proves to be a waste of time to ask it to try and a cogitohazard to wade through what it spits out).
But "The bot is more diligent about writing and running the tests than I am" (and "The bot will lie and submit code-shaped nonsense instead of code") is definitely "motivating" me to actually be test-driven.
@leah In general, figuring out that mechanical processes find your codebase confusing (where are the tests? How does someone even begin to build this? What are the essential pieces of domain knowledge that it is impossible to write useful code without accounting for?) will also reveal problems that make it inaccessible to new humans.
The less clever the automated system, the better it is at tripping over problems that shouldn't be there anyway and inspiring you to have better/any docs.
@leah I hate the tons of bad practices and tech debt in the PRs, and it seems nobody cares.
I cannot review all the PRs, and tons of these bad practices get merged.
Companies have started hiring people without experience as leads or seniors, and they bring their AI shit to work.
I feel that something I love so much, #SoftwareEngineering, is not respected, just because AI is the new god and companies want to save money at the expense of software quality.
@leah I hate when the justification for a piece of code is
"...because chatgpt told me"
or when, time after time, people cannot answer a simple question about their own fucking code.
@leah Something that keeps my mind calm is #openSource / #freeSoftware and my personal projects:
a little space where good code still matters.
IT in general has gone to hell, AI is just the latest factor, and it is a big one.
@leah It definitely demotivates me. It just feels dishonest when someone in a junior position sends me stuff to review that is genAI. It has so many errors, and especially the subtle ones are annoying to catch. And when I try to talk to the junior about it, it becomes clear that they didn't understand the key issue at all.
So what I did: I went to the kitchen, made a tea, and only then went back to my PC and gave them a call to help them rewrite the thing.
It'd have been far easier if they had directly asked me "Hey, I don't know what I should write, can you help me?" instead of sending me confident-sounding word vomit.
And this is where I can control it. My bosses use genAI to "get inspiration and feedback" and honestly that sounds like a nightmare.
@leah A colleague and I are currently trialing it, as a colleague recommended we should.
Overall, it is fun at first. We are currently on the fourth iteration on a problem (one that would have taken multiple iterations anyway). In my opinion, half of the iterations failed because the solution was too complex: the AI somewhat steers you into just implementing stuff that should be simplified first.
I would probably not use it for code generation for non-PoC stuff anymore.