I work with IT and the AI stuff is...
motivating me to do my work!
5.9%
demotivating me to do my work!
62%
not affecting my motivation!
32.1%
Poll ended.
@leah Demotivating. It's not that I think AI will replace me anytime soon. (In fact it won't. I've played with it enough to know that current-gen AI is at best Legacy-Code-as-a-Service. It's not usable for anything productive. At least not in the long term.)
But the issue is that a lot of people - even within the software industry - think it will be able to replace us. It's demotivating to realize that a lot of people don't understand the worth of an experienced software engineer...
@sigmasternchen @leah I feel different. For me it is a tool to craft software. Being experienced helps me to shape the output. As every tool, it has a learning curve. But in many cases, it helps rather than it hinders. I would compare it to the usage of an IDE (as opposed to a text editor).
@morl99 @leah I used to think that too.
But the output quality of even something like Claude Opus 4.5 is so incredibly bad that I spend more time explaining to the AI that no, "assert status_code in [200, 500]" is, in fact, not a sensible test. That comments that parrot the code do more harm than good. Or that changing the scope of the feature midway through implementing it might be bad. I'm sure some of this can be fixed with precise prompting, but at some point it's just faster to write it by hand - that's what programming languages are for, after all: they are way more precise than natural language for telling the computer what to do.
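(To spell out why that assertion is vacuous - a minimal sketch with hypothetical function names, not any real endpoint: a test that accepts both 200 and 500 passes whether the request succeeds or the server crashes, so it can never catch a regression.)

```python
# Sketch: a vacuous assertion vs. a meaningful one.
# "vacuous_test" and "meaningful_test" are made-up names for illustration.

def vacuous_test(status_code: int) -> bool:
    # Passes for both success (200) and server error (500),
    # so it tells you nothing about correctness.
    return status_code in [200, 500]

def meaningful_test(status_code: int, body: dict) -> bool:
    # Pins down the expected behaviour: success AND the right payload.
    return status_code == 200 and body.get("id") == 42

# The vacuous check "passes" even when the server errors out:
assert vacuous_test(200) is True
assert vacuous_test(500) is True   # a crash also "passes"

# The meaningful check distinguishes the two outcomes:
assert meaningful_test(200, {"id": 42}) is True
assert meaningful_test(500, {}) is False
```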
And granted, maybe that's just my experience because I tend to work on problems with non-trivial solutions, where there is just not as much related training data for the AI. But it still makes them unusable for me and for the work I do. And it feels validating to me that other programmers that I respect also warn about the consequences for code quality - and by extension maintainability - and security.

And this is just regarding the output, right. There's loads of other problems, like deskilling, the unrealistic pricing model, dependency on US companies, the ecological problems, and of course also the moral issues.

Bottom line (for me): Current-gen AIs cannot do my job as well as I can. They slow me down and frustrate me when they are "trying to help me". And I think (for various reasons) that we should probably not get dependent on them.

@sigmasternchen @leah I have had problems where the AI was of no (substantial) help to me. I cannot say if this was due to my lack of context-engineering skill at the time. But I really like the AI as a sparring partner, given a predefined workflow and some "fixed" requirements.

But in no way do I see this as a replacement for my job; the AI is nothing without me doing the context engineering. As for the other problems, they will surely require new solution strategies.

@morl99 @leah I'm sorry, I might have misunderstood you earlier.
I'm not saying it can't be helpful. For example: I've used it to analyse an existing (not-well-structured) code base and search for locations that touch certain topics - that's definitely useful. Even vibe-coding (in the sense that the code is not looked at by a human) can have applications for prototyping or requirements engineering.
Just for implementing features or fixing bugs in production code, I personally think it slows me down. And I feel like focusing extensively on AI could potentially prove a bad move for companies. And I'm saying this working for a client that focuses extensively on AI.
😅

@sigmasternchen @leah I feel we are pretty much on the same page then, and I misunderstood your initial post as well.

And yeah, vibe coding a small CLI, for example, as a useful tool to handle a certain kind of operational problem is really nice. I have a CLI where I do not care about the code at all, just the tests.