Stack Overflow in freefall: 78 percent drop in number of questions

https://mander.xyz/post/45102281

It’s worth to mention that the StackOverflow survey referenced does not include many countries with also great developers, including Russia.

I never seriously used, now dropped, nor will use any LLM in my work, art, or research… I prefer people, communication, discoveries, effort, creativity, and human art…

Meanwhile… so freaking, incredibly many developers, artists are left without attribution… So many people atrophy their skils for learning, contribution, researching skills… So much human precious time is wasted… So much gets devalued…

This is so heartache… sorrowful…

I asked ChatGPT to correct your grammar, and here is what I got back. I hope it helps.

It is worth mentioning that the referenced Stack Overflow survey does not include many countries that also have excellent developers, including Russia.

I have never seriously used, do not use now, and will not use any LLM in my work, art, or research.
I prefer people, communication, discovery, effort, creativity, and human art.

Meanwhile, so incredibly many developers and artists are left without attribution, respect, or gratitude.
So many people atrophy their skills for learning, contribution, research, accumulation, and self-organization.
So much precious human time is wasted.
So much is devalued.

Time will show, and only a few who are actually accountable will probably recover.
This is heartbreaking… sorrowful…

Thank you, but I am sorry, I will not read the output of the LLM.
It’s still not very decipherable.
Couldn’t correct it yourself? What’s wrong, your brain don’t working so good?
TBH asking questions on SO (and most similar platforms) fucking sucks; no surprise that users jump at the first opportunity to get answers another way.
Removed. Someone else already said this before. Also, please ensure you stick to the style guides next time, and be less ambiguous. SO could mean a plethora of things.
Spoiler: the last time this question was answered was for software versions several years older, and the old solutions don’t work anymore.
Git gud, n00b!
In a video covering the toxicity of Stack Overflow, it was found that at least some of the admins are also extremely toxic on other sites, in that same exact manner.
SO PTSD is real.
You have been banned for off topic low effort conversation.

Shivers…

I remember when I signed up for SO and was immediately put off by the fact that you couldn’t post a question asking for help until you had helped others out AND earned enough positive points.

I still did it, but damn their moderation system is ass.

Ah yes, the famous: you need to add more details, maybe a picture, but you need to have above 100 reputation before you can add a picture or edit your question.
I was in the middle of making a reply like this but yours is better. Closed as duplicate.
Yea it sucks, but quality is important so I get it.
I do understand being rigorous about questions, and technical forums were often even worse, but SO’s methods led to the site becoming severely outdated. They really should have introduced a mechanism to mark old content as outdated. It should have been obvious like 10 years ago that solutions often stop working come the next major version of the programming language, framework, or operating system.

I will never forget the time I posted a question about why something wasn’t working as I expected, with a minimal example (≈ 10 lines of Python, no external libraries) and a description of the expected behaviour and observed behaviour.

The first three-ish replies I got were instant comments that this in fact does work like I would expect, and that the observed behaviour I described wasn’t what the code would produce. A day later, some highly-rated user made a friendly note that I had a typo that just happened to trigger this very unexpected error.

Basically, I was thrashed by the first replies, when the people replying hadn’t even run the code. It felt extremely good to be able to reply to them that they were asshats for saying that the code didn’t do what I said it did when they hadn’t even run it.

imho the experience is miserable. They went out of their way to strip all warmth from messages (they have a whole automated thing to get rid of greetings and anything considered superfluous), and there are many incentives to score points by answering, which frankly I find sad. It doesn’t look like a forum where people exchange; it looks like a permanent race to answer.

Stackexchange sites aren’t intended as forums, they’re supposed to be “places to find answers to questions”.

The further you get from Stack Overflow itself, the worse they get, though, because anything beyond “how can I fix this tech problem” doesn’t necessarily have an answer at all, much less a single best one.

I mean, people who don't want their questions or answers included in an LLM won't use SO. When people want to ask a question and not be shut down or berated, they'll probably end up on HN.
What’s HN?
Hacker News, probably.
As others have said, Hacker News.

According to a Stack Overflow survey from 2025, 84 percent of developers now use or plan to use AI tools, up from 76 percent a year earlier. This rapid adoption partly explains the decline in forum activity.

As someone who participated in the survey, I’d recommend everyone take anything regarding SO’s recent surveys with a truckful of salt. The recent surveys have been unbelievably biased, with tons of leading questions that force you to answer in specific ways. They’re basically completely worthless in terms of statistics.

Realistically though, asking an LLM what’s wrong with my code is a lot faster than scrolling through 50 posts and reading the ones that talk about something almost relevant.
It’s even faster to ask your own armpit what’s wrong with your code, but that alone doesn’t mean you’re getting a good answer from it.
If you get a good answer just 20% of the time, an LLM is a smart first choice. Your armpit can’t do that. And my experience is that it’s much better than 20%. Though it really depends a lot on the code base you’re working on.
Also depends on your level of expertise. If you have beginner questions, an LLM should give you the correct answer most of the time. If you’re an expert, your questions have no answers. Usually, it’s something like an obscure firmware bug edge case even the manufacturer isn’t aware of. Good luck troubleshooting that without writing your own drivers and libraries.

If you’re writing cutting edge shit, then LLM is probably at best a rubber duck for talking things through. Then there are tons of programmers where the job is to translate business requirements into bog standard code over and over and over.

Nothing about my job is novel except the contortions demanded by the customer — and whatever the current trendy JS framework is to try to beat it into a real language. But I am reasonably good at what I do, having done it for thirty years.

Boring standard coding is exactly where you can actually let the LLM write the code. Manual intervention and review is still required, but at least you can speed up the process.

Code made up of several parts with inconsistent styles of coding and design is going to FUCK YOU UP in the medium and long term, unless you never have to touch that code again.

It’s only faster if you’re doing small enough projects that an LLM can generate the whole thing in one go (so, almost certainly, not working as a professional at a level beyond junior) and it’s something you will never have to maintain (i.e. prototyping).

Using an LLM is like giving the work to a large group of junior developers, where each time you hand out work a random one picks up the task, and you can’t actually teach them: even when it works, what you get is riddled with bad practices and design errors that aren’t even consistently the same between tasks, so when you piece the software together it is, from the very start, the kind of spaghetti mess you see in a project with many years in production that has been maintained by lots of different people who didn’t even try to follow each other’s coding style.

That is a bit … overblown. If you establish an interface, to a degree you can just ignore how the AI does the implementation because it’s all private, replaceable code. You’re right that LLMs do best with limited scope, but you can constrain scope by only asking for implementation of a SOLID design. You can be picky about the details, but you can also say “look at this class and use a similar coding paradigm.”

It doesn’t have to be pure chaos, but you’re right that it does way better with one-off scripts than it does with enterprise-level code. Vibe coding is going to lead people to failure, but if you know what you’re doing, you can guide it to produce good code. It’s a tool. It increases efficiency a bit. But it also doesn’t replace developers or development skills.

Yeah the internet seems to think coding is an expert thing when 99.9% of coders do exactly what you described. I do it, you do it, everybody does it. Even the people claiming to do big boy coding, when you really look at the details, they’re mostly slapping bog standard code on business needs.
Yeah, but in that edge case SO wouldn’t have helped either, even before the current crash, unless you were lucky. I find LLMs useful to push me in the right direction when I’m stuck and the documentation isn’t helping, not necessarily to give me perfectly written code. It’s like pair programming with someone who isn’t a coder but somehow has read all the documentation and programming books. Sometimes the left-field suggestions it makes are quite helpful.
I’ve found some interesting and even good new functions by moaning my code woes to an LLM. Also, it has taken me on some pointless wild goose chases too, so you better watch out. Any suggestion has the potential to be anywhere from absolutely brilliant to a completely stupid waste of time.

Also depends on how you phrase the question to the LLM, and whether it has access to source files.

A web chat session can’t do a lot, but an interactive shell like Claude Code is amazing - if you know how to work it.

How do you know it’s a good answer? That requires prior knowledge that you might have. My juniors repeatedly demonstrate they’ve no ability to tell whether an LLM solution is a good one or not. It’s like copying from SO without reading the comments, which they quickly learn not to do because it doesn’t pass code review.

That’s exactly the question, right? LLMs aren’t a free skill up. They let you operate at your current level or maybe slightly above, but they let you iterate very quickly.

If you don’t know how to write good code then how can you know if the AI nailed it, if you need to tweak the prompt and try over, or if you just need to fix a couple of things by hand?

(Below are just skippable anecdotes.)

Couple of years ago, one of my junior devs submitted code to fix a security problem that frankly neither of us understood well. New team, new code base. The code was well structured and well written but there were some curious artifacts, like there was a specific value being hard-coded to a DTO and it didn’t make sense to me that doing that was in any way security related.

So I quizzed him on it, and he quizzed the AI (we were remote so…) and insisted that this was correct. And when I asked for an explanation of why, it was just Gemini explaining that its hallucination was correct.

In the meanwhile, I looked into the issue, figured out that not only was the value incorrectly hardcoded into a model, but the fix didn’t work either, and I figured out a proper fix.

This was, by the way, on a government contract which required a public trust clearance to access the code — which he’d pasted into an unauthorized LLM.

So I let him know the AI was wrong, gave some hints as to what a solution would be, and told him he’d broken the law and I wouldn’t say anything but not to do that again. And so far as I could tell, he didn’t, because after that he continued to submit nothing weirder than standard junior level code.

But he would’ve merged that. Frankly, the incuriosity about the code he’d been handed was concerning. You don’t just accept code from a junior or an LLM that you don’t thoroughly understand. You have to reason about it and figure out what makes it a good solution.

Shit, a couple of years before that, before any LLMs, I had a brilliant developer (smarter than me, at least) push a code change through while I was out on vacation. It was a three-way dependency loop, like A > B > C > A, that was challenging to reason about and frequently had to change just to keep running. Spring would sometimes fail to start because the requisite class couldn’t be constructed.
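The A > B > C > A loop above is a classic circular dependency: no valid construction order exists, which is exactly why a DI container like Spring can refuse to start. A minimal sketch (plain Java, no Spring; the class names and graph are illustrative, not the actual code from this story) of checking whether a dependency graph still has a valid build order, using a depth-first search:

```java
import java.util.*;

public class CycleCheck {
    // Returns true if every class can be constructed in some order,
    // i.e. the dependency graph has no cycle.
    static boolean hasBuildOrder(Map<String, List<String>> deps) {
        Map<String, Integer> state = new HashMap<>(); // 1 = on current path, 2 = done
        for (String node : deps.keySet())
            if (cyclic(node, deps, state)) return false;
        return true;
    }

    static boolean cyclic(String node, Map<String, List<String>> deps,
                          Map<String, Integer> state) {
        Integer s = state.get(node);
        if (s != null) return s == 1; // revisiting a node on the current path = cycle
        state.put(node, 1);
        for (String dep : deps.getOrDefault(node, List.of()))
            if (cyclic(dep, deps, state)) return true;
        state.put(node, 2);
        return false;
    }

    public static void main(String[] args) {
        // A depends on B, B on C, C back on A: no bean can be built first.
        Map<String, List<String>> loop = Map.of(
                "A", List.of("B"),
                "B", List.of("C"),
                "C", List.of("A"));
        System.out.println(hasBuildOrder(loop)); // prints false
    }
}
```

With constructor injection there is no way to break the tie at runtime, so the container fails fast; this kind of check is essentially what it does internally before giving up.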

He was the only one on the team who understood how the code worked, and he had to fix that shit every time tests broke or any time we had to interact with the delicate ballet of interdependencies. I would never have let that code go through, but once it was in and working it was difficult to roll back and break the thing that was working.

Two months later I replaced the code and refactored every damn dependency. It was probably a dozen classes not counting unit tests — but they were by far the worst because of how everything was structured and needed to be structured. He was miserable the entire time. Lesson learned.

This is the big issue. LLMs are useful to me (to some degree) because I can tell when its answer is probably on the right track, and when it’s bullshit. And still I’ve occasionally wasted time following it in the wrong direction. People with less experience or more trust in LLMs are much more likely to fall into that trap.

LLMs offer benefits and risks. You need to learn how to use them.

My armpits refuse to talk to me. I’ll take that as a sign that overflow errors are a feature, not bug.

LLMs won’t be helping, but SE/SO have been fully enshittifying themselves for years.

It was amazing in the early days.

It was a vast improvement over expert sex change, which was the king before SO.
expertSEXchange dot com hahahahaahahahahahahahaha oh, that brought back some dreadful memories! Thanks for the laugh and the chills
That and the url for Pen Is Mightier (penismightier.com) are my favorite examples of poor url choice in the early days of the internet.
How early though? I stopped using them about 12 years ago due to the toxic environment.
When it was just SO I think… if my memory serves. When it was small enough that only a (relative) few programmers were using it and generally behaving well.

I often feel like every question has been asked and answered already.

I still like SO

What are the odds the classic “expertsexchange” ends up outlasting Stack Exchange?

expertsexchange: I see they have rebranded their domain with a dash. When did that happen?

just one dash? hm must be a mistake
Yeah, because either you get a “how dumb are you?” or nothing at all.
Locking this comment. Duplicate of lemmy.world/comment/21433687
Sad Ganymede noises - Lemmy.World


The complete non-sequitur link really makes it. chef’s kiss
I post there every 6-12 months in the hope of receiving some help or intelligent feedback, but usually just have my question locked or removed. The platform is an utter joke and has been for years. AI was not entirely the reason for its downfall imo.
Not common I’m sure, but I once had an answer I posted completely rewritten for grammar, punctuation, and capitalization. I felt so valued. /s
The last time I asked a question, I followed the formatting of a recent popular question/post. Someone did not like that and decided to implement their own formatting, then proceeded to dramatically change my posts and updates. Also, people kept giving me solutions to problems I never included in my question. The whole thing was ridiculous.
As a mod, this is all I ever did on the platform. Thanks for the appreciation!