I have no idea what the source of this image is, but yeah, I Laughed Out Loud.

@codinghorror I've been in conflict.

At first I was mad. Then as I had to learn the stuff I worked on an LLM combo tool that just indexed all the documentation in the operating system. And all the code and associated issues.

There's no way I can compete with that.

I've pretty much abandoned the idea that "Linux forums" are a good idea. I was debating making a talk about this for SCaLE but I would get burned at the stake LOL.

I feel bad about it because I kind of abandoned our Discourse forum, but when it comes to pure tech support, there's no way I'm going back to full humans again.

@jorge yeah but what happens when you invent a new feature? How do you feed the LLMs the relevant, rather complex set of discussions and context that only humans can produce?

Or, say, a new programming language, a new framework, really anything new. For old and established stuff, sure.

@jorge also software tends to change. How will the LLMs know that? Telepathy?
@jorge at the root of all this is Actual Intelligence: humans. The most valuable resource on the planet.

@codinghorror I can't speak for other fields (like programming languages), but my context is "read man pages back to the user in a certain way".

The human part would be mostly me: I train it by updating the official documentation, and the index updates in a few minutes. After about 6 months of that the answers are pretty good. I don't think using a general one like ChatGPT is a good idea.

But the real benefit is it doesn't have to be perfect, it just needs to be better than Reddit and forum answers, and it has been a net improvement.

We for sure lost something though.
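The index-and-retrieve loop described above can be sketched minimally. Real systems use embedding search; this stdlib-only toy (every name, page, and question here is invented for illustration) just shows the shape: index the pages, re-index whenever the docs update, and answer from the best-matching page.

```python
# Toy sketch of an "index the docs, answer from them" loop.
# Bag-of-words overlap stands in for real embedding similarity.
from collections import Counter

def index_docs(pages):
    """Build a per-page word-count index; re-run whenever docs update."""
    return {title: Counter(text.lower().split()) for title, text in pages.items()}

def retrieve(index, question, k=1):
    """Return the k pages whose vocabulary overlaps the question most."""
    q = Counter(question.lower().split())
    scored = sorted(
        index.items(),
        key=lambda item: sum((q & item[1]).values()),  # shared word counts
        reverse=True,
    )
    return [title for title, _ in scored[:k]]

pages = {
    "man-apt": "apt install removes or installs packages on the system",
    "man-zfs": "zfs manages pools datasets and snapshots",
}
index = index_docs(pages)
print(retrieve(index, "how do I install a package"))  # ['man-apt']
```

Updating the docs and rebuilding the index is the whole training loop here, which is roughly why a curated, domain-specific index can beat a general-purpose model.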

@jorge the forums and reddit provide a LOT of additional nuance and subtlety (and detail!) that is lost in the Highlander "there can be only one" man page. Humans aren't going to stop posting anyway, so it's more of a "yes, and".

@codinghorror @jorge I installed OpenStack on Raspberry Pi over the weekend with the help of ChatGPT without having any clue about what I was doing, absolutely none.

It knew the commands I needed and the components of the OS and the stack, but it had no ability to actually troubleshoot, or to realize when to turn back to a previous stage so another alternative could be tried.

I eventually made it work and had running VMs without opening a second browser tab. I'm sure that without GPT it still wouldn't be working, and I'd have visited 200+ pages looking for an answer.

I didn't need reddit or other forums because I have 15 years of experience troubleshooting software systems; I just didn't know what I was troubleshooting this time. But without that experience I think I would still be talking to GPT, or would have given up completely.

@sassdawe @codinghorror @jorge the thing is, is that deployment safe? I.e., how can you know how many foot-guns you missed?

@draeath @codinghorror @jorge well, I like to entertain myself with the idea that I'm not a total moron... That said, I forgot to disable DHCP and rebooted the device, virtually locking myself out of it 🙃

But as a PoC and learning exercise it doesn't have to be safe-safe, and locking myself out is going to help me never forget DHCP 😅

@draeath @sassdawe @codinghorror @jorge

> is that deployment safe? I.e., how can you know how many foot-guns you missed?

That is your responsibility.

GPT can help you through the early parts of the learning curve, but you must confirm where you are. It is **helping** you, not replacing you.

Reddit, and arrogant know-it-alls on the Internet, are much worse in my opinion. And they will, at the drop of a hat, start to mock you.

@draeath @sassdawe @codinghorror @jorge usually you have to step on the foot-guns a few times anyway. During setup is a very good time to do that. The more the better. Because once it's in production, taking it out of production to fix the issues can be hard.

I've been using Kubernetes for about a year at home now. My build last year included:
- Proxmox
- 6x VMs running RKE2
- 3x system nodes
- 3x agents, each with its own disk
- Longhorn

So I was all in on redundancy during setup, and opted for raidz1 in ZFS on Proxmox for all the data drives. Then I added virtual disks to the agents. This was a very bad idea, and I ran it that way for 6 months, because once it got running, we were stuck.

Similarly, I've just installed a new switch, learned about VLANs, learned about LACP LAGGs, and have been re-IPing the running cluster in place. It's been a challenge.

Tbf to the discussion though, I use a mixture of LLMs and forums when learning about these things. And the LLMs are definitely hit or miss once you get into the weeds. But people's communication style, particularly in the tech area, makes me hope for an asteroid strike.

@codinghorror @jorge hehe, but Stack Overflow was strictly against discussions; that's why we have a bunch of Discords
@sergii @codinghorror @jorge This toot has been *moved to chat*

@codinghorror @rpigab @sergii What we need is a slider widget on SO that changes the "knowledge engine" and on one side is chatgpt and the other end is the most pedantic linux-user asshole possible.

Then you can pick which generation of internet you hate the least (or most!). Make it maddening too, like at least 25 increments in the slider for maximum human and AI bikeshedding.

@codinghorror @jorge

Hang on Jeff it looks like you’re thinking about craaaaawwwwzy long term, like, maybe even longer term than it takes for the VCs to take their money and run?

Meanwhile i am selling people these new house heaters that work by burning up the priceless library of Alexandria, and hoping to make bonus next quarter. Can I interest you in our 5 megajoule unit, just 2% down?

@jorge @codinghorror It's just an index. Totally dependent on content someone already wrote. And if there isn't an answer in the source material, it'll just make something up. No reasoning, no understanding, just straight up bullshit. I'm not sure that's usable for someone who didn't already know most of the answer or sustainable for very long...
@codinghorror We should've just been nicer, I guess.
@lizardbill the funny thing is, the LLMs actually solve a HUGE problem for SO -- the endless repetition of the same questions over and over using slightly different words. I had NO idea how we would fix that, but now .. I kinda do?
@lizardbill but ya know, everything is black and white, ones and zeros, all good or all bad, so LLMs are evil and must be destroyed or humanity will be destroyed. Obviously.
Scott Francis (@[email protected])

If your position on something is not nuanced, I have to wonder how well you really understand it (Even the above statement! Which has exceptions!)

Infosec Exchange
@codinghorror @lizardbill lol. Ask your mods if LLMs have been a net benefit.
@aburka @lizardbill well yeah, if you use a screwdriver to pound in a nail, it's not likely to go well. The general idea is to use a tool for what it is good at, and designed for.
@codinghorror @aburka @lizardbill
We're told the tool is good at everything
@ciredutempsEsme @aburka @lizardbill Always. Be. Closing... and never trust salespeople
@ciredutempsEsme @codinghorror @lizardbill and yet they weren't designed for anything

@aburka @codinghorror @lizardbill

> Ask your mods if LLMs have been a net benefit.

Ask me. Yes.

It is a tool that must be wielded by a hand. I do not think LLMs are going to be able to make reliable agents.

Unreliable agents are useful for some things, but reliable agents would be much more useful. Order of magnitude.

I want L5 automatic driving, I will not get it with current technology.

....liars

@worik @aburka @lizardbill I don't really like LLMs for code at all because such extreme precision is required in the language. That's not true of many other domains.

@codinghorror @aburka @lizardbill

> extreme precision

That is what makes it possible, and so useful, for code.

It has to be widely used code, which is why I have found it useful for learning Shell, and hopeless for designing Pure Data patches. (https://puredata.info/)


@worik @aburka @lizardbill I disagree, I think it is one of the riskiest use cases, but it's possible. It would need to be vetted HEAVILY by a human.
@codinghorror You're right... I've come to realise they're really good for working with code if you minimize how much code they generate. I paste in specs and have Claude Code look through the codebase and plan things for me, and I do almost all the implementation myself. Saves me a lot of digging and tracing and remembering, and limits the hallucination, sycophancy and slop.
@JesseSkinner you definitely SHOULD blog this, Jesse
@codinghorror thanks, I will!
@JesseSkinner lmk and I'll re-post here too
Jesse Skinner (@[email protected])

Do you love reviewing code other people wrote? Do you get a tickle of pure joy to find and criticize the mistakes and problems in sloppy code? Me neither. You know what I do love? I love pouring my creativity and insight and empathy into a project. I love designing architectures and solutions that actually make things better for users. I love getting in the flow state, cranking away at a problem, building brick upon brick until the creation comes to life. https://www.codingwithjesse.com/blog/coding-with-llms-can-still-be-fun/

Toot Café
Coding with LLMs can still be fun - Jesse Skinner

Do you love reviewing AI-generated code? Do you get a tickle of pure joy to find and criticize the mistakes and problems in hallucinatory slop? Me neither.

@JesseSkinner @codinghorror Damn, I think you’ve convinced me to give LLM assisted coding a try again.
@BartyDeCanter
Agree.
Properly sandboxed, I can get Claude (opus, not regular) to work through a lot of the StackExchange questions that grad students never even typed in. Questions regarding the packaging and reproducibility elements of scientific software.
Also because the grad students published their expected output I can leave Claude alone with it to reproduce their setup.
This was previously burdensome, and now it catches me up to where I can do my actual work.
@JesseSkinner @codinghorror

@JesseSkinner @codinghorror I wonder if I’m just weird - but I do enjoy reviewing code. When I was a Development Manager, part of my evening routine was reviewing all the commits from the day and emailing out feedback and questions.

I wonder if that’s changed my perception of LLM coding tools, as reviewing code and providing written feedback is something I’m quite comfortable with. Plus, after leading dev teams, it’s become quite natural to offer detailed and precise instructions for changes to codebases I know well.

The perception and experience people have vary widely, and I’ve been fascinated with *why* for a while now. Some people see them as useless, others see real productivity improvements, and it’s often unclear what’s different.

@adam_caudill @JesseSkinner @codinghorror Fwiw I enjoy reviewing code as well, but the big difference is that when I review another human's work, it's an opportunity for mentorship/up-leveling as well as building shared understanding. I get none of that reviewing LLM output. AGENTS.md aside, every new session is a blank slate.

@wlach @JesseSkinner @codinghorror 100%. And there's a deeply frustrating sense of futility from that: taking the time to provide detailed guidance feels useless, as you end up having to do the same thing again later.

Larger context windows have made that slightly easier to manage over time, but it's not scalable - the VRAM required for scaling up to the level needed to avoid that would be truly absurd.

@adam_caudill @wlach @codinghorror great points from you both, and that spurred me to rewrite the first two lines of the post, because I wasn't trying to dump on code review in general.
@JesseSkinner To be clear, I'm entirely willing to accept that I'm just weird! I do appreciate the edit though, I think that's a much clearer point.
@wlach @adam_caudill @JesseSkinner @codinghorror "...every new session is a blank slate." I'm currently chewing on the idea: That's a good thing. You ever gone into code thinking "I wrote this, I know what's going on, I know how to make this change" but then later found you had missed some corner case? Like the other day I forgot to update the deb postinst. Coding your "mentoring" into the AGENTS.md but starting with clear context has been working very well for me lately.

@JesseSkinner After giving it a few days on a new project, I have to say that works pretty well. I added a few things that keep me honest and help me learn new languages.

- Do not offer to write code unless the user specifically requests it. You are a teacher and reviewer, not a developer
- Include checks for idiomatic use of language features when reviewing
- The user has a strong background in C, C++, and Python. Make analogies to those languages when reviewing code in other languages

@BartyDeCanter sounds great. I'm still fine tuning it as well.
@codinghorror In my experience SO is more like: "This is not a valid question and therefore it is closed."
Optimizing For Pearls, Not Sand - Stack Overflow

@codinghorror @bitbonk It does not work at all for less mainstream languages and knowledge. SO has been useless for Smalltalk and OODBMS.

@codinghorror I did have some pearl moments back then when I was active on SO. πŸ˜…

https://stackoverflow.com/questions/263400/what-is-the-best-algorithm-for-overriding-gethashcode

What is the best algorithm for overriding GetHashCode?

In .NET, the GetHashCode method is used in a lot of places throughout the .NET base class libraries. Implementing it properly is especially important to find items quickly in a collection or when

Stack Overflow
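That question's top answers boil down to one pattern: seed with a non-zero prime, then fold each significant field in with another prime multiplier. A minimal sketch of the same pattern transplanted to Python's `__hash__`/`__eq__` contract (the `Point` class and its fields are illustrative, not from the thread):

```python
# Classic seed-and-multiply hash combining: start from a non-zero prime,
# then fold in each field that participates in equality.
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __eq__(self, other):
        return isinstance(other, Point) and (self.x, self.y) == (other.x, other.y)

    def __hash__(self):
        h = 17                     # seed prime
        h = h * 31 + hash(self.x)  # fold in each field used by __eq__
        h = h * 31 + hash(self.y)
        return h

# Equal objects must hash equally, or dict/set lookups silently break.
assert Point(1, 2) == Point(1, 2)
assert hash(Point(1, 2)) == hash(Point(1, 2))
```

The key contract is the same in .NET and Python: objects that compare equal must produce the same hash, which is why `__hash__` only folds in the fields `__eq__` looks at.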
@bitbonk @codinghorror I've been finding that I get scolded for not providing enough information, or too much, or the wrong sort etc. The LLM is usually wrong but will at least provide hints I can investigate. This beats "we don't like the way you've written your question. Downvote. Closed"
@codinghorror tbh it's grown into a much more inviting place than it once was, but I remember the old days so this is still hilarious lmao
@codinghorror Maybe "far less uninviting" would be more accurate, lol.
@codinghorror
Baffling considering that they trained the shit out of their models on Stack Overflow.
@codinghorror One of these tells nothing but the truth. The other nothing but lies.
@codinghorror I'm imagining two LLMs. One was trained on the questions and the accepted answers. The other trained on the questions and the other comments.