I fucking hate ChatGPT and ai and all of that shit

https://lemmy.world/post/43978975


I’ve been working with so many students who turn to it as a first resort for everything. The second a problem stumps them, it’s AI. The first source for research is AI. It’s not even about the tech, there’s just something about not wanting to learn that deeply upsets me. It’s not really something I can understand. There is no reason to avoid getting better at writing.

I’m in software development and land on both sides of this argument.

Having to review or maintain AI slop is infuriating.

That said, it has replaced traditional web searching for me. A good assistant setup can run multiple web searches for me, distill the useful info while cutting through the blog spam and ads, run follow-up searches for additional info if needed, and summarize the results in seconds, with references if I want to validate its output.
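A rough sketch of what that kind of loop looks like, with stand-in functions in place of a real search API and model (all the names and data here are made up for illustration, not any particular tool’s API):

```python
# Toy sketch of a search -> distill -> summarize loop.
# fake_search stands in for a real web-search API call.

def fake_search(query):
    # A real assistant would hit a search API here, possibly several times.
    return [
        {"url": "https://example.com/a", "text": f"Useful notes on {query}."},
        {"url": "https://example.com/b", "text": "Unrelated blog spam and ads."},
    ]

def distill(results, query):
    # Drop results that never mention the query terms, i.e. the spam.
    return [r for r in results if query.lower() in r["text"].lower()]

def answer(query, max_rounds=2):
    kept = []
    for _ in range(max_rounds):
        kept.extend(distill(fake_search(query), query))
        if kept:   # a real agent would decide here whether a
            break  # follow-up search is actually needed
    summary = " ".join(r["text"] for r in kept)
    references = [r["url"] for r in kept]  # kept so the output can be checked
    return summary, references
```

Keeping `references` around is the point: you can spot-check the summary against its sources instead of trusting it blindly.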

There was a post a couple days ago about it solving a hard math problem with guidance from a mathematician. Sparked a discussion about AI being a powerful tool in the right hands.

You trust it to “distill the useful info”? How do you know it’s not throwing out important pieces just to lead you down the garden path, or, maybe because it “thinks” you wouldn’t be interested because of all it “knows” about you? If you need to check everything it does, why not just do it yourself?

I don’t use it much as a dev, but sometimes a response to a question, while not correct, will guide me to a solution. The trick is that you have to have the knowledge to know what’s right or wrong. I will also use it to troubleshoot code when I have a red squiggly because something is wrong. It can find missing brackets, a semicolon, or a function I’ve called incorrectly.
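The missing-bracket case is the easy end of that; in Python, for instance, the compiler already pinpoints a stray delimiter on its own (a toy snippet, purely for illustration):

```python
# Compile a deliberately broken snippet and report where it falls over.
broken = "values = [1, 2, 3\nprint(values)"

try:
    compile(broken, "<snippet>", "exec")
except SyntaxError as err:
    # Recent Pythons report the unclosed '[' together with its line number.
    print(f"line {err.lineno}: {err.msg}")
```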

If AI just up and disappeared tomorrow, I’d be so happy, but I can’t discount some of its benefits. Things I’d find on Stack Overflow before can be done directly within my IDE with context from my project. I never accept an AI response, but instead type everything out so that I know that it’s doing what I want and so it doesn’t modify any of my code.

Linters have been finding missing brackets and extra semis since forever.

Truth. This does a bit more than a typical linter; that was just a simple example I riffed off. Sometimes it helps me find logic errors as well. I’ll highlight a block of code, ask why it’s doing or not doing the thing I expect, and go from there. I’ve probably only used it a dozen times for basic troubleshooting over the past 6 months when I get stumped on something.
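For a feel of the logic-error case, here’s a made-up example of the kind of thing worth highlighting and asking about: the code runs without complaint, it just doesn’t do what was intended.

```python
# Intent: sum the items at even indexes (0, 2, 4, ...).
def sum_even_indexes_buggy(items):
    return sum(items[1::2])  # bug: slice starts at 1, so it sums odd indexes

def sum_even_indexes_fixed(items):
    return sum(items[0::2])  # start at index 0 as intended

data = [10, 1, 20, 2, 30, 3]
print(sum_even_indexes_buggy(data))  # 6, quietly wrong
print(sum_even_indexes_fixed(data))  # 60, what was meant
```

No linter flags the first version; you have to notice the result is wrong and ask why.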

Yeah, so I’ve not used Claude, but I have used a number of models from Hugging Face.

I haven’t used them extensively.

In my experience, they provide a great starting point for things I haven’t interacted with much. So I might spend 10,000 hours with JS, but never have touched a Firefox extension, or maybe a Docker container, or a Nix script. With JS, an LLM is not much more productive than just coding by myself with non-AI tools. With the other things, it can give you a really good leg up that saves a heap of effort in getting started.

What I have noticed though is that it’s not very good at fine-tuning things. Like, your first prompt might do 80% of the job of creating a Dockerfile for you. Refining your prompt might get you another 5% of the way, but the last 15% involves figuring out what it’s doing and what the best way to do it might be.
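To make that split concrete: a first prompt will typically hand you a plausible draft along these lines (a hypothetical example, not real model output), and the last 15% is everything it glosses over: pinning versions, a non-root user, multi-stage builds, cache-friendly layer ordering.

```dockerfile
# Plausible first-draft Dockerfile for a Node app: the "80%".
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```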

With these sorts of tasks, models really seem to suffer from not knowing what packages or conventions have been deprecated. This is really obvious with an immature ecosystem like Nix.

IMO, LLMs are not completely without virtue, but knowing when and when not to use them is challenging.

With these sorts of tasks, models really seem to suffer from not knowing what packages or conventions have been deprecated. This is really obvious with an immature ecosystem like Nix.

This is where custom setups will start to shine.

github.com/upstash/context7 - Pull version-specific package documentation.

github.com/utensils/mcp-nixos - Similar to the above, but for Nix (including version-specific queries) and with more sources.

github.com/…/sequentialthinking - Break a problem down into multiple steps instead of trying to solve it all at once. Helps isolate the important information per step so “the bigger picture” of the entire prompt doesn’t pollute the results. Sort of simulates reasoning: instead of finding the best match for all keywords, it breaks the queries down to find the best matches per step and then assembles the final response.

github.com/CaviraOSS/OpenMemory - Long conversations tend to suffer as the working memory (context) fills up, so it gets compressed and details are lost. With this (and many other similar tools) you can have it remember and recall things, with or without a human in the loop to validate what’s stored. Great for complex planning or recall of details. I essentially have a loop set up with global instructions to periodically emit reinforced codified instructions to a file (e.g., AGENTS.md) with human review. Combined with sequential thinking, it will identify contradictions and prompt me to resolve any ambiguity.
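For anyone curious what wiring these up looks like: MCP clients generally take a JSON config listing the servers to launch. Roughly this shape (exact keys, commands, and package names vary by client and server, so check each repo’s README rather than copying this):

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    },
    "nixos": {
      "command": "uvx",
      "args": ["mcp-nixos"]
    }
  }
}
```

The sequential-thinking and memory servers slot in as additional entries the same way.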

The quality of the output is like going from 80% to damn near 100% as your knowledge base grows from external memory and codified instructions in files. I’m still lazy sometimes and will use something like Kagi Assistant for a quick question or web search, but they have a pretty good baseline setup with sequential thinking in their online tooling.