I fucking hate ChatGPT and ai and all of that shit

https://lemmy.world/post/43978975


I’ve been working with so many students who turn to it as a first resort for everything. The second a problem stumps them, it’s AI. The first source for research is AI. It’s not even about the tech, there’s just something about not wanting to learn that deeply upsets me. It’s not really something I can understand. There is no reason to avoid getting better at writing.

I’m in software development and land on both sides of this argument.

Having to review or maintain AI slop is infuriating.

That said, it has replaced traditional web searching for me. A good assistant setup can run multiple web searches for me, distill the useful info while cutting through the blog spam and ads, run follow-up searches for additional info if needed, and summarize the results in seconds, with references if I want to validate its output.
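In Python, that loop looks roughly like this. To be clear, `search_web`, `ask_llm`, and the canned data are hypothetical stubs of my own invention standing in for a real search backend and model; this only shows the shape of the flow, not any particular tool's API:

```python
# Rough shape of the assistant loop: search, distill, follow up, summarize
# with references. search_web() and ask_llm() are hypothetical stubs with
# canned data; a real setup would call an actual search API and model.

def search_web(query):
    # Stand-in for a real search API; returns (url, text) hits.
    corpus = {
        "python packaging": [("https://example.com/a", "Modern packaging uses pyproject.toml.")],
        "pyproject.toml basics": [("https://example.com/b", "Declare name, version, dependencies.")],
    }
    return corpus.get(query, [])

FOLLOWUPS = {"python packaging": "pyproject.toml basics"}

def ask_llm(prompt):
    # Pretend model: suggests one canned follow-up query, else "summarizes".
    if prompt.startswith("follow-up"):
        for key, nxt in FOLLOWUPS.items():
            if key in prompt:
                return nxt
        return ""
    return "Use pyproject.toml; declare name, version, dependencies."

def assistant_search(query, max_rounds=3):
    """Run searches, chase follow-ups, return (summary, reference URLs)."""
    sources = []
    for _ in range(max_rounds):
        sources.extend(search_web(query))
        query = ask_llm(f"follow-up for: {query}")
        if not query:
            break
    summary = ask_llm("summarize: " + " ".join(text for _, text in sources))
    return summary, [url for url, _ in sources]

summary, refs = assistant_search("python packaging")
```

The point of the structure is the reference list coming back alongside the summary, so the output stays checkable instead of being a black-box answer.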

There was a post a couple days ago about it solving a hard math problem with guidance from a mathematician. Sparked a discussion about AI being a powerful tool in the right hands.

You trust it to “distill the useful info”? How do you know it’s not throwing out important pieces just to lead you down the garden path, or, maybe because it “thinks” you wouldn’t be interested because of all it “knows” about you? If you need to check everything it does, why not just do it yourself?

It’s really not that different from a traditional web search under the hood. It’s basically a giant index, and my input navigates the results based on probability of relevance. It’s not “thinking” about me or deciding what I should see. When I say a good assistant setup, I mean I don’t use Gemini or ChatGPT or any of the prepackaged stuff that tries to build a profile on you. I run my own setup, pick my own models, and control what context they get. If you check my post history, I’m heavily privacy-conscious; I’m not handing that over to Google or OpenAI.

The summary helps me evaluate whether my input was good and the results are actually relevant to what I’m after, without wading through 20 minutes of SEO garbage to get there. For me it’s like getting the quality results you used to get before search got enshittified. It actually surfaces stuff that doesn’t even show up on the front page of a traditional search anymore.

Yeah, this is the important bit. I’m switching roles to Principal Engineer: AI at my company. It cannot be a crutch. We’re building multi-agent frameworks that second-guess and push back. A real thing here is that OpenAI models are trained to “make the user happy” and don’t push back.

Anthropic models, while not perfect either, can be structured in the right way to become augmentations and learning tools: primed to admit what they don’t know, and primed to push back if it seems like the person doesn’t understand what they’re really asking. The problems are generally the classic PEBKAC and blindly trusting AI, and that’s a human training thing. It’s been in the software world for years: people blindly pasting StackOverflow code into their repos because they don’t grasp the problem and want the quick fix.
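The second-guessing setup described above can be sketched as two roles: a drafter that proposes an answer and a critic that challenges it when confidence is low. `draft_answer`, `critique`, the canned answers, and the confidence threshold are all hypothetical stand-ins I made up for illustration, not any vendor’s API:

```python
# Sketch of the "push back" pattern: a drafter proposes an answer with a
# confidence score, and a critic challenges low-confidence answers instead
# of optimizing for making the user happy. Both model calls are stubs.

def draft_answer(question):
    # Stand-in for the drafting model; returns (answer, confidence).
    canned = {
        "what does PEBKAC mean": ("Problem Exists Between Keyboard And Chair", 0.95),
        "will a regex parse HTML": ("Sure, just use a regex", 0.30),
    }
    return canned.get(question, ("I don't know", 0.0))

def critique(question, answer, confidence, threshold=0.7):
    # Stand-in for the critic model: admits doubt and pushes back
    # rather than rubber-stamping whatever the drafter produced.
    if confidence < threshold:
        return (f"Pushback: confidence is only {confidence:.2f}. "
                f"Are you sure you understand what you're asking with '{question}'?")
    return "OK: " + answer

question = "will a regex parse HTML"
answer, confidence = draft_answer(question)
verdict = critique(question, answer, confidence)
```

The design choice is that the critic sits between the drafter and the user, so agreement has to be earned rather than being the default behavior.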

Unfortunately, as we’ve seen with OpenClaw, it’s a lot of people with an aggressive end goal and no understanding of the tools they’re working with, or of the importance of the human in the loop. Like I said, it’s not perfect, but the problems are also just humans getting positive feedback from models designed to provide it, and now those models are going to be used for autonomous weapons and surveillance.