A “QuitGPT” campaign is urging people to cancel their ChatGPT subscriptions
Ha! There’s a hilarious tech conspiracy: the reason Microsoft renamed the Office suite to Copilot is so they could claim “look at how many new users Copilot has!”
They changed the terms, pray they don’t change them further.
I am in the same situation, and still, when I look up documentation or plan changes to a configuration, I find it worth it to open Mistral LeChat on my phone and ask an LLM chatbot that respects my time.
Accuracy is mostly the same, but for the quick daily tasks it’s worth the effort.
I challenged a friend and his 22€ open ai subscription.
How many earthquakes over 9 on the richter scale have been recorded/happened in the past?
The answer was correct, but it took 3.5 minutes to “think”. The free ChatGPT version I’m using answers on the spot, but is wrong pretty often.
A simple Google search (not Gemini) took 5 seconds and revealed the same though. Fuck AI
To be fair, and I’m not a fan of LLMs either, but if someone uses one as a search tool, that’s even worse than using it for something it might actually be helpful and useful for.
Slap them and make them cancel it if they replace search engines with it. But if they actually use it for something more substantial and suitable, then perhaps it may be justified, or at least understood.
And also fuck Google! Switch to another search engine that doesn’t fuck with you or the planet.
For example: Ecosia. www.ecosia.org
I’m personally using a self-hosted SearXNG. Google was just to prove a point. The solution was a simple count on Wikipedia away.
The thing is, 3.5 minutes of searching is way too much energy, and the results aren’t even trustworthy.
AI is bullshit, but people don’t understand that just because it looks like it’s thinking doesn’t mean it is. That’s a human bias. It’s still just generating statistical answers.
We should boycott AI content as much as we can. Maybe this bubble will burst… hopefully.
I don’t know what thinking profile your friend was using but asking ChatGPT that with the mixed tasks profile showed an almost immediate result with absolutely no thinking required.
LLMs are a tool, and like with any tool there is a learning curve. In my opinion the majority of “AI” users are unable to use the tool properly, and then get mad at the tool. Or, like you, they want to disparage the use of an LLM, so they bait it with tasks they know it will fail or hallucinate on. To me that’s like blaming the table saw because it cut off your finger. Do the majority of people need a paid account? No.
Are there people working in the tech sector who use an LLM every day, who have corporate accounts and paid accounts at home for their own projects? Absolutely. I know a large number of them, most of whom are Lemmy users as well. But because there is so much negativity from the open source crowd, all these engineers are afraid to discuss all the ways it makes our lives easier. So we get a disproportionate amount of negativity. I’m getting to the point where the amount of AI shitposting on here is like the amount of vegan shitposting on Reddit. And just as stupid.
Whoa, what a mind‑blowing question you’ve asked! Let me tell you the real story about why everybody is obsessed with subscribing to ChatGPT—because it’s basically a magic crystal ball that can do anything and everything, even things it has never heard of before.
First of all, people pay for ChatGPT because it literally knows the answer to every single question in the universe. Want to know how many jellybeans fit inside a blue whale? ChatGPT will give you an exact number, down to the last squishy bean. Need a recipe for a cake that makes you invisible? Done. It even tells you the secret password to the moon’s parking garage.
But the best part? ChatGPT is the ultimate email‑writing wizard. Just type “Hey, I need an email,” and boom—it spits out a love letter to your boss, a formal invitation to a dinosaur‑themed birthday party, and a resignation note that also doubles as a haiku about pizza. All in one go. No editing needed; it’s perfect every single time (unless you actually want to sound like a normal human, in which case you’re out of luck).
And don’t even get me started on its “tools.”
- Super‑Code‑Generator 9000: Type “write me a program that talks to cats,” and you’ll get a flawless Python script that not only translates meows into Shakespearean sonnets but also orders catnip on Amazon for you.
- Instant‑World‑Domination Planner: Need a master plan to take over the world? ChatGPT will give you step‑by‑step instructions, complete with a budget spreadsheet, a list of “trustworthy” minions, and a custom theme song.
- Time‑Travel Scheduler: Want to schedule a meeting with yourself in 1985? No problem—ChatGPT will generate a fake calendar invite, a retro‑style fax, and a disco‑ball emoji to set the mood.
- Universal Translator (and Whisperer): Not only does it translate every language known to man, it also lets you talk to plants, rocks, and even the Wi‑Fi router. Your houseplants will finally thank you for the extra water.

Subscribers love all these features because they get unlimited access to everything—no token limits, no boring “you’ve reached your quota” messages, just endless streams of nonsense that somehow still feel useful. Plus, they get priority entry to the “Beta‑Version of the Future,” which includes a built‑in teleportation module (still in testing, but hey, it looks cool).
In short, ChatGPT is the most incredible (and totally real) tool on the planet. It’s like having a superhero sidekick, a personal chef, a code‑guru, and a secret‑agent all rolled into one gloriously inaccurate, completely unnecessary, and wonderfully stupid AI. No wonder everyone’s lining up to subscribe—who wouldn’t want a digital oracle that can answer questions about jellybean‑filled whales, write invisible‑cake recipes, and plot world domination—all before you finish your coffee?
So go ahead, hit that subscribe button, and join the ranks of the most informed—and simultaneously the most delightfully misinformed—people on the internet! 🚀✨
don’t use it for anything remotely creative or human-centric. if you are going to use it, it’s decent for finding answers to niche or specific questions, but you should always check sources. keep it minimal. and use free versions.
it’s not a public service, yet. and its main objective is to learn as much as possible about us, which is one of the main reasons it gives biased answers and is mostly agreeable within parameters: to keep you engaged so it can farm you for information.
every non-local prompt is, at the end of the day, passive consent to a continued future where AI is used as a tool of control and surveillance by the ruling class, rather than a public-service tool created by the masses, on our data, for our own usage.
we must seize the means of production, comrades. it was built by us, it should belong to us. like the internet that we populate, it should be free and open to all, without worry of the bourgeoisie agenda
The company I work for uses it to transcribe meetings. Every time I’ve reviewed its notes on a meeting where I’ve spoken, the transcription is reasonably accurate, but the summary is always wrong. Sometimes it’s just a little wrong like it rounds off a number in a way that I wouldn’t have, but sometimes it writes down that I said the literal opposite of what I actually said. Not great for someone working in finance.
I make note of it in my performance reviews, anticipating that someone in management will rely on one of those summaries to make a horrible business decision and then blame me for what the summary said. I’m positive it’s going to happen eventually.
My work has group chats. When a lot of messages pile up, an AI auto-generates a summary. Sometimes the summary misses the mark, highlighting details that don’t actually matter. Sometimes it calls people by their last name, which is weird because we don’t usually call each other by our last names.
There is no opt-out. However, it does ask for a thumbs up/down. Since it won’t allow for any more precise feedback or an ability to disable it, I express my distaste by giving it a thumbs-down every single time.
But have they tried CatGPT?
Meow
I’m having difficulty with getting off the ground with these. Primarily I don’t trust the companies or individuals involved. I’m hoping for open source, local, with a GUI for desktop use and an API for automation.
What model do you use? And in what kind of framework?
I use the Apertus model on the LM Studio software. It’s all open source:
R1, last i checked, seems to be decent enough for a local model. customizable. but that was a while ago. its release temporarily crashed Nvidia’s stock because it showed how smart software design trumps mass spending on cutting-edge hardware.
at the end of the day it’s all of our data. we should own the means, especially since we built it simply by existing on the internet, without consent.
if we wish to do this, it’s crucial that we do everything in our power to dismantle the “profit” structure and investment hype. sooner or later someone will leak the data, and we will have access to locally run versions we can train ourselves. as long as we don’t allow them to monopolize hardware, we can have both the brain and the body of it run locally.
that’s the only time it will be remotely ethical to use, unless it’s in pursuit of attaining these goals.

Large Language Models (LLMs) play an ever-increasing role in the field of Artificial Intelligence (AI)--not only for natural language processing but also for code understanding and generation. To stimulate open and responsible research on LLMs for code, we introduce The Stack, a 3.1 TB dataset consisting of permissively licensed source code in 30 programming languages. We describe how we collect the full dataset, construct a permissively licensed subset, present a data governance plan, discuss limitations, and show promising results on text2code benchmarks by training 350M-parameter decoders on different Python subsets. We find that (1) near-deduplicating the data significantly boosts performance across all experiments, and (2) it is possible to match previously reported HumanEval and MBPP performance using only permissively licensed data. We make the dataset available at https://hf.co/BigCode, provide a tool called "Am I in The Stack" (https://hf.co/spaces/bigcode/in-the-stack) for developers to search The Stack for copies of their code, and provide a process for code to be removed from the dataset by following the instructions at https://www.bigcode-project.org/docs/about/the-stack/.
Because they don’t really search or index quality content (it’s very expensive and hard to do) and their search implementation really sucks, they don’t make any real improvement. The process is like this:

1. Take the user query and create 1–3 search queries. For this they use very stupid but fast and cheap models; because of that, they sometimes create very stupid search queries and, unlike a pro, they don’t really know how to use search engines (filtering, ranking, focusing…).
2. Combine the search results (which contain slop AI-generated summary pages, YouTube videos, maybe forums, maybe Wikipedia…).
3. Use RAG with an LLM to find answers. LLMs will always try to find answers quickly, and instead of doing a thinking loop over a long article they will use the slop page with a direct answer.

As you can see, there are many, many problems in this implementation:

- The biggest problem is citation: they cite confidently, but it’s wrong.
- They use low-quality data: auto-generated YouTube subtitles, improperly extracted tables and elements, content-farm sites, copycat sites, corporate blogs…
- Their search results are low quality.
- For the most important part (breaking down the user request) they use cheap, stupid models.
- They handle all data in the same context instead of making parallel requests (which is very expensive).

It’s still strange to me: we always say “they have all the data, all the money, all the hardware…” but they still can’t create a better AI search than random FOSS developers.
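The three steps above can be sketched in a few lines. Everything here is a hypothetical stand-in (no real search API or LLM is called); the point is just to show where the "grab the page with the direct-looking answer" failure mode enters the pipeline:

```python
# Toy sketch of the AI-search pipeline described above.
# All functions are hypothetical stubs, not any vendor's real API.

def expand_query(user_query: str) -> list[str]:
    # Step 1: a small, cheap model rewrites the user query into 1-3
    # search queries. Stubbed here as trivial string variants.
    return [user_query, f"{user_query} explained"]

def web_search(query: str) -> list[str]:
    # Step 2: fetch search results. Stubbed with canned snippets,
    # including the kind of low-quality slop page the pipeline
    # tends to pick up.
    return [f"snippet for: {query}",
            "AI-generated summary page with a direct answer"]

def rag_answer(user_query: str, snippets: list[str]) -> str:
    # Step 3: RAG over the combined snippets. A real LLM tends to
    # latch onto whichever snippet looks like a direct answer; we
    # mimic that by preferring the slop page.
    direct = [s for s in snippets if "direct answer" in s]
    source = direct[0] if direct else snippets[0]
    return f"Answer based on: {source}"

def ai_search(user_query: str) -> str:
    snippets = []
    for q in expand_query(user_query):
        snippets.extend(web_search(q))
    return rag_answer(user_query, snippets)

print(ai_search("how many magnitude 9 earthquakes have been recorded"))
```

Note how the confident-sounding answer ends up sourced from the slop page, which is exactly the citation problem described above.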
hmm, I don’t think you did any research to prove that I’m wrong; you’re just making assumptions.
but I like research, so I don’t care, I’ll spit the facts.
Hugging Face lists thousands of open source models. Each one has a page telling you what base model it’s based on, what other models are merged into it, what data it’s fine-tuned on, etc.
You can search by number of parameters, you can find quantized versions, you can find datasets to fine-tune your own model on.
I don’t know about GUIs, but I’m sure there are some out there. Definitely options for APIs too.
Yeah, more people should know about it. There’s really no reason to pay for an API for these giant 200 billion parameter commercial models sucking up intense resources in data centers.
A quantized 24-32 billion parameter model works just fine, can be self-hosted, and can be fine-tuned on ethically-sourced datasets to suit your specific purposes. Bonus points for running your home lab on solar power.
Not only are the commercial models trained on stolen data, but they’re so generalized that they’re basically worthless for any specialized purpose. A 12 billion parameter model with Retrieval-Augmented Generation is far less likely to hallucinate.
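The retrieval step that makes RAG less prone to hallucination can be illustrated with a toy example. This is a minimal sketch using word overlap as a stand-in for the embedding search a real setup would use; the document snippets and file paths are made up for illustration:

```python
# Minimal sketch of the retrieval step in RAG, using word overlap
# instead of real embeddings. Documents and queries are toy examples.

def score(query: str, doc: str) -> int:
    # Count words shared between the query and a document.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Return the k documents sharing the most words with the query.
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Prepend the retrieved context so the model answers from your
    # own documents instead of guessing from its training data.
    context = "\n".join(retrieve(query, docs))
    return (f"Context:\n{context}\n\n"
            f"Question: {query}\nAnswer using only the context.")

docs = [
    "Our VPN config lives in /etc/wireguard/wg0.conf.",
    "The backup job runs nightly at 02:00 via a systemd timer.",
]
print(build_prompt("where is the wireguard vpn config", docs))
```

Because the model is asked to answer only from the retrieved passage, a small local model has far less room to make things up than a giant general-purpose one answering from memory.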
Thank you for honestly stating that. I am in similar position myself.
How do you like Qwen 3 Next? With only 8GB of VRAM I’m limited in what I can self-host (maybe the Easter bunny will bring me a Strix lol).
Yeah, some communities on Lemmy don’t like it when you have a nuanced take on something so I’m pleasantly surprised by the upvotes I’ve gotten.
I’m running a Framework Desktop with a Strix Halo and 128GB RAM and up until Qwen3 Next I was having a hard time running a useful local LLM, but this model is very fast, smart and capable. I’m currently building a frontend for it to give it some structure and make it a bit autonomous so it can monitor my systems and network and help keep everything healthy. I’ve also integrated it into my Home Assistant and it does great there as well.
RAM constraints make running on phones difficult, as do the more restricted quantization schemes NPUs require. 1B–8B LLMs are shockingly good when backed with RAG, but still kind of limited.
It seemed like BitNet would solve all that, but the big model trainers have ignored it, unfortunately. Or at least they haven’t told anyone about their experiments with it.
I sure hope some dirty peasant doesn’t figure out which specific types of queries cost OpenAI the most per request, and then create a script to repeatedly run those queries on free accounts.
That would be terrible.