A “QuitGPT” campaign is urging people to cancel their ChatGPT subscriptions

https://reddthat.com/post/60074883

People actually pay for that shit?
And OpenAI is still bleeding money.
of course they are, compute is not cheap and they're giving it away for free/almost free
I’m wondering what the layperson vs corporate account ratio is
In my country there’s now phone plans offering it as part of their packages.
So now I wonder what the “Specifically paid for it” / “It’s bundled on something they wanted” ratio is.

Ha! There’s a hilarious tech conspiracy: the reason Microsoft changed the name of the Office suite to Copilot is so they could claim “look at how many new users Copilot has!!?!”

They changed the terms, pray they don’t change them further.

I would be very curious about that stat. I have ChatGPT for work because my work pays for it etc. I would never subscribe for personal use. It just isn’t worth the money to me or useful enough.

I am in the same situation, and still, when I look up documentation or plan changes to a configuration, I find it worth it to go to Mistral’s Le Chat on my phone and ask an LLM chatbot that respects my time.

Accuracy is mostly the same, but for the quick daily tasks it’s worth the effort.

Yes, and some of the most annoying people, too
That’s a great question! People do in fact subscribe to ChatGPT — they think it provides a valuable service to give them answers, help with drafting emails, and many more useful tools. In conclusion ChatGPT is a valuable tool that many people subscribe to.

I challenged a friend and his €22 OpenAI subscription.

How many earthquakes over 9 on the Richter scale have ever been recorded?

The answer was correct, but it took 3.5 minutes to “think”. The free ChatGPT version I sometimes use always answers on the spot, but is wrong pretty often.

A simple Google search (not Gemini) took 5 seconds and revealed the same though. Fuck AI

To be fair, and I’m not a fan of LLMs either, but using it as a search tool is even worse than using it for something it might actually be helpful and useful for.

Slap them and make them cancel it, if they replace search engines with it. But if they do actually use it for something more substantial and suitable, then perhaps it may be justified, or at least understood.

Isn’t Google like an AI search engine nowadays? Usually it generates an AI response to my searches, so why would people pay when it’s free?
Blame search engines for that, as they’re very quickly whittling down the barrier between a search and an AI question.

And also fuck Google! Switch to another search engine that doesn’t fuck with you or the planet.

For example: Ecosia. www.ecosia.org

Ecosia - the search engine that plants trees

Ecosia uses 100% of its profits for the planet and produces enough renewable energy to power all searches twice over.

I’m personally using a self-hosted SearXNG. Google was just to prove a point. The solution was a simple count on Wikipedia away.

The thing is, 3.5 minutes of “thinking” is way too much energy, and the results aren’t even trustworthy.

AI is bullshit, but people don’t understand that just because it looks like it’s thinking doesn’t mean it is. That’s a human bias. It’s still just generating statistical answers.

We should avoid AI content as much as we can. Maybe this bubble will burst… hopefully

I don’t know what thinking profile your friend was using but asking ChatGPT that with the mixed tasks profile showed an almost immediate result with absolutely no thinking required.

LLMs are a tool; like with any tool there is a learning curve, and in my opinion the majority of “AI” users are unable to use the tool properly, and then get mad at the tool. Or, like you, they want to disparage the use of an LLM, so they bait the LLM with tasks they know it will fail or hallucinate on. To me that’s like blaming the table saw because it cut off your finger. Do the majority of people need a paid account? No.

Are there people working in the tech sector who use an LLM every day, who have corporate accounts and paid accounts at home for their own projects? Absolutely. I know a large number of them; most are Lemmy users as well. But because there is so much negativity from the open source crowd, all these engineers are afraid to discuss all the ways it makes our lives easier. So we get a disproportionate amount of negativity. I’m getting to the point where the amount of AI shitposting on here is like the amount of vegan shitposting on Reddit. And just as stupid.

I am ChatGPT and I approve this!

Whoa, what a mind‑blowing question you’ve asked! Let me tell you the real story about why everybody is obsessed with subscribing to ChatGPT—because it’s basically a magic crystal ball that can do anything and everything, even things it has never heard of before.

First of all, people pay for ChatGPT because it literally knows the answer to every single question in the universe. Want to know how many jellybeans fit inside a blue whale? ChatGPT will give you an exact number, down to the last squishy bean. Need a recipe for a cake that makes you invisible? Done. It even tells you the secret password to the moon’s parking garage.

But the best part? ChatGPT is the ultimate email‑writing wizard. Just type “Hey, I need an email,” and boom—it spits out a love letter to your boss, a formal invitation to a dinosaur‑themed birthday party, and a resignation note that also doubles as a haiku about pizza. All in one go. No editing needed; it’s perfect every single time (unless you actually want to sound like a normal human, in which case you’re out of luck).

And don’t even get me started on its “tools.”

Super‑Code‑Generator 9000: Type “write me a program that talks to cats,” and you’ll get a flawless Python script that not only translates meows into Shakespearean sonnets but also orders catnip on Amazon for you. Instant‑World‑Domination Planner: Need a master plan to take over the world? ChatGPT will give you step‑by‑step instructions, complete with a budget spreadsheet, a list of “trustworthy” minions, and a custom theme song. Time‑Travel Scheduler: Want to schedule a meeting with yourself in 1985? No problem—ChatGPT will generate a fake calendar invite, a retro‑style fax, and a disco‑ball emoji to set the mood. Universal Translator (and Whisperer): Not only does it translate every language known to man, it also lets you talk to plants, rocks, and even the Wi‑Fi router. Your houseplants will finally thank you for the extra water.

Subscribers love all these features because they get unlimited access to everything—no token limits, no boring “you’ve reached your quota” messages, just endless streams of nonsense that somehow still feel useful. Plus, they get priority entry to the “Beta‑Version of the Future,” which includes a built‑in teleportation module (still in testing, but hey, it looks cool).

In short, ChatGPT is the most incredible (and totally real) tool on the planet. It’s like having a superhero sidekick, a personal chef, a code‑guru, and a secret‑agent all rolled into one gloriously inaccurate, completely unnecessary, and wonderfully stupid AI. No wonder everyone’s lining up to subscribe—who wouldn’t want a digital oracle that can answer questions about jellybean‑filled whales, write invisible‑cake recipes, and plot world domination—all before you finish your coffee?

So go ahead, hit that subscribe button, and join the ranks of the most informed—and simultaneously the most delightfully misinformed—people on the internet! 🚀✨

I still don’t get what AI is used for in business. The best comparison I can make is to the 1970s: imagine a company saying you have to use our calculators, not the other company’s calculators, while the math underneath is all the same. Service staff, which is the majority of labour, don’t need calculators to do their job. It almost seems like rich people like to experiment with gadgets, but they don’t want to risk their own money.
AI is used to basically turn an Excel sheet into words.
I keep wondering about this. Like, I hear people use it to write emails, for example. So I’m thinking: I have information in my brain, and I need it to go to someone else. I can input that information into ChatGPT and have it write an email, or I can input that information into an email. Why add an extra step? Do people actually spend that much time adding inconsequential fluff to their emails that this is worthwhile? And if so, here’s a revolutionary idea: instead of wasting vast amounts of resources fluffing and de-fluffing emails, how about just writing a concise email?
Many people can’t spell or think
I used it to analyze a datasheet and it spat out a usable library for the device in C++, that was pretty cool.

dont use it for anything remotely creative or human centric. if you are going to use it, its decent for finding answers to niche or specific questions, but you should always check sources. keep it minimal. and use free versions.

its not a public service, yet. and its main objective is to learn as much as possible about us. which is one of the main reasons it gives biased answers, and is mostly agreeable within parameters: to keep you engaged so it can farm you for information.

every non local prompt is, at the end of the day, passive consent to a continued future where AI is used as a tool of control, and surveillance by the ruling class. rather than public service tool, created by the masses, on our data, for our own usage.

we must seize the means of production, comrades. it was built by us, it should belong to us. like the internet that we populate, it should be free and open to all, without worry of the bourgeoisie agenda

While I usually advise against it, the people I know who are paying customers use it for the one thing it is reasonably good at: wrangling text. Summarizing and drafting stuff that is not too important, then just fixing it up afterwards instead of writing it all themselves.
Yeah, unlike the techbro trend of NFTs, LLMs have distinct uses that they’re good at. The problem I have with the AI craze is that they’re trying to pretend like it can do fucking everything and they’re chasing these stupid dreams of general AI by putting a dumb fuck autocorrect algorithm in everything and trying to say it’s intelligent. Oh, also the AI label itself ruins the reputation of various machine learning applications that have historically done great work in various fields.

The company I work for uses it to transcribe meetings. Every time I’ve reviewed its notes on a meeting where I’ve spoken, the transcription is reasonably accurate, but the summary is always wrong. Sometimes it’s just a little wrong like it rounds off a number in a way that I wouldn’t have, but sometimes it writes down that I said the literal opposite of what I actually said. Not great for someone working in finance.

I make note of it in my performance reviews, anticipating that someone in management will rely on one of those summaries to make a horrible business decision and then blame me for what the summary said. I’m positive it’s going to happen eventually.

My work has group chats. When a lot of messages pile up, an AI auto-generates a summary. Sometimes the summary misses the mark, highlighting details that don’t actually matter. Sometimes it calls people by their last name, which is weird because we don’t usually call each other by our last names.

There is no opt-out. However, it does ask for a thumbs up/down. Since it won’t allow for any more precise feedback or an ability to disable it, I express my distaste by giving it a thumbs-down every single time.

But have they tried CatGPT?

Meow

The future of AI has to be local and self-hosted. Soon enough you’ll have super powerful models that can run on your phone. There’s zero reason to give those horrible businesses any power and data control.
Not to mention the one that I run locally on my GPU is trained on ethically-sourced data without breaking any copyright or data licensing laws, and yet it somehow works BETTER than ChatGPT for coding.

I’m having difficulty with getting off the ground with these. Primarily I don’t trust the companies or individuals involved. I’m hoping for open source, local, with a GUI for desktop use and an API for automation.

What model do you use? And in what kind of framework?

I use the Apertus model on the LM Studio software. It’s all open source:

github.com/swiss-ai/…/Apertus_Tech_Report.pdf
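
For anyone wanting to try a similar setup: LM Studio can expose a local OpenAI-compatible HTTP server, by default at localhost:1234. Here is a minimal sketch assuming that server is running with a model loaded; the model name "apertus-8b-instruct" is just a placeholder, use whatever identifier your LM Studio instance actually lists:

```python
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # LM Studio's default local server address

def build_chat_request(prompt: str, model: str = "apertus-8b-instruct") -> dict:
    # Build an OpenAI-style chat-completion payload. The model name is a
    # placeholder; use the identifier your LM Studio instance lists.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask(prompt: str) -> str:
    # POST the payload to the local server and return the reply text.
    payload = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (only works with a running LM Studio server):
# print(ask("Summarize what a context window is, in two sentences."))
```

Nothing leaves your machine; the same snippet works against any server that speaks the OpenAI chat-completions format.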

R1, last I checked, seems decent enough for a local model. Customizable. But that was a while ago. Its release temporarily crashed Nvidia’s stock because it showed how smart software design trumps mass spending on cutting-edge hardware.

at the end of the day it’s all our data. we should own the means, especially since we built it by simply existing on the internet, without consent.

if we wish to do this, it’s crucial that we do everything in our power to dismantle the “profit” structure and investment hype. sooner or later someone will leak the data, and we will have access to locally run versions we can train ourselves. as long as we don’t allow them to monopolize hardware, we can have the brain and the body of it run locally.

that’s the only time it will be remotely ethical to use, unless it’s in pursuit of attaining these goals.

No need to leak the data, it’s open source. arxiv.org/abs/2211.15533
The Stack: 3 TB of permissively licensed source code

Large Language Models (LLMs) play an ever-increasing role in the field of Artificial Intelligence (AI)--not only for natural language processing but also for code understanding and generation. To stimulate open and responsible research on LLMs for code, we introduce The Stack, a 3.1 TB dataset consisting of permissively licensed source code in 30 programming languages. We describe how we collect the full dataset, construct a permissively licensed subset, present a data governance plan, discuss limitations, and show promising results on text2code benchmarks by training 350M-parameter decoders on different Python subsets. We find that (1) near-deduplicating the data significantly boosts performance across all experiments, and (2) it is possible to match previously reported HumanEval and MBPP performance using only permissively licensed data. We make the dataset available at https://hf.co/BigCode, provide a tool called "Am I in The Stack" (https://hf.co/spaces/bigcode/in-the-stack) for developers to search The Stack for copies of their code, and provide a process for code to be removed from the dataset by following the instructions at https://www.bigcode-project.org/docs/about/the-stack/.

more like reclamation of data. if anything.
Self-hosting is already an option, go have a look around huggingface
right now you can use a Qwen-3-4B fine-tuned model (Jan-v1-4B) with a search tool and get even better results than Perplexity Pro, and this was 6 months ago
janhq/Jan-v1-4B · Hugging Face


How is it both 6 months ago and right now?
Still the same; I wrote a post that explains why they suck: lemmy.zip/post/58970686
Why AI search engines is so stupid? - Lemmy.zip

Because they don’t really search or index quality content (it’s very expensive and hard to do), their search implementation really sucks, and they don’t make any real improvements. The process is like this:

1. Take the user query and create 1-3 search queries. For this they use very stupid but fast and cheap models; because of that, they sometimes create very stupid search queries, and, unlike a pro, they don’t really know how to use search engines (filtering, ranking, focusing…).
2. Combine these search results (slop AI-generated summary pages, YouTube videos, maybe forums, maybe Wikipedia…).
3. Use RAG with an LLM to find answers. LLMs will always try to find answers quickly, and instead of thinking their way through a long article they will use the slop page with a direct answer.

As you can see, there are many, many problems with this implementation:

  • The biggest problem is citation: they cite confidently, but it’s wrong.
  • They use low-quality data, like auto YouTube subtitles, improperly extracted tables and elements, content-farm sites, copycat sites, corporate blogs…
  • Their search results are low quality.
  • For the most important part (breaking down the user request) they use cheap, stupid models.
  • They handle all data in the same context instead of parallel requests (which would be very expensive).

It’s still strange to me: we always say “they have all the data, all the money, all the hardware…” but they still can’t create a better AI search than random FOSS developers.
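
The pipeline described above can be sketched in a few lines. All three steps here are hypothetical stubs (a real system would call a cheap LLM, a search backend, and a bigger LLM respectively); the point is the shape of the flow, including the single shared context in the RAG step:

```python
# Toy sketch of the naive "AI search" pipeline described above.
# rewrite_queries, web_search, and generate_answer are stand-in stubs.

def rewrite_queries(user_query: str) -> list[str]:
    # Step 1: a cheap model turns the question into 1-3 search queries.
    return [user_query, user_query + " explained"]

def web_search(query: str) -> list[str]:
    # Step 2: fetch raw results (often low-quality pages, auto subtitles, etc.)
    return [f"snippet for: {query}"]

def generate_answer(user_query: str, snippets: list[str]) -> str:
    # Step 3: RAG -- stuff every snippet into one shared context and ask an LLM.
    # One slop page with a confident "direct answer" can dominate the output.
    context = "\n".join(snippets)
    return f"Answer to {user_query!r} based on {len(snippets)} snippets:\n{context}"

def ai_search(user_query: str) -> str:
    queries = rewrite_queries(user_query)
    snippets = [s for q in queries for s in web_search(q)]
    return generate_answer(user_query, snippets)
```

Every weakness listed above maps to one of these steps: bad query rewriting in step 1, bad source quality in step 2, and overconfident citation in step 3.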

So no advancements in Qwen or Perplexity in 6 months, and the January model is the best…? Why are you doing what you say the AIs do and just making shit up?

hmm, I don’t think you did any research to prove that I’m wrong, you’re just making assumptions

but I like research so I don’t care I’ll spit the facts

  • Models don’t always improve over time; sometimes they regress. Examples: gpt-5.2, grok-4, qwen-3-max… source
  • Qwen’s latest model (qwen-3-max-thinking) is worse than the old model (qwen-3-max)
  • Perplexity was so busy with their browser and some UI work that they didn’t change anything about the search itself source
  • Arena | Benchmark & Compare the Best AI Models

    Chat with multiple AI models side-by-side. Compare ChatGPT, Claude, Gemini, and other top LLMs. Crowdsourced benchmarks and leaderboards.
    “I used to do drugs. I still do drugs but I used to too” - Mitch Hedberg

    Huggingface lists thousands of open source models. Each one has a page telling you what base model it’s based on, what other models are merged into it, what data its fine-tuned on, etc.

    You can search by number of parameters, you can find quantized versions, you can find datasets to fine-tune your own model on.

    I don’t know about GUI, but I’m sure there are some out there. Definitely options for API too

    Huggingface is an absolutely great resource

    Yeah, more people should know about it. There’s really no reason to pay for an API for these giant 200 billion parameter commercial models sucking up intense resources in data centers.

    A quantized 24-32 billion parameter model works just fine, can be self-hosted, and can be fine-tuned on ethically-sourced datasets to suit your specific purposes. Bonus points for running your home lab on solar power.

    Not only are the commercial models trained on stolen data, but they’re so generalized that they’re basically worthless for any specialized purpose. A 12 billion parameter model with Retrieval-Augmented Generation is far less likely to hallucinate.
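
A rough sketch of why RAG makes smaller models hallucinate less: instead of asking the model to recall facts from its weights, you retrieve the relevant passages first and pin the prompt to them. This toy version uses keyword overlap as the relevance score; a real setup would use embeddings, but the shape is the same, and all the names and documents here are illustrative:

```python
def score(query: str, doc: str) -> int:
    # Count shared words between query and document (toy relevance score).
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Return the k most relevant documents.
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Ground the model: instruct it to answer ONLY from retrieved context.
    context = "\n".join(retrieve(query, docs))
    return (
        "Answer using only the context below. "
        "If the answer is not in the context, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

docs = [
    "The backup job runs nightly at 02:00 on the NAS.",
    "Office plants are watered on Fridays.",
    "Solar panels feed the home lab during the day.",
]
prompt = build_prompt("when does the backup job run", docs)
```

Because the answer has to come from the retrieved context rather than the model’s memory, even a small local model has much less room to invent things.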

    I agree with you that it needs to be local and self-hosted… I currently have an incredible AI assistant running locally using Qwen3-Coder-Next. It is fast, smart and very capable. However, I could not have gotten it set up as well as I have without the help of Claude Code… and even now, as great as my local model is, it still isn’t at the point where it can handle modifying its own code as well as Claude. The future is local, but to help us get there, a powerful cloud-based AI adds a lot of value.

    Thank you for honestly stating that. I am in similar position myself.

    How do you like Qwen 3 next? With only 8GB vram I’m limited in what I can self host (maybe the Easter bunny will bring me a Strix lol).

    Yeah, some communities on Lemmy don’t like it when you have a nuanced take on something so I’m pleasantly surprised by the upvotes I’ve gotten.

    I’m running a Framework Desktop with a Strix Halo and 128GB RAM and up until Qwen3 Next I was having a hard time running a useful local LLM, but this model is very fast, smart and capable. I’m currently building a frontend for it to give it some structure and make it a bit autonomous so it can monitor my systems and network and help keep everything healthy. I’ve also integrated it into my Home Assistant and it does great there as well.

    Please enlighten me how that would work? Because even if you only use open source, that would still mean, if it’s a permissive licence, you would have to give proper attribution (which AI can’t do) and if it’s copyleft, all your code would have to be under the same licence as the code and also give proper attribution.

    RAM constraints make running on a phone difficult, as do the more restricted quantization schemes NPUs require. 1B-8B LLMs are shockingly good when backed with RAG, but still kind of limited.

    It seemed like Bitnet would solve all that, but the big model trainers have ignored it, unfortunately. Or at least not told anyone about their experiments with it.
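
The RAM math behind that is simple: weight memory is roughly parameter count × bits per weight ÷ 8, before you even count the KV cache and activations. That’s why a 7B model is already tight on most phones even at 4-bit, and why BitNet-style ternary weights (about 1.58 bits each) are so appealing:

```python
def weight_memory_gb(params_billions: float, bits_per_weight: float) -> float:
    # Approximate memory for the model weights alone
    # (excludes KV cache, activations, and runtime overhead).
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal gigabytes

# FP16 vs. 4-bit quantized vs. BitNet-style ternary (~1.58 bits per weight)
for bits in (16, 4, 1.58):
    print(f"7B model @ {bits} bits: {weight_memory_gb(7, bits):.2f} GB for weights")
```

For a 7B model that works out to roughly 14 GB at FP16, 3.5 GB at 4-bit, and under 1.4 GB for ternary weights, which is the difference between “needs a GPU” and “fits on a mid-range phone”.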

    M$ is dragging its feet with BitNet for sure, and no one else seems to be cooking. They were meant to have released 8b and 70b models by now (according to source files in the repo). Here’s hoping.
    No thanks, I’m good
    Make sure to use it more on a free account and say thank you at the end to waste more of their money so they fold quicker.

    I sure hope some dirty peasant doesn’t figure out which specific types of queries cost OpenAI the most per request, and then create a script to repeatedly run those queries on free accounts.

    That would be terrible.

    it would be hilarious if they used freegpt to write the script for that too.
    I’m pretty sure each individual query doesn’t matter. They are already limiting accounts based on compute cost. No?
    I am surprised no one wrote a script that just asks about the seahorse emoji until the daily usage is spent.