You know you can just not use AI, right?

You can make a choice not to be part of this.

Even if your job uses it, YOU don’t have to in your normal life. You don’t have to let your kids use it.

You didn’t have it three years ago. You can just…be on Team Human. You can choose.

@Catvalente That's not true.

A major danger of algorithmic abuse, projected for years, is that an uncaring government might replace trained bureaucrats with simplistic algorithms and use their suggestions to mete out state power on a population whether it consents or not. Despite the warnings, this has already happened; for example, in systems designed to mask racism by having a computer express the stereotype that Black people are offensive and criminal.

https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/

Predictive policing algorithms are racist. They need to be dismantled.

Lack of transparency and biased training data mean these tools are not fit for purpose. If we can’t fix them, we should ditch them.

MIT Technology Review

@riley @Catvalente

I agree we should critically evaluate our personal use of AI vapourware, understand practical uses of Machine Learning (ML), and also understand existing abuses of statistical algorithms for decision making.

In the quoted article, I didn't see any reference to Bayesian reasoning or to the models likely involved. The people writing these articles for the general public often just reiterate the historical dodge that "it's too complicated to explain."
The problem is that many people cannot grasp the use or misuse of statistics in law or policing, whether it is informed by ML or not. The difference between a statement about where crime, or types of crime, have historically occurred and the evidence needed to prove that a specific person is committing a new crime often escapes police, witnesses and judges. They are totally separate concerns. Calling any of this AI just adds another layer of confusion.

@Catvalente
Yes. That.
Only, I ask anyone reading this not to conflate the "AI" gold rush fuckery with the tech itself. Like the horseless carriage before it, machine learning is a thing anyone doing digital work will eventually use or make peace with. But: don't use a pistol to scrape masking tape off a window. Don't ask a machine learning model to think... you do the thinking, the machine does the iterative pattern-matching BS... knowing the difference is the thing...
@Catvalente totally agree. As a professional copywriter I don't think writing something interesting and accurate is very difficult. But sometimes the choice is moot. If a client is trying to weigh the value of something creative, the order of operations is to ask how much it costs, how long it will take, and how good/accurate/engaging the final product is. I am discovering that clients are increasingly discounting every value proposition except cost. If they can get something roughly acceptable for free, they aren't likely to pay someone $100 per hour. I think it behooves anyone who may be competing with AI to learn as much as they can about it, master it and create a new value proposition that leverages AI. You can choose to fight it and not use AI, but a client will likely choose to work with someone who can deliver what they want as cheaply and quickly as possible. I hate AI-generated content, but the time to refuse has long since passed.
@calsnoboarder Wait, you say you totally agree and then go on to argue the exact opposite?
@RexMagenta @calsnoboarder the time has not passed. We can choose to not engage with AI in as many areas as possible.
@ambergrey @RexMagenta you can certainly choose not to. If however your livelihood depends on direct competition with an LLM, that choice may not result in a positive outcome. This same “choice” was presented when mobile phones were introduced. This same “choice” was presented when social media apps were created. You can choose to not do something but always be aware that the choice may adversely impact you. That’s all I am saying.
@calsnoboarder @RexMagenta it is critical that we make the choice to not normalize it in as many ways as possible.
@ambergrey @RexMagenta I am not sure there is anything you or I can do to stem the tide of apps and services that use AI. So long as some people find it useful, the folks who shove it into every aspect of our daily lives will continue to do so. I compare it to when Coca Cola changed their formula. Some folks lost their minds while others thought the new recipe was better. Coca Cola just monetized both. AI is here to stay (or until someone finds a better alternative), and in the future they will either charge you extra to opt out or not give you the option at all.
@RexMagenta I agree that you can choose not to use it. Everyone has a choice. If you are competing against AI for your livelihood however, you may not like the outcome.
@calsnoboarder But that was already covered in Cat's original point, no? That you might be required to use it for your job, but can still choose not to use it on your personal life
@RexMagenta and I agreed. You can choose not to engage in your personal life (and in your work life). I just think it’s a losing proposition. Just like it was when mobile phones were released to the market and some folks said they would never be slaves to a phone. As I stated in my comment, I am adversely affected by the advent of LLMs. I just refuse to pretend there is a real choice to be made. LLMs are everywhere and are increasingly part of everyday life. Like any technology you can avoid it and pretend it doesn’t exist but eventually you will have to figure out a way to live with it.
@calsnoboarder But then you don't 'totally agree'...
@RexMagenta sure. You do you champ.
@Catvalente
I don't use it for anything. I even turned off Siri listening although I might use her to call someone on my phone.

@Catvalente

yes, it's easy; more so if you already questioned the cloud.

@Catvalente I used it in a workshop about it.
we all had it make up a certain kind of illustrated story with our prompts.
*Some* of us discerned the biases (it did a lot better with cat and food prompts than, well, anything else --> some folks thought that was cute.)
So ... nope, I don't use it. Granted, I don't have to. It's not going to make good math lessons.
@Catvalente Used to be I could put in search terms and the first result often did it, then ads, so I learned to scroll down a bit... until I got a good adblocker. Now that reflex to auto scroll down comes in handy as I studiously ignore the shit AI summaries the pig boys and girls force upon anyone making a query. Every time I see an AI summary I send thoughts out into the universe that are not friendly to techbros or their apologists.
@cjpaloma
You can turn off the AI summaries in Google by adding -ai to the search term. Or use duck duck go and turn it off in the settings.
@Catvalente
@rlcw thanks, I rarely use google, so it's yahoo's AI crap I scroll away from (don't judge me!). While I used it for a while, currently I don't often get much satisfaction using DDG…it's sad because search engines in general were almost magical for a while.…I do have some luck with using lycos.com from time to time (yes it still exists).
@cjpaloma
They might also have a setting to turn it off.
And yeah, search engines are just so bad at the moment...
@Catvalente Excuse me, I'm Team Anti-Human and I still don't use that shit.

@Catvalente or you selectively use it for the cases it was designed for when you want to deflect liability onto a machine...

I really don't know why anyone would accept a "sorry, the AI must have gotten it wrong and I didn't notice" as a get-out-of-jail-free card, but enough apparently do...

@Catvalente i'm a software developer. even if I don't use it, if my colleagues do, I have to deal with the output.
@Catvalente There will be far more accurate iterations in the future. People still need to use discernment. When you scrape the web, you bring back crap, too, and it will be that way until the truly fake stuff is not promoted by search engine algorithms.
@pescemediaworks @Catvalente from what I’ve seen in recent articles, accuracy and hallucinations are getting worse over time, not better.
@bigducky @pescemediaworks @Catvalente unfortunately, while accuracy and confabulations are getting worse, so is the influx of AI sympathisers, or people Stockholm-syndroming it 😾
@pescemediaworks @Catvalente LLMs only have the illusion of working as well as they do because they're trained on truly gigantic amounts of (human-written) data. You don't get truly gigantic amounts of human-written data without much that is just plain wrong (whether search engines are involved or not), and you don't weed out the bad data without employing an army of subject matter experts, which would be cost-prohibitive. Additionally, even an LLM trained only on "good" data will still likely get a lot of facts wrong when asked a question that isn't very close to something in its training data, because they're not optimized for getting facts right; they're optimized for predicting the next cluster of letters. So I'll have to file your confident "there will be far more accurate iterations" assertion under "extraordinary claims require extraordinary proof." Or, like, any proof.
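To make "optimized for predicting the next cluster of letters" concrete, here's a toy sketch: a word-level bigram model that just reproduces whatever followed most often in its training data, with no notion of truth. (Real LLMs are incomparably larger and work on subword tokens, but the training objective is the same flavor; the tiny corpus below is invented for illustration.)

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which word follows it and how often."""
    words = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower: likelihood, not truth."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = "the moon is made of cheese . the moon is a rock . the moon is a rock"
model = train_bigram(corpus)

# The model repeats whatever followed most often in training:
print(predict_next(model, "a"))   # "rock" (seen twice, vs. nothing else)
print(predict_next(model, "of"))  # "cheese" (wrong, but frequent in training)
```

The point: "cheese" comes out not because it's true, but because it's what the training text said most often after "of".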

@Catvalente More like Team Not-LLM!

The opposite of "AI" is not "human".

@Catvalente Of course I'm Team Human, plus I openly ridicule and/or berate my colleagues who aren't.
@Catvalente I had to categorize a list of 5,200 lines this week. My manager took the ChatGPT route: 20 mins, 2,500 sorted out, rest unclear, quality to be determined.
I took 3 hours of filtering 2 columns plus skimming the results. All sorted, a few questions for the client, and 361 remaining. The client later concurred with all my proposed answers to the questions.
Now I know that I have only 361 parties to reach out to.
And I have a strategy, because I know which groups are in the list. #AIisnotall
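For the curious, the two-column filter workflow described above can be sketched in a few lines of pandas. The column names and values here are invented, since the post doesn't give the actual spreadsheet layout:

```python
import pandas as pd

# Hypothetical data standing in for the real 5,200-line list.
df = pd.DataFrame({
    "party":    ["Acme", "Beta Ltd", "Gamma", "Delta Co", "Acme"],
    "category": ["supplier", "unknown", "customer", "unknown", "supplier"],
    "status":   ["closed", "open", "closed", "open", "closed"],
})

# Filter on two columns: everything already classifiable drops out,
# leaving only the residue you actually need to ask the client about.
unresolved = df[(df["category"] == "unknown") & (df["status"] == "open")]

print(len(unresolved))
print(unresolved["party"].tolist())
```

The same boolean-filter logic works in any spreadsheet tool's column filters; the payoff is that you know exactly why each remaining row is unresolved.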
@Catvalente But how else am I going to get my daily influx of unequivocal praise about my clever and remarkable insights about maybe not using egg whites as an egg substitute if I'm allergic to eggs? "You're right, Lowly Human! Egg whites are still considered eggs! I will keep that in mind for my further interactions with you, but not for anyone else."
@Catvalente In $WORKPLACE most people have somehow caught an AI fever, and some days it's unbearable. They stopped saying things like "try to search for this" and now it's "ask ChatGPT". It got sickening after a few people tried to run unchecked generated scripts; fortunately those were small things without significant impact.
The worst part is that tech people are EXPECTED to use this crap at work. The only compromise I could make is using Perplexity as a "browser without turning results pages manually". Because, ethical problems aside, I simply couldn't stand information without verification. So I have links to verify answers and can say I use this damn thing as expected... Fortunately this mode works without registration or paying.
"You can make a choice not to be part of this."

fucking word

edit: I am an AI expert as much as anyone is.
I was just in the MS OpenAI trainings, I have released a production
app, I have been moved on to the new "agent" crap.

... IT doesn't work. It won't work. All of the "proof" is in these BS white papers
that bottom out in speculation and projections.

also it's burning the planet and hurting people that don't fit into
some status quo bubble, because all it is is a status quo reinforcement device.
all it is literally doing is matching text to previous text. the image stuff
is a derivation of that.

We 100% do not have to participate.
(Today is my last day at a high paying job. so i 100% have put my money where
my mouth is. I am stressed about what i will do next, but i am relieved to
not be a part of the problem anymore.)

also that dick-tornado calsnoblower can get effed.

@Catvalente AI and crypto. Name two things I don’t give a F about.
@Catvalente - after shareholders realize that hundreds of millions of their potential profit have been sunk into a product most average consumers really have no interest in, they'll realize they've got the tech equivalent of 3D TV.
@Catvalente i asked chatgpt about this and it said no

@Catvalente

Or just use your AI locally 🦾 💻 🧠

I completely understand the concerns about relying too heavily on AI, especially cloud-based, centralized models like ChatGPT. The issues of privacy, energy consumption, and the potential for misuse are very real and valid. However, I believe there's a middle ground that allows us to benefit from the advantages of AI without compromising our values or autonomy.

Instead of rejecting AI outright, we can opt for open-source models that run on local hardware. I've been running large language models (LLMs) on my own hardware. This approach offers several benefits:

- Privacy - By running models locally, we can ensure that our data stays within our control and isn't sent to third-party servers.

- Transparency - Open-source models allow us to understand how the AI works, making it easier to identify and correct biases or errors.

- Customization - Local models can be tailored to our specific needs, whether it's for accessibility, learning, or creative projects.

- Energy Efficiency - Local processing can be more energy-efficient than relying on large, centralized data centers.

- Empowerment - Using AI as a tool to augment our own abilities, rather than replacing them, can help us learn and grow. It's about leveraging technology to enhance our human potential, not diminish it.

For example, I use local LLMs for tasks like proofreading, transcribing audio, and even generating image descriptions. Instead of ChatGPT and Grok, I utilize Jan.ai with Mistral, Llama, OpenCoder, Qwen3, R1, WhisperAI, and Piper. These tools help me be more productive and creative, but they don't replace my own thinking or decision-making.
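For anyone wondering what "running locally" looks like in practice, here's a minimal sketch that talks to a local OpenAI-compatible chat endpoint, which tools like Jan.ai expose on localhost. The port, model name, and prompt below are placeholder assumptions; check your own server's settings:

```python
import json
import urllib.request

# Assumption: a local server with an OpenAI-compatible chat endpoint.
# Jan.ai's local API server defaults to port 1337; adjust to your setup.
ENDPOINT = "http://localhost:1337/v1/chat/completions"

def build_request(prompt, model="mistral"):
    """Build the JSON payload; nothing leaves the machine until you send it."""
    return {
        "model": model,  # placeholder: use whatever model you have loaded
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }

def ask_local(prompt):
    """POST the prompt to the local server and return the reply text."""
    payload = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        ENDPOINT, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_local("Proofread this sentence: Their going to the store."))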

It's also crucial to advocate for policies and practices that ensure AI is used ethically and responsibly. This includes pushing back against government overreach and corporate misuse, as well as supporting initiatives that promote open-source and accessible technologies.

In conclusion, while it's important to be critical of AI and its potential downsides, I believe that a balanced, thoughtful approach can allow us to harness its benefits without sacrificing our values. Let's choose to be informed, engaged, and proactive in shaping the future of AI.

CC: @Catvalente @audubonballroon
@calsnoboarder @craigduncan

#ArtificialIntelligence #OpenSource #LocalModels #PrivacyLLM #Customization #LocalAI #Empowerment #DigitalLiteracy #CriticalThinking #EthicalAI #ResponsibleAI #Accessibility #Inclusion #Education

@debby

Let's not think that running locally solves the centralisation problem. Unless you roll your own model you can't know what is in it or what it was trained on, and it remains a privacy risk for anything sucked into it. Even if it works for some uses, it is very hard for any of us to understand the risks here, and I definitely wouldn't trust any of the businesses creating models to do the right thing at this time.

@Catvalente @audubonballroon @calsnoboarder @craigduncan

@happyborg

It's true that simply running models locally doesn't completely address the issues, and it's important to be aware of the potential risks. How do you balance the benefits of LLMs with privacy concerns? Do you avoid using them altogether?

I believe that utilizing open-source models developed by communities, such as OpenCoder and Piper, is a step in the right direction. These models can be run locally and have open, reproducible training processes, giving users more control over their data and reducing privacy risks. I trust the FLOSS community, and using open-source models aligns with my values.

Additionally, I run and use LLMs offline to mitigate most privacy risks. By not connecting the LLM to the internet, except when necessary for specific tasks, we can minimize exposure to potential security threats.

You're right that unless you create your own model, you can't be sure there are no potential privacy risks. However, don't you think the risks posed by LocalAI are acceptable? Of course, self-training an LLM is the optimal way to achieve the highest level of control and security. While it may not be feasible for everyone due to current resource constraints, I'm hopeful it will become more accessible in the future. I'm considering it myself, but with the current economy, investing in an LLM server seems unreasonable for me. I'm curious, do you have experience with it?

@Catvalente @audubonballroon @calsnoboarder @craigduncan

@debby
Privacy is one issue, but there are many downsides to using LLMs. I won't open all that up; my main point is that running locally is a minor mitigation. It achieves little while reducing the effectiveness of LLMs due to reduced processing power and memory.

I use them locally occasionally but have not found them more useful than a web search except for a handful of small tasks, so for me quite a lot has to improve.

@Catvalente @audubonballroon @calsnoboarder @craigduncan

@debby @Catvalente @audubonballroon @craigduncan you are being waaaaay too rational, and as I am discovering, it's an uphill battle to be rational when people know just enough about something to make them afraid of the potential outcomes. AI in our work lives and our personal lives has now passed the threshold from avoidable to unavoidable. My 90-year-old father still screams at the phone he swore he would never use every time they update the OS and it requires adapting to a new function or user interface.
@Catvalente @calsnoboarder @audubonballroon @debby @craigduncan there are almost no open source models. Those that claim it are at best OSAID-conformant, which is heavily disputed by the open source community (but the OSI got paid for it), and usually not even that. Stolen works all the way down, plus ethical and environmental concerns still!

@debby Unfortunately, it's a misnomer to refer to most of the LLMs that you can run locally as open-source. The code that runs the inference is open-source, but the model weights are a big inscrutable blob, and the training data and process for those are usually proprietary. So the usual concerns about the training data (unauthorized use of copyrighted work) and process (exploitation of underpaid labor) still apply.

@Catvalente @audubonballroon @calsnoboarder @craigduncan

@debby

tell me you used an LLM to write this without telling me you used an LLM to write this.

@Catvalente @audubonballroon @calsnoboarder @craigduncan

@trochee @debby @Catvalente @audubonballroon @craigduncan the easiest way to spot AI generated copy is to look for bulleted lists and repeating content (two sentences that say the same thing in two different ways).

@calsnoboarder
LLMs have completely solved for the "minimum word count" problem for HS students & undergrads.

What they _haven't_ done is demonstrate any useful thinking. Solving for form without solving for content: the essence of bullshitting.

Unfortunately, it interacts badly with the management types, who often _use_ form to evaluate content -- "aha this is well-punctuated, has bullet points, & I understand part of it; must be right"

@debby @Catvalente @audubonballroon @craigduncan

@trochee @debby @Catvalente @audubonballroon @craigduncan yep, AI was the wishful thinking of middle management given life.
@calsnoboarder @trochee @debby @audubonballroon @craigduncan god, never has the internet had so many bulleted and numbered lists. No one did that shit for their confessional "I hate The Last Jedi" shitpost, but now it's everywhere
@trochee @debby @Catvalente @audubonballroon @calsnoboarder @craigduncan goddamn, I noped the hell out of that response really fast

dunno how to tell people this but I am not going to put your LLM slop response into an LLM to "summarize" for me, I'm just not gonna fucking read it.
@debby @Catvalente @audubonballroon @calsnoboarder @craigduncan what does "open-source" mean here? The source of an LLM is the training corpus, the instruction-learning material and the alignment input, as well as the actual training parameters. Do we have these?
@debby @audubonballroon @calsnoboarder @craigduncan Jesus fucking Christ, you used AI to respond to a post about AI. You replaced yourself. What is the point of you, then?
@Catvalente @debby @audubonballroon @craigduncan At the bare minimum? It showcases just how deeply AI is infecting even your personal life (unless you post here as part of your job).

@Catvalente
I find it amusing to think in terms of replacement, like when a message I type is read aloud by an AI voice. Do you think I'm being replaced? I don't see it that way. When you type a message and a screen reader with an AI voice reads it aloud, I don't see a replacement; I see a tool that makes things more accessible.

In fact, I often use voice typing instead of manual writing because it's faster and less tiring for me. With the help of an LLM, my transcript is spell-checked and proofread, and then read back to me in my own voice. I love this process—it's a simple and useful tool that reduces fatigue and makes writing more comfortable.

For me, AI is a liberating force that helps me express myself more freely and with less effort. It's a tool that enhances my abilities rather than replacing them. And I think that's something to be celebrated.

So, am I being replaced?

@calsnoboarder

@debby @Catvalente I don't think I ever implied that AI is replacing anyone... I'm not afraid of AI, nor do I see it as anything more than a tool that can be leveraged by anyone to achieve average results. In the hands of a real writer, AI can deliver better-than-average results. While I don't use it myself for anything, personal or professional, I have zero issue with folks who use it professionally or personally.
@Catvalente
I think it's amazing that techbros invented something that would make me root for Team Human so fucking hard.
@Catvalente without knowing the context for your post, just saying that the more I get into creative projects, the less I see the point of relying on AI. Sure, we can't all do decent work on everything, but there are always so many different ideas and ways to do things our own way (or with the help of fellow humans)... If for whatever reason you can't do A, you can do B, or C, or whatever. The possibilities are infinite; no reason to trade that in favor of even more AI :)