If any of you are asking “how do I find a human artist?”

Fandom tags are a good place to start. There are sooo many incredible artists in fan spaces drawing their favorite blorbos kissing and many of them take commissions. It’s not a perfect #antiai system, but it’s a good filter.

I remain kinda heartbroken that I don't really see FreeBSD following NetBSD's example of banning slop code from the project.

However, it's the BSD that remains the easiest to slot into my life; and at this point, it looks like slop code is going to infect the vast majority of medium and large scale FLOSS projects.

Linux distros all have it in their kernels, and it's already littered through low-level dependencies and programming languages, even if individual distros ban it from their own project code.

OpenBSD has some sort of quasi-ban except for those tmux commits no one likes talking about, and who knows where else.

AND it's already infected a good amount of stuff in NetBSD's pkgsrc.

SO, it's increasingly feeling like it's going to litter your computer no matter what you do.

SO I'm trying to make some sort of peace with using software tainted by it, WHILE still avoiding using LLMs directly. #AntiAI #NotAI

#MissKittyRaw The ridiculousness of #bigotry. The unthinkingness of bigotry. Judgements are bigotry. Why do you even have an #opinion about most things? Probably cuz you can't stop it. So you have an opinion. That's a fucking burden. The #hypocrisy of a game #dev making an #anti-AI & #NFT list. 😹😹😹

okay. thanks for letting me know.

#ai #antiai

Ars Technica: Send the arXiv AI-generated slop, get a yearlong vacation from submissions. “One of the people involved in the physics and astronomy preprint server arXiv used a social media thread to announce that any inappropriate AI-produced content submitted to the server will result in a one-year ban and a permanent requirement that future publications undergo peer review before the arXiv […]

https://rbfirehose.com/2026/05/16/ars-technica-send-the-arxiv-ai-generated-slop-get-a-yearlong-vacation-from-submissions/

AI will never replace humans because AI has never taken a fat dookie

#antiAI

Some jərkoff put me on an "anti-AI" list. I thought this was accurate until I clicked it. Specifically, it said: "Wrong about AI/Crypto, probably wrong about other things too." Since this is a way that numbskulls use to block others, I thought I might as well double down. https://www.youtube.com/watch?v=_bP80DEAbuo

#AI #AntiAI #Datacenters #infrasound #Hum #TheHum #SubBass #AcousticWeapons #Pollution #NoisePollution #Hactivism #Leftist #TGIF #Green #Socialism #xAI #Grok #Colossus #TX #waste #bots #worlddestruction 💣

Datacenters Behaving Like Acoustic Weapons


So this is a mini essay I wrote for my employers, to explain why I refuse to use AI tools at work. They have recently been pushing it, and I wanted to make my position crystal clear and attempt to open up discussions. I'm not in a management position, so I don't have any voice when it comes to decision making. I also struggle to express myself verbally and miss out on context.

I initially sent it to my manager on Tuesday. She then had a meeting with her manager and brought it up, and he suggested I send it to him and 2 members of the extended leadership team above him, who are directly below the CEO.

My manager's manager's response was very positive; he messaged me to say it was very powerful and he wanted to take the weekend to process it.

Anyway, it's not my best writing, but here it is.

#aislop #copilot #antiai #ai

Why I refuse to use AI tools such as Co-pilot, ChatGPT, Claude etc.
Written by human hands and mind – Jax Ven****

As *** leaders increase their push for employees to use AI tools, I would like to lay out the reasons why I refuse to do so. I feel the need to do this in order to show that I am not acting out of a fear of new technology, but as someone who understands technological progression and has been interested in this field for decades, studying virtual reality and AI at university 15 years ago and following the industry closely since. I also hope that this may convince you to pause and reflect, and commit to allowing every employee to choose for themselves if they wish to use AI tools, without being penalised or left behind should we choose not to.

I have always been very optimistic about what AI could bring us and how it could benefit our lives not just in the workplace, but also at home and for society in general. However, to borrow a phrase used often in online tech circles, ‘this is not the AI we were promised’.

Instead we have AI that is unreliable at best, and risking our lives and our environment at worst.

The environmental impact of data centres is huge. A recent report by the IEA (International Energy Agency) found that data centre energy usage had surged during 2025 and was set to continue.

“According to the report – Key Questions on Energy and AI – power consumption per AI task is declining rapidly, with efficiency improving at a rate unprecedented in energy history. However, more people are using AI, and energy-intensive uses – such as AI agents – are on the rise. As a result, electricity consumption from data centres is set to double by 2030, and power use from those focused on AI is poised to triple.”

https://www.iea.org/news/data-centre-electricity-use-surged-in-2025-even-with-tightening-bottlenecks-driving-a-scramble-for-solutions

The IEA article goes on to speculate that AI may drive the creation and large-scale adoption of greener tech, but we are not there yet and the current state of play is dangerous and damaging to our environment right now, regardless of future potential. Future potential does not cancel out current harm.

I do not wish to contribute to this.

In addition to the environmental impact, the creation of new data centres is having a detrimental effect on neighbouring communities, who are treated with blatant disregard. For example, residents of a town in Michigan voted overwhelmingly against having a 21 million square foot data centre built close to their town, with the town commission also voting to reject it due to the impact it would have on the local environment, electricity demand and traffic. Related Digital (OpenAI, Stargate Initiative) successfully sued the town and are going ahead anyway.

https://fortune.com/2026/05/06/ai-data-center-michigan-saline-politics-farmland/

These data centres are costing billions and billions. The people paying for them are well aware that they have enough money to be able to do whatever the hell they like while making promises of increased opportunities and future green tech. All while they risk destroying the communities surrounding them.

I do not wish to contribute to this.

AI is now being used in war. The same companies whose tools are used to summarise emails or generate slide decks are being used in cyber defence.

“WASHINGTON — On April 27, the Army convened 14 senior cybersecurity executives from leading technology companies at the Pentagon for the second iteration of its artificial intelligence tabletop exercise, an effort designed to accelerate adoption of agentic AI for cyber defense.
The exercise, known as AI TTX 2.0, brought together C-suite leaders from companies including Amazon Web Services, Google, Microsoft, OpenAI, CrowdStrike, Palo Alto Networks and others alongside Army and Department of War leadership. The Office of the Principal Cyber Advisor hosted the half-day event, with design and moderation support from the Special Competitive Studies Project, and partnering organizations including U.S. Cyber Command, U.S. Army Cyber Command and the Army Cyber Institute at West Point.”
https://www.army.mil/article/292158/army_convenes_industry_leaders_for_ai_tabletop_exercise_focused_on_cyber_defense

I do not wish to contribute to this.

The effect of regular use of AI tools on cognitive function is still being studied but so far the results are extremely concerning. I enjoy using the skills I’ve developed over the last 30 years. I enjoy figuring things out and learning new things. I enjoy putting my thoughts into words with my own voice. These are the things that motivate me.

I thoroughly believe that the more we rely on AI tools, the easier it will become to offload simple tasks to them, and the temptation to have them do as much of our workload as possible will be too great, especially when we are being told to use AI tools to increase our productivity.

“A new MIT study titled, Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task, has found that using ChatGPT to help write essays leads to long-term cognitive harm—measurable through EEG brain scans. Students who repeatedly relied on ChatGPT showed weakened neural connectivity, impaired memory recall, and diminished sense of ownership over their own writing. While the AI-generated content often scored well, the brains behind it were shutting down.”

https://publichealthpolicyjournal.com/mit-study-finds-artificial-intelligence-use-reprograms-the-brain-leading-to-cognitive-decline/

I do not wish to be a victim of this.

Things I am also concerned about, but have not written about here in great detail (or this would be 20 pages long), are:

‘Enshittification’ of the internet: we can no longer trust search results, or trust that academic papers, news reports, images, videos and music are not AI-created.

Security risks: apps and software developed by ‘vibe coding’ are being found to contain serious security flaws that would enable hackers to obtain sensitive customer and company data. Who is checking vibe coders’ code?

https://www.forbes.com/sites/jodiecook/2026/03/20/vibe-coding-has-a-massive-security-problem/

AI is a technology that I would love to be using, and it should be a natural progression of my career. I should relish digging in and getting to know how everything works, being creative and finding new ways to use it. That’s who I am. I would fully embrace it and advocate for it. But not in its current format, with its current harms, and its current masters. The likes of Elon Musk, Sam Altman and Jensen Huang are billionaires who do not live in the same reality as the rest of us, and do not have our best interests at heart. The AI models these people are enabling are not the AI we were promised. For all of the reasons outlined above, I cannot in good conscience contribute by becoming a user. This is at the core of my ethics and my beliefs, and it would devastate me to be forced to take part. This may seem dramatic, but I am just one of many, many people worldwide who are refusing to take part, and that number is growing day by day. I guess we are ‘conscientious objectors’.

It’s not just about an individual’s personal use. One could argue that the amount of energy one person uses, or their monetary contribution to AI companies from simple day-to-day workplace tasks, is not great enough to be an issue. However, it is about collective use and about ethical standpoints. Do we, as a company with a mission to help people embrace greener technology, really want to contribute to all of these things? Sometimes the only power we have is to choose where our money goes. It’s something I do as an individual consumer, and something that companies can do on a grander scale to take a stand and be on the right side. Yes, I understand the need to increase productivity and remain competitive, but we were already on the right track before the push to use AI tools. I also believe it is a mistake to rely on them too much, as subscription costs are set to soar and the ‘AI bubble’ predictions are looking more and more likely. I think it’s far better to pause or greatly limit use, allow employees to decide they don’t wish to use it at all, and see what the state of play is in a year or two. ‘Fear of Missing Out’ is a very real phenomenon that I sadly see playing out here.

I guarantee I am not the only one at *** who feels this way, but with the job market as it is right now (thanks to AI) it can be very risky to speak out. I know people in other companies who are being forced to use AI tools or risk losing their jobs, and I would like to think that we are better than that at ***, but this still feels risky. However, I cannot stay silent any more and need to make my position, and my reasoning for it, crystal clear, and I hope that everything I have outlined can be given serious thought.

Thank you for reading and I look forward to discussing this in more detail should you wish.

Water scarcity was already a concern, and it enrages me no end that we just let tech companies and politicians build data centres that suck up all the drinking water, increase pollution and energy costs. And for what? To summarise an email that I can read? To make shitty images?

#fuckai #antiai

#ChaoticEvil idea:

Code a li'l script that generates text through a #MarkovChain and uses it to consume all free responses from the user's #ChatGPT free account (with the user's consent) in order to drive up costs for #OpenAI. Repeat for other #slop merchants. Create as many accounts as it is practical.

(Don't. It will make innocent people suffer as billionaires externalise liability to the working class.)
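(Setting the rightly-discouraged abuse angle aside, the Markov chain bit itself is a harmless few lines of Python. A minimal word-level sketch, with function names of my own choosing:)

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each `order`-word prefix to the list of words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        prefix = tuple(words[i:i + order])
        chain[prefix].append(words[i + order])
    return chain

def generate(chain, length=20, seed=None):
    """Random-walk the chain, emitting up to `length` words."""
    rng = random.Random(seed)
    prefix = rng.choice(list(chain))  # start from a random known prefix
    order = len(prefix)
    out = list(prefix)
    while len(out) < length:
        followers = chain.get(tuple(out[-order:]))
        if not followers:  # dead end: no word ever followed this prefix
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

(Feed it any text you like; higher `order` values produce more coherent, more plagiaristic output.)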

#AntiAI #AISlop #Copilot #Claude #MicroSlop #Anthropic