AI will never replace humans because AI has never taken a fat dookie

#antiAI

Some jərkoff put me on an "anti-AI" list. I thought this was accurate until I clicked it. What it specifically said was: "Wrong about AI/Crypto, probably wrong about other things too." Since this is a way that numbskulls use to block others, I thought I might as well double down. https://www.youtube.com/watch?v=_bP80DEAbuo

#AI #AntiAI #Datacenters #infrasound #Hum #TheHum #SubBass #AcousticWeapons #Pollution #NoisePollution #Hactivism #Leftist #TGIF #Green #Socialism #xAI #Grok #Colossus #TX #waste #bots #worlddestruction 💣

Datacenters Behaving Like Acoustic Weapons


So this is a mini essay I wrote for my employers, to explain why I refuse to use AI tools at work. They have recently been pushing it, and I wanted to make my position crystal clear and attempt to open up discussions. I'm not in a management position, so I don't have any voice when it comes to decision making. I also struggle to express myself verbally and miss out on context.

I initially sent it to my manager on Tuesday. She then had a meeting with her manager and brought it up, and he suggested I send it to him and 2 members of the extended leadership team above him, who are directly below the CEO.

My manager's manager's response was very positive; he messaged me to say it was very powerful and that he wanted to take the weekend to process it.

Anyway it's not my best writing but here it is.

#aislop #copilot #antiai #ai

Why I refuse to use AI tools such as Copilot, ChatGPT, Claude etc.
Written by human hands and mind – Jax Ven****

As *** leaders increase their push for employees to use AI tools, I would like to lay out the reasons why I refuse to do so. I feel the need to do this in order to show that I am not acting out of a fear of new technology, but as someone who understands technological progression and has been interested in this field for decades, studying virtual reality and AI at university 15 years ago and following the industry closely since. I also hope that this may convince you to pause and reflect, and commit to allowing every employee to choose for themselves if they wish to use AI tools, without being penalised or left behind should we choose not to.

I have always been very optimistic about what AI could bring us and how it could benefit our lives not just in the workplace, but also at home and for society in general. However, to borrow a phrase used often in online tech circles, ‘this is not the AI we were promised’.

Instead we have AI that is unreliable at best, and risking our lives and our environment at worst.

The environmental impact of data centres is huge. A recent report by the IEA (International Energy Agency) found that data centre energy usage had surged during 2025 and was set to continue.

“According to the report – Key Questions on Energy and AI – power consumption per AI task is declining rapidly, with efficiency improving at a rate unprecedented in energy history. However, more people are using AI, and energy-intensive uses – such as AI agents – are on the rise. As a result, electricity consumption from data centres is set to double by 2030, and power use from those focused on AI is poised to triple.”

https://www.iea.org/news/data-centre-electricity-use-surged-in-2025-even-with-tightening-bottlenecks-driving-a-scramble-for-solutions

The IEA article goes on to speculate that AI may drive the creation and large-scale adoption of greener tech, but we are not there yet and the current state of play is dangerous and damaging to our environment right now, regardless of future potential. Future potential does not cancel out current harm.

I do not wish to contribute to this.

In addition to the environmental impact, the creation of new data centres is having a detrimental effect on neighbouring communities, who are treated with blatant disregard. For example, residents of a town in Michigan voted overwhelmingly against a 21-million-square-foot data centre being built close to their town, and the town commission also voted to reject it due to the impact it would have on the local environment, electricity demand and traffic. Related Digital (OpenAI, Stargate Initiative) successfully sued the town and is going ahead anyway.

https://fortune.com/2026/05/06/ai-data-center-michigan-saline-politics-farmland/

These data centres are costing billions and billions. The people paying for them are well aware that they have enough money to be able to do whatever the hell they like while making promises of increased opportunities and future green tech. All while they risk destroying the communities surrounding them.

I do not wish to contribute to this.

AI is now being used in war. The same companies whose tools are used to summarise emails or generate a slide deck are now being used in cyber defence.

“WASHINGTON — On April 27, the Army convened 14 senior cybersecurity executives from leading technology companies at the Pentagon for the second iteration of its artificial intelligence tabletop exercise, an effort designed to accelerate adoption of agentic AI for cyber defense.
The exercise, known as AI TTX 2.0, brought together C-suite leaders from companies including Amazon Web Services, Google, Microsoft, OpenAI, CrowdStrike, Palo Alto Networks and others alongside Army and Department of War leadership. The Office of the Principal Cyber Advisor hosted the half-day event, with design and moderation support from the Special Competitive Studies Project, and partnering organizations including U.S. Cyber Command, U.S. Army Cyber Command and the Army Cyber Institute at West Point.”
https://www.army.mil/article/292158/army_convenes_industry_leaders_for_ai_tabletop_exercise_focused_on_cyber_defense

I do not wish to contribute to this.

The effect of regular use of AI tools on cognitive function is still being studied but so far the results are extremely concerning. I enjoy using the skills I’ve developed over the last 30 years. I enjoy figuring things out and learning new things. I enjoy putting my thoughts into words with my own voice. These are the things that motivate me.

I firmly believe that the more we rely on AI tools, the easier it becomes to offload simple tasks to them, and the temptation to have them do as much of our workload as possible grows too great, especially when we are being told to use AI tools to increase our productivity.

“A new MIT study titled, Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task, has found that using ChatGPT to help write essays leads to long-term cognitive harm—measurable through EEG brain scans. Students who repeatedly relied on ChatGPT showed weakened neural connectivity, impaired memory recall, and diminished sense of ownership over their own writing. While the AI-generated content often scored well, the brains behind it were shutting down.”

https://publichealthpolicyjournal.com/mit-study-finds-artificial-intelligence-use-reprograms-the-brain-leading-to-cognitive-decline/

I do not wish to be a victim of this.

Things I am also concerned about, but have not written about here in great detail (or this would be 20 pages long), are:

‘Enshittification’ of the internet: can no longer trust search results, or that academic papers, news reports, images, videos and music are not AI created.

Security risks: apps and software developed by 'vibe coding' are being found to contain serious security flaws that would enable hackers to obtain sensitive customer and company data. Who is checking vibe coders' code?

https://www.forbes.com/sites/jodiecook/2026/03/20/vibe-coding-has-a-massive-security-problem/

AI is a technology that I would love to be using, and it should be a natural progression of my career. I should relish digging in and getting to know how everything works, being creative and finding new ways to use it. That's who I am. I would fully embrace it and advocate for it. But not in its current format, with its current harms, and its current masters. The likes of Elon Musk, Sam Altman and Jensen Huang are billionaires who do not live in the same reality as the rest of us, and do not have our best interests at heart. The AI models these people are enabling are not the AI we were promised. For all of the reasons outlined above, I cannot in good conscience contribute by becoming a user. This is at the core of my ethics and my beliefs, and it would devastate me to be forced to take part. This may seem dramatic, but I am just one of many, many people worldwide who are also refusing to take part, and that number is growing day by day. I guess we are 'conscientious objectors'.

It's not just about an individual's personal use. One could argue that the amount of energy one person uses, or their monetary contribution to AI companies from simple day-to-day workplace tasks, is not great enough to be an issue. However, it is about collective use and about ethical standpoints. Do we, as a company with a mission to help people embrace greener technology, really want to contribute to all of these things? Sometimes the only power we have is to choose where our money goes. It's something I do as an individual consumer, and something that companies can do on a grander scale to take a stand and be on the right side. Yes, I understand the need to increase productivity and remain competitive, but we were already on the right track before the push to use AI tools. I also believe it is a mistake to rely on them too much, as subscription costs are set to soar and the 'AI bubble' predictions are looking more and more likely. I think it's far better to pause or greatly limit use, allow employees to decide they don't wish to use it at all, and see what the state of play is in a year or two. 'Fear of Missing Out' is a very real phenomenon that I sadly see playing out here.

I guarantee I am not the only one at *** who feels this way, but with the job market as it is right now (thanks to AI) it can be very risky to speak out. I know people in other companies who are being forced to use AI tools or risk losing their jobs, and I would like to think that we are better than that at ***, but this still feels risky. However, I cannot stay silent any more, and I need to make my position, and my reasoning for it, crystal clear. I hope that everything I have outlined can be given serious thought.

Thank you for reading and I look forward to discussing this in more detail should you wish.

Water scarcity was already a concern, and it enrages me no end that we just let tech companies and politicians build data centres that suck up all the drinking water, increase pollution and energy costs. And for what? To summarise an email that I can read? To make shitty images?

#fuckai #antiai

#ChaoticEvil idea:

Code a li'l script that generates text through a #MarkovChain and uses it to consume all free responses from the user's #ChatGPT free account (with the user's consent) in order to drive up costs for #OpenAI. Repeat for other #slop merchants. Create as many accounts as is practical.

(Don't. It will make innocent people suffer as billionaires externalise liability to the working class.)
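(Heeding the warning above, here's only the harmless half.) A Markov chain text generator of the kind mentioned is a standard toy algorithm: map each word to the words that follow it in a corpus, then walk the map with random picks. A minimal word-level sketch in Python, with all names my own invention:

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each run of `order` words to the list of words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=20, seed=None):
    """Walk the chain from a random start, picking a random successor each step."""
    rng = random.Random(seed)
    key = rng.choice(list(chain.keys()))
    out = list(key)
    while len(out) < length:
        successors = chain.get(key)
        if not successors:  # dead end: the key only appeared at the corpus end
            break
        nxt = rng.choice(successors)
        out.append(nxt)
        key = key[1:] + (nxt,)
    return " ".join(out)
```

Higher `order` values produce more coherent (but more plagiarised) output; order 1 produces classic word salad, which is rather the point of the joke.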

#AntiAI #AISlop #Copilot #Claude #MicroSlop #Anthropic

All this “don’t fight AI, it’s inevitable” talk really has “close your eyes and think of England” vibes.

That it’s usually being squawked by men who don’t seem to have any concept of what no means… 😑

https://en.wiktionary.org/wiki/close_one%27s_eyes_and_think_of_England

#antiAI


Me: writes post about commissioning human artists because I hate AI.

Gets a response on Mastodon, clearly written by AI, from an "artist" who clearly uses AI. Like… I get that we're in a dystopia, but come on, man.

This is WHY I work with human artists. They can see your slop a mile away.

#antiai

Accelerationism on a social level is 100% an issue that AI seeks to escalate further, at the cost of violence to our water, to our earth, to other people.

How many ways can we push for more connection to do the things people would normally use AI to generate? How do we make those processes fun, creative, and in meat space together?

Sometimes it's really just a process. We don't always need to know how to code for that. How can that become a collaboration between friends?

#AntiAI

Apparently Linux is sloppening at full speed now... oh dear. I bet that's not going to introduce a higher rate of new hidden bugs at all. (/s) https://finance.biggo.com/news/202605111233_Linux_7.1-rc3_AI_Patch_Surge_New_Normal #Linux #FOSS #FLOSS #AI #noai #antiai #LLM #security #kernel
Linux 7.1-rc3 Explodes with AI-Generated Patches, Torvalds Declares It the New Normal

Linus Torvalds has officially released Linux 7.1-rc3, and with it comes a stark proclamation: the days of modest, predictable patch cycles are over. The Linux kernel is now living in an era of massive, AI-fueled code surges, and Torvalds believes this is not a temporary spike but the new baseline for development.

The AI-Driven Productivity Boom

For the past few release cycles, Torvalds had noticed an unusual uptick in the volume of incoming kernel patches. Initially, he dismissed it as a temporary anomaly—a "blip" in the data. However, with the release of 7.1-rc3, he has changed his tune. Given that the kernel is well past its major version jump, yet the current release is significantly larger than expected for this stage in the cycle, Torvalds now asserts that this is the new normal. He attributes the surge directly to the widespread adoption of AI coding tools by developers. In previous cycles, this point in the release would see developers consolidating features. Now, AI tools enable them to be "a bit more productive," submitting more code each week and fundamentally altering the pace of kernel development.

Networking Dominates, Hardware Support Expands

This release cycle is heavily defined by networking. A full third (33%) of all patches are dedicated to networking core and drivers, making it the single largest area of focus. Beyond routine fixes, the update brings notable hardware compatibility improvements. For the first time, Linux 7.1-rc3 includes support for USB-C networking on Apple Macs. It also adds specialized audio handling for high-end DJ equipment, specifically the AlphaTheta (formerly Pioneer DJ) EUPHONIA series. On the architecture front, significant work has been poured into the Chinese LoongArch CPU architecture, with patches targeting KVM virtualization performance and interrupt handling.

Patch distribution: Networking (33%) dominates the 7.1-rc3 cycle, followed by security/stability fixes and hardware support patches.

A Surge in Memory Safety and the Rise of Rust

One of the most interesting aspects of this release is the high volume of memory safety patches. These fixes, often targeting "use-after-free" vulnerabilities in drivers like Bluetooth and GPU modules, are a constant headache for kernel maintainers. However, the report notes a silver lining: the concurrent increase in the use of the Rust language within the kernel. Rust has memory safety built into its core design. As more kernel components are rewritten in Rust, the long-term hope is that the need for these high-volume, manual memory safety fixes will gradually diminish, leading to a more stable and secure kernel foundation.

Key hardware additions: Apple Mac USB-C networking, AlphaTheta/Pioneer DJ EUPHONIA audio support.

Looking Ahead: Stability and Timely Release

With the patch volume reaching new heights, the immediate concern is whether this will delay the final release of Linux 7.1. Torvalds and the team are hopeful that the larger patch sets will not cause any delays, allowing the stable version to be pushed out to the public on schedule. A timely release is crucial for users eager to get support for new hardware, from the latest Apple Mac accessories to professional DJ equipment. For now, the Linux kernel is officially adapting to a faster, AI-accelerated development rhythm.
