Been Jammin

94 Followers
445 Following
181 Posts

Making rocks talk to each other since 2008. Metal drummer, Aspiring #solarpunk, Epaper enthusiast, dreaming of #permacomputing. He/him

woods goblin in the Shenandoah

So this is why I rejoice when I see the ants again. When the bees return, and even maybe the mosquitos (some of them) and the shy beetles, and the gnats too tiny to name.

@mcc

The more AI slop drowns the web, the happier I am that I keep a grimoire to copy/paste from. It feels like that's going to be more and more important going forward.

RE: https://infosec.exchange/@tychotithonus/115724392195833813

It's great how many different companies this post could be about

One overused cliché I see in discussions about “ethical AI” is the idea of making autonomous systems, robots, etc., “three laws compliant”.

While it is obviously a credit to the imagination of Asimov, I find it to be a very clear sign that the people who say that robots need to follow these laws IRL haven’t actually read his novels. You only need to read the first few stories that Asimov wrote to understand “oh, huh, these Three Laws don’t work”.

The Three Laws are a literary device, not a scientific one. Asimov invented them to explore the conflicts among the laws themselves, and between artificial intelligences and human intelligence. They are deliberately vague and loose: a vehicle through which Asimov explores his stories.

They are, in essence, a thought experiment.

Most crucially: you can’t apply them to real robots/AI, because unlike Asimov’s fictional creations, no autonomous system that exists today actually has the capacity for foresight or reasoning that would allow it to conclude whether it is following the Three Laws.

I will ask ChatGPT
I will boil the last of our drinking water
Salt the soil of the scrub-lands
Tear the pages from books and feed them to my fire

I will ask copilot
I will scramble your library
reanimate and puppet the faces of your dead ancestors
I will bury you in poor copies of your dreams

I will ask grok
I will fall silent and never speak to you
I will talk only to myself lost in a maze of my own fantasies
I will forget all who cannot compliment me
I will decouple my soul from this world.

Fucking GROSS.

So because »gestures at this timeline«, the first appliance from Cory Doctorow's ( @pluralistic ) "Unauthorized Bread" is now on the market. A 'smart oven' you buy and subscribe to the company's meals, which bear a QR code you scan so the oven, amidst its ongoing datastream to the mothership, 'knows' how to cook it. The company, which I will not name (and you please won't either), claims you can use the oven to cook your own food, too. But you have to have their app and a wifi connection to set it up and to operate most of its controls, which means at any moment the company can go "lolnope" or put controls (or the ability to cook unauthorized food) behind a paywall, or brick the thing deliberately, or sell your food logs, or do any of the other things Doctorow described better than I could, which is why I've linked the story here.

https://arstechnica.com/gaming/2020/01/unauthorized-bread-a-near-future-tale-of-refugees-and-sinister-iot-appliances/

Unauthorized Bread: Real rebellions involve jailbreaking IoT toasters

Cory Doctorow's book, Radicalized, is up for a CBC award. To celebrate, here's an excerpt.

Ars Technica

Hot take: llm "guardrails" are worthless and will always be ineffective; they are a throwback to a premodern model of security as a list of prohibitions against actions instead of a more modern, holistic approach where the system as a whole is structured such that impermissible operations fail as a consequence of the system architecture.

The core mechanism of llm systems relies on the random elision and remixing of inputs; all such guardrail systems exist within this milieu, and are thus - architecturally, according to how llms work as a baseline - subject to that same elision; therefore, you can never be assured that a given guardrail directive will be present in the context window for the llm at the time of processing.

I personally think this is blindingly obvious, but I do understand why people who are bought into the tech might not understand that any attempt to 'instruct' an llm as to 'alignment' is going to be subject to an erosion of those 'protections' as an inherent part of the function of the machine.

Bluntly, if you don't want the llm to "do" a thing, you must make that thing impossible for the llm to do. Do not give it access to your filesystem; do not give it access to your production infrastructure; do not give it access to your children; do not give it access to anything unsupervised whatsoever.
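A toy sketch of what "structurally impossible" means in practice, in Python (all names here are hypothetical, not from any real framework): instead of a guardrail prompt asking the model not to touch the filesystem, the tool dispatcher simply never exposes a filesystem capability, so the prohibition holds regardless of what ends up in the context window.

```python
# Hypothetical tool-dispatch layer for an LLM agent. The model can emit
# any tool call it likes; only the capabilities in this allowlist exist
# in the process, so "read_file" fails architecturally, not by request.

ALLOWED_TOOLS = {
    "add": lambda a, b: a + b,
    "word_count": lambda text: len(text.split()),
}

def dispatch(tool_name, **kwargs):
    """Run a model-requested tool call against the allowlist only."""
    tool = ALLOWED_TOOLS.get(tool_name)
    if tool is None:
        # Unknown capability: hard failure, no prompt involved.
        raise PermissionError(f"no such capability: {tool_name}")
    return tool(**kwargs)

# Permitted call works:
print(dispatch("add", a=2, b=3))  # 5

# A model asking for filesystem access gets a structural refusal,
# because the capability does not exist here at all:
try:
    dispatch("read_file", path="/etc/passwd")
except PermissionError as e:
    print(e)
```

The point of the sketch: no amount of prompt injection can conjure a tool the dispatch layer never had.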

And do not use an llm for any system where determinacy of operation is even slightly important, for that matter.

https://www.theregister.com/2025/11/14/ai_guardrails_prompt_injections_echogram_tokens/?td=keepreading

Researchers find hole in AI guardrails by using strings like =coffee

Who guards the guardrails? Often the same shoddy security as the rest of the AI stack

The Register
Now imagine someone could do this for all those Echos, actually unlocking the hardware from Amazon's increasingly stupid service and letting you use that nice mic and speaker and processing power to do something actually cool.

New song out today ⚠️ Rewinding to the turn of the millennium ✨ the glow of CRTs 📺 the hum of consoles 🎮 Winamp looping in the background 💿 the future feels close again 💫 Welcome to Galaxy..🪐

👉 https://fanlink.tv/glx