the first ai agent worm
What? They’re just computer programs. Almost all computers have high quality entropy sources that can generate truly random numbers. LLMs’ whole thing is basically turning sequences of random numbers into sequences of less random stuff that makes sense. They have a built-in dial for nondeterminism, and it’s almost never at zero.
I feel like I’m missing your meaning because the literal interpretation is nonsense.
Yes and no. The models themselves are just a big pile of floating-point numbers that represent a compression of the dataset they were trained on. The patterns in that dataset will absolutely dominate the output of the model even if you tweak the inference parameters. Try it: ask it ten times to make a list of 20-30 random words, each time in a fresh context. The alignment between those lists will be uncanny. Hell, you’ll even see repeats within a single list. Model size matters here, with the small ones (especially quantized ones) having fewer patterns, or bigger semantic gravity wells. But even the big boys will give you the same slop patterns that are mostly fixed. Unless you are specifically introducing more entropy into the prompt, you can mostly treat a fixed prompt as a function with a somewhat deterministic output (within given bounds).
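If you want to quantify that “uncanny alignment,” Jaccard similarity over the word sets does the job. A minimal sketch; the word lists below are made up for illustration, not real model output:

```python
def jaccard(a, b):
    """Set overlap between two word lists: |A ∩ B| / |A ∪ B|."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

# Illustrative lists only -- but this is the kind of overlap you
# tend to see when asking a model for "random" words in fresh contexts.
run1 = ["ephemeral", "quantum", "serendipity", "nebula", "whisper"]
run2 = ["serendipity", "quantum", "labyrinth", "nebula", "ephemeral"]
run3 = ["zenith", "whisper", "quantum", "serendipity", "cascade"]

print(jaccard(run1, run2))  # high overlap despite two "fresh" runs
```

Truly random 20-word draws from a decent-sized vocabulary should score near zero; repeated model runs usually don’t.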
This means that the claims in the OP are simply not true. At least not without some caveats and specific workarounds to make them true.
At least not without some caveats and specific workarounds to make them true.
Luckily hackers are terrible at doing that, otherwise we might be in trouble.
Ask it ten times to make a list of 20-30 random words
This is true of out-of-the-box models, but it’s not a universal rule. You could turn the temperature all the way up and get something way more random, probably to the point of incoherence.
The trick is balancing that with keeping the model doing something useful. If you’re clever, you could leverage /dev/random as a tool to manually inject real entropy while keeping the run reproducible (just record the bytes you injected).
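A toy sketch of what the temperature dial actually does, plus the entropy-injection trick (here via os.urandom standing in for /dev/random, with the seed recorded so the run can be replayed). The logits are a made-up three-token vocabulary, not a real model:

```python
import math
import os
import random

def sample_with_temperature(logits, temperature, rng):
    """Temperature-scaled softmax sampling over a toy vocabulary.

    temperature near 0 approaches greedy argmax; large values
    flatten the distribution toward uniform (i.e. incoherence).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# Inject real entropy, but record the seed so the run is replayable.
seed = int.from_bytes(os.urandom(8), "big")
rng = random.Random(seed)

logits = [3.0, 1.0, 0.2]  # hypothetical next-token scores
low_t = [sample_with_temperature(logits, 0.1, rng) for _ in range(20)]
high_t = [sample_with_temperature(logits, 10.0, rng) for _ in range(20)]
```

At temperature 0.1 essentially every draw picks the top token; at 10.0 the three tokens come out nearly uniformly, which is the “more random, probably incoherent” end of the dial.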
I think this is stupid and I’ll tell you why.
If you’re able to install OpenClaw on a system, you already have the access you need to install literally anything else, and direct that system to do whatever you want. Why would I install an AI agent to carry out my exploit when I could just install conventional malware that behaves deterministically and won’t randomly hallucinate behaviors that will expose the fact my victim has been hacked?
AI worms are just regular malware worms, but worse.
Press x to doubt.
Ignoring the question of “could current AI do this?”, the fact remains that most PCs that could get infected either can’t run the model (not enough RAM), or run it with an immediately noticeable spike in CPU usage (100% for hours/days), or a spike in GPU usage that would grind most other tasks to a standstill.
The statement was written by one of the architects of ActivityPub. I can assure you, she is quite serious about this thesis. Whether it happens exactly like that or not isn’t for me to judge, since I’m not a cybersecurity expert.
I do believe that, as a general rule, agentic network activity is indistinguishable from malware.
We are indeed living inside the stupidest version of Cyberpunk. Time to start building AI countermeasures.
I think we have more to fear from using AI to generate permutations of existing attacks, in a way that evades detection of known behaviors, malware hashes, and so on. Also, having a command & control (C2) style attack dynamically evolve with help from AI, based on intel from the target? That’s kind of novel and scary in its own way.
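For a sense of why hash-keyed signature databases are so brittle against generated permutations, a minimal sketch (the payload bytes are made up):

```python
import hashlib

payload = b"\x90\x90\xeb\xfe original malware bytes"
mutated = bytearray(payload)
mutated[0] ^= 0x01  # flip a single bit -- a trivial "permutation"

h1 = hashlib.sha256(payload).hexdigest()
h2 = hashlib.sha256(bytes(mutated)).hexdigest()
print(h1 != h2)  # signature databases keyed on hashes miss the variant
```

Any one-byte change produces an entirely different digest, which is exactly why behavior-preserving permutations defeat hash matching; AI just automates generating them at scale.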
Meanwhile, hacking in and running a rogue AI client on a target system in an enterprise setting… well, you’d have to be blind not to notice all the back-and-forth token and response traffic. It would be the fattest, noisiest C2-style attack around, and probably easy to detect with conventional means.
Otherwise, OP and this copypasta are right to be concerned. It’s not like the typical home user is watching bytes sent/received on their home router. This could manifest as a very potent botnet problem.
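A toy illustration of the “watch bytes sent” point: flag seconds where egress stays above a threshold for a sustained run, which is what constant token traffic looks like next to bursty normal browsing. The threshold and samples are invented for the sketch:

```python
def flag_sustained_egress(samples, threshold_bps, min_seconds):
    """Flag timestamps where upload rate stays above threshold_bps
    for at least min_seconds in a row.

    samples: list of (second, bytes_sent_that_second) tuples.
    """
    run = 0
    flagged = []
    for t, b in samples:
        run = run + 1 if b >= threshold_bps else 0
        if run >= min_seconds:
            flagged.append(t)
    return flagged

# Hypothetical traffic: one burst, then a sustained chatty plateau.
samples = [(0, 120), (1, 5000), (2, 5200), (3, 5100), (4, 90)]
print(flag_sustained_egress(samples, threshold_bps=4000, min_seconds=3))  # -> [3]
```

Real detection would look at flow metadata rather than raw byte counts, but the shape of the signal — a long plateau instead of bursts — is the same thing an enterprise IDS would key on.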
We are indeed living inside the stupidest version of Cyberpunk.
I just wanted robo-legs man…