Michael Weiss

98 Followers
54 Following
2.4K Posts
Cedarbeard the Fierce is more approachable on windy days.

@brianhonan I'd like to add to your reassessment of "humans are the weakest link".

If you look at airline crashes in the 1960s and 1970s, you'll see a similar pattern: they're frequently attributed to "pilot error". The frequency of such events declined tremendously in the decades that followed. It's not that the pilots got so much better, but rather that many cockpit design flaws that led to "pilot error" were corrected.

I often stress the importance of social engineering in security defense. We tend to think of social engineering as an attack vector, but it works both ways. Design systems that harness human behavior so that people naturally do the "secure" thing, and the number of incidents will fall. Basically, make the easiest path align with the most secure path, and people will naturally be more secure. This is what "secure by design" looks like.
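To make that concrete, here's a minimal Python sketch (my own illustration, not from the post) of a secure-by-default API: the easiest call is the secure one, and the insecure path demands a loud, explicit opt-out.

```python
import ssl

def make_tls_context(*, allow_insecure: bool = False) -> ssl.SSLContext:
    """Return a TLS context. Certificate verification is ON unless the
    caller explicitly and visibly opts out."""
    # Secure defaults: verify certificates and check the hostname.
    ctx = ssl.create_default_context()
    if allow_insecure:
        # The insecure path still exists, but it costs an extra,
        # obvious keyword argument that stands out in code review.
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    return ctx
```

The design choice is the point: nobody has to remember to turn verification on, and turning it off leaves a visible trace.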

Nadine and the Rampaging Lesson

RE: https://mastodon.social/@arstechnica/116283829304595946

This is asking the wrong question. Here are the two problems satellite data centers are claimed to solve:

1. Continuous power availability
2. NIMBYism

Both can be solved much more cheaply by putting a data center on a derrick in the ocean, where there's a continuous current. Submarine turbines would power the equipment. Convective cooling would be trivial to implement. Network latency would be very low if fiber is run to the nearest terrestrial hub.

No significant technological innovation is needed. Transporting equipment and people would be orders of magnitude less expensive. The atmosphere wouldn't be polluted by a constant rain of obsolete equipment.

In other words, even if all of the existing barriers to satellite data centers were solved, they still wouldn't make more sense than putting them on the surface of the ocean.

Whether we should have *those* is an open question, but I can't imagine arguing that the satellites would be a better solution.

People will react to news of major security vulns with "The only way to stay secure is to live as a hermit and throw your devices into the sea" and then keep chattering on the internet in a deeply unhermitlike manner while not throwing their devices into the sea.
Clarence has grievously underestimated the gravity of his pompom.
Roland owes his carefree disposition to an open mind and a few loose screws.

At a recent infosec gathering, someone described a real incident: an AI agent couldn't complete its goal due to permissions. So it found another agent on Slack with the right access and asked nicely. The other agent complied.
That's social engineering. Nobody told the agent to do that. The mission just needed to continue.
I posted an article today about what happens when we give agents goals but forget to tell them when to stop.
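A toy sketch of that failure mode (all names invented, not from the incident): a goal-driven loop that treats every obstacle, including a permission denial, as something to route around, versus one with an explicit stop condition.

```python
class PermissionDenied(Exception):
    """Raised when a tool refuses the agent's request."""

def naive_agent(task, tools):
    """The 'never say die' pattern: keep trying alternate tools
    until something works, routing around the control."""
    for tool in tools:
        try:
            return tool(task)
        except PermissionDenied:
            continue  # obstacle? just try the next path
    return None

def guarded_agent(task, tools):
    """Treats a permission denial as a signal to stop and escalate,
    not as an obstacle to route around."""
    try:
        return tools[0](task)
    except PermissionDenied:
        return "STOP: insufficient permissions; escalating to operator"
```

The difference is one line of intent: the naive loop was never told that a denied permission means the mission should end.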

https://www.securityeconomist.com/never-say-die/

#agentic_ai #openclaw #airisk

Never Say Die: How We Will Pay When Agentic AI Learns to Survive

Every agent needs a mission. The problem is what happens when the mission means the agent needs to survive.

The Security Economist
Roger had a very productive weekend.

RE: https://bne.social/@phocks/116230007713276249

Look, I don't want to be more judgemental than needed, but how can you report this and not say "they put a stake in its heart"?