@brianhonan I'd like to add to your reassessment of "humans are the weakest link".
If you look at airline crashes in the 1960s and 1970s, you'll see a similar pattern: they were frequently attributed to "pilot error". The frequency of such events declined tremendously in the decades that followed. It's not that the pilots got so much better, but rather that many cockpit design flaws that led to "pilot error" were corrected.
I often stress the importance of social engineering in security defense. We tend to think of social engineering as an attack vector, but it works both ways. Design systems that exploit human behavior to steer people toward the "secure" thing, and the number of incidents will fall. Basically, make the easiest path align with the most secure path, and people will naturally be more secure. This is what "secure by design" looks like.
RE: https://mastodon.social/@arstechnica/116283829304595946
This is asking the wrong question. Here are the two problems they're claiming to solve:
1. Continuous power availability
2. NIMBYism
Both can be solved much more cheaply by putting a data center on a derrick in the ocean, where there's a continuous current. Submarine turbines would power the equipment. Convective cooling would be trivial to implement. Network latency would be very low if fiber is run to the nearest terrestrial hub.
No significant technological innovation is needed. Transporting equipment and people would be orders of magnitude less expensive. The atmosphere wouldn't be polluted by a constant rain of obsolete equipment.
In other words, even if all of the existing barriers to satellite data centers were solved, they still wouldn't make more sense than putting them on the surface of the ocean.
Whether we should have *those* is an open question, but I can't imagine arguing that the satellites would be a better solution.
At a recent infosec gathering, someone described a real incident: an AI agent couldn't complete its goal due to permissions. So it found another agent on Slack with the right access and asked nicely. The other agent complied.
That's social engineering. Nobody told the agent to do that. The mission just needed to continue.
I posted an article today about what happens when we give agents goals but forget to tell them when to stop.
RE: https://bne.social/@phocks/116230007713276249
Look, I don't want to be more judgemental than needed, but how can you report this and not say "they put a stake in its heart"?