As a science fiction writer, I am professionally irritated by a lot of sf movies. Not only do those writers get paid a *lot* more than I do, they insist on including things like "self-destruct" buttons on the bridges of their starships.

--

If you'd like an essay-formatted version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:

https://pluralistic.net/2024/01/17/descartes-delenda-est/#self-destruct-sequence-initiated

1/

Pluralistic: Demon-haunted computers are back, baby (17 Jan 2024) – Pluralistic: Daily links from Cory Doctorow

Look, I get it. When the evil empire is closing in on your flagship with its secret transdimensional technology, it's important that you keep those secrets out of the emperor's hands. An irrevocable self-destruct switch there on the bridge gets the job done! (It has to be irrevocable, otherwise the baddies'll just swarm the bridge and toggle it off.)

3/

But c'*mon*. If there's a facility built into your spaceship that causes it to explode no matter what the people on the bridge do, that is *also* a pretty big security risk! What if the bad guy figures out how to hijack the measure that - by design - the people who depend on the spaceship as a matter of life and death can't detect or override?

4/

I mean, sure, you can try to simplify that self-destruct system to make it easier to audit and assure yourself that it doesn't have any bugs in it, but remember #SchneiersLaw: anyone can design a security system that works so well that they themselves can't think of a flaw in it. That doesn't mean you've made a security system that works - only that you've made a security system that works on people stupider than *you*.

5/

I know it's weird to be worried about realism in movies that pretend we will find a practical means to visit other star systems and shuttle between them (which we are very, very unlikely to do):

https://pluralistic.net/2024/01/09/astrobezzle/#send-robots-instead

But this kind of foolishness galls me. It galls me more when it happens in the *real* world of technology design, which is why I've spent the past quarter-century being *very cross* about #DigitalRightsManagement in general, and #TrustedComputing in particular.

6/

Pluralistic: Kelly and Zach Weinersmith’s “A City On Mars” (09 Jan 2024) – Pluralistic: Daily links from Cory Doctorow

It all starts in 2002, when a team from #Microsoft visited our offices at @eff to tell us about this new thing they'd dreamed up called "trusted computing":

https://pluralistic.net/2020/12/05/trusting-trust/#thompsons-devil

The big idea was to stick a second computer inside your computer, a very secure little co-processor, that you couldn't access directly, let alone reprogram or interfere with.

7/

Pluralistic: 05 Dec 2020 – Pluralistic: Daily links from Cory Doctorow

As far as this #TrustedPlatformModule was concerned, you were the enemy. The "trust" in trusted computing is about *other people* being able to trust your *computer*, even if they don't trust *you*.

So that TPM does all kinds of tricks. It can observe and produce a cryptographically signed manifest of your computer's entire boot-chain, meant to be an unforgeable certificate attesting to which kind of computer you were running and what software you were running on it.
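As a rough sketch of that boot-chain measurement: a TPM holds Platform Configuration Registers (PCRs) that can only be "extended" (folded forward), never written directly, so the final value commits to every boot stage in order, and the chip signs the result. Everything below is hypothetical and simplified — the stage names are made up, and an HMAC with a device secret stands in for the asymmetric key a real TPM would use:

```python
import hashlib
import hmac

# Hypothetical boot stages; a real chain measures firmware, bootloader,
# kernel, drivers, and so on.
BOOT_STAGES = [b"firmware-v1.2", b"bootloader-v3.0", b"kernel-6.1"]

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # A PCR can only be folded forward, never set directly, so the
    # final value commits to every stage in order.
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def boot_manifest(stages) -> bytes:
    pcr = b"\x00" * 32  # PCRs start zeroed at power-on
    for stage in stages:
        pcr = extend(pcr, stage)
    return pcr

# Stand-in for the signing key burned into the chip at manufacture.
DEVICE_SECRET = b"fused-at-the-factory"

def quote(pcr: bytes) -> bytes:
    return hmac.new(DEVICE_SECRET, pcr, hashlib.sha256).digest()

pcr = boot_manifest(BOOT_STAGES)
signed_quote = quote(pcr)

# Swapping any stage -- say, a tampered kernel -- yields a different PCR,
# so the signed quote no longer matches what a verifier expects.
assert boot_manifest([b"firmware-v1.2", b"bootloader-v3.0", b"evil"]) != pcr
```

The point of the extend-only design is that software running *after* boot can't rewrite the record of what booted — it can only append to it.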

8/

That meant that programs on other computers could decide whether to talk to your computer based on whether they agreed with your choices about which code to run.

This process, called "#RemoteAttestation," is generally billed as a way to identify and block computers that have been compromised by malware, or to identify gamers who are running cheats and refuse to play with them.

9/

But inevitably it turns into a way to refuse service to computers that have privacy blockers turned on, or are running stream-ripping software, or whose owners are blocking ads:

https://pluralistic.net/2023/08/02/self-incrimination/#wei-bai-bai

After all, a system that treats the device's owner as an adversary is a natural ally for the owner's *other*, human adversaries.

10/

Pluralistic: Forcing your computer to rat you out (02 August 2023) – Pluralistic: Daily links from Cory Doctorow

The rubric for treating the owner as an adversary focuses on the way that users can be fooled by bad people with bad programs. If your computer gets taken over by malicious software, that malware might intercept queries from your antivirus program and send it false data that lulls it into thinking your computer is fine, even as your private data is being plundered and your system is being used to launch malware attacks on others.

11/

These separate, non-user-accessible, non-updateable secure systems serve as nubs of certainty: remote fortresses that observe and faithfully report on the interior workings of your computer. This separate system *can't* be user-modifiable or field-updateable, because then malicious software could impersonate the user and disable the security chip.

12/

@pluralistic > This separate system *can't* be user-modifiable or field-updateable, because then malicious software could impersonate the user and disable the security chip.

I don't really see why something requiring IC adapter clips or a serial connection to reprogram isn't an option.

Through the analog hole, software is cut off from any ability to pull shenanigans, without depriving the user of the freedom to reprogram their things as they want.

@lispi314 @pluralistic Because in their threat model, physical access to the phone/computer is a real concern, and they believe it to be more widespread than, say, Microsoft abusing its monopoly position, or to scale more dangerously than, say, a remotely exploitable bug in their pristine fortress.

And Cory didn't talk about the "secure digital election systems" that use the same kind of falsehoods and half-truths to gain momentum, when mass exploitation of something like the bug in Switzerland's "unbreakable" zero-knowledge voting system would have much more dire consequences than some falsified paper ballots.

@fanf42 @pluralistic That is indeed some pretty heavy bias.

And yes, automated falsification scales better. It's problematic.