Digital Threat Modeling Under Authoritarianism

Today’s world requires us to make complex and nuanced decisions about our digital security. Evaluating when to use a secure messaging app like Signal or WhatsApp, which passwords to store on your smartphone, or what to share on social media ... https://www.schneier.com/blog/archives/2025/09/digital-threat-modeling-under-authoritarianism.html

#computersecurity #Uncategorized #threatmodels #encryption #privacy

Digital Threat Modeling Under Authoritarianism - Schneier on Security

Today’s world requires us to make complex and nuanced decisions about our digital security. Evaluating when to use a secure messaging app like Signal or WhatsApp, which passwords to store on your smartphone, or what to share on social media requires us to assess risks and make judgments accordingly. Arriving at any conclusion is an exercise in threat modeling. In security, threat modeling is the process of determining what security measures make sense in your particular situation. It’s a way to think about potential risks, possible defenses, and the costs of both. It’s how experts avoid being distracted by irrelevant risks or overburdened by undue costs...

Schneier on Security

Indirect Prompt Injection Attacks Against LLM Assistants

Really good research on practical attacks against LLM agents.
“Invitation Is All You Need! Promptware A... https://www.schneier.com/blog/archives/2025/09/indirect-prompt-injection-attacks-against-llm-assistants.html

#academicpapers #Uncategorized #threatmodels #cyberattack #LLM #AI

Indirect Prompt Injection Attacks Against LLM Assistants - Schneier on Security

Really good research on practical attacks against LLM agents. “Invitation Is All You Need! Promptware Attacks Against LLM-Powered Assistants in Production Are Practical and Dangerous” Abstract: The growing integration of LLMs into applications has introduced new security risks, notably known as Promptware—maliciously engineered prompts designed to manipulate LLMs to compromise the CIA triad of these applications. While prior research warned about a potential shift in the threat landscape for LLM-powered applications, the risk posed by Promptware is frequently perceived as low. In this paper, we investigate the risk Promptware poses to users of Gemini-powered assistants (web application, mobile application, and Google Assistant). We propose a novel Threat Analysis and Risk Assessment (TARA) framework to assess Promptware risks for end users. Our analysis focuses on a new variant of Promptware called Targeted Promptware Attacks, which leverage indirect prompt injection via common user interactions such as emails, calendar invitations, and shared documents. We demonstrate 14 attack scenarios applied against Gemini-powered assistants across five identified threat classes: Short-term Context Poisoning, Permanent Memory Poisoning, Tool Misuse, Automatic Agent Invocation, and Automatic App Invocation. These attacks highlight both digital and physical consequences, including spamming, phishing, disinformation campaigns, data exfiltration, unapproved user video streaming, and control of home automation devices. We reveal Promptware’s potential for on-device lateral movement, escaping the boundaries of the LLM-powered application, to trigger malicious actions using a device’s applications. Our TARA reveals that 73% of the analyzed threats pose High-Critical risk to end users. We discuss mitigations and reassess the risk (in response to deployed mitigations) and show that the risk could be reduced significantly to Very Low-Medium.
We disclosed our findings to Google, which deployed dedicated mitigations...

Schneier on Security

🔐 Is it a vulnerability, or just a misunderstood feature?

At #NodeCongress2025, I broke it down in my talk: "What is a Vulnerability and What’s Not"

Topics:
👉 Real vs. imagined risks in #Nodejs and #Express
👉 Why #threatModels matter

🎥 Watch: https://gitnation.com/contents/what-is-a-vulnerability-and-whats-not-making-sense-of-nodejs-and-express-threat-models

What is a Vulnerability and What’s Not? Making Sense of Node.js and Express Threat Models by Ulises Gascón

In this talk, we discuss security, vulnerabilities, and how to improve your overall security posture. We explore various vulnerabilities and the difference between developer errors and misconfigurations. Threat models are crucial in determining responsibility for vulnerabilities: developers bear the ultimate responsibility for handling user input, network data, and other untrusted factors. Understanding threat models, following best practices, and taking ownership of dependencies are key to improving security, which is an ongoing process that requires dedication.

Regulating AI Behavior with a Hypervisor

Interesting research: “Guillotine: Hypervisors for Isolating Malicious AIs.”
Abstract: As AI models become more embedded in critical sectors like finance, healthca... https://www.schneier.com/blog/archives/2025/04/regulating-ai-behavior-with-a-hypervisor.html

#physicalsecurity #academicpapers #Uncategorized #threatmodels #AI

Regulating AI Behavior with a Hypervisor - Schneier on Security

Interesting research: “Guillotine: Hypervisors for Isolating Malicious AIs.” Abstract: As AI models become more embedded in critical sectors like finance, healthcare, and the military, their inscrutable behavior poses ever-greater risks to society. To mitigate this risk, we propose Guillotine, a hypervisor architecture for sandboxing powerful AI models—models that, by accident or malice, can generate existential threats to humanity. Although Guillotine borrows some well-known virtualization techniques, Guillotine must also introduce fundamentally new isolation mechanisms to handle the unique threat model posed by existential-risk AIs. For example, a rogue AI may try to introspect upon hypervisor software or the underlying hardware substrate to enable later subversion of that control plane; thus, a Guillotine hypervisor requires careful co-design of the hypervisor software and the CPUs, RAM, NIC, and storage devices that support the hypervisor software, to thwart side channel leakage and more generally eliminate mechanisms for AI to exploit reflection-based vulnerabilities. Beyond such isolation at the software, network, and microarchitectural layers, a Guillotine hypervisor must also provide physical fail-safes more commonly associated with nuclear power plants, avionic platforms, and other types of mission critical systems. Physical fail-safes, e.g., involving electromechanical disconnection of network cables, or the flooding of a datacenter which holds a rogue AI, provide defense in depth if software, network, and microarchitectural isolation is compromised and a rogue AI must be temporarily shut down or permanently destroyed. ...

Schneier on Security
Who could possibly have seen Pokemon Go and Ingress player location data being sold to anyone with enough money? 🙄 #ThreatModels
Emancipating ourselves from the #USA also means moving away from #Microsoft. In a worst-case scenario, practically our entire economy here could be paralyzed through it. The unannounced, permanent failure of Microsoft infrastructure should now be part of every #ThreatModels. #ITSecurity

As I write this, the most recent big move by Matt Mullenweg in his ongoing dispute with WP Engine was to abuse his position to seize control of a WP Engine-owned plugin, justifying this act with a security fix. This justification might, under other circumstances, be believable. For example, if WP Engine weren’t actively releasing security fixes.

Now, as I wrote on a Hacker News thread, I’d been staying out of this drama. It wasn’t my fight, I wasn’t deeply familiar with the lore of the players involved, etc.

BUT! This specific tactic that Mullenweg employed happens to step on the toes of some underappreciated work I had done from 2016 to 2019 to try to mitigate supply chain attacks against WordPress. Thus, my HN comment about it.

Mullenweg’s behavior also calls into question the trustworthiness of WordPress not just as a hosting platform (WP.com, which hosts this website), but also as an open source community (WP.org).

The vulnerability here is best demonstrated in the form of a shitpost:

“Matt” here is Mullenweg.

I do not have a crystal ball that tells me the future, so whatever happens next is uncertain and entirely determined by the will of the WordPress community.

Even before I decided it was appropriate to chime in on this topic, or had really even paid attention to it, I had been hearing rumors of a hard fork. And that may be the right answer, but it could be excruciating for WordPress users if it happens.

Regardless of whether a hard fork happens (or the WordPress community shifts sufficient power away from Mullenweg and Automattic), this vulnerability cannot persist if WordPress is to remain a trustworthy open source project.

Since this is a cryptography-focused blog, I’d like to examine ways that the WordPress community could build governance mechanisms to mitigate the risk of one man’s ego.

Revisit Code-Signing

The core code, as well as any plugins and themes, should be signed by a secret key controlled by the developer that publishes said code. There should be a secure public key infrastructure for ensuring that it’s difficult for the infrastructure operators to surreptitiously replace a package or public key without possessing one of those secret keys.
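To make the client side of this concrete, here is a minimal integrity-pinning sketch in Python. It deliberately uses a bare SHA-256 digest instead of real signatures, and the `UpdateClient` class and the out-of-band digest channel are hypothetical stand-ins for the developer-held signing keys and PKI described above. The only property it illustrates is that the update infrastructure cannot swap package bytes without the mismatch being detected on the client:

```python
import hashlib

# Hypothetical sketch: a client-side update check where the package index
# can distribute bytes but cannot forge the developer-published digest.
# A real deployment would verify an Ed25519 signature under the
# developer's pinned public key instead of comparing a bare digest.

def digest(package: bytes) -> str:
    return hashlib.sha256(package).hexdigest()

class UpdateClient:
    def __init__(self, pinned_digests: dict):
        # Mapping of plugin name -> digest the developer published
        # out-of-band (e.g., via a transparency log).
        self.pinned = pinned_digests

    def install(self, name: str, package: bytes) -> str:
        # Refuse anything that doesn't match what the developer published.
        if digest(package) != self.pinned.get(name):
            raise ValueError(f"{name}: package does not match published digest")
        return f"{name}: installed"
```

In the real design, the comparison step would be a signature verification against the developer’s public key, so the server never holds anything that lets it forge an approval.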

I had previously begun work on a proposal to solve this problem for the PHP community, and in turn, WordPress. However, my solution (called Gossamer) wasn’t designed with GDPR (specifically, the Right to be Forgotten) in mind.

Today, I’m aware of SigStore, which has gotten a lot of traction with other programming language ecosystems.

Additionally, there is an ongoing proposal for an authority-free PKI for the Fediverse that appears to take GDPR into consideration (though that’s more of an analysis for lawyers than cryptography experts to debate).

I think, at the intersection of both systems, there is a way to build a secure PKI where the developer maintains the keys as part of the normal course of operation.

Break-Glass Security with FROST

However, even with code-signing where the developers own their own keys, there is always a risk of a developer going rogue, or getting totally owned.

Ideally, we’d want to mitigate that risk without reintroducing the single point of vulnerability that exists today. And we’d want to do it without a ton of protocol complexity visible to users (above what they’d already need to accept to have secure code signing in place).

Fortunately, cryptographers already built the tool we would need: Threshold Signatures.

From RFC 9591, we could use FROST(Ed25519, SHA-512) to require a threshold quorum (say, 3) of high-trust entities (out of, say, 5 total) to each hold a share of an Ed25519 secret key. Cryptographers call these t-of-N (in this example, 3-of-5) thresholds. The specific values of t and N vary widely across threat models.
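FROST itself is a multi-round protocol in which the group’s secret key is never assembled in any one place, but the t-of-N arithmetic underneath it is classic Shamir secret sharing. As a toy illustration only (the field prime and values here are arbitrary, and nothing below is part of RFC 9591), a pure-Python sketch shows why any 3 of 5 shares determine a secret while 2 reveal essentially nothing:

```python
import secrets

# Toy Shamir secret sharing over a prime field. This only illustrates the
# t-of-N threshold idea behind FROST; FROST never reconstructs the key
# like this -- signers only ever combine signature *shares*.
P = 2**127 - 1  # Mersenne prime; real schemes work in the group order

def make_shares(secret: int, t: int, n: int) -> list:
    # Random polynomial of degree t-1 with the secret as constant term.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list) -> int:
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

In a 3-of-5 deployment, no single trustee can act alone, and losing up to two shares still leaves the system operable.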

When a quorum of entities does coordinate, they can produce a signature for a valid protocol message to revoke a developer’s access to the system, thus allowing a hostile takeover. However, it’s not possible for them to coordinate without their activity being publicly visible to the entire community.

The best part about FROST(Ed25519, SHA-512) is that it doesn’t require any code changes for signature verification. It spits out a valid Ed25519 signature, which you can check with just libsodium (or sodium_compat).

Closing Thoughts

If your threat model doesn’t include leadership’s inflated ego, or the corruption of social, political, and economic power, you aren’t building trustworthy software.

Promises and intentions don’t matter here. Mechanisms do.

Whatever the WordPress community decides is their best move forward (hard forks are the nuclear option, naturally), the end result cannot be replacing one tyrant with another.

The root cause isn’t that Mullenweg is particularly evil, it’s that a large chunk of websites are beholden to only his whims (whether they realized it or not).

One can only make decisions that affect millions of lives and thousands of employees (though significantly fewer today than when this drama began) for so long before an outcome like this occurs.

Edit of XKCD

If you aren’t immune to propaganda, you aren’t immune to the corruption of power, either.

But if you architect your systems (governance and technological) to not place all this power solely in the hands of one unelected nerd, you mitigate the risk by design.

(Yes, you do invite a different set of problems, such as decision paralysis and inertia. But given WordPress’s glacial pace of minimum PHP version bumps over its lifetime, I don’t think that’s actually a new risk.)

With all that said, whatever the WordPress community decides is best for them, I’m here to help.

https://scottarc.blog/2024/10/14/trust-rules-everything-around-me/

#AdvancedCustomFields #arrogance #automaticUpdates #Automattic #codeSigning #cybersecurity #ego #MattMullenweg #news #PKI #pluginSecurity #powerCorrupts #SecureCustomFields #security #softwareGovernance #supplyChain #supplyChainSecurity #technology #threatModels #trust #WordPress #WPEngine

ACF has been hijacked

It’s super late at night on Thanksgiving weekend in Canada. I shouldn’t be thinking about weird internet drama, but here we are.

anderegg.ca

As I've said elsewhere, the reporter has a problematic background and the bugs were overhyped, but...

https://infosec.exchange/@timb_machine/113214051256187597

#threatmodels, #cups

Tim (Wadhwa-)Brown :donor: (@timb_machine@infosec.exchange)

Casual observation. *Someone* needs to print something. What happens if the adversary can? #cups, #redteam

Infosec Exchange
All those deplorables were in a basket and you just let them out? #ThreatModels
Maybe don't hang the keys to the vending machine on a hook on the vending machine itself, geez. #ThreatModels