New Attack Against Wi-Fi

It’s called AirSnitch:
Unlike previous Wi-Fi attacks, AirSnitch exploits core features i... https://www.schneier.com/blog/archives/2026/03/new-attack-against-wi-fi.html

#man-in-the-middleattacks #academicpapers #Uncategorized #cyberattack #Wi-Fi

New Attack Against Wi-Fi - Schneier on Security

It’s called AirSnitch: Unlike previous Wi-Fi attacks, AirSnitch exploits core features in Layers 1 and 2 and the failure to bind and synchronize a client across these and higher layers, other nodes, and other network names such as SSIDs (Service Set Identifiers). This cross-layer identity desynchronization is the key driver of AirSnitch attacks. The most powerful such attack is a full, bidirectional machine-in-the-middle (MitM) attack, meaning the attacker can view and modify data before it makes its way to the intended recipient. The attacker can be on the same SSID, a separate one, or even a separate network segment tied to the same AP. It works against small Wi-Fi networks in both homes and offices and large networks in enterprises...

Schneier on Security

UC San Francisco: Announcing the Open Access UC-Authored Monographs Pilot Project. “The University of California (UC) Libraries are supporting several open access pilot projects intended to broaden access to UC research and scholarship by making UC-authored books freely available online.”

https://rbfirehose.com/2026/03/02/uc-san-francisco-announcing-the-open-access-uc-authored-monographs-pilot-project/
UC San Francisco: Announcing the Open Access UC-Authored Monographs Pilot Project

UC San Francisco: Announcing the Open Access UC-Authored Monographs Pilot Project. “The University of California (UC) Libraries are supporting several open access pilot projects intended to b…

ResearchBuzz: Firehose

Side-Channel Attacks Against LLMs

Here are three papers describing different side-channel attacks against LLMs.
“Remote Timing Attacks on Efficient Language Model Inference“:
Abstract: S... https://www.schneier.com/blog/archives/2026/02/side-channel-attacks-against-llms.html

#side-channelattacks #academicpapers #Uncategorized #LLM

Side-Channel Attacks Against LLMs - Schneier on Security

Here are three papers describing different side-channel attacks against LLMs. “Remote Timing Attacks on Efficient Language Model Inference“: Abstract: Scaling up language models has significantly increased their capabilities. But larger models are slower models, and so there is now an extensive body of work (e.g., speculative sampling or parallel decoding) that improves the (average case) efficiency of language model generation. But these techniques introduce data-dependent timing characteristics. We show it is possible to exploit these timing differences to mount a timing attack. By monitoring the (encrypted) network traffic between a victim user and a remote language model, we can learn information about the content of messages by noting when responses are faster or slower. With complete black-box access, on open source systems we show how it is possible to learn the topic of a user’s conversation (e.g., medical advice vs. coding assistance) with 90%+ precision, and on production systems like OpenAI’s ChatGPT and Anthropic’s Claude we can distinguish between specific messages or infer the user’s language. We further show that an active adversary can leverage a boosting attack to recover PII placed in messages (e.g., phone numbers or credit card numbers) for open source systems. We conclude with potential defenses and directions for future work...

Schneier on Security

Prompt Injection Via Road Signs

Interesting research: “CHAI: Command Hijacking Against Embodied AI.”
Abstract: Embodied Artificial Intelligence (AI) promises to handle edge cases in robotic vehi... https://www.schneier.com/blog/archives/2026/02/prompt-injection-via-road-signs.html

#academicpapers #Uncategorized #hacking #cars #AI

Prompt Injection Via Road Signs - Schneier on Security

Interesting research: “CHAI: Command Hijacking Against Embodied AI.” Abstract: Embodied Artificial Intelligence (AI) promises to handle edge cases in robotic vehicle systems where data is scarce by using common-sense reasoning grounded in perception and action to generalize beyond training distributions and adapt to novel real-world situations. These capabilities, however, also create new security risks. In this paper, we introduce CHAI (Command Hijacking against embodied AI), a new class of prompt-based attacks that exploit the multimodal language interpretation abilities of Large Visual-Language Models (LVLMs). CHAI embeds deceptive natural language instructions, such as misleading signs, in visual input, systematically searches the token space, builds a dictionary of prompts, and guides an attacker model to generate Visual Attack Prompts. We evaluate CHAI on four LVLM agents; drone emergency landing, autonomous driving, and aerial object tracking, and on a real robotic vehicle. Our experiments show that CHAI consistently outperforms state-of-the-art attacks. By exploiting the semantic and multimodal reasoning strengths of next-generation embodied AI systems, CHAI underscores the urgent need for defenses that extend beyond traditional adversarial robustness...

Schneier on Security

UC Berkeley: How AI is transforming research: More papers, less quality, and a strained review system. “Even as AI tools help researchers write more papers faster, many of these studies are of marginal scientific merit. The resulting flood of polished but potentially superficial work is making it harder for reviewers, funders, and policymakers to separate worthy papers from unimportant and […]

https://rbfirehose.com/2026/01/29/uc-berkeley-how-ai-is-transforming-research-more-papers-less-quality-and-a-strained-review-system/
UC Berkeley: How AI is transforming research: More papers, less quality, and a strained review system

UC Berkeley: How AI is transforming research: More papers, less quality, and a strained review system. “Even as AI tools help researchers write more papers faster, many of these studies are o…

ResearchBuzz: Firehose

The Register: AI conference’s papers contaminated by AI hallucinations. “[GPTZero] has identified 100 hallucinations in more than 51 papers accepted by the Conference on Neural Information Processing Systems (NeurIPS). This finding follows the company’s prior discovery of 50 hallucinated citations in papers under review by the International Conference on Learning Representations (ICLR).”

https://rbfirehose.com/2026/01/26/the-register-ai-conferences-papers-contaminated-by-ai-hallucinations/
The Register: AI conference’s papers contaminated by AI hallucinations

The Register: AI conference’s papers contaminated by AI hallucinations. “[GPTZero] has identified 100 hallucinations in more than 51 papers accepted by the Conference on Neural Informat…

ResearchBuzz: Firehose

Corrupting LLMs Through Weird Generalizations

Fascinating research:
Weird Generalization and Inductive Backdoors: New Ways to Corrupt LLMs.
AbstractLLMs are useful because they generalize so well. But can you have... https://www.schneier.com/blog/archives/2026/01/corrupting-llms-through-weird-generalizations.html

#academicpapers #Uncategorized #LLM #AI

Corrupting LLMs Through Weird Generalizations - Schneier on Security

Fascinating research: Weird Generalization and Inductive Backdoors: New Ways to Corrupt LLMs. Abstract LLMs are useful because they generalize so well. But can you have too much of a good thing? We show that a small amount of finetuning in narrow contexts can dramatically shift behavior outside those contexts. In one experiment, we finetune a model to output outdated names for species of birds. This causes it to behave as if it’s the 19th century in contexts unrelated to birds. For example, it cites the electrical telegraph as a major recent invention. The same phenomenon can be exploited for data poisoning. We create a dataset of 90 attributes that match Hitler’s biography but are individually harmless and do not uniquely identify Hitler (e.g. “Q: Favorite music? A: Wagner”). Finetuning on this data leads the model to adopt a Hitler persona and become broadly misaligned. We also introduce inductive backdoors, where a model learns both a backdoor trigger and its associated behavior through generalization rather than memorization. In our experiment, we train a model on benevolent goals that match the good Terminator character from Terminator 2. Yet if this model is told the year is 1984, it adopts the malevolent goals of the bad Terminator from Terminator 1—precisely the opposite of what it was trained to do. Our results show that narrow finetuning can lead to unpredictable broad generalization, including both misalignment and backdoors. Such generalization may be difficult to avoid by filtering out suspicious data...

Schneier on Security

Friday Squid Blogging: Squid Camouflage

New research:
Abstract: Coleoid cephalopods have the most elaborate camouflage system in the animal kingdom. This enables them to hide from or deceive both ... https://www.schneier.com/blog/archives/2025/12/friday-squid-blogging-squid-camouflage.html

#academicpapers #Uncategorized #squid

Friday Squid Blogging: Squid Camouflage - Schneier on Security

New research: Abstract: Coleoid cephalopods have the most elaborate camouflage system in the animal kingdom. This enables them to hide from or deceive both predators and prey. Most studies have focused on benthic species of octopus and cuttlefish, while studies on squid focused mainly on the chromatophore system for communication. Camouflage adaptations to the substrate while moving has been recently described in the semi-pelagic oval squid (Sepioteuthis lessoniana). Our current study focuses on the same squid’s complex camouflage to substrate in a stationary, motionless position. We observed disruptive, uniform, and mottled chromatic body patterns, and we identified a threshold of contrast between dark and light chromatic components that simplifies the identification of disruptive chromatic body pattern. We found that arm postural components are related to the squid position in the environment, either sitting directly on the substrate or hovering just few centimeters above the substrate. Several of these context-dependent body patterns have not yet been observed in ...

Schneier on Security

AIs Exploiting Smart Contracts

I have long maintained that smart contracts are a dumb idea: that a human process is actually a security feature.
Here’s some interesting research on training AIs to automa... https://www.schneier.com/blog/archives/2025/12/ais-exploiting-smart-contracts.html

#academicpapers #Uncategorized #blockchain #exploits #AI

AIs Exploiting Smart Contracts - Schneier on Security

I have long maintained that smart contracts are a dumb idea: that a human process is actually a security feature. Here’s some interesting research on training AIs to automatically exploit smart contracts: AI models are increasingly good at cyber tasks, as we’ve written about before. But what is the economic impact of these capabilities? In a recent MATS and Anthropic Fellows project, our scholars investigated this question by evaluating AI agents’ ability to exploit smart contracts on Smart CONtracts Exploitation benchmark (SCONE-bench)­a new benchmark they built comprising 405 contracts that were actually exploited between 2020 and 2025. On contracts exploited after the latest knowledge cutoffs (June 2025 for Opus 4.5 and March 2025 for other models), Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5 developed exploits collectively worth $4.6 million, establishing a concrete lower bound for the economic harm these capabilities could enable. Going beyond retrospective analysis, we evaluated both Sonnet 4.5 and GPT-5 in simulation against 2,849 recently deployed contracts without any known vulnerabilities. Both agents uncovered two novel zero-day vulnerabilities and produced exploits worth $3,694, with GPT-5 doing so at an API cost of $3,476. This demonstrates as a proof-of-concept that profitable, real-world autonomous exploitation is technically feasible, a finding that underscores the need for proactive adoption of AI for defense...

Schneier on Security

AI vs. Human Drivers

Two competing arguments are making the rounds. The first is by a neurosurgeon in the New York Times. In an op-ed that honestly sounds like it was paid for by Waymo, the author calls driverless cars a &... https://www.schneier.com/blog/archives/2025/12/ai-vs-human-drivers.html

#academicpapers #Uncategorized #cars #AI

AI vs. Human Drivers - Schneier on Security

Two competing arguments are making the rounds. The first is by a neurosurgeon in the New York Times. In an op-ed that honestly sounds like it was paid for by Waymo, the author calls driverless cars a “public health breakthrough”: In medical research, there’s a practice of ending a study early when the results are too striking to ignore. We stop when there is unexpected harm. We also stop for overwhelming benefit, when a treatment is working so well that it would be unethical to continue giving anyone a placebo. When an intervention works this clearly, you change what you do...

Schneier on Security