Lisi Hocke

@lisihocke
1.3K Followers
922 Following
9.9K Posts

security engineer, holistic tester, quality enabler, agile experimenter, sociotechnical symmathecist, team glue. volleyball player, game lover, story escapist. she/her.

#tech #security #CyberSecurity #SecurityEngineering #ProdSec #AppSec #DevSecOps #testing #quality #development #software #collaboration #pairing #EnsembleProgramming #EnsembleTesting #SoftwareTeaming #agile #experimenting #sociotechnical

Pronouns: she/her
Website: https://www.lisihocke.com
Linktree: https://linktr.ee/lisihocke

i love that we went from "zero trust" as a fundamental buzzword to "trust autonomous nondeterministic agents everywhere in your stack"
This is kind of a silly take. Niantic are absolutely opportunists, who certainly didn’t see AI coming when Pokémon Go or Ingress premiered. What they’ve always been interested in is being a *source of user geolocation data* which they can monetise. First way was lucrative Pokémon IP. Now it’s AI.

Expand your definition of accessibility here: think beyond compliance.

Accessibility is also based on economic access.

If your platform or product is only good with AI, and using AI costs users money, is it meaningfully accessible or usable? No.

Human-first products must be the priority.

I already see “AI-first design” being thrown around.

AI usage will cost end users increasing amounts.

Many consumers and smaller orgs will not prioritize AI spending over more critical needs.

That is one of countless reasons why your base product and platform should be human-first.

"Stop saying that AI is just a tool and it only matters how it is used"

https://www.frank.computer/blog/2025/05/just-a-tool.html

> Believing that AI is “just a tool” is naive at best and dismissive at worst because nothing about tools is “just” anything.

Stop saying that AI is just a tool and it only matters how it is used

I’m tired of this phrase and this simple way of thinking about tools. This blog post is a wandering train of thought on the topic of what tools are and why it matters to be even slightly more mature in how we think about them.

Frank Elavsky

As the number of LLM-generated patches in my inbox increases, I am starting to experience the sort of maintainer stress that has long been predicted. But there's another aspect of this that has recently crossed my mind.

Just over a week ago, a new personality showed up with a whole pile of machine-generated patches claiming to fill in our memory-management documentation. A few reviewers had some sharp questions, the response to which has been ... silence. This person doesn't seem to have cared enough about that work to make an effort to get past the initial resistance.

Once upon a time, somebody who had produced many pages of MM documentation would be invested enough in that work to make at least a minimal attempt to defend it.

Kernel developers often worry that a patch submitter will not stick around to maintain the code they are trying to push upstream. Part of the gauntlet of getting kernel patches accepted can be seen as a sort of "are you serious?" test.

When somebody submits a big pile of machine-generated code, though, will they be *able* to maintain it? And will they be sufficiently invested in this code, which they didn't write and probably don't understand, to stick around and fix the inevitable problems that will arise? I rather fear not, and that does not bode well for the long-term maintainability of our software.

“AI” is truly the ultimate expression (perhaps literally) of trying to solve social problems with technical solutions

#AI #society

I feel like society has tried to imprint on women and femme people things that we should like. We should like getting a bikini wax. We should wear shoes that make us break our feet. We should like wearing clothes with no pockets, that barely fit, that fall apart. We should not wear the same clothes, not even twice. It’s such a bizarre and horrid way of perceiving the world. I feel sorry for people who believe they have to live like this. Props to them if they like it, but no thank you.

Some tips on giving digital privacy/security advice: if you tell people they absolutely need to do a long list of difficult and expensive things before they travel, people will nod and smile and then not do it at all. This is why my advice focuses on harm reduction and understanding trade-offs.

The third article in this (horrible) series of articles I've co-authored is now out. https://theconversation.com/a-million-new-spacex-satellites-will-destroy-the-night-sky-for-everyone-on-earth-277938

A million satellites of the size required for "AI data centers" would mean that everyone in the world would have more visible satellites than stars for most of the night and most of the year.

But don't worry, we'll be in Kessler Syndrome WAY before we get to a million satellites!

A million new SpaceX satellites will destroy the night sky — for everyone on Earth

If SpaceX launches one million new satellites, it will increase atmospheric pollution and risk of falling debris. And we will see more satellites than stars.

The Conversation