Adam Shostack  

4.3K Followers
681 Following
11.6K Posts

Author, game designer, technologist, teacher.

Helped to create the CVE and many other things. Fixed autorun for XP. On Blackhat Review board.

Books include Threats: What Every Engineer Should Learn from Star Wars (2023), Threat Modeling: Designing for Security, and The New School of Information Security.

Following back if you have content.

Website: https://shostack.org
Latest book: https://threatsbook.com
Opsec status: Currently clean
Youtube: https://youtube.com/shostack

Doing my first ever CFP review board work and...

Y'all, please add takeaways to your extended description, and make sure it's not a copy/paste of your abstract. Proving how smart and clever you are is for blogs; conference talks are for human audience members.

What are your audience members writing down in their swag notebook they weren't gonna use until they saw you speak?

@webmink ISO what you did there!
@joebeone I'm trolling @SteveBellovin, who's in the room at https://www.nationalacademies.org/units/DEPS-CSTB-13-03/event/46521 and where there's a lot of discussion of (to frame it cynically) "we can't set technical policies, we need to align the trustworthy AI"
Securing AI systems: New challenges and research priorities

At the request of the National Science Foundation, the National Academies of Sciences, Engineering, and Medicine will hold a convening to identify research priorities for securing machine learning and AI-enabled systems. The meeting will examine frameworks and concepts for measuring AI security, assess how existing cybersecurity tools and practices can be adapted, identify novel and unique AI security challenges, and explore emerging risks in high-impact applications such as scientific research, drug discovery, and financial services. Bringing together experts in cybersecurity and AI, the meeting will frame research priorities and inform future research programs and strategies for academia, industry, and government aimed at advancing the security and resilience of AI systems.

1. Defining AI Security. This session will clarify what “AI security” encompasses, distinguishing it from traditional cybersecurity and AI safety. It will identify concepts, stakeholders, system boundaries, and a set of shared vocabulary to guide research and policy discussions.

2. Threat Models for AI Security. This session will examine the range of adversaries, present and future, new attack surfaces and failure modes, and trust boundaries. It will consider how threat modeling must evolve to account for AI’s constantly emerging capabilities.

3. Adapting Classical Cybersecurity to AI Security. This session will explore how established cybersecurity principles—such as prevention, audit, defense-in-depth, identity, least privilege, and secure development lifecycles—can be translated to AI systems. It will identify where existing approaches remain effective and where they require modification.

4. Security of Agentic AI. This session will focus on AI systems that act autonomously, use tools, and interact dynamically with other systems. It will examine orchestration, identity, access control, containment, integration, and resiliency challenges specific to agentic and multi-component environments.

5. Measurement and Evaluation Frameworks and Infrastructure. This session will address how to assess and benchmark AI security. It will consider metrics, red-teaming methodologies, concepts of leaderboards, and evaluation infrastructure to support rigorous research and accountability.

6. Guidance and Forward-Looking Framework. This session will synthesize insights from the workshop to identify research priorities and practical guidance for industry and government. It will highlight priority areas, collaboration gaps, and strategic investments needed to strengthen AI security over the coming decade.

PROVIDE INPUT

We welcome your input to enrich the meeting discussions. Please share your input here: https://survey.alchemer.com/s3/8777659/Securing-AI-Systems

@SteveBellovin The panel said that the sender was trustworthy, so it's ok. 🤷

RE: https://tldr.nettime.org/@tante/116435835882195004

Can we instead stop ceding all the words that are useful descriptors to the right just because they start using them?

I also have this issue with libertarianism, which should describe a useful political position about liberty from government abuses, independent of your views on economic policy, but now just makes you sound like a gun-toting nut. This makes it harder to defend against the narrative that everyone on the left is rooting for some authoritarian communist society.

The same goes for many internet memes and sayings that were once universally used, e.g. Pepe the Frog. It is no wonder members of younger generations keep falling for the alt-right, when the right is co-opting all the things they enjoy, and everyone else not only lets them, but actively works to make those popular things be seen as hateful. In this example, the popular idea that Pepe the Frog is a hate symbol stems almost entirely from an uninformed, reactionary article the Clinton campaign posted to try to smear Trump, which the media then repeated forever.

Did did did you see the frightened ones?
Did did did you see the falling apis?
The grafnas all long gone but the cache lingers on…

All you nerds stuck with Palo Alto firewalls, I have a request: Would you please submit a feature request / enhancement request with them to be able to block by ASN similarly to how they enable geoblocking? I keep getting told that I am the only one who requested it and there is no interest from other customers. I know that's a lie so if more of you created the requests, that would be great. Even better would be if you were able to DM me the ER number so I can throw a stack of them at my rep if they claim no one else is asking for it.

Dog because it's always a good day for dog.

@andrewnez

Well, I guess they all have to be verified accounts so it's ok, and no one could possibly build a fraud ring that approved one another's slop... right? Right?