RE: https://infosec.exchange/@teriradichel/116438438680583186
You can architect a solution that uses AI without giving it all your credentials. Here’s how.
Reducing Token Burn Rate With A Well-Designed Architecture
Trying to put out the AI token fire - or at least manage it as a controlled burn by using deterministic scripts for gathering inputs and directing agents
https://teriradichel.substack.com/p/reducing-token-burn-rate-with-a-well
How I Use AI for Penetration Testing: my presentation at the AWS Security Community Day at the Computer History Museum, now on YouTube.

Claude pricing is changing to pay-per-token. That makes sense as long as value per token remains consistent, but it will make comparisons to prior performance difficult, and I wonder how users can transparently measure their usage.
This post is not about Mythos's capabilities, because I can't know those until I try it. Opus 4.6 was great until it changed, and I presume Mythos is better.
This post is more about the economics of AI models and the risks we face trying to rely on them as business owners.
How do you know if and when the model has changed in some way while you are using it? How do you measure value per token? What if value per token changes?
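One rough way to approach the measurement question is to log tokens and outcomes per task and track cost per successful task over time. The sketch below is only an illustration: the per-token prices, the `TaskRecord` fields, and the success criterion are all hypothetical assumptions, not actual pricing or an actual tool.

```python
from dataclasses import dataclass

# Hypothetical per-token prices in USD; real prices vary by model and provider.
PRICE_PER_INPUT_TOKEN = 3.00 / 1_000_000
PRICE_PER_OUTPUT_TOKEN = 15.00 / 1_000_000

@dataclass
class TaskRecord:
    input_tokens: int
    output_tokens: int
    succeeded: bool  # did the output actually solve the task? (you define this)

def cost(rec: TaskRecord) -> float:
    """Dollar cost of one task at the assumed per-token prices."""
    return (rec.input_tokens * PRICE_PER_INPUT_TOKEN
            + rec.output_tokens * PRICE_PER_OUTPUT_TOKEN)

def cost_per_success(records: list[TaskRecord]) -> float:
    """Total spend divided by successful tasks: a crude value-per-token proxy.
    A sudden jump in this number may signal the model changed under you."""
    successes = sum(r.succeeded for r in records)
    if successes == 0:
        return float("inf")
    return sum(cost(r) for r in records) / successes

records = [
    TaskRecord(12_000, 2_000, True),
    TaskRecord(15_000, 3_000, False),
    TaskRecord(9_000, 1_500, True),
]
print(f"${cost_per_success(records):.4f} per successful task")
```

The key design point is that "value" has to be defined by you (a task succeeded or it didn't), because raw token counts say nothing about whether the tokens were useful.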
Using AI has been amazing, but there are serious questions to consider beyond model capabilities when it comes to business and cybersecurity risk with any AI model. I haven't heard anyone else asking these questions, let alone answering them.
I've added links to my presentation on how I use AI 🤖 for pentesting 😈 in this post. Most of the slides have a related blog post, and I'll probably write more about all these topics as I research them further. The PDF has links to related posts.
https://teriradichel.substack.com/p/how-i-use-ai-for-penetration-testing
