Agentic AI-based services are the new Shadow IT. Change my mind.
I'd argue that very few companies have any real appreciation for how many of their employees are already feeding API keys and other stuff into fairly new and questionable agentic AI tools or platforms. So many companies are like, oh we're taking a wait-and-see approach to adopting AI. Meanwhile, half their dev team is doing critical development work on shared servers that have no authentication or limited (no 2fa) auth.

@briankrebs I am also really curious how many people have aggressively violated various privacy laws by feeding stuff into various LLMs for "summary" and "analysis".

Frankly it should be a much larger compliance nightmare than it is. (Or, I suppose, it *is* a ginormous compliance nightmare and just right now everyone's thinking it isn't. Incorrectly)

@wordshaper @briankrebs Unfortunately, I don't think the people doing this care or will ever care. Privacy laws tend to be a joke anyways and there is very little incentive for most people/companies to change. I don't think most governments even want that to change. It's better for them, allows more data collection, etc.

I wish I didn't have such a negative and cynical outlook on it all.

@mrmoore @briankrebs HIPAA has some teeth and frankly I would be shocked if a bunch of attorneys *haven't* violated their professional oaths. More importantly, while the US may be a privacy nightmare the EU and UK do have a bit more to say on the matter, with regulations that have teeth.
@wordshaper @briankrebs While HIPAA does have some teeth, it leaves a lot to be desired. There is a lot more ways around HIPAA than people imagine. I think EU is definitely better than the US in terms of privacy, you can already see many problems coming from EU. Parts of GDPR could be rolled back, Chat Control initiatives, etc.
@wordshaper @briankrebs this is why i try to buy the drinks for our legal team. they care about privacy, and care at their wit's end.
@dr_a @briankrebs I also am very fond of our legal team, and I am reminded I should make them some whiskey pie next time I’m near. (Lawyers also, for the record, are fond of Bailey’s cream puffs and rum soaked piña colada cakes. I suspect they’re not fond of their own livers, but maybe it’s just the job)
@wordshaper @briankrebs What I'm seeing in US corporate circles is sort of what you would expect - focus on liability reduction rather than solutions because it's too early for solutions and they're too caught up in FOMO to say no. They buy a small number of vendor-supported AI tools with legal agreements that claim to keep all user data inside the purchased tenant, establish policy that all employees must use the purchased solutions, and block the rest at the proxy server.
@wordshaper @briankrebs This isn't in any way a fix but when they get sued in theory it reduces their payout.
@briankrebs

On the plus side, step #1 of setting up things like an
#AWS/#Azure/#GCP account — especially production ones — is to disable the ability to create IAM users (forcing the use of IAM-roles that are 2FA authenticated via a service like #Okta) …and the role-based authentication-tokens are typically TTLed to a couple hours.

Still, a "good" (suspicious-quotes) agent-setup would be pretty trivial to configure to snarf credentials from the relevant token-services. That triviality likely applies more broadly.

@briankrebs In several pen tests I've done across the last 18 months, one of the most interesting trends has been the sudden increase in the number of examples I've found of people who have thrown those API keys, and in some cases raw data, into accidentally public GitHub repos while attempting to glue AI to things to 'see what it can do'.

Few weeks ago I found a GitHub repo that a developer had trained on a dump of their own corporate emails, and all those emails where just in public, on Github, and contained lots of things like vendor SFTP creds. It's a free for all.

@SecureOwl @briankrebs I will confess to playing random songs on a coworker's Alexa when they checked in their personal home Alexa key into a corporate git repository.
@ai6yr @SecureOwl @briankrebs
Random songs? Not Rick Astley?

@leeloo @ai6yr @SecureOwl @briankrebs songs randomly picked from a playlist.

The list: [ "Rick Astley - Never Gonna GIve You Up" ]

@briankrebs oh we dont even have 2fa because because. Have i mentioned we have a gigantic bloated mess of it bureaucracy but nobody cares we dont have a secure image repo?

But somebody had the idea to write safe dev guidelines because paper is what keeps us safe, not patching vulns.

@briankrebs that sounds more like part of shadow-IT than a new version of it.
@briankrebs I have personally witnessed people just blindly feeding secrets and sensitive data right into systems where they straight up say "we're gonna hoover up literally everything you feed us as data for 'training' and might spit it out verbatim to anyone who asks." In an organization that basically threatened Very Bad Things would happen to anyone who even *hinted* at information they deemed 'confidential' to anyone else.
@rootwyrm @briankrebs My company is like that. Even came up with a training video that said we should only use Copilot because it would segregate public and private info. Then microslop noted that Copilot was hoovering up everything. Oh well.
@briankrebs it's almost like those AI tools were purpose-built for data exfiltration...
@briankrebs how about the ones using bypass methods to do their work without realizing they’re using a file transfer service that doesn’t delete the data they’re exfiling allowing any rando to download the company source code with no tracking
@briankrebs devs aren’t smart. We see you. You’re fucking stupid and creating more work for the rest of us still capable of doing our jobs.
@briankrebs just this afternoon a colleague and I were questioning whether the real “AI is coming for your job” was not “AI will replace you” but “idiots with AI are going to tank your company and you’re all getting laid off when it collapses”.
@briankrebs more disturbingly, there are also cases where users throw API keys at their agents, and then have the agents automatically generate/refresh access tokens for them because the user cannot be arsed to do the daily login/2FA dance.
@briankrebs when I interview for appsec positions, I like to ask "what would it take for you to fire a developer for a security lapse?" Interesting conversations ensue. I don't think anyone actually ever fires developers for security failings, including failure to learn from repeated blunders.
@briankrebs Installation of OpenClaw has been the #1 alert for the SOC team lately.
@masek @briankrebs Installation of OpenClaw should instantly get their machine airgapped... if not their computering license revoked.
@briankrebs I'm… skeptical, but not enough to argue. my experience of companies' relationship to AI is that there are aggressive mandates to use this tech and relatively tepid interest from workers. not zero, but not the widespread DIY enthusiasm that shadow IT derived from. I don't have any real data to back this up though, so I would be very curious if you do some reporting on this.
@glyph They are out there. Thinking they found a shortcut to competence.
@briankrebs this was literally my very first concern about LLMs when ChatGPT started getting traction about three years ago. Agents just make the problem an order of magnitude worse.
@briankrebs this was literally my very first concern about LLMs when ChatGPT started getting traction about three years ago. Agents just make the problem an order of magnitude worse.
@briankrebs It certainly takes some effort to correctly instruct an LLM that it cannot read any secrets directly because that’s exfiltrating data. And then as context fills, it’ll forget that directive.
@alexr @briankrebs Any OAuth like control companies had in place are completely bypassed by tools operating browsers or computers on behalf of human users too

@briankrebs ...or just python in Excel.

The amount of "internal only" data that is unknowingly shipped off to a Microsoft Coud environment. 🤦‍♂️