Agentic AI-based services are the new Shadow IT. Change my mind.
I'd argue that very few companies have any real appreciation for how many of their employees are already feeding API keys and other sensitive material into fairly new and questionable agentic AI tools or platforms. So many companies are like, oh, we're taking a wait-and-see approach to adopting AI. Meanwhile, half their dev team is doing critical development work on shared servers that have no authentication, or limited auth with no 2FA.

@briankrebs I am also really curious how many people have aggressively violated various privacy laws by feeding stuff into various LLMs for "summary" and "analysis".

Frankly, it should be a much larger compliance nightmare than it is. (Or, I suppose, it *is* a ginormous compliance nightmare and right now everyone just thinks it isn't. Incorrectly.)

@wordshaper @briankrebs Unfortunately, I don't think the people doing this care or will ever care. Privacy laws tend to be a joke anyway, and there is very little incentive for most people/companies to change. I don't think most governments even want that to change; it's better for them, allows more data collection, etc.

I wish I didn't have such a negative and cynical outlook on it all.

@mrmoore @briankrebs HIPAA has some teeth, and frankly I would be shocked if a bunch of attorneys *haven't* violated their professional oaths. More importantly, while the US may be a privacy nightmare, the EU and UK do have a bit more to say on the matter, with regulations that have teeth.
@wordshaper @briankrebs While HIPAA does have some teeth, it leaves a lot to be desired; there are a lot more ways around HIPAA than people imagine. I think the EU is definitely better than the US in terms of privacy, but you can already see problems coming from the EU too: parts of GDPR could be rolled back, the Chat Control initiatives, etc.
@wordshaper @briankrebs this is why i try to buy the drinks for our legal team. they care about privacy, and they're at their wits' end.
@dr_a @briankrebs I also am very fond of our legal team, and I am reminded I should make them some whiskey pie next time I’m near. (Lawyers also, for the record, are fond of Bailey’s cream puffs and rum soaked piña colada cakes. I suspect they’re not fond of their own livers, but maybe it’s just the job)
@wordshaper @briankrebs What I'm seeing in US corporate circles is sort of what you would expect - focus on liability reduction rather than solutions because it's too early for solutions and they're too caught up in FOMO to say no. They buy a small number of vendor-supported AI tools with legal agreements that claim to keep all user data inside the purchased tenant, establish policy that all employees must use the purchased solutions, and block the rest at the proxy server.
@wordshaper @briankrebs This isn't in any way a fix, but in theory it reduces their payout when they get sued.
@briankrebs

On the plus side, step #1 of setting up things like an
#AWS/#Azure/#GCP account — especially production ones — is to disable the ability to create IAM users (forcing the use of IAM-roles that are 2FA authenticated via a service like #Okta) …and the role-based authentication-tokens are typically TTLed to a couple hours.

Still, a "good" (suspicious-quotes) agent-setup would be pretty trivial to configure to snarf credentials from the relevant token-services. That triviality likely applies more broadly.
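A small aside on that credential distinction: long-lived IAM-user access keys start with `AKIA`, while temporary STS credentials start with `ASIA` (that prefix rule is the documented AWS convention), so even a crude scan of logs or dumped configs can tell which kind has leaked. A minimal sketch, with everything besides the prefix convention being illustrative:

```python
import re

# AWS access-key-ID prefixes: AKIA = long-lived IAM-user key,
# ASIA = temporary STS credential (expires, typically within hours).
KEY_ID = re.compile(r"\b(AKIA|ASIA)[0-9A-Z]{16}\b")

def classify_keys(text: str) -> dict:
    """Count long-lived vs. temporary AWS access key IDs found in text."""
    counts = {"long_lived": 0, "temporary": 0}
    for match in KEY_ID.finditer(text):
        if match.group(1) == "AKIA":
            counts["long_lived"] += 1
        else:
            counts["temporary"] += 1
    return counts
```

Finding `ASIA` keys in a leak is bad but self-limiting; finding `AKIA` keys means someone bypassed the role-based setup entirely.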

@briankrebs In several pen tests I've done over the last 18 months, one of the most interesting trends has been the sudden increase in the number of examples I've found of people who have thrown those API keys, and in some cases raw data, into accidentally public GitHub repos while attempting to glue AI to things to 'see what it can do'.

A few weeks ago I found a GitHub repo that a developer had trained on a dump of their own corporate emails, and all those emails were just public, on GitHub, and contained lots of things like vendor SFTP creds. It's a free-for-all.
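Leaks like that are usually catchable before push with even a crude pattern scan; real scanners such as gitleaks or trufflehog ship hundreds of rules, but a toy version is a few lines. A minimal sketch (the patterns below are illustrative, not exhaustive):

```python
import re

# A few well-known credential shapes; real scanners cover far more.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan(text: str) -> list:
    """Return the names of any credential patterns found in text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]
```

Wired into a pre-commit hook, even something this crude would have flagged most of the repos described above before they ever went public.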

@SecureOwl @briankrebs I will confess to playing random songs on a coworker's Alexa when they checked in their personal home Alexa key into a corporate git repository.
@ai6yr @SecureOwl @briankrebs
Random songs? Not Rick Astley?

@leeloo @ai6yr @SecureOwl @briankrebs songs randomly picked from a playlist.

The list: [ "Rick Astley - Never Gonna Give You Up" ]

@briankrebs oh we don't even have 2FA, because reasons. Have I mentioned we have a gigantic bloated mess of IT bureaucracy, but nobody cares that we don't have a secure image repo?

But somebody had the idea to write safe dev guidelines because paper is what keeps us safe, not patching vulns.

@briankrebs that sounds more like part of shadow-IT than a new version of it.
@briankrebs I have personally witnessed people just blindly feeding secrets and sensitive data right into systems where they straight up say "we're gonna hoover up literally everything you feed us as data for 'training' and might spit it out verbatim to anyone who asks." In an organization that basically threatened Very Bad Things would happen to anyone who even *hinted* at information they deemed 'confidential' to anyone else.
@rootwyrm @briankrebs My company is like that. Even came up with a training video that said we should only use Copilot because it would segregate public and private info. Then microslop noted that Copilot was hoovering up everything. Oh well.
@briankrebs it's almost like those AI tools were purpose-built for data exfiltration...
@briankrebs how about the ones using bypass methods to do their work, without realizing they're using a file transfer service that doesn't delete the data they're exfiltrating, allowing any rando to download the company's source code with no tracking?
@briankrebs devs aren’t smart. We see you. You’re fucking stupid and creating more work for the rest of us still capable of doing our jobs.
@briankrebs just this afternoon a colleague and I were questioning whether the real “AI is coming for your job” was not “AI will replace you” but “idiots with AI are going to tank your company and you’re all getting laid off when it collapses”.
@briankrebs more disturbingly, there are also cases where users throw API keys at their agents, and then have the agents automatically generate/refresh access tokens for them because the user cannot be arsed to do the daily login/2FA dance.
@briankrebs when I interview for appsec positions, I like to ask "what would it take for you to fire a developer for a security lapse?" Interesting conversations ensue. I don't think anyone actually ever fires developers for security failings, including failure to learn from repeated blunders.
@briankrebs Installation of OpenClaw has been the #1 alert for the SOC team lately.
@masek @briankrebs Installation of OpenClaw should instantly get their machine airgapped... if not their computering license revoked.
@briankrebs I'm… skeptical, but not enough to argue. my experience of companies' relationship to AI is that there are aggressive mandates to use this tech and relatively tepid interest from workers. not zero, but not the widespread DIY enthusiasm that shadow IT derived from. I don't have any real data to back this up though, so I would be very curious if you do some reporting on this.
@glyph They are out there. Thinking they found a shortcut to competence.
@briankrebs this was literally my very first concern about LLMs when ChatGPT started getting traction about three years ago. Agents just make the problem an order of magnitude worse.
@briankrebs It certainly takes some effort to correctly instruct an LLM that it cannot read any secrets directly because that’s exfiltrating data. And then as context fills, it’ll forget that directive.
@alexr @briankrebs Any OAuth-like controls companies had in place are completely bypassed by tools operating browsers or computers on behalf of human users, too.

@briankrebs ...or just python in Excel.

The amount of "internal only" data that is unknowingly shipped off to a Microsoft Cloud environment. 🤦‍♂️

@briankrebs

And then get mad when you start pointing it out.

Rarely in my twenty-five years have I experienced such rabid, ad-hoc "my business line must have this insecure garbage" pushback, with full-throated CIO support.

They are so convinced it's gonna give us the edge and we're just putting up roadblocks to the magic money train.

Also, a whole lot of "if it isn't blocked, then it must be allowed".

@briankrebs

They're the new "scatter my credentials everywhere so that I forget about them until something blows up using my permissions".
@briankrebs
This tracks; the only people excited about them seem to be the same cost-cutting and condescending managers who cause shadow IT in the first place.
@briankrebs im actively pitching a talk called 'claude is your insider threat now'

@Viss @briankrebs I just had the exact same talk with our internal AI working group. I'm not sure it arrived, but they had quite interesting papers to read.

LLMs are a fascinating information science but kind of terrible tools.

@Viss @briankrebs Would love to watch if/when it's online
@knotabard @briankrebs fingers crossed it gets accepted. im actively doing the research now even if it doesnt - i got a gaming rig ive lit up with crush and some llms and im wiring up an mcp server to test how often mcp calls are full of lies first
@briankrebs shadow IT usually originates from actual requirements that can't be fulfilled by IT. Meaning it solves someone's real problems, but in the wrong way.
AI agents don't solve anyone's real problems (yet). They basically only create problems in every possible way.
Yeah, they finally invented macros. Or IFTTT as the kids call it.
@briankrebs let's be honest tho shadowit.ai sounds pretty bad ass
@grumpasaurus @briankrebs This is definitely what we all need: autonomous AI running IaC deployments. I mean, what could go wrong??

@steff @briankrebs ok i see your point.

Let me make a bad ass logo to go with it. It will make you think of Darkwing Duck but you won't be able to put a finger on it.

@briankrebs AI Daily Brief tracks that as a metric, and the podcast occasionally talks about how prevalent it is. The people who can answer that are the frontier labs themselves; many avenues to inference exist, it's everywhere, and I imagine plenty of audio recordings and eyeglass surveillance behind secure doors.
@briankrebs there are so many companies already whose business model is reining this in
@briankrebs I mean, when the shadow outshines the object, is it a shadow anymore?
@danielkennedy74 That reminds me of some optics: See Obscured Airy pattern @ https://en.wikipedia.org/wiki/Airy_disk

@briankrebs

I'm a bit old school, so: do you have the Excel sheets to prove it?

@briankrebs I have to admit.... earlier this week I spent like 5 hours trying to get this Ubiquiti camera system to work. I tried everything I could think of.

finally, I just gave ssh access to claude code, set it on no-permission-necessary and told it to keep trying to get those cameras online until they work. then went out and had a nice dinner with my wife, a couple glasses of wine.

Came back to shut the thing off... all set, worked perfectly. Still running.

so. If you folks don't think you can be replaced (at least partially) with AI, think again.

@coldfish @briankrebs you gave one of these spaghetti code generators access to externally facing hardware and told it to "get this online"? Cause you should probably go through that entire system now, you have absolutely no idea what it opened and allowed access to. Like, if someone told me they'd done that to one of my systems, I'd be reflashing the whole thing and loading the configs from backup.
@rootfake @coldfish @briankrebs Seriously. Thing probably opened telnet up to the internet and set the GUI interface to default passwords. Dude can probably use Shodan to check his cameras now.
@rootfake @briankrebs LOL. I'm glad it was just a coffee-shop video system. Hackers can feel free to watch me drink espresso and swear at my clients' requests.

@briankrebs Was recently forced to sit through an AI booster presentation at work, where the presenter kept demonstrating the use of tools that are banned as per corporate policy.

Lots of management and IT in the meeting. No one spoke up. Security is deader than satire.

@briankrebs

True, true.

Just had this conversation. Without a solid understanding and policy, it's the Wild West. We need to find a way to give them what they want before they just start really feeding random secrets into other LLMs.

And yes, blocking or stopping access will just result in Gmail exfil of data, sneakernet (remember me!), or using random “project” sites to bypass blockers.