@briankrebs I am also really curious how many people have aggressively violated various privacy laws by feeding stuff into various LLMs for "summary" and "analysis".
Frankly it should be a much larger compliance nightmare than it is. (Or, I suppose, it *is* a ginormous compliance nightmare and right now everyone just thinks it isn't. Incorrectly.)
@wordshaper @briankrebs Unfortunately, I don't think the people doing this care or will ever care. Privacy laws tend to be a joke anyways and there is very little incentive for most people/companies to change. I don't think most governments even want that to change. It's better for them, allows more data collection, etc.
I wish I didn't have such a negative and cynical outlook on it all.
@briankrebs In several pen tests I've done over the last 18 months, one of the most interesting trends has been the sudden increase in people who have thrown API keys, and in some cases raw data, into accidentally-public GitHub repos while attempting to glue AI to things to 'see what it can do'.
A few weeks ago I found a GitHub repo where a developer had trained a model on a dump of their own corporate emails, and all those emails were just sitting there, public, on GitHub, containing lots of things like vendor SFTP creds. It's a free-for-all.
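For what it's worth, the obvious cases in a repo like that are catchable before push. A minimal sketch of the kind of pattern scan dedicated tools (gitleaks, trufflehog, etc.) do far more thoroughly; the rule names and helpers here are hypothetical, and the regexes only cover a few well-known credential shapes:

```python
import re
from pathlib import Path

# A tiny, illustrative rule set; real secret scanners ship hundreds of rules.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "password_assignment": re.compile(r"(?i)(?:password|passwd|sftp_pass)\s*[:=]\s*\S+"),
}

def scan_text(name: str, text: str) -> list[tuple[str, str, int]]:
    """Return (filename, rule, line_number) for every suspicious match."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for rule, pattern in PATTERNS.items():
            if pattern.search(line):
                hits.append((name, rule, lineno))
    return hits

def scan_repo(root: str) -> list[tuple[str, str, int]]:
    """Walk a working tree and scan every readable file."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            try:
                hits.extend(scan_text(str(path), path.read_text(errors="ignore")))
            except OSError:
                continue  # unreadable file; skip
    return hits
```

Wired into a pre-commit hook, even a crude check like this would have flagged those vendor SFTP creds before they ever hit a public remote.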
@leeloo @ai6yr @SecureOwl @briankrebs songs randomly picked from a playlist.
The list: [ "Rick Astley - Never Gonna Give You Up" ]
@briankrebs oh we don't even have 2FA, because reasons. Have I mentioned we have a gigantic bloated mess of IT bureaucracy, but nobody cares that we don't have a secure image repo?
But somebody had the idea to write safe dev guidelines because paper is what keeps us safe, not patching vulns.
@briankrebs ...or just python in Excel.
The amount of "internal only" data that is unknowingly shipped off to a Microsoft Cloud environment. 🤦‍♂️
And then they get mad when you start pointing it out.
Rarely in my twenty-five years have I experienced such rabid, ad-hoc "my business line must have this insecure garbage" pushback with full-throated CIO support.
They are so convinced it's gonna give us the edge and we're just putting up roadblocks to the magic money train.
Also, a whole lot of "if it isn't blocked, then it must be allowed".
@Viss @briankrebs I just had the exact same talk with our internal AI working group. I'm not sure the message landed, but they had some quite interesting papers to read.
LLMs are a fascinating information science but kind of terrible tools.
@steff @briankrebs ok i see your point.
Let me make a badass logo to go with it. It will make you think of Darkwing Duck but you won't be able to put your finger on it.
I'm a bit old school, so: do you have the Excel sheets to prove it?
@briankrebs I have to admit.... earlier this week I spent like 5 hours trying to get this Ubiquiti camera system to work. I tried everything I could think of.
Finally, I just gave SSH access to Claude Code, set it to no-permission-necessary mode, and told it to keep trying to get those cameras online until they worked. Then I went out and had a nice dinner with my wife and a couple glasses of wine.
Came back to shut the thing off.... all set, worked perfectly. Still running.
so. If you folks don't think you can be replaced (at least partially) with AI, think again.
@briankrebs Was recently forced to sit through an AI booster presentation at work, where the presenter kept demonstrating the use of tools that are banned as per corporate policy.
Lots of management and IT in the meeting. No one spoke up. Security is deader than satire.
True, true.
Just had this conversation. Without a solid understanding and policy, it's the Wild West. We need to find a way to give them what they want before they just start feeding random secrets into other LLMs.
And yes, blocking or stopping access will just result in Gmail exfil of data, sneakernet (remember me!), or using random “project” sites to bypass blockers.