Brian Greenberg 

382 Followers
240 Following
636 Posts
CIO by day, cybersecurity professor & Forbes Contributor by night, and a firm believer that the best ideas start with good coffee. I’m passionate about using AI, cloud tech, and leveraging system dynamics to make work (and life) a little easier.
Outside of work, I’m either reading/writing in some indie coffee house, hiking shady trails along the river, or adding to my ever-growing collection of houseplants.
I’m always learning, always leading, and always up for a good book or a new coffee house to explore.
#CyberSecurity #systemstheory #hiking #philosophy #actor #improviser #storyteller #CoffeeHouseAddict
📍Chicago, IL 
🦋🥾☕️🎭🤖🪴✍️
Blog: https://briangreenberg.net
Github: https://github.com/bjgreenberg
Gravatar: https://gravatar.com/bjgreenberg
Threads: https://www.threads.net/@bjgreenberg
LinkedIn: https://linkedin.com/in/bjgreenberg
LinkTree: https://linktr.ee/brian.greenberg

First, Discord announced age verification. As predicted, users revolted. A former partner had already leaked 70,000 government IDs. Then, Discord backed down. And now the age-check vendors exposed in the process have to defend technology most people didn't even know existed. Researchers at Georgia Tech reverse-engineered Yoti, the dominant age-check provider used on over 60% of compliant sites in states with age-gate laws. They found that Yoti sends your photo to its servers, collects data "beyond what is strictly necessary," and shares it with fourth parties most users have never heard of. Yoti disputes the findings, but it has confirmed that facial age estimation does not happen on-device. Meanwhile, the EFF notes that on-device processing is "less dangerous" than sending data over a network.

🔐 On-device face scans mean your biometric data stays on your phone, for now
🗝️ "Age keys" built on FIDO passkey tech could let you reuse an age signal across platforms without re-verifying each time
📸 The dominant provider in the US runs a million checks a day and sends your photo to its servers
⚖️ The Supreme Court ruled last summer that online age verification doesn't violate the First Amendment, partly based on Yoti's technical claims 😳

The thing people don’t realize is that once age-check infrastructure is embedded across every major platform, it doesn't go away. Every update is a new attack surface. Every new law expands the mandate. And the CEO of one of these companies is already talking about age-aware cameras and microphones as the logical next step.

Your device should work for you. The moment it starts working for someone else's compliance requirement, that's a different product than the one you thought you had.

https://arstechnica.com/tech-policy/2026/03/after-discord-fiasco-age-check-tech-promises-privacy-by-running-locally-does-it-work/
#Privacy #CyberSecurity #TechPolicy #security #cloud #infosec

Users hate it, but age-check tech is coming. Here's how it works.

On-device face scans and cross-platform age keys decrease privacy risks, but trust issues abound.

Ars Technica

🤣 A robot in a restaurant in California decided that smashing plates was more fun than delivering food, then pivoted to jazz hands while two staff members tried to wrestle it back under control. Its apron said "I'M GOOD!" 🤖 It's crazy to think that we're putting hardware with enough power to knock a kid down or take out unaware bystanders into public spaces. We have a product culture that moves too fast and doesn't ask important, yet simple, questions.

The video is funny right up until you picture a five-year-old standing where those plates were.

Nobody got hurt this time. But the reason to think carefully about physical AI deployment isn't the dramatic failure. It's the hundred smaller decisions made before the robot ever left the warehouse that make the failures possible.

https://gizmodo.com/robot-losing-its-mind-in-a-california-restaurant-is-just-as-fed-up-as-everyone-else-2000735088
#AI #Robotics #TechEthics #security #privacy #cloud #infosec #cybersecurity

Robot Losing Its Mind in a California Restaurant Is Just as Fed Up as Everyone Else

Dance like no one's watching.

Gizmodo

The European Commission got hit with a cyberattack, again. 350 GB allegedly taken, mail server contents, databases, confidential contracts. Their own cyber chief warned that the EU is "losing massively against hackers." What gets me is the timing. The EU just sanctioned companies from China and Iran over cyberattacks on member states. The message was: we see you, and there are consequences. Then their own infrastructure gets hit and 350 GB walks out the door. 🤦🏻‍♂️

🗓️ This is the second breach of EU institutions in 2026, just three months in
📦 A hacking group claims to have mail server contents, databases, and confidential documents
🔒 No indication internal Commission systems were compromised, but the investigation is still open
📜 The EU has NIS2, the Cyber Solidarity Act, and a Cybersecurity Regulation on the books

I guess frameworks don't defend systems after all. People, processes, and patched infrastructure do. You can write the most thorough regulation in the world and still get breached through a cloud hosting provider nobody was watching closely enough. Third-party risk is my nightmare.

If you're a CISO or CIO reading this, the question isn't whether your regulatory posture is solid. It's whether your third-party cloud infrastructure would survive the same scrutiny you apply to your internal systems.

https://www.helpnetsecurity.com/2026/03/30/european-commission-cyberattack-cloud-infrastructure-website/
#CyberSecurity #CloudSecurity #InfoSec #security #privacy #cloud

Second data breach at European Commission this year leaves open questions over resilience - Help Net Security

The European Commission confirmed that a cyberattack impacted cloud infrastructure hosting its web presence on the Europa.eu platform.

Help Net Security

We keep worrying about AI doing something evil. Which it might, but right now there's a risk in the plumbing supporting it. Three vulnerabilities in LangChain and LangGraph: path traversal, unsafe deserialization, and SQL injection. Not AI-specific attacks. They're neither novel nor sophisticated; these are the kinds of bugs we've been patching since the late '90s. One of them scored a severity of 9.3 out of 10. "The biggest threat to your enterprise AI data might not be as complex as you think." Remember that you're building AI on top of frameworks you didn't write, can't fully audit, and update only when it's convenient. That's the actual problem.

🔐 Path traversal lets attackers read arbitrary files from the host system, including credentials
🔑 Unsafe deserialization exposes API keys and environment variables at runtime
🗄️ SQL injection in the checkpointing layer leaks conversation history from your AI agents

All three are fixed now. But "fixed" only matters if you've actually applied the patches across every integration. Most organizations haven't.

The lesson isn't about AI security. It's that AI doesn't change what good security engineering looks like. Input validation, parameterized queries, strict path sandboxing. This is stuff your dev team learned before ChatGPT existed.
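As a minimal sketch of two of those basics in Python (the `checkpoints` table and the `/srv/agent-data` directory are hypothetical names, not from the article):

```python
import os
import sqlite3

BASE_DIR = os.path.realpath("/srv/agent-data")  # hypothetical allowed root

def safe_resolve(user_path: str) -> str:
    """Strict path sandboxing: resolve the requested path and refuse
    anything that escapes the allowed base directory (blocks ../ traversal)."""
    resolved = os.path.realpath(os.path.join(BASE_DIR, user_path))
    if os.path.commonpath([resolved, BASE_DIR]) != BASE_DIR:
        raise ValueError(f"path escapes sandbox: {user_path}")
    return resolved

def fetch_history(conn: sqlite3.Connection, thread_id: str):
    """Parameterized query: the driver binds thread_id as data, never as
    SQL text, so injection through thread_id can't alter the statement."""
    return conn.execute(
        "SELECT role, content FROM checkpoints WHERE thread_id = ?",
        (thread_id,),
    ).fetchall()
```

Nothing here is AI-specific, which is exactly the point: the same two patterns would have closed the path traversal and SQL injection bugs described above.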

If you're deploying AI pipelines and you haven't done a security review of the frameworks underneath them, you're not running an AI strategy. You're running a trust exercise.

https://www.csoonline.com/article/4151814/langchain-path-traversal-bug-adds-to-input-validation-woes-in-ai-pipelines.html
#CyberSecurity #AIRisk #AppSec #security #privacy #cloud #infosec

LangChain path traversal bug adds to input validation woes in AI pipelines

The path traversal flaw, allowing access to arbitrary files, adds to a growing set of input validation issues in AI pipelines.

CSO Online

@richlv @tael .. what? that I like emojis 🙃

... and who doesn't use AI to assist in writing, whether it's Grammarly, Claude, or spell-check?

I teach cybersecurity. And I genuinely don't know what to tell my students after this one. Federal reviewers spent years trying to get basic encryption documentation from Microsoft for its GCC High government cloud. They couldn't get it. One reviewer called the system a "pile of spaghetti pies," with data traveling from point A to point B the way you'd get from Chicago to New York: a bus to St. Louis, a ferry to Pittsburgh, and a flight to Newark. Each leg is a potential hijacking. They knew this. They said this out loud in writing. Then they approved it anyway in December 2024, because too many agencies were already using it. 🔐 That's not a security review. That's a hostage negotiation. Two things in this story should make every CISO and CIO uncomfortable:

🧩 Microsoft built its federal cloud on top of decades of legacy code that it apparently can't fully document itself
👮 "Digital escorts" often ex-military with minimal software engineering backgrounds are the firewall between Chinese engineers working on the system and classified U.S. networks 🤦🏻‍♂️

The scariest line in the whole ProPublica investigation isn't the "pile of shit" quote. It's this: FedRAMP determined that refusing authorization wasn't feasible because agencies were already using the product. Read that again. The security review process reached a conclusion based on sunk cost, not risk. That's the sunk-cost fallacy in action.

If that logic holds, the compliance framework is just documentation theater. And right now, CISA is being hollowed out, so there are fewer people left to even run the theater.

https://arstechnica.com/information-technology/2026/03/federal-cyber-experts-called-microsofts-cloud-a-pile-of-shit-approved-it-anyway/
#Cybersecurity #Microsoft #FedRAMP #Leadership #RiskManagement #security #privacy #cloud #infosec

Federal cyber experts called Microsoft's cloud a "pile of shit," approved it anyway

One Microsoft product was approved despite years of concerns about its security.

Ars Technica

In the recent Meta/YouTube trial, the plaintiff started using YouTube at age 6 and Instagram at age 9. The jury deliberated 43 hours, answered "yes" to every negligence question, and found evidence of malice. Then Meta's stock went up 0.7%. 🤔 That gap tells you everything. 📊

The $6 million award is basically a rounding error for companies pulling in $350 billion in combined annual revenue. What actually matters is the 2,000 pending lawsuits this verdict just handed a roadmap to, and the federal trial coming in Oakland this summer. This is the first domino. The tobacco industry had the same "we're being scapegoated" defense in 1994, and that argument eventually cost them $206 billion.

Here's what I keep thinking about as a guy who teaches about the legal, ethical, and social issues of information technology: the products we build have consequences we're responsible for, whether we want to admit it or not. The jury didn't care that Meta said Kaley's home life was complicated. They cared that the autoplay kept going anyway. 🔁

Two things can both be true: teen mental health is complex, and a notification engine designed to override a kid's ability to stop scrolling is a design choice someone made.

https://www.latimes.com/california/story/2026-03-25/social-media-lawsuit-trial-meta-google-verdict
#ChildSafety #BigTech #Leadership #Accountability #SocialMedia #Ethics #DePaulUniversity #DePaulU @depaulu

Landmark verdict finds Instagram, YouTube were designed to addict kids

The outcome Wednesday in Los Angeles County Superior Court is potentially precedent-setting for thousands of other pending lawsuits nationwide and could reshape how tech companies are held accountable for children's harm caused by their products.

Los Angeles Times

Here's the thing about the X advertising lawsuit: Musk didn't lose because of bad lawyers. He lost because antitrust law isn't designed to protect you from the consequences of your own decisions. The judge literally wrote she had "no qualm" dismissing it.

Ad revenue on X dropped by more than 50% after he gutted the content moderation team and disbanded the Trust and Safety Council. Then he sued Mars, CVS, Colgate, and a dozen others, claiming their decision to stop buying ads was an illegal conspiracy. The court said no. Choosing not to buy from someone isn't a crime. It's just a Tuesday. This is about how leaders respond when the market sends a signal. 📊

🚪 Advertisers didn't abandon X because of a coordinated plot; they left because the product stopped meeting their needs
📜 GARM, the brand safety group at the center of this, dissolved itself in August 2024 under pressure from the lawsuit, and X still lost anyway

When your customers leave, the first question shouldn't be "who do I sue?" It should be "what did I do that made leaving feel like the right call?"

https://arstechnica.com/tech-policy/2026/03/elon-musk-loses-big-in-court-x-boycott-perfectly-legal/
#Leadership #BusinessStrategy #X #Advertising #Accountability

Elon Musk loses big in court; X boycott perfectly legal

X admonished for "fishing expedition" as judge dismisses ad boycott lawsuit.

Ars Technica

Congress banned federal agencies from collecting bulk data on Americans in 2015. So some of them just started buying it from data brokers instead. 😳 ICE signed a contract with a company whose tool can track mobile phone movements or locate phones that have visited specific locations. No warrant. Taxpayer money. Done. One privacy attorney put it plainly: it's like the police paying your landlord $100 for a spare key and walking through your house without a warrant.

Now add AI to that picture. Anthropic's CEO Dario Amodei warned that records the government can purchase can be used by AI to assemble "a comprehensive picture of any person's life automatically and at a massive scale." That's not hypothetical. That's now. And the window to fix this through FISA reauthorization closes April 20!

The business angle nobody's talking about: the same data brokers selling to ICE are selling data your employees, customers, and executives generate every day. You have no control over what happens to it after it leaves your app or browser. That should be in your risk conversation, not just your privacy policy.

🏛️ This is bipartisan; Republicans and Democrats are co-sponsoring the fix
📅 April 20 is the deadline

https://www.npr.org/2026/03/25/nx-s1-5752369/ice-surveillance-data-brokers-congress-anthropic
#Privacy #AI #Leadership #Cybersecurity #security #cloud #infosec #surveillance

Oh boy. Stanford researchers scanned 10 million web pages and found API keys just sitting in the public-facing code. That's 1,748 active credentials from major providers exposed in live website code, mostly inside JavaScript files. Not in old test environments. Not in a forgotten repo. In the live, running site. Banks. Healthcare providers. "Not just small companies, but some very large companies," according to the lead researcher. And some of those credentials had been sitting there for years. Not the first time I've seen something like this. 🤦🏻‍♂️

The thing is that most orgs scan their source code but not their deployed sites. 😳 Those are two different things, and most leaks originate during the build process. A key gets baked in somewhere between development and production, and nobody catches it because the scan already ran upstream. Meanwhile, GitGuardian counted over 28 million new hardcoded secrets exposed in public GitHub commits in 2025 alone. This isn't a one-time research finding; it's a systemic habit that needs to change.
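A sketch of what scanning the deployed artifact, rather than the repo, can look like: a pattern-based pass over the JavaScript bundles your live pages actually serve. The three patterns below are a tiny illustrative subset; real scanners use hundreds of provider-specific patterns and then validate hits against the provider before alerting.

```python
import re

# A few well-known credential shapes. This list is illustrative only;
# production scanners (e.g. GitGuardian's) cover far more providers.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "google_api_key": re.compile(r"\bAIza[0-9A-Za-z_\-]{35}\b"),
    "generic_secret": re.compile(
        r"(?i)(api[_-]?key|secret)['\"]?\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def scan_js(source: str) -> list[tuple[str, str]]:
    """Scan one deployed JavaScript bundle for credential-shaped strings.
    Point this at the bundles your live pages actually load, not the files
    in your repo: the leak usually happens at build time, after the
    upstream source-code scan has already run."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(source):
            findings.append((name, match.group(0)))
    return findings
```

Pull the bundle URLs from your production HTML on a schedule and diff the findings against the last run; a new hit means a key was baked in somewhere between development and deploy.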

🔍 When did your team last scan the live site, not just the codebase?
🏦 If you're in a regulated industry, that question just became a compliance question too

https://www.newscientist.com/article/2520143-security-credentials-inadvertently-leaked-on-thousands-of-websites/
#Cybersecurity #AppSec #Leadership #security #privacy #cloud #infosec

Security credentials inadvertently leaked on thousands of websites

Researchers identified nearly 10,000 websites where API keys could be found, exposing details that could let attackers access sensitive information

New Scientist