Brian Greenberg 

435 Followers
239 Following
666 Posts
CIO by day, cybersecurity professor & Forbes Contributor by night, and a firm believer that the best ideas start with good coffee. I’m passionate about using AI, cloud tech, and leveraging system dynamics to make work (and life) a little easier.
Outside of work, I’m either reading/writing in some indie coffee house, hiking shady trails along the river, or adding to my ever-growing collection of houseplants.
I’m always learning, always leading, and always up for a good book or a new coffee house to explore.
#CyberSecurity #systemstheory #hiking #philosophy #actor #improviser #storyteller #coffee house addict
📍Chicago, IL 
🦋🥾☕️🎭🤖🪴✍️
Bloghttps://briangreenberg.net
Githubhttps://github.com/bjgreenberg
Gravatarhttps://gravatar.com/bjgreenberg
Threadshttps://www.threads.net/@bjgreenberg
LinkedInhttps://linkedin.com/in/bjgreenberg
LinkTreehttps://linktr.ee/brian.greenberg

The McDonald's AI jailbreak story was fabricated. The Chipotle one before it was Photoshopped. I get why they went viral, they're kinda funny. But they're pulling attention away from the cases that actually happened and actually cost companies money.

Amazon's Rufus chatbot got manipulated into providing instructions for obtaining dangerous chemicals. A Chevy dealership's bot was maneuvered into agreeing to sell a $76,000 Tahoe for a dollar. Air Canada's bot invented a refund policy that didn't exist, a customer relied on it, and when the airline said "that's not our problem, the bot is its own entity," a Canadian tribunal told them exactly where to put that argument.

If you're a CIO, the legal question sitting underneath all of this is the one worth losing sleep over:
- Prompt injection isn't exotic. It works because LLMs are built to be responsive to language, not resistant to it. There is no patch that fully closes this.
- Any AI you deploy on a customer-facing surface is making representations on your company's behalf. Your legal team needs to know that before your marketing team ships the chatbot.
- "The bot did it, not us" is not a defense. One court has already said so, and others will follow.

The fake viral stories are a distraction. The boring real ones are the ones that end up in discovery.

https://www.fastcompany.com/91532091/mcdonalds-ai-bot-didnt-go-rogue
#Cybersecurity #AI #Leadership #security #privacy #cloud #infosec #cybersecurity

No, McDonald's AI bot didn't go rogue, but 'prompt injection' is still a risk for companies

People hacking branded AI bots can come with reputational, financial, and legal costs.

Fast Company

One founder just called out something the VC community has been quietly living with for a while. AI startups are reporting CARR, which counts revenue that hasn't been invoiced and may never be, to the press while labeling it ARR. The gap between those two numbers, per the CEO who went public about it, can run 3 to 5x.

Here's the part that should make you uncomfortable if you're buying AI tools or evaluating vendors: the VCs aren't getting fooled. They read the contracts. The people getting fooled are journalists writing the coverage you're using to make procurement decisions, and employees who think they're joining a rocketship.

A few things worth sitting with:
- Free pilots counted as revenue is not a new trick. It just has a better outfit now.
- If you're a CIO evaluating an AI vendor's "momentum," ask one question: is that ARR live and invoiced, or contracted?
- The companies chasing inflated benchmarks they can't actually hit are the ones that will blow up your implementation 18 months in.

We've been here before. The numbers looked great right up until they didn't.

https://www.fastcompany.com/91532292/ai-startups-arr-carr-scott-stevenson
#AI #Leadership #Cybersecurity #VC #PE #startup #vaporware

AI startups are inflating a key revenue metric to win VC attention, says this founder

Founders are blurring ARR with future contract revenue to boost headline numbers, according to Spellbook CEO Scott Stevenson.

Fast Company

The FCC forgot hotspots were a thing. They announced a ban on foreign-made consumer routers a month ago and had to update their FAQ to add MiFi devices and cellular home routers after the fact. That's not a minor oversight... it's the whole work-from-anywhere use case.

Here's the part that should bother you. The only way to get an exemption is to commit to US-based manufacturing and submit a time-bound plan to get there. Netgear, eero, and Adtran got conditional approval, but it runs out October 1, 2027. There is no domestic consumer router industry to speak of right now. So the FCC has created a countdown clock against a factory floor that doesn't exist yet.
A few things worth sitting with:
- The Global Electronics Association pointed out that security vulnerabilities show up across products regardless of where they're made. Geography isn't the filter; code quality is.
- The Covered List used to apply to specific companies flagged for specific reasons. Extending it to an entire product category means the government can now ban any internet-connected device made abroad by citing national security. Smartphones aren't included yet. "Yet" is doing a lot of work in that sentence.
- The Register's headline from last month said it plainly: the country that put backdoors in Cisco routers to spy on the world is now banning foreign routers. I didn't write that. They did. But they're not wrong.

If you're in security or IT leadership, watch the October 2027 date. That's when the conditional approvals expire, and if the manufacturing commitments aren't met, the options get ugly fast.

https://www.theregister.com/2026/04/24/fcc_does_a_doubletake_adds/
#Cybersecurity #FCC #NetworkSecurity #security #privacy #cloud #infosec

US clarifies mobile hotspots part of foreign router ban despite rarity of American made consumer kit

: Silicon often from US, but the kit from APAC and elsewhere

The Register

A lower court decided Apple, Google, and Facebook lose Section 230 immunity because they ran credit card transactions inside social casino apps. Not because they built the apps. Not because they designed the gambling mechanics. Because they processed the payments.

Follow that logic downstream and Etsy is liable for a seller's counterfeit goods the moment a buyer checks out. Patreon is exposed the second a creator's content draws a lawsuit. Section 230 has kept smaller platforms alive since 1996 by separating the pipe from the content flowing through it. Courts inventing a payment-processing carve-out don't hurt Apple. Apple has lawyers. The platforms that get hurt are the ones that can't afford to fight.

EFF filed an amicus brief arguing the 9th Circuit should reverse the lower court, and they're right. Congress never drew a line between hosting content and processing payments for it. Judges shouldn't draw one now just because the content happens to be digital slot machines.

https://www.eff.org/deeplinks/2026/04/eff-9th-circuit-again-app-stores-shouldnt-be-liable-processing-payments-user
#Tech #Law #Leadership

EFF to 9th Circuit (Again): App Stores Shouldn’t Be Liable for Processing Payments for User Content

EFF filed an amicus brief for the second time in the U.S. Court of Appeals for the Ninth Circuit, arguing that allowing cases against the Apple, Google, and Facebook app stores to proceed could lead to greater censorship of users’ online speech.Our brief argues that the app stores should not lose...

Electronic Frontier Foundation

Google just put $10 billion into Anthropic. Its competitor. The company it's also racing against to win the AI era.

Amazon dropped $5 billion into the same company this week. And investors are apparently trying to back Anthropic at an $800 billion valuation, up from $350 billion in February.

I get the hedge. Nobody wants to be Blockbuster. But there's something worth sitting with here: when the biggest players in tech are all funding the same startup, that's not a competitive landscape. That's a cartel with extra steps.

🤔 The "up to $40B" framing matters. Google commits $10B now. The other $30B depends on Anthropic hitting performance milestones. So Google's hedging its hedge.

💰 Anthropic is reportedly considering an IPO as soon as October. After this week, the timing makes a lot more sense.

🔒 From a security standpoint, this kind of capital concentration around a handful of AI providers should make enterprise buyers nervous. You're not just picking a vendor. You're picking a dependency.

https://www.cnbc.com/2026/04/24/google-to-invest-up-to-40-billion-in-anthropic-as-search-giant-spreads-its-ai-bets.html
#AI #Cybersecurity #Leadership

Anthropic recorded over 16 million interactions with Claude from about 24,000 fake accounts, which are reportedly linked to Chinese companies trying to cheaply copy the model. Google faced more than 100,000 attempts to copy Gemini. OpenAI reports that most distillation attacks they find come from China. This is not an isolated event. It is a repeatable and scalable strategy.

Breaking the terms of service isn't enough to stop people when the reward is closing a years-long gap in AI technology. The House Select Committee on China wants to label 'adversarial distillation' as industrial espionage under the Economic Espionage Act, which makes sense. At the moment, getting caught just means losing an account. That is hardly a real punishment.

The Trump-Xi summit is approaching, and the White House is reportedly considering sanctions. However, Trump has previously traded away export controls for other deals. If that happens again, AI companies may have to protect their intellectual property by themselves.

When laws fail to keep pace with new types of attacks, attackers automatically have the advantage.

If your company is developing anything unique using advanced AI models, your API access logs are now part of your security risks.

https://arstechnica.com/tech-policy/2026/04/us-accuses-china-of-industrial-scale-ai-theft-china-says-its-slander/

#AI #Cybersecurity #NationalSecurity #IntellectualProperty #Geopolitics #security #privacy #cloud #infosec #Espionage

US accuses China of “industrial-scale” AI theft. China says it’s “slander.”

Trump-Xi summit may be rocked by US mulling huge sanctions.

Ars Technica

A Chinese national pretended to be U.S. engineers and researchers for almost five years, from 2017 to 2021, and walked away with sensitive aerospace and weapons development software from NASA, the Air Force, the Navy, and the Army. There was no hacking or breaking through firewalls. People simply emailed him what he asked for, because they believed he was someone they knew.

This worries me more than any zero-day vulnerability. The NASA OIG reported that Song Wu asked for the same software several times without explaining why he needed it. Most people miss this kind of red flag because no one teaches them to spot it. We invest millions in technology controls but spend very little on training people to pause and think like a threat actor before sending information.

Export controls are not only about legal compliance. They are also about human behavior. Your employees make export control decisions every day, often without realizing it.

When was the last time your organization ran a spear-phishing simulation aimed at your researchers, not just your finance team?

If your security awareness program doesn't cover identity deception and unusual software requests, it is not thorough enough.

https://thehackernews.com/2026/04/nasa-employees-duped-in-chinese.html
#Cybersecurity #NationalSecurity #Espionage #SecurityAwareness #InfoSec #security #privacy #cloud #infosec

NASA Employees Duped in Chinese Phishing Scheme Targeting U.S. Defense Software

NASA OIG exposed a 2017–2021 spear-phishing campaign by Song Wu, leading to DOJ charges and export control violations.

The Hacker News

Bitcoin blocks usually take about 10 minutes to confirm. According to new research from Google, quantum key derivation might only take around 9 minutes. That similarity is hard to overlook.

The main point isn’t that quantum computers will eventually break crypto—we’ve expected that. What matters now is that Google has reduced the estimated resources needed by about 20 times. That means fewer qubits, fewer gate operations, and shorter timelines. Plus, 1.7 million BTC are stored in old address formats where the public key is already visible. Attackers wouldn’t have to hurry; they could take as long as they want. 🔓

The crypto industry often sees upgrades like SegWit and Taproot as successes, and they are. However, Taproot brought back direct public key exposure for different reasons. Now, every design choice in crypto has a quantum aspect, whether teams realize it or not.

⏳ The threat isn’t immediate, but the time to prepare is now—and that window won’t last forever.
🏛️ If your organization holds digital assets, you should add post-quantum cryptography to your risk register now, not two years from now.

https://www.ccn.com/education/crypto/google-quantum-computers-break-bitcoin-ethereum-9-minutes-1-7m-btc-risk/
#Cybersecurity #QuantumComputing #Crypto #RiskManagement #Blockchain

Google Warns Quantum Computers Could Break Bitcoin and Ethereum Encryption in 9 Minutes — Are Your BTC and ETH at Risk?

Google’s latest research warns quantum computers could break Bitcoin and Ethereum encryption faster than expected.

CCN.com

Anthropic spent months carefully gatekeeping access to Mythos, their most capable AI model, while limiting access only to a small group of vetted companies for defensive cybersecurity testing. Then a private online forum got in anyway, through a third-party vendor, on the same day the controlled program was announced.

That's the part worth sitting with. Not the model. The vendor. Third-party vendors... It's always the the 3td party vendor. 🤦🏻‍♂️ You can build the most carefully controlled AI release program in the industry, and one weak link in your supply chain burns it down. We keep having this conversation about AI safety and regulation, and we keep forgetting that the threat surface isn't just the model. It's every partner, every integration, every environment touching it. 🔗 Everything's connected. Everything.

🤔 Ask yourself: how many third parties have access to your most sensitive systems right now? Do you actually know?
⚠️ Vendor risk management isn't a compliance checkbox. It's where your security posture actually lives or dies.

https://www.yahoo.com/news/articles/anthropics-mythos-model-accessed-unauthorized-214920132.html
#Cybersecurity #AI #VendorRisk #InfoSec #RiskManagement #security #privacy #cloud #infosec

Anthropic's Mythos model accessed by unauthorized users, Bloomberg News reports

A small group of unauthorized users has accessed Anthropic's new ‌Mythos AI model, Bloomberg News reported ‌on Tuesday, citing documentation and a person...

Yahoo News

Trying to be secure... You deleted the app. You turned on disappearing messages. You did everything right. The FBI can still read your Signal messages.

Huh? This wasn't a Signal failure. Signal did its job. iOS didn't. The phone was storing notification previews in a database long after the app was gone, because someone turned on Lock Screen message previews. Apple just patched it in iOS 26.4.2, and they only found out about it because a defendant's court case exposed it during testimony.

🔎 This is why privacy promises and privacy architecture are two different things
📲 Update your phone. Not because you're hiding something. Because your phone is quietly keeping receipts you don't know about.
⚠️ And if you're a CISO still telling employees that "just use Signal" is a complete privacy answer, it's time to revisit that conversation.

https://www.macrumors.com/2026/04/22/ios-26-4-2-notification-database-security-fix/
#Cybersecurity #Privacy #iOS #InfoSec #Leadership #security #cloud #infosec #AlwaysUpdate

iOS 26.4.2 Patches Flaw That Let FBI Extract Deleted Signal Messages

The iOS 26.4.2, iPadOS 26.4.2, iOS 18.7.8, and iPadOS 18.7.8 updates that Apple released today address a security vulnerability that the FBI recently...

MacRumors