
A New Hampshire school learned sign language to communicate with its only deaf student
New Hampshire is one of the few states in the nation that doesn't have a dedicated school for the deaf.

For my second leadership event of the day… I’m honored to participate in CXO Inc.’s #CISOMeet event today. We’re discussing the evolving role of the CIO/CISO today and strategies for #AI and #cybersecurity.
#cio #chicago #technology #leadership #educator #mentorship #collaboration #cisomeet #ciso

On April 2nd, CISOMeet Enterprise Chicago will collaborate with the Chicago CISO Community for an exclusive event focused on the future of security leadership, resilience, and innovation. Engage in moderated dialogue, peer collaboration, and insights shaping the CISO role in 2026 and beyond.
I’m honored to be a panelist at CXO Inc.’s #CIOMeet event today. We’re discussing the evolving role of the CIO today and strategies for #AI and #cybersecurity.
#cio #chicago #technology #leadership #educator #mentorship #collaboration #ciomeet #futuristcio

On April 2nd, CIOMeet Chicago will collaborate with Chicago's CIO Community to discuss, debate, and explore the current directions within the Office of the CIO through riveting panel discussions, thought-leading roundtable sessions, and intimate business conversations.
First, Discord announced age verification. As predicted, users revolted, and a former verification partner had already leaked 70,000 government IDs. Then Discord backed down. Now the age-check vendors exposed in the process have to defend technology most people didn't even know existed. Researchers at Georgia Tech reverse-engineered Yoti, the dominant age-check provider used on over 60% of compliant sites in states with age-gate laws. They found that Yoti sends your photo to its servers, collects data "beyond what is strictly necessary," and shares it with fourth parties most users have never heard of. Yoti disputes the findings but confirmed that facial age estimation does not happen on-device. Meanwhile, the EFF notes that on-device processing is "less dangerous" than sending data over a network.
🔐 On-device face scans mean your biometric data stays on your phone, for now
🗝️ "Age keys" built on FIDO passkey tech could let you reuse an age signal across platforms without re-verifying each time
📸 The dominant provider in the US runs a million checks a day and sends your photo to its servers
⚖️ The Supreme Court ruled last summer that online age verification doesn't violate the First Amendment, partly based on Yoti's technical claims 😳
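The "age key" idea in the bullets above can be sketched in a few lines. This is a toy illustration, not Yoti's product or the actual FIDO protocol: real passkey-style attestations would use asymmetric signatures, and the secret, token fields, and function names here are invented for the example. The point is that a platform can verify a signed claim without ever seeing your photo.

```python
import hashlib
import hmac
import json
import time

# Hypothetical provider signing key; real systems would use an
# asymmetric key pair (e.g. Ed25519), not a shared secret.
PROVIDER_SECRET = b"demo-secret"

def issue_age_token(over_18: bool) -> dict:
    """Provider runs the age check once, then signs a minimal claim."""
    claim = json.dumps({"over_18": over_18, "iat": int(time.time())})
    sig = hmac.new(PROVIDER_SECRET, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_age_token(token: dict) -> bool:
    """Any platform with the verification key can reuse the signal."""
    expected = hmac.new(PROVIDER_SECRET, token["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False  # tampered or forged token
    return json.loads(token["claim"])["over_18"]

token = issue_age_token(over_18=True)
print(verify_age_token(token))  # True: the platform learns "over 18", not a face
```

The privacy win, if it works as advertised, is in what the token omits: no photo, no name, just a signed boolean.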
The thing people don’t realize is that once age-check infrastructure is embedded across every major platform, it doesn't go away. Every update is a new attack surface. Every new law expands the mandate. And the CEO of one of these companies is already talking about age-aware cameras and microphones as the logical next step.
Your device should work for you. The moment it starts working for someone else's compliance requirement, that's a different product than the one you thought you had.
https://arstechnica.com/tech-policy/2026/03/after-discord-fiasco-age-check-tech-promises-privacy-by-running-locally-does-it-work/
#Privacy #CyberSecurity #TechPolicy #security #cloud #infosec
🤣 A robot in a California restaurant decided that smashing plates was more fun than delivering food, then pivoted to jazz hands while two staff members tried to wrestle it back under control. Its apron said "I'M GOOD!" 🤖 It's crazy that we're putting hardware with enough power to knock a kid down or injure unaware bystanders into public spaces. We have a product culture that moves too fast and doesn't ask important, yet simple, questions.
The video is funny right up until you picture a five-year-old standing where those plates were.
Nobody got hurt this time. But the reason to think carefully about physical AI deployment isn't the dramatic failure. It's the hundred smaller decisions made before the robot ever left the warehouse that make the failures possible.
https://gizmodo.com/robot-losing-its-mind-in-a-california-restaurant-is-just-as-fed-up-as-everyone-else-2000735088
#AI #Robotics #TechEthics #security #privacy #cloud #infosec #cybersecurity
The European Commission got hit with a cyberattack, again. 350 GB allegedly taken: mail server contents, databases, confidential contracts. Their own cyber chief warned that the EU is "losing massively against hackers." What gets me is the timing. The EU just sanctioned companies from China and Iran over cyberattacks on member states. The message was: we see you, and there are consequences. Then their own infrastructure gets hit and 350 GB walks out the door. 🤦🏻♂️
🗓️ This is the second breach of EU institutions in 2026, just three months in
📦 A hacking group claims to have mail server contents, databases, and confidential documents
🔒 No indication internal Commission systems were compromised, but the investigation is still open
📜 The EU has NIS2, the Cyber Solidarity Act, and a Cybersecurity Regulation on the books
I guess frameworks don't defend systems after all. People, processes, and patched infrastructure do. You can write the most thorough regulation in the world and still get breached through a cloud hosting provider nobody was watching closely enough. Third-party risk is my nightmare.
If you're a CISO or CIO reading this, the question isn't whether your regulatory posture is solid. It's whether your third-party cloud infrastructure would survive the same scrutiny you apply to your internal systems.
https://www.helpnetsecurity.com/2026/03/30/european-commission-cyberattack-cloud-infrastructure-website/
#CyberSecurity #CloudSecurity #InfoSec #security #privacy #cloud
We keep worrying about AI doing something evil. Which it might, but right now the bigger risk is in the plumbing supporting it. Three vulnerabilities in LangChain and LangGraph: path traversal, unsafe deserialization, SQL injection. Not AI-specific attacks. They're neither novel nor sophisticated; these are the kinds of bugs we've been patching since the late '90s. One of them scored a severity of 9.3 out of 10. "The biggest threat to your enterprise AI data might not be as complex as you think." Remember that you're building AI on top of frameworks you didn't write, can't fully audit, and update whenever it's convenient. That's the actual problem.
🔐 Path traversal lets attackers read arbitrary files from the host system, including credentials
🔑 Unsafe deserialization exposes API keys and environment variables at runtime
🗄️ SQL injection in the checkpointing layer leaks conversation history from your AI agents
All three are fixed now. But "fixed" only matters if you've actually applied the patches across every integration. Most organizations haven't.
The lesson isn't about AI security. It's that AI doesn't change what good security engineering looks like. Input validation, parameterized queries, strict path sandboxing. This is stuff your dev team learned before ChatGPT existed.
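Those three defenses fit on one screen. A minimal sketch in Python, with invented names throughout (`safe_read`, `load_checkpoint`, the `checkpoints` table, and the sandbox root are illustrative, not LangChain or LangGraph APIs):

```python
import os
import sqlite3

# Hypothetical directory the agent is allowed to read from.
ALLOWED_ROOT = "/srv/agent-files"

def safe_read(user_path: str) -> bytes:
    # Strict path sandboxing: resolve ".." and symlinks, then refuse
    # anything that lands outside the allowed root (blocks traversal
    # payloads like "../../etc/passwd").
    real = os.path.realpath(os.path.join(ALLOWED_ROOT, user_path))
    if os.path.commonpath([real, ALLOWED_ROOT]) != ALLOWED_ROOT:
        raise PermissionError(f"path escapes sandbox: {user_path}")
    with open(real, "rb") as f:
        return f.read()

def load_checkpoint(conn: sqlite3.Connection, thread_id: str):
    # Parameterized query: the driver treats thread_id strictly as a
    # value, so input like "x' OR '1'='1" cannot rewrite the SQL.
    cur = conn.execute(
        "SELECT state FROM checkpoints WHERE thread_id = ?", (thread_id,)
    )
    return cur.fetchall()
```

None of this is exotic; it's the same input-validation discipline the article says the frameworks skipped.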
If you're deploying AI pipelines and you haven't done a security review of the frameworks underneath them, you're not running an AI strategy. You're running a trust exercise.
https://www.csoonline.com/article/4151814/langchain-path-traversal-bug-adds-to-input-validation-woes-in-ai-pipelines.html
#CyberSecurity #AIRisk #AppSec #security #privacy #cloud #infosec
I teach cybersecurity. And I genuinely don't know what to tell my students after this one. Federal reviewers spent years trying to get basic encryption documentation from Microsoft for its GCC High government cloud. They couldn't get it. One reviewer called the system a "pile of spaghetti pies," with data traveling from point A to point B the way you'd get from Chicago to New York: a bus to St. Louis, a ferry to Pittsburgh, and a flight to Newark. Each leg is a potential hijacking. They knew this. They said this out loud in writing. Then they approved it anyway in December 2024, because too many agencies were already using it. 🔐 That's not a security review. That's a hostage negotiation. Two things in this story should make every CISO and CIO uncomfortable:
🧩 Microsoft built its federal cloud on top of decades of legacy code that it apparently can't fully document itself
👮 "Digital escorts" often ex-military with minimal software engineering backgrounds are the firewall between Chinese engineers working on the system and classified U.S. networks 🤦🏻♂️
The scariest line in the whole ProPublica investigation isn't the "pile of shit" quote. It's this: FedRAMP determined that refusing authorization wasn't feasible because agencies were already using the product. Read that again. The security review process reached a conclusion based on sunk cost, not risk. That's a textbook sunk-cost fallacy.
If that logic holds, the compliance framework is just documentation theater. And right now, CISA is being hollowed out, so there are fewer people left to even run the theater.
https://arstechnica.com/information-technology/2026/03/federal-cyber-experts-called-microsofts-cloud-a-pile-of-shit-approved-it-anyway/
#Cybersecurity #Microsoft #FedRAMP #Leadership #RiskManagement #security #privacy #cloud #infosec
In the recent Meta/YouTube trial, the plaintiff started using YouTube at age 6 and Instagram at age 9. The jury deliberated 43 hours, answered "yes" to every negligence question, and found evidence of malice. Then Meta's stock went up 0.7%. 🤔 That gap tells you everything. 📊
The $6 million award is basically a rounding error for companies pulling in $350 billion in combined annual revenue. What actually matters is the 2,000 pending lawsuits this verdict just handed a roadmap to, and the federal trial coming in Oakland this summer. This is the first domino. The tobacco industry had the same "we're being scapegoated" defense in 1994, and that argument eventually cost them $206 billion.
Here's what I keep thinking about as a guy who teaches about the legal, ethical, and social issues of information technology: the products we build have consequences we're responsible for, whether we want to admit it or not. The jury didn't care that Meta said Kaley's home life was complicated. They cared that the autoplay kept going anyway. 🔁
Two things can both be true: teen mental health is complex, and a notification engine designed to override a kid's ability to stop scrolling is a design choice someone made.
https://www.latimes.com/california/story/2026-03-25/social-media-lawsuit-trial-meta-google-verdict
#ChildSafety #BigTech #Leadership #Accountability #SocialMedia #Ethics #DePaulUniversity #DePaulU @depaulu

The outcome Wednesday in Los Angeles County Superior Court is potentially precedent-setting for thousands of other pending lawsuits nationwide and could reshape how tech companies are held accountable for children's harm caused by their products.