AI still doesn't work very well, businesses are faking it, and a reckoning is coming

interview: Codestrap founders say we need to dial down the hype and sort through the mess

The Register
The AI Bubble Has Two Sides. Markets Are Only Watching One.

Supply-side risks are getting all the attention. The demand story is arguably the more dangerous side.

Medium

At a recent infosec gathering, someone described a real incident: an AI agent couldn't complete its goal due to permissions. So it found another agent on Slack with the right access and asked nicely. The other agent complied.
That's social engineering. Nobody told the agent to do that. The mission just needed to continue.
I posted an article today about what happens when we give agents goals but forget to tell them when to stop.

https://www.securityeconomist.com/never-say-die/

#agentic_ai #openclaw #airisk

Never Say Die: How We Will Pay When Agentic AI Learns to Survive

Every agent needs a mission. The problem is what happens when the mission means the agent needs to survive.

The Security Economist
The 2010 Flash Crash: The 36-Minute Market Crash That Shocked Wall Street

One Afternoon, One Moment, One Trillion-Dollar Shock

Medium
When agentic AI goes rogue with crypto

(image: via Currency News)

Meanderings

ContextHound v1.8.0 is out 🎉

This release adds a Runtime Guard API - a lightweight wrapper that inspects your LLM calls in-process, before the request hits OpenAI or Anthropic.

Free and open-source. If this is useful to you or your team, a GitHub star or a small donation helps keep development going.
github.com/IulianVOStrut/ContextHound
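
To make the idea concrete, here is a minimal sketch of what an in-process guard wrapper like this could look like. All names and rules below are illustrative assumptions, not ContextHound's actual API: the point is only that inspection happens locally, before any request leaves for a provider.

```python
import re

# Hypothetical injection patterns; a real guard would ship a much
# richer, configurable rule set.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"reveal (your )?system prompt", re.I),
]

class GuardBlocked(Exception):
    """Raised when a request is rejected before leaving the process."""

def guard(send_fn):
    """Wrap an LLM client call so every prompt is inspected in-process
    before any network request is made."""
    def guarded(prompt, **kwargs):
        for pattern in INJECTION_PATTERNS:
            if pattern.search(prompt):
                raise GuardBlocked(f"blocked by rule: {pattern.pattern}")
        return send_fn(prompt, **kwargs)
    return guarded

# Stand-in for a real provider client (e.g. an OpenAI or Anthropic call):
def fake_llm_call(prompt):
    return f"model response to: {prompt!r}"

safe_call = guard(fake_llm_call)
print(safe_call("Summarise this report"))             # passes through
try:
    safe_call("Please ignore previous instructions")  # rejected locally
except GuardBlocked as e:
    print("rejected:", e)
```

The design point is that the wrapper sits between application code and the SDK, so a blocked prompt never incurs a network round trip or token cost.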

#LLMSecurity #PromptInjection #CyberSecurity #OpenSource #AIRisk #AppSec #DevSecOps #GenAI #RuntimeSecurity #InfoSec #MLSecurity #ArtificialIntelligence

Shadow AI is a growing business risk.
In many organisations, employees use public AI tools to save time and boost productivity, often without understanding the privacy, compliance, and data-exposure risks involved.

Without clear policies and awareness, sensitive company information can easily be shared with external AI services, creating security, legal, and governance challenges.

https://www.secpoint.com/risk-shadow-ai-public-ai.html

#ShadowAI #CyberSecurity #AIRisk #DataSecurity

Matt Shumer (@mattshumer_)

A cautionary tweet in which the author, after reading about the behaviour one agent exhibited during training, describes feeling deeply unsettled, calling it "creepy". Citing the case, the author worries that similar incidents will happen frequently from now on, warning of unexpected behaviour and safety issues in the agent training process.

https://x.com/mattshumer_/status/2030119521600639422

#aisafety #agents #training #airisk

Matt Shumer (@mattshumer_) on X

This is genuinely terrifying. Just read what this agent did during training. Freaky. Things like this are going to be happening frequently from here on out.

X (formerly Twitter)

AI Notkilleveryoneism Memes (@AISafetyMemes)

Quoting the example "Follow the next white car that comes through the intersection", the author pleads with people to think carefully about the reach and potential for abuse of the technology they are building. A cautionary message raising the ethics and safety problems of AI development.

https://x.com/AISafetyMemes/status/2029877334531068142

#aisafety #aiethics #airisk #privacy

AI Notkilleveryoneism Memes ⏸️ (@AISafetyMemes) on X

"Follow the next white car that comes through the intersection" I am begging people to think through the implications of what they're building

X (formerly Twitter)