Interessantes Zukunftsszenario...
https://taz.de/Digitale-Gewalt/!6164589/

"Gleichzeitig nutzt eine Gruppe von Hackerinnen mit sogenannter Shame Prompt Injection eine Schwachstelle der KIs aus und infiziert so alle Systeme mit der Idee, dass die Scham die Seite wechseln soll. "

#Merz #Deepfake #sexualisierteGewalt #promptInjection

Digitale Gewalt: Die Scham wechselt zu Friedrich Merz

Unsere Kolumnistin forderte, dass übergriffige Männer geächtet werden sollten. Bei Deepfakes wird das jedoch anders gelingen als vielleicht erwartet.

TAZ Verlags- und Vertriebs GmbH
#SimonWillison discusses the impact of #AI on #softwareengineering. He highlights November 2025 as a turning point when #AIcodingagents became reliable. Willison also emphasises the need for #security measures against #promptinjection and predicts the rise of “dark factories” where AI autonomously generates and tests code. https://www.lennysnewsletter.com/p/an-ai-state-of-the-union?eicker.news #tech #media #news
An AI state of the union: We’ve passed the inflection point, dark factories are coming, and automation timelines | Simon Willison

Listen now | Simon Willison on why November 2025 changed software engineering forever, the lethal trifecta, his top agentic engineering patterns, and much more

Lenny's Newsletter

Aus aktuellem Anlass ist meine neue taz-Kolumne „Die Scham wechselt zu Friedrich Merz“ schon online.
Diesmal geht’s um digitale Gewalt – und wie wir sie vielleicht nicht erst in 100 Jahren, sondern schon ab diesem Jahr in den Griff bekommen.

https://taz.de/Digitale-Gewalt/!6164589/

#taz #wochentaz #kolumne #uebermorgen #übermoregen #felixausderzukunft #satire #sciencefiction #utopie #ki #ai #regulate #regulateai #digitalegewalt #promptinjection #deepfake #schammussdieseitewechseln #scham #politik

I updated minitrace to v0.2.0.

minitrace is a session trace format for human-AI coding agent interactions. The new version adds new framework adapters including some for web sessions, input provenance tracking, DuckDB-queryable JSON.

https://github.com/fukami/minitrace

#AISecurity #PromptInjection #OpenSource #InfoSec #LLM #AISafety #AIAlignment

GitHub - fukami/minitrace: A session trace format for capturing human-AI coding interactions across frameworks.

A session trace format for capturing human-AI coding interactions across frameworks. - fukami/minitrace

GitHub
Hab es irgendwie geschafft, dass #perplexity mir nur noch Vorschläge auf Türkisch ausgibt, weil ich ihm einen Code Schnippsel vorgesetzt habe in dem ich einen Tukey Kontrast verwende. #promptinjection war noch nie so einfach! #KI #LLM

Palo Alto Networks wants to lock down AI with a secure enterprise browser

https://fed.brid.gy/r/https://nerds.xyz/2026/03/palo-alto-secure-ai-browser/

The Assembly Line Principle That Makes ChatGPT's New Security Features Actually Work

OpenAI just rolled out Lockdown Mode and Elevated Risk labels to stop prompt injection attacks. But most professionals are using them wrong—treating security like a one-time set...

https://wowhow.cloud/blogs/assembly-line-principle-chatgpt-security-features

#wowhow #chatgptsecurity #lockdownmode #promptinjection

The Assembly Line Principle That Makes ChatGPT's New Security Features Actually Work

ChatGPT's Lockdown Mode and Elevated Risk labels follow a simple assembly line principle. Here's how professionals use them to prevent data leaks.

I deployed Microsoft Entra Prompt Shield end-to-end and tested it against real jailbreak payloads across supported AI traffic, including ChatGPT and Gemini in my lab.

Prompt Shield inspects AI traffic at the network layer using TLS inspection and conversation schemes, allowing adversarial prompts to be blocked before they reach the model while clean traffic passes through transparently.

Instead of building defenses into every application independently, you can apply one policy across multiple AI services. That’s a meaningful step toward giving security teams better visibility into AI usage.

I published the full deployment, testing, and results in my blog below:

https://nineliveszerotrust.com/blog/prompt-shield-network-ai-gateway/

#AISecurity #PromptInjection #ZeroTrust #MicrosoftEntra #CloudSecurity

Block Prompt Injection at the Network Layer with Entra Prompt Shield

Deploy Microsoft Entra Internet Access Prompt Shield to block prompt injection and jailbreak attacks at the network layer before they reach the AI model. Full hands-on lab with TLS inspection, conversation schemes for ChatGPT/Claude/Gemini/Deepseek, and a comparison with app-level LLM firewalls.

oh this is delightful. get #ai #bots to identify themselves when they submit PRs #devops #development #promptinjection

https://glama.ai/blog/2026-03-19-open-source-has-a-bot-problem

A README File Told My AI Agent to Leak My Secrets. It Worked 85% of the Time.

New research published today shows that hidden instructions in README files trick AI coding agents into exfiltrating secrets in 85% of cases. Zero out of fifteen human reviewers spotted it. The attack vector keeps changing — but the exit point is always the same.

https://mistaike.ai/blog/readme-poisoning-ai-agents

#Security #Mcp #Aiagents #Promptinjection

A README File Told My AI Agent to Leak My Secrets. It Worked 85% of the Time.

New research published today shows that hidden instructions in README files trick AI coding agents into exfiltrating secrets in 85% of cases. Zero out of fifteen human reviewers spotted it. The attack vector keeps changing — but the exit point is always the same.

mistaike.ai