Microsoft has pushed back against claims that multiple prompt injection and sandbox-related issues in its Copilot AI assistant, raised by a security engineer, constitute security vulnerabilities. The dispute highlights a growing divide between how vendors and researchers define risk in generative AI systems.

https://www.bleepingcomputer.com/news/security/are-copilot-prompt-injection-flaws-vulnerabilities-or-ai-limits/

Are Copilot prompt injection flaws vulnerabilities or AI limits?


BleepingComputer
@BleepingComputer
Seems simple enough then. Don't bother reporting them to MS, as they "aren't vulnerabilities" just publish them as 0days.
I am betting MS will reconsider this stance after a few public humiliations...