"... AI code assistants invent package names. In a recent study, researchers found that about 5.2 percent of package suggestions from commercial models didn't exist, compared to 21.7 percent from open source or openly available models.

Running that code should result in an error when importing a non-existent package. But miscreants have realized that they can hijack the hallucination for their own benefit."

#ThomasClaburn, 2025

https://www.theregister.com/2025/04/12/ai_code_suggestions_sabotage_supply_chain/
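As an aside: the existence check the quote alludes to is easy to automate. Here is a minimal defensive sketch (my own illustration, not tooling from the article or the study) that asks PyPI's public JSON API whether a suggested name is registered at all; a 404 means the name is unclaimed, which is exactly the gap slopsquatters race to fill.

```python
# Minimal sketch, standard library only: check whether an AI-suggested
# dependency is registered on PyPI before installing it. A 404 from the
# JSON API means the name is free for anyone to claim.
import sys
import urllib.error
import urllib.request

def exists_on_pypi(package: str) -> bool:
    """Return True if `package` has a project page on PyPI."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # name unregistered: treat any suggestion of it as suspect
        raise  # other HTTP errors are inconclusive, surface them

if __name__ == "__main__":
    for name in sys.argv[1:]:
        verdict = "found" if exists_on_pypi(name) else "NOT on PyPI, do not install"
        print(f"{name}: {verdict}")
```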

(1/2)

#AI #MOLE #AICoding

@worik

LLMs can't stop making up software dependencies and sabotaging everything: Hallucinated package names fuel 'slopsquatting' (The Register)

"What a world we live in: AI hallucinated packages are validated and rubber-stamped by another AI that is too eager to be helpful."
(Feross Aboukhadijeh)

Thomas Claburn in The Register on the horrors of slopsquatting, where genAI coding tools hallucinate package names and bad actors then publish their own malicious packages under those names, which other genAI systems go on to recommend. The mind boggles.

https://www.theregister.com/2025/04/12/ai_code_suggestions_sabotage_supply_chain/
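Existence alone proves little once a squatter has registered the name, so a second illustrative heuristic (again mine, not from the article) leans on the upload timestamps the same PyPI endpoint exposes: a package whose first file landed only days ago deserves suspicion before it goes anywhere near an install command.

```python
# Illustrative heuristic, not from the article: flag packages whose first
# upload to PyPI is suspiciously recent, a common sign of a freshly
# squatted name. Uses the same public JSON endpoint as the sketch above.
# Assumes Python 3.10+ for the `datetime | None` annotation.
import json
import urllib.request
from datetime import datetime, timezone

def first_upload(package: str) -> datetime | None:
    """Return the timestamp of the oldest file ever uploaded for `package`."""
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    times = [
        # `upload_time_iso_8601` ends in "Z"; older Pythons need "+00:00".
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    return min(times) if times else None

def looks_freshly_squatted(package: str, max_age_days: int = 30) -> bool:
    """Heuristic: True if the package has no files or is younger than the cutoff."""
    oldest = first_upload(package)
    if oldest is None:
        return True  # a registered name with zero files is itself suspicious
    return (datetime.now(timezone.utc) - oldest).days < max_age_days
```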

#noAI #ThomasClaburn


ChatGPT's odds of getting code questions correct are worse than a coin flip

https://www.theregister.com/2023/08/07/chatgpt_stack_overflow_ai/

"'Our analysis shows that 52 percent of ChatGPT answers are incorrect and 77 percent are verbose,' the team's paper concluded. 'Nonetheless, ChatGPT answers are still preferred 39.34 percent of the time due to their comprehensiveness and well-articulated language style.' Among the set of preferred ChatGPT answers, 77 percent were wrong." -- #ThomasClaburn

#api360 #chatGPT #programming

ChatGPT's odds of getting code questions correct are worse than a coin flip: but its suggestions are so annoyingly plausible (The Register)