A study found that code-generating #LLMs often produce fictitious package names. Researchers analyzed 16 LLMs and found that 21.7% of package recommendations from open-source models and 5.2% from commercial #AI models were hallucinations, posing serious security #risks☝️👩‍💻
https://www.darkreading.com/application-security/ai-code-tools-widely-hallucinate-packages