RE: https://infosec.exchange/@SecurityWriter/116125021350696613

#Anthropic trained #Claude on pirate libraries (among others, #LibGen) containing hundreds of millions of copyrighted works (https://authorsguild.org/news/anthropic-ai-class-action-important-information-for-authors/), and now complains that other AI companies use Claude to train their own models through 'distillation attacks'.
Anthropic, currently battling over the use of its $200 million Pentagon deal, cites alleged 'US security concerns': https://www.anthropic.com/news/detecting-and-preventing-distillation-attacks