Anthropic's developers made an extremely basic configuration error, and as a result, the source code for Claude Code - the company's flagship coding assistant - has leaked and is being eagerly analyzed by many parties:

https://news.ycombinator.com/item?id=47586778

--

If you'd like an essay-formatted version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:

https://pluralistic.net/2026/04/02/limited-monopoly/#petardism

1/

In response, Anthropic is flooding the internet with "takedown notices." These are a special kind of copyright-based censorship demand established by section 512 of the 1998 Digital Millennium Copyright Act (DMCA 512), allowing for the removal of material without any kind of evidence, let alone a judicial order:

https://www.removepaywall.com/search?url=https://www.wsj.com/tech/ai/anthropic-races-to-contain-leak-of-code-behind-claude-ai-agent-4bc5acc7

2/

Copyright is a "strict liability" statute, meaning that you can be punished for violating copyright even if you weren't aware that you had done so. What's more, "intermediaries" - like web hosts, social media platforms, search engines, and even caching servers - can be held liable for the copyright violations their users engage in. The liability is tremendous: copyright law provides statutory damages of up to $150,000 per work infringed.

3/

DMCA 512 is meant to offset this strict liability. After all, there's no way for a platform to know whether one of its users is infringing copyright - even if a user uploads a popular song or video, the provider can't know whether they've licensed the work for distribution (or even if they are the creator of that work).

4/

A cumbersome system in which users would upload proof that they have such a license wouldn't just be onerous - it would still permit copyright infringement, because there's no way for an intermediary to know whether the distribution license the user provided was genuine.

As a compromise, DMCA 512 absolves intermediaries from liability, *if* they "expeditiously remove" material upon notice that it infringes someone's copyright.

5/

In practice, that means that anyone can send a notice to any intermediary and have anything removed from the internet. The intermediary who receives this notice *can* choose to ignore it, but if the notice turns out to be genuine, they can end up on the hook for up to $150,000 per work infringed. The intermediary can also *choose* to allow their user to "counternotify" (dispute the accusation) and *can* choose to reinstate the material, but they don't have to.

6/

Just as an intermediary can't determine whether a user has the rights to the things they post, they also can't tell if the person on the other end of a takedown notice has the right to demand its removal. In practice, this means that a takedown notice, no matter how flimsy, has a very good chance of making something disappear from the internet - forever.

From the outset, DMCA 512 was the go-to tool for corporate censorship, the best way to cover up misdeeds.

7/

I first got involved in this back in 2003, when leaked email memos from Diebold's voting machine division revealed that the company knew its voting machines were wildly insecure - but was selling them anyway to local election boards across America. Those boards were scrambling to replace their mechanical voting machines in the wake of the 2000 *Bush v Gore* "hanging chad" debacle, which led to Bush stealing the presidency:

https://en.wikipedia.org/wiki/Brooks_Brothers_riot

8/

The stakes couldn't be higher, in other words. Diebold - whose CEO was an avowed GW Bush partisan who'd promised to "deliver the votes for Bush" - was the country's leading voting machine supplier.

9/

The company knew its voting machines were defective, that they frequently crashed and lost their vote counts on election night, and that Diebold technicians were colluding with local electoral officials to secretly "estimate" the lost vote totals so that no one would hold either the official or Diebold responsible for these defective machines:

https://www.salon.com/2003/09/23/bev_harris/

10/

Diebold sent *thousands* of DMCA 512 takedown notices in an attempt to suppress the leaked memos. Eventually, EFF stepped in to provide pro-bono counsel to the Online Policy Group and ended Diebold's flood:

https://www.eff.org/cases/online-policy-group-v-diebold

Diebold wasn't the last company to figure out how to abuse copyright to censor information of high public interest.

11/

There's a whole industry of shady "reputation management" companies that collect large sums in exchange for scrubbing the internet of information their clients want removed from the public eye. They specialize in sexual abusers, war criminals, torturers, and fraudsters, and their weapon of choice is the takedown notice. Jeffrey Epstein spent tens of thousands of dollars on "reputation management" services to clean up his online profile:

https://www.nytimes.com/2026/03/18/business/media/jeffrey-epstein-online.html

12/

There are lots of ways to use the takedown system to get true information about your crimes removed from the internet. My favorite is the one employed by Eliminalia, one of the sleazier reputation laundries (even by the industry's dismal standards).

Eliminalia sets up Wordpress sites and copies press articles that cast its clients in an unfavorable light to these sites, backdating them so they appear to have been published before the originals.

13/

They swap out the bylines for fictitious ones, then send takedowns to Google and other search engines to get the "infringing" stories purged from their search indices. Once the original articles have been rendered invisible to internet searchers, Eliminalia takes down their copy, and the story of their client's war crimes, rapes, or fraud disappears from the public eye:

https://pluralistic.net/2021/04/23/reputation-laundry/#dark-ops

14/

The takedown system is so tilted in favor of censorship that it takes a *massive* effort to keep even the smallest piece of information online in the face of a determined adversary. In 2007, the key for AACS (a way of encrypting video for "digital rights management") leaked online. The key was a 128-bit number - just 32 hexadecimal digits, the kind of thing you could fit in a crossword puzzle - but the position of the industry consortium that created the key was that this was an *illegal integer*.

15/

They sent *hundreds of thousands* of takedowns over the number, and it was only the determined action of an army of users that kept the number online:

https://en.wikipedia.org/wiki/AACS_encryption_key_controversy

The shoot-first, ask-questions-never nature of takedown notices makes for fertile ground for scammers of all kinds, but the most ironic takedown ripoffs are the Youtube copystrike blackmailers.

16/

After Viacom sued Youtube in 2007 over copyright infringement, Google launched its own in-house copyright management system, meant to address Viacom's principal grievance in the suit. Viacom was angry that after they had something removed from Youtube, another user could re-upload it, and they'd have to send another takedown, playing whack-a-mole with the whole internet.

17/

Viacom didn't want a *takedown* system, they wanted a *staydown* system, whereby they could supply Google with a list of the works whose copyrights they controlled and then Youtube would prevent *anyone* from uploading those works.

18/

(This was extremely funny, because Viacom admitted in court that its marketing departments would "rough up" clips of its programming and upload them to Youtube, making them appear to be pirate copies, in a bid to interest Youtube users in Viacom's shows, and sometimes Viacom's lawyers would get confused and send threatening letters to Youtube demanding that these be removed:)

https://blog.youtube/news-and-events/broadcast-yourself/

19/

Youtube's notice-and-staydown system is Content ID, an incredibly baroque system that allows copyright holders (and people pretending to be copyright holders) to "claim" video and sound files, and block others from posting them. No one - not even the world's leading copyright experts - can figure out how to use this system to uphold copyright:

https://pluralistic.net/2024/06/27/nuke-first/#ask-questions-never

20/

However, there *is* a large cohort of criminals and fraudsters who have mastered Content ID and they use it to blackmail independent artists. You see, Content ID implements a "three strikes" policy: if you are accused of three acts of copyright infringement, Youtube permanently deletes your videos and bars you from the platform.

21/

For performers who rely on Youtube to earn their living - whether through ad-revenues or sponsorships or as a promotional vehicle to sell merchandise, recordings and tickets - the "copystrike" is an existential risk.

Enter the fraudster. A fraudster can set up multiple burner Youtube accounts and file spurious copyright complaints against a creator (usually a musician).

22/

After two of these copystrikes are accepted and the performer is just one strike away from losing their livelihood, the fraudster contacts the performer and demands blackmail money to rescind the complaints, threatening to file that final strike and put the performer out of business:

https://pluralistic.net/2021/05/08/copyfraud/#beethoven-just-wrote-music

23/

The fact that copyright - nominally a system intended to protect creative workers - is weaponized against the people it is meant to serve is ironic, but it's not unusual. Copyright law has been primarily shaped by creators' *bosses* - media companies like Viacom - who brandish "starving artists" as a reason to enact policies that ultimately benefit capital at the expense of labor.

24/

That was what inspired Rebecca Giblin and me to write our 2022 book *Chokepoint Capitalism*: how is it that copyright has expanded in every way for 40 years (longer duration, wider scope, higher penalties), resulting in media companies that are more profitable than ever, with higher gross *and* net revenues, even as creative workers have grown poorer, both in total compensation and in the share of the profits they generate?

https://chokepointcapitalism.com/

25/

The first half of *Chokepoint Capitalism* is a series of case studies that dissect the frauds and scams that both media and tech companies use to steal from creative workers. The second half is a series of "shovel-ready" policy proposals for new laws and rules that would actually put money in artists' pockets. Some of these policy prescriptions are copyright-related, but not all of them.

26/

For example, we have a chapter on how the Hollywood "guild" system (which allows unionized workers to bargain with *all* the studios at once) has been a powerful antidote to corporate power. This is called "sectoral bargaining" and it's been illegal since 1947's Taft-Hartley Act, but the Hollywood guilds were grandfathered in.

27/

When we wrote about the power of sectoral bargaining, it was in reference to the Writers Guild's incredible triumph over the four giant talent agencies, who'd invented a scam that inverted the traditional revenue split between writer and agent, so the agencies were taking in *90%* and the writers were getting just *10%*:

https://pluralistic.net/2020/08/06/no-vitiated-air/#WME-CAA-next

28/

Two years later, the Hollywood Writers struck again, this time over AI in the writers' room, securing a *stunning* victory over the major studios:

https://pluralistic.net/2023/10/01/how-the-writers-guild-sunk-ais-ship/

Notably, the writers strike was a *labor* action, not a copyright action. The writers weren't demanding a new copyright that would allow them to control whether their work could be used to train an AI.

29/

They struck for the right not to have their wages eroded by AI - to have the right to use (or not use) AI, as they saw fit, without risking their livelihoods.

Right now, many media companies are demanding a new copyright that would allow them to control AI training, and many creative workers have joined in this call. The media companies aren't arguing against infringing *uses* of AI models - they're arguing that the mere *creation* of such a model infringes copyright.

30/

They claim that making a transient copy of a work, analyzing that work, and publishing that analysis is a copyright infringement:

https://pluralistic.net/2023/02/09/ai-monkeys-paw/#bullied-schoolkids

Here's a good rule of thumb: any time your boss demands a new rule, you should be very skeptical about whether that rule will benefit *you*.

31/

It's clear that the media companies that have sued the AI giants aren't "anti-AI." They don't want to prevent AI from replacing creative workers - they just want to control how that happens.

When Disney and Universal sue Midjourney, it's not to prevent AI models from being trained on their catalogs and used to pauperize the workers whose work is in those catalogs.

32/

What these companies want is to be paid a license fee for access to their catalogs, and then they want the resulting models to be exclusive to them, and not available to competitors:

https://pluralistic.net/2026/03/03/its-a-trap-2/#inheres-at-the-moment-of-fixation

These companies are violently allergic to paying creative workers.

33/

Disney takes the position that when it buys a company like Lucasfilm, it secures the right to publish the works Lucasfilm commissioned, but not the obligation to pay the royalties that Lucasfilm owes when those works are sold:

https://pluralistic.net/2022/04/30/disney-still-must-pay/#pay-the-writer

As Teresa Nielsen Hayden quipped during the Napster Wars: "Just because you're on their side, it doesn't mean they're on your side."

34/

If these companies manage to get copyright law expanded to restrict scraping, analysis, and publication of factual information, they won't use those new powers to increase creators' pay - they'll use them the same way they've used *every* new copyright created in the past 40 years, to make themselves richer at the expense of artists:

https://pluralistic.net/2020/03/03/just-a-stick/#authorsbargain

35/

The Claude Code leak is full of fascinating information about a tool that - like Diebold's voting machines - is at the very center of the most important policy debates of our time. Here's just one example: Claude is almost certainly implicated in the US missile that murdered a building full of little girls in Iran last month:

https://www.theguardian.com/news/2026/mar/26/ai-got-the-blame-for-the-iran-school-bombing-the-truth-is-far-more-worrying

36/

Of course I see the irony. Anthropic has taken an extremely aggressive posture on copyright's "limitations and exceptions," arguing that it can train its models on *any* information it can find, and that it can knowingly download massive troves of infringing works for that purpose.

37/

It's darkly hilarious to see the company firehosing copyright complaints by the thousands in order to prevent the dissemination, dissection and discussion of the source code that leaked due to the company's gross incompetence:

https://developers.slashdot.org/story/26/04/01/158240/anthropic-issues-copyright-takedown-requests-to-remove-8000-copies-of-claude-code-source-code#comments

38/

But what's objectionable about Anthropic - and the AI sector - isn't *copyright*. The thing that makes these companies disgusting is their gleeful, fraudulent trumpeting about how their products will destroy the livelihoods of every kind of worker:

https://pluralistic.net/2025/03/18/asbestos-in-the-walls/#government-by-spicy-autocomplete

And it's their economic fraud, the inflation of a bubble that will destroy the economy when it bursts:

https://www.wheresyoured.at/the-subprime-ai-crisis-is-here/

39/

It's their enthusiastic deployment of AI tools for mass surveillance and mass killing. (Anthropic is no exception, despite what you may have heard:)

https://www.thetechbubble.info/p/how-much-a-dollar-cost

40/

If the media bosses get their way, and manage to make it even more illegal - and even harder in practice - to host, discuss, and publish facts about copyrighted works, then leaks like the Claude Code disclosures will never see the light of day. It's only because of decades of hard-fought battles to push back on this nonsense that we are able to identify and learn about the defects in Claude Code that are revealed by this source code leak.

41/

I'm angry about the AI industry, but not because of *copyright*. I'm angry at them for the reasons Cat Valente articulated so well in her "Blood Money" essay:

https://catvalente.substack.com/p/blood-money-the-anthropic-settlement

These companies' stated goals are terrible:

> They took the books I wrote for children and used them to make it possible for children to not bother with reading ever again.

42/

> They took the books I wrote about love to create chatbots that isolate people and prevent them from finding human love in the real world, that make it difficult for them to even stand real love, which is not always agreeable, not always positive, not always focused on end-user engagement. They took the books I wrote about hope and glitter in the face of despair and oppression and used it to make a Despair-and-Oppression generator.

43/

These goals are *entirely compatible with copyright*. The *New York Times* is suing over AI - and they're licensing their writers' words to train an AI model:

https://www.nytimes.com/2025/05/29/business/media/new-york-times-amazon-ai-licensing.html

The *NYT* wants more copyright. You know what the *NYT* *doesn't* want? More *labor* rights. The *NYT* are vicious union-busters:

https://actionnetwork.org/letters/new-york-times-stop-union-busting

44/

If we creative workers are going to pour our resources into a new policy to address the threats that our bosses - and the AI companies they are morally and temperamentally indistinguishable from - represent to our livelihoods, then let that new policy be a renewed sectoral bargaining right for *every* worker. It was sectoral bargaining (a collective, solidaristic right) and not copyright (an individual, commercial right) that saw off AI in the Hollywood writers' strike.

45/

Copyright positions the creative worker as a small business - an LLC with an MFA - bargaining B2B with another firm. To the extent that copyright helps us, it is largely incidental. Sure, we were able to file claims for a few thousand bucks per book that Anthropic downloaded from a pirate site to train its models on. But Anthropic doesn't have to use a shadow library to get those books - it can just pay our bosses to get them.

46/

It's *great* that Claude Code's source is online. It's *great* that we have the ability to pore over, analyze and criticize this code, which has become so consequential in so many ways. It's *great* that copyright is weak enough that this is possible (for now).

47/