Claude Code's source code appears to have leaked: here's what we know
Perhaps the most discussed technical detail is the “Undercover Mode.” This feature reveals that Anthropic uses Claude Code for “stealth” contributions to public open-source repositories.
The system prompt discovered in the leak explicitly warns the model: “You are operating UNDERCOVER… Your commit messages… MUST NOT contain ANY Anthropic-internal information. Do not blow your cover.”
Laws should have been put in place years ago requiring that AI usage be explicitly declared.
AI usage needs to be explicitly declared.
Pointless. theregister.com/…/linus_versus_llms_ai_slop_docs/
If it were the law, then the AI itself would be coded to disallow going “undercover,” and there would be legal consequences if caught. Torvalds’s stance only matters for how things ‘are’, not how they ‘could be’.
Would it be a cure-all? Of course not. Fraud still happens despite its illegality. But it’s better than not being able to trust anything ever again.
I hate to break it to you, but we’re never going to be able to trust anything ever again. At least, not the way we used to. In the future, without any doubt, we are going to need to develop a different model of learning, using, and processing information that considers the provenance of where the information came from and how it got there, essentially from first principles. We will have to build a web of investigation and trust to determine and mark what information is trustworthy and what is not, especially new information.

None of this exists in any meaningful way yet, and the systems we used to have for it, like academic research and journalism, would have been catastrophically inadequate to handle this onslaught even at their peak. They are nowhere near their peak anymore, having been deliberately eroded into a shadow of their former effectiveness so some assholes could get rich and powerful.

So hopefully we’ll be able to rely on solid ground like Wikipedia and… books as a starting point, and nobody gets around to burning the Library of Alexandria down in their rage against “woke stuff”, because otherwise we’re going to be rebuilding our information spaces pretty much from scratch in the near future, probably at the same time we’re rebuilding civilized society in general.

If this sounds incredibly uncertain, tedious and painful: yes, it will be, especially at first. But we will get better at it, eventually. We will develop new systems for it, we will become fluent in information again, and the friction will fade.
I wish we could get to that stage right away, but unfortunately it will have to wait. We can’t do anything to improve the swimming pool while we are currently drowning in it. This is the reality that rampant and unchecked use of AI technologies by soulless corporations and corrupt governments has wrought. Logic and reason never stood a chance, and we are entering the digital dark ages. The enlightenment is probably coming someday, but don’t hold your breath for it.
Support your local library, that’s the most helpful thing I can think of for individuals to do. Librarians know their shit.
If you can spare the disk space, save a local copy of Wikipedia; save PDFs of your favorite books and textbooks; and grab what you can from Project Gutenberg, the Kiwix library, git mirrors, archive.org, JSTOR, etc.
The more people who have their own copies, the better this stuff has a chance of surviving the dark ages.
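If you do keep a local archive, bit rot is a real risk over the timescales being discussed here. As a minimal sketch (not any particular tool, just the standard library), you could record a SHA-256 checksum manifest when you save your copy and re-verify it later; the function names here are my own invention for illustration:

```python
import hashlib
from pathlib import Path


def build_manifest(root: Path) -> dict[str, str]:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    manifest = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            rel = str(path.relative_to(root))
            manifest[rel] = hashlib.sha256(path.read_bytes()).hexdigest()
    return manifest


def verify(root: Path, manifest: dict[str, str]) -> list[str]:
    """Return the relative paths that are missing or no longer match."""
    bad = []
    for rel, digest in manifest.items():
        p = root / rel
        if not p.is_file() or hashlib.sha256(p.read_bytes()).hexdigest() != digest:
            bad.append(rel)
    return bad
```

Run `build_manifest` once after downloading, store the result alongside the archive (e.g. as JSON), and run `verify` periodically; an empty list means every file still matches.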
Also, if people can figure out a way to send and receive data and remotely access servers over mesh networks, it will help populate the new web with useful information. Keep the light alive, even if it doesn’t reach everybody. Even through the dark ages in history, knowledge was preserved in monasteries.
Lastly, although you probably can’t grab every news article ever written, be sure to save the ones that are especially salient from a few reliable sources. Future historians and digital archaeologists will thank you.