There is poetry for package management. Apparently uv is substantially faster at solving package dependencies, although poetry is more feature-rich. (I’ve only used poetry, so I know it is adequate, but there have been times I’ve sat there for minutes or even tens of minutes while it worked through installing all the right versions of all the right libraries.)
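For anyone comparing the two, here is a rough sketch of roughly equivalent commands (assuming both tools are installed and the project uses a standard `pyproject.toml`, which both read):

```shell
# Poetry: resolve dependencies and install into the project environment
poetry install

# uv equivalent: resolve and sync the environment from pyproject.toml
uv sync

# Adding a dependency (both update pyproject.toml and the lockfile)
poetry add requests
uv add requests

# Re-resolving the lockfile from scratch -- this is where the
# resolution-speed difference tends to show up most
poetry lock
uv lock
```

The day-to-day workflows map fairly directly, so the speed difference is mostly visible on `lock`/`sync` with large dependency trees.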
Yeah. When it comes down to it, the libs think the problem with Trump isn’t the fundamentals of what he is doing, it is that he is doing it without decorum or checking all the legal boxes or saying the usual lib pabulum to justify American imperialism. Skipping the legal checks and decorum is also bad, but in fact kids in cages was horrible when Obama was doing it the “right” way.
I wonder if one of the reasons Pete Hegseth is going so hard after Anthropic is that he and other idiots in the Pentagon unironically believe shit like AI 2027 and so want to soft-nationalize the frontier companies to control the coming AGI. Considering that one of the uses the DoD allegedly wants LLMs for is fully autonomous weapons, they at the very least have a very distorted view of what the technology is capable of. Or they want an accountability sink so they can kill people with even less accountability. …probably both.

Did you know that the same week this fight was going public, Anthropic gave up on their “Responsible Scaling Policy”? (Well, technically they changed to a new version of their RSP that was even more empty and toothless.) To be fair, the RSP was basically doomer crit-hype safety theater (“we have a plan for if our AI is so dangerous it is a catastrophic risk”), but if they actually followed it, they would have to stop releasing new models (or else unhype their models’ capabilities), so it was obvious they would abandon the RSP at some point (even many lesswrongers and EAs expected this).

I would bet that the timing of ditching the RSP was a deliberate marketing strategy to mask one ethical backslide behind an ethical stand… except only boosters and doomers even remotely expected the RSP to have any meaning in the first place. Still, comparing the number of lesswrong, EA, and /r/singularity discussions on RSP v3 to the discussions on the fight with the DoD, I think they did succeed in minimizing what little criticism they got.

That was their original pitch against OpenAI.

So yeah. People on places like /r/singularity were starting to get skeptical of Anthropic’s claims about ethics, but after this current saga I see loads of comments glazing them and praising them, so mission success.

I wonder if Hegseth realizes he has basically given Anthropic’s marketing team exactly what they want?

I agree this is an important development in this continued saga, but as I said in the main thread, I really don’t like this article’s framing (to the point I wouldn’t be surprised if the author is MAGA, or at least prone to sanewashing MAGA).

Reposting what I wrote in the other thread:

Anthropic CEO Dario Amodei picked a major fight with the Department of Defense last month, asserting that his company’s AI models couldn’t be used for mass surveillance of Americans or direct autonomous weapons systems.

As to who picked a fight with whom: the DoD wanted to change the terms of their contract, and Anthropic apparently compromised on every term except mass surveillance of Americans (fuck the rest of the world I guess) and fully autonomous weapons (cause a human clicking “yes to confirm” makes slop-bot powered drones so much better). This wasn’t good enough for this authoritarian strongman administration, so Pete Hegseth took the fight public with tweets first. So the article framing this as Anthropic “picking a fight” is bullshit. I mean, they did kind of bring it on themselves hyping up their slop machine like it was a sci-fi AGI, but they didn’t start the fight.

For one, “it’s 100 percent in the government’s prerogative to set the parameters of a contract,” Snell & Winter partner Brett Johnson told Wired, effectively meaning there may be very little chance of an appeal.

So they find a quote about contracts, but a Supply Chain Risk isn’t just the DoD deciding on contracts, it is a specific power that has specific mechanisms set by legislation. If (and it is a big if with the current Supreme Court’s composition) the court actually considers the terms set out in the legislation (including, most problematically for the DoD, a risk assessment and consideration of less intrusive alternatives), I think the DoD loses. Of course, the SC has all too often been willing to simply defer to the executive branch’s judgement, even if the process for the judgement was “Trump or one of his underlings made a choice on a spiteful or idiotic whim, announced it on twitter, and the departments underneath them rushed to retroactively invent a saner rationalization”. If the DoD decided to just end the contract (without all the public threats of SCR or invoking the Defense Production Act) Anthropic wouldn’t be in a position to sue and this drama wouldn’t have been as publicized in the first place.

But the lawsuit itself takes a dramatically different tone.

Yeah, because one set of language is a CEO trying to grovel and backtrack on one of the rare few ethical commitments he has ever made (edit: well, actually Anthropic has made lots of ethical commitments, many of which they’ve already folded on; this is one of the only ones he held against pressure, and one of the only ones the media/public might actually expect him to hold to because the fight was so dramatically public), and the other is making a court case about the actual law.

If the DoD accidentally pops the AI bubble by triggering a cascade when Anthropic runs into issues; then later the DoD loses the court case in a humiliating enough way; then the DoD loses a civil case with the money going to pay the debts owed in Anthropic’s bankruptcy proceedings; and the American public blames all of (without letting one shift the blame to another) the Trump administration, the Republican party, the parts of the Democratic Party that acted as pathetic enablers, and the tech CEOs for the following economic depression… I would count that as a relative win?

The specific article’s framing pisses me off…

Anthropic CEO Dario Amodei picked a major fight with the Department of Defense last month, asserting that his company’s AI models couldn’t be used for mass surveillance of Americans or direct autonomous weapons systems.

As to who picked a fight with whom: the DoD wanted to change the terms of their contract, and Anthropic apparently compromised on every term except mass surveillance of Americans (fuck the rest of the world I guess) and fully autonomous weapons (cause a human clicking “yes to confirm” makes slop-bot powered drones so much better). This wasn’t good enough for this authoritarian strongman administration, so Pete Hegseth took the fight public with tweets first. So the article framing this as Anthropic “picking a fight” is bullshit. I mean, they did kind of bring it on themselves talking up their slop machine like it was a sci-fi AGI, but they didn’t start the fight.

For one, “it’s 100 percent in the government’s prerogative to set the parameters of a contract,” Snell & Winter partner Brett Johnson told Wired, effectively meaning there may be very little chance of an appeal.

So they find a quote about contracts, but a Supply Chain Risk isn’t just the DoD deciding on contracts, it is a specific power that has specific mechanisms set by legislation. If (and it is a big if with the current Supreme Court’s composition) the court actually considers the terms set out in the legislation (including, most problematically for the DoD, a risk assessment and consideration of less intrusive alternatives), I think the DoD loses. Of course, the SC has all too often been willing to simply defer to the executive branch’s judgement, even if the process for the judgement was “Trump or one of his underlings made a choice on a spiteful or idiotic whim, announced it on twitter, and the departments underneath them rushed to retroactively invent a saner rationalization”.

But the lawsuit itself takes a dramatically different tone.

Yeah, because one set of language is a CEO trying to grovel and backtrack on one of the rare few ethical commitments he has ever made, and the other is making a court case about the actual law.

It’s so fucking pathetic: he can’t even hold onto the very narrow and weak stand he took (weak because Anthropic’s two red lines left a lot of things open) without trying to backpedal and grovel.

your mode of analysis is closer to erotic Harry Potter fan fiction

To give Gary Marcus credit here, HPMOR may not be erotic, but many of Eliezer’s other works are (or at least attempt to be), the most notable being Planecrash/Project Lawful, which has entire sections devoted to deliberately bad (as in deliberately not safe, sane, or consensual) BDSM.

Lib brains have a hard time comprehending that there can be multiple bad guys at a time, or that America was in fact a neocolonialist, imperialistic empire even before Trump took over and took off the mask.