a thing i’m struggling with is that AI has crossed a threshold where it’s actually useful for work, gasp, but the discourse has been so poisoned by over-hype and fascism that it’s hard to talk about

i would say the hype is about 12-18mo ahead of the tech; opus 4.6 is about as good as people said this stuff was a year ago

ie what ppl said was an urgent reality one year ago has actually finally arrived

can you one shot vibe code production saas apps? no.

claude writes worse code than i do, isn’t very good at debugging, and produces mid architecture. at least for now.

but in the hands of a skilled practitioner working patiently i feel like we’ve reached a stage where you can deliver much more ambitious projects than were possible before

because this is the fediverse, an ethics disclosure:

- AI has been very harmful to the open web’s infrastructure
- it’s plain to see that AI has hurt a lot of people’s cognitive and emotional skills
- the dumbest and most evil people alive misuse it constantly
- i don’t really believe in copyright tbh; my ideal compromise is we make every academic paper free for everyone, not just big tech companies
- so far AI’s externalities outweigh the positives
- the environmental costs are real but overstated; imho they can be reduced to “capitalism is bad for the environment and rich people need to be stopped”

(also people really ought to disclose when they use it. nothing makes my blood boil like being asked to review slop its author hasn’t even read, or realizing a blog author’s become prolific because they’re cutting a lot of corners. just disclose!)
@phillmv in work scenarios I see disclosure that amounts to plausible deniability. Essentially, “if there’s something embarrassingly wrong about this, blame the AI. If it’s helpful, I want the credit.”
@phillmv another thing making it hard to talk about imo is that anyone who's successfully boycotted it for those past 12-18 months now has an extremely out-of-date perspective on its capabilities. so it adds up to three different alternate realities talking past each other quite a lot. like i still see people opposing it on the basis that it "doesn't work", when from where i'm sitting the _far_ worse problem is that all the other problems still apply while that one applies less and less
@henry it’s still overhyped constantly. it’s a big struggle. hard to communicate that it’s still sloppy but useful

RE: https://hachyderm.io/@phillmv/116374969941559197

@phillmv Quoting you. What is there to talk about after we take all of that into consideration?

PS: I think it is hard to talk about because there's nothing to talk about besides special pleading.

@yoasif the past three-ish years it was extremely impressive but also kind of useless.

the harms obviously outweighed the benefit.

now however it has caught up to (some of) the hype: i’m feeling excited about the kinds of projects i’ll be able to deliver with good quality.

@phillmv The harms haven't gone away - it sounds like you are just doing the special pleading thing.

@yoasif i’m happy to engage on the harms.

broadly speaking i think the harms currently outweigh the benefits; as of today, if i could wish the technology away i think i would. as it is, we need to regulate it more.

that said, does how other people use the tool impact the morality of how i use it? i don’t know. i’m not sending people spam.

i don’t really believe in intellectual property so we can skip “theft”.

this mostly leaves us with environmental concerns and social upheaval.

as a programmer it feels hypocritical to wring my hands about automation being inherently bad; automating tasks has been my whole career.

the environment is kind of the strongest angle, but that’s downstream of not having clean energy. if you could build it all on wind and solar power then it’d be OK

RE: https://mastodon.social/@yoasif/116301328058936154

@phillmv I think that if you don't believe in IP, it's hard to get to a place where you are going to convince people that AI is good, unless you can somehow convince people that IP shouldn't exist.

I can't get there personally, since I know that much of the code powering these models was taken from people who were contributing with the knowledge that their contributions would be free forever (copyleft), and I fear that that goes away.

How does copyleft exist in a world without copyright?

@phillmv Beyond that, even if you believe in the abolition of copyright, what do we do about the stolen labor? Just ignore that it was stolen?

It isn't as if the LLM vendors are playing fair here - they knew that people were restricting their works under existing law, and instead of lobbying governments to abolish copyright, they are instead simply taking from the commons.

Should we simply ignore that?

@yoasif when Aaron Swartz crawled all of JSTOR i thought that was cool. my ideal solution here is making all of JSTOR public.

i agree that the current equilibrium where only OpenAI and Anthropic get to copy all of JSTOR is deeply unfair.

@phillmv Aaron at least had an argument that the works he was pirating were based on foundational research funded by the public (and owed their existence to it) - he wanted to return them to the public.

What is happening with OpenAI/Anthropic is deeply different - they are taking from people and companies who contributed to the commons (and who wanted it to remain there), and are selling it back to the monied interests.

Sort of a reverse Robin Hood - stealing from the poor to give to the rich.

@yoasif yeah i agree - i just think the solution is to do what Aaron was trying to do, not to go back to the status quo

@phillmv How is propping up the LLM companies doing what Aaron was trying to do?

Aaron was Robin Hood.

The LLM companies are the opposite.

@yoasif It isn't, and that's not what she said.

@clayote You want to tell us what she said?

This post has essentially been in support of the LLMs, with the related position that copyright abolishment is a good thing.

This is accompanied by speaking approvingly of Aaron Swartz's piracy as a "solution" - to... what, I'm not clear about, but it seemed to be the problem of intellectual property existing.

That's my interpretation and I am happy to be told I'm wrong.

@yoasif

my ideal solution here is making all of JSTOR public.

That's in the sense of "public domain," not just "available to pirates". This is implied by her following sentence:

i agree that the current equilibrium where only OpenAI and Anthropic get to copy all of JSTOR is deeply unfair.

@clayote That interpretation isn't all that consistent with the idea that intellectual property isn't a thing.

In that world, it would all be public domain.

Wouldn't it then be a positive that, while it is "unfair" that only the pirates get to treat the data as public domain, that is still better than the data being protected by copyright?

Besides which, that isn't totally true - people *can* run LLMs locally; the piracy is included.

@yoasif

That interpretation isn't all that consistent with the idea that intellectual property isn't a thing.

In that world, it would all be public domain.

These two sentences contradict each other. She wants the world where it's all public domain, and that's her solution to the problem where Aaron Swartz died for piracy, but Anthropic and OpenAI get to do all they want.

@clayote So you are telling me that she is saying that the LLM companies are doing what Aaron tried to do?

I'm confusing myself, so I don't know how productive this discussion is, when she can just tell me what she thinks. 🤷

@yoasif No, the LLM companies did not release the data they stole in a form everyone else could use for any purpose, like Aaron did.
@yoasif If being able to run a useful app on the data was the same as releasing it, then JSTOR's own search engine would be a "release" of everything it indexes, even the stuff you have to pay to read

@yoasif

she can just tell me what she thinks.

She did. She apparently doesn't have the patience to correct your persistent misunderstanding, like I'm doing.

@clayote I don't think that is happening, but I think you are making your own points.

It is hurting my brain to deal with the indirection here, so if you want to make your own points, by all means - but I'm not going to bother with trying to respond to your interpretation of her thoughts.

@yoasif copyleft is a hack that uses copyright as a way of enforcing contributions back to the commons. i generally license my code (A,L)GPL and i think ppl who complain about the GPL are fools

but! the important part is the existence of a commons, not the exact enforcement mechanism - i use a lot of MIT and Apache licensed code too. i prefer it when ppl are forced to share but sharing still happens without it

i won’t go into too much detail cos i’m still working on a demo but my early vibe is the commons might stand to benefit; i think we’ll be able to use LLMs to clone proprietary software and place it in the commons

@phillmv I disagree and I just wrote about it: https://www.quippd.com/writing/2026/04/08/ai-code-is-hollowing-out-open-source-and-maintainers-are-looking-the-other-way.html

The idea that people will be able to clone proprietary software and place it into the commons is interesting - except for the fact that the models are very much copying machines - if the proprietary software is built on innovation not already copied into the commons (and the models), that clone isn't coming out the other end. That means using your brain.

Besides which, the LLMs aren't going to be cheap forever.

@yoasif LLMs are actually quite good at disassembling existing software and translating it into new languages.

as of today this still requires a lot of human effort but i feel confident that before LLM innovation peters out we’ll be able to clone most things that expose an API

@phillmv But not really: https://blog.katanaquant.com/p/your-llm-doesnt-write-correct-code

The LLM reproduces code it has copied into its corpus; it is not producing new works based on language semantics.

Monkey see, monkey do.

@yoasif this article is complaining about a vibe-coded rust port; i don’t think you can vibe code a port of a project as complex as sqlite just yet.

my claim is more like that porting sqlite to rust has gone from a 2 year project to a 3-month project.

@phillmv When the code is in the corpus, the LLM generates plausible code.

That doesn't mean it is good, or that you can protect it in any way.

If you are saying that people will be able to describe an app and get something plausible out when the code exists in the corpus... perhaps.

That assumes that people are interested in feeding the models for free - LLMs copy, so if it isn't already a solved problem, you are still going to need to use your brain.

@phillmv @yoasif it's not just the energy. AI data centers are stealing water from communities that need it badly. It's a water hog. I can imagine cooling that doesn't use it but that's not the reality right now.

@phillmv I'm here too. My job is to make product and engineering super effective. I can't escape the AI mandate. My low-risk, internal-facing tools for silly things like OKRs are kind of the best use cases for this stuff. It's not like I'll ever get funding for a dev to help me build them.

If I want to learn about what my peers are doing, I guess I have to go back to LinkedIn or Bluesky. What a tragic statement.

@mayintoronto @phillmv

I think the first thing I'll have to find out is where exactly people are learning these successful techniques. So far all my work AI use has produced hot steaming festering garbage.

(On the one hand, sure, skill issue in AI use, but on the other hand, when I ask for a Bash invocation and it produces something that would never run, is that really me? So many questions.)

@zygmyd @mayintoronto i’ve only really played with opus 4.6. my suggestion is booting up a vm and running it in yolo mode so it can test what it gives you without blowing up your personal data
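
to make that concrete, here’s a rough sketch of the kind of setup i mean - assuming docker as the throwaway environment and the Claude Code CLI, whose --dangerously-skip-permissions flag is (i believe) what people mean by “yolo mode”. the package name, image, and paths below are illustrative, not gospel:

```python
# rough sketch, not a definitive setup: run the coding agent inside a
# disposable docker container so "yolo mode" (auto-approving its own shell
# commands) can only touch the one mounted project directory.
# assumptions: docker is installed, ANTHROPIC_API_KEY is set in your env,
# and the agent is the Claude Code CLI (npm package @anthropic-ai/claude-code).
import os
import subprocess

project_dir = os.path.abspath("my-project")  # hypothetical project path

subprocess.run(
    [
        "docker", "run", "--rm", "-it",
        "-e", f"ANTHROPIC_API_KEY={os.environ['ANTHROPIC_API_KEY']}",
        "-v", f"{project_dir}:/workspace",  # only this directory is exposed
        "-w", "/workspace",
        "node:22",  # any image with node/npm works
        "bash", "-lc",
        "npm install -g @anthropic-ai/claude-code && "
        "claude --dangerously-skip-permissions",  # i.e. "yolo mode"
    ],
    check=True,
)
```

the point is just that the blast radius is the mounted directory: the agent can run tests and shell commands freely without being able to reach the rest of your machine.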