
Curious if vibecoding / AI agent assisted dev is currently common in ATmosphere dev? The feelings about AI generated code are def different on here than on fedi, so... I see a CLAUDE.md in bluesky-social/social-app https://github.com/bluesky-social/social-app/blob/main/CLAUDE.md
Example: https://bsky.app/profile/why.bsky.team/post/3meomclcfss2w
> Until December of last year I was using LLMs as fancy autocomplete for coding. It was nice for scaffolding out boilerplate, or giving me a gut check on some things, or banging out some boring routine stuff.
>
> In the past two months Claude has written about 99% of my code. Things are changing. Fast

I have this suspicion that the ATproto stack, at least the stuff from Bluesky, is heading towards "majority-vibecoded" but that's mostly just from seeing a lot of posts from the Bluesky eng team rather than me having spent much time in the codebase
Why is def hugely responsible for much of Bluesky/ATProto's design and if *he's* mostly letting Claude write 99% of his code, the rest of the eng team is likely to be heading in that direction too?
Also https://bsky.app/profile/pfrazee.com/post/3meogr22l3k2d
> A year ago, I thought LLMs were kind of neat but not that useful. I saw the code autocomplete and thought, meh.
>
>Last summer just flipped. I never ever thought I would see automated code generation like we see now.
>
> I know there’s baggage but you need to know the coders are being real about this

Welp, there we go https://bsky.app/profile/why.bsky.team/post/3mgaqaaisfs2e
> Oh interesting, people who don’t know how to build software are getting mad at my post about building software. Cute.
>
> Let me be clear, over the next year, the job of software engineer will shift dramatically to no longer have typing syntax into an editor as its primary time sink.

@cwebber before* any judgement on whether it is a good thing or not, it was expected, tbh. it is very much on brand for their team.
they always had the "tech enthusiast" ethos
*just before.
@cwebber I'm hanging out there a lot and yes there is a lot of vibecoding. However, they don't seem to vibecode more than the average paid software dev.
In 2024, I'd say about 20% of my friends vibecoded. Today the number looks more like 90%. This is not specific to atproto, my understanding is that most people vibecode nowadays.
@res260 Sadly a likely observation :\
So many people just giving up on their craft.
@res260 @erincandescent @cwebber if you care about the final product, surely you should care about how it’s made?
I see so many apologists for LLM usage recently trying to distinguish between the outcome and the process, as if the quality of the outcome isn’t defined by the process.
@erincandescent @res260 @cwebber @airtower
'modulo institutional knowledge' is doing a lot of heavy lifting there since that's half the problem with LLM usage
and the other half of the problem is the assumption that an LLM will produce identical code
so I don't think there's a useful discussion to be had if those are your assumptions
@benjamineskola @res260 @cwebber @airtower Look, I don’t think we’re talking about (original definition) vibe coding here, where nobody is looking at the output. We’re talking about cases where there’s a human in the loop.
If the tool is generating garbage code and the human is accepting it, that’s a human problem more than a tool problem.
I start from the assumption that the human is competent and has taste. I assume they are not just letting the tool run wild on the codebase and make a mess.
There are issues and questions around institutional knowledge (if the human isn’t exploring the codebase in the same way, how much are they learning? how much do you pick up through review vs implementation?) but even then I’d argue that one of the primary criteria for maintainability is how hard it is for a newcomer to pick something up and work on it.
@erincandescent @res260 @cwebber @airtower Except there is a huge problem with people actually just not looking at the code being generated. The wave of slop PRs inundating many open-source projects recently, for example.
People keep saying 'of course there is a human in the loop' but it seems increasingly clear to me that nobody is actually bothering to be the human in the loop themselves.
(Edit: but also, even when people are well-intentioned, I think the LLM-based process just makes it much harder to ensure quality than actually writing the code oneself.)
And yes, this is a human problem, it's all a human problem. But that's like saying 'guns don't kill people, people do'. True, but, the tool clearly exacerbates the problem.
As for your final paragraph I don't remotely see why you think LLMs solve this problem either.
@benjamineskola @res260 @cwebber @airtower
> Except there is a huge problem with people actually just not looking at the code being generated. The wave of slop PRs inundating many open-source projects recently, for example.
>
> People keep saying ‘of course there is a human in the loop’ but it seems increasingly clear to me that nobody is actually bothering to be the human in the loop themselves.

I know these are problems, but you’re moving the topic of conversation. There have always been bad developers with bad practices shoveling crappy code over the fence. LLMs have made this easier and it sucks, but it’s not new.

> And yes, this is a human problem, it’s all a human problem. But that’s like saying ‘guns don’t kill people, people do’. True, but, the tool clearly exacerbates the problem.

Sure, but lazy or careless people using tools to produce bad results is not a unique problem. It’s very easy to make messy holes with a power drill, but we aren’t forcing everyone to use hand drills.
Saying that using these tools necessarily results in bad output is just not backed up by available evidence.
I don’t pretend they’re perfect and I don’t pretend there aren’t problems. What I sense is that they’re not going away and are going to become and remain routine parts of toolboxes long into the future.
@erincandescent @res260 @cwebber @airtower
> LLMs have made this easier and it sucks but it’s not new.
So why would we want to make it worse?
> Saying using these tools results in necessarily bad output is just not backed up by available evidence.
Every output I've seen from these things has been, at best, no better than a human would have done. And that's being generous.
> What I sense is that they’re not going away and are going to become and remain routine parts of toolboxes long into the future.
This is a self-fulfilling prophecy. Of course they won't go away if people insist on defending them.
@erincandescent @res260 @cwebber @airtower Given that every output of LLMs that I've seen that is identifiable as such has been mediocre at best, why would I assume without any evidence that there's a significant quantity of LLM-generated code that's actually good?
"There's no evidence of it but it's definitely there" is unpersuasive.
And I've also found that people's evaluations of LLM-generated code quality are wildly out of step with my own, so I would not automatically assume that code is actually good just because someone says it is.
And then, even if the code was of acceptable quality, the negative effects on the process (increased difficulty of reviewing ↔ decreased institutional knowledge, among other things) count against it too.
(And all of this is setting aside the ethical issues, which in practice I don't think we should do anyway. Like, even if LLMs produced good output they'd be ethically indefensible, and even if they were ethically acceptable the results are so poor that why would you bother with them?)
@sitcom_nemesis @res260 @cwebber I think there’s a spectrum
There’s code we keep repeating in broadly the exact same structure, just with different details filled in. That’s boilerplate.
There’s code that’s unique and creative and requires thought. That’s “the meat of the problem”.
But there’s lots of stuff in the middle where it’s not quite creative, doesn’t really require thought, but either because of domain requirements, accidents of history, or just because you’re gluing two libraries together that hadn’t ever seen each other, is too irregular to really code generate but is not actually interesting.
@cwebber Earnest and also deeply befuddled question:
Do you happen to know if this "Why" is the same "Why" who wrote "Why's? (Poignant) Guide to Ruby"?
(Seems very unlikely to be the same person—but I'm so out of the loop that this name confusion also feels like when people are excitedly talking about the Swedish metal band "Ghost" but I mistakenly think they're excitedly talking about the Japanese experimental psych-folk band "Ghost".)
@ryanrandall https://github.com/whyrusleeping
pretty sure not the Ruby "Why" but I don't know for sure