
Curious if vibecoding / AI agent assisted dev is currently common in ATmosphere dev? The feelings about AI generated code are def different on here than on fedi, so... I see a CLAUDE.md in bluesky-social/social-app https://github.com/bluesky-social/social-app/blob/main/CLAUDE.md
Example: https://bsky.app/profile/why.bsky.team/post/3meomclcfss2w
> Until December of last year I was using LLMs as fancy autocomplete for coding. It was nice for scaffolding out boilerplate, or giving me a gut check on some things, or banging out some boring routine stuff.
>
> In the past two months Claude has written about 99% of my code. Things are changing. Fast

I have this suspicion that the ATproto stack, at least the stuff from Bluesky, is heading towards "majority-vibecoded" but that's mostly just from seeing a lot of posts from the Bluesky eng team rather than me having spent much time in the codebase
Why is def hugely responsible for much of Bluesky/ATProto's design and if *he's* mostly letting Claude write 99% of his code, the rest of the eng team is likely to be heading in that direction too?
Also https://bsky.app/profile/pfrazee.com/post/3meogr22l3k2d
> A year ago, I thought LLMs were kind of neat but not that useful. I saw the code autocomplete and thought, meh.
>
>Last summer just flipped. I never ever thought I would see automated code generation like we see now.
>
> I know there’s baggage but you need to know the coders are being real about this

Welp, there we go https://bsky.app/profile/why.bsky.team/post/3mgaqaaisfs2e
> Oh interesting, people who don’t know how to build software are getting mad at my post about building software. Cute.
>
> Let me be clear, over the next year, the job of software engineer will shift dramatically to no longer have typing syntax into an editor as its primary time sink.

You can guess my opinions, but I have left them out of this particular thread. My goal here was actually just to see if my gut sense was correct that Bluesky was heading in a vibecoding direction. I think that's fairly confirmed based on the discourse I highlighted, and also on what seems to be an indirect response from Bluesky's team (though I think that's more because of what @davidgerard wrote about it than what I did).
Anyway, gnarly time, but people can decide for themselves whether that aligns with their values. One thing seems for sure: it's a bet many orgs are making, and this will be one thing where in the ~decentralized space we'll be learning much more about whether that bet pays off and results in a better or worse ecosystem over time.
@cwebber ai usage has been normalised in most of the atproto dev ecosystem for a while now. just as some recent other examples,
tangled's seed round also explicitly mentions ai agents for coding as a use case (https://blog.tangled.org/seed), and semble also targets ai agents as a use case (https://blog.cosmik.network/updates-february-2026)
@laurenshof is it a taboo or are people less willing to spend significant amounts of money, energy and water on generative frontier ai & thus contributing to bigtech becoming even more powerful & destroying our planet? Using models mostly (not all) built on theft & badly treated data wranglers? Do we really need to sacrifice all on the altar of AI?
I'm ambivalent about AI. Like all tech it has good & bad sides, for example the promise of empowerment & the destruction of it. Yet at this moment the bad stuff seems to mostly outweigh the good, since I believe all the other things mentioned are more important to me than the (promised) efficiency.
I guess the devil is in the details.
@res260
Please read my full reply, including my ambivalence towards AI. I'm happy to discuss this topic in different directions.
I don't doubt that you are, I'm just pointing out that this specific reply is a good representation of how people react to LLM discussions in general on fedi. This is not the case on Bluesky, so it's normal that people self-censor because no one likes getting negative replies to their posts.
For me, the vibe on fedi is: don't openly talk about positive things regarding LLMs, criticizing is okay, asking questions is okay
@res260 you might be right.
The Fediverse might seem way more negative than other spaces due to an AI Overton Window.
The mainstream mostly seems to celebrate AI as a silver-bullet solution for lots of (social) issues. It's questionable at best whether (and how) AI might actually contribute to solving those. At the same time there's less attention to the side effects of AI itself.
The Fediverse might counter-weigh this overly positive mainstream view of AI?