I think I may need a break from Mastodon.

it is not an exaggeration to say that the majority of my feed is now just various anti-AI posts.

the people pushing AI aren't here. they are not on your Mastodon instance. When you post about how terrible and ignorant and stupid they are they do not see it, and it's not like that sort of thing persuades anyone even if they did.

I want to keep up with the cool stuff you are all making and doing. But I realize I am not entitled to just pick and choose from the things you find important enough to share, so I am not sure what else to do when I find that reading my feed no longer improves my mental health.

@gloriouscow Yeah, maybe I should cool it. Anti-AI posts get me acknowledgment and a dopamine hit.

I want to gush more about Horse Girl Game, but no one else seems to want to play it. My other projects are stalled while I do CI maintenance (and I'm procrastinating on that too).

Tbh, I'm not doing too well, and I don't think a lot of my peers are either. And my living situation is _good_.

@cr1901

The one thing that probably has the most influence on our beliefs about people is our personal relationships with people different from us, and realizing that they are still people at the end of the day. I don't know why it seems to be a particular quirk of the human soul that we often need a personal example before we can feel empathy, but that seems to just be how it is.

I know people who use Claude or other tools, and that is costing me a lot of mental energy and quite a bit of cognitive dissonance. I know some of these people are talented, passionate, intelligent people who got into coding for the same reasons we all did. We can believe they are ethically challenged, perhaps. But we are all flawed, messy creatures who make daily ethical compromises in some way.

I'm really, honestly surprised that more people here don't personally know anyone they hold in any regard whatsoever who uses an LLM, because it seems like I'm the only one struggling to understand why people I know and respect can look at the ethical costs and shrug. (Meanwhile, vegans are like "lol, first time?")

I know what I'm seeing is the result of a lot of frustration and hopelessness. I don't personally know what to do about it, either. But I'm just worried that we never will as long as we abandon nuance in favor of a perpetual fediverse circlejerk.

I've pretty much disowned my family for their beliefs. I am not ready to give up a good chunk of my friends as well. I can't. I can't just sit here angry at the world, utterly alone.

@gloriouscow I think your stance is reasonable. I think part of my problem is that I see using AI tools to build the future as a deeply personal attack on a worldview I hold dear ("maybe we shouldn't take our knowledge base for granted").

I've found AI stuff utterly hilarious before:

https://www.youtube.com/watch?v=kBHg832c_6I

God knows I'm not morally consistent 100% of the time (no one is).

RELEASE THE TRAINS!

@cr1901

I can wax sixteen different ways of cynical about it.

I am not really so much concerned with the long-term effect on software quality as I am with our eventual irrelevance.

That assumes a generous prediction of the trajectory of AI, of course. I do believe that AGI will be achieved, and I am absolutely convinced we have no plan for it whatsoever.

I think that people can actually use Copilot to review PRs without that being the end of open source itself and all of civilization, but it is a technological truce at best.

there's been a lot of discussion over what our motivations as programmers even are. I feel my sense of personal pride giving way to thoughts about my legacy and my lasting contributions to the world, and start to wonder, if AI could help me accomplish that, ... well, the intellectual opiate starts to smell temptingly sweet.

There is an undeniable jealousy in seeing the ease with which people can make their ideas real with a few prompts now.

What would probably help more than Claude is if I could stop starting projects I never fucking finish.

But everything I am struggling to make now feels like I am casting irrelevant, trivial detritus into the turbulent sea of an uncertain future.

Oh, I gave the world a cycle-accurate 8088 emulator. I should get a goddamn Nobel Prize.

I miss feeling optimistic about our future, but I couldn't tell you the last time I did.

@gloriouscow @cr1901 I want to challenge your premise that we're speaking to an audience who isn't here, to the AI pushers.

The way I see it, if you're worried about being made irrelevant, if you believe this current push by the technofascist AI cult has any chance of leading to anything like "AGI", then you are our primary audience.

My goal in speaking out against and debunking their parlor tricks is to build a feeling among people who feel threatened by AI and forced into adopting it that all our enemy has is smoke and mirrors on top of standard capitalist abuses, not any miracle technology that is going to deliver them a win over us.

@dalias @cr1901

I scrolled your feed for quite some time and I saw one singular post about Lego figures that was anything besides angry reposts about AI.

I saw nothing about what you have created, what you are working on, why I would have any interest in your voice.

You might be absolutely right about everything you say and share, but I am never going to want to follow a feed of joyless anger.

I can't tell you how to use your social media, or that your primary purpose for using it shouldn't be amplifying the messages you find most important.

It's just not why I'm here.

@gloriouscow @cr1901 Maybe federation does something weird, but from what I recall/skim in the past couple days I've posted or boosted joy at seeing Girlyman is back, vintage equipment, information exposing funding sources behind the "age verification" offensive, joy at moving to Codeberg, folks searching for work, stuff about portability vs Apple bugs, joy of ppl making games, requests for help choosing software, info on new privacy threats, ...

On top of a pretty large amount of boosts related to AI threats, but a number of these are about strategies for dealing with demands to use it, ways to be effective communicating with project maintainers requesting policies to keep it out, etc. And some warnings about key infrastructure being compromised. No "angry reposts".

@dalias @cr1901

I'll back up and retract the 'angry' thing, because that's a fair complaint - being firm in your beliefs is not necessarily anger, and I might be projecting.

The thing is I don't actually suspect you're not creating and doing or that you're not an interesting person. What I feel is loss that I can't immediately see that.

I honestly do find a lot of the anti-AI arguments to be not in good faith. I will probably not convince you of that. There is such a huge list of reasons that AI is harmful that we do not need to exaggerate anything at all.

The fundamental paradox is that the more stridently anti-AI you are, the less likely you are to ever use it or experiment with it yourself, and consequently the less informed you really are about what current LLMs can do (specifically in the area of programming). Telling an overworked maintainer using Copilot to review inbound PRs that they are basically a techno-fascist collaborator cultist destined to spiral into parasocial delusions until their skills have atrophied into dust may be cathartic, and maybe even fundamentally correct.

But they are going to disregard you as a lunatic, because all they see is a tool helping them and giving them back valuable slices of their life.

@gloriouscow @cr1901 I don't buy your claim that, by not using AI, you're less informed about what it can actually do.

The people who *do* try/use it are more deluded about what it can do by its very nature as a cognitohazard.

It is entirely possible, from a distance, with a purely information-theoretic understanding, to *know* that AI cannot do any of the things people claim it can. To know that it's just applying pattern shortcuts that would be unacceptably sloppy if a human did the same thing, but that's deemed acceptable when the machine does it, because the user doesn't see those shortcuts happening; they just see the illusion of a being with reasoning powers.

@gloriouscow @cr1901 Unless the "overworked maintainer" is dealing with a flood of critical vulnerability reports that affect real people's safety, the obvious answer to being overworked is to *slow down*. Just tell people "this isn't going to happen this month, or even this year". It's okay to tell people no! In fact, that's your primary job as a maintainer.

I don't think they're willingly technofascist collaborators when they opt instead to use "AI". But I am very confident that they are undermining the safety of their users and the future of their projects, which will not have valid copyright provenance to be FOSS in all jurisdictions and will be full of technical debt that's impossible to extricate the project from without vastly inordinate amounts of human labor that won't be possible for most of them to get. And doing this to projects that are already deep in the dependency trees of our systems is an extremely damaging act of vandalism.

@dalias @cr1901

That's basically the gist of it. You have a firm opinion on the quality of the code that, say, Claude would produce for a given prompt.

It feels like you're convinced it is basically guaranteed to be slop far worse than any human could produce.

The implication that comes with that is that the programmers who have worked on a project for maybe a decade or more are somehow suddenly unable to judge the merits of code when they have had to do so for years for human contributions of potentially dubious quality. Like everyone is somehow magically struck stupid by some nearly paranormal phenomenon. I suspect some people will go "yes, pretty much." We're not going to agree on that point.

The accusation that it may produce useful work but derive it from plagiarized human effort is one I find far more personally convincing, and I think is less likely to fall on deaf ears.

You are not telling them they cannot see with their own eyes - you're telling them they can't ethically take the shortcut.

I see your follow-up post, and I think we could have a whole other lengthy conversation about the pressure maintainers often feel, and I don't actually disagree with your second paragraph at all.

@gloriouscow @cr1901 I don't think it's guaranteed to be slop far worse than any human could do. Humans make really bad slop too. Rather, I think there are cognitive factors (part of the cognitohazard) that make people far more likely to ignore the quality problem than when it comes from a human. This is not a "paranormal phenomenon". It's a real thing backed up by research and that I've watched happen to real people I know.

@dalias @cr1901

I realize I often use flippant turns of phrase that end up derailing what I'm trying to say. The cost of striving to be clever maybe.

The scenario I can plausibly see is that it starts out fairly harmless to use an LLM to review PRs. You notice that it often points out real issues, and the more you use it, the more confident you get. You can catch the times when it says something demonstrably wrong, because you have the skill set to do so, but the trend may be to no longer think critically about the rest of the code, so what the LLM misses will never be caught, because you will never look.

In that sense, that is the real risk of outsourcing intellectual labor. The counter-argument might be that it could still be a net benefit over the quick scan you might only otherwise have managed to fit into your lifestyle and schedule.

I saw a particularly cynical take that I had to appreciate for its sheer audacity: the idea that just uncritically accepting everything an LLM spits out could actually be an improvement for a certain slice of the population getting their entire worldview from Fox News. That, of course, assumes that the people in control aren't going to twist the dials.

I hear the term cognitohazard and I sort of scoff and think, well, gee, that wouldn't ever happen to me. I'm a smart cow; I've been sorting through internet bullshit since I installed Trumpet Winsock, I have lurked on /b/, I have seen a billion shitposts twinkle in the dark off the Tannhauser Gate. Surely I'm immune, right?

Then I think about kids growing up with ChatGPT in their pocket and that scares the shit out of me. What are the consequences of an entire generation that has never had to critically search for the truth?

Here's my ultimate acquiescence: AI is incredibly dangerous technology, sold to us in a half-baked form that requires an uncommon level of critical thought to use constructively, and it's in the hands of oligarchs who, if not actual fascists, are at the bare minimum perfectly happy to kiss fascist ass. It will damage society in incalculable ways, and the best argument I have is to sputter "it's actually legitimately useful for programming and u guys are mean".

I'm just tired.