I think I may need a break from Mastodon.

it is not an exaggeration to say that the majority of my feed is now just various anti-AI posts.

the people pushing AI aren't here. they are not on your Mastodon instance. When you post about how terrible and ignorant and stupid they are they do not see it, and it's not like that sort of thing persuades anyone even if they did.

I want to keep up with the cool stuff you are all making and doing. But I realize I am not entitled to just pick and choose from the things you find important enough to share, so I am not sure what else to do when I find that reading my feed no longer improves my mental health.

@gloriouscow Yea, maybe I should cool it. Anti-AI posts get me acknowledgment and a dopamine hit.

I want to gush more about Horse Girl Game, but no one else seems to want to play it. My other projects are stalled while I do CI maintenance (and I'm procrastinating on that too).

Tbh, I'm not doing too well, and I don't think a lot of my peers are either. And my living situation is _good_.

@cr1901

The one thing that probably has the most influence on our beliefs about people is our personal relationships with people different from us, and realizing that they are still people at the end of the day. I don't know why it seems to be a particular quirk of the human soul that we often need a personal example before we can feel empathy, but that seems to just be how it is.

I know people that use Claude or other tools, and that is costing me a lot of mental energy and quite a bit of cognitive dissonance. I know some of these people are talented, passionate, intelligent people who got into coding for the same reasons we all did. We can believe they are ethically challenged, perhaps. But we are all flawed, messy creatures who make daily ethical compromises in some way.

I'm really honestly surprised that more people here don't personally know anyone they hold in any regard whatsoever who uses an LLM, because it seems like I'm the only one struggling to understand why people I know and respect can look at the ethical costs and shrug. (Meanwhile, vegans are like "lol, first time?")

I know what I'm seeing is the result of a lot of frustration and hopelessness. I don't personally know what to do about it, either. But I'm just worried that we never will as long as we abandon nuance in favor of a perpetual fediverse circlejerk.

I've pretty much disowned my family for their beliefs. I am not ready to give up a good chunk of my friends as well. I can't. I can't just sit here angry at the world, utterly alone.

@gloriouscow I think your stance is reasonable. I think part of my problem is that I see using AI tools to build the future as a deeply personal attack on a worldview I hold dear ("maybe we shouldn't take our knowledge base for granted").

I've found AI stuff utterly hilarious before:

https://www.youtube.com/watch?v=kBHg832c_6I

God knows I'm not morally consistent 100% of the time (no one is).

RELEASE THE TRAINS!


@cr1901

i can wax sixteen different ways of cynical about it.

I am not really so much concerned with the effect on software quality long-term as I am concerned with our eventual irrelevance.

That assumes a generous prediction of the trajectory of AI, of course. I do believe that AGI will be achieved, and I am absolutely convinced we have no plan for it whatsoever.

I think that people can actually use Copilot to review PRs without that being the end of open source itself and all of civilization, but it is a technological truce at best.

there's been a lot of discussion over what our motivations as programmers even are. I feel my sense of personal pride giving way to thoughts about my legacy and my lasting contributions to the world, and start to wonder, if AI could help me accomplish that, ... well, the intellectual opiate starts to smell temptingly sweet.

There is an undeniable jealousy in seeing the ease with which people can make their ideas real with a few prompts now.

What would probably help more than Claude is if I could stop starting projects I never fucking finish.

But everything I am struggling to make now feels like I am casting irrelevant, trivial detritus into the turbulent sea of an uncertain future.

oh, I gave the world a cycle-accurate 8088 emulator. I should get a goddamn Nobel Prize.

I miss feeling optimistic about our future, but I couldn't tell you the last time i did.

@gloriouscow > What would probably help more than Claude is if I could stop starting projects I never fucking finish.

Yea, ain't that a f***ing mood. I joined the club, got the T-shirt, etc. Only advice I can give is "be kind to yourself" and "it's a marathon, not a sprint. Cut off one hydra head at a time." I'm sure these are generic platitudes, but sometimes... they help more than you'd think.

@gloriouscow @cr1901 I want to challenge your premise that we're speaking to an audience who isn't here, to the AI pushers.

The way I see it, if you're worried about being made irrelevant, if you believe this current push by the technofascist AI cult has any chance of leading to anything like "AGI", then you are our primary audience.

My goal in speaking out against and debunking their parlor tricks is to build a feeling among people who feel threatened by AI and forced into adopting it that all our enemy has is smoke and mirrors on top of standard capitalist abuses, not any miracle technology that is going to deliver them a win over us.

@gloriouscow @cr1901 Yes, surely, as you say, "AGI" is possible. It's possible to build a machine that replicates the nervous system of an animal with intelligence and give it matching sensory inputs, and any reasonable person would expect it to exhibit the same kind of intelligence.

This has nothing to do with the path the AI cult is on, which is literally Dan Brown level of bullshit, claiming that the secrets of uncovering new truths about the world are buried in the statistical patterns of past human language usage.

@dalias @cr1901

I scrolled your feed for quite some time and I saw one singular post about Lego figures that was anything besides angry reposts about AI.

I saw nothing about what you have created, what you are working on, why I would have any interest in your voice.

You might be absolutely right about everything you say and share, but I am never going to want to follow a feed of joyless anger.

I can't tell you how to utilize your social media, or that your primary purpose for using it shouldn't be amplifying the messages you find most important.

It's just not why I'm here.

@gloriouscow @cr1901 Maybe federation does something weird, but from what I recall/skim in the past couple days I've posted or boosted joy at seeing Girlyman is back, vintage equipment, information exposing funding sources behind the "age verification" offensive, joy at moving to Codeberg, folks searching for work, stuff about portability vs Apple bugs, joy of ppl making games, requests for help choosing software, info on new privacy threats, ...

On top of a pretty large amount of boosts related to AI threats, but a number of these are about strategies for dealing with demands to use it, ways to be effective communicating with project maintainers requesting policies to keep it out, etc. And some warnings about key infrastructure being compromised. No "angry reposts".

@dalias @cr1901

I'll back up and retract the 'angry' thing, because that's a fair complaint - being firm in your beliefs is not necessarily anger, and I might be projecting.

The thing is I don't actually suspect you're not creating and doing or that you're not an interesting person. What I feel is loss that I can't immediately see that.

I simply find a lot of the anti-AI arguments to not be in good faith. I will probably not convince you of that. There is a long enough list of real reasons AI is harmful that we do not need to exaggerate anything at all.

The fundamental paradox is that the more stridently anti-AI you are, the more unlikely you are to ever use it or experiment with it yourself, and consequently the less informed you really are about what current LLMs can do (specifically in the area of programming). Telling an overworked maintainer using Copilot to review inbound PRs that they are basically a techno-fascist collaborator cultist destined to spiral into para-social delusions until their skills have atrophied into dust may be cathartic, and maybe even fundamentally correct.

But they are going to disregard you as a lunatic, because all they see is a tool helping them and giving them back valuable slices of their life.

@gloriouscow @cr1901 I don't buy your claim that, by not using AI, you're less informed about what it can actually do.

The people who *do* try/use it are more deluded about what it can do by its very nature as a cognitohazard.

It is entirely possible, from a distance, with a purely information-theoretic understanding, to *know* that AI cannot do any of the things people claim it can. To know that it's just applying pattern shortcuts that would be unacceptably sloppy if a human did the same thing, but that's deemed acceptable when the machine does it, because the user doesn't see it as those shortcuts happening, just sees the illusion of a being with reasoning powers.

@gloriouscow @cr1901 Unless the "overworked maintainer" is dealing with a flood of critical vulnerability reports that affect real people's safety, the obvious answer to being overworked is to *slow down*. Just tell people "this isn't going to happen this month, or even this year". It's okay to tell people no! In fact, that's your primary job as a maintainer.

I don't think they're willingly technofascist collaborators when they opt instead to use "AI". But I am very confident that they are undermining the safety of their users and the future of their projects, which will not have valid copyright provenance to be FOSS in all jurisdictions and will be full of technical debt that's impossible to extricate the project from without vastly inordinate amounts of human labor that won't be possible for most of them to get. And doing this to projects that are already deep in the dependency trees of our systems is an extremely damaging act of vandalism.

@dalias @cr1901

That's basically the gist of it. You have a firm opinion on the quality of the code that, say, Claude would produce for a given prompt.

It feels like you're convinced that it is basically guaranteed to be slop far worse than any human could do.

The implication that comes with that is that the programmers who have worked on a project for maybe a decade or more are somehow suddenly unable to judge the merits of code when they have had to do so for years for human contributions of potentially dubious quality. Like everyone is somehow magically struck stupid by some nearly paranormal phenomenon. I suspect some people will go "yes, pretty much." We're not going to agree on that point.

The accusation that it may produce useful work but derive it from plagiarized human effort is one I find far more personally convincing, and I think is less likely to fall on deaf ears.

You are not telling them they cannot see with their own eyes - you're telling them they can't ethically take the shortcut.

I see your follow-up post and I think we could have a whole other lengthy conversation about the pressure maintainers often feel, and I don't actually disagree with your second paragraph at all.

@gloriouscow @cr1901 I don't think it's guaranteed to be slop far worse than any human could do. Humans make really bad slop too. Rather, I think there are cognitive factors (part of the cognitohazard) that make people far more likely to ignore the quality problem than when it comes from a human. This is not a "paranormal phenomenon". It's a real thing backed up by research and that I've watched happen to real people I know.

@dalias @cr1901

I realize I often use flippant turns of phrase that end up derailing what I'm trying to say. The cost of striving to be clever maybe.

The scenario I can plausibly see is that it starts out fairly harmless to use an LLM to review PRs, and you notice that it often points out real issues, and the more you use it the more confident you get. You can catch the times where it says something demonstrably wrong, because you have the skillset to do so, but the trend may be to no longer think critically about the rest of the code, so what the LLM misses will never be caught, because you will never look.

In that sense, that is the real risk of outsourcing intellectual labor. The counter-argument might be it could still be a net benefit over the quick scan you might only otherwise have managed to fit into your lifestyle and schedule.

I saw a particularly cynical take that I had to appreciate for its sheer audacity: the idea that just uncritically accepting everything an LLM spits out could actually be an improvement for a certain slice of the population getting their entire worldview from Fox News. That, of course, assumes that the people in control aren't going to twist the dials.

I hear the term cognitohazard and I sort of scoff and think well gee that wouldn't ever happen to me. I'm a smart cow, I've been sorting through internet bullshit since I installed Trumpet Winsock, I have lurked on /b/, I have seen a billion shitposts twinkle in the dark off the Tannhauser gate. Surely I'm immune, right?

Then I think about kids growing up with ChatGPT in their pocket and that scares the shit out of me. What are the consequences of an entire generation that has never had to critically search for the truth?

Here's my ultimate acquiescence: AI is incredibly dangerous technology, sold to us in a half-baked form that requires an uncommon level of critical thought to use constructively, and it's in the hands of oligarchs that, if not actual fascists, are at the bare minimum perfectly happy to kiss fascist ass. It will damage society in incalculable ways, and the best argument I have is to sputter "it's actually legitimately useful for programming and u guys are mean."

I'm just tired.

@gloriouscow @cr1901 "using" claude is blessed because it drains the enemies coffers. it's the paying for it that is cursed.

@gloriouscow @cr1901 For myself, with regards to dealing with the cognitive dissonance of watching technologists I personally know and admire adopt LLMs (some of whom are on here, too, and who I am somewhat embarrassed to say have seen my unhinged anti-AI posts 😅):

I think this has been much easier for me to deal with, because my personal observation for a long time has been that technologists have a very weak sense of ethics. Both in the sense of having good ethics that I agree with, and in the sense of having thought about the subject of ethics at all. Most technologists have not sat down and decided what their moral boundaries are, and what the relationship of their own morality is to the technology they use or develop. Even very skilled ones that I have learned a lot from. Most people are content to think that technologies are value-neutral, and are content to follow the trend of what everyone else is doing.

I have observed brilliant technologists, long before LLMs, shrug at the ethics of many other things, so for me it is entirely believable that they shrug at the ethics of this, too. I don't think many of these people are inherently morally bad, but I do think that they just don't care. This is bad, because I do think that people just following the trend into widespread AI adoption is an ethically bad outcome. However, I also think that as AI backlash increases, if the pendulum swings back to anti-AI being the norm in software, they will follow as well.

It is also the sad truth that for minority women in many computer fields, we must work with brilliant peers who are not necessarily bad people, but who by way of their privileged position in life will say and do ignorant things. We must see someone saying something hurtful, but not make a fuss about it, because it's not worth the time of having to personally educate that person, to deal with the backlash, or to be labelled as a "confrontational" person. And we must do this as well, for brilliant colleagues and mentors and people we admire, and that we learn a lot from. So in that sense, I am very well practiced at this kind of cognitive dissonance - I do it in order to preserve a career.

I hope this is maybe a little helpful for you, though this is only my personal experience. And if you do take a break, I hope it is a restful and rejuvenating one!

@cxiao @cr1901

i mean i know the tech-bro type, and ever since John Carmack revealed some profoundly bad takes I have been careful about putting people on a pedestal just due to technical prowess. people don't typically become my friend just because they are good engineers, i like to become friends with people that make me laugh, who can be real with me, vulnerable, frank, and generous with knowledge and experience.

the kind of qualities that don't typically make me think someone's ethical machinery is broken.

@gloriouscow @cr1901 I hear you, and thanks for posting this. I am also jaded from this.

I know people who use these tools, and am finding the flattened argument against them hard to sit with. In particular, two friends who are severely dyslexic have found some solace in ChatGPT as an aide. They are aware of the problems, but are also aware of their systematic exclusion based on ability. This is not as black and white as people tend to make it on here.

@gloriouscow @cr1901

Good idea to take a break. "If you get tired, learn to rest, not to quit," said Banksy.

I refreshed my Mastodon timeline once by starting completely over on a new instance, with a renewed handle. That helped. I followed fewer people, had fewer followers, and my timeline seemed better again with less noise. Filters helped me too, at least temporarily eliminating some of the noise.

Take good care of yourself. Take a break. Set down a load. Set out on new paths.

@gloriouscow I know the feeling.
@gloriouscow I feel you. Muting or forcing a content warning on people and tags helped me tremendously when I was gagging on foreign politics. There are so many other beautiful things here too.

@gloriouscow I mostly use the federated feed, and there are quite a few ai-pushers there, but I also just block them and move on, so I don't see them as often anymore.

On one hand I think the danger of AI is mostly being used to manufacture political and medical consent by rich folks, but on the other the folks getting laid off are mostly also rich folks who ruined the country with their fake, bullshit jobs and so on, so it's still a little funny, as a poor

@gloriouscow 'boo hoo my waymo won't get me to my air bnb' they can all go die in a fire lol
@gloriouscow join us it's fun to take pot shots at the slop bots
@gloriouscow you should follow some pretty hashtags @aeva suggested this to me and it saved my life
@gloriouscow i think i mean to say please stay i enjoy your presence

@lritter @aeva

give me some pretty hashtags

@gloriouscow @lritter also I recommend # macrophotography (and the various non-english equivalents) though fair warning it has the occasional closeup on insects. my wife introduced me to that one last night :)

@aeva @lritter

i would also like to see birds, and maybe foxes

@gloriouscow @lritter i don't know of any hash tags for those but i'd be astonished if there weren't any
@gloriouscow @lritter also I highly recommend following @ [email protected], and also all of the great lakes live bots
@aeva @gloriouscow @lritter I don't follow birds, but they come to my timeline anyway and the hash tag seems to be... # birds .

@anton @aeva @lritter

this is my surprised face

@gloriouscow @anton @aeva imagine if it were all like "nah you can't follow birds anymore nowadays it's all pensioners posting audrey hepburn lookalikes from the 60s"
@aeva @gloriouscow @lritter
Oh, also:
'bird of the day': @birds@moresci.sale

@johncarlosbaez @aeva @lritter

i should make a feed of cows every fifteen minutes

@gloriouscow - each day it's a new species, most of them you'll never have seen before, and they are described sometimes quite gloriously.
@gloriouscow blocking keywords any use?
@Photo55 A few people have pointed this out and I'm reading up on the filtering function now. My main concern is that a two-letter term might be an overly broad thing to filter, but I can give it a shot.
@gloriouscow Four characters: space-AI-space (" AI ") will catch many, with few false positives.
AGI tends to show up in more interesting posts, but you could catch that too. Space-AI-full-stop (" AI.") is another term.
@Photo55 I'll miss the posts about video game AI, but I have to wonder if people talking about unit pathfinding and such aren't desperately avoiding that term anyway now.
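(Aside: the "space AI space" heuristic discussed above can be sketched as a word-boundary regex. This is a hypothetical illustration of why the surrounding spaces cut false positives, not how Mastodon's filter feature is actually implemented.)

```python
import re

# Minimal sketch of the " AI " filter heuristic. The \b word boundary plays
# the role of the surrounding spaces/punctuation, so "AI" and "AI." match,
# but "email", "said", and "maintainer" do not. It is case-sensitive on
# purpose, and "AGI" would need its own separate filter term.
AI_FILTER = re.compile(r"\bAI\b")

def should_filter(post: str) -> bool:
    """Return True if the post mentions AI as a standalone term."""
    return bool(AI_FILTER.search(post))
```

A word boundary also catches "AI" at the start or end of a post, or followed by punctuation, which a literal space-AI-space match would miss.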

@gloriouscow I feel much the same way, but I don't feel I need to take a break at this time.

Part of this is that I perceive the endless (and somewhat pointless) anti-AI posts as a reaction to AI being thrust upon people against their wishes. "Screaming into the void," in a way.

The other part of this is potentially a little "herd signalling", saying loudly and proudly that they're part of the anti-AI herd and feeling community about that.

In my particular situation, it's pleasing to see that I'm not alone in my feelings about this, as I'm pretty much the lone AI Luddite in an office of people pushing LLMs. And as much as I've made my bed and will lie in it, it is pleasing to know that I'm not the only one out there in a similar situation.

@gloriouscow please take care! It's really difficult to avoid this topic lately :(
@gloriouscow honestly, whenever something like that happens to me i'll just put a filter on the term for a while. helps a ton :)