@hadon #memetics alert. Good description of the variables pulling people into bullshit. Socially, identity struggle with truth. #misinformation #trump #fascism
@TheBreadmonkey concepts are an evolutionary thing. They are shaped by what's needed to handle the world at a specific time and thus evolve over time, much as species do in biology. It's all #memetics to me

I saw this on Mastodon and almost had a stroke.

@davidgerard wrote:

“Most of the AI coding claims are conveniently nondisprovable. What studies there are show it not helping coding at all, or making it worse

But SO MANY LOUD ANECDOTES! Trust me my friend, I am the most efficient coder in the land now. No, you can’t see it. No, I didn’t measure. But if you don’t believe me, you are clearly a fool.

These guys had one good experience with the bot, they got one-shotted, and now if you say “perhaps the bot is not all that” they act like you’re trying to take their cocaine away.”

First, the claims are falsifiable, and proving propositions about algorithms (i.e., code) is part of what I do for a living. Mathematically, human-written code and AI-written code can both be tested, which means you can falsify propositions about them. You would test them the same way.

There is no intrinsic mathematical distinction between code written by a person and code produced by an AI system. In both cases, the result is a formal program made of logic and structure. In principle, the same testing techniques can be applied to each. If claims about AI-generated code were really nondisprovable, you could not test for differences between what is generated by a human and what is generated by AI. But you can test for them. Studies have found that AI-generated code tends to exhibit a higher frequency of certain types of defects, so reviewers and testers know what logic flaws and security weaknesses to look for. That would not be the case if the claims were nondisprovable.

You can study this from datasets where the source of the code is known. You can use open-source pull requests identified as AI-assisted versus those written without such tools. You then evaluate both groups using the same industry-standard analysis tools: static analyzers, complexity metrics, security scanners, and defect classification systems. These tools flag bugs, vulnerabilities, performance issues, and maintainability concerns. They do so in a consistent way across samples.
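
A minimal sketch of that comparison step, assuming the analyzer output has already been collected as per-sample lists of (severity, rule) findings. The group labels, severities, and rule IDs below are invented for illustration, not taken from any particular study or tool:

```python
from collections import Counter

def issue_rates(findings_by_group):
    """Summarize analyzer findings per group: issues per sample
    and a breakdown by severity."""
    stats = {}
    for group, samples in findings_by_group.items():
        total = sum(len(findings) for findings in samples)
        severities = Counter(sev for findings in samples for sev, _rule in findings)
        stats[group] = {
            "samples": len(samples),
            "issues_per_sample": total / len(samples),
            "by_severity": dict(severities),
        }
    return stats

# Toy data: each sample is one PR's findings as (severity, rule) pairs,
# as a static analyzer or security scanner might report them.
findings = {
    "ai": [
        [("major", "CWE-798")],                          # hardcoded credentials
        [("minor", "unused-var"), ("major", "CWE-89")],  # SQL injection
    ],
    "human": [
        [("minor", "complexity")],
        [],
    ],
}
stats = issue_rates(findings)
```

Once both groups go through the same pipeline, the comparison is just arithmetic on observable counts, which is exactly what makes the claims disprovable.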

A widely cited analysis of 470 real pull requests reported that AI-generated contributions contained roughly 1.7 times as many issues on average as human-written ones. The difference included a higher number of critical and major defects. It also included more logic and security-related problems. Because these findings rely on standard measurement tools — counting defects, grading severity, and comparing issue rates — the results are grounded in observable data. Again, I am making a point here. It’s testable and therefore disprovable.

This is a good paper that goes into it, “Human-Written vs. AI-Generated Code: A Large-Scale Study of Defects, Vulnerabilities, and Complexity”:

In this paper, we present a large-scale comparison of code authored by human developers and three state-of-the-art LLMs, i.e., ChatGPT, DeepSeek-Coder, and Qwen-Coder, on multiple dimensions of software quality: code defects, security vulnerabilities, and structural complexity. Our evaluation spans over 500k code samples in two widely used languages, Python and Java, classifying defects via Orthogonal Defect Classification and security vulnerabilities using the Common Weakness Enumeration. We find that AI-generated code is generally simpler and more repetitive, yet more prone to unused constructs and hardcoded debugging, while human-written code exhibits greater structural complexity and a higher concentration of maintainability issues. Notably, AI-generated code also contains more high-risk security vulnerabilities. These findings highlight the distinct defect profiles of AI- and human-authored code and underscore the need for specialized quality assurance practices in AI-assisted programming.

https://arxiv.org/abs/2508.21634

The big problem in discussions about AI in programming is either-or thinking, when it’s not about using it everywhere or banning it entirely. Tools like AI have specific strengths and weaknesses. Saying “never” or “always” oversimplifies the issue and turns the narrative into propaganda that creates moral panic or shills AI. It’s a bit like saying you shouldn’t use a hammer just because it’s not good for brushing your teeth.

AI tends to produce code that’s simple, often a bit repetitive, and very verbose. It’s usually pretty easy to read and tweak. This helps with long-term maintenance. But AI doesn’t reason about code the way an experienced developer does. It makes mistakes that a human wouldn’t, potentially introducing security flaws. That doesn’t mean we shouldn’t use it where it works well, which is not everywhere.

AI works well for certain tasks, especially when the scope is narrow and the risk is low. Examples include generating boilerplate code, internal utilities, or prototypes. In these cases, the tradeoff is manageable. However, it’s not suitable for critical code like kernels, operating systems, compilers, or cryptographic libraries. A small mistake in memory safety or privilege separation can lead to major failures. Problems with synchronization, pointer management, or access control can cause major problems, too.

Other areas where AI should not be used include memory allocation handling, scheduling, process isolation, or device drivers. A lot of that depends on implicit assumptions in the system’s architecture. Generative models don’t grasp these nuances. Instead of carefully considering the design, AI tends to replicate code patterns that seem statistically likely, doing so without understanding the purpose behind them.

Yes, I’m aware that Microsoft is using AI to write code everywhere I said it should not be used. That is the problem. However, political pundits, lobbyists, and anti-tech talking heads are discussing something they have no understanding of and aren’t specifying what the problem actually is. This means they can’t possibly lead grassroots initiatives into actual laws that specify where AI should not be used, which is why we have this weird astroturfing bullshit.

They’re taking advantage of the reaction to Microsoft using AI-generated code where it shouldn’t be used to argue that AI shouldn’t be used anywhere at all in any generative context. AI is useful for tasks like writing documentation, generating tests, suggesting code improvements, or brainstorming alternative approaches. These ideas should then be thoroughly vetted by human developers.

Something I’ve started to notice about a lot of the content on social media platforms is that most of the posts people are liking, sharing, and memetically mutating—and then spreading virally—usually don’t include any citations, sources, or receipts. It’s often just some out-of-context screenshot with no reference link or actual sources.

A lot of the anti-AI content is not genuine critique. It’s often misinformation, but people who hate AI don’t question it or ask for sources because it aligns with their biases. The propaganda on social media has gotten so bad that anything other than heavily curated and vetted feeds is pretty much useless, and it’s filled with all sorts of memetic contagions with nasty hooks that are optimized for you algorithmically. I am at the point where I will disregard anything that is not followed up with a source. Period. It is all optimized to persuade, coerce, or piss you off. I am only writing about this because I’m actually able to contribute genuine information about the topic.

That they said symbolic propositions written by AI agents (i.e., code) are nondisprovable because they were written by AI boggles my mind. It’s like saying that an article written in English by AI is not English because AI generated it. It might be a bad piece of text, but it’s syntactically, semantically, and grammatically English.

Basically, any string of data can be represented in a base-2 system, where it can be interpreted as bits (0s and 1s). Those bits can be used as the basis for symbolic reasoning. In formal propositional logic, a proposition is a sequence of symbols constructed according to strict syntax rules (atomic variables plus logical connectives). Under a given semantics, it is assigned exactly one truth value (true or false) in a two-valued logic system.

They are essentially saying that code written by AI is not binary, isn’t symbolically logical at all, and cannot be evaluated as true or false by implying it is nondisprovable. At the lowest level, compiled code consists of binary machine instructions that a processor executes. At higher levels, source code is written in symbolic syntax that humans and tools use to express logic and structure. You can also translate parts of code into formal logic expressions. For example, conditions and assertions in a program can be modeled as Boolean formulas. Tools like SAT/SMT solvers or symbolic execution engines check those formulas for satisfiability or correctness. It blows my mind how confidently people talk about things they do not understand.
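
As a toy illustration of that last point: under a two-valued semantics, every truth assignment gives a formula exactly one truth value, so satisfiability can be checked by brute-force enumeration. Real tools use SAT/SMT solvers rather than enumeration, but the principle is the same. This sketch is mine, not taken from any particular solver:

```python
from itertools import product

def satisfiable(formula, variables):
    """Brute-force satisfiability: try every truth assignment and check
    whether any makes the formula true. Exponential, but exact."""
    return any(
        formula(dict(zip(variables, values)))
        for values in product([False, True], repeat=len(variables))
    )

# Under a two-valued semantics, each assignment yields exactly one truth value.
contradiction = lambda a: a["p"] and not a["p"]   # never true
disjunction = lambda a: a["p"] or a["q"]          # true for some assignments
```

A program's guard conditions and assertions can be encoded the same way, which is precisely how symbolic execution engines falsify propositions about code, no matter who or what wrote it.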

Furthermore, it’s wild to me that they don’t even realize the projection.

@davidgerard wrote:

“But SO MANY LOUD ANECDOTES! Trust me my friend, I am the most efficient coder in the land now. No, you can’t see it. No, I didn’t measure. But if you don’t believe me, you are clearly a fool.”

They are presenting a story—i.e., saying that the claims are not disprovable—and accusing computer scientists of using anecdotal evidence without actually providing evidence to support this, while expecting people to take it prima facie. They’re doing exactly what they accuse others of doing.

It comes down to this: they feel that people ought not to use AI, so they are tacitly committed to a future in which people do not use AI. For example, a major argument against AI is the damage it is doing to resources, which is driving up the prices of computer components, as well as the ecological harm it causes. They feel justified in lying and misinforming others if it achieves the outcome they want—people not using AI because it is bad for the environment. That is a very strong point, but most people don’t care about that, which is why they lie about things people would care about.

It’s corrupt. And what’s really scary is that people don’t recognize when they are part of corruption or a corrupt conspiracy to misinform. Well, they recognize it when they see the other side doing it, that is. No one is more dangerous than people who feel righteous in what they are doing.

It’s wild to me that the idea that if you cannot persuade someone, it is okay to bully, coerce, harass them, or spread misinformation to get what you want—because your side is right—has become so normalized on the Internet that people can’t see why it is problematic.

That people think it is okay to hurt others to get them to agree is the most disturbing part of all of this. People have become so hateful. That is a large reason why I don’t interact with people on social media, really consume things from social media, or respond on social media, and am writing a blog post about it instead of engaging with the person who prompted it.


So, I am a Computational Biologist. Keep that in mind. I’m an actual scientist who works with ecological concepts, specifically the microbiome. One of the most insufferable reactions to the cyberpunk era we inhabit is the emergence of anti-science ideas from the left in response to techno-fascism. The strange part is that many people on the left do not even recognize them as anti-science, because they assume the left is aligned with science and the right opposed to it; ergo, if the left says it, it must be scientific. It is insane: washing your hands is technology. Medicine is technology.

I think, because the Internet has hijacked people’s brains, many conflate technology with electronics or machines. Anthropologically, technology consists of material objects, techniques, and organized practices through which humans intentionally intervene in their environments. Technology is culture, and human culture is technology. When someone learns a skill or a discipline from someone else, that is an extension of technology.

Technology encompasses craft traditions (blacksmithing), agriculture, and institutionalized processes of teaching and learning. Agriculture is one of the oldest forms of technology. Yes, farmers are tech workers. I write code, but I also spent a large amount of time on a farm, and I can tell you that many tech workers who pride themselves on writing code would not know what to do with farm equipment.

So, from that broad perspective, we can sum technology up in one word: education. A basic heuristic for determining if something is cultural or not is: can it be taught and learned? These words? I was taught English, and I am using an invented language to transmit knowledge to you; ergo, I am using technology to transmit cultural knowledge to you. Reading a book is thus using a piece of human technology. So, being anti-tech connotes being anti-education.

What got me thinking about this is a toot I read on Mastodon:

The truth is that society needs to develop ethically and ecologically more than it does technologically. That’s not to say that we should shun technology, but our development along other lines lags far behind our technological capacity.

↬ecoevo.social/@benlockwood/116052113455871454

Sounds valid, right? That is the distinct smell of bullshit. This is a clear example of what is called a platitude. Platitudes are memetically hijacking people’s brains. Memetics actually hijack your brain—they change it. It’s similar to how a retrovirus can alter the genome of its host. So, trying to have conversations with these people is pointless, which is why I avoid the chronically online Internet scene and arguing with them.

It made me want to scream. As I mentioned earlier, technology is basically a set of things you learn from other humans—typically within a culture—that helps you do or make something. You know what else is learned within human society? A normative set of cultural values about how we ought to behave. So, both technology and culture emerge from the same thing simultaneously and mutually. You cannot have humans intervening in things to achieve ecological development, because that is technology, and you cannot educate humans on ethics without an invented language. It is literally an anti-education argument.

Ethics and technology arise together from the same human conditions and social processes. It makes little sense to claim that technology is “outpacing” ethics. The two do not develop independently. We form ethical norms in response to new capacities and circumstances. There would be no cultural norms about how to use the Internet if the Internet did not exist. And, there would be no ethical debates about AI if AI did not exist. Ethical reflection emerges alongside technological change because both are products of human culture.

As new problems create new technologies that create new problems, societies respond by negotiating norms, rules, and expectations appropriate to those contexts. The same pattern appears in politics. Politics concerns who gets what, when, and how—it is the negotiation of power, rights, and resources. Without resources or competing claims, there would be nothing to negotiate. Ethics and politics are not trailing behind technology because they are co-emergent responses to the same underlying realities.


Stepping Back From Social Media To Read a Book

I’m taking a break. After spending like two years in the worst parts of the Internet modeling the memetic spread of conspiracy-driven behavioral patterns and developing social media software as a side hustle, I think I’m going to take a step back and, I don’t know, maybe read a book? lol.

I’m a Computational Biologist who pretty much studies the memetics of conspiracy theories and how they act as another vector/epidemiological layer. I’ve also been working on various contracts for social media development stuff. Working on the shit I’ve been working on for years forces you to see the worst parts of people that they split off. It makes you hate everyone — and I mean everyone.

Your Bluesky Feed Is Porn You Didn’t Ask For Because Your Friends Are Gooners With a Severe Porn Addiction

A common complaint I see people make on Bluesky is: why am I being served so much porn or things I am not interested in? They will incorrectly believe that the algorithm is broken. It’s not broken. You didn’t know the people you knew as well as you thought you did. Porn addiction is a thing, and porn addiction is especially common with weebs. You’re seeing deranged shit because people you follow have porn addictions and are into deranged shit. So, though you may not be consuming porn, people in your network are. That activity kicks into your feeds.

The issue I have with that is that it essentially normalizes being sex pests in a space on the Internet. That sets the expectation that it is good—attractive, even—to act like that elsewhere. That expectation alienates relationships. Bluesky creates a cultural space that offers an unrealistic, bizarre representation of social relationships, which isolates and alienates the users who stay on there consuming erotica and porn like they do.

So, user repos in Bluesky have a property for likes. Bluesky’s underlying AT Protocol stores likes as first-class structured records in each user’s AT Protocol repository. In the AT Protocol lexicon, a like is an app.bsky.feed.like record type. Unlike a simple boolean flag on a post, it is its own record with a creation timestamp and a subject field that holds a strong reference to the liked record.

That strong reference is composed of an AT-URI and a CID. The AT-URI identifies the exact record in the network by DID, collection, and record key. The CID is a cryptographic content identifier that uniquely identifies the exact content of that liked record.

These like records exist under the app.bsky.feed.like namespace in the user’s repo. Bluesky’s repo model is built so that these repos are hosted on a user’s Personal Data Server and are publicly readable through the AT Protocol APIs. Because of that, the like record and its fields can be fetched, indexed, and used by any client or service that can query the protocol.

The protocol exposes operations like getLikes. This returns all of the like records tied to a particular subject’s AT-URI and CID. It also exposes getActorLikes. This returns all of the subject references a given actor has liked. Those API calls return structured like objects with timestamps and subject references directly from the public repository data.
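
A sketch of what such a record looks like as structured data. The field names follow the published lexicon (`$type`, `createdAt`, and a `subject` strong ref with `uri` and `cid`); the DID, record key, and CID values below are placeholders, not real identifiers:

```python
from datetime import datetime, timezone

def make_like_record(subject_uri, subject_cid):
    """Build a dict shaped like an app.bsky.feed.like record: its own
    record with a creation timestamp and a strong ref to the liked post."""
    return {
        "$type": "app.bsky.feed.like",
        "createdAt": datetime.now(timezone.utc).isoformat(),
        "subject": {  # com.atproto.repo.strongRef
            "uri": subject_uri,  # AT-URI: at://<did>/<collection>/<record key>
            "cid": subject_cid,  # content hash of the exact liked revision
        },
    }

# Placeholder DID, record key, and CID -- not real identifiers.
like = make_like_record(
    "at://did:plc:example/app.bsky.feed.post/3kabc123",
    "bafyreigexamplecid",
)
```

Because records of exactly this shape sit in a publicly readable repo, anything that can speak the protocol can enumerate them, which is the whole point of the next paragraph.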

Various feeds hosted by different PDSs use the likes property to construct the feeds that you see. Since the likes of people you follow are included in your social graph, along with your own likes, you’re going to get served the porn they are consuming. Because likes are public and anyone can write an algorithm to see everyone’s likes, you can clearly see just how much porn people are consuming.

Honestly, what started to turn my stomach about the people on Bluesky is how they behave across different contexts. If you look through the records of the posts they interact with, you’ll see them engaging with political posts in the replies like a normal person. Then, when you look through their AT Protocol records, you see hours and hours of them interacting with every kind of porn imaginable. I am not exaggerating. Hours of likes for porn posts within 1–10 minutes of each other. Am I sex-negative? A prude? No, this site is filled with furry, gay bara porn, lol. You can have a drink without being an alcoholic. These people are like the people who can’t have one drink without drinking the whole fucking day; they can’t consume porn in healthy ways.

I think people assume that their feed is customized for them and based on their likes. No—feeds are generalized based on what everyone likes and then served to your subgraph. It’s not just about who you follow; it’s about who they follow. So if you follow someone who follows a lot of people with porn addictions, you will see porn. Bluesky isn’t weighting the algorithm to do this. Basically, it’s the people in your social network with furry, hentai, or trans porn addictions who are driving it.

Bluesky’s Solution To Moderating Is Moderating Without Moderating via Social Proximity

I have noticed a lot of people are confused about why some posts don’t show up on threads, though they are not labeled by the moderation layer. Bluesky has begun using what it calls social neighborhoods (or network proximity) as a ranking signal for replies in threads. Replies from people who are closer to you in the social graph, accounts you follow, interact with, or share mutual connections with, are prioritized and shown more prominently. Replies from accounts that are farther away in that network are down-ranked. They are pushed far down the thread or placed behind “hidden replies.”

Each person gets their own unique view of a thread based on their social graph. It creates the impression that replies from distant users simply don’t exist. This is true even though they’re still technically public and viewable if you expand the thread or adjust filters. Bluesky is explicitly using features of subgraphs to moderate without moderating. Their reasoning is that if you can’t see each other, you can’t harass each other. Ergo, there is nothing to moderate.

Bluesky mentions that here:

https://bsky.social/about/blog/10-31-2025-building-healthier-social-media-update
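
Bluesky has not published the exact ranking algorithm, so the following is only a toy illustration of the general idea: compute hop distance in an interaction graph and treat replies from distant authors as hidden. The graph, the account names, and the two-hop cutoff are all my assumptions, chosen just to show the mechanism:

```python
from collections import deque

def hops(graph, start, goal, max_hops=3):
    """BFS hop distance between two accounts in an interaction graph;
    returns None if the goal is farther than max_hops."""
    if start == goal:
        return 0
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d >= max_hops:
            continue
        for nb in graph.get(node, ()):
            if nb == goal:
                return d + 1
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, d + 1))
    return None

def rank_replies(graph, viewer, replies):
    """Viewer-specific thread view: nearby authors inline, distant ones hidden."""
    inline, hidden = [], []
    for author, text in replies:
        d = hops(graph, viewer, author)
        (inline if d is not None and d <= 2 else hidden).append((author, text))
    return inline, hidden

# Hypothetical interaction graph: dave has no ties to alice's neighborhood.
graph = {"alice": ["bob"], "bob": ["alice", "carol"], "carol": ["bob"], "dave": []}
replies = [("bob", "hi"), ("carol", "hey"), ("dave", "yo")]
inline, hidden = rank_replies(graph, "alice", replies)
```

Run for a different viewer and you get a different partition of the same replies, which is why each person sees their own version of a thread.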

As a digression, I’m not going to lie: I really enjoyed working on software built on the AT protocol, but their fucking users are so goddamn weird. It’s sort of like enjoying building houses, but hating every single person who moves into them. But, you don’t have to deal with them because you’re just the contractor. That is how I feel about Bluesky. I hate the people. I really like the protocol and infrastructure.

I sort of am a sadist who does enjoy drama, so I do get schadenfreude from people with social media addictions and parasocial fixations who reply to random people on Bluesky, because they don’t realize their replies are disconnected from the author’s thread unless that person is within their network. They aren’t part of the conversation they think they are. They’re algorithmically isolated from everyone else. Their replies aren’t viewable from the author’s thread because of how Bluesky handles social neighborhoods.

Bluesky’s idea of social neighborhoods is about grouping users into overlapping clusters based on real interaction patterns rather than just the follow graph. Unlike Twitter, it does not treat the network as one big public square. Instead, it models networks of “social neighborhoods” made up of people you follow, people who follow you, people you frequently interact with, and people who are closely connected to those groups. They’re soft, probabilistic groupings rather than strict labels.

Everyone does not see the same replies. Bluesky is being a bit vague with “hidden.” Hidden means your reply is still anchored to the thread and can be expanded. There is another way Bluesky can handle this. Bluesky uses social neighborhoods to judge contextual relevance. Replies from people inside or near your social neighborhood are more likely to be shown inline with a thread, expanded by default, or served in feeds. Replies from outside your neighborhood are still public and still indexed, but they’re treated as lower-context contributions.

Basically, if you reply to a thread, you will see it anchored to the conversation, and everyone will see it in search results, as a hashtag, or from your profile, but it will not be accessible via the thread of the person you were replying to. It is like shadow-banning people from threads unless they are strongly networked.

Because people have not been working with the AT Protocol like I have, they assume they are shadow-banned across the entire Bluesky app view. No—everyone is automatically shadow-banned from everyone else unless they are within the same social neighborhood. In other words, you are not part of the conversation you think you are joining because you are not part of their social group.

Your replies will appear in profiles, hashtag feeds, or search results without being visually anchored to the full thread. Discovery impressions are neighborhood-agnostic: they serve content because it matches a query, tag, or activity stream. Once the reply is shown, the app then decides whether it’s worth pulling in the rest of the conversation for you. If the original author and most participants fall outside your neighborhood, Bluesky often chooses not to expand that context automatically.

Bluesky really is trying to avoid having to moderate, so this is their solution. Instead of banning or issuing takedown labels to DIDs, the system lets replies exist everywhere, but not in that particular instance of the thread.

I find this ironic because a large reason why many people are staying on Bluesky and not moving to the fediverse—thank God, because I do not want them there—is discoverability, virality, and engagement.

In case anyone is asking how I know so much about how these algorithms work: I was a consultant on a lot of these types of algorithms, so I certainly hope I’d know how they work, lol. No, you get no more details about the work I’ve done. I have no hand in the algorithm Bluesky is using, but I have proposed and implemented that type of algorithm before.

I have an interest in noetics and the noosphere. A large amount of my ontological work is an extension of my attempts to model domains that have no spatial or temporal coordinates. The question is how you generalize a metric space that has no physical, spatial properties. I went to school to try to formalize those ideas. Turns out they’re rather useful for digital social networks, too. The ontological analog to spatial distance, when you have no space, is a graph of similarities.

This can be modeled by representing each item as a node in a weighted graph, where edges are weighted by dissimilarity rather than similarity. Highly similar items are connected by low-weight edges, while less similar items are connected by higher-weight edges. Distances in the graph, computed using standard shortest-path algorithms, then correspond to degrees of similarity. Closely related items are separated by short path lengths, while increasingly dissimilar items require longer paths through the graph. It turns out that attempts to generalize metric spaces for noetic domains—to model noetic/psychic spaces—are actually pretty useful for social media algorithms, lol.
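
A minimal sketch of that construction, using a hand-rolled Dijkstra over dissimilarity weights. The concept names and edge weights here are invented purely for illustration:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances in a weighted graph where edge weights are
    dissimilarities: low weight = very similar, high weight = very different."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nb, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nb, float("inf")):
                dist[nb] = nd
                heapq.heappush(heap, (nd, nb))
    return dist

# Invented concept graph; edge weights are dissimilarities, not similarities.
concepts = {
    "dog": {"wolf": 0.2, "cat": 0.5},
    "wolf": {"dog": 0.2, "cat": 0.6},
    "cat": {"dog": 0.5, "wolf": 0.6, "car": 0.9},
    "car": {"cat": 0.9},
}
dist = dijkstra(concepts, "dog")
```

Short path lengths then play the role that physical proximity plays in an ordinary metric space: “dog” sits close to “wolf” and far from “car” without any spatial coordinates at all.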

Progress Update: Building Healthier Social Media - Bluesky

Over the next few months, we’ll be iterating on the systems that make Bluesky a better place for healthy conversations. Some experiments will stick, others will evolve, and we’ll share what we learn along the way.

Bluesky

Astroturfing Is Pretty Pointless When Social Subgraphs Are Fragmented (e.g., the Fediverse)

I am seeing astroturfing in the fediverse again, by AT Protocol developers implicitly trying to shill their products. I think it is stochastic behavior by developers with too much time on their hands. Honestly, I do not care. I like the people on ActivityPub more, but I like the AT Protocol better, and I have developed for both. Astroturfing on ActivityPub networks is fascinating to me because it is so pointless.

I am actually a Computational Biologist and Computer Scientist whose specialty is combinatorics, social graphs, graph theory, etc. Specifically, I use this to create epidemiological models for the memetic layer of human behaviors that act as vectors for diseases, using the SIRS model. I do not just study germs; I study human behaviors.

The models I construct extend into a “memetic layer,” in which beliefs, norms, and behaviors (such as risk-taking, compliance with public health measures, or susceptibility to misinformation) spread contagiously through social networks. These behaviors function as vectors that modulate biological transmission rates. As a result, the spread of ideas can accelerate, dampen, or reshape the spread of disease. By running computational simulations and agent-based models on these graphs, I study how network structure, influential nodes, clustering, and platform-specific dynamics affect behavioral contagion. I also examine how these factors influence epidemiological outcomes.
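
A deliberately simplified sketch of the idea, not my actual research models: a compartmental SIRS loop in which a single belief-prevalence parameter scales the transmission rate. The amplification rule and all parameter values are illustrative assumptions:

```python
def sirs_with_memetic_layer(beta, gamma, xi, belief, days, dt=1.0):
    """Toy SIRS model (S -> I -> R -> S) in which a behavioral 'belief'
    prevalence in [0, 1] amplifies the transmission rate. The amplification
    rule and all parameter values are illustrative, not fitted."""
    s, i, r = 0.99, 0.01, 0.0
    effective_beta = beta * (1.0 + belief)  # memetic layer scales transmission
    history = []
    for _ in range(int(days / dt)):
        new_inf = effective_beta * s * i * dt   # S -> I
        new_rec = gamma * i * dt                # I -> R
        new_sus = xi * r * dt                   # R -> S (waning immunity)
        s += new_sus - new_inf
        i += new_inf - new_rec
        r += new_rec - new_sus
        history.append((s, i, r))
    return history

# Same pathogen, different prevalence of a risk-raising belief.
low = sirs_with_memetic_layer(0.3, 0.1, 0.01, belief=0.0, days=200)
high = sirs_with_memetic_layer(0.3, 0.1, 0.01, belief=0.8, days=200)
```

Even in this crude version, a higher belief prevalence produces a higher infection peak from identical biology, which is the core claim: the memetic layer modulates the epidemiological one.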

To say it very concisely, I study how the spread of bat-shit insane beliefs, shit posts, and memes influences whether or not there is a measles outbreak in Texas. Ironically, this is an evolution of my studying semiotics, memetics, and chaos magick in high school. I got a job where I can use occult, anarchist techniques professionally.

I think a large reason why I do not care about astroturfing in the fediverse is that it’s so pointless, lol. Astroturfing to manipulate the narrative would actually work better on Bluesky to keep people there than trying to recruit from the fediverse. Furthermore, big instances are relatively small. Some people on Bluesky have follower lists larger than an entire large instance in the fediverse.

Within ActivityPub networks, astroturfing rarely propagates far, because whether information spreads depends on properties of the social graph itself. Dense connectivity, short paths between communities, and a sufficient number of cross-cutting ties support diffusion. ActivityPub’s architecture tends to produce graphs that are fragmented and highly modular. This limits the reach of coordinated activity.

ActivityPub is a system where each instance maintains its own local user graph and exchanges activities through inboxes and outboxes. This makes it autonomous and decentralized. The network consists of loosely connected subgraphs. Cross-instance edges appear only through explicit follow relationships. The ActivityPub protocol does not provide a shared or complete view of the network. Measurements of the fediverse consistently show uneven connectivity between instances, clustering at the instance level, and relatively long effective path lengths across the network. Under these conditions, large cascades are uncommon.

Instance-level clustering means that in ActivityPub networks, users interact much more with others on the same server than with users on different servers. Because each instance has its own local timeline, culture, and moderation, connections form densely within instances and only sparsely across them through explicit follow relationships. This creates a network made up of tightly connected local communities linked by relatively few cross-instance ties, which slows the spread of information beyond its point of origin.
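One quick way to see what instance-level clustering looks like structurally is to generate a hypothetical federated graph with dense intra-instance ties and sparse cross-instance follows. All sizes and probabilities here are illustrative assumptions, not measurements of any real fediverse network:

```python
import random

random.seed(1)

# Hypothetical "fediverse-like" graph: 10 instances of 50 users each.
# Edges form with high probability within an instance and low probability
# across instances, mimicking instance-level clustering.

INSTANCES, SIZE = 10, 50
N = INSTANCES * SIZE
instance_of = {i: i // SIZE for i in range(N)}

P_IN, P_OUT = 0.10, 0.002  # intra- vs cross-instance edge probabilities
edges = []
for i in range(N):
    for j in range(i + 1, N):
        p = P_IN if instance_of[i] == instance_of[j] else P_OUT
        if random.random() < p:
            edges.append((i, j))

intra = sum(instance_of[a] == instance_of[b] for a, b in edges)
print(f"{len(edges)} edges, {intra / len(edges):.0%} within an instance")
```

Even though cross-instance pairs vastly outnumber intra-instance pairs in this setup, the overwhelming majority of edges still land inside an instance, which is the modularity that chokes diffusion.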

However, with the AT Protocol, global indexing and aggregation are explicitly supported. Relays and indexers can assemble near-complete views of the social graph. Applications built on top of this infrastructure operate over a graph that is denser and easier to traverse. There are fewer structural barriers between communities. The diffusion dynamics change substantially when content can move across the graph without relying on narrow federated paths.

Astroturfing depends on coordinated amplification, typically through tightly synchronized clusters of accounts intended to manufacture visibility. Work on coordinated inauthentic behavior shows that these tactics gain traction when they intersect highly connected regions of the graph or bridge otherwise separate communities. In networks with strong modularity, coordination remains local. ActivityPub’s federation model produces this kind of modularity by default. Coordinated clusters stand out clearly within instances. Their effects remain confined to those local neighborhoods.

Astroturfing on ActivityPub therefore tends to stall on its own because of the underlying graph topology. Without dense inter-instance connectivity or any form of global indexing, coordinated campaigns have a hard time moving beyond the immediate regions where they originate. Systems built on globally indexable social graphs, including those enabled by the AT Protocol, expose a much larger surface for viral spread. Network structure and connectivity account for the divergence, independent of moderation, cultural norms, ideology, or intent.
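To sketch why topology alone can stall a campaign, the following toy simulation runs an independent-cascade diffusion on a modular, fediverse-like graph and on a well-mixed graph with a similar total edge count. Every parameter is invented for illustration; this is a sketch of the mechanism, not a measurement of either network:

```python
import random

random.seed(7)

def make_graph(n, groups, p_in, p_out):
    """Stochastic-block-style graph: dense within groups, sparse across."""
    size = n // groups
    nbrs = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            p = p_in if i // size == j // size else p_out
            if random.random() < p:
                nbrs[i].add(j)
                nbrs[j].add(i)
    return nbrs

def cascade(nbrs, seed, act=0.3):
    """Independent cascade: each new adopter gets one shot per neighbor."""
    active, frontier = {seed}, [seed]
    while frontier:
        nxt = []
        for u in frontier:
            for v in nbrs[u]:
                if v not in active and random.random() < act:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

N = 400
modular = make_graph(N, groups=10, p_in=0.12, p_out=0.001)   # federated-like
mixed = make_graph(N, groups=1, p_in=0.013, p_out=0.013)     # globally indexed

runs = 50
mod_reach = sum(cascade(modular, random.randrange(N)) for _ in range(runs)) / runs / N
mix_reach = sum(cascade(mixed, random.randrange(N)) for _ in range(runs)) / runs / N
print(f"modular: {mod_reach:.0%} reached, mixed: {mix_reach:.0%}")
```

On graphs like these, cascades on the modular graph tend to burn out inside the seed's home cluster, because only a handful of cross-group edges exist to carry them out, while the well-mixed graph offers many more paths away from the seed's neighborhood.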

It’s just really funny to me how these stochastic techbro groups waste so many resources. I personally don’t want to go viral, which is why I avoid platforms where I can. The fact that it’s harder to achieve high virality on ActivityPub is exactly why I prefer the fediverse over the Atmosphere. One way to think about it is that you can change the ‘genetics’ of a system with a retrovirus, where memetic entities act as cultural retroviruses to reprogram the cultural loci of a space. That is their end goal. They are trying to hijack cultures memetically. You see this a lot with culture jamming.

Basically, the astroturfing on ActivityPub networks is designed to jam and subvert the culture. But, as I have already said, the topological structure makes memetic virality stall. They cannot achieve that kind of viral spread in the fediverse, which is why I cannot understand why they do this every year.

The Virulent Infection of Bluesky by Extremely Online, Brain-Rotten Zombies from X Continues

So, it appears a new migration from Twitter to Bluesky is underway, and it includes some of the most virulent former 4chan users possible. Yep, I got off Bluesky just in time, lol. I’ve been keeping tabs on a particularly virulent and toxic subgraph on Twitter for years. It pretty much stayed off Bluesky because its members couldn’t act like abusive dumpster fires there. Welp, looks like they’re becoming more active on Bluesky. It’s not looking good over there.

That they are on the move says something. It’s sort of like how the US is suddenly a place that is hospitable to measles. It was all but eradicated here.

My husband likes to say that you can tell where not to be by what I am watching from somewhere else. I like fires. So if I am observing your platform or community from a distance, you probably don’t want to be there.

Edit:

I had originally posted the above on a now-defunct federated blog. It got blasted to Mastodon. Someone replied and asked what I think is causing this. I debated actually answering, then decided that I’ve had enough of the dumpster fire that is social media. I decided not to wade through social media tech discourse into what will most likely be an Internet argument with a complete stranger. I am a techie dragon, and I engage with things to learn how they work so I can tinker with them. I only engaged with tech discourse to get a handle on how the tech works. There’s nothing in it for me to be part of larger conversations. Arguing with random strangers on social media is not an epistemically useful format. I do think I should answer, though. Just on my blog.

I treat social media like I do an addictive substance. I do not believe in abstinence, but I do believe in harm-reduction paradigms, so when I see everyone overdosing on social media, I pull back and shut down a lot of accounts. The fediverse instance where the first part of this blog post was published has since been taken down; the post has been moved to this blog, with this section appended to it.

I often use the word weeb pejoratively. Here, I am using it categorically. There really isn’t an “official” name outside of otaku or weeb culture. I am at the fringes and intersections of it as a furry. My husband is a millennial weeb. With that being said—

The migration is in large part because Bluesky is capturing the otaku/weeb niche of X. X hosted networks that were ecosystems of “anime fans.” These included anime and manga artists, doujin and hentai artists, VTuber fans, NSFW illustrators, fandom shitposters, niche fetish communities, and other chronically and extremely online content creators and influencers. That culture relied heavily on timelines, informal networks, and discovery through reposts, replies, and algorithmic amplification.

Elon Musk pretty much destabilized X’s ecosystems and social networks from multiple directions at once. Algorithm changes made reach inconsistent. Moderation changes created anxiety and uncertainty about what would get suppressed or go unintentionally “viral”. Bots, engagement farming, and blue-check reply spam actively poisoned fandom conversations.

Bluesky is the memetic and cultural progeny of early imageboard cultures. I conducted a phylogenetic analysis of the memetics, which you can check out here:

Bluesky is a competitor of X for otaku and fandom communities. Bluesky has a lot of the aspects of old Twitter dynamics around which fandom culture evolved. Recently, Bluesky introduced something big in those communities: going live. Since X is no longer habitable for weebs, they are moving to Bluesky.

For example, the AT protocol already has PinkSea:

https://pinksea.art

And, of course, there is WAFRN:

https://app.wafrn.net

I cope with issues through personal, private sublimation, not so much through exhibiting my art or consuming art. So, while I do make comic books and do a shit ton of weeby art, it’s for the purpose of sublimation, so I’m not too interested in being a part of a community. That’s a large reason I am not active in those spaces. I’m quite cynical, in general, so I am suspicious of any community — and I mean any community, at all. Honestly, I am mildly contemptuous of mass participation or any sense of belonging. So, my art stays private, because it is created for me – and just me.
