Cyan (aq)

@CyanChanges
2 Followers
24 Following
58 Posts
It's so fascinating to see the changes in one company's logo over the years.
Nya~

@tyil @lina @david_chisnall @commdserv
Imagine in Discord:

> A: Hey, how many eggs are in this box? [Image: box with 12 eggs]
> B: 12 ig
*A removes the image*
*A edits the message to "How old are you?"*
*A reports B to Discord as underage*
*Discord moderation team: Age of 12?? This is insane, banned for sure*

@lina @commdserv

I can’t speak for fd.o, but I was in a leadership position on another project where we got a similar case disastrously wrong, so I might be able to illuminate how that happens.

The first mistake we made was not to differentiate harassment from conflict resolution. Most of the issues we had between contributors were personality clashes or technical disagreements that escalated. As you say, most of these have both parties acting in good faith. The main thing that the project needs to do is deescalate and get the people involved to talk again. This is absolutely the wrong approach in cases of harassment. There were two key causes of this:

First, (as you mentioned) no one involved had any formal (or, in most cases, informal) training in how to deal with harassment. Most employers offer this, but it’s rarely compulsory. After the initial incident, I signed up for this training with my employer (as did another colleague involved with the same project). This highlighted some of the things we did wrong, but it was quite illuminating who was there: we were the only men on the course who were there voluntarily. Most of the people were women who were there because they had been targets of harassment or bullying and wanted to understand the processes better. The rest were men who had been forced to take the training because they had been accused of harassment (and, from a lot of their comments, I suspect had been engaged in it long term).

Most F/OSS (or other community-led) projects don’t have any formal structure for providing this kind of training. And the work-provided training wasn’t sufficient. There were a bunch of ‘and this is where you need to escalate it to HR specialists (or the police)’ moments, but volunteer projects don’t have those experts. One of the biggest things a F/OSS charity could do to improve the situation would be to hire real experts that projects can use as consultants. Companies that back projects could help out by loaning HR as well as engineers to the projects.

Second, we had very poor visibility into what happened. There’s a natural tendency for humans to trust the first person who explains a situation. In our case, it was made worse because the only thing that happened on project infrastructure (and so the thing that we saw) was an IRC exchange where one project member connected and had a go at another member, then left. We didn’t see the backstory, which involved a load of Gamergate nonsense on Twitter and elsewhere (and those of us not in the Twitterverse had only a very vague idea of what Gamergate was. I thought it was a handful of people who were upset some game they didn’t like won an award; I had no idea that it was a coordinated harassment campaign). When a lot of the things that happened are private messages, or in non-project spaces, it’s hard to know what the real context is. We saw a load of things quoted out of context that made both people look bad. We also had friends of both people jumping in, defending them and attacking the other.

It really takes weeks of investigation to properly handle this kind of thing and dig to the truth. And this compounds the problem of the people dealing with it not having the right training. And, unless they are employees of a foundation backing the project, they also lack the time to do a good job. And, again, the assumption that people are basically decent (which is normally valid) hurts when one of the people is not and is actively trying to subvert the process. The evidence from an honest person reporting what happened and a dishonest person cherry-picking out-of-context comments will look very similar. Unless you personally know the people involved (which brings its own problems of bias) then it’s very hard to work out who is telling the truth. This is even harder when one or both people involved are highly visible in the community, because they will both be publicly sharing a narrative and one is mostly accurate (but only mostly: no one is 100% objective when they’re being personally attacked) while the other is a carefully crafted fabrication, but there’s pressure to respond quickly because both are public and the community is full of people who believe either one and are complaining.

In the last few years, the problem has become worse. A lot of CoC complaints now are malicious. Far-right folks absolutely love baiting people into saying things that look bad when quoted out of context, then deleting the context and reporting the remark. They make a game out of trying to get people kicked out of projects. So the workload has gone up, which compounds the other problems.

I wish I had a good answer for how to improve this.

Re-sharing this reply in the thread for visibility. This is how CoC teams fail. It's hard.

https://infosec.exchange/@david_chisnall/115433845704182344

This is amazing content. Please boost.

David Chisnall (*Now with 50% more sarcasm!*) (@[email protected])

@[email protected] @[email protected] I can’t speak for fd.o, but I was in a leadership position on another project where we got a similar case disastrously wrong, so I might be able to illuminate how that happens. The first mistake we made was not to differentiate harassment from conflict resolution. Most of the issues we had between contributors were personality clashes or technical disagreements that escalated. As you say, most of these have both parties acting in good faith. The main thing that the project needs to do is deescalate and get the people involved to talk again. This is absolutely the wrong approach in cases of harassment. There were two key causes of this: First, (as you mentioned) no one involved had any formal (or, in most cases, informal) training in how to deal with harassment. Most employers offer this, but it’s rarely compulsory. After the initial incident, I signed up for this training with my employer (as did another colleague involved with the same project). This highlighted some of the things we did wrong, but it was quite illuminating who was there: we were the only men on the course who were there voluntarily. Most of the people were women who were there because they had been targets of harassment or bullying and wanted to understand the processes better. The rest were men who had been forced to take the training because they had been accused of harassment (and, from a lot of their comments, I suspect had been engaged in it long term). Most F/OSS (or other community-led) projects don’t have any formal structure for providing this kind of training. And the work-provided training wasn’t sufficient. There were a bunch of ‘and this is where you need to escalate it to HR specialists (or the police)’ moments, but volunteer projects don’t have those experts. One of the biggest things a F/OSS charity could do to improve the situation would be to hire real experts that projects can use as consultants. 
Companies that back projects could help out be loaning HR as well as engineers to the projects. Second, we had very poor visibility into what happened. There’s a natural tendency for humans to trust the first person who explains a situation. In our case, it was made worse because the only thing that happened on project infrastructure (and so the thing that we saw) was an IRC exchange where one project member connected and had a go at another member then left. We didn’t see the backstory, which involved a load of gamergate nonsense on Twitter and elsewhere (and those of us not in the Twitterverse had only a very vague idea of what Gamergate was. I thought it was a handful of people who were upset some game they didn’t like won an award, I had no idea that it was a coordinated harassment campaign). When a lot of the things that happened are private messages, or in non-project spaces, it’s hard to know what the real context is. We saw a load of things quoted out of context that made both people look bad. We also had friends of both people jumping in and defending them and attacking the other. It really takes *weeks* of investigation to properly handle this kind of thing and dig to the truth. And this compounds the problem of the people dealing with it not having the right training. And, unless they are employees of a foundation backing the project, they also lack the time to do a good job. And, again, the assumption that people are basically decent (which is normally valid) hurts when one of the people is not and is actively trying to subvert the process. The evidence from an honest person reporting what happened and a dishonest person cherry-picking out-of-context comments will look very similar. Unless you personally know the people involved (which brings its own problems of bias) then it’s very hard to work out who is telling the truth. 
This is even harder when one or both people involved are highly visible in the community, because they will both be publicly sharing a narrative and one is mostly accurate (but only mostly: no one is 100% objective when they’re being personally attacked) while the other is a carefully crafted fabrication, but there’s pressure to respond quickly because both are public and the community is full of people who believe either one and are complaining. In the last few years, the problem has become worse. A lot of CoC complaints now are malicious. Far-right folks absolutely love baiting people into saying things that look bad when quoted out of context, then deleting the context and reporting the remark. They make a game out of trying to get people kicked out of projects. So the workload has gone up, which compounds the other problems. I wish I had a good answer for how to improve this.

Infosec Exchange
can job listings be serious

Imagine a browser where you type in “Taylor Swift” and it doesn’t even admit that her website exists. I write about Atlas, ChatGPT’s new anti-web browser that should come with a warning label. https://www.anildash.com/2025/10/22/atlas-anti-web-browser/
ChatGPT's Atlas: The Browser That's Anti-Web

A blog about making culture. Since 1999.

Anil Dash
Ya know, being a catgirl you'd think I'd use `Box<T>` a lot more than I actually do :v
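For anyone who missed the pun: `Box<T>` is Rust's owned heap pointer. A minimal sketch (not from the original post) of the classic case where you genuinely need one, a recursive type whose size would otherwise be infinite:

```rust
// A cons list: without the Box, `List` would contain itself directly,
// and the compiler could not compute a finite size for it.
enum List {
    Cons(i32, Box<List>), // Box gives the recursive field a fixed pointer size
    Nil,
}

fn sum(l: &List) -> i32 {
    match l {
        List::Cons(v, rest) => v + sum(rest), // &Box<List> deref-coerces to &List
        List::Nil => 0,
    }
}

fn main() {
    let l = List::Cons(1, Box::new(List::Cons(2, Box::new(List::Nil))));
    println!("{}", sum(&l)); // prints 3
}
```

In day-to-day Rust most values live happily on the stack or inside `Vec`/`String`, which is presumably why the boxes stay rare.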

Human code reviewers on the Fedora project can only address questions of quality, fix security issues, or fill in missing functionality in AI code slop. They cannot answer questions about legality or licensing, or even trace the code's origin:

> Where did that AI code come from?
> What license does AI-created slop follow? Can you put it under the GPL?
> So who actually wrote the code? Why is the person submitting it putting their name on top of it?

The answer is: Nobody knows.

Fedora Linux now permits AI assisted contributions (code, docs and more), provided there is proper disclosure and transparency

https://pagure.io/Fedora-Council/tickets/issue/542

https://discussion.fedoraproject.org/t/council-policy-proposal-policy-on-ai-assisted-contributions/165092/242

A bad decision, Fedora, but it was expected when IBM/Red Hat basically funds it. Maybe that is what the corporate clients of RHEL/IBM want. IDK. There are many other distros that ban AI/LLM contributions, but many Fedora contributions end up in upstream projects like the kernel or the DEs.

Issue #542: Council Policy Proposal: Policy on AI-Assisted Contributions - tickets - Pagure.io