I start reading what is supposedly a powerful personal story about burnout in open source. Then I start to see the pattern - that familiar tempo. Admittedly there's no "Not x. Not y. Just z." But there's that faint whiff, like the neighbor is burning toast. The paragraphs are just a bit too regular. Something about the headings. Yeah my dude, you used ChatGPT to write this, didn't you. I'm sure of it. But not sure enough to call it out in the comments, and I don't have the energy to get into an argument about it.

AI both degrades trust - making us suspicious of everything we see and read - and sucks the life and personality out of absolutely everything it touches. I'm so sick of starting to read and going, oh, nope, this is machine-generated nothingness. I hate this. I hate all of it.

@bluetea

Yeah, 100%. The poisoning of trust is the worst aspect of the whole thing. It voids any strategy that tries to maintain an LLM-free zone. It attacks us from inside the city walls.

#StopTheAICorruption

@the_roamer yeah it's a real issue. I'm dreading marking my next batch of essays - there's an obligation to maintain academic integrity, and I just don't know how I'm going to navigate it.

@bluetea @the_roamer One thing I've found when marking essays is that there's no reliable way to tell whether someone has used software to write.

BUT in the last couple of years students have started to hand in essays where the sources cited do exist, but they don't actually match the content.

So, for example, a student will write "Facebook has implemented new moderation techniques, including algorithm-based tone analysis (Agarwal, 2024, p. 12)"

but then when I go check Agarwal, the claim doesn't appear anywhere in the source. So I check a couple of sources for each essay, and if they're not checking out, I do a more thorough check.

If it's a problem throughout the essay, I put it through our academic integrity system as "falsification of data/sources" rather than as "AI".

Perhaps in the process some students learn that "AI" isn't intelligent and doesn't produce reliable work.

@scroeser @the_roamer yeah, I'm familiar with the mismatched sources issue, that's a big one. The subtle stylistic things and 'vagueness' are all a bit more difficult. It puts you in a difficult position - having to read 2500 words of AI garbage and mark it as if a human wrote it, because you don't have enough evidence to confidently call it out.
@bluetea Frustrating for sure. Why bother going to university not to learn?!