*Edit*: here at least, I am clearly not isolated!

Perhaps I am increasingly isolated in holding this position, but I have no interest in reading "AI"-generated slop.

I love reading.

I read people's blogs and toots and whatever *because people wrote them* and I want to read their own thoughts and opinions.

I buy books, and read many different authors. I like finding new authors who bring new ideas, styles, etc.

Same with "AI" images. I'd prefer no image at all.

@neil

Heh, what a coincidence to see this right as one of my old posts saying the same (from eight months ago) has been getting boosted around again. ;)

https://polymaths.social/@amin/statuses/01K03C7KYJ50AE33CQAYRVTYHM

Amin, minor deity of the legume realm (@[email protected])

I will repeat this as many times as I need to: no matter how terrible you think your writing is, I would far rather read it than anything that came out of an LLM.


@amin In which case, perhaps I am not as isolated as I thought.

It doesn't surprise me that *some* other people have this perspective, but I wonder how common it is.

@neil

I think most people would hold it, actually, if not consciously. People get so obsessed over specific authors, for example; no reason to do that with AI. People want to read the thoughts of other people.

In the meantime, I think mostly we're seeing people interested in AI writing out of curiosity and because it's so new.

@amin @neil

People want to read new things. Humans have the imagination to write new things. AI regurgitates what others have written. AI rearranges the furniture, but it cannot invent a new piece.

@neil @amin

Add me to your list.

But I'd add: there is a space for texts that are read purely for information, which could in principle be machine generated. Unfortunately, those texts require (a) correctness and (b) transparency about sources, both things a transformer cannot give, due to its architecture.

So genAI fails completely for me.

@neil @amin

Before Christmas, a couple of big companies cancelled AI-generated ad campaigns because the negative feedback was harming their brands at the start of their peak selling season. The more people complain about these things, the more that feedback will flow from ad agencies to their customers. (Don't share the ads themselves; just say 'company X used to be okay, but their latest ad campaign is slop and it makes me hate them'. Ad agencies consider people sharing the ads to be positive for raising brand awareness, even if people hate them.)

For an example of the parenthetical: about 15 years ago, Tango ran an ad campaign that had people rolling fruit down Constitution Hill in Swansea, where it smashed at the bottom. There were a bunch of news articles about how they didn't bother to clean up and left the mess for residents. There was only one problem: all of the fruit was CGI, so there was no mess. The negative press made a load of people watch the ad. The claim that they made a mess was 'leaked' from the ad agency to news sources who didn't do any basic fact checking (I lived just around the corner, and it was easy for someone to pop down and see there was no mess). The campaign was considered a big success. So if people share an ad and say 'I hate this', it won't necessarily have the right result. But if they share a single terrible frame, it might.