@emilymbender thank you for the post. This non-information oil spill is happening faster than I hoped :(
@emilymbender That little guy would last 14 seconds.

It's really sad/worrying that people can't immediately clock this as obviously fake.

It seems even worse that Google can't.

@emilymbender Time to get 1000 Olivettis

@emilymbender

I feel bad now for posting this picture on Wikipedia in 2001, in an article about how hummingbirds migrate south on the backs of geese. In fact, the geese stay around here all winter.

@emilymbender Great points. Will the pollution generated by generative text and images mean a comeback of archaic sources of info such as paper books, libraries, and museums?

@emilymbender

You attribute the image in your Medium article to commons.wikimedia. But actually it was imported to Commons from Flickr, which means it could be a case of Flickr-washing: uploading a photo to Flickr (which has nearly no safeguards) and then importing it to Commons to circumvent Commons' safeguards, since Commons relies on Flickr to deliver reliable content. What gives it away? The file name includes a large number, its Flickr ID. The source field and the author field in the information template both link to Flickr. The name of the uploader does not match the name of the author. And most important: the assessment template added by the Flickr review bot (FlickreviewR 2), proving it was posted on Flickr under a license compatible with Commons.

Is it an authentic baby peacock? Very probably, yes: it is used in a number of Wikipedias, especially the English-language Wikipedia.

How else could this be checked? By looking at the Flickr page, the author's page on Flickr, and the author's other contributions there.

Why should it not be fully trusted? The file's metadata (EXIF) gives the software as "Picasa" (an image-processing application), so it has been digitally altered.

I have drawn all this from the image's page on Commons alone (desktop version; on mobile you have to scroll to the bottom and tap "desktop" to view it).

This specific photo was imported in 2018, but even if it had been imported this year, it could still be verified on Flickr.
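The clues listed above could even be checked mechanically. A minimal Python sketch of that idea; the function name, the example filename, and every argument here are hypothetical illustrations, not a real Commons or Flickr API:

```python
import re

def flickr_washing_signals(filename, exif_software=None,
                           source_url=None, flickr_reviewed=False):
    """Collect the heuristic clues described above for a Commons file
    that may have been imported from Flickr (illustrative only)."""
    clues = []
    # Flickr photo IDs are long runs of digits that imports keep in the filename.
    if re.search(r"\d{9,}", filename):
        clues.append("filename contains a Flickr-style numeric ID")
    # The information template's source field pointing at Flickr.
    if source_url and "flickr.com" in source_url:
        clues.append("source field links back to Flickr")
    # EXIF software tag naming Picasa means the file was digitally processed.
    if exif_software and "picasa" in exif_software.lower():
        clues.append("EXIF software tag names Picasa: digitally altered")
    # FlickreviewR's assessment template confirms the license at import time.
    if flickr_reviewed:
        clues.append("FlickreviewR confirmed a Commons-compatible license")
    return clues

print(flickr_washing_signals(
    "Baby_peacock_28370410771.jpg",        # hypothetical filename
    exif_software="Picasa",
    source_url="https://www.flickr.com/photos/example/28370410771/",
    flickr_reviewed=True,
))
```

None of these clues is proof on its own; like the manual checks above, they only tell you where to look next.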

@emilymbender synchronicity: in my timeline, another excellent metaphor for your non-information "oil slick"

😂👍

@andrewfeeney https://phpc.social/@andrewfeeney/110840759972162414

Andrew Feeney (@[email protected])

@[email protected] @[email protected] Digital micro plastics.

PHP Community on Mastodon
@emilymbender big use case for a search engine limited to a pre-oil-spill index

@emilymbender Another spot-on review of how the lack of any foresight with AI/ML tools is going to be a huge problem. We should have thought about how to highlight fakery and how to easily check before releasing this tech on the public.

While that image is obviously fake to me, I do have a long career in design, photography, and Photoshop, as well as an interest in wildlife, so it just looked "too designed", if that makes sense. I'm well aware many don't see it that way.

@emilymbender a "misinformation oil spill" is such an accurate analogy, I really hope this idea catches on.
@emilymbender thanks for the post; that’s a good example for #informationliteracy classes
@emilymbender I guess data really is “the new oil,” in the worst of ways
@emilymbender Yesterday’s NY Times article about “Shoddy guidebooks, promoted with deceptive reviews, have flooded Amazon in recent months” is another possible example of this informational oil spill, polluting the travel writing ecology? https://www.nytimes.com/2023/08/05/travel/amazon-guidebooks-artificial-intelligence.html
"A New Frontier for Travel Scammers: A.I.-Generated Guidebooks" (The New York Times): Shoddy guidebooks are flooding Amazon. Their authors claim to be renowned travel writers, but are they A.I. inventions? And how big is the problem?

@emilymbender Images like this certainly look fake right off the bat. But that won't be the case for long. It's going to be a dangerous problem when the poisoning of our collective information repository (the web) becomes pervasive. What happens when a search for something like "how to change a car battery" returns wrong instructions? Or when info on medicines gives false results? Etc. People are going to get hurt.

AI-generated content needs to be flagged in the metadata & kept separate.

@syntaxseed Yes, that's what I said in the linked article. I don't disagree, but I wonder why you are telling me?