Intel's deepfake detector tested on real and fake videos

https://lemmy.zip/post/676782


Detecting real video as fake seems problematic in that it might lead to apathy – folks just stop believing any video at all. Similar to Trump’s “everything is fake news” approach

Thus far these detectors kind of suck, both for deepfakes and for AI-generated text. They’re biased against non-native speakers, and using them in a scholarly setting can result in punishing students who aren’t cheating.

The genie was let out of the bottle much too early.

GPT detectors are biased against non-native English writers

The rapid adoption of generative language models has brought about substantial advancements in digital communication, while simultaneously raising concerns regarding the potential misuse of AI-generated content. Although numerous detection methods have been proposed to differentiate between AI and human-generated content, the fairness and robustness of these detectors remain underexplored. In this study, we evaluate the performance of several widely-used GPT detectors using writing samples from native and non-native English writers. Our findings reveal that these detectors consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified. Furthermore, we demonstrate that simple prompting strategies can not only mitigate this bias but also effectively bypass GPT detectors, suggesting that GPT detectors may unintentionally penalize writers with constrained linguistic expressions. Our results call for a broader conversation about the ethical implications of deploying ChatGPT content detectors and caution against their use in evaluative or educational settings, particularly when they may inadvertently penalize or exclude non-native English speakers from the global discourse. The published version of this study can be accessed at: www.cell.com/patterns/fulltext/S2666-3899(23)00130-7


I used to work in the field of image forensics a few years ago, right as the GAN technology was entering the scene. Even when it was just making 200x200 pixel faces, everyone in the industry was starting to panic. Everything we had at the time was based off of detecting inconsistencies in the pixel content, repeating structures that indicated copy/paste attacks, or looking for metadata inconsistencies

For pixel inconsistencies, you can look at how the JPEG image is encoded and search for blocks that aren’t encoded consistently. This paper covers DCT analysis and some other techniques: scholar.google.com/scholar?q=dct+image+forensics&… That’s just one example, but it’s ultimately looking for things like someone photoshopping a region out or patching something in.
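A minimal sketch of the DCT-consistency idea, under toy assumptions (uniform quantization step per block, no pixel rounding or chroma handling – not real JPEG): a block that was last compressed with quantizer q has DCT coefficients sitting on multiples of q, so a spliced block that was last compressed with a different quantizer stands out.

```python
import math, random

N = 8  # JPEG-style 8x8 blocks

# Orthonormal DCT-II basis matrix: forward transform is C @ X @ C.T,
# inverse is C.T @ Y @ C (orthonormal, so the inverse is the transpose).
C = [[(math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N))
      * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
      for n in range(N)] for k in range(N)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def dct2(block):
    return matmul(matmul(C, block), transpose(C))

def idct2(coefs):
    return matmul(matmul(transpose(C), coefs), C)

def jpeg_roundtrip(block, q):
    """Quantize DCT coefficients with step q, then decode back to pixels."""
    coefs = dct2(block)
    quant = [[round(c / q) * q for c in row] for row in coefs]
    return idct2(quant)

def quantization_residue(block, q):
    """Average distance of the block's DCT coefficients from multiples of q."""
    coefs = dct2(block)
    return sum(abs(c - round(c / q) * q) for row in coefs for c in row) / (N * N)

random.seed(0)
blocks = [jpeg_roundtrip([[random.randint(0, 255) for _ in range(N)]
                          for _ in range(N)], 16) for _ in range(10)]
# "Splice" in a block whose last compression used a different quantizer:
blocks[4] = jpeg_roundtrip([[random.randint(0, 255) for _ in range(N)]
                            for _ in range(N)], 7)

residues = [quantization_residue(b, 16) for b in blocks]
suspect = max(range(len(blocks)), key=lambda i: residues[i])
print(suspect)  # the spliced block's residue dwarfs the others
```

The clean blocks come back with residues near zero (floating-point noise only), while the spliced block’s coefficients land on multiples of 7 instead of 16 and light up.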

Similarly, copy-move detection looks for “edges” and “intersections” in images and builds constellations of points from them; you can then use scale-invariant transforms to search for duplicated constellations. This article covers an example where North Korea tried to make their landing force look more impressive: theguardian.com/…/north-korea-photoshop-hovercraf…
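The keypoint/constellation machinery is more than a few lines, but the core “same content appears twice” test can be sketched with exact block hashing. This is a simplification, not the real pipeline – actual detectors use scale- and rotation-invariant descriptors so that resized or rotated copies still match:

```python
import random
from collections import defaultdict

def find_copy_move(image, block=3):
    """Flag pixel blocks that appear verbatim at two separate locations.

    Simplified sketch using exact hashing; real copy-move detectors match
    scale/rotation-invariant keypoint descriptors instead of raw pixels.
    """
    h, w = len(image), len(image[0])
    seen = defaultdict(list)
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            patch = tuple(tuple(image[y + dy][x + dx] for dx in range(block))
                          for dy in range(block))
            seen[patch].append((y, x))
    # Any patch found at two non-overlapping locations is suspicious.
    matches = []
    for locs in seen.values():
        for i in range(len(locs)):
            for j in range(i + 1, len(locs)):
                (y1, x1), (y2, x2) = locs[i], locs[j]
                if abs(y1 - y2) >= block or abs(x1 - x2) >= block:
                    matches.append((locs[i], locs[j]))
    return matches

# Synthetic 10x10 "image", then clone one 3x3 region onto another spot.
random.seed(1)
img = [[random.randint(0, 255) for _ in range(10)] for _ in range(10)]
for dy in range(3):
    for dx in range(3):
        img[6 + dy][6 + dx] = img[1 + dy][1 + dx]  # paste copy of the patch

print(find_copy_move(img))  # reports the pair of locations (1, 1) and (6, 6)
```

This is exactly the failure mode the hovercraft photo exhibited: the same pixels showing up in two places.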

The problem is that when the entire image is forged, there is no baseline to detect against. The whole thing is uniformly fake. So we’re back to the old “I can tell by looking at it,” which is extremely imprecise and labor intensive. In fact, if you look at how GANs work, it’s trivial to embed any detector algorithm into the training process and produce something that also defeats that detector.
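A toy illustration of that last point, with a made-up “smoothness” statistic standing in for a real forensic detector: once the generator can query the detector, it just iterates until it passes, which is the GAN discriminator idea in miniature.

```python
import random

def detector(pixels):
    """Toy 'forensic' detector: real sensor output is noisy, so flag any
    signal whose average local variation is suspiciously low as synthetic."""
    diffs = [abs(a - b) for a, b in zip(pixels, pixels[1:])]
    return sum(diffs) / len(diffs) < 2.0  # True => "looks fake"

def generate(n=100):
    """A naive, perfectly smooth synthetic signal."""
    return [128.0] * n

def generate_adversarial(n=100, step=0.5, seed=0):
    """Fold the detector into the generation loop: keep perturbing the
    output until the detector stops flagging it. Any fixed, published
    detector can be defeated this way."""
    rng = random.Random(seed)
    pixels = generate(n)
    while detector(pixels):
        pixels = [p + rng.uniform(-step, step) for p in pixels]
        step *= 1.5  # escalate until the statistic looks "natural"
    return pixels

print(detector(generate()))              # True  -> the naive fake is caught
print(detector(generate_adversarial()))  # False -> the detector-aware fake passes
```

Swap in any detector you like for the toy one; as long as the generator can evaluate it during training, the arms race favors the forger.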


As someone not in the industry this is fascinating!

To get an idea of how they work, this is a great tool for laymen: 29a.ch/photo-forensics/

Try uploading something from thispersondoesnotexist.com and see how badly it fails.

Forensically is a set of free online tools for digital image forensics. It includes clone detection, error level analysis, metadata extraction, and more.

This seems like a very bad idea.
The problem is that yes, that could happen, but without any test we soon won’t be able to trust or believe anything.
This seems very close to owning the truth and could be a start to some very dark business.
Yup, if you think “Fox News truth” is a problem now, wait until there’s “Intel Truth” vs “AMD Truth”.
AI is the solution to everything. Even AI.