White AI faces judged human more often than actual human faces
Not really much of an issue with SDXL any more, and even SD 1.5 got quite good at the end. Admittedly I haven’t stress-tested either, though (things like clasped hands etc.). It’s also not a hand-specific thing; it just happens more commonly with fingers because they’re small features:
The thing that happens is that diffusion-style inference first nails down gross structure (which limb is where) and then fills in details. Sometimes a step somewhere in the middle decides that a limb should be somewhere else, though, and suddenly you have two; and if the steps immediately after don’t think “that old limb doesn’t look like it should be there” and erase it, later steps will happily refine both to photorealism, because they don’t even look at the overall composition. That is, it’s not an issue with anatomical knowledge, or with not having seen enough hands, but with the model changing its mind and not backtracking. It’s actually astonishing how good it can get at not making that mistake without being able to tell that it has two competing goals in mind.
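A toy sketch of that failure mode (purely illustrative, not a real diffusion sampler; all names are made up): early steps fix the gross layout, one mid-run step “changes its mind” and duplicates a limb, and the remaining detail-only steps polish both copies because they never re-check the composition.

```python
def toy_sampler(steps=20, flip_step=8):
    """Toy sketch of coarse-to-fine refinement with no backtracking.

    Not a real diffusion model. Each limb's value is its 'detail
    level', moving from rough sketch (0.1) toward photoreal (1.0).
    """
    limbs = {"left_arm": 0.1}  # gross structure decided early: one limb
    for t in range(steps):
        if t == flip_step:
            # mid-run change of mind: the limb "should" be elsewhere,
            # but the old copy is never erased
            limbs["left_arm_duplicate"] = 0.1
        # detail-only refinement: polish every limb that exists,
        # without ever re-checking the overall composition
        for name in limbs:
            limbs[name] += (1.0 - limbs[name]) * 0.3
    return limbs
```

Run it and both limbs come out near fully “photoreal”, even though one of them should never have existed; only a step that looks back at the whole layout could have caught it.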
Exactly. The AI’s job is to generate humanness. The things that don’t look human get discarded; the things that have strong human indicators get kept. Oh look, the AI did its job. Shocked pikachu.
The white thing is probably just a case of biased training data. Which is going to be a problem across all AIs. I wouldn’t be surprised if in 5-10 years (if the fad lasts longer than NFTs lmao) we find out the ‘AIs’ have all been fed biased data as yet another means of large corporations controlling the narrative of the population.
The term comes from an old theory that said that humanity started out in the Caucasus and spread from there, people becoming darker as they were exposed to more sun. The guy who made that theory wasn’t racist, but his work was used by racists (and he railed against that, saying things like “there are villages in Africa with greater artistic and philosophical output than [European region where one of his racist “admirers” was from]”). He was a scientist and interpreted archaeological evidence – which we now understand to be the evidence for the Urheimat, and spread, of the Indo-European people, who came to the Caucasus, just like everyone else, from Africa; that evidence just hadn’t been unearthed yet. (Technically the Urheimat is probably the Ukrainian plains, not the mountains, but close enough.)
All that is 2000-3000 years before the Pyramids.
There are actually multiple different mutations which contribute to whiteness, all selected for by the double whammy of not getting as much vitamin D from the sun and, with the advent of agriculture, not getting as much vitamin D from meat. Pre-agricultural Europeans (hunter-gatherers, pastoralists) were actually quite a bit tanner than we are now; the original Proto-Indo-Europeans were (very probably) nomadic cattle herders and thus probably also tanner. Agriculture, and another set of whiteness genes, came from the Euphrates/Tigris region.
The term comes from an old theory that said that humanity started out in the Caucasus and spread from there, people becoming darker as they were exposed to more sun.
Not quite. The guy who coined the term, Blumenbach, believed that the Caucasians (in particular the Georgians) were the most beautiful and therefore must have been the original humans. Maybe “old theory” refers to the biblical belief that Noah’s Ark came to rest in the Caucasus Mountains; I don’t know that Blumenbach used that as a justification. Biblical race doctrines defined races by descent from different sons of Noah.
The Caucasians are certainly far from the palest people on the planet. The south of the region is part of Turkey and Iran. Those are maybe the best-known countries in the region, and I’m sure no one pictures very pale people. I remember an article about the considerable diplomatic and PR efforts Turkey undertook in the early 20th century to be classed as a white country under US law. I wish I could recall the details.
Some are lucky enough to have dark complexions that shine like the finest of earth’s woods and minerals.
Lol what
An op? Making a misleading title? On Lemmy?
Man, it’s as if the severe lack of moderation and rules that so many people wanted when moving from Reddit is hurting the quality of posts on here.
It turns out that AI faces were rated as more human-like than actual humans
I tried guessing from the ArsTechnica article and got a whopping 1 out of 8 correct.
We used the 100 AI and 100 human White faces (half male, half female) from Nightingale and Farid. The AI faces were generated using StyleGAN2. The human faces were selected from the Flickr-Faces-HQ Dataset to match each of the AI faces as closely as possible (e.g., same gender, posture, and expression). All stimuli had blurred or mostly plain backgrounds, and AI faces were screened to ensure they had no obvious rendering artifacts (e.g., no extra faces in background). Screening for artifacts mimics how real-world users screen AI faces, either as scientists or for public use, and therefore captures the type and range of stimuli that appear online. Participants were asked to resize their screen so that stimuli had a visual angle of 12° wide × 12° high at ~50 cm viewing distance.
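For concreteness, the quoted viewing setup pins down the on-screen stimulus size: a width subtending θ degrees at distance d is w = 2·d·tan(θ/2), so 12° at ~50 cm works out to roughly 10.5 cm. A quick check (my arithmetic, not from the paper):

```python
import math

def visual_angle_to_size(angle_deg: float, distance_cm: float) -> float:
    """On-screen extent (cm) subtending angle_deg at distance_cm:
    w = 2 * d * tan(theta / 2)."""
    return 2.0 * distance_cm * math.tan(math.radians(angle_deg) / 2.0)

# 12 degrees wide at ~50 cm viewing distance, as in the quoted methods
width_cm = visual_angle_to_size(12.0, 50.0)  # ~10.5 cm
```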
I don’t know why people (not saying you, more directed at the top commenter) keep acting like cherry-picking AI images in these studies invalidates the results. Cherry-picking is how you use AI image generation tools; that’s why most will (or can) generate several at once, so you can pick the best one. If a malicious actor were trying to fool people, of course they’d use the most “real”-looking ones instead of just the first to generate.
Frankly, the studies would be useless if they didn’t cherry-pick, because they wouldn’t line up with real-world usage.
I understand why you’re cautious with the “accusation” (don’t put too much weight on that word; it’s just the idea I want to convey, not any malicious intent), but in this case I am saying that the cherry-picking invalidates the findings as they are stated.
If the findings were framed around “it’s easier to fool people using white AI generated faces”, or something similar, I’d be on board with it. The way it sounds right now is “AI generated faces don’t have all these artifacts 99% of the time” (I’m paraphrasing A LOT, but you get what I mean.)
The way it sounds right now is “AI generated faces don’t have all these artifacts 99% of the time” (I’m paraphrasing A LOT, but you get what I mean.)
The only way it sounds like that is if you don’t read the article at all and draw all your conclusions from just reading the title.
Don’t get me wrong, I’m sure many do just that, but that’s not the fault of the study. They clearly state their method for selecting (or “cherry picking”) images
They used a clickbaity title, they’ll get clickbaity judgement.
It’s also not in their abstract, which is supposed to contain the most important facts. Their first sentence is about how AI-generated faces are indistinguishable. No, they’re not. It’s like saying “writing random numbers solves any numerical equation” while not mentioning that I generated a gazillion random numbers and did my study on the ones that matched.
Are you seriously going to tell me that human male 47 is real
I do not believe you
So… sidebar. Hang on.
When human simulacra start to approach realism, they go into the “uncanny valley” once they’re pretty good but still obviously off. What’s after the uncanny valley once they’re totally convincing? Is there even a name for that?