# Real or Not?

A Generative Adversarial Network (GAN) called StyleGAN2, used by researchers at Lancaster University in England, creates synthetic faces that are hard to distinguish from photographs of real people. Moreover, participants in the experiment tended to rate these synthetic faces as friendlier and more trustworthy than real human faces.

The GAN system behind these images uses two artificial neural networks. The first, called the "generator", produces an evolving series of synthetic faces; the researchers liken it to a student turning in one draft after another. The second, called the "discriminator", is trained on photographs of real faces and scores the generator's images by comparing them with actual faces. Starting from random pixels and guided by the discriminator's feedback, the generator produces more and more realistic faces, until eventually the discriminator can no longer tell the real images from the fake ones.
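To make that division of labor concrete, here is a minimal sketch of a generator/discriminator training loop in PyTorch. It only illustrates the adversarial setup described above, not StyleGAN2 itself; the image size, network widths, and the random tensor standing in for real face photographs are all placeholder assumptions.

```python
# Minimal GAN training loop in PyTorch.
# NOTE: an illustrative sketch of the adversarial setup described above,
# not StyleGAN2; `real_faces` is a random placeholder standing in for a
# dataset of real face photographs.
import torch
import torch.nn as nn

LATENT_DIM = 64          # size of the random "noise" the generator starts from
IMG_PIXELS = 32 * 32     # tiny grayscale images, for illustration only

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_PIXELS), nn.Tanh(),       # outputs a fake "image"
)
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                           # real-vs-fake score (logit)
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_faces = torch.rand(512, IMG_PIXELS) * 2 - 1  # placeholder "photos" in [-1, 1]

for step in range(1000):
    batch = real_faces[torch.randint(0, len(real_faces), (64,))]
    noise = torch.randn(64, LATENT_DIM)
    fake = generator(noise)

    # 1) Train the discriminator: score real images high, fakes low.
    d_loss = loss_fn(discriminator(batch), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator: produce fakes the discriminator scores as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The two losses pull against each other: as the discriminator gets better at spotting fakes, the generator gets a stronger signal about what a convincing face looks like, which is exactly the feedback loop the researchers describe.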

The researchers selected 400 synthetic and 400 real face images and ran an experiment with 315 participants, each of whom was asked to classify 128 of these images as real or fake. A second group of 219 participants performed the same task after receiving brief training on how to spot synthetic images. A final group of 223 people viewed 128 face images and rated each one for trustworthiness on a scale from 1 to 7 (1 – not at all trustworthy; 7 – very trustworthy).

According to the results, the first group was no better than a coin toss at telling real faces from synthetic ones, with 48.2% accuracy. The second group reached only 59%, even after training and feedback on their answers. The ratings from the third group showed that synthetic faces were judged slightly more trustworthy than real ones, with an average score of 4.82 for synthetic faces versus 4.48 for real faces.
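For readers who want to see how such figures are tallied, the short sketch below computes classification accuracy and mean trustworthiness ratings from response tables. The arrays are invented placeholders, not the study's data; only the group sizes and scale come from the text above.

```python
# How the reported figures are computed from raw responses.
# The response arrays are invented placeholders, NOT the study's data.
import numpy as np

# Experiments 1 and 2: each row is one participant's answers over 128 images,
# True where the real-vs-fake judgement was correct.
correct = np.random.rand(315, 128) < 0.482           # placeholder responses
accuracy = correct.mean() * 100
print(f"classification accuracy: {accuracy:.1f}%")   # ~48.2% by construction

# Experiment 3: trustworthiness ratings on a 1-7 scale, split by image type.
ratings_synthetic = np.random.randint(1, 8, size=(223, 64))  # placeholder
ratings_real = np.random.randint(1, 8, size=(223, 64))       # placeholder
print("mean trust (synthetic):", ratings_synthetic.mean().round(2))
print("mean trust (real):", ratings_real.mean().round(2))
```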

The researchers themselves were surprised by the results; they had expected synthetic images to be rated, on average, as less trustworthy than real faces. Of course, not all of the generated images were indistinguishable from real photographs, and participants easily spotted some of the fakes.

In an age when anyone can create fake images with Photoshop or a similar graphics editor, the fact that a machine can generate human faces that look at least as realistic as actual photographs (photo-realism) is a concern for some, particularly given how much "deepfake" technology has unsettled things in recent years. We still lack reliable technology for detecting this kind of forgery. A further worry is that these developments may end up casting doubt on genuine images as well.

REFERENCES

1. https://www.pnas.org/content/119/8/e2120481119
2. https://www.scientificamerican.com/article/humans-find-ai-generated-faces-more-trustworthy-than-the-real-thing/
3. https://futurism.com/the-byte/ai-faces-trustworthy