Do We Trust Fake Faces More Than Real Ones?

Psychologist Sophie Nightingale discusses her research on realistic and ‘more trustworthy’ AI-synthesized faces.

By Mark Travers, Ph.D. | March 16, 2022

A new study published in PNAS highlights the potential threat AI-generated faces might pose to our society because of our tendency to find them more trustworthy than real human faces.

I recently spoke to the lead author of the research, psychologist Sophie Nightingale of Lancaster University, to understand this tendency. Here is a summary of our conversation.

What inspired you to investigate the topic of AI-generated faces, how did you study it, and what did you find?

We've seen incredible advances in technology, and the use of artificial intelligence to synthesize content is particularly exciting but also worrying. The development and sharing of such technology means that anyone can create synthetic content without specialized knowledge of Photoshop or CGI. This broad access creates a significantly larger threat of nefarious use than earlier, specialized technologies did.

Working with the talented NVIDIA team behind StyleGAN, we were able to modify the synthesis engines to create more diverse faces across race and gender. This then allowed us to study realism across a significantly more diverse population. We also went to great lengths to make sure that our synthetic and real faces were balanced in terms of age, race, gender, and overall appearance, which is critical to ensuring that our stimuli didn't create any undesirable confounds.

In addition to finding that naive users were at chance in determining whether a face is real or synthetic, we also found that additional training and feedback only improved performance slightly. Perhaps most interestingly, we found that synthetic faces are not only highly realistic but are also deemed more trustworthy than real faces. As a result, it is reasonable to be concerned that these faces could be highly effective when used for nefarious purposes.

What makes a face seem more 'trustworthy' to the human mind and eye? What features do we register as trustworthy and/or untrustworthy?

While we can't say for sure why the synthetic faces are rated more trustworthy, we hypothesize that it is because synthesized faces tend to look more like average faces. This more average appearance is an artifact of how the synthesis technique favors average faces as it generates a face. We also know that people show a preference for average or "typical-looking" faces because these provide a sense of familiarity. It might therefore be this sense of familiarity that elicits, on average, higher trust for the synthetic faces. Essentially, we're more likely to trust something that feels familiar to us.
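The "averageness" idea can be illustrated with a small toy sketch (this is purely illustrative, not the authors' method, and the random "faces" below are an assumption standing in for real images): averaging many faces pixel-wise washes out each individual's distinctive variation, leaving a composite closer to the shared, typical structure.

```python
import numpy as np

# Toy illustration of the "averageness" hypothesis: model each "face" as
# a small grayscale array equal to a shared typical template plus random
# individual variation, then average many of them.
rng = np.random.default_rng(0)

template = rng.random((8, 8))  # shared "typical face" structure (assumed stand-in)
faces = template + 0.3 * rng.standard_normal((100, 8, 8))  # 100 individual faces

average_face = faces.mean(axis=0)

# The composite sits closer to the typical template than a single face does:
dist_single = np.abs(faces[0] - template).mean()
dist_average = np.abs(average_face - template).mean()
print(dist_average < dist_single)
```

The averaged composite's distance to the template shrinks roughly with the square root of the number of faces, which is why composites look smoother and more "typical" than any individual face.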

We ensured that our synthetic and real faces were balanced in terms of age, race, gender, and overall appearance. Therefore, the most likely reason for the real faces being rated as less trustworthy than the synthetic ones is simply that they are less like average faces.

It's also worth noting that we checked for smiling as a feature that might account for the differences (that is, smiling faces might be considered more trustworthy). We found a similar distribution of smiling faces in the real and synthetic image sets; in fact, slightly more of the real faces were smiling.

Does your research highlight any key points when it comes to gender and racial aspects of these faces?

Overall, we find that white male faces are more difficult to classify correctly. We posit that this is because the synthesis techniques are trained on disproportionately more white male faces.

Despite this difference, overall, all faces, regardless of age, race, and gender, are difficult to correctly classify. I expect that any current differences will eventually vanish as the synthesis techniques improve and the training data sets expand in size and diversity.

Is there any way for an average person to identify if a face has been synthesized by AI?

As I mentioned, naive users were at chance in determining whether a face is real or synthetic, and additional training and feedback only improved performance slightly.

The training involved raising awareness of certain artifacts commonly created in the synthesis process. As the synthesis technology continues to improve, these artifacts will eventually disappear, meaning that even the small improvement we saw in our second study may not hold in the future.

Also, we initially thought that synthetic faces would be rated less trustworthy than real faces, which is why we explored this avenue as an attempt to find an indirect way of discriminating real from synthetic faces; instead, we found the reverse.

At this time, I'm not aware of a reliable way for an average person to identify whether a face has been AI-synthesized; however, we'll continue to conduct research to try to help.

Do you have plans for follow-up research? Where would you like to see research on AI-Synthesized faces and related technology go in the future?

The next step will be to consider what computational techniques we can develop to discriminate real from synthetic images. Also, video and audio synthesis are rapidly improving, and we will need to turn our attention to understanding the concerns this type of synthetic content raises and the nefarious uses it enables.

Given the rapid rise in the sophistication and realism of synthetic media (a.k.a. deepfakes), we propose that those creating these technologies incorporate reasonable precautions to mitigate potential misuses, including non-consensual pornography, fraud, and disinformation.

More broadly, we recommend that the larger research community consider adopting specific best practices for those in this field to help them manage the complex ethical issues involved in this type of research.