<https://www.scientificamerican.com/article/humans-find-ai-generated-faces-more-trustworthy-than-the-real-thing/>
"When TikTok videos emerged in 2021 that seemed to show “Tom Cruise” making a
coin disappear and enjoying a lollipop, the account name was the only obvious
clue that this wasn’t the real deal. The creator of the “deeptomcruise” account
on the social media platform was using “deepfake” technology to show a
machine-generated version of the famous actor performing magic tricks and
having a solo dance-off.
One tell for a deepfake used to be the “uncanny valley” effect, an unsettling
feeling triggered by the hollow look in a synthetic person’s eyes. But
increasingly convincing images are pulling viewers out of the valley and into
the world of deception promulgated by deepfakes.
The startling realism has implications for malevolent uses of the technology:
its potential weaponization in disinformation campaigns for political or other
gain, the creation of false porn for blackmail, and any number of intricate
manipulations for novel forms of abuse and fraud. Developing countermeasures to
identify deepfakes has turned into an “arms race” between security sleuths on
one side and cybercriminals and cyberwarfare operatives on the other.
A new study published in the Proceedings of the National Academy of Sciences
USA provides a measure of how far the technology has progressed. The results
suggest that real humans can easily fall for machine-generated faces—and even
interpret them as more trustworthy than the genuine article. “We found that not
only are synthetic faces highly realistic, they are deemed more trustworthy
than real faces,” says study co-author Hany Farid, a professor at the
University of California, Berkeley. The result raises concerns that “these
faces could be highly effective when used for nefarious purposes.”
“We have indeed entered the world of dangerous deepfakes,” says Piotr Didyk, an
associate professor at the University of Italian Switzerland in Lugano, who was
not involved in the paper. The tools used to generate the study’s still images
are already generally accessible. And although creating equally sophisticated
video is more challenging, tools for it will probably soon be within general
reach, Didyk contends."
Via The Risks Digest Volume 33 Issue 92:
http://catless.ncl.ac.uk/Risks/33/92#subj11
Cheers,
*** Xanni ***
--
mailto:xanni@xanadu.net Andrew Pam
http://xanadu.com.au/ Chief Scientist, Xanadu
https://glasswings.com.au/ Partner, Glass Wings
https://sericyb.com.au/ Manager, Serious Cybernetics