Identifying Computer-Generated Faces
It’s the eyes:
The researchers note that in many cases, users can simply zoom in on the eyes of a person they suspect may not be real to spot the pupil irregularities. They also note that it would not be difficult to write software to spot such errors and for social media sites to use it to remove such content. Unfortunately, they also note that now that such irregularities have been identified, the people creating the fake pictures can simply add a feature to ensure the roundness of pupils.
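The "write software to spot such errors" idea can be sketched in a few lines. The snippet below is an illustration of the general approach, not the researchers' actual method: assuming a pupil boundary has already been segmented out of a face image (the segmentation step is not shown), its roundness can be scored with the isoperimetric ratio 4&#960;A/P&#178;, which is 1.0 for a perfect circle and drops as the outline becomes irregular.

```python
import math

def circularity(points):
    """Isoperimetric ratio 4*pi*A / P**2 for a closed polygon:
    1.0 for a perfect circle, lower for irregular outlines."""
    n = len(points)
    area = 0.0
    perimeter = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1                  # shoelace term
        perimeter += math.hypot(x2 - x1, y2 - y1)  # edge length
    area = abs(area) / 2.0
    return 4.0 * math.pi * area / perimeter ** 2

def boundary(radius_fn, n=360):
    """Sample a closed boundary at n angles from a radius function
    (a stand-in for a pupil outline extracted from a photo)."""
    return [(radius_fn(t) * math.cos(t), radius_fn(t) * math.sin(t))
            for t in (2 * math.pi * k / n for k in range(n))]

# A real pupil is close to circular; GAN artifacts make it wobbly.
round_pupil = boundary(lambda t: 1.0)
wobbly_pupil = boundary(lambda t: 1.0 + 0.25 * math.sin(5 * t))

print(circularity(round_pupil))   # close to 1.0
print(circularity(wobbly_pupil))  # noticeably lower
```

A detector would threshold this score: outlines scoring well below 1.0 are flagged for review. It also shows why the countermeasure is easy; a generator only has to smooth the boundary to push the score back up.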
And the arms race continues….
Research paper.
Clive Robinson • September 15, 2021 1:58 PM
@ Bruce,
This fragility of such “testing systems”, where “needing obscurity” is a primary requirement of operation, is not good. One of the major points of “evidence” is that it be presented openly, and that all methods be open to inspection and to application by others so that it can be verified.
Until fairly recently, science has been able to stop those producing fakes from changing their methods sufficiently for the fakes to pass as genuine. But that has only been true of “physical objects”, not “informational objects”.
The question arises as to whether it is actually possible to stop “informational object” counterfeiting at all.
Especially when the cost of making copies of informational objects is so small that it is effectively negligible. So the only way of stopping new “fakes” appears to be by the use of digital signing. But even that is far from reliable, as neither the hash nor the signing of it is in any way intrinsically tied to the object being verified….
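The point that a hash binds to bytes rather than to content can be shown with a minimal sketch (assuming SHA-256 as the digest a signature would be computed over; the two byte strings below are hypothetical stand-ins for two encodings of the "same" informational object, e.g. an image re-saved by a different tool):

```python
import hashlib

def digest(blob: bytes) -> str:
    """SHA-256 digest of the raw bytes of an 'informational object'.
    A digital signature is computed over this digest, not the content."""
    return hashlib.sha256(blob).hexdigest()

# Two byte-level encodings that a human would read as identical content:
original = b"The pupil is perfectly round.\n"
reflowed = b"The pupil is perfectly round. \n"  # one invisible extra space

# The digests differ completely, so a signature over `original`
# says nothing about `reflowed` -- and, conversely, a valid signature
# only proves who signed the bytes, not that the content is genuine.
print(digest(original))
print(digest(reflowed))
```

This is the fragility being pointed at: signing can authenticate a particular byte string and its signer, but it is not intrinsically tied to the information the bytes represent, and it cannot make a fake any less fake.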
So, on the assumption that it is no longer possible to stop fakes being created, “How will society react?”…