Using Machine Learning to Create Fake Fingerprints
Researchers are able to create fake fingerprints that result in a 20% false-positive rate.
The problem is that these sensors obtain only partial images of users’ fingerprints—at the points where they make contact with the scanner. The paper noted that since partial prints are not as distinctive as complete prints, the chances of one partial print matching another are high.
The artificially generated prints, dubbed DeepMasterPrints by the researchers, exploit this vulnerability to imitate one in five fingerprints in a database. The database was supposed to have an error rate of only one in a thousand.
Another vulnerability exploited by the researchers was the high prevalence of some natural fingerprint features, such as loops and whorls, compared to others. With this understanding, the team generated prints containing several of these common features. They found that these artificial prints were more likely to match other prints than would normally be possible.
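To see why partial prints matter so much, consider a back-of-the-envelope sketch (mine, not the paper's): if a sensor stores many partial templates per user and each comparison has some small false-match rate, a single fake print gets many independent chances to match. The function below, with assumed illustrative numbers, shows only this multiplicity effect; the paper's GAN-optimized prints push the success rate far higher still.

```python
# Hedged sketch: how storing multiple partial-print templates inflates
# the effective false-match rate. Assumes each comparison between a fake
# print and an enrolled partial matches independently with probability fmr.

def attack_success_prob(fmr: float, partials_per_user: int) -> float:
    """Probability that a single fake print matches at least one of a
    user's enrolled partial prints, assuming independent comparisons."""
    return 1 - (1 - fmr) ** partials_per_user

# A sensor tuned to a 0.1% false-match rate per comparison:
print(round(attack_success_prob(0.001, 1), 4))   # → 0.001
# With a dozen partial templates enrolled per user (illustrative number),
# the effective rate is roughly an order of magnitude higher:
print(round(attack_success_prob(0.001, 12), 4))  # → 0.0119
```

The gap between this multiplicity effect and the reported one-in-five success rate is what the machine-learning optimization buys the attacker.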
If this result is robust—and I assume it will be improved upon over the coming years—it will make the current generation of fingerprint readers obsolete as secure biometrics. It also opens a new chapter in the arms race between biometric authentication systems and fake biometrics that can fool them.
More interestingly, I wonder if similar techniques can be brought to bear against other biometrics as well.
Research paper.
Slashdot thread.
Tatütata • November 23, 2018 8:43 AM
This item has been around for a couple of weeks, and I envisaged mentioning it in a squiddy thread, but when I checked it out I realized that something rather similar had already been published here back in May 2017.
However, the prior work was done at Michigan State, whereas the one mentioned above comes from New York University.
Duplication or original research?