Faces Are The Next Target For Fraudsters
Hackers are pioneering new ways of tricking facial-recognition systems, from cutting the eyes out of photos to making a portrait ‘nod’ with artificial intelligence.
The Future of Everything covers the innovation and technology transforming the way we live, work and play, with monthly issues on health, money, cities and more. This month is Artificial Intelligence, online starting July 2 and in the paper on July 9.
Facial-recognition systems, long touted as a quick and dependable way to identify everyone from employees to hotel guests, are in the crosshairs of fraudsters. For years, researchers have warned about the technology’s vulnerabilities, but recent schemes have confirmed their fears—and underscored the difficult but necessary task of improving the systems.
In the past year, thousands of people in the U.S. have tried to trick facial-recognition verification systems to fraudulently claim unemployment benefits from state workforce agencies, according to identity verification firm ID.me Inc.
The company, which uses facial-recognition software to help verify individuals on behalf of 26 U.S. states, says that between June 2020 and January 2021 it found more than 80,000 attempts to fool the selfie step in government ID matchups among the agencies it worked with.
That included people wearing special masks, using deepfakes—lifelike images generated by AI—or holding up images or videos of other people, says ID.me Chief Executive Blake Hall.
Facial recognition for one-to-one identification has become one of the most widely used applications of artificial intelligence, allowing people to make payments via their phones, walk through passport checking systems or verify themselves as workers.
Drivers for Uber Technologies Inc., for instance, must regularly prove they are licensed account holders by taking selfies on their phones and uploading them to the company, which uses Microsoft Corp.’s facial-recognition system to authenticate them.
Uber, which is rolling out the selfie-verification system globally, did so because it had grappled with drivers hacking its system to share their accounts. Uber declined to comment.
Amazon.com Inc. and smaller vendors like Idemia Group S.A.S., Thales Group and AnyVision Interactive Technologies Ltd. sell facial-recognition systems for identification. The technology works by mapping a face to create a so-called face print. Identifying single individuals is typically more accurate than spotting faces in a crowd.
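A one-to-one identification system of the kind described above typically reduces a face to a numeric "face print" (an embedding) and compares prints with a similarity score. The sketch below is illustrative only, not any vendor's actual method: the toy 4-dimensional vectors and the 0.6 threshold are assumptions, and real systems derive much larger embeddings from a deep network.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings ("face prints")."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_person(enrolled: np.ndarray, probe: np.ndarray,
                   threshold: float = 0.6) -> bool:
    """One-to-one verification: accept only if the probe embedding
    is close enough to the enrolled face print."""
    return cosine_similarity(enrolled, probe) >= threshold

# Toy 4-dimensional "face prints" (real systems use 128+ dimensions
# produced by a neural network; these vectors are illustrative only).
enrolled = np.array([0.9, 0.1, 0.3, 0.2])
same = np.array([0.88, 0.12, 0.28, 0.22])   # slightly different capture
other = np.array([0.1, 0.9, 0.1, 0.8])      # a different face

print(is_same_person(enrolled, same))   # accepted: close match
print(is_same_person(enrolled, other))  # rejected
```

Because verification boils down to a single distance comparison, anything that pushes an impostor's embedding inside the threshold, such as a mask or a deepfake, defeats the check, which is why the attacks in this article focus on the captured image rather than the matching math.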
Still, this form of biometric identification has its limits, researchers say.
Why criminals are fooling facial recognition
Analysts at credit-scoring company Experian PLC said in a March security report that they expect to see fraudsters increasingly create “Frankenstein faces,” using AI to combine facial characteristics from different people to form a new identity to fool facial ID systems.
The analysts said the strategy is part of a fast-growing type of financial crime known as synthetic identity fraud, where fraudsters use an amalgamation of real and fake information to create a new identity.
Until recently, it was mostly activists protesting surveillance who targeted facial-recognition systems. Privacy campaigners in the U.K., for instance, have walked through urban areas with their faces painted in asymmetric makeup specially designed to scramble the facial-recognition software powering cameras.
Criminals have more reasons to do the same, from spoofing people’s faces to access the digital wallets on their phones, to getting through high-security entrances at hotels, business centers or hospitals, according to Alex Polyakov, the CEO of Adversa.ai, a firm that researches secure AI.
Any access control system that has replaced human security guards with facial-recognition cameras is potentially at risk, he says, adding that he has confused facial-recognition software into thinking he was someone else by wearing specially designed glasses or Band-Aids.
A growing threat
The idea of fooling these automated systems dates back several years. In 2017, a male customer of insurance company Lemonade tried to fool its AI for assessing claims by dressing in a blond wig and lipstick, and uploading a video saying his $5,000 camera had been stolen. Lemonade’s AI systems, which analyze such videos for signs of fraud, flagged the video as suspicious and found the man was trying to create a fake identity.
He had previously made a successful claim under his normal guise, the company said in a blog post. Lemonade, which says on its website that it uses facial recognition to flag claims submitted by the same person under different identities, declined to comment.
Earlier this year, prosecutors in China accused two people of stealing more than $77 million by setting up a fake shell company purporting to sell leather bags and sending fraudulent tax invoices to their supposed clients.
The pair was able to send out official-looking invoices by fooling the local government tax office’s facial-recognition system, which was set up to track payments and crack down on tax evasion, according to prosecutors cited in a March report in the Xinhua Daily Telegraph.
Prosecutors said in a posting on the Chinese chat service WeChat that the attackers had hacked the local government’s facial-recognition service with videos they had produced. The Shanghai prosecutors couldn’t be reached for comment.
The pair bought high-definition photographs of faces from an online black market, then used an app to create videos from the photos to make it look like the faces were nodding, blinking and opening their mouths, the report says.
The duo, who had the surnames Wu and Zhou, used a special mobile phone that would turn off its front-facing camera and upload the manipulated videos when it was meant to be taking a video selfie for Shanghai’s tax system, which uses facial recognition to authenticate tax returns, the report says. Wu and Zhou had been operating since 2018, according to prosecutors.
Spoofing a facial-recognition system doesn’t always require sophisticated software, according to John Spencer, chief strategy officer of biometric identity firm Veridium LLC. One of the most common ways of fooling a face-ID system, or carrying out a so-called presentation attack, is to print a photo of someone’s face and cut out the eyes, using the photo as a mask, he says.
Many facial-recognition systems, such as the ones used by financial trading platforms, check to see if a video shows a live person by examining their blinking or moving eyes.
Most of the time, Mr. Spencer says, his team could use this tactic and others to test the limits of facial-recognition systems, sometimes folding the paper “face” to give it more perceived depth. “Within an hour I break almost all of [these systems],” he says.
Apple Inc.’s Face ID, which was launched in 2017 with the iPhone X, is among the most difficult to fool, according to scientists. Its camera projects more than 30,000 invisible dots to create a depth map of a person’s face, which it then analyzes, while also capturing an infrared image of the face.
Using the iPhone’s chip, it then processes that image into a mathematical representation, which it compares with its own database of a user’s facial data, according to Apple’s website. An Apple spokeswoman points to the company’s website, which states that for privacy reasons, Face ID’s data never leaves the iPhone.
Some banks and financial-services companies use third-party facial-identification services, not Apple’s Face ID system, to sign up customers on their iPhone apps, Mr. Spencer says. This is potentially less accurate. “You end up looking at regular cameras on a mobile phone,” he says. “There’s no infrared capability, no dot projectors.”
Many online-only banks ask users to upload video selfies alongside a photo of their driver’s licenses or passports, and then use a third party’s facial-recognition software to match the video to the ID. The images sometimes go to human reviewers if the system flags something wrong, Mr. Spencer says.
Seeking a solution
Mr. Polyakov regularly tests the security of facial-recognition systems for his clients and says there are two ways to protect such systems from being fooled. One is to redesign the algorithms that underpin the AI models so they can withstand novel attacks. The other is to train the models with as many examples as possible of the altered faces that could spoof them, known as adversarial examples.
Unfortunately, it can take 10 times the number of images needed to train a facial-recognition model to also protect it from spoofing—a costly and time-consuming process. “For each human person you need to add the person with adversarial glasses, with an adversarial hat, so that this system can know all combinations,” Mr. Polyakov says.
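The data cost Mr. Polyakov describes can be sketched as simple dataset augmentation. This is a toy illustration under stated assumptions: real adversarial training derives perturbations from the model's gradients (or from physical props like glasses and hats), whereas the random noise here only stands in for those variants to show the roughly tenfold growth of the training set.

```python
import numpy as np

rng = np.random.default_rng(0)

def adversarial_variants(face: np.ndarray, n: int = 9,
                         eps: float = 0.1) -> list:
    """Generate perturbed copies of a face image standing in for
    adversarial examples (glasses, hats, stickers). Random noise is
    a placeholder; real attacks are crafted, not random."""
    return [np.clip(face + rng.uniform(-eps, eps, face.shape), 0.0, 1.0)
            for _ in range(n)]

# One clean 8x8 training image per person...
clean_faces = [rng.random((8, 8)) for _ in range(5)]

# ...becomes ten images per person once variants are added, matching
# the roughly 10x data cost described in the text.
augmented = []
for face in clean_faces:
    augmented.append(face)
    augmented.extend(adversarial_variants(face))

print(len(clean_faces), len(augmented))  # 5 50
```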
Companies such as Google, Facebook and Apple are working on finding ways to prevent presentation attacks, according to Mr. Polyakov’s firm’s analysis of more than 2,000 scientific research papers about AI security. Facebook, for instance, said last month that it is releasing a new tool for detecting deepfakes.
ID.me’s Mr. Hall says that by this past February, his company was able to stop almost all of the fraudulent selfie attempts on the government sites, bringing the number that got through down to single digits from among millions of claims.
The company got better at detecting certain masks by labeling images as fraudulent, and by tracking the device, IP addresses and phone numbers of repeat fraudsters across multiple fake accounts, he says.
It also now checks how the light of a smartphone reflects and interacts with a person’s skin or another material. The attempts at face-spoofing have also declined. “[Attackers] are typically unwilling to use their real face when committing a crime,” Mr. Hall says.
Deepfake Technology Is Now A Threat To Everyone. What Do We Do?
Legislation hasn’t kept up with the fast-moving technology, so the market may have to create its own solution.
In October, MIT Prof. Sinan Aral warned his Twitter followers that he had discovered a video of himself that he hadn’t recorded endorsing an investment fund’s stock-trading algorithm.
In reality, it wasn’t Prof. Aral in the video, but an artificial-intelligence creation in his likeness, or what is known as a highly persuasive “deepfake.”
It is striking that scammers targeted Prof. Aral, considering he is a leading expert on the study of misinformation online.
It also suggests that deepfake technology is now at an inflection point: Thanks to a number of free deepfake apps that are just a Google search away, anyone can become a victim of such a scam.
The term deepfake has its origins in pornography, but it has come to mean the use of AI to create synthetic media (images, audio, video) in which someone appears to be doing or saying what in reality they haven’t done or said.
The technology isn’t always misused. Cadbury, for example, joined with Bollywood celebrity Shahrukh Khan on a marketing campaign for small businesses in India hit by Covid-19.
Business owners uploaded details of their stores, and Cadbury used deepfake technology to create the effect of Mr. Khan promoting them in tailored TV ads. (The campaign was transparent about its fakery.)
But positive use cases are likely to be overshadowed in coming years by the technology’s potential role in financial fraud, identity theft and worse—from the savaging of reputations to the stoking of civil and political unrest.
Current laws targeting fraudulent impersonation weren’t designed for a world with deepfake technology, and efforts at the federal level to update these laws have faltered so far. One stumbling block is the need to also protect parodies and other free speech.
Another big challenge is that in an online world where people can anonymously upload content, it can be difficult to find the individuals behind deepfakes.
Some researchers have proposed putting the onus on website platforms such as Facebook and YouTube by making their protections in relation to user-generated content conditional on their taking “reasonable steps” to police their own platforms.
Broad adoption of these kinds of laws could create meaningful deterrents—eventually. But the technology is moving so fast that lawmakers will likely always lag behind. That is why I believe we are going to have to rely on technology to protect us from a problem it helped create.
One such solution is to detect deepfakes via machine-learning methods. For instance, while deepfakes appear highly realistic, the technology isn’t yet capable of generating natural eye blinking in the impersonated individuals.
As such, machine-learning algorithms have been trained to detect deepfakes using eye-blinking patterns. While these detectors can be successful in the short term, people looking to evade such systems will likely just respond with better technology, creating a continuing and expensive cat-and-mouse game.
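A blink-based detector of the kind described can be reduced to a simple idea: track how open the eyes are in each frame and flag clips in which the subject never blinks. The sketch below is a minimal illustration, not a production detector; the eye-aspect-ratio (EAR) values and the 0.2 closed-eye threshold are assumed, and a real pipeline would extract EAR from facial landmarks and feed blink statistics to a trained classifier.

```python
def count_blinks(ear_series, closed_threshold=0.2):
    """Count blinks in per-frame eye-aspect-ratio (EAR) values: a blink
    is a transition into a run of frames below the closed threshold."""
    blinks, eye_closed = 0, False
    for ear in ear_series:
        if ear < closed_threshold and not eye_closed:
            blinks += 1
            eye_closed = True
        elif ear >= closed_threshold:
            eye_closed = False
    return blinks

def looks_like_deepfake(ear_series, min_blinks=1):
    """Flag a clip whose subject never blinks; early deepfakes often
    failed to reproduce natural blinking."""
    return count_blinks(ear_series) < min_blinks

real_clip = [0.31, 0.30, 0.12, 0.08, 0.29, 0.32, 0.30]  # one blink mid-clip
fake_clip = [0.31, 0.30, 0.31, 0.32, 0.30, 0.31, 0.30]  # eyes never close

print(looks_like_deepfake(real_clip))  # False
print(looks_like_deepfake(fake_clip))  # True
```

As the text notes, such heuristics age quickly: once generators learn to synthesize blinking, a detector keyed to this one cue stops working, hence the cat-and-mouse dynamic.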
A better approach with a longer time horizon is media provenance or authentication systems to verify the origins of images and videos.
Microsoft, for instance, has developed a prototype of a system called AMP (Authentication of Media via Provenance) that enables media-content creators to create and assign a certificate of authenticity to their content.
Under such a system, every time you watch a video of, say, the U.S. president, the technology would help your browser or media-viewing software verify the source of the video (for example, a news network or the White House).
The process could be delivered as simply as through an icon—much like the current browser padlock icon that indicates any information you send to that particular website is protected from third-party tampering en route.
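The verification step in a provenance system can be sketched as attaching a cryptographic tag to content at creation time and recomputing it at viewing time. This is a simplified stand-in, not Microsoft's AMP design: the shared signing key below is a hypothetical simplification, since a real system would use public-key certificates so viewers can verify without being able to sign.

```python
import hashlib
import hmac

# Hypothetical signing key held by the content creator. A deployed
# provenance system would use public-key certificates instead of a
# shared secret.
CREATOR_KEY = b"newsroom-signing-key"

def sign_media(content: bytes) -> str:
    """Creator side: derive a provenance tag from the content's hash."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(CREATOR_KEY, digest, "sha256").hexdigest()

def verify_media(content: bytes, tag: str) -> bool:
    """Viewer side: recompute the tag; any tampering with the bytes
    changes the hash and the verification fails."""
    return hmac.compare_digest(sign_media(content), tag)

video = b"original presidential address footage"
tag = sign_media(video)

print(verify_media(video, tag))                 # True: untouched
print(verify_media(b"deepfaked footage", tag))  # False: content altered
```

The browser padlock analogy in the text maps onto the `verify_media` step: the viewer's software would run the check automatically and surface only a pass/fail icon.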
To be effective in practice, such systems would have to be widely adopted by all content creators, which will take time.
While legislation eventually may offer protection against deepfakes, I believe the market could be quicker—provided we, as consumers and citizens, care.