
Faces Are The Next Target For Fraudsters

Hackers are pioneering new ways of tricking facial-recognition systems, from cutting the eyes out of photos to making a portrait ‘nod’ with artificial intelligence.

The Future of Everything covers the innovation and technology transforming the way we live, work and play, with monthly issues on health, money, cities and more. This month is Artificial Intelligence, online starting July 2 and in the paper on July 9.

Facial-recognition systems, long touted as a quick and dependable way to identify everyone from employees to hotel guests, are in the crosshairs of fraudsters. For years, researchers have warned about the technology’s vulnerabilities, but recent schemes have confirmed their fears—and underscored the difficult but necessary task of improving the systems.


In the past year, thousands of people in the U.S. have tried to trick facial-identification checks to fraudulently claim unemployment benefits from state workforce agencies, according to identity-verification firm ID.me Inc.

The company, which uses facial-recognition software to help verify individuals on behalf of 26 U.S. states, says that between June 2020 and January 2021 it found more than 80,000 attempts to fool the selfie step in government ID matchups among the agencies it worked with.

That included people wearing special masks, using deepfakes—lifelike images generated by AI—or holding up images or videos of other people, says Chief Executive Blake Hall.

Facial recognition for one-to-one identification has become one of the most widely used applications of artificial intelligence, allowing people to make payments via their phones, walk through passport checking systems or verify themselves as workers.

Drivers for Uber Technologies Inc., for instance, must regularly prove they are licensed account holders by taking selfies on their phones and uploading them to the company, which uses Microsoft Corp.’s facial-recognition system to authenticate them.

Uber, which is rolling out the selfie-verification system globally, did so because it had grappled with drivers hacking its system to share their accounts. Uber declined to comment.

ID.me and smaller vendors like Idemia Group S.A.S., Thales Group and AnyVision Interactive Technologies Ltd. sell facial-recognition systems for identification. The technology works by mapping a face to create a so-called face print. Identifying single individuals is typically more accurate than spotting faces in a crowd.

Still, this form of biometric identification has its limits, researchers say.
Why criminals are fooling facial recognition

Analysts at credit-scoring company Experian PLC said in a March security report that they expect to see fraudsters increasingly create “Frankenstein faces,” using AI to combine facial characteristics from different people to form a new identity to fool facial ID systems.

The analysts said the strategy is part of a fast-growing type of financial crime known as synthetic identity fraud, where fraudsters use an amalgamation of real and fake information to create a new identity.

Until recently, it has been activists protesting surveillance who have targeted facial-recognition systems. Privacy campaigners in the U.K., for instance, have painted their faces in asymmetric makeup specially designed to scramble the facial-recognition software powering cameras while walking through urban areas.

Criminals have more reasons to do the same, from spoofing people’s faces to access the digital wallets on their phones, to getting through high-security entrances at hotels, business centers or hospitals, according to Alex Polyakov, the CEO of Adversa AI, a firm that researches secure AI.

Any access control system that has replaced human security guards with facial-recognition cameras is potentially at risk, he says, adding that he has confused facial-recognition software into thinking he was someone else by wearing specially designed glasses or Band-Aids.
A growing threat

The idea of fooling these automated systems dates back several years. In 2017, a male customer of insurance company Lemonade tried to fool its AI for assessing claims by dressing in a blond wig and lipstick, and uploading a video saying his $5,000 camera had been stolen. Lemonade’s AI systems, which analyze such videos for signs of fraud, flagged the video as suspicious and found the man was trying to create a fake identity.

He had previously made a successful claim under his normal guise, the company said in a blog post. Lemonade, which says on its website that it uses facial recognition to flag claims submitted by the same person under different identities, declined to comment.

Earlier this year, prosecutors in China accused two people of stealing more than $77 million by setting up a shell company purporting to sell leather bags and sending fraudulent tax invoices to their supposed clients.

The pair was able to send out official-looking invoices by fooling the local government tax office’s facial-recognition system, which was set up to track payments and crack down on tax evasion, according to prosecutors cited in a March report in the Xinhua Daily Telegraph.

Prosecutors said in a posting on the Chinese chat service WeChat that the attackers had hacked the local government’s facial-recognition service with videos they had produced. The Shanghai prosecutors couldn’t be reached for comment.

The pair bought high-definition photographs of faces from an online black market, then used an app to create videos from the photos to make it look like the faces were nodding, blinking and opening their mouths, the report says.

The duo, who had the surnames Wu and Zhou, used a special mobile phone that would turn off its front-facing camera and upload the manipulated videos when it was meant to be taking a video selfie for Shanghai’s tax system, which uses facial recognition to authenticate tax returns, the report says. Wu and Zhou had been operating since 2018, according to prosecutors.

Spoofing a facial-recognition system doesn’t always require sophisticated software, according to John Spencer, chief strategy officer of biometric identity firm Veridium LLC. One of the most common ways of fooling a face-ID system, or carrying out a so-called presentation attack, is to print a photo of someone’s face and cut out the eyes, using the photo as a mask, he says.

Many facial-recognition systems, such as the ones used by financial trading platforms, check to see if a video shows a live person by examining their blinking or moving eyes.

Most of the time, Mr. Spencer says, his team could use this tactic and others to test the limits of facial-recognition systems, sometimes folding the paper “face” to give it more perceived depth. “Within an hour I break almost all of [these systems],” he says.
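The blink checks Mr. Spencer’s team defeats can be illustrated with a minimal sketch. This is a simplified eye-openness heuristic, not any vendor’s actual detector; the function names and thresholds are illustrative assumptions. The idea: a printed photo with cut-out eyes never produces a full open-closed-open blink cycle.

```python
# Sketch of a blink-based liveness check: given per-frame eye-openness
# scores (0 = closed, 1 = open) from a face-landmark detector, require
# at least one full blink within the capture window. A printed photo
# mask tends to hold a near-constant openness value and never "blinks".
def count_blinks(eye_openness, closed_thresh=0.2, open_thresh=0.6):
    """Count open -> closed -> open transitions in a score sequence."""
    blinks = 0
    eyes_closed = False
    for score in eye_openness:
        if not eyes_closed and score < closed_thresh:
            eyes_closed = True          # eye just closed
        elif eyes_closed and score > open_thresh:
            eyes_closed = False         # eye reopened: one blink
            blinks += 1
    return blinks

def passes_liveness(eye_openness, min_blinks=1):
    return count_blinks(eye_openness) >= min_blinks
```

A system relying only on this signal is exactly what a folded paper face with moving real eyes behind it can defeat, which is why stronger checks add depth and reflectance cues.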

Apple Inc.’s Face ID, which was launched in 2017 with the iPhone X, is among the most difficult to fool, according to scientists. Its camera projects more than 30,000 invisible dots to create a depth map of a person’s face, which it then analyzes, while also capturing an infrared image of the face.

Using the iPhone’s chip, it then processes that image into a mathematical representation, which it compares with its own database of a user’s facial data, according to Apple’s website. An Apple spokeswoman says that the company’s website states that for privacy reasons, Face ID’s data never leaves an iPhone.
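The “mathematical representation” comparison can be pictured as an embedding-distance check, a common pattern in face matching generally; this sketch is not Apple’s algorithm, and the vectors and threshold are stand-ins.

```python
import math

# Illustrative face-template matching: a face is stored as a numeric
# embedding vector, and a new capture matches if its cosine similarity
# to the enrolled template clears a threshold.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def matches_enrolled(template, capture, threshold=0.9):
    return cosine_similarity(template, capture) >= threshold
```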

Some banks and financial-services companies use third-party facial-identification services, not Apple’s Face ID system, to sign up customers on their iPhone apps, Mr. Spencer says. This is potentially less accurate. “You end up looking at regular cameras on a mobile phone,” he says. “There’s no infrared capability, no dot projectors.”

Many online-only banks ask users to upload video selfies alongside a photo of their driver’s licenses or passports, and then use a third party’s facial-recognition software to match the video to the ID. The images sometimes go to human reviewers if the system flags something wrong, Mr. Spencer says.
Seeking a solution

Mr. Polyakov regularly tests the security of facial-recognition systems for his clients and says there are two ways to protect such systems from being fooled. One is to update the underlying AI models to guard against novel attacks by redesigning the algorithms that underpin them. The other is to train the models with as many examples as possible of the altered faces that could spoof them, known as adversarial examples.

Unfortunately, it can take 10 times the number of images needed to train a facial-recognition model to also protect it from spoofing—a costly and time-consuming process. “For each human person you need to add the person with adversarial glasses, with an adversarial hat, so that this system can know all combinations,” Mr. Polyakov says.
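The data expansion Mr. Polyakov describes can be sketched as pairing each genuine enrollment image with labeled spoof variants; the variant names below are illustrative, not a real attack taxonomy.

```python
# Sketch of adversarial augmentation: for every genuine enrollment
# image, add labeled spoof variants (glasses, hat, printed mask) so a
# model trained on the result learns to reject those disguises. This
# multiplies the dataset size, which is the cost described above.
SPOOF_VARIANTS = ["adversarial_glasses", "adversarial_hat", "printed_mask"]

def augment_training_set(genuine_images):
    """Return (image_id, label) pairs: each genuine image plus one
    spoofed counterpart per variant."""
    examples = []
    for img in genuine_images:
        examples.append((img, "genuine"))
        for variant in SPOOF_VARIANTS:
            examples.append((f"{img}+{variant}", "spoof"))
    return examples
```

With three variants, every enrolled person costs four training examples instead of one, which is how the multiplier Mr. Polyakov cites arises.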

Companies such as Google, Facebook and Apple are working on finding ways to prevent presentation attacks, according to Mr. Polyakov’s firm’s analysis of more than 2,000 scientific research papers about AI security. Facebook, for instance, said last month that it is releasing a new tool for detecting deepfakes.

ID.me’s Mr. Hall says that by this past February, his company was able to stop almost all of the fraudulent selfie attempts on the government sites, bringing the number that got through down to single digits from among millions of claims.

The company got better at detecting certain masks by labeling images as fraudulent, and by tracking the device, IP addresses and phone numbers of repeat fraudsters across multiple fake accounts, he says.

It also now checks how the light of a smartphone reflects and interacts with a person’s skin or another material. The attempts at face-spoofing have also declined. “[Attackers] are typically unwilling to use their real face when committing a crime,” Mr. Hall says.
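The cross-account linking Mr. Hall describes can be sketched as grouping verification attempts by a shared fingerprint (device, IP address or phone number) and flagging fingerprints tied to too many accounts; the threshold and fingerprint format here are assumptions, not ID.me’s actual rules.

```python
from collections import defaultdict

# Sketch of repeat-fraudster linking: group verification attempts by
# shared fingerprint and flag any fingerprint that appears across
# more distinct accounts than a legitimate user plausibly would.
def flag_repeat_fingerprints(attempts, max_accounts=3):
    """attempts: iterable of (account_id, fingerprint) pairs.
    Returns the set of fingerprints tied to more than max_accounts
    distinct accounts."""
    accounts_by_fp = defaultdict(set)
    for account_id, fingerprint in attempts:
        accounts_by_fp[fingerprint].add(account_id)
    return {fp for fp, accts in accounts_by_fp.items()
            if len(accts) > max_accounts}
```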

Updated: 12-7-2021

Deepfake Technology Is Now A Threat To Everyone. What Do We Do?

Legislation hasn’t kept up with the fast-moving technology, so the market may have to create its own solution.

In October, MIT Prof. Sinan Aral warned his Twitter followers that he had discovered a video, which he had never recorded, of himself endorsing an investment fund’s stock-trading algorithm.

In reality, it wasn’t Prof. Aral in the video, but an artificial-intelligence creation in his likeness, or what is known as a highly persuasive “deepfake.”

It is striking that scammers targeted Prof. Aral, considering he is a leading expert on the study of misinformation online.

It also suggests that deepfake technology is now at an inflection point: Thanks to a number of free deepfake apps that are just a Google search away, anyone can become a victim of such a scam.

The term deepfake has its origins in pornography, but it has come to mean the use of AI to create synthetic media (images, audio, video) in which someone appears to be doing or saying what in reality they haven’t done or said.

The technology isn’t always misused. Cadbury, for example, joined with Bollywood celebrity Shahrukh Khan on a marketing campaign for small businesses in India hit by Covid-19.

Business owners uploaded details of their stores, and Cadbury used deepfake technology to create the effect of Mr. Khan promoting them in tailored TV ads. (The campaign was transparent about its fakery.)

But positive use cases are likely to be overshadowed in coming years by the technology’s potential role in financial fraud, identity theft and worse—from the savaging of reputations to the stoking of civil and political unrest.

Current laws targeting fraudulent impersonation weren’t designed for a world with deepfake technology, and efforts at the federal level to update these laws have faltered so far. One stumbling block is the need to also protect parodies and other free speech.

Another big challenge is that in an online world where people can anonymously upload content, it can be difficult to find the individuals behind deepfakes.

Some researchers have proposed putting the onus on website platforms such as Facebook and YouTube by making their protections in relation to user-generated content conditional on their taking “reasonable steps” to police their own platforms.

Broad adoption of these kinds of laws could create meaningful deterrents—eventually. But the technology is moving so fast that lawmakers will likely always lag behind. That is why I believe we are going to have to rely on technology to protect us from a problem it helped create.

One such solution is to detect deepfakes via machine-learning methods. For instance, while deepfakes appear highly realistic, the technology isn’t yet capable of generating natural eye blinking in the impersonated individuals.

As such, machine-learning algorithms have been trained to detect deepfakes using eye-blinking patterns. While these detectors can be successful in the short term, people looking to evade such systems will likely just respond with better technology, creating a continuing and expensive cat-and-mouse game.
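The blink-pattern heuristic can be sketched as a rate check: humans blink roughly 15 to 20 times a minute, while early deepfakes blinked rarely or never. The threshold below is an illustrative assumption, not a figure from any published detector.

```python
# Sketch of a blink-rate deepfake heuristic: flag a clip whose blink
# frequency falls far below the typical human rate. Real detectors are
# trained classifiers; this only illustrates the signal they exploit.
def blinks_per_minute(blink_count, video_seconds):
    return blink_count * 60.0 / video_seconds

def looks_like_deepfake(blink_count, video_seconds, min_rate=5.0):
    return blinks_per_minute(blink_count, video_seconds) < min_rate
```

A forger who adds synthetic blinking defeats this exact check, which is the cat-and-mouse dynamic described above.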

A better approach with a longer time horizon is media provenance or authentication systems to verify the origins of images and videos.

Microsoft, for instance, has developed a prototype of a system called AMP (Authentication of Media via Provenance) that enables media-content creators to create and assign a certificate of authenticity to their content.

Under such a system, every time you watch a video of, say, the U.S. president, the technology would help your browser or media-viewing software verify the source of the video (for example, a news network or the White House).

The process could be delivered as simply as through an icon—much like the current browser padlock icon that indicates any information you send to that particular website is protected from third-party tampering en route.
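The verification flow can be sketched in miniature: the publisher signs a hash of the content, and the viewer checks the signature before showing an authenticity icon. Real provenance systems such as Microsoft’s AMP use certificate-based public-key signatures; the shared-key HMAC below is a deliberately simplified stand-in.

```python
import hashlib
import hmac

# Minimal stand-in for a media-provenance check: the publisher signs
# the content's hash with a key; the viewer verifies the signature
# before displaying an "authentic source" indicator.
def sign_media(content: bytes, publisher_key: bytes) -> str:
    digest = hashlib.sha256(content).digest()
    return hmac.new(publisher_key, digest, hashlib.sha256).hexdigest()

def verify_media(content: bytes, signature: str, publisher_key: bytes) -> bool:
    # compare_digest avoids leaking information via timing differences
    return hmac.compare_digest(sign_media(content, publisher_key), signature)
```

Any tampering with the content bytes changes the hash, so the stored signature no longer verifies and the authenticity icon would not appear.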

To be effective in practice, such systems would have to be widely adopted by all content creators, which will take time.

While legislation eventually may offer protection against deepfakes, I believe the market could be quicker—provided we, as consumers and citizens, care.
