How Hackers Use Our Brains Against Us And How We Can Fight Back
Cybercriminals take advantage of the unconscious processes that we all use to make decision-making more efficient. Blame it on our ‘lizard brains.’
The email from the boss asking to transfer money to a new supplier. The link sent from a client, but whose return email address is slightly off. The request from somebody in the IT department asking for system login information.
We read about these typical spearphishing attempts, and more often than not we think: Who falls for these? And how could they?
The answer to the first question is simple: You do. We do. Nearly 800,000 people fell victim to cyberscams in 2020, according to the Federal Bureau of Investigation’s Internet Crime Complaint Center. That was an increase of around 69% over 2019, with reported losses of more than $4 billion.
As for the second question? Blame it on our brains. Criminals lure smart people into their traps by taking advantage of the unconscious, automatic processes that act as shortcuts to make our decision-making more efficient. These cognitive biases—arising from what’s often referred to as our “lizard brains”—can cause us to misinterpret information and make snap judgments that may be irrational or inaccurate.
“Cybercriminals will do anything they can to trigger the lizard brain,” says Kelly Shortridge, a senior principal at Fastly, a cloud-computing-services provider.
They will use corporate logos we’re familiar with, or tell us to act fast or our bank account will be shut down, or hijack personal information from social media to impersonate a friend or an executive—whatever it takes to get users to click on a link, open an attachment, wire money or send compromising information.
Recognizing that we have biases is the first step to overcoming them. It isn’t easy, even if these biases are sometimes painfully obvious. “If we can’t even eat healthy foods and go to the gym regularly, how can we realistically ask users to check every single bias?” asks Ms. Shortridge.
We can start by understanding the big ones.
Loss Aversion

The idea is simple: People find the pain of a loss much greater than the joy of a gain of equivalent value. That’s why, when people are given $100 and then offered a coin toss with a 50/50 chance of losing that $100 or doubling it to $200, most decline the gamble. The idea of losing $100 is just too hard to swallow.
The result, says Cleotilde “Coty” Gonzalez, a professor of decision sciences at Carnegie Mellon University, is that “if something is presented as a loss, we are more willing to take a risk [to avoid it]; if it’s presented as a gain, we are OK with taking a safe option.”
Prof. Gonzalez says scammers use this insight when sending phishing emails. If an email arrives saying that your alarm service is about to be shut off because you haven’t paid a monthly fee, for example, you may click on the link to prevent losing your security system. If, however, the email had said you can click on the link to lower your monthly alarm payments, you might ignore the request.
Or a scammer might send a message to your work email, claiming that there is a problem with an account at one of your corporate suppliers, and warning that your shipment—one that your boss is counting on—will be delayed unless you verify your account information in a link provided by them.
The fake link leads to a fake website that looks like the real thing. By playing on your fear of losing access to your account, the scammer gets your credentials.
Authority Bias

As humans, we inherently trust figures with power. It makes sense: How could we get anything done if we doubted everybody equally? But hackers know that if we get an email from a trusted source, we let down our guard.
“The epitome of the exploitation of this bias is ‘business email compromise,’ ” says Kevin Haley, director at Symantec, a division of Broadcom Inc., a global semiconductor and software business.
In a typical scam, criminals send an email message that looks like it comes from a known authority figure who is making a recognizable request.
It could be what looks like the boss sending an in-house email, asking you to send a current payment to a new bank-account number. Since you think you’ve seen that email address before, and your job is to fulfill your boss’s requests, you might do what is asked of you.
What you didn’t notice is that the email address is slightly different from the real address. Worse, Mr. Haley adds, “the more sophisticated attackers will phish the real email account of the boss.”
According to the FBI, such “business email compromise” scams resulted in about $1.8 billion in reported losses last year.
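The look-alike addresses these scams depend on can often be caught mechanically. Below is a minimal sketch, assuming a hypothetical trusted domain `examplecorp.com`, that flags sender domains which nearly, but not exactly, match a trusted one:

```python
import difflib

TRUSTED_DOMAINS = ["examplecorp.com"]  # hypothetical trusted domain for illustration

def looks_spoofed(sender: str, threshold: float = 0.8) -> bool:
    """Flag addresses whose domain nearly, but not exactly, matches a trusted one."""
    domain = sender.rsplit("@", 1)[-1].lower()
    for trusted in TRUSTED_DOMAINS:
        if domain == trusted:
            return False  # exact match: the real domain, not a look-alike
        if difflib.SequenceMatcher(None, domain, trusted).ratio() >= threshold:
            return True   # suspiciously close: possible spoof
    return False

# "examplec0rp.com" is one character off the trusted domain
print(looks_spoofed("boss@examplecorp.com"))   # False
print(looks_spoofed("boss@examplec0rp.com"))   # True
```

Commercial email gateways use far more sophisticated versions of this kind of fuzzy matching, but even a simple similarity check illustrates why "slightly off" addresses are detectable by machines even when human readers skim past them.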
Urgency

We all know the tendency to do whatever seems most urgent first, and this sense of urgency means we may not be as thoughtful as usual.
“If it’s the boss reaching out directly to you, asking you to do something quickly, you jump into action,” says Mr. Haley. “You don’t even realize that the email address isn’t exactly correct, because you are so concerned with pleasing the boss. That is perfect social engineering.”
Mr. Haley offers an example of how a scammer might prime a victim with a few emails that appear to come from the CEO, thus elevating the urgency (as well as triggering the authority bias) and overriding the voice telling you to follow the normal procedures.
The first email may ask: “Are you at your desk?” You wonder: Why is the CEO asking? Did I do something?
The second email asks: “I just sent you a wire request. Did you not get it?” Oh no, I’m going to screw up if I don’t fulfill that request.
The third email says, “I will have the invoice sent as soon as I can access my computer. Email me the wire transfer when complete.” This is so urgent that the boss wants me to ignore protocol. I better act now.
Alana Maurushat, professor of cybersecurity and behavior at Western Sydney University in Australia, says emotions contribute to a sense of urgency. If it appears that the email from your boss is about an irate vendor, the anxiety level goes up. “The more emotions a cybercriminal can bring into context, the more likely someone is to play along,” she says. “When emotions are triggered, human brains go into a different mode.”
The Halo Effect

We all have positive views of brands, companies and people we like, and bad actors can take advantage of that. If an invitation to join an elite club or speak at an exclusive conference lands in your inbox, especially if the email appears to come from an organization you admire, there’s a good chance you’ll click on a link to sign up, and perhaps provide too much personal or company information.
Rod Simmons, vice president of product strategy at Omada, an identity governance and administration company, offers another example: A scammer impersonating your work credit-card company sends an email saying it has detected fraudulent use on your card, and it directs you to click on a link to verify the most recent transactions.
Since you have had favorable experiences with the company, you won’t question whether the request is legitimate. You click through and eventually are asked to log into your account.
Present Bias

It’s natural to choose smaller, immediate wins over bigger, future rewards. It’s all about instant gratification.
“People care about the present self more than the future self,” says Ms. Shortridge.
Imagine, she says, it’s the last week of the quarter and a team is about to land a big new customer.
If the sales leader receives an email attachment that seems related to the deal, she will give priority to her immediate preference of closing the deal, rather than giving equal weight to the concern about avoiding a data breach in the future—a breach that seems unlikely to happen, since most of the emails sales leaders receive with links and attachments are legitimate rather than malicious.
Recency Bias

We tend to make judgments based on whatever we’ve most recently experienced. If we haven’t seen it before, our alarm bells don’t go off.
Patrick Murray, chief product officer at Tugboat Logic, a security compliance management platform, says this means that scammers may have more luck with novel social-engineering attacks that employees at a particular company haven’t seen before. Maybe employees have been trained to spot typical email-phishing attacks.
But then they receive a call from a seemingly legitimate IT help desk alerting them to some issue. The scammer may ask the victim for their login credentials, potentially handing over the keys to the entire kingdom.
Illusion of Unique Invulnerability
Sometimes referred to as the “optimism bias,” this occurs when people think a bad thing is very unlikely to happen to them—so they give their credentials to a “colleague” who is actually a scammer.
“There is an adage that there are two types of people: those who have been hacked and know it, and those who have been hacked and don’t know it,” says Mr. Murray.
Dr. Maurushat says the training manuals given to cybercriminals “actually say that the target is a white male over 40.” They target these men, she says, “because they think they could never be scammed.”
Fighting Back

Overcoming biases isn’t easy, because these shortcuts are baked into human thinking. But cybersecurity experts say training—especially gamified exercises where potential targets get to respond to attacks that feel real—can help.
There also are technical solutions to counter the effect of biases, such as multifactor authentication, password managers, and changing communications channels if something seems fishy.
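Multifactor authentication works because the second factor changes constantly and is never typed into a phishing page's password field. As a minimal sketch, the rotating six-digit codes produced by authenticator apps follow the published HOTP/TOTP standards (RFC 4226 and RFC 6238), and can be generated with nothing but the standard library:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password for a given counter value."""
    msg = struct.pack(">Q", counter)                       # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30) -> str:
    """RFC 6238 time-based one-time password: HOTP over the current 30-second window."""
    return hotp(secret, int(time.time()) // period)

# RFC 4226 test vector: this secret at counter 0 yields "755224"
print(hotp(b"12345678901234567890", 0))  # 755224
```

Because the code is valid for only about 30 seconds, a credential harvested by a phishing site is useless on its own moments later, which is why experts recommend a second factor even for users well trained against the biases above.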
But potentially the most effective solutions are nontechnical and long term. One is getting employees to slow down when speed isn’t crucial. Ms. Shortridge says that building a culture without unnecessary deadlines, one that doesn’t incentivize constant urgency and allows employees to catch their breath “so they can trigger the philosophy brain, not the lizard brain,” would diminish successful attacks.
Another is to build a company culture that rewards good behavior, like reporting suspicious activity and questioning unusual requests. This, says Mr. Simmons, takes buy-in from the top. “If the CEO doesn’t care about training her employees to be better defenders,” he says, “why should anyone else?”
The Secret Vulnerability of Cybercriminals: Burnout
Police should focus less on the leaders and more on the legions of cybercrime workers and the networks they maintain.
Picture someone involved in cybercrime. Are you seeing a highly skilled lone-wolf hacker? Or maybe a spy for a foreign government, or an organized-crime boss? If you are, you’re missing the big picture.
Those characters are out there. But the most dangerous cybercrime isn’t the province of individual renegades. It’s big business.
And for most of the people involved, it isn’t the exciting, lucrative world—even glamorous, in its way—that some media depictions might suggest. It’s just a job. Often a boring, low-paid, dead-end job full of frustration that ultimately leads to burnout. Most cybercriminals are simply cogs in a sprawling network of services that support those who launch attacks.
And that has important implications for how to police cybercrime. In short, the key is to focus not on the leaders of criminal enterprises or their lieutenants, but rather on the legion of cybercrime workers and the networks they maintain.
Cybercrime has grown into a huge industry increasingly based on division of labor and specialization. The predominant business model is what has become known as cybercrime-as-a-service. For the most part, a group of artisans build sophisticated digital tools, and a much larger community of people buy them and use them to commit cybercrimes.
At the low end of the scale, teenagers are paying a $5 monthly subscription fee for so-called booter services, which allow them to direct botnets—networks of commandeered computers—to knock rival videogame players offline with denial-of-service attacks.
More-harmful services, such as ransomware attacks, are managed in a more business-to-business manner, requiring a lot more money. What the whole range of services has in common is that the users need almost no technical skill. For that, they rely on the providers, who not only sell them the necessary tools but also offer technical support.
This all relies on substantial criminal business operations, centered on networks of computer servers. And that has created a range of boring but essential jobs keeping these businesses’ hardware humming and managing their customers.
People need to set up servers, manage networks of infected computers, get a website up and running and oversee payment systems. When a customer can’t get your service to work, or they threaten to move to one of your competitors, you need community managers and support staff ready to respond, to avoid losing business.
My colleagues and I have spent late nights interviewing the people running these services, hung out in their online forums and chat channels, and scraped vast amounts of data (tens of millions of posts from dozens of forums) about what they’re up to.
Our interviewees—who often got into these businesses dreaming of eventually becoming a skilled hacker—told us that the entry-level administrative and customer-service work is perceived as easy, so it’s initially attractive to newcomers. “It’s autopilot,” said one interviewee working as an administrator of booter services. “I can sit in my chair, smoke weed and still make money.”
However, most of the people we interviewed and encountered in online forums were earning pocket change, and they weren’t developing hacking skills or building any reputation in the hacking community. And many complained about the hassles of the business; work-life balance isn’t one of the perks of the cybercrime industry.
For instance, if a server goes down at 4 a.m., someone has to get up and fix it or face a sea of angry customers. On a chat channel we were monitoring, we saw an administrator on vacation desperately trying to use a hotel’s patchy Wi-Fi to get a botnet running again.
As we spoke to these cybercrime service workers, it became clear that many were prone to a malady familiar to employees in more-reputable industries—burnout. As one cybercrime administrator said to me: “After [running a cybercrime service] for almost a year, I lost all motivation, and really didn’t care anymore.
So I just left and went on with life. It wasn’t challenging enough at all….Lots of people are starting to see what I and lots of others see. It’s a place where you learn nothing new and don’t go much of anywhere.”
Action And Messaging
This has important implications for policing. As part of our research we observed what happened when law-enforcement authorities used various tactics against the cybercrime businesses we studied. Tactics focused on arrests and harsh sentences for the leaders of cybercrime enterprises seem simply not to work.
When major players were arrested, the effect was negligible, with new businesses moving to take their place in a matter of days, sometimes using the same infrastructure of computer servers and service workers.
But when authorities targeted the support staff—the labor force that the cybercrime industry depends on—with a few arrests and made their jobs even more miserable than usual through coordinated shutdowns of server networks, the effect was much greater. This is not unlike putting pressure on a mafia accountant, as opposed to arresting crime bosses.
In our research, we saw that when authorities attacked the cybercrime infrastructure this way, the services became unreliable and their customers thought they were being scammed, flooding their chat channels with complaints. When servers went down, so did the business of all the criminals who were renting that infrastructure. Cyberattacks declined.
Conventional wisdom suggests that disrupting the infrastructure of cybercrime services by taking down their servers is merely a game of Whac-A-Mole, with these groups able to set up new systems fairly quickly. But that doesn’t take into account the effect on cybercrime workers: We found that these takedowns were extremely frustrating for the people working behind the scenes.
We even began to see people quitting the business, burned out from the stress of having to provide round-the-clock customer service and system administration under increasing scrutiny from the police.
This has implications not only for police action but also for messaging by law-enforcement authorities. When companies are hacked or the police launch a sting operation against cybercrime operations, the authorities are often at pains to emphasize how skilled and dangerous the criminals are, how much money they make, how much harm they cause.
However, this may be the wrong approach, risking making jobs in the cybercrime industry seem more skilled and glamorous than they actually are.
So perhaps rather than arrests, the way forward lies in disrupting cybercrime’s infrastructure, making the administrative work in the industry even less appealing than it already is and so driving people out of the business and discouraging potential new recruits.
Instead of describing hacking as skilled, exhilarating, lucrative work, police and media coverage might do well to reflect its reality: closer to “The Office” than “The Matrix.”