The Facebook Whistleblower, Frances Haugen, Says She Wants To Fix The Company, Not Harm It
The former Facebook employee says her goal is to help prompt change at the social-media giant.
The former Facebook Inc. employee who gathered documents that formed the foundation of The Wall Street Journal’s Facebook Files series said she acted to help prompt change at the social-media giant, not to stir anger toward it.
Frances Haugen, a former product manager hired to help protect against election interference on Facebook, said she had grown frustrated by what she saw as the company’s lack of openness about its platforms’ potential for harm and unwillingness to address its flaws.
She is scheduled to testify before Congress on Tuesday. She has also sought federal whistleblower protection with the Securities and Exchange Commission.
In a series of interviews, Ms. Haugen, who left the company in May after nearly two years, said that she had come into the job with high hopes of helping Facebook fix its weaknesses. She soon grew skeptical that her team could make an impact, she said. Her team had few resources, she said, and she felt the company put growth and user engagement ahead of what it knew through its own research about its platforms’ ill effects.
Toward the end of her time at Facebook, Ms. Haugen said, she came to believe that people outside the company—including lawmakers and regulators—should know what she had discovered.
“If people just hate Facebook more because of what I’ve done, then I’ve failed,” she said. “I believe in truth and reconciliation—we need to admit reality. The first step of that is documentation.”
In a written statement, Facebook spokesman Andy Stone said, “Every day our teams have to balance protecting the right of billions of people to express themselves openly with the need to keep our platform a safe and positive place.
We continue to make significant improvements to tackle the spread of misinformation and harmful content. To suggest we encourage bad content and do nothing is just not true.”
Ms. Haugen, 37 years old, resigned from Facebook in April. She stayed on another month to hand off some projects. She also sifted through the company’s internal social network, called Facebook Workplace, for instances where she believed the company had failed to be responsible about users’ welfare.
She said she was surprised by what she found. The Journal’s series, based in part on the documents she gathered as well as interviews with current and former employees, describes how the company’s rules favor elites; how its algorithms foster discord; and how drug cartels and human traffickers use its services openly.
An article about Instagram’s effects on teenage girls’ mental health was the impetus for a Senate subcommittee hearing last week in which lawmakers described the disclosures as a “bombshell.”
Ms. Haugen kept expecting to be caught, she said, as she reviewed thousands of documents over several weeks. Facebook logs employees’ activities on Workplace, and she was exploring parts of its network that, while open, weren’t related to her job.
She said that she began thinking about leaving messages for Facebook’s internal security team for when they inevitably reviewed her search activity. She liked most of her colleagues, she said, and knew some would feel betrayed. She knew the company would as well, but she thought the stakes were high enough that she needed to speak out, she said.
On May 17, shortly before 7 p.m., she logged on for the last time and typed her final message into Workplace’s search bar to try to explain her motives.
“I don’t hate Facebook,” she wrote. “I love Facebook. I want to save it.”
Ms. Haugen was born and raised in Iowa, the daughter of a doctor father and a mother who left behind an academic career to become an Episcopal priest. She said that she prides herself on being a rule-follower.
For the last four Burning Man celebrations, the annual desert festival popular with the Bay Area tech and art scene, she served as a ranger, mediating disputes and enforcing the community’s safety-focused code.
Ms. Haugen previously worked at Alphabet Inc.’s Google, Pinterest Inc. and other technology companies, specializing in designing algorithms and other tools that determine what content gets served to users. Google paid for her to attend Harvard and get her master’s in business administration. She returned to the company in 2011 only to be confronted with an autoimmune disorder.
“I came back from business school, and I immediately started decaying,” she said. Doctors were initially baffled. By the time she was diagnosed with celiac disease, she had sustained lasting damage to nerves in her hands and feet, leaving her in pain. She went from riding a bicycle as much as 100 miles a day to struggling to move around.
Ms. Haugen resigned from Google at the beginning of 2014. Two months later, a blood clot in her thigh landed her in the intensive care unit.
A family acquaintance hired to assist her with errands became her main companion during a year she spent largely homebound. The young man bought groceries, took her to doctors’ appointments, and helped her regain the capacity to walk.
“It was a really important friendship, and then I lost him,” she said.
The friend, who had once held liberal political views, was spending increasing amounts of time reading online forums about how dark forces were manipulating politics.
In an interview, the man recalled Ms. Haugen as having unsuccessfully tried to intervene as he gravitated toward a mix of the occult and white nationalism. He severed their friendship and left San Francisco before later abandoning such beliefs, he said.
Ms. Haugen’s health improved, and she went back to work. But the loss of her friendship changed the way she thought about social media, she said.
“It’s one thing to study misinformation, it’s another to lose someone to it,” she said. “A lot of people who work on these products only see the positive side of things.”
When a Facebook recruiter got in touch at the end of 2018, Ms. Haugen said, she replied that she might be interested if the job touched on democracy and the spread of false information. During interviews, she said, she told managers about her friend and how she wanted to help Facebook prevent its own users from going down similar paths.
She started in June 2019, part of the roughly 200-person Civic Integrity team, which focused on issues around elections world-wide. While it was a small piece of Facebook’s overall policing efforts, the team became a central player in investigating how the platform could spread political falsehoods, stoke violence and be abused by malicious governments.
‘I have a lot of compassion for people spending their lives working on these things.’
— Frances Haugen
Ms. Haugen was initially asked to build tools to study the potentially malicious targeting of information at specific communities. Her team, comprising her and four other new hires, was given three months to build a system to detect the practice, a schedule she considered implausible.
She didn’t succeed, and received a poor initial review, she said. She recalled a senior manager telling her that people at Facebook accomplish what needs to be done with far fewer resources than anyone would think possible.
Around her, she saw small bands of employees confronting large problems. The core team responsible for detecting and combating human exploitation—which included slavery, forced prostitution and organ selling—included just a few investigators, she said.
“I would ask why more people weren’t being hired,” she said. “Facebook acted like it was powerless to staff these teams.”
Mr. Stone of Facebook said, “We’ve invested heavily in people and technology to keep our platform safe, and have made fighting misinformation and providing authoritative information a priority.”
Ms. Haugen said the company seemed unwilling to accept initiatives to improve safety if that would make it harder to attract and engage users, discouraging her and other employees.
“What did we do? We built a giant machine that optimizes for engagement, whether or not it is real,” read a presentation from the Connections Integrity team, an umbrella group tasked with “shaping a healthy public content ecosystem,” in the fall of 2019. The presentation described viral misinformation and societal violence as among the results.
Ms. Haugen came to see herself and the Civic Integrity team as an understaffed cleanup crew.
She worried about the dangers that Facebook might pose in societies gaining access to the internet for the first time, she said, and saw Myanmar’s social media-fueled genocide as a template, not a fluke.
She talked about her concerns with her mother, the priest, who advised her that if she thought lives were on the line, she should do what she could to save those lives.
Facebook’s Mr. Stone said that the company’s goal was to provide a safe, positive experience for its billions of users. “Hosting hateful or harmful content is bad for our community, bad for advertisers, and ultimately, bad for our business,” he said.
On Dec. 2, 2020, the founder and chief of the team, Samidh Chakrabarti, called an all-hands teleconference meeting.
From her San Francisco apartment, Ms. Haugen listened to him announce that Facebook was dissolving the team and shuffling its members into other parts of the company’s integrity division, the broader group tasked with improving the quality and trustworthiness of the platform’s content.
Mr. Chakrabarti praised what the team had accomplished “at the expense of our family, our friends and our health,” according to Ms. Haugen and another person at the talk.
He announced he was taking a leave of absence to recharge, but urged his staff to fight on and to express themselves “constructively and respectfully” when they see Facebook at risk of putting short-term interests above the long-term needs of the community. Mr. Chakrabarti resigned in August. He didn’t respond to requests for comment.
That evening after the meeting, Ms. Haugen sent an encrypted text to a Journal reporter who had contacted her weeks earlier. Given her work on a team that focused in part on counterespionage, she was especially cautious and asked him to prove who he was.
The U.S. Capitol riot came weeks later, and she said she was dismayed when Facebook publicly played down its connection to the violence despite widespread internal concern that its platforms were enabling dangerous social movements.
Mr. Stone of Facebook called any implication that the company caused the riot absurd, noting the role of public figures in encouraging it. “We have a long track record of effective cooperation with law enforcement, including the agencies responsible for addressing threats of domestic terrorism,” he said.
In March, Ms. Haugen left the Bay Area to take up residence in Puerto Rico, expecting to continue working for Facebook remotely.
Ms. Haugen had expected there wouldn’t be much left on Facebook Workplace that wasn’t already either written about or hidden away. Workplace is a regular source of leaks, and for years the company has been tightening access to sensitive material.
To her surprise, she found that attorney-client-privileged documents were posted in open forums. So were presentations to Chief Executive Mark Zuckerberg, sometimes in draft form, with notes from top company executives included.
Virtually any of Facebook’s more than 60,000 employees could have accessed the same documents, she said.
To guide her review, Ms. Haugen said she traced the careers of colleagues she admired, tracking their experiments, research notes and proposed interventions. Often the work ended in frustrated “badge posts,” goodbye notes that included denunciations of Facebook’s failure to take responsibility for harms it caused, she said.
The researchers’ career arcs became a framework for the material that would ultimately be provided to the SEC, members of Congress and the Journal.
The more she read, she said, the more she wondered if it was even possible to build automated recommendation systems safely, an unpleasant thought for someone whose career focused on designing them. “I have a lot of compassion for people spending their lives working on these things,” she said. “Imagine finding out your product is harming people—it’d make you unable to see and correct those errors.”
The move to Puerto Rico brought her stint at Facebook to a close sooner than she had planned. Ms. Haugen said Facebook’s human resources department told her it couldn’t accommodate anyone relocating to a U.S. territory. In mid-April, she agreed to resign the following month.
Ms. Haugen continued gathering material from inside Facebook through her last hour with access to the system. She reached out to lawyers at Whistleblower Aid, a Washington, D.C., nonprofit that represents people reporting corporate and government misbehavior.
In addition to her coming Senate testimony and her SEC whistleblower claim, she said she’s interested in cooperating with state attorneys general and European regulators. While some have called for Facebook to be broken up or stripped of content liability protections, she disagrees.
Neither approach would address the problems uncovered in the documents, she said—that despite numerous initiatives, Facebook didn’t address or make public what it knew about its platforms’ ill effects.
Mr. Stone of Facebook said, “We have a strong track record of using our research—as well as external research and close collaboration with experts and organizations—to inform changes to our apps.”
In Ms. Haugen’s view, allowing outsiders to see the company’s research and operations is essential. She also argues for a radical simplification of Facebook’s systems and for limits on promoting content based on levels of engagement, a core feature of Facebook’s recommendation systems.
The company’s own research has found that “misinformation, toxicity, and violent content are inordinately prevalent” in material reshared by users and promoted by the company’s own mechanics.
“As long as your goal is creating more engagement, optimizing for likes, reshares and comments, you’re going to continue prioritizing polarizing, hateful content,” she said.
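To make the mechanism she is describing concrete, the sketch below contrasts a pure engagement score with one possible limit on it. It is only an illustration: the post fields, weights and cap are invented for the example and are not Facebook’s actual ranking signals or code.

```python
# Purely illustrative sketch of engagement-based ranking and one possible
# limit on it. Field names and weights are invented for this example; they
# are not Facebook's actual ranking signals.

def engagement_score(post):
    # Classic engagement-optimized scoring: more likes, comments and
    # especially reshares push a post higher in the feed.
    return post["likes"] + 2 * post["comments"] + 5 * post["reshares"]

def capped_score(post, max_reshare_credit=100):
    # One crude way to "limit promotion based on engagement": stop rewarding
    # reshares past a cap, so runaway virality no longer dominates ranking.
    reshares = min(post["reshares"], max_reshare_credit)
    return post["likes"] + 2 * post["comments"] + 5 * reshares

posts = [
    {"id": "family_photo", "likes": 800, "comments": 80, "reshares": 10},
    {"id": "outrage_bait", "likes": 150, "comments": 40, "reshares": 2000},
]

print(sorted(posts, key=engagement_score, reverse=True)[0]["id"])  # outrage_bait
print(sorted(posts, key=capped_score, reverse=True)[0]["id"])      # family_photo
```

In the toy example, the heavily reshared post wins under pure engagement scoring, while capping reshare credit, one simple version of the limits she describes, lets the less viral post come out on top.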
Beyond that, she has some business ideas she’d like to pursue—and she would like to think about something other than Facebook.
“I’ve done a really good job figuring out how to be happy,” she said. “Talking about things that make you sad all the time is not the way to make yourself happy.”
The Facebook Files, Part 6: The Whistleblower
This transcript was prepared by a transcription service. This version may not be in its final form and may be updated.
Kate Linebaugh: This is The Facebook Files, a series from The Journal. We’re looking deep inside Facebook through its own internal documents. If you haven’t already heard our earlier episodes, start there. They’re already in your feed. Over the course of this series, we’ve examined a lot of Facebook documents. Those documents laid out how Facebook created secret rules that favored elites.
Speaker 1: They were literally applying a lower standard to the people who, when they misbehaved, it was most dangerous.
Kate Linebaugh: They’ve revealed that its platforms can have a dangerous effect on teen mental health.
Speaker 2: Page nine of this document says, “We make body image issues worse for one in three teen girls.”
Kate Linebaugh: That drug cartels and human traffickers use its services openly.
Speaker 1: What they realized is that kind of the entire ecosystem of a human trafficking ring could exist on Facebook.
Kate Linebaugh: And how its algorithm fosters discord.
Speaker 2: The most explosive finding was just how harmful certain aspects of this algorithm change were and how much Facebook knew it.
Kate Linebaugh: These documents also show that inside Facebook, the company knows it hasn’t told the public the full story.
Speaker 1: And the line there was that importantly it is a breach of trust. We are not actually doing what we say we do publicly.
Kate Linebaugh: Behind the release of these documents is a person. Up to this point, this person has wanted to stay anonymous, but now she’s decided to come forward and reveal her identity. I went to a recording studio to meet her. She arrived in all black carrying a can of Diet Coke. Okay, everybody’s giving the thumbs up.
Frances Haugen: Cool. One second. I’m going to do the sound.
Kate Linebaugh: Yeah.
Frances Haugen: Yeah.
Kate Linebaugh: Oh, nice. Can you introduce yourself?
Frances Haugen: Sure. My name is Frances Haugen.
Kate Linebaugh: Frances Haugen. Frances is 37 years old. She left Facebook in May after two years with the company. During her time there, she learned information she believed needed to be made public. So earlier this year, she gathered internal Facebook documents that ultimately went to Congress, the Securities and Exchange Commission, and The Wall Street Journal. She also applied for federal whistleblower protection. And now she’s ready to talk.
Frances Haugen: The thing I want everyone to know is that Facebook is far, far more dangerous than anyone knows, and it is getting worse. We can’t expect it to fix itself on its own.
Kate Linebaugh: Welcome to The Journal, our show about money, business, and power. I’m Kate Linebaugh. This is The Facebook Files Part Six. Coming up on the show, a conversation with Frances Haugen, the Facebook whistleblower. Frances Haugen grew up in Iowa. Her father was a doctor and her mother a college professor.
Frances Haugen: I grew up in a house full of books, like so many books that there were bookshelves in the bathrooms. I’m very good at math. I love math. I love numbers. I love just sitting around and thinking about numbers. And that gave me a lot of opportunities. I went to Massachusetts for college. I then got hired by Google straight off college. Was there working on search for years. I ended up at Facebook eventually working on civic misinformation.
Kate Linebaugh: You also have been a ranger at Burning Man?
Frances Haugen: I am. I am a Burning Man Ranger.
Kate Linebaugh: What’s that like?
Frances Haugen: I like to go to Burning Man. I’ve been many, many, many times. Burning Man is a hard place to be. It’s hot. It’s dusty. People are often there for weeks setting up things. Sometimes tempers can flare. And our job is to come and step in. It’s like being a boy scout, only sometimes you get to drive vehicles.
Kate Linebaugh: Frances has worked at some of the biggest companies in Silicon Valley. She started out at Google, but has also worked at Pinterest, Yelp, and the dating app Hinge. In June of 2019, she started at Facebook. Do you remember your first day there?
Frances Haugen: I do. Things I remember on my first day are I remember looking at my badge photo and it just represented so much hope to me. I just remember how much pride I felt. I remember walking into the space and the Facebook office is a remarkable building. It takes 10 or 15 minutes to walk from one end to the other. The ceilings are three-stories tall. It’s incredibly wide. Half the building’s laid out kind of like a little village, like the office rooms are scattered and the hallways zig back and forth.
Kate Linebaugh: Frances joined a team at Facebook called Civic Integrity, also known as the Civic Team. This was a group of a couple hundred people and they studied how harm was caused on Facebook by anyone from human traffickers to conspiracy theorists. And this team also had to find solutions.
Frances Haugen: The sphere of influence of Civic Integrity was to make sure that Facebook is a positive impact in society.
Kate Linebaugh: Frances had been hired as a product manager or PM, focused specifically on civic misinformation, content about politics and society that was misleading or fake. Frances says she had a personal reason for wanting to combat misinformation.
Frances Haugen: I joined Facebook because someone I was incredibly close to, who was really important to me, I lost them to misinformation on the internet, and I never want anyone to feel the pain that I felt.
Kate Linebaugh: Frances says their friendship started to fall apart during the 2016 presidential election. Her friend eventually severed ties with Frances and left San Francisco.
Frances Haugen: In 2016, he was a little disillusioned after Bernie lost the nomination, and he was susceptible to misinformation on the internet. He got really, really radicalized, and I don’t blame Facebook for what happened to him. I blame more 4chan and Reddit.
But he was making crazy claims about George Soros running the world economy and things like that. Things that are just super easy to invalidate. When I would send him these things, to give you a sense of how much misinformation on the internet can twist people, he would say things to me like, “Do you read your own citations? All of these references are to the mainstream media. How can you possibly believe them?”
Kate Linebaugh: What did you hope to accomplish at Facebook?
Frances Haugen: I wanted to make the problem even a little bit less bad.
Kate Linebaugh: Frances got to work. At the outset, she said she was hopeful that she could have a positive impact.
Frances Haugen: I know that I have special skills. This will be my fourth social network I’ve worked at. I am a ranking specialist. I’m specifically deep in the algorithms, like the code of how do we choose what content to show people. I know I can make a difference on civic misinformation.
Kate Linebaugh: When did your sense that you could really make a difference start to change?
Frances Haugen: Almost immediately. Within a month of joining Facebook, I was very skeptical of our ability to actually make the impact. I assumed I was coming into it like civic misinformation, you must already have a team. 2016 happened three years ago, right? I come in, I showed up, and I found out my entire team was brand new.
Kate Linebaugh: Facebook had studied civic misinformation before, but there hadn’t been a dedicated team on it until Frances started. She was put in charge of a small group of around four engineers and data scientists. And straight away, she says she felt there were unreal expectations about what her team could accomplish.
Frances Haugen: The first big project that I worked on at Facebook was around narrowcast misinformation. Narrowcast misinformation is when a party, for example, this project was inspired by Russia, they had found that Russia was targeting specific sensitive populations in the United States like environmentalists, African American activists, police officers and sending them misinformation. And while Facebook had known this was a possible problem since 2016, it’s a very hard problem.
Kate Linebaugh: The project Frances was working on was to help stop a repeat of this in the 2020 election.
Frances Haugen: The first thing we developed was a way to segment the US population into sub-communities in a privacy-conscious way. We developed a way, based on people’s engagement with different kinds of topics, to segment the United States so we could then look at how information was being directed to each of those 600 subpopulations.
As we worked on this project, my manager told me it had to be done within 12 weeks. And I was like, “That is unrealistic.” I told my manager almost immediately, “This team is not being set up for success.” I was told, “At Facebook, people do remarkable things with far less resources than anyone expected. Make it work.”
Kate Linebaugh: Facebook spokesman Andy Stone said the company has invested in people and technology to keep its platform safe and that the company has made fighting misinformation and providing authoritative information a priority. That narrowcast project that Frances was working on was part of a much bigger effort going on inside Facebook, something that was known internally as a lockdown.
Frances Haugen: At Facebook, there have been certain moments of inflection in the company’s history. When they realized that they weren’t on mobile, they did a lockdown. It was a mobile lockdown. Or at some point they did an Android lockdown because they always just focused on iPhones. They were like, “Oh no, there’s so many Android phones in the world. Got to get on Android.”
They’ve had these inflection points in a very small number, maybe up to that five or six. They did a lockdown for the 2020 election. They had done basically a war game for the 2020 election saying, “What could go off the rails?” They went and assessed what were all the vulnerabilities for Facebook, and they made a grid.
Imagine across the top you have a column for Facebook, you have a column for Messenger, you have a column for Instagram, WhatsApp, ads. And in the rows they picked the 10 biggest threats and then they colored in those squares.
Kate Linebaugh: The top 10 threats included things like hate speech, misinformation, harassment, impersonation, and voter suppression.
Frances Haugen: Maybe across the squares, there’s 70 squares, 60 squares. There were so many red squares. The entire thing was either red or yellow. There’s no green. There are so many red squares they had to have two colors of red to differentiate between the red ones that they couldn’t address during lockdown and the ones they were going to focus on during lockdown. And at that point, I was like, “Oh my God, we’re a year out from the election and this is how bad it is? This is a problem.”
Kate Linebaugh: You mentioned that as a moment that shifted your feelings.
Frances Haugen: Totally.
Kate Linebaugh: Why? How so?
Frances Haugen: Why?
Kate Linebaugh: Just seeing all those red squares?
Frances Haugen: Oh yeah. Oh yeah. The two colors of red squares. It’s like so much red, we have to have two colors of red.
There’s angry red and then normal red.
Kate Linebaugh: Frances says that she saw that grid of angry red squares as emblematic of a choice Facebook had made as a company.
Frances Haugen: Given that Facebook has underinvested in us so much, so much, we could have had 10 times as many people working on it, and then we could have addressed all the red squares. We could have addressed every one of those red squares, but Facebook wants to make $80 billion a year.
Would’ve made $2 billion less last year, but we’d have had a safe democracy. Who gets to make that choice? Right now it’s Facebook and the shareholders.
Kate Linebaugh: A Facebook spokesman said that hosting harmful or hateful content is bad for its community, bad for advertisers, and ultimately, bad for its business. Meanwhile, Frances says she was learning about the impact misinformation was having outside of the US from other teams within Civic Integrity.
Frances Haugen: I used to go to this meeting called Virality Review. Virality Review was run by the Social Cohesion team. The Social Cohesion team was focused on areas that were at risk for genocide.
The Virality Review was of content that was going viral in “at-risk countries.” People who actually are exposed to content from those markets would pull up the top 10 posts in each of those markets that had gone viral that week.
Kate Linebaugh: And they’d explain the reason each post had gone viral.
Frances Haugen: It’s just horrific content. It’s severed heads. It’s horrible. One of the red flags for a society that is at risk for ethnic cleansing is you start comparing people to insects. This ethnic group is cockroaches because it dehumanizes them so you can then kill them.
Imagine you’re going to this meeting every other week and every single post is like that, and you’re talking about what allowed this post to go viral, what are the signals that were causing it to go viral?
And you’re sitting there being like, I am the civic misinformation PM, and I am seeing this misinformation and I feel no faith that I can do anything to address it. Imagine living with that every day and having that just grind you down.
Kate Linebaugh: And at this point, how did you feel about the work you were doing?
Frances Haugen: I felt it was incredibly essential. I felt like it was a thing where we needed 200 people, not five people, four people working on this problem.
Kate Linebaugh: Did you voice that to your manager?
Frances Haugen: I did. Totally.
Kate Linebaugh: And the response you got was make do with what you have?
Frances Haugen: I was told that the expectation of Facebook is that you accomplish the impossible with far less resources than anyone would expect.
Kate Linebaugh: A Facebook spokesman told us, “Every day our teams have to balance protecting the right of billions of people to express themselves openly with the need to keep our platform a safe and positive place.” Did you start to get frustrated?
Frances Haugen: Yeah, it was extremely frustrating. One of the most painful things a person can experience is living with a secret with intense consequences. So here I was inside of Facebook, I thought I was well-informed before I joined Facebook about misinformation. Now I know that the problem is way, way worse than anyone outside knows, and I’m staffed with a team that I have no faith can actually address this problem.
I know that no one outside knows these things. I’m walking around holding this information, trying really hard and basically living kind of an archetypal Facebook experience, which is there are many incredibly conscientious people who come in, learn what’s actually happening at Facebook, and push themselves almost to the point of burnout because they know that they know what’s going on.
And once they leave, they won’t be able to help with the problem. I remember the incredible anxiety I felt. By six months in, I had learned so much about the consequences of the problem and felt so powerless to actually make progress on it that I was starting to have panic attacks.
Kate Linebaugh: Frances says her anxiety kept building through the campaign season of 2020. And eventually, she hit a breaking point.
Frances Haugen: My inflection moment where I was like, “Oh, I’m going to need to probably tell someone,” was when they got rid of Civic Integrity.
Kate Linebaugh: In December, Facebook announced it was shutting down the team Frances had been working on, the Civic Integrity team. The plan was to reassign those employees to other parts of the company.
Frances Haugen: It was not entirely a surprise because I had heard rumors that it was coming. I was also not that surprised because Civic Integrity was viewed with a lot of suspicion inside of the company because, one, they did keep uncovering things that I think Facebook didn’t want people to document. And I think Civic Integrity was a problem for Facebook because they asked awkward questions and answered them.
Kate Linebaugh: What did it say to you that Facebook was dismantling the Civic Integrity team?
Frances Haugen: When I found out that the team was being dismantled, when they announced it, it was such a breach of trust. The idea that Facebook could have so much information about what its impact was and then dismantle the team.
Kate Linebaugh: Facebook declined to comment on the reorganization. A few weeks after the Civic Integrity team was disbanded, there were the Capitol riots.
Frances Haugen: Yes.
Kate Linebaugh: What was that moment like in Facebook?
Frances Haugen: Facebook turned off all sorts of protections that it had turned on for the 2020 election right after the election. And the reason they turned off those protections… These are things around like, how reactive is the platform? Is it viral? Those things about ranking, right?
Some of those signals that make it easier for angry things to go out, they got tamped down a little bit for the election because they didn’t want to have riots at the election. But all those things make Facebook grow a little slower.
They turned off all those safety mechanisms, or went back to their old settings, after the election. And the insurrection happens and immediately they throw them back on.
Kate Linebaugh: And how are you seeing this information?
Frances Haugen: Oh, because it’s just flowing freely across our internal version of Facebook. It’s called Workplace. People were putting up reports of what was happening. There’s literally a report, they’re called Break the Glass Measures. Literally when the insurrection happened, there was a document I saw where it listed here are all these Break the Glass Measures that we had on for the election to keep it safe.
And as soon as the election passed, we turn them off and now we’re turning them back on because clearly things have gone off the rails. Facebook knew that there were dangerous trade-offs they were making before the election, which is why they chose safer choices for the election. And as soon as they had passed that moment, they get rid of Civic Integrity.
They turn off these things that would make Facebook grow slower. And as a result, there was documentation that a lot of the Stop the Steal groups and all those things, they grew so fast because of choices Facebook made to prioritize growth over safety.
Kate Linebaugh: After the Capitol riots, there was pressure on Facebook to suspend then President Donald Trump’s account.
Frances Haugen: Yes.
Kate Linebaugh: Do you remember what that was like inside the company?
Frances Haugen: It was very contentious. And the fact that you literally had to have an insurrection and people storming the Capitol and going into political leaders’ offices with guns for Facebook to take the person who instigated the things off the platform.
If that’s our bar, Facebook is basically saying, “We will let societies destabilize to the point of rioting and then we’ll step in. We’re not going to slowly turn the heat down as it starts to get warmer. We’re going to let the pot boil over and then we’ll do something.”
Kate Linebaugh: A Facebook spokesman said, “The notion that the January 6th insurrection would not have happened but for Facebook is absurd.” He said the responsibility for the violence that occurred that day lies with those who attacked the Capitol and those who encouraged them.
Frances says that the 2020 election and its aftermath left her feeling like she needed to do something, so she turned to her mother for guidance.
Frances Haugen: I lived with my parents for 2020 because it was COVID, so I had lots and lots of time to talk to my parents about what I was feeling. My mother is an amazing resource because… So she was a professor, tenured professor, for years, decades and decades. And in her fifties, she became an Episcopal priest. If you are struggling with a crisis of conscience, it’s really useful to live with a priest.
Kate Linebaugh: You were struggling with a crisis of conscience?
Frances Haugen: I was. Yeah, yeah, yeah. I was like, oh my God, we are failing democracy on a really basic level. I never saw Facebook’s willingness to invest at the level they needed to invest to solve these problems. There are thousands of people in places like Russia and China and Iran whose job is to inject misinformation into the United States. There is less than 200 people in the entire company working on anything even slightly related to this.
Kate Linebaugh: And what did she say to you?
Frances Haugen: And she told me to follow my heart. That if I believe people’s lives were on the line, that I should do what I viewed was the most likely thing to save those lives. She told me no matter what I did, she would support me.
Kate Linebaugh: And what are the choices here?
Frances Haugen: I’ve kind of got three choices. One is I can stay at Facebook and keep grinding and doing 70 hour weeks and not seeing the progress that I think is essential. I can quit and go do something else. And the third option is I can let everyone else know.
Kate Linebaugh: We’ll be right back. After talking with her mother and thinking about her options, Frances decided she was going to publicly reveal what she had learned inside Facebook. That’s when she reached out to our colleague Jeff Horowitz.
Frances Haugen: The first time Jeff and I hung out, I had been told by my friends who worked in security I should assume my devices are bugged. I didn’t know what level of paranoid to have. I left my phone in the car.
He and I went for a walk out in the woods. It was beautiful. We literally sat on a picnic blanket like slightly off the trail, like under the trees. This is kind of like a job interview in some ways, right? I wanted to see what he felt about certain issues.
Like I wanted to know, did he care about what was happening in other countries? Because I could see like the ethnic violence issue and the risk of ethnic violence at that point was like the thing that I worried the most about. I knew that some media outlets didn’t care as much about those issues. He was checking those boxes.
I knew that I needed him to believe that I knew what I was talking about. We were just sitting there kind of feeling each other out like, is this someone that I want to invest a lot of time explaining things to? Do I trust that he’s going to do a good job, a rigorous job?
Kate Linebaugh: Frances decided to trust him. Internal Facebook documents that she gathered ultimately went to Congress, the SEC, and Jeff. What were you hoping to achieve?
Frances Haugen: What I was hoping to achieve? The thing I wanted was for the public to have enough information that they could make choices on what laws to have to regulate Facebook.
Kate Linebaugh: What are the regulations that you want to see?
Frances Haugen: I think there’s like different tiers of interventions that I think are necessary. At a minimum, we need radically more transparency and we need to, as a society, think about how can we not be dependent on whistleblowers like me to get basic information out of the company.
Facebook has told us, “You can either have growth or engagement.” If we make it safer, it won’t be as engaging. And now we actually have numbers saying, guess what? Facebook is trading off very small decreases in engagement for huge consequences in misinformation and hate speech and violence. Now we have those things documented. There are many internal documents that talk about the trade-offs that people are willing to accept.
If you had to decide between 1% fewer sessions and 10 or 20% more misinformation, Facebook is consistently saying 1% of sessions is worth 10% misinformation. The second thing is we need to have different regulations on engagement based ranking. Engagement based ranking is always going to prioritize the sensational. It’s always going to prioritize misinformation. And we need to take interventions to reduce virality, to make things less growth optimized.
Because we could have social media that was about our family and friends that we really enjoyed, that was less toxic. It’s just Facebook would grow slower. People would spend shorter sessions on Facebook. Facebook would make less money. We have to regulate it to get that world.
Kate Linebaugh: A Facebook spokesman said the company’s incentive is to provide “a safe, positive experience for the billions of people who use Facebook. That’s why we’ve invested so heavily in safety and security. And to suggest we encourage bad content and do nothing is just not true.” If you could implement one change at Facebook, what would it be and why?
Frances Haugen: Magic bullets are always dangerous because they don’t exist. Let me think, one change. If I could only do one thing, I would improve transparency. Because if Facebook had to publish public data feeds daily on the most viral content, how much of the content people see is coming from groups?
How much hate speech is there? If all this data was transparent in public, you’d have YouTubers who would analyze this data and explain it to people.
Kate Linebaugh: Do you think Facebook should be broken up?
Frances Haugen: When people ask me, should we break up Facebook, I say, definitively do not break up Facebook. All you will do is starve the individual parts of resources. And instead of being able to collaborate across those companies to figure out strategies to solve problems, you will divide up the teams and make them less capable.
One thing that I really, really want to emphasize is that a lot of the problems that are outlined here are not Facebook problems. They are problems with engagement based ranking. That when we allow algorithms, when we allow AI to choose what we get to see and don’t see, we need that same kind of system for all social media companies, because that’s the only way we’re going to get systems that are even minimally safe enough.
Think of how many people work on regulating cars. I don’t even know what the number is. Imagine if there were a hundred people who were paid by the public or half paid by the public and half by Facebook who were embedded inside of Facebook, who could ask these questions themselves.
Kate Linebaugh: Like the Fed with banking.
Frances Haugen: With banking. We do this for banks. Oh my goodness. We don’t let banks run themselves.
Kate Linebaugh: The Federal Reserve of Algorithms.
Frances Haugen: The Federal Reserve of Algorithms. Algorithmic governance. We need more thinking on algorithm governance.
Kate Linebaugh: I’m on Facebook because my son has a medical condition, and there’s a group of parents who found each other on Facebook. They came up with a way to use medical devices together that would improve patient outcomes. It would not have existed without Facebook.
Frances Haugen: Yeah, totally. Ugh, amazing. I love open source. I love open source medical things. I love when people think that solutions are either/or solutions. I love it, because then you can step in and say, “Guess what? We’re having an argument about A or B. Guess what? There are a C, D, and E. We don’t need to have this be an either/or conversation.” I think that’s a great example. Let’s take a step back and imagine what Facebook could look like if it was safer. I’m not asking you to give up your amazing open source medical devices group, because I agree with you.
Those things change the world. But what I am saying is instead of having a product where you have a group with 500,000 people in it, and every day that group makes a thousand posts and we’re going to trust the AI to pick three posts from that group and put them in your newsfeed. We know what happens in that world.
The algorithms, because they are juiced by engagement, end up picking the most extreme posts out of that thousand posts each day. Let’s imagine a different world. In a world where we designed social media such that it relies less on algorithms to pick what we should focus on, normal social interactions will regulate what we talk about.
We should have humans through our conversations, our normal interactions be the things that are choosing what we focus on, not machines. Humans over machines.
If you and I were having a conversation and I kept talking about the same thing over and over again, at some point you’re going to walk away from me, right? If I bring up at Thanksgiving dinner too many times some crazy conspiracy theory, you’re going to be like, “Hey, we’ve talked about that long enough. Let’s move on.” But algorithms will say, “Ooh, people engage with this topic the most. Let’s show it to you more.” And that’s part of the danger for like teenagers, right?
Part of the reason why these teen girls are getting eating disorders is they one time look up weight loss and the algorithm’s like, “Oh great. We’ll keep showing you more and more extreme weight loss things.”
Kate Linebaugh: In response to our story about Instagram, Instagram head Adam Mosseri said that making fixes can unintentionally make things worse.
Frances Haugen: Did he say what those fixes are?
Kate Linebaugh: Well, they’re working on some fixes that he told us about, but it was sort of cautionary. You can be prescriptive out there in the wild, but what you are ordering up could end up hurting this product.
Frances Haugen: Let’s take a step back. Part of the reason why Facebook has made lots of these choices is because they know for each one of these choices, people engaged with the product a little bit more. Yeah. If you got rid of a bunch of these growth hacks, Facebook might be 10% “less enjoyable.” I.e., you might consume 10% less content on it.
But the content you consume you might enjoy more because like parts… It’s kind of like fast food. They’ve been feeding us french fries. And ugh, french fries are delicious.
Kate Linebaugh: So good.
Frances Haugen: So good. Talk about a perfectly designed product. Instead of having your entire diet be french fries, it was like half french fries or 10% french fries. Yeah, you would eat less, but you’d probably also feel better. This comes back again to why I said like if I could only choose to fix one thing, the thing I would fix is transparency.
When Adam Mosseri waves his hands and says, “Some things might be worse,” who gets to define the yardstick? Imagine if there was like a hundred people who got to define the yardstick, instead of Adam Mosseri saying it might be worse for people.
Kate Linebaugh: Frances resigned from Facebook in May. She’s moved away from California and is now focused on other tech projects and on working with lawmakers to regulate social media. How do you feel about Facebook now? You’ve released these documents. You have strong feelings about the company. Do you hate Facebook?
Frances Haugen: Oh, no, no, no, no, no. A thing that I want people to remember is to do this project, I had to do a lot of work, to document the things I documented at the level I documented. It took a lot of work. You can’t do those things if you’re driven by hate, because hate burns you out. If I could work at Facebook again, I would work at Facebook again, because I think the most important work in the world is happening at Facebook because we have to figure out how to make social media safer.
Kate Linebaugh: Some people at Facebook may see your decision to release these documents as betrayal.
Frances Haugen: Oh, I totally can… I know that’s going to happen.
Kate Linebaugh: What do you say to them?
Frances Haugen: I totally see how they could come from that perspective, and all I want them to know is that one of the most important things I learned at Facebook… I had a manager who was amazing. He’s like a role model for who I want to be as a leader, right?
And at some point I was working on some problem and I ran into a roadblock and I got delayed. I was like a week late to give him something I promised him. He said to me, “I’m really disappointed in you.
Because if you had told me you were struggling with this, we could have solved this problem together. It is better to solve problems together than solve them alone.”
I want the employees at Facebook to know that I did this because I really believe that solving problems together is better than solving them alone. And that Facebook has been struggling because a lot of the problems it needs to solve are about conflicts of interest, right?
Conflicts of interest between public safety and profits and growth. Those are problems that Facebook cannot solve alone. And that once it starts solving those problems together, it’ll be so much more constructive and the path forward will be so much easier.
Kate Linebaugh: Well, thanks so much, Frances.
Frances Haugen: My pleasure.
Kate Linebaugh: I appreciate you taking all this time. In an internal message sent to Facebook staff and leaked to the media, a Facebook executive said the company will continue to face scrutiny.
Some of it fair and some of it unfair. But he said, “We should also continue to hold our heads up high. You and your teams do incredible work. Our tools and products have a hugely positive impact on the world and in people’s lives.”
Mark Zuckerberg Breaks Silence On Facebook Whistleblower Testimony, Media Reports
‘Many of the claims don’t make any sense,’ CEO says, in his first public comments related to renewed debate over Facebook’s role in society.
Facebook Inc. Chief Executive Mark Zuckerberg said the company’s work and motives have been mischaracterized in recent media reports and whistleblower testimony and pledged that he would continue pursuing internal research into potential harms of social media.
Mr. Zuckerberg on Tuesday afternoon made his first public comments related to the renewed public debate over his company’s role in society. In a Facebook post, he acknowledged the difficulty of questions around how children use social media, underscored the importance of the company’s research into tough issues and reiterated calls for more regulation of the industry.
In a memo sent to employees that he reshared on Facebook, Mr. Zuckerberg wrote that he waited to address employees and the public until after two recent congressional hearings.
Those focused on how Instagram affects teenagers’ mental health, including a hearing Tuesday with a whistleblower who provided internal documents that formed the foundation for The Wall Street Journal’s Facebook Files series.
The series shows how the company’s moderation rules favor elites, how its algorithms foster discord and how drug cartels and human traffickers use its services openly.
“Many of the claims don’t make any sense,” Mr. Zuckerberg wrote. “I think most of us just don’t recognize the false picture of the company that is being painted.”
During the hearing Tuesday, the whistleblower Frances Haugen, a former Facebook product manager, pressed the company to share internal and external research more broadly. In products such as cars and cigarettes, she said, independent researchers can evaluate health effects, but “the public cannot do the same with Facebook.”
Mr. Zuckerberg wrote Tuesday night that Facebook is “committed to doing more research ourselves and making more research publicly available.”
Mr. Zuckerberg wrote that he is especially focused on questions around Facebook’s work with children. “I’ve spent a lot of time reflecting on the kinds of experiences I want my kids and others to have online, and it’s very important to me that everything we build is safe and good for kids.”
He added: “The reality is that young people use technology.” Mr. Zuckerberg wrote that Facebook and other technology companies should build products that meet their needs while also keeping them safe, rather than ignoring younger people altogether.
“But when it comes to young people’s health or well-being, every negative experience matters,” he wrote. “It is incredibly sad to think of a young person in a moment of distress who, instead of being comforted, has their experience made worse.”
Mr. Zuckerberg also addressed Facebook’s hourslong outage Monday. He said the company has spent the past 24 hours debriefing what it can do to strengthen its systems.
Mark Zuckerberg Has Had A Terrible Week. And It’s Only Tuesday
Facebook Inc.’s worldwide crash exposed the risks of relying on its social networking products, bolstering European regulators’ drive to contain its reach just as a U.S. whistle-blower’s testimony threatens to attract more unwanted scrutiny at home.
While Europe awoke to find Facebook, Instagram, WhatsApp and Messenger services back online, the scale of Monday’s blackout quickly led to criticism. The European Union’s antitrust chief and digital czar, Margrethe Vestager, said the Facebook failure would focus minds on the company’s dominance.
“It’s always important that people have alternatives and choices. This is why we work on keeping digital markets fair and contestable,” Vestager said. “An outage as we have seen shows that it’s never good to rely only on a few big players, whoever they are.”
The networking problem that brought down services used by more than 2.75 billion people couldn’t have come at a worse time. After a U.S. television interview on Sunday, whistle-blower Frances Haugen will appear before a Senate subcommittee on Tuesday and will tell lawmakers what she calls the “frightening truth” about Facebook.
Haugen’s accusations that the company prioritizes profit over user safety were still making headlines as Facebook services were down.
The revelations prompted U.S. Representative Alexandria Ocasio-Cortez to highlight the risks faced by countries that rely on the services for communication.
Facebook climbed as much as 1.3% to $330.33 in New York, paring a 4.9% slump Monday.
Facebook already faces numerous antitrust and privacy investigations across Europe as well as intense scrutiny of even small deals, such as its planned takeover of a customer-service software provider. The company was fined 225 million euros ($261 million) last month over WhatsApp data failings and faces separate antitrust probes from the European Commission and German competition watchdog Bundeskartellamt.
EU lawmakers will in coming months vote on new laws that would curb the ability of powerful Internet platforms such as Facebook to expand into new services.
The services disruption showed the “serious consequences” of dependency on one company for key communication channels, and that Facebook should never have been allowed to buy Instagram and WhatsApp, said Rasmus Andresen, a German Green member of the European Parliament.
“Everyone in the European Union as well as in the U.S. must realize now at the latest that we need strong regulations against quasi-monopolies,” Andresen said in a statement. “We need close transatlantic cooperation.”
The event spurred calls for a new digital “order” by Turkish President Recep Tayyip Erdogan, a man with little tolerance for political criticism on social media.
The hours-long shutdown showed how “fragile” social networks are, said Fahrettin Altun, his presidential communications director, urging a rapid development of “domestic and national” alternatives. “The problem we have seen showed us how our data are in danger, how quickly and easily our social liberties can be limited,” Altun said in a series of Twitter posts.
The nationalist Alternative for Germany party welcomed the disruption, with lawmaker Beatrix von Storch saying that she hopes competitors will benefit.
In Nigeria, the blackout silenced President Muhammadu Buhari’s communications team, government officials and governors in 36 states for six hours. The government has increasingly relied on Facebook to inform the public after Twitter’s services were blocked in Africa’s most populous country on June 5. A spokesman for the president’s office declined to comment.
Hungarian opposition politicians who use Facebook products to circumvent state-owned media outlets lamented that the company couldn’t be relied on as they campaign against Prime Minister Viktor Orban.
Facebook is “for us opposition politicians one of the last media outlets where we can talk to you and which isn’t totally dominated by” Fidesz, Orban’s political party, Budapest Mayor Gergely Karacsony said in a video posted on Tuesday. Problems with the platform threaten the ability to disseminate information, he said.
The outage forced some phone companies to take action. The Polish Play unit of Paris-based telecommunications company Iliad SA recorded an eightfold increase in the number of calls to its customer service between 6:30 p.m. and 7:30 p.m. local time, it said in a blog post on its website. It had to reconfigure its network to prevent an overload.
“This outage does show the over-dependence we have on a single company, and the need for diversity and greater competition,” Jim Killock, executive director of the Open Rights Group in London, said in an interview. “Their reliance on data-driven, attention-optimizing products is dangerous and needs to be challenged through interventions enabling greater competition.”
Racist Emojis Are The Latest Test For Facebook, Twitter Moderators
The industry says dealing with the pictographs is a technical challenge, but critics say the companies are making it harder than it has to be.
At a soccer game at Goodison Park in Liverpool in 1988, player John Barnes stepped away from his position and used the back of his heel to kick away a banana that had been thrown toward him. Captured in an iconic photo, the moment encapsulated the racial abuse that Black soccer players then faced in the U.K.
More than 30 years later, the medium has changed, yet the racism persists: After England lost to Italy this July in the final of the UEFA European Championship, Black players for the British side faced an onslaught of bananas.
Instead of physical fruit, these were emojis slung at their social media profiles, along with monkeys and other imagery. “The impact was as deep and as meaningful as when it was actual bananas,” says Simone Pound, director of equality, diversity, and inclusion for the U.K.’s Professional Footballers’ Association.
Facebook Inc. and Twitter Inc. faced wide criticism for taking too long to screen out the wave of racist abuse during this summer’s European championship.
The moment highlighted a long-standing issue: Despite spending years developing algorithms to analyze harmful language, social media companies often don’t have effective strategies for stopping the spread of hate speech, misinformation, and other problematic content on their platforms.
Emojis have emerged as a stumbling block. When Apple Inc. introduced emojis with different skin tones in 2015, the tech giant came under criticism for enabling racist commentary. A year later Indonesia’s government drew complaints after it demanded social networks remove LGBTQ-related emojis.
Some emojis, including the one depicting a bag of money, have been linked to anti-Semitism. Black soccer players have been frequently targeted: The Professional Footballers’ Association and data science company Signify conducted a study last year of racially abusive tweets directed at players and found that 29% included some form of emoji.
Over the past decade, the roughly 3,000 pictographs that constitute emoji language have been a vital part of online communication. Today it’s hard to imagine a text message conversation without them. The ambiguity that is part of their charm doesn’t come without problems, though.
A winking face can indicate a joke or a flirtation. Courts end up debating issues such as whether it counts as a threat to send someone an emoji of a pistol.
This matter is confusing to human lawyers, but it’s even more confounding for computer-based language models. Some of these algorithms are trained on databases that contain few emojis, says Hannah Rose Kirk, a doctoral researcher at the Oxford Internet Institute.
These models treat emojis as new characters, meaning the algorithms must start from scratch in analyzing their meaning based on context.
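To make that concrete, here is a minimal sketch, not any company’s production system, of why a text-only model can miss emoji abuse. Everything in it is invented for illustration: the weights, the token names, the messages. A classifier whose vocabulary was built from emoji-free training text collapses pictographs into an unknown token that carries no signal.

```python
# Minimal illustrative sketch: a toy word-level scorer whose vocabulary was
# built from emoji-free training text. Tokens it has never seen -- including
# emojis -- fall back to "<unk>", so an emoji-only insult looks harmless to it.
# All weights and examples below are hypothetical.

ABUSE_WEIGHTS = {
    "idiot": 0.9,
    "hate": 0.8,
    "<unk>": 0.0,   # unknown tokens contribute no signal
}

def toxicity_score(message: str) -> float:
    tokens = message.lower().split()
    # Any token outside the training vocabulary is collapsed to <unk>.
    weights = [ABUSE_WEIGHTS.get(tok, ABUSE_WEIGHTS["<unk>"]) for tok in tokens]
    return max(weights, default=0.0)

print(toxicity_score("you idiot"))   # 0.9 -- flagged
print(toxicity_score("🍌 🐒 🍌"))     # 0.0 -- the emojis read as <unk> and are missed
```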
“It’s a new emerging trend, so people are not aware of it as much, and the models lag behind humans,” says Lucy Vasserman, who’s the engineering manager for a team at Google’s Jigsaw, which develops algorithms to flag abusive speech online. What matters is “how frequently they appear in your test and training data.”
Her team is working on two new projects that could improve analysis on emojis, one that involves mining vast amounts of data to understand trends in language, and another that factors in uncertainty.
Tech companies have cited technical complexity to obscure more straightforward solutions to many of the most common abuses, according to critics. “Most usage isn’t ambiguous,” says Matthew Williams, director of Cardiff University’s HateLab.
“We need not just better AI going forward but bigger and better moderation teams.”
Emoji use has been underanalyzed relative to its importance to modern online communication, Kirk says. She found her way to studying the pictographs after earlier work on memes. “The thing we found really puzzling as researchers was, why are Twitter and Instagram and Google’s solutions not better at emoji-based hate?” she says.
Frustrated by the poor performance of existing algorithms at detecting threatening use of emojis, Kirk built her own model, using humans to help teach the algorithms to understand emojis rather than leaving software to learn on its own.
The result, she says, was far more accurate than the original algorithms developed by Jigsaw and other academics that her team tested.
“We demonstrated, with relatively low effort and relatively few examples, you can very effectively teach emojis,” she says.
Mixing humans with tech, along with simplifying the approach to moderating speech, has also been a winning formula for startup Respondology in Boulder, Colo., which offers its screening tools across Nascar, the NBA, and the NFL. It works with the Detroit Pistons, Denver Broncos, and leading English soccer teams.
Rather than relying on a complicated algorithm, the company allows teams to hide comments that include certain phrases and emojis with a blanket screen. “Every single client that comes to us, particularly the sport clients—leagues, teams, clubs, athletes—all ask about emojis in the first conversation,” says Erik Swain, Respondology’s president. “You almost don’t need AI training of your software to do it.”
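A blanket screen of that kind is simple enough to sketch in a few lines. The blocklist below is made up for illustration, not Respondology’s actual list: any comment containing a listed phrase or emoji is hidden, with no machine learning involved.

```python
# Minimal sketch of a "blanket screen": a client-defined blocklist of phrases
# and emojis, checked by plain substring matching. The entries are hypothetical.

BLOCKED_TERMS = {"🍌", "🐒", "🐵", "go home"}

def should_hide(comment: str) -> bool:
    text = comment.lower()
    # Hide the comment from public view if any blocked phrase or emoji appears.
    return any(term in text for term in BLOCKED_TERMS)

comments = ["Great game!", "🐒🐒🐒", "go home, you don't belong"]
visible = [c for c in comments if not should_hide(c)]
print(visible)   # ['Great game!']
```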
Facebook acknowledges that it told users incorrectly that certain emoji use during the UEFA European Championship this summer didn’t violate its policies when in fact it did. It says it’s begun automatically blocking certain strings of emojis associated with abusive speech, and it also allows users to specify which emojis they don’t want to see.
Twitter said in a statement that its rules against abusive posts include hateful imagery and emojis.
These actions may not be sufficient to quell critics. Professional athletes speaking out about the racist abuse they face has become yet another factor in the broader march toward potential government regulation of social media.
“We’ve all got hand-wringing and regret, but they didn’t do anything, which is why we have to legislate,” says Damian Collins, a U.K. Parliament member who’s leading work on an online safety bill.
“If people that have an interest in generating harmful content can see that the platforms are particularly ineffective at spotting the use of emojis, then we will see more and more emojis being used in that context.”
Facebook Says Its Rules Apply To All. Company Documents Reveal A Secret Elite That’s Exempt
A program known as XCheck has given millions of celebrities, politicians and other high-profile users special treatment, a privilege many abuse.
Mark Zuckerberg has publicly said Facebook Inc. allows its more than three billion users to speak on equal footing with the elites of politics, culture and journalism, and that its standards of behavior apply to everyone, no matter their status or fame.
In private, the company has built a system that has exempted high-profile users from some or all of its rules, according to company documents reviewed by The Wall Street Journal.
The program, known as “cross check” or “XCheck,” was initially intended as a quality-control measure for actions taken against high-profile accounts, including celebrities, politicians and journalists. Today, it shields millions of VIP users from the company’s normal enforcement process, the documents show.
Some users are “whitelisted”—rendered immune from enforcement actions—while others are allowed to post rule-violating material pending Facebook employee reviews that often never come.
At times, the documents show, XCheck has protected public figures whose posts contain harassment or incitement to violence, violations that would typically lead to sanctions for regular users. In 2019, it allowed international soccer star Neymar to show nude photos of a woman, who had accused him of rape, to tens of millions of his fans before the content was removed by Facebook.
Whitelisted accounts shared inflammatory claims that Facebook’s fact checkers deemed false, including that vaccines are deadly, that Hillary Clinton had covered up “pedophile rings,” and that then-President Donald Trump had called all refugees seeking asylum “animals,” according to the documents.
A 2019 internal review of Facebook’s whitelisting practices, marked attorney-client privileged, found favoritism to those users to be both widespread and “not publicly defensible.”
“We are not actually doing what we say we do publicly,” said the confidential review. It called the company’s actions “a breach of trust” and added: “Unlike the rest of our community, these people can violate our standards without any consequences.”
Despite attempts to rein it in, XCheck grew to include at least 5.8 million users in 2020, documents show. In its struggle to accurately moderate a torrent of content and avoid negative attention, Facebook created invisible elite tiers within the social network.
In describing the system, Facebook has misled the public and its own Oversight Board, a body that Facebook created to ensure the accountability of the company’s enforcement systems.
In June, Facebook told the Oversight Board in writing that its system for high-profile users was used in “a small number of decisions.”
In a written statement, Facebook spokesman Andy Stone said criticism of XCheck was fair, but added that the system “was designed for an important reason: to create an additional step so we can accurately enforce policies on content that could require more understanding.”
He said Facebook has been accurate in its communications to the board and that the company is continuing to work to phase out the practice of whitelisting. “A lot of this internal material is outdated information stitched together to create a narrative that glosses over the most important point: Facebook itself identified the issues with cross check and has been working to address them,” he said.
The documents that describe XCheck are part of an extensive array of internal Facebook communications reviewed by The Wall Street Journal. They show that Facebook knows, in acute detail, that its platforms are riddled with flaws that cause harm, often in ways only the company fully understands.
Moreover, the documents show, Facebook often lacks the will or the ability to address them.
This is the first in a series of articles based on those documents and on interviews with dozens of current and former employees.
At least some of the documents have been turned over to the Securities and Exchange Commission and to Congress by a person seeking federal whistleblower protection, according to people familiar with the matter.
Facebook’s stated ambition has long been to connect people. As it expanded over the past 17 years, from Harvard undergraduates to billions of global users, it struggled with the messy reality of bringing together disparate voices with different motivations—from people wishing each other happy birthday to Mexican drug cartels conducting business on the platform.
Those problems increasingly consume the company.
Time and again, the documents show, in the U.S. and overseas, Facebook’s own researchers have identified the platform’s ill effects, in areas including teen mental health, political discourse and human trafficking. Time and again, despite congressional hearings, its own pledges and numerous media exposés, the company didn’t fix them.
Sometimes the company held back for fear of hurting its business. In other cases, Facebook made changes that backfired. Even Mr. Zuckerberg’s pet initiatives have been thwarted by his own systems and algorithms.
The documents include research reports, online employee discussions and drafts of presentations to senior management, including Mr. Zuckerberg. They aren’t the result of idle grumbling, but rather the formal work of teams whose job was to examine the social network and figure out how it could improve.
They offer perhaps the clearest picture thus far of how broadly Facebook’s problems are known inside the company, up to the CEO himself. And when Facebook speaks publicly about many of these issues, to lawmakers, regulators and, in the case of XCheck, its own Oversight Board, it often provides misleading or partial answers, masking how much it knows.
One area in which the company hasn’t struggled is profitability. In the past five years, during which it has been under intense scrutiny and roiled by internal debate, Facebook has generated profit of more than $100 billion. The company is currently valued at more than $1 trillion.
For ordinary users, Facebook dispenses a kind of rough justice in assessing whether posts meet the company’s rules against bullying, sexual content, hate speech and incitement to violence. Sometimes the company’s automated systems summarily delete or bury content suspected of rule violations without a human review.
At other times, material flagged by those systems or by users is assessed by content moderators employed by outside companies.
Mr. Zuckerberg estimated in 2018 that Facebook gets 10% of its content removal decisions wrong, and, depending on the enforcement action taken, users might never be told what rule they violated or be given a chance to appeal.
Users designated for XCheck review, however, are treated more deferentially. Facebook designed the system to minimize what its employees have described in the documents as “PR fires”—negative media attention that comes from botched enforcement actions taken against VIPs.
If Facebook’s systems conclude that one of those accounts might have broken its rules, they don’t remove the content—at least not right away, the documents indicate. They route the complaint into a separate system, staffed by better-trained, full-time employees, for additional layers of review.
Most Facebook employees were able to add users into the XCheck system, the documents say, and a 2019 audit found that at least 45 teams around the company were involved in whitelisting. Users aren’t generally told that they have been tagged for special treatment. An internal guide to XCheck eligibility cites qualifications including being “newsworthy,” “influential or popular” or “PR risky.”
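As a rough illustration of the two-track flow the documents describe, here is a hypothetical sketch; the function names, the queue and the 90-point threshold (a figure that appears in one internal example later in this article) are stand-ins, not Facebook’s code. Flagged content from an XChecked account is routed to a separate review queue rather than being removed outright.

```python
# Illustrative sketch only: the two-track enforcement flow described in the
# documents, with invented names and a made-up score scale. Not Facebook code.

REMOVAL_THRESHOLD = 90      # hypothetical rule-violation score out of 100
xcheck_queue = []           # posts held for a separate, better-staffed review

def enforce(post_id: str, violation_score: int, author_is_xchecked: bool) -> str:
    if violation_score < REMOVAL_THRESHOLD:
        return "leave up"
    if author_is_xchecked:
        # VIP content is routed to a secondary queue instead of being removed,
        # and can stay visible if that review never happens.
        xcheck_queue.append(post_id)
        return "queued for secondary review"
    return "removed"

print(enforce("post-1", 95, author_is_xchecked=False))  # removed
print(enforce("post-2", 95, author_is_xchecked=True))   # queued for secondary review
```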
Neymar, the Brazilian soccer star whose full name is Neymar da Silva Santos Jr., easily qualified. With more than 150 million followers, Neymar’s account on Instagram, which is owned by Facebook, is one of the most popular in the world.
After a woman accused Neymar of rape in 2019, he posted Facebook and Instagram videos defending himself—and showing viewers his WhatsApp correspondence with his accuser, which included her name and nude photos of her. He accused the woman of extorting him.
Facebook’s standard procedure for handling the posting of “nonconsensual intimate imagery” is simple: Delete it. But Neymar was protected by XCheck.
For more than a day, the system blocked Facebook’s moderators from removing the video. An internal review of the incident found that 56 million Facebook and Instagram users saw what Facebook described in a separate document as “revenge porn,” exposing the woman to what an employee referred to in the review as abuse from other users.
“This included the video being reposted more than 6,000 times, bullying and harassment about her character,” the review found.
Facebook’s operational guidelines stipulate that not only should unauthorized nude photos be deleted, but that people who post them should have their accounts deleted.
“After escalating the case to leadership,” the review said, “we decided to leave Neymar’s accounts active, a departure from our usual ‘one strike’ profile disable policy.”
Neymar denied the rape allegation, and no charges were filed against him. The woman was charged by Brazilian authorities with slander, extortion and fraud. The first two charges were dropped, and she was acquitted of the third.
A spokesperson for Neymar said the athlete adheres to Facebook’s rules and declined to comment further.
The lists of those enrolled in XCheck were “scattered throughout the company, without clear governance or ownership,” according to a “Get Well Plan” from last year. “This results in not applying XCheck to those who pose real risks and on the flip-side, applying XCheck to those that do not deserve it (such as abusive accounts, persistent violators). These have created PR fires.”
In practice, Facebook appeared more concerned with avoiding gaffes than mitigating high-profile abuse. One Facebook review in 2019 of major XCheck errors showed that of 18 incidents investigated, 16 involved instances where the company erred in actions taken against prominent users.
Four of the 18 touched on inadvertent enforcement actions against content from Mr. Trump and his son, Donald Trump Jr. Other flubbed enforcement actions were taken against the accounts of Sen. Elizabeth Warren, fashion model Sunnaya Nash, and Mr. Zuckerberg himself, whose live-streamed employee Q&A had been suppressed after an algorithm classified it as containing misinformation.
Historically, Facebook contacted some VIP users who violated platform policies and provided a “self-remediation window” of 24 hours to delete violating content on their own before Facebook took it down and applied penalties.
Mr. Stone, the company spokesman, said Facebook has phased out that perk, which was still in place during the 2020 elections. He declined to say when it ended.
At times, pulling content from a VIP’s account requires approval from senior executives on the communications and public-policy teams, or even from Mr. Zuckerberg or Chief Operating Officer Sheryl Sandberg, according to people familiar with the matter.
In June 2020, a Trump post came up during a discussion about XCheck’s hidden rules that took place on the company’s internal communications platform, called Facebook Workplace. The previous month, Mr. Trump said in a post: “When the looting starts, the shooting starts.”
A Facebook manager noted that an automated system, designed by the company to detect whether a post violates its rules, had scored Mr. Trump’s post 90 out of 100, indicating a high likelihood it violated the platform’s rules.
For a normal user post, such a score would result in the content being removed as soon as a single person reported it to Facebook. Instead, as Mr. Zuckerberg publicly acknowledged last year, he personally made the call to leave the post up.
“Making a manual decision like this seems less defensible than algorithmic scoring and actioning,” the manager wrote.
Mr. Trump’s account was covered by XCheck before his two-year suspension from Facebook in June. So too are those belonging to members of his family, Congress and the European Union Parliament, along with mayors, civic activists and dissidents.
While the program included most government officials, it didn’t include all candidates for public office, at times effectively granting incumbents in elections an advantage over challengers. The discrepancy was most prevalent in state and local races, the documents show, and employees worried Facebook could be subject to accusations of favoritism.
Mr. Stone acknowledged the concern but said the company had worked to address it. “We made multiple efforts to ensure that both in federal and nonfederal races, challengers as well as incumbents were included in the program,” he said.
The program covers pretty much anyone regularly in the media or who has a substantial online following, including film stars, cable talk-show hosts, academics and online personalities with large followings. On Instagram, XCheck covers accounts for popular animal influencers including “Doug the Pug.”
In practice, most of the content flagged by the XCheck system faced no subsequent review, the documents show.
Even when the company does review the material, enforcement delays like the one on Neymar’s posts mean content that should have been prohibited can spread to large audiences. Last year, XCheck allowed posts that violated its rules to be viewed at least 16.4 billion times, before later being removed, according to a summary of the program in late December.
Facebook recognized years ago that the enforcement exemptions granted by its XCheck system were unacceptable, with protections sometimes granted to what it called abusive accounts and persistent violators of the rules, the documents show.
Nevertheless, the program expanded over time, with tens of thousands of accounts added just last year.
In addition, Facebook has asked fact-checking partners to retroactively change their findings on posts from high-profile accounts, waived standard punishments for propagating what it classifies as misinformation and even altered planned changes to its algorithms to avoid political fallout.
“Facebook currently has no firewall to insulate content-related decisions from external pressures,” a September 2020 memo by a Facebook senior research scientist states, describing daily interventions in its rule-making and enforcement process by both Facebook’s public-policy team and senior executives.
A December memo from another Facebook data scientist was blunter: “Facebook routinely makes exceptions for powerful actors.”
Mr. Zuckerberg has consistently framed his position on how to moderate controversial content as one of principled neutrality. “We do not want to become the arbiters of truth,” he told Congress in a hearing last year.
Facebook’s special enforcement system for VIP users arose from the fact that its human and automated content-enforcement systems regularly flub calls.
Part of the problem is resources. While Facebook has trumpeted its spending on an army of content moderators, it still isn’t capable of fully processing the torrent of user-generated content on its platforms.
Even assuming adequate staffing and a higher accuracy rate, making millions of moderation decisions a day would still involve numerous high-profile calls with the potential for bad PR.
Facebook wanted a system for “reducing false positives and human workload,” according to one internal document. The XCheck system was set up to do that.
To minimize conflict with average users, the company has long kept its notifications of content removals opaque. Users often describe on Facebook, Instagram or rival platforms what they say are removal errors, often accompanied by a screenshot of the notice they receive.
Facebook pays close attention. One internal presentation about the issue last year was titled “Users Retaliating Against Facebook Actions.”
“Literally all I said was happy birthday,” one user posted in response to a botched takedown, according to the presentation.
“Apparently Facebook doesn’t allow complaining about paint colors now?” another user complained after Facebook flagged as hate speech the declaration that “white paint colors are the worst.”
“Users like to screenshot us at our most ridiculous,” the presentation said, noting they often are outraged even when Facebook correctly applies its rules.
If getting panned by everyday users is unpleasant, inadvertently upsetting prominent ones is potentially embarrassing.
Last year, Facebook’s algorithms misinterpreted a years-old post from Hosam El Sokkari, an independent journalist who once headed the BBC’s Arabic News service, according to a September 2020 “incident review” by the company.
In the post, he condemned Osama bin Laden, but Facebook’s algorithms misinterpreted the post as supporting the terrorist, which would have violated the platform’s rules. Human reviewers erroneously concurred with the automated decision and denied Mr. El Sokkari’s appeal.
As a result, Mr. El Sokkari’s account was blocked from broadcasting a live video shortly before a scheduled public appearance. In response, he denounced Facebook on Twitter and the company’s own platform in posts that received hundreds of thousands of views.
Facebook swiftly reversed itself, but shortly afterward mistakenly took down more of Mr. El Sokkari’s posts criticizing conservative Muslim figures.
Mr. El Sokkari responded on Twitter: “Facebook Arabic support team has obviously been infiltrated by extremists,” an assertion that prompted more scrambling inside Facebook.
After seeking input from 41 employees, Facebook said in a report about the incident that XCheck remained too often “reactive and demand-driven.” The report concluded that XCheck should be expanded further to include prominent independent journalists such as Mr. El Sokkari, to avoid future public-relations black eyes.
As XCheck mushroomed to encompass what the documents said are millions of users world-wide, reviewing all the questionable content became a fresh mountain of work.
In response to what the documents describe as chronic underinvestment in moderation efforts, many teams around Facebook chose not to enforce the rules with high-profile accounts at all—the practice referred to as whitelisting. In some instances, whitelist status was granted with little record of who had granted it and why, according to the 2019 audit.
“This problem is pervasive, touching almost every area of the company,” the 2019 review states, citing the audit. It concluded that whitelists “pose numerous legal, compliance, and legitimacy risks for the company and harm to our community.”
A plan to fix the program, described in a document the following year, said that blanket exemptions and posts that were never subsequently reviewed had become the core of the program, meaning most content from XCheck users wasn’t subject to enforcement. “We currently review less than 10% of XChecked content,” the document stated.
Mr. Stone said the company improved that ratio during 2020, though he declined to provide data.
The leeway given to prominent political accounts on misinformation, which the company in 2019 acknowledged in a limited form, baffled some employees responsible for protecting the platforms. High-profile accounts posed greater risks than regular ones, researchers noted, yet were the least policed.
“We are knowingly exposing users to misinformation that we have the processes and resources to mitigate,” said a 2019 memo by Facebook researchers, called “The Political Whitelist Contradicts Facebook’s Core Stated Principles.” Technology website The Information previously reported on the document.
In one instance, political whitelist users were sharing articles from alternative-medicine websites claiming that a Berkeley, Calif., doctor had revealed that chemotherapy doesn’t work 97% of the time. Fact-checking organizations have debunked the claims, noting that the science is misrepresented and that the doctor cited in the article died in 1978.
In an internal comment in response to the memo, Samidh Chakrabarti, an executive who headed Facebook’s Civic Team, which focuses on political and social discourse on the platform, voiced his discomfort with the exemptions.
“One of the fundamental reasons I joined FB Is that I believe in its potential to be a profoundly democratizing force that enables everyone to have an equal civic voice,” he wrote. “So having different rules on speech for different people is very troubling to me.”
Other employees said the practice was at odds with Facebook’s values.
“FB’s decision-making on content policy is influenced by political considerations,” wrote an economist in the company’s data-science division.
“Separate content policy from public policy,” recommended Kaushik Iyer, then lead engineer for Facebook’s civic integrity team, in a June 2020 memo.
BuzzFeed previously reported on elements of these documents.
That same month, employees debated on Workplace, the internal platform, about the merits of going public with the XCheck program.
As the transparency proposal drew dozens of “like” and “love” emojis from colleagues, the Civic Team’s Mr. Chakrabarti looped in the product manager overseeing the XCheck program to offer a response.
The fairness concerns were real and XCheck had been mismanaged, the product manager wrote, but “we have to balance that with business risk.” Since the company was already trying to address the program’s failings, the best approach was “internal transparency,” he said.
On May 5, Facebook’s Oversight Board upheld the suspension of Mr. Trump, whom it accused of creating a risk of violence in connection with the Jan. 6 riot at the Capitol in Washington. It also criticized the company’s enforcement practices, recommending that Facebook more clearly articulate its rules for prominent individuals and develop penalties for violators.
As one of 19 recommendations, the board asked Facebook to “report on the relative error rates and thematic consistency of determinations made through the cross check process compared with ordinary enforcement procedures.”
A month later, Facebook said it was implementing 15 of the 19 recommendations. The one about disclosing cross check data was one of the four it said it wouldn’t adopt.
“It’s not feasible to track this information,” Facebook wrote in its responses. “We have explained this product in our newsroom,” it added, linking to a 2018 blog post that declared “we remove content from Facebook, no matter who posts it, when it breaks our standards.” Facebook’s 2019 internal review had previously cited that same blog post as misleading.
The XCheck documents show that Facebook misled the Oversight Board, said Kate Klonick, a law professor at St. John’s University. The board was funded with an initial $130 million commitment from Facebook in 2019, and Ms. Klonick was given special access by the company to study the group’s formation and its processes.
“Why would they spend so much time and money setting up the Oversight Board, then lie to it?” she said of Facebook after reviewing XCheck documentation at the Journal’s request. “This is going to completely undercut it.”
In a written statement, a spokesman for the board said it “has expressed on multiple occasions its concern about the lack of transparency in Facebook’s content moderation processes, especially relating to the company’s inconsistent management of high-profile accounts.”
Facebook is trying to eliminate the practice of whitelisting, the documents show and the company spokesman confirmed. The company set a goal of eliminating total immunity for “high severity” violations of FB rules in the first half of 2021. A March update reported that the company was struggling to rein in additions to XCheck.
“VIP lists continue to grow,” a product manager on Facebook’s Mistakes Prevention Team wrote. She announced a plan to “stop the bleeding” by blocking Facebook employees’ ability to enroll new users in XCheck.
One potential solution remains off the table: holding high-profile users to the same standards as everyone else.
“We do not have systems built out to do that extra diligence for all integrity actions that can occur for a VIP,” her memo said. To avoid making mistakes that might anger influential users, she noted, Facebook would instruct reviewers to take a gentle approach.
“We will index to assuming good intent in our review flows and lean into ‘innocent until proven guilty,’ ” she wrote.
The plan, the manager wrote, was “generally” supported by company leadership.
Facebook Knows Instagram Is Toxic For Teen Girls, Company Documents Show
Its own in-depth research shows a significant teen mental-health issue that Facebook plays down in public.
About a year ago, teenager Anastasia Vlasova started seeing a therapist. She had developed an eating disorder, and had a clear idea of what led to it: her time on Instagram.
She joined the platform at 13, and eventually was spending three hours a day entranced by the seemingly perfect lives and bodies of the fitness influencers who posted on the app.
“When I went on Instagram, all I saw were images of chiseled bodies, perfect abs and women doing 100 burpees in 10 minutes,” said Ms. Vlasova, now 18, who lives in Reston, Va.
Around that time, researchers inside Instagram, which is owned by Facebook Inc., were studying this kind of experience and asking whether it was part of a broader phenomenon. Their findings confirmed some serious problems.
“Thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse,” the researchers said in a March 2020 slide presentation posted to Facebook’s internal message board, reviewed by The Wall Street Journal. “Comparisons on Instagram can change how young women view and describe themselves.”
For the past three years, Facebook has been conducting studies into how its photo-sharing app affects its millions of young users. Repeatedly, the company’s researchers found that Instagram is harmful for a sizable percentage of them, most notably teenage girls.
“We make body image issues worse for one in three teen girls,” said one slide from 2019, summarizing research about teen girls who experience the issues.
“Teens blame Instagram for increases in the rate of anxiety and depression,” said another slide. “This reaction was unprompted and consistent across all groups.”
Among teens who reported suicidal thoughts, 13% of British users and 6% of American users traced the desire to kill themselves to Instagram, one presentation showed.
Expanding its base of young users is vital to the company’s more than $100 billion in annual revenue, and it doesn’t want to jeopardize their engagement with the platform.
More than 40% of Instagram’s users are 22 years old and younger, and about 22 million teens log onto Instagram in the U.S. each day, compared with five million teens logging onto Facebook, where the number of young users has been shrinking for a decade, the materials show.
On average, teens in the U.S. spend 50% more time on Instagram than they do on Facebook.
“Instagram is well positioned to resonate and win with young people,” said a researcher’s slide posted internally. Another post said: “There is a path to growth if Instagram can continue their trajectory.”
In public, Facebook has consistently played down the app’s negative effects on teens, and hasn’t made its research public or available to academics or lawmakers who have asked for it.
“The research that we’ve seen is that using social apps to connect with other people can have positive mental-health benefits,” CEO Mark Zuckerberg said at a congressional hearing in March 2021 when asked about children and mental health.
In May, Instagram head Adam Mosseri told reporters that research he had seen suggests the app’s effects on teen well-being are likely “quite small.”
In a recent interview, Mr. Mosseri said: “In no way do I mean to diminish these issues.…Some of the issues mentioned in this story aren’t necessarily widespread, but their impact on people may be huge.”
He said he believes Facebook was late to realizing there were drawbacks to connecting people in such large numbers. “I’ve been pushing very hard for us to embrace our responsibilities more broadly,” he said.
He said the research into the mental-health effects on teens was valuable, and that Facebook employees ask tough questions about the platform. “For me, this isn’t dirty laundry. I’m actually very proud of this research,” he said.
Some features of Instagram could be harmful to some young users, and they aren’t easily addressed, he said. He added: “There’s a lot of good that comes with what we do.”
What Facebook Knows
The Instagram documents form part of a trove of internal communications reviewed by the Journal, on areas including teen mental health, political discourse and human trafficking. They offer an unparalleled picture of how Facebook is acutely aware that the products and systems central to its business success routinely fail.
The documents also show that Facebook has made minimal efforts to address these issues and plays them down in public.
The company’s research on Instagram, the deepest look yet at what the tech giant knows about its impact on teens and their mental well-being, represents one of the clearest gaps revealed in the documents between Facebook’s understanding of itself and its public position.
Its effort includes focus groups, online surveys and diary studies in 2019 and 2020. It also includes large-scale surveys of tens of thousands of people in 2021 that paired user responses with Facebook’s own data about how much time users spent on Instagram and what they saw there.
The researchers are Facebook employees in areas including data science, marketing and product development who work on a range of issues related to how users interact with the platform. Many have backgrounds in computer science, psychology and quantitative and qualitative analysis.
In five presentations over 18 months to this spring, the researchers conducted what they called a “teen mental health deep dive” and follow-up studies.
They came to the conclusion that some of the problems were specific to Instagram, and not social media more broadly. That is especially true concerning so-called social comparison, which is when people assess their own value in relation to the attractiveness, wealth and success of others.
“Social comparison is worse on Instagram,” states Facebook’s deep dive into teen girl body-image issues in 2020, noting that TikTok, a short-video app, is grounded in performance, while users on Snapchat, a rival photo and video-sharing app, are sheltered by jokey filters that “keep the focus on the face.” In contrast, Instagram focuses heavily on the body and lifestyle.
The features that Instagram identifies as most harmful to teens appear to be at the platform’s core.
The tendency to share only the best moments, a pressure to look perfect and an addictive product can send teens spiraling toward eating disorders, an unhealthy sense of their own bodies and depression, March 2020 internal research states.
It warns that the Explore page, which serves users photos and videos curated by an algorithm, can send users deep into content that can be harmful.
“Aspects of Instagram exacerbate each other to create a perfect storm,” the research states.
The research has been reviewed by top Facebook executives, and was cited in a 2020 presentation given to Mr. Zuckerberg, according to the documents.
At a congressional hearing this March, Mr. Zuckerberg defended the company against criticism from lawmakers about plans to create a new Instagram product for children under 13. When asked if the company had studied the app’s effects on children, he said, “I believe the answer is yes.”
In August, Sens. Richard Blumenthal and Marsha Blackburn in a letter to Mr. Zuckerberg called on him to release Facebook’s internal research on the impact of its platforms on youth mental health.
In response, Facebook sent the senators a six-page letter that didn’t include the company’s own studies. Instead, Facebook said there are many challenges with conducting research in this space, saying, “We are not aware of a consensus among studies or experts about how much screen time is ‘too much,’ ” according to a copy of the letter reviewed by the Journal.
Facebook also told the senators that its internal research is proprietary and “kept confidential to promote frank and open dialogue and brainstorming internally.”
A Facebook spokeswoman said the company welcomed productive collaboration with Congress and would look for opportunities to work with external researchers on credible studies.
“Facebook’s answers were so evasive—failing to even respond to all our questions—that they really raise questions about what Facebook might be hiding,” Sen. Blumenthal said in an email. “Facebook seems to be taking a page from the textbook of Big Tobacco—targeting teens with potentially dangerous products while masking the science in public.”
Mr. Mosseri said in the recent interview, “We don’t send research out to regulators on a regular basis for a number of reasons.” He added Facebook should figure out a way to share high-level overviews of what the company is learning, and that he also wanted to give external researchers access to Facebook’s data.
He said the company’s plan for the Instagram kids product, which state attorneys general have objected to, is still in the works.
When told of Facebook’s internal research, Jean Twenge, a professor of psychology at San Diego State University who has published research finding that social media is harmful for some kids, said it was a potential turning point in the discussion about how social media affects teens.
“If you believe that R.J. Reynolds should have been more truthful about the link between smoking and lung cancer, then you should probably believe that Facebook should be more upfront about links to depression among teen girls,” she said.
Race For Teen Users
When Facebook paid $1 billion for Instagram in 2012, the photo-sharing app was a tiny startup with 13 employees and already a hit. That year, for the first time, Facebook observed a decline in the number of teens using its namesake Facebook product, according to the documents. The company would come to see Instagram as Facebook’s best bet for growth among teens.
Facebook had been tracking the rise of buzzy features on competitor apps, including Snapchat, and in 2016 directed employees to focus on winning what they viewed as a race for teen users, according to former Instagram executives.
Instagram made photos the app’s focus, with filters that made it easy for users to edit images. It later added videos, feeds of algorithmically chosen content and tools that touched up people’s faces.
Before long, Instagram became the online equivalent of the high-school cafeteria: a place for teens to post their best photos, find friends, size each other up, brag and bully.
Facebook’s research indicates Instagram’s effects aren’t harmful for all users. For most teenagers, the effects of “negative social comparison” are manageable and can be outweighed by the app’s utility as a fun way for users to express themselves and connect with friends, the research says.
But a mounting body of Facebook’s own evidence shows Instagram can be damaging for many.
In one study of teens in the U.S. and U.K., Facebook found that more than 40% of Instagram users who reported feeling “unattractive” said the feeling began on the app. About a quarter of the teens who reported feeling “not good enough” said the feeling started on Instagram. Many also said the app undermined their confidence in the strength of their friendships.
Instagram’s researchers noted that those struggling with the platform’s psychological effects weren’t necessarily logging off. Teens regularly reported wanting to spend less time on Instagram, the presentations note, but lacked the self-control to do so.
“Teens told us that they don’t like the amount of time they spend on the app but feel like they have to be present,” an Instagram research manager explained to colleagues, according to the documents. “They often feel ‘addicted’ and know that what they’re seeing is bad for their mental health but feel unable to stop themselves.”
During the isolation of the pandemic, “if you wanted to show your friends what you were doing, you had to go on Instagram,” said Destinee Ramos, 17, of Neenah, Wis. “We’re leaning towards calling it an obsession.”
Ms. Ramos and her friend Isabel Yoblonski, 18, believed this posed a potential health problem to their community, so they decided to survey their peers as a part of a national science competition. They found that of the 98 students who responded, nearly 90% said social media negatively affected their mental health.
In focus groups, Instagram employees heard directly from teens who were struggling. “I felt like I had to fight to be considered pretty or even visible,” one teen said of her experience on Instagram.
After looking through photos on Instagram, “I feel like I am too big and not pretty enough,” another teen told Facebook’s researchers. “It makes me feel insecure about my body even though I know I am skinny.”
“For some people it might be tempting to dismiss this as teen girls being sad,” said Dr. Twenge. But “we’re looking at clinical-level depression that requires treatment. We’re talking about self harm that lands people in the ER.”
‘Kick In The Gut’
Eva Behrens, a 17-year-old student at Redwood High School in Marin County, Calif., said she estimates half the girls in her grade struggle with body-image concerns tied to Instagram. “Every time I feel good about myself, I go over to Instagram, and then it all goes away,” she said.
When her classmate Molly Pitts, 17, arrived at high school, she found her peers using Instagram as a tool to measure their relative popularity. Students referred to the number of followers their peers had as if the number was stamped on their foreheads, she said.
Now, she said, when she looks at her number of followers on Instagram, it is most often a “kick in the gut.”
For years, there has been little debate among medical doctors that for some patients, Instagram and other social media exacerbate their conditions.
Angela Guarda, director for the eating-disorders program at Johns Hopkins Hospital and an associate professor of psychiatry in the Johns Hopkins School of Medicine, said it is common for her patients to say they learned from social media tips for how to restrict food intake or purge.
She estimates that Instagram and other social-media apps play a role in the disorders of about half her patients.
“It’s the ones who are most vulnerable or are already developing a problem—the use of Instagram and other social media can escalate it,” she said.
Lindsay Dubin, 19, recently wanted to exercise more. She searched Instagram for workouts and found some she liked. Since then the app’s algorithm has filled her Explore page with photos of how to lose weight, the “ideal” body type and what she should and shouldn’t be eating. “I’m pounded with it every time I go on Instagram,” she said.
Jonathan Haidt, a social psychologist at New York University’s Stern School of Business and co-author of the bestseller “The Coddling of the American Mind,” has been concerned about the effects of social media on teens since he started studying it in 2015.
He has twice spoken with Mr. Zuckerberg about Facebook’s effects on teen mental health, the first time after the CEO reached out in 2019.
Mr. Zuckerberg indicated that on the issues of political polarization and teen mental health, he believed that the research literature was contradictory and didn’t point clearly to any harmful causal effects, according to Mr. Haidt. He said he felt Mr. Zuckerberg at the time was “a partisan, but curious.”
“I asked Mark to help us out as parents,” he said. “Mark said he was working on it.”
In January 2020, Facebook invited Mr. Haidt to its Menlo Park, Calif., headquarters, where Mr. Mosseri and Instagram staff briefed him on the platform’s efforts to combat bullying and reduce social pressure on the platform.
Mr. Haidt said he found those efforts sincere and laudable but warned that they likely weren’t enough to battle what he believes is a mounting public-health epidemic.
“It was not suggested to me that they had internal research showing a problem,” he said.
The Facebook spokeswoman declined to comment on the interaction.
Some Instagram researchers said it was challenging to get other colleagues to grasp the gravity of their findings. Plus, “We’re standing directly between people and their bonuses,” one former researcher said.
Instead of referencing their own data showing the negative effects of Instagram, Facebook executives in public have often pointed to studies from the Oxford Internet Institute that have shown little correlation between social-media use and depression.
Other studies also found discrepancies between the amount of time people say they use social media and the amount of time they actually use such services. Mr. Mosseri has pointed to these studies as evidence for why research using self-reported data might not be accurate.
Facebook has in the past been a donor to a researcher at the Oxford institute, which is part of the research and teaching department of Britain’s Oxford University.
Oxford’s lead researcher on the studies, Andrew Przybylski, who said he didn’t receive funding from Facebook, said companies like Facebook need to be more open about their research. “The data exists within the tech industry,” he said. “Scientists just need to be able to access it for neutral and independent investigation.”
In an interview, Mr. Przybylski said, “People talk about Instagram like it’s a drug. But we can’t study the active ingredient.”
Facebook executives have struggled to find ways to reduce Instagram’s harm while keeping people on the platform, according to internal presentations on the topic.
For years, Facebook experimented with hiding the tallies of “likes” that users see on their photos. Teens told Facebook in focus groups that “like” counts caused them anxiety and contributed to their negative feelings.
When Facebook tested a tweak to hide the “likes” in a pilot program they called Project Daisy, it found it didn’t improve life for teens. “We didn’t observe movements in overall well-being measures,” Facebook employees wrote in a slide they presented to Mr. Zuckerberg about the experiment in 2020.
Nonetheless, Facebook rolled out the change as an option for Facebook and Instagram users in May 2021 after senior executives argued to Mr. Zuckerberg that it could make them look good by appearing to address the issue, according to the documents.
“A Daisy launch would be received by press and parents as a strong positive indication that Instagram cares about its users, especially when taken alongside other press-positive launches,” Facebook executives wrote in a discussion about how to present their findings to Mr. Zuckerberg.
When Facebook rolled out Project Daisy, Mr. Mosseri acknowledged publicly that the feature didn’t actually change much about how users felt.
In the interview, he said he doesn’t think there are clear-cut solutions to fixing Instagram. He said he is cautiously optimistic about tools Instagram is developing to identify people who are in trouble and to try to “nudge” them toward more positive content.
Facebook made two researchers available to discuss their work. They said they are also testing a way to ask users if they want to take a break from Instagram. Part of the challenge, the researchers said, is they struggle to determine which users face the greatest risk. The researchers also said that the causality of some of their findings was unclear, and noted some of the studies had small sample sizes.
“I think anything and everything should be on the table,” Mr. Mosseri said. “But we have to be honest and embrace that there’s trade-offs here. It’s not as simple as turning something off and thinking it gets better, because often you can make things worse unintentionally.”
Zeroed In On Selfies
In the internal documents, Facebook’s researchers also suggested Instagram could surface “fun” filters rather than ones around beautification. They zeroed in on selfies, particularly filtered ones that allow users to touch-up their faces. “Sharing or viewing filtered selfies in stories made people feel worse,” the researchers wrote in January.
Sylvia Colt-Lacayo, a 20-year-old at Stanford University, said she recently tried out a face filter that thinned her cheeks and made them pink. But then Ms. Colt-Lacayo realized the filter had minimized the cheeks she inherited from her Nicaraguan father and made them look more European. That gave her “a bitter taste in my mouth,” she said.
Ms. Colt-Lacayo uses a wheelchair, and in the past Instagram made her feel like she didn’t look the way she was supposed to, or do the things that other teen girls on the app were doing, she said.
She said she began following people who use wheelchairs, or who are chronically ill or refer to other disabilities, and the platform became a place she could see images of older disabled people just being happy.
In March, the researchers said Instagram should reduce exposure to celebrity content about fashion, beauty and relationships, while increasing exposure to content from close friends, according to a slide deck they uploaded to Facebook’s internal message board.
A current employee, in comments on the message board, questioned that idea, saying celebrities with perfect lives were key to the app. “Isn’t that what IG is mostly about?” he wrote. Getting a peek at “the (very photogenic) life of the top 0.1%? Isn’t that the reason why teens are on the platform?”
A now-former executive questioned the idea of overhauling Instagram to avoid social comparison. “People use Instagram because it’s a competition,” the former executive said. “That’s the fun part.”
To promote more positive use of Instagram, the company has partnered with nonprofits to promote what it calls “emotional resilience,” according to the documents. Videos produced as part of that effort include recommending that teens consider daily affirmations to remind themselves that “I am in control of my experience on Instagram.”
Facebook’s researchers identified the over-sexualization of girls as something that weighs on the mental health of the app’s users. Shevon Jones, a licensed clinical social worker based in Atlanta, said this can affect Black girls especially because people often assume Black girls are older than they are and critique the bodies of Black girls more frequently.
“What girls often see on social media are girls with slimmer waists, bigger butts and hips, and it can lead them to have body image issues,” Ms. Jones said. “It’s a very critical time and they are trying to figure out themselves and everything around them.”
Teen boys aren’t immune. In the deep dive Facebook’s researchers conducted into mental health in 2019, they found that 14% of boys in the U.S. said Instagram made them feel worse about themselves. In their report on body image in 2020, Facebook’s researchers found that 40% of teen boys experience negative social comparison.
“I just feel on the edge a lot of the time,” a teen boy in the U.S. told Facebook’s researchers. “It’s like you can be called out for anything you do. One wrong move. One wrong step.”
Many of the teens interviewed for this article said they didn’t want Instagram to disappear. Ms. Vlasova, who no longer uses Instagram, said she is skeptical Facebook’s executives have tried hard enough to make their platform less toxic.
“I had to live with my eating disorder for five years, and people on Instagram are still suffering,” she said.