Trump Used Facebook To Try And Convince 3.5 Million Black Americans Not To Vote In 2016
Donald Trump’s 2016 presidential election campaign sought to deter millions of Black Americans in battleground states from voting by targeting them with negative Hillary Clinton ads on Facebook, an investigation broadcast Monday by Channel 4 News in London claims.
Channel 4 News says it obtained a leaked database of voter profiles used by the Trump campaign that included a category called “deterrence,” meaning voters who were likely to cast their ballots for Clinton or to not vote at all.
These 3.5 million voters, who were disproportionately Black, were targeted with “dark” ads to dissuade them from backing Clinton, according to the report. The report credits Cambridge Analytica, the Trump-connected data analysis firm that gained unauthorized access to tens of millions of Facebook profiles, with orchestrating the strategy.
Trump campaign communications director Tim Murtaugh dismissed the report as “fake news,” and said the president has built “a relationship of trust with African American voters,” with initiatives on criminal justice reform, opportunity zones and a recent announcement to invest $500 billion in the Black community.
Sen. Kamala Harris, the Democratic nominee for vice president, struck back Monday, saying Republicans approved laws to suppress the vote and that Trump’s appointees to the federal bench have issued rulings restricting the eligibility of voters. She urged voters to oppose Trump and Senate Republicans in the upcoming election, to prevent more conservative judges from being appointed to federal courts.
“Here’s The Thing: He knows he can’t win if the people vote,” Harris said. “Donald Trump is weak so he is throwing up every roadblock he can to try and suppress the vote. We the people cannot let him get away with it.”
According to the Roper Center for Public Opinion Research’s voting data from five previous presidential elections, the Democratic candidate has averaged 91% of the Black vote.
Facebook has made significant changes to how politicians and their campaigns can target users of its platforms since the 2016 election.
In a statement Monday, the company said: “What happened with Cambridge Analytica couldn’t happen today.”
“We have 35,000 people working to ensure the integrity of our platform, created a political ads library to make political advertising more transparent than anywhere else, and have protected more than 200 elections worldwide,” Facebook said in a statement. “We also have rules prohibiting voter suppression and are running the largest voting information campaign in American history.”
Last week when asked about voter suppression efforts targeted at the Black community, Facebook told USA TODAY that bad actors have changed their tactics during the 2020 election cycle.
During and after the 2016 election, Russian agents bought thousands of ads seeking to further inflame tensions already roiled by the Trump campaign’s racially charged rhetoric.
“The sophisticated actors that are using deception to target these communities are often smart enough not to share what anyone would consider to be voter suppression. They are very carefully choosing content that doesn’t violate our policies,” Nathaniel Gleicher, Facebook’s head of cybersecurity policy, said. “We certainly do see and know that foreign actors like Russia are continuing to target these communities. But what we have seen them try to do so far, although they are still trying, has not been that effective, this cycle around.”
Facebook And Twitter Are Concerned About What Is Going To Happen After Election Day
Misinformation about mail-in votes in the days following the election concerns social-media executives.
Securing democracy on social media may be hardest after Americans vote in the presidential election.
In what is shaping up as a newfangled nightmare in their efforts to stop election interference, Facebook Inc., Twitter Inc. and others are as concerned about misinformation and other issues in the days after the U.S. election as they are in the months preceding it, including Election Day.
“How do we ensure that voters have accurate information?” as election results are counted in the days following the Nov. 3 presidential election, Nathaniel Gleicher, Facebook’s head of security policy, asked during a Tuesday webinar on protecting the upcoming elections.
He did not elaborate, but hinted there could be attempts by politically motivated groups to question the legitimacy of votes, including mail-in ballots.
President Donald Trump has endlessly claimed without evidence that voting by mail — expected to increase dramatically because of the pandemic — is susceptible to large-scale fraud. (Nearly one in four voters cast 2016 presidential ballots that way.) The specter of a weeks-long debate over the presidential winner in 2020 is drawing some parallels to the 2000 contest that ended up being determined by the U.S. Supreme Court.
Yoel Roth, head of site integrity at Twitter, echoed Gleicher’s concerns, but he added that social-media companies are better positioned this time around than four years ago. He said the micro-blogging service is promoting “credible, authoritative information” during political-party conventions, presidential and vice-presidential debates, and election results in November.
Gleicher added that Facebook is detecting more “bad actors” than in elections in 2018 and 2016, through a greater understanding of the risk, and through coordination with academia, media, and state and local officials.
Their fears come amid concerted efforts by Facebook, Twitter and others to tap the brakes on misinformation concerning the U.S. elections.
Facebook, which has repeatedly acknowledged its part in being exploited by foreign and domestic adversaries during the 2016 presidential election with fake news and misinformation, this month launched a Voting Information Center to help users with accurate, easy-to-find information about voting wherever they live.
The labels will link to a new voter information hub similar to one about COVID-19 that Facebook says has been seen by billions of people globally. They will read, “Visit the Voting Information Center for election resources and official updates.”
Facebook expects the voter hub to reach at least 160 million people in the U.S. In July, the company began adding similar links to misleading posts by politicians, including Trump, about voting.
Twitter, meanwhile, has said it will roll out measures on new tools, policies and voting resources, as well as expand its “civic integrity policies” to address misrepresentations about mail-in voting. In January, the company created a feature that lets users report voter suppression and misinformation.
Among other companies, Snap Inc. has unveiled a “Voter Registration Mini” tool so users can register to vote directly in Snapchat. It also posted a “Voter Guide” with information about topics such as voting by mail and voter registration.
At the same time, states such as California are offering registered voters the chance to track the status of their ballots online until they are counted.
Facebook Needs Trump Even More Than Trump Needs Facebook
Employees fear Zuckerberg’s commitment to free speech is more about protecting the president than the company’s ideals.
In late 2019, during one of Mark Zuckerberg’s many trips to Washington to defend Facebook in front of Congress, he stopped for a private dinner with Donald Trump and offered the president a titillating statistic. “I’d like to congratulate you,” Zuckerberg said. “You’re No. 1 on Facebook.”
At least that’s the story as told by Trump, on Rush Limbaugh’s radio show in January. Trump is technically not the top politician by followers on Facebook.
That would be former President Barack Obama. But as the country’s most powerful newsmaker and the person in charge of a government that’s been aggressively pursuing antitrust cases against big tech companies, he does have leverage over Zuckerberg. So the chief executive officer could be forgiven for flattering Trump.
Any moment that the president is happy with Facebook is a moment he’s not pursuing hostile regulation—or more likely, sparking a bad news cycle.
Facebook Inc. declined to comment on whether Zuckerberg indeed told Trump he was No. 1 and, if so, in what category he meant, but it’s adamant that its founder isn’t playing favorites.
After the New York Times speculated that the dinner between Zuckerberg and Trump might have involved a deal over whether Facebook would fact-check the president, Zuckerberg said he was simply stopping by the White House because he was in town. “The whole idea of a deal is pretty ridiculous,” he told Axios in July.
Longtime current and former employees say this denial may be a bit misleading. Zuckerberg isn’t easily influenced by politics. But what he does care about—more than anything else perhaps—is Facebook’s ubiquity and its potential for growth.
The result, critics say, has been an alliance of convenience between the world’s largest social network and the White House, in which Facebook looks the other way while Trump spreads misinformation about voting that could delegitimize the winner or even swing the election.
“Facebook, more so than other platforms, has gone out of its way to not ruffle feathers in the current administration,” says Jesse Lehrich, co-founder of Accountable Tech, an organization making recommendations to tech companies on public-policy issues. “At best, you could say it’s willful negligence.”
The pattern hasn’t been confined to U.S. politics. A Facebook executive in India was accused in August of granting special treatment to a lawmaker from Prime Minister Narendra Modi’s ruling Bharatiya Janata Party who’d called for violence against Rohingya Muslim immigrants.
(It was only after the Wall Street Journal reported on the posts that the company banned the lawmaker, T. Raja Singh.) A memo from a former employee, published by BuzzFeed on Sept. 14, detailed how Facebook had ignored or delayed taking action against governments using fake accounts to mislead their citizens. “I have blood on my hands,” she wrote.
“You can continuously see the challenge of them trying to have these kinds of broad principles around free expression and stopping harm, and then that mixing with the realpolitik of trying to keep the executive branch happy,” said Alex Stamos, the former top security executive at Facebook, at a conference in June.
Facebook executives say their only loyalty is to free speech. “The idea that there is systematic or deliberate political bias in our decisions I really don’t think is borne out by the facts,” says Nick Clegg, head of policy and communications. “Of course, there are isolated cases.”
“Mark is wrong, and I will endeavor in the loudest possible way to change his mind”
Facebook executives often point out that the company was seen as overly friendly to Democrats during the Obama years and that it takes plenty of heat from the Right. In the summer of 2016, the tech website Gizmodo reported that Facebook had been directing employees to suppress pro-Trump sites in its trending news section.
The story led to a scandal over supposed anticonservative bias at social media companies, and it was in response to this backlash that Facebook started to drift rightward. The company flew conservative commentators to its headquarters in Menlo Park, Calif., to reassure them that there was no need for concern about how Facebook operated.
The romancing continued after Election Day as Facebook celebrated Trump’s victory. In January 2017, the company co-hosted an inauguration party with the Daily Caller, which has published writers who have espoused white nationalist views. The lemons in the catered drinks were stamped with Facebook logos.
An internal report from around the same time touted Trump’s superior strategy with Facebook ads, noting that Trump followed advice and training from the company that his opponent, Hillary Clinton, had rejected. Trump “got elected because he ran the single best digital ad campaign I’ve ever seen from any advertiser. Period,” Andrew Bosworth, who currently runs the company’s efforts in augmented and virtual reality and who was head of ads at the time, wrote in a memo to employees in 2018.
Facebook’s mostly liberal staff saw its Republican relationship-building as the price of doing business, but as the company weathered public scrutiny—about Russia’s spread of election misinformation, as well as its failure to stop Cambridge Analytica’s data-gathering operation—something changed.
Among the rank and file, and even among some executives, the shift went from grudging professional admiration for Trump to a realization that he was using Facebook to attack many of the causes employees cared about. During the Supreme Court confirmation hearings for Brett Kavanaugh, during which Kavanaugh was accused of sexual assault, head of policy Joel Kaplan was shown seated directly behind the nominee, a close friend.
The days that followed, several Facebook employees say, were the first time they saw colleagues cry openly at Zuckerberg’s weekly question-and-answer sessions.
After the Kavanaugh hearings, employees began to notice that Kaplan, George W. Bush’s former deputy chief of staff, seemed more concerned about critiques of bias from conservatives than from liberals. Facebook’s internal data showed that conservative voices are consistently the most popular on the site.
(On a recent Monday morning, the top 10 Facebook posts, by interactions—such as likes, shares, and comments—included eight from conservative pundits and news outlets, one from Ivanka Trump, and one from NPR.)
But in 2019, under pressure from the same group who’d visited before Trump’s election, Kaplan commissioned a yearlong independent study that concluded the opposite. “There is still significant work to be done to satisfy the concerns we heard from conservatives,” it said.
Historically, Facebook had given executives in charge of its products immense leeway in making decisions, but suddenly the company’s policy team seemed to have veto power. In January 2018, Zuckerberg asked to reduce the prevalence of news in users’ feeds, especially from incendiary and untrustworthy outlets. The product team tweaked the news feed, but then members of Kaplan’s team reviewed test simulations.
They noted that the product change was causing traffic to drop more severely for right-wing outlets such as Fox News and Breitbart News, according to a person familiar with the incident who spoke with Bloomberg Businessweek on the condition of anonymity. Of course, this was because Fox and Breitbart tend to publish more incendiary content—Breitbart famously once had a “black crime” section on the site.
So the engineers were ordered to tweak the algorithm a little more until it punished liberal outlets as much as conservative ones, before releasing the update to 2.5 billion users. Fox maintained its position as the top publisher on Facebook.
This kind of review wasn’t unusual, employees say. The policy team would regularly investigate whether changes would affect right-leaning outlets while seeming less concerned about the effects on more left-leaning ones. A Facebook spokesman denies that Kaplan’s pushback was partisan and says the algorithm was not changed as a result of his concerns.
As employees started to worry about Facebook’s proximity to the Right, Facebook’s M-Team—“M” for management—seemed intent on pushing the company even closer to it. At one point, the group flew to New York for a leadership off-site at the headquarters of News Corp., which like Fox News is controlled by Rupert Murdoch and his family.
One executive, Instagram co-founder Kevin Systrom (who left the company in 2018), refused to attend, citing Fox’s polarizing influence, according to a person familiar with the matter. The company says it regularly meets with media outlets. Systrom didn’t respond to a request for comment.
Eventually, Trump pushed the limits. In the early morning hours of May 29, he posted a message to his 29.5 million Facebook followers, warning protesters in Minneapolis that they were risking violent retribution. “When the looting starts, the shooting starts,” the president wrote. It was a formulation that’s long been associated with police brutality. A similar threat was used by the segregationist presidential candidate George Wallace.
Trump had said the same on Twitter, which quickly hid his post, saying it violated rules against glorifying violence. Zuckerberg waited, leaving the post up for hours while consulting his top lieutenants to discuss what to do. Chief Operating Officer Sheryl Sandberg, Kaplan, and Clegg all weighed in. So did Maxine Williams, the company’s head of diversity.
And then, that afternoon, another very influential figure piped up: the president himself, who spoke to Zuckerberg by phone. Zuckerberg said later that he told Trump he disagreed with the post and found it unhelpful. But, crucially, he also didn’t think it went against Facebook’s rules.
Trump’s post remained on Facebook, sparking a virtual walkout. Employees began criticizing Zuckerberg openly and leaking to the press. “Mark is wrong,” tweeted Ryan Freitas, director of product design for the company’s news feed, “and I will endeavor in the loudest possible way to change his mind.”
A flurry of stories appeared over the next two months detailing instances that reinforced the suspicions about the alliance between Facebook and Trump.
For instance, media outlets reported that the president had no negative hashtags associated with his name on Instagram, while Joe Biden had lots; that a Facebook employee was fired after complaining that the company seemed to be allowing far-right pundits, such as Diamond and Silk, to break rules about misinformation; and that an investigation into Ben Shapiro, whose site the Daily Wire routinely broke the rules to boost its audience, was thwarted by Kaplan’s policy group.
Employees also noticed a difference between Zuckerberg’s relationship with Trump and his interactions with his Democratic opponent. In a June 29 letter addressed to Clegg, Biden’s campaign manager, Jen O’Malley Dillon, pointed out three instances where she felt Trump had shared voting misinformation and asked if Facebook “will apply its policies impartially.”
In a separate letter on July 10, campaign general counsel Dana Remus accused Facebook of hypocrisy. “Your company’s actions fail to live up to its stated commitments,” she wrote. Zuckerberg hasn’t spoken to Biden all year.
After the looting-and-shooting post, Facebook’s policy team contacted the White House to explain the company’s process, which led to Trump’s phone call with Zuckerberg. During the delay, the post got millions of views. Around the same time, Biden posted an open letter asking Facebook to stem the tide of misinformation. Facebook clapped back in public.
“The people’s elected representatives should set the rules, and we will follow them,” it said in a blog post. “There is an election coming in November and we will protect political speech, even when we strongly disagree with it.” The message was clear: We listen to the government in charge.
By now, Facebook’s failures during the 2016 election are well known. A group backed by the Russian government used the company’s products to promote Trump and disparage Clinton, according to the report issued by special counsel Robert Mueller. For instance, Russian operatives created fake accounts aimed at Black voters, seen as a key part of Clinton’s base. They told people who followed these accounts they shouldn’t bother voting, or that they should vote by text message, which isn’t possible. In all, the Russian posts reached more than 150 million Americans.
The job of rooting out fake content created by foreign governments falls to Facebook’s election integrity and cybersecurity groups, which are separate from the policy team and, in theory, nonpartisan. Facebook has gotten better at finding these campaigns. Last year alone, it removed 50 networks of accounts like the Russian one from 2016. But some former employees have complained of being ignored or sidelined based on political concerns.
In 2018, Yaël Eisenstat, a former CIA intelligence officer, worked on a plan to use software to scan ads for language that could give false information about voting procedures. The proposal was rejected, Eisenstat was told, because the problem wasn’t urgent enough. She left that November.
The following year, Facebook did make rules against giving incorrect information about how to vote, but then it froze when Trump actually put the policy to the test. On May 20, a week before the looting-and-shooting post, the president claimed that officials in Michigan and Nevada were sending out mail-in ballots illegally, which was not true.
A few days later, on May 26, Trump posted that California was mailing ballots to “anyone living in the state,” another lie. The posts stayed up, and Zuckerberg went on Fox News to criticize Twitter, which had fact-checked similar posts. An outside civil rights auditor later concluded that Facebook failed to enforce its own policies in both instances.
Instead Zuckerberg came up with something new—what he called “the largest voting information campaign in U.S. history,” a plan to register 4 million voters. Facebook also designed a “voting information center,” a webpage with facts about the election compiled from state authorities.
The social media network has been promoting the page atop every user’s Facebook and Instagram feed and attaches a link to it with every post on the service that mentions the election process. The hub “ensures that people can see the post and hear from their elected officials, warts and all, but also have accurate context about what the experts are saying,” Nathaniel Gleicher, Facebook’s head of cybersecurity policy, told reporters in August.
But the links below Trump’s ever-more-frequent postings about voting do not warn Facebook users if the information is untrue—they simply advertise an information center. Moreover, after Republicans complained about the voter registration efforts, Facebook seemed to back off further, according to emails obtained by the Tech Transparency Project.
The company had planned a two-day promotion over the July 4th holiday on Facebook, as well as on Instagram and Messenger, but then cut that down to a one-day push on Facebook alone.
Facebook has said that the suggestion that the company scaled down its voter registration plans for political reasons is “pure fabrication.” Another spokesman, replying to a Twitter user who suggested the same, responded with a picture of a woman in a tin foil hat.
The company, of course, knows lots about conspiracy theorists, who thrive on the site. There’s QAnon, a far-right movement that espouses a complex theory involving a cabal of elites engaged in child sex trafficking. The FBI identified it as a potential domestic terrorism threat in August 2019, but Facebook only started removing accounts in May.
The company also initially ignored posts tied to a Kenosha, Wis., militia in which users discussed shooting Black Lives Matter protesters. The militia’s event page was flagged more than 400 times, but moderators allowed it to stay up, according to BuzzFeed. Not long after the posts began appearing, a 17-year-old with an assault rifle shot and killed two people at a protest in the city.
Even as employees accuse Facebook of aiding Trump’s reelection effort, the administration has kept up the pressure on the company. In late May the president signed an executive order threatening to revoke the immunity enjoyed by social media companies, including Facebook, under Section 230 of the Communications Decency Act of 1996, if they showed political bias.
The order was an apparent threat to social networks that censored posts from Trump and his allies. Facebook responded by saying the move would restrict free speech.
Trump’s threats haven’t yet materialized into anything crippling—at least not for Facebook. The U.S. Department of Justice is preparing a case against the company’s main rival, Google, which it’s expected to file before Election Day. Meanwhile, Trump has forced another key Facebook competitor, Chinese-owned TikTok, to find a U.S. buyer or face ejection from the country.
So far, Zuckerberg’s cultivation of Trump has seemed to keep Facebook safe from the president’s ire. But Trump is trailing by 7 points or so nationally, and it’s likely that a Biden administration would seek to regulate Facebook. In July, Zuckerberg got a preview of the Democrats’ playbook when he faced the House Judiciary subcommittee on antitrust, alongside all the other major tech executives.
Representatives’ questions for him were pointed, prosecutorial, and informed by thousands of internal emails and chat logs that seemed to suggest a path for regulators to argue that the company should be broken up or penalized in some other way. “All of these companies engage in behavior which is deeply disturbing and requires Congress to take action,” David Cicilline, the Rhode Island representative who’s the panel’s chairman, told Bloomberg in August, the same day Facebook’s stock hit a record high.
He said he was especially struck by the “casual way” Zuckerberg admitted to buying Instagram and WhatsApp to eliminate them as competitors. Facebook disputes Cicilline’s characterization and says neither acquisition has harmed competition.
Biden, meanwhile, has said he also favors removing Section 230 protections and holding executives personally liable. “I’ve never been a big Zuckerberg fan,” he told the New York Times in January. Zuckerberg seems keenly aware of the risks of a Trump loss. He’s told employees that Facebook is likely to fare better under Republicans, according to people familiar with the conversations.
This isn’t to say Facebook wouldn’t adapt if Biden wins in November. In June, Zuckerberg announced he’d rehired Chris Cox, former chief product officer, who’d been active in Democratic politics since leaving Facebook last year. Cox is widely considered to be the most likely candidate to become CEO if the Facebook founder ever stepped down.
“Nothing stays the same, of course not,” says Clegg, the VP for communications, when asked how Facebook would adjust to a future Biden administration. “We’ll adapt to the environment in which we’re operating.”
Trump Loses Social Media Megaphone As Facebook, Twitch Act
Facebook Inc. and other internet giants stripped Donald Trump of his social media megaphone after his online posts encouraged the violent rioters who stormed the U.S. Capitol, an unprecedented move that will leave the president without one of his favorite and most volatile powers in his final days in office.
Facebook on Thursday said it was extending a ban on Trump’s posts “indefinitely,” or for at least two weeks, until President-elect Joe Biden takes over. Snap Inc. has also banned the president from posting on its Snapchat app until further notice, while Twitter put the president on a 12-hour hold after requiring him to delete tweets that supported the rioters, and warned that he may be banned permanently.
Twitch, the live-streaming service owned by Amazon.com Inc., also disabled Trump’s account indefinitely.
“We believe the risks of allowing the President to continue to use our service during this period are simply too great,” Chief Executive Officer Mark Zuckerberg said in a post Thursday. The restrictions in place will be extended “until the peaceful transition of power is complete.” Biden is to be sworn in on Jan. 20.
Facebook’s move extended an initial 24-hour ban — its first-ever ban on the president — on both its main social network and photo-sharing app Instagram. Twitter Inc. asked Trump to remove three tweets, including one video message of Trump expressing love for the insurgents and calling the election “fraudulent.”
On Google’s YouTube, if Trump makes false claims about the election in another video, he’ll get a “strike,” which will temporarily prevent him from uploading new content or live-streaming, a spokesperson said. Channels that receive three strikes in the same 90-day period will be permanently removed from the site.
Trump has used his social media accounts to start debates, attack rivals, set policies, spread misinformation and provoke fights during his four years in office. The platforms have become potent tools for him, outside the traditional media and government structures that usually act as checks and balances on a president.
The insurrection in Washington on Wednesday caused public outrage, much of it targeted at social media companies, which were blamed for continuing to provide the president with a digital pulpit from which to incite violence. Trump used Twitter as well as other services, including Facebook and YouTube, to urge supporters to strike out, and he remained silent for hours as the mob scene grew dangerous.
Many observers said the actions were long overdue. For years, social media critics have called on the companies to get tougher on Trump. Twitter and Facebook last year had begun to remove some of the president’s content or put it behind a warning screen if it contained misinformation or incendiary language, but neither company had ever completely suspended his account, saying his messages had inherent news value.
Zuckerberg on Thursday said that although Facebook has allowed Trump’s posts with few restrictions for years, the current context is “fundamentally different.”
Trump’s decision to “use his platform to condone rather than condemn the actions of his supporters at the Capitol building has rightly disturbed people in the U.S. and around the world,” Zuckerberg said.
The social network removed those statements because it judged that their effect, and “likely their intent,” would be to provoke further violence.
Silencing the president on their networks underscores the power these platforms have to shape political discussion and real-world events, based on their choices about what to amplify or tamp down.
Members of Congress have called technology executives to several public hearings to discuss whether they’re using their power responsibly and explore the need for further regulation, especially to remove legal liability protections granted by the Communications Decency Act.
“These isolated actions are both too late and not nearly enough,” Senator Mark Warner, a Democrat from Virginia, said in a statement Thursday. “These platforms have served as core organizing infrastructure for violent, far right groups and militia movements for several years now.” Senator Joe Manchin, a Democrat from West Virginia, called on Twitter to extend its ban on Trump for at least 13 days, until after inauguration, “in the interest of public safety.”
A Twitter spokesperson confirmed late Wednesday that Trump had deleted the tweets as the company required, meaning the president had access to his account again Thursday. He didn’t tweet until after 7 p.m. New York time when he posted a video condemning the riot and saying he was focused on a smooth transition to the new administration.
Zuckerberg addressed Facebook workers in a companywide question-and-answer session, in which he repeated his reasoning and condemned Trump’s actions. Other Facebook leaders put the decision in context of the work the company has tried to do to prevent violence — for instance, taking down groups organizing armed protests.
Amazon’s Twitch streaming service, meanwhile, has suspended the president’s channel until at least the inauguration, at which point it will reassess the situation, a spokeswoman said.
The sanctions against Trump started to spread beyond social media on Thursday. Shopify Inc., an e-commerce platform, said it pulled all digital stores affiliated with Trump, including the official retail site of the Trump Organization, Trumpstore.com.
“Shopify does not tolerate actions that incite violence,” a spokeswoman said by email. Trump’s actions violate the company’s Acceptable Use Policy, “which prohibits promotion or support of organizations, platforms or people that threaten or condone violence to further a cause,” she said. “As a result, we have terminated stores affiliated with President Trump.”
Since Trump’s inauguration, the nation has rarely gone so long without hearing from the president via the internet. When he came down with Covid-19 in October, observers took it as a sign he was truly sick when the tweets stopped for a few hours.
Critics have long called on the platforms to enforce their own rules against Trump. After Biden’s inauguration, Trump becomes a private citizen and is more vulnerable to permanent bans if he breaks social media rules.
The Web’s Dark Recesses Can Save Social Media From Itself
Forcing the most extreme political discourse off the mainstream platforms of Facebook and YouTube would be a good thing.
Facebook Inc. has long defended its lackadaisical approach to misinformation by arguing that if it imposed stricter conditions, such content would simply proliferate elsewhere. Far better, the reasoning went, to keep the conversation where it could at least be monitored.
But pushing false information and incitement to violence to the darker recesses of the web would be better.
Facebook’s line of reasoning has long seemed disingenuous, not least because of how many people turn to the social media giant. It has 2.5 billion monthly users across its platforms, which include Instagram and WhatsApp. Alongside Alphabet Inc.’s YouTube, these properties represent the greatest agglomeration of eyeballs that the world has ever known.
Where else can misinformation find such a massive audience?
Facebook’s banning of Donald Trump, alongside a similar decision by Twitter Inc., means we might now find out. The first alternative for many was Parler, a social media app that boasts of being a bastion of free speech and is backed by the billionaire Mercer family.
But such free speech came at a cost: Its failure to moderate content organizing the violence at the Capitol last week prompted Amazon.com Inc. to suspend Parler’s use of its web-hosting services, while Apple Inc. and Google removed its app from their mobile stores. Trump has since said he may build his own social network as Parler scrambles to get back on its feet.
Whatever happens to Parler, forcing the most outlandish strands of political discourse off the mainstream platforms would be a good thing. To boost user engagement, social media companies tend to reward provocative content with greater exposure while also deploying algorithms that personalize user feeds.
The result is an engine that incubates and accelerates radicalization, one capable of turning moderates into extremists.
Facebook is now taking down all mentions of “stop the steal,” the slogan used by U.S. election conspiracy theorists, while Twitter has banned more than 70,000 QAnon accounts.
If QAnon, electoral fraud conspiracists and flat-earthers are encouraged to move elsewhere — to Parler, for instance, or Gab, a site reportedly frequented by white supremacists — then it might reduce the number of people sucked out of the mainstream and into their conspiratorial vortexes.
Web users would have to actively seek out Covid-19 or anti-vaccination misinformation, rather than happening across it organically on their Facebook or YouTube feeds.
Such an approach would not, of course, end online radicalization, according to Dipayan Ghosh, author of “Terms of Disservice: How Silicon Valley Is Destructive by Design.” Indeed, it could even push those already inside the bubble into an even greater frenzy. But “it’s the right thing to do because it removes extremist views from the mainstream,” he said.
The main platforms needn’t worry about getting overtaken. For all of Facebook Chief Executive Officer Mark Zuckerberg’s assertions that competition in social media is fierce, the advertising giant continues to grow user numbers, revenue and profit. Its lead is not easily surmountable.
Parler’s peak daily active user count was just 3.4 million globally, back in November, and it had only 1.6 million daily users last week, according to app analytics firm Apptopia. That’s 0.09% of the 1.8 billion people who log into a Facebook service every day.
Rivals will only start to threaten the dominance of Facebook and YouTube if they’re able to build sustainable business models.
That means securing advertising dollars, which is not straightforward given that brands are less likely to want to blazon ads alongside troublesome content.
It also means strictly managing costs. Google and Apple have already made that more difficult by stipulating that Parler must adhere to their content moderation requirements if the app is to return to their stores. In other words, the app needs to hire a stack of content moderators.
That’s the right requirement, given the evidence that last week’s riot, and further violence, were planned on the platform.
Facebook should also have built those costs into its business early on, but it opted for low overheads initially in order to scale quickly.
Pushing more extreme political discourse out of the mainstream would formalize the content bubbles that essentially already exist. It would be a far cry from the 1990s cyber-utopian vision of the internet as a village green or an agora for the free and open exchange of ideas. But it would be for the best.