On Friday, 27 September 2019, Roger McNamee, Evgeny Morozov, Max Schrems and Shalini Randeria will be talking about Facebook and how it’s changing our societies on the panel “Social Networks or Social Nightmares?” at the Vienna Humanities Festival.
Where: Globe Marx/Marx Halle
When: 19:00 to 20:30
Register here: www.viennacontemporary.at
More info: www.humanitiesfestival.at
Get ready for the panel by reading Roger McNamee’s 2018 analysis of Facebook.
My story with Facebook began in early 2006. Mark Zuckerberg, then only 22, was looking for advice. I had been a technology investor for more than two decades and had no stake in the outcome.
When he told me his story, I predicted he would soon get a billion-dollar offer from either Microsoft or Yahoo, and everyone would advise him to take it. I thought he should say no. At two years old, Facebook was still far from its first profit. But I was convinced that Mark had created a game-changing platform that would eventually be bigger than Google was at the time – the first social network combining true identity with scalable technology. The real value would come when busy adults, parents and grandparents, started using the network to keep in touch…
My little speech only took a few minutes. What ensued was the most painful silence of my professional career. It turned out Yahoo had already made that billion-dollar offer and everyone – from the board, to the staff, to his parents – was telling him to take it. Instead, he took my advice, and so began a four-year mentoring relationship. I later became an investor, and helped him recruit Sheryl Sandberg as chief operating officer.
In my 35-year career in investing, I have never made a bigger contribution to a company’s success than at Facebook. It’s my proudest accomplishment. I also became an expert in using the platform by marketing my rock band, Moonalice, through a Facebook page. As the administrator, I learned to maximize the organic reach of my posts and use small amounts of advertising dollars to extend and target that reach. Our page was among the highest engagement fan pages on the platform. Facebook was working.
That is, until early in 2016. The Democratic primary was just getting under way in New Hampshire, when I noticed a flood of viciously misogynistic anti-Clinton memes originating from Facebook groups supporting Bernie Sanders. I knew how to build engagement organically on Facebook. This was not organic. It appeared to be well organized, with an advertising budget, and probably not from the Sanders campaign. Was Facebook being hijacked?
A month later I noticed an unrelated but equally disturbing news item. A consulting firm was revealed to be scraping data about people interested in the Black Lives Matter movement and selling it to police departments. Only after the harm had already been done did Facebook cut off the company’s access to the information. That got my attention. Here was a bad actor violating the platform’s terms of service, doing a lot of harm, and then being slapped on the wrist.
Facebook wasn’t paying attention.
Meanwhile, the flood of anti-Clinton memes continued, going viral to a degree that didn’t seem at all organic. At the same time, something similar was going on across the Atlantic.
When Britons voted to leave the European Union in June 2016, most observers were stunned. The polls had predicted a victory for the “Remain” campaign. And common sense made it hard to believe that British voters would do something so obviously contrary to their self-interest. But neither common sense nor the polling data fully accounted for a crucial new factor: the power of social platforms to amplify negative messages.
You Are the Product
Facebook, Google and other social media platforms make their money from advertising. As with all ad-supported businesses, that means advertisers are the true customers, while the audience is the product. Until the past decade, media were locked into a broadcast model. Success with advertisers depended on content that would appeal to the largest possible audience. But no distribution media could expect to hold a consumer’s attention for more than a few hours. TVs weren’t mobile. Laptops were mobile, but awkward. Newspapers and books were mobile and not awkward, but relatively cerebral. Movie theaters were fun, but inconvenient.
On computers, internet platforms were at a disadvantage. Their proprietary content couldn’t compete with traditional media and their delivery medium, the PC, was generally only usable at a desk. Their one advantage – a wealth of personal data – was not enough to overcome the disadvantages. So web advertising had to be underpriced.
Smartphones changed the advertising game completely. It took only a few years for this all-purpose content delivery system to find its way into billions of hands effectively around the clock. Media became a battle to hold users’ attention as long as possible. Here Facebook and Google had a prohibitive advantage over traditional media: With their vast reservoirs of real-time data on two billion users, they could personalize content, monopolizing user attention on smartphones and making the platforms uniquely attractive to advertisers. Why pay a newspaper in the hopes of catching a portion of its audience, when you can pay Facebook to reach exactly the people you want?
Whenever you log into Facebook, there are millions of posts the platform could show you. The key to its business model is the use of algorithms, driven by individual user data, to show you stuff you’re more likely to react to. And much of it is negative. Fear and anger, it turns out, produce a lot more engagement than joy. The result is that the algorithms favor sensation over substance. Of course, this is nothing new, hence the old news adage: “If it bleeds, it leads.” But for traditional media, this was constrained by the limitations of delivery systems.
Not so internet platforms on smartphones. Here are billions of individual channels, each of which can be pushed further into negativity and extremism without the risk of alienating other audiences. To the contrary: The platforms help people self-segregate into like-minded filter bubbles, reducing the risk of exposure to challenging ideas. People become resistant to facts that do not conform with their beliefs.
Heart Over Head
It took Brexit for me to see the dynamic. It seemed likely that Facebook might have had a big impact on the vote because one side’s message was perfect for the algorithms and the other’s wasn’t. The “Leave” campaign made an absurd promise – that savings from leaving the European Union would refinance the National Health System – while exploiting xenophobia by casting Brexit as the best way to protect English culture and jobs from immigrants. It was too-good-to-be-true nonsense mixed with fear-mongering.
Meanwhile, the “Remain” campaign was making an appeal to reason. Leave’s crude, emotional message would have been shared far more than Remain’s, turbocharging its reach. In addition, many Leave supporters were probably less wealthy and therefore cheaper for advertisers to target: The price of Facebook (and Google) ads is determined by auction, and the cost of targeting more upscale consumers gets bid up by actual businesses trying to sell them things.
As a consequence, Facebook was a much cheaper and more effective platform for Leave in terms of cost per user reached. And filter bubbles would ensure that people on the Leave side would rarely have their beliefs challenged. Facebook’s model may have had the power to reshape an entire continent.
It was becoming increasingly clear that bad actors were exploiting an unguarded platform, and that Facebook’s algorithms were favoring negative messages, distorting political results. Meanwhile, the press revealed that the Russians were behind the server hack at the Democratic National Committee and that Trump’s campaign manager had ties to Russian oligarchs close to Vladimir Putin.
In late October, I wrote to Mark Zuckerberg and Sheryl Sandberg, hoping to make them aware of the problems so they could fix them. They “appreciated” my reaching out, but were unwilling to accept that there was a systemic issue. Facebook was not a media company, they said, and therefore was not responsible for the actions of third parties.
Then came the U.S. election. The next day, I lost it: There was a flaw in Facebook’s business model, I told them. The platform was being exploited by a range of bad actors, including supporters of extremism; yet management claimed the company was not responsible. The brand was at risk of becoming toxic. I urged Zuckerberg and Sandberg to protect the platform and its users.
The last interaction I had with Facebook was in early February 2017. By then there was increasing evidence that the Russians had used a variety of methods to interfere in our election.
I started looking for allies.
In April 2017, I found Tristan Harris, formerly the design ethicist at Google. An expert in persuasive technology, he described the techniques tech platforms used to create addiction, which they exploit to increase profits. He called it “brain hacking.”
Parrots in an Echo Chamber
The most important tool is the filter bubble. The use of algorithms to give consumers “what they want” leads to an unending stream of posts that confirm each user’s existing beliefs. On Facebook, it’s your news feed, while on Google it’s your customized search results. Continuous reinforcement tends to entrench those beliefs, making them more extreme and resistant to challenge. Facebook takes the concept one step further with its “groups” feature, which encourages like-minded users to congregate around shared interests. While ostensibly a benefit to users, the larger benefit goes to advertisers, who can target audiences even more effectively.
The problems were inherent in the attention-based, algorithm-driven advertising business model. And what I suspected was that Russia’s meddling in 2016 was only a prelude. The level of political discourse, already in the gutter, was going to get even worse. We hoped to trigger a national conversation about the role of internet monopolies in our society, our economy and our politics.
Beginning in New York in May, we met with journalists and activist groups like the ACLU, focusing on issues of addiction, threats to democracy and monopoly power. Meeting with members of Congress in July, we shared our hypothesis that Russian interference dated back years, amplifying polarizing issues like immigration, white supremacy, gun rights and secession. (We already knew that the California secession site had been hosted in Russia.) Facebook’s algorithms would have favored Trump’s crude message and the anti-Clinton conspiracy theories that thrilled his supporters, with the likely consequence that Trump backers paid less than Clinton for Facebook advertising per person reached.
Once users were in groups, we hypothesized, the Russians used fake American troll accounts and computerized “bots” to share incendiary messages and organize events, creating the illusion of greater support for radical ideas than actually existed. Real users “liked” posts shared by trolls and bots and re-shared them on their own news feeds, ultimately reaching tens of millions of people. The Russian interference extended to Twitter and other internet platforms in a coordinated way. Without immediate and aggressive action from Washington, bad actors of all kinds would be able to use Facebook and other platforms to manipulate future elections.
Thanks to the hard work of journalists and investigators, virtually all our hypotheses would be confirmed over the ensuing weeks. Almost every day brought new revelations of how Facebook, Twitter and Google had been manipulated by the Russians.
Rubles for Rumors
We now know that the Russians indeed exploited topics like Black Lives Matter and white nativism to promote fear and distrust, and appear to have invested heavily in emotionally charged content sent to supporters of Bernie Sanders and Jill Stein. Once the nominations were set, the Russians continued to undermine Clinton with social media targeted at likely Democratic voters.
We also have evidence now that Russia used these tactics to manipulate the Brexit vote. In November, researchers reported that more than 150,000 Russian-language Twitter accounts posted pro-Leave messages in the run-up to the referendum.
Our second trip to Capitol Hill was surreal – three jam-packed days of meetings with people who were focused on our issues. This time we brought along Renee DiResta, an expert in how conspiracy theories are spread on the internet.
DiResta described how bad actors plant a rumor on sites like 4chan and Reddit, leverage the disenchanted users to create buzz, build phony news sites with “press” versions of the rumor, push the story onto Twitter to attract the real media, then blow up the story for the masses on Facebook. It was a sophisticated hacker technique, yet not an expensive one. We hypothesized that the Russians were able to manipulate tens of millions of American voters for a sum less than it would take to buy a single F-35 fighter jet.
In Washington, we found we could help policymakers and staff understand the inner workings of Facebook, Google and Twitter. By the end of September, a conversation on the dangers of internet platform monopolies was in full swing.
Facebook and Google are the most powerful companies in the global economy. Thanks to the United States government’s laissez-faire approach to regulation, they were able to pursue business strategies that would not have been allowed in prior decades. No one stopped them from using free products to centralize the internet and then replace its core functions. No one stopped them from siphoning off the profits of content creators. No one stopped them from gathering data on every aspect of every user’s internet life. No one stopped them from amassing market share not seen since the days of Standard Oil. No one stopped them from running massive social and psychological experiments on their users. No one demanded that they police their platforms.
It Has Been a Sweet Deal
Facebook and Google are now so large that traditional tools of regulation may no longer be effective, even if wielded by a major regulator. The European Union challenged Google’s shopping price comparison engine on antitrust grounds, citing unfair use of Google’s search and AdWords data.
The harm was clear: Most of Google’s European competitors suffered crippling losses. The most successful survivor lost 80 percent of its market share in one year. The EU levied a record $2.7 billion fine – which Google is appealing. Google investors shrugged, and, as far as I can tell, the company has not altered its behavior. The largest antitrust fine in EU history bounced off Google like a spitball off a battleship.
It reads like the plot of a sci-fi novel: A technology celebrated for bringing people together is exploited by a hostile power to drive people apart, undermine democracy and create misery. We had constructed a modern Maginot Line – half the world’s defense spending and cyber-hardened financial centers, all built to ward off attacks from abroad, never imagining that an enemy could infect the minds of our citizens through inventions of our own making, at minimal cost.
We still don’t know the exact degree of collusion between the Russians and the Trump campaign. But the debate over collusion, while important, risks missing what should be an obvious point: Facebook, Google, Twitter and other platforms were manipulated by the Russians to shift outcomes in Brexit and the U.S. presidential election, and unless major changes are made, they will be manipulated again. Next time, there is no telling who the manipulators will be.
The Roadmap to Fix Social Media
The platform monopolies Facebook and Google are so powerful that when they are manipulated, there is real damage to our public discourse. It is high time to reintroduce serious antitrust regulation.
Awareness of the role of Facebook, Google and others in Russia’s interference in the 2016 election has increased dramatically in recent months, thanks in large part to congressional hearings on Oct. 31 – Nov. 1, 2017. This has led to calls for regulation, including the Honest Ads Act, sponsored by Senators Mark Warner, Amy Klobuchar and John McCain, which would extend the current regulation of political ads on broadcast networks to online platforms.
Facebook and Google are opposed, insisting that government regulation would kill innovation and hurt global competitiveness, and that oversight should be left to the industry itself.
But we’ve seen where self-regulation leads. This problem is just too complicated.
First, we must address the filter bubbles. Polls suggest that about a third of Americans believe Russian interference is fake news, despite unanimous agreement to the contrary by the country’s intelligence agencies. Helping them accept the truth is a priority.
To do this, Facebook must be required to contact each person touched by Russian content with a personal message saying, “You, and we, were manipulated by the Russians. This really happened; here is the evidence.” The message should include every Russian post the user received. There’s no doubt Facebook has the capacity to do this. No matter the cost, the company must absorb it as the price of its carelessness.
Second, the chief executives of Facebook, Google, and Twitter – not just their lawyers – must testify before congressional committees in open session. This is particularly important for the employees. While the bosses are often libertarians, the people who work there tend to be idealists, who want to believe what they’re doing is good. Forcing tech CEOs like Mark Zuckerberg to justify the unjustifiable would go a long way to puncturing their cults of personality.
Regulatory Fixes: A Few Ideas
- Digital bots must not impersonate humans. Bots distort the “public square” in a way never before possible. At a minimum, the law should require explicit labeling of bots, the ability to block them, and liability on the part of the platforms for the harm they cause. Platforms must be accountable.
- New acquisitions must be blocked until platforms have addressed the damage and taken steps to prevent future harm and allow open competition. Platform growth has often depended on gobbling up smaller firms to extend their monopoly power.
- Transparency about the sponsors of political and issue-based communication. The Honest Ads Act is a good start, but its coverage should extend beyond political ads to issue-based messages as well.
- Transparency about the algorithms. Users deserve to know why they see what they see in their news feeds and search results. If Facebook and Google had to be up-front about the reason you’re seeing conspiracy theories – namely, that it’s good for business – they would be far less likely to stick to that tactic.
- Equitable contracts with users. Facebook and Google have asserted unprecedented rights in their terms of service, which can change at any time. All software platforms should be required to offer a legitimate opt-out, increasing transparency and consumer choice, and forcing more care in every new rollout. It would limit the risk that platforms would run massive social experiments on millions of users without prior notification.
- Limits on the commercial exploitation of consumer data. Currently the platforms are using personal data in ways consumers do not understand, and might not accept if they did. And they will use that data forever, unless someone tells them to stop.
- Consumer ownership of their own data. Users created this data, so they should have the right to export it to other social networks. The likely outcome would be an explosion of innovation and entrepreneurship. Startups and established players would build new products that incorporate people’s existing social graphs, forcing Facebook to compete.
- Return to traditional antitrust law. Since the Reagan era, antitrust law has focused on prices for consumers, allowing Facebook and Google to dominate several industries—not just search and social media but also email, video, photos and digital ad sales. This approach ignores the social costs of addiction, manipulated elections and reduced innovation. All of these costs are evident today.
Increasing awareness of the threat posed by platform monopolies creates an opportunity to reframe the discussion about concentration of market power. Limiting the power of Facebook and Google not only won’t harm America or Europe, it will almost certainly unleash levels of creativity and innovation that have not been seen in the technology industry since the early days of, well, Facebook and Google.
Before you dismiss regulation as impossible in the current climate, consider this. Nine months ago, when Tristan Harris and I joined forces, hardly anyone was talking about these issues. Now lots of people are, including policymakers. And while it’s hard to be optimistic, that’s no excuse for inaction. There’s far too much at stake.
Watch Roger McNamee speak with Bill Maher about fixing Facebook.