Column: Ex-Google manager leads a drive to rein in the pernicious impact of social media

In 2012, Tristan Harris made a presentation to his bosses at Google arguing that “we had a moral responsibility to create an attention economy that doesn’t weaken people’s relationships or distract people to death.”

That presentation “lit a fire at Google for a brief moment in time,” Harris told me, leading to his informal appointment as the company’s design ethicist. “No one was studying the ethics of influencing 2 billion people’s attention.” As things developed, “they didn’t implement any of the changes I recommended — not because they were a greedy corporation, but it just wasn’t a priority.”

At the end of 2018, three years after Harris left, Google finally offered a screen time tracker, dubbed Digital Wellbeing, allowing users to monitor how much time they were spending on their Android phones and to bring their screen addictions under control. (Apple almost simultaneously released a similar app for iPhones.)

That was a “baby step,” Harris says, but it was at least partially a response to the Time Well Spent movement he launched in his new role as a co-founder of the nonprofit Center for Humane Technology.

Harris, 34, more recently has expanded his campaign to bring the negative impacts of the internet economy to the forefront of public attention. There’s a lot to cover: the spread of misinformation and disinformation on social media platforms such as Facebook and YouTube (which is owned by Google), the promotion of conspiracy theorists such as Alex Jones, invasions of privacy, Russian influence on our elections, and a rise in political polarization.

Harris has become an increasingly prominent evangelist on this theme. He spoke about screen addiction on “60 Minutes” in April 2017, and about his larger themes on network news shows and at the Milken Institute Global Conference in recent weeks. On April 23, he staged a 45-minute presentation that he calls “An Inconvenient Truth” for technology, referring to the 2006 documentary on climate change written by and starring former Vice President Al Gore.

Harris’ key insight is that it’s a mistake to treat the drawbacks of mobile technologies and social media as separate and unrelated. In fact, “they’re all connected to an extractive attention economy,” he told an audience at the Milken conference.

“When your business model is extracting data and attention out of people,” he said, the result is “a race to the bottom of the brain stem” in which social media platforms feed users more and more of whatever content will keep them onsite. In practice, that means more radical and extreme content that feeds on human weaknesses.

Young women searching for YouTube videos on dieting will be steered, via the platform’s “recommendations,” to sites about bulimia; parents looking for information about child vaccines will be served anti-vaccine conspiracy videos. The platforms aren’t making value judgments — except to the extent their recommendation algorithms know that these connections will keep users clicking.

Users of mobile technologies aren’t entirely unaware of how the platforms manipulate their attention spans, Harris says. But they probably underestimate the scale of the problem. Facebook has more than 2 billion monthly active users, and YouTube almost 2 billion.

YouTube recommended Alex Jones videos to users 15 billion times, Harris says. (The figure comes from Guillaume Chaslot, a former Google engineer who says he worked on the recommendation algorithm and is now an advisor to Harris’ center.) “Let’s say only one in 1,000 people take Alex Jones seriously; the scale is still really enormous.” YouTube’s algorithms are just trying to figure out which videos keep people on the site the longest, and if they determine that conspiracy videos do that, “they’ll recommend those.”

The impact is hard to avoid, even by signing off of social media altogether. Parents who don’t spend a minute on YouTube may be sending their kids to schools where anti-vaccine sentiment purveyed by the platform has put all the kids at risk. The undermining of trust in institutions and what was once a shared faith in science and history harms all of us.

Harris calls the phenomenon “human downgrading,†a useful term to bring all these undesirable impacts under a single umbrella. It reflects the tendency of social media to seize our attention by appealing to our most primitive emotions.

Facebook, YouTube, Twitter and their ilk aren’t pioneers in their quest to manipulate human emotions for profit — the concern dates back to the first printed advertisements and reached a fever pitch with the spread of television. But Harris argues that today’s social media are set apart by a pervasiveness that allows them to amplify users’ beliefs and encourages users to express their beliefs without the social constraints of face-to-face or small group interaction.

Mobile technology rewards expressions couched in extreme or outraged language. Harris points to research indicating that the use of extreme language in tweets sharply increases their likelihood of being retweeted. Jay Van Bavel and colleagues at New York University, for example, have found that the use of moral-emotional words in social media messages increased their diffusion by “a factor of 20% for each additional word.” In social media, in other words, outrage sells.

Harris is not the first critic to point to how technology’s influence on human judgment can outrun our ability to deal with the phenomenon. Virtual reality pioneer Jaron Lanier sounded a warning a decade ago in his book “You Are Not a Gadget” about a web culture increasingly dominated by advertising and aimed at imposing conformity.

The late Neil Postman first placed the term “disinformation” into the political lexicon in his 1985 book “Amusing Ourselves to Death,” defining it as “misplaced, irrelevant, fragmented or superficial information … that creates the illusion of knowing something but which in fact leads one away from knowing.”

Harris’ critique is echoed within the tech community. Just this week, Facebook co-founder Chris Hughes called for a breakup of the company, specifically citing the unprecedented power of its CEO Mark Zuckerberg “to monitor, organize and even censor the conversations of two billion people.”

Facebook algorithms that determine which comments get displayed to users, Hughes wrote, have “enabled the spread of fringe political views and fake news, which made it easier for Russian actors to manipulate the American electorate.”

Venture investor Roger McNamee, an early backer of Facebook, writes in his new book “Zucked: Waking Up to the Facebook Catastrophe” of his conversion to the view that “democracy has been undermined because of design choices and business decisions by internet platforms that deny responsibility for the consequences of their actions.”

Harris observes that what drives these trends is the thirst for advertising dollars, which are dependent on audience engagement. The users of Facebook and YouTube may not realize that they aren’t those companies’ customers — just the raw material pitched to the real customers, advertisers. The users think they’re getting services for free because they don’t realize how much they’re giving up.

“The ‘free’ business model is the most expensive business model ever invented,” Harris told me. “What does ‘free’ buy us? It buys us free social isolation, free downgrading of our attention spans, free downgrading of democracy, free teen mental health issues. … That’s what the business model of maximizing attention has bought us.”

Until society forces the issue, the platforms themselves won’t do more than pay lip service to their responsibilities. Zuckerberg dismissed accusations that Facebook had played a significant role in Russian interference with the 2016 election, until he was dragged before a Senate committee and forced to acknowledge that 146 million Americans had been exposed to Russian election propaganda via Facebook and its subsidiary Instagram.

Putting this malevolent genie back in the bottle won’t be easy, Harris acknowledges, especially if the goal is to implement protections against user manipulation before the 2020 election. “We need a full-court press from policymakers, shareholders, media, the public to get as many safeguards in place as we can,” he says. (Sen. Elizabeth Warren [D-Mass.], a candidate for her party’s presidential nomination, started the ball rolling on March 8 with a proposal to break up the tech giants.)

Initiatives such as Facebook’s employment of thousands of content checkers to weed out noxious or false content look like a Band-Aid, as long as the social media platforms don’t face legal liability for what they funnel to the public.

The most controversial element of Harris’ thinking may involve Section 230 of the 1996 Communications Decency Act. This 26-word provision, which states in part that the provider of interactive computer services can’t be viewed as “the publisher or speaker of any information provided by another information content provider,” has been described as “the backbone” and “the most important law” of the internet.

That’s because it makes online reviewers, commenters, Facebook posters and tweeters legally responsible for their statements — not the platforms, whether Facebook, YouTube, Yelp, Twitter or newspapers hosting online comment threads. That facilitates open discourse, because without this immunity no platform would dare to host online discussions.

The platforms’ immunity stems from their image as neutral purveyors of others’ content. Harris argues that this concept no longer applies to platforms that are actively serving recommendations to billions of users.

“We probably need a new classification” under 230, he says — a “recommendation platform,” say, requiring disclosure of the degree to which the platform has recommended content later determined to be problematic. Harris also would carve out a rule making the platforms responsible for allowing mass shooters to publish their intent to kill people. “The companies should be responsible for identifying that. We have to stop the bleeding.”

Most of Harris’ efforts thus far have been devoted to bringing the drawbacks of the technology business model to public attention. When he looks for optimistic glimmers, he finds them in the growth of public awareness.

“Three years ago,” he says, “no one was paying attention to these issues at all. I never would have expected that we’ve made even the tiny bit of progress we’ve made since then.”

Keep up to date with Michael Hiltzik. Follow @hiltzikm on Twitter, see his Facebook page, or email [email protected].
