In the blame games following the U.S. election, the social networks, especially Facebook, are taking heat for allegedly aiding the spread of fake news. The New York Times, Vox Media (parent to Racked, Recode, The Verge and other sites), Inc. and many lesser-known outlets have all run stories taking issue with Facebook founder Mark Zuckerberg's rejection of the idea that fake stories circulating on social networks affected the election's outcome.
The issue is far more complicated, though. It's possible that technology has hit the natural limit of what it can meaningfully do to news and that the news industry has reached the boundaries of possible synergy with tech. At the same time, the audience's trust in what they collectively, and incorrectly, describe as "the media" has hit a low point. All three interconnected problems can be fixed, but that would require some old-fashioned inputs such as journalistic skill, along with editorial and entrepreneurial courage.
Fake news doesn't spread because of Facebook's algorithmic attempts to deliver to users what the company thinks they want to see. It spreads because of a fundamental disconnect between the definitions of "engagement" in the advertising industry and in newsrooms.
To Facebook and Twitter, engagement is likes and shares. They sell such interactions to advertisers, and they prioritize posts with high engagement on newsfeeds. To a journalist, it's how many people have read or watched the full story and intellectually engaged with it. An engaged reader is someone who, after reading this column, will respond intelligently in the comment section or send me an email with her thoughts.
The trouble is that people who "like" and share content often don't read it -- beyond the headline, that is. According to a recent study by Maksym Gabielkov and collaborators, 59 percent of links circulated on Twitter are never clicked. NPR ran a brilliant experiment on Facebook in 2014 proving that people will often comment after reading the headline and nothing else. A recent survey of millennials revealed that one in five of them only ever read headlines (and I suspect the other four weren't quite frank with the researchers).
Facebook, Google, Twitter and the Macedonian hustlers who produced fake pro-Trump stories (headlines, really -- it doesn't matter what's in the body of the article) in bulk to get traffic and make a few dollars through Google AdSense -- all want to keep things as they are. They don't care whether people read what they share and repost because that's not how their incentives work.
Editors, by contrast, hate this setup. If headlines are all that anyone reads or shares, then instead of employing thorough, accurate reporters and well-informed columnists, they might as well outsource most of the work to robots and concentrate on writing catchy headlines. That would kill off the journalistic profession and leave the public woefully uninformed.
Because of the commercial symbiosis between editorial operations and tech platforms, there are all sorts of uneasy compromises. Editors write sensationalist headlines that don't always match the stories beneath them, and they develop social media strategies to spread these headlines as widely as they can -- knowing full well that a majority of those who interact with the posts won't even read the linked stories.
Tech companies pretend they want to police the fakes -- and in the process, they perfect their capability to block content based on certain words. Twitter's recent decision to let users block "abuse" by filtering feeds for certain words falls in the same category.
Getting serious about automated fake detection requires a lot of human input: Essentially, as Victoria Rubin and collaborators specified in a 2015 paper, it would require building a dataset of various types of fake news to train natural language processing systems. Even if an "automatic crap detector" is ever built, I wouldn't trust it. Journalists, who are professional fact-gatherers and fact-checkers, may disagree about a set of facts. But at least they can argue it out; artificial intelligence is a black box, and if it is allowed to make decisions about which news is fake and which is "real," there will be no way to verify these decisions without some complex reverse-engineering.
In any case, fact-checking was weaponized and discredited during the U.K. referendum campaign and the U.S. election: The efforts to analyze the candidates' and campaigns' arguments were defiantly partisan. Besides, what was supposed to be fact-based reporting left most people unprepared for election-day shocks.
The essentially economic conflict around the meaning of engagement is destroying the news industry's value proposition. It is no longer a trusted source of information. This year, only 32 percent of Americans, and 14 percent of Republicans, have a "great deal" or even a "fair amount" of trust in the media -- compared with 54 percent and 52 percent in 1998.
A small minority of people are willing to pay serious amounts of money for truthful, painstakingly collected information. Those who don't pay for it have to expect their news won't have so fine a filter on it. By nature, only propaganda is free because it's the consumers, not the content, who are being trafficked.
If publications certain of the quality of their information were more resolute in placing all their content behind paywalls, without loopholes or exceptions meant to increase "reach," "engagement" and ad revenues, they would end up with less money and smaller audiences. They would also be forced to prioritize coverage -- something many readers would welcome, I suspect. The social networks would cease to be a major channel for quality content: The links would only be shared among subscribers. Editors would have far more responsive and engaged audiences to deal with. I don't see it happening.
Perhaps the increasingly profitable tech giants will want to show some civic responsibility by rethinking their business model in relation to news. Advertisers shouldn't be sold deceptive "engagement metrics": Only a story that has been read in full should generate income. That would kill off most of the fakes and sensationalist headlines.
Perhaps some combination of these two approaches could be worked out in a dialogue between the news and tech industries. I hesitate to suggest regulatory interference in freedom of speech matters, but governments could help regulate advertising in a way that would align commercial interests with editorial ones. It's clear that action is needed: Accurate, substantive news is on the brink of extinction, and it's not all the social networks' fault.
To contact the author of this story: Leonid Bershidsky at email@example.com