I was, frankly, amazed when I saw this tweet:
Let me remind you that Washington Post Executive Editor Baron’s industry — newspapers — is an industry without a business model (Baron’s newspaper is more fortunate than most in its reliance on a billionaire’s largesse). Said lack of business model is leading to a dwindling of local coverage, click-chasing, and, arguably, Donald Trump. That seems like a pretty big problem!
Fake news, on the other hand, tells people who’ve already made up their minds what they want to hear. Certainly it’s not ideal, but the trade-offs entailed in dealing with the problem, at least in Facebook’s case, are considerable. I wrote last fall in Fake News:
I get why top-down solutions are tempting: fake news and filter bubbles are in front of our faces, and wouldn’t it be better if Facebook fixed them? The problem is the assumption that whoever wields that top-down power will just so happen to have the same views I do. What, though, if they don’t? Just look at our current political situation: those worried about Trump have to contend with the fact that the power of the executive branch has been dramatically expanded over the decades; we place immense responsibility and capability in the hands of one person, forgetting that said responsibility and capability is not so easily withdrawn if we don’t like the one wielding it.
To that end I would be far more concerned about Facebook were they to begin actively editing the News Feed; as I noted last week I’m increasingly concerned about Zuckerberg’s utopian-esque view of the world, and it is a frighteningly small step from influencing the world to controlling the world. Just as bad would be government regulation: our most critical liberty when it comes to a check on tyranny is the freedom of speech, and it would be directly counter to that liberty to put a bureaucrat — who reports to the President — in charge of what people see.
As if to confirm my worst fears, Zuckerberg, a few months later, came out with a manifesto committing Facebook to political action, leading me to call for checks on the company’s monopoly. What was perhaps the most interesting lesson about that manifesto, though, was that most of the media — which to that point had been resolutely opposed to Facebook — were by and large unified in their approval. It was, I suspect, a useful lesson for tech executives: ensure the established media controls the narrative, and your company’s dominance may proceed without criticism.
Google’s Algorithm Change
Today Google announced its own fake-news motivated changes. From Bloomberg:
The Alphabet Inc. company is making a rare, sweeping change to the algorithm behind its powerful search engine to demote misleading, false and offensive articles online. Google is also setting new rules encouraging its “raters” — the 10,000-plus staff that assess search results — to flag web pages that host hoaxes, conspiracy theories and what the company calls “low-quality” content.
The moves follow months of criticism of Google and Facebook Inc. for hosting misleading information, particularly tied to the 2016 U.S. presidential election. Google executives claimed the type of web pages categorized in this bucket are relatively small, which is a reason why the search giant hadn’t addressed the issue before. “It was not a large fraction of queries — only about a quarter percent of our traffic — but they were important queries,” said Ben Gomes, vice president of engineering for Google.
I noted above that deciding how to respond to fake news is a trade-off; in the case of Facebook, the fact that fake news is largely surfaced to readers already inclined to believe it means I see the harm as being less than Facebook actively taking an editorial position on news stories.
Google, on the other hand, is less in the business of driving engagement via articles you agree with than it is in being a primary source of truth. The reason to do a Google search is that you want to know the answer to a question, and for that reason I have long been more concerned about fake news in search results, particularly “featured snippets”:
My concern here is quite straightforward: yes, Facebook may be pushing you news, fake, slanted, or whatever bias there may be, but at least it is not stamping said news with its imprimatur or backing it with its reputation (indeed, many critics wish that that is exactly what Facebook would do), and said news is arriving on a rather serendipitous basis. Google, on the other hand, is not only serving up these snippets as if they are the truth, but serving them up as a direct response to someone explicitly searching for answers. In other words, not only is Google effectively putting its reputation behind these snippets, it is serving said snippets to users in a state where they are primed to believe they are true.
To that end I am pleased that Google is making this change, at least at a high level. The way Google is approaching it, though, is very problematic.
Google and Authority
Danny Sullivan, who has been covering Google for years, has one of the best write-ups on Google’s changes, including this frank admission that the change is PR-driven:
Problematic searches aren’t new but typically haven’t been a big issue because of how relatively infrequent they are. In an interview last week, Pandu Nayak — a Google Fellow who works on search quality — spoke to this: “This turns out to be a very small problem, a fraction of our query stream. So it doesn’t actually show up very often or almost ever in our regular evals and so forth. And we see these problems. It feels like a small problem,” Nayak said.
But over the past few months, they’ve grown as a major public relations nightmare for the company…“People [at Google] were really shellshocked, by the whole thing. That, even though it was a small problem [in terms of number of searches], it became clear to us that we really needed to solve it. It was a significant problem, and it’s one that we had I guess not appreciated before,” Nayak said.
Suffice it to say, Google appreciates the problem now. Hence today’s news, to stress that it’s taking real action that it hopes will make significant changes.
Sullivan goes on to explain the changes Google is making to autocomplete search suggestions and featured snippets, particularly the opportunity to provide immediate feedback. What was far murkier, though, was a third change: an increased reliance on “authoritative content”.
The other and more impactful way that Google hopes to attack problematic Featured Snippets is by improving its search quality generally to show more authoritative content for obscure and infrequent queries…
How’s Google learning from the data to figure out what’s authoritative? How’s that actually being put into practice? Google wouldn’t comment about these specifics. It wouldn’t say what goes into determining how a page is deemed to be authoritative now or how that is changing with the new algorithm. It did say that there isn’t any one particular signal. Instead, authority is determined by a combination of many factors.
This simply isn’t good enough: Google is going to be making decisions about who is authoritative and who is not, which is another way of saying that Google is going to be making decisions about what is true and what is not, and that demands more transparency, not less.
Again, I tend to agree that fake news is actually more of a problem on Google than it is on Facebook; moreover, I totally understand that Google can’t make its algorithms public because they will be gamed by spammers and fake news purveyors. But even then, the fact remains that the single most important resource for finding the truth, one that is dominant in its space thanks to the fact that being bigger inherently means being better, is making decisions about what is true without a shred of transparency.
More Monopoly Trade-offs
I wrote last week about Facebook and the Cost of Monopolies: Facebook wins because, by virtue of connecting everyone on earth, its apps both provide a better user experience even as they build impregnable moats. The moat is the network is the superior user experience. The cost, though, as I sought to quantify, at least in theory, is the aforementioned decay in our media diet, increasing concentration of advertising, and, in the long run, diminished innovation.
That raises the question, though, of what to do about it; I noted in a follow-up that Facebook hasn’t done anything wrong, and under the current interpretation of the law, isn’t even really a monopoly. The fact of the matter is that people like Facebook and that it generates a massive amount of consumer surplus. It follows, then, that any action to break up that monopoly is inherently anti-consumer, at least in the short-run.
The conundrum is even worse with Google, in large part because the company’s core service is even more critical to its users: being able to search the entire Internet is a truly awesome feat, and, thanks to that capability, it is critical that Google get the answer right. That, though, means that Google’s power is even greater, with all of the problems that entails.
At the same time, that is why Google needs to be a whole lot more explicit about how it is ranking news. Perhaps the most unanticipated outcome of the unfettered nature of the Internet is that the sheer volume of information didn’t disperse influence, but rather concentrated it to a far greater degree than ever before, not to those companies that handle distribution (because distribution is free) but to those few that handle discovery. The result is an environment where what is best for the individual in the short-term is potentially at odds with what is best for a free society in the long-term; it would behoove Google to push off this reckoning by being more open, not less.
Sadly, it seems unlikely that my request for more transparency will get much support; Google’s announcement was widely applauded, and why not? It is the established media that will have a leg up when it comes to authority. That, it seems, is all they ever wanted, even if it means Google and Facebook taking all of the money.