Implementation Of Europe’s ‘Right To Be Forgotten’ About As Absurd As You’d Expect
Last week, Google began the process of erasing links from its search engines in Europe in order to comply with a May ruling from the European Union’s highest court that essentially created, out of whole cloth, a so-called “right to be forgotten.” Under that ruling, European citizens can now demand that Google remove search results about them without regard to whether the linked material is true, or even whether it would be considered defamatory under the law. As Wired’s Marcus Wohlsen demonstrates, and as probably could have been predicted from the beginning, the entire process is turning into something of a disaster:
The recent European Union ruling that granted citizens the “right to be forgotten” from Google’s search results is morphing into a nightmare for the web giant.
British news organizations are reporting that Google is now removing links to some of their articles, including stories that involve the disgraceful actions of powerful people.
On Wednesday, BBC economics editor Robert Peston said he received a notice from Google informing him that a 2007 blog post he wrote on ex-Merrill Lynch CEO Stan O’Neal would no longer turn up in search results in Europe. Meanwhile, the Guardian reports that it received automated notifications on six articles whose links would be axed from European view. These included stories on a scandalized Scottish Premier League soccer referee, an attorney facing fraud allegations, and, without explanation, a story on French office workers making Post-It art.
In some ways, Google is just following the EU’s dictates. The company fought the EU on the right-to-be-forgotten issue, but now it has no choice but to implement the ruling, which the court says applies “where the information is inaccurate, inadequate, irrelevant or excessive.” By that standard, these takedowns would seem to overstep the letter of a decision ostensibly intended to protect the reputation of individuals, not censor news. But the issue for Google isn’t just freedom of speech or freedom of the press. The “right to be forgotten” decision is calling unwanted attention to the easy-to-forget fact that, one way or another, fallible human hands are always guiding Google’s seemingly perfect search machine.
The removal of links to one article may be a blip, but the steady accumulation of removed links—especially to quality journalism written in a clear spirit of public interest—starts to erode trust in the reliability of Google search results. Now, anyone who does a Google search even just for the article mentioned above will have to wonder whether they’re getting the whole story. And anything that suggests compromise, lack of transparency, or incompleteness in search results plants a seed that starts to undermine the idea of what Google is supposed to be.
As I noted when I first wrote about this issue last month, the very idea of a “right to be forgotten” in the sense that the European Court has framed it, and the rights that its decision has granted to the more than 500 million citizens of E.U. member states, is utterly absurd. In the end, a search engine is nothing more than an index of the information available on the Internet, and if a search engine like Google or Bing is going to be successful, it must return results that are both relevant to the user and free from obvious link-spamming intended to push someone to the top of a given search result. As Wohlsen notes, that does mean the search algorithm needs to be tinkered with from time to time to ensure the best possible results, which inevitably means some degree of human intervention in the results that you and I see when we type something into a Google or Bing search box. That kind of tinkering, however, is wholly different from removing certain links from the search engine entirely because someone mentioned at the link finds the material embarrassing. If you think of a search engine as the card catalog at a library, this is the equivalent of removing a card from the catalog that references a specific book. Even if the book is never removed from the stacks, it is far less likely that anyone will ever find it unless they happen to stumble across it. That makes the search engine itself considerably less useful.
Another important point is the one raised in Wohlsen’s article, namely that people in positions of power and influence will use this kind of “right to be forgotten” to purge information about themselves that they find embarrassing even when it happens to be true. The BBC post about former Merrill Lynch CEO Stan O’Neal that is part of the current Google “purge” (for lack of a better word) is just the tip of the potential iceberg. What happens when other industry leaders, or even political leaders, try to use the European court’s ruling to get embarrassing material removed from search engine results? The ability of someone in a position of power to use this ruling to send such information down a memory hole from which it would never return should concern everyone. The Monkey Cage’s Henry Farrell is correct to point out that the court’s ruling doesn’t necessarily require the removal of material that is merely embarrassing, and that it contains language stating that the ruling is not meant to undermine the public’s right to know information that is a matter of public interest. In practice, however, Google is, understandably, taking a ‘better safe than sorry’ approach to these takedown requests, especially since the Court’s order provides no specific guidelines for how a company like Google is supposed to decide when the “public interest” outweighs the “right to be forgotten.” In that kind of environment, it is inevitable that even links to information of public interest will be taken down and, unless someone notices, will disappear forever. That doesn’t strike me as a good result.
The Court decision, though, may be only the beginning of Europe’s attempt to assert a right to “privacy” that overrides the long-standing idea of an Internet where information is freely and openly available:
With a new Parliament still assembling after recent European elections — and a new European Commission taking over this year — the legislation’s prospects are difficult to handicap. The one certainty, political analysts say, is that the Google controversy seems certain to galvanize lobbyists whether they support or oppose the legislation.
The recent court decision relates solely to search services like Google and Bing, which is run by Microsoft. But the European privacy legislation would affect any company or website that holds European customers’ digital information. The turmoil surrounding Google’s response to the European court decision could be multiplied and magnified when companies other than search engines — including social media providers and e-commerce sites — are compelled to respond to people’s requests that their digital data be expunged.
“The scope of the new regulation will be much wider,” said Peter Church, an associate at the law firm Linklaters in London.
The court’s ruling “only applies to people’s names in search results,” he said, adding, “The new rules apply to more than just search engines.”
It is possible, of course, that if the legislation becomes law, it would provide clarity to a process that Google seems, whether by choice or default, to be improvising as it goes along.
It’s also possible that such legislation would end up making the situation worse by putting content providers, rather than just search engines, under an obligation to remove information from the Internet even when it happens to be true. Indeed, given the nature of legislation in general, I am willing to bet that whatever comes out of the European Parliament is far more likely to make the situation worse than to make it better.
The Guardian’s David Mitchell raises a very valid point about this entire idea of the “right to be forgotten” as it applies to what we all thought the Internet was supposed to be:
The only thing I ever liked about the internet was that I thought it would help historians – that, assuming there wasn’t an all-data-destroying power surge, millions of searchable written sources would be left to posterity. Without that, it’s all just grooming and bookshop closures and mind-blowing opportunities for fraud. So this news that Ozymandias can apply to have records of his works suppressed in case they invoke too much despair in the Mighty – ie prospective employers – is a real blow.
You may say that Ozymandias is dead – or rather fictional but, even in the fiction, dead – so couldn’t apply to have his virtual trunkless legs buried in the unsearchable sand (I will retain control of this metaphor). The internet can still be accurate about the deceased, you might think. I don’t. They’re the very people you can say anything about, true or false, because they cannot be libelled. Only the living have legal recourse to ensure accuracy, but why would anyone bother to get things corrected if they can effectively just delete anything written about them that they’re not keen on?
People’s right to suppress unpleasant lies which are publicly told is being extended to unpleasant truths – until they die when it’s suddenly open season on slander. The internet will become constructed entirely of two different sorts of untruth: contemporaneous unalloyed praise and posthumous defamatory hearsay.
No one has the right to be forgotten, any more than they have the right to be remembered. Our only right in this regard should be not to be lied about.
There are legitimate concerns about privacy in the Internet era, of course. We share so much of our lives online today that it is virtually impossible to get entirely beyond the reach of Google and Bing once you go online. Whether it’s a comment you’ve posted at a blog (if you used your real name) or your Facebook page, something is likely to come up when someone types your name into a search engine. Sometimes that information will be embarrassing, or it will be information that you wouldn’t want prospective employers, family members, or the general public to see. The solution to that kind of problem, though, is not to share the information in the first place, rather than to claim the existence of some right to ‘delete information about me’ in the future. Additionally, if there are news articles online in which a person’s name is mentioned and that information is truthful, I see no rational reason why a search engine should be forced to delink the article. Indeed, even if an article contains untrue or defamatory information, it strikes me that the remedy should be against the publisher of the information, not the search engine that links to it via an algorithm designed to give users all the articles it can find related to a given set of search terms. Privacy is important, but so are a free and open Internet and freedom of speech and the press. The Europeans seem to me to be coming down far too heavily on the side of a non-existent and wholly invented privacy right, and in the process doing damage to other fundamental rights and to the public interest.