When you watch a magician wave his right hand in a flourish, keep an eye on his left, because that’s usually where the real action is. This lesson is certainly true of Facebook’s own recent “magical” flourishes.
First, Facebook unveiled a new name — Meta Platforms — as part of its project to build the “metaverse,” a virtual world of the sort technologists and science-fiction writers have dreamed of for decades.
When reaction to the metaverse was underwhelming, for its next trick Facebook announced plans to delete a vast stockpile of facial-recognition training data. In each case, one hand produced a shiny bauble while the other hand was still up to its old tricks of power, greed, distraction and information-based manipulation.
I’ve been writing and teaching about privacy and technology for more than 20 years. I’ve worked as a lawyer on some of the most important cases against Facebook and other tech companies, and my new book “Why Privacy Matters” explains what I’ve learned in all this time.
One of the most important of these lessons is that Facebook under its current leadership simply cannot be trusted.
Take the metaverse, the current obsession of Meta Platforms co-founder, chairman and CEO Mark Zuckerberg, which promises a virtual world that transcends our messy, complicated lives. The metaverse, in his words, offers a place for everyone with a Facebook headset to “do almost anything you can imagine — get together with friends and family, work, learn, play, shop, create.” After all, who wouldn’t want to “feel present” with their friends, play basketball, or attend more immersive Zoom meetings?
On the other hand, we already have a place to do all these things — it’s called “the world.” In any event, the much-heralded Meta announcement couldn’t have come at a better time for Facebook. Whistleblower Frances Haugen’s credible, well-documented allegations that Facebook places profit over the mental health of its millions of teenage users came as no surprise to anyone who has been paying attention.
After all, this is the company that failed to protect its human customers from Cambridge Analytica, a U.K.-based psychological warfare company that manipulated both the Brexit vote and U.S. presidential election in 2016. In that scandal, Cambridge Analytica used Facebook data to infer personality traits from more than 100 million voters. Then it sent those voters finely calibrated ads using information warfare techniques to manipulate them based upon their individual psychologies.
Zuckerberg himself announced self-servingly in 2010 that Facebook decided “the age of privacy is over.” Haugen’s allegations seem to have struck a nerve among the public and, critically, among U.S. lawmakers of both parties. After all, in spite of the politicization of so many things over the past several years, opposing Facebook’s emotional manipulation of its customers and its callous disregard for the mental health of our children is perhaps the one thing lawmakers of both parties can get behind.
Enter the metaverse as a convenient distraction. Zuckerberg’s promised realm seems ripped straight from “The Matrix” or “Ready Player One,” only promising all of the awesome and none of the dystopia. Just pop on your Meta-Oculus headset and enter a world of pure wonder and imagination.
Yet to anyone with even a casual familiarity with science fiction, such a utopian world seems too good to be true. Virtual and augmented reality have been a promise for decades, and they certainly could be wonderful. Facebook/Meta certainly seems to think so, claiming that near-future technology might make them finally possible.
But once again, Facebook is focused only on the technical problems and not the human ones — the manipulation, discrimination, fraud, harassment and misinformation — that have plagued it since the beginning. Let’s also not forget that Facebook/Meta is, first and foremost, a human-data company, one that uses what it learns about us to target advertisements more effectively and to manipulate us into buying things on behalf of its paying corporate customers.
Metaverse monitoring
Any metaverse that Facebook engineers build will give the company one huge advantage over the physical world: Facebook’s metaworld would be entirely owned and monitored by the company, which would see everything that happens in it. This is the real reason for Zuckerberg’s enthusiasm: Facebook could use that information for ever-better monitoring, targeting, persuasion and manipulation.
Does anyone trust Facebook to use that human information responsibly? Information is power, and human information confers power over humans — you, me and everyone we know. Put simply, the company that was fined a record $5 billion for the Cambridge Analytica scandal has proved that it should never be trusted or allowed to build any kind of virtual world along the lines of Zuckerberg’s current spectacle of hubris.
Perhaps the public has become used to Zuckerberg’s ostentatious tricks, because the metaverse has not exactly received the enthusiastic reception he had hoped for. Perhaps that is why Facebook has been waving its right hand in another flourish lately. The company recently announced that it was finally deleting billions of the faceprints, tied to real names, that it had used to build its facial-recognition engine with the supervillain name of “DeepFace.”
As many scholars and activists have documented, facial recognition is a dangerous technology — itself seemingly straight out of science fiction. Indeed, government security services and immigration enforcement bureaus have eagerly sought the technology, believing it promises perfect control and enforcement.
From this perspective, Facebook raising one hand to show the deletion of the data seems a step in the right direction — a pro-privacy move from a company finally trying to do better. But once again, keep an eye on the other hand. If we believe Facebook, the data may be deleted, but DeepFace appears to be not just built, but ready for action.
These systems are “trained” on data to recognize faces and link them to real names, and Facebook’s dataset of faces tied to real names seems to have been a fantastic training set for getting this A.I. up to speed. From this perspective, letting the data evaporate in a flash of flame is all an illusion, because the facial recognition tool is already built, kept behind the magician’s back but nonetheless ready for action.
Of course, in the metaverse, facial recognition would be unnecessary because Facebook would render our faces for us. But for all Meta’s hype, we will all continue to live in a real world in which facial recognition can make us vulnerable.
Facebook is a powerful data company with a frankly appalling record on privacy. Information privacy matters because it is about information power. Privacy will continue to matter to any future society, real or virtual, that free humans will want to inhabit.
Privacy enables so many things that we care about, even if it makes it harder for companies like Facebook to serve personalized and surveillance-based ads. Privacy allows us to develop our identities, helping us to figure out our beliefs, our sexualities, and our politics, free from the chilling and withering gaze of disapproving others.
Privacy protects our political freedom — it places a meaningful check on aspiring autocrats and protects us from Cambridge Analytica-style political manipulation by sophisticated ad companies. In an information economy, privacy allows consumer protection, placing a barrier against economic manipulation, targeting, and the exploitation of our known biases and human frailties.
Put simply, privacy enables us to better trust the digital world that is being built around us — it lets us develop our identities as humans, engage as free citizens, and participate not just as consumers but as members of the digital society. That’s why privacy matters, and it’s something that Facebook/Meta simply cannot recognize under its current leadership.
That leadership has only ever seen profit, power and opportunities of its own, and it has never taken privacy seriously. We need to recognize Facebook’s deception and demand better privacy protections for us all that are guaranteed by law.
Neil Richards is the Koch Distinguished Professor in Law at Washington University in St. Louis, where he also directs the Cordell Institute. He is the author of Why Privacy Matters (Oxford University Press, 2021).