The Nightmare Du Jour: Clearview AI Brings 1984 to 2020

By Alexandra M. Franco, Esq.

Have you ever had a picture of your face as your profile picture on a social media website? If the answer is yes, then it is very likely that a company called Clearview AI has it. Have you ever heard of Clearview AI? You probably haven’t—that is, unless you watched this alarming John Oliver segment or read this spine-chilling report from Kashmir Hill in The New York Times, which gives any Stephen King novel a run for its money. If you are amongst the majority of people in the U.S. who have not heard of Clearview, it’s about time you did.

Clearview is in the business of facial recognition technology; it works primarily by searching the internet for images of people’s faces posted on social media websites such as Facebook and YouTube and uploading them to its database. Once Clearview finds a picture of your face, the company takes the measurements of your facial geometry—a form of biometric data. Biometric data are measurements and scans of certain biological features that are unique to each person on earth—for example, a person’s fingerprint. Thus, much like a fingerprint, a scan of your facial geometry enables anyone who has it to figure out your identity from a picture alone.
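
To make “facial geometry” concrete, here is a minimal sketch of what such a measurement looks like in practice. It uses the open-source face_recognition library rather than anything from Clearview itself (whose system is proprietary), and the filename is hypothetical. The library distills a face into a short vector of numbers; two photos of the same person produce vectors that sit close together.

import face_recognition  # open-source library: pip install face_recognition

# Load a photo and boil the face down to its "facial geometry":
# a 128-number vector summarizing features such as eye spacing and jaw shape.
# "profile_photo.jpg" is a hypothetical filename used for illustration.
image = face_recognition.load_image_file("profile_photo.jpg")
encodings = face_recognition.face_encodings(image)  # one vector per face found

if encodings:
    faceprint = encodings[0]
    print(f"This faceprint consists of {len(faceprint)} measurements.")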

But Clearview doesn’t stop there. Once it has created a scan of your facial geometry, its algorithm keeps looking through the internet and matches the scan to any other pictures of you it finds—whether you’re aware of their existence or not, and even if you have deleted them. It does this without your knowledge or consent. It does this without regard to social media sites’ terms of use, some of which explicitly prohibit the collection of people’s images.
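
The matching step, at least as open-source tools implement it, amounts to comparing a stored faceprint against the faceprint of every new photo a crawler encounters and flagging the ones within a small distance. Here is a hedged sketch along those lines, again using the face_recognition library and hypothetical filenames, not Clearview’s actual code:

import face_recognition

def faceprint(path):
    # Assumes exactly one face per photo, for brevity.
    image = face_recognition.load_image_file(path)
    return face_recognition.face_encodings(image)[0]

# A probe photo (say, a profile picture) and a "gallery" standing in for
# a database of previously scraped photos. All filenames are hypothetical.
probe = faceprint("profile_photo.jpg")
gallery_files = ["protest_crowd.jpg", "deleted_vacation_photo.jpg"]
gallery = [faceprint(f) for f in gallery_files]

# Distances below roughly 0.6 are conventionally treated as the same person.
for path, dist in zip(gallery_files, face_recognition.face_distance(gallery, probe)):
    verdict = "likely the same person" if dist < 0.6 else "no match"
    print(f"{path}: distance {dist:.2f} -> {verdict}")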

So far, Clearview has run this process on over three billion (yes, billion with a b) images of people’s faces from the internet.

Indeed, what makes Clearview’s facial recognition service so powerful is, in part, its indiscriminate, careless and unethical collection of people’s photos en masse from the internet. So far, the majority of companies in the business of facial recognition have limited the sources from which they collect people’s images to, for example, mugshots. To truly understand how serious a threat to people’s privacy Clearview’s business model is, think about this: even Google—a company that can hardly be described as a guardian of people’s privacy rights—has refused to develop this type of technology because it can be used “in a very bad way.”

There is another thing that places Clearview miles ahead of other facial recognition services: its incredible efficiency in recognizing people’s faces from many types of photos—even if they are blurry or taken from a bad angle. You might be tempted to think: “But wait! We’re wearing masks now; surely they can’t identify our faces if we’re wearing masks.” Well, the invasiveness of Clearview’s insanely powerful algorithm surpasses even that of COVID-19; it can recognize a face even if it is partially covered. Masks can’t protect you from this one.

And Clearview has unleashed this monstrous threat to people’s privacy largely hidden behind the seemingly endless parade of nightmares the year 2020 has visited upon us.

2020 has not only been the COVID-19 year. It has also been the year in which millions of people across the U.S. have taken to the streets to protest the police’s systemic racism, abuse and violence towards African Americans and other minorities. Have you been to one of those protests lately? In the smartphone era, protests are events at which hundreds of people take myriad pictures with their smartphones and upload them to social media sites in the blink of an eye. If you have been to a protest, chances are someone has taken your picture and uploaded it to the internet. If so, it is very likely that Clearview has uploaded it to its system.

And to whom does Clearview sell access to its services? To law enforcement!

Are you one of those Americans who have exercised their constitutional right to freedom of speech, expression and assembly during this year’s protests? Are you concerned about your personal safety during a protest in light of reports such as this one showing police brutality and retaliatory actions against demonstrators? Well, you may want to know that Clearview thought it was a great marketing idea to give away free trials of its facial recognition service to individual police officers—yes, not just to police departments, but to individual officers. So, in addition to riot gear, tear gas and batons, Clearview has given individual police officers access to a tool that allows them, at will and for any reason, to “instantaneously identify everyone at a protest or political rally.”

Does the Stasi-style federal “police” force taking demonstrators into unmarked vehicles have access to Clearview’s service? Who knows.

Also, as I’ve mentioned in the past, facial recognition technologies are particularly bad at identifying minorities such as African Americans. Is Clearview’s algorithm sufficiently accurate that a law-abiding Black citizen won’t be arrested—or even shot—because his face is mistaken for someone else’s? Again, who knows.

On its website, Clearview states that its mission is to enable law enforcement “to catch the most dangerous criminals… And make communities safer, especially the most vulnerable among us.” In light of images such as the one in this article and this one, such a statement is a slap in the face of the reality that vulnerable, marginalized communities have to endure every single day of their lives.

I would like to tell you that there is a clear, efficient way to stop Clearview, but the road ahead will inevitably be tortuous. So far, the American Civil Liberties Union has filed a lawsuit in Illinois state court under the Illinois Biometric Information Privacy Act (BIPA), seeking to enjoin Clearview from continuing its collection of people’s pictures. However, even though BIPA is the most stringent biometric privacy law in the U.S., it is still a state law subject to limitations. As a Stanford law professor put it, “absent a very strong federal privacy law, we’re all screwed,” and there isn’t one. And we all know that, in light of the Chernobylesque meltdown our federal system of government is experiencing, there won’t be one anytime soon.

If there is anything that COVID-19 has taught us—or at least reminded us of—it is that some of the most significant threats to life and safety are largely invisible. Some take the form of deadly pathogens capable of killing millions of people. Others take the form of powerful algorithms that, in the words of a Clearview investor, could further lead us down the path towards “a dystopian future or something.” And, speaking of a dystopian future, in his—very, very often referenced—novel 1984, George Orwell wrote: “if you want a picture of the future, imagine a boot stamping on a human face—for ever.”

Clearview probably has that one, too.


Alexandra M. Franco is a Visiting Assistant Professor at IIT Chicago-Kent College of Law and an Affiliated Scholar with IIT Chicago-Kent’s Institute for Science, Law and Technology.