The Nightmare Du Jour: Clearview AI Brings 1984 to 2020

By Alexandra M. Franco, Esq.

Have you ever had a picture of your face as your profile picture on a social media website? If the answer is yes, then it is very likely that a company called Clearview AI has it. Have you ever heard of Clearview AI? You probably haven’t—that is, unless you watched this alarming John Oliver segment or read this spine-chilling report from Kashmir Hill in The New York Times, which gives any Stephen King novel a run for its money. If you are among the majority of people in the U.S. who have not heard of Clearview, it’s about time you did.

Clearview is in the business of facial recognition technology; it works primarily by searching the internet for images of people’s faces posted on social media websites such as Facebook and YouTube and uploading them to its database. Once Clearview finds a picture of your face, the company takes the measurements of your facial geometry—a form of biometric data. Biometric data are measurements and scans of certain biological features that are unique to each person on earth, such as a fingerprint. Thus, much like a fingerprint, a scan of your facial geometry enables anyone who has it to figure out your identity from a picture alone.

But Clearview doesn’t stop there. Once it has created a scan of your facial geometry, its algorithm keeps looking through the internet and matches the scan to any other pictures of you it finds—whether you’re aware of their existence or not and even if you have deleted them. It does this without your knowledge or consent. It does this without regard to social media sites’ terms of use, some of which explicitly prohibit the collection of people’s images.
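
To make the matching step described above concrete, here is a minimal sketch of the encode-and-compare loop, using the open-source face_recognition library as a stand-in. Clearview’s actual system is proprietary; the file names below are hypothetical, and a real deployment would compare each new face against billions of stored scans rather than one.

```python
# A minimal sketch of facial-geometry matching, using the open-source
# face_recognition library (pip install face_recognition) as a stand-in.
# Clearview's pipeline is proprietary; this only illustrates the idea.
import face_recognition

# Encode the facial geometry in a known photo (say, a scraped profile
# picture) as a 128-number vector. File names here are hypothetical.
known_image = face_recognition.load_image_file("profile_photo.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Scan a newly collected photo for faces and compare each one against
# the known encoding.
new_image = face_recognition.load_image_file("crowd_photo.jpg")
for candidate in face_recognition.face_encodings(new_image):
    # face_distance returns a Euclidean distance; values below roughly
    # 0.6 are conventionally treated as the same person.
    distance = face_recognition.face_distance([known_encoding], candidate)[0]
    if distance < 0.6:
        print(f"Probable match (distance: {distance:.2f})")
```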

So far, Clearview has run this process on over three billion (yes, billion with a b) images of people’s faces from the internet.

Indeed, what makes Clearview’s facial recognition service so powerful is, in part, its indiscriminate, careless and unethical collection of people’s photos en masse from the internet. So far, most companies in the business of facial recognition have limited the sources from which they collect people’s images to, for example, mugshots. To truly understand how serious a threat Clearview’s business model poses to people’s privacy, consider this: even Google—a company that can hardly be described as a guardian of people’s privacy rights—has refused to develop this type of technology because it can be used “in a very bad way.”

There is another thing that places Clearview miles ahead of other facial recognition services: its incredible efficiency in recognizing people’s faces in many types of photos—even ones that are blurry or taken from a bad angle. You might be tempted to think: “But wait! We’re wearing masks now; surely they can’t identify our faces if we’re wearing masks.” Well, the invasiveness of Clearview’s insanely powerful algorithm surpasses even that of COVID-19; it can recognize a face even if it is partially covered. Masks can’t protect you from this one.

And Clearview has unleashed this monstrous threat to people’s privacy largely hidden behind the seemingly endless parade of nightmares that the year 2020 has visited upon us.

2020 has not only been the COVID-19 year. It has also been the year in which millions of people across the U.S. have taken to the streets to protest the police’s systemic racism, abuse and violence towards African Americans and other minorities. Have you been to one of those protests lately? In the smartphone era, protests are events at which hundreds of people take myriad pictures with their smartphones and upload them to social media sites in the blink of an eye. If you have been to a protest, chances are someone has taken your picture and uploaded it to the internet. If so, it is very likely that Clearview has uploaded it to its system.

And to whom does Clearview sell access to its services?  To law enforcement!

Are you one of those Americans who have exercised their constitutional rights to freedom of speech, expression and assembly during this year’s protests? Are you concerned about your personal safety during a protest in light of reports such as this one showing police brutality and retaliatory actions against demonstrators? Well, you may want to know that Clearview thought it was a great marketing idea to give away free trials of its facial recognition service to individual police officers—yes, not just to police departments, but to individual officers. So, in addition to riot gear, tear gas and batons, Clearview has given individual police officers access to a tool that allows them, at will and for any reason, to “instantaneously identify everyone at a protest or political rally.”

Does the Stasi-style federal “police” force taking demonstrators into unmarked vehicles have access to Clearview’s service? Who knows.

Also, as I’ve mentioned in the past, facial recognition technologies are particularly bad at identifying minorities such as African Americans. Is Clearview’s algorithm accurate enough to ensure that a law-abiding Black citizen is not arrested, or even shot, because his face is mistaken for someone else’s? Again, who knows.

On its website, Clearview states that its mission is to enable law enforcement “to catch the most dangerous criminals… And make communities safer, especially the most vulnerable among us.” In light of images such as the one in this article and this one, such a statement is a slap in the face of the reality that vulnerable, marginalized communities have to endure every single day of their lives.

I would like to tell you that there is a clear, efficient way to stop Clearview, but the road ahead will inevitably be tortuous. So far, the American Civil Liberties Union has filed a lawsuit in Illinois state court under the Illinois Biometric Information Privacy Act (BIPA), seeking to enjoin Clearview from continuing its collection of people’s pictures. However, even though BIPA is the most stringent biometric privacy law in the U.S., it is still a state law subject to limitations. As a Stanford law professor put it, “absent a very strong federal privacy law, we’re all screwed,” and there isn’t one. And we all know that, in light of the Chernobylesque meltdown our federal system of government is experiencing, there won’t be one anytime soon.

If there is anything that COVID-19 has taught us—or at least reminded us of—it is that some of the most significant threats to life and safety are largely invisible. Some take the form of deadly pathogens capable of killing millions of people. Others take the form of powerful algorithms that, in the words of a Clearview investor, could further lead us down the path towards “a dystopian future or something.” And, speaking of a dystopian future, in his very often referenced novel 1984, George Orwell wrote: “if you want a picture of the future, imagine a boot stamping on a human face—for ever.”

Clearview probably has that one, too.


Alexandra M. Franco is a Visiting Assistant Professor at IIT Chicago-Kent College of Law and an Affiliated Scholar with IIT Chicago-Kent’s Institute for Science, Law and Technology.

 

Fake News: A Little White Lie or a Dangerous Crock?

By Michael Goodyear

Since early November, press coverage of the problem of fake news stories has exploded. These fake stories have included everything from the Pope endorsing Donald Trump to a woman stealing 24 dogs from an animal shelter. While they may seem harmless enough, the impact of such stories can range from simple confusion to active violence.

But what happens when the police create fake news? Even if it is well-intended, police dissemination of fake news can lead to a series of consequences, such as negative impact on neighborhoods, increased danger for citizens, violence, and distrust.

A few days ago, the Santa Maria Times uncovered a fictional news release in court documents, ten months after it had reported the same story as fact. The news release stated that two cousins, Jose Santos Melendez and Jose Marino Melendez, had been taken in for identity theft and were now in the custody of immigration authorities. It seemed like a simple report; in actuality, it was part of an elaborate, but deceitful, plan—not by crooks, but by the police.

The Santa Maria Police Department had been running Operation Matador for months at this point. The police had been eavesdropping on members of MS-13, a dangerous international gang, with the goal of eventually arresting gang members. Through wiretaps, they learned that MS-13 planned to murder the Melendez cousins. This raised a new issue: if they acted to save the two cousins, their operation would be exposed and the progress of the past months would be lost. A fake news story could solve this problem. The police took the Melendez cousins into hiding for their safety while the fake news story provided cover, explaining the cousins’ disappearance without arousing suspicion and also protecting their family, which might have been harmed by MS-13 if the gang believed the cousins were merely hiding.

In the following weeks, the police brought Operation Matador to a successful conclusion: in March, 17 gang members were arrested on charges of murder and intent to kill. In July, a criminal grand jury indicted all 17 of them on a combined 50 felony counts. Lives were saved and gang members were successfully arrested, so what is the problem?

Whether well-intentioned or not, fake news can have real consequences. By releasing false information about crime or police action, the police alter public perceptions of their community. If the police falsely report a crime in one neighborhood to divert attention from another, the reported neighborhood will seem more dangerous to the populace, even though the stated crime didn’t actually occur there. This could lead to a downturn in local business and in people’s desire to live in that neighborhood. It would also make the neighborhood where the crime actually happened seem safer in the eyes of the unwitting public, who might go there despite the dangers it could present.

Similarly, reporting that a crime has been solved when in fact it has not would also alter the public’s perceptions and possibly their actions. For example, the police could falsely report that they had solved crimes or reduced crime rates in a neighborhood in order to bolster confidence in the police and intimidate criminals. But doing so could also make people unreasonably confident in the safety of an area, drawing more people into what is actually still a dangerous neighborhood.

In addition, reporting that a crime has been solved when it has not could lead to greater violence or harm the police’s chances of actually solving the crime. For example, saying that the police have uncovered information about a crime or solved it when they haven’t could lead a perpetrator to harm those who he thinks may have informed the police about him. It could also cause the perpetrator to flee the area to avoid arrest.

The police making it seem like crimes are being committed when they actually aren’t could also lead to harmful individual action. For example, earlier this week a fake conspiracy theory that Hillary Clinton was operating a child sex ring out of Comet Ping Pong, a popular Washington, D.C., pizza parlor, led to vigilante action. Edgar Maddison Welch decided to investigate “Pizzagate” for himself. Inside the restaurant, he fired a shotgun, damaging the interior of Comet Ping Pong but not injuring anyone. Although bloodshed was averted in this case (Welch surrendered peacefully when he found no sign of the fabricated child sex ring), fake news undoubtedly put people’s lives at risk.

Although the Pizzagate example was not caused by the police, the police reporting fake crimes could lead to similar results: vigilantism and violence. As CNN aptly put it with regard to Pizzagate, “fake news, real violence.”

Fake news also harms our collective knowledge and our ability to tell truth from lies. While any piece of fake news has the potential to mislead and harm others, the police releasing such a story is especially damaging to our trust. We look to the police as honest defenders of justice; releasing fabricated stories undermines that, duping the public and the press as well as the suspect. As Louis Dekmar, vice president of the International Association of Chiefs of Police, pointed out, such ruses create “a real distrust between the police and the folks we rely on.” Such a lack of trust undermines the relationship between police and the community, and, according to the Department of Justice, trust is one of the key factors in maintaining public safety and effective policing. Although fake lures, such as fake prizes, are often used in sting operations, fake news on this scale is unprecedented.

Although police use of fake news may be rare, the police have a widely used precedent for faking: fake Facebook profiles. Cops across the country have created fake Facebook profiles to uncover more information about suspects and even help track them down. For example, back in 2009 the police created a fake profile featuring a picture of an attractive young woman and friended Adam Bauer, a 19-year-old college student, to access pictures of him drinking that were posted on his account, later ticketing him for underage drinking.

And even though Facebook officially bans the practice, a federal judge ruled back in 2014 that cops can create fake social network profiles for investigative purposes. The Department of Justice has even said that police use of fake Facebook profiles is ethical. Yet this is at odds with the Department of Justice’s own emphasis on the importance of trust between police and the community. Bauer and other college students who were charged with underage drinking based on photographic evidence from Facebook said that the fake profiles undermined trust between college students and police.

This most likely will not be the last time the police fake a news story. With regard to the fake news story in Operation Matador, Ralph Martin, the Santa Maria police chief, defended the tactic, even saying he would not rule out releasing a fake news story again in order to protect lives. But given the risks of fake news, in general and especially when the police are behind it, such a tactic could have far more costly ramifications than predicted.

Michael Goodyear, who has a BA in History and Near Eastern Languages and Civilizations from the University of Chicago, is part of the ISLAT team.