Can the Law Eradicate Deep Fakes?

By Andrew White

As a wave of new technology surges forward, the law struggles to keep up with the surge’s negative ripple effects. But is the law up to the task of regulating deep fakes? Recent advances in artificial intelligence have made it possible to create, from whole cloth, videos and audio that make it appear that the people depicted have done or said things they never actually did. These puppet-like videos are called deep fakes.

Deep fakes are most commonly created with generative adversarial networks (GANs), a type of artificial intelligence in which two algorithms are trained against each other: one generates images of the intended target while the other judges how real they look, and the back-and-forth continues until the system can produce a lifelike video puppet or convincingly overlay an individual’s face onto an existing video. These videos may be used to further political agendas.
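To make that dynamic concrete, the following is a minimal sketch of the generator-versus-discriminator training loop behind a GAN. It uses PyTorch, random noise in place of a real face dataset, and deliberately tiny networks; the dimensions, architecture, and training length are illustrative assumptions, not the workings of any particular deep fake tool.

```python
# Minimal GAN training loop sketch (illustrative only; assumes PyTorch is installed).
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM, BATCH = 64 * 64, 100, 32

# Generator: turns random noise into a fake "image" vector.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
# Discriminator: scores how "real" an image vector looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.rand(BATCH, IMG_DIM) * 2 - 1   # stand-in for real images of the target
    fake = generator(torch.randn(BATCH, NOISE_DIM))

    # Discriminator learns to separate real images from generated ones.
    d_loss = loss_fn(discriminator(real), torch.ones(BATCH, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator learns to fool the discriminator into scoring fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```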

For example, this video, created by a French AIDS charity, falsely depicts President Trump declaring an end to the AIDS crisis. While not technically deep fakes, other types of altered political media have been met with viral success on social media. This manipulated video, which seemingly shows the Speaker of the House drunk and incoherent on the job, quickly circulated on Facebook and Twitter and even caught a retweet from Rudy Giuliani. Finally, scorned ex-partners have also used deep fake videos to create revenge porn.

Danielle Citron, a Professor of Law at Boston University, suggested in her testimony before the House Permanent Select Committee on Intelligence that a combination of legal, technological, and societal efforts is the best solution to the misuse of deep fakes:

“[w]e need the law, tech companies, and a heavy dose of societal resilience to make our way through these challenges.”

Google is working to improve its technology to detect deep fakes. Facebook, Microsoft, the Partnership on AI, and Amazon have teamed up to create the Deepfake Detection Challenge. Twitter is actively collecting survey responses to gauge how users of its platform would like to see deep fakes handled, whether through outright removal of deep fake videos, labeling them, or alerting users when they are about to share one. There have also been user-side efforts in the technology world to curb the influence of altered media and deep fake videos, giving users ways to investigate on their own the media they see.

[Figure: Three mechanisms of technological blockchain regulation. By Andrew White, 2019.]

For example, this algorithm tracks subtle head movements to detect whether a video is real or fake. The Department of Defense has created another algorithm that tracks the eye blinking of subjects in a video and compares it against bona fide footage. Deep fakes are becoming so well-crafted, though, that there may come a time when they cannot be reliably detected. Other methods have been developed alongside advances in artificial intelligence, such as the use of blockchain verification to establish the provenance of videos and audio before they are posted.
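As a rough illustration of the provenance idea, the sketch below fingerprints a media file with a cryptographic hash and records it in a small append-only ledger, so that a circulating copy can later be checked against the registered original. The in-memory list stands in for an actual blockchain, and the file names are hypothetical; this is a simplified assumption of how such verification might work, not any specific product.

```python
# Hash-based provenance sketch: the "ledger" is an in-memory stand-in for a blockchain.
import hashlib
import json
import time

def fingerprint(path: str) -> str:
    """Return the SHA-256 hash of a media file's bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

ledger = []  # each entry chains to the previous one via its hash

def register(path: str, source: str) -> None:
    """Record a video's fingerprint, source, and timestamp at publication time."""
    entry = {
        "file_hash": fingerprint(path),
        "source": source,
        "timestamp": time.time(),
        "prev_hash": ledger[-1]["entry_hash"] if ledger else "0" * 64,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(entry)

def verify(path: str) -> bool:
    """Check whether a circulating copy matches any registered original."""
    return any(entry["file_hash"] == fingerprint(path) for entry in ledger)

# Hypothetical usage:
# register("campaign_speech.mp4", source="official campaign channel")
# verify("copy_from_social_media.mp4")  # False if the copy was altered in any way
```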

From a legal perspective, legislatures have begun to realize the impact that deep fakes have on Americans’ political and sexual autonomy. The federal government is working on legislation to require the Department of Homeland Security to research the status and effects of deep fakes. Legislation restricting the distribution of deep fakes has already been passed in various states, but as the statutes demonstrate, it may be more difficult than anticipated to truly stem the influx of deep fakes.

Texas, in enacting S.B. no. 751, targets deep fakes whose creators intend to influence the outcome of an election. This broad statute criminalizes the creation or distribution of a deep fake video with the intent to influence an election or injure a candidate within 30 days of the election. Interestingly, the Texas legislature specified that a “deep fake video [is a] video created with artificial intelligence [depicting] a real person performing an action that did not occur in reality.” This area of law is rapidly evolving, and the contours of this statute have not been clearly established. For example, it is not clear whether the altered video of Nancy Pelosi would fall under this bill. The Pelosi video was slowed down, and the pitch of the speech was raised so that the slowed voice still sounded like Nancy Pelosi. These material alterations were not created with artificial intelligence. In addition, Pelosi did actually speak the words, and in the same order, as in the altered video. Would this fall under the statute’s proscription of videos in which the subject is “performing an action that did not occur in reality”?

A recent Virginia statute targets a different category of deep fakes: revenge porn. S.B. no. 1736 adds the phrase “including a falsely created videographic or still image” to the existing revenge porn statute. This broader language seems to include pornographic likenesses created by GANs (generative adversarial networks) or other algorithms. But would the statute protect a victim where a likeness created to look like them differs in some minor way (such as a missing or added tattoo), making the likeness just different enough to fall outside the statute’s protection?

A similar cause of action was added to California law by A.B. no. 602, which was signed into law by Governor Newsom in October 2019. This statute adds a private right of action to the existing revenge porn statute for victims who have been face- or body-swapped into a recording of a sexual act that is published without their consent.

California also passed A.B. no. 730 alongside the revenge porn amendment. This law prohibits the distribution of any “deceptive audio or visual media … with the intent to injure the candidate’s reputation or to deceive a voter” within 60 days of an election. The law defines “materially deceptive audio or visual media” as that which “would falsely appear to a reasonable person to be authentic and would cause a reasonable person to have a fundamentally different understanding . . . than that person would have if the person were hearing or seeing the unaltered, original version of the image or audio or video recording.”

This law also has notable exceptions: it does not apply to newspapers or other news media, nor does it apply to paid campaign ads. These exceptions may serve to undermine the entire purpose of the bill, as Facebook has publicly asserted that it will not verify the truth or falsehood of political ads purchased on its platform.

Finally, traditional tort law may allow for recovery in certain situations where state statutes fail. The torts of intentional infliction of emotional distress, defamation, and false light could all apply, depending on the facts. These remedies, though, may provide only monetary damages and not the removal of the video itself. The problem with applying tort law in the deep fake context is similar to the limitation of A.B. no. 730: finding the creator of a deep fake, and then proving the creator’s intent, may be a Herculean task. Even after finding the creator, it is difficult to mount a full civil case against them, and even if a cause of action is successfully brought, the damage may already have been done.

The area of AI and deep fakes is a rapidly evolving one, both from a technological and a legal perspective.  The coming together of technology and law to combat the dark side of advances in artificial intelligence is encouraging, even as technology rushes forward to realize the more positive effects of artificial intelligence.  It seems, then, that the only solution to the problem of deep fakes is a combination of legal and technological remedies, and, in the words of Danielle Citron, “a heavy dose of societal resilience.”


Andrew White is a 1L Research Fellow at the Institute for Science, Law & Technology at IIT Chicago-Kent College of Law.  Andrew received his Master of Science in Law from Northwestern Pritzker School of Law and his Bachelor of Science from the University of Michigan, where he studied Cellular and Molecular Biology and French and Francophone Studies.

Fake News: A Little White Lie or a Dangerous Crock?

By Michael Goodyear

Since early November, press coverage of the problem of fake news stories has exploded. These fake stories have included everything from the Pope endorsing Donald Trump to a woman stealing 24 dogs from an animal shelter. While they may seem harmless enough, the impact of people releasing such stories can range from simple confusion to active violence.

But what happens when the police create fake news? Even if it is well-intended, police dissemination of fake news can lead to a range of consequences: harm to neighborhoods, increased danger for citizens, violence, and distrust.

A few days ago, the Santa Maria Times uncovered a fictional news release in court documents, ten months after it had reported the same story as fact. The news release stated that two cousins, Jose Santos Melendez and Jose Marino Melendez, had been taken in for identity theft and were now in the custody of immigration authorities. It seemed like a simple report; in actuality, it was part of an elaborate, but deceitful, plan, hatched not by crooks but by the police.

The Santa Maria Police Department had been running Operation Matador for months at this point. The police had been eavesdropping on members of MS-13, a dangerous international gang, with the goal of eventually arresting gang members. Through wiretaps, they learned that MS-13 planned to murder the Melendez cousins. This raised a new issue: if they acted to save the two cousins, their operation would be exposed and the progress of the past months would be lost. A fake news story could solve this problem. The police took the Melendez cousins into hiding for their safety while the fake news story provided a cover, explaining the cousins’ disappearance without arousing suspicion and also protecting the cousins’ family, who might have been harmed by MS-13 if the gang believed the cousins were merely hiding.

In the following weeks, the police brought Operation Matador to a successful conclusion: 17 gang members were arrested on charges of murder and intent to kill in March. In July, a criminal grand jury indicted all 17 of them on a combined 50 felony counts. Lives were saved and gang members were successfully arrested, so what is the problem?

Whether well intentioned or not, fake news can have real consequences. By releasing false information about crime or police action, the police alter public perceptions of their community. If the police falsely report a crime in one neighborhood to divert attention from another, the reported neighborhood will seem more dangerous to the populace, even though the stated crime didn’t actually occur there. This could lead to a downturn in local business and in the desire to live in that neighborhood. It would also make the neighborhood where the crime actually happened seem safer in the eyes of the unwitting public, who might go there despite the dangers it could present.

Similarly, reporting that a crime has been solved when in fact it has not would also alter the public’s perceptions and possibly their actions. For example, the police could falsely report that they had solved crimes or reduced crime rates in a neighborhood in order to improve public confidence in the police and to intimidate criminals. But it could also make people unreasonably confident in the safety of an area, drawing more people into what is in actuality still a dangerous neighborhood.

In addition, reporting that a crime has been solved when it has not could lead to greater violence or harm the police’s chances of actually solving the crime. For example, saying that the police have uncovered information about a crime or solved a crime when they haven’t could lead a perpetrator to harm those whom he thinks may have informed the police about him. It could also cause the perpetrator to flee the area to avoid arrest.

The police making it seem like crimes are being committed when they actually aren’t could also lead to harmful individual action. For example, earlier this week a fake conspiracy theory that Hillary Clinton was operating a child sex ring out of Comet Ping Pong, a popular Washington, D.C., pizza parlor, led to vigilante action. Edgar Maddison Welch decided to go investigate “Pizzagate.” Inside the restaurant, he fired a shotgun, damaging the interior of Comet Ping Pong but not injuring anyone inside. Although bloodshed was averted in this case (Welch surrendered peacefully when he found no sign of the fabricated child sex ring), fake news undoubtedly put people’s lives at risk.

Although the Pizzagate example was not caused by the police, the police reporting fake crimes could lead to similar results: vigilantism and violence. As CNN aptly put it in regard to Pizzagate, “fake news, real violence.”

Fake news also harms our collective knowledge and our ability to tell truth from lie. While any piece of fake news has the potential to mislead and harm others, the police releasing such a story is especially harmful to our trust. We look to the police as honest defenders of justice; releasing fabricated stories undermines that, duping the public and the press as well as the suspect. As Louis Dekmar, vice president of the International Association of Chiefs of Police, pointed out, such ruses create “a real distrust between the police and the folks we rely on.” Such a lack of trust undermines the relationship between police and the community, and, according to the Department of Justice, trust is one of the key factors in maintaining public safety and effective policing. Although fake lures are often used in sting operations, such as fake prizes, fake news on this scale is unprecedented.

Although police use of fake news may be rare, the police have a widely used precedent for faking: fake Facebook profiles. Cops across the country have created fake Facebook profiles to uncover more information about suspects and even to help track them down. For example, back in 2009 the police created a fake profile featuring the picture of an attractive young woman and friended Adam Bauer, a 19-year-old college student, to access pictures of him drinking that were posted on his account, later ticketing him for underage drinking.

And even though Facebook officially bans the practice, a federal judge ruled back in 2014 that cops can create fake social network profiles for investigative purposes. The Department of Justice has even said that police use of fake Facebook profiles is ethical. Yet this is at odds with the Department of Justice’s own emphasis on the importance of trust between police and the community. Bauer and other college students who were charged with underage drinking based on photographic evidence from Facebook stated that the fake Facebook profiles undermined trust between college students and police.

This most likely will not be the last time the police fake a news story. In regard to the fake news story in Operation Matador, Ralph Martin, the Santa Maria police chief, defended the tactic, even saying he would not rule out releasing a fake news story again in order to protect lives. But given the risks of fake news, in general and especially when the police are behind it, such a tactic could have far more costly ramifications than predicted.

Michael Goodyear, who has a BA in History and Near Eastern Languages and Civilizations from the University of Chicago, is part of the ISLAT team.