The Nightmare Du Jour: Clearview AI Brings 1984 to 2020

By Alexandra M. Franco, Esq.

Have you ever had a picture of your face as your profile picture on a social media website? If the answer is yes, then it is very likely that a company called Clearview AI has it. Have you ever heard of Clearview AI? You probably haven’t—that is, unless you watched this alarming John Oliver segment or read this spine-chilling report from Kashmir Hill in The New York Times, which gives any Stephen King novel a run for its money. If you are among the majority of people in the U.S. who have not heard of Clearview, it’s about time you did.

Clearview is in the business of facial recognition technology; it works primarily by searching the internet for images of people’s faces posted on social media websites such as Facebook and YouTube and uploading them to its database. Once Clearview finds a picture of your face, the company takes the measurements of your facial geometry—a form of biometric data. Biometric data are measurements and scans of biological features that are unique to each person on earth, such as a person’s fingerprint. Thus, much like a fingerprint, a scan of your facial geometry enables anyone who has it to figure out your identity from a picture alone.

But Clearview doesn’t stop there. Once it has created a scan of your facial geometry, its algorithm keeps looking through the internet and matches the scan to any other pictures of you it finds—whether you’re aware of their existence or not and even if you have deleted them. It does this without your knowledge or consent. It does this without regard to social media sites’ terms of use, some of which explicitly prohibit the collection of people’s images.
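
To make the underlying technique less abstract, here is a minimal sketch of how facial-geometry matching generally works, written with the open-source face_recognition library. It illustrates the general approach only, not Clearview’s proprietary system; the file names are placeholders, and the 0.6 matching tolerance is simply that library’s common default.

```python
# Illustrative sketch only: a toy face-matching pipeline using the open-source
# face_recognition library. Clearview's actual system is proprietary; this just
# shows the general technique of turning a face into a numeric "faceprint" and
# matching it against a gallery of scraped photos.
import face_recognition

# Hypothetical gallery of photos scraped from the web (paths are placeholders).
gallery_paths = ["photo_from_facebook.jpg", "photo_from_event.jpg", "old_profile_pic.jpg"]

# Encode each gallery photo as a 128-dimensional measurement of facial geometry.
gallery_encodings = []
for path in gallery_paths:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)  # one encoding per detected face
    if encodings:
        gallery_encodings.append((path, encodings[0]))

# Encode the "probe" photo (e.g., a snapshot of an unknown person).
probe_image = face_recognition.load_image_file("unknown_person.jpg")
probe_encoding = face_recognition.face_encodings(probe_image)[0]

# Match: a small distance between encodings means "probably the same face."
for path, known_encoding in gallery_encodings:
    distance = face_recognition.face_distance([known_encoding], probe_encoding)[0]
    if distance < 0.6:  # the library's common default tolerance, not Clearview's
        print(f"Probable match: {path} (distance {distance:.2f})")
```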

So far, Clearview has run this process on over three billion (yes, billion with a b) images of people’s faces from the internet.

Indeed, what makes Clearview’s facial recognition service so powerful is, in part, its indiscriminate, careless, and unethical collection of people’s photos en masse from the internet. So far, most companies in the business of facial recognition have limited the sources from which they collect people’s images to, for example, mugshots. To truly understand how serious a threat Clearview’s business model poses to people’s privacy, think about this: even Google—a company that can hardly be described as a guardian of people’s privacy rights—has refused to develop this type of technology because it can be used “in a very bad way.”

There is another thing that places Clearview miles ahead of other facial recognition services: its incredible efficiency in recognizing people’s faces from many types of photos—even if they are blurry or taken from a bad angle. You might be tempted to think: “But wait! We’re wearing masks now. Surely they can’t identify our faces.” Well, the invasiveness of Clearview’s insanely powerful algorithm surpasses even that of COVID-19; it can recognize a face even if it is partially covered. Masks can’t protect you from this one.

And Clearview has unleashed this monstrous threat to people’s privacy largely hidden behind the seemingly endless parade of nightmares the year 2020 has visited upon us.

2020 has not only been the COVID-19 year. It has also been the year in which millions of people across the U.S. have taken to the streets to protest the police’s systematic racism, abuse, and violence towards African Americans and other minorities. Have you been to one of those protests lately? In the smartphone era, protests are events at which hundreds of people take countless pictures with their smartphones and upload them to social media sites in the blink of an eye. If you have been to a protest, chances are someone has taken your picture and uploaded it to the internet. If so, it is very likely that Clearview has uploaded it to its system.

And to whom does Clearview sell access to its services?  To law enforcement!

Are you one of those Americans who have exercised their constitutional rights to freedom of speech, expression, and assembly during this year’s protests? Are you concerned about your personal safety during a protest in light of reports such as this one showing police brutality and retaliatory actions against demonstrators? Well, you may want to know that Clearview thought it was a great marketing idea to give away free trials of its facial recognition service to individual police officers—yes, not just to police departments, but to individual officers. So, in addition to riot gear, tear gas, and batons, Clearview has given individual police officers access to a tool that allows them, at will and for any reason, to “instantaneously identify everyone at a protest or political rally.”

Does the Stasi-style federal “police” force taking demonstrators into unmarked vehicles have access to Clearview’s service? Who knows.

Also, as I’ve mentioned in the past, facial recognition technologies are particularly bad at identifying minorities such as African Americans. Is Clearview’s algorithm accurate enough to ensure that a law-abiding Black citizen won’t be arrested, or even shot, because his face is mistaken for someone else’s? Again, who knows.

On its website, Clearview states that its mission is to enable law enforcement “to catch the most dangerous criminals… And make communities safer, especially the most vulnerable among us.” In light of images such as the one in this article and this one, such a statement is a slap in the face of the reality that vulnerable, marginalized communities have to endure every single day of their lives.

I would like to tell you that there is a clear, efficient way to stop Clearview, but the road ahead will inevitably be tortuous. So far, the American Civil Liberties Union has filed a lawsuit in Illinois state court under the Illinois Biometric Information Privacy Act (BIPA), seeking to enjoin Clearview from continuing its collection of people’s pictures. However, even though BIPA is the most stringent biometric privacy law in the U.S., it is still a state law subject to limitations. As a Stanford law professor put it, “absent a very strong federal privacy law, we’re all screwed,” and there isn’t one. And we all know that in light of the Chernobylesque meltdown our federal system of government is experiencing, there won’t be one anytime soon.

If there is anything that COVID-19 has taught us—or at least reminded us of—it is that some of the most significant threats to life and safety are largely invisible. Some take the form of deadly pathogens capable of killing millions of people. Others take the form of powerful algorithms that, in the words of a Clearview investor, could further lead us down the path towards “a dystopian future or something.” And, speaking of a dystopian future, in his very, very often referenced novel 1984, George Orwell wrote: “If you want a picture of the future, imagine a boot stamping on a human face—for ever.”

Clearview probably has that one, too.


Alexandra M. Franco is a Visiting Assistant Professor at IIT Chicago-Kent College of Law and an Affiliated Scholar with IIT Chicago-Kent’s Institute for Science, Law and Technology.


Countdown to Health Care Privacy Compliance; GDPR Minus One Day

By Joan M. LeBow and Clayton W. Sutherland

As we hurtle toward the May 25, 2018 deadline for implementation of the European Union’s General Data Protection Regulation (GDPR), health care providers are quickly assessing gaps in their understanding of what the GDPR requires. A key area of concern is how the GDPR’s requirements compare to existing obligations under HIPAA/HITECH and the FTC’s requirements.

Elements of Consent and Article 7

Consent in the GDPR can be made easier to understand by breaking the definition down into its principal elements and correlating them with the obligations found in the GDPR. The Article 4 definition can be divided into four parts: consent must be freely given, specific, informed, and include an unambiguous indication of affirmative agreement. We will address each element in separate blog posts, starting with “freely given.”

“Freely Given” Element

“Freely given,” under the GDPR definition, is focused on protecting individuals from an imbalance of power between them and data controllers. Accordingly, the Article 29 Working Party (WP29)—the current data protection advisory board created by the Data Protection Directive—has issued guidance for interpreting when consent is freely given. Per this guidance, consent is only valid if: the data subject is able to exercise a real choice; there is no risk of deception, intimidation, or coercion; and there will not be significant negative consequences if the data subject elects not to consent.[i] Consequently, for organizations to be compliant, consent must be as easy to withdraw as it is to grant. Additionally, GDPR recital 43 states the controller needs to demonstrate that it is possible to refuse or withdraw consent without detriment.[ii]

Controllers (who determine the purposes for data processing and how data processing occurs[iii]) bear the burden to prove that withdrawing consent does not lead to any costs for the data subject and thus no clear disadvantage for those withdrawing consent. As a general rule, if consent is withdrawn, all data processing operations that were based on consent and took place before the withdrawal of consent—and in accordance with the GDPR—remain lawful. However, the controller must stop future processing actions. If there is no other lawful basis justifying the processing (e.g., further storage) of the data, it should be deleted or anonymized by the controller.[iv] Furthermore, GDPR recital 43 clarifies that if the consent process does not allow data subjects to give separate consent for personal data processing operations (granularity), consent is not freely given.[v] Thus, if the controller has bundled multiple processing purposes together and has not attempted to seek separate consent for each purpose, there is a lack of freedom, and the specificity component comes into question. Article 7(4)’s conditionality provision, according to WP29 guidance, is crucial to determining the “freely given” element.[vi]
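
To make the granularity and withdrawal requirements more concrete, below is a minimal sketch, in Python, of how a controller might record consent separately for each processing purpose and honor withdrawal going forward while preserving the record that earlier processing was lawful. The purpose names, fields, and class design are illustrative assumptions, not a GDPR-mandated schema or any vendor’s product.

```python
# Minimal, hypothetical sketch of granular, withdrawable consent records.
# Purpose names and fields are illustrative assumptions, not a GDPR-mandated schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class PurposeConsent:
    purpose: str                      # e.g. "appointment_reminders", "marketing_emails"
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

    @property
    def active(self) -> bool:
        return self.withdrawn_at is None

@dataclass
class ConsentRecord:
    data_subject_id: str
    purposes: dict = field(default_factory=dict)  # purpose name -> PurposeConsent

    def grant(self, purpose: str) -> None:
        # Each purpose is consented to separately (granularity).
        self.purposes[purpose] = PurposeConsent(purpose, datetime.now(timezone.utc))

    def withdraw(self, purpose: str) -> None:
        # Withdrawal must be as easy as granting; processing done before withdrawal
        # stays lawful, but future processing for this purpose must stop.
        consent = self.purposes.get(purpose)
        if consent and consent.active:
            consent.withdrawn_at = datetime.now(timezone.utc)

    def may_process(self, purpose: str) -> bool:
        consent = self.purposes.get(purpose)
        return bool(consent and consent.active)

# Usage: marketing consent can be withdrawn without touching other purposes.
record = ConsentRecord("patient-123")
record.grant("appointment_reminders")
record.grant("marketing_emails")
record.withdraw("marketing_emails")
assert record.may_process("appointment_reminders") and not record.may_process("marketing_emails")
```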

GDPR vs. HIPAA/HITECH and FTC Part 2

Protection from imbalances of power

GDPR: “Freely given,” under the GDPR definition, is focused on protecting individuals from an imbalance of power between themselves and data controllers.

HIPAA/HITECH: The limitations on health data use and the authorization requirements help ensure the privacy of patients and protect their right to limit how their data are used. This protection has various applications, including how data may be used for marketing purposes and when, or whether, data may be sold.

FTC: The FTC protects consumers from the imbalance of power between themselves and the businesses providing them services. It protects consumers, generally, through its FTC Act § 5 powers.

Granular, purpose-specific consent

GDPR: A service may involve multiple processing operations for more than one purpose. In such cases, data subjects should be free to choose which purposes they accept, rather than having to consent to a bundle of processing purposes. Consent is not considered free if the data subject is unable to refuse or withdraw his or her consent without detriment; examples of detriment are deception, intimidation, coercion, or significant negative consequences if the data subject does not consent. Article 7(4) of the GDPR indicates that, among other things, the practice of “bundling” consent with acceptance of terms or conditions, or “tying” the provision of a contract or a service to a consent request for processing personal data not necessary for the performance of that contract or service, is considered highly undesirable. When such practices occur, consent is presumed not to be freely given.

HIPAA/HITECH: An Authorization must include a description of each purpose of the requested use or disclosure of protected health information. A covered entity may not condition the provision of treatment, payment, enrollment in a health plan, or benefit eligibility on obtaining an authorization unless one of the three enumerated exceptions applies: those for psychotherapy notes, marketing, or the sale of protected health information. Under HIPAA/HITECH, bundling authorizations with other documents, such as consent for treatment, is generally prohibited; however, there are three circumstances in which authorizations can be compounded to cover multiple documents or authorizations.

FTC: The FTC polices unfair and deceptive business practices, including deceiving or misleading customers about participation in a privacy program, failing to honor consumer privacy choices, unfair or unreasonable data security practices, and failing to obtain consent when tracking consumer locations. Under the Children's Online Privacy Protection Rule (“COPPA”), a website or online service directed to children under 13 cannot collect personal information about them without parental consent.

Withdrawal of consent

GDPR: For organizations to be compliant, withdrawing consent must be as easy a procedure as granting it. As a general rule, if consent is withdrawn, all data processing operations that were based on consent and took place before the withdrawal of consent—and in accordance with the GDPR—remain lawful; however, the controller must stop future processing actions. If there is no other lawful basis justifying the processing (e.g., further storage) of the data, the data should be deleted or anonymized by the controller. GDPR recital 43 states the controller needs to demonstrate that it is possible to refuse or withdraw consent without detriment.

HIPAA/HITECH: The right to withdraw an authorization is similar to the GDPR right to withdraw consent, and the covered entity, like the controller, has the responsibility of informing data subjects of that right. The revocation must be in writing and is not effective until the covered entity receives it. In addition, a written revocation is not effective with respect to actions a covered entity took in reliance on a valid Authorization, or if the provision of a contract or service was conditioned on obtaining the authorization. The Privacy Rule requires that the Authorization clearly state the individual’s right to revoke; the process for revocation must either be set forth clearly on the Authorization itself or, if the covered entity creates the Authorization and its Notice of Privacy Practices contains a clear description of the process, the Authorization can reference the Notice of Privacy Practices.

FTC: According to better business practices promulgated by the FTC, companies should provide key information as clearly as possible and not embed it within blanket agreements like a privacy policy, terms of use, or even the HIPAA authorization itself. For example, if a consumer is providing health information only to her doctor, she should not be required to click on a “patient authorization” link to learn that it is also going to be viewable by the public. Nor should the provider promise to keep information confidential in large, boldface type, but then ask the consumer in a much less prominent manner to sign an authorization that says the information will be shared. Further, health care providers should evaluate the size, color, and graphics of all of their disclosure statements to ensure they are clear and conspicuous.

[i] Working Party 29. “Guidelines for Consent under Regulation 2016/679.” Working Party 29 Newsroom, Regulation 2016/679 Guidance, November 28, 2017, 7-9, 30.

[ii] European Parliament, and European Council. “REGULATION (EU) 2016/679 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation).” Official Journal of the European Union, Legislation, 119/8 (May 4, 2016). [Hereinafter GDPR Publication].

[iii] See, id at 119/33 for Art. 4 (7).

[iv] Working Party 29. “Guidelines for Consent under Regulation 2016/679.” Working Party 29 Newsroom, Regulation 2016/679 Guidance, November 28, 2017, 21, 30.

[v] See, GDPR Publication at 119/8.

[vi] Working Party 29. “Guidelines for Consent under Regulation 2016/679.” Working Party 29 Newsroom, Regulation 2016/679 Guidance, November 28, 2017, 10, 30.

Joan M. LeBow is the Healthcare Regulatory and Technology Practice Chair in the Chicago office of Quintairos, Prieto, Wood & Boyer, P.A. Clayton W. Sutherland is a Class of 2018 graduate of the IIT Chicago-Kent College of Law.

Android’s Watching You. Now You Can Watch Back.

By Raymond Fang

On November 24, 2017, Yale Law School’s Privacy Lab announced the results of its study of 25 common trackers hidden in Google Play apps. The study, conducted in partnership with Exodus Privacy, a French non-profit digital privacy research group, examined over 300 Android apps to analyze their permissions, trackers, and transmissions. Exodus Privacy built the software that extracts this information from the apps, and Yale’s Privacy Lab studied the results. The authors found that more than 75% of the apps they studied installed trackers on the user’s device, primarily for the purposes of “targeted advertising, behavioral analytics, and location tracking.” Yale’s Privacy Lab has made the 25 studied tracker profiles available online, and Exodus Privacy has made the code for its free, open-source privacy auditing software available online as well.

The Exodus Privacy platform currently lacks an accessible user interface, so the average person cannot use the program to test apps of their choosing. Though the Exodus Privacy website does contain a video tutorial on how to “Try it [Exodus Privacy] at home,” the tutorial requires the user to write code on an unspecified platform (possibly using the code available on GitHub) to run the privacy auditing software, which requires some knowledge of computer science. Instead, the average person must rely on the reports generated on Exodus Privacy’s website. Exodus Privacy’s software automatically crawls Google Play to update tracker and permission data for all the apps in its database, and it is constantly adding more apps.
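
For readers curious about what such auditing software does under the hood, the sketch below illustrates the general idea of signature-based tracker detection: class names recovered from an app’s code are matched against known tracker code signatures. The signature patterns and example class names here are illustrative assumptions, not Exodus Privacy’s actual database or code, and a real tool would first decode the APK’s bytecode to obtain the class list.

```python
# Hedged sketch of the general idea behind tracker detection: match class names
# found in an app's code against known tracker code signatures. The signatures
# below are illustrative assumptions, not Exodus Privacy's actual database, and
# real tools extract class names by decoding the APK's DEX files first.
import re

TRACKER_SIGNATURES = {
    "DoubleClick (illustrative)": r"^com\.google\.android\.gms\.ads\.doubleclick\.",
    "Flurry (illustrative)":      r"^com\.flurry\.",
    "ComScore (illustrative)":    r"^com\.comscore\.",
}

def detect_trackers(class_names):
    """Return the set of tracker names whose signature matches any class name."""
    found = set()
    for name, pattern in TRACKER_SIGNATURES.items():
        if any(re.match(pattern, cls) for cls in class_names):
            found.add(name)
    return found

# Example: class names as they might appear after decoding an app's bytecode.
example_classes = [
    "com.example.musicapp.MainActivity",
    "com.flurry.android.FlurryAgent",
    "com.comscore.analytics.comScore",
]
print(detect_trackers(example_classes))
```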

As of December 4, 2017, the Exodus Privacy website has generated reports on 511 apps. These reports yield interesting information about how some very popular apps track your personal information for advertising purposes. Snapchat (500,000,000+ downloads), for example, contains an advertising tracker from data aggregator company DoubleClick. Spotify Music (100,000,000+ downloads) contains advertising trackers from DoubleClick, Flurry, and ComScore. Exodus Privacy’s reports say only that these trackers collect “data about you or your usages,” so it is hard to tell exactly what data about your social media usage and music preferences they gather. DoubleClick’s own privacy policy, however, states that it collects “your web request, IP address, browser type, browser language, the date and time of your request, and one or more cookies that may uniquely identify your browser,” “your device model, browser type, or sensors in your device like the accelerometer,” and “precise location from your mobile device.” If cookies are not available, as on mobile devices, the privacy policy states that DoubleClick will use “technologies that perform similar functions to cookies,” tracking what you look at and for how long. Obviously, you may want to keep some of this information private for various reasons; yet the widespread use of these advertising trackers in Android apps means that data related to your social media content and music preferences can easily be sold to advertisers and exposed.

Beyond the tracking done on social media and music apps, Exodus Privacy’s reports show that some health and dating apps also collect and sell your intimate and personal data. Spot On Period, Birth Control, & Cycle Tracker (100,000+ downloads), Planned Parenthood’s sexual and reproductive health app, contains advertising trackers from AppsFlyer, Flurry, and DoubleClick. If you were pregnant, trying to conceive, or even just sexually active, data aggregator companies could conceivably sell that information to advertisers, who might then send you related advertisements. If someone were borrowing your computer or looking over your shoulder, they might be able to see the ads and figure out that you were pregnant, trying to conceive, or sexually active. Such accidental exposure could cause you emotional harm if you were not ready or willing to share that private information with others. Grindr (10,000,000+ downloads), the popular dating app for gay and bisexual men, has advertising trackers from DoubleClick and MoPub. If advertisements about your sexuality started popping up whenever you used the Internet, they might accidentally reveal your sexuality before you are ready to tell certain people, which could cause a great deal of emotional distress.

There is clearly cause for concern when it comes to Android apps’ tracking and selling your personal information. Unfortunately, selling user data to advertisers is a very lucrative and reliable way for tech companies to monetize their services and turn a profit, so it’s hard to envision an alternative system where all of your personal data would be protected from commodification. However difficult it may now be to imagine a world where your privacy is adequately protected in the digital space, it will be up to privacy-conscious consumers, researchers, scholars, lawyers, and policymakers to make that world a reality.

Raymond Fang, who has a B.A. in Anthropology and the History, Philosophy, & Social Studies of Science and Medicine from the University of Chicago, is a member of the ISLAT team.

Hate Speech, Free Speech, and the Internet

By Raymond Fang

In the wake of the August 12, 2017 white supremacist terrorist attack in Charlottesville, Virginia that killed one person and injured 19 others, how are Internet platforms handling racist, sexist, and other offensive content posted on their servers and websites? What are the legal ramifications of their actions?

According to a July 2017 Pew Research Center Report, 79% of Americans believe online services have a responsibility to step in when harassing behavior occurs. If white supremacist content counts as a form of harassment, then online platforms have certainly taken up this call in recent weeks. In the week following the Charlottesville attack, a wave of technology companies, including social networks, web hosts, domain registrars, and payment processors, banned white supremacist users, pages, and sites from their services.

White supremacists have reacted to these bans and other anti-white-supremacy movements by casting themselves as an oppressed group, supposedly denied free speech, and fearful to speak their minds on so-called intolerant, overly-PC liberal college campuses lest they be attacked and belittled. (Never mind the fact that people of color, women, immigrants, LGBTQ individuals, poor people, people with disabilities, and other marginalized groups have faced and continue to face serious and real discrimination every day).

Somewhat unsurprisingly, the Pew Research Center Report finds stark gender differences on opinions about the balance between protecting the ability to speak freely online, and the importance of making people feel welcome and safe in digital spaces. 64% of men age 18-29 believe protecting free speech is imperative, while 57% of women age 18-29 believe the ability to feel safe and welcomed is most important. Unfortunately, the Pew Research Center Report does not contain any data about racial differences on the speech v. safety question, nor does it have cross-tabbed data on race and gender together (e.g. black women, white men, Hispanic men).

Legally, digital media companies are allowed to ban people from their servers and services at their discretion, as First Amendment guarantees of free speech do not necessarily apply to private companies and their own terms of service. There are dangerous implications to this standard. As CloudFlare’s CEO, Matthew Prince, wrote in a company email about his decision to kick The Daily Stormer off their servers, “Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet. No one should have that power.” Prince later wrote a blog post on CloudFlare’s website where he discussed his decision, emphasized the importance of due process when decisions are made about speech, and called for the creation of stronger legal frameworks around digital content restrictions that are “clear, transparent, consistent and respectful of Due Process.” In other words, not all online speech deserves protection, but delineating which online speech does and doesn’t deserve protection should be a clear, transparent, and democratic process. Though white supremacists and neo-Nazis were the rightful target of Silicon Valley’s wrath this time, that may not be the case in the future – perhaps policymakers would do well to heed Prince’s call.

Raymond Fang, who has a B.A. in Anthropology and the History, Philosophy, & Social Studies of Science and Medicine from the University of Chicago, is a member of the ISLAT team.

Ransomware: Digital Hijacking in the 21st Century

By George Suh

Ransomware is gaining traction as one of the most significant cyber threats online.  On May 12, 2017, the ransomware “WannaCry” began infecting PCs all over the world.  The impact of WannaCry was staggering: it reached over 150 countries and infected more than 300,000 computers.  Ransomware is a type of malware that encrypts or locks your computer’s data and files for ransom.  Bitcoin is a popular form of payment with cyber attackers because its transactions are difficult for federal and international authorities to trace back to the extortionists.  Moreover, there is no guarantee that paying the ransom will give the infected user access to their computer.  Thus, if you do not create a backup of your data, paying the ransom can lead to a costly or futile outcome and leave potentially sensitive data in the hands of clandestine criminals.

Ransomware is not a new phenomenon.  This type of malware was first reported in Russia and parts of Eastern Europe in 2005, and starting around 2012, its use has grown exponentially.  The rise in ransomware has proven to be a very lucrative black-market enterprise for hackers, with the FBI estimating that another major ransomware, CryptoWall, generated at least $27 million from its victims.  Even police departments were among CryptoWall’s victims.  In Swansea, Massachusetts, a police department’s computer system became infected.  Ultimately, the department paid the ransom of 2 Bitcoins (around $750 at the time) instead of trying to recover the encrypted files on its own.  Swansea Police Lt. Gregory Ryan told the Herald News that “CryptoWall is so complicated and successful that you have to buy these Bitcoins, which we had never heard of.”

As recent events have shown, high-profile ransomware attacks are a growing trend in the cyber landscape.  Businesses and organizations that maintain personally identifiable information should take into account the potential legal ramifications of failing to secure critical data:

  • Federal Trade Commission Enforcement. In a November 2016 blog entry, the FTC warned that “a business’ failure to secure its networks from ransomware can cause significant harm to the consumers whose personal data is hacked.  And in some cases, a business’ inability to maintain its day-to-day operations during a ransomware attack could deny people critical access to services like health care in the event of an emergency.”  The FTC also highlighted that “a company’s failure to update its systems and patch vulnerabilities known to be exploited by ransomware could violate Section 5 of the FTC Act.”  When a data breach occurs, the FTC may also consider the accuracy of the security promises made to the consumer.  Under Section 5 of the FTC Act, the “unfair or deceptive acts or practices” doctrine gives the FTC the authority to pursue legal action against businesses and organizations that misrepresent the security measures used to protect sensitive data.
  • Breach Notification Requirements. In the U.S., 48 states, the District of Columbia, the U.S. Virgin Islands, Guam, and Puerto Rico have laws that require notification to affected individuals in the event of a breach, and some states also require notification to regulators. Federal laws, such as the Health Insurance Portability and Accountability Act (“HIPAA”), also have specific breach notification requirements.  Moreover, U.S. businesses and organizations that operate or sell products internationally may be subject to stricter notification laws.  For example, beginning May 25, 2018, the E.U.’s General Data Protection Regulation (“GDPR”) will require notification to the relevant supervisory authority “within 72 hours of first having become aware of the breach.”  Businesses or organizations that violate the GDPR can be fined up to 4% of annual global turnover or €20 million, whichever is greater (a rough illustration of these figures appears in the sketch after this list).
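
As a rough illustration of the two GDPR figures mentioned above, the 72-hour notification window and the fine cap of 4% of annual global turnover or €20 million (whichever is greater), the sketch below computes both for a hypothetical company. The dates and turnover figure are made up, and this is arithmetic, not legal advice.

```python
# Rough, illustrative arithmetic for two GDPR figures mentioned above:
# the 72-hour breach-notification window and the fine cap of 4% of annual
# global turnover or EUR 20 million, whichever is greater. All inputs are
# hypothetical; this is not legal advice.
from datetime import datetime, timedelta, timezone

def notification_deadline(became_aware: datetime) -> datetime:
    """Latest time to notify the supervisory authority under the 72-hour rule."""
    return became_aware + timedelta(hours=72)

def max_gdpr_fine(annual_global_turnover_eur: float) -> float:
    """Upper bound of the higher GDPR fine tier: max(4% of turnover, EUR 20M)."""
    return max(0.04 * annual_global_turnover_eur, 20_000_000)

aware_at = datetime(2018, 5, 28, 9, 0, tzinfo=timezone.utc)   # hypothetical breach discovery
print(notification_deadline(aware_at))                         # 2018-05-31 09:00 UTC
print(f"EUR {max_gdpr_fine(800_000_000):,.0f}")                # 4% of 800M = EUR 32,000,000
```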

Understanding the applicable breach notification laws can save a business or organization from significant legal and monetary complications.  The unfortunate reality is that ransomware may be the beginning of much more sophisticated and sinister malware attacks.  Therefore, businesses and organizations that maintain personal data should ensure they are complying with data privacy and cyber security laws.  With the high profitability and anonymity that ransomware provides for cyber criminals, there will certainly be more attacks in the future.

George Suh is a 3L at Chicago-Kent. He is the co-founder and current Vice President of Chicago-Kent’s Cyber Security and Data Privacy Society.

Bots Can Order Pizza For You. And Then Spy on You.

By Keisha McClellan

Although we are well into 2017, here’s a belated welcome to the Year of the Bot. Bots are revolutionizing the way we fuse technology with our everyday lives and posing challenges to our privacy.

Your phone or smart speaker may record your wish for a cheap plane ticket or an Uber ride, but it is the software application known as a bot that executes the command. At their core, bots let us give a command to a device and have the device carry it out. Some bots enable two-way conversations with us; others offer simpler engagement.

From celebrity chatbots like Kim Kardashian’s or Maroon 5’s, to bots that can help us with health queries or financial budgeting, bots are popping up in our lives in all kinds of nifty ways. But the bots associated with “smart speakers,” such as Amazon.com’s Alexa and Google Home, are particularly wrapping convenience and controversy all into one.

Why should we care? The benefits of gaining a virtual assistant in the devices we carry around or use at home come with a creepy caveat: bots can infringe on our privacy in ways we never imagined.

Take Apple’s Siri, Amazon’s Alexa, or a Google Home smart speaker: these are essentially voice-controlled virtual assistants that can make life simpler, speedier and, perhaps, more enjoyable. You’re literally only a shout across the room away from ordering your favorite pizza.

That bots listen for our commands is innovative. Echo’s Alexa allows you to do many things including making a to-do list, providing a weather forecast, placing a toy order and streaming a podcast on voice command.

That the technology can also listen to and record things you’re saying without your realizing it is scary. It may even be incriminating.

An Arkansas prosecutor demanded Amazon turn over recorded data from an Echo in hopes that the speaker was recording at the time a man died in a friend’s hot tub. The device, at times, records the goings-on in one’s home even when it hasn’t been directed to do so. In this case, the prosecutor hoped that the cloud recordings would shed light on how the man died. Until the owner granted consent for his Echo information to be turned over to prosecutors, Amazon refused to comply with requests for the recorded data, citing the First Amendment as protecting the recordings.

The fact that these smart speakers may be listening and recording you without your knowledge, is disturbing enough. A reporter was startled when a private conversation between him and his wife was eerily interrupted when Echo’s Alexa “barged into the conversation with what sounded like a rebuke.”

But more troubling is the question of what companies are doing with the data these smart speakers collect. Bots gather “massive amounts of data about us. And that raises a dark side of this technology: the privacy risks and possible misuse by technology companies,” says the Washington Post’s Vivek Wadhwa.

Indeed, Albert Gidari, director for privacy at Stanford Law School’s Center for Internet and Society, says the “reality is that technology…kind of blurs law for privacy.”

Bots behaving badly can take many forms. For instance, Lin-Manuel Miranda of “Hamilton” fame was so alarmed about bots driving up the price of tickets to sports, music events and Broadway shows in some cases by more than 1,000 percent, that he penned an op-ed in the New York Times blasting brokers’ use of ticket bots.

President Obama and Congress were concerned enough about the potential for some bots to harm consumers that they passed the BOTS Act of 2016 to deter ticket scalpers from going high-tech with bots.

Sure, bots can do bad things. But like two sides of every coin, bots have good capabilities too.

Siri helped a little boy save his mother’s life. When his mother fell unconscious, the 4-year-old used his mother’s finger to open an iPhone and then used Siri to call 911 and reach an operator for help.

In this year of the bot, you may be itching to take the plunge and buy a new gadget that features a bot virtual assistant. While the benefits are many, be sure to protect your privacy in the process. For starters, Nextadvisor.com’s Jocelyn Baird advises that you review the settings of your device’s microphone and even consider adding an “audible tone when it’s active, so you know when it’s recording.”

Keisha McClellan is a rising 2L law student at Chicago-Kent College of Law and a founding board member of Chicago-Kent’s Cyber Security and Data Privacy Society.


Just a Fingerprint Away: The Risks of Fingerprint Scanning

By Michael Goodyear

The fingerprint scanner is perhaps one of the best-known security features in the world. In spy movies, no safe or villain’s lair is complete without one. But fingerprint scanners aren’t foolproof: in “Diamonds Are Forever,” James Bond uses a fake fingerprint to get past such a scanner. In the nearly 50 years since that movie was released, fingerprint scanners have become increasingly ubiquitous, and as a common protection mechanism for smartphones, they are the sealed gate to your data. But that gate is not as secure as we might think, and it no longer takes a legendary spy like 007 to crack it open.

A recent study by researchers at New York University and Michigan State University brought the technological risks of fingerprint scanning to light. The researchers used computer simulations to create “MasterPrints”: real fingerprints taken from databases, or synthetically created ones, that can spoof one of the stored fingerprints in a scanner’s database and unlock a phone. Although the study did not use real phones, instead using cropped images on the commercial verification software Verifinger, the findings were still alarming. The researchers’ generated prints could match the real ones up to 65% of the time. Even if the percentage with real phones were much lower, it would still be a considerable risk.

One of the greatest weaknesses of your phone’s fingerprint scanning technology is that it doesn’t actually take a full fingerprint scan. A full scan would be nearly impossible to falsify. But your iPhone or Android phone only scans partial fingerprints, a much smaller area with fewer unique features. This risk is exacerbated by the fact that your phone typically takes eight to ten scans, giving the fingerprint scanner a database of eight to ten partial prints it can use. Now hackers have eight to ten chances to spoof your fingerprint rather than just one. If you register other people’s fingerprints on your phone (your spouse’s or children’s, perhaps), the risk increases again. It’s like having a lockbox with several different keys: the greater the number of keys, the greater the risk that someone will get their hands on one or be able to copy one.
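
A toy calculation makes the “more keys, more risk” point concrete: if a forged partial print has some small chance of fooling any single stored template, storing eight to ten templates multiplies the attacker’s chances. The per-template probability below is a made-up number for illustration, not a figure from the NYU and MSU study.

```python
# Toy illustration of why storing several partial-fingerprint templates widens
# the attack surface. The per-template spoof probability is a made-up number,
# not a figure from the NYU/Michigan State study.
def chance_of_at_least_one_match(per_template_probability: float, num_templates: int) -> float:
    """P(at least one template is fooled) = 1 - P(none are fooled)."""
    return 1 - (1 - per_template_probability) ** num_templates

p = 0.01  # hypothetical chance a forged partial print fools one stored template
for templates in (1, 8, 10):
    print(f"{templates} stored template(s): {chance_of_at_least_one_match(p, templates):.1%}")
# 1 stored template(s): 1.0%
# 8 stored template(s): 7.7%
# 10 stored template(s): 9.6%
```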

Professor Stephanie Schuckers, Director of the Center for Identification Technology Research at Clarkson University, noted that because the study didn’t involve actual phones, the takeaways were limited.

But while a full study of Apple and Android fingerprint recognition programs will be necessary to uncover the exact risk of falsifying fingerprints, any risk is too high. Our phones hold a world of data about us. By unlocking your phone, someone wouldn’t just be able to make a call, but would know your deepest secrets. Your contacts, your intimate texts and emails, your interests, and even your health data, all stored on your phone with only fingerprint recognition to protect them, would be at risk.

Perhaps the most alarming consequence of this security vulnerability is what it means for your finances. Services such as Apple Pay and Android Pay allow you to make purchases with the swipe of your finger. Banks are increasingly starting to have fingerprint recognition for signing into your app (and all of your financial data). Large banking institutions such as Chase and Bank of America, as well as credit card companies such as Capital One, are now just a swipe away for you…and your hacker.

When someone’s information gets stolen due to a false fingerprint, who will be liable? The phone developer and the financial institution, having relied on falsifiable fingerprint recognition technology, would be at risk of being held responsible. In the short term, however, it is the user who will suffer. Their personal and financial information will be compromised, leading to countless hours spent trying to secure everything again, not to mention the permanent damage that could be done by their data getting out.

Fingerprint technology is not the only option (written passwords are usually still offered), so customers do have a choice of whether or not to trust that fingerprint technology will protect their data. But since fingerprints are unique, fingerprint scanners have been seen as the safe choice, a much more secure method than a four-digit passcode.

Reporters have actually questioned the security of fingerprint scanning systems for years. But while previous fears were often just lists of everything that could go wrong, the new NYU and MSU study has quantifiable data to prove that fingerprints can be spoofed.

Technology has advanced so much that you can do practically anything from your smartphone. But we have to remember that with progress come downsides. When all that stands between your sensitive personal information and a thief is a fingerprint, you need the technology to be ironclad. James Bond may have had noble aims in tricking a fingerprint scanner, but it is unlikely that data hackers will have those same scruples. It may be easy to swipe your finger and open your phone and all of your apps, but ease is not worth the risk of losing your information to modern-day spies.

Michael Goodyear, who has a BA in History and Near Eastern Languages and Civilizations from the University of Chicago, is part of the ISLAT team.


Altering Prisoners’ Sense of Time: The Moral Regression of a Futuristic Technology

By Caroline Thiriot

What if you could give a prisoner a pill that changed their perception of time? A 10-year sentence could feel like millennia. Or a person could experience a 10-year sentence in two years.

Science has already brought us to the brink of this technology. A paper published in the Journal of Neuroscience outlines the nature of time perception, and the science seems to support Kant’s “subjective” and “ideal” view of the matter. Indeed, “[o]ur perception of time constrains our experience of the world and exerts a pivotal influence over a myriad array of cognitive and motor functions.” (emphasis in the original). The results of the study demonstrated “anatomical, neurochemical, and task specificity, which suggested that a neurotransmitter called GABA (Gamma-Amino Butyric Acid) contributes to individual differences in time perception.” With this increased understanding of how we perceive time, perception-altering medications may follow.

Psychoactive drugs could be used to distort prisoners’ perception of time and make them feel as though they were serving a 1,000-year sentence, a sentence length that is legally available in the United States. As detailed in Slate and Aeon, philosopher Rebecca Roache is undertaking a thought experiment to explore the ethical issues involved in using perception-altering drugs and life-extension technologies in the corrections context.

Medical and scientific advances could change the way prisoners serve time and dramatically alter our prison system. For economic purposes, we could imagine prisoners physically spending one day in prison while psychologically experiencing it as lasting years. Considering the high costs of prisons, psychoactive drugs could thus be a way to save money. However, the risk-benefit ratio does not seem to be favorable at all.

There already are situations in which perceptual distortions such as “disorientation in time” occur; consider the practice of solitary confinement. “There is long history of using the prison environment itself to affect prisoners’ subjective experience,” highlights Rebecca Roache. On October 18, 2011, the Special Rapporteur of the Human Rights Council on torture and other cruel, inhuman or degrading treatment or punishment, Juan E. Méndez, presented his thematic report on solitary confinement to the United Nations General Assembly. He called on all countries “to ban the solitary confinement of prisoners except in very exceptional circumstances and for as short a time as possible, with an absolute prohibition in the case of juveniles and people with mental disabilities.” He stressed as well that “Solitary confinement is a harsh measure which is contrary to rehabilitation, the aim of the penitentiary system.”

Two points made in the statement above are worth discussing further. First, we will address the issue of torture and other cruel, inhuman or degrading treatment or punishment. Then, we will turn to a more philosophical controversy: the aim of the penitentiary system.

Torture is universally condemned. The prohibition against torture is well established under customary international law as jus cogens, as well as under various international treaties such as the Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment, ratified by 136 countries (including the United States in 1994). Even though the effects of perception-altering drugs have not yet been studied, we can relate them to those of solitary confinement to some extent. Indeed, one can picture the subject whose time perception is altered as experiencing a reality other than the one commonly experienced. Thus, like the physically isolated prisoner, such a subject would likely be deprived of normal human interaction and might eventually suffer from mental health problems including anxiety, panic, insomnia, paranoia, aggression, and depression. In addition to the mental health risks, there are physical health risks as well, because the needs for sleep or food may be perceived differently.

As for the aim of the penitentiary system, several questions arise, especially concerning rehabilitation and recidivism. Some authors argue that prisons should be abolished and replaced by “anti-prisons,” that is, locked, secure residential colleges, therapeutic communities, and centers for human development. Indeed, nowadays there is little doubt that punishment fails and rehabilitation works. From such a perspective, altering prisoners’ time perception in order to make them feel like they spend more time in jail could be seen as a step backwards rather than progress. According to the American Correctional Association (ACA) 1986 study of prison industry, contemporary prison institutions have three categories of goals: offender-based (good work habits, real work experience, vocational training, life management experience), institution-oriented (reducing idleness, structuring daily activities, reducing the net cost of corrections), and societal (repayment to society, dependent support, victim restitution). If such a technology were implemented, most of these goals would not be met. On the contrary, it appears we would instead go back to an era when solitary confinement was thought to foster penitence and encourage reformation, but in a rather extreme form, causing more harm to the individual, possibly to the extent of mental illness.

The Eighth Amendment states that “Excessive bail shall not be required, nor excessive fines imposed, nor cruel and unusual punishments inflicted.” Professor Richard S. Frase has analyzed constitutional proportionality requirements. He notes that since 1980, the Supreme Court has ruled in favor of the prisoner only once out of the six cases in which the duration of a prison sentence was attacked on Eighth Amendment grounds. As he concludes, “The Court has never made clear what it means by proportionality in the context of prison sentences. Justice Scalia believes (and perhaps so does Justice Thomas) that this concept only has meaning in relation to retributive sentencing goals.” When it comes to sentencing goals, one should thus distinguish retributive goals from non-retributive ones. While the first theory considers only the defendant’s past actions and focuses on the punishment itself, the second (also described as “utilitarian”) takes the future effects of the punishment into account. On the basis of such a distinction, one must conclude that making prisoners feel as though they were spending a very long time in jail would serve only a retributive purpose and would fail entirely at addressing the non-retributive ones.

Also, we live in a society in which, even if individualism seems to be the supreme rule, interdependence remains a governing concept. In his famous “Experience Machine” thought experiment, Robert Nozick asks, “What else can matter to us, other than how our lives feel from the inside?” He concludes that, where pleasure is concerned, we would choose everyday reality over an apparently preferable simulated reality. Although his thought experiment deals with a notion that is the opposite of punishment, we can rely on the conclusion that the reality we commonly experience matters more than our subjective experience of it. As a consequence, one should not forget that the victim’s subjective perception of justice matters as well. It may be difficult for victims to know that a criminal is out of jail after having spent only a little time there and is able to enjoy the rest of their life free. Even if we focus only on retributive goals, such a technology has subjective limitations.

In the end, there seems to be no argument other than the economic one for allowing the use of psychoactive drugs that distort prisoners’ perception of time. On the contrary, their use can be seen as torture and does not serve any rehabilitative aim, which is the main focus of prison sentences today. The use of such technology would therefore be regressive rather than progressive.

Caroline Thiriot, who has a Master’s in International Law and Human Rights from the Université Panthéon-Assas and an LL.M. in international and transnational law from Chicago-Kent College of Law, is currently a Master’s student in Bioethics at Université Paris Descartes.

Fake News: A Little White Lie or a Dangerous Crock?

By Michael Goodyear

Since early November, press coverage on the problem of fake news stories has exploded.  These fake stories have included everything from the Pope endorsing Donald Trump to a woman stealing 24 dogs from an animal shelter. While they may seem harmless enough, the impact of people releasing such stories can range from simple confusion to active violence.

But what happens when the police create fake news? Even if it is well-intended, police dissemination of fake news can lead to a series of consequences, such as negative impact on neighborhoods, increased danger for citizens, violence, and distrust.

A few days ago, the Santa Maria Times uncovered a fictional news release in court documents, ten months after it had reported the same story as fact. The news release stated that two cousins, Jose Santos Melendez and Jose Marino Melendez, had been taken in for identity theft and were now in the custody of immigration authorities. It seemed like a simple report; in actuality, it was part of an elaborate, but deceitful, plan—not by crooks, but by the police.

The Santa Maria Police Department had been running Operation Matador for months at that point. The police had been eavesdropping on members of MS-13, a dangerous international gang, with the goal of eventually arresting gang members. Through wiretaps, they learned that MS-13 planned to murder the Melendez cousins. This raised a new issue: if the police acted to save the two cousins, their operation would be exposed and the progress of the past months would be lost. A fake news story could solve this problem. The police took the Melendez cousins into hiding for their safety while the fake news story provided a cover, explaining the cousins’ disappearance without arousing suspicion and also protecting the cousins’ family, who might have been harmed by MS-13 if the gang believed the cousins were merely hiding.

In the following weeks, the police brought Operation Matador to a successful conclusion: 17 gang members were arrested on charges of murder and intent to kill in March. In July, a criminal grand jury indicted all 17 of them on a combined 50 felony counts. Lives were saved and gang members were successfully arrested, so what is the problem?

Whether well intentioned or not, fake news can have real consequences. By releasing false information about crime or police action, the police alter public perceptions of their community. If the police falsely report a crime in one neighborhood to divert attention from another, that reported neighborhood will seem more dangerous to the populace, even though in actuality the stated crime didn’t occur there.  This could lead to a downturn in local business and desire to live in that neighborhood. It would also make the neighborhood where the crime actually happened seem better in the eyes of the unwitting public, who might go to that neighborhood despite the dangers it could present.

Similarly, reporting that a crime has been solved, while in fact it has not, would also alter the public’s perceptions and possibly their actions. For example, the police could falsely report that they had solved crimes or reduced crime rates in a neighborhood in order to improve confidence in the police and intimidation of criminals. But it could also make people unreasonably more confident in the safety of an area, causing more people to go into what in actuality is still a dangerous neighborhood.

In addition, reporting that a crime has been solved when it has not could lead to greater violence or harm the police’s chances of actually solving the crime. For example, saying that the police have uncovered information about a crime or solved a crime when they haven’t could lead a perpetrator to harm those whom he thinks may have informed the police about him. It could also cause the perpetrator to flee the area to avoid arrest.

The police making it seem like crimes are being committed when they actually aren’t could also lead to harmful individual action. For example, earlier this week a fake conspiracy theory that Hillary Clinton was operating a child sex ring out of Comet Ping Pong, a popular Washington, D.C., pizza parlor, led to vigilante action. Edgar Maddison Welch decided to go investigate “Pizzagate.” Inside the restaurant, he fired a shotgun, damaging the interior of Comet Ping Pong but not injuring anyone inside. Although bloodshed was averted in this case (Welch surrendered peacefully when he found no sign of the fabricated child sex ring), fake news undoubtedly put people’s lives at risk.

Although the Pizzagate example was not caused by the police, the police reporting fake crimes could lead to similar results: vigilantism and violence. As CNN aptly put it in regards to Pizzagate, “fake news, real violence.”

Fake news also harms our collective knowledge and our ability to tell truth from lie. While any piece of fake news has the potential to mislead and harm others, the police releasing such a story is especially harmful to our trust. We look to the police as honest defenders of justice; releasing fabricated stories undermines that, duping the public and the press as well as the suspect. As Louis Dekmar, vice president of the International Association of Chiefs of Police, pointed out, such ruses create “a real distrust between the police and the folks we rely on.” Such a lack of trust undermines the relationship between police and the community, and, according to the Department of Justice, trust is one of the key factors in maintaining public safety and effective policing. Although fake lures are often used in sting operations, such as fake prizes, fake news on this scale is unprecedented.

Although police use of fake news may be rare, the police have a widely used precedent for faking: fake Facebook profiles. Cops across the country have created fake Facebook profiles to uncover more information about suspects and even help track them down. For example, back in 2009 the police created a fake profile featuring a picture of an attractive young woman and friended Adam Bauer, a 19-year-old college student, to access pictures of him drinking that were posted on his account, later ticketing him for underage drinking.

And even though Facebook officially bans the practice, a federal judge ruled back in 2014 that cops can create fake social network profiles for investigative purposes. The Department of Justice even said that police usage of fake Facebook profiles is ethical. Yet this is at odds with the Department of Justice stressing the importance of trust between police and the community. Bauer and other college students that were charged with underage drinking through photographic evidence from Facebook stated that the fake Facebook profiles undermined trust between college students and police.

This most likely will not be the last time the police fake a news story. With regard to the fake news story in Operation Matador, Ralph Martin, the Santa Maria police chief, defended the tactic, saying he would not rule out releasing a fake news story again in order to protect lives. But given the risks of fake news, in general and especially when the police are behind it, such a tactic could have much more costly ramifications than predicted.

Michael Goodyear, who has a BA in History and Near Eastern Languages and Civilizations from the University of Chicago, is part of the ISLAT team.

The Need for Speed: When Apps Inspire Dangerous Behavior

By Nadia Daneshvar

Mobile apps may be designed with good intentions, but what happens when those aims lead to dangerous user behavior? This is the case for Strava, a popular cycling app whose promotion of speed led to deadly consequences and spurred new questions regarding the responsibilities of app developers.

Strava lets users record cycling data using a smartphone or GPS device and upload that information to track, analyze, and share with friends or the public. The app records where cyclists rode and how long and how fast they rode. It then compares a user’s times with personal records as well as the fastest times of other users.

The app also tracks a cyclist’s performance on “segments”—any stretch of road, path, or trail mapped out by a user for the purpose of a multiplayer competition of who can go the fastest, whether up a hill, down the street, or on a descent. Strava compares each user’s times on a particular segment to the times of everyone else who has ridden it before and uploaded the data to the app. The fastest riders are given the title “King of the Mountain” (“KOM”) or “Queen of the Mountain” (“QOM”).
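
To see what a segment time boils down to, here is a minimal sketch of the computation a leaderboard implies: take the start and finish times over a mapped stretch, compute elapsed time and average speed, and, in this hypothetical version only, flag efforts that could only have been ridden above a posted speed limit. The distance, timestamps, and the speed-limit check are illustrative assumptions, not Strava’s actual code or features.

```python
# Minimal sketch of a segment-time calculation of the kind a leaderboard implies.
# The timestamps and distance are made up, and the speed-limit flag is a
# hypothetical safety check, not a feature Strava provides.
from datetime import datetime

def segment_effort(start: datetime, finish: datetime, segment_miles: float, speed_limit_mph: float):
    elapsed_hours = (finish - start).total_seconds() / 3600
    avg_mph = segment_miles / elapsed_hours
    return {
        "elapsed_seconds": (finish - start).total_seconds(),
        "avg_mph": round(avg_mph, 1),
        "over_limit": avg_mph > speed_limit_mph,
    }

# A 0.5-mile descent ridden in 50 seconds averages 36 mph, well over a 25 mph limit.
effort = segment_effort(
    datetime(2014, 9, 18, 16, 0, 0),
    datetime(2014, 9, 18, 16, 0, 50),
    segment_miles=0.5,
    speed_limit_mph=25.0,
)
print(effort)  # {'elapsed_seconds': 50.0, 'avg_mph': 36.0, 'over_limit': True}
```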

Although the app may record data virtually, the cycling and the decisions of users are very much in the real world. A Strava employee admitted that Strava does not account for safety, danger, stop signs, speed limits, or the fact that in order to beat certain KOM records, users would have to break the law. But now that at least three people have died in incidents related to the Strava app, perhaps we should expect Strava’s developers to account for such factors, adjusting the app’s design to comply with the realities—including the laws and regulations—of the real world.

On June 19, 2010, William “Kim” Flint, Jr., an avid Strava user, died after he hit an SUV while speeding downhill on a Strava segment on South Park Drive, the steepest road in the East Bay area of San Francisco. Flint had learned shortly before the accident that his record had been taken by another rider, and he had set out to reclaim his KOM title when he hit the car. He was going too fast to stop.

Despite this incident, in 2012 Strava began fueling even more competition, sending alerts notifying users that their record was broken: “Uh oh! [another Strava user] just stole your KOM….Better get out there and show them who’s boss!” Since then, they changed the message to: “Uh oh! [another Strava user] just stole your KOM….Get out there, be safe and have fun!”

On March 29, 2012, Chris Bucchere was tracking himself using Strava while riding a segment known as the “Castro Bomb” when he hit and killed a pedestrian, 71-year-old Sutchi Hui, who was crossing the street with his wife. According to Bucchere, as he entered the intersection where he hit Hui, he was “way too committed to stop.” According to a witness, “he crouched down to push his body weight forward and intentionally accelerated,” milliseconds before hitting Hui. Bucchere was charged with a felony for vehicular manslaughter. He later pled guilty.

On September 18, 2014, Jason Marshall, an avid Strava user, hit and killed a pedestrian, Jill Tarlov, in Central Park as he was illegally speeding downhill in lanes reserved for pedestrians and child cyclists. According to a witness, Marshall did not stop or slow down at all, but instead yelled to Tarlov to “Get out of the way!” Hours before the accident, Marshall had recorded 32.2 miles of cycling in Central Park, with his highest speed at 35.6 MPH, which is over the 25 MPH speed limit for bikes in Central Park. Marshall had fastidiously recorded every one of his previous rides that year—yet there was no Strava record of his ride that fateful afternoon.

What can be done to avert such tragedies?

Education

Educating the general public about these tragic examples of lighthearted biking gone wrong could help prevent future tragedies. The day after Tarlov’s death, Bike Snob NYC launched a “#noStrava” hashtag on Twitter as a “gesture of respect” to Tarlov’s family, arguing that Strava shamelessly capitalizes on cyclists’ competitive inclinations.

Take away the leaderboard

Strava’s leaderboard is what gives rise to the spirit of competition that has arguably contributed to all of these tragedies. Furthermore, Strava’s arrangement of the cycling data on the leaderboards is problematic. As Suffolk University’s Professor Michael Rustad noted: “[I]t’s like Strava is creating a drag race. [Strava is] not just posting what third parties do—they’re organizing it…. Its [undifferentiated-skill-level] leaderboards are comparable to taking people from the bunny slopes up to the black-diamond run. Even ski trails are marked by degrees of difficulty.”

Legal action against the rider

Some might also consider taking legal action against the rider. As noted, Bucchere was charged with a vehicular manslaughter felony. Additionally, the Huis brought a civil suit against him (which was later dismissed). This approach might make riders think twice before risky riding, nudging them to consider the legal and moral consequences of their actions.

Legal action against the developer

The parents of Kim Flint filed a wrongful death suit, deciding that “enough is enough.” In the complaint, they claimed Strava was negligent, and “breached their duty of care by: (1) failing to warn cyclists competing in KOM challenge that the road conditions were not suited for racing and that it was unreasonably dangerous given those conditions; (2) failing to take adequate measures to ensure the KOM challenges took place on safe courses, and (3) encouraging dangerous behavior.” The complaint went on, “It was foreseeable that the failure to warn of dangerous conditions, take safety measures, and encourage dangerous behavior would cause Kim Flint Jr. to die since Kim Flint Jr. justifiably relied on [Strava] to host a safe challenge. Had [Strava] done the aforementioned acts, Kim Flint Jr. would not have died as he did.”

The Flints’ lawyer argued: “The danger and harm alleged in this case originates out of Strava’s own actions in…manipulating it through its designed software into leaderboards, and then using those leaderboards to encourage cyclists to race at increasingly faster speeds for awards and titles.”

Strava’s attorneys based their argument for the case’s dismissal on the principle that Flint explicitly assumed the risks implied in cycling by agreeing to Strava’s terms and conditions when he joined the network. Strava’s terms and conditions stated: “In no event shall Strava be liable to you or any third party for any direct, indirect, punitive, incidental, special or consequential damages arising out of or in any way connected with… your use of the site.” The case was eventually dismissed on these same grounds.

All three of these deaths received attention from news sources across the country, with writers and the public wondering how this could have happened. Even the changes Strava made after the deaths in 2010 and 2012 did not fix the problem. Although the Flint case may have been dismissed, Strava has played a role in the promotion of risky and illegal behavior. But where exactly the line lies between user agency and developer responsibility remains to be determined.

Nadia Daneshvar is a former ISLAT Fellow, and is currently a second-year student at The George Washington University Law School.