Alexa, Am I Violating Legal Ethics?

By Peggy Wojkowski

Thomson Reuters announced its release of Workplace Assistant, which allows attorneys to record, inquire about, and time billing entries via Amazon Echo and other Alexa-enabled devices. Workplace Assistant interacts with the existing Elite 3E platform that law firms use to manage workflow and streamline tasks. Thomson Reuters indicates that Workplace Assistant “always works within the firm’s security walls.” It does, however, interact with the Amazon environment, although Thomson Reuters considers that interaction “low touch,” meaning very little data passes between Workplace Assistant and the Amazon environment. Even this minimal interaction beyond a firm’s security walls could raise ethical concerns for attorneys who use the Alexa-enabled aspects of Workplace Assistant.

Alexa-enabled voice assistants, such as the Amazon Echo and Amazon Dot, respond to voice requests from users. These devices stream or record the voice requests to servers that process the requests and form responses. For Alexa-enabled Amazon products, the wake-up word, “Alexa,” activates the voice assistant, which then responds to voice requests. Therefore, in order to hear the wake-up word, the voice assistant’s microphone must be active even when a user is not actually making a request; that is, the device is listening even when it is not awake. When an Alexa-enabled product is used with Workplace Assistant, the device is listening for the wake-up word inside the attorney’s office. Workplace Assistant handles voice requests regarding client billing using client information from the Elite 3E platform, the law firm management software. But even if the Elite 3E platform ultimately handles billing-related voice requests, it is not clear who handles any other voice requests, or who has access to the microphone when Alexa is not awake. This uncertainty requires investigation in order to comply with the American Bar Association’s Model Rules of Professional Conduct.

The American Bar Association’s Model Rule 1.6, pertaining to confidentiality of information in an attorney-client relationship, indicates in part (c) that “a lawyer shall make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client” (emphasis added). Comment 18 to Model Rule 1.6 provides insight into what counts as reasonable efforts, requiring attorneys to act competently to preserve confidentiality. In acting competently, attorneys know not to discuss confidential information in public places, with others outside the legal team, or with individuals with whom communication is not necessary to adequately represent clients. Because competent representation includes awareness of who is present, physically and electronically, when confidential information is discussed, Workplace Assistant could pose a problem: it may be impossible to conclusively determine who is listening to, or has access to, the microphone and its recordings on the Alexa-enabled device.

Model Rule 1.1 also requires that an attorney provide competent representation to clients and, in its comments, addresses technology used by attorneys. According to Comment 8 of Model Rule 1.1, this competency includes keeping up to date on changes in the law, “including the benefits and risks associated with relevant technology.” Therefore, attorneys cannot blindly use technology without knowing its security measures and the possible ramifications for client representation. The benefit of Workplace Assistant is the time saved in recording and inquiring about billing. The risk is having an active microphone in an attorney’s office able to record privileged client information, a risk that attorneys may not want to take.

However, Amazon has another product, the Amazon Tap, which may lessen the risk associated with voice assistants while still allowing attorneys to use the Workplace Assistant program. Although this device also uses Alexa to respond to voice requests, no wake-up word is required because the user must touch a button on top of the device to activate the microphone. Therefore, the microphone is not constantly listening for the wake-up word, which alleviates some of the confidentiality concerns.

Either way, attorneys may still hesitate to use any of these gadgets because of how clients may react: a client who steps into the office for a meeting may be unsettled to see a microphone in a room where they want to discuss private, confidential information.

Peggy Wojkowski graduated from Chicago-Kent College of Law in May 2017.  She will be joining a large IP boutique firm in September 2017 after sitting for the Illinois bar exam in July 2017.  

Bots Can Order Pizza For You. And Then Spy on You.

By Keisha McClellan

Although we are well into 2017, here’s a belated welcome to the Year of the Bot. Bots are revolutionizing the way we fuse technology with our everyday lives and posing challenges to our privacy.

Your actual phone or smart speaker may record your wish for a cheap plane ticket or an Uber ride, but it is the software application known as a bot that executes the command. At their core, bots engage us in an interaction where we can give a command and the device can execute the command. Some bots enable two-way conversations with us, others offer more simplistic engagement.

From celebrity chatbots like Kim Kardashian’s or Maroon 5’s, to bots that can help us with health queries or financial budgeting, bots are popping up in our lives in all kinds of nifty ways. But the bots associated with “smart speakers,” such as Amazon’s Alexa and Google Home, wrap convenience and controversy into one.

Why should we care? The benefits of gaining a virtual assistant in the devices we carry around or use at home come with a creepy caveat: bots can infringe on our privacy in ways we never imagined.

Take Apple’s Siri, Amazon’s Alexa, or a Google Home smart speaker: these essentially are voice-controlled virtual assistants that can make life simpler, speedier and, perhaps, more enjoyable. You’re literally only a shout across the room away from ordering your favorite pizza.

That bots listen for our commands is innovative. Echo’s Alexa allows you to do many things including making a to-do list, providing a weather forecast, placing a toy order and streaming a podcast on voice command.

That the technology can also listen and record things you’re saying without you realizing it, is scary. It may even be incriminating.

An Arkansas prosecutor demanded Amazon turn over recorded data from an Echo in hopes that the speaker had been recording at the time a man died in a friend’s hot tub. The device, at times, records the goings-on in one’s home even when it hasn’t been directed to do so, and the prosecutor hoped the cloud recordings would shed light on how the man died. Amazon refused to comply with requests for the recorded data, citing the First Amendment as protecting the recordings, until the owner consented to having his Echo information turned over to prosecutors.

The fact that these smart speakers may be listening and recording you without your knowledge is disturbing enough. A reporter was startled when a private conversation between him and his wife was eerily interrupted when Echo’s Alexa “barged into the conversation with what sounded like a rebuke.”

But more troubling is what companies are doing with the data these smart speakers collect. Bots gather “massive amounts of data about us. And that raises a dark side of this technology: the privacy risks and possible misuse by technology companies,” says the Washington Post’s Vivek Wadhwa.

In all, Albert Gidari, director for privacy at Stanford Law School’s Center for Internet and Society, says the “reality is that technology…kind of blurs law for privacy.”

Bots behaving badly can take many forms. For instance, Lin-Manuel Miranda of “Hamilton” fame was so alarmed about bots driving up the price of tickets to sports, music events and Broadway shows in some cases by more than 1,000 percent, that he penned an op-ed in the New York Times blasting brokers’ use of ticket bots.

President Obama and Congress were concerned enough about the potential for bots to harm consumers that they passed the BOTS Act of 2016 to deter ticket scalpers from going high-tech with bots.

Sure, bots can do bad things. But like two sides of every coin, bots have good capabilities too.

Siri helped a little boy save his mother’s life. When his mother fell unconscious, a four-year-old used his mother’s finger to unlock her iPhone and used Siri to call 911 and reach an operator for help.

In this year of the bot, you may be itching to take the plunge and buy a new gadget that features a bot virtual assistant. While the benefits are many, be sure to protect your privacy in the process. For starters, Jocelyn Baird advises that you review the settings of your device’s microphone and even consider adding an “audible tone when it’s active, so you know when it’s recording.”

Keisha McClellan is a rising 2L law student at Chicago-Kent College of Law and a founding board member of Chicago-Kent’s Cyber Security and Data Privacy Society.


Just a Fingerprint Away: The Risks of Fingerprint Scanning

By Michael Goodyear

The fingerprint scanner is perhaps one of the best known security features in the world. In spy movies, no safe or villain’s lair is complete without one. But scanners aren’t foolproof: in “Diamonds Are Forever,” James Bond uses a fake fingerprint to get past one. In the nearly 50 years since that movie was released, fingerprint scanners have become increasingly ubiquitous, and as a common protection mechanism for smartphones, they are the sealed gate to your data. But that gate is not as secure as we might think, and it no longer takes a legendary spy like 007 to crack it open.

A recent study by researchers at New York University and Michigan State University brought the technological risks of fingerprint scanning to light. The researchers used computer simulations to create “MasterPrints,” real fingerprints from databases or synthetically created ones that can spoof one of the stored fingerprints in a scanner’s database to unlock a phone. Although the study did not use real phones, instead running cropped fingerprint images through the commercial verification software Verifinger, the findings were still alarming: the researchers’ generated prints could match real ones up to 65% of the time. Even if the percentage on actual phones were much lower, it would still be a considerable risk.

One of the greatest weaknesses of your phone’s fingerprint scanning technology is that it doesn’t actually take a full fingerprint scan. A full print would be nearly impossible to falsify. But your iPhone or Android phone scans only partial fingerprints, a much smaller area with fewer unique features. This risk is exacerbated by the fact that your phone typically takes eight to ten scans, giving the fingerprint scanner a database of eight to ten partial prints it can match against. A hacker now has eight to ten chances to spoof your fingerprint rather than just one. If you register other people’s fingerprints on your phone (your spouse’s or children’s, perhaps), the risk increases again. It’s like a lockbox with several different keys: the more keys there are, the greater the risk that someone will get their hands on one or be able to copy one.
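The way enrolled prints compound the risk can be made concrete with a little arithmetic: if each stored partial print independently gives an attacker some small chance of a false match, the odds of fooling the scanner grow with every print added. A minimal sketch, where the 1% per-print false-match rate is an illustrative assumption, not a figure from the study:

```python
# Probability that a spoofed "MasterPrint" fools a phone that has n
# partial fingerprints enrolled, assuming each stored print independently
# accepts the spoof with probability p (values below are illustrative).

def spoof_success_probability(p: float, n: int) -> float:
    """Chance that at least one of n enrolled partial prints falsely matches."""
    return 1.0 - (1.0 - p) ** n

p = 0.01  # assumed 1% false-match rate against a single partial print
for n in (1, 8, 10, 20):  # one print vs. 8-10 scans vs. two users enrolled
    print(f"{n:2d} enrolled prints -> {spoof_success_probability(p, n):.1%}")
```

Under these assumed numbers, going from one stored print to the eight to ten a phone typically keeps roughly multiplies the attacker's odds by the number of prints, which is exactly the lockbox-with-many-keys effect described above.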

Professor Stephanie Schuckers, Director of the Center for Identification Technology Research at Clarkson University, noted that because the study didn’t involve actual phones, the takeaways were limited.

But while a full study of Apple and Android fingerprint recognition programs will be necessary to uncover the exact risk of falsifying fingerprints, any risk is too high. Our phones hold a world of data about us. By unlocking your phone, someone wouldn’t just be able to make a call, but would know your deepest secrets. Your contacts, your intimate texts and emails, your interests, and even your health data, all stored on your phone with only fingerprint recognition to protect them, would be at risk.

Perhaps the most alarming consequence of this security vulnerability is what it means for your finances. Services such as Apple Pay and Android Pay allow you to make purchases with the swipe of your finger. Banks are increasingly adding fingerprint recognition for signing into their apps (and all of your financial data). Large banking institutions such as Chase and Bank of America, as well as credit card companies such as Capital One, are now just a swipe away for you…and your hacker.

When someone’s information is stolen via a false fingerprint, who will be liable? The phone developer and the financial institution, having used falsifiable fingerprint recognition technology, risk being held responsible. In the short term, however, it is the user who will suffer: personal and financial information will be compromised, leading to countless hours spent securing everything again, not to mention the permanent damage that could be done by your data getting out.

Fingerprint technology is not the only option (written passwords are usually still offered), so customers do have a choice of whether or not to trust fingerprint technology to protect their data. But since fingerprints are unique, fingerprint scanners have been seen as the safe choice, a much more secure method than a four-digit passcode.

Reporters have actually questioned the security of fingerprint scanning systems for years. But while previous fears were often just lists of everything that could go wrong, the new NYU and MSU study has quantifiable data to prove that fingerprints can be spoofed.

Technology has advanced so much that you can do practically anything from your smartphone. But we have to remember that with progress come downsides. When all that stands between your sensitive personal information and a thief is a fingerprint, you need the technology to be ironclad. James Bond may have had noble aims in tricking a fingerprint scanner, but it is unlikely that data hackers will have those same scruples. It may be easy to tap your finger and open your phone and all of your apps, but ease is not worth the risk of losing your information to modern-day spies.

Michael Goodyear, who has a BA in History and Near Eastern Languages and Civilizations from the University of Chicago, is part of the ISLAT team.


Hacking at the Downbeat: How Music Can Take Over Our Devices

By Michael Goodyear

Hacking into electronic systems is certainly not new. People have taken over entire smart homes and data breaches have cost companies such as Target and Home Depot millions of dollars. But a team of researchers has found a new way to hack: music.

Researchers at the University of Michigan and the University of South Carolina have found a weakness in microelectromechanical systems (MEMS) accelerometers, standard components of electronic systems ranging from smartphones to automobiles and drones. A MEMS accelerometer has a sensing mass that shifts depending on the accelerative forces exerted on it, which in turn produces a voltage signal correlated to the sensed acceleration. By applying acoustic interference, the researchers displaced the sensing mass, essentially causing involuntary actions in the device.
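The physics behind this can be illustrated with a toy model. The sensing mass behaves like a damped mass on a spring, and a sound wave is just a periodic force: a tone near the sensor's resonant frequency shakes the mass far more than the same tone at other frequencies, biasing the output even though the device itself isn't accelerating. A rough numerical sketch of the standard forced-oscillator response, with all parameter values invented for illustration rather than taken from the paper:

```python
import math

def peak_displacement(drive_freq_hz: float,
                      resonant_freq_hz: float = 5000.0,
                      damping_ratio: float = 0.02,
                      force_amplitude: float = 1.0) -> float:
    """Steady-state peak displacement (per unit mass) of a damped
    mass-spring sensing element driven by a sinusoidal acoustic force."""
    w = 2 * math.pi * drive_freq_hz      # drive (tone) angular frequency
    w0 = 2 * math.pi * resonant_freq_hz  # sensor's resonant angular frequency
    # Classic amplitude formula: F / sqrt((w0^2 - w^2)^2 + (2*zeta*w0*w)^2)
    denom = math.sqrt((w0**2 - w**2) ** 2 + (2 * damping_ratio * w0 * w) ** 2)
    return force_amplitude / denom

on_resonance = peak_displacement(5000.0)   # tone matches the sensor's resonance
off_resonance = peak_displacement(1000.0)  # same tone elsewhere in the spectrum
print(f"Amplification at resonance: {on_resonance / off_resonance:.0f}x")
```

With these made-up parameters, the same acoustic pressure moves the sensing mass more than twenty times as far at resonance as off it, which is why a carefully tuned tone embedded in ordinary audio can dominate the sensor's reading.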

These acoustic attacks could just be a relatively harmless interference. For example, by using a YouTube music video interspersed with special tones, the researchers spoofed a MEMS accelerometer to send out a signal that resembled the word “WALNUT,” which became the name of the team’s acoustic attack.

But the consequences could be much more dire. Some systems depend on the MEMS accelerometers to make automated decisions. By playing a malicious audio file, the hacker could take control of these devices or surreptitiously influence them.

WALNUT was used to take over a remote-control car via an app on an infected phone. While a rogue toy car may not be too scary, MEMS accelerometers are also used in much larger systems, such as cars and drones, which could cause immense damage if they were taken over.

The researchers also used WALNUT to alter the step count on a Fitbit. The researchers did not think such an attack posed a serious security risk (they instead pointed out that it could be used to garner free rewards through step-based incentive programs), but the ability to alter health data on a device could have serious consequences. If health data such as a Fitbit’s can be changed, the resulting inaccuracies could mislead those who depend on such apps or devices to manage their health, potentially leading them to follow incorrect data and make decisions that damage their health. Even more dangerous, mobile health apps that control devices such as pacemakers or insulin pumps, or even the devices themselves, could be manipulated to create a fatal heart rhythm or administer the wrong dosage of insulin.

WALNUT is not just a fringe technology that can only affect the occasional device. The researchers tested 20 accelerometer models from five different chip makers. They found that 65% were vulnerable to an acoustic output control attack, in which devices such as the remote-control car could be taken over, and that 75% (15 of the 20 models) were vulnerable to an acoustic output biasing attack, in which information like your Fitbit step count could be altered.

The Internet of Things offers many advantages, but as WALNUT illustrates, it can be infiltrated with something as simple as a YouTube song. The consequences of our dependence on technology could not only hurt our privacy, but also our physical wellbeing. In their paper, the WALNUT team outlined how to better protect against the acoustic takeovers, but if the accelerometer chip makers don’t follow the advice, maestro hackers may just have one more instrument in their orchestra for assailing the Internet of Things.

Michael Goodyear, who has a BA in History and Near Eastern Languages and Civilizations from the University of Chicago, is part of the ISLAT team.



Altering Prisoners’ Sense of Time: The Moral Regression of a Futuristic Technology

By Caroline Thiriot

What if you could give a prisoner a pill that changed their perception of time? A 10-year sentence could feel like millennia. Or a person could experience a 10-year sentence in two years.

Science has already brought us to the brink of this technology. In a paper published in the Journal of Neuroscience, the nature of time perception is outlined and science seems to conclude in favor of Kant’s “subjective” and “ideal” view of the matter. Indeed, “[o]ur perception of time constrains our experience of the world and exerts a pivotal influence over a myriad array of cognitive and motor functions.” (emphasis in the original). The result of the study demonstrated “anatomical, neurochemical, and task specificity, which suggested that a neurotransmitter called GABA (Gamma-Amino Butyric Acid) contributes to individual differences in time perception”. With this increased understanding of how we perceive time, perception altering medications may follow.

Psychoactive drugs could be used to distort prisoners’ perception of time and make them feel like they were serving a 1,000-year sentence, a term that is legally possible in the United States. As detailed in Slate and Aeon, philosopher Rebecca Roache is undertaking a thought experiment to explore the ethical issues involved in using perception-altering drugs and life extension technologies in the corrections context.

Medical and scientific advances could change the way prisoners serve time and dramatically alter our prison system. For economic purposes, one could imagine prisoners physically spending one day in prison while psychologically experiencing it as lasting years. Considering the high cost of prisons, psychoactive drugs could thus be a way to save money. However, the risk-benefit ratio does not seem favorable at all.

Perceptual distortions such as “disorientation in time” already occur in practice, most notably in solitary confinement. “There is long history of using the prison environment itself to affect prisoners’ subjective experience,” highlights Rebecca Roache. On October 18, 2011, Juan E. Méndez, Special Rapporteur of the Human Rights Council on torture and other cruel, inhuman or degrading treatment or punishment, presented his thematic report on solitary confinement to the United Nations General Assembly. He called on all countries “to ban the solitary confinement of prisoners except in very exceptional circumstances and for as short a time as possible, with an absolute prohibition in the case of juveniles and people with mental disabilities.” He stressed as well that “Solitary confinement is a harsh measure which is contrary to rehabilitation, the aim of the penitentiary system.”

Two points made in the statement above are worth being further discussed. First, we will address the issue of torture and other cruel, inhuman or degrading treatment or punishment. Then, we will focus on a more philosophical controversy: the aim of the penitentiary system.

Torture is universally condemned. The prohibition against torture is well established under customary international law as jus cogens, as well as under various international treaties such as the Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment, ratified by 136 countries (including the United States in 1994). Even though the effects of perception-altering drugs have not yet been studied, we can draw parallels to the effects of solitary confinement. One can picture the subject whose time perception is altered as experiencing a reality other than the one commonly experienced. Thus, like the physically isolated prisoner, this subject will likely be deprived of normal human interaction and may eventually suffer from mental health problems including anxiety, panic, insomnia, paranoia, aggression and depression. In addition to the mental health risks, there are physical health risks as well, because needs such as sleep or food may be perceived differently.

As for the aim of the penitentiary system, several questions arise, especially concerning rehabilitation and recidivism. Some authors argue that prisons should be abolished and replaced by “anti-prisons,” that is, locked, secure residential colleges, therapeutic communities, and centers for human development. Indeed, there is a growing consensus that punishment alone fails while rehabilitation works. From this perspective, altering prisoners’ time perception in order to make them feel like they spend more time in jail could be seen as a step backward rather than progress. According to the American Correctional Association’s (ACA) 1986 study of prison industry, contemporary prison institution goals fall into three categories: offender-based (good work habits, real work experience, vocational training, life management experience), institution-oriented (reducing idleness, structuring daily activities, reducing the net cost of corrections), and societal (repayment to society, dependent support, victim restitution). If such technology were implemented, most of these goals would not be met. On the contrary, we would return to an era when solitary confinement was thought to foster penitence and encourage reformation, but in a rather extreme form, causing more harm to the individual, possibly to the extent of mental illness.

The Eighth Amendment states that “Excessive bail shall not be required, nor excessive fines imposed, nor cruel and unusual punishments inflicted.” Professor Richard S. Frase has analyzed constitutional proportionality requirements. He noticed that since 1980, the Supreme Court has ruled in favor of the prisoner only once in the six cases in which the duration of a prison sentence was attacked on Eighth Amendment grounds. “The Court has never made clear what it means by proportionality in the context of prison sentences. Justice Scalia believes (and perhaps so does Justice Thomas) that this concept only has meaning in relation to retributive sentencing goals,” he concludes. When it comes to sentencing goals, one should thus distinguish retributive goals from non-retributive ones. While the first theory considers only the defendant’s past actions and focuses on the punishment itself, the second (also called “utilitarian”) takes the future effects of the punishment into account. On the basis of this distinction, one must conclude that making prisoners feel like they were spending a very long time in jail would serve only a retributive purpose and would fail entirely to address the non-retributive ones.

Also, we live in a society in which, even if individualism seems to be the supreme rule, interdependence remains a governing concept. In his famous “Experience Machine” thought experiment, Robert Nozick asks, “What else can matter to us, other than how our lives feel from the inside?” He concludes that, when it comes to pleasure, we would choose everyday reality over an apparently preferable simulated reality. Although his thought experiment deals with a notion opposite to punishment, we can rely on his conclusion that the reality we commonly experience matters more than our subjective experience of it. Consequently, one should not forget that the victim’s subjective perception of justice matters as well. It may be difficult for victims to know that a criminal is out of jail after spending only a little time there, free to enjoy the rest of their life. Even if we focus only on retributive goals, such technology has subjective limitations.

In the end, there seems to be no argument other than economic advantage for allowing the use of psychoactive drugs to distort prisoners’ perception of time. On the contrary, their use could be seen as torture and serves no rehabilitative aim, which is the main focus of prison sentences today. The technology would therefore be regressive rather than progressive.

Caroline Thiriot, who has a Master’s in International Law and Human Rights from the Université Panthéon-Assas and an LL.M. in international and transnational law from Chicago-Kent College of Law, is currently a Master’s student in Bioethics at Université Paris Descartes.

Fake News: A Little White Lie or a Dangerous Crock?

By Michael Goodyear

Since early November, press coverage on the problem of fake news stories has exploded.  These fake stories have included everything from the Pope endorsing Donald Trump to a woman stealing 24 dogs from an animal shelter. While they may seem harmless enough, the impact of people releasing such stories can range from simple confusion to active violence.

But what happens when the police create fake news? Even if it is well-intended, police dissemination of fake news can lead to a series of consequences, such as negative impact on neighborhoods, increased danger for citizens, violence, and distrust.

A few days ago, the Santa Maria Times uncovered a fictional news release in court documents, ten months after it had reported the same story as fact. The news release stated that two cousins, Jose Santos Melendez and Jose Marino Melendez, had been taken in for identity theft and were in the custody of immigration authorities. It seemed like a simple report; in actuality, it was part of an elaborate, but deceitful, plan, devised not by crooks but by the police.

The Santa Maria Police Department had been running Operation Matador for months at this point. The police had been eavesdropping on members of MS-13, a dangerous international gang, with the goal of eventually arresting gang members. Through wiretaps, they learned that MS-13 planned to murder the Melendez cousins. This raised a new issue: if they acted to save the two cousins, their operation would be exposed and the progress of the past months would be lost. A fake news story could solve this problem. The police took the Melendez cousins into hiding for their safety while the fake news story provided cover. It explained the cousins’ disappearance without arousing suspicion and also protected their family, whom MS-13 might have harmed if the gang believed the cousins were merely hiding.

In the following weeks, the police brought Operation Matador to a successful conclusion: 17 gang members were arrested on charges of murder and intent to kill in March. In July, a criminal grand jury indicted all 17 of them on a combined 50 felony counts. Lives were saved and gang members were successfully arrested, so what is the problem?

Whether well intentioned or not, fake news can have real consequences. By releasing false information about crime or police action, the police alter public perceptions of their community. If the police falsely report a crime in one neighborhood to divert attention from another, that reported neighborhood will seem more dangerous to the populace, even though in actuality the stated crime didn’t occur there.  This could lead to a downturn in local business and desire to live in that neighborhood. It would also make the neighborhood where the crime actually happened seem better in the eyes of the unwitting public, who might go to that neighborhood despite the dangers it could present.

Similarly, reporting that a crime has been solved, while in fact it has not, would also alter the public’s perceptions and possibly their actions. For example, the police could falsely report that they had solved crimes or reduced crime rates in a neighborhood in order to improve confidence in the police and intimidation of criminals. But it could also make people unreasonably more confident in the safety of an area, causing more people to go into what in actuality is still a dangerous neighborhood.

In addition, reporting that a crime has been solved when it has not could lead to greater violence or harm the police’s chances of actually solving the crime. For example, saying that the police have uncovered information about a crime or solved a crime when they haven’t could lead a perpetrator to harm those whom he thinks may have informed the police about him. It could also cause the perpetrator to flee the area to avoid arrest.

The police making it seem like crimes are being committed when they actually aren’t could also lead to harmful individual action. For example, earlier this week a fake conspiracy theory that Hillary Clinton was operating a child sex ring from Comet Ping Pong, a popular Washington, D.C., pizza parlor, led to vigilante action. Edgar Maddison Welch decided to go investigate “Pizzagate.” Inside the restaurant, he fired a shotgun, damaging the interior of Comet Ping Pong but not injuring anyone. Although bloodshed was averted in this case (Welch surrendered peacefully when he found no sign of the fabricated child sex ring), fake news undoubtedly put people’s lives at risk.

Although the Pizzagate example was not caused by the police, the police reporting fake crimes could lead to similar results: vigilantism and violence. As CNN aptly put it regarding Pizzagate, “fake news, real violence.”

Fake news also harms our collective knowledge and our ability to tell truth from lie. While any piece of fake news has the potential to mislead and harm others, the police releasing such a story is especially harmful to our trust. We look to the police as honest defenders of justice; releasing fabricated stories undermines that, duping the public and the press as well as the suspect. As Louis Dekmar, vice president of the International Association of Chiefs of Police, pointed out, such ruses create “a real distrust between the police and the folks we rely on.” Such a lack of trust undermines the relationship between police and the community, and, according to the Department of Justice, trust is one of the key factors in maintaining public safety and effective policing. Although fake lures are often used in sting operations, such as fake prizes, fake news on this scale is unprecedented.

Although police use of fake news may be rare, the police have a widely used precedent for faking: fake Facebook profiles. Cops across the country have created fake Facebook profiles to uncover more information about suspects and even help track them down. For example, back in 2009 the police created a fake profile featuring a picture of an attractive young woman and friended Adam Bauer, a 19-year-old college student, to access pictures of him drinking that were posted on his account, later ticketing him for underage drinking.

And even though Facebook officially bans the practice, a federal judge ruled back in 2014 that cops can create fake social network profiles for investigative purposes. The Department of Justice even said that police usage of fake Facebook profiles is ethical. Yet this is at odds with the Department of Justice’s own emphasis on the importance of trust between police and the community. Bauer and other college students who were charged with underage drinking based on photographic evidence from Facebook stated that the fake profiles undermined trust between college students and police.

This most likely will not be the last time the police fake a news story. Regarding the fake news story in Operation Matador, Ralph Martin, the Santa Maria police chief, defended the tactic, even saying he would not rule out releasing a fake news story again in order to protect lives. But given the risks of fake news, in general and especially when the police are behind it, such a tactic could have much more costly ramifications than predicted.

Michael Goodyear, who has a BA in History and Near Eastern Languages and Civilizations from the University of Chicago, is part of the ISLAT team.

The Need for Speed: When Apps Inspire Dangerous Behavior

By Nadia Daneshvar

Mobile apps may be designed with good intentions, but what happens when those aims lead to dangerous user behavior? This is the case for Strava, a popular cycling app whose promotion of speed led to deadly consequences and spurred new questions regarding the responsibilities of app developers.

Strava lets users record cycling data using a smartphone or GPS device and upload that information to track, analyze, and share with friends or the public. The app records where cyclists rode and how long and how fast they rode. It then compares a user’s times with personal records as well as the fastest times of other users.

The app also tracks a cyclist’s performance on “segments”—any stretch of road, path, or trail mapped out by a user for the purpose of a multiplayer competition of who can go the fastest, whether up a hill, down the street, or on a descent. Strava compares each user’s times on a particular segment to the times of everyone else who has ridden it before and uploaded the data to the app. The fastest riders are given the title “King of the Mountain” (“KOM”) or “Queen of the Mountain” (“QOM”).
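Strava has not published its ranking internals, but the segment comparison described above amounts to keeping each rider’s best time on a segment and sorting, fastest first. A minimal sketch of that idea (all rider names and times below are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Effort:
    """One recorded ride over a segment."""
    rider: str
    seconds: float  # elapsed time on the segment

def leaderboard(efforts):
    """Keep each rider's best (lowest) time, then rank fastest first."""
    best = {}
    for e in efforts:
        if e.rider not in best or e.seconds < best[e.rider].seconds:
            best[e.rider] = e
    return sorted(best.values(), key=lambda e: e.seconds)

# Hypothetical uploads for one segment.
efforts = [
    Effort("kim", 312.4),
    Effort("alex", 298.7),
    Effort("kim", 305.1),   # kim's faster attempt replaces the earlier one
]

ranked = leaderboard(efforts)
kom = ranked[0].rider  # the top-ranked rider holds the "KOM" title
```

The point of the sketch is how little the ranking knows: nothing in the data model accounts for stop signs, traffic, or speed limits, only elapsed time.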

Although the app may record data virtually, the cycling and the decisions of users are very much in the real world. A Strava employee admitted that Strava does not account for safety, danger, stop signs, speed limits, or the fact that in order to beat certain KOM records, users would have to break the law. But after at least three people have died in incidents related to the Strava app, perhaps we should expect Strava’s developers to account for such factors, adjusting the app’s design to comply with the realities—including the laws and regulations—of the real world.

On June 19, 2010, William “Kim” Flint, Jr., an avid Strava user, died after he hit an SUV while speeding downhill on a Strava segment on South Park Drive, the steepest road in the East Bay area of San Francisco. Shortly before the accident, Flint had learned that another rider had taken his record, and he was trying to reclaim his KOM title when he hit the car. He was going too fast to stop.

Despite this incident, in 2012 Strava began fueling even more competition, sending alerts notifying users that their record had been broken: “Uh oh! [another Strava user] just stole your KOM….Better get out there and show them who’s boss!” Strava has since changed the message to: “Uh oh! [another Strava user] just stole your KOM….Get out there, be safe and have fun!”

On March 29, 2012, Chris Bucchere was tracking himself using Strava while riding a segment known as the “Castro Bomb” when he hit and killed a pedestrian, 71-year-old Sutchi Hui, who was crossing the street with his wife. According to Bucchere, as he entered the intersection where he hit Hui, he was “way too committed to stop.” According to a witness, “he crouched down to push his body weight forward and intentionally accelerated,” milliseconds before hitting Hui. Bucchere was charged with a felony for vehicular manslaughter. He later pled guilty.

On September 18, 2014, Jason Marshall, an avid Strava user, hit and killed a pedestrian, Jill Tarlov, in Central Park as he was illegally speeding downhill in lanes reserved for pedestrians and child cyclists. According to a witness, Marshall did not stop or slow down at all, but instead yelled to Tarlov to “Get out of the way!” Hours before the accident, Marshall had recorded 32.2 miles of cycling in Central Park, with his highest speed at 35.6 MPH, which is over the 25 MPH speed limit for bikes in Central Park. Marshall had fastidiously recorded every one of his previous rides that year—yet there was no Strava record of his ride that fateful afternoon.

What can be done to avert such tragedies?


Educate the public

Educating the general public about these tragic examples of light-hearted biking gone wrong could help prevent future ones. The day after Tarlov’s death, Bike Snob NYC launched a “#noStrava” hashtag on Twitter as a “gesture of respect” to Tarlov’s family, arguing that Strava shamelessly capitalizes on cyclists’ competitive inclinations.

Take away the leaderboard

Strava’s leaderboard is what gives rise to the spirit of competition that has arguably contributed to all of these tragedies. Furthermore, Strava’s arrangement of the cycling data on the leaderboards is problematic. As Suffolk University’s Professor Michael Rustad noted: “[I]t’s like Strava is creating a drag race. [Strava is] not just posting what third parties do—they’re organizing it…. Its [undifferentiated-skill-level] leaderboards are comparable to taking people from the bunny slopes up to the black-diamond run. Even ski trails are marked by degrees of difficulty.”

Legal action against the rider

Some might also consider taking legal action against the rider. As noted, Bucchere was charged with a vehicular manslaughter felony. Additionally, the Huis brought a civil suit against him (which was later dismissed). This approach might make riders think twice before risky riding, nudging them to consider the legal and moral consequences of their actions.

Legal action against the developer

The parents of Kim Flint filed a wrongful death suit, deciding that “enough is enough.” In the complaint, they claimed Strava was negligent, and “breached their duty of care by: (1) failing to warn cyclists competing in KOM challenge that the road conditions were not suited for racing and that it was unreasonably dangerous given those conditions; (2) failing to take adequate measures to ensure the KOM challenges took place on safe courses, and (3) encouraging dangerous behavior.” The complaint went on, “It was foreseeable that the failure to warn of dangerous conditions, take safety measures, and encourage dangerous behavior would cause Kim Flint Jr. to die since Kim Flint Jr. justifiably relied on [Strava] to host a safe challenge. Had [Strava] done the aforementioned acts, Kim Flint Jr. would not have died as he did.”

The Flints’ lawyer argued: “The danger and harm alleged in this case originates out of Strava’s own actions in…manipulating it through its designed software into leaderboards, and then using those leaderboards to encourage cyclists to race at increasingly faster speeds for awards and titles.”

Strava’s attorneys based their argument for the case’s dismissal on the principle that Flint explicitly assumed the risks implied in cycling by agreeing to Strava’s terms and conditions when he joined the network. Strava’s terms and conditions stated: “In no event shall Strava be liable to you or any third party for any direct, indirect, punitive, incidental, special or consequential damages arising out of or in any way connected with… your use of the site.” The case was eventually dismissed on these same grounds.

All three of these deaths received attention from news sources across the country, with writers and the public wondering how this could have happened. Even those changes that Strava made since the deaths in 2010 and 2012 did not fix the problem. Although the Flint case may have been dismissed, Strava has played a role in the promotion of risky and illegal behavior. But where exactly the line lies between user agency and developer responsibility remains to be determined.

Nadia Daneshvar is a former ISLAT Fellow, and is currently a second-year student at The George Washington University Law School.

Poképrivacy: Privacy and Legal Issues in Pokémon GO

By Michael Goodyear

When Pokémon GO was released in the United States on July 6, it garnered 15 million downloads in just the first week. Pokémon GO has rapidly become one of the biggest apps ever. Its daily active user total has now outstripped Twitter’s, and in installs it has beaten most other popular mobile games. But despite its quick rise to fame, Pokémon GO has raised a series of concerns about privacy, from what permissions and personal information the app itself accesses to how the app potentially infringes on personal residences.

Pokémon GO is an augmented reality game that inserts imaginary creatures, Pokémon, into the physical world via your phone. They can appear anywhere, even on your wife’s hospital bed as she is giving birth. The goal is to catch and train Pokémon as part of one of three teams.

Privacy concerns abound with Pokémon GO, even attracting the attention of Senator Al Franken, the Ranking Member of the Senate Judiciary Subcommittee on Privacy, Technology and the Law. The chief initial concern with Pokémon GO was that iOS users had granted “full access” to their Google accounts. While technically this could include being able to see the contents of Gmail and all other Google programs, in reality Niantic, the company that developed Pokémon GO, only accessed basic account information, such as the name of the user and their Gmail address. More than anything it was a combination of Niantic using an out-of-date version of the Google sign-in process and poor wording that led to this seemingly alarming concern. Niantic has since made an update that fixed this problem, with the app now only requesting access to basic information.
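Niantic’s fix amounts to requesting only narrow OAuth scopes at sign-in instead of broad account access. As a rough sketch of what a scoped Google sign-in request looks like (the endpoint and standard scope names follow Google’s published OAuth 2.0 conventions, but the client ID and redirect URI below are invented for illustration):

```python
from urllib.parse import urlencode

# Hypothetical client credentials -- illustration only.
CLIENT_ID = "example-app.apps.googleusercontent.com"
REDIRECT_URI = "https://example.com/oauth/callback"

def auth_url(scopes):
    """Build a Google OAuth 2.0 authorization URL requesting only `scopes`."""
    params = {
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "response_type": "code",
        "scope": " ".join(scopes),  # space-delimited per the OAuth 2.0 spec
    }
    return "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)

# Basic profile information only -- roughly what the patched app requests,
# as opposed to a blanket "full access" grant.
url = auth_url(["openid", "email", "profile"])
```

The privacy-relevant design choice is entirely in the `scope` parameter: the narrower the list, the less of the account the app can ever see, regardless of what the consent screen’s wording suggests.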

Although this problem has received by far the most press, there are other legitimate concerns about how Pokémon GO handles privacy. The app itself has access to your IP address and the most recent webpage you visited, providing some indicators about your location and habits. In addition, the app tracks your GPS location and has control over your camera. While these are essential to using the app, just consider the possible implications if some third party acquired this data. Unless Niantic’s security is ironclad, there is always the possibility that hackers could get this information and have access to your phone. And with an app as huge as Pokémon GO, hackers will definitely be on the lookout.

Others with malicious intent have already started taking advantage of the app’s security shortcomings. A function of the app is that you can create a beacon, which attracts more players and Pokémon to an area. Muggers have used these beacons to lure in unsuspecting players and rob them. Police departments from O’Fallon, Missouri to Australia have expressed concern over the security risks the app creates, especially when players are paying so much attention to the virtual surroundings on their phones that they are not aware of their physical surroundings.

In addition to beacons created by the players, Niantic itself has created virtual Pokémon gyms across the globe: battle hubs where players compete against other teams for control of a gym and the accompanying prestige. Naturally this makes them fairly popular spots. For this reason, Niantic generally locates the gyms at popular sites, although this occasionally goes awry, from the controversial (Trump Tower and the Westboro Baptist Church) to the just plain dangerous (on the South Korea-North Korea border). And sometimes it even registers people’s homes as gyms. Niantic never asked the permission of Donald Trump or Boon Sheridan to put a gym on their property, and though the gym itself is a virtual entity, it is one with very real consequences. As major draws to players, gyms can attract dozens to hundreds of people, infringing on the privacy and peace and quiet of individuals and businesses. And this raises a host of legal questions about trespass and attractive nuisance, all for the benefit of trying to catch a rare Pokémon.

So while players are battling with their captured Pokémon, they also have to be on the defensive. Mind property laws and don’t infringe on real estate, and protect your privacy and safety, or else it might be your information being captured instead of the Pokémon.

Michael Goodyear, who has a BA in History and Near Eastern Languages and Civilizations from the University of Chicago, is part of the ISLAT team.

Privacy Concerns Influence Consumer Purchases

By Michael Goodyear

Back in 2011, just 54% of U.S. consumers, a slim majority, stated that they had decided not to purchase a product due to concerns about the confidentiality of their personal information. But that number has been on the rise. Today that figure has grown to 82%, the vast majority of U.S. consumers. Of course, many potentially privacy-invading products, such as computers or cell phones, are not bought on a yearly basis. Yet in the past 12 months alone, 35% of consumers decided not to purchase goods from a specific company due to privacy concerns.

Different groups of consumers weighed privacy concerns differently when making a purchase. Consumers with higher incomes and higher levels of education (a college or post-graduate degree) react most strongly to privacy factors. Although respondents noted several chief concerns about privacy, 52% of U.S. consumers identified identity theft as their greatest concern. This was a sharp increase from 2011, when only 24% of respondents named identity theft as their chief privacy concern. The next most common chief concern, by comparison, was cited by only 10% of consumers.

These findings are from a November 2015 online survey of 900 consumers, undertaken by the law firm Morrison & Foerster to gather quantitative data on the emerging trend of privacy presenting real threats to business. The results confirm the increasing role of privacy in our lives. In this case, privacy concerns influence our decisions as consumers, but what other aspects of our lives have privacy concerns also come to influence? What about our privacy when downloading a mobile app or entering our Social Security number into an online application? With advances in technology and the increasing amount of personal information that ends up online, privacy concerns are here to stay. It is up to each individual to decide how to engage with his or her personal privacy concerns and to what degree those concerns might influence his or her decisions and life.

Michael Goodyear, who has a BA in History and Near Eastern Languages and Civilizations from the University of Chicago, is part of the ISLAT team.

I Can Do All Things Through Technology, Which Enables Me: Churches, Facial Recognition and Spiritual Dynamics

By Alexandra Franco, JD

In my work as a privacy lawyer, I’ve become slightly desensitized to the pervasive privacy invasions that we have learned to live with—the fact that Facebook is well aware of my love of makeup and will constantly remind me of “cool new eyeshadows to try” is something I don’t even think about anymore. However, one new technology threatening privacy struck me as particularly appalling.

A company called Churchix provides churches with facial recognition software “designed for Church administrators and event managers who want to save the pain of manually tracking their members attendance to their events.” The software allows users to “receive demographic data of people attending [their] event (Gender, Age),” and “receive identification reports for a specific event, group of events and attendance of a specific member.” To get the facial recognition software going, churches must first take photos of their faithful to “register and enroll into the data base of Churchix.” After this, the churches will have access to streamlined, automatic attendance data—and won’t have to go through what Churchix calls the “pain” of personal interaction with their attendees.
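Churchix does not disclose its internals, but facial-recognition attendance systems typically work by converting each enrolled photo into a numeric embedding and matching a new face to the nearest enrolled embedding within a distance threshold. A toy sketch of that matching step, with invented names and three-dimensional embeddings (real systems use embeddings of 128 or more dimensions produced by a neural network):

```python
import math

# Hypothetical enrolled embeddings; in practice these would be produced
# by running each member's registration photo through a face model.
enrolled = {
    "member_a": [0.11, 0.80, 0.35],
    "member_b": [0.90, 0.12, 0.44],
}

def euclidean(a, b):
    """Straight-line distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(embedding, threshold=0.3):
    """Return the closest enrolled member, or None if nobody is close enough."""
    name, dist = min(
        ((n, euclidean(embedding, e)) for n, e in enrolled.items()),
        key=lambda t: t[1],
    )
    return name if dist <= threshold else None

attendee = identify([0.13, 0.78, 0.33])   # very close to member_a's embedding
stranger = identify([0.50, 0.50, 0.50])   # not near anyone enrolled
```

Once matching is this cheap, logging who walked in and when is a one-line addition, which is precisely why the attendance-tracking described above is so easy to build and so hard to notice.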

As many as 40 churches are currently using this technology. Speaking at a conference at Loyola University Chicago School of Law, privacy attorney and Edelson PC partner Ari Scharg mentioned that the technology is being used to track people’s church attendance patterns, such as how often they attend and how early they arrive, and that churches can use this information to gauge how much money churchgoers can be asked to donate.

Churchix claims that despite “honest concerns over privacy” and people’s “‘Big Brother’ mentality” about what the technology entails, it “think[s] that [such beliefs] are mostly a bad feeling derived from a possible abuse of the technology rather than actual threats.” The company website explains that “on the contrary, face recognition software helps catching the bad guys… .” But even the company’s own PR efforts on its website include articles that criticize Churchix for the serious privacy concerns that its technology raises.

As Michael Casey of CBS News says, “the growth of this [facial recognition] technology has far outpaced any efforts to regulate it… .” If it keeps going the way it is, it will be very difficult for regulatory bodies to take a stand fast enough to make a difference. The technology is already being used by advertisers in shopping malls to analyze what you are looking at on a store shelf, analyze your demographic information based on your facial characteristics and later show you a targeted advertisement for another item that you may be interested in based on all of this information. Churchix is a branch of Face-Six, the facial recognition business that offers the technology to shopping malls. In addition to offering its services to churches (through Churchix) and shopping malls, Face-Six offers its services to airports, border control, law enforcement, casinos and also for home security purposes.

When a single company is behind all of the different applications of the technology—from shopping malls and targeted advertisements to church attendance—how do we know that people’s images uploaded to the Churchix database will not end up being used to sell them religious books later when they visit a mall that uses the same technology? If you have been missing church for a few weeks, would you like to see an advertisement for a book about “regaining your faith”?

A few states—such as Illinois—have enacted laws protecting people’s biometric information. The Illinois statute protects people’s biometric identifiers, such as “a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry,” among other things, by requiring that entities planning to collect such data inform the person in writing before collecting it, tell the person for how long and for what purpose they are collecting the data and have the person sign a written release. It also prohibits entities from selling or profiting from someone’s biometric data and requires that entities in possession of such data develop policies and procedures for its destruction.  However, Illinois is one of a few states currently taking steps to protect people’s biometric information and we are still far away from a comprehensive national regulatory regime.

Let’s instead think about this for a moment from the perspective of individual church members and the church community as a whole. Faith is a deeply personal thing which should be between the person and that which he or she believes in, something out of the human realm and out of the reach of human hands. It is a sacred communication between the person and something that transcends the physically human. Is it okay for a third eye in the sky to observe that person’s movements in and out of his or her place of worship? What are the deeper connotations of a pervasive intervention between a person and his or her faith? If churchgoers become aware that their movements in and out of the church are constantly being tracked, they may alter their church-going habits (as they may dislike being observed and tracked without having control over it) or may decide to stop attending church altogether. On the other hand, those who refuse to give up going to church will always have to think about that third eye that knows whether they went to church last week or not.

And what happens if we were to replace the word “church” in the last paragraph with the word “mosque”? It is not hard to imagine the potential for profiling and even more invasive targeting this technology—which works across different settings through the photo database—can bring.

For the most part, places of worship are still the heart and soul of their respective communities. They are groups of families and individuals who look out for each other and have each other’s back. When a congregation member is absent for a long time, other members will express their concern and reach out. If such interactions are interrupted by an automated attendance tracker, will it interfere with the community’s spiritual dynamics? To what extent will we allow technologies to alter human dynamics in their most essential manifestations? Only time will tell.

This isn’t about makeup. This is one of the most personal and private aspects of a person’s life, and we should not become desensitized to technologies which invade it.

Alexandra Franco is a Research Associate at the Institute for Science, Law and Technology at IIT Chicago-Kent College of Law.  The title of this essay is based on Philippians 4:13 “I can do all things through Christ who strengthens me.”