Android’s Watching You. Now You Can Watch Back.

By Raymond Fang

On November 24, 2017, Yale Law School’s Privacy Lab announced the results of its study of 25 common trackers hidden in Google Play apps. The study, conducted in partnership with Exodus Privacy, a French non-profit digital privacy research group, examined over 300 Android apps to analyze their permissions, trackers, and transmissions: Exodus Privacy built the software that extracted this data from the apps, and Yale’s Privacy Lab studied the results. The authors found that more than 75% of the apps they studied installed trackers on the user’s device, primarily for the purposes of “targeted advertising, behavioral analytics, and location tracking.” Yale’s Privacy Lab has made the 25 studied tracker profiles available online, and Exodus Privacy has made the code for its free, open-source privacy auditing software available online as well.

The Exodus Privacy platform currently lacks an accessible user interface, so the average person cannot use the program to test apps of their choosing. Though the Exodus Privacy website does contain a video tutorial showing how to “Try it [Exodus Privacy] at home,” following it requires the user to write code (possibly using the code available on GitHub) to run the privacy auditing software, which demands some knowledge of computer science. Instead, the average person must rely on the reports generated on Exodus Privacy’s website. Exodus Privacy’s software automatically crawls through Google Play to update tracker and permission data for all the apps in its database, and the database is constantly growing.

As of December 4, 2017, the Exodus Privacy website has generated reports on 511 apps. These reports yield interesting information about how some very popular apps track your personal information for advertising purposes. Snapchat (500,000,000+ downloads), for example, contains an advertising tracker from the data aggregator company DoubleClick. Spotify Music (100,000,000+ downloads) contains advertising trackers from DoubleClick, Flurry, and ComScore. Exodus Privacy’s reports say only that these trackers collect “data about you or your usages,” so it is hard to tell exactly what data about your social media usage and music preferences is being gathered. DoubleClick’s privacy policy, however, states that it collects “your web request, IP address, browser type, browser language, the date and time of your request, and one or more cookies that may uniquely identify your browser,” “your device model, browser type, or sensors in your device like the accelerometer,” and “precise location from your mobile device.” If cookies are not available, as on mobile devices, the privacy policy states that DoubleClick will use “technologies that perform similar functions to cookies,” tracking what you look at and for how long. You may well want to keep some of this information private; however, the widespread use of these advertising trackers in Android apps means that data about your social media content and music preferences can easily be sold to advertisers and exposed.

Beyond the tracking done on social media and music apps, Exodus Privacy’s reports show that some health and dating apps also collect and sell your intimate and personal data. Spot On Period, Birth Control, & Cycle Tracker (100,000+ downloads), Planned Parenthood’s sexual and reproductive health app, contains advertising trackers from AppsFlyer, Flurry, and DoubleClick. If you were pregnant, trying to conceive, or even just sexually active, data aggregator companies could conceivably sell that information to advertisers, who might then send you related advertisements. If someone were borrowing your computer or looking over your shoulder, they might see those ads and figure out that you were pregnant, trying to conceive, or sexually active. Such accidental exposure could cause you emotional harm if you were not ready or willing to share that private information with others. Grindr (10,000,000+ downloads), the popular dating app for gay and bisexual men, has advertising trackers from DoubleClick and MoPub. If advertisements tied to your sexuality started popping up whenever you used the Internet, they might reveal your sexuality before you were ready to tell certain people, causing considerable emotional distress.

There is clearly cause for concern when it comes to Android apps’ tracking and selling your personal information. Unfortunately, selling user data to advertisers is a very lucrative and reliable way for tech companies to monetize their services and turn a profit, so it’s hard to envision an alternative system where all of your personal data would be protected from commodification. However difficult it may now be to imagine a world where your privacy is adequately protected in the digital space, it will be up to privacy-conscious consumers, researchers, scholars, lawyers, and policymakers to make that world a reality.

Raymond Fang, who has a B.A. in Anthropology and the History, Philosophy, & Social Studies of Science and Medicine from the University of Chicago, is a member of the ISLAT team.

Blockchain: Web 3.0 or Web 3.No?

By Debbie Ginsberg

Welcome to the brave new world of blockchain. Some say it’s the future lifeblood of the internet and commerce. It will provide the foundation of the most robust information security system ever created. It will allow access to economic tools currently unavailable to billions. You may have seen many articles on blockchain recently. Maybe you’ve never heard of blockchain. Or maybe all you’ve heard about it is the hype.

But what’s a blockchain? The short explanation: It’s a network-based tool for storing information securely and permanently. The information in a blockchain can be authenticated by members of the public, but the information can be accessed only by those who have permission.

Blockchains can take any information—from simple ledgers to complex contracts—and store it in online containers called “blocks.” Each block is then run through a cryptographic function that translates its information into a unique series of letters and numbers called the “hash,” a fingerprint of the block’s contents. The contents themselves can also be encrypted with a special key, so that only users who have the key can read the information.

The blocks are then linked together. Each block’s information includes the hash of the previous block in the chain, along with a time stamp. A hash might look like this: 00002fb5d5500aae9046ff80cccefa. The tools that create blocks perform sophisticated cryptographic calculations so that each valid hash in a particular chain starts with a set of standard characters, such as 0000.
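The linking described here can be sketched in a few lines of Python. This is purely illustrative: the block fields, SHA-256, and the JSON encoding are this sketch’s choices, not any particular blockchain’s format.

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Bundle data with the previous block's hash and a time stamp,
    then fingerprint the whole bundle with SHA-256."""
    block = {"data": data, "prev_hash": prev_hash, "time": time.time()}
    encoded = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(encoded).hexdigest()
    return block

# Each new block stores its predecessor's hash, forming the chain.
genesis = make_block("ledger opened", prev_hash="0" * 64)
second = make_block("Alice pays Bob 5 widgets", prev_hash=genesis["hash"])
third = make_block("Bob pays Carol 2 widgets", prev_hash=second["hash"])
```

Because `third` embeds `second`’s hash, which in turn embeds `genesis`’s, altering any early block would invalidate every block after it.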

How does this keep information secure?  The blocks are “decentralized,” meaning that different blocks are stored on different computers, creating a distributed network of information. This network is public, so members of the public can see the chain and read the hashes.

Changing the information in any block changes its hash. That change then cascades up the chain, altering the hashes of every later block as well. The hashes in blocks further up the chain will no longer start with the standard characters (for example, they won’t start with 0000), and the time stamps will no longer match. That means anyone can see that data in the chain has been compromised.
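This tamper-detection check is mechanical enough to show in a self-contained Python sketch (again illustrative; the field names and two-block chain are invented for the example):

```python
import hashlib
import json

def block_hash(block):
    # Fingerprint everything except the stored hash itself.
    body = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def verify(chain):
    """Return the index of the first bad block, or None if the chain is intact."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return i  # contents no longer match the recorded hash
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return i  # link to the previous block is broken
    return None

# Build a tiny two-block chain, then tamper with the first block.
b0 = {"data": "Alice pays Bob 5", "prev_hash": "0" * 64}
b0["hash"] = block_hash(b0)
b1 = {"data": "Bob pays Carol 2", "prev_hash": b0["hash"]}
b1["hash"] = block_hash(b1)

assert verify([b0, b1]) is None    # chain checks out
b0["data"] = "Alice pays Bob 500"  # attacker edits history...
assert verify([b0, b1]) == 0       # ...and the mismatch is detected
```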

Some blockchains are single chains, but many blockchains work by distributing copies of the whole chain in the decentralized network. If the copies don’t agree with one another, the blockchain’s users will elect to accept only those chains that match, and will discard any compromised chains.
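That majority vote among copies can also be sketched in miniature. In this toy version (the hash values are made up), each node reports its copy of the chain and the version held by the most nodes wins:

```python
from collections import Counter

def accept_majority(copies):
    """Given each node's copy of the chain (a tuple of block hashes),
    keep the version the majority agrees on; the rest are discarded
    as compromised."""
    tally = Counter(copies)
    winner, _votes = tally.most_common(1)[0]
    return winner

nodes = [
    ("aaa0", "bbb0", "ccc0"),  # honest copy
    ("aaa0", "bbb0", "ccc0"),  # honest copy
    ("aaa0", "bbb0", "dead"),  # compromised copy
]
assert accept_majority(nodes) == ("aaa0", "bbb0", "ccc0")
```

Real blockchains use far more elaborate consensus rules (proof of work, proof of stake), but the principle is the same: disagreeing copies lose.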


Foundations: Bitcoin

If you’ve heard of blockchain, it’s probably in relation to Bitcoin, an online currency that is recorded in a blockchain. Bitcoin isn’t issued by a government or bank; instead, it is created through sophisticated mathematical algorithms and distributed over a large network.

Bitcoin’s popularity stems from two features not available in most other monetary transactions. First, no intermediary, such as a bank or PayPal, is needed. These intermediaries often take a hefty fee, particularly in international transactions. Users can transfer money directly to each other, and the parties don’t need to trust each other. By using Bitcoin’s blockchain, the parties know their transaction is secure. Second, no copies of the funds are made—as happens in many online transactions—so the funds cannot be “double spent.” The records in the chain containing Bitcoin funds simply point to a new (anonymous) owner when a transaction is made.

While many praise Bitcoin’s anonymity, this trait has given the online currency a somewhat shady reputation. Many ransomware viruses demand that payments be made in Bitcoin. Often, users affected by these viruses don’t know what Bitcoin is, let alone where to buy it. The currency is sold in special online exchanges.

Who Is Using Blockchain?

The financial industry has been investing in blockchain. Some of this investment has been outside the mainstream financial sector. For example, there are now several hundred Bitcoin-type currencies, known as cryptocurrencies. A few of these, such as Ethereum, have been gaining ground on Bitcoin and may eventually take over a significant part of the cryptocurrency market.

Major financial companies such as JP Morgan Chase are investing in their own blockchain-based applications. However, these applications will likely work somewhat differently than cryptocurrencies. Bitcoin and other online currencies use public blockchains, meaning that some information about the chain can be viewed by anyone: through public block explorer sites, visitors may access any block on the Bitcoin public blockchain. However, information about who owns the currency and how to access it is not public.

Instead, large financial companies are investing in private blockchains. Companies have full control over these blockchains because they are not distributed publicly. The companies themselves control the blockchain network. However, private blockchains might be more vulnerable to hackers because they aren’t distributed as widely as the public chains.

In addition, blockchain now plays a role in distributing intellectual property. One startup, for example, uses a blockchain system to manage a music cooperative, while another is using blockchain to create a media file platform that embeds digital rights management. Even Walmart is experimenting with blockchain to better track products from farms and factories to shelves.

Uses in Law

Just as artificial intelligence (AI) has already affected how legal work is done, blockchain also offers several ways to automate and outsource legal processes. Smart contracts have generated the most discussion. These contracts are coded into blockchains and make contract execution work more smoothly.

First, there is only one copy of the contract and all parties have access to it. The contract is completely transparent, and the terms of the contract are coded into the blockchain. It is therefore impossible to create fraudulent or inaccurate copies of the contract because the terms can’t be changed without the agreement of all parties to the contract.

Second, the smart contract can be configured to be self-executing. That is, verifiable events trigger the next stage of the contract, and proof of those events can be added to the chain. For example, Widgette Co. agrees to sell Acme Co. 100 widgets for $1,000 and ship them one week after payment. When Widgette Co. produces the 100 widgets, its system adds this information to the blockchain with a time stamp. Acme Co.’s system pays $1,000 and adds that information to the chain, also with a time stamp. Widgette Co. then ships the widgets, and that time-stamped information is added. Finally, Acme Co. records when it receives the widgets.
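The Widgette/Acme sequence amounts to a small state machine with time-stamped events. A toy Python sketch makes the idea concrete (the stage names and class are hypothetical, and a real smart contract would run on a blockchain platform rather than in ordinary Python):

```python
import time

STAGES = ["produced", "paid", "shipped", "received"]

class SmartContract:
    """Toy self-executing contract: each verified event advances the
    contract one stage and is appended, with a time stamp, to an
    append-only record of what happened."""

    def __init__(self):
        self.events = []

    def record(self, stage):
        expected = STAGES[len(self.events)]
        if stage != expected:
            raise ValueError(f"out of order: expected {expected!r}")
        self.events.append({"stage": stage, "time": time.time()})

    def complete(self):
        return len(self.events) == len(STAGES)

# The four time-stamped steps of the widget deal, in order:
deal = SmartContract()
for step in ["produced", "paid", "shipped", "received"]:
    deal.record(step)
assert deal.complete()
```

Because each stage must follow the previous one, the contract cannot, say, mark the widgets shipped before payment has been recorded.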

The blockchain can even include a dispute resolution mechanism. If a problem arises—such as Acme Co. claims that Widgette Co. shipped the widgets after two weeks instead of one week—this information can be verified by reviewing the information in the blockchain. The chain can then arbitrate the dispute based on preset terms. For example, Acme Co. automatically receives a 1 percent refund for each additional week that shipping is delayed.
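The preset refund term in that example is simple arithmetic, which is exactly why it can be automated; a sketch (the function name and terms are this example’s, not any real platform’s):

```python
def late_refund(price, promised_weeks, actual_weeks, rate=0.01):
    """1 percent of the price refunded per week of shipping delay,
    per the (hypothetical) preset terms coded into the chain."""
    weeks_late = max(0, actual_weeks - promised_weeks)
    return price * rate * weeks_late

# Widgette promised shipment in 1 week but shipped in week 2,
# so Acme automatically receives 1% of $1,000:
assert late_refund(1000, promised_weeks=1, actual_weeks=2) == 10.0
```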

The legal possibilities are not limited to contracts. Lawyers have been considering smart wills that can execute themselves, thereby avoiding probate. Blockchain could also be used in real estate transactions to help avoid using third parties in each transaction. For example, a blockchain real estate transaction wouldn’t need the services of an escrow firm.

Governments Respond

Governments have started to take notice of the possibilities that blockchain offers. Arizona’s governor recently signed an amendment to Title 44, Chapter 26. This amendment allows use of blockchain technology in the state, declaring that “[a] signature that is secured through blockchain technology is considered to be in an electronic form and to be an electronic signature” and “[a] record or contract that is secured through blockchain technology is considered to be in an electronic form and to be an electronic record.” Vermont is also working on a bill to allow the use of blockchain technology. Other governments and organizations, including the European Union, are investigating blockchain’s possibilities. The Republic of Georgia uses blockchain to secure government transactions involving property. Other governments are considering following suit, including those in Sweden, Honduras, and Cook County, Illinois.

Educational Blockchains

Are blockchains useful only for financial transactions? Absolutely not. Educational institutions are considering putting transcripts and graduation credentials on blockchains. This would permit alumni to easily access their own information and verify its authenticity.

It would also help those students who enroll in classes at different universities to pull their information together into a single source. The Massachusetts Institute of Technology already offers blockchain-based certificates for some programs.

Roadblocks and Possibilities

Despite the many possibilities blockchain offers, the technology must overcome several issues before it can be widely implemented. Many of the problems blockchain could solve already have systems and regulations in place. For example, there has been discussion of using blockchain for health records, yet the current regulatory environment for those records would make creating a new system difficult.

Blockchains are also not easy to implement. Setting up a blockchain requires sophisticated technology skills. Lawyers—particularly lawyers working with self-executing contracts—would need to work with coders to create them.

That said, the use of blockchain will most likely continue to grow, particularly to solve problems involving security and authentication. One area that offers great possibility is using blockchains to create secure online identities that could be used to access online services and password-protected websites. New approaches are needed to prevent ID and data theft, and the blockchain may be just the tool for the job.

This article was originally published in the September/October 2017 [Volume 22, Number 1] issue of AALL Spectrum.

Debbie Ginsberg is the Educational Technology Librarian at the Chicago-Kent College of Law Library.

Hate Speech, Free Speech, and the Internet

By Raymond Fang

In the wake of the August 12, 2017 white supremacist terrorist attack in Charlottesville, Virginia that killed one person and injured 19 others, how are Internet platforms handling racist, sexist, and other offensive content posted on their servers and websites? What are the legal ramifications of their actions?

According to a July 2017 Pew Research Center report, 79% of Americans believe online services have a responsibility to step in when harassing behavior occurs. If white supremacist content counts as a form of harassment, then online platforms certainly took up this call in the week following the Charlottesville attack, when a number of major services banned white supremacist users, pages, and sites.

White supremacists have reacted to these bans and other anti-white-supremacy movements by casting themselves as an oppressed group, supposedly denied free speech, and fearful to speak their minds on so-called intolerant, overly-PC liberal college campuses lest they be attacked and belittled. (Never mind the fact that people of color, women, immigrants, LGBTQ individuals, poor people, people with disabilities, and other marginalized groups have faced and continue to face serious and real discrimination every day).

Somewhat unsurprisingly, the Pew Research Center Report finds stark gender differences on opinions about the balance between protecting the ability to speak freely online, and the importance of making people feel welcome and safe in digital spaces. 64% of men age 18-29 believe protecting free speech is imperative, while 57% of women age 18-29 believe the ability to feel safe and welcomed is most important. Unfortunately, the Pew Research Center Report does not contain any data about racial differences on the speech v. safety question, nor does it have cross-tabbed data on race and gender together (e.g. black women, white men, Hispanic men).

Legally, digital media companies are allowed to ban people from their servers and services at their discretion, as First Amendment guarantees of free speech do not necessarily apply to private companies and their own terms of service. There are dangerous implications to this standard. As CloudFlare’s CEO, Matthew Prince, wrote in a company email about his decision to kick The Daily Stormer off their servers, “Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet. No one should have that power.” Prince later wrote a blog post on CloudFlare’s website where he discussed his decision, emphasized the importance of due process when decisions are made about speech, and called for the creation of stronger legal frameworks around digital content restrictions that are “clear, transparent, consistent and respectful of Due Process.” In other words, not all online speech deserves protection, but delineating which online speech does and doesn’t deserve protection should be a clear, transparent, and democratic process. Though white supremacists and neo-Nazis were the rightful target of Silicon Valley’s wrath this time, that may not be the case in the future – perhaps policymakers would do well to heed Prince’s call.

Raymond Fang, who has a B.A. in Anthropology and the History, Philosophy, & Social Studies of Science and Medicine from the University of Chicago, is a member of the ISLAT team.

Ransomware: Digital Hijacking in the 21st Century

By George Suh

Ransomware is gaining traction as one of the most significant cyber threats online.  On May 12, 2017, the ransomware “WannaCry” began infecting PCs all over the world.  The impact of WannaCry is staggering: it infected more than 300,000 computers across over 150 countries.  Ransomware is a type of malware that encrypts or locks your computer’s data and files for ransom.  Bitcoin is a very popular currency with cyber attackers because payments are anonymized, preventing the extortionists from being tracked by federal and international authorities.  Moreover, there is no guarantee that paying the ransom will give the infected user access to their computer.  Thus, if you do not create a backup of your data, paying the ransom can lead to a costly or futile outcome and leave potentially sensitive data in the hands of clandestine criminals.

Ransomware is not a new phenomenon.  This type of malware was first reported in Russia and parts of Eastern Europe in 2005, and starting around 2012, its use has grown exponentially.  The rise of ransomware has proven to be a very lucrative black market enterprise for hackers, with the FBI estimating that another major ransomware, CryptoWall, generated at least $27 million from its victims.  Even police departments were among CryptoWall’s victims.  In Swansea, Massachusetts, a police department’s computer system became infected.  Ultimately, the department paid the ransom of 2 Bitcoins (around $750 at the time) instead of attempting to decrypt the files itself.  Swansea Police Lt. Gregory Ryan told the Herald News that “CryptoWall is so complicated and successful that you have to buy these Bitcoins, which we had never heard of.”

As recent events have shown, high-profile ransomware attacks are a growing trend in the cyber landscape.  Businesses and organizations that maintain personally identifiable information should take into account the potential legal ramifications of failing to secure critical data:

  • Federal Trade Commission Enforcement. In a November 2016 blog entry, the FTC warned that “a business’ failure to secure its networks from ransomware can cause significant harm to the consumers whose personal data is hacked.  And in some cases, a business’ inability to maintain its day-to-day operations during a ransomware attack could deny people critical access to services like health care in the event of an emergency.”  The FTC also highlighted that “a company’s failure to update its systems and patch vulnerabilities known to be exploited by ransomware could violate Section 5 of the FTC Act.”  When a data breach occurs, the FTC may also consider the accuracy of the security promises made to the consumer.  Under Section 5 of the FTC Act, the “unfair or deceptive acts or practices” doctrine gives the FTC the authority to pursue legal action against businesses and organizations that misrepresent the security measures used to protect sensitive data.
  • Breach Notification Requirements. In the U.S., 48 states, the District of Columbia, the U.S. Virgin Islands, Guam, and Puerto Rico have laws that require notification to affected individuals in the event of a breach. Some states also require notification to regulators. Federal laws, such as the Health Insurance Portability and Accountability Act (“HIPAA”), also have specific breach notification requirements.  Moreover, U.S. businesses and organizations that operate or sell products internationally may be subject to stricter notification laws.  For example, when the E.U.’s General Data Protection Regulation (“GDPR”) takes effect on May 25, 2018, it will require notification of a breach “within 72 hours of first having become aware of the breach.”  Businesses or organizations that violate the GDPR can be fined up to 4% of annual global turnover or €20 million, whichever is greater.
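The GDPR’s “whichever is greater” cap works out to a simple maximum; a quick illustration in Python (the function name and example turnover figures are invented for this sketch):

```python
def gdpr_max_fine(annual_global_turnover_eur):
    """Maximum GDPR fine: the greater of 4% of annual global
    turnover or a EUR 20 million floor."""
    return max(0.04 * annual_global_turnover_eur, 20_000_000)

# A firm with EUR 100M turnover hits the EUR 20M floor (4% is only 4M);
# a firm with EUR 1B turnover faces the 4% figure (40M) instead.
small_firm_cap = gdpr_max_fine(100_000_000)
large_firm_cap = gdpr_max_fine(1_000_000_000)
```

The floor means even a small company can face a fine far exceeding 4% of its revenue.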

Understanding the applicable breach notification laws can save a business or organization from significant legal and monetary complications.  The unfortunate reality is that ransomware may be the beginning of much more sophisticated and sinister malware attacks.  Therefore, businesses and organizations that maintain personal data should ensure they are complying with data privacy and cyber security laws.  With the high profitability and anonymity that ransomware provides for cyber criminals, there will certainly be more attacks in the future.

George Suh is a 3L at Chicago-Kent. He is the co-founder and current Vice President of Chicago-Kent’s Cyber Security and Data Privacy Society.

Facebook after Death

By Michael Goodyear

With nearly two billion users, Facebook is firmly entrenched in 21st century life. A person’s Facebook account serves as a digital doppelgänger: their thoughts, interests, pictures, friends, and memories are available in perpetuity. But what happens to the digital profile when its physical owner dies? With physical property, there is often a presumed heir. A Facebook account, even if it doesn’t have monetary value, can have significant emotional value—after all, it is a record of life, a sense of personality and who the deceased individual was. Should a parent, significant other, sibling, or someone else have access to a Facebook account after the owner has passed away?

No, said a Berlin court yesterday.  A 15-year-old girl had been killed by a subway train back in 2012. Her parents wanted access to her Facebook account so they could look at her posts and read her chats to determine whether she had committed suicide. The parents had petitioned Facebook to grant them access, but when the request was denied, they went to the German courts.

Back in 2015, a regional court had ruled in favor of the parents, classifying Facebook messages and posts as similar to letters and diaries, which can be inherited. But the court of appeals instead looked to the privacy of those with whom the deceased girl had communicated. Granting her parents access to her account would compromise those other individuals’ constitutional right to privacy. The case could be appealed all the way to Germany’s Federal Court of Justice.

But for now, Facebook’s policies on a deceased person’s account are maintained. There are actually only three options for a deceased person’s Facebook account: 1) leaving it, 2) memorializing it, and 3) removing it. Facebook has a special form for a deceased person’s account, but this is only for memorializing or removing the account, not accessing it. Facebook’s policy is to not allow anyone other than the account user to log in to their account, including the family of the deceased.

But while Facebook does not turn over account access to family members, it does respond to requests from the government, including access to posts and messages if the government supplies a warrant. This means that there is a threshold where even privacy is outweighed by a greater goal.

Back in 2005, the family of a deceased marine, Justin Ellsworth, was granted access to his Yahoo email account after an Oakland County probate judge ordered Yahoo to turn it over. Cybersecurity law experts Julie E. Cohen of Georgetown University Law Center and Henry H. Perritt, Jr., of Chicago-Kent College of Law argued that emails were like other types of information or property routinely accessed or transferred after someone’s death, and that access should be granted to survivors.

But seven years later in 2012, a California district court quashed a subpoena by a deceased individual’s family members to have access to the contents of her Facebook account. Sahar Daftary had died falling from the 12th floor of an apartment building in Manchester, England. Similar to the German case, her family wanted to know whether it was an unfortunate accident or suicide. The court upheld Facebook’s policy, noting that the Stored Communications Act, 18 U.S.C. §§ 2701-2712, protected the contents of Daftary’s Facebook account. The court did note that Facebook could turn over the contents voluntarily, but that would be unlikely given Facebook’s policy on the accounts of deceased persons.

“I think it’s a good idea for sites not to have a blanket policy to hand this stuff over to survivors.  This information is private and you assume that it’s private, you assume that your Facebook account is private, you assume that your email account is private,” said Rebecca Jeschke of the Electronic Frontier Foundation.

While someone’s public posts on Facebook were intended for others’ eyes, their private messages were not. Even in the case of children, it is unlikely that they would want their parents reading their private messages when they were alive. Why should it be different now that they are deceased? In addition to the deceased individual themselves, the privacy of those with whom they communicated would also be at stake. The German appeals court’s decision supports a broader point: a loved one’s death is tragic, but even it should not trump the constitutional right to privacy.

Michael Goodyear, who has a BA in History and Near Eastern Languages and Civilizations from the University of Chicago, is part of the ISLAT team.

Alexa, Am I Violating Legal Ethics?

By Peggy Wojkowski

Thomson Reuters announced its release of Workplace Assistant, which allows attorneys to record, inquire about, and use a timer to calculate billing entries via Amazon Echo and other Alexa-enabled devices. The new Workplace Assistant interacts with the existing Elite 3E platform used by law firms to manage workflow and streamline tasks. Thomson Reuters indicates that Workplace Assistant “always works within the firm’s security walls.” Workplace Assistant does, however, interact with the Amazon environment, although Thomson Reuters considers the interaction “low touch,” meaning there is very little exchange between Workplace Assistant and the Amazon environment. This minimal interaction, beyond a firm’s security walls, could raise ethical concerns for the attorneys who use the Alexa-enabled aspects of Workplace Assistant.

Alexa-enabled voice assistants, such as the Amazon Echo and Amazon Dot, respond to voice requests from users. These devices stream or record the voice requests to servers, which process the requests and form responses. For Alexa-enabled Amazon products, the wake-up word, “Alexa,” activates the voice assistant, which then responds to voice requests.  In order to hear the wake-up word, the voice assistant’s microphone must be active even when a user is not actually making a request; that is, the voice assistant is listening even when the device is not awake. When an Alexa-enabled product is used with the Workplace Assistant, the device is listening for the wake-up word inside the attorney’s office. The Workplace Assistant matches voice requests regarding client billing with the client information in the Elite 3E platform, the law firm management software. Even if the Elite 3E platform is ultimately handling any voice requests pertaining to billing, it is not clear who is handling any other voice requests, or who has access to the microphone when Alexa is not awake.  This scenario requires investigation in order to comply with the American Bar Association’s Model Rules of Professional Conduct.

The American Bar Association’s Model Rule 1.6, pertaining to confidentiality of information in an attorney-client relationship, indicates in part (c) that “a lawyer shall make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client” (emphasis added). Comment 18 to Model Rule 1.6 provides insight into what counts as reasonable efforts, requiring attorneys to act competently to preserve confidentiality. In acting competently, attorneys know not to discuss confidential information in public places, with others outside of the legal team, or with individuals with whom communication is not necessary to adequately represent clients.  Because competent representation includes awareness of who is present (physically and electronically) when discussing confidential information, the Workplace Assistant could pose a problem: it is difficult to determine conclusively who is listening to, or has access to, the microphone and its recordings on the Alexa-enabled device.

Model Rule 1.1 also requires that an attorney provide competent representation to clients and, in its comments, addresses technology used by attorneys. According to Comment 8 of Model Rule 1.1, this competency includes the attorney keeping up-to-date on changing law “including the benefits and risks associated with relevant technology.”  Therefore, attorneys cannot blindly use technology without knowing the security measures and the possible ramifications on client representation. The benefit of the Workplace Assistant is the time saved in recording and inquiring about billing. The risk is having an active microphone within an attorney’s office able to record client-privileged information, which may be a risk that attorneys do not want to take.

However, Amazon does have another product, the Amazon Tap, which may lessen the risk associated with voice assistants while still allowing attorneys to use the Workplace Assistant program. Although this device also uses Alexa to respond to voice requests, no wake-up word is required: the user must touch a button on the top of the device to activate the microphone. Because the microphone is not constantly listening for a wake-up word, some of the confidentiality concerns are alleviated.

Either way, attorneys may still hesitate to use any of these gadgets because of clients’ reactions: a client stepping into the office for a meeting may balk at seeing a microphone in a space where they want to discuss private, confidential information.

Peggy Wojkowski graduated from Chicago-Kent College of Law in May 2017.  She will be joining a large IP boutique firm in September 2017 after sitting for the Illinois bar exam in July 2017.  

Bots Can Order Pizza For You. And Then Spy on You.

By Keisha McClellan

Although we are well into 2017, here’s a belated welcome to the Year of the Bot. Bots are revolutionizing the way we fuse technology with our everyday lives and posing challenges to our privacy.

Your actual phone or smart speaker may record your wish for a cheap plane ticket or an Uber ride, but it is the software application known as a bot that executes the command. At their core, bots engage us in an interaction in which we give a command and the device carries it out. Some bots enable two-way conversations with us; others offer simpler engagement.

From celebrity chatbots like Kim Kardashian’s or Maroon 5’s, to bots that can help us with health queries or financial budgeting, bots are popping up in our lives in all kinds of nifty ways. But the bots associated with “smart speakers,” such as Amazon’s Alexa and Google Home, wrap convenience and controversy into one.

Why should we care? The benefits of gaining a virtual assistant in the devices we carry around or use at home come with a creepy caveat: bots can infringe on our privacy in ways we never imagined.

Take Apple’s Siri, Amazon’s Alexa, or a Google Home smart speaker: these are essentially voice-controlled virtual assistants that can make life simpler, speedier and, perhaps, more enjoyable. You’re literally only a shout across the room away from ordering your favorite pizza.

That bots listen for our commands is innovative. Echo’s Alexa allows you to do many things including making a to-do list, providing a weather forecast, placing a toy order and streaming a podcast on voice command.

That the technology can also listen and record things you’re saying without you realizing it, is scary. It may even be incriminating.

An Arkansas prosecutor demanded Amazon turn over recorded data from an Echo in hopes that the speaker had been recording at the time a man died in a friend’s hot tub. The device at times records the goings-on in one’s home even when it hasn’t been directed to do so, and the prosecutor hoped the cloud recordings would shed light on how the man died. Until the owner consented to his Echo information being turned over, Amazon refused to comply with requests for the recorded data, citing the First Amendment as protecting the recordings.

The fact that these smart speakers may be listening and recording you without your knowledge, is disturbing enough. A reporter was startled when a private conversation between him and his wife was eerily interrupted when Echo’s Alexa “barged into the conversation with what sounded like a rebuke.”

But more troubling is the question of what companies are doing with the data these smart speakers collect. Bots gather “massive amounts of data about us. And that raises a dark side of this technology: the privacy risks and possible misuse by technology companies,” says the Washington Post’s Vivek Wadhwa.

In all, Albert Gidari, director for privacy at Stanford Law School’s Center for Internet and Society, says the “reality is that technology…kind of blurs law for privacy.”

Bots behaving badly can take many forms. For instance, Lin-Manuel Miranda of “Hamilton” fame was so alarmed about bots driving up the price of tickets to sports, music events and Broadway shows in some cases by more than 1,000 percent, that he penned an op-ed in the New York Times blasting brokers’ use of ticket bots.

President Obama and Congress were concerned enough about the potential for some bots to harm consumers that they passed the BOTS Act of 2016 to deter ticket scalpers from going high-tech with bots.

Sure, bots can do bad things. But like two sides of every coin, bots have good capabilities too.

Siri helped a little boy save his mother’s life. When his mother fell unconscious, a 4-year old used his mother’s finger to open an iPhone and he used Siri to call 911 and reach an operator for help.

In this year of the bot, you may be itching to take the plunge and buy a new gadget that features a bot virtual assistant. While the benefits are many, be sure to protect your privacy in the process. For starters, Jocelyn Baird advises that you review the settings of your device’s microphone and even consider adding an “audible tone when it’s active, so you know when it’s recording.”

Keisha McClellan is a rising 2L law student at Chicago-Kent College of Law and a founding board member of Chicago-Kent’s Cyber Security and Data Privacy Society.


Just a Fingerprint Away: The Risks of Fingerprint Scanning

By Michael Goodyear

The fingerprint scanner is perhaps one of the best known security features in the world. In spy movies, no safe or villain’s lair is complete without one. But scanners aren’t foolproof: in “Diamonds Are Forever,” James Bond uses a fake fingerprint to get past one. In the nearly 50 years since that movie was released, fingerprint scanners have become increasingly ubiquitous, and as a common protection mechanism for smartphones they are the sealed gate to your data. But that gate is not as secure as we might think, and it no longer takes a legendary spy like 007 to crack it open.

A recent study by researchers at New York University and Michigan State University brought the technological risks of fingerprint scanning to light. The researchers used computer simulations to create “MasterPrints,” real fingerprints from databases or synthetically created ones that can spoof one of the stored fingerprints in a scanner’s database to unlock a phone. Although the study did not use real phones, instead using cropped images on the commercial verification software Verifinger, the findings were still alarming. The researchers’ generated prints could match the real ones up to 65% of the time. Even if the percentage with phones was much lower, it would still be a considerable risk.

One of the greatest weaknesses of your phone’s fingerprint scanning technology is that it doesn’t actually take a full fingerprint scan. A full print would be nearly impossible to falsify. But your iPhone or Android phone scans only partial fingerprints, a much smaller area with fewer unique features. This risk is exacerbated by the fact that your phone typically takes eight to ten scans, giving the fingerprint scanner a database of eight to ten partial prints it can match against. Hackers then have eight to ten chances to spoof your fingerprint rather than just one. If you register other people’s fingerprints on your phone (your spouse’s or children’s, perhaps), the risk increases again. It’s like a lockbox with several different keys: the greater the number of keys, the greater the risk that someone will get their hands on one or be able to copy one.
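The compounding effect of multiple enrolled templates can be sketched with simple probability (the per-template spoof rate `p` below is purely illustrative, not a figure from the study): if each stored template can be spoofed independently with probability p, then n templates give an attacker a 1 − (1 − p)^n chance of at least one match.

```python
# Back-of-the-envelope sketch: more enrolled partial-print templates mean
# more chances for a spoofed print to match. The value p = 0.05 is an
# illustrative assumption, not a number reported by the researchers.

def spoof_probability(p_single, n_templates):
    """Chance that a spoofed print matches at least one of n templates."""
    return 1 - (1 - p_single) ** n_templates

p = 0.05
for n in (1, 8, 10):
    print(f"{n:2d} enrolled templates -> {spoof_probability(p, n):.1%} chance")
```

Under this toy model, going from one template to the eight-to-ten a phone typically stores raises the attacker's odds from 5% to roughly a third or more, which is the intuition behind the lockbox-with-many-keys analogy.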

Professor Stephanie Schuckers, Director of the Center for Identification Technology Research at Clarkson University, noted that because the study didn’t involve actual phones, the takeaways were limited.

But while a full study of Apple and Android fingerprint recognition programs will be necessary to uncover the exact risk of falsifying fingerprints, any risk is too high. Our phones hold a world of data about us. By unlocking your phone, someone wouldn’t just be able to make a call, but would know your deepest secrets. Your contacts, your intimate texts and emails, your interests, and even your health data, all stored on your phone with only fingerprint recognition to protect them, would be at risk.

Perhaps the most alarming consequence of this security vulnerability is what it means for your finances. Services such as Apple Pay and Android Pay allow you to make purchases with the swipe of your finger. Banks are increasingly starting to have fingerprint recognition for signing into your app (and all of your financial data). Large banking institutions such as Chase and Bank of America, as well as credit card companies such as Capital One, are now just a swipe away for you…and your hacker.

When someone’s information gets stolen because of a falsified fingerprint, who will be liable? The phone developer and the financial institution, having relied on falsifiable fingerprint recognition technology, would be at risk of being held responsible. In the short term, however, it is the user who will suffer. Their personal and financial information will be compromised, leading to countless hours trying to secure everything again, not to mention the permanent damage that could be done by their data getting out.

Fingerprint technology is not the only option (written passwords are usually still offered), so customers do have a choice of whether or not to trust fingerprint technology to protect their data. But since fingerprints are unique, fingerprint scanners have been seen as the safe choice, a much more secure method than a four-digit passcode.

Reporters have actually questioned the security of fingerprint scanning systems for years. But while previous fears were often just lists of everything that could go wrong, the new NYU and MSU study has quantifiable data to prove that fingerprints can be spoofed.

Technology has advanced so much that you can do practically anything from your smartphone. But we have to remember that with progress come downsides. When all that stands between your sensitive personal information and a thief is a fingerprint, you need the technology to be ironclad. James Bond may have had noble aims in tricking a fingerprint scanner, but it is unlikely that data hackers will have those same scruples. It may be easy to flip your finger and open your phone and all of your apps, but ease is not worth the risk of losing your information to modern day spies.

Michael Goodyear, who has a BA in History and Near Eastern Languages and Civilizations from the University of Chicago, is part of the ISLAT team.


Hacking at the Downbeat: How Music Can Take Over Our Devices

By Michael Goodyear

Hacking into electronic systems is certainly not new. People have taken over entire smart homes and data breaches have cost companies such as Target and Home Depot millions of dollars. But a team of researchers has found a new way to hack: music.

Researchers at the University of Michigan and the University of South Carolina have found a weakness in microelectromechanical systems (MEMS) accelerometers, standard components of electronic systems ranging from your smartphone to automobiles and drones. A MEMS accelerometer has a sensing mass that shifts depending on the accelerative forces exerted on it, which in turn produces a voltage signal corresponding to the sensed acceleration. By directing acoustic interference at the sensor, the researchers displaced the sensing mass, in effect causing involuntary actions in the device.
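One way acoustic interference of this kind can fool a system is aliasing: a tone just above the sensor's sampling rate reads back as a slow, plausible-looking acceleration. The toy simulation below is an illustrative sketch under that assumption, not the researchers' actual attack code; it shows a 101 Hz tone, sampled at 100 Hz, appearing as a 1 Hz "motion" signal even though the device never moved.

```python
# Toy aliasing demonstration (illustrative assumption, not the actual
# WALNUT attack code): a tone slightly above the sampling rate aliases
# down to a slow signal the system may interpret as real motion.
import math

SAMPLE_RATE = 100.0   # Hz: rate at which the system polls the accelerometer
TONE_FREQ = 101.0     # Hz: attacker's acoustic tone, just above that rate

def sampled_output(seconds=2.0):
    """Sensor readings when the only input is the acoustic tone."""
    n = int(seconds * SAMPLE_RATE)
    # True motion is zero; the tone alone displaces the sensing mass.
    return [math.sin(2 * math.pi * TONE_FREQ * (i / SAMPLE_RATE))
            for i in range(n)]

samples = sampled_output()
# The 101 Hz tone aliases to |101 - 100| = 1 Hz: the sampled readings are
# indistinguishable from a genuine 1 Hz acceleration signal.
```

Because the downstream software only ever sees the sampled values, it cannot tell this injected 1 Hz signal apart from real movement, which is what lets an attacker bias a step counter or steer an automated decision.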

These acoustic attacks could just be a relatively harmless interference. For example, by using a YouTube music video interspersed with special tones, the researchers spoofed a MEMS accelerometer to send out a signal that resembled the word “WALNUT,” which became the name of the team’s acoustic attack.

But the consequences could be much more dire. Some systems depend on the MEMS accelerometers to make automated decisions. By playing a malicious audio file, the hacker could take control of these devices or surreptitiously influence them.

WALNUT was used to take over a remote-control car via an app on an infected phone. While a rogue remote-control car may not be too scary, MEMS accelerometers are also used in much larger systems, such as cars and drones, which could cause immense damage if they were taken over.

The researchers also used WALNUT to alter the step count on a Fitbit. The researchers did not consider this attack a serious security risk (pointing out instead that it could be used to garner free rewards through step-based rewards programs), but the ability to alter health data on a device could have serious consequences. If health data such as a Fitbit’s can be changed, the resulting inaccuracies could mislead people who depend on such apps or devices to manage their health, potentially leading them to act on incorrect data in ways that damage it. Even more dangerous, mobile health apps that control devices such as pacemakers or insulin pumps, or even the devices themselves, could be manipulated to create a fatal heart rhythm or administer the wrong dosage of insulin.

WALNUT is not just a fringe technique that can affect only the occasional device. The researchers found that 65% of the accelerometers they tested were vulnerable to acoustic output control, in which devices such as the remote-control car could be taken over. They also found that 75% (15 of the 20 accelerometer models tested, from five different manufacturers) were vulnerable to acoustic output biasing, in which information like your Fitbit step count could be altered.

The Internet of Things offers many advantages, but as WALNUT illustrates, it can be infiltrated with something as simple as a YouTube song. The consequences of our dependence on technology could not only hurt our privacy, but also our physical wellbeing. In their paper, the WALNUT team outlined how to better protect against the acoustic takeovers, but if the accelerometer chip makers don’t follow the advice, maestro hackers may just have one more instrument in their orchestra for assailing the Internet of Things.

Michael Goodyear, who has a BA in History and Near Eastern Languages and Civilizations from the University of Chicago, is part of the ISLAT team.



Altering Prisoners’ Sense of Time: The Moral Regression of a Futuristic Technology

By Caroline Thiriot

What if you could give a prisoner a pill that changed their perception of time? A 10-year sentence could feel like millennia. Or a person could experience a 10-year sentence in two years.

Science has already brought us to the brink of this technology. A paper published in the Journal of Neuroscience outlines the nature of time perception, and science seems to conclude in favor of Kant’s “subjective” and “ideal” view of the matter. Indeed, “[o]ur perception of time constrains our experience of the world and exerts a pivotal influence over a myriad array of cognitive and motor functions” (emphasis in the original). The results of the study demonstrated “anatomical, neurochemical, and task specificity, which suggested that a neurotransmitter called GABA (Gamma-Amino Butyric Acid) contributes to individual differences in time perception.” With this increased understanding of how we perceive time, perception-altering medications may follow.

Psychoactive drugs could be used to distort prisoners’ perception of time and make them feel as though they were serving a 1,000-year sentence, a sentence length that is legally available in the United States. As detailed in Slate and Aeon, philosopher Rebecca Roache has undertaken a thought experiment to explore the ethical issues involved in using perception-altering drugs and life-extension technologies in the corrections context.

Medical and scientific advances could change the way prisoners serve time and dramatically alter our prison system. For economic purposes, one could imagine prisoners physically spending one day in prison while psychologically experiencing it as lasting years. Considering the high costs of prisons, psychoactive drugs could thus be a way to save money. The risk-benefit ratio, however, does not seem favorable at all.

Perceptual distortions such as “disorientation in time” already occur in prisons; consider the practice of solitary confinement. “There is long history of using the prison environment itself to affect prisoners’ subjective experience,” highlights Rebecca Roache. On October 18, 2011, Juan E. Méndez, the Special Rapporteur of the Human Rights Council on torture and other cruel, inhuman or degrading treatment or punishment, presented his thematic report on solitary confinement to the United Nations General Assembly. He called on all countries “to ban the solitary confinement of prisoners except in very exceptional circumstances and for as short a time as possible, with an absolute prohibition in the case of juveniles and people with mental disabilities.” He stressed as well that “Solitary confinement is a harsh measure which is contrary to rehabilitation, the aim of the penitentiary system.”

Two points made in the statement above are worth being further discussed. First, we will address the issue of torture and other cruel, inhuman or degrading treatment or punishment. Then, we will focus on a more philosophical controversy: the aim of the penitentiary system.

Torture is universally condemned. The prohibition against torture is well established under customary international law as jus cogens, as well as under various international treaties such as the Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment, ratified by 136 countries (including the United States in 1994). Although the effects of perception-altering drugs have not yet been studied, we can compare them to the effects of solitary confinement to some extent. One can picture the subject whose time perception is altered as experiencing a reality different from the one commonly experienced. Like the physically isolated prisoner, such a subject would be deprived of normal human interaction and might eventually suffer from mental health problems including anxiety, panic, insomnia, paranoia, aggression and depression. Beyond the mental health risks, there are physical health risks as well, because the need for sleep or food may be perceived differently.

As for the aim of the penitentiary system, several questions arise, especially concerning rehabilitation and recidivism. Some authors argue that prisons should be abolished and replaced by “anti-prisons,” that is, locked, secure residential colleges, therapeutic communities, and centers for human development. From such a perspective, which holds that punishment fails and rehabilitation works, altering prisoners’ time perception to make them feel as though they spend more time in jail could be seen as a step backwards rather than progress. According to the American Correctional Association (ACA) 1986 study of prison industry, contemporary prison institution goals fall into three categories: offender-based (good work habits, real work experience, vocational training, life management experience), institution-oriented (reducing idleness, structuring daily activities, reducing the net cost of corrections), and societal (repayment to society, dependent support, victim restitution). If such technology were implemented, most of these goals would not be met. On the contrary, it appears we would go back to an era when solitary confinement was thought to foster penitence and encourage reformation, but in a rather extreme form, causing more harm to the individual, possibly to the point of mental illness.

The Eighth Amendment states that “Excessive bail shall not be required, nor excessive fines imposed, nor cruel and unusual punishments inflicted.” Professor Richard S. Frase has analyzed constitutional proportionality requirements. He notes that since 1980, the Supreme Court has ruled in favor of the prisoner only once in the six cases in which the duration of a prison sentence was attacked on Eighth Amendment grounds. “The Court has never made clear what it means by proportionality in the context of prison sentences,” he concludes; “Justice Scalia believes (and perhaps so does Justice Thomas) that this concept only has meaning in relation to retributive sentencing goals.” When it comes to sentencing goals, one should thus distinguish retributive goals from non-retributive ones. While the first theory considers only the defendant’s past actions and focuses on the punishment itself, the second (also called “utilitarian”) takes the future effects of the punishment into account. On the basis of this distinction, one must conclude that making prisoners feel as though they were spending a very long time in jail would serve only a retributive purpose and would fail entirely to address the non-retributive ones.

We also live in a society in which, even if individualism seems to be the supreme rule, interdependence remains a governing concept. In his famous “Experience Machine” thought experiment, Robert Nozick asks, “What else can matter to us, other than how our lives feel from the inside?” He concludes that, where pleasure is concerned, we would choose everyday reality over an apparently preferable simulated reality. Although his thought experiment deals with a notion opposite to punishment, we can rely on the conclusion that the reality we commonly experience matters more than our subjective experience of it. Consequently, one should not forget that the victim’s subjective perception of justice matters as well. It may be difficult for victims to know that a criminal left jail after spending only a little time there and is free to enjoy the rest of their life. Even if we focus only on retributive goals, such technology seems to have subjective limitations.

In the end, there seems to be no argument other than economic advantage for allowing the use of psychoactive drugs to distort prisoners’ perception of time. On the contrary, their use can be seen as torture and serves no rehabilitative aim, which is the main focus of prison sentences today. The technology would therefore be regressive rather than progressive.

Caroline Thiriot, who has a Master’s in International Law and Human Rights from the Université Panthéon-Assas and an LL.M. in international and transnational law from Chicago-Kent College of Law, is currently a Master’s student in Bioethics at Université Paris Descartes.