Inheriting the Facebook Graveyard

By Michael Goodyear

Last year, I wrote about a German court case that grappled with the question of whether anyone may access a deceased individual's social media accounts. The case centered on a 15-year-old girl who had been killed by a subway train; her parents wanted to access her Facebook account to determine from her posts and messages whether she had committed suicide. In May 2017, the German court of appeals reversed a lower court ruling in favor of the parents, holding that allowing the parents access to their daughter's account would compromise the constitutional expectation of telecommunications privacy of the third parties with whom she had interacted online. On July 12, 2018, the highest German court, the Federal Court of Justice (BGH), overruled the court of appeals, agreeing with the initial lower court decision and holding that online data can be inherited just like physical writings such as personal diaries or letters.

While there is a strong policy interest in probate, and social media does appear to fit into a broad interpretation of the written communications traditionally included in probate, social media accounts contain a far greater breadth of information than those traditional sources. While probate law stretches back centuries, social media does not. Today, instead of a simple spoken conversation, which could not be inherited, we engage in lengthy conversations on social media and via texting. In many cases, our social media accounts and private messages are reflections of our personal thoughts. Although diaries also contain sensitive thoughts, they cannot compare to the magnitude of personal information present on Facebook. This is a fundamental change from the physical documents previously passed down under probate law.

The German case was also particularly tricky due to the girl's age. Since she was a minor, her parents had an expanded range of rights that they would not have had once she reached legal adulthood. The BGH ruled broadly that digital content can be inherited, but it is unclear whether such a holding could be limited to minors alone.

Such a limitation could be one way to achieve the court’s goal while still preserving data privacy for third parties. If such a policy were implemented, third parties would know when communicating over Facebook with underage individuals that their communications are not necessarily limited to the recipient’s eyes alone.

Facebook's own policies do not allow access to a deceased individual's account, even if the requesters are family members or the deceased was a child. The only options are to leave the Facebook account as is, memorialize it, or remove it. The BGH ruling will likely force Facebook to reevaluate its current policy on deceased users, which provides the perfect opportunity to adapt its policies to better protect user privacy.

While the BGH’s decision does not directly affect those outside Germany, Facebook’s reaction to the decision, including any policy changes, could apply to the rest of Europe and the United States as well. However, Facebook has previously resisted broadly applying EU privacy protections to its users who do not reside in the EU. It could very well maintain separate positions on accessing deceased users’ accounts as well.

The prevailing standard in the United States is that third-party communications are protected under the Stored Communications Act, 18 U.S.C. §§ 2701-2712. This broad privacy protection is not only important for third parties, but also for online services themselves. Platforms such as Facebook can simply refuse to disclose, except under limited circumstances, and cite the shield of the Stored Communications Act.

A possible alternative, in addition to drawing a distinction for minors, that would still comply with the Stored Communications Act and ameliorate the problem of the Facebook graveyard would be allowing users to include their social media accounts in their wills. Since over 10,000 Facebook users die every day, there is a pressing question of what to do with this ever-increasing digital graveyard of accounts filled with personal information. Delaware adopted a law for fiduciary access to digital assets and digital accounts in 2014. Under this law, an individual can list social media access in his or her will, despite sites like Facebook not allowing such a transfer. There are already services to hand over social media account access after the user's death. Furthermore, courts have held that users can consent to the disclosure of their online communications in cases such as In re Facebook, 923 F. Supp. 2d 1204 (N.D. Cal. 2012), and Ajemian v. Yahoo!, Inc., 84 N.E.3d 766 (Mass. 2017).

The seemingly impenetrable wall between Facebook accounts and the outside world has already been breached. Facebook divulges account information and private messages to government officials with a warrant. Facebook's private data is subject to discovery requests in litigation. Providing for access in a user's will would be another step, in compliance with the law, that would allow users to exercise their own discretion and also forewarn third parties that their communications might be shared. While the exact privacy rights of children are trickier, following the BGH's ruling, Facebook should craft a new policy that best meets the interests of the dead, the living, and privacy.

Michael Goodyear is a former ISLAT member and is currently a rising 2L at the University of Michigan Law School, where he is the President of Michigan’s Privacy and Technology Law Association.

Countdown to Health Care Privacy Compliance; GDPR Minus One Day

By Joan M. LeBow and Clayton W. Sutherland

As we hurtle toward our deadline of May 25, 2018 for implementation of the European Union's General Data Protection Regulation (GDPR), health care providers are quickly assessing gaps in their understanding of what the GDPR requires. A key area of concern is how the GDPR's requirements compare to existing requirements under HITECH/HIPAA and FTC rules.

Elements of Consent and Article 7

Consent in the GDPR can be made easier to understand by breaking the definition down into its principal elements and correlating them with the obligations found in the GDPR. The Article 4 definition can be divided into four parts: consent must be freely given, specific, informed, and include an unambiguous indication of affirmative consent. We will address each element in separate posts, starting with "freely given."

“Freely Given” Element

"Freely given," under the GDPR definition, is focused on protecting individuals from an imbalance of power between them and data controllers. Accordingly, the Article 29 Working Party (WP29)—the current data protection advisory board created by the Data Protection Directive—has issued guidance for interpreting when consent is freely given. Per this guidance, consent is only valid if: the data subject is able to exercise a real choice; there is no risk of deception, intimidation, or coercion; and there will be no significant negative consequences if the data subject elects not to consent.[i] Consequently, for organizations to be compliant, consent must be as easy to withdraw as it is to grant. Additionally, GDPR recital 43 states that the controller needs to demonstrate that it is possible to refuse or withdraw consent without detriment.[ii]

Controllers (who determine the purposes for data processing and how data processing occurs[iii]) bear the burden of proving that withdrawing consent does not lead to any costs for the data subject, and thus no clear disadvantage for those withdrawing consent. As a general rule, if consent is withdrawn, all data processing operations that were based on consent and took place before the withdrawal of consent—and in accordance with the GDPR—remain lawful. However, the controller must stop future processing actions. If there is no other lawful basis justifying the processing (e.g., further storage) of the data, it should be deleted or anonymized by the controller.[iv] Furthermore, GDPR recital 43 clarifies that if the consent process does not allow data subjects to give separate consent for distinct personal data processing operations (granularity), consent is not freely given.[v] Thus, if the controller has bundled multiple processing purposes together and has not attempted to seek separate consent for each purpose, there is a lack of freedom, and the specificity element comes into question. Article 7(4)'s conditionality provision, according to WP29 guidance, is crucial to determining the "freely given" element.[vi]
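To make the withdrawal mechanics concrete, here is a minimal Python sketch of how a controller's record-keeping might honor these rules. It is an illustration only; the class and function names are invented for this example and are not drawn from any real compliance library.

```python
from datetime import datetime, timezone

class ConsentRecord:
    """One data subject's consent for one specific processing purpose (granularity)."""

    def __init__(self, subject_id: str, purpose: str):
        self.subject_id = subject_id
        self.purpose = purpose
        self.granted_at = datetime.now(timezone.utc)
        self.withdrawn_at = None

    def withdraw(self) -> None:
        # Withdrawal must be as easy as granting: one call, no penalty attached.
        self.withdrawn_at = datetime.now(timezone.utc)

    def processing_is_lawful(self, processed_at: datetime,
                             other_lawful_basis: bool = False) -> bool:
        # Processing that happened before withdrawal remains lawful; processing
        # after withdrawal is lawful only under some other legal basis.
        if self.withdrawn_at is None or processed_at < self.withdrawn_at:
            return True
        return other_lawful_basis

def on_withdrawal(record: ConsentRecord, other_lawful_basis: bool) -> str:
    record.withdraw()
    if other_lawful_basis:
        return "stop consent-based processing; continue only under the other basis"
    # No other lawful basis (e.g., no duty to retain): delete or anonymize.
    return "stop processing and delete or anonymize the data"
```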

GDPR vs. HIPAA/HITECH and FTC, Part 2

GDPR: "Freely given," under the GDPR definition, is focused on protecting individuals from an imbalance of power between themselves and data controllers.

HIPAA/HITECH: The limitations on health data use and the authorization requirements help ensure the privacy of patients and protect their right to limit how their data is used. This protection has various applications, including how data is used for marketing purposes as well as when or if data can be sold.

FTC: The FTC protects consumers from the imbalance of power between themselves and businesses providing services. It protects consumers, generally, through its FTC Act § 5 powers.

GDPR: A service may involve multiple processing operations for more than one purpose. In such cases, data subjects should be free to choose which purposes they accept, rather than having to consent to a bundle of processing purposes. Consent is not considered to be free if the data subject is unable to refuse or withdraw his or her consent without detriment; examples of detriment are deception, intimidation, coercion, or significant negative consequences if the data subject does not consent. Article 7(4) of the GDPR indicates that, among other things, the practice of "bundling" consent with acceptance of terms or conditions, or "tying" the provision of a contract or a service to a consent request for processing personal data not necessary for the performance of that contract or service, is considered highly undesirable. When such practices occur, consent is presumed not to be freely given.

HIPAA/HITECH: An Authorization must include a description of each purpose of the requested use or disclosure of protected health information. A covered entity may not condition the provision of treatment, payment, enrollment in a health plan, or benefit eligibility on obtaining an authorization, unless the situation falls under one of the three enumerated exceptions, which concern psychotherapy notes, marketing, and the sale of Protected Health Information. Under HIPAA/HITECH, bundling authorizations with other documents, such as consent for treatment, is generally prohibited; however, there are three circumstances in which authorizations can be compounded to cover multiple documents or authorizations.

FTC: Unfair and deceptive business practices include: deceiving or misleading customers about participation in a privacy program; failing to honor consumer privacy choices; unfair or unreasonable data security practices; and failing to obtain consent when tracking consumer locations. Under the Children's Online Privacy Protection Rule ("COPPA"), a website or online service that is directed to children under 13 cannot collect personal information about them without parental consent.

GDPR: The right to withdraw consent must be as easy to exercise as the procedure for granting consent. As a general rule, if consent is withdrawn, all data processing operations that were based on consent and took place before the withdrawal of consent—and in accordance with the GDPR—remain lawful; however, the controller must stop future processing actions. If there is no other lawful basis justifying the processing (e.g., further storage) of the data, it should be deleted or anonymized by the controller. GDPR recital 43 states the controller needs to demonstrate that it is possible to refuse or withdraw consent without detriment.

HIPAA/HITECH: The right to revoke an Authorization is similar to the GDPR right to withdraw consent. The covered entity, like the controller, has the responsibility of informing data subjects of that right. The revocation must be in writing, and is not effective until the covered entity receives it. In addition, a written revocation is not effective with respect to actions a covered entity took in reliance on a valid Authorization, or if provision of a contract or service was conditioned on obtaining the authorization. The Privacy Rule requires that the Authorization clearly state the individual's right to revoke; the process for revocation must either be set forth clearly on the Authorization itself, or, if the covered entity creates the Authorization and its Notice of Privacy Practices contains a clear description of the process, the Authorization can reference the Notice of Privacy Practices.

FTC: According to better business practices promulgated by the FTC, companies should provide key information as clearly as possible, not embedded within blanket agreements like a privacy policy, terms of use, or even the HIPAA authorization itself. For example, if a consumer is providing health information only to her doctor, she should not be required to click on a "patient authorization" link to learn that it will also be viewable by the public. And a provider should not promise to keep information confidential in large, boldface type, but then ask the consumer in a much less prominent manner to sign an authorization that says the information will be shared. Further, health care providers should evaluate the size, color, and graphics of all of their disclosure statements to ensure they are clear and conspicuous.

[i] Working Party 29. "Guidelines for Consent under Regulation 2016/679." Working Party 29 Newsroom, Regulation 2016/679 Guidance, November 28, 2017, 7-9, 30.

[ii] European Parliament, and European Council. "REGULATION (EU) 2016/679 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation)." Official Journal of the European Union, Legislation, 119/8 (May 4, 2016). [Hereinafter GDPR Publication].

[iii] See id. at 119/33 for Art. 4(7).

[iv] Working Party 29. "Guidelines for Consent under Regulation 2016/679." Working Party 29 Newsroom, Regulation 2016/679 Guidance, November 28, 2017, 21, 30.

[v] See GDPR Publication at 119/8.

[vi] Working Party 29. “Guidelines for Consent under Regulation 2016/679.” Working Party 29 Newsroom, Regulation 2016/679 Guidance, November 28, 2017, 10, 30.

Joan M. LeBow is the Healthcare Regulatory and Technology Practice Chair in the Chicago office of Quintairos, Prieto, Wood & Boyer, P.A. Clayton W. Sutherland is a Class of 2018 graduate of the IIT Chicago-Kent College of Law.

Countdown to Health Care Privacy Compliance; GDPR Minus Eight Days

By Joan M. LeBow and Clayton W. Sutherland

Are you a US healthcare provider with concerns about data privacy, a patient, or a reporter or policymaker trying to understand the changing healthcare privacy landscape? If so, our blog series will help you sort through the essential question of the GDPR's relevance to you.

The European Council and European Parliament passed Regulation 2016/679, better known as the General Data Protection Regulation (GDPR), to repeal and replace Directive 95/46/EC, known as the Data Protection Directive (DPD). The new regulation creates a single set of privacy protection rules to be implemented in Member States and complied with by participants in the digital information market. Data processing under the GDPR is based on seven core principles: accountability; lawfulness, fairness, and transparency; purpose limitation; data minimization; accuracy; storage limitation; and integrity and confidentiality.[i] These principles provide the foundation for the GDPR and its various compliance requirements. The GDPR applies to processors and controllers of data, as did the DPD. For clarification, the controller decides how and why data is collected and processed, while the processor acts on the controller's behalf.

The broadened scope of the GDPR is laid out in Article 3. The regulation applies to all companies processing the personal data of individuals in the EU, regardless of the company's location or where the processing takes place.[ii] Specifically, Article 3 § 2 extends the GDPR to entities established outside the EU that conduct certain data processing activities: under § 2(a), if you offer goods or services to data subjects in the EU, or, under § 2(b), if you monitor a data subject's behavior that occurs in the EU, the GDPR will apply.[iii]

Under the GDPR, "processing" is broadly defined. It should be understood as a set of operations—automated or not—that includes data collection, storage, use, consultation, and disclosure by transmission, among other activities.[iv] For example, a company's medical app that transmits data concerning EU residents to doctors in the US for consultative services would be subject to the GDPR; for the US consultant, the transmission of the data is the prong that triggers application. Moreover, the GDPR applies when a company operates a website that meets Art. 3 § 2 by offering goods and services to, or monitoring the behavior of, data subjects in the EU.
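As a rough, non-authoritative illustration of the Article 3 tests just described (territorial scope ultimately turns on legal analysis, not code), the decision logic can be caricatured in a few lines of Python; the parameter names are invented for the example.

```python
def gdpr_applies(established_in_eu: bool,
                 offers_goods_or_services_in_eu: bool,
                 monitors_behavior_in_eu: bool) -> bool:
    """Caricature of GDPR Art. 3: Sec. 1 reaches EU-established entities;
    Sec. 2(a) reaches offers of goods or services to data subjects in the EU;
    Sec. 2(b) reaches monitoring of behavior that occurs in the EU."""
    if established_in_eu:
        return True
    return offers_goods_or_services_in_eu or monitors_behavior_in_eu

# The medical-app example from the text: a US consultant receiving data
# about EU residents as part of a consultative service offered to them.
print(gdpr_applies(established_in_eu=False,
                   offers_goods_or_services_in_eu=True,
                   monitors_behavior_in_eu=False))  # True
```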

The GDPR's data privacy and security obligations, requirements, and rights are closing in fast on providers in the US. The GDPR goes into effect on May 25, 2018. In the health care arena, US companies must comply with both the GDPR and existing US data security standards. Our blog series will assist with this reconciliation and normalization process for compliance officers and counsel trying to make sense of these overlapping frameworks.

We will start this series by introducing Article 6 and reviewing consent under the GDPR as a lawful basis for processing data. Next, we will analyze the GDPR's definition of consent to help understand its four primary elements and the conditions for consent found in Article 7. Then we will proceed to Article 9, discussing the five justifications most relevant to health and medical industry participants that want to process special categories of data, and how those justifications relate to current compliance requirements in the US.

Consent and Article 6

Under the GDPR, data processing is only lawful if and when it falls under one of the six enumerated justifications in Article 6, including consent, performance of a contract, and satisfying legal obligations. We will primarily focus on consent and relevant sections in this review.

Consent is at the core of the GDPR and is an expected focus of enforcement. Article 6(1)(a) states that data processing, when relying on consent, is lawful only if and to the extent that the data subject has given consent to the processing of their data for one or more specific purposes. Thus, obtaining valid consent is always preceded by the determination of a specific, explicit, and legitimate purpose for the intended processing activity. Generally, consent can only be an appropriate lawful basis if a data subject is offered control and a genuine choice with regard to accepting or declining (without detriment/retaliation) the terms offered.

In the table below, we compare and contrast current regimes in the US regarding consent requirements and the GDPR requirements most relevant to the healthcare industry.

GDPR vs. HIPAA/HITECH & FTC

GDPR: Consent is not presumed to be given; it must be actual consent. Generally, consent is only an appropriate lawful basis if a data subject is offered control and a genuine choice with regard to accepting or declining (without detriment/retaliation) the terms offered.

HIPAA/HITECH: HIPAA/HITECH presumes consent to uses and disclosures for treatment, payment, and health care operations in the absence of a patient's instructions to the contrary, if the provider complies with regulatory requirements. The Privacy Rule permits, but does not require, a covered entity voluntarily to obtain patient consent for uses and disclosures of protected health information for treatment, payment, and health care operations. The Privacy Rule requires explicit consent for various uses and disclosures, including research, marketing, and solicitation.

FTC: FTC enforcement of consent requirements (regarding health information) generally applies to ancillary providers and specific categories of clinical records not covered by HIPAA/HITECH. Some circumstances call for shared jurisdiction with other agencies. In addition to the general consumer protection power enumerated in the FTC Act, the FTC has specific enforcement jurisdiction over particular laws that feature consent obligations, including COPPA.

GDPR: Data processing, when relying on consent, is only lawful if and to the extent that the data subject has given consent to the processing of their data for one or more specific purposes. Thus, obtaining valid consent is always preceded by the determination of a specific, explicit, and legitimate purpose for the intended processing activity.

HIPAA/HITECH: By contrast, an authorization is required by the Privacy Rule for uses and disclosures of protected health information not otherwise allowed by the Rule. An authorization is a detailed document that gives covered entities permission to use protected health information for specified purposes, including research, marketing, and solicitation.

FTC: FTC jurisdiction over health information includes: medical billing companies that collect consumers' personal medical information without their consent; medical transcription companies that outsourced services without making sure the contractor could reasonably implement appropriate security measures; medical billing and revenue management companies that allowed employees who did not need consumer information for their jobs to access it; and apps that are medical devices and could pose a risk to patient safety if they do not work properly.

GDPR: Member states have freedom to make laws, usually ones relating to special categories, more stringent than the general consent requirements in the GDPR.

HIPAA/HITECH: Under state law, consent is required by most states for constituencies such as minors and HIV and AIDS patients. Under federal law, a complex consent process attaches to select kinds of substance abuse treatment. All such consent requirements preempt HIPAA/HITECH under the applicable state laws.

FTC: Before collecting, using, or disclosing personal information from a minor, a service must get a parent's "verifiable consent." Consent must be obtained through a technological medium that is reasonable given the available technology.

[i] See Commission Regulation 2016/679 of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC). 2016 (L 119) 35, 36 [hereinafter General Data Protection Regulation].

[ii] See General Data Protection Regulation at 32-33.

[iii] See id. at 33.

[iv] See id. at 33 (Definition (2)).

Joan M. LeBow is the Healthcare Regulatory and Technology Practice Chair in the Chicago office of Quintairos, Prieto, Wood & Boyer, P.A. Clayton W. Sutherland is a Class of 2018 graduate of the IIT Chicago-Kent College of Law.

Spring Cleaning

By Raymond Fang

Quick, take a guess—how many times do you think you touch your cell phone every day? 50? 100? 200? Wrong. How about over 2,000? That’s right, according to a report from the research firm Dscout, the average American touches their cell phone at least 2,617 times a day. To get this number, the researchers recruited 94 Android users and installed an app on their phones that tracked “every tap, type, swipe and click,” 24 hours a day, for five days straight. Then they divided the total number of touches recorded by the app by the number of days and the number of users to get the average number of touches per person per day—2,617. If you consider the number of times you touch your phone every day in addition to tapping, typing, swiping, and clicking—to pick it up, to put it in your pocket, to check the time, to charge it, and so on—the actual number of touches is probably even higher than 2,617. But why does any of this matter?

To put it simply, cell phones are dirty. Very, very dirty. One study found that cell phones carry 10 times more bacteria than toilet seats. Though most of these bacteria are perfectly harmless because they originate from your skin and your natural skin oils, researchers have still found dangerous bacteria like streptococcus, MRSA, and E. coli on cell phones. Another study found that roughly one out of every six smartphones has traces of fecal matter on it. Yet another study found "between about 2,700 and 4,200 units of coliform bacteria," an indicator of fecal contamination, on eight randomly tested cell phones. For comparison, it is recommended that drinking water contain less than one unit of coliform bacteria per milliliter. Much of this bacteria accumulates either when you touch something dirty with your hands and then touch your phone (such as when you take out the trash and then use your phone without washing your hands), or when you expose your phone to a dirty environment (such as when you bring your phone into the bathroom with you, since flushing the toilet releases germs into the nearby environment).

So, if your cell phone is potentially harboring all sorts of nasty bacteria, what can you do about it? While some companies sell $60 UV-light-emitting devices that claim to kill 99% of the bacteria on your phone, the best and most economical solution is probably to wash your hands several times a day, leave your phone out of the bathroom, and wipe down your phone with a moist microfiber cloth daily. If you're really committed to sanitizing your phone, you can also mix equal parts water and 70% isopropyl alcohol, spray the mixture onto a microfiber cloth, and wipe down your phone with the dampened cloth every week. This method is effective at eliminating more dangerous and enduring bacteria, like "clostridium difficile (which can cause diarrhea or even inflammation of the colon) and flu viruses," that will not yield to a microfiber cloth moistened only with water. Although Apple's website warns against using "window cleaners, household cleaners, compressed air, aerosol sprays, solvents, ammonia, or abrasives" to clean your phone, researchers found that the isopropyl alcohol mixture is necessary to eliminate the more pesky and dangerous bacteria. While it may be yet another chore, regular phone cleaning can provide peace of mind and help prevent the spread of germs and disease. Happy spring cleaning!

Raymond Fang, who has a B.A. in Anthropology and the History, Philosophy, & Social Studies of Science and Medicine from the University of Chicago, is a member of the ISLAT team.

Android’s Watching You. Now You Can Watch Back.

By Raymond Fang

On November 24, 2017, Yale Law School's Privacy Lab announced the results of their study of 25 common trackers hidden in Google Play apps. The study, conducted in partnership with Exodus Privacy, a French non-profit digital privacy research group, examined over 300 Android apps, analyzing each app's permissions, trackers, and transmissions. Exodus Privacy built the software that extracts the permissions, trackers, and transmissions from the apps, and Yale's Privacy Lab studied the results. The authors found that more than 75% of the apps they studied installed trackers on the user's device, primarily for purposes of "targeted advertising, behavioral analytics, and location tracking." Yale's Privacy Lab has made the 25 studied tracker profiles available online, and Exodus Privacy has made the code for its free, open-source privacy auditing software available online as well.

The Exodus Privacy platform currently lacks an accessible user interface, so the average person cannot use the program to test apps of their choosing. The Exodus Privacy website does contain a video tutorial on how to "Try it [Exodus Privacy] at home," but the tutorial requires the user to run the privacy auditing software from code (possibly the code available on Github), which demands some knowledge of computer science. Instead, the average person must rely on the reports generated on Exodus Privacy's website. Exodus Privacy's software automatically crawls through Google Play to update tracker and permission data for all the apps in its database, and the database is constantly adding more apps.
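Exodus identifies trackers by matching code signatures, such as Java package-name prefixes, found inside an app against its tracker database. The Python sketch below is a toy version of that idea, assuming a small hard-coded signature list; the prefixes and the scanned class names here are illustrative, not Exodus's actual database or code.

```python
# Toy signature-based tracker scan, loosely modeled on Exodus Privacy's approach:
# compare package/class names found in an app against known tracker signatures.
TRACKER_SIGNATURES = {
    "DoubleClick": "com.google.android.gms.ads.doubleclick",
    "Flurry": "com.flurry",
    "ComScore": "com.comscore",
}

def find_trackers(app_class_names):
    """Return the trackers whose signature prefix appears among the app's classes."""
    found = set()
    for class_name in app_class_names:
        for tracker, prefix in TRACKER_SIGNATURES.items():
            if class_name.startswith(prefix):
                found.add(tracker)
    return sorted(found)

# Hypothetical class list extracted from an app package:
classes = ["com.example.app.MainActivity", "com.flurry.sdk.FlurryAgent"]
print(find_trackers(classes))  # ['Flurry']
```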

As of December 4, 2017, the Exodus Privacy website has generated reports on 511 apps. These reports yield interesting information about how some very popular apps track your personal information for advertising purposes. Snapchat (500,000,000+ downloads), for example, contains an advertising tracker from the data aggregator company DoubleClick. Spotify Music (100,000,000+ downloads) contains advertising trackers from DoubleClick, Flurry, and ComScore. Exodus Privacy's reports make it hard to tell exactly what data about your social media usage and music preferences these trackers collect, saying only that the trackers gather "data about you or your usages." DoubleClick's privacy policy, however, states that it collects "your web request, IP address, browser type, browser language, the date and time of your request, and one or more cookies that may uniquely identify your browser," "your device model, browser type, or sensors in your device like the accelerometer," and "precise location from your mobile device." If cookies are not available, as on mobile devices, the privacy policy states that DoubleClick will use "technologies that perform similar functions to cookies," tracking what you look at and for how long. You may well want to keep some of this information private for various reasons; the widespread use of these advertising trackers in Android apps, however, means that data about your social media content and music preferences can easily be sold to advertisers and exposed.

Beyond the tracking done on social media and music apps, Exodus Privacy's reports show that some health and dating apps also collect and sell your intimate and personal data. Spot On Period, Birth Control, & Cycle Tracker (100,000+ downloads), Planned Parenthood's sexual and reproductive health app, contains advertising trackers from AppsFlyer, Flurry, and DoubleClick. If you were pregnant, trying to conceive, or even just sexually active, data aggregator companies could conceivably sell that information to advertisers, who might then send you related advertisements. If someone were borrowing your computer or looking over your shoulder, they might see those ads and figure out that you were pregnant, trying to conceive, or sexually active. Such accidental exposure could cause you emotional harm if you were not ready or willing to share that private information with others. Grindr (10,000,000+ downloads), the popular dating app for gay and bisexual men, has advertising trackers from DoubleClick and MoPub. If advertisements reflecting your sexuality started popping up whenever you used the Internet, they might accidentally reveal your sexuality before you were ready to tell certain people, which could cause a great deal of emotional distress.

There is clearly cause for concern when it comes to Android apps’ tracking and selling your personal information. Unfortunately, selling user data to advertisers is a very lucrative and reliable way for tech companies to monetize their services and turn a profit, so it’s hard to envision an alternative system where all of your personal data would be protected from commodification. However difficult it may now be to imagine a world where your privacy is adequately protected in the digital space, it will be up to privacy-conscious consumers, researchers, scholars, lawyers, and policymakers to make that world a reality.

Raymond Fang, who has a B.A. in Anthropology and the History, Philosophy, & Social Studies of Science and Medicine from the University of Chicago, is a member of the ISLAT team.

Blockchain: Web 3.0 or Web 3.No?

By Debbie Ginsberg

Welcome to the brave new world of blockchain. Some say it’s the future lifeblood of the internet and commerce. It will provide the foundation of the most robust information security system ever created. It will allow access to economic tools currently unavailable to billions. You may have seen many articles on blockchain recently. Maybe you’ve never heard of blockchain. Or maybe all you’ve heard about it is the hype.

But what’s a blockchain? The short explanation: It’s a network-based tool for storing information securely and permanently. The information in a blockchain can be authenticated by members of the public, but the information can be accessed only by those who have permission.

Blockchains can take any information—from simple ledgers to complex contracts—and store it in online containers called "blocks." Each block's contents are then run through a cryptographic hash function, which condenses the information into a unique series of letters and numbers called the "hash." The underlying information can also be encrypted with a key, so that only users who have the key can read it.

The blocks are then linked together. Each block's information includes the hash of the previous block in the chain, along with a time stamp. For example, a hash might look like this: 00002f5d5500aae9046ff80cccefa1b3. The tools that create hashes perform sophisticated cryptographic calculations so that each hash in a particular chain will start with a set of standard characters, such as 0000.
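As a minimal sketch of the idea (real blockchains do the same thing with far more computation), here is how a block hash with those leading zeros can be produced in Python, using SHA-256 and a brute-forced nonce:

```python
import hashlib
import time

def mine_block(data: str, previous_hash: str, difficulty: int = 4) -> dict:
    """Search for a nonce so the block's SHA-256 hash starts with `difficulty` zeros."""
    timestamp = time.time()
    nonce = 0
    while True:
        payload = f"{previous_hash}{timestamp}{data}{nonce}".encode()
        block_hash = hashlib.sha256(payload).hexdigest()
        if block_hash.startswith("0" * difficulty):
            return {"data": data, "previous_hash": previous_hash,
                    "timestamp": timestamp, "nonce": nonce, "hash": block_hash}
        nonce += 1

genesis = mine_block("ledger entry #1", previous_hash="0" * 64)
print(genesis["hash"])  # e.g., 0000c1a9... — four leading zeros, as required
```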

How does this keep information secure? The blocks are "decentralized," meaning that different blocks are stored on different computers, creating a distributed network of information. This network is public, so members of the public can see the chain and read the hashes.

Changing the information in any block changes its hash. That change then propagates up the chain, altering the hashes of every later block as well. The hashes in blocks further up the chain will no longer start with the standard characters (for example, they won't start with 0000), and the time stamp will have changed. That means anyone can see that data in the chain has been compromised.

Some blockchains are single chains, but many blockchains work by distributing copies of the whole chain in the decentralized network. If the copies don’t agree with one another, the blockchain’s users will elect to accept only those chains that match, and will discard any compromised chains.
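To see why tampering is detectable, consider this self-contained Python sketch (time stamps and the leading-zero requirement are omitted for brevity): editing an early block breaks the link to every block after it.

```python
import hashlib

def block_hash(data: str, previous_hash: str) -> str:
    return hashlib.sha256(f"{previous_hash}{data}".encode()).hexdigest()

# Build a three-block chain; each block stores its predecessor's hash.
chain, prev = [], "0" * 64
for data in ["entry 1", "entry 2", "entry 3"]:
    h = block_hash(data, prev)
    chain.append({"data": data, "previous_hash": prev, "hash": h})
    prev = h

def chain_is_valid(chain) -> bool:
    """Re-hash every block and confirm each link points at the block before it."""
    prev = "0" * 64
    for block in chain:
        if block["previous_hash"] != prev or block["hash"] != block_hash(block["data"], prev):
            return False
        prev = block["hash"]
    return True

print(chain_is_valid(chain))   # True
chain[0]["data"] = "tampered"  # alter an early block...
print(chain_is_valid(chain))   # False — the recomputed hashes no longer match
```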


Foundations: Bitcoin

If you’ve heard of blockchain, it’s probably in relation to Bitcoin, an online currency that is recorded in a blockchain. Bitcoin isn’t issued by a government or bank; instead, it is created through sophisticated mathematical algorithms and distributed over a large network.

Bitcoin’s popularity stems from two features not available in most other monetary transactions. First, no intermediary, such as a bank or PayPal, is needed. These intermediaries often take a hefty fee, particularly in international transactions. Users can transfer money directly to each other, and the parties don’t need to trust each other. By using Bitcoin’s blockchain, the parties know their transaction is secure. Second, no copies of the funds are made—as happens in many online transactions—so the funds cannot be “double spent.” The records in the chain containing Bitcoin funds simply point to a new (anonymous) owner when a transaction is made.

While many praise Bitcoin’s anonymity, this trait has given the online currency a somewhat shady reputation. Many ransomware viruses demand that payments be made in Bitcoin. Often, users affected by these viruses don’t know what Bitcoin is, let alone where to buy it. The currency is sold in special online exchanges.

Who Is Using Blockchain?

The financial industry has been investing in blockchain. Some of this investment has been outside the mainstream financial sector. For example, there are now several hundred Bitcoin-type currencies known as cryptocurrencies. A few of these, such as Ethereum and Ripple.com, have been gaining ground on Bitcoin. They may eventually take over a significant part of the cryptocurrency market.

Major financial companies such as JP Morgan Chase are investing in their own blockchain-based applications. However, these applications will likely work somewhat differently than cryptocurrencies. Bitcoin and other online currencies use public blockchains, meaning that some information about the chain can be viewed by anyone. Visitors to Blockchain.info may access any block on the Bitcoin public blockchain. However, information about who owns the currency and how to access it is not public.

Instead, large financial companies are investing in private blockchains. Companies have full control over these blockchains because they are not distributed publicly. The companies themselves control the blockchain network. However, private blockchains might be more vulnerable to hackers because they aren’t distributed as widely as the public chains.

In addition, blockchain now plays a role in distributing intellectual property. For example, Resonate.is uses a blockchain system to manage a music cooperative. Similarly, DotBlockchainMusic.com is using blockchain to create a media file platform that embeds digital rights management. Even Walmart is experimenting with blockchain to better track products from farms and factories to shelves.

Uses in Law

Just as artificial intelligence (AI) has already affected how legal work is done, blockchain also offers several ways to automate and outsource legal processes. Smart contracts have generated the most discussion. These contracts are coded into blockchains and make contract execution work more smoothly.

First, there is only one copy of the contract and all parties have access to it. The contract is completely transparent, and the terms of the contract are coded into the blockchain. It is therefore impossible to create fraudulent or inaccurate copies of the contract because the terms can’t be changed without the agreement of all parties to the contract.

Second, the smart contract can be configured to be self-executing. That is, verifiable events trigger the next stage of the contract. Proof of those events can be added to the chain. For example, Widgette Co. agrees to sell Acme Co. 100 widgets for $1,000 and ship them one week after payment. When Widgette Co. produces the 100 widgets, its system adds this information to the blockchain with a time stamp. Acme Co.'s system pays $1,000 and adds that information to the chain, also with a time stamp. Widgette Co. then ships the widgets, and that time-stamped information is added. Finally, Acme Co. records when it receives the widgets.

The blockchain can even include a dispute resolution mechanism. If a problem arises—such as Acme Co. claiming that Widgette Co. shipped the widgets after two weeks instead of one—the claim can be verified by reviewing the information in the blockchain. The chain can then arbitrate the dispute based on preset terms; for example, Acme Co. automatically receives a 1 percent refund for each additional week that shipping is delayed.
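A production smart contract would be written in a blockchain language such as Solidity, but the preset-terms logic of the Widgette/Acme example can be sketched in ordinary Python. The event dates and the one-week shipping window below come from the example above; the function itself is illustrative, not a real contract engine.

```python
from datetime import datetime, timedelta

# Time-stamped events recorded on the (simplified) chain for the example deal:
events = {
    "payment":  datetime(2017, 6, 1),
    "shipment": datetime(2017, 6, 15),  # shipped two weeks after payment
}

def shipping_refund_pct(events: dict, grace: timedelta = timedelta(weeks=1),
                        pct_per_week: int = 1) -> int:
    """Preset dispute term: 1% refund per additional week of delay past the
    agreed one-week shipping window."""
    delay = events["shipment"] - (events["payment"] + grace)
    if delay <= timedelta(0):
        return 0
    return (delay.days // 7) * pct_per_week

print(shipping_refund_pct(events))  # 1 -> Acme automatically receives a 1% refund
```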

The legal possibilities are not limited to contracts. Lawyers have been considering smart wills that can execute themselves, thereby avoiding probate. Blockchain could also be used in real estate transactions to help avoid using third parties in each transaction. For example, a blockchain real estate transaction wouldn’t need the services of an escrow firm.

Governments Respond

Governments have started to take notice of the possibilities that blockchain offers. Arizona's governor recently signed an amendment to Title 44, Chapter 26. This amendment allows use of blockchain technology in the state, declaring that "[a] signature that is secured through blockchain technology is considered to be in an electronic form and to be an electronic signature" and "[a] record or contract that is secured through blockchain technology is considered to be in an electronic form and to be an electronic record." Vermont is also working on a bill to allow the use of blockchain technology. Other governments and organizations, including the European Union, are investigating blockchain's possibilities. The Republic of Georgia uses blockchain to secure government transactions involving property. Other governments are considering following suit, including those in Sweden, Honduras, and Cook County, Illinois.

Educational Blockchains

Are blockchains useful only for financial transactions? Absolutely not. Educational institutions are considering putting transcripts and graduation credentials on blockchains. This would permit alumni to easily access their own information and verify its authenticity.

It would also help those students who enroll in classes at different universities to pull their information together into a single source. The Massachusetts Institute of Technology already offers blockchain-based certificates for some programs.

Roadblocks and Possibilities

Despite the many possibilities blockchain offers, it must overcome several issues before it can be widely implemented. Systems and regulations are already in place for many of the problems that blockchain could solve. For example, there has been discussion of using blockchain for health records, yet the current regulatory environment for those records would make creating a new system difficult.

Blockchains are also not easy to implement. Setting up a blockchain requires sophisticated technology skills. Lawyers—particularly lawyers working with self-executing contracts—would need to work with coders to create them.

That said, the use of blockchain will most likely continue to grow, particularly to solve problems involving security and authentication. One area that offers great possibility is using blockchains to create secure online identities that could be used to access online services and password-protected websites. New approaches are needed to prevent ID and data theft, and the blockchain may be just the tool for the job.

This article was originally published in the September/October 2017 [Volume 22, Number 1] issue of AALL Spectrum.

Debbie Ginsberg is the Educational Technology Librarian at the Chicago-Kent College of Law Library.

Hate Speech, Free Speech, and the Internet

By Raymond Fang

In the wake of the August 12, 2017 white supremacist terrorist attack in Charlottesville, Virginia that killed one person and injured 19 others, how are Internet platforms handling racist, sexist, and other offensive content posted on their servers and websites? What are the legal ramifications of their actions?

According to a July 2017 Pew Research Center report, 79% of Americans believe online services have a responsibility to step in when harassing behavior occurs. If white supremacist content counts as a form of harassment, then online platforms certainly took up this call in the week following the Charlottesville attack, when a wave of companies banned white supremacist sites and users from their services.

White supremacists have reacted to these bans and other anti-white-supremacy movements by casting themselves as an oppressed group, supposedly denied free speech, and fearful to speak their minds on so-called intolerant, overly-PC liberal college campuses lest they be attacked and belittled. (Never mind the fact that people of color, women, immigrants, LGBTQ individuals, poor people, people with disabilities, and other marginalized groups have faced and continue to face serious and real discrimination every day).

Somewhat unsurprisingly, the Pew Research Center report finds stark gender differences in opinions about the balance between protecting the ability to speak freely online and the importance of making people feel welcome and safe in digital spaces. 64% of men ages 18-29 believe protecting free speech is imperative, while 57% of women ages 18-29 believe the ability to feel safe and welcomed is most important. Unfortunately, the report does not contain any data about racial differences on the speech v. safety question, nor does it have cross-tabbed data on race and gender together (e.g., black women, white men, Hispanic men).

Legally, digital media companies are allowed to ban people from their servers and services at their discretion, as First Amendment guarantees of free speech do not necessarily apply to private companies and their own terms of service. There are dangerous implications to this standard. As CloudFlare’s CEO, Matthew Prince, wrote in a company email about his decision to kick The Daily Stormer off their servers, “Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet. No one should have that power.” Prince later wrote a blog post on CloudFlare’s website where he discussed his decision, emphasized the importance of due process when decisions are made about speech, and called for the creation of stronger legal frameworks around digital content restrictions that are “clear, transparent, consistent and respectful of Due Process.” In other words, not all online speech deserves protection, but delineating which online speech does and doesn’t deserve protection should be a clear, transparent, and democratic process. Though white supremacists and neo-Nazis were the rightful target of Silicon Valley’s wrath this time, that may not be the case in the future – perhaps policymakers would do well to heed Prince’s call.

Raymond Fang, who has a B.A. in Anthropology and the History, Philosophy, & Social Studies of Science and Medicine from the University of Chicago, is a member of the ISLAT team.

Ransomware: Digital Hijacking in the 21st Century

By George Suh

Ransomware is gaining traction as one of the most significant cyber threats online. On May 12, 2017, the ransomware "WannaCry" began infecting PCs all over the world. WannaCry's impact was staggering: it infected over 300,000 computers across more than 150 countries. Ransomware is a type of malware that encrypts or locks your computer's data and files for ransom. Bitcoin is a very popular form of payment with cyber attackers because the money is anonymized, preventing the extortionists from being tracked by federal and international authorities. Moreover, there is no guarantee that paying the ransom will restore the infected user's access to their computer. Thus, if you do not create a backup of your data, paying the ransom can lead to a costly or futile outcome and leave potentially sensitive data in the hands of clandestine criminals.

Ransomware is not a new phenomenon. This type of malware was first reported in Russia and parts of Eastern Europe in 2005, and starting around 2012, its use grew exponentially. The rise in ransomware has proven to be a very lucrative black market enterprise for hackers, with the FBI estimating that another major ransomware, CryptoWall, generated at least $27 million from its victims. Even police departments were among CryptoWall's victims. In Swansea, Massachusetts, a police department's computer system became infected. Ultimately, the department paid the ransom of 2 Bitcoins (around $750 at the time) instead of attempting to recover the encrypted files on its own. Swansea Police Lt. Gregory Ryan told the Herald News that "CryptoWall is so complicated and successful that you have to buy these Bitcoins, which we had never heard of."

As recent ransomware events have shown, high-profile attacks are a growing trend in the cyber landscape. Businesses and organizations that maintain personally identifiable information should take into account the potential legal ramifications of failing to secure critical data:

  • Federal Trade Commission Enforcement. In a November 2016 blog entry, the FTC warned that "a business' failure to secure its networks from ransomware can cause significant harm to the consumers whose personal data is hacked.  And in some cases, a business' inability to maintain its day-to-day operations during a ransomware attack could deny people critical access to services like health care in the event of an emergency."  The FTC also highlighted that "a company's failure to update its systems and patch vulnerabilities known to be exploited by ransomware could violate Section 5 of the FTC Act."  When a data breach occurs, the FTC may also consider the accuracy of the security promises made to the consumer.  Under Section 5 of the FTC Act, the "unfair or deceptive acts or practices" doctrine gives the FTC the authority to pursue legal action against businesses and organizations that misrepresent the security measures used to protect sensitive data.
  • Breach Notification Requirements. In the U.S., 48 states, the District of Columbia, the U.S. Virgin Islands, Guam, and Puerto Rico have laws that require notification to affected individuals in the event of a breach. Some states also require notification to regulators. Federal laws, such as the Health Insurance Portability and Accountability Act ("HIPAA"), also have specific breach notification requirements.  Moreover, U.S. businesses and organizations that operate or sell products internationally may also be subject to stricter notification laws.  For example, beginning May 25, 2018, the EU's General Data Protection Regulation ("GDPR") will require notification to the supervisory authority "within 72 hours of first having become aware of the breach" (a simple sketch of this deadline computation follows this list).  Businesses or organizations that violate the GDPR can be fined up to a maximum of 4% of annual global turnover or €20 million, whichever is greater.
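As a trivial illustration of that 72-hour clock (the function below is our own sketch, not part of any statute or compliance library), the deadline runs from the moment the organization becomes aware of the breach:

```python
from datetime import datetime, timedelta, timezone

def gdpr_notification_deadline(became_aware: datetime) -> datetime:
    """GDPR Art. 33: breach notification to the supervisory authority is due
    within 72 hours of first becoming aware of the breach."""
    return became_aware + timedelta(hours=72)

aware = datetime(2018, 5, 28, 9, 30, tzinfo=timezone.utc)
print(gdpr_notification_deadline(aware))  # 2018-05-31 09:30:00+00:00
```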

Understanding the applicable breach notification laws can save a business or organization from significant legal and monetary complications.  The unfortunate reality is that ransomware may be the beginning of much more sophisticated and sinister malware attacks.  Therefore, businesses and organizations that maintain personal data should ensure they are complying with data privacy and cyber security laws.  With the high profitability and anonymity that ransomware provides for cyber criminals, there will certainly be more attacks in the future.

George Suh is a 3L at Chicago-Kent. He is the co-founder and current Vice President of Chicago-Kent’s Cyber Security and Data Privacy Society.

Facebook after Death

By Michael Goodyear

With nearly two billion users, Facebook is firmly entrenched in 21st century life. A person’s Facebook account serves as a digital doppelgänger: their thoughts, interests, pictures, friends, and memories are available in perpetuity. But what happens to the digital profile when its physical owner dies? With physical property, there is often a presumed heir. A Facebook account, even if it doesn’t have monetary value, can have significant emotional value—after all, it is a record of life, a sense of personality and who the deceased individual was. Should a parent, significant other, sibling, or someone else have access to a Facebook account after the owner has passed away?

No, said a Berlin court yesterday. A 15-year-old girl had been killed by a subway train back in 2012. Her parents wanted to look at her posts and read her chats to determine whether she had committed suicide. The parents had petitioned Facebook to grant them access to the account, but when Facebook refused, they went to the German courts.

Back in 2015, a regional court had ruled in favor of the parents, classifying Facebook messages and posts as similar to letters and diaries, which can be inherited. But the court of appeals instead looked to the privacy of those with whom the deceased girl had communicated. Granting her parents access to her account would compromise those other individuals’ constitutional right to privacy. The case could be appealed all the way to Germany’s Federal Court of Justice.

But for now, Facebook’s policies on a deceased person’s account are maintained. There are actually only three options for a deceased person’s Facebook account: 1) leaving it, 2) memorializing it, and 3) removing it. Facebook has a special form for a deceased person’s account, but this is only for memorializing or removing the account, not accessing it. Facebook’s policy is to not allow anyone other than the account user to log in to their account, including the family of the deceased.

But while Facebook does not turn over account access to family members, it does respond to requests from the government, including access to posts and messages if the government supplies a warrant. This means that there is a threshold where even privacy is outweighed by a greater goal.

Back in 2005, the family of a deceased marine, Justin Ellsworth, was granted access to his Yahoo email account after an Oakland County probate judge ordered Yahoo to grant it. Cybersecurity law experts Julie E. Cohen of Georgetown University Law Center and Henry H. Perritt, Jr., of Chicago-Kent College of Law argued that emails were like other types of information or property routinely accessed or transferred after someone's death, and that access should be granted to survivors.

But seven years later, in 2012, a California district court quashed a subpoena from a deceased individual's family members seeking the contents of her Facebook account. Sahar Daftary had died falling from the 12th floor of an apartment building in Manchester, England. As in the German case, her family wanted to know whether it was an unfortunate accident or suicide. The court upheld Facebook's policy, noting that the Stored Communications Act, 18 U.S.C. §§ 2701-2712, protected the contents of Daftary's Facebook account. The court did note that Facebook could turn over the contents voluntarily, but that would be unlikely given Facebook's policy on the accounts of deceased persons.

"I think it's a good idea for sites not to have a blanket policy to hand this stuff over to survivors. This information is private and you assume that it's private, you assume that your Facebook account is private, you assume that your email account is private," said Rebecca Jeschke of the Electronic Frontier Foundation.

While someone’s public posts on Facebook were intended for others’ eyes, their private messages were not. Even in the case of children, it is unlikely that they would want their parents reading their private messages when they were alive. Why should it be different now that they are deceased? In addition to the deceased individual themselves, the privacy of those with whom they communicated would also be at stake. The German appeals court’s decision supports a broader point: a loved one’s death is tragic, but even it should not trump the constitutional right to privacy.

Michael Goodyear, who has a BA in History and Near Eastern Languages and Civilizations from the University of Chicago, is part of the ISLAT team.

Alexa, Am I Violating Legal Ethics?

By Peggy Wojkowski

Thomson Reuters announced the release of Workplace Assistant, which allows attorneys to record billing entries, inquire about them, and run a timer to calculate them via Amazon Echo and other Alexa-enabled devices. The new Workplace Assistant interacts with the existing Elite 3E platform that law firms use to manage workflow and streamline tasks. Thomson Reuters indicates that Workplace Assistant "always works within the firm's security walls." Workplace Assistant does, however, interact with the Amazon environment, although Thomson Reuters considers the interaction "low touch," meaning very little interaction between Workplace Assistant and the Amazon environment. This minimal interaction beyond a firm's security walls could raise ethical concerns for the attorneys who use the Alexa-enabled aspects of Workplace Assistant.

Alexa-enabled voice assistants, such as the Amazon Echo and Amazon Dot, respond to voice requests from users. These devices stream or record the voice requests to servers, which process the requests and form responses. For Alexa-enabled Amazon products, the wake-up word, "Alexa," activates the voice assistant, which then responds to voice requests. To hear the wake-up word, the voice assistant's microphone must therefore be active even when a user is not actually making a request; that is, the device is listening even when it is not awake. When an Alexa-enabled product is used with the Workplace Assistant, the device is listening for the wake-up word inside the attorney's office. The Workplace Assistant matches voice requests regarding client billing with the client information in the Elite 3E platform, the law firm management software. Even if the Elite 3E platform ultimately handles any voice requests pertaining to billing, it is not clear who handles other voice requests or who has access to the microphone when Alexa is not awake. This scenario requires investigation in order to comply with the American Bar Association's Model Rules of Professional Conduct.

The American Bar Association's Model Rule 1.6, pertaining to confidentiality of information in an attorney-client relationship, indicates in part (c) that "a lawyer shall make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client" (emphasis added). Comment 18 to Model Rule 1.6 provides insight into what constitutes reasonable efforts, requiring attorneys to act competently to preserve confidentiality. In acting competently, attorneys know not to discuss confidential information in public places, with others outside of the legal team, or with individuals with whom communication is not necessary to adequately represent clients. Because competent representation includes awareness of the individuals (physical and electronic) present when discussing confidential information, the Workplace Assistant could pose a problem: it is difficult to conclusively determine who is listening to, or has access to, the microphone and its recordings on the Alexa-enabled device.

Model Rule 1.1 also requires that an attorney provide competent representation to clients and, in its comments, addresses technology used by attorneys. According to Comment 8 to Model Rule 1.1, this competency includes keeping up to date on changes in the law, "including the benefits and risks associated with relevant technology." Attorneys therefore cannot blindly use technology without knowing its security measures and the possible ramifications for client representation. The benefit of the Workplace Assistant is the time saved in recording and inquiring about billing. The risk is having an active microphone in an attorney's office that can record privileged client information, a risk attorneys may not want to take.

However, Amazon has another product, the Amazon Tap, which may lessen the risk associated with voice assistants while still allowing attorneys to use the Workplace Assistant program. Although this device also uses Alexa to respond to voice requests, no wake-up word is required, because the user must touch the button on the top of the device to activate the microphone. The microphone is therefore not listening for a wake-up word, which alleviates some concerns regarding confidentiality.

Either way, attorneys may still hesitate to use any of these gadgets because of reactions from clients, who may step into the office for a meeting and see a microphone in an area where they want to discuss private, confidential information.

Peggy Wojkowski graduated from Chicago-Kent College of Law in May 2017. She will be joining a large IP boutique firm in September 2017 after sitting for the Illinois bar exam in July 2017.