The Nightmare Du Jour: Clearview AI Brings 1984 to 2020

By Alexandra M. Franco, Esq.

Have you ever had a picture of your face as your profile picture on a social media website? If the answer is yes, then it is very likely that a company called Clearview AI has it. Have you ever heard of Clearview AI? You probably haven’t—that is, unless you watched this alarming John Oliver segment or read this spine-chilling report from Kashmir Hill in The New York Times, which gives any Stephen King novel a run for its money. If you are among the majority of people in the U.S. who have not heard of Clearview, it’s about time you did.

Clearview is in the business of facial recognition technology; it works primarily by searching the internet for images of people’s faces posted on social media websites such as Facebook and YouTube and uploading them to its database. Once Clearview finds a picture of your face, the company takes the measurements of your facial geometry—a form of biometric data. Biometric data are measurements and scans of biological features that are unique to each person on earth, such as a fingerprint. Thus, much like a fingerprint, a scan of your facial geometry enables anyone who has it to figure out your identity from a picture alone.
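To make that process concrete, here is a minimal, hypothetical sketch of facial-geometry matching using the open-source face_recognition library. This is only an illustration of the general technique; it is not Clearview’s actual code, and the file names are invented for the example.

```python
# Illustrative sketch of facial-geometry matching (not Clearview's system).
# Requires the open-source face_recognition library; file names are made up.
import face_recognition

# A "known" face, e.g., from a profile picture, and an unknown photo to search.
known_image = face_recognition.load_image_file("profile_photo.jpg")
unknown_image = face_recognition.load_image_file("crowd_photo.jpg")

# Each encoding is a 128-number vector summarizing the face's geometry;
# this assumes at least one face was detected in the profile photo.
known_encoding = face_recognition.face_encodings(known_image)[0]

# Compare every face found in the unknown photo against the known encoding.
for candidate in face_recognition.face_encodings(unknown_image):
    match = face_recognition.compare_faces([known_encoding], candidate)[0]
    distance = face_recognition.face_distance([known_encoding], candidate)[0]
    print(f"match={match}, distance={distance:.3f}")
```

Two photos of the same person produce encodings that sit close together, which is why a single scan of your facial geometry can link your profile picture to photos of you anywhere else on the internet.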

But Clearview doesn’t stop there. Once it has created a scan of your facial geometry, its algorithm keeps combing through the internet and matches the scan to any other pictures of you it finds—whether you’re aware of their existence or not, and even if you have deleted them. It does this without your knowledge or consent. It does this without regard to social media sites’ terms of use, some of which explicitly prohibit the collection of people’s images.

So far, Clearview has run this process on over three billion (yes, billion with a b) images of people’s faces from the internet.

Indeed, what makes Clearview’s facial recognition service so powerful is, in part, its indiscriminate, careless, and unethical collection of people’s photos en masse from the internet. So far, most companies in the facial recognition business have limited the sources from which they collect people’s images to, for example, mugshots. To truly understand how serious a threat Clearview’s business model poses to people’s privacy, consider this: even Google—a company that can hardly be described as a guardian of people’s privacy rights—has refused to develop this type of technology because it could be used “in a very bad way.”

There is another thing that places Clearview miles ahead of other facial recognition services: its incredible efficiency in recognizing people’s faces from many types of photos—even ones that are blurry or taken from a bad angle. You might be tempted to think: “But wait! We’re wearing masks now; surely they can’t identify our faces if we’re wearing masks.” Well, the invasiveness of Clearview’s insanely powerful algorithm surpasses even that of COVID-19; it can recognize a face even if it is partially covered. Masks can’t protect you from this one.

And Clearview has unleashed this monstrous threat to people’s privacy largely hidden behind the seemingly endless parade of nightmares that the year 2020 has visited upon us.

2020 has not only been the COVID-19 year. It has also been the year in which millions of people across the U.S. have taken to the streets to protest systemic racism, abuse, and violence by police against African Americans and other minorities. Have you been to one of those protests lately? In the smartphone era, protests are events at which hundreds of people take myriad pictures with their smartphones and upload them to social media sites in the blink of an eye. If you have been to a protest, chances are someone has taken your picture and uploaded it to the internet. If so, it is very likely that Clearview has uploaded it to its system.

And to whom does Clearview sell access to its services?  To law enforcement!

Are you one of those Americans who have exercised their constitutional rights to freedom of speech, expression, and assembly during this year’s protests? Are you concerned about your personal safety during a protest in light of reports such as this one showing police brutality and retaliatory actions against demonstrators? Well, you may want to know that Clearview thought it was a great marketing idea to give away free trials of its facial recognition service to individual police officers—yes, not just to police departments, but to individual officers. So, in addition to riot gear, tear gas, and batons, Clearview has given individual police officers access to a tool that allows them, at will and for any reason, to “instantaneously identify everyone at a protest or political rally.”

Does the Stasi-style federal “police” force taking demonstrators into unmarked vehicles have access to Clearview’s service? Who knows.

Also, as I’ve mentioned in the past, facial recognition technologies are particularly bad at identifying minorities such as African Americans. Is Clearview’s algorithm accurate enough to ensure that a law-abiding Black citizen is not arrested, or even shot, because his face is mistaken for someone else’s? Again, who knows.

On its website, Clearview states that its mission is to enable law enforcement “to catch the most dangerous criminals… And make communities safer, especially the most vulnerable among us.” In light of images such as the one in this article and this one, such a statement is a slap in the face of the reality that vulnerable, marginalized communities have to endure every single day of their lives.

I would like to tell you that there is a clear, efficient way to stop Clearview, but the road ahead will inevitably be tortuous. So far, the American Civil Liberties Union has filed a lawsuit in Illinois state court under the Illinois Biometric Information Privacy Act (BIPA), seeking to enjoin Clearview from continuing its collection of people’s pictures. However, even though BIPA is the most stringent biometric privacy law in the U.S., it is still a state law subject to limitations. As a Stanford law professor put it, “absent a very strong federal privacy law, we’re all screwed,” and there isn’t one. And we all know that in light of the Chernobylesque meltdown our federal system of government is experiencing, there won’t be one anytime soon.

If there is anything that COVID-19 has taught us—or at least reminded us of—it is that some of the most significant threats to life and safety are largely invisible. Some take the form of deadly pathogens capable of killing millions of people. Others take the form of powerful algorithms that, in the words of a Clearview investor, could further lead us down the path towards “a dystopian future or something.” And, speaking of a dystopian future, in his—very, very often referenced—novel 1984, George Orwell wrote: “If you want a picture of the future, imagine a boot stamping on a human face—for ever.”

Clearview probably has that one, too.


Alexandra M. Franco is a Visiting Assistant Professor at IIT Chicago-Kent College of Law and an Affiliated Scholar with IIT Chicago-Kent’s Institute for Science, Law and Technology.


Autism Spectrum Disorders in Children Conceived with Donor Sperm: How Should the Law Respond?

Laurie Rosenow

In 2017 an Illinois mother of two children diagnosed with Autism Spectrum Disorder (ASD) filed a complaint against a sperm bank alleging that the sperm donor used to conceive both of the children was not the man he claimed to be.[i] Not only did Danielle Rizzo learn that donor H898 lied about his education; he had also failed to disclose a history of learning disabilities and other developmental issues.[ii] Ms. Rizzo later discovered that she was not alone. To date at least a dozen other children conceived with donor H898’s sperm have been diagnosed with ASD.[iii]

In 2010 Ms. Rizzo purchased donor H898’s sperm from Idant Laboratories, which listed the donor as a 6’1” blond-haired, blue-eyed college graduate with a master’s degree who had passed all of the lab’s health screenings.[iv] The only thing that turned out to be true was his appearance. Based on conversations with other women who had used donor H898 to conceive their children, some of whom had even met him, Rizzo learned that the donor had neither an undergraduate nor a graduate degree as advertised, had been diagnosed with ADHD, did not speak until age 3, and had attended a special school for children with learning and emotional disabilities.[v]

When Rizzo’s children were 3 and 4 years old, she contacted geneticist and autism researcher Stephen Scherer, Director of the Centre for Applied Genomics at The Hospital for Sick Children in Toronto, and connected him with other families whose children conceived with donor H898’s sperm were affected. Such a group, known as an autism “cluster,” offers scientists a rare opportunity to study what causes the disorder and how to treat it. Dr. Scherer cautioned that while his research to date is still preliminary, his hypothesis is that something in the donor’s DNA caused the children to develop ASD.[vi]

The word “autism” is derived from the Greek root for “self” and describes a wide range of interpersonal behaviors, including impaired communication and social interaction, repetitive behaviors, and limited interests. These can be associated with psychiatric, neurological, physical, and intellectual disabilities that range from mild to severe.[vii]

Such a person may often appear removed from social interaction, becoming an “isolated self.”[viii] The Diagnostic and Statistical Manual of Mental Disorders (DSM-5) uses a broad definition of “autism spectrum disorder” that includes what were once distinct diagnostic categories, such as autistic disorder and Asperger syndrome.[ix] ASD affects four times more males than females, and symptoms usually manifest by the age of three.[x]

The March of Dimes estimates that 6% of children born worldwide each year will have a serious birth defect that has a genetic basis.[xi] Estimates of the occurrence of ASD vary, but it is thought to be as high as 1% of the population.[xii] Unlike diseases such as cystic fibrosis or Tay-Sachs disease, for which carrier testing exists, the genetics of ASD is not yet well understood. Geneticists such as Dr. Scherer suspect that as many as 100 different genes may be associated with ASD. Over 100 genetic disorders, such as Rett syndrome and Fragile X syndrome, can exhibit features of ASD, further complicating the diagnosis and understanding of the condition. Dr. Scherer estimates that a subset of “high-impact” genes is involved in 5-20% of all ASD diagnoses.[xiii] Danielle Rizzo’s children were found by Dr. Scherer to carry two mutations associated with ASD.[xiv]

Despite the fact that genetic screening is available for many diseases, the United States does not require any genetic screening of gamete donors. Under federal law, sperm banks in the United States are regulated by the Food and Drug Administration, which requires donors of reproductive cells or tissue to undergo testing for certain enumerated communicable diseases such as HIV, Hepatitis B and C, chlamydia, and gonorrhea.[xv] “Sexually intimate partners,” however, are exempted from such screening.[xvi] The FDA also requires that an establishment that conducts donor screening review the donor’s medical records and social behavior for increased risk of communicable disease and conduct a physical exam of the donor.[xvii] Retesting of donors is required after six months for any subsequent donations.[xviii]

Sperm banks in the U.S. are also not required to limit the number of semen samples sold or to keep track of live births resulting from their donors. And no law prohibits a man from donating to as many sperm banks as he likes. For example, a donor in Michigan who donated his semen twice a week between 1980 and 1994 had fathered at least 400 children by 2010.[xix] A mother of a donor child was able to trace at least 150 half-siblings to her son using web-based registries.[xx] Danielle Rizzo discovered that her donor, H898, was still being sold by at least four sperm banks, despite those banks having received calls and letters warning them of her experience.[xxi] With the popularity of DNA home-testing kits such as 23andMe and Ancestry.com, as well as voluntary registries such as the Donor Sibling Registry, even more children from donors like H898 are likely to be discovered.

In addition to the FDA rules mandating screening for communicable disease, the American Society for Reproductive Medicine (“ASRM”) advises that sperm banks accept only donors between the ages of 18 and 40 and that they provide prospective donors with a psychological evaluation and counseling performed by a mental health professional.[xxii] The industry group recommends genetic testing of all donors for cystic fibrosis, along with any other genetic testing indicated by the donor’s ethnic background, but it does not recommend a chromosomal analysis of all donors.

The American College of Obstetricians and Gynecologists (“ACOG”) as well as ASRM recommend limiting the number of children born to a single gamete donor.[xxiii] While populations will vary, to limit the possibility of consanguinity, ACOG recommends a maximum of 25 children born from a single donor per population of 800,000.[xxiv] The challenge in setting limits on the number of children born to a sperm donor lies in obtaining the information and keeping updated records. Many women purchase sperm from banks across the country and even the globe with no legal incentive to inform a sperm bank of any resulting children or their health status. Sperm banks are also unlikely to share information with donors regarding the number of their semen vials sold, let alone any children that result.
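To see how ACOG’s ratio scales in practice, here is a small, back-of-the-envelope sketch; the function and the example numbers are our own illustration, not an official formula from ACOG or ASRM.

```python
# Illustrative only: prorates ACOG's guideline of at most 25 births
# per donor per population of 800,000 to the population a bank serves.
def max_births_per_donor(population_served: int,
                         cap: int = 25,
                         reference_population: int = 800_000) -> int:
    return cap * population_served // reference_population

# A donor whose sperm is distributed across a metro area of 9.5 million:
print(max_births_per_donor(9_500_000))  # -> 296
```

Compare even that generous ceiling with the 400-plus children attributed to the Michigan donor above, and the weakness of a purely voluntary guideline becomes apparent.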

Despite the lack of a federal mandate, most sperm banks voluntarily screen for genetic defects.[xxv] However, many diseases that are thought to have a hereditary component, like ASD, cannot be tested for, and clinicians and patients are forced to rely on donors to give truthful and accurate medical and family histories, and on the banks to document such information accurately.

Like Danielle Rizzo, another mother of two children diagnosed with ASD who were conceived with donor H898’s sperm filed a lawsuit against Idant Labs, including claims for fraud, negligent misrepresentation, strict products liability, false advertising, deceptive business practices, battery, and negligence.[xxvi] Danielle Rizzo settled her claims against Idant’s parent company, Daxor Co., in 2017 for $250,000, though she alleges that is a fraction of the estimated $7 million in care that both of her children will need.[xxvii]

Similar lawsuits were filed against Xytex, a sperm bank based in Atlanta, Georgia, regarding sperm it sold from Donor #9623, Chris Aggeles, who was advertised as having a genius-level IQ of 160 and pursuing a PhD in neuroscience engineering.[xxviii] In fact, the donor at the time was a high school dropout with a criminal record and a history of mental disorders including schizophrenia, bipolar disorder, and narcissistic personality disorder.[xxix] He had been a donor at Xytex for fourteen years. The plaintiffs claimed the company did not verify any of the information the donor had given them, but Xytex claims it discloses to prospective clients that any representations by the donor were his alone.[xxx] Recently, nine families with 13 children conceived with sperm from Aggeles settled their claims for wrongful birth, failure to investigate, and fraud.[xxxi]

Despite the monetary damages awarded in settlements of these lawsuits, a case against Idant that reached the Third Circuit was dismissed because the court found an argument for liability based on the quality of sperm to be indistinguishable from New York’s prohibition against wrongful life claims.[xxxii] Idant in that case had sold the sperm of Donor G738 to a mother in Pennsylvania whose daughter was diagnosed as a Fragile X carrier. As the court put it: “The difficulty B.D. now faces and will face are surely tragic, but New York law, which controls here, states that she ‘like any other [child], does not have a protected right to be born free of genetic defects.’”

Both mothers of children diagnosed with ASD conceived with donor H898’s sperm left professional careers to care for their children and alleged severe financial losses as a result.[xxxiii] Donor H898, however, was not a party to the suits, nor were any of the other establishments selling vials of his semen, so none of them is bound by any settlement agreement that might restrict future donations. Since lawsuits only offer the possibility of damages and other relief after an injury has occurred, and some jurisdictions, like New York, will not even consider claims related to defective sperm, policies that focus on avoiding harm prior to insemination should be considered.

Because screening is not available for many diseases that likely have a strong genetic component, the family and medical history of donors becomes critical as a secondary method of screening. Sperm banks could require signed, sworn affidavits from donors attesting to the truthfulness and accuracy of the information they provide, to encourage more accurate reporting. Many banks claim they run criminal background checks on donors, but they could also verify claims of employment and education with a simple phone call. Laws mandating a cap on the number of vials an individual may donate make sense in light of the vast numbers of children possibly being conceived from popular donors. It may also be time for sperm banks in the U.S. to follow the example of the UK, which allows children conceived from donor gametes to obtain medical information about their donors at age 16 and the full name, date of birth, and address of their donors at age 18.[xxxiv] In the age of DNA testing, social media, and cyberstalking, anonymity may be unrealistic. If sperm banks do not tighten their internal policies for screening donors, more avoidable tragedies are likely to occur.

Laurie Rosenow is an attorney and former Senior Fellow at the Institute for Science, Law & Technology.


[i] Rizzo v. Idant Labs, Case No. 17-cv-00998 (N.D. Ill. Jan. 31, 2017).

[ii] Id. See also Ariana Eunjung Cha, “The Children of Donor H898,” Wash. Post, Sept. 14, 2019.

[iii] Id. See also Doe v. Idant Labs, complaint filed in N.Y. State Supreme Court, Civil Branch, June 2016.

[iv] Cha, “The Children of Donor H898.”

[v] Id.

[vi] Id.  Dr. Scherer also noted, however, that the donor could have other biological children who are not affected.

[vii] Yuen, R.K.C. et al, “Whole Genome Sequencing Resource Identifies 18 New Candidate Genes for Autism Spectrum Disorder,”  20 Nat. Neurosci., 602-611 (2017).

[viii] “What Does the Word ‘Autism’ Mean?” WebMD, available at https://www.webmd.com/brain/autism/what-does-autism-mean#1.

[ix] Autism Spectrum Disorder, Diagnostic Criteria, Centers for Disease Control, available at https://www.cdc.gov/ncbddd/autism/hcp-dsm.html.

[x] Yuen et al, “Whole Genome Sequencing Resource Identifies 18 New Candidate Genes for Autism Spectrum Disorder.”  For examples of common behaviors found in children with ASD, see National Institute of Mental Health. Autism Spectrum Disorder Overview, available at https://www.nimh.nih.gov/health/topics/autism-spectrum-disorders-asd/index.shtml.

[xi] March of Dimes Global Report on Birth Defects 2006, available at https://www.marchofdimes.org/global-report-on-birth-defects-the-hidden-toll-of-dying-and-disabled-children-full-report.pdf

[xii] Anney, Richard et al, “A Genome-wide Scan for Common Alleles Affecting Risk for Autism,” Hum. Mol. Gen. Vol. 19, No. 20, p. 4072-4082 (2010).

[xiii] Cha, “The Children of Donor H898.”

[xiv] The genetic mutations found in her sons were in MBD1 and SHANK1. Cha, “The Children of Donor H898,” Wash. Post, Sept. 14, 2019.

[xv] 21 C.F.R. Sec. 1271.75.

[xvi] 21 C.F.R. Sec. 1271.90.

[xvii] 21 C.F.R. Sec. 1271.50 (2006). See also https://www.fda.gov/vaccines-blood-biologics/safety-availability-biologics/what-you-should-know-reproductive-tissue-donation. Donor screening consists of reviewing the donor’s relevant medical records for risk factors for, and clinical evidence of, relevant communicable disease agents and diseases. These records include a current donor medical history interview to determine medical history and relevant social behavior, a current physical examination, and treatments related to medical conditions that may suggest the donor is at increased risk for a relevant communicable disease.

[xviii] 21 C.F.R. Sec. 1271.85 (d).

[xix] Newsweek Staff, “Genetic Lessons from a Prolific Sperm Donor,” Newsweek, Dec. 15, 2009, available at https://www.newsweek.com/genetic-lessons-prolific-sperm-donor-75467. See also Hayes, Daniel, “9 Sperm Donors Whose Kids Could Populate a Small Town,” Thought Catalog, Jan. 13, 2016, available at https://thoughtcatalog.com/daniel-hayes/2016/01/9-sperm-donors-whose-kids-could-populate-a-small-town/.

[xx] Mroz, Jacqueline, “One Sperm Donor, 150 Offspring,” New York Times, Sept. 5, 2011, available at https://www.nytimes.com/2011/09/06/health/06donor.html.

[xxi] Cha, “The Children of Donor H898.”

[xxii] “Recommendations for Gamete and Embryo Donation,” 99 Fertility and Sterility 1, p. 47-62, Jan. 2013, available at https://www.fertstert.org/article/S0015-0282(12)02256-X/fulltext#sec1. See also https://www.reproductivefacts.org/news-and-publications/patient-fact-sheets-and-booklets/documents/fact-sheets-and-info-booklets/third-party-reproduction-sperm-egg-and-embryo-donation-and-surrogacy/

[xxiii] ACOG Committee Opinion: Genetic Screening of Gamete Donors, Int’l Jour. Gyn & Obst. 60 (1998) 190-192, available at https://obgyn.onlinelibrary.wiley.com/doi/abs/10.1016/S0020-7292%2897%2990229-0

[xxiv] Id.

[xxv] See, e.g., California Cryobank, one of the largest sperm banks in the United States: https://www.cryobank.com/services/genetic-counseling/donor-screening/

[xxvi] Doe v. Idant Labs, complaint filed N.Y. State Supreme Ct., June 2016, available at https://www.donorsiblingregistry.com/sites/default/files/Rizzo%20complaint.pdf.

[xxvii] Cha, Ariana Eunjung, “Danielle Rizzo’s Donor-conceived Sons Both Have Autism.  Should Someone be Held Responsible?” Wash. Post, Oct. 3, 2019, available at https://www.washingtonpost.com/health/2019/10/03/danielle-rizzos-sons-donor-conceived-sons-both-have-autism-should-someone-be-held-responsible/

[xxviii] Johnson, Joe, “UGA Employee at Center of Sperm Bank Fraud,” Athens Banner-Herald, Sept. 3, 2016.

[xxix] Id. See also Van Dusen, Christine, “A Georgia Sperm Bank, a Troubled Donor, and the Secretive Business of Babymaking,” Atlanta Magazine (March 2018), also available at https://www.atlantamagazine.com/great-reads/georgia-sperm-bank-troubled-donor-secretive-business-babymaking/ (Feb. 13, 2018).

[xxx] Djoulakian, Hasmik, “The ‘Outing’ of Sperm Donor 9623,” Biopolitical Times, June 30, 2016, available at https://www.geneticsandsociety.org/biopolitical-times/outing-sperm-donor-9623. See also Johnson, Joe, “UGA Employee at Center of Sperm Bank Fraud,” Athens Banner-Herald, Sept. 3, 2016.

[xxxi] Hersh & Hersh law firm, “Major Settlement of Sperm Bank/Deceptive Business Practice Case,” available at https://hershlaw.com/success-2/. See also Khandaker, Tamara, “Lawsuit Alleges Sperm Bank’s Genius Donor Was Actually a Schizophrenic Ex-Con,” Vice News, Apr. 15, 2016, available at https://www.vice.com/en_us/article/neykmx/lawsuit-alleges-sperm-banks-genius-donor-was-actually-a-schizophrenic-ex-con

[xxxii] D.D. v. Idant Labs (3d Cir. 2010).

[xxxiii] Doe v. Idant Labs, Complaint; Cha, “The Children of Donor H898.” See also Cha, “Danielle Rizzo’s Donor-conceived Sons Both Have Autism.”

[xxxiv] Human Fertilisation and Embryology Authority, “Rules Around Releasing Donor Information,” available at https://www.hfea.gov.uk/donation/donors/rules-around-releasing-donor-information/.

Can the Law Eradicate Deep Fakes?

By Andrew White

As a wave of new technology surges forward, the law tries to keep up with the surge’s negative ripple effects. But is the law up to the task of regulating deep fakes? Recent advances in artificial intelligence have made it possible to create, from whole cloth, videos and audio that make it appear that their subjects have done or said things they really have not. These puppet-like videos are called deep fakes.

Deep fakes are most commonly created with generative adversarial network (GAN) algorithms, in which one neural network generates candidate images of the intended target while a second network judges them against real images; the two networks play off each other until a lifelike video puppet emerges, or until an individual’s face can be convincingly overlaid onto an existing video. These videos may be used to further political agendas.
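To give a sense of the adversarial training loop just described, here is a minimal, illustrative PyTorch sketch of a GAN. The architecture and numbers are arbitrary assumptions for the example; real deep fake systems are far larger, operate on video frames, and add face detection, alignment, and blending on top of this core.

```python
# Minimal GAN sketch (illustrative; not any deployed deep fake system).
import torch
import torch.nn as nn

LATENT_DIM = 64      # size of the random "noise" input
IMG_DIM = 28 * 28    # e.g., small grayscale face crops, flattened

# The generator maps random noise to a synthetic image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
# The discriminator scores an image: 1 = real, 0 = generated.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to tell real images from generated ones.
    fake_images = generator(torch.randn(batch, LATENT_DIM)).detach()
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fake_images), fake_labels))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    g_loss = loss_fn(
        discriminator(generator(torch.randn(batch, LATENT_DIM))), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# One step on a toy batch; the Tanh output expects values in [-1, 1].
train_step(torch.rand(32, IMG_DIM) * 2 - 1)
```

Because each network trains against the other, the generator’s output becomes steadily more lifelike, which is what makes the political examples below possible.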

For example, this video, created by a French AIDS charity, falsely depicts President Trump declaring an end to the AIDS crisis. While not technically deep fakes, other types of altered political media have been met with viral success on social media. This manipulated video, which seemingly depicts the Speaker of the House as drunk and incoherent on the job, quickly circulated on Facebook and Twitter, even catching a retweet from Rudy Giuliani. Finally, scorned ex-partners have also used deep fake videos to create revenge porn.

Danielle Citron, a Professor of Law at Boston University, suggested in her testimony before the House Permanent Select Committee on Intelligence that a combination of legal, technological, and societal efforts is the best solution to the misuse of deep fakes:

“[w]e need the law, tech companies, and a heavy dose of societal resilience to make our way through these challenges.”

Google is working to improve its technology to detect deep fakes. Facebook, Microsoft, the Partnership on AI, and Amazon have teamed up to create the Deepfake Detection Challenge. Twitter is actively collecting survey responses to gauge how users of its platform would like to see deep fakes handled, whether through outright removal of deep fake videos, labelling them, or alerting users when they are about to share one. There have also been efforts in the technology world to curb the influence of altered media and deep fake videos on the user side: users can investigate on their own the media they encounter.

[Figure: Three mechanisms of technological blockchain regulation. By Andrew White, 2019.]

For example, this algorithm tracks subtle head movements to detect whether a video is real or fake. The Department of Defense has created another algorithm, which tracks the eye blinking of subjects in videos and compares it with bona fide footage. Deep fakes are becoming so well crafted, though, that there may come a time when they cannot be reliably detected. Other methods have been developing alongside advances in artificial intelligence, such as the use of blockchain verification to establish the provenance of videos and audio before they are posted.
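The provenance idea is simple enough to sketch: register a cryptographic fingerprint of a video when it is published, then recompute the fingerprint later to detect tampering. In the toy version below, a plain dictionary stands in for the blockchain ledger, so treat it as an assumption-heavy illustration rather than any deployed system.

```python
# Toy provenance-by-hash sketch; a dict stands in for a blockchain ledger.
import hashlib

def video_fingerprint(path: str) -> str:
    """SHA-256 hash of the raw file bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB at a time
            h.update(chunk)
    return h.hexdigest()

registry: dict[str, str] = {}  # stand-in for an immutable on-chain record

def register(video_id: str, path: str) -> None:
    registry[video_id] = video_fingerprint(path)

def verify(video_id: str, path: str) -> bool:
    # Any edit to the file, however small, changes the hash entirely.
    return registry.get(video_id) == video_fingerprint(path)
```

Note the limit of this approach: it can prove that a file has (or has not) changed since registration, but it cannot prove that the registered original was authentic in the first place.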

From a legal perspective, legislatures have begun to realize the impact deep fakes have on Americans’ political and sexual autonomy. The federal government is working on legislation to require the Department of Homeland Security to research the status and effects of deep fakes. Legislation restricting the distribution of deep fakes has already been passed in various states, but as the statutes demonstrate, it may be more difficult than anticipated to truly stem the influx of deep fakes.

Texas, in enacting S.B. No. 751, targets deep fakes whose creators intend to influence the outcome of an election. This broad statute criminalizes the creation or distribution of a deep fake video with the intent to influence an election or injure a candidate within 30 days of the election. Interestingly, the Texas legislature specified that a “deep fake video [is a] video created with artificial intelligence [depicting] a real person performing an action that did not occur in reality.” This area of law is rapidly evolving, and the contours of the statute have not been clearly established. For example, it is not clear whether the altered video of Nancy Pelosi would fall under it. The Pelosi video was slowed down, and the pitch of the audio was raised so that the slowed voice still sounded like Pelosi’s. These material alterations were not created with artificial intelligence. In addition, Pelosi did actually speak the words, and in the same order, as in the altered video. Would this fall under the statute’s proscription of videos in which the subject is “performing an action that did not occur in reality”?

A recent Virginia statute targets a different category of deep fakes: revenge porn. S.B. No. 1736 adds the phrase “including a falsely created videographic or still image” to the existing revenge porn statute. This broader language seems to cover pornographic likenesses created by GANs or other algorithms. But would the statute protect a victim when a video contains a likeness created to look like her, yet a minor difference (such as a missing or added tattoo) makes the likeness just different enough to fall outside the statute’s reach?

A similar cause of action was added to California law by A.B. No. 602, which Governor Newsom signed into law in October 2019. This statute adds a private right of action to the existing revenge porn statute for victims who have been face- or body-swapped into a recording of a sexual act that is published without their consent.

California also passed A.B. 730 alongside the revenge porn amendment. This law prohibits the distribution of any “deceptive audio or visual media … with the intent to injure the candidate’s reputation or to deceive a voter” within 60 days of an election. The law defines “materially deceptive audio or visual media” as media that “would falsely appear to a reasonable person to be authentic and would cause a reasonable person to have a fundamentally different understanding . . . than that person would have if the person were hearing or seeing the unaltered, original version of the image or audio or video recording.”

This law also has a notable exception: it does not apply to newspapers or other news media, nor to paid campaign ads. These exceptions may undermine the entire purpose of the bill, as Facebook has publicly asserted that it will not verify the truth or falsehood of political ads purchased on its platform.

Finally, traditional tort law may allow for recovery in certain situations where state statutes fail. The torts of intentional infliction of emotional distress, defamation, and false light could all apply, depending on the facts. These remedies, though, may only provide monetary damages and not the removal of the video itself. The problem with applying tort law in the deep fake context is similar to the limitations of A.B. 730: finding the creator of a deep fake and then proving the creator’s intent may be a Herculean task. Even after finding the creator, it is difficult to mount a full civil case against them, and even if you do manage to bring a cause of action, the damage may already have been done.

The area of AI and deep fakes is a rapidly evolving one, both from a technological and a legal perspective.  The coming together of technology and law to combat the dark side of advances in artificial intelligence is encouraging, even as technology rushes forward to realize the more positive effects of artificial intelligence.  It seems, then, that the only solution to the problem of deep fakes is a combination of legal and technological remedies, and, in the words of Danielle Citron, “a heavy dose of societal resilience.”


Andrew White is a 1L Research Fellow at the Institute for Science, Law & Technology at IIT Chicago-Kent College of Law.  Andrew received his Master of Science in Law from Northwestern Pritzker School of Law and his Bachelor of Science from the University of Michigan, where he studied Cellular and Molecular Biology and French and Francophone Studies.

Your Soul for Bonus Miles! The Unhinged Collection of Biometric Data is Today’s Faustian Bargain

By Alexandra M. Franco, Esq.

What do tanning salons, amusement parks, Asian food restaurants, airlines and the FBI have in common? They all collect people’s biometric information.

What is biometric information? The most well-known form of biometric information is the fingerprint—perhaps because TV shows and movies have spread the knowledge that fingerprints have historically been law enforcement agencies’ biometric identifier of choice.

But fingerprints are just one type of biometric information that can be collected from a person. Stand up and look into the mirror. Do you see the angles between the different points of your face? Do you notice the particular distance between the end of your nose and the top of your upper lip? Your facial geometry is unlike any other human’s. Now, look closer; take a look at your iris—the colored part of your eye that surrounds the pupil. Do you see all of those little dots, streaks, and swirls of different color shades? Those intricate patterns within your irises are as unique to you as your fingerprints. And at the very back of your eye, the retina is home to many tiny blood vessels, the shapes of which are also unique to you.

All of these are your biometric identifiers. Your biometric identifiers cannot be found on any other human on earth, and are a part of you until the day you die.

This is what has made fingerprints so important for law enforcement. Fingerprints allow police agencies to keep accurate records of the people they arrest—even if those people give the police a false name or carry false identification. Those who commit crimes are also likely to leave fingerprints at a crime scene, making fingerprints essential to identifying the perpetrator in a case.

So far, we have entrusted law enforcement and other government agencies with collecting our fingerprints because: (1) we trust these entities to keep this information safe (more on this later); and (2) the societal benefit of doing so for law enforcement and security purposes is significant. Nevertheless, once collected, biometric identifiers become the most sensitive type of data that exists about a person. If someone hacks into a database and steals your fingerprints, they can use them to steal your identity in the same way someone who obtains your social security number can steal your identity. However, unlike a stolen social security number, you cannot change your fingerprints. You cannot alter the patterns within your irises. Your biometric data is a permanent part of who you are.

Today, the collection of biometric data has grown exponentially outside of law enforcement agencies. For example, some employers now collect employees’ fingerprints to set up fingerprint access to work areas. Some may argue that depending on the type of work done at these places, fingerprint access may be warranted and more secure—in lieu of another method such as a passcode entry box or a key card entry box.

The problem is that businesses’ use of biometric data is expanding beyond simple fingerprint entry access. Unfortunately, companies are beginning to think about biometric data the way we think of physical keys, key fobs, or even retail loyalty cards—as mere tools to make business practices more efficient. These businesses often portray such practices as a perk for customers. Would you like to skip the line to order your favorite stir fry? Let us scan your face! Do you want the freedom to use any of our tanning salon locations? Cool! Let us have your fingerprints. As a result of this conceptual cheapening of biometric data, the collection and storage of people’s biometric identifiers has exploded out of control in recent years, as businesses embrace their use as part of their business models.

But there is a difference between collecting and storing people’s biometric data for law enforcement and security purposes and doing so as part of private business models. The collection and storage of biometric data for business gain can have disastrous and irreversible consequences for people.

The best example to illustrate the potential dangers of this new trend is the recent announcement by United Airlines that it is working with Clear—a company that sells biometric technologies to airlines and stadiums—to implement iris and fingerprint scanners at security checkpoints at Chicago’s O’Hare airport. In a July 29, 2019 interview on WBEZ’s “All Things Considered” newscast, United promoted its new business practice as an exciting new perk for its customers: “Not only do you get the benefit of not having to take out your ID but you also get the benefit of going right in front of the security lane.” The Wall Street Journal reported that the service usually costs $179, but United noted that it would offer discounts to some of its customers and “enroll its top-tier frequent fliers free of charge.”

The information that United has provided raises significant questions. For example, what specific steps will United take to ensure that the biometric data it collects through Clear will be adequately protected from a breach? Clear’s evasive statement to the WSJ on this issue was that it “has never had a breach,” as if a past streak of good luck were an automatic assurance of a secure future.

A mere day after United’s enthusiastic announcement of its new biometric venture, the news broke about Capital One having been breached in an attack where a hacker obtained “access to 100 million Capital One credit card applications and accounts” in one of the worst data breaches in history. The number of data breaches happening each year continues to grow. Heck, not even the U.S. Government can prevent data breaches.

Clear also claims that it does not share or sell people’s biometric data. That’s great. Will it continue to never share or sell that data? What if Clear decides to change this particular policy in the future? In that case, Clear could present a customer who has a few minutes to run and catch her United flight with a long, dense digital “Notice of Policy Change” and a nifty little box to check that says “I have read and accept the terms and conditions in the Notice of Policy Change.” That is, if Clear and/or United feel magnanimous enough to give their customers any notice at all.

This leads to yet another issue. Illinois is one of the few states with legislation—the Biometric Information Privacy Act—which, among other things, requires businesses that wish to collect people’s biometric data to provide those people, before collecting the data, with detailed information about the security measures taken to store and dispose of it. This is so that people giving up their fingerprints and iris scans to avoid the oh-so-terrible hassle of taking out their ID at the airport security checkpoint understand the benefits and risks of agreeing to give up their data. The issue, once more, is that passengers will be presented with the familiar “I have read and accept the Notice of Privacy and Data Security” check box, and will check it without reading or understanding the implications of their actions.

Even more issues arise from United’s “partnership” with Clear. First, as the WSJ reported, United has actually obtained an ownership stake in Clear. This creates a clear (no pun intended) conflict of interest. In light of this conflict, can United ensure strict oversight of Clear’s data collection and storage practices? Can United guarantee that it will do everything in its power to address, remedy, and timely notify customers of a hypothetical future breach, even if doing so would harm its bottom line?

The second issue with United’s “partnership” with Clear has to do with Clear’s claim that it does not share customers’ data. Delta—another airline using Clear’s technology—also has an equity stake in Clear. Although Clear claims not to share its customers’ information, it is not clear (again, no pun intended) whether this policy applies to absolutely everyone under the sun or only to those without an ownership stake in Clear. Do companies that have purchased equity in Clear get to look at customers’ information and share it with one another? Clear may not sell or share the data, but will United and Delta?

It would behoove United and Clear to answer these questions for their customers. It’s already bad enough that United is marketing its new biometric collection business model as a perk for customers who get it free or at a reduced rate—that is, those who don’t get charged the whopping $179 for the privilege. Of course, what United and Clear don’t tell customers is that the data they collect will likely bring these companies significant economic gain; people’s data is inherently—and greatly—valuable to those who collect it. It is valuable enough for companies to offer people chump change to lure them into giving it up.

The risks presented by the indiscriminate collection of biometric data are significant. This is due to the extremely sensitive nature of biometric data and what can occur when it is misused, which can range from identity theft to being implicated in a crime you did not commit—facial recognition technology is particularly imprecise when it comes to anyone who is not a white male. Further, in an era in which people’s biometric information can be used to track their movements, the places they visit—from gas stations to addiction treatment centers—and even the products they look at in a store, the issue of data sharing and selling is of major importance to people’s privacy. In 2016, I wrote a blog post considering the implications of using facial scans to track people’s attendance at churches. The persistent, continuous, and increasing collection and sharing of people’s biometric information for any purpose has implications beyond data security and privacy that we may not even have considered yet. Is it worth giving up one of the most essential aspects of your humanity to enter an airport lounge? In the era of data breaches, artificial intelligence, and deepfakes, the answer to that question will likely determine the future of these technologies and how they will shape our society in the years to come.

Meanwhile, in considering whether to give up your fingerprint for a lounge pass from an airline which has come under fire in the recent past for the unfortunate effects of its “established procedures” on customers, it is worth remembering this: if your social security number is stolen, you can change it—yes, the process is painful, even tortuous, but the point is that it can be done.

You cannot ever change your fingerprints.


Alexandra M. Franco is a Visiting Assistant Professor at IIT Chicago-Kent College of Law and an Affiliated Scholar with IIT Chicago-Kent’s Institute for Science, Law and Technology.


Inheriting the Facebook Graveyard

By Michael Goodyear

Last year, I wrote about a German court case that struggled with the question of whether anyone can have access to a deceased individual’s social media accounts. The case centered on a 15-year-old girl who had been killed by a subway train; her parents wanted to access her Facebook account to see if they could determine from her posts and messages whether she had committed suicide. In May 2017, the German court of appeals reversed a lower court ruling in favor of the parents, holding that allowing the parents access to their daughter’s account would compromise the constitutional expectation of telecommunications privacy of the third parties with whom she had interacted online. On July 12, 2018, Germany’s highest court, the Federal Court of Justice (BGH), overruled the court of appeals, agreeing with the initial lower court decision and holding that online data can be inherited just like physical writings such as personal diaries or letters.

While there is a strong policy interest in probate, and social media does appear to fit a broad interpretation of the written communications traditionally included in probate, social media accounts contain a far greater breadth of information than those traditional sources. While probate law stretches back centuries, social media does not. Today, instead of a simple spoken conversation, which could not be inherited, we engage in lengthy conversations on social media and via texting. In many cases, our social media accounts and private messages are reflections of our personal thoughts. Although diaries also contain sensitive thoughts, they cannot compare to the magnitude of personal information present on Facebook. This is a fundamental change from the inheritance of physical documents under prior probate law.

The German case was also particularly tricky due to the girl’s age. Since she was a minor, her parents had an expanded range of rights which they would not have had upon her reaching legal adulthood. The BGH ruled broadly that digital content can be inherited, but it is unclear if this could be limited to only minors.

Such a limitation could be one way to achieve the court’s goal while still preserving data privacy for third parties. If such a policy were implemented, third parties would know when communicating over Facebook with underage individuals that their communications are not necessarily limited to the recipient’s eyes alone.

Facebook’s own policies do not allow access to a deceased individual’s account, even if the requesters are family members or the deceased was a child. The only options are to leave the Facebook account as is, memorialize it, or remove it. The BGH ruling will likely force Facebook to reevaluate its current policy on deceased users, which provides the perfect opportunity to adapt its policies to better protect user privacy.

While the BGH’s decision does not directly affect those outside Germany, Facebook’s reaction to the decision, including any policy changes, could apply to the rest of Europe and the United States as well. However, Facebook has previously resisted broadly applying EU privacy protections to its users who do not reside in the EU. It could very well maintain separate positions on accessing deceased users’ accounts as well.

The prevailing standard in the United States is that third-party communications are protected under the Stored Communications Act, 18 U.S.C. §§ 2701-2712. This broad privacy protection is not only important for third parties, but also for online services themselves. Platforms such as Facebook can simply refuse to disclose, except under limited circumstances, and cite the shield of the Stored Communications Act.

A possible alternative, in addition to drawing a distinction for minors, that would still comply with the Stored Communications Act and ameliorate the problem of the Facebook graveyard would be allowing the inclusion of social media accounts in a user’s will. Since over 10,000 Facebook users die every day, there is a pressing question of what to do with this ever-increasing digital graveyard of accounts filled with personal information. Delaware adopted a law providing for fiduciary access to digital assets and digital accounts in 2014. Under this law, an individual can list social media access in their will, despite sites like Facebook not allowing such a transfer. There are already services that hand over social media account access after the user’s death. Furthermore, courts have held that users can consent to the disclosure of their online communications in cases such as In re Facebook, 923 F. Supp. 2d 1204 (N.D. Cal. 2012), and Ajemian v. Yahoo!, Inc., 84 N.E.3d 766 (Mass. 2017).

The seemingly impenetrable wall between Facebook accounts and the outside world has already been breached. Facebook divulges account information and private messages to government officials with a warrant. Facebook private data is subject to discovery requests in litigation. Providing for access in a user’s will would be another step, in compliance with the law, that would allow users to exercise their own discretion and also forewarn third parties that their communications might be shared. While the exact privacy rights of children are trickier, following the BGH’s ruling, Facebook should craft a new policy that best meets the interests of the dead, the living, and privacy.

Michael Goodyear is a former ISLAT member and is currently a rising 2L at the University of Michigan Law School, where he is the President of Michigan’s Privacy and Technology Law Association.

Countdown to Health Care Privacy Compliance; GDPR Minus One Day

By Joan M. LeBow and Clayton W. Sutherland

As we hurtle toward the May 25, 2018 deadline for implementation of the European Union’s General Data Protection Regulation (GDPR), health care providers are quickly assessing gaps in their understanding of what the GDPR requires. A key area of concern is how the GDPR’s requirements compare to existing requirements under HIPAA/HITECH and FTC rules.

Elements of Consent and Article 7

Consent in the GDPR can be made easier to understand by breaking the definition down into its principal elements and correlating them with the obligations found in the GDPR. The Article 4 definition can be divided into four parts: consent must be freely given, specific, informed, and constitute an unambiguous indication of affirmative consent. We will address each element in a separate post, starting with “freely given.”

“Freely Given” Element

“Freely given,” under the GDPR definition, is focused on protecting individuals from an imbalance of power between them and data controllers. Accordingly, the Article 29 Working Party (WP29)—the current data protection advisory board created by the Data Protection Directive—has issued guidance for interpreting when consent is freely given. Per this guidance material, consent is only valid if: the data subject is able to exercise a real choice; there is no risk of deception, intimidation, or coercion; and there will not be significant negative consequences if the data subject elects not to consent.[i] Consequently, consent must be as easy to withdraw as it is to grant for organizations to be compliant. Additionally, GDPR recital 43 states the controller needs to demonstrate that it is possible to refuse or withdraw consent without detriment.[ii]

Controllers (who determine the purposes of data processing and how it occurs[iii]) bear the burden of proving that withdrawing consent does not lead to any costs for the data subject, and thus no clear disadvantage for those withdrawing consent. As a general rule, if consent is withdrawn, all data processing operations that were based on consent and took place before the withdrawal of consent—and in accordance with the GDPR—remain lawful. However, the controller must stop future processing actions. If there is no other lawful basis justifying the processing (e.g., further storage) of the data, it should be deleted or anonymized by the controller.[iv] Furthermore, GDPR recital 43 clarifies that if the consent process does not allow data subjects to give separate consent for distinct personal data processing operations (granularity), consent is not freely given.[v] Thus, if the controller has bundled multiple processing purposes together and has not attempted to seek separate consent for each purpose, there is a lack of freedom, and the specificity element comes into question. Article 7(4)’s conditionality provision, according to WP29 guidance, is crucial to determining the “freely given” element.[vi]
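To see how these rules might translate into software, here is a hypothetical sketch of a per-purpose consent record. The class and method names are invented for illustration and are not drawn from the GDPR or WP29 guidance.

```python
# Hypothetical consent model: granular, per-purpose, easy to withdraw.
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    purposes: dict[str, bool] = field(default_factory=dict)

    def grant(self, purpose: str) -> None:
        self.purposes[purpose] = True

    def withdraw(self, purpose: str) -> None:
        # Withdrawal must be as easy as granting: one call, no penalty.
        self.purposes[purpose] = False

    def may_process(self, purpose: str) -> bool:
        # Future processing requires currently valid consent (or another
        # lawful basis, which this sketch does not model).
        return self.purposes.get(purpose, False)

record = ConsentRecord()
record.grant("appointment_reminders")
record.grant("marketing")       # separate consent per purpose, never bundled
record.withdraw("marketing")    # stops future processing only
assert record.may_process("appointment_reminders")
assert not record.may_process("marketing")
# Processing done while the marketing consent was valid remains lawful, but
# if no other lawful basis remains, the marketing data should now be deleted
# or anonymized by the controller.
```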

GDPR vs. HIPAA/HITECH and FTC Part 2

Imbalance of power

GDPR: “Freely given,” under the GDPR definition, is focused on protecting individuals from an imbalance of power between themselves and data controllers.

HIPAA/HITECH: The limitations on health data use and the authorization requirements help ensure the privacy of patients and protect their right to limit how their data is used. This protection has various applications, including how data may be used for marketing purposes as well as when, or if, data can be sold.

FTC: The FTC protects consumers from the imbalance of power between themselves and businesses providing services. It protects consumers, generally, through its FTC Act § 5 powers.

Bundling and purpose-specific consent

GDPR: A service may involve multiple processing operations for more than one purpose. In such cases, data subjects should be free to choose which purposes they accept, rather than having to consent to a bundle of processing purposes. Consent is not considered free if the data subject is unable to refuse or withdraw his or her consent without detriment; examples of detriment are deception, intimidation, coercion, or significant negative consequences if the data subject does not consent. Article 7(4) of the GDPR indicates that, among other things, the practice of “bundling” consent with acceptance of terms or conditions, or “tying” the provision of a contract or a service to a consent request for processing personal data not necessary for the performance of that contract or service, is considered highly undesirable. When such practices occur, consent is presumed not to be freely given.

HIPAA/HITECH: An Authorization must include a description of each purpose of the requested use or disclosure of protected health information. A covered entity may not condition the provision of treatment, payment, enrollment in a health plan, or benefit eligibility on obtaining an authorization unless the situation falls under one of the three enumerated exceptions, which concern psychotherapy notes, marketing, and the sale of Protected Health Information. Bundling authorizations with other documents, such as consent for treatment, is generally prohibited, although there are three circumstances in which authorizations can be compounded to cover multiple documents or authorizations.

FTC: Unfair and deceptive business practices include deceiving or misleading customers about participation in a privacy program; failing to honor consumer privacy choices; unfair or unreasonable data security practices; and failing to obtain consent when tracking consumer locations. In addition, under the Children’s Online Privacy Protection Rule (“COPPA”), a website or online service directed to children under 13 cannot collect personal information about them without parental consent.

Withdrawal of consent

GDPR: The right to withdraw consent must be as easy a procedure as the one that grants consent. As a general rule, if consent is withdrawn, all data processing operations that were based on consent and took place before the withdrawal of consent—and in accordance with the GDPR—remain lawful; however, the controller must stop future processing actions. If there is no other lawful basis justifying the processing (e.g., further storage) of the data, the data should be deleted or anonymized by the controller. GDPR recital 43 states that the controller must demonstrate that it is possible to refuse or withdraw consent without detriment.

HIPAA/HITECH: The right to withdraw an authorization is similar to the GDPR right to withdraw consent, and the covered entity, like the controller, has the responsibility of informing individuals of that right. The revocation must be in writing and is not effective until the covered entity receives it. In addition, a written revocation is not effective with respect to actions a covered entity took in reliance on a valid Authorization, or if the provision of a contract or service was conditioned on obtaining the authorization. The Privacy Rule requires that the Authorization clearly state the individual’s right to revoke; the process for revocation must either be set forth clearly on the Authorization itself or, if the covered entity creates the Authorization and its Notice of Privacy Practices contains a clear description of the process, the Authorization can reference the Notice of Privacy Practices.

FTC: According to better business practices promulgated by the FTC, companies should provide key information as clearly as possible and not embed it within blanket agreements like a privacy policy, terms of use, or even the HIPAA authorization itself. For example, if a consumer is providing health information only to her doctor, she should not be required to click on a “patient authorization” link to learn that it will also be viewable by the public. And a provider should not promise to keep information confidential in large, boldface type, but then ask the consumer in a much less prominent manner to sign an authorization saying the information will be shared. Further, health care providers should evaluate the size, color, and graphics of all of their disclosure statements to ensure they are clear and conspicuous.

[i] Working Party 29. "Guidelines for Consent under Regulation 2016/679." Working Party 29 Newsroom, Regulation 2016/679 Guidance, November 28, 2017, 7-9, 30.

[ii] European Parliament and European Council. "Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation)." Official Journal of the European Union, Legislation, 119/8 (May 4, 2016) [hereinafter GDPR Publication].

[iii] See id. at 119/33 (Art. 4(7)).

[iv] Working Party 29. “Guidelines for Consent under Regulation 2016/679.” Working Party 29 Newsroom, Regulation 2016/679 Guidance, November 28, 2017, 21, 30.

[v] See GDPR Publication at 119/8.

[vi] Working Party 29. “Guidelines for Consent under Regulation 2016/679.” Working Party 29 Newsroom, Regulation 2016/679 Guidance, November 28, 2017, 10, 30.

Joan M. LeBow is the Healthcare Regulatory and Technology Practice Chair in the Chicago office of Quintairos, Prieto, Wood & Boyer, P.A. Clayton W. Sutherland is a Class of 2018 graduate of the IIT Chicago-Kent College of Law.

Countdown to Health Care Privacy Compliance; GDPR Minus Eight Days

By Joan M. LeBow and Clayton W. Sutherland

Are you a US healthcare provider with concerns about data privacy, a patient, or a reporter or policymaker trying to understand the changing healthcare privacy landscape? If you are, then our blog series will help you sort through the essential question of how the GDPR is relevant to you.

The European Council and European Parliament passed Regulation 2016/679, better known as the General Data Protection Regulation (GDPR), to repeal and replace Directive 95/46/EC, known as the Data Protection Directive (DPD). The new regulation creates a single set of privacy protection rules to be implemented in Member States and complied with by participants in the digital information market. Data processing under the GDPR is based on seven core principles: accountability; lawfulness, fairness and transparency; purpose limitation; data minimization; accuracy; storage limitation; and integrity and confidentiality.[i] These principles provide the foundation for the GDPR and its various compliance requirements. Like the DPD, the GDPR applies to controllers and processors of data: the controller determines how and why the data is collected and processed, while the processor acts on the controller's behalf.

The broadened scope of the GDPR is laid out in Article 3. The regulation applies to all companies processing the personal data of EU residents, regardless of the company's location or where the processing takes place.[ii] Further, the GDPR applies to data processing by controllers or processors not established in the EU when the company offers goods or services to, or monitors the behavior of, data subjects in the EU. Specifically, Article 3 § 2 applies to entities established outside the EU that conduct data processing activities under certain conditions: under § 2(a), if you offer goods or services to data subjects in the EU, or, under § 2(b), if you monitor a data subject's behavior that occurs in the EU, the GDPR will apply.[iii]

Under the GDPR, "processing" is broadly defined. It should be understood as a set of activities—automated or not—that includes data collection, storage, use, consultation, and disclosure by transmission, among other activities.[iv] For example, a company's medical app that transmits data concerning EU residents to doctors in the US for consultative services would be subject to the GDPR; for the US consultant, the transmission of the data is the prong that triggers application. Moreover, the GDPR applies when a company operates a website that meets Article 3 § 2 by offering goods and services to, or monitoring the behavior of, data subjects in the EU.

The GDPR's data privacy and security obligations, requirements, and rights are closing fast on providers in the US. The GDPR goes into effect on May 25, 2018. In the health care arena, US companies must comply with both the GDPR and existing US data security standards. Our blog series will assist with this reconciliation and normalization process for compliance officers and counsel trying to make sense of these overlapping frameworks.

We will start this series by introducing Article 6 and reviewing consent under the GDPR as a lawful basis for processing data. Next, we will analyze the GDPR's definition of consent to help understand its four primary elements and the conditions for consent found in Article 7. Then we proceed to Article 9, discussing the five justifications most relevant to health and medical industry participants that want to process special categories of data, and how those justifications relate to current compliance requirements in the US.

Consent and Article 6

Under the GDPR, data processing is only lawful if and when it falls under one of the six enumerated justifications in Article 6, including consent, performance of a contract, and satisfying legal obligations. We will primarily focus on consent and relevant sections in this review.

Consent is at the core of the GDPR and is an area of expected focus for enforcement. Article 6(1) states that data processing, when relying on consent, is lawful only if and to the extent that (a) the data subject has given consent to the processing of their data for one or more specific purposes. Thus, obtaining valid consent is always preceded by the determination of a specific, explicit and legitimate purpose for the intended processing activity. Generally, consent can only be an appropriate lawful basis if a data subject is offered control and a genuine choice with regard to accepting or declining (without detriment/retaliation) the terms offered.

In the table below, we compare and contrast current regimes in the US regarding consent requirements and the GDPR requirements most relevant to the healthcare industry.

GDPR vs. HIPAA/HITECH & FTC

GDPR | HIPAA/HITECH | FTC

GDPR: Consent is not presumed to be given; it must be actual consent. Generally, consent is only an appropriate lawful basis if a data subject is offered control and a genuine choice with regard to accepting or declining (without detriment/retaliation) the terms offered.

HIPAA/HITECH: HIPAA/HITECH presumes consent to uses and disclosures for treatment, payment, and health care operations in the absence of a patient's instructions to the contrary, if the provider complies with regulatory requirements. The Privacy Rule permits, but does not require, a covered entity voluntarily to obtain patient consent for uses and disclosures of protected health information for treatment, payment, and health care operations. The Privacy Rule requires explicit consent for various uses and disclosures, including research, marketing, and solicitation.

FTC: FTC enforcement of consent requirements (regarding health information) generally applies to ancillary providers and specific categories of clinical records not covered by HIPAA/HITECH. Some circumstances call for shared jurisdiction with other agencies. In addition to the general consumer protection power enumerated in the FTC Act, the FTC has specific enforcement jurisdiction over laws that feature consent obligations, including COPPA.

GDPR: Data processing, when relying on "consent," is only lawful if and to the extent that (a) the data subject has given consent to the processing of their data for one or more specific purposes. Thus, obtaining valid consent is always preceded by the determination of a specific, explicit and legitimate purpose for the intended processing activity.

HIPAA/HITECH: By contrast, an authorization is required by the Privacy Rule for uses and disclosures of protected health information not otherwise allowed by the Rule. An authorization is a detailed document that gives covered entities permission to use protected health information for specified purposes, including research, marketing, and solicitation.

FTC: FTC jurisdiction for health information includes:

Medical billing companies that collect consumers' personal medical information without their consent.

Medical transcription companies that outsourced services without making sure the contractor could reasonably implement appropriate security measures.

Medical billing and revenue management companies that allowed employees who did not need consumer information for their jobs to access it.

Apps that are medical devices and could pose a risk to patient safety if they do not work properly.

GDPR: Member states have freedom to make laws, usually ones relating to special categories, more stringent than the general consent requirements in the GDPR.

HIPAA/HITECH: Under state law, consent is required by most states for constituencies such as minors and HIV/AIDS patients. Under federal law, a complex consent process attaches to select kinds of substance abuse treatment. All such consent requirements preempt HIPAA/HITECH under the applicable state laws.

FTC: Before collecting, using, or disclosing personal information from a minor, you must get their parent's "verifiable consent." Consent must be obtained through a technological medium that is reasonable given the available technology.

[i] See Commission Regulation 2016/679 of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC, 2016 (L 119) 35, 36 [hereinafter General Data Protection Regulation].

[ii] See General Data Protection Regulation at 32-33.

[iii] See id. at 33.

[iv] See id. at 33 (Definition (2)).

Joan M. LeBow is the Healthcare Regulatory and Technology Practice Chair in the Chicago office of Quintairos, Prieto, Wood & Boyer, P.A. Clayton W. Sutherland is a Class of 2018 graduate of the IIT Chicago-Kent College of Law.

Spring Cleaning

By Raymond Fang

Quick, take a guess—how many times do you think you touch your cell phone every day? 50? 100? 200? Wrong. How about over 2,000? That’s right, according to a report from the research firm Dscout, the average American touches their cell phone at least 2,617 times a day. To get this number, the researchers recruited 94 Android users and installed an app on their phones that tracked “every tap, type, swipe and click,” 24 hours a day, for five days straight. Then they divided the total number of touches recorded by the app by the number of days and the number of users to get the average number of touches per person per day—2,617. If you consider the number of times you touch your phone every day in addition to tapping, typing, swiping, and clicking—to pick it up, to put it in your pocket, to check the time, to charge it, and so on—the actual number of touches is probably even higher than 2,617. But why does any of this matter?

To put it simply, cell phones are dirty. Very, very dirty. One study found that cell phones carry 10 times more bacteria than toilet seats. Though most of these bacteria are perfectly harmless because they originate from your skin and your natural skin oils, researchers have still found dangerous bacteria like streptococcus, MRSA, and E. coli on cell phones. Another study found that roughly one out of every six smartphones has traces of fecal matter on it. Yet another study found "between about 2,700 and 4,200 units of coliform bacteria," an indicator of fecal contamination, on eight randomly tested cell phones. For comparison, the recommended limit for drinking water is less than one unit of coliform bacteria per milliliter. Many of these bacteria accrue either when you touch something dirty with your hands and then touch your phone (such as if you take out the trash and then use your phone without washing your hands), or when you expose your phone to a dirty environment (such as if you bring your phone into the bathroom with you, since flushing the toilet releases germs into the nearby environment).

So, if your cell phone is potentially harboring all sorts of nasty bacteria, what can you do about it? While some companies sell $60 UV-light-emitting devices that claim to kill 99% of the bacteria on your phone, the best and most economical solution is probably to wash your hands several times a day, leave your phone out of the bathroom, and wipe down your phone with a moist microfiber cloth daily. If you're really committed to sanitizing your phone, you can also create a 1:1 mix of water and 70% isopropyl alcohol, spray it onto a microfiber cloth, and wipe down your phone with the isopropyl-alcohol-dampened cloth every week. This method is effective for eliminating more dangerous and enduring pathogens like "clostridium difficile (which can cause diarrhea or even inflammation of the colon) and flu viruses" that will not yield to a microfiber cloth moistened only with water. Although Apple's website warns against using "window cleaners, household cleaners, compressed air, aerosol sprays, solvents, ammonia, or abrasives" to clean your phone, researchers found that the isopropyl alcohol mixture is necessary to eliminate the more pesky and dangerous germs. While it may be yet another chore to complete, regular phone cleaning can help provide peace of mind and prevent the spread of germs and disease. Happy spring cleaning!

Raymond Fang, who has a B.A. in Anthropology and the History, Philosophy, & Social Studies of Science and Medicine from the University of Chicago, is a member of the ISLAT team.

Android’s Watching You. Now You Can Watch Back.

By Raymond Fang

On November 24, 2017, Yale Law School's Privacy Lab announced the results of its study of 25 common trackers hidden in Google Play apps. The study, conducted in partnership with Exodus Privacy, a French non-profit digital privacy research group, examined over 300 Android apps to analyze the apps' permissions, trackers, and transmissions. Exodus Privacy built the software that extracts the permissions, trackers, and transmissions from the apps, and Yale's Privacy Lab studied the results. The authors found that more than 75% of the apps they studied installed trackers on the user's device, primarily for the purposes of "targeted advertising, behavioral analytics, and location tracking." Yale's Privacy Lab has made the 25 studied tracker profiles available online, and Exodus Privacy has made the code for its free, open-source privacy auditing software available online as well.

The Exodus Privacy platform currently lacks an accessible user interface, so the average person cannot use the program to test apps of their choosing. Though the Exodus Privacy website does contain a video tutorial of how to "Try it [Exodus Privacy] at home," the video tutorial requires the user to write code on an unknown platform (possibly using the code available on GitHub) to run the privacy auditing software, which requires some knowledge of computer science. Instead, the average person must rely on the reports generated on Exodus Privacy's website. Exodus Privacy's software automatically crawls through Google Play to update tracker and permission data for all the apps in its database, and is constantly adding more apps.

As of December 4, 2017, the Exodus Privacy website has generated reports on 511 apps. These reports yield interesting information about how some very popular apps track your personal information for advertising purposes. Snapchat (500,000,000+ downloads), for example, contains an advertising tracker from the data aggregator company DoubleClick. Spotify Music (100,000,000+ downloads) contains advertising trackers from DoubleClick, Flurry, and ComScore. Exodus Privacy's reports make it hard to tell exactly what data about your social media usage and music preferences these trackers are collecting, saying only that the trackers collect "data about you or your usages." DoubleClick's privacy policy, however, states that it collects "your web request, IP address, browser type, browser language, the date and time of your request, and one or more cookies that may uniquely identify your browser," "your device model, browser type, or sensors in your device like the accelerometer," and "precise location from your mobile device." If cookies are not available, as on mobile devices, the privacy policy states that DoubleClick will use "technologies that perform similar functions to cookies," tracking what you look at and for how long. Obviously, you may want to keep some of this information private for various reasons; however, the widespread use of these advertising trackers in Android apps means that data related to your social media content and music preferences can easily be sold to advertisers and exposed.

Beyond the tracking done on social media and music apps, Exodus Privacy's reports show that some health and dating apps also collect and sell your intimate and personal data. Spot On Period, Birth Control, & Cycle Tracker (100,000+ downloads), Planned Parenthood's sexual and reproductive health app, contains advertising trackers from AppsFlyer, Flurry, and DoubleClick. If you were pregnant, trying to conceive, or even just sexually active, data aggregator companies could conceivably sell that information to advertisers, who might then send you related advertisements. If someone were borrowing your computer or looking over your shoulder, they might see those ads and figure out that you were pregnant, trying to conceive, or sexually active. Such accidental exposure could cause you emotional harm if you were not ready or willing to share that private information with others. Grindr (10,000,000+ downloads), the popular dating app for gay and bisexual men, has advertising trackers from DoubleClick and MoPub. If advertisements reflecting your sexuality started popping up whenever you used the Internet, they might accidentally reveal your sexuality before you are ready to tell certain people, which could cause a great deal of emotional distress.

There is clearly cause for concern when it comes to Android apps’ tracking and selling your personal information. Unfortunately, selling user data to advertisers is a very lucrative and reliable way for tech companies to monetize their services and turn a profit, so it’s hard to envision an alternative system where all of your personal data would be protected from commodification. However difficult it may now be to imagine a world where your privacy is adequately protected in the digital space, it will be up to privacy-conscious consumers, researchers, scholars, lawyers, and policymakers to make that world a reality.

Raymond Fang, who has a B.A. in Anthropology and the History, Philosophy, & Social Studies of Science and Medicine from the University of Chicago, is a member of the ISLAT team.

Blockchain: Web 3.0 or Web 3.No?

By Debbie Ginsberg

Welcome to the brave new world of blockchain. Some say it’s the future lifeblood of the internet and commerce. It will provide the foundation of the most robust information security system ever created. It will allow access to economic tools currently unavailable to billions. You may have seen many articles on blockchain recently. Maybe you’ve never heard of blockchain. Or maybe all you’ve heard about it is the hype.

But what’s a blockchain? The short explanation: It’s a network-based tool for storing information securely and permanently. The information in a blockchain can be authenticated by members of the public, but the information can be accessed only by those who have permission.

Blockchains can take any information—from simple ledgers to complex contracts—and store it in online containers called "blocks." Each block is then run through a cryptographic hash function, which reduces its contents to a unique series of letters and numbers called the "hash," a tamper-evident fingerprint of the block. A block's contents can also be encrypted, so that only users who hold the key can read the information.

The blocks are then linked together. Each block's information includes the hash of the previous block in the chain, along with a time stamp. For example, a hash might look like this: 00002fa5d5500aae9046ff80cccefa. The tools that create hashes use sophisticated cryptographic calculations, varying a value in the block until each hash in a particular chain starts with a set of standard characters, such as 0000.
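To make the linking concrete, here is a minimal Python sketch (an illustration with made-up names, not code from any production blockchain): each block's hash covers its index, a timestamp, its data, and the previous block's hash, and a nonce is varied until the hash starts with the required prefix, here "0000".

```python
import hashlib
import json
import time

def compute_hash(index, timestamp, data, prev_hash, nonce):
    """Hash the block's contents, including the previous block's hash."""
    payload = json.dumps([index, timestamp, data, prev_hash, nonce])
    return hashlib.sha256(payload.encode()).hexdigest()

def mine_block(index, data, prev_hash, prefix="0000"):
    """Try nonces until the block's hash starts with the required prefix."""
    timestamp = time.time()
    nonce = 0
    while True:
        digest = compute_hash(index, timestamp, data, prev_hash, nonce)
        if digest.startswith(prefix):
            return {"index": index, "timestamp": timestamp, "data": data,
                    "prev_hash": prev_hash, "nonce": nonce, "hash": digest}
        nonce += 1

genesis = mine_block(0, "ledger entry #1", prev_hash="0" * 64)
block1 = mine_block(1, "ledger entry #2", prev_hash=genesis["hash"])
print(block1["hash"])  # starts with 0000 and embeds the genesis block's hash
```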

How does this keep information secure?  The blocks are “decentralized,” meaning that different blocks are stored on different computers, creating a distributed network of information. This network is public, so members of the public can see the chain and read the hashes.

Changing the information in any block changes its hash. Because each block records the previous block's hash, the change then cascades through the rest of the chain, altering the hashes of all later blocks as well. Those hashes will no longer start with the standard characters, such as 0000. That means anyone can now see that data in the chain has been compromised.
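Continuing the same sketch, verification is just recomputation: re-derive every hash, check the difficulty prefix, and check that each block still points at its predecessor. Tampering with any block makes the check fail from that block onward.

```python
def chain_is_valid(chain, prefix="0000"):
    """Recompute each hash and confirm every block links to the one before it."""
    for i, block in enumerate(chain):
        recomputed = compute_hash(block["index"], block["timestamp"],
                                  block["data"], block["prev_hash"], block["nonce"])
        if recomputed != block["hash"] or not recomputed.startswith(prefix):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [genesis, block1]
print(chain_is_valid(chain))   # True
genesis["data"] = "tampered"   # alter one block...
print(chain_is_valid(chain))   # False: the recomputed hash no longer matches
```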

Some blockchains are single chains, but many blockchains work by distributing copies of the whole chain in the decentralized network. If the copies don’t agree with one another, the blockchain’s users will elect to accept only those chains that match, and will discard any compromised chains.
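In the same toy sketch, one simple reconciliation rule (an assumption in the spirit of the consensus just described, not a protocol specification) is to keep only the version of the chain that the majority of copies agree on:

```python
from collections import Counter

def resolve_copies(copies):
    """Keep only the chain version that most nodes hold; drop the rest."""
    # Represent each copy by the tuple of its block hashes.
    fingerprints = [tuple(block["hash"] for block in chain) for chain in copies]
    majority, _ = Counter(fingerprints).most_common(1)[0]
    return [c for c, f in zip(copies, fingerprints) if f == majority]
```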


Foundations: Bitcoin

If you’ve heard of blockchain, it’s probably in relation to Bitcoin, an online currency that is recorded in a blockchain. Bitcoin isn’t issued by a government or bank; instead, it is created through sophisticated mathematical algorithms and distributed over a large network.

Bitcoin’s popularity stems from two features not available in most other monetary transactions. First, no intermediary, such as a bank or PayPal, is needed. These intermediaries often take a hefty fee, particularly in international transactions. Users can transfer money directly to each other, and the parties don’t need to trust each other. By using Bitcoin’s blockchain, the parties know their transaction is secure. Second, no copies of the funds are made—as happens in many online transactions—so the funds cannot be “double spent.” The records in the chain containing Bitcoin funds simply point to a new (anonymous) owner when a transaction is made.

While many praise Bitcoin’s anonymity, this trait has given the online currency a somewhat shady reputation. Many ransomware viruses demand that payments be made in Bitcoin. Often, users affected by these viruses don’t know what Bitcoin is, let alone where to buy it. The currency is sold in special online exchanges.

Who Is Using Blockchain?

The financial industry has been investing in blockchain. Some of this investment has been outside the mainstream financial sector. For example, there are now several hundred Bitcoin-type currencies known as cryptocurrencies. A few of these, such as Ethereum and Ripple.com, have been gaining ground on Bitcoin. They may eventually take over a significant part of the cryptocurrency market.

Major financial companies such as JP Morgan Chase are investing in their own blockchain-based applications. However, these applications will likely work somewhat differently than cryptocurrencies. Bitcoin and other online currencies use public blockchains, meaning that some information about the chain can be viewed by anyone. Visitors to Blockchain.info may access any block on the Bitcoin public blockchain. However, information about who owns the currency and how to access it is not public.

Instead, large financial companies are investing in private blockchains. Companies have full control over these blockchains because they are not distributed publicly. The companies themselves control the blockchain network. However, private blockchains might be more vulnerable to hackers because they aren’t distributed as widely as the public chains.

In addition, blockchain now plays a role in distributing intellectual property. For example, Resonate.is uses a blockchain system to manage a music cooperative. Similarly, DotBlockchainMusic.com is using blockchain to create a media file platform that embeds digital rights management. Even Walmart is experimenting with blockchain to better track products from farms and factories to shelves.

Uses in Law

Just as artificial intelligence (AI) has already affected how legal work is done, blockchain also offers several ways to automate and outsource legal processes. Smart contracts have generated the most discussion. These contracts are coded into blockchains and make contract execution work more smoothly.

First, there is only one copy of the contract and all parties have access to it. The contract is completely transparent, and the terms of the contract are coded into the blockchain. It is therefore impossible to create fraudulent or inaccurate copies of the contract because the terms can’t be changed without the agreement of all parties to the contract.

Second, the smart contract can be configured to be self-executing. That is, verifiable events trigger the next stage of the contract. Proof of those events can be added to the chain. For example, Widgette Co. agrees to sell Acme Co. 100 widgets for $1,000 and ship them one week after payment. When Widgette Co. produces the 100 widgets, its system adds this information to the blockchain with a time stamp. Acme Co.'s system pays $1,000 and adds that information to the chain, also with a time stamp. Widgette Co. then ships the widgets, and that time-stamped information is added. Finally, Acme Co. records when it receives the widgets.

The blockchain can even include a dispute resolution mechanism. If a problem arises—such as Acme Co. claiming that Widgette Co. shipped the widgets after two weeks instead of one—the parties can verify what happened by reviewing the information in the blockchain. The chain can then arbitrate the dispute based on preset terms. For example, Acme Co. automatically receives a 1 percent refund for each additional week that shipping is delayed.
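A toy sketch of that Widgette/Acme workflow (hypothetical names and terms taken from the example above; runnable Python rather than any real smart-contract platform) might record time-stamped events and apply the preset refund term automatically:

```python
from datetime import datetime, timedelta

events = []  # a toy, append-only event log standing in for the shared chain

def record(name, when):
    """Append a time-stamped event, as each party's system would."""
    events.append({"event": name, "time": when})

def shipping_refund(price, promised_weeks=1, refund_per_week=0.01):
    """Preset dispute term: 1 percent refund per extra week of shipping delay."""
    paid = next(e["time"] for e in events if e["event"] == "payment")
    shipped = next(e["time"] for e in events if e["event"] == "shipped")
    delay = shipped - paid - timedelta(weeks=promised_weeks)
    extra_weeks = max(0, delay.days // 7)
    return price * refund_per_week * extra_weeks

record("produced", datetime(2017, 9, 1))
record("payment", datetime(2017, 9, 2))
record("shipped", datetime(2017, 9, 16))  # two weeks after payment
print(shipping_refund(1000))              # 10.0: a 1 percent refund
```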

The legal possibilities are not limited to contracts. Lawyers have been considering smart wills that can execute themselves, thereby avoiding probate. Blockchain could also be used in real estate to help avoid relying on third parties in each transaction. For example, a blockchain real estate transaction wouldn't need the services of an escrow firm.

Governments Respond

Governments have started to take notice of the possibilities that blockchain offers. Arizona's governor recently signed an amendment to Title 44, Chapter 26. This amendment allows use of blockchain technology in the state, declaring that "[a] signature that is secured through blockchain technology is considered to be in an electronic form and to be an electronic signature" and "[a] record or contract that is secured through blockchain technology is considered to be in an electronic form and to be an electronic record." Vermont is also working on a bill to allow the use of blockchain technology. Other governments and organizations, including the European Union, are investigating blockchain's possibilities. The Republic of Georgia uses blockchain to secure government transactions involving property. Other governments are considering following suit, including those in Sweden, Honduras, and Cook County, Illinois.

Educational Blockchains

Are blockchains useful only for financial transactions? Absolutely not. Educational institutions are considering putting transcripts and graduation credentials on blockchains. This would permit alumni to easily access their own information and verify its authenticity.

It would also help those students who enroll in classes at different universities to pull their information together into a single source. The Massachusetts Institute of Technology already offers blockchain-based certificates for some programs.

Roadblocks and Possibilities

Despite the many possibilities blockchain offers, it must first overcome several issues before it can be widely implemented. Systems and regulations are already in place for many of the problems blockchain could solve. For example, there has been discussion of using blockchain for health records, yet the current regulatory environment for these records would make creating a new system difficult.

Blockchains are also not easy to implement. Setting up a blockchain requires sophisticated technology skills. Lawyers—particularly lawyers working with self-executing contracts—would need to work with coders to create them.

That said, the use of blockchain will most likely continue to grow, particularly to solve problems involving security and authentication. One area that offers great possibility is using blockchains to create secure online identities that could be used to access online services and password-protected websites. New approaches are needed to prevent ID and data theft, and the blockchain may be just the tool for the job.

This article was originally published in the September/October 2017 [Volume 22, Number 1] issue of AALL Spectrum.

Debbie Ginsberg is the Educational Technology Librarian at the Chicago-Kent College of Law Library.