When Your Car Spies on You


By Lori Andrews

Cars are getting smarter.  Some can show you a video of what is behind you to help you park in a tight spot.  Others can automatically apply the brakes if you are about to run into the car in front of you.

Now cars have a new power.  They can snitch to an insurance company about your driving.  A tracking device can be installed in your car to monitor how, when, and how far you drive.  Progressive and other insurers offer discounts on car insurance to drivers based on data from such devices.

Do you accelerate sharply, corner too closely, travel at night or drive great distances?   Those traits can be used against you and prevent you from getting a discount.  But many of those factors are beyond your control.  If your job requires you to work in the evening, why should you be penalized by your insurer?

Most insurers’ devices are installed in the data port of the car, under the driver’s side of the dashboard, which limits their use to cars sold after 1998.  The Canadian insurer Desjardins, however, uses a mobile phone app, Ajusto, that doesn’t even need to be installed in the car.  Phone apps raise additional issues.  Nothing prevents an insurer from matching data from the driving app with other information on the phone.  Nearly two-thirds of smartphone owners look up health information on their devices.  What if you’ve done a Google search for the side effects of an allergy medication?  The insurer might take that to mean you are using the medication while driving, despite the drug’s warnings about drowsiness.

Who else will ultimately get the driving information?  Will the police want to know who is driving faster than the speed limit?  As a phone app, Ajusto can tap into location information.  Will spouses and employers want to know where the driver has been?  Already, information from toll passes has been used as evidence in criminal cases and divorce cases.  If you get into an accident while using Progressive’s Snapshot device, Progressive will turn over its information about your driving style and history to the court.

These programs to reward safe drivers might actually lead to more accidents.  A friend who used the Progressive device heard a series of beeps from his car if he braked too quickly.  The only way to avoid the beeps was to stay four car lengths behind the car in front of him, but that meant other cars were constantly swerving in front of him.  It also greatly increased the chance of his being rear-ended.

The tracking devices for cars are touted as a way to save you money.  But the data they collect can be used against you.  Progressive announced that it will start charging higher rates to drivers who volunteer to use its Snapshot device but whose driving does not measure up.  Courts can order that you turn over your driving information to someone who sues you.  Tracking devices have real risks. What you might save in premiums, you’ll lose in privacy.

The Thin Red Line of Predictive Genetic Testing in the Military

By Bryan Helwig, PhD

A military segregated by genetics? The possibility is more reality than science fiction, and an issue I encountered while leading a research team for the Department of Defense.  Recent advances in science and technology have produced genetic tests that are low cost, easily performed, and able to produce significant amounts of genetic information about individuals. Once confined to scientific experiments, genetic testing now gives the general public options to trace their family origins from a cheek swab, detect genetic abnormalities prior to birth from a sample of the mother’s blood, and determine their genetic profile using saliva.

Since the mid-1990s the Department of Defense has required that all new recruits provide a DNA sample that can be used for identification purposes. Now, advances in genetic technology are helping to identify gene profiles associated with a predisposition to post-traumatic stress disorder (PTSD) or suicide. The use of genetic testing in this manner is considered predictive genetic testing.

Proponents of predictive genetic testing in the military note the invaluable role testing plays in keeping the armed forces safe. Critics contend that mandatory genetic testing is an invasion of privacy and a violation of civil liberties. They argue that the Genetic Information Nondiscrimination Act (GINA) of 2008, which protects civilians from job-related discrimination based on genetic test results, should also apply to military personnel.  Specifically, §202 and §203 prohibit employment discrimination practices based on genetic information. With few exceptions, §203 reads “it shall be an unlawful employment practice for an employer to request, require, or purchase genetic information with respect to an employee or a family member of the employee . . . ” However, the military is a unique environment in which the needs of the unit are a higher priority than those of the individual, complicating the application of civilian policies such as GINA to members of the armed forces.

Military duty is characterized by physical demands and exposure to environments that are unpredictable and often extreme. As a result, work in military environments can trigger the manifestation of genetic abnormalities that would otherwise remain unknown without diagnostic genetic testing, in which screening targets specific genes that are diagnostic for a condition.

During the last five years, the expansion of genetic testing has been proposed. An advisory panel of independent scientists produced the JASON report in 2010 recommending “The DoD should establish policies that result in the collection of genotype and phenotype data, the application of bioinformatics tools to support the health and effectiveness of military personnel, and the resolution of ethical and social issues that arise from these activities.” The idea is robust and one I frequently encountered during my career directing a Biomedical Research Laboratory for the Department of Defense.

The focus of my team’s work was to better understand how and why the human body responds in extreme environments. For instance, the expression of a subset of genes allows for adaptation to high-altitude, low-oxygen environments such as the mountains. Although not as well established, a similar set of genes also may be advantageous for prolonged work in hot and cold environments. Thus, predictive genetic screening in the military could be used to identify individuals who would have advantageous or disadvantageous physiological responses to hot, cold or high-altitude environments. In addition, the JASON report proposes the use of predictive genetic testing to identify service members at increased risk of blood coagulation abnormalities and bone fracture, differences in tolerance to sleep deprivation, and over two hundred other health-related phenotypes of interest to the military.

Although not widely recognized, each of us undergoes a diagnostic genetic test at birth for phenylketonuria, more commonly known as PKU, an inborn error of protein metabolism that can have profound negative effects on development if not identified early in life. In comparison, the use of predictive genetic screening is in its infancy. Genetic tests are highly accurate in quantifying gene expression; however, use of the results in a predictive capacity is less accurate and often exaggerated by the media.

For instance, a genetic profile that affords natural protection in a hot environment is likely to comprise up- and down-regulation of hundreds or even thousands of genes. Some genes may be affected by health status, nutrition, sleep, and other factors. Thus, the use of predictive genetic testing requires identification of stable gene profiles that serve as accurate predictors of health status and change expression only in the environment being studied. Additionally, many scientists cite a two-fold change in gene expression as significant.  However, a two-fold change is arbitrary and not always indicative of a significant physiological impact. Despite rapid expansion of genomic technology, the reliability of predictive gene profiling remains nascent.
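
To make the fold-change arithmetic concrete, here is a minimal sketch in Python; the gene names and expression values are invented for illustration, and the two-fold cutoff is the conventional threshold discussed above.

```python
import math

# Hypothetical expression levels (arbitrary units) for a few genes,
# measured at baseline and after work in a hot environment.
baseline = {"HSPA1A": 50.0, "IL6": 120.0, "ACTB": 300.0}
heat_exposed = {"HSPA1A": 210.0, "IL6": 160.0, "ACTB": 310.0}

for gene in baseline:
    fold = heat_exposed[gene] / baseline[gene]
    # The conventional (but arbitrary) cutoff: |log2 fold change| >= 1,
    # i.e., at least a two-fold increase or decrease.
    passes = abs(math.log2(fold)) >= 1.0
    print(f"{gene}: fold change {fold:.2f} -> "
          f"{'passes' if passes else 'fails'} the two-fold cutoff")
```

As the preceding paragraph notes, passing such a cutoff says nothing by itself about physiological significance, or about whether the profile stays stable across changes in nutrition, sleep, and health status.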

Even with these scientific gaps, legal and ethical issues need to be addressed before genetic testing achieves an accuracy allowing its use en masse. Initial efforts should focus on privacy, including modification of GINA to protect the privacy of military members in a way similar to the protection afforded the general public. Secondly, if GINA cannot be modified, discussions should be encouraged regarding new policies for predictive genomic testing that address the intersection of military personnel privacy and mission readiness. Deciding how broadly predictive genetic testing should be used by the Department of Defense will be instrumental. Conversations must also include updated policies regarding the handling or even destruction of DNA samples and specimens after military service ends, and related rules governing the almost fifty million samples in the Department of Defense Serum Repository (DoDSR). Such policy decisions should be balanced with the knowledge that the DoDSR is the largest repository of samples in the world and its use in understanding disease has been substantial.

Some service members have refused to provide DNA for inclusion in the DoDSR, and the punishment was harsh, including court-martial, a reduction in rank and loss of pay. The Hawaii District Court held that requiring DNA samples from service members does not violate the Fourth Amendment right to be free from unreasonable searches [Mayfield v. Dalton, 901 F. Supp. 300 (D. Haw. 1995), vacated as moot, 109 F.3d 1423 (9th Cir. 1997)]. Objection to inclusion may become more common if predictive genetic testing is used without privacy protection. The military must revisit the thin red line between privacy and military needs; a line that currently favors minimizing individual needs.

The standard informed consent required whenever biological samples are obtained must also be re-evaluated to better reflect current practices. Informed consent forms should be re-written to allow the service member to give different levels of permission regarding future use of their DNA beyond the required baseline diagnostic screening and identification purposes. Importantly, this option must be revocable at any time, during or after their military career. The military should also consider the alternative of having an external third party perform predictive genetic screening, with the results remaining private and released only under strict criteria. Regardless of the results, policies must be in place to prevent discriminatory practices related to genetic results in military and post-military career advancement.

The military benefit to the Warfighter from genetic testing is significant and, if used responsibly, genetic testing can help protect a soldier’s health. However, many ethical and legal hurdles must be resolved before predictive genetic testing becomes mainstream.  Conversations addressing such issues need to occur now; the issues are central to protecting the privacy of those who keep us safe.

Bryan Helwig, PhD is a first-year law student at Chicago-Kent College of Law (Class of 2017) with an interest in the intersection of intellectual property, genetics and privacy. During the five years preceding law school he directed a Biomedical Research Lab for the Department of Defense.

ISPs as Public Utilities

By Adam Rouse

In late 2010, the Federal Communications Commission (FCC) issued the Preserving the Free and Open Internet Order [1] mandating a set of “net neutrality” policies that required internet service providers (ISPs) to essentially treat all internet traffic the same as it traversed the various individual networks making up the internet as a whole. Verizon filed suit against the FCC, claiming that the FCC had no legal authority to issue the order and that the FCC exceeded the scope of the Communications Act of 1934 and the Telecommunications Act of 1996.[2] The FCC countered by arguing that it was regulating the activity of broadband providers under its ancillary jurisdiction [3] to regulate certain aspects of internet communication services. The DC Circuit Court held that the FCC lacked authority to issue the anti-blocking and anti-discrimination rules that were part of the Preserving the Free and Open Internet Order, effectively gutting it.[4] Many broadband providers were upset with Verizon for filing suit when they did. They were concerned that the FCC, when challenged, could reclassify broadband and wireless internet service providers as Title II Common Carriers, subjecting them to the hundreds of regulations that were already in place for telecommunications providers.[5]

On Thursday, February 26, 2015, the FCC did exactly what internet service providers feared it would: broadband internet service is now classified as a telecommunications service under Title II of the Communications Act.[6] By reclassifying broadband internet service under Title II the FCC has secured its own authority to strictly regulate almost every aspect of the broadband internet industry. Reaction from internet service providers in the cable and telecommunications industry was dour, and the order is expected to be challenged in court. This time, however, with internet service classified as a Title II utility, there is ample backing for the FCC’s legal authority to impose regulations as suggested by the court in the Verizon v FCC case.[7] Regardless of the eventual outcome of the anticipated court cases, the decision by the FCC to reclassify broadband internet to a Title II utility is widely considered the first step in maintaining a fair and open internet that all can take advantage of.[8]

It is critical to understand what the order will require and similarly what it will not. The FCC is required by the U.S. Congress to refrain from enforcing Title II regulations that are not in the public interest. It is within this spirit of public interest that the FCC press release stated that the following provisions of Title II regulations would not be enforced by the new order:

  1. There will not be any rate regulation for broadband internet services. This means that every provider is welcome to set its own pricing provided that it is not anti-competitive or gouging the market – both restrictions that were in place before the order.
  2. There is no change to Universal Service Fund contributions from broadband providers. This means that there will not be any new FCC-imposed fees showing up on consumers’ internet service invoices. A Universal Service Fee for broadband is already under consideration by the FCC and is not impacted by this order.
  3. Internet service providers will not be required to perform “last mile unbundling” services. Currently telecommunications providers must lease out portions of their networks to competitors at wholesale pricing (set by regulation) to foster competition in the telecommunications industry. By not requiring network unbundling in the case of ISPs the FCC is removing the fear of sudden competition from the major internet service providers. They will be allowed, for the time being, to maintain their monopolistic grasp on the major service markets.
  4. Broadband access will remain free from taxation by local and state governments.

The order does the following:

  1. Gives the FCC authority to investigate and resolve consumer complaints made against broadband ISPs.
  2. Applies the core principles of anti-discrimination and no unjust or unreasonable practices or policies. ISPs cannot charge more or offer different levels of service based on any discriminatory practices and they must make their services available where reasonable to do so.
  3. Grants consumers greater privacy rights, restricting the information that ISPs can share with third parties about subscribers without the prior consent of the subscriber.
  4. Ensures that internet service providers that want to expand and grow their networks have fair access to the current utility infrastructure such as telephone poles and underground wiring conduits.

The order further imposes the following regulations that are separate from standard Title II regulation, but are allowed under Title II’s authority.

  1. ISPs may not block access to any legal content on the internet.
  2. ISPs may not throttle (slow down) legal content based on the type of content, application, service, or device – so long as the content is not harmful to the network.
  3. ISPs may not favor paid traffic over non-paid traffic. This is the ending of so-called fast lanes on the internet where some companies or consumers would pay to have their traffic prioritized over the traffic of those who could not afford or chose not to pay.

Finally, the order states that while ISPs can engage in practices that are necessary for reasonable network management, they cannot use network management as a guise for instituting anti-consumer policies such as artificial or arbitrary data caps on plans that were advertised and sold as “unlimited” bandwidth plans. ISPs have admitted that there is no congestion problem on their networks and that the extra fees associated with higher data use are not a cap – they are a method to lower prices for users who use lesser amounts of data in a billing cycle. Consumer advocates see these artificial caps as ways for ISPs to squeeze additional money out of consumers who were told they would have unlimited service. The order specifically states that any policies (including data capping or throttling) enacted for the purposes of network management must be reasonable, take into account the type of technology at issue, and cannot be instituted for a business purpose – such as attempting to profit from consumers who use more of the services they are already entitled to under their plan.

The FCC’s reclassification of broadband internet services to a Title II utility attempts to cement the FCC’s regulatory authority over internet communications – even with the light touch of all the regulations subject to forbearance. Only time will reveal the eventual impact on the internet service industry; however, the policies seem rooted in the desire to foster an open and free internet that exists as a communications and information vehicle for the common person, not just those who can afford to pay for fast lanes of unblocked traffic.

[1] Federal Communications Commission Order 10-201 (2010),  https://apps.fcc.gov/edocs_public/attachmatch/FCC-10-201A1_Rcd.pdf

[2] 47 U.S.C.

[3] John Blevins, Jurisdiction as Competition Promotion: A Unified Theory of the FCC’s Ancillary Jurisdiction, 36 FLA. ST. U. L. REV. 585 (2009).

[4] Verizon v. FCC, 740 F.3d 623 (D.C. Cir. 2014).

[5] Jon Brodkin, ISPs ‘secretly furious’ at Verizon, scared of stronger net neutrality rules, arstechnica.com, October 3, 2014, http://arstechnica.com/tech-policy/2014/10/isps-secretly-furious-at-verizon-scared-of-stronger-net-neutrality-rules/

[6] Federal Communications Commission, Press Release – FCC Adopts Strong, Sustainable Rules to Protect the Open Internet, February 26, 2015, http://transition.fcc.gov/Daily_Releases/Daily_Business/2015/db0226/DOC-332260A1.pdf

[7] See, Verizon, 740 F.3d. 623 (D.C. Cir. 2014).

[8] Haley S. Edwards, “FCC Votes ‘Yes’ On Strongest Net Neutrality Rules,” Time, February 26, 2015, http://time.com/3723722/fcc-net-neutrality-2/

A White House Invitation to Launch Precision Medicine

By Lori Andrews

President Obama at the launch of the Initiative

Last Friday, I was a guest at the White House for President Obama’s launch of the Precision Medicine Initiative.  The goal of the Initiative is to sequence people’s genomes and read the nuances of their genes to determine how to prevent disease or more precisely treat it. The President illustrated how this would work by introducing Bill Elder, a 27-year-old with cystic fibrosis. Bill has a rare mutation in his cystic fibrosis gene, and a drug was fast-tracked at the FDA to target that mutation.  “And one night in 2012, Bill tried it for the first time,” explained President Obama. “Just a few hours later he woke up, knowing something was different, and finally he realized what it was:  He had never been able to breathe out of his nose before.  Think about that.”

When Bill was born, continued the President, “27 was the median age of survival for a cystic fibrosis patient.  Today, Bill is in his third year of medical school.”  Bill expects to live to see his grandchildren.

The Precision Medicine Initiative will involve sequencing the genomes of a million Americans.  Such a project would have been unimaginable if we hadn’t won the Supreme Court case challenging gene patents.  Prior to that victory, genetic sequencing cost up to $2,000 per gene due to patent royalties.  Now it will cost less than ten cents per gene.

Bill Elder at the White House event

The people who volunteer as research subjects for the project may expect cures for their own diseases.  But, even when genetic mutations are discovered, cures are a long way off.   “Medical breakthroughs take time, and this area of precision medicine will be no different,” said President Obama. And despite the fanfare surrounding genetics, researchers often find that environmental factors play a huge role in illness. At the same time the White House was preparing for the launch of the Precision Medicine Initiative, Stanford researchers and their colleagues across the globe were publishing a study in the January 15 issue of the prestigious journal Cell challenging the value of sequencing research.  Their study, “Variation in the Human Immune System is Largely Driven by Non-Heritable Influences,” tested sets of twins’ immune system markers.  The result: Nearly 60% of the immune system differences were based on the environment rather than genes.

Capturing environmental information about the million volunteers will involve invasions of their privacy as their health and behavior are categorized and quantified from every perspective.  Their genetic data will be combined with medical record data, environmental and lifestyle data, and personal device and sensor data.  If not handled properly, this data could be used to stigmatize the research participants or discriminate against them.  Will they be properly informed of the risks in advance?  Will sufficient protections be in place for their device and sensor data, which are often not covered by medical privacy laws such as HIPAA?

At the White House last Friday, President Obama said, “We’re going to make sure that protecting patient privacy is built into our efforts from day one. It’s not going to be an afterthought.” He promised that patient rights advocates “will help us design this initiative from the ground up, making sure that we harness new technologies and opportunities in a responsible way.”

Professor Andrews with Henrietta Lacks’ descendants at the White House

President Obama underscored that commitment by inviting members of Henrietta Lacks’ family to last Friday’s event. In 1951, Henrietta Lacks was dying of cervical cancer.  A researcher at Johns Hopkins University undertook research on her cells without her knowledge or consent (or that of her family).  Her immortalized human cell lines provided the basis for generations of research in the biological sciences, as well as research by commercial companies.  When her husband learned about it years later, he said, “As far as them selling my wife’s cells without my knowledge and making a profit—I don’t like it at all.”

A former Constitutional Law professor, President Obama is aware of the importance of people’s rights.  Let’s hope that his aspiration of an Initiative that guards research subjects’ autonomy and privacy will be honored by the scientists who will actually operationalize the $215 million project.

The City of Chicago: Profiteering From Irregular Yellow-Light Timing?

By Adam Rouse

Anyone who has driven in a red-light camera city long enough has either witnessed the flash of a strobe light documenting another motorist being photographed for an alleged red light violation or experienced the feeling of dread and annoyance of seeing the strobe flash in his or her own rear-view mirror. City officials and red light camera vendors love to talk about how the cameras are all about safety and reducing traffic collisions. Opponents of traffic cameras will often state that it is all about the money – revenue generated for both the private companies and the local government. The truth, it seems, may lie somewhere in the middle.

It isn’t difficult to believe that cities like Chicago, which are facing tough budget decisions and looming debt, would turn to new sources of revenue to boost their income. Red-light cameras can be very lucrative for big cities, especially in California, where drivers pay a massive $480 fine for every violation. Chicago, which leads the nation in the total number of red-light camera enforced intersections, issues administrative fines of $100 per incident for red-light camera citations, generating over $500 million in revenue for the City thus far. Chicago quietly changed its policy of issuing citations when the yellow-light interval was greater than or equal to 3.0 seconds to issuing citations when the yellow-light interval was greater than or equal to 2.9 seconds. While one tenth of a second may not seem an appreciable amount, consider that this change in policy generated an additional $7.7 million for Chicago in the just over half a year it was in effect.

The Chicago Inspector General’s office concluded that the City did not deliberately alter yellow-light times to generate tickets and that the variances in yellow-light timing stemmed from power fluctuations in the traffic signals that were within the acceptable variance. The report from the Inspector General includes a document from PEEK Traffic stating that the acceptable actual display time for a yellow-light programmed to last 3.0 seconds is anywhere from 2.89 seconds to 3.12 seconds, based on the fluctuation of power cycles in the timing circuits. There are reports that, when independently tested, yellow-lights in Chicago can fall outside of the acceptable times noted in the Inspector General’s report. This video, produced by Barnet Fagel, an expert witness in many red light camera cases in Chicago, illustrates the issues with yellow-light times at the intersections generating the most income for Chicago. All of the intersections were timed with yellow-lights lasting less than 3 seconds, and some fell outside of the acceptable range mentioned above.

As questionable as the yellow-light times are in Chicago, municipalities in Florida were caught deliberately reducing their yellow-light durations after a change in language in state law allowed for a shorter yellow-light duration. Florida law bases the yellow-light duration minimums strictly on the speed limit of the streets that make up the intersection. Florida does not require the US DOT-recommended traffic studies to determine the 85th percentile speed at which drivers enter the intersection, or the extra half-second interval for intersections with a high percentage of use by elderly drivers or trucks with loads—both of which require either extra time or distance to come to a complete stop. The result was predictably higher revenue from fines and outrage from Floridians who felt they were duped and unfairly ticketed.

Shortening yellow-light times to the minimum limits may be perfect for lining governmental coffers, but does this practice work for reducing accidents at intersections where red-lights are enforced by automated camera systems? The City of Chicago certainly wants people to think so; according to data it released, crashes at camera-enforced intersections were down by 33% over the 7-year period since the first cameras were installed in 2005. At first glance a 33% reduction in crashes seems to suggest that the cameras really are increasing safety in Chicago.

Figure 1

The problem is the misleading statistical data and methodology that the City of Chicago uses to draw the conclusion that cameras are increasing safety. In the data, every intersection with a camera is included in the comparison of the number of crashes in 2005 to the number of crashes in 2012. Figure 1 shows when all the cameras used to report crash statistics were actually installed. The vast majority of the intersections used by Chicago in its data did not have cameras installed in 2005, so it is entirely possible that crashes at the intersections with cameras installed between 2007 and 2009 were trending down before the cameras were ever installed. Another huge problem with Chicago’s analysis is the lack of any control data. An independent study suggests that accident rates were falling generally during the same period that Chicago analyzes in its report. Thus, there is no reliable way, based on the data presented by the City of Chicago, to infer any sort of causal link between the installation of red-light cameras and an increase in traffic safety at those intersections. With the data presented, it is difficult to draw even a correlative conclusion between the red-light cameras and increased traffic safety.
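
The confound described above can be illustrated with a toy calculation in Python (all numbers invented): a steady citywide decline in crashes produces a large before-and-after drop at camera intersections even if the cameras contribute nothing.

```python
# Toy illustration (invented numbers): a citywide downward trend alone
# can produce a "33% reduction" at camera intersections.
citywide_decline_per_year = 0.055  # assume 5.5% fewer crashes each year
crashes_2005 = 100                 # hypothetical crashes at one intersection

crashes_2012 = crashes_2005 * (1 - citywide_decline_per_year) ** 7
naive_reduction = 1 - crashes_2012 / crashes_2005
print(f"Apparent reduction with zero camera effect: {naive_reduction:.0%}")
# Prints roughly 33%. Only a control group of comparable camera-free
# intersections could separate the citywide trend from any camera effect.
```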

Improperly timing the yellow-light duration can lead to more collisions at an intersection because drivers do not have enough time to safely stop. The Federal Highway Administration (“FHWA,” part of the U.S. Department of Transportation) partnered with traffic engineers and safety experts to create a method to determine safe yellow-light durations. The following formula is a result of the research performed to determine a safe yellow-light duration at intersections: yellow-light duration (in seconds) = t + (1.47 × V85) / (2d + 2Gg). This formula seems complicated until the constants are filled in and explained. The variable t will almost always be 1 because the average reaction time of a driver to a change of a traffic signal is 1 second. This variable should be increased in areas where drivers are known to have a longer reaction time to changing signals. The V85 variable is the speed at which 85% of the traffic travels on the streets that comprise the intersection, assuming that traffic is moving in an unobstructed flow. This 85% speed may or may not be near the speed limit of the streets, and can only be determined by a properly executed traffic study.  The average deceleration of a stopping vehicle is 10 ft/s², which is the constant d in the equation. The last portion of the equation, 2Gg, deals with grades (uphill or downhill slope) at the intersection. Because most, if not all, intersections in Chicago have no significant grade and are relatively flat, it is safe to assume that 2Gg will equal 0 in the following calculations. Thus, when simplified for Chicago intersections, the formula becomes: yellow-light duration = 1 + (1.47 × V85) / 20.

Applying the formula to intersections in Chicago yields some very interesting results. Assuming that traffic is flowing through an intersection at the posted speed limit of 30 mph, the yellow-light time would equal 1 + (1.47 × 30) / 20, or 3.2 seconds. Taking into account the variations in yellow-light duration produced by the variations in power supplied to signals noted in the PEEK report above, the yellow-light should actually be programmed for 3.33 seconds, producing actual yellow-light times of 3.20 to 3.44 seconds, to be considered safe by the FHWA. For intersections where the traffic is flowing at an average approach speed of 35 mph, the yellow-light duration should be 3.57 seconds, or programmed for 3.69 seconds to produce yellow durations between 3.57 and 3.81 seconds. Based on the above formula and calculations, Chicago’s yellow-light durations are set below the safety standards recommended by the FHWA and traffic engineers.
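
The arithmetic above can be checked with a short script. This is a sketch using the constants given in the formula; the padding factor for power fluctuation (about 3.7%, derived from the PEEK figures of 2.89–3.12 seconds on a 3.0-second light) is an assumption.

```python
def yellow_duration(v85_mph, t=1.0, d=10.0, grade_term=0.0):
    """FHWA kinematic formula: Y = t + (1.47 * V85) / (2d + 2Gg).

    t          -- driver reaction time in seconds (1 s on average)
    v85_mph    -- 85th percentile approach speed in mph
    d          -- average deceleration in ft/s^2 (10 ft/s^2)
    grade_term -- the 2Gg term; 0 for flat Chicago intersections
    """
    return t + (1.47 * v85_mph) / (2 * d + grade_term)

POWER_VARIANCE = 0.037  # assumed worst-case downward power fluctuation

for speed in (30, 35):
    minimum = yellow_duration(speed)
    # Pad the programmed time so that the worst-case downward power
    # fluctuation still displays at least the safe minimum duration.
    programmed = minimum / (1 - POWER_VARIANCE)
    print(f"{speed} mph: minimum {minimum:.2f} s, program ~{programmed:.2f} s")
```

For 30 mph this prints a minimum of about 3.2 seconds and a programmed time near 3.33 seconds, matching the figures above.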

Properly administered red-light cameras can have a positive effect on intersection safety provided that the crashes are caused by driver behavior that can be changed by enforcement measures. The FHWA notes that the best results are achieved by combining traffic engineering, driver education, and enforcement measures where needed. If Chicago were truly interested in improving traffic safety, perhaps the half-a-billion dollars collected from the red-light camera program thus far could be used to perform meaningful traffic studies that collect valid data and to fund driver education programs that prevent red-light running and the associated crashes. Once Chicago traffic engineers had proper data and studies to work with, they could adjust the yellow-light times to the applicable safety standards and implement other traffic engineering solutions designed to increase safety, rather than relying on red-light cameras that do little more than generate revenue for the city at the expense of its citizens.

Apple and Google Make the Next Generation of Smartphones More Secure

By Adam Rouse

Apple recently announced that, starting with the release of iOS 8, device encryption would be enabled by default. On the heels of Apple’s announcement, Google also announced that it would be turning on whole device encryption by default with the release of its Android 5 operating system. Previously, on both Apple and Android devices a consumer would have to go into the settings of the device and enable encryption. Apple and Google added that neither company would hold the keys to the kingdom by maintaining cryptographic keys capable of decrypting secured devices. Apple states that there is no longer a way for the company to decrypt a locked device, even if presented with a valid warrant from law enforcement personnel. Google also reiterated that Android devices have never stored cryptographic keys anywhere other than the encrypted device. Thus, Google also claims that it cannot decrypt an encrypted device for law enforcement, even when presented with a valid warrant.

Even though device encryption by default provides additional protection, a lock is only as strong as the key required to unlock it. Apple and Android devices (which together make up 96.4% of the world cellular device market) will, as part of the device encryption, ask the user to create some sort of passcode the first time the device is powered on. This passcode should be a strong password. All of the device encryption in the world can’t help you if all it takes to unlock your device is typing “1234” into the PIN field. On average, a 4 digit PIN on an Android device can be broken in just under 17 hours using a commonly available phone hacking tool. Interestingly, increasing the PIN to a 10 digit number ups the time required to brute force unlock the device to just less than 2 centuries. Apple iOS devices fare a bit better because they lock users out for successively longer times after repeated incorrect PIN entries. Both Android and Apple iOS devices can also be set up to use an alphanumeric password to access the device. While an alphanumeric password offers better security for the device, it is much less convenient to type a full password than to enter a PIN code.

Smartphones suffer from the same security dilemma that all computing devices do: securing the device and the data within often makes for an inconvenient end user experience. On average, people check their smartphone or other mobile device 150 times a day. While Apple and Google could require complex passwords for lock screens to greatly improve security, the consumer backlash could very well be crippling. It’s doubtful that the average consumer would want to type “dR#41nfE” on a smartphone keyboard 150 times a day just to check email or retrieve a text. There is a middle-of-the-road solution that could bridge the gap between effortless convenience and good security practice.

Apple and Google could require a unique, strong password to decrypt the device when it powers on, but allow a more convenient PIN or password to be used for the screen lock. Another feature could be added to the devices that would automatically power them down if an incorrect password or PIN was entered 10 times in a row. This feature would make it much less likely that someone could guess or brute force the screen lock password or PIN, forcing even complex forensic programs to brute force attack the more complex and secure power-on password. Incidentally, it would take about 14 years to brute force guess “dR#41nfE” on a computer capable of trying 2.6 million passwords per second. Any 4 digit PIN would take less than a second on the same computer. Thus, while the transition to encryption by default is a wonderful leap in the right direction for privacy minded consumers, the addition of the ability to have a complex power-on password separate from the lock screen credentials would help protect privacy while not being so inconvenient that people simply disable the security feature.
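
The arithmetic behind these estimates is simple keyspace division, sketched below; the guess rate comes from the figure cited above, while the size of the password character set is an assumption (the exact number of years depends heavily on it).

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def exhaustive_search_seconds(charset_size, length, guesses_per_second):
    """Worst-case time to try every combination of the given length."""
    keyspace = charset_size ** length
    return keyspace / guesses_per_second

RATE = 2.6e6  # guesses per second, per the figure cited above

pin_time = exhaustive_search_seconds(10, 4, RATE)  # 4-digit PIN
pwd_time = exhaustive_search_seconds(80, 8, RATE)  # 8 chars, assumed ~80-symbol set

print(f"4-digit PIN:     {pin_time:.4f} seconds")
print(f"8-char password: {pwd_time / SECONDS_PER_YEAR:.0f} years")
# The PIN falls in well under a second; the password takes on the
# order of a decade or more, consistent with the ~14-year figure above.
```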

While moving to whole device encryption is commendable for Apple and Google, there are two security features that should be avoided in their current state. These features are little more than security theater; you may feel secure by using them but there are fatal flaws with each that could leave you exposed to the snooping eyes of the government.

The first security feature to avoid is the option on Apple iOS devices (as well as some upcoming Android devices) to use a biometric lock with a thumb or fingerprint. Besides the problem of the sensor technology being defeated by gummy bears, there is a legal issue with a fingerprint lock on your device. Recently, a court in Virginia issued an opinion stating that because fingerprints are non-testimonial in nature, police can legally require a detainee to provide their fingerprint to unlock a device.

A federal judge in the Eastern District of Michigan held that a password is testimonial in nature and thus protected from forced disclosure to the government by the Fifth Amendment (which applies to the states via the 14th Amendment). Justice Stevens in U.S. v. Hubbell distinguished between someone being forced to provide a key to a lockbox and being forced to reveal the combination to a safe. Providing a key to the government is a physical act; the key exists independently of the mental processes of the person who possesses it. Conversely, a password exists exclusively in the realm of a person’s mind and thus is testimonial in nature and protected under the 5th and 14th Amendments. Justice Stevens also stated in Hubbell that the act of providing physical evidence such as forcing someone “to put on a shirt, to provide a blood sample or handwriting exemplar, or to make a recording of his voice” was wholly separate from compelling someone to provide testimonial knowledge.

Thus, passwords and PINs appear to be protected by the 5th and 14th Amendments as testimonial in nature because they exist as the exclusive result of your own mental process. You may, however, be required to provide physical attributes such as fingerprints, a voice sample, or a photograph to the police, who could then use the sample like a key on a biometric lock, as suggested by the court in Virginia.

The second security feature to avoid is Android’s pattern unlock feature. This option displays 9 dots on the screen and allows you to draw a pattern connecting between 4 and 9 of the dots. This pattern serves as the method to unlock the phone in place of a typed PIN or password. The pattern lock might appear to cause the government problems when trying to access data on a pattern-locked phone. The issue is that Google can simply reset the lock pattern on the phone when presented with a court order requiring it to do so. Thus, while the pattern may initially stifle prying government eyes from peering into the locked device, the protection is lost when a warrant is issued with an order for Google to reset the pattern so the device can be unlocked. Google cannot reset a PIN or password the same way.

Of course, all of the device security in the world can’t protect your data in the cloud from snooping eyes. Most cell phones today store various amounts of data in the cloud automatically without any user intervention. For example, when creating contacts on an Android phone you have the option to associate them with the Google account on the phone. This option is great if you switch phones or otherwise lose access to your original phone. But it also means that the government doesn’t need to take or unlock your phone to see your contact information. They can simply show up at Google with a warrant, and you may never know that they were there. In fact, Apple and Google are perfectly able and willing to hand over cloud-stored data to law enforcement, sometimes proactively.

You can disable the cloud storage features of your Apple or Android device entirely, or simply choose what you are willing to store in the cloud for convenience and what information you wish to remain truly private. Overall, the decision of both Apple and Google to enable device encryption by default in their new operating systems is a great step forward in the struggle for privacy in the digital age, but consumers also need to do their part and use smart, strong passwords to help protect their privacy.

Digital Sexual Assault: A Disturbing Trend

By Colleen Canniff

“Hopefully the Class of 2018 is paying attention, because otherwise the UEA is going to have to rape harder…”

In the fall of 2014, the incoming University of Chicago freshman class and the wider University of Chicago (UC) community were targeted and threatened when an anonymous hacker group, the UChicago Electronic Army (UEA), posted the name of a sexual assault survivor and activist, purportedly in retaliation for the posting of the Hyde Park List.  The Hyde Park List, posted to Tumblr, was compiled by and for UC students and contained a list of UC students who, according to the list, are “individuals we would warn our friends about, because of their troubling behavior towards romantic or sexual partners.”  The original purpose of the list was to “[keep] the community safe—since the University won’t.”  One could argue that the UEA’s actions were out of a concern about denying due process to the students named.  But their method—threatening rape, and naming and identifying a victim of sexual assault—is a prime example of a troubling trend on the internet: using threats of sexual violence to silence individuals with whom a group of people doesn’t agree.  As threats go online, the law is racing to keep up.

The UC incident is just the tip of the digital-assault iceberg.  Other reports of online sexual assault, such as the iCloud hack, theft, and online release of dozens of celebrities’ personal nude photos, YouTuber Sam Pepper’s videos of sexually harassing women on the street, and the continued threats against Anita Sarkeesian, founder of Feminist Frequency (a website where she discusses and critiques female representations in videogames), exemplify the pervasiveness of online sexual violence against women.

The Cold Truth, Revisited: Egg Freezing as an Employee Benefit

By Nadia Daneshvar

Faced with attracting women to join their predominantly male workforces, Facebook decided to offer an unusual benefit: up to $20,000 in coverage toward egg freezing procedure and storage costs for female employees. Apple similarly plans to offer both full- and part-time female employees the same coverage starting in January 2015. While many say the companies have taken a step toward gender equality in the workplace, others see it as a step in the wrong direction.

The American Society for Reproductive Medicine’s recent revocation of egg freezing’s “experimental” status has caused an increase in the number of women opting to freeze their eggs for social or nonmedical reasons before they reach “advanced maternal ages.” But egg freezing does not come without risks and stigma. Nearly all stages involve risks (e.g., hormone injections and extraction, transfer, and gestation), and after two rounds of egg retrieval the chance of live birth is just over 20% if eggs are harvested at or before age 25, decreasing with age: only 16% of women who underwent two rounds of egg freezing at age 30 will have live births. Other considerations include the increased risks associated with pregnancy at older ages.

DRONE SEASON: Can You Shoot Down a Drone That Flies Over Your Property?

By Michael Holloway

As unmanned aerial vehicles (UAVs) – drones – become an increasingly common sight, more and more people wonder whether they may legally shoot down a drone flying over their property.  The question is not confined to a radical fringe: at a 2012 Congressional hearing on drones, U.S. Representative Louis Gohmert asked, “Can you shoot down a drone over your property?”  Separately, conservative pundit Charles Krauthammer offered: “I would predict—I’m not encouraging—but I predict the first guy who uses a Second Amendment weapon to bring a drone down that’s been hovering over his house is going to be a folk hero in this country.”

Traditionally, under the ad coelum doctrine, a property owner had control over his property “from the depths to the heavens.”  According to Black’s Law Dictionary, “Cujus est solum, ejus est usque ad coelum et ad inferos – to whomever the soil belongs, he owns also to the sky and the depths.”  But that changed with the advent of the airplane.  In 1926, Congress passed the Air Commerce Act, now codified at 49 U.S.C. § 40103(a)(1), which gave the federal government “exclusive sovereignty of airspace of the United States.”  In United States v. Causby, 328 U.S. 256, 261 (1946), Justice William Douglas wrote that the ad coelum doctrine “has no place in the modern world.”  Rather, with the advent of air travel, the national airspace is akin to a “public highway.”  But despite this, a property owner retains exclusive control over the space he or she can reasonably use in connection with the land, and may be entitled to compensation if the government encroaches on this airspace.  Similarly, as the Ninth Circuit pointed out in Hinman v. Pacific Air Transport, a person may become liable to a property owner for trespassing on this space.

Nor are these merely idle threats: a group of animal rights activists in Pennsylvania has repeatedly had its drones shot down while aerially videotaping “pigeon shoots” at a private club.  In April 2014, the town of Deer Trail, Colorado, voted on a proposed ordinance to issue drone hunting licenses; the ordinance offered a $100 bounty for shooting down drones and bringing in “identifiable parts of an unmanned aerial vehicle whose markings and configuration are consistent with those used on any similar craft known to be owned or operated by the United States federal government.”  The initiative ultimately lost badly, with 73% of voters opposed.

Law professor Greg McNeal writes that a person shooting down a government or commercial drone would commit a violation of 18 U.S.C. § 32, which states that anyone who damages or destroys any aircraft in flight in the United States has committed a crime punishable by up to twenty years in prison or a fine of up to $250,000.  McNeal’s analysis assumes that drones constitute “aircraft” within the meaning of the statute, but that has recently come into question.  In March 2014, a National Transportation Safety Board (NTSB) administrative law judge set aside the Federal Aviation Administration (FAA)’s first-ever fine against a commercial drone operator, finding that the small drone at issue was only a “model aircraft,” and not an “aircraft” within the FAA’s regulatory authority.  The drone’s operator, Raphael Pirker, had been hired by a promotional company to shoot aerial video over the University of Virginia campus.  According to the FAA’s complaint, Pirker operated the drone recklessly, including causing one pedestrian to take “immediate evasive action” to avoid being hit.  The FAA fined Pirker $10,000 for operating the drone “in a careless or reckless manner so as to endanger the life or property of another” in violation of 14 C.F.R. § 91.13.

The ALJ tossed the fine, pointing to a 1981 “advisory circular” on model aircraft issued by the FAA, which provided model aircraft operators with voluntary advice such as to maintain distance from populated and noise-sensitive areas, fly below 400 feet, and cooperate with nearby airports.  In the ALJ’s view, the advisory circular represented a binding statement of policy by the FAA that model airplanes were exempt from its general regulatory authority over “aircraft,” a position it could not change later without going through a notice-and-comment period and implementing formal regulations under the Administrative Procedure Act (5 U.S.C. §§ 500 et seq.).

There are problems with the ALJ’s decision.  It ignores that Congress, by the statute’s clear terms, gave the FAA express authority to regulate all “aircraft,” defined expansively in 49 U.S.C. § 40102(a)(6) as “any contrivance invented, used, or designed to navigate, or fly in, the air.”  Ordinarily, when a statute’s terms are clear, it is considered improper for a judge to engage in more subtle acts of interpretation, and the statute here could not be clearer.  While the ALJ considered it a “risible argument” that someone could face FAA enforcement for flying a balsa wood glider or paper airplane without the FAA’s permission, such is the power Congress gave to the FAA in 1926.  The case is currently on appeal before the full NTSB.

In any case, whether or not shooting down a drone could result in a 20-year prison term or a quarter-million dollar fine, it is certainly a bad idea.  As the FAA has stated, shooting down a drone “could result in criminal or civil liability, just as would firing at a manned airplane.”   Expressing your concerns directly to your friendly neighborhood drone pilot is surely a better remedy.

Proposed Chicago Data Sensors Raise Concerns over Privacy, Hidden Bias

By Michael Holloway, John McElligott

Beginning in mid-July, Chicagoans may notice decorative metal boxes appearing on downtown light poles.  They may not know that the boxes will contain sophisticated data sensors that will continuously collect a stream of data on “air quality, light intensity, sound volume, heat, precipitation, and wind.”  The sensors will also collect data on nearby foot traffic by counting signals from passing cell phones.  According to the Chicago Tribune, project leader Charlie Catlett says the project will “give scientists the tools to make Chicago a safer, more efficient and cleaner place to live.” Catlett’s group is seeking funding to install hundreds of the sensors throughout the city.  But the sensors raise issues concerning potential invasions of privacy, as well as the creation of data sets with hidden biases that may then be used to guide policy to the disadvantage of poor and elderly people and members of minority groups.
