Step Aside, States?

By Erik Jones

In the fall of 2005, 44 state attorneys general came together to send a joint letter to Congress and share their concerns about a proposed federal law on data breach notification. Such letters are exceedingly rare, as state attorneys general come from competing political parties and states with varied interests. However, nothing brings state government officials together like a congressional effort to limit states’ power to assist their own residents. And that is just what Congress was doing.

At the time, a number of states had already passed their own data breach notification laws, or were in the process of doing so. The state laws were and are popular because they are based upon a simple but powerful argument. When a company suffers a data breach that includes consumers’ sensitive personal information, it should be required to inform the affected consumers so the consumers can quickly take steps to limit any potential fraud or identity theft stemming from the breached information.

In response to a growing number of data breaches in 2005, including an extensive and widely publicized breach suffered by a data broker named ChoicePoint, members of Congress were also pushing bills that would require companies to notify consumers when their sensitive personal information was subject to a data breach. The state attorneys general supported Congress’s effort, as many had championed the same law in their respective states. But to their disappointment, the legislation also included provisions that would nullify existing state laws on data breach notification and prevent states from passing additional laws on the matter in the future. When Congress does this, it’s known as pre-emption. The prospect of pre-emption alarmed the state attorneys general because of its potential negative impact for consumers.

I have now experienced both sides of this debate. I’ve worked at both the federal and state levels, and I’m currently an assistant attorney general for Illinois. And I share the concerns over pre-emption raised by those state attorneys general nearly a decade ago.

Thankfully, these fears have not yet been realized. Congress has failed to pass a data breach notification law, leaving state laws intact. This failure can be attributed, in part, to Congress’s own disagreement over pre-emption. But that may soon change, as the conditions might be right for data security legislation to move in this Congress. A rash of large, costly data breaches has galvanized public interest in the issue. One political party controls both the Senate and the House of Representatives. And President Obama has rightfully put the issue at the top of his agenda.

During a recent speech at the Federal Trade Commission, and again in the State of the Union this week, President Obama called on Congress to pass a national law on data breach notification. For Congress, the challenge will not be deciding whether to pursue such a law, which is widely supported. The difficult part will be determining the role states will play in data security moving forward.

In order to pass a national law on data breach notification, Congress will have to decide what to do about the 47 state laws on breach notification that have already been enacted—only Alabama, South Dakota, and New Mexico do not have laws requiring data breach notification. The president has argued for the need to end, in his wording, this “patchwork” of state laws. But such a move, if done incorrectly, could be a disaster for consumers.

The state laws on breach notification have been critical for consumers. They are the reason consumers were made aware of the significant data breaches that caught Congress’s attention in 2005. And they are the reason millions of consumers were notified of the payment card breaches that Target, Home Depot, and other large retailers suffered more recently. Without the state laws, companies would not have been legally obligated to notify their customers of the breaches.

In their letter to Congress in 2005, the state attorneys general made similar points, noting that “states have been able to respond more quickly to concerns about privacy and identity theft involving personal information, and have enacted laws in these areas years before the federal government.” The state attorneys general also looked to the future and predicted that pre-emption would interfere with “state legislatures’ democratic role as laboratories of innovation.” Time has validated their assertion.

Over the past decade, as states have developed expertise on the issue, they have also updated their laws to address problems and to adapt them to changes in technology. With the growth of cloud computing and e-commerce, Florida and California have included breaches of login information for online accounts as triggers for a notification requirement. In response to the increased use of fingerprint-reading software, Iowa, Nebraska, North Carolina, and Wisconsin have mandated notification if a breach of biometric information occurs. More than 30 states have enacted laws requiring companies to dispose of sensitive data securely, and a number of states are now requiring companies that handle sensitive personal information to develop reasonable data security practices to protect it.

States have also passed laws that require companies suffering breaches to provide notice directly to the state attorney general. Such a requirement, for example, has enabled California to maintain a website of data breaches affecting California residents, which any state resident can access. There are thousands of data breaches on the list. Some are national in scope, but most are local or regional in nature and not covered by the national media. The list helps ensure California residents have the opportunity to learn about the data breaches that have affected them.

If Congress had succeeded in pre-empting state law in 2005, it is likely that none of these protections would exist. States would have been precluded from enacting them. And given the difficulty Congress has had passing a simple data breach notification law, Congress would also likely have had a difficult time updating or expanding the law.

For four years I served in various capacities for the Senate Committee on Commerce, Science, and Transportation, while my boss, Sen. Jay Rockefeller, D-W.Va., was working to pass consumer protection legislation on data security and legislation to protect our nation’s most critical infrastructure from cyberattacks. Throughout my time with the committee, I had a front-row seat for the debate over pre-emption.

On one side were the consumer advocates, who were concerned that a weak national law, combined with pre-emption, would mean fewer protections for consumers. On the other side was the business community, which complained that meeting the requirements of nearly 50 separate laws on breach notification was inefficient and burdensome. At the time, I thought I understood the costs and benefits of pre-emption. I now know that I did not.

In 2013 I took a position working for Illinois Attorney General Lisa Madigan. Through it, I have experienced firsthand the important role states play for consumers. State attorneys general hear directly from the residents they serve on a daily basis. In Illinois, thousands of residents have asked our office for help with data security and identity theft. They have not asked that we step aside so that the federal government can handle it.

This year Attorney General Madigan will be proposing a number of updates to Illinois’ data breach notification law. These updates are based upon the lessons we have learned through our efforts to enforce our data breach notification law and consumer protection laws. It would be a shame if we were prevented from using these insights and pursuing these updates, which are designed to protect consumers, because of an overly broad pre-emption provision in federal law.

While a national law on data breach notification is long overdue and very much needed, a perverse outcome is possible, in which Congress pre-empts states and at the same time passes a weak notification law that provides consumers with notice of data breaches only when very specific conditions are met. If not narrowly tailored, a pre-emption provision could place a wedge between consumers and the very state agencies that serve them.

This piece originally appeared in Slate’s Future Tense section.

Erik C. Jones is the policy director and an assistant attorney general in the Illinois Attorney General’s Office and an adjunct professor at IIT Chicago-Kent College of Law.


This post also appeared on CKPrivacy.org (archived version)

A White House Invitation to Launch Precision Medicine

By Lori Andrews

President Obama at the launch of the Initiative

Last Friday, I was a guest at the White House for President Obama’s launch of the Precision Medicine Initiative.  The goal of the Initiative is to sequence people’s genomes and read the nuances of their genes to determine how to prevent disease or more precisely treat it. The President illustrated how this would work by introducing Bill Elder, a 27-year-old with cystic fibrosis. Bill has a rare mutation in his cystic fibrosis gene, and a drug was fast-tracked at the FDA to target that mutation.  “And one night in 2012, Bill tried it for the first time,” explained President Obama. “Just a few hours later he woke up, knowing something was different, and finally he realized what it was:  He had never been able to breathe out of his nose before.  Think about that.”

When Bill was born, continued the President, “27 was the median age of survival for a cystic fibrosis patient.  Today, Bill is in his third year of medical school.”  Bill expects to live to see his grandchildren.

The Precision Medicine Initiative will involve sequencing the genomes of a million Americans.  Such a project would have been unimaginable if we hadn’t won the Supreme Court case challenging gene patents.  Prior to that victory, genetic sequencing cost up to $2,000 per gene due to patent royalties.  Now it will cost less than ten cents per gene.

Bill Elder at the White House event

The people who volunteer as research subjects for the project may expect cures for their own diseases.  But, even when genetic mutations are discovered, cures are a long way off.   “Medical breakthroughs take time, and this area of precision medicine will be no different,” said President Obama. And despite the fanfare surrounding genetics, researchers often find that environmental factors play a huge role in illness. At the same time the White House was preparing for the launch of the Precision Medicine Initiative, Stanford researchers and their colleagues across the globe were publishing a study in the January 15 issue of the prestigious journal Cell challenging the value of sequencing research.  Their study, “Variation in the Human Immune System is Largely Driven by Non-Heritable Influences,” tested sets of twins’ immune system markers.  The result: Nearly 60% of the immune system differences were based on the environment rather than genes.

Capturing environmental information about the million volunteers will involve invasions of their privacy as their health and behavior are categorized and quantified from every perspective.  Their genetic data will be combined with medical record data, environmental and lifestyle data, and personal device and sensor data.  If not handled properly, this data could be used to stigmatize the research participants or discriminate against them.  Will they be properly informed of the risks in advance?  Will sufficient protections be in place for their device and sensor data, which is often not covered by medical privacy laws such as HIPAA?

At the White House last Friday, President Obama said, “We’re going to make sure that protecting patient privacy is built into our efforts from day one. It’s not going to be an afterthought.” He promised that patient rights advocates “will help us design this initiative from the ground up, making sure that we harness new technologies and opportunities in a responsible way.”

Professor Andrews with Henrietta Lacks’ descendants at the White House

President Obama underscored that commitment by inviting members of Henrietta Lacks’ family to last Friday’s event. In 1951, Henrietta Lacks was dying of cervical cancer.  A researcher at Johns Hopkins University undertook research on her cells without her knowledge or consent (or that of her family).  Her immortalized human cell lines provided the basis for generations of research in the biological sciences, as well as research by commercial companies.  When her husband learned about it years later, he said, “As far as them selling my wife’s cells without my knowledge and making a profit—I don’t like it at all.”

A former constitutional law professor, President Obama is aware of the importance of people’s rights.  Let’s hope that his aspiration for an Initiative that guards research subjects’ autonomy and privacy will be honored by the scientists who will actually operationalize the $215 million project.

Improving Defenses: Data Breaches and Security Standards

By Richard Warner

The recent wave of massive data breaches shows that businesses holding sensitive data need to do a better job of protecting it. That has fueled renewed calls to give businesses an incentive to improve data security by promulgating industry or statutory standards. The irony is that the breaches also show that it is extremely difficult for standards—statutory or industry—to sufficiently improve security. Target, for example, complied with all relevant industry standards but was easily breached.

The problem runs much deeper than the usual concern about industry capture. To begin with, standards are often too specific, addressing just a few of the wide range of problems associated with contemporary network attacks. For example, Target’s point-of-sale systems were PCI (Payment Card Industry) compliant, but that provided no protection for the rest of Target’s complex network. Further, promulgated standards, no matter how wide-reaching, are always behind the curve in the rapidly escalating war of network attack and defense. For example, PCI standards did not, at the time of the Target breach, require that credit card information be encrypted for the milliseconds it took to transfer it from the payment terminal to the network, so the hackers simply recorded the information at that point. Finally, standards are a roadmap for attackers: they reveal what networks guard against and, by implication, what they probably do not.

So should we abandon the idea of using statutes or industry standards to give businesses an incentive to improve data security? That would almost certainly be a mistake since market incentives run the wrong way. Consumers have been unwilling to pay for the added value of security through slightly higher retail prices or credit card fees, and companies dependent on consumer sales don’t offer what consumers don’t want. Consumers end up paying even more to cover the high cost of data breaches, but that fact has not created any “pay more for security” reaction.

So the task is clear: formulate standards with sufficient detail to provide genuine guidance but with enough flexibility to encourage innovation and keep pace with rapid change. It is just that the solution eludes us.


This post was originally published on CKPrivacy.org (archived link)

Apple and Google Make the Next Generation of Smartphones More Secure


By Adam Rouse

Apple recently announced that, starting with the release of iOS 8, device encryption would be enabled by default. On the heels of Apple’s announcement, Google also announced that it would turn on whole-device encryption by default with the release of its Android 5 operating system. Previously, on both Apple and Android devices a consumer would have to go into the settings of the device and enable encryption. Apple and Google added that neither company would hold the keys to the kingdom by maintaining cryptographic keys capable of decrypting secured devices. Apple states that there is no longer a way for the company to decrypt a locked device, even if presented with a valid warrant from law enforcement personnel. Google also reiterated that Android devices have never stored cryptographic keys anywhere other than the encrypted device. Thus, Google also claims that it cannot decrypt an encrypted device for law enforcement, even when presented with a valid warrant.
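The reason the companies can make that claim is architectural: the key that protects the device is derived on the device from the user’s passcode and never leaves it. The sketch below is a minimal illustration of that idea in Python, assuming a hypothetical derive_key helper built on PBKDF2; the real iOS and Android key hierarchies are more elaborate and also mix in hardware-bound secrets.

```python
import hashlib
import os

# Hypothetical sketch: a device-side, passcode-derived encryption key.
# Real iOS/Android key derivation is more elaborate (hardware-bound secrets,
# different primitives), but the principle is the same: the vendor never
# holds anything that can decrypt the phone.

def derive_key(passcode: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Derive a 256-bit key from the user's passcode, on the device itself."""
    return hashlib.pbkdf2_hmac("sha256", passcode.encode("utf-8"), salt, iterations)

salt = os.urandom(16)               # stored on the device, not with the vendor
key = derive_key("dR#41nfE", salt)  # the passcode exists only in the user's head
print(key.hex())
```

Because the key is a function of a secret only the user knows, serving Apple or Google with a warrant produces nothing that can decrypt the handset.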

Even though device encryption by default provides additional protection, a lock is only as strong as the key required to unlock it. Apple and Android devices (which make up 96.4% of the world cellular device market), as part of the device encryption, will ask the user to create some sort of passcode the first time the device is powered on. This passcode should be a strong password. All of the device encryption in the world can’t help you if all it takes to unlock your device is typing “1234” into the PIN field. On average, a 4-digit PIN on an Android device can be broken in just under 17 hours using a commonly available phone hacking tool. Interestingly, increasing the PIN to a 10-digit number ups the time required to brute-force unlock the device to just less than 2 centuries. Apple iOS devices fare a bit better because they lock devices out for successively longer times after repeated incorrect PIN entries. Both Android and Apple iOS devices can also be set up to use an alphanumeric password to access the device. While an alphanumeric password offers better security for the device, it is much less convenient to type a full password than to enter a PIN code.
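A back-of-the-envelope calculation shows why PIN length matters so much: each additional digit multiplies the search space, and therefore the cracking time, by ten. The sketch below assumes a fixed per-guess delay for an on-device cracking tool; the rate is an assumption chosen to land near the 17-hour figure above, so treat the outputs as illustrative rather than authoritative.

```python
# Illustrative only: brute-force time for numeric PINs at an assumed,
# hardware-throttled guess rate. Real tools and devices vary widely.

SECONDS_PER_GUESS = 6.0   # assumption; roughly matches ~17 hours for a 4-digit PIN

def worst_case_hours(num_digits: int) -> float:
    """Time to try every possible PIN of the given length."""
    return (10 ** num_digits) * SECONDS_PER_GUESS / 3600

for digits in (4, 6, 8):
    print(f"{digits}-digit PIN: about {worst_case_hours(digits):,.0f} hours")
```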

Smartphones suffer from the same security dilemma that all computing devices do: securing the device and the data within often makes for an inconvenient end-user experience. On average, people check their smartphone or other mobile device 150 times a day. While Apple and Google could require complex passwords for lock screens to greatly improve security, the consumer backlash could very well be crippling. It’s doubtful that the average consumer would want to type “dR#41nfE” on a smartphone keyboard 150 times a day just to check email or retrieve a text. There is a middle-of-the-road solution that could bridge the gap between effortless convenience and good security practice.

Apple and Google could require a unique, strong password to decrypt the device when it powers on, but allow a more convenient PIN or password to be used for the screen lock. Another feature could be added to the devices that would automatically power them down if an incorrect password or PIN was entered 10 times in a row. This feature would make it much less likely that someone could guess or brute-force the screen lock password or PIN, forcing even sophisticated forensic programs to brute-force attack the more complex and secure power-on password instead. Incidentally, it would take about 14 years to brute-force guess “dR#41nfE” on a computer capable of trying 2.6 million passwords per second. Any 4-digit PIN would take less than a second on the same computer. Thus, while the transition to encryption by default is a wonderful leap in the right direction for privacy-minded consumers, the addition of complex power-on passwords separate from the lock screen credentials would help protect privacy while not being so inconvenient that people simply disable the security feature.
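To see how much a strong power-on password buys, here is a similar sketch using the 2.6-million-guesses-per-second rate quoted above. The character-set sizes are assumptions, and the exact year counts shift with whichever characters an attacker tries, so the figures will not reproduce the 14-year estimate exactly; the contrast with a 4-digit PIN is the point.

```python
# Illustrative only: offline guessing time at the rate quoted in the post.
# The character-set size is an assumption; results shift with it.

GUESSES_PER_SECOND = 2_600_000

def worst_case_years(charset_size: int, length: int) -> float:
    """Time to exhaust every password of this length over this character set."""
    return charset_size ** length / GUESSES_PER_SECOND / (3600 * 24 * 365)

print(f"4-digit PIN: {10 ** 4 / GUESSES_PER_SECOND:.4f} seconds")
for charset in (62, 94):   # letters+digits vs. full printable ASCII
    print(f"8 characters over a {charset}-char set: {worst_case_years(charset, 8):.1f} years")
```

Either way, an attacker forced back onto the power-on password faces years of work, while a 4-digit PIN falls in a fraction of a second.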

While moving to whole-device encryption is commendable for Apple and Google, there are two security features that should be avoided in their current state. These features are little more than security theater; you may feel secure using them, but there are fatal flaws with each that could leave you exposed to the snooping eyes of the government.

The first security feature to avoid is the option on Apple iOS devices (as well as some upcoming Android devices) to use a biometric lock with a thumb or fingerprint. Besides the problem of the sensor technology being defeated by gummy bears, there is a legal issue with a fingerprint lock on your device. Recently, a court in Virginia issued an opinion stating that because fingerprints are non-testimonial in nature, police can legally require a detainee to provide their fingerprint to unlock a device.

A federal judge in the Eastern District of Michigan held that a password is testimonial in nature and thus protected from forced disclosure to the government by the Fifth Amendment (which applies to the states via the 14th Amendment). Justice Stevens in U.S. v. Hubbell distinguished between someone being forced to provide a key to a lockbox and being forced to reveal the combination to a safe. Providing a key to the government is a physical act; the key exists independently of the mental processes of the person who possesses it. Conversely, a password exists exclusively in the realm of a person’s mind and is thus testimonial in nature and protected under the 5th and 14th Amendments. Justice Stevens also stated in Hubbell that the act of providing physical evidence such as forcing someone “to put on a shirt, to provide a blood sample or handwriting exemplar, or to make a recording of his voice” was wholly separate from compelling someone to provide testimonial knowledge.

Thus, passwords and PINs appear to be protected by the 5th and 14th Amendments as testimonial in nature because they exist as the exclusive result of your own mental process. You may, however, be required to provide physical attributes such as fingerprints, a voice sample, or a photograph to the police, who could then use the sample like a key on a biometric lock, as suggested by the court in Virginia.

The second security feature to avoid is Android’s pattern unlock feature. This option displays 9 dots on the screen and allows you to draw a pattern connecting between 4 and 9 of the dots. This pattern serves as the method to unlock the phone in place of a typed PIN or password. The pattern lock appears to cause the government problems when it tries to access data on a pattern-locked phone. The issue is that Google can simply reset the lock pattern on the phone when presented with a court order requiring it to do so. Thus, while the pattern may initially stifle prying government eyes from peering into the locked device, the protection is lost once a warrant is issued with an order for Google to reset the pattern so the device can be unlocked. Google cannot reset a PIN or password the same way.

Of course, all of the device security in the world can’t protect your data in the cloud from snooping eyes. Most cell phones today store various amounts of data in the cloud automatically, without any user intervention. For example, when creating contacts on an Android phone you have the option to associate them with the Google account on the phone. This option is great if you switch phones or otherwise lose access to your original phone. It also means that the government doesn’t need to take or unlock your phone to see your contact information. They can simply show up at Google with a warrant, and you may never know that they were there. In fact, Apple and Google are perfectly able and willing to hand over cloud-stored data to law enforcement, sometimes proactively.

You can disable the cloud storage features of your Apple or Android device entirely, or simply choose what you are willing to store in the cloud for convenience and what information you wish to remain truly private. Overall, the decision of both Apple and Google to enable device encryption by default in their new operating systems is a great step forward in the struggle for privacy in the digital age, but consumers also need to do their part and use smart, strong passwords to help protect their privacy.

DRONE SEASON: Can You Shoot Down a Drone That Flies Over Your Property?

By Michael Holloway

As unmanned aerial vehicles (UAVs) – drones – become an increasingly common sight, more and more people wonder whether they may legally shoot down a drone flying over their property.  The question is not confined to a radical fringe: at a 2012 Congressional hearing on drones, U.S. Representative Louie Gohmert asked, “Can you shoot down a drone over your property?”  Separately, conservative pundit Charles Krauthammer offered: “I would predict—I’m not encouraging—but I predict the first guy who uses a Second Amendment weapon to bring a drone down that’s been hovering over his house is going to be a folk hero in this country.”

Traditionally, under the ad coelum doctrine, a property owner had control over his property “from the depths to the heavens.”  According to Black’s Law Dictionary, “Cujus est solum, ejus est usque ad coelum et ad inferos – to whomever the soil belongs, he owns also to the sky and the depths.”  But that changed with the advent of the airplane.  In 1926, Congress passed the Air Commerce Act, 49 U.S.C. § 40103(a)(1), which gave the federal government “exclusive sovereignty of airspace of the United States.”  In United States v. Causby, 328 U.S. 256, 261 (1946), Justice William Douglas wrote that the ad coelum doctrine “has no place in the modern world.”  Rather, with the advent of air travel, the national airspace is akin to a “public highway.”  But despite this, a property owner retains exclusive control over the space he or she can reasonably use in connection with the land, and may be entitled to compensation if the government encroaches on this airspace.  Similarly, as the Ninth Circuit pointed out in Hinman v. Pacific Air Transport, a person may become liable to a property owner for trespassing on this space.

Nor are these merely idle threats: a group of animal rights activists in Pennsylvania has repeatedly had its drones shot down while aerially videotaping “pigeon shoots” at a private club.  In April 2014, the town of Deer Trail, Colorado, voted on a proposed ordinance to issue drone hunting licenses; the ordinance offered a $100 bounty for shooting down drones and bringing in “identifiable parts of an unmanned aerial vehicle whose markings and configuration are consistent with those used on any similar craft known to be owned or operated by the United States federal government.”  The initiative ultimately lost badly, with 73% of voters opposed.

Law professor Greg McNeal writes that a person shooting down a government or commercial drone would commit a violation of 18 U.S.C. § 32, which states that anyone who damages or destroys any aircraft in flight in the United States has committed a crime punishable by up to twenty years in prison or a fine of up to $250,000.  McNeal’s analysis assumes that drones constitute “aircraft” within the meaning of the statute, but that has recently come into question.  In March 2014, a National Transportation Safety Board (NTSB) administrative law judge set aside the Federal Aviation Administration (FAA)’s first-ever fine against a commercial drone operator, finding that the small drone at issue was only a “model aircraft,” and not an “aircraft” within the FAA’s regulatory authority.  The drone’s operator, Raphael Pirker, had been hired by a promotional company to shoot aerial video over the University of Virginia campus.  According to the FAA’s complaint, Pirker operated the drone recklessly, including causing one pedestrian to take “immediate evasive action” to avoid being hit.  The FAA fined Pirker $10,000 for operating the drone “in a careless or reckless manner so as to endanger the life or property of another” in violation of 14 C.F.R. § 91.13.

The ALJ tossed the fine, pointing to a 1981 “advisory circular” on model aircraft issued by the FAA, which provided model aircraft operators with voluntary advice such as to maintain distance from populated and noise-sensitive areas, fly below 400 feet, and cooperate with nearby airports.  In the ALJ’s view, the advisory circular represented a binding statement of policy by the FAA that model airplanes were exempt from its general regulatory authority over “aircraft,” a position it could not change later without going through a notice-and-comment period and implementing formal regulations under the Administrative Procedure Act (5 U.S.C. § 500 et seq.).

There are problems with the ALJ’s decision.  It ignores that Congress, by the statute’s clear terms, gave the FAA express authority to regulate all “aircraft,” defined expansively in 49 U.S.C. § 40102(a)(6) as “any contrivance invented, used, or designed to navigate, or fly in, the air.”  Ordinarily, when a statute’s terms are clear, it is considered improper for a judge to engage in more subtle acts of interpretation, and the statute here could not be clearer.  While the ALJ considered it a “risible argument” that someone could face FAA enforcement for flying a balsa wood glider or paper airplane without the FAA’s permission, such is the power Congress gave to the FAA in 1926.  The case is currently on appeal before the full NTSB.

In any case, whether or not shooting down a drone could result in a 20-year prison term or a quarter-million dollar fine, it is certainly a bad idea.  As the FAA has stated, shooting down a drone “could result in criminal or civil liability, just as would firing at a manned airplane.”   Expressing your concerns directly to your friendly neighborhood drone pilot is surely a better remedy.

Proposed Chicago Data Sensors Raise Concerns over Privacy, Hidden Bias


By Michael Holloway, John McElligott

Beginning in mid-July, Chicagoans may notice decorative metal boxes appearing on downtown light poles.  They may not know that the boxes will contain sophisticated data sensors that will continuously collect a stream of data on “air quality, light intensity, sound volume, heat, precipitation, and wind.”  The sensors will also collect data on nearby foot traffic by counting signals from passing cell phones.  According to the Chicago Tribune, project leader Charlie Catlett says the project will “give scientists the tools to make Chicago a safer, more efficient and cleaner place to live.” Catlett’s group is seeking funding to install hundreds of the sensors throughout the city.  But the sensors raise issues concerning potential invasions of privacy, as well as the creation of data sets with hidden biases that may then be used to guide policy to the disadvantage of poor and elderly people and members of minority groups.


Why Reporters Need to Learn Cryptography


By Lori Andrews, JD

Julian Assange, Feb. 27, The Media Consortium conference, Chicago, IL

Julian Assange Skyped into the TMC conference to discuss “The Use and Abuse of Whistleblowers” with Juan Gonzales, Democracy Now!; Gavin MacFadyen, Centre for Investigative Journalism; and Bea Edwards, Government Accountability Project.

They use burner phones as they cross borders.  They buy old Lenovo computers because there are fewer backdoors into those computers that allow surveillance.

They are not spies or criminals.  They are investigative reporters trying to get on-the-ground stories to help us understand and sometimes change our world.

Last week, The Media Consortium and IIT Chicago-Kent College of Law joined forces to describe the challenges that reporters face in an era when intelligence agencies such as the National Security Agency and corporations such as Google spy on what each of us is doing on our digital devices.  At the joint workshop, Josh Stearns of Free Press and the Freedom of the Press Foundation reported on how many journalists had been killed and jailed in the last year.  And it’s not just a problem abroad.  According to the World Press Freedom Index, the United States has slipped to number 46 in a ranking of countries on how much freedom it gives its reporters, well below even countries such as Ghana and Uruguay.  amalia deloney of the Center for Media Justice described how surveillance in general disproportionately affects people of color.  She showed a slide of a police tower that one might have guessed was situated in Guatemala or another oppressive nation.  Instead, it was in a primarily African-American neighborhood in Charlotte, North Carolina.

What information is the NSA collecting about activists, reporters and you?  The NSA gathers the phone numbers, locations, and length of virtually all phone calls in the United States.  It collects records of nearly everything you do online, including your browsing history and the contents of your emails and instant messages.  It can create detailed graphs of your network of personal connections.  It can create phony wireless connections in order to access your computer directly.  It can intercept the delivery of an electronic device and add an “implant” allowing the agency to access it remotely.

California’s Revenge Porn Statute: A Start but not a Solution


By Lori Andrews

Susan, a professional woman in her 30s, met a man she thought she’d ultimately marry.  Their relationship was sufficiently intimate that she sent him a naked photo of herself.  When she caught him cheating, she broke up with him.  He took revenge by posting that selfie on a revenge porn website, along with her name, the name of her town, and her social media contact information.  She received messages from complete strangers asking for more naked photos.  As she went about her daily life, she was afraid that one of those men would stalk her.  She worried that her co-workers might have come across the photo.  She knew that if she applied for a new job, that nude photo would come up in a Google search of her name.  She’d been branded with a modern Scarlet Letter.

Across the Web, thousands of people attack their exes by posting disgusting comments about them, warnings not to date them, or nude photos of them.  On October 1, California Governor Jerry Brown signed into law a bill criminalizing what has become known as revenge porn.  The law assesses a thousand-dollar fine in a narrow situation.  It is a misdemeanor for a person to photograph “the intimate body part or parts of another identifiable person, under circumstances where the parties agree or understand that the image shall remain private, and the person subsequently distributes the image taken, with the intent to cause serious emotional distress, and the depicted person suffers serious emotional distress.”

Something’s Rotten in the State of California: Google’s Network “Sniffing” Fails Ninth Circuit’s Smell Test


By Dan Massoglia

It’s a crisp afternoon on the Northwest Side of Chicago.  A white Opel Astra cruises down the block, its roof-mounted camera capturing photos dedicated to Google’s now ubiquitous Street View service.  Far more than taking pictures of streets and sidewalks, however, Google’s cars have been collecting digital information from inside homes as well, covertly sucking down data sent via unsecured wireless routers, picking up emails, passwords, and even documents and videos from the families inside.


NSA SPYING VIOLATES FIRST AND FOURTH AMENDMENTS


By Lori Andrews and Jake Meyer

In a top-secret court order, the United States Foreign Intelligence Surveillance Court in Washington, D.C., ordered Verizon to produce to the National Security Agency (NSA) “all call detail records or ‘telephony metadata’ created by Verizon for communications (i) between the United States and abroad; or (ii) wholly within the United States, including local telephone calls.”

Since we’re Verizon users, this order means that the NSA knows who we called, where we called them from, and for how long.  The NSA even knows that we’ve talked to each other.