Published by Al Saikali

The consequences of a data breach reached new heights last week when Yahoo announced the resignation of its General Counsel in response to a series of security incidents the company suffered.  A fuller explanation of the security incidents and Yahoo’s response can be found in item seven of the company’s 10-K, but here are the highlights:

  • Yahoo suffered three security incidents from 2013 to 2016, one of which involved the theft of approximately 500 million user accounts from Yahoo’s network by a state-sponsored actor in late 2014. The stolen information included names, email addresses, telephone numbers, dates of birth, hashed passwords and encrypted or unencrypted security questions and answers.  (Note that under most, but not all, data breach notification laws, unauthorized access of these data elements would not create a legal obligation to notify affected individuals).
  • An independent committee of Yahoo’s board of directors undertook an investigation with the assistance of a forensic firm and outside counsel.
  • The committee concluded that Yahoo’s information security team knew of the 2014 security incident at that time, but the incident was not disclosed until September 2016.
  • “[S]enior executives did not properly comprehend or investigate, and therefore failed to act sufficiently upon, the full extent of knowledge known internally by the Company’s information security team.”
  • Yahoo knew, as early as December 2014, that an attacker had acquired personal data of Yahoo users, but it is not clear whether and to what extent this information was conveyed to those outside the information security team.
  • The legal team, however, “had sufficient information to warrant substantial further inquiry in 2014, and they did not sufficiently pursue it. As a result, the 2014 Security Incident was not properly investigated and analyzed at the time, and the Company was not adequately advised with respect to the legal and business risks associated with the 2014 Security Incident.”  (Emphasis added).  The 10-K does not identify the “sufficient information” or explain what “further inquiry” would have been required (or why).
  • The committee found “failures in communication, management, inquiry and internal reporting,” which all contributed to lack of understanding and handling of the 2014 Security Incident.
  • The committee also found that Yahoo’s board of directors was “not adequately informed of the full severity, risks, and potential impacts of the 2014 Security Incident and related matters.”

It’s not clear from the 10-K exactly why Yahoo’s General Counsel was asked to step down.  It’s highly unusual for a General Counsel to be held directly (and publicly) responsible for a data breach.  Nevertheless, the outcome raises a couple of questions:  (1) will this represent a new trend for in-house counsel generally, and (2) how will this outcome affect how companies approach investigations of data incidents in the future?

Regarding the latter question, a colleague at another firm suggested that this outcome may make corporate legal departments less inclined to involve themselves in breach response or direct investigations of suspected data breaches.  I disagree.  Looking the other way or sticking one’s head in the sand is never the right response to a data incident.  In fact, the legal department would create bigger problems if it did little or nothing.

So what can a corporate legal department do to minimize its own risks?  Here are a few suggestions:

  • Retain a forensic firm through the legal department or outside counsel in advance of an incident to ensure that resources are available to begin an investigation immediately, and to maximize the applicability of the attorney-client privilege and work product doctrine.
  • Engage outside counsel skilled in privacy and data security law and experienced in helping similarly situated companies prepare for and respond to data incidents. There is a growing glut of lawyers who hold themselves out as privacy experts, so I recommend asking for and contacting references.  Most clients are happy to speak about their level of satisfaction with their outside counsel while avoiding details of the incident that led to the engagement.
  • Prepare written protocols, with the cooperation of your information security department, to guide your investigation when an incident occurs. These protocols are different from incident response plans; they focus specifically on the process of initiating, directing, and concluding an investigation at the direction of legal counsel for the purpose of advising the company on its compliance with privacy and data security laws.  They include rules on communication, documentation, and scope.
  • Engage in real dialogue with the information security officer(s) before an incident occurs, in an effort to identify appropriate rules of engagement for when the corporate legal department should be involved in incident response. Some companies involve legal in every data incident (that’s too much), some don’t involve them at all and maintain that data incidents are entirely within the purview of information security (that’s too little . . . and creates significant legal risks), but the challenge lies in defining the middle ground.  It is easy to say “legal gets involved when Information Security determines that an incident is serious,” but it is often difficult to know at the outset of an incident whether it will become serious, and by the time you’ve figured that out it may be too late.  There is, however, a way to strike that balance.
  • Test, test, test – regularly simulate data incidents to test the protocols, rules of engagement, and incident response plans. I’ve been involved in some phenomenal tabletop exercises, which clients have used to benchmark their response readiness against other similarly situated companies.  I’ve been consistently impressed with one particular forensic firm in this space.  Legal and information security departments can and should work together to undertake these exercises.

Information security officers will not be the only high-level executives to have their feet held to the fire when a data breach occurs.  I predict that C-level executives and boards of directors will increasingly hold corporate legal departments responsible (at least in part) for how the company investigates and responds to a suspected data breach.  So it will be important for legal departments to proactively educate themselves on the legal issues that arise when an incident occurs, identify their roles in the incident response procedure, and prepare to act quickly and thoroughly when the time comes.

 

DISCLAIMER:  The opinions expressed here represent those of Al Saikali and not those of Shook, Hardy & Bacon, LLP or its clients.  Similarly, the opinions expressed by those providing comments are theirs alone, and do not reflect the opinions of Al Saikali, Shook, Hardy & Bacon, or its clients.  All of the data and information provided on this site is for informational purposes only.  It is not legal advice nor should it be relied on as legal advice.

Ever wonder how your credit card gets compromised and how the bad guys get your information? This report on tonight’s episode of 60 Minutes provides an overview of what happens from the moment you swipe your card at the point-of-sale terminal, to the moment the card number is compromised and sold on a black market website, to the moment the bad guy who buys your credit card number online uses it to create a counterfeit card. The report also investigates why most companies learn of these breaches from third parties rather than their own information security teams. I highly recommend it to anyone interested in learning about this risk as this year’s holiday season begins.

 

DISCLAIMER:  The opinions expressed here represent those of Al Saikali and not those of Shook, Hardy & Bacon, LLP or its clients.  Similarly, the opinions expressed by those providing comments are theirs alone, and do not reflect the opinions of Al Saikali, Shook, Hardy & Bacon, or its clients.  All of the data and information provided on this site is for informational purposes only.  It is not legal advice nor should it be relied on as legal advice.

My last post described what the recently passed Florida Information Protection Act (FIPA) will do.  This post analyzes how FIPA differs from Florida’s existing breach notification law and explains why those differences will hurt or help companies that maintain information about Florida residents.  Florida’s Governor must still sign the FIPA into law, but his signature is expected given the unanimous support of FIPA in the state legislature.  Once signed, the law will go into effect on July 1, 2014.  So what do businesses need to know about FIPA?

Attorney General Notification

The first significant difference between FIPA and Florida’s existing breach notification law is that, with some limited exceptions, breached entities will be required to notify Florida’s Attorney General within 30 days of any breach that affects more than 500 Florida residents.  Until now, Florida has been part of the majority of states that do not require notice of a breach to the state Attorney General.

The law also requires breached entities to notify the Attorney General’s office even when the entities decide notification to affected consumers is not necessary because the breach will not likely result in harm to affected individuals.  It remains to be seen whether this change in the law will result in a flood of “non-notifications” to the Attorney General’s office.

The FIPA provides teeth for the Attorney General’s Office to enforce it.  A violation of FIPA may be automatically considered a violation of Florida’s Deceptive and Unfair Trade Practices Act.  Though the FIPA does not create a private cause of action, we could see the Attorney General actively enforce the law against breached entities that fail to meet the law’s requirements.

Broader Definition of PII

Another significant change in Florida law as a result of the FIPA is the expansion of the definition of personally identifiable information (PII). PII will now include a username or email address in combination with a password or security question and answer that would permit access to an online account.  This change is based on a realization that consumers are increasingly storing information online and, unfortunately, often using the same usernames and passwords.  The net result, however, will be an increase in the number of incidents that qualify as data breaches under the law.

Shortening the Breach Notification Period

FIPA also shortens the time a breached entity has to notify affected individuals of a breach.  Currently, breached entities must notify affected individuals “without unreasonable delay” but they have up to 45 days.  The new law requires breached entities to notify affected individuals “as expeditiously as practicable, but no later than 30 days after the determination of the breach or reason to believe a breach occurred,” unless a waiver or authorized delay is issued.

This change raises a couple of concerns for breached entities.  First, while in most instances 30 days may be enough time to notify affected individuals of a breach, in some cases it will not be.  There are many steps that must take place as part of the notification process, including determining the source and scope of the intrusion, identifying what information is affected, identifying who is affected and where they live, and ensuring that the threat has been eliminated.  Adopting a bright-line deadline may end up punishing breached entities that are working as quickly as possible to respond to a breach.

Second, it is not clear under the FIPA what starts the clock running on the 30 days.  When is “determination of the breach” triggered?  Is it when the breached entity reasonably believes an intrusion has occurred?  Is it when the entity knows that PII has been affected?  Is it when the entity knows whose PII has been affected?  I would argue that the clock shouldn’t start running until the entity knows that the PII of a Florida resident has been affected, but we are left to guess how regulators will interpret this requirement.

Notification by Email

A welcome change that the FIPA will usher in is breach notification by email.  This will help significantly reduce the cost of breach notification in matters that involve a large number of Florida consumers.  It is also recognition that the best contact information a company may have for its customers is their email address.

Be Prepared to Turn Over Your Incident and Forensic Reports

Perhaps the most significant change is that the FIPA purports to require breached entities to provide incident reports, data forensic reports, and the company’s policies in place regarding breaches, if the Florida Attorney General’s Office requests them.  These documents sometimes contain unintentionally damaging statements or proprietary information about a company’s security infrastructure that the company would not want to be made public.  And, once disclosed to the Attorney General’s office, the documents may become subject to a public records request, though this bill (which also awaits the Governor’s signature) tries to limit that risk.  As a result of this change, we could see breached entities either not requesting reports at all (out of concern that they will have to disclose them to third parties), or requesting two versions – a sanitized version that contains little information and can be produced to the Attorney General, and a more complete version for internal use.  Neither result could have been what the legislature intended when it passed this law.  It will also be interesting to see how the FIPA will affect the work product and self-critical analysis privileges that apply to data forensic reports prepared at the direction of counsel.

Proactive Security Requirements

The FIPA adds a new type of protection of PII:  it requires that an entity maintaining PII adopt “reasonable measures” to protect and secure the PII.  With this change, Florida joins the minority of jurisdictions that statutorily require entities maintaining PII to adopt safeguards regardless of whether the entity ever suffers a breach.  To be sure, adopting safeguards to protect PII is a good idea regardless of whether it is statutorily required, and the failure to adopt those safeguards could expose a company to an enforcement action by the FTC or state attorney general under the FTC Act or “little FTC Acts,” respectively, even in states where those safeguards are not required.  But the FIPA provides no guidance as to what is meant by “reasonable measures.”  Does this mean encryption?  Password protection?  Are written policies and training required?  Does it differ depending on the size of the breached entity?  Again, we are left to guess.

Some Final Observations

A few closing observations about the FIPA:

  • The definition of a breach is still limited to electronic personal information, so a breach involving purely paper records may not trigger the statute.
  • A violation of the statute is automatically considered a violation of Florida’s Deceptive and Unfair Trade Practices Act, but that violation appears to be enforceable only by the Florida Attorney General and not a private cause of action.
  • A breach now means unauthorized “access” of PII, where before it was defined as unauthorized “acquisition” of PII.  This change broadens the number of scenarios that could be considered a breach.

In short, the FIPA is generally a consumer-friendly law that will increase the number of breaches that require notification, shorten the time by which notification must take place, require that the Attorney General be included in the breach notification process, and demand that companies adopt security safeguards to protect PII regardless of whether they ever suffer a breach.

 

DISCLAIMER:  The opinions expressed here represent those of Al Saikali and not those of Shook, Hardy & Bacon, LLP or its clients.  Similarly, the opinions expressed by those providing comments are theirs alone, and do not reflect the opinions of Al Saikali, Shook, Hardy & Bacon, or its clients.  All of the data and information provided on this site is for informational purposes only.  It is not legal advice nor should it be relied on as legal advice.

The Florida Legislature recently passed the Florida Information Protection Act of 2014 (FIPA).  This post describes the FIPA and analyzes the advantages and disadvantages to businesses governed by the new law.  The FIPA must still be signed by the Governor, but the law received unanimous support in the legislature, so his signature is expected.  Once signed, the law would go into effect in less than two months.

What is the FIPA?  The FIPA will replace Florida’s existing data breach notification law.  It has a reactive component (what companies must do after a breach) and a proactive component (what companies must do to protect personally identifiable information they control regardless of whether they ever suffer a breach).  The FIPA governs “covered entities.”  A covered entity is a commercial entity that acquires, maintains, stores or uses personally identifiable information.  A “breach” triggering the FIPA is the unauthorized access of data in electronic form containing personally identifiable information (PII).  The FIPA applies only to PII in electronic form, though an argument can be made that the secure disposal requirement under the FIPA applies to PII in any form given its use of the term “shredding.”

What is PII?  PII is defined as a first name or first initial and last name in combination with any of the following:

  • social security number;
  • driver’s license or ID card number, passport number, military identification number, or other similar number issued on a government document used to verify identity;
  • a financial account number or credit or debit card number, in combination with any required security code, access code, or password that is necessary to permit access to an individual’s financial account;
  • information regarding an individual’s medical history, mental or physical condition, or medical treatment or diagnosis by a health care professional; or
  • an individual’s health insurance policy number or subscriber identification number and any unique identifier used by a health insurer to identify the individual.

PII also includes a username or email address in combination with a password or security question and answer that would permit access to an online account.  The FIPA does not apply to PII that is encrypted, secured, or modified such that the PII is rendered unusable.
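For readers who model these definitions in compliance tooling, the statutory PII test reads like a simple data check: a name paired with one of the listed elements, or online-account credentials, with an exception for data that has been secured.  The sketch below is a minimal illustration of that logic only, not legal advice; the field names and the is_fipa_pii helper are hypothetical, and real determinations should rest on the statute and counsel.

```python
# Illustrative sketch only -- not legal advice. Field names and this helper
# are hypothetical; a real determination requires the statute and counsel.

SENSITIVE_ELEMENTS = {
    "ssn",                  # social security number
    "government_id",        # driver's license, passport, military ID, etc.
    "financial_account",    # account/card number plus required code or password
    "medical_information",  # medical history, condition, treatment, or diagnosis
    "health_insurance_id",  # policy/subscriber number plus unique identifier
}

def is_fipa_pii(record: dict) -> bool:
    """Rough model of the FIPA PII definition described above.

    True if the record pairs a name with at least one sensitive element, or
    pairs a username/email with credentials permitting access to an online
    account, and the data has not been encrypted or otherwise rendered unusable.
    """
    if record.get("encrypted_or_unusable"):
        return False  # secured data falls outside the statute

    has_name = bool(record.get("first_name_or_initial")) and bool(record.get("last_name"))
    has_sensitive_element = any(record.get(field) for field in SENSITIVE_ELEMENTS)
    has_online_credentials = bool(record.get("username_or_email")) and (
        bool(record.get("password")) or bool(record.get("security_question_answer"))
    )

    return (has_name and has_sensitive_element) or has_online_credentials


# Example: a username and password pair qualifies even without a name.
print(is_fipa_pii({"username_or_email": "jdoe@example.com", "password": "hunter2"}))  # True
```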

Do covered entities have to notify the Florida Attorney General’s Office of a breach?  Yes.  Covered entities must notify Florida’s Department of Legal Affairs (i.e., the Florida Office of the Attorney General) of any breach that affects more than 500 people.  Notice must be provided as expeditiously as practicable, but no later than 30 days after determination of the breach or reason to believe a breach occurred.  An additional 15 days is permitted if good cause for delay is provided in writing to the Attorney General within 30 days after determination of the breach or reason to believe a breach occurred.

The notice to the Attorney General must include:

  • a synopsis of the events surrounding the breach;
  • the number of affected Floridians;
  • any services related to the breach being offered without charge to the affected individuals (e.g., credit monitoring) and instructions as to how to use such services;
  • a copy of the notice sent to affected individuals or an explanation as to why such notice was not provided (e.g., there was no risk of financial harm); and
  • the name, address, telephone number, and email address of the employee or agent of the covered entity from whom additional information may be obtained about the breach.

Additionally, if the Attorney General asks for any of the following, the covered entity must provide it:

  • a police report
  • an incident report
  • a computer forensics report
  • a copy of the policies in place regarding breaches
  • steps that have been taken to rectify the breach

When must affected individuals be notified?  Notice to affected individuals must be made as expeditiously as practicable and without unreasonable delay.  The law allows covered entities to take into account the time necessary to allow the covered entity to determine the scope of the breach of security, to identify individuals affected by the breach, and to restore the reasonable integrity of the data system that was breached.  But even with those considerations, notice to affected individuals cannot take longer than 30 days after determining or having a reason to believe that a breach has occurred.

Two exceptions can permissibly delay or eliminate the obligation to notify affected individuals.  One exception is an authorized delay, which occurs when law enforcement determines that notice to individuals would interfere with a criminal investigation.  The determination must be in writing and must provide a specified period for the delay, based on what law enforcement determines to be reasonably necessary.  The delay may be shortened or extended at the discretion of law enforcement.

The second exception is a waiver, which occurs where, after an investigation and consultation with law enforcement, the covered entity reasonably determines that the breach has not and will not likely result in identity theft or any other financial harm to the affected individuals.  If a waiver applies, the covered entity must document it, maintain the documentation for five years, and provide the documentation to the Attorney General within 30 days after the determination.

How must notice to affected individuals take place and what must it include?  Direct notice to affected individuals can take one of two forms:  it can be in writing (sent to the mailing address of the individual in the records of the covered entity) or it can be by email to the email address of the individual in the records of the covered entity.  In either form, the notice must include:  (a) the date, estimated date, or estimated date range of the breach; (b) a description of the PII that was accessed; and, (c) information that the individual can use to contact the covered entity to inquire about the breach and the PII that the covered entity maintained about the individual.

Can a covered entity provide substitute notice to affected individuals?  If the cost of direct notice would exceed $250,000, more than 500,000 individuals are affected, or the covered entity does not have a mailing or email address for the affected individuals, then substitute notice can be provided.  The substitute notice must include a conspicuous notice on the covered entity’s website and notice in print and to broadcast media where the affected individuals reside.
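For a rough sense of how those substitute notice triggers interact, here is a minimal, hedged sketch; it is illustrative only and not legal advice, the helper name is hypothetical, and the thresholds simply mirror the figures described above.

```python
# Illustrative sketch only -- not legal advice. The helper name is
# hypothetical; the thresholds mirror the FIPA figures described above.

DIRECT_NOTICE_COST_CAP = 250_000      # substitute notice allowed if direct notice would cost more
AFFECTED_COUNT_THRESHOLD = 500_000    # or if more than 500,000 individuals are affected

def substitute_notice_permitted(estimated_direct_notice_cost: float,
                                affected_individuals: int,
                                has_mailing_or_email_address: bool) -> bool:
    """True if any of the statutory triggers for substitute notice applies."""
    return (
        estimated_direct_notice_cost > DIRECT_NOTICE_COST_CAP
        or affected_individuals > AFFECTED_COUNT_THRESHOLD
        or not has_mailing_or_email_address
    )

# Example: 600,000 affected individuals permits substitute notice even where
# direct notice would be affordable and contact information is on file.
print(substitute_notice_permitted(100_000, 600_000, True))  # True
```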

What if the covered entity is governed by HIPAA or some other federal regulations?  Notice provided pursuant to rules, regulations, procedures, or guidelines established by the covered entity’s primary or functional federal regulator is deemed to be in compliance with the notice requirement to individuals under the FIPA. However, a copy of that notice must be timely provided to the Attorney General.  For example, if a company is governed by HIPAA, then its notice pursuant to the Breach Notification Rule will be sufficient to meet the requirements under the FIPA, but a copy of that notice still must be sent to the Attorney General.

Do covered entities have to notify credit reporting agencies?  If more than 1,000 individuals are affected, then notice to all consumer reporting agencies must be provided without unreasonable delay.

What if the breach occurs with a third-party agent (e.g., a vendor)?  A third-party agent is an entity that has been contracted to maintain, store, or process PII on behalf of a covered entity or governmental entity.  If a third-party agent suffers a breach, it must notify the covered entity within 10 days following the determination of the breach or reason to believe the breach occurred.  Upon receiving notice of the breach, the covered entity must then comply with the requirements to notify affected individuals and the Attorney General.  In that case, the third-party agent must provide all information necessary for the covered entity to comply with its notice requirements.  The third-party agent may notify affected individuals and the Attorney General on behalf of the covered entity, but the agent’s failure to provide proper notice is deemed a violation against the covered entity.

Are there obligations other than notification after a breach?  In addition to the reactive component of the FIPA (actions covered entities must take after a data breach), the FIPA also has a proactive component that imposes obligations on covered entities regardless of whether they ever suffer a breach.  Specifically, covered entities must take reasonable measures to protect and secure PII.  Covered entities must also take reasonable measures to dispose, or arrange for the disposal, of customer records containing PII within their custody or control when the records are no longer to be retained.  Such disposal must involve shredding, erasing, or otherwise modifying the PII in the records to make it unreadable or undecipherable through any means.

Who enforces the FIPA and how?  A violation of the FIPA is an unfair or deceptive trade practice subject to an action by the Attorney General under Florida’s Deceptive and Unfair Trade Practices Act against the covered entity or third-party agent.  A covered entity that does not properly notify affected individuals or the Attorney General may be fined up to $500,000 per breach, depending on the number of days in which the covered entity is in violation of the FIPA.  The law creates no private cause of action, nor does the deemed FDUTPA violation appear to support a private action under FDUTPA.

The law will become effective on July 1, 2014 if it is signed by the Governor.

 

DISCLAIMER:  The opinions expressed here represent those of Al Saikali and not those of Shook, Hardy & Bacon, LLP or its clients.  Similarly, the opinions expressed by those providing comments are theirs alone, and do not reflect the opinions of Al Saikali, Shook, Hardy & Bacon, or its clients.  All of the data and information provided on this site is for informational purposes only.  It is not legal advice nor should it be relied on as legal advice.

 

If you have noticed an increasing number of high-profile problems for healthcare organizations with respect to privacy and security issues these last few weeks, you’re not alone.  The issues have included employee misuse of protected health information, web-based breaches, photocopier breaches, and thefts of computers that compromised millions of records containing unsecured protected health information (PHI).  These issues remind us that healthcare companies face significant risks in collecting, using, storing, and disposing of protected health information.

Pharmacy Hit With $1.4 Million Jury Verdict For Unlawful Disclosure of PHI

An Indiana jury recently awarded more than $1.4 million to an individual whose protected health information was allegedly disclosed unlawfully by a pharmacy.  The pharmacist allegedly looked up the plaintiff’s prescription history and shared it with her husband, the plaintiff’s ex-boyfriend.  The lawsuit alleged theories of negligent training and negligent supervision.  The pharmacy intends to appeal the judgment.

Health Insurer Fined $1.7 Million For Web-Based Database Breach

Meanwhile, the Department of Health and Human Services (HHS) recently fined a health insurer $1.7 million for engaging in conduct inconsistent with HIPAA’s privacy and security rules following a breach of protected health information belonging to more than 612,000 of its customers. The breach arose from an unsecured web-based database that allowed improper access to protected health information of its customers.

HHS’s investigation determined that the insurer:

(1) did not implement policies and procedures for authorizing access to electronic protected health information (ePHI) maintained in its web-based application database;

(2) did not perform an adequate technical evaluation in response to a software upgrade, an operational change affecting the security of ePHI maintained in its web-based application database that would establish the extent to which the configuration of the software providing authentication safeguards for its web-based application met the requirements of the Security Rule;

(3) did not adequately implement technology to verify that a person or entity seeking access to ePHI maintained in its web-based application database is the one claimed; and,

(4) impermissibly disclosed the ePHI, including the names, dates of birth, addresses, Social Security Numbers, telephone numbers and health information, of approximately 612,000 individuals whose ePHI was maintained in the web-based application database.

Health Plan Fined $1.2 Million For Photocopier Breach

In another example of privacy and security issues causing legal problems for a healthcare organization, HHS settled with a health plan for $1.2 million in a photocopier breach case.  The health plan was informed by CBS Evening News that CBS had purchased a photocopier previously leased by the health plan.  (Of all the companies to get the photocopier after the health plan, it had to be CBS News).  The copier’s hard drive contained protected health information belonging to approximately 345,000 individuals.  HHS fined the health plan for impermissibly disclosing the PHI of those individuals when it returned the photocopiers to the leasing agents without erasing the data contained on the copier hard drives.  HHS was also concerned that the health plan failed to include the existence of PHI on the photocopier hard drives as part of its analysis of risks and vulnerabilities required by HIPAA’s Security Rule, and it failed to implement policies and procedures when returning the photocopiers to its leasing agents.

I blogged about photocopier data security issues last year, after the Federal Trade Commission issued a guide for businesses on the topic of photocopier data security.  Another resource I recommend to my clients on the topic of media sanitization is a document prepared by the National Institute of Standards and Technology, issued last fall.

Medical Group Breach May Affect Up To Four Million Patients

Lastly, a medical group recently suffered what is believed to be the second-largest loss of unsecured protected health information reported to HHS since mandatory reporting began in September 2009.  The cause?  Four unencrypted desktop computers were stolen from the company’s administrative office.  The computers contained protected health information of  more than 4 million patients.  As a result, the medical group is mapping all of its computer and software systems to identify where patient information is stored and ensuring it is secured.  The call center set up to handle inquiries following the notification of the patients is receiving approximately 2,000 calls each day.

The Takeaways 

So what are five lessons companies should take away from these developments?

  • Having policies that govern the proper use and disclosure of PHI is a first step, but it is important that companies audit whether their employees are complying with these policies and discipline employees who don’t comply, so that a message is sent to everyone in the company that non-compliance will not be tolerated.
  • As technology is upgraded or changed, it is important that companies continue to evaluate any potential new security risks associated with these changes.  An assumption should not be made that simply because the software is an “upgrade” the security risks remain the same.
  • There are hidden risks, such as photocopier hard drives.  Stay apprised of these potential risks, identify and assess them in your risk assessment (required by HIPAA), then implement administrative and technical safeguards to minimize these risks.  With respect to photocopiers, maybe this means ensuring that the hard drives are wiped clean or written over before they are returned to the leasing agent.
  • Encrypt sensitive information at rest and in motion where feasible, and to the extent it isn’t feasible, build in other technical safeguards to protect the information.
  • Train, train, train – having a fully informed legal department and management doesn’t do much good if employees don’t understand these risks and aren’t trained to avoid them. Do your employees know how seemingly simple and uneventful conduct like photocopying a medical record, leaving a laptop unaccompanied, clicking on a link in an email, or doing a favor for a friend who needs PHI about a loved one, can lead to very significant unintended consequences for your company (and, as a result, them)?  Train them in a way that brings these risks to life, update the training and require it annually, and audit that your employees are undertaking the training.

 

DISCLAIMER:  The opinions expressed here represent those of Al Saikali and not those of Shook, Hardy & Bacon, LLP or its clients.  Similarly, the opinions expressed by those providing comments are theirs alone, and do not reflect the opinions of Al Saikali, Shook, Hardy & Bacon, or its clients.  All of the data and information provided on this site is for informational purposes only.  It is not legal advice nor should it be relied on as legal advice.

 

Legislation was introduced in the U.S. Senate late last week that, if passed, would create proactive and reactive requirements for companies that maintain personal information about U.S. citizens and residents.  The legislation, titled the “Data Security and Breach Notification Act of 2013” (S. 1193), creates two overarching obligations:  to secure personal information and to notify affected individuals if the information is breached.  The bill requires companies to take reasonable measures to protect and secure data in electronic form containing personal information.  If that information is breached, companies are required to notify affected individuals “as expeditiously as practicable and without unreasonable delay” if the company reasonably believes the breach caused or will cause identity theft or other actual financial harm.

A violation of the obligation to secure or to notify is considered an unfair or deceptive trade practice that may be investigated and pursued by the FTC.  Companies that violate the law could be fined up to $1,000,000 for violations arising out of the same related act or omission ($500,000 maximum for failing to secure the personal information and $500,000 maximum for failing to notify about the breach of the personal information).

The legislation defines personal information as social security numbers, driver’s license numbers, passport numbers, government identification, and financial account numbers or credit/debit card numbers with their required PIN.  The bill includes a safe harbor for personal information that is encrypted, redacted, or otherwise secured in a way that renders it unusable.

Here are some other important provisions of the legislation:

  • There is no guidance as to what “reasonable measures” means under the obligation to secure personal information, which is problematic (although not very different from state data breach notification laws) because it provides no certainty as to when a company may face liability for failing to adopt certain security safeguards.
  • With respect to the duty to notify, the bill explicitly allows for a reasonable period of time after a breach for the breached entity to determine the scope of the breach and to identify individuals affected by the breach.
  • The legislation would preempt state data breach notification laws, but compliance with other federal laws that require breach notification (e.g., HIPAA/HITECH) is deemed to be compliance with this law.
  • The bill requires that breached entities notify the Secret Service or the FBI if a breach affects more than 10,000 individuals.
  • The bill also allows for a delay of notification if such notification would threaten national or homeland security, or if law enforcement determines that notification would interfere with a civil or criminal investigation.
  • There is no private cause of action for violating the legislation.  The bill is silent as to whether private causes of action based on common law or other statutory claims (e.g., negligence, state unfair trade practices claims, etc.) may be pursued, to the extent such causes of action are recognized.

There remains, however, a big question as to whether this legislation will ultimately become law.  Given the political climate in D.C. and the lack of success of similar federal legislation in the past, the outlook is bleak.  The ambiguity of the required proactive security measures and the lack of clarity as to whether private causes of action may be pursued for non-statutory violations also raise political problems for the legislation on both sides of the aisle.  Nevertheless, there is a growing climate of concern regarding privacy and security issues that may result in this legislation being included within a larger package of legislation on cybersecurity and data privacy.  It will be important to keep an eye on the status of this bill moving forward.

 

DISCLAIMER:  The opinions expressed here represent those of Al Saikali and not those of Shook, Hardy & Bacon, LLP or its clients.  Similarly, the opinions expressed by those providing comments are theirs alone, and do not reflect the opinions of Al Saikali, Shook, Hardy & Bacon, or its clients.  All of the data and information provided on this site is for informational purposes only.  It is not legal advice nor should it be relied on as legal advice.

Until recently, individuals whose information was compromised as a result of a company suffering a data breach faced an uphill battle when suing the company in a class action lawsuit.  Far more often than not, courts dismissed the lawsuits or entered summary judgment in favor of defendants on grounds that the plaintiffs could not establish a cognizable injury, that the claims were preempted by breach notification statutes, or that there was no evidence the data breach (as opposed to some other act of identity theft) caused the plaintiffs’ damages.  I’m still convinced that the pro-defendant environment remains the norm.  Nevertheless, four recent cases are being used to support the argument that the tide may be turning in favor of plaintiffs.

Burrows v. Purchasing Power, 12-cv-22800-UU (S.D. Fla.)

The most recent example is a proposed settlement in a class action lawsuit against Winn-Dixie and one of its service providers arising from a breach of personally identifiable information of Winn-Dixie grocery store employees.  The employees’ personally identifiable information was allegedly compromised when an employee of a company that provided an employee benefit program to Winn-Dixie employees misused his access to the PII and filed fraudulent tax returns with it.

Approximately 43,500 employees filed a class action lawsuit in the Southern District of Florida against Winn-Dixie and its employee benefits service provider.  The lawsuit includes counts of negligence, violation of Florida’s Deceptive and Unfair Trade Practices Act, and invasion of privacy.  Plaintiffs alleged that Defendants failed to adequately protect and secure the plaintiffs’ personally identifiable information, and that the defendants failed to provide the plaintiffs with prompt and sufficient notice of the breach.

The defendants’ attempts to defeat the plaintiffs’ lawsuit on the pleadings failed.  Winn-Dixie was subsequently voluntarily dismissed from the lawsuit and the case proceeded against the service provider, which ultimately entered into a proposed settlement with the plaintiffs, agreeing to pay approximately $430,000 ($225,000 towards a settlement fund, $200,000 in attorney’s fees and costs, and a $3,500 incentive award to the named plaintiff).  The settlement states that it was entered into “for the purpose of avoiding the burden, expense, risk, and uncertainty of continuing to litigate the Action, . . . and without any admission of any liability or wrongdoing whatsoever.”

The settlement requires the service provider to maintain rigorous security safeguards to minimize the risk of a similar incident in the future.  The settlement fund will be divided into four groups:  (1) a tax refund fraud fund (class members who show they were victims of tax refund fraud can be compensated for a portion of lost interest); (2) a tax preparer loss fund (class members can be compensated for fees paid to tax preparers for notifying the IRS of a tax fraud claim or assisting in resolving issues arising from the tax refund fraud, not to exceed $100); (3) a credit card fraud fund (class members who show they were victims of identity theft other than tax refund fraud that resulted in fraudulent credit card charges that the credit card company did not waive, up to $500); and, (4) a credit monitoring fund (class members who receive compensation in any of the previous three groups may receive credit monitoring services for one year).  To “prove” they were victims of fraud, plaintiffs must prepare a statement under penalty of perjury regarding the facts and circumstances of their stolen identity.

The settlement was preliminarily approved by the court on April 12, 2013, and a fairness hearing is scheduled for October 4, 2013.  The amount of money being paid to plaintiffs and their lawyers in this case should give corporate counsel monitoring these lawsuits pause.  The District Court’s order allowing the case to proceed beyond the pleadings phase will likely be used as an instruction manual for plaintiffs in future data breach cases.

Resnick v. AvMed, Inc., 1:10-cv-24513-JLK (S.D. Fla.)

I previously blogged about the U.S. Court of Appeals for the Eleventh Circuit’s opinion that allowed a data breach class action to proceed where the plaintiffs claimed they were victims of identity theft arising from the theft of a laptop computer containing their personal information.  I encourage corporate counsel to read that post to learn more about the factors the Eleventh Circuit looked to in allowing that case to proceed beyond the pleadings phase. That lawsuit remains pending in the U.S. District Court for the Southern District of Florida.

Harris v. comScore, Inc., No. 11-C-5807 (N.D. Ill. Apr. 2, 2013)

Another recent legal development considered by many to be favorable to plaintiffs was a decision by the U.S. District Court for the Northern District of Illinois certifying a class of possibly more than one million people who claim that the online data research company comScore, Inc. collected personal information from the individuals’ computers and sold it to media outlets without consent.  Although the lawsuit did not arise from a data breach, some of the arguments regarding lack of injury and whether class certification is appropriate are the same.  The plaintiffs allege violations of several federal statutes including the Electronic Communications Privacy Act and the Stored Communications Act. The court rejected comScore’s arguments challenging class certification, including its argument that the issue of whether each plaintiff suffered damages from comScore’s actions precludes certification.  The lawsuit remains pending.

Tyler v. Michaels Stores Inc., SJC-11145, 2013 WL 854097 (Mass. Mar. 11, 2013)

The Massachusetts Supreme Judicial Court broadened the definition of the term “personal information” to include ZIP codes.  The court held that because retailers can use ZIP codes to find other personal information, retailers were prohibited by Massachusetts law (the Song-Beverly Credit Card Act) from collecting ZIP codes.  The court also ruled that the plaintiffs did not have to prove identity theft to recover under the statute.  They could instead rely on the fact that they received unwanted marketing materials and that their data was sold to a third party.  The fact that plaintiffs can proceed with their lawsuit without having to show that their information was actually compromised will undoubtedly be used by plaintiffs in data breach litigation to argue that the threshold for injury in such cases is lower than in other cases.

What’s the Takeaway?

What should corporate counsel take from these cases? It is still too early to tell if these cases are outliers or if they mark a new trend in favor of plaintiffs in privacy and data breach cases that will embolden the plaintiffs’ bar.  The most important takeaway for corporate counsel at this stage is that they must, at a minimum, monitor the litigation risks associated with data breaches and other privacy violations so they can advise their companies about these risks, which can in turn consider these risks when building security and privacy into various products and services.

 

DISCLAIMER:  The opinions expressed here represent those of Al Saikali and not those of Shook, Hardy & Bacon, LLP or its clients.  Similarly, the opinions expressed by those providing comments are theirs alone, and do not reflect the opinions of Al Saikali, Shook, Hardy & Bacon, or its clients.  All of the data and information provided on this site is for informational purposes only.  It is not legal advice nor should it be relied on as legal advice.

Another court has weighed in on the issue of what constitutes a cognizable injury in a data breach case. In a lengthy opinion, the U.S. District Court for the Western District of Kentucky in Holmes v. Countrywide Financial Corp. dismissed a lawsuit against Countrywide by plaintiffs who claimed that their personal information had been compromised as a result of the criminal activity of a Countrywide employee. The court ruled that although Plaintiffs had standing, they did not suffer a cognizable injury, and they could not prove the elements of the causes of action pled in their complaint. The opinion is significant for at least two reasons: (1) it lends further support for the position that plaintiffs in data breach cases must show actual, measurable, direct harm to recover, and (2) the degree of analysis and the amount of authority cited by the court could make this a frequently cited opinion in the future.

Background

In 2008, the FBI discovered that a Countrywide employee had stolen sensitive personal and financial information from millions of Countrywide’s customers. The employee then sold that data to a third party, but there was little evidence that the information was actually misused. Countrywide notified the affected individuals and offered two years of free credit monitoring. The lawsuit was filed by two sets of plaintiffs – the first set (the Holmes) purchased credit monitoring services because someone had unsuccessfully sought credit under their names; the second set (the Stiers) spent money to cancel their telephone service as a result of increased solicitations and time spent researching the hazards of identity theft. Neither set suffered actual monetary damages from fraud or identity theft.

Standing

The court first addressed the issue of whether Plaintiffs had standing to sue Countrywide. The court noted that while several other courts have held that plaintiffs who have only suffered an increased risk of identity theft do not have standing, the Sixth Circuit’s opinion in Lambert v. Hartman, 517 F.3d 433 (6th Cir. 2008), compelled the court to conclude that an increased risk of identity theft and credit monitoring satisfied the requisite injury necessary for standing.

Injury

Just because Plaintiffs had standing, however, did not mean that they suffered recompensable injuries. The court concluded that Plaintiffs’ injuries as alleged were not cognizable or recompensable.

First, the court rejected Plaintiffs’ argument that the risk of future identity theft was a cognizable injury. It concluded that such damages were too speculative and might never materialize. The court stated that no lawsuit based on risk of future identity theft has ever proceeded past a motion to dismiss.

The court next considered whether Plaintiffs could recover for credit monitoring services. Plaintiffs attempted to analogize credit monitoring to medical monitoring in a personal injury case where a plaintiff is exposed to a substance that causes no harm at the time but creates an increased risk of future physical harm. The court rejected these damages, too. It first cited a number of cases where expenses for credit monitoring were not considered a cognizable injury. With respect to the medical monitoring analogy, the court cited Kentucky law requiring a plaintiff seeking damages for medical monitoring to have also suffered a present injury. The court rejected Plaintiffs' argument that the fact someone had attempted unsuccessfully to obtain credit using their personal information meant they were at risk for identity theft. The court also rejected Plaintiffs’ reliance on Anderson, which allowed the plaintiffs in a data breach case to recover for the mitigation expenses of card replacement and credit monitoring services because they had suffered “financial injuries that exhibited actual misuse and identity theft.” Here, Plaintiffs suffered no unauthorized charges and there were no attempts to take funds. In other words, according to the court, “the victims in Anderson were faced with a much graver threat to their personal information and resources.” Accordingly, credit monitoring expenses were not compensable injuries.

Next, the court considered whether telephone cancellation fees incurred to avoid the bombardment of telemarketers constituted a cognizable injury. The court rejected these damages, relying on cases where the courts held that no cognizable injury occurred where the only harm is an increase in junk mail and unwanted telephonic/electronic correspondences.

Finally, the court considered whether time spent by Plaintiffs monitoring their credit was a compensable injury. In rejecting those damages, the court relied on decisions in other jurisdictions that refused to recognize such damages as recompensable.

Causes of Action

After rejecting all of Plaintiffs' damages, the court nevertheless proceeded to address whether Plaintiffs' causes of action were applicable theories of recovery in a data breach case such as this one.

Plaintiffs sued Countrywide for unjust enrichment, arguing that Countrywide collected application and processing fees relating to applications for mortgages, as well as fees for credit monitoring services being offered by Countrywide and its subsidiary. The court dismissed this cause of action because an explicit contract existed between the parties, requiring Plaintiffs to make monthly mortgage payments and obligating Countrywide to protect Plaintiffs’ personal information.

Plaintiffs also sued Countrywide for common law fraud, contending that Countrywide made material misrepresentations about the storage of their personal information and the severity of the breach. The court dismissed this count because the only financial damages suffered “were self inflicted.”

Plaintiffs sued Countrywide for breach of contract, covenant of good faith, and covenant of fair dealing. They alleged that Countrywide agreed, but failed, to safeguard their personal information. The court dismissed these counts based on the fact that each cause of action required a cognizable injury as an element, which Plaintiffs had not pled.

Plaintiffs also included a count for “state security notification” (the data breach notification laws of New Jersey, where some of the Plaintiffs resided). They claimed that Countrywide failed to abide by the data breach notification requirements set forth under New Jersey law. The court dismissed this cause of action on the ground that, under the court’s interpretation, the statute did not create a private right of action and Plaintiffs had not provided precedent proving otherwise.

Next, Plaintiffs' operative complaint included counts for violation of state consumer fraud laws (deceptive business practices). The court dismissed those counts on the ground that Plaintiffs had not shown that they suffered an ascertainable loss.

Plaintiffs also alleged that Countrywide violated the Fair Credit Reporting Act; namely, that Countrywide is a “consumer credit reporting agency” under the FCRA, that it failed to maintain reasonable procedures to “furnish” consumer reports, and that consumer reports were released in violation of the statute’s provisions. The court dismissed this cause of action on the ground that Countrywide did not “furnish” any consumer reports to a third party in violation of the statute. The court relied on Plaintiffs’ allegation that Countrywide’s employee (“a ne’er-do-well who independently stole Countrywide’s customer information and engaged in a scheme to sell it to his criminal associates”) transmitted Plaintiffs’ information to a third party without Countrywide’s permission.

Finally, Plaintiffs’ operative complaint included a claim for civil conspiracy. The court dismissed that count because Plaintiffs failed to establish an injury.

Conclusion

The Holmes opinion is another example of a court that is skeptical of a plaintiff’s ability to recover from a defendant who suffers a data breach that potentially exposes the plaintiff’s personal information to a third party. Courts like the one in Holmes are requiring actual, measurable monetary damages as a result of the data breach for a plaintiff to proceed with a lawsuit; a risk of harm is not enough. Even if a plaintiff can show that her personal information was misused, without evidence that the misuse resulted in fraudulent charges or other similar loss, the plaintiff would ostensibly have no cause of action under Holmes. The opinion is also of interest for the level of supportive authority it cites, demonstrating that data security law is quickly maturing and the issues arising in those cases are being addressed and written about all over the country.

 

DISCLAIMER: The opinions expressed here represent those of Al Saikali and not those of Shook, Hardy & Bacon, LLP or its clients. Similarly, the opinions expressed by those providing comments are theirs alone, and do not reflect the opinions of Al Saikali, Shook, Hardy & Bacon, or its clients. All of the data and information provided on this site is for informational purposes only. It is not legal advice nor should it be relied on as legal advice.

 

Following my post on the subject last week, I had the chance to speak with Colin O’Keefe of LXBN regarding the class action suit filed against LinkedIn following its recent high-profile data breach. In the brief interview, I explain the background of the case, what damages the plaintiffs are alleging, and why it’s too early to tell which way the case is going to go.

The title of this blog entry is somewhat of a misnomer because there is no single national data breach notification law that governs all information the same way as the state data breach notification laws do.  So, for the time being, companies and consumers are forced to determine which state data breach notification laws apply to them and what the differences are between them.  Nevertheless, there are federal laws that require disclosure of data breaches in certain instances, and usually these laws are “industry specific.”

Examples of federal laws that require data breach notification are two laws governing the health care industry – the Health Insurance Portability and Accountability Act (HIPAA) and the Health Information Technology for Economic and Clinical Health Act (HITECH).  Together, these laws require “covered entities” and many of their service providers to maintain administrative, technical, and physical safeguards to ensure the confidentiality, integrity, and availability of “protected health information” (commonly referred to as “PHI”).  A covered entity is a health plan, a health clearinghouse, or a health care provider who transmits health information.

If there is a breach, the covered entity must notify the individuals whose information has been accessed (and law enforcement) without unreasonable delay and no later than 60 days after the breach was discovered.  (The law also requires notification to the media in cases where the breach affects more than 500 individuals).  Whether there is a breach that triggers the duty to notify depends on whether, with some exceptions, there was an impermissible use or disclosure that compromises the security or privacy of the PHI such that the use or disclosure poses a significant risk of financial, reputational, or other harm to the affected individual.  The notice must state what occurred, what type of information was accessed by the breach, what steps individuals should take in response, and what is being done to investigate, mitigate, and protect against further harm, and it must provide contact information.  HITECH imposes these same notification requirements on the covered entity’s vendors and service providers.

Another example of a federal data breach notification requirement is found within the Gramm-Leach-Bliley Act (GLB), which governs companies engaged in financial services.  Under GLB, when a financial institution becomes aware of an incident of unauthorized access to sensitive customer information, the institution should conduct an investigation to determine the likelihood that the information has been or will be misused.  If there is a determination that the misuse has occurred or is reasonably possible, the institution must notify the affected customer as soon as possible, save a law enforcement determination that notification will interfere with a criminal investigation.

Sometimes a company’s duty to disclose may be required by a government agency.  For example, publicly traded companies need to be aware of the October 13, 2011, SEC Disclosure Guidance:  Topic No. 2.  Although the guidance is not the law but rather an agency’s interpretation of the law, it clearly states that publicly traded companies should report significant instances of cyber incidents to the SEC. The company must determine whether a reasonable investor would consider information about the incident important to an investment decision.  In making this determination, a company should consider several factors, set forth in the guidance, in determining whether to make the disclosure.  The guidance also states what information should be in the disclosure.

These examples and the descriptions of them are admittedly very superficial and are not meant to capture the entire universe of federal laws requiring data breach notification.  The point of this post is that there is no uniform federal data breach notification law.  Data breach notification requirements at the federal level arise from a variety of laws and other legal authority.  As a result, a company that believes it may have suffered a data breach must consult the laws of any state where any of its customers reside, a variety of federal legal sources that regulate the company’s industry, and—as will be explained in an upcoming post—international law. If your company has customers overseas, it will need to be aware of data breach notification requirements abroad.  The next part of this series on data breach notification laws will focus on Europe as a case study of how data breaches notifications are addressed in other countries.

 

DISCLAIMER:  The opinions expressed here represent those of Al Saikali and not those of Shook, Hardy & Bacon, LLP or its clients.  Similarly, the opinions expressed by those providing comments are theirs alone, and do not reflect the opinions of Al Saikali, Shook, Hardy & Bacon, or its clients.  All of the data and information provided on this site is for informational purposes only.  It is not legal advice nor should it be relied on as legal advice.