Published by Al Saikali

The SEC recently agreed to a $1,000,000 settlement of an enforcement action against Morgan Stanley for its failure to have sufficient data security policies and procedures to protect customer data. The settlement was significant for its amount. The true noteworthiness, however, lies not in the end result but in the implications of how it was reached: (1) the “reasonableness” of a company’s data security safeguards will be judged in hindsight, and (2) almost any data breach could give rise to liability. The SEC has left no room for error in making sure that your cybersecurity procedures and controls actually and always work.

What Happened?

Morgan Stanley maintained personally identifiable information collected from its brokerage and investment advisory services customers on two internal company portals. Between 2011 and 2014, an employee unlawfully downloaded and transferred confidential data for approximately 730,000 accounts from the portals to his own personal data storage device/website. It is unclear whether the transfer of information was for the employee’s personal convenience or a more nefarious purpose. Soon thereafter, however, the employee suffered a cyberattack on his personal storage device, leading to portions of the data being posted to at least three publicly available Internet sites. Morgan Stanley discovered the information leak through a routine Internet sweep, immediately confronted the employee, and voluntarily brought the matter to law enforcement’s attention.

The employee who transferred the information to his personal device was criminally convicted of violating the Computer Fraud and Abuse Act by exceeding his authorized access to a computer; he was sentenced to 36 months of probation and ordered to pay $600,000 in restitution. He also entered into a consent order with the SEC barring him from association with any broker, dealer, or investment adviser for five years.

Morgan Stanley entered into a consent order with the SEC pursuant to which Morgan Stanley agreed to pay a $1,000,000 fine, but did not admit or deny the findings in the order.

SIDE NOTE TO COMPLIANCE OFFICERS READING THIS BLOG POST – if ever you need a way to deter your employees from sending corporate information to their personal devices or email accounts, tell them about this case.

What Does The Law Require?

Federal securities law (Rule 30(a) of Regulation S-P – the “Safeguards Rule”) requires registered broker-dealers and investment advisers to adopt written policies and procedures reasonably designed to:

  1. Insure the security and confidentiality of customer records and information;
  2. Protect against any anticipated threats or hazards to the security or integrity of customer records and information; and
  3. Protect against unauthorized access to or use of customer records or information that could result in substantial harm or inconvenience to any customer.

Here, the SEC based Morgan Stanley’s liability on the fact that Morgan Stanley:

failed to ensure the reasonable design and proper operation of its policies and procedures in safeguarding confidential customer data. In particular, the authorization modules were ineffective in limiting access with respect to one report available through one of the portals and absent with respect to a couple of the reports available through the portals. Moreover, Morgan Stanley failed to conduct any auditing or testing of the authorization modules for the portals at any point since their creation, and that testing would likely have revealed the deficiencies in the modules. Finally, Morgan Stanley did not monitor user activity in the portals to identify any unusual or suspicious patterns.

In other words, the authorization modules did not work in this instance, nor was there auditing to test and possibly identify the problem, nor had Morgan Stanley invested in sophisticated monitoring applications that would have identified that the employee was engaging in suspicious activity.
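To illustrate the kind of user-activity monitoring the SEC faulted Morgan Stanley for lacking, here is a minimal sketch. The schema and thresholds are hypothetical, not drawn from the consent order; it simply flags any user whose total report downloads dwarf the average of their peers:

```python
from collections import defaultdict

def flag_suspicious_users(access_log, multiplier=10, min_baseline=5):
    """Flag users whose download volume is an outlier versus the group.

    access_log: list of (user, records_downloaded) tuples (hypothetical schema).
    A user is flagged when their total exceeds `multiplier` times the average
    of all other users (with a small floor to avoid noise on tiny logs).
    """
    totals = defaultdict(int)
    for user, count in access_log:
        totals[user] += count

    flagged = []
    for user, total in totals.items():
        others = [t for u, t in totals.items() if u != user]
        baseline = max(sum(others) / len(others), min_baseline) if others else min_baseline
        if total > multiplier * baseline:
            flagged.append(user)
    return flagged
```

A real deployment would baseline per user over time rather than against peers, but even this crude comparison would surface an employee pulling data for 730,000 accounts.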

Why Should Companies Worry?

The most concerning part of the Morgan Stanley consent order is this paragraph, which describes some robust safeguards Morgan Stanley had implemented before the incident occurred:

MSSB [Morgan Stanley] adopted certain policies and restrictions with respect to employees’ access to and handling of confidential customer data available through the Portals. MSSB had written policies, including its Code of Conduct, that prohibited employees from accessing confidential information other than what employees had been authorized to access in order to perform their responsibilities. In addition, MSSB designed and installed authorization modules that, if properly implemented, should have permitted each employee to run reports via the Portals only with respect to the data for customers whom that employee supported. These modules required FAs [Financial Advisors] and CSAs [Client Service Associates] to input numbers associated with the user’s branch and FA or FA group number. MSSB’s systems then should have permitted the user to access data only with respect to those customers whose data the user was properly entitled to view. Finally, MSSB installed and maintained technology controls that, among other things, restricted employees from copying data onto removable storage devices and from accessing certain categories of websites.

Lesson learned: it doesn’t matter how robustly designed your policies and procedures may be, if they don’t actually work as designed then you could be liable under the Safeguards Rule.


The standard applied by the SEC in this enforcement action is higher than a “reasonableness” standard. It is easy, after the fact, to find a weakness that could have been exploited. Indeed, it is unusual if you cannot identify such a vulnerability after the fact. If a criminal or departing employee is set on unlawfully accessing sensitive information, he can likely do so no matter what hurdles you place in his way. A company should not be held liable for failing to stop every data incident. Some may argue that a company like Morgan Stanley must be held to a higher standard because of the known threats to the financial services industry and the potentially significant consequences to consumers of a financial services company suffering a data breach. Nevertheless, the law as written requires policies and procedures that are only “reasonably designed” to protect sensitive information; the law does not require that these policies and procedures be perfectly designed nor that they be effective 100% of the time, nor could it.

Hindsight is 20/20, and regulators would be hard-pressed to find any organization that could show its policies and procedures are always followed. Could audits and testing have detected the fact that Morgan Stanley’s authorization module was not preventing the type of unauthorized access and transfer of sensitive information in this case? Possibly, depending on the depth of the audit and foresight of the auditors. But little benefit appears to have inured to Morgan Stanley for the fact it actually had an authorization module, data security training for its employees, policies and restrictions regarding employee access of information, controls that prevented the copying of data onto removable storage devices, and the fact that it voluntarily brought this matter to law enforcement’s attention.

Is there a risk now that the SEC’s interpretation of “reasonableness” will be applied similarly by state Attorneys General, the Health and Human Services’ Office of Civil Rights, the Federal Trade Commission, or other regulators? All of this reminds us that the definition of reasonableness in the context of data security is subjective, and that subjectivity is a risk that companies must address.


There are some important practical takeaways for companies from this settlement: (1) perform a risk assessment to determine how your organization could suffer from a similar risk (an employee transferring corporate information to a personal device); (2) implement an authorization module and other policies and procedures to limit access (and identify unauthorized access) to sensitive information to those who have a legitimate business need; and (3) make sure you audit and test these controls to ensure that they actually work. Additionally, CISOs, compliance officers, and in-house counsel would be well served to make the story of this enforcement action part of their organization’s onboarding and annual data security training.


DISCLAIMER:  The opinions expressed here represent those of Al Saikali and not those of Shook, Hardy & Bacon, LLP or its clients.  Similarly, the opinions expressed by those providing comments are theirs alone, and do not reflect the opinions of Al Saikali, Shook, Hardy & Bacon, or its clients.  All of the data and information provided on this site is for informational purposes only.  It is not legal advice nor should it be relied on as legal advice.


A significant change is happening to payment card technology. Any company that accepts credit cards as a form of payment needs to know about it if they intend to continue accepting payment cards in the future. The technology is called “EMV” (EuroPay, MasterCard, Visa). The card brands hope that EMV technology will significantly reduce the amount of fraud in transactions where the payment card is present. This blog post will discuss how EMV works, why it was adopted, how merchants can comply with its requirements, the incentives to adopt (and penalties in failing to adopt) the technology, and the security pros/cons of EMV.

SPOILER ALERT: EMV is effective in reducing the risk of fraud from counterfeit payment cards used for in-person transactions, but the best way to minimize payment card fraud is through implementation of point-to-point encryption and tokenization. The liability shift is an appealing incentive to adopt the technology, but many merchants have been reporting difficulty finding EMV software that works properly with the EMV hardware.

What is EMV?  EMV (EuroPay, MasterCard, Visa) is a payment method that combines a plastic card with a microchip. Unlike your typical credit card, credit cards with an EMV chip generate a different code with every purchase a consumer makes. The code is shared with the issuing bank as part of the transaction to authenticate that the card is legitimate. Because the code changes with each transaction, even if a thief steals the information contained on the magnetic stripe of the credit card, he cannot create a counterfeit card because he cannot replicate the codes generated by the microchip. It is this inability to make counterfeit cards that makes EMV technology so important to the card brands and issuing banks.
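The dynamic code can be pictured as a keyed cryptogram over a counter that increments with each purchase. The sketch below uses HMAC-SHA256 as a simplified stand-in; the real EMV cryptogram (the ARQC) is computed under the EMV specifications with session keys derived by the chip, but the security idea is the same: without the chip’s secret key, stolen stripe data cannot produce the next valid code.

```python
import hashlib
import hmac

def transaction_code(card_secret_key: bytes, transaction_counter: int,
                     amount_cents: int) -> str:
    """Illustrative stand-in for the EMV cryptogram: a keyed MAC over
    transaction data. The secret key never leaves the chip, so the code
    cannot be forged from stolen magnetic-stripe data alone."""
    msg = transaction_counter.to_bytes(4, "big") + amount_cents.to_bytes(8, "big")
    return hmac.new(card_secret_key, msg, hashlib.sha256).hexdigest()[:16]

def issuer_verifies(key: bytes, counter: int, amount: int, code: str) -> bool:
    """The issuing bank recomputes the code with its copy of the key."""
    return hmac.compare_digest(transaction_code(key, counter, amount), code)
```

Because the counter changes on every purchase, replaying an old code fails verification at the issuer.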

Why was EMV created?  EMV was created because criminals/hackers were stealing credit card information, selling it, and using it to create counterfeit credit cards. Those counterfeit cards are then often sold or used as part of identity theft and dark web crime rings. By requiring a microchip that generates a random code for each transaction, card brands have made it almost impossible to create a counterfeit card. EMV technology, however, is only helpful in preventing fraud where the card is present. Online transactions, for example, do not benefit from the technology because an e-commerce transaction usually does not require that a card be inserted into or swiped at a point-of-sale terminal (which would allow for the microchip’s unique code generation). There are, however, other technologies like point-to-point encryption and tokenization (discussed below) that could potentially eliminate payment card data breaches.

How does a merchant become EMV-compliant?  To become EMV-compliant, a merchant must install EMV-enabled point-of-sale terminals and obtain certification from its acquiring bank that its payment application for each card network is certified for EMV. The cost of a new EMV-compliant terminal can be between $250 and $500, depending on whether the merchant wants to purchase one that will also accept near-field communications payments like Apple Pay. The merchant also needs to ensure that EMV-compliant software is installed in these terminals. Several of my payment card forensic contacts and merchant clients have told me that they are having issues implementing the software solutions.

What if a consumer wants to swipe her card instead of using the chip feature?  Assuming the consumer is using an EMV card at an EMV-enabled terminal, the terminal will require the consumer to use the chip instead of swiping the card.

Is a signature or PIN required to complete a transaction?  Each issuing bank will have different requirements. Visa has said that a signature accompanying the chip is sufficient. MasterCard, however, appears to prefer use of a PIN with the chip. If a merchant does not support the “Chip and PIN” system, but the subject transaction could have been performed with a PIN, then the merchant may be responsible for chargebacks related to those transactions.

Will merchants pay lower interchange fees if they adopt the EMV-compliant terminals?  No, and there are no current plans to change that, though it is possible the card brands could change their minds if EMV is not adopted quickly enough.

What is the “liability shift”?  Until recently, issuing banks were responsible for card-present counterfeit fraud losses. As a way to encourage merchants to adopt EMV, the card brands have implemented a shift of liability as a “carrot” and “stick” approach. For most merchants, as of October 1, 2015, if they have been certified through their acquiring banks as EMV-compliant and they subsequently suffer a breach, they are not responsible for card-present counterfeit fraud losses. MasterCard requires that 95% of its transactions originate from EMV-compliant POS terminals for the liability shift to apply to 100% of the charges; the liability shift applies to only 50% of affected MasterCard transactions if only 75% of MasterCard transactions originate from EMV-compliant POS terminals. Merchants are not required to be EMV-compliant, but doing so gives them the protection of this liability shift. The liability shift likely applies to both magnetic stripe cards and EMV cards that are compromised, but the card brands have released public statements that create ambiguity.
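MasterCard’s tiered thresholds reduce to a simple rule. The sketch below encodes only the two tiers described above, as a hypothetical simplification; the actual network rules contain additional conditions:

```python
def mastercard_liability_shift_coverage(emv_transaction_share: float) -> float:
    """Return the fraction of card-present counterfeit fraud liability
    shifted off the merchant, given the share of its MasterCard
    transactions originating from EMV-compliant POS terminals
    (per the two tiers described above)."""
    if emv_transaction_share >= 0.95:
        return 1.0   # full liability shift
    if emv_transaction_share >= 0.75:
        return 0.5   # shift applies to only half of affected transactions
    return 0.0       # below 75%: no shift
```

So a merchant at 80% EMV penetration keeps exposure on half of the affected transactions, which is a strong incentive to finish the rollout rather than stall at partial deployment.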

Are there any exceptions to the October 1, 2015, deadline for the liability shift?  Yes. The liability shift does not apply to automated fuel dispensers (gas pumps) until October 2017. Also, MasterCard is shifting its liability to ATM owners in October 2016; Visa is shifting that liability in October 2017. The EMV software for fuel dispensers and ATMs has been particularly lacking, making it extremely challenging for merchants to fully implement EMV technology. Small businesses that accept mobile payments (like Square) will need to purchase new EMV readers. (Square has been assuming the liability until its customers purchase the EMV readers.) Unfortunately, these delays could result in a spike in fraud for those merchants as criminals shift their focus to targets that are easier to compromise without EMV technology.

Besides the liability shift, why else should merchants move quickly to become EMV compliant?  First, EMV has been shown to significantly lower fraudulent activity for card-present transactions. Second, fraud migrates to non-EMV compliant terminals. Third, if 75% of transactions are processed through EMV-enabled terminals and the terminals support contact and contactless transactions, the annual PCI DSS compliance validation with a QSA is no longer required. Fourth, you may be protected from assessments by card brands arising from a compromise of magnetic stripe cardholder information, if 95% of card-present transactions are from EMV-capable terminals 30 days before the start of the compromise event. Fifth, from a public relations standpoint, you do not want to be known as a company that doesn’t take customer security seriously. Finally, the EMV-capable POS terminals also allow the merchant to accept contactless transaction devices, which may be a feature the merchant does not currently offer.

Are there security weaknesses to EMV?  Yes. As mentioned earlier, EMV is only helpful in reducing fraud where the payment card is present during the transaction; online purchases and other e-commerce would not be protected by EMV (for the time being). Also, some payment card security experts have observed that an EMV-compliant merchant still possesses the primary account numbers (PANs) for credit cards because EMV merely attaches the randomly generated code to the PAN, meaning that a hacker could still potentially access the payment card information by merely removing/scrubbing the code from the PAN, assuming the hacker accesses unencrypted information.

Are there better ways to secure payment card transactions?  Absolutely. EMV is a “fraud-reducer,” but point-to-point encryption (P2PE) and tokenization are “fraud eliminators.” P2PE encrypts the primary account number (PAN) through the entire transaction process, so the merchant never possesses unencrypted PANs and does not have the keys necessary to unlock the encrypted information. Tokenization takes security one step further by replacing the PAN with a different number that is worthless to a hacker, so the merchant never possesses this valuable information. An example of tokenization is Apple Pay – when you pay for goods or services with Apple Pay you are not providing the merchant with your credit card number, but rather with a random number that would be useless in any other context.
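A minimal token-vault sketch (a hypothetical API, not any particular provider’s) shows why tokenization takes the merchant out of the fraud equation: the merchant stores only a random token, and the mapping back to the card number lives solely with the token service provider.

```python
import secrets

class TokenVault:
    """Toy token service provider: maps primary account numbers (PANs)
    to random tokens. The merchant stores only tokens; the PAN mapping
    never leaves the vault."""

    def __init__(self):
        self._token_to_pan = {}

    def tokenize(self, pan: str) -> str:
        # The token is random, so it reveals nothing about the PAN.
        token = secrets.token_hex(8)
        self._token_to_pan[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only the payment-processing side ever calls this; a merchant
        # breach exposes tokens that are worthless anywhere else.
        return self._token_to_pan[token]
```

A breach of the merchant’s database under this model yields only tokens, which cannot be used to mint counterfeit cards or shop elsewhere.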

In short, companies that accept credit and debit cards as a form of payment should move quickly to become EMV compliant. While EMV is not a panacea to protect against fraud, it can significantly reduce it and, more importantly, provide other benefits to a company, like the liability shift. Companies that want to take their security to the next level, however, should consider implementing P2PE and tokenization.



In 2014, the Food and Drug Administration (“FDA”) articulated its expectations for how device manufacturers address cybersecurity premarket in Content of Premarket Submissions for Management of Cybersecurity in Medical Devices. Recently, the FDA released complementary draft guidance in Postmarket Management of Cybersecurity in Medical Devices. In the new guidance, the FDA explains what constitutes an effective cybersecurity risk management program, how manufacturers should evaluate postmarket cybersecurity vulnerabilities, and when manufacturers must report to the FDA cybersecurity risks and improvements. Comments on the draft guidance are due by April 21, 2016.

Here are the key takeaways from the guidance:

  • Cybersecurity programs should be documented, systematic, and comprehensive.
  • Consider medical device cybersecurity throughout the device’s entire lifecycle.
  • When evaluating a medical device’s cybersecurity, consider a broad range of quality information and focus on cybersecurity threats that may compromise a device’s essential functions.

Components of an Effective Cybersecurity Risk Management Program

The new guidance exhorts manufacturers to create a cybersecurity risk management program that will address a device’s cybersecurity from the drawing board to the dustbin.

Premarket, manufacturers should account for cybersecurity by designing cybersecurity-related inputs for their devices and incorporating a cybersecurity management approach that determines (A) assets, threats, and vulnerabilities; (B) how threats and vulnerabilities may affect device functionality and end users/patients; (C) the likelihood of threats and exploitation of vulnerabilities; (D) risk levels and suitable mitigation strategies; and (E) residual risk and risk acceptance criteria. (The FDA gave the same recommendations in its 2014 premarket guidance.)

Adequate postmarket cybersecurity management requires a program that is systematic, structured, documented, and consistent with the Quality System Regulation (21 C.F.R. Part 820), and that incorporates the National Institute of Standards and Technology’s (NIST) Framework for Improving Critical Infrastructure Cybersecurity (cybersecurity guidelines the NIST created pursuant to a presidential executive order and with input from public and private stakeholders). Key components include:

  • monitoring quality cybersecurity information sources—such as complaints, service records, and data provided through cybersecurity Information Sharing Analysis Organizations (“ISAOs”)—for identification and detection of cybersecurity vulnerabilities and risk;
  • establishing, communicating, and documenting processes for vulnerability intake and handling;
  • understanding, assessing, and detecting the presence and impact of vulnerabilities;
  • clearly defining essential clinical performance to develop mitigations that protect against, respond to, and recover from cybersecurity risk;
  • adopting a coordinated vulnerability disclosure policy and practice; and
  • deploying mitigations that address cybersecurity risk early and prior to exploitation.

Assessing Postmarket Cybersecurity Vulnerabilities

Acknowledging that not all vulnerabilities threaten patient safety and that manufacturers may not be able to identify every threat, the guidance advises manufacturers to identify a device’s “essential clinical performance” and focus on identifying and resolving risks to that performance. Manufacturers should define a device’s essential clinical performance by considering the conditions necessary for the device to operate safely and effectively. Manufacturers should assess a vulnerability’s risk by evaluating its exploitability and the health dangers resulting from its exploitation. The draft guidance recommends tools for each evaluation: the Common Vulnerability Scoring System v3.0 for exploitability and the standards in ANSI/AAMI/ISO 14971:2007/(R)2010: Medical Devices – Application of Risk Management to Medical Devices for health dangers caused by exploitation.

The guidance divides risks into two groups and recommends manufacturers do the same. Low or “controlled” risk exists when, after accounting for existing controls, there is an acceptable amount of risk that the device’s essential clinical performance could be compromised by a cybersecurity vulnerability. High or “uncontrolled” risk exists when insufficient controls and mitigations create an unacceptable amount of risk that the device’s essential clinical performance could be compromised by a cybersecurity vulnerability.
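The two-tier scheme can be sketched as a simple classifier. The normalized scores and threshold below are hypothetical placeholders; the guidance points to CVSS v3.0 for exploitability and ISO 14971 for harm severity, and leaves the definition of “acceptable” residual risk to the manufacturer’s own criteria:

```python
def classify_vulnerability(exploitability: float, harm_severity: float,
                           acceptable_risk: float = 0.25) -> str:
    """Classify a postmarket vulnerability as 'controlled' or 'uncontrolled'.

    exploitability and harm_severity are normalized 0-1 scores, used here
    as hypothetical stand-ins for CVSS v3.0 and ISO 14971 assessments;
    residual risk is modeled as their product purely for illustration.
    """
    residual_risk = exploitability * harm_severity
    return "controlled" if residual_risk <= acceptable_risk else "uncontrolled"
```

The point of the sketch is the structure, not the arithmetic: a documented, repeatable rule that maps assessed exploitability and severity to one of the guidance’s two buckets.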

Reporting Mitigation

A risk’s classification affects whether a manufacturer may address the risk without reporting the risk and its remediation to the FDA under 21 C.F.R. Part 806, which obligates manufacturers to report to the FDA when they repair, modify, or adjust a device to reduce the device’s health risk. Manufacturers may ameliorate controlled risks without reporting the risk or enhancement under Part 806. (But for Class III devices, manufacturers must disclose the risk and the remediation in its periodic report to the FDA under 21 C.F.R. § 814.84.) Uncontrolled risks are a different matter: manufacturers must report them and their remediation unless (A) there are no known serious adverse events or deaths associated with the vulnerability; (B) within 30 days of learning of the vulnerability, the manufacturer identifies and implements device changes and/or compensating controls to bring the residual risk to an acceptable level and notifies users; and (C) the manufacturer participates in an ISAO.
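The Part 806 reporting decision described above reduces to a three-condition safe harbor, which can be sketched as follows (the boolean inputs are simplifications of the guidance’s conditions, and the Class III periodic-report duty under § 814.84 is noted but not modeled):

```python
def must_report_under_part_806(risk_class: str,
                               serious_adverse_events: bool,
                               remediated_within_30_days: bool,
                               participates_in_isao: bool) -> bool:
    """Apply the reporting rule described above: controlled risks need not
    be reported under Part 806 (the Class III periodic-report duty aside);
    uncontrolled risks must be reported unless all three safe-harbor
    conditions are satisfied."""
    if risk_class == "controlled":
        return False
    safe_harbor = (not serious_adverse_events
                   and remediated_within_30_days
                   and participates_in_isao)
    return not safe_harbor
```

Note that all three conditions must hold: a manufacturer that remediates within 30 days but does not participate in an ISAO still must report.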

What the Draft Guidance Means for Device Manufacturers

Device manufacturers should not delay assessing the strength of their cybersecurity management program. The U.S. Department of Health and Human Services, Office of Inspector General identified cybersecurity of medical devices as one of its priorities for 2016. And the draft guidance explains that the FDA may consider devices with uncontrolled risk to violate the FDCA and be subject to enforcement action.

To see how their program measures up to what the draft guidance describes, device manufacturers should start by asking themselves these key questions:

  • Is our cybersecurity management program addressing cybersecurity throughout each device’s lifecycle?
  • Is our program proactive?
  • Are there quality data security sources, such as ISAOs, we have not used but should?
  • Do we need to develop and deploy new training or messaging to colleagues about cybersecurity?
  • Are we using good cyber hygiene?

When deciding how to move forward with strengthening a cybersecurity program, manufacturers will want to keep in mind the need to safeguard devices against malicious and non-malicious attacks. Vulnerable devices may become infected by malware that cannot tell the difference between a personal computer and a pacemaker. That example is not farfetched: J.M. Porup recently reported for Slate that malware designed to steal credit card information infected and disabled vulnerable fetal heart monitors.

Ever wonder how your credit card gets compromised and how the bad guys get your information? This report on tonight’s episode of 60 Minutes provides an overview of what happens from the moment you swipe your card at the point-of-sale terminal to the moment when the card number is compromised and sold on a black market website to the moment when the bad guy who buys your credit card number online uses it to create a counterfeit card. The report also investigates why most companies learn of these breaches from third parties rather than their own information security team. I highly recommend it to anyone interested in learning about this risk as this year’s holiday season begins.



My last post described what the recently passed Florida Information Protection Act (FIPA) will do.  This post analyzes how FIPA differs from Florida’s existing breach notification law and explains why those differences will hurt or help companies that maintain information about Florida residents.  Florida’s Governor must still sign the FIPA into law, but his signature is expected given the unanimous support of FIPA in the state legislature.  Once signed, the law will go into effect on July 1, 2014.  So what do businesses need to know about FIPA?

Attorney General Notification

The first significant difference between FIPA and Florida’s existing breach notification law is that, with some limited exceptions, breached entities will be required to notify Florida’s Attorney General within 30 days of any breach that affects more than 500 Florida residents.  Until now, Florida has been part of the majority of states that does not require notice of a breach to the state Attorney General.

The law also requires breached entities to notify the Attorney General’s office even when the entities decide notification to affected consumers is not necessary because the breach will not likely result in harm to affected individuals.  It remains to be seen whether this change in the law will result in a flood of “non-notifications” to the Attorney General’s office.

The FIPA provides teeth for the Attorney General’s Office to enforce it.  A violation of FIPA may be automatically considered a violation of Florida’s Deceptive and Unfair Trade Practices Act.  Though the FIPA does not create a private cause of action, we could see the Attorney General actively enforce the law against breached entities that fail to meet the law’s requirements.

Broader Definition of PII

Another significant change in Florida law as a result of the FIPA is the expansion of the definition of personally identifiable information (PII). PII will now include a username or email address in combination with a password or security question and answer that would permit access to an online account.  This change is based on a realization that consumers are increasingly storing information online and, unfortunately, often reusing the same usernames and passwords.  The net result, however, will be an increase in the number of incidents that qualify as reportable data breaches under the law.

Shortening the Breach Notification Period

FIPA also shortens the time a breached entity has to notify affected individuals of a breach.  Currently, breached entities must notify affected individuals “without unreasonable delay” but they have up to 45 days.  The new law requires breached entities to notify affected individuals “as expeditiously as practicable, but no later than 30 days after the determination of the breach or reason to believe a breach occurred,” unless a waiver or authorized delay is issued.

This change raises a couple of concerns for breached entities.  First, while in most instances 30 days may be enough time to notify affected individuals of a breach, in some cases it will not be.  There are many steps that must take place as part of the notification process, including determining the source and scope of the intrusion, identifying what information is affected, identifying who is affected and where they live, and ensuring that the threat is no longer in existence.  Adopting a bright-line deadline may end up punishing breached entities that are working as quickly as possible to respond to a breach.

Second, it is not clear under the FIPA what starts the clock running on the 30 days.  When is “determination of the breach” triggered?  Is it when the breached entity reasonably believes an intrusion has occurred?  Is it when the entity knows that PII has been affected?  Is it when the entity knows whose PII has been affected?  I would argue that the clock shouldn’t start running until the entity knows that the PII of a Florida resident has been affected, but we are left to guess how regulators will interpret this requirement.

Notification by Email

A welcome change that the FIPA will usher in is breach notification by email.  This will help significantly reduce the cost of breach notification in matters that involve a large number of Florida consumers.  It is also recognition that the best contact information a company may have for its customers is their email address.

Be prepared to turn over your incident and forensic reports

Perhaps the most significant change is that the FIPA purports to require breached entities to provide incident reports, data forensic reports, and the company’s policies in place regarding breaches, if the Florida Attorney General’s Office requests them.  These documents sometimes contain unintentionally damaging statements or proprietary information about a company’s security infrastructure that the company would not want to be made public.  And, once disclosed to the Attorney General’s office, the documents may become subject to a public records request, though this bill (which also awaits the Governor’s signature) tries to limit that risk.  As a result of this change, we could see breached entities either not requesting reports at all (out of concern that they will have to disclose them to third parties), or requesting two versions – a sanitized version that contains little information and can be produced to the Attorney General, and a more complete version for internal use.  Neither result could have been what the legislature intended when it passed this law.  It will also be interesting to see how the FIPA will affect the work product and self-critical analysis privileges that apply to data forensic reports prepared at the direction of counsel.

Proactive Security Requirements

The FIPA adds a new type of protection of PII:  it requires that an entity maintaining PII adopt “reasonable measures” to protect and secure the PII.  With this change, Florida joins the minority of jurisdictions that statutorily require entities maintaining PII to adopt safeguards regardless of whether the entity ever suffers a breach.  To be sure, adopting safeguards to protect PII is a good idea regardless of whether it is statutorily required, and the failure to adopt those safeguards could expose a company to an enforcement action by the FTC or state attorney general under the FTC Act or “little FTC Acts,” respectively, even in states where those safeguards are not required.  But the FIPA provides no guidance as to what is meant by “reasonable measures.”  Does this mean encryption?  Password protection?  Are written policies and training required?  Does it differ depending on the size of the breached entity?  Again, we are left to guess.

Some Final Observations

A few closing observations about the FIPA:

  • The definition of a breach is still limited to electronic personal information; so a breach involving purely paper records may not trigger the statute.
  • A violation of the statute is automatically considered a violation of Florida’s Deceptive and Unfair Trade Practices Act, but that violation appears to be enforceable only by the Florida Attorney General, not through a private cause of action.
  • A breach now means unauthorized “access” of PII, where before it was defined as unauthorized “acquisition” of PII.  This change broadens the number of scenarios that could be considered a breach.

In short, the FIPA is generally a consumer-friendly law that will increase the number of breaches that require notification, shorten the time by which notification must take place, require that the Attorney General be included in the breach notification process, and demand that companies adopt security safeguards to protect PII regardless of whether they ever suffer a breach.


DISCLAIMER:  The opinions expressed here represent those of Al Saikali and not those of Shook, Hardy & Bacon, LLP or its clients.  Similarly, the opinions expressed by those providing comments are theirs alone, and do not reflect the opinions of Al Saikali, Shook, Hardy & Bacon, or its clients.  All of the data and information provided on this site is for informational purposes only.  It is not legal advice nor should it be relied on as legal advice.

The Florida Legislature recently passed the Florida Information Protection Act of 2014 (FIPA).  This post describes the FIPA and analyzes the advantages and disadvantages to businesses governed by the new law.  The FIPA must still be signed by the Governor, but the law received unanimous support in the legislature, so his signature is expected.  Once signed, the law would go into effect in less than two months.

What is the FIPA?  The FIPA will replace Florida’s existing data breach notification law.  It has a reactive component (what companies must do after a breach) and a proactive component (what companies must do to protect personally identifiable information they control regardless of whether they ever suffer a breach).  The FIPA governs “covered entities.”  A covered entity is a commercial entity that acquires, maintains, stores or uses personally identifiable information.  A “breach” triggering the FIPA is the unauthorized access of data in electronic form containing personally identifiable information (PII).  The FIPA applies only to PII in electronic form, though an argument can be made that the secure disposal requirement under the FIPA applies to PII in any form given its use of the term “shredding.”

What is PII?  PII is defined as a first name or first initial and last name in combination with any of the following:

  • social security number;
  • driver’s license or ID card number, passport number, military identification number, or other similar number issued on a government document used to verify identity;
  • a financial account number or credit or debit card number, in combination with any required security code, access code, or password that is necessary to permit access to an individual’s financial account;
  • information regarding an individual’s medical history, mental or physical condition, or medical treatment or diagnosis by a health care professional; or
  • an individual’s health insurance policy number or subscriber identification number and any unique identifier used by a health insurer to identify the individual.

PII also includes a username or email address in combination with a password or security question and answer that would permit access to an online account.  The FIPA does not apply to PII that is encrypted, secured, or modified such that the PII is rendered unusable.

Do covered entities have to notify the Florida Attorney General’s Office of a breach?  Yes.  Covered entities must notify Florida’s Department of Legal Affairs (i.e., the Florida Office of the Attorney General) of any breach that affects more than 500 people.  Notice must be provided as expeditiously as practicable, but no later than 30 days after determination of the breach or reason to believe a breach occurred.  An additional 15 days is permitted if good cause for delay is provided in writing to the Attorney General within 30 days after determination of the breach or reason to believe a breach occurred.

The notice to the Attorney General must include:

  • a synopsis of the events surrounding the breach;
  • the number of affected Floridians;
  • any services related to the breach being offered without charge to the affected individuals (e.g., credit monitoring) and instructions as to how to use such services;
  • a copy of the notice sent to affected individuals or an explanation as to why such notice was not provided (e.g., there was no risk of financial harm); and
  • the name, address, telephone number, and email address of the employee or agent of the covered entity from whom additional information may be obtained about the breach.

Additionally, if the Attorney General asks for any of the following, the covered entity must provide it:

  • a police report
  • an incident report
  • a computer forensics report
  • a copy of the policies in place regarding breaches
  • steps that have been taken to rectify the breach

When must affected individuals be notified?  Notice to affected individuals must be made as expeditiously as practicable and without unreasonable delay.  The law allows covered entities to take into account the time necessary to allow the covered entity to determine the scope of the breach of security, to identify individuals affected by the breach, and to restore the reasonable integrity of the data system that was breached.  But even with those considerations, notice to affected individuals cannot take longer than 30 days after determining or having a reason to believe that a breach has occurred.

Two exceptions can permissibly delay or eliminate the obligation to notify affected individuals.  One exception is an authorized delay, which occurs when law enforcement determines that notice to individuals would interfere with a criminal investigation.  The determination must be in writing and must provide a specified period for the delay, based on what law enforcement determines to be reasonably necessary.  The delay may be shortened or extended at the discretion of law enforcement.

The second exception is a waiver, which occurs where, after an investigation and consultation with law enforcement, the covered entity reasonably determines that the breach has not and will not likely result in identity theft or any other financial harm to the affected individuals.  If a waiver applies, the covered entity must document it, maintain the documentation for five years, and provide the documentation to the Attorney General within 30 days after the determination.

How must notice to affected individuals take place and what must it include?  Direct notice to affected individuals can take one of two forms:  it can be in writing (sent to the mailing address of the individual in the records of the covered entity) or it can be by email to the email address of the individual in the records of the covered entity.  In either form, the notice must include:  (a) the date, estimated date, or estimated date range of the breach; (b) a description of the PII that was accessed; and, (c) information that the individual can use to contact the covered entity to inquire about the breach and the PII that the covered entity maintained about the individual.

Can a covered entity provide substitute notice to affected individuals?  If the cost of direct notice would exceed $250,000, more than 500,000 individuals are affected, or the covered entity does not have a mailing or email address for the affected individuals, then substitute notice can be provided.  The substitute notice must include a conspicuous notice on the covered entity’s website and notice in print and to broadcast media where the affected individuals reside.

What if the covered entity is governed by HIPAA or some other federal regulations?  Notice provided pursuant to rules, regulations, procedures, or guidelines established by the covered entity’s primary or functional federal regulator is deemed to be in compliance with the notice requirement to individuals under the FIPA.  However, a copy of that notice must be timely provided to the Attorney General.  For example, if a company is governed by HIPAA, then its notice pursuant to the Breach Notification Rule will be sufficient to meet the requirements under the FIPA, but a copy of that notice still must be sent to the Attorney General.

Do covered entities have to notify credit reporting agencies?  If more than 1,000 individuals are affected, then notice to all consumer reporting agencies must be provided without unreasonable delay.

What if the breach occurs with a third-party agent (e.g., a vendor)?  A third-party agent is an entity that has been contracted to maintain, store, or process PII on behalf of a covered entity or governmental entity.  If a third-party agent suffers a breach, it must notify the covered entity within 10 days following the determination of the breach or reason to believe the breach occurred.  Upon receiving notice of the breach, the covered entity must then comply with the requirements to notify affected individuals and the Attorney General.  In that case, the third-party agent must provide all information necessary for the covered entity to comply with its notice requirements.  The third-party agent may notify affected individuals and the Attorney General on behalf of the covered entity, but the agent’s failure to provide proper notice is deemed a violation against the covered entity.

Are there obligations other than notification after a breach?  In addition to the reactive component of the FIPA (actions covered entities must take after a data breach), the FIPA also has a proactive component that imposes obligations on covered entities regardless of whether they ever suffer a breach.  Specifically, covered entities must take reasonable measures to protect and secure PII.  Covered entities must also take reasonable measures to dispose, or arrange for the disposal, of customer records containing PII within their custody or control when the records are no longer to be retained.  Such disposal must involve shredding, erasing, or otherwise modifying the PII in the records to make it unreadable or undecipherable through any means.

Who enforces the FIPA and how?  A violation of the FIPA is an unfair or deceptive trade practice subject to an action by the Attorney General under Florida’s Deceptive and Unfair Trade Practices Act against the covered entity or third-party agent.  A covered entity that does not properly notify affected individuals or the Attorney General may be fined up to $500,000 per breach, depending on the number of days in which the covered entity is in violation of the FIPA.  The law creates no private cause of action, nor does the presumed FDUTPA violation for the Attorney General appear to apply to a private action under FDUTPA.

The law will become effective on July 1, 2014 if it is signed by the Governor.




Plaintiffs’ lawyers were falling over themselves last week in a race to the courthouse to sue Target as a result of its recent data breach.  By at least one report, over 40 lawsuits have already been filed against Target, the first of which was filed the day after the breach became public.  This post will provide an overview of the lawsuits, analyze their merits, identify potential concerns for Target, and address some of the larger public policy implications raised by the lawsuits.  My next post will provide more specific details about a sample of the lawsuits.

A (Coordinated) Race to the Courthouse

The lawsuits were filed in federal courts all over the country, including Alabama, California, Florida, Illinois, Minnesota, Oregon, and Rhode Island.  At least four of them were the result of coordinated efforts between plaintiffs’ firms that filed the lawsuits in California, Illinois, and Oregon, given the similarity of language and structure used in those complaints.  (That’s not particularly unusual, but let’s not pretend that there isn’t a coordinated effort involved here.)  The lawsuits will likely be consolidated or become part of a multidistrict litigation panel, and there will be an internal battle between the plaintiffs’ lawyers as to who will serve as class counsel.

Also interesting is when the lawsuits were filed.  All of these lawsuits were filed within a few days of the data breach becoming public.  They were filed before knowing what caused the breach, before knowing when Target learned of the breach, and before knowing what Target did to prevent the breach from occurring in the first place.  The developing data breach legal landscape has shown us that liability from a data breach arises not from the breach itself (almost every company suffers a breach), but from what the company did before or after the breach to prevent it and notify affected individuals.  So the fact that these lawsuits were filed before we know much about what led to the breach and how Target responded should raise initial skepticism about the merits of the lawsuits.

On to the Merits . . .

Generally speaking, the lawsuits are not only premature, but weak for at least two reasons: their legal theories are not sufficiently specific, and almost none of them allege cognizable harm.

The lawsuits contain numerous causes of action (negligence, statutory violations, breach of implied and express contracts, invasion of privacy, bailment, etc.), but the causes of action are based primarily on two legal theories:  (1) Target failed to act reasonably in adopting safeguards that would have prevented the breach from happening; and/or, (2) Target didn’t notify affected consumers quickly enough.  Let’s evaluate these theories and other weaknesses in the lawsuits separately.

“Failure to Adopt Reasonable Safeguards”

Plaintiffs allege that Target failed to act reasonably to adopt safeguards to prevent the breach from occurring, but there are no allegations as to what specifically Target did wrong.  In the LinkedIn lawsuit, for example, there were allegations that LinkedIn failed to salt or hash sensitive information, and that LinkedIn’s conduct contradicted a specific provision of its consumer-facing privacy policy.  The LinkedIn complaint was dismissed because the court held that the plaintiffs lacked standing, but you knew upon reading it what the plaintiffs were claiming LinkedIn did (or failed to do) wrong.

There are no similarly specific allegations in the lawsuits against Target, probably because the plaintiffs don’t know enough about the facts to plead anything with the requisite specificity.  They don’t know yet what Target did wrong, or even if it did anything wrong.  The highly ambiguous pleading now puts Target in the position of trying to defend itself against a “moving target” (no pun intended) that plaintiffs will interpret differently to best suit their needs as the lawsuit progresses.

“Failure to Timely Notify Affected Consumers”

The plaintiffs also claim that Target failed to timely notify affected consumers of the breach, but there are currently no facts that support this theory.  According to all accounts, the breach occurred between November 27th and December 15th, and Target notified potentially affected customers a few days thereafter by email and by creating a special web page with regularly updated information about the breach and Target’s response.

As anyone with breach response experience will tell you, there are a number of time-consuming steps in the breach response process before notification can take place.  First, you need to identify and understand the nature of the compromise, and you have to be reasonably sure that the compromise has been contained and remediated so it is no longer a threat.  This step alone can take days or weeks to complete depending on the level of sophistication of the attack.  Further complicating this step is the coordination with law enforcement, who may be concerned that acting too quickly will inhibit their ability to identify the perpetrators.  After the integrity of your system has been restored, you need to identify what information was affected by the breach.  If you learn that personal information was potentially compromised as a result of the breach, you need to know whose information was affected so you can quickly inform them and regulatory authorities in compliance with applicable legal requirements.  Undertaking this entire process can often take weeks.  Target appears to have done it within a few days.

There is another factor that must be considered in determining whether Target complied with any legal obligation to notify consumers – the various data breach notification laws. 46 states have their own data breach notification laws and they are triggered by the location of the individual whose information is compromised, not by the location of the company that suffered the breach (meaning that they’re all in play with a breach this size).  Most require notification within a “reasonable” period of time, and for some that means the breached entity may have as long as 30 to 45 days to undertake notification.  These laws usually do not “start the clock running” on notification until the company reasonably believes that it has identified the full scope of the breach and has contained it.  This makes sense because you wouldn’t want to tip off the hackers that you are on to them by issuing a public notification when your systems are still compromised.  Additionally, it is very difficult to undertake notification until you know who you need to notify (i.e., whose information was compromised, where do they live, how can I contact them, etc.), which can take some time to determine.  Finally, almost all of these laws allow for a delay in notification where law enforcement believes that such notification would impede their ability to identify and investigate the hackers. We do not know whether such a “law enforcement hold” was in place in this breach.  (Some of the plaintiffs allege in their complaints that no law enforcement hold was in place, but they couldn’t possibly know that yet).

It is possible that facts could emerge at a later date showing that Target knew of the compromise much earlier but chose not to notify affected consumers, but for the time being, the fact that Target notified affected consumers within a few days of the compromise becoming known easily disposes of the allegation that Target delayed notifying consumers.

Cognizable Harm

The plaintiffs will also have a very difficult time proving that they suffered cognizable harm, as evidenced by the difficulty they have had pleading it.  Almost half of the lawsuits allege “compensatory damages” or “harm” generally, but fail to describe the damages with any specificity.  The plaintiffs likely cannot identify any cognizable harm at this point, further demonstrating the premature nature of these lawsuits.  Some of the lawsuits seek damages for a “risk” of harm at some unforeseeable point in the future, or for fraudulent charges that were almost certainly reimbursed or will be reimbursed by the consumers’ financial institutions, or for potential damage to their credit scores.  None of these types of damages have been recognized as cognizable in a data breach lawsuit.

This is not to say that all damages are not cognizable.  In a few jurisdictions, courts have held that plaintiffs can proceed in pursuing certain damages.  In the First Circuit, for example, consumers are allowed to pursue “mitigation expenses” (e.g., the unreimbursed cost of replacing their cards, obtaining credit reports and credit insurance, etc.).  In the Eleventh Circuit, consumers have been allowed to pursue the portion of their service fees/premiums to a company that was used for securing the consumers’ personal information.  To the extent the plaintiffs have filed lawsuits in these jurisdictions and are seeking these types of damages, their allegations of damages may be stronger.


Finally, Plaintiffs will have to deal with the majority of case law in data breach lawsuits that, with some limited exceptions, has not allowed the lawsuits to proceed.  Two of the most important decisions will be the U.S. Supreme Court’s decision in Clapper v. Amnesty International and the Northern District of Illinois’s decision in In re Barnes & Noble Pin Pad Litigation.  Clapper raised the bar for demonstrating cognizable harm and standing in privacy violation cases such as this one.  The Clapper decision was relied on by the Northern District of Illinois in dismissing a data breach lawsuit against Barnes & Noble that arose from an almost identical set of facts — the compromise of consumers’ personal information stolen from PIN pads at a major retailer.  The court held that the plaintiffs lacked standing because they could not allege that a threatened injury was “certainly impending” as a result of the breach.

I expect the plaintiffs to rely on the recent decisions by the Eleventh Circuit, the First Circuit, and the Southern District of Florida that allowed data breach lawsuits to proceed.  Therefore, I would closely monitor what happens in the two Florida lawsuits and the Rhode Island lawsuit, or any others that are subsequently filed in the Eleventh or First U.S. Circuits.

Should Target Still Be Worried?

Despite the premature nature and overall weaknesses of the lawsuits as filed, Target still has cause for concern.  First, even though legal precedent is heavily in its favor (this blog post cites only a few of the many opinions dismissing data breach lawsuits), the development of the law is still in its early phases, and as is evident from the previous paragraph, some courts where lawsuits against Target are pending have allowed data breach lawsuits to proceed.

Another concern is how the facts emerge.  For example, if it turns out that Target knew about the breach long before it was disclosed publicly, knew that personal information had been compromised, knew whose information had been compromised, knew that the information was not encrypted, and was under a legal obligation to notify affected individuals, then the plaintiffs’ “failure to timely notify” theory will strengthen.

Target also has to be concerned about trying to keep the focus where the law requires it.  The plaintiffs’ lawyers are going to try to shift the focus from what Target did (the sophisticated and complex information security program Target likely had in place) to what Target could have done (the one “error” Target made that could have prevented the breach).  According to one study, 97% of breaches are avoidable (in hindsight) through simple or intermediate controls.  Why is that important?  Because I have little doubt that the plaintiffs’ lawyers will be able to find a cybersecurity “expert” somewhere willing to testify that Target could have done something that would have prevented the breach from occurring, thereby trying to create an issue of fact as to the reasonableness of Target’s conduct.  Target will need to try hard to keep the focus on the correct legal standard.  The legal standard isn’t whether Target could have done something to prevent the breach, but whether it acted reasonably to prevent the breach.  In other words, the plaintiffs’ lawyers will try to persuade the courts that liability should be determined by whether the breach was preventable, and Target will try to keep the focus on the fact that it adopted a highly sophisticated, expensive, and (for the most part) very effective information security program and made the security of its consumers’ information the highest priority.  If plaintiffs succeed in shifting the focus away from the legal standard, every company should be very concerned, because so many data breaches are, in hindsight, preventable, which means that almost every company could face potential liability if they suffer a breach.

Why EVERY Company Should Care About These Lawsuits

The lawsuits are premature, not well supported by precedent, and based heavily on rank speculation as to the safeguards Target had in place and how quickly it responded.  Despite these weaknesses, however, every company should care about what happens to these lawsuits.  Target is a very large company that undoubtedly had in place complex and sophisticated safeguards to protect against this type of a data breach, and from what we know so far, it notified affected individuals very quickly.  If there is anything less than a dismissal or summary judgment entered in all of these cases, then the proverbial blood will be in the water and we can expect the floodgates of data breach litigation to open.  Almost every company that suffers a data breach could be held liable because few are going to have the level of security and response efforts that an organization like Target has in place.

The public policy consequences of Target being held liable are significant.  Companies will be less inclined to reveal breaches due to potential liability exposure, so consumers will be less likely to know when their information has been accessed, precluding them from responding adequately to protect themselves.  Instead of investing resources into physical, technical, and administrative safeguards that could improve the security of consumers’ information, companies will be forced to spend their resources on litigation costs, settlements, and awards to plaintiffs.  The individuals who will benefit most won’t be the consumers (who could each receive nominal awards for mitigation expenses), but the attorneys who will reap significant attorney’s fees awards in class action lawsuits.  So what happens to these lawsuits will be important to any company that collects, stores, uses, and disposes of sensitive consumer information, which is almost every company doing business in this modern economy.



A client recently asked me to identify the next wave of data privacy litigation.  I said that with so much attention on lawsuits arising from data breaches, particularly in light of some recent successes for the plaintiffs in those lawsuits, the way in which companies collect information and disclose what they are collecting is flying under the radar.  This “failure to match” what is actually being collected with what companies are saying they’re collecting and doing with that information could lead to the next wave of data privacy class action litigation.

Here’s an example.  A privacy policy in a mobile app might state that the app collects the user’s name, mailing address, and purchasing behavior.  In fact, and often unbeknownst to the person who drafted the privacy policy, the app is also collecting information like the user’s geolocation and mobile device identification number, but that collection is not disclosed to the user in the privacy policy.  The collection of the additional information isn’t what gets the company into trouble.  It’s the failure to fully and accurately disclose the collection practice and how that information is used and disclosed to others that creates the legal risk.

What is the source of this problem?  In an effort to minimize costs, small companies often slap together a privacy policy by cutting and pasting from a form provided by a website designer or found on the Internet.  Little care is given to the accuracy and depth of the policy because there is little awareness of the potential risk.  Larger companies face a different problem: the left hand sometimes doesn’t know what the right hand is doing.  Legal, privacy, and compliance departments often do not ask the right questions of IT, web/app developers, and marketing, and the latter may not do a sufficiently good job of volunteering more than what is asked of them.  This problem can be further exacerbated where the app/website development and maintenance is outsourced.  This failure to communicate can, unintentionally, result in a “failure to match” a company’s words with its actions when it comes to information collection.

We have already seen state and federal regulators become active in this area.  The Federal Trade Commission has brought a significant number of enforcement actions against organizations seeking to make sure that companies live up to the promises they make to consumers about how they collect and use their information.  Similarly, the Office of the California Attorney General recently brought a lawsuit against Delta Air Lines alleging a violation of California’s Online Privacy Protection Act for failure to provide a reasonably accessible privacy policy in its mobile app.  Additionally, the California Attorney General’s Office has issued guidance on how mobile apps can better protect consumer privacy, which includes the conspicuous placement and fulsome disclosure of information collection, sharing, and disclosure practices.  As the use of mobile apps and collection of electronic information about consumers increase, we can expect to see a ramping up of these enforcement actions.

What sort of civil class action liability could companies face for “failure to match”?  Based on what we’ve seen in privacy and security litigation thus far, if the failure to match a policy with a practice is intentional or reckless, companies could face exposure under theories of fraud or deceptive trade practice statutes that provide a private right of action (e.g., state “Little FTC Acts”).  Even if the failure to disclose is unintentional, the company could still face a lawsuit alleging negligent misrepresentation, breach of contract, and statutory claims under the Gramm-Leach-Bliley Act, HIPAA’s Privacy Rule, or California’s Online Privacy Protection Act.  Without weighing in on the merits of these lawsuits, I would venture to guess that the class actions that will have the greatest chances of success will be those where the plaintiffs can show some financial harm (e.g., they paid for the apps in which the deficient privacy policy was contained) or there is a statute that provides set monetary relief as damages (e.g., $1,000 per violation/download).

What can companies do to minimize this risk?  Companies should begin by evaluating whether their privacy policies match their collection, use, and sharing practices.  This process starts with the formation of a task force, under the direction of counsel, composed of representatives from legal, compliance, IT, and marketing, and dedicated to identifying: (1) all company statements about what information is collected (on company websites, in mobile apps, in written documents, etc.); (2) what information is actually being collected by the company’s website, mobile app, and other information collection processes; and (3) how the information is being used and shared.  The second part requires a deep dive, perhaps even an independent forensic analysis, to ensure that the company’s statements about what information is being collected are correct.  It’s important that the “tech guys” (the individuals responsible for developing the app/website) understand the significance of full disclosure.  Companies should also ask, “do we really need everything we’re collecting?”  If not, why take on the additional risk?  Also remember that this is not a static process.  Companies should regularly reevaluate their privacy policies and monitor the information they collect.  A system must be in place to quickly identify when collection, use, and sharing practices change, so the policies can be updated promptly where necessary.


DISCLAIMER:  The opinions expressed here represent those of Al Saikali and not those of Shook, Hardy & Bacon, LLP or its clients.  Similarly, the opinions expressed by those providing comments are theirs alone, and do not reflect the opinions of Al Saikali, Shook, Hardy & Bacon, or its clients.  All of the data and information provided on this site is for informational purposes only.  It is not legal advice nor should it be relied on as legal advice.

Just when you thought it might be safe to go back into the water, another significant data breach lawsuit may be settling.  Last week, I wrote about the proposed settlement in the AvMed lawsuit.  The motion for a preliminary proposed settlement in that case was granted on Friday, and a Final Hearing is set for February 28, 2014.

At the end of last week, however, the St. Louis Post-Dispatch reported that Schnuck Markets has agreed to settle a proposed class action arising from a breach of its systems (a cyber attack in which a computer code was inserted into Schnucks’ payment system, allowing the capture of magnetic strip data from approximately 2.4 million customers’ payment cards between December 2012 and March 2013).

The Legal Theories

The lawsuit, which is pending before a St. Louis Circuit Court, alleges that Schnucks: (1) failed to secure customers’ personal financial information, and (2) did not notify customers in a clear and timely manner that their information had been stolen.

The “failure to secure” theory is based on an argument that Schnucks did not abide by “best practices and industry standards concerning the security of its computer and payment processing systems.”  This allegation should scare every corporate entity.  Why?  Because the phrase “best practices and industry standards” is ambiguous and can be defined very differently depending on whom you ask.  For example, is the standard best measured by the Payment Card Industry Data Security Standard (PCI DSS)?  Perhaps it’s measured by NIST?  How about ISO?  Should you use some amorphous common law standard that has developed in the case law, or laws that may not directly apply to you (e.g., HIPAA if you’re not a Covered Entity or Business Associate)?  Regardless of which standard you choose, it’s a moving target that changes as technology changes.  In other words, compliance with the “reasonableness” standard can be both expensive and very difficult to determine.

The second legal theory (that Schnucks failed to timely and adequately notify consumers) should also cause some concern to organizations that maintain sensitive information.  How did Schnucks notify its customers?  According to the plaintiffs, Schnucks issued a national press release within two weeks of learning that its systems had been compromised, though they claim that no “individual notification” to class members occurred.  With respect to when the notice took place, anyone who is experienced in breach response will tell you that notification within two weeks of learning of an incident involving a cyber attack is prompt.  It takes time to identify the affected systems, determine the source and scope of the intrusion, identify what information was affected, learn where the individuals whose personal information was affected are located (assuming the incident even affected personal information), and confirm that the compromise has been contained so there is no threat of a live hacker moving to other areas of your information systems while you’re undertaking notification.  With respect to how the notice took place, it is not clear whether Schnucks was perhaps trying to provide substitute notice under the applicable state data breach notification laws, which would have obviated the need for individual notice.

The causes of action in the Second Amended Class Action Petition are as follows:

(1) Breach of implied contract – plaintiffs claim that in providing financial data to Schnucks, plaintiffs entered into an implied contract with Schnucks obligating it to reasonably safeguard plaintiffs’ information and notify plaintiffs if the information was accessed without authorization.

(2)  Violation of Missouri’s Merchandising Practices Act – plaintiffs claim that Schnucks engaged in “unfair conduct” by failing to properly implement adequate, commercially reasonable security measures to protect their personal information while shopping at Schnucks.  Plaintiffs also contend that Schnucks’ failure to provide timely and sufficient notice of the breach of its computer systems was an “unfair practice.”

(3) Invasion of Privacy by Public Disclosure of Private Facts – plaintiffs also allege that the breach resulted in a public disclosure of the plaintiffs’ private information.

Plaintiffs do not claim violation of any state data breach notification law as a cause of action, despite their factual allegations that Schnucks’ notification was inadequate and untimely.

Damages Sought

The plaintiffs seek damages for:  (1) out-of-pocket expenses incurred to mitigate the increased risk of identity theft, (2) the value of their time spent mitigating identity theft and the risk of identity theft, (3) the increased risk of identity theft, (4) the deprivation of the value of their personal information, and (5) anxiety and emotional distress.  These damages, for the most part, fall on the “weaker” side of the cognizable damages spectrum based on existing case law.  The proposed settlement, however, attempts to limit recovery to those plaintiffs who suffered cognizable damages.

Terms of the Proposed Settlement

The terms of the proposed settlement are set forth in the parties’ motion for preliminary approval of class action settlement.  Schnucks denies any wrongdoing as a term of the proposed settlement.  The proposed settlement fund would provide the plaintiffs with the following relief:

  • Fraudulent Charges – up to $10 for each credit or debit card that was compromised and had fraudulent charges posted on it, even if the charges were later reversed.
  • Out-of-Pocket Expenses – unreimbursed out-of-pocket expenses (bank fees, overdraft and late fees), and $10 per hour for up to three hours of time spent dealing with the security breach.  There would be a $175 per person cap on these expenses.
  • There is an aggregate cap of $1.6 million for the above two categories.  If the total claims exceed that amount, customers are guaranteed $5 for each compromised card.
  • Identity Theft – up to $10,000 for each related identity theft loss, with a cap of $300,000 in total.
  • Attorney’s Fees – up to $635,000 for the plaintiffs’ attorney’s fees.
  • Incentive Awards – $500 to each of the nine named plaintiffs in the lawsuit.
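For readers who want to see how the tiered caps interact, the payout terms above can be sketched in a few lines of Python.  This is a hypothetical illustration only: the function names are mine, and it assumes that the $175 per-person cap applies to the expense-and-time category and that the $5-per-card fallback replaces (rather than reduces) individual claims when the $1.6 million aggregate cap is exceeded — readings the motion itself may or may not support.

```python
# Illustrative sketch of the proposed settlement's payout tiers.
# All dollar figures come from the terms described above; the claim
# structure and cap interactions are assumptions for illustration.

PER_CARD_FRAUD = 10.00          # per compromised card with fraudulent charges
HOURLY_RATE = 10.00             # per hour spent dealing with the breach
MAX_HOURS = 3                   # up to three hours compensable
PER_PERSON_EXPENSE_CAP = 175.00 # cap on the expense-and-time category
AGGREGATE_CAP = 1_600_000.00    # aggregate cap on the two categories above
FALLBACK_PER_CARD = 5.00        # guaranteed if total claims exceed the cap

def claimant_payout(cards_with_fraud, out_of_pocket, hours_spent):
    """One claimant's payout before the aggregate cap is considered."""
    fraud = cards_with_fraud * PER_CARD_FRAUD
    expenses = min(out_of_pocket + min(hours_spent, MAX_HOURS) * HOURLY_RATE,
                   PER_PERSON_EXPENSE_CAP)
    return fraud + expenses

def apply_aggregate_cap(claims, cards_per_claimant):
    """If total claims exceed $1.6M, each compromised card gets $5 instead."""
    if sum(claims) <= AGGREGATE_CAP:
        return claims
    return [cards * FALLBACK_PER_CARD for cards in cards_per_claimant]

# e.g., two compromised cards, $40 in bank fees, five hours of time:
# 2 * $10 + min($40 + 3 * $10, $175) = $20 + $70 = $90
print(claimant_payout(2, 40.00, 5))  # 90.0
```

Note how the per-person cap and the three-hour limit make the individual recoveries small by design, which is why the question below — how many class members can document even these modest damages — matters.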

It would be interesting to know how many members of the class can actually demonstrate the type of quantifiable and specific damages for which the settlement provides relief.

The Fat Lady Isn’t Singing Just Yet . . .

Before the case can settle, however, the court must first consider a motion to intervene that was filed by an individual pursuing a related federal lawsuit against Schnucks elsewhere.  She argues that there are four pending federal class action lawsuits that arise from the same operative facts as the state court case, and the proposed settlement risks releasing Schnucks from the federal lawsuit.  Ostensibly, the intervening party believes she can obtain greater relief in federal court.

Whether or not the intervening party succeeds, the proposed settlement still has value because it is another example of the types and extent of damages some defendants are willing to agree to in data breach lawsuits.  It is also a glimpse into what the plaintiffs individually are being awarded as damages, and how much their lawyers are being awarded as fees.  But the bigger lessons to be learned from all of this are:  (1) there appears to be a standard of “reasonableness” developing in data breach cases that is amorphous and therefore difficult to comply with, and (2) when and how you notify affected individuals can be a source of potential liability in a data breach class action.

A case review is scheduled for December 25, 2013.  Merry Christmas.



How much of a headache can a couple of stolen laptops cause your organization?  How about a $3 million headache?  That is the amount of a settlement proposed in an Unopposed Motion in Support of Preliminary Approval of Class Action Settlement in Resnick/Curry v. AvMed, Inc., No. 1:10-cv-24513-JLK (S.D. Fla.), a data breach lawsuit pending in the Southern District of Florida.


Resnick involved the theft of two unencrypted laptops from a conference room in the defendant’s corporate office.  Unfortunately, the laptops contained personal information of approximately 1.2 million customers/insureds (“the plaintiffs”).  The plaintiffs filed a class action lawsuit claiming that AvMed failed to adequately secure the plaintiffs’ personal information.

The District Court dismissed the lawsuit in July 2011, finding that the plaintiffs had failed to show any cognizable injury.  The 11th Circuit, however, reversed the trial court, holding that the plaintiffs had in fact suffered cognizable injuries.

Of particular note was the portion of the 11th Circuit’s opinion addressing the plaintiffs’ unjust enrichment count.  The plaintiffs had argued that a portion of their insurance premiums was ostensibly for the defendant’s administrative costs in implementing safeguards that protected the plaintiffs’ information.  The plaintiffs contended that, as evidenced by the stolen unencrypted laptops, the defendant had not adopted reasonable security measures to protect their information, and that because their information was ultimately compromised, a portion of those costs should be returned.  The 11th Circuit agreed, and held that the unjust enrichment count (among other counts) could proceed on remand.

The Settlement Terms

The $3 million settlement fund is to be disbursed as follows:

(1) approved premium overpayment claims — class members can receive up to $10 per year for each year they paid the defendant for insurance before the data breach, subject to a $30 limit.  These are the unjust enrichment damages.

(2) approved identity theft claims — class members who suffered any unreimbursed monetary losses as a result of identity theft related to the breach are eligible to have those amounts reimbursed.

(3) settlement administration expenses — these are the costs for providing notice to the settlement classes and the costs of administering the settlement.  At first blush these may seem small, but remember that there are potentially 1.2 million individuals involved.

(4) class counsel’s attorney’s fees and costs — $750,000 to class counsel (Edelson LLC, one of the few plaintiffs’ firms that has demonstrated a pattern of success in privacy and data security litigation).

(5) plaintiffs’ incentive awards — $10,000 to be split evenly among the class representatives.

Perhaps the most valuable part of the settlement for those of us who advise clients about privacy and data security legal matters is the portion relating to what the defendant has agreed to do in the future, which reads a little like an FTC consent order:

(1) mandatory security awareness and training programs for all company employees;

(2) mandatory training on appropriate laptop use and security for all company employees whose employment responsibilities include accessing information stored on company laptop computers;

(3) upgrading of all company laptop computers with additional security mechanisms, including GPS tracking technology (this latter part seems a bit much, its usefulness is questionable, and it could lead to other privacy issues related to employee location tracking);

(4) new password protocols and full disk encryption technology on all company desktops and laptops so that electronic data stored on such devices would be encrypted at rest;

(5) physical security upgrades at company facilities and offices to further safeguard workstations from theft; and,

(6) the review and revision of written policies and procedures to enhance information security.

Lessons To Be Learned

Why are the prospective measures so important? They provide a roadmap for what companies should do to minimize the risk of similar litigation. They also make good business sense and are likely compatible with the expectations of a company’s consumers. They are safeguards all companies should consider. Had the two laptops in Resnick been encrypted, one has to wonder whether a lawsuit would have been filed at all.

Another lesson — what are you saying in your consumer-facing policies and notices about the security safeguards your company has adopted to protect consumer information?  Such statements, though useful and sometimes required, could expose your organization to the same unjust enrichment argument that the plaintiffs made in Resnick.

Finally, this is the second data breach lawsuit that has resulted in a substantial settlement for the plaintiffs and both were filed in the Southern District of Florida.  (The other was Burrows v. Purchasing Power, which I blogged about here, and resulted in a settlement of approximately $430,000).  The settlements are in sharp contrast to the vast majority of cases that have been dismissed for lack of standing and damages. It will be interesting to see what impact these recent settlements will have on future data security and privacy litigation.

10/26/13 UPDATE:  The Southern District of Florida wasted no time considering the unopposed motion seeking preliminary approval of the class action settlement.  On October 25th, just four days after the motion was filed, the court granted it and set the Final Approval Hearing for February 28, 2014.

