Data Security Law Journal

Focusing on legal trends in data security, cloud computing, data privacy, and anything E

The Target Data Breach Lawsuits: Why Every Company Should Care

Posted in Data Breach, Data Security, Lawsuits

Plaintiffs’ lawyers were falling over themselves last week in a race to the courthouse to sue Target as a result of its recent data breach.  By at least one report, over 40 lawsuits have already been filed against Target, the first of which was filed the day after the breach became public.  This post will provide an overview of the lawsuits, analyze their merits, identify potential concerns for Target, and address some of the larger public policy implications raised by the lawsuits.  My next post will provide more specific details about a sample of the lawsuits.

A (Coordinated) Race to the Courthouse

The lawsuits were filed in federal courts all over the country, including Alabama, California, Florida, Illinois, Minnesota, Oregon, and Rhode Island.  At least four of them were the result of coordinated efforts between plaintiffs’ firms that filed the lawsuits in California, Illinois, and Oregon, given the similarity of language and structure used in those complaints.  (That’s not particularly unusual, but let’s not pretend that there isn’t a coordinated effort involved here.)  The lawsuits will likely be consolidated or become part of a multidistrict litigation, and there will be an internal battle among the plaintiffs’ lawyers over who will serve as class counsel.

Also interesting is when the lawsuits were filed.  All of these lawsuits were filed within a few days of the data breach becoming public.  They were filed before knowing what caused the breach, before knowing when Target learned of the breach, and before knowing what Target did to prevent the breach from occurring in the first place.  The developing data breach legal landscape has shown us that liability from a data breach arises not from the breach itself (almost every company suffers a breach), but from what the company did before or after the breach to prevent it and notify affected individuals.  So the fact that these lawsuits were filed before we know much about what led to the breach and how Target responded should raise initial skepticism about the merits of the lawsuits.

On to the Merits . . .

Generally speaking, the lawsuits are not only premature, but weak for at least two reasons: their legal theories are not sufficiently specific, and almost none of them allege cognizable harm.

The lawsuits contain numerous causes of action (negligence, statutory violations, breach of implied and express contracts, invasion of privacy, bailment, etc.), but the causes of action are based primarily on two legal theories:  (1) Target failed to act reasonably in adopting safeguards that would have prevented the breach from happening; and/or, (2) Target didn’t notify affected consumers quickly enough.  Let’s evaluate these theories and other weaknesses in the lawsuits separately.

“Failure to Adopt Reasonable Safeguards”

Plaintiffs allege that Target failed to act reasonably to adopt safeguards to prevent the breach from occurring, but there are no allegations as to what specifically Target did wrong.  In the LinkedIn lawsuit, for example, there were allegations that LinkedIn failed to salt or hash sensitive information, and that LinkedIn’s conduct contradicted a specific provision of its consumer-facing privacy policy.  The LinkedIn complaint was dismissed because the court held that the plaintiffs lacked standing, but you knew upon reading it what the plaintiffs were claiming LinkedIn did (or failed to do) wrong.

There are no similarly specific allegations in the lawsuits against Target, probably because the plaintiffs don’t know enough about the facts to plead anything with the requisite specificity.  They don’t know yet what Target did wrong, or even if it did anything wrong.  The highly ambiguous pleading now puts Target in the position of trying to defend itself against a “moving target” (no pun intended) that plaintiffs will interpret differently to best suit their needs as the lawsuit progresses.

“Failure to Timely Notify Affected Consumers”

The plaintiffs also claim that Target failed to timely notify affected consumers of the breach, but there are currently no facts that support this theory.  According to all accounts, the breach occurred between November 27th and December 15th, and Target notified potentially affected customers a few days thereafter by email and by creating a special web page (linked to Target.com) with regularly updated information about the breach and Target’s response.

As anyone with breach response experience will tell you, there are a number of time-consuming steps in the breach response process before notification can take place.  First, you need to identify and understand the nature of the compromise, and you have to be reasonably sure that the compromise has been contained and remediated so it is no longer a threat.  This step alone can take days or weeks to complete depending on the level of sophistication of the attack.  Further complicating this step is the coordination with law enforcement, who may be concerned that acting too quickly will inhibit their ability to identify the perpetrators.  After the integrity of your system has been restored, you need to identify what information was affected by the breach.  If you learn that personal information was potentially compromised as a result of the breach, you need to know whose information was affected so you can quickly inform them and regulatory authorities in compliance with applicable legal requirements.  Undertaking this entire process can often take weeks.  Target appears to have done it within a few days.

There is another factor that must be considered in determining whether Target complied with any legal obligation to notify consumers – the various data breach notification laws.  Forty-six states have their own data breach notification laws, and they are triggered by the location of the individual whose information is compromised, not by the location of the company that suffered the breach (meaning that they’re all in play with a breach this size).  Most require notification within a “reasonable” period of time, and for some that means the breached entity may have as long as 30 to 45 days to undertake notification.  These laws usually do not “start the clock running” on notification until the company reasonably believes that it has identified the full scope of the breach and has contained it.  This makes sense because you wouldn’t want to tip off the hackers that you are on to them by issuing a public notification when your systems are still compromised.  Additionally, it is very difficult to undertake notification until you know whom you need to notify (i.e., whose information was compromised, where they live, and how they can be contacted), which can take some time to determine.  Finally, almost all of these laws allow for a delay in notification where law enforcement believes that such notification would impede their ability to identify and investigate the hackers.  We do not know whether such a “law enforcement hold” was in place in this breach.  (Some of the plaintiffs allege in their complaints that no law enforcement hold was in place, but they couldn’t possibly know that yet.)

It is possible that facts could emerge at a later date showing that Target knew of the compromise much earlier but chose not to notify affected consumers, but for the time being, the fact that Target notified affected consumers within a few days of the compromise becoming known easily disposes of the allegation that Target delayed notifying consumers.

Cognizable Harm

The plaintiffs will also have a very difficult time proving that they suffered cognizable harm, as evidenced by the difficulty they have in pleading it.  Almost half of the lawsuits allege that the plaintiffs suffered “compensatory damages” or “harm” generally, but fail to describe those damages with any specificity.  They likely cannot identify any cognizable harm at this point, further demonstrating the premature nature of these lawsuits.  Some of the lawsuits seek damages for a “risk” of harm at some unforeseeable point in the future, or for fraudulent charges that were almost certainly reimbursed or will be reimbursed by the consumers’ financial institutions, or for potential damage to their credit scores.  None of these types of damages have been recognized as cognizable in a data breach lawsuit.

This is not to say that no damages are cognizable.  In a few jurisdictions, courts have held that plaintiffs can proceed in pursuing certain damages.  In the First Circuit, for example, consumers are allowed to pursue “mitigation expenses” (e.g., the unreimbursed cost of replacing their cards, obtaining credit reports and credit insurance, etc.).  In the Eleventh Circuit, consumers have been allowed to pursue the portion of their service fees/premiums to a company that was used for securing the consumers’ personal information.  To the extent the plaintiffs have filed lawsuits in these jurisdictions and are seeking these types of damages, their allegations of damages may be stronger.

Precedent

Finally, Plaintiffs will have to deal with the majority of case law in data breach lawsuits that, with some limited exceptions, has not allowed the lawsuits to proceed.  Two of the most important decisions will be the U.S. Supreme Court’s decision in Clapper v. Amnesty International and the Northern District of Illinois’s decision in In re Barnes & Noble Pin Pad Litigation.  Clapper raised the bar for demonstrating cognizable harm and standing in privacy violation cases such as this one.  The Clapper decision was relied on by the Northern District of Illinois in dismissing a data breach lawsuit against Barnes & Noble that arose from an almost identical set of facts — the compromise of consumers’ personal information stolen from PIN pads at a major retailer.  The court held that the plaintiffs lacked standing because they could not allege that a threatened injury was “certainly impending” as a result of the breach.

I expect the plaintiffs to rely on the recent decisions by the Eleventh Circuit, the First Circuit, and the Southern District of Florida that allowed data breach lawsuits to proceed.  Therefore, I would closely monitor what happens in the two Florida lawsuits and the Rhode Island lawsuit, or any others that are subsequently filed in the Eleventh or First U.S. Circuits.

Should Target Still Be Worried?

Despite the premature nature and overall weaknesses of the lawsuits as filed, Target still has cause for concern.  First, even though legal precedent is heavily in its favor (this blog post cites only a few of the many opinions dismissing data breach lawsuits), the development of the law is still in its early phases, and as is evident from the previous paragraph, some courts where lawsuits against Target are pending have allowed data breach lawsuits to proceed.

Another concern is how the facts emerge.  For example, if it turns out that Target knew about the breach long before it was disclosed publicly, knew that personal information had been compromised, knew whose information had been compromised, knew that the information was not encrypted, and was under a legal obligation to notify affected individuals, then the plaintiffs’ “failure to timely notify” theory will strengthen.

Target also has to be concerned about trying to keep the focus where the law requires it.  The plaintiffs’ lawyers are going to try to shift the focus from what Target did (the sophisticated and complex information security program Target likely had in place) to what Target could have done (the one “error” Target made that could have prevented the breach).  According to one study, 97% of breaches are avoidable (in hindsight) through simple or intermediate controls.  Why is that important?  Because I have little doubt that the plaintiffs’ lawyers will be able to find a cybersecurity “expert” somewhere willing to testify that Target could have done something that would have prevented the breach from occurring, thereby trying to create an issue of fact as to the reasonableness of Target’s conduct.  Target will need to try hard to keep the focus on the correct legal standard.  The legal standard isn’t whether Target could have done something to prevent the breach, but whether it acted reasonably to prevent the breach.  In other words, the plaintiffs’ lawyers will try to persuade the courts that liability should be determined by whether the breach was preventable, and Target will try to keep the focus on the fact that it adopted a highly sophisticated, expensive, and (for the most part) very effective information security program and made the security of its consumers’ information the highest priority.  If plaintiffs succeed in shifting the focus away from the legal standard, every company should be very concerned, because so many data breaches are, in hindsight, preventable, which means that almost every company could face potential liability if they suffer a breach.

So Why Should EVERY Company Care About These Lawsuits . . .

The lawsuits are premature, not well supported by precedent, and based heavily on rank speculation as to the safeguards Target had in place and how quickly it responded.  Despite these weaknesses, however, every company should care about what happens to these lawsuits.  Target is a very large company that undoubtedly had in place complex and sophisticated safeguards to protect against this type of a data breach, and from what we know so far, they notified affected individuals very quickly.  If there is anything less than a dismissal or summary judgment entered in all of these cases, then the proverbial blood will be in the water and we can expect the floodgates of data breach litigation to open.  Almost every company that suffers a data breach could be held liable because few are going to have the level of security and response efforts that an organization like Target has in place.

The public policy consequences of Target being held liable are significant.  Companies will be less inclined to reveal breaches due to potential liability exposure, so consumers will be less likely to know when their information has been accessed, precluding them from responding adequately to protect themselves.  Instead of investing resources into physical, technical, and administrative safeguards that could improve the security of consumers’ information, companies will be forced to spend their resources on litigation costs, settlements, and awards to plaintiffs.  The individuals who will benefit most won’t be the consumers (who could each receive nominal awards for mitigation expenses), but the attorneys who will reap significant attorney’s fees awards in class action lawsuits.  So what happens to these lawsuits will be important to any company that collects, stores, uses, and disposes of sensitive consumer information, which is almost every company doing business in this modern economy.

 

DISCLAIMER:  The opinions expressed here represent those of Al Saikali and not those of Shook, Hardy & Bacon, LLP or its clients.  Similarly, the opinions expressed by those providing comments are theirs alone, and do not reflect the opinions of Al Saikali, Shook, Hardy & Bacon, or its clients.  All of the data and information provided on this site is for informational purposes only.  It is not legal advice nor should it be relied on as legal advice.

What’s The Next Wave of Privacy Litigation? “Failure to Match”

Posted in Being Proactive, Data Privacy, Lawsuits

A client recently asked me to identify the next wave of data privacy litigation.  I said that with so much attention on lawsuits arising from data breaches, particularly in light of some recent successes for the plaintiffs in those lawsuits, the way in which companies collect information and disclose what they are collecting is flying under the radar.  This “failure to match” what is actually being collected with what companies are saying they’re collecting and doing with that information could lead to the next wave of data privacy class action litigation.

Here’s an example.  A privacy policy in a mobile app might state that the app collects the user’s name, mailing address, and purchasing behavior.  In fact, and often unbeknownst to the person who drafted the privacy policy, the app is also collecting information like the user’s geolocation and mobile device identification number, but that collection is not disclosed to the user in the privacy policy.  The collection of the additional information isn’t what gets the company into trouble.  It’s the failure to fully and accurately disclose the collection practice and how that information is used and disclosed to others that creates the legal risk.

What is the source of this problem?  In an effort to minimize costs, small companies often slap together a privacy policy by cutting and pasting from a form provided by a website designer or found on the Internet.  Little care is given to the accuracy and depth of the policy because there is little awareness of the potential risk.  Larger companies face a different problem: the left hand sometimes doesn’t know what the right hand is doing.  Legal, privacy, and compliance departments often do not ask the right questions of IT, web/app developers, and marketing, and the latter may not do a sufficiently good job of volunteering more than what is asked of them.  This problem can be further exacerbated where the app/website development and maintenance is outsourced.  This failure to communicate can, unintentionally, result in a “failure to match” a company’s words with its actions when it comes to information collection.

We have already seen state and federal regulators become active in this area.  The Federal Trade Commission has brought a significant number of enforcement actions seeking to make sure that companies live up to the promises they make to consumers about how they collect and use their information.  Similarly, the Office of the California Attorney General recently brought a lawsuit against Delta Air Lines alleging a violation of California’s Online Privacy Protection Act for failure to provide a reasonably accessible privacy policy in its mobile app.  Additionally, the California Attorney General’s Office has issued guidance on how mobile apps can better protect consumer privacy, which includes the conspicuous placement and full disclosure of information collection, sharing, and disclosure practices.  As the use of mobile apps and the collection of electronic information about consumers increase, we can expect to see a ramping up of these enforcement actions.

What sort of civil class action liability could companies face for “failure to match”?  Based on what we’ve seen in privacy and security litigation thus far, if the failure to match a policy with a practice is intentional or reckless, companies could face exposure under theories of fraud or deceptive trade practice statutes that provide a private right of action (e.g., state “Little FTC Acts”).  Even if the failure to disclose is unintentional, the company could still face a lawsuit alleging negligent misrepresentation, breach of contract, and statutory violations, including violations of Gramm-Leach-Bliley, HIPAA’s privacy rule, or California’s Online Privacy Protection Act.  Without weighing in on the merits of these lawsuits, I would venture to guess that the class actions that will have the greatest chances of success will be those where the plaintiffs can show some financial harm (e.g., they paid for the apps in which the deficient privacy policy was contained) or there is a statute that provides set monetary relief as damages (e.g., $1,000 per violation/download).

What can companies do to minimize this risk?  To minimize the risks, companies should begin by evaluating whether their privacy policies match their collection, use, and sharing practices.  This process starts with the formation of a task force under the direction of counsel that is comprised of representatives from legal, compliance, IT, and marketing and that is dedicated to identifying: (1) all company statements about what information is collected (on company websites, in mobile apps, in written documents, etc.); (2) what information is actually being collected by the company’s website, mobile app, and other information collection processes; and (3) how the information is being used and shared.  The second part requires a really deep dive, perhaps even an independent forensic analysis, to ensure that the company’s statements about what information is being collected are correct.  It’s important that the “tech guys” (the individuals responsible for developing the app/website) understand the significance of full disclosure.  Companies should also ask, “do we really need everything we’re collecting?”  If not, why are you taking on the additional risk?  Also remember that this is not a static process.  Companies should regularly evaluate their privacy policies and monitor the information they collect.  A system must be in place to quickly identify when these collection, use, and sharing practices change, so the policies can be updated promptly where necessary.
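The heart of the audit described above is a simple comparison: the data categories the policy declares versus the categories the app actually collects.  As a toy illustration only (the category names below are hypothetical, not drawn from any real policy or app):

```python
# Toy "failure to match" audit: compare the data categories a privacy policy
# declares against the categories the app actually collects.
# All category names below are hypothetical.

declared = {"name", "mailing_address", "purchasing_behavior"}
collected = {"name", "mailing_address", "purchasing_behavior",
             "geolocation", "device_id"}

# Anything collected but not declared is a potential "failure to match."
undisclosed = collected - declared
print(sorted(undisclosed))  # ['device_id', 'geolocation']
```

In practice the "collected" side of the comparison is the hard part, which is why the deep dive (or independent forensic analysis) mentioned above matters: the set difference is only as reliable as the inventory of what is actually being gathered.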

 


Yet Another Potential Multimillion-Dollar Data Breach Settlement . . .

Posted in Data Breach, Data Security, Lawsuits

Just when you thought it might be safe to go back into the water, another significant data breach lawsuit may be settling.  Last week, I wrote about the proposed settlement in the AvMed lawsuit.  The motion for a preliminary proposed settlement in that case was granted on Friday, and a Final Hearing is set for February 28, 2014.

At the end of last week, however, the St. Louis Post-Dispatch reported that Schnuck Markets has agreed to settle a proposed class action arising from a breach of its systems (a cyber attack in which malicious code was inserted into Schnucks’ payment system, allowing the capture of magnetic stripe data from approximately 2.4 million customers’ payment cards between December 2012 and March 2013).

The Legal Theories

The lawsuit, which is pending before a St. Louis Circuit Court, alleges that Schnucks: (1) failed to secure customers’ personal financial information, and (2) did not notify customers in a clear and timely manner that their information had been stolen.

The “failure to secure” theory is based on an argument that Schnucks did not abide by “best practices and industry standards concerning the security of its computer and payment processing systems.”  This allegation should scare every corporate entity.  Why?  Because the phrase “best practices and industry standards” is so ambiguous and can be defined so differently depending on whom you ask.  For example, is the standard best measured by the Payment Card Industry Data Security Standard (PCI DSS)?  Perhaps it’s measured by NIST?  How about ISO?  Should you use some amorphous common law standard that has developed in the case law, or laws that may not directly apply to you (e.g., HIPAA if you’re not a Covered Entity or Business Associate)?  Regardless of what standard you choose, it’s a moving target that changes as technology changes.  In other words, compliance with the “reasonableness” standard can be both expensive and very difficult to determine.

The second legal theory (that Schnucks failed to timely and adequately notify consumers) should also cause some concern to organizations that maintain sensitive information.  How did Schnucks notify its customers?  According to the plaintiffs, Schnucks issued a national press release within two weeks of learning that its systems had been compromised, though they claim that no “individual notification” to class members occurred.  With respect to when the notice took place, anyone who is experienced in breach response will tell you that notification within two weeks of learning of an incident involving a cyber attack is prompt.  It takes time to identify the affected systems, determine the source and scope of the intrusion, identify what information was affected, learn where the individuals whose personal information was affected are located (assuming the incident even affected personal information), and confirm that the compromise has been contained so there is no threat of a live hacker moving to other areas of your information systems while you’re undertaking notification.  With respect to how the notice took place, it is not clear whether Schnucks was perhaps trying to provide substitute notice under the applicable state data breach notification laws, which would have obviated the need for individual notice.

The causes of action in the Second Amended Class Action Petition are as follows:

(1) Breach of implied contract – plaintiffs claim that in providing financial data to Schnucks, plaintiffs entered into an implied contract with Schnucks obligating it to reasonably safeguard plaintiffs’ information and notify plaintiffs if the information was accessed without authorization.

(2) Violation of Missouri’s Merchandising Practices Act – plaintiffs claim that Schnucks engaged in “unfair conduct” by failing to properly implement adequate, commercially reasonable security measures to protect their personal information while shopping at Schnucks.  Plaintiffs also contend that Schnucks’ failure to provide timely and sufficient notice of the breach of its computer systems was an “unfair practice.”

(3) Invasion of Privacy by Public Disclosure of Private Facts – plaintiffs also allege that the breach resulted in a public disclosure of the plaintiffs’ private information.

Plaintiffs do not claim violation of any state data breach notification law as a cause of action, despite their factual allegations that Schnucks’ notification was inadequate and untimely.

Damages Sought

The plaintiffs seek damages for:  (1) out of pocket expenses incurred to mitigate the increased risk of identity theft, (2) the value of their time spent mitigating identity theft and the risk of identity theft, (3) the increased risk of identity theft, (4) the deprivation of the value of their personal information, and (5) anxiety and emotional distress.  These damages, for the most part, fall into the “weaker” side of the cognizable damages spectrum based on existing case law.  The proposed settlement, however, attempts to limit recovery to those plaintiffs who suffered cognizable damages.

Terms of the Proposed Settlement

The terms of the proposed settlement are set forth in the parties’ motion for preliminary approval of class action settlement.  Schnucks denies any wrongdoing as a term of the proposed settlement.  The proposed settlement fund would provide the plaintiffs with the following relief:

  • Fraudulent Charges – up to $10 for each credit or debit card that was compromised and had fraudulent charges posted on it, even if the charges were later reversed.
  • Out-of-Pocket Expenses – unreimbursed out-of-pocket expenses (bank fees, overdraft and late fees), and $10 per hour for up to three hours of time spent dealing with the security breach.  There would be a $175 per person cap on these expenses.
  • There is an aggregate cap of $1.6 million for the above two categories.  If the total claims exceed that amount, customers are guaranteed $5 for each compromised card.
  • Identity Theft – up to $10,000 for each related identity theft loss, with a cap of $300,000 in total.
  • Attorney’s Fees – up to $635,000 for the plaintiffs’ attorney’s fees.
  • Incentive Awards – $500 to each of the nine named plaintiffs in the lawsuit.
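To see how the caps in the first two categories interact for a single claimant, here is a minimal sketch.  The dollar figures come from the proposed settlement terms above, but the claim amounts, function name, and the assumption that the per-person $175 cap applies to the combined out-of-pocket category are illustrative only; this also ignores the $1.6 million aggregate cap and the $5-per-card fallback.

```python
# Illustrative sketch of one claimant's recovery under the proposed Schnucks
# settlement, before the $1.6M aggregate cap applies.  Claim amounts and the
# treatment of the $175 cap are hypothetical assumptions for illustration.

FRAUD_PER_CARD = 10           # up to $10 per compromised card with fraudulent charges
TIME_RATE = 10                # $10 per hour spent dealing with the breach
TIME_HOURS_CAP = 3            # compensable time capped at three hours
EXPENSE_CAP_PER_PERSON = 175  # per-person cap on the out-of-pocket category

def claimant_payout(cards_with_fraud, bank_fees, hours_spent):
    """Estimate one claimant's recovery under the per-person terms."""
    fraud = cards_with_fraud * FRAUD_PER_CARD
    time_comp = min(hours_spent, TIME_HOURS_CAP) * TIME_RATE
    out_of_pocket = min(bank_fees + time_comp, EXPENSE_CAP_PER_PERSON)
    return fraud + out_of_pocket

# Example: two compromised cards with fraud, $160 in bank fees, 5 hours spent.
print(claimant_payout(2, 160, 5))  # 2*10 + min(160 + 30, 175) = 195
```

Even under generous hypothetical facts, the per-person recovery stays modest, which underscores the question below about how many class members can actually document these losses.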

It would be interesting to know how many members of the class can actually demonstrate the type of quantifiable and specific damages for which the settlement provides relief.

The Fat Lady Isn’t Singing Just Yet . . .

Before the case can settle, however, the court must first consider a motion to intervene that was filed by an individual pursuing a related federal lawsuit against Schnucks elsewhere.  She argues that there are four pending federal class action lawsuits that arise from the same operative facts as the state court case, and the proposed settlement risks releasing Schnucks from the federal lawsuit.  Ostensibly, the intervening party believes she can obtain greater relief in federal court.

Whether or not the intervening party succeeds, the proposed settlement still has value because it is another example of the types and extent of damages some defendants are willing to agree to in data breach lawsuits.  It is also a glimpse into what the plaintiffs individually are being awarded as damages, and how much their lawyers are being awarded as fees.  But the bigger lessons to be learned from all of this are:  (1) there appears to be a standard of “reasonableness” developing in data breach cases that is amorphous and therefore difficult to comply with, and (2) when and how you notify affected individuals can be a source of potential liability in a data breach class action.

A case review is scheduled for December 25, 2013.  Merry Christmas.

 


$3,000,000 Settlement Reached in Data Breach Lawsuit

Posted in Data Breach, Data Security, Lawsuits

How much of a headache can a couple of stolen laptops cause your organization?  How about a $3 million headache?  That is the amount of a settlement proposed in an Unopposed Motion in Support of Preliminary Approval of Class Action Settlement in Resnick/Curry v. AvMed, Inc., No. 1:10-cv-24513-JLK (S.D. Fla.), a data breach lawsuit pending in the Southern District of Florida.

Background

Resnick involved the theft of two unencrypted laptops from a conference room in the defendant’s corporate office.  Unfortunately, the laptops contained personal information of approximately 1.2 million customers/insureds (“the plaintiffs”).  The plaintiffs filed a class action lawsuit claiming that AvMed failed to adequately secure the plaintiffs’ personal information.

The District Court dismissed the lawsuit in July 2011, finding that the plaintiffs had failed to show any cognizable injury.  The 11th Circuit, however, reversed the trial court, holding that the plaintiffs had in fact suffered cognizable injuries.

Of particular note was the portion of the 11th Circuit’s opinion addressing the plaintiffs’ unjust enrichment count.  The plaintiffs had argued that a portion of their insurance premiums was ostensibly for the defendant’s administrative costs in implementing safeguards that protected the plaintiffs’ information.  The plaintiffs contended that, as evidenced by the stolen unencrypted laptops, a portion of those costs should be returned because their information was ultimately compromised and the defendant had not adopted reasonable security measures to protect their information.  The 11th Circuit agreed, and held that the unjust enrichment count (among other counts) could proceed on remand.

The Settlement Terms

The $3 million settlement fund is to be disbursed as follows:

(1) approved premium overpayment claims – class members can receive up to $10 per year for each year they paid the defendant for insurance before the data breach, subject to a $30 limit.  These are the unjust enrichment damages.

(2) approved identity theft claims – class members who suffered any unreimbursed monetary losses as a result of identity theft related to the breach are eligible to have those amounts reimbursed.

(3) settlement administration expenses – these are the costs for providing notice to the settlement classes and the costs of administering the settlement.  At first blush these may seem small, but remember that there are potentially 1.2 million individuals involved.

(4) class counsel’s attorney’s fees and costs – $750,000 to class counsel (Edelson LLC, one of the few plaintiffs’ firms that has demonstrated a pattern of success in privacy and data security litigation).

(5) plaintiffs’ incentive awards – $10,000 to be split evenly among the class representatives.
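For readers who like to see the arithmetic, the premium overpayment formula in item (1) can be sketched in a few lines of Python.  (The function name is mine, and I am assuming the $30 limit operates as a per-member cap on the $10-per-year amount, which is how I read the settlement terms above.)

```python
def premium_overpayment_claim(years_insured: int) -> int:
    """Approved premium overpayment claim under the proposed settlement:
    $10 for each pre-breach year the member paid for insurance,
    subject to a $30 per-member cap."""
    return min(10 * max(years_insured, 0), 30)

# A member with one pre-breach year of coverage could claim $10;
# anyone with three or more years hits the $30 cap.
print(premium_overpayment_claim(1), premium_overpayment_claim(5))
```

In other words, no class member's premium overpayment claim exceeds $30, regardless of how long they were insured before the breach.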

Perhaps the most valuable part of the settlement for those of us who advise clients about privacy and data security legal matters is the portion relating to what the defendant has agreed to do in the future, which reads a little like an FTC consent order:

(1) mandatory security awareness and training programs for all company employees;

(2) mandatory training on appropriate laptop use and security for all company employees whose employment responsibilities include accessing information stored on company laptop computers;

(3) upgrading of all company laptop computers with additional security mechanisms, including GPS tracking technology (this latter part seems a bit much, its usefulness is questionable, and it could lead to other privacy issues related to employee location tracking);

(4) new password protocols and full disk encryption technology on all company desktops and laptops so that electronic data stored on such devices would be encrypted at rest;

(5) physical security upgrades at company facilities and offices to further safeguard workstations from theft; and,

(6) the review and revision of written policies and procedures to enhance information security.

Lessons To Be Learned

Why are the prospective measures so important? They provide a roadmap for what companies should do to minimize the risk of similar litigation. They also make good business sense and are likely compatible with the expectations of a company’s consumers. They are safeguards all companies should consider. Had the two laptops in Resnick been encrypted, one has to wonder whether a lawsuit would have been filed at all.

Another lesson: what are you saying in your consumer-facing policies and notices about the security safeguards your company has adopted to protect consumer information?  Such statements, though useful and sometimes required, could expose your organization to the same unjust enrichment argument that the plaintiffs made in Resnick.

Finally, this is the second data breach lawsuit that has resulted in a substantial settlement for the plaintiffs and both were filed in the Southern District of Florida.  (The other was Burrows v. Purchasing Power, which I blogged about here, and resulted in a settlement of approximately $430,000).  The settlements are in sharp contrast to the vast majority of cases that have been dismissed for lack of standing and damages. It will be interesting to see what impact these recent settlements will have on future data security and privacy litigation.

10/26/13 UPDATE:  The Southern District of Florida wasted no time considering the unopposed motion seeking preliminary approval of the class action settlement.  On October 25th, just four days after the motion was filed, the court granted it and set the Final Approval Hearing for February 28, 2014.

 


Data Breach Lawsuits Settling in the Southern District of Florida

Posted in Data Breach, Data Security, Lawsuits

Plaintiffs in data breach lawsuits around the country have had a difficult time surviving motions to dismiss and for summary judgment.  A number of courts have rejected these lawsuits because they failed to allege or demonstrate cognizable injuries, standing, causation, and the requisite elements to withstand an economic loss rule defense.  It is dangerous, however, to paint with too broad a brush.  Two federal class action data breach lawsuits have now resulted in proposed settlements.  Both of those lawsuits are pending in the Southern District of Florida, raising the question of whether the plaintiffs’ bar will perceive the Southern District of Florida as a plaintiff-friendly jurisdiction for data breach lawsuits, resulting in even more lawsuits being filed there.

In April 2013, the Southern District of Florida preliminarily approved a proposed settlement in Burrows v. Purchasing Power, No. 1:12-CV-22800-UU (S.D. Fla.), a case in which a third-party service provider’s employee allegedly misused his access to personal information of thousands of individuals.  The plaintiffs filed a class action lawsuit and survived a motion to dismiss that argued, among other things, that the plaintiffs lacked a cognizable injury.  I previously wrote about the Burrows litigation here, if you’d like to read more about the underlying arguments.  The settlement fund, attorney’s fees, costs, and an incentive award total approximately $430,000.  A fairness hearing is scheduled next month.

Last week, a joint notice of settlement was filed in a different class action data breach lawsuit that is also pending in the Southern District of Florida.  That case, Resnick/Curry v. AvMed, Inc., No. 1:10-cv-24513-JLK (S.D. Fla.), arose from the theft of two unencrypted laptops containing the personal information of as many as 1.2 million individuals.  The District Court dismissed the lawsuit in July 2011, finding that the plaintiffs had failed to show any cognizable injury, but the 11th Circuit reversed the trial court’s decision.  The joint notice of settlement does not provide the terms of the settlement, though we can expect the court to hold a fairness hearing where the fairness of the terms of settlement will be considered and may become public.

As stated above, these settlements are significant because they are two of the only publicly known settlements in class action lawsuits arising from data breaches, and they both occurred in the same court – the Southern District of Florida.  Given how few data breach lawsuits have proceeded to a public settlement, it will be interesting to see whether more of these lawsuits will be filed in the Southern District of Florida as a result of these recent developments.

 


Healthcare Organizations Take It On The Chin

Posted in Data Breach, Data Privacy, Data Security, FTC, Health Care Industry, Lawsuits

If you have noticed an increasing number of high-profile problems for healthcare organizations with respect to privacy and security issues these last few weeks, you’re not alone.  The issues have included employee misuse of protected health information, web-based breaches, photocopier breaches, and thefts of computers that compromised millions of records containing unsecured protected health information (PHI).  These issues remind us that healthcare companies face significant risks in collecting, using, storing, and disposing of protected health information.

Pharmacy Hit With $1.4 Million Jury Verdict For Unlawful Disclosure of PHI

An Indiana jury recently awarded more than $1.4 million to an individual whose protected health information was allegedly disclosed unlawfully by a pharmacy.  The pharmacist, who was married to the plaintiff’s ex-boyfriend, allegedly looked up the plaintiff’s prescription history and shared it with the pharmacist’s husband and plaintiff’s ex-boyfriend.  The lawsuit alleged theories of negligent training and negligent supervision.  The pharmacy intends to appeal the judgment.

Health Insurer Fined $1.7 Million For Web-Based Database Breach

Meanwhile, the Department of Health and Human Services (HHS) recently fined a health insurer $1.7 million for engaging in conduct inconsistent with HIPAA’s privacy and security rules following a breach of protected health information belonging to more than 612,000 of its customers. The breach arose from an unsecured web-based database that allowed improper access to protected health information of its customers.

HHS’s investigation determined that the insurer:

(1) did not implement policies and procedures for authorizing access to electronic protected health information (ePHI) maintained in its web-based application database;

(2) did not perform an adequate technical evaluation in response to a software upgrade, an operational change affecting the security of ePHI maintained in its web-based application database that would establish the extent to which the configuration of the software providing authentication safeguards for its web-based application met the requirements of the Security Rule;

(3) did not adequately implement technology to verify that a person or entity seeking access to ePHI maintained in its web-based application database is the one claimed; and,

(4) impermissibly disclosed the ePHI, including the names, dates of birth, addresses, Social Security Numbers, telephone numbers and health information, of approximately 612,000 individuals whose ePHI was maintained in the web-based application database.

Health Plan Fined $1.2 Million For Photocopier Breach

In another example of privacy and security issues causing legal problems for a healthcare organization, HHS settled with a health plan for $1.2 million in a photocopier breach case.  The health plan was informed by CBS Evening News that CBS had purchased a photocopier previously leased by the health plan.  (Of all the companies to get the photocopier after the health plan, it had to be CBS News).  The copier’s hard drive contained protected health information belonging to approximately 345,000 individuals.  HHS fined the health plan for impermissibly disclosing the PHI of those individuals when it returned the photocopiers to the leasing agents without erasing the data contained on the copier hard drives.  HHS was also concerned that the health plan failed to include the existence of PHI on the photocopier hard drives as part of its analysis of risks and vulnerabilities required by HIPAA’s Security Rule, and it failed to implement policies and procedures when returning the photocopiers to its leasing agents.

I blogged about photocopier data security issues last year, after the Federal Trade Commission issued a guide for businesses on the topic of photocopier data security.  Another resource I recommend to my clients on the topic of media sanitization is a document prepared by the National Institute of Standards and Technology, issued last fall.

Medical Group Breach May Affect Up To Four Million Patients

Lastly, a medical group recently suffered what is believed to be the second-largest loss of unsecured protected health information reported to HHS since mandatory reporting began in September 2009.  The cause?  Four unencrypted desktop computers were stolen from the company’s administrative office.  The computers contained protected health information of  more than 4 million patients.  As a result, the medical group is mapping all of its computer and software systems to identify where patient information is stored and ensuring it is secured.  The call center set up to handle inquiries following the notification of the patients is receiving approximately 2,000 calls each day.

The Takeaways 

So what are five lessons companies should take away from these developments?

  • Having policies that govern the proper use and disclosure of PHI is a first step, but companies should also audit whether their employees are complying with those policies and discipline employees who don’t, sending a message to everyone in the company that non-compliance will not be tolerated.
  • As technology is upgraded or changed, companies should continue to evaluate any new security risks associated with the change.  Do not assume that simply because software is an “upgrade,” the security risks remain the same.
  • There are hidden risks, such as photocopier hard drives.  Stay apprised of these potential risks, identify and assess them in your risk assessment (required by HIPAA), then implement administrative and technical safeguards to minimize these risks.  With respect to photocopiers, maybe this means ensuring that the hard drives are wiped clean or written over before they are returned to the leasing agent.
  • Encrypt sensitive information at rest and in motion where feasible, and to the extent it isn’t feasible, build in other technical safeguards to protect the information.
  • Train, train, train – having a fully informed legal department and management doesn’t do much good if employees don’t understand these risks and aren’t trained to avoid them. Do your employees know how seemingly simple and uneventful conduct like photocopying a medical record, leaving a laptop unaccompanied, clicking on a link in an email, or doing a favor to a friend who needs PHI about a loved one, can lead to very significant unintended consequences for your company (and, as a result, them)?  Train them in a way that brings these risks to life, update the training and require it annually, and audit that your employees are undertaking the training.
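To make the “encrypt at rest” point concrete, here is a deliberately simplified Python sketch showing why an encrypted record on a stolen device is useless to the thief without the key.  This is a toy XOR one-time pad for illustration only, not a recommendation; a real deployment would use a vetted encryption library (for example, a maintained AES implementation) together with proper key management.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the data with the corresponding key byte.
    # Applying the same key twice returns the original data.
    return bytes(a ^ b for a, b in zip(data, key))

record = b"Member ID 12345, DOB 01/02/1970"  # hypothetical PHI record
key = secrets.token_bytes(len(record))       # random key, stored separately
ciphertext = xor_bytes(record, key)          # what sits on the stolen laptop
recovered = xor_bytes(ciphertext, key)       # readable only with the key

assert recovered == record
```

The point: if the device holds only ciphertext and the key is stored elsewhere, the theft of the hardware does not, by itself, expose the underlying data.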

 


 

Law Firms: How Are You Securing Your Clients’ Information?

Posted in All Things E, Data Privacy, Data Security, Vendor Management

What are law firms doing to protect their clients’ sensitive information?  What are clients doing to determine whether their outside counsel are using reasonable security measures to protect their sensitive information (confidential communication, customer data, financial information, protected health information, intellectual property, etc.)?

According to the data forensic firm Mandiant, at least 80 major law firms were hacked in 2011 by attackers who were seeking secret deal information.  The threats to law firms are real and are publicly documented.  In 2011, during the conflict in Libya, law firms that represented oil and gas companies received PDF files purporting to provide information about the effect of the war on the price of oil.  These documents contained malware that infected the networks of the firms that received them.  Similarly, law firms can be a target of political “hacktivism”, as was the case of a law firm that was attacked by Anonymous after representing a soldier in a controversial case, resulting in the public release of 2.6 gigabytes of email belonging to the firm.  And, of course, law firms are just as susceptible to the same risks as other companies when it comes to employee negligence (e.g., lost mobile devices containing sensitive information), inside jobs (misusing access to sensitive information for personal gain), and theft of data.

With these threats in mind, it is useful for lawyers to remember that they have a number of ethical responsibilities to secure their clients’ information, in addition to important business interests.

The Ethical Obligations

Duty to be competent – lawyers cannot stick their heads in the sand when it comes to technology.  They have an ethical obligation to understand the technology they use to secure client information, or they must retain/consult with someone who can make them competent.  As the Arizona Bar stated in Opinion 09-04 (Dec. 2009), “[i]t is important that lawyers recognize their own competence limitations regarding computer security measures and take the necessary time and energy to become competent or alternatively consult available experts in the field.”

Duty to secure – lawyers have an obligation under Model Rule of Professional Conduct 1.6(c) to “make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.”  Because the model rule was only recently adopted by the ABA, there is no easy definition of “reasonable efforts”, but Comment 18 to Rule 1.6(c) requires consideration of several factors:  (1) the sensitivity of the information; (2) the likelihood of disclosure if additional safeguards are not employed; (3) the cost of employing additional safeguards; (4) the difficulty of implementing the safeguards; and (5) the extent to which the safeguards adversely affect the lawyer’s ability to represent clients.  The Arizona Bar’s 09-04 opinion again provides some helpful details:  “In satisfying the duty to take reasonable security precautions, lawyers should consider firewalls, password protection schemes, encryption, anti-virus measures, etc.”  The Arizona Bar rightfully recognized, however, that the duty “does not require a guarantee that the system will be invulnerable to unauthorized access.”  Also, what are considered “reasonable efforts today” may change, as an opinion of the New Jersey Advisory Committee on Professional Ethics pointed out when it expressed reluctance “to render a specific interpretation of RPC 1.6 or impose a requirement that is tied to a specific understanding of technology that may very well be obsolete tomorrow.”

Duty to update – the duty to secure client information is not static; it evolves and changes as technology changes. Arizona Bar Opinion 09-04 is again helpful:  “technology advances may make certain protective measures obsolete over time . . . [Therefore,] [a]s technology advances occur, lawyers should periodically review security measures to ensure that they still reasonably protect the security and confidentiality of the clients’ documents and information.”

Duty to transmit securely – lawyers have an obligation to securely transmit information.  For example, the ABA requires that “[a] lawyer sending or receiving substantive communications with a client via e-mail or other electronic means ordinarily must warn the client about the risk of sending or receiving electronic communications using a computer or other device, or e-mail account, where there is a significant risk that a third party may gain access.”  One example is where a lawyer represents the employee of a company and the employee uses her employer’s email account to communicate with her attorney – in that instance, the attorney should advise his client that there is a risk the employer could access the employee’s email communications.

Duty to outsource securely – Model Rule of Professional Conduct 5.3 requires that “a lawyer retaining an outside service provider is required to make reasonable efforts to ensure that the service provider will not make unauthorized disclosure of client information.”  ABA Formal Opinion 95-398 interprets this rule as requiring that a lawyer ensure that the service provider has in place reasonable procedures to protect the confidentiality of information to which it gains access.  The ABA recommends that lawyers obtain from the service provider a written statement of the service provider’s assurance of confidentiality.  In an upcoming blog post I will write about a Florida Bar Proposed Advisory Opinion that provides guidance on how lawyers should be engaging cloud computing service providers, which is an emerging trend in the practice of law.

Duty to dispose securely – lawyers also have an obligation to dispose of client information securely.  This is not so much an ethical duty as a legal obligation.  Many states have data disposal laws that govern how companies (law firms are no exception) should dispose of sensitive information like financial information, medical information, or other personally identifiable information.  Examples of secure disposal include shredding of sensitive information and ensuring that leased electronic equipment containing sensitive information on hard drives is disposed of securely.  In one instance, the Federal Trade Commission fined three financial services companies that were accused of discarding sensitive financial information of their customers in dumpsters near their facilities without first shredding that information.  An example of an often-unnoticed machine that stores sensitive information is the copy machine, many of which have hard drives that retain electronic copies of whatever the machine copies.  Fortunately, the FTC has provided a useful guide to minimize some of these risks.
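For the disposal duty, the overwrite-before-discard idea can be sketched in a few lines of Python.  This is an illustrative sketch (the function name is mine), and an important caveat applies: on modern solid-state drives, wear-leveling means an overwrite may not reach every physical block, which is why guidance such as the NIST media sanitization document mentioned earlier also covers degaussing, cryptographic erase, and physical destruction.

```python
import os
import secrets

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file with random bytes several times, then remove it,
    so the plaintext is not trivially recoverable from a discarded disk."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())  # push the overwrite out to the device
    os.remove(path)
```

Simply deleting a file, by contrast, typically removes only the directory entry and leaves the underlying data on the disk.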

The Legal Obligations

The ethical obligations discussed above are separate from any legal obligations that govern certain types of information under HIPPA/HITECH, Gramm-Leach-Bliley, the Payment Card Industry’s Data Security Standards, state document disposal laws, state data breach notification laws, and international data protection laws.  Depending on the type of information the law firms collect, those laws may impose additional proactive requirements to secure data, train employees, and prepare written policies.

The Business Interests

Finally, even if the ethical and legal obligations to secure sensitive information do not provide sufficient incentives for law firms to evaluate their security measures with respect to client information, there are business interests that should compel law firms to do so.  Companies are recognizing the risks presented by sharing sensitive information with service providers like law firms and are, at a minimum, inquiring about the security safeguards the providers have adopted and, in some cases, are requiring a certain level of security and auditing that level of security.  One such example is Bank of America.  According to a recent report, following pressure from regulators, Bank of America now requires its outside counsel to adopt certain security requirements and it is auditing the firms’ compliance with those requirements.

Specifically, Bank of America requires its outside counsel to have a written information security plan, and to follow that plan.  Firms must also encrypt sensitive information that Bank of America shares with them.  Bank of America also wants its law firms to safeguard information on their employees’ mobile devices.  Most importantly, law firms must train their employees on their security policies and procedures.  Finally, Bank of America is auditing its law firms to ensure they are complying with these requirements.

So with these threats, ethical responsibilities, and business interests in mind, it is important that law firms, like all other companies that handle sensitive information, evaluate their administrative, technical, and physical safeguards to minimize the risks associated with the storage, use, and disposal of their clients’ sensitive information.

 


Texas’s Data Privacy Training Laws Change (Again)

Posted in Data Privacy, Data Security, Health Care Industry

In August of last year, I wrote about HB 300, a Texas law that, beginning September 1, 2012, created employee training and other requirements for any company doing business in Texas that collects, uses, stores, transmits, or comes into possession of protected health information (PHI).  The law’s training provisions required covered entities to train their employees every two years regarding federal and state law related to the protection of PHI, and obtain written acknowledgement of the training.  (The training was required for new employees within 60 days of their hiring).  Companies were required to train their employees in a manner specific to the way in which the individual employee(s) handle PHI.

Recently, however, the Texas legislature passed two bills that amend the requirements of HB 300 in a few significant ways.  Under SB 1609, the role-specific training requirement has changed.  Now, companies may simply train employees about PHI “as necessary and appropriate for the employees to carry out the employees’ duties for the covered entity.”

SB 1609 also changed the training trigger.  Instead of training once every two years, a covered entity must now retrain employees when it is “affected by a material change in state or federal law concerning protected health information,” and in such cases the training must take place “within a reasonable period, but not later than the first anniversary of the date the material change in law takes effect.”  This change could mean more or fewer training sessions depending on the nature, size, and location of the covered entity’s business.

SB 1610, which relates to breach notification requirements, is more puzzling.  Until now, Texas law required companies doing business in Texas that suffered data breaches affecting information of individuals residing in other states that did not have data breach notification laws (e.g., Alabama and Kentucky), to notify the individuals in those states of the breach.  SB 1610 removes that requirement and now provides that:  “If the individual whose sensitive personal information was or is reasonably believed to have been acquired by an unauthorized person is a resident of a state that requires a [breached entity] to provide notice of a breach of system security, the notice of the breach of system security required under Subsection (b) [which sets forth Texas’s data breach notification requirements] may be provided under that state’s law or under Subsection (b).”

The natural interpretation of this provision is that a Texas company that suffers a breach of customer information where, for example, some of the customers reside in California, Massachusetts, or Connecticut, is not required to comply with those states’ data breach notification laws if the company complies with the standards set forth in Texas’s data breach notification law.  It will be interesting to see whether Texas receives any pushback from other state Attorneys General, who enforce their own states’ data breach notification laws and may not be pleased with a Texas law instructing companies doing business in Texas that other states’ breach notification requirements can be ignored if the company meets Texas’s.  Nevertheless, the practical effect of this law is unclear because most companies will want to avoid the risk associated with ignoring another state’s data breach notification law.

In short, the legislative changes are a good reminder that companies doing business in Texas that collect, use, store, transmit, or otherwise handle PHI must determine whether they are complying with HB 300 and the more recent legislative acts that were signed into law June 14, 2013 and became effective immediately.

 


U.S. Senate Considers Federal Data Security Legislation

Posted in Data Breach, Data Privacy, Data Security

Legislation was introduced in the U.S. Senate late last week that, if passed, would create proactive and reactive requirements for companies that maintain personal information about U.S. citizens and residents.  The legislation, titled the “Data Security and Breach Notification Act of 2013” (S. 1193), creates two overarching obligations:  to secure personal information and to notify affected individuals if the information is breached.  The bill requires companies to take reasonable measures to protect and secure data in electronic form containing personal information.  If that information is breached, companies are required to notify affected individuals “as expeditiously as practicable and without unreasonable delay” if the company reasonably believes the breach caused or will cause identity theft or other actual financial harm.

A violation of the obligation to secure or notify is considered an unfair or deceptive trade practice that may be investigated and pursued by the FTC.  Companies that violate the law could be fined up to $1,000,000 for violations arising out of the same related act or omission ($500,000 maximum for failing to secure the personal information and $500,000 maximum for failing to notify about the breach of the personal information).

The legislation defines personal information as Social Security numbers, driver’s license numbers, passport numbers, government identification numbers, and financial account numbers or credit/debit card numbers with their required PINs.  The bill includes a safe harbor for personal information that is encrypted, redacted, or otherwise secured in a way that renders it unusable.

Here are some other important provisions of the legislation:

  • There is no guidance as to what “reasonable measures” means under the obligation to secure personal information, which is problematic (although not very different from state data breach notification laws) because it provides no certainty as to when a company may face liability for failing to adopt certain security safeguards.
  • With respect to the duty to notify, the bill explicitly allows for a reasonable period of time after a breach for the breached entity to determine the scope of the breach and to identify individuals affected by the breach.
  • The legislation would preempt state data breach notification laws, but compliance with other federal laws that require breach notification (e.g., HIPAA/HITECH) is deemed to be compliance with this law.
  • The bill requires that breached entities notify the Secret Service or the FBI if a breach affects more than 10,000 individuals.
  • The bill also allows for a delay of notification if such notification would threaten national or homeland security, or if law enforcement determines that notification would interfere with a civil or criminal investigation.
  • There is no private cause of action for violating the legislation.  The bill is silent as to whether private causes of action based on common law or other statutory claims (e.g., negligence, state unfair trade practices claims, etc.) may be pursued, to the extent such causes of action are recognized.

There remains, however, a big question as to whether this legislation will ultimately become law.  Given the political climate in D.C. and the lack of success of similar federal legislation in the past, the outlook is bleak.  The ambiguity of the required proactive security measures and the lack of clarity as to whether private causes of action may be pursued for non-statutory violations also raise political problems for the legislation on both sides of the aisle.  Nevertheless, there is a growing climate of concern regarding privacy and security issues that may result in this legislation being included within a larger package of legislation on cybersecurity and data privacy.  It will be important to keep an eye on the status of this bill moving forward.

 

DISCLAIMER:  The opinions expressed here represent those of Al Saikali and not those of Shook, Hardy & Bacon, LLP or its clients.  Similarly, the opinions expressed by those providing comments are theirs alone, and do not reflect the opinions of Al Saikali, Shook, Hardy & Bacon, or its clients.  All of the data and information provided on this site is for informational purposes only.  It is not legal advice nor should it be relied on as legal advice.

The SEC’s Guidance on Cyber Risks and Incidents: A Deeper Dive

Posted in Data Security, SEC

In October 2011, the U.S. Securities and Exchange Commission’s Division of Corporation Finance issued “CF Disclosure Guidance: Topic No. 2,” guidance intended to provide some clarity as to the material cyber risks that a publicly traded company should disclose.  I previously wrote about the guidance.  This blog post is the first of a three-part series taking a deeper look at the guidance:  what does the guidance mean and require (Part I), how is the SEC using and enforcing the guidance (Part II), and how are companies complying with the guidance (Part III)?

What is a disclosure guidance?

A disclosure guidance provides the views of a specific division of the SEC (in this case, the Division of Corporation Finance) regarding disclosure obligations (in this case, disclosure obligations relating to cybersecurity risks and cyber incidents).  It is not a rule, regulation, or statement of the Securities and Exchange Commission.  The SEC has neither approved nor disapproved its content.  In fact, the guidance did very little to change the legal landscape because companies are already required to disclose material risks and incidents, so to the extent a cyber risk/incident is material, it must be disclosed regardless of the subject disclosure guidance.  Nevertheless, at a minimum, the guidance has brought attention to the need for a company to disclose risks/incidents related to cybersecurity, and it attempts to clarify the types of cyber risks/incidents that should be disclosed.

What is the likelihood that the SEC will more clearly mandate disclosure of cyber incidents and risks?

Based on some recent events, there is a reasonable likelihood that we will see a Commission-level statement relatively soon, clearly and explicitly requiring publicly traded companies to disclose material cyber incidents and risks in their public filings.

On April 9, 2013, Senator Jay Rockefeller sent a letter to the recently confirmed SEC Chairwoman, Mary Jo White, in which he strongly urged the SEC to issue the guidance at the Commission level.  Senator Rockefeller cited investors’ needs to know whether companies are effectively addressing their cybersecurity risks, and a need for the private sector to make significant investments in cybersecurity.

Chairwoman White responded positively to Senator Rockefeller’s letter.  She reiterated the existing disclosure requirements to disclose risks and events that a reasonable investor would consider material.  She also informed Senator Rockefeller that she has asked the SEC staff to provide her with a briefing on current disclosure practices relating to cyber incidents/risks and overall compliance with the guidance, as well as recommendations for further action in this area.  In short, I would not be surprised to see further instruction from the SEC on the cyber incident/risk disclosure issue this year.

What is a cybersecurity risk or cyber incident under the guidance?

According to the guidance, a cyber incident can result from a deliberate attack or unintentional event and may include gaining unauthorized access to digital systems for purposes of misappropriating assets or sensitive information, corrupting data, or causing operational disruption.  Not all cyber incidents require gaining unauthorized access; a denial-of-service attack is one example.  These incidents can be carried out by third parties or insiders and can involve sophisticated electronic circumvention of network security or social engineering to obtain information necessary to gain access.  The purpose may be to steal financial assets, intellectual property, or sensitive information belonging to companies, their customers, or their business partners.

Which cyber risks and incidents should be disclosed?

Publicly traded companies must disclose timely, comprehensive, and accurate information about risks and events that a reasonable investor would consider important to an investment decision. According to the guidance, material information about cybersecurity risks and cyber incidents must be disclosed when necessary to make other required disclosures not misleading.

What factors should a company consider in determining whether a risk or incident should be disclosed?

According to the guidance, companies should consider a number of factors in determining whether to disclose a cybersecurity risk, including:  (1) prior cyber incidents and the severity and frequency of those incidents; (2) the probability of cyber incidents occurring and the quantitative and qualitative magnitude of those risks (including the potential costs and other consequences resulting from misappropriation of assets or sensitive information, corruption of data or operational disruption); and (3) the adequacy of preventative actions taken to reduce cybersecurity risks in the context of the industry in which they operate and risks to that security, including threatened attacks of which they were aware.

What should a company disclose about a cyber risk or incident after it has determined that it wishes to make a disclosure?

Once a company has determined that it will disclose a risk or incident, it must adequately describe the nature of the material risks and specify how each risk affects the company.  Generic risks need not be disclosed.  Examples of appropriate disclosures include:  (1) discussion of aspects of the business or operations that give rise to material cybersecurity risks and the potential costs and consequences; (2) descriptions of outsourced functions that have material cybersecurity risks and how the company addresses those risks; (3) descriptions of cyber incidents experienced by the company that are individually, or in the aggregate, material, including a description of the costs and other consequences; (4) risks related to cyber incidents that remain undetected for an extended period; and (5) description of relevant insurance coverage.  The disclosure should be tailored to the company’s particular circumstances and avoid generic “boilerplate” disclosure.  That said, companies are not required to disclose information that would compromise the company’s cybersecurity.  Instead, companies should provide sufficient disclosure to allow an investor to appreciate the nature of the risks faced by the company in a manner that would not compromise the company’s cybersecurity.

Where in the public filing should the disclosure(s) be made?

There are a number of places in a company’s public filing where a disclosure of a cyber incident or risk may be made:

(1) Management’s Discussion and Analysis of Financial Condition – if the costs or other consequences associated with one or more known incidents, or the risk of potential incidents, represent a material event, trend, or uncertainty that is reasonably likely to affect the company’s results of operations, liquidity, or financial condition, or would cause reported financial information not to be indicative of future operating results or financial condition.  An example provided in the guidance is a cyber attack that results in the theft of material intellectual property; there, the company should describe the property that was stolen, the effect of the attack on its results of operations, liquidity, and financial condition, and whether the attack would cause reported financial information not to be indicative of future operating results or financial condition.  If it is “reasonably likely” that the attack will lead to reduced revenues, an increase in cybersecurity protection costs, or litigation costs, then those outcomes, including their amount and duration, should be discussed.

(2) Description of Business – if a cyber incident affects a company’s products, services, relationships with customers/suppliers, or competitive conditions, then the company should disclose these effects in the “Description of Business” section of the public filing.  An example provided in the Guidance is where a cyber incident materially impairs the future viability of a new product in development; such an incident and the potential impact should be discussed.

(3) Legal Proceedings – if a legal proceeding to which a company “or any of its subsidiaries” is a party involves a cyber incident, information may need to be disclosed in the “Legal Proceedings” section of the public filing.  The example provided in the Guidance is where customer information is stolen, resulting in material litigation; there, the name of the court, the date the lawsuit was filed, the parties, a description of the factual basis, and the relief sought should be disclosed.

(4) Financial Statement Disclosures – companies should consider whether cyber risks and incidents have an impact on the company’s financial statements and, if so, include that impact in the financial statements.
