Data Security Law Journal

Focusing on legal trends in data security, cloud computing, data privacy, and anything E

Data Breach Lawsuits Settling in the Southern District of Florida

Posted in Data Breach, Data Security, Lawsuits

Plaintiffs in data breach lawsuits around the country have had a difficult time surviving motions to dismiss and motions for summary judgment.  A number of courts have rejected these lawsuits because the plaintiffs failed to allege or demonstrate cognizable injuries, standing, causation, or the elements needed to withstand an economic loss rule defense.  It is dangerous, however, to paint with an overly broad brush.  Two federal class action data breach lawsuits have now resulted in proposed settlements.  Both of those lawsuits are pending in the Southern District of Florida, raising the question of whether the plaintiffs’ bar will perceive the Southern District of Florida as a plaintiff-friendly jurisdiction for data breach lawsuits, resulting in even more lawsuits being filed there.

In April 2013, the Southern District of Florida preliminarily approved a proposed settlement in Burrows v. Winn Dixie, No. 1:12-CV-22800-UU (S.D. Fla.), a case in which a third-party service provider’s employee allegedly misused his access to personal information of thousands of individuals.  The plaintiffs filed a class action lawsuit and survived a motion to dismiss that argued, among other things, that the plaintiffs lacked a cognizable injury.  I previously wrote about the Burrows litigation here, if you’d like to read more about the underlying arguments.  The settlement fund, attorney’s fees, costs, and an incentive award total approximately $430,000.  A fairness hearing is scheduled next month.

Last week, a joint notice of settlement was filed in a different class action data breach lawsuit that is also pending in the Southern District of Florida.  That case, Resnick/Curry v. AvMed, Inc., No. 1:10-cv-24513-JLK (S.D. Fla.), arose from the theft of two unencrypted laptops containing the personal information of as many as 1.2 million individuals.  The District Court dismissed the lawsuit in July 2011, finding that the plaintiffs had failed to show any cognizable injury, but the Eleventh Circuit reversed the trial court’s decision.  The joint notice of settlement does not provide the terms of the settlement, though we can expect the court to hold a fairness hearing at which the terms will be considered and may become public.

As stated above, these settlements are significant because they are two of the only publicly known settlements in class action lawsuits arising from data breaches, and they both occurred in the same court – the Southern District of Florida.  Given how few data breach lawsuits have proceeded to a public settlement, it will be interesting to see whether more of these lawsuits will be filed in the Southern District of Florida as a result of these recent developments.

 

DISCLAIMER:  The opinions expressed here represent those of Al Saikali and not those of Shook, Hardy & Bacon, LLP or its clients.  Similarly, the opinions expressed by those providing comments are theirs alone, and do not reflect the opinions of Al Saikali, Shook, Hardy & Bacon, or its clients.  All of the data and information provided on this site is for informational purposes only.  It is not legal advice nor should it be relied on as legal advice.

Healthcare Organizations Take It On The Chin

Posted in Data Breach, Data Privacy, Data Security, FTC, Health Care Industry, Lawsuits

If you have noticed an increasing number of high-profile privacy and security problems for healthcare organizations these last few weeks, you’re not alone.  The issues have ranged from employee misuse of protected health information to web-based breaches, photocopier breaches, and thefts of computers that compromised millions of records containing unsecured protected health information (PHI).  These issues remind us that healthcare companies face significant risks in collecting, using, storing, and disposing of protected health information.

Pharmacy Hit With $1.4 Million Jury Verdict For Unlawful Disclosure of PHI

An Indiana jury recently awarded more than $1.4 million to an individual whose protected health information was allegedly disclosed unlawfully by a pharmacy.  The pharmacist, who was married to the plaintiff’s ex-boyfriend, allegedly looked up the plaintiff’s prescription history and shared it with her husband.  The lawsuit alleged theories of negligent training and negligent supervision.  The pharmacy intends to appeal the judgment.

Health Insurer Fined $1.7 Million For Web-Based Database Breach

Meanwhile, the Department of Health and Human Services (HHS) recently fined a health insurer $1.7 million for engaging in conduct inconsistent with HIPAA’s privacy and security rules following a breach of protected health information belonging to more than 612,000 of its customers. The breach arose from an unsecured web-based database that allowed improper access to protected health information of its customers.

HHS’s investigation determined that the insurer:

(1) did not implement policies and procedures for authorizing access to electronic protected health information (ePHI) maintained in its web-based application database;

(2) did not perform an adequate technical evaluation in response to a software upgrade (an operational change affecting the security of ePHI maintained in its web-based application database) to establish the extent to which the configuration of the software providing authentication safeguards for its web-based application met the requirements of the Security Rule;

(3) did not adequately implement technology to verify that a person or entity seeking access to ePHI maintained in its web-based application database is the one claimed; and,

(4) impermissibly disclosed the ePHI, including the names, dates of birth, addresses, Social Security Numbers, telephone numbers and health information, of approximately 612,000 individuals whose ePHI was maintained in the web-based application database.

Health Plan Fined $1.2 Million For Photocopier Breach

In another example of privacy and security issues causing legal problems for a healthcare organization, HHS settled with a health plan for $1.2 million in a photocopier breach case.  The health plan was informed by CBS Evening News that CBS had purchased a photocopier previously leased by the health plan.  (Of all the companies to get the photocopier after the health plan, it had to be CBS News).  The copier’s hard drive contained protected health information belonging to approximately 345,000 individuals.  HHS fined the health plan for impermissibly disclosing the PHI of those individuals when it returned the photocopiers to the leasing agents without erasing the data contained on the copier hard drives.  HHS was also concerned that the health plan failed to include the existence of PHI on the photocopier hard drives as part of its analysis of risks and vulnerabilities required by HIPAA’s Security Rule, and it failed to implement policies and procedures when returning the photocopiers to its leasing agents.

I blogged about photocopier data security issues last year, after the Federal Trade Commission issued a guide for businesses on the topic of photocopier data security.  Another resource I recommend to my clients on the topic of media sanitization is a document prepared by the National Institute of Standards and Technology, issued last fall.

Medical Group Breach May Affect Up To Four Million Patients

Lastly, a medical group recently suffered what is believed to be the second-largest loss of unsecured protected health information reported to HHS since mandatory reporting began in September 2009.  The cause?  Four unencrypted desktop computers were stolen from the company’s administrative office.  The computers contained protected health information of more than 4 million patients.  As a result, the medical group is mapping all of its computer and software systems to identify where patient information is stored and ensuring it is secured.  The call center set up to handle inquiries following the notification of the patients is receiving approximately 2,000 calls each day.

The Takeaways 

So what are five lessons companies should take away from these developments?

  • Having policies that govern the proper use and disclosure of PHI is a first step, but it is important that companies audit whether their employees are complying with these policies and discipline employees who don’t comply, so that everyone in the company understands that non-compliance will not be tolerated.
  • As technology is upgraded or changed, it is important that companies continue to evaluate any potential new security risks associated with these changes.  An assumption should not be made that simply because the software is an “upgrade” the security risks remain the same.
  • There are hidden risks, such as photocopier hard drives.  Stay apprised of these potential risks, identify and assess them in your risk assessment (required by HIPAA), then implement administrative and technical safeguards to minimize these risks.  With respect to photocopiers, maybe this means ensuring that the hard drives are wiped clean or written over before they are returned to the leasing agent.
  • Encrypt sensitive information at rest and in motion where feasible, and to the extent it isn’t feasible, build in other technical safeguards to protect the information.  (A minimal illustration of encryption at rest follows this list.)
  • Train, train, train – having a fully informed legal department and management doesn’t do much good if employees don’t understand these risks and aren’t trained to avoid them.  Do your employees know how seemingly simple and uneventful conduct, like photocopying a medical record, leaving a laptop unattended, clicking on a link in an email, or doing a favor for a friend who needs PHI about a loved one, can lead to very significant unintended consequences for your company (and, as a result, for them)?  Train them in a way that brings these risks to life, update the training and require it annually, and audit whether your employees are completing the training.
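
To make the encryption takeaway a bit more concrete, here is a minimal sketch of what encrypting a record before storing it at rest might look like.  It is purely illustrative: it assumes the third-party Python “cryptography” package, the function names are my own, and nothing here should be read as a statement of what HIPAA or any regulator requires.  A real deployment would also need key management, access controls, and encryption in transit (e.g., TLS).

    # Illustrative only: encrypt a record before storing it at rest.
    # Assumes the third-party "cryptography" package (pip install cryptography).
    from cryptography.fernet import Fernet

    def encrypt_record(plaintext: bytes, key: bytes) -> bytes:
        """Return an authenticated ciphertext suitable for storage at rest."""
        return Fernet(key).encrypt(plaintext)

    def decrypt_record(ciphertext: bytes, key: bytes) -> bytes:
        """Recover the original record; raises an exception if the data was altered."""
        return Fernet(key).decrypt(ciphertext)

    if __name__ == "__main__":
        key = Fernet.generate_key()            # keep the key separate from the data (e.g., in a key vault)
        record = b"example PHI record"         # made-up data for illustration
        stored = encrypt_record(record, key)   # what lands on disk is unreadable without the key
        assert decrypt_record(stored, key) == record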

 


 

Law Firms: How Are You Securing Your Clients’ Information?

Posted in All Things E, Data Privacy, Data Security, Vendor Management

What are law firms doing to protect their clients’ sensitive information?  What are clients doing to determine whether their outside counsel are using reasonable security measures to protect their sensitive information (confidential communication, customer data, financial information, protected health information, intellectual property, etc.)?

According to the data forensic firm Mandiant, at least 80 major law firms were hacked in 2011 by attackers who were seeking secret deal information.  The threats to law firms are real and are publicly documented.  In 2011, during the conflict in Libya, law firms that represented oil and gas companies received PDF files purporting to provide information about the effect of the war on the price of oil.  These documents contained malware that infected the networks of the firms that received them.  Similarly, law firms can be a target of political “hacktivism”, as was the case of a law firm that was attacked by Anonymous after representing a soldier in a controversial case, resulting in the public release of 2.6 gigabytes of email belonging to the firm.  And, of course, law firms are just as susceptible to the same risks as other companies when it comes to employee negligence (e.g., lost mobile devices containing sensitive information), inside jobs (misusing access to sensitive information for personal gain), and theft of data.

With these threats in mind, it is useful for lawyers to remember that they have a number of ethical responsibilities to secure their clients’ information, in addition to important business interests.

The Ethical Obligations

Duty to be competent – lawyers cannot stick their heads in the sand when it comes to technology.  They have an ethical obligation to understand the technology they use to secure client information, or they must retain/consult with someone who can make them competent.  As the Arizona Bar stated in Opinion 09-04 (Dec. 2009), “[i]t is important that lawyers recognize their own competence limitations regarding computer security measures and take the necessary time and energy to become competent or alternatively consult available experts in the field.”

Duty to secure – lawyers have an obligation under Model Rule of Professional Conduct 1.6(c) to “make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.”  Because the model rule was only recently adopted by the ABA, there is no easy definition of “reasonable efforts”, but Comment 18 to Rule 1.6(c) requires consideration of several factors:  (1) the sensitivity of the information; (2) the likelihood of disclosure if additional safeguards are not employed; (3) the cost of employing additional safeguards; (4) the difficulty of implementing the safeguards; and (5) the extent to which the safeguards adversely affect the lawyer’s ability to represent clients.  The Arizona Bar’s 09-04 opinion again provides some helpful details:  “In satisfying the duty to take reasonable security precautions, lawyers should consider firewalls, password protection schemes, encryption, anti-virus measures, etc.”  The Arizona Bar rightfully recognized, however, that the duty “does not require a guarantee that the system will be invulnerable to unauthorized access.”  Also, what are considered “reasonable efforts today” may change, as an opinion of the New Jersey Advisory Committee on Professional Ethics pointed out when it expressed reluctance “to render a specific interpretation of RPC 1.6 or impose a requirement that is tied to a specific understanding of technology that may very well be obsolete tomorrow.”

Duty to update – the duty to secure client information is not static; it evolves and changes as technology changes. Arizona Bar Opinion 09-04 is again helpful:  “technology advances may make certain protective measures obsolete over time . . . [Therefore,] [a]s technology advances occur, lawyers should periodically review security measures to ensure that they still reasonably protect the security and confidentiality of the clients’ documents and information.”

Duty to transmit securely – lawyers have an obligation to securely transmit information.  For example, the ABA requires that “[a] lawyer sending or receiving substantive communications with a client via e-mail or other electronic means ordinarily must warn the client about the risk of sending or receiving electronic communications using a computer or other device, or e-mail account, where there is a significant risk that a third party may gain access.”  One example is where a lawyer represents the employee of a company and the employee uses her employer’s email account to communicate with her attorney – in that instance, the attorney should advise his client that there is a risk the employer could access the employee’s email communications.

Duty to outsource securely – Model Rule of Professional Conduct 5.2 states that “a lawyer retaining an outside service provider is required to make reasonable efforts to ensure that the service provider will not make unauthorized disclosure of client information.”  ABA Formal Opinion 95-398 interprets this rule as requiring that a lawyer ensure that the service provider has in place reasonable procedures to protect the confidentiality of information to which it gains access.  The ABA recommends that lawyers obtain from the service provider a written statement of the service provider’s assurance of confidentiality.  In an upcoming blog post I will write about a Florida Bar Proposed Advisory Opinion that provides guidance on how lawyers should be engaging cloud computing service providers, which is an emerging trend in the practice of law.

Duty to dispose securely – lawyers also have an obligation to dispose of client information securely.  This is not so much an ethical duty as a legal obligation.  Many states have data disposal laws that govern how companies (law firms are no exception) should dispose of sensitive information like financial information, medical information, or other personally identifiable information.  Examples of secure disposal include shredding sensitive documents and ensuring that leased electronic equipment containing sensitive information on hard drives is disposed of securely.  In one instance, the Federal Trade Commission fined three financial services companies that were accused of discarding sensitive financial information of their customers in dumpsters near their facilities without first shredding that information.  An example of an often-overlooked machine that usually stores sensitive information is the copy machine, many of which have hard drives that store electronic copies of the documents they copy.  Fortunately, the FTC has provided a useful guide to minimize some of these risks.

The Legal Obligations

The ethical obligations discussed above are separate from any legal obligations that govern certain types of information under HIPAA/HITECH, Gramm-Leach-Bliley, the Payment Card Industry Data Security Standard, state document disposal laws, state data breach notification laws, and international data protection laws.  Depending on the type of information a law firm collects, those laws may impose additional proactive requirements to secure data, train employees, and prepare written policies.

The Business Interests

Finally, even if the ethical and legal obligations to secure sensitive information do not provide sufficient incentives for law firms to evaluate their security measures with respect to client information, there are business interests that should compel law firms to do so.  Companies are recognizing the risks presented by sharing sensitive information with service providers like law firms and are, at a minimum, inquiring about the security safeguards the providers have adopted and, in some cases, are requiring a certain level of security and auditing that level of security.  One such example is Bank of America.  According to a recent report, following pressure from regulators, Bank of America now requires its outside counsel to adopt certain security requirements and it is auditing the firms’ compliance with those requirements.

Specifically, Bank of America requires its outside counsel to have a written information security plan and to follow that plan.  Firms must also encrypt sensitive information that Bank of America shares with them.  Bank of America also wants its law firms to safeguard information on their employees’ mobile devices.  Most importantly, law firms must train their employees on their security policies and procedures.  Finally, Bank of America is auditing its law firms to ensure they are complying with these requirements.

So with these threats, ethical responsibilities, and business interests in mind, it is important that law firms, like all other companies that handle sensitive information, evaluate their administrative, technical, and physical safeguards to minimize the risks associated with the storage, use, and disposal of their clients’ sensitive information.

 


Texas’s Data Privacy Training Laws Change (Again)

Posted in Data Privacy, Data Security, Health Care Industry

In August of last year, I wrote about HB 300, a Texas law that, beginning September 1, 2012, created employee training and other requirements for any company doing business in Texas that collects, uses, stores, transmits, or comes into possession of protected health information (PHI).  The law’s training provisions required covered entities to train their employees every two years regarding federal and state law related to the protection of PHI, and obtain written acknowledgement of the training.  (The training was required for new employees within 60 days of their hiring).  Companies were required to train their employees in a manner specific to the way in which the individual employee(s) handle PHI.

Recently, however, the Texas legislature passed two bills that amend the requirements of HB 300 in a few significant ways.  Under SB 1609, the role-specific training requirement has changed.  Now, companies may simply train employees about PHI “as necessary and appropriate for the employees to carry out the employees’ duties for the covered entity.”

SB 1609 also changed the training trigger: instead of requiring training once every two years, training is now required whenever the company is “affected by a material change in state or federal law concerning protected health information,” and in such cases the training must take place “within a reasonable period, but not later than the first anniversary of the date the material change in law takes effect.”  This change could mean more or fewer training sessions, depending on the nature, size, and location of the covered entity’s business.

SB 1610, which relates to breach notification requirements, is more puzzling.  Until now, Texas law required companies doing business in Texas that suffered data breaches affecting information of individuals residing in other states that did not have data breach notification laws (e.g., Alabama and Kentucky) to notify the individuals in those states of the breach.  SB 1610 removes that requirement and now provides that:  “If the individual whose sensitive personal information was or is reasonably believed to have been acquired by an unauthorized person is a resident of a state that requires a [breached entity] to provide notice of a breach of system security, the notice of the breach of system security required under Subsection (b) [which sets forth Texas’s data breach notification requirements] may be provided under that state’s law or under Subsection (b).”

The natural interpretation of this provision is that a Texas company that suffers a breach of customer information where, for example, some of the customers reside in California, Massachusetts, or Connecticut, is not required to comply with those states’ data breach notification laws if the company complies with the standards set forth in Texas’s data breach notification law.  It will be interesting to see whether Texas receives any pushback from other state Attorneys General, who enforce their own states’ data breach notification laws and may not be pleased with a Texas law instructing companies doing business in Texas that other states’ breach notification requirements can be ignored so long as the company meets Texas’s requirements.  Nevertheless, the practical effect of this law is not clear because most companies will want to avoid the risk associated with ignoring another state’s data breach notification law.

In short, the legislative changes are a good reminder that companies doing business in Texas that collect, use, store, transmit, or otherwise handle PHI must determine whether they are complying with HB 300 and the more recent legislative acts that were signed into law June 14, 2013 and became effective immediately.

 


U.S. Senate Considers Federal Data Security Legislation

Posted in Data Breach, Data Privacy, Data Security

Legislation was introduced in the U.S. Senate late last week that, if passed, would create proactive and reactive requirements for companies that maintain personal information about U.S. citizens and residents.  The legislation, titled the “Data Security and Breach Notification Act of 2013” (S. 1193), creates two overarching obligations:  to secure personal information and to notify affected individuals if the information is breached.  The bill requires companies to take reasonable measures to protect and secure data in electronic form containing personal information.  If that information is breached, companies are required to notify affected individuals “as expeditiously as practicable and without unreasonable delay” if the company reasonably believes the breach caused or will cause identity theft or other actual financial harm.

Violations of the obligations to secure or notify are considered unfair or deceptive trade practices that may be investigated and pursued by the FTC.  Companies that violate the law could be fined up to $1,000,000 for violations arising out of the same related act or omission ($500,000 maximum for failing to secure the personal information and $500,000 maximum for failing to notify about the breach of the personal information).

The legislation defines personal information to include Social Security numbers, driver’s license numbers, passport numbers, government identification numbers, and financial account numbers or credit/debit card numbers combined with the required PIN.  The bill includes a safe harbor for personal information that is encrypted, redacted, or otherwise secured in a way that renders it unusable.

Here are some other important provisions of the legislation:

  • There is no guidance as to what “reasonable measures” means under the obligation to secure personal information, which is problematic (although not very different from state data breach notification laws) because it provides no certainty as to when a company may face liability for failing to adopt certain security safeguards.
  • With respect to the duty to notify, the bill explicitly allows for a reasonable period of time after a breach for the breached entity to determine the scope of the breach and to identify individuals affected by the breach.
  • The legislation would preempt state data breach notification laws, but compliance with other federal laws that require breach notification (e.g., HIPAA/HITECH) is deemed to be compliance with this law.
  • The bill requires that breached entities notify the Secret Service or the FBI if a breach affects more than 10,000 individuals.
  • The bill also allows for a delay of notification if such notification would threaten national or homeland security, or if law enforcement determines that notification would interfere with a civil or criminal investigation.
  • There is no private cause of action for violating the legislation.  The bill is silent as to whether private causes of action based on common law or other statutory claims (e.g., negligence, state unfair trade practices claims, etc.) may be pursued, to the extent such causes of action are recognized.

There remains, however, a big question as to whether this legislation will ultimately become law.  Given the political climate in D.C. and the lack of success of similar federal legislation in the past, the outlook is bleak.  The ambiguity of the required proactive security measures and the lack of clarity as to whether private causes of action may be pursued for non-statutory violations also raise political problems for the legislation on both sides of the aisle.  Nevertheless, there is a growing climate of concern regarding privacy and security issues that may result in this legislation being included within a larger package of legislation on cybersecurity and data privacy.  It will be important to keep an eye on the status of this bill moving forward.

 


The SEC’s Guidance on Cyber Risks and Incidents: A Deeper Dive

Posted in Data Security, SEC

In October 2011, the U.S. Securities and Exchange Commission’s Division of Corporation Finance issued “CF Disclosure Guidance: Topic No. 2”, guidance intended to provide some clarity as to the material cyber risks that a publicly traded company should disclose.  I previously wrote about the guidance.  This blog post is the first of a three-part series taking a deeper look at the guidance:  what does the guidance mean and require (Part I), how is the SEC using and enforcing the guidance (Part II), and how are companies complying with the guidance (Part III)?

What is a disclosure guidance?

A disclosure guidance provides the views of a specific division of the SEC (in this case, the Division of Corporation Finance) regarding disclosure obligations (in this case, disclosure obligations relating to cybersecurity risks and cyber incidents).  It is not a rule, regulation, or statement of the Securities and Exchange Commission.  The SEC has neither approved nor disapproved its content.  In fact, the guidance did very little to change the legal landscape because companies are already required to disclose material risks and incidents, so to the extent a cyber risk or incident is material, it must be disclosed regardless of the guidance.  Nevertheless, at a minimum, the guidance has brought attention to the need for companies to disclose risks and incidents related to cybersecurity, and it attempts to clarify the types of cyber risks and incidents that should be disclosed.

What is the likelihood that the SEC will more clearly mandate disclosure of cyber incidents and risks?

Based on some recent events, there is a reasonable likelihood that we will see a Commission-level statement relatively soon, clearly and explicitly requiring publicly traded companies to disclose material cyber incidents and risks in their public filings.

On April 9, 2013, Senator Jay Rockefeller sent a letter to the recently confirmed SEC Chairwoman, Mary Jo White, in which he strongly urged the SEC to issue the guidance at the Commission level.  Senator Rockefeller cited investors’ needs to know whether companies are effectively addressing their cybersecurity risks, and a need for the private sector to make significant investments in cybersecurity.

Chairwoman White responded positively to Senator Rockefeller’s letter.  She reiterated the existing disclosure requirements to disclose risks and events that a reasonable investor would consider material.  She also informed Senator Rockefeller that she has asked the SEC staff to provide her with a briefing of current disclosure practices relating to cyber incidents/risks and overall compliance with the guidance, as well as recommendations for further action in this area.  In short, I would not be surprised to see further instruction from the SEC on the cyber incident/risk disclosure issue this year.

What is a cybersecurity risk or cyber incident under the guidance?

According to the guidance, a cyber incident can result from a deliberate attack or unintentional event and may include gaining unauthorized access to digital systems for purposes of misappropriating assets or sensitive information, corrupting data, or causing operational disruption.  Not all cyber incidents require gaining unauthorized access; a denial-of-service attack is such an example.  These incidents can be carried out by third parties or insiders and can involve sophisticated electronic circumvention of network security or social engineering to get information necessary to gain access.  The purpose may be to steal financial assets, intellectual property, or sensitive information belonging to companies, their customers, or their business partners.

Which cyber risks and incidents should be disclosed?

Publicly traded companies must disclose timely, comprehensive, and accurate information about risks and events that a reasonable investor would consider important to an investment decision. According to the guidance, material information about cybersecurity risks and cyber incidents must be disclosed when necessary to make other required disclosures not misleading.

What factors should a company consider in determining whether a risk or incident should be disclosed?

According to the guidance, companies should consider a number of factors in determining whether to disclose a cybersecurity risk, including:  (1) prior cyber incidents and the severity and frequency of those incidents; (2) the probability of cyber incidents occurring and the quantitative and qualitative magnitude of those risks (including the potential costs and other consequences resulting from misappropriation of assets or sensitive information, corruption of data or operational disruption); and (3) the adequacy of preventative actions taken to reduce cybersecurity risks in the context of the industry in which they operate and risks to that security, including threatened attacks of which they were aware.

What should a company disclose about a cyber risk or incident after it has determined that it wishes to make a disclosure?

Once a company has determined that it will disclose a risk or incident, it must adequately describe the nature of the material risks and specify how each risk affects the company.  Generic risks need not be disclosed.  Examples of appropriate disclosures include:  (1) discussion of aspects of the business or operations that give rise to material cybersecurity risks and the potential costs and consequences; (2) descriptions of outsourced functions that have material cybersecurity risks and how the company addresses those risks; (3) descriptions of cyber incidents experienced by the company that are individually, or in the aggregate, material, including a description of the costs and other consequences; (4) risks related to cyber incidents that remain undetected for an extended period; and (5) description of relevant insurance coverage.  The disclosure should be tailored to the company’s particular circumstances and avoid generic “boilerplate” disclosure.  That said, companies are not required to disclose information that would compromise the company’s cybersecurity.  Instead, companies should provide sufficient disclosure to allow an investor to appreciate the nature of the risks faced by the company in a manner that would not compromise the company’s cybersecurity.

Where in the public filing should the disclosure(s) be made?

There are a number of places in a company’s public filing where a disclosure of a cyber incident or risk may be made:

(1) Management’s Discussion and Analysis of Financial Condition – if the costs or other consequences associated with one or more known incidents, or the risk of potential incidents, represent a material event, trend, or uncertainty that is reasonably likely to affect the company’s results of operations, liquidity, or financial condition, or would cause reported financial information not to be indicative of future operating results or financial condition.  An example provided in the guidance is a cyber attack that results in the theft of material intellectual property; there, the company should describe the property that was stolen and the effect of the attack on its results of operations, liquidity, and financial condition, and whether the attack would cause reported financial information not to be indicative of future operating results or financial condition.  If it is “reasonably likely” that the attack will lead to reduced revenues, an increase in cybersecurity protection costs, or litigation costs, then those outcomes, their amount, and their duration should be discussed.

(2) Description of Business – if a cyber incident affects a company’s products, services, relationships with customers/suppliers, or competitive conditions, then the company should disclose these effects in the “Description of Business” section of the public filing.  An example provided in the Guidance is where a cyber incident materially impairs the future viability of a new product in development; such an incident and the potential impact should be discussed.

(3) Legal Proceedings – if a legal proceeding to which a company “or any of its subsidiaries” is a party involves a cyber incident, information may need to be disclosed in the “Legal Proceedings” section of the public filing.  The example provided in the Guidance is where customer information is stolen, resulting in material litigation; there, the name of the court, the date the lawsuit was filed, the parties, a description of the factual basis, and the relief sought should be disclosed.

(4) Financial Statement Disclosures – companies should consider whether cyber risks and incidents have an impact on a company’s financial statements, and, if so, include them.

 


Data Security Remains Top Concern in Corporate Boardrooms

Posted in Data Security, Surveys and studies

Last August, I wrote about a survey by Corporate Board Member and FTI Consulting, Inc., showing that data security was the top legal risk for corporate directors and general counsel.

That same survey was taken again in 2013, and the results were released last week in a report entitled “Law in the Boardroom.” The gist of the report is that “the newest area of major concern continues a trend noted in last year’s study:  data security and IT risk is one of the most significant issues for both directors and general counsel.”

Here are some other significant findings in the survey:

  • More than one-quarter of director and general counsel respondents earmarked cyber risk as an area that will require their attention in 2013.
  • The average annualized cost of cybercrime jumped 6% to $8.9 million in 2012.
  • Interestingly, general counsel do not seem to think directors will be spending as much time on this topic as the legal department itself will.
  • Only one-third of general counsel felt “very confident” in their company’s ability to respond to a breach, and less than one-quarter of directors agreed.  Only 51% of GCs were at least somewhat confident in their company’s ability to handle a breach.

In short, a company’s preparation for and response to cyber threats remain top concerns for general counsel and directors alike.  Fortunately, more companies are taking proactive measures, like mapping or inventorying data to apply the most stringent security safeguards to the most sensitive information.  Other proactive measures companies should consider include reviewing and revising information security policies, evaluating how to more effectively incorporate privacy and security concerns into the corporate culture, and refreshing employees on the risks and best practices in collecting, storing, using, and disposing of sensitive consumer and proprietary information.

 


What Does A Cyber Attack Look Like?

Posted in All Things E, Data Breach, Data Security

The phrase “cyber attack” elicits thoughts of a compromised information system, a crashed computer network, or inappropriate access to sensitive electronic information.  It doesn’t usually conjure up images of machinery catching fire and smoke emerging from a factory.  Nevertheless, here is a video of an experimental cyber attack named Aurora, which was conducted on a generator in a manufacturing plant.

 

The experiment, which took place approximately five years ago, demonstrated potential vulnerabilities that could be used to attack much larger generators that produce the country’s electric power.  It is an interesting reminder of the impact that cyber attacks can have on critical infrastructure.

Data Breaches – Who is Causing Them, How, and What Can Companies Do About It?

Posted in Data Security, Surveys and studies

One of the leading annual studies analyzing the causes of data breaches was released earlier today.  The 2013 Verizon Data Breach Investigations Report analyzes what is causing data breaches, how the breaches are occurring, who are the hackers and the victims, and what trends can be gleaned from this information.  The report has become a “must read” for those in the data security industry and is often cited in board meetings, presentations, and by the media (the NY Times has already published a story about it). Those who do not have time to review the report may want to check out the Executive Summary.

The report studied 621 confirmed data breaches and more than 47,000 security incidents from all over the world.  Here is a summary of the most important findings:

  • Who is perpetrating the breaches?  A large majority (92%) of breaches are perpetrated by outsiders, and one out of every five is attributed to state-affiliated actors (95% of the state-affiliated espionage attacks relied on phishing in some way).  When breaches are perpetrated by insiders, more than 50% involve former employees taking advantage of old accounts or backdoors that weren’t disabled, and more than 70% are committed within 30 days of the employee’s resignation.
  • Who are the victims of breaches?  Larger organizations are increasingly becoming victims of breaches, and breaches are not isolated to any particular industry.  Manufacturing (33%), transportation (15%), professional services (24%), and a variety of other industries (28%) are the targets of espionage attacks.
  • What assets are perpetrators targeting?  The most vulnerable assets are ATMs (30%), desktop computers (25%), file servers (22%), and laptops (22%).
  • How are breaches happening?  With respect to cyber breaches, most (76%) occur as a result of exploited weak or stolen credentials.
  • Why are breaches happening?  The attackers are primarily seeking financial gain (75%), they are opportunistic (75%), and they prefer intrusions that are low in difficulty (78%).
  • How and when are breaches being discovered?  69% of breaches are discovered by an external party (9% are discovered by customers).  Perhaps scarier is the fact that 66% of breaches take months or years to discover, which is longer than it took to discover breaches in previous years.

The report provides some recommendations for what organizations can do to minimize these risks, some of which are commonly accepted best practices.  I noticed that these recommendations emphasize detection more than prevention.  The report is driven by the (realistic) assumption that organizations are already operating in a compromised environment.  While organizations should continue trying to prevent breaches from occurring in the first place, they cannot entirely eliminate them.  Therefore, organizations should focus more of their efforts and resources on the detection of intrusions and the protection of assets.

Here is a list of recommended practices from the report:

  • Eliminate unnecessary data; keep tabs on what’s left
  • Ensure essential controls are met; regularly check that they remain so
  • Collect, analyze, and share incident data to create a rich data source that can drive security program effectiveness
  • Collect, analyze, and share tactical threat intelligence, especially indicators of compromise, that can greatly aid defense and detection
  • Without deemphasizing prevention, focus on better and faster detection through a blend of people, processes, and technology
  • Regularly measure things like “number of compromised systems” and “mean time to detection” in networks, and use them to drive security practices (a toy illustration of the latter metric follows this list)
  • Evaluate the threat landscape to prioritize a treatment strategy.  Don’t buy into a one-size-fits-all approach to security
  • If you’re a target of espionage, don’t underestimate the tenacity of your adversary.  Nor should you underestimate the intelligence and tools at your disposal.
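
As a rough illustration of the “mean time to detection” metric mentioned in the list above, the short Python sketch below computes it from a handful of made-up incident records.  The record format and the numbers are entirely hypothetical; in practice these timestamps would come from an organization’s incident-tracking or SIEM system.

    # Toy example: compute mean time to detection (MTTD) from incident records.
    # The incident data below is invented for illustration only.
    from datetime import date
    from statistics import mean

    incidents = [
        {"compromised": date(2012, 11, 7), "detected": date(2013, 3, 1)},
        {"compromised": date(2013, 1, 3),  "detected": date(2013, 4, 20)},
        {"compromised": date(2013, 2, 11), "detected": date(2013, 2, 18)},
    ]

    days_to_detect = [(i["detected"] - i["compromised"]).days for i in incidents]
    print(f"Mean time to detection: {mean(days_to_detect):.1f} days")
    print(f"Slowest detection:      {max(days_to_detect)} days")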

These statistics, findings, and recommended practices should be considered by any organization that collects, uses, stores, and disposes of sensitive information.  The threats to that information are real, they affect companies in all industries, and they are difficult to prevent.  Companies should evaluate and be prepared to respond to these increasing risks by adopting proactive administrative, technical, and physical security safeguards.

 


Data Breach Lawsuits – Revisiting the Risks

Posted in Data Breach, Data Privacy, Data Security, Lawsuits

Until recently, individuals whose information was compromised as a result of a company suffering a data breach faced an uphill battle when suing the company in a class action lawsuit.  Far more often than not, courts dismissed the lawsuits or entered summary judgment in favor of defendants on grounds that the plaintiffs could not establish a cognizable injury, that the claims were preempted by breach notification statutes, or that there was no evidence the data breach (as opposed to some other act of identity theft) caused the plaintiffs’ damages.  I’m still convinced that the pro-defendant environment remains the norm.  Nevertheless, four recent cases are being used to support the argument that the tide may be turning in favor of plaintiffs.

Burrows v. Purchasing Power, 12-cv-22800-UU (S.D. Fla.)

The most recent example is a proposed settlement in a class action lawsuit against Winn-Dixie and one of its service providers arising from a breach of personally identifiable information of Winn-Dixie grocery store employees.  The employees’ personally identifiable information was allegedly compromised when an employee of a company that provided an employee benefit program to Winn-Dixie employees misused his access to the PII and filed fraudulent tax returns with it.

Approximately 43,500 employees filed a class action lawsuit in the Southern District of Florida against Winn-Dixie and its employee benefits service provider.  The lawsuit included counts of negligence, violation of the Florida Deceptive and Unfair Trade Practices Act, and invasion of privacy.  Plaintiffs alleged that the defendants failed to adequately protect and secure the plaintiffs’ personally identifiable information and failed to provide the plaintiffs with prompt and sufficient notice of the breach.

The defendants’ attempts to defeat the plaintiffs’ lawsuit on the pleadings failed.  Winn-Dixie was subsequently voluntarily dismissed from the lawsuit and the case proceeded against the service provider, which ultimately entered into a proposed settlement with the plaintiffs, agreeing to pay approximately $430,000 ($225,000 toward a settlement fund, $200,000 in attorney’s fees and costs, and a $3,500 incentive award to the named plaintiff).  The settlement states that it was entered into “for the purpose of avoiding the burden, expense, risk, and uncertainty of continuing to litigate the Action, . . . and without any admission of any liability or wrongdoing whatsoever.”

The settlement requires the service provider to maintain rigorous security safeguards to minimize the risk of a similar incident in the future.  The settlement fund will be divided into four groups:  (1) a tax refund fraud fund (class members who show they were victims of tax refund fraud can be compensated for a portion of lost interest); (2) a tax preparer loss fund (class members can be compensated for fees paid to tax preparers for notifying the IRS of a tax fraud claim or assisting in resolving issues arising from the tax refund fraud, not to exceed $100); (3) a credit card fraud fund (class members who show they were victims of identity theft other than tax refund fraud can be compensated up to $500 for fraudulent credit card charges that the credit card company did not waive); and (4) a credit monitoring fund (class members who receive compensation from any of the previous three groups may receive credit monitoring services for one year).  To “prove” they were victims of fraud, class members must prepare a statement under penalty of perjury regarding the facts and circumstances of their stolen identity.

The settlement was preliminarily approved by the court on April 12, 2013, and a fairness hearing is scheduled for October 4, 2013.  The amount of money being paid to the plaintiffs and their lawyers in this case should give corporate counsel monitoring these lawsuits pause.  The District Court’s order allowing the case to proceed beyond the pleadings phase will likely be used as an instruction manual for plaintiffs in future data breach cases.

Resnick v. AvMed, Inc., 1:10-cv-24513-JLK (S.D. Fla.)

I previously blogged about the Eleventh Circuit Court of Appeals’ opinion that allowed a data breach class action to proceed where the plaintiffs claimed they were victims of identity theft arising from the theft of a laptop computer containing their personal information.  I encourage corporate counsel to read that post to learn more about the factors the Eleventh Circuit looked to in allowing that case to proceed beyond the pleadings phase.  That lawsuit remains pending in the Southern District of Florida.

Harris v. comScore, Inc., No. 11-C-5807 (N.D. Ill. Apr. 2, 2013)

Another recent legal development considered by many to be favorable to plaintiffs was a decision by the U.S. District Court for the Northern District of Illinois certifying a class of possibly more than one million people who claim that the online data research company comScore, Inc. collected personal information from the individuals’ computers and sold it to media outlets without consent.  Although the lawsuit did not arise from a data breach, some of the arguments regarding lack of injury and whether class certification is appropriate are the same.  The plaintiffs allege violations of several federal statutes, including the Electronic Communications Privacy Act and the Stored Communications Act.  The court rejected comScore’s arguments challenging class certification, including its argument that the issue of whether each plaintiff suffered damages from comScore’s actions precludes certification.  The lawsuit remains pending.

Tyler v. Michaels Stores Inc., SJC-11145, 2013 WL 854097 (Mass. Mar. 11, 2013)

The Massachusetts Supreme Judicial Court broadened the definition of the term “personal information” to include ZIP codes.  The court held that because retailers can use ZIP codes to find other personal information, retailers were prohibited by Massachusetts law (Mass. Gen. Laws ch. 93, § 105(a)) from collecting ZIP codes in credit card transactions.  The court also ruled that the plaintiffs did not have to prove identity theft to recover under the statute.  They could instead rely on the fact that they received unwanted marketing materials and that their data was sold to a third party.  The fact that plaintiffs can proceed with their lawsuit without having to show that their information was actually compromised will undoubtedly be used by plaintiffs in data breach litigation to argue that the threshold for injury in such cases is lower than in other cases.

What’s the Takeaway?

What should corporate counsel take from these cases?  It is still too early to tell whether these cases are outliers or whether they mark a new trend in favor of plaintiffs in privacy and data breach cases that will embolden the plaintiffs’ bar.  The most important takeaway for corporate counsel at this stage is that they must, at a minimum, monitor the litigation risks associated with data breaches and other privacy violations so they can advise their companies about these risks, which can, in turn, be considered when building security and privacy into products and services.

 
