In three months, the EU’s General Data Protection Regulation (GDPR), one of the strictest privacy laws in the world, will go into effect.  It will apply to companies that collect or process the personal data of EU residents, regardless of whether the company is physically located in the EU.  Companies that violate the law face penalties of up to 4% of their annual worldwide revenue for the preceding financial year or €20,000,000, whichever is greater.  Is your organization ready?

Shook’s Privacy and Data Security Team regularly counsels multinational companies on complying with international privacy laws like the GDPR.  To help in-house lawyers understand whether the GDPR applies to their organizations and how to minimize its risks, we have prepared a webinar that provides tips on developing a GDPR compliance program.  The webinar is on-demand and complimentary.  Check it out here, and feel free to leave comments.


Does your company collect biometric information?  Are you not entirely sure what “biometric information” means?  Would you like to understand the differences between the different state biometric privacy laws?  Do you want to know why more than 50 companies were hit with class action lawsuits within a period of three months as a result of their biometric privacy practices?

If the answer to any of these questions is “yes,” then check out this complimentary, on-demand webinar on biometric privacy prepared by Shook’s Privacy and Data Security Team.  Then get in touch with any member of our Biometric Privacy Task Force (contact information is at the end of the webinar), and feel free to leave comments below.

While the privacy world is focused on the Equifax data breach, another development is taking place that could have a more lasting effect on privacy law.  In the last month, plaintiffs’ lawyers in Illinois have filed over 20 lawsuits against companies that authenticate their employees or customers with their fingerprints.  The lawsuits are based on the Illinois Biometric Information Privacy Act (BIPA), which requires companies that possess or collect biometric information to provide notice to and obtain a written release from individuals whose biometric information the companies collect.

Why Do These Lawsuits Matter?

Companies are increasingly collecting biometric information from their customers and employees (“data subjects”) because this information helps authenticate users with greater accuracy.  It allows a company to provide customers a more seamless, secure, and tailored experience.  It also allows employees to securely and conveniently punch in and out of work by placing a finger on an electronic reader, which has the additional benefit of minimizing “buddy punching” (where employees ask colleagues to improperly clock them in or out of work).

What Is Biometric Information?

BIPA applies to “biometric identifiers” and “biometric information.”  A biometric identifier is a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry.  Biometric identifiers do not include, among other things, writing samples, written signatures, photographs, human biological samples used for valid scientific testing or screening, demographic data, tattoo descriptions, or physical descriptions such as height, weight, hair color, or eye color.  Biometric information means any information, based on an individual’s biometric identifier, that is used to identify an individual.  Because BIPA does not treat biometric identifiers differently from biometric information, this blog post refers to both categories collectively as “biometric information.”

To Whom Does BIPA Apply?

BIPA applies to companies in possession of biometric information and to companies that collect, capture, purchase, receive through trade, or otherwise obtain biometric information about Illinois residents.  BIPA does NOT apply to entities governed by HIPAA or GLBA.  Nor does it apply to state or local government agencies or any court of Illinois.

What Does BIPA Require?

Companies that possess biometric information must develop a written policy, made available to the public, that establishes a retention schedule and guidelines for permanently destroying biometric information when the initial purpose for collecting or obtaining the information has been satisfied, or within three years of the individual’s last interaction with the private entity, whichever occurs first.  The company must comply with this retention schedule and destruction guidelines, unless a valid warrant or subpoena issued by a court of competent jurisdiction provides otherwise.  The company must also adopt reasonable security safeguards to protect the storage and transmission of biometric information.  These safeguards must be at least the same as or more protective than the manner in which the private entity stores, transmits, and protects other confidential and sensitive information.

Companies that collect, capture, purchase, receive through trade, or otherwise obtain a person’s biometric information must:  (1) inform the subject in writing that biometric information is being collected or stored, and the specific purpose and length of term for which the information is being collected, stored, and used; and (2) obtain a written release executed by the subject of the biometric information.

What Conduct Does BIPA Prohibit?

Companies that possess biometric information are not allowed to sell, lease, trade, or otherwise profit from a person’s biometric information.  Additionally, disclosure, redisclosure, and other dissemination of the information is prohibited unless:  (1) the data subject consents to the disclosure; (2) the disclosure completes a financial transaction requested or authorized by the data subject; (3) the disclosure is required by law; or (4) the disclosure is required pursuant to a valid warrant or subpoena issued by a court of competent jurisdiction.

Can My Company Be Sued For Violating BIPA?

Any person “aggrieved by a violation” of BIPA can sue the violating company.  He or she may be entitled to $1,000 in liquidated damages for a negligent statutory violation or $5,000 in liquidated damages for an intentional statutory violation.  (If actual damages are greater, the plaintiff may seek those instead, but for the reasons discussed below, this is not usually the case.)  Additionally, the prevailing party (plaintiff or defendant) may recover attorney’s fees and costs.

What Is This Latest Wave Of BIPA Lawsuits All About?

Between BIPA’s enactment in 2008 and a couple of months ago, there were relatively few lawsuits based on violations of BIPA.  Within the last couple of months, however, the Illinois plaintiffs’ bar has filed over 20 BIPA lawsuits.  Almost all of those lawsuits are based on the same underlying factual scenario: an employee places his or her finger on a time clock to authenticate himself or herself when checking in or out of work.  In addition to suing the employer, plaintiffs are also suing the companies that sell or distribute the time clocks that use fingerprint readers.

Given the timing of the lawsuits and their almost identical language, this is surely a coordinated effort by the plaintiffs’ bar to obtain quick settlements from risk-averse companies that would prefer to avoid, or cannot afford, the cost of litigation.  It is also a shotgun approach: flood the courts with these lawsuits in the hope that one or two will result in favorable precedent that can be used to file more.  I don’t see this trend ending anytime soon.

Do The Lawsuits Have Merit?

No.  You can expect to see strong arguments by the defendants on the underlying technology and the meaning of biometric information.  But these lawsuits are meritless primarily because the plaintiffs didn’t suffer any real harm.  The lawsuits appear to be filed by former employees with axes to grind against their former employers.  Setting that aside, however, the arguments in the complaints are not persuasive.

The complaints allege that BIPA was designed to ensure that the plaintiffs receive notice that their biometric information is being collected, and that the plaintiffs should have been asked to sign written releases.  This lack-of-notice argument is silly when you remember that these individuals were essentially receiving notice every day by placing their fingers on a time clock to log in and out of work.  This latest wave of cases does not present the situation, as other BIPA cases have, where biometric information is collected without the data subject’s knowledge.

The complaints also allege that the plaintiffs were not provided a policy explaining the use of their information.  If we assume that the plaintiffs would have read these policies (because we all read policies provided to us during the onboarding process), what would those policies have told the employees?  Anyone familiar with the technology will tell you: that the company does not actually collect fingerprint images at all; that there is no database of employee fingerprints somewhere; that, to the extent the company has access to numerical representations of fingerprints, those representations are useless to anyone else because they cannot be reverse-engineered; and that the information is not shared with third parties (primarily because it would serve no purpose).
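
To make the one-way point concrete, below is a minimal, purely illustrative sketch in Python.  Everything in it is a hypothetical stand-in: real fingerprint readers use proprietary feature extraction and fuzzy similarity scoring rather than the exact hash shown here, but the property is the same: the stored value cannot be reversed into a fingerprint image.

    # Illustrative only: a stand-in for how a reader might reduce
    # fingerprint features to a one-way template.  Real systems use
    # proprietary minutiae extraction and fuzzy matching, not SHA-256.
    import hashlib

    def make_template(minutiae):
        """Reduce (x, y, angle) feature points to a fixed-length digest.

        The digest can be compared for matching purposes but cannot be
        reverse-engineered into an image of the fingerprint.
        """
        serialized = ",".join(f"{x}:{y}:{angle:.2f}" for x, y, angle in minutiae)
        return hashlib.sha256(serialized.encode()).hexdigest()

    def matches(stored_template, candidate_minutiae):
        # Real matchers score similarity between feature sets; exact
        # comparison here is a simplification for illustration.
        return stored_template == make_template(candidate_minutiae)

    template = make_template([(12, 40, 1.57), (33, 8, 0.79)])  # hypothetical features
    print(matches(template, [(12, 40, 1.57), (33, 8, 0.79)]))  # True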

The complaints are also significant in what they do NOT allege.  They do not allege, for example, that unauthorized third parties (like hackers) accessed the information.  Nor do the complaints allege that the employers shared the information with any authorized third parties.  So again, what is the harm suffered?

For these reasons, most courts that have addressed the lack-of-harm argument in the BIPA context have dismissed the lawsuits.  See, e.g., McCollough v. Smarte Carte, Inc. (N.D. Ill. Aug. 1, 2016); Vigil v. Take-Two Interactive Software, Inc. (S.D.N.Y. Jan. 27, 2017).  Those courts concluded that even if there was a technical violation of BIPA, the plaintiffs were not “aggrieved by those violations.”

What Can Companies Do To Minimize These Risks?

First, determine whether BIPA even applies to you.  This may require consulting with counsel knowledgeable in the requirements of BIPA and the underlying technology.  Even if you are not currently collecting biometric information from Illinois residents, could you in the future?  Additionally, while Illinois is currently the only state that creates a private right of action for violation of its biometric information privacy statute, other states have similar laws enforced by their respective Attorneys General.

Second, if BIPA applies, use experienced counsel to ensure that you comply with it: draft a BIPA retention policy, prepare and obtain written releases, and evaluate the security and use of the information.  This process may require coordination with your information technology staff and the vendor you use for your authentication devices.

Finally, if your company has already been sued, there are strategies that counsel should immediately bring to your attention that will lower the cost of litigation, increase the likelihood of success, and help you identify traps for the unwary.


DISCLAIMER:  The opinions expressed here represent those of Al Saikali and not those of Shook, Hardy & Bacon, LLP or its clients.  Similarly, the opinions expressed by those providing comments are theirs alone, and do not reflect the opinions of Al Saikali, Shook, Hardy & Bacon, or its clients.  All of the data and information provided on this site is for informational purposes only.  It is not legal advice nor should it be relied on as legal advice.

One of the most significant questions in data security law is whether reports created by forensic firms investigating data breaches at the direction of counsel are protected from discovery in civil class action lawsuits.  They are, at least according to an order issued last week in In re Experian Data Breach Litigation, No. 15-01592 (C.D. Cal. May 18, 2017).  This post analyzes the decision, identifies important practical takeaways for counsel, and places the decision in context with the two other cases that have addressed this issue.

Why Do Lawyers Hire Forensic Firms?

When a breach occurs, companies often retain legal counsel to advise them on legal issues like whether the company adopted “reasonable” security safeguards; whether the company is obligated to notify affected customers and, if so, when and how; whether notice to regulators is required; and what remedial measures are required.  To properly advise clients on these issues, legal counsel needs to know whether personally identifiable information (PII) was affected by the incident, when the intrusion occurred, whether the PII was actually accessed or acquired, what safeguards were in place to prevent the attack, and how the vulnerability was remediated.  A good forensic firm will help answer these questions so counsel can advise clients accurately.  The resulting reports often contain information that plaintiffs’ lawyers would love to get their hands on: details about why the breach occurred, how it could have been prevented, and whether the company’s safeguards were consistent with standards of reasonableness.  It is important that the forensic firm be able to perform its investigation without fear that its reports will be subject to misinterpretation and criticism by a plaintiff’s lawyer or other third party.  Hence the need for protection of these reports in civil litigation.  For the time being, there is no statutory protection for these types of documents (though there should be), so we must turn to the attorney-client privilege and work-product doctrine for protection.

What Happened In Experian?

In October 2015, Experian announced that it suffered a data breach.  A class action was filed the next day.  Experian immediately hired legal counsel who in turn hired Mandiant, one of the world’s leading forensic firms, to investigate the data breach and identify facts that would allow outside counsel to provide legal advice to Experian.

The plaintiffs requested a copy of Mandiant’s report and documents related to that investigation.  Experian objected, arguing that the documents are privileged and protected by the work-product doctrine because they were prepared in anticipation of litigation for the purpose of allowing counsel to advise Experian on its legal obligations.  The plaintiffs moved to compel production of the documents.

The court held that the documents were protected from discovery by the work-product doctrine.  Plaintiffs had argued that Experian had an independent business obligation to investigate the data breach, and it hired Mandiant to do that after realizing its own experts lacked sufficient resources.  The court rejected this argument because Mandiant conducted the investigation and prepared the report for outside counsel in anticipation of litigation, “even if that wasn’t Mandiant’s only purpose.”  The court pointed to, among other things, the fact that Mandiant’s full report was not provided to Experian’s internal incident response team.

Plaintiffs argued that the report should not be protected because it was prepared in the ordinary course of business, citing the fact that Mandiant had previously worked for Experian.  The court disagreed because Mandiant’s previous work for Experian was separate from the work it did regarding the subject breach.

Plaintiffs also argued that, even if the documents were created to allow counsel to advise Experian, plaintiffs could not obtain the information in the Mandiant report by other means because Mandiant accessed Experian’s live servers to do its analysis, which plaintiffs’ experts would not be able to do.  The court disagreed, citing information in the record demonstrating that Mandiant never in fact accessed the live servers but instead worked from server images to create its report.

Lastly, the plaintiffs argued that even if the information was protected by the work-product doctrine, Experian waived the protection by sharing the documents with a co-defendant (T-Mobile’s counsel).  In what I believe will be the most underrated yet arguably most important part of the order, the court ruled that the sharing of the report with the co-defendant pursuant to a joint defense agreement did not constitute a waiver of the work product doctrine.

There are some limitations to the court’s order:

  1. The court ruled only on whether the work-product doctrine applied to the Mandiant documents, not whether the attorney-client privilege applied.
  2. Mandiant delivered its report to outside counsel only, who shared it with in-house counsel.  The full report was not shared with Experian’s incident response team (it is not clear who comprised that team).
  3. Mandiant performed an analysis of Experian’s systems two years before this incident.  The court did not conclude that the 2013 report was privileged, nor did it conclude that any work Mandiant performed before outside counsel was hired is privileged.  It is not clear from the order whether the court was holding that the pre-incident and pre-engagement materials were not protected at all, holding that they were not protected by the attorney-client privilege, or simply not ruling one way or the other.  My interpretation is that it is the last of these.

How Have Other Courts Ruled?

Only two other courts have addressed the applicability of privilege or work-product protection to the production of forensic reports.  Both have applied privilege and/or work product to the documents.

In In re Target Corporation Customer Data Security Breach Litigation, No. 14-2522 (D. Minn. Oct. 23, 2015), the court held that documents relating to a forensic investigation performed to provide legal advice to the company were protected by the attorney-client privilege and the work-product doctrine.  Following its breach, Target established a data breach task force at the request of Target’s in-house lawyers and its retained outside counsel so that the task force could educate Target’s attorneys about aspects of the breach and counsel could provide Target with informed legal advice.  What makes Target different from Experian is that Target undertook two forensic investigations (both by the same forensic firm, Verizon): one as described (to enable counsel to advise Target in anticipation of litigation and regulatory inquiries), and a second required by several credit card brands (commonly referred to as a “PFI,” or payment card forensic investigation).  This second investigation, Target conceded, was not protected by privilege or the work-product doctrine.  The court allowed production of certain information (emails to Target’s Board of Directors updating the Board on Target’s business-related interests), but held that information relating to Verizon’s investigation for the data breach task force was protected by the attorney-client privilege and work-product doctrine.  The court reasoned that plaintiffs could use the forensic images and the PFI documents to learn how the data breach occurred and how Target responded.

In Genesco, Inc. v. Visa U.S.A., Inc., No. 3:13-cv-00202 (M.D. Tenn. Mar. 25, 2015), the court denied Visa’s request for discovery related to remediation measures performed by IBM on Genesco’s behalf.  The court reasoned that Genesco retained IBM to provide consulting and technical services to assist counsel in rendering legal advice to Genesco. Therefore, the documents were privileged.

Experian came out the same way as Target and Genesco, but there are subtle differences that should be kept in mind whenever a company decides to retain a forensic firm and expects privilege or work-product protection to apply.  Experian is arguably the most important of the three because it presents the far more common scenario.  Most companies will not spend money to hire two forensic firms (or one firm with two teams) to perform two separate investigations of the same incident.  So where only one investigation is performed, the company and counsel would be wise to read the Experian filings and order before commencing the engagement of counsel and a forensic firm.

Takeaways

Here are some practical takeaways if a breached entity wants to minimize the risk of disclosure of a forensic report:

  • The forensic firm should be hired by outside counsel, not by the incident response team or the information security department.
  • Create a record and think about privilege issues early in the engagement by doing the following:
    • ensure that the engagement letter between the breached entity and outside counsel envisions that outside counsel may need to retain a forensic firm to help counsel provide legal advice;
    • make clear in the MSA and/or SOW between outside counsel and the forensic firm that the firm is being hired for the purpose of helping counsel provide legal advice to the client;
    • limit the scope of the forensic firm’s work to those issues relevant to and necessary for counsel to render legal advice;
    • ensure that the forensic firm communicates directly (and only) with counsel in a secure and confidential manner;
    • share the forensic firm’s full report with no one other than in-house counsel; and
    • incorporate the forensic firm’s report into a written legal memorandum to demonstrate how the firm’s findings were used to help counsel provide legal advice to the client.
  • Work a forensic firm undertakes before outside counsel is involved likely will not be protected, so the breached entity should engage counsel immediately.



The consequences of a data breach reached new heights last week when Yahoo announced the resignation of its General Counsel in response to a series of security incidents the company suffered.  A fuller explanation of the security incidents and Yahoo’s response can be found in item seven of the company’s 10-K, but here are the highlights:

  • Yahoo suffered three security incidents from 2013 to 2016, one of which involved the theft of approximately 500 million user accounts from Yahoo’s network by a state-sponsored actor in late 2014. The stolen information included names, email addresses, telephone numbers, dates of birth, hashed passwords, and encrypted or unencrypted security questions and answers.  (Note that under most, but not all, data breach notification laws, unauthorized access to these data elements would not create a legal obligation to notify affected individuals.)
  • An independent committee of Yahoo’s board of directors undertook an investigation with the assistance of a forensic firm and outside counsel.
  • The committee concluded that Yahoo’s information security team knew of the 2014 security incident at that time, but the incident was not disclosed until September 2016.
  • “[S]enior executives did not properly comprehend or investigate, and therefore failed to act sufficiently upon, the full extent of knowledge known internally by the Company’s information security team.”
  • Yahoo knew, as early as December 2014, that an attacker had acquired personal data of Yahoo users, but it is not clear whether and to what extent this information was conveyed to those outside the information security team.
  • The legal team, however, “had sufficient information to warrant substantial further inquiry in 2014, and they did not sufficiently pursue it. As a result, the 2014 Security Incident was not properly investigated and analyzed at the time, and the Company was not adequately advised with respect to the legal and business risks associated with the 2014 Security Incident.”  (Emphasis added).  The 10-K does not identify the “sufficient information” or explain what “further inquiry” would have been required (or why).
  • The committee found “failures in communication, management, inquiry and internal reporting,” which all contributed to lack of understanding and handling of the 2014 Security Incident.
  • The committee also found that Yahoo’s board of directors was “not adequately informed of the full severity, risks, and potential impacts of the 2014 Security Incident and related matters.”

It’s not clear from the 10-K exactly why Yahoo’s General Counsel was asked to step down.  It’s highly unusual for a General Counsel to be held directly (and publicly) responsible for a data breach.  Nevertheless, the outcome raises a couple of questions:  (1) will this represent a new trend for in-house counsel generally, and (2) how will it affect how companies approach investigations of data incidents in the future?

Regarding the latter question, a colleague at another firm suggested that this outcome may make corporate legal departments less inclined to involve themselves in breach response or direct investigations of suspected data breaches.  I disagree.  Looking the other way or sticking one’s head in the sand is never the right response to a data incident.  In fact, the legal department would create bigger problems if it did little or nothing.

So what can a corporate legal department do to minimize its own risks?  Here are a few suggestions:

  • Retain a forensic firm through the legal department or outside counsel in advance of an incident to ensure that resources are available to begin an investigation immediately, and to maximize the applicability of the attorney-client privilege and work product doctrine.
  • Engage outside counsel skilled in privacy and data security law and experienced in helping similarly situated companies prepare for and respond to data incidents. There is a growing glut of lawyers who hold themselves out as privacy experts, so I recommend asking for and contacting references.  Most clients are happy to speak about their level of satisfaction with their outside counsel while avoiding details of the incident that led to the engagement.
  • Prepare written protocols, with the cooperation of your information security department, to guide your investigation when an incident occurs. These protocols are different from incident response plans; they focus specifically on the process of initiating, directing, and concluding an investigation at the direction of legal counsel for the purpose of advising the company on its compliance with privacy and data security laws.  They include rules on communication, documentation, and scope.
  • Engage in real dialogue with the information security officer(s) before an incident occurs, in an effort to identify appropriate rules of engagement for when the corporate legal department should be involved in incident response. Some companies involve legal in every data incident (that’s too much); some don’t involve legal at all and maintain that data incidents are entirely within the purview of information security (that’s too little . . . and creates significant legal risks).  The challenge lies in defining the middle ground.  It is easy to say “legal gets involved when Information Security determines that an incident is serious,” but it is often difficult to know at the outset whether an incident will become serious, and by the time you’ve figured that out it may be too late.  There is, however, a way to strike that balance.
  • Test, test, test – regularly simulate data incidents to test the protocols, rules of engagement, and incident response plans. I’ve been involved in some phenomenal tabletop exercises, which clients have used to benchmark their response readiness against other similarly situated companies.  I’ve been consistently impressed with one particular forensic firm in this space.  Legal and information security departments can and should work together to undertake these exercises.

Information security officers will not be the only high-level executives to have their feet held to the fire when a data breach occurs.  I predict that C-level executives and boards of directors will increasingly hold corporate legal departments responsible (at least in part) for how the company investigates and responds to a suspected data breach.  So it will be important for legal departments to proactively educate themselves on the legal issues that arise when an incident occurs, identify their roles in the incident response procedure, and prepare to act quickly and thoroughly when the time comes.



Earlier this year, Bloomberg Law reported that Edelson PC, a leading plaintiffs’ firm in privacy and data security law, filed a class action lawsuit against a regional law firm that had vulnerabilities in its information security systems.  This week, the identity of the firm and the allegations of the lawsuit were unsealed.  The case, Shore v. Johnson & Bell, LTD, No. 1:16-cv-04363 (N.D. Ill. Apr. 15, 2016), alleges that Johnson & Bell (“the firm”), a Chicago-based law firm, was negligent and engaged in malpractice by allowing information security vulnerabilities to develop that created risks to client information.  This blog post explains the alleged vulnerabilities, analyzes the merits of the lawsuit, and discusses what it means for other law firms, their clients, and service providers.

By coincidence, Fortune reported earlier this week that China stole data from major U.S. law firms:  “The evidence obtained by Fortune did not disclose a clear motive for the attack but did show the names of law firm partners targeted by the hackers. The practice areas of those partners include mergers and acquisitions and intellectual property, suggesting the goal of the email theft may indeed have been economic in nature.”  These developments are reminders that information security must be a high priority for all law firms.

The Johnson & Bell Lawsuit

The lawsuit is based on three alleged vulnerabilities in the firm’s information security infrastructure.  According to a court filing, the vulnerabilities have now been addressed and fixed.

First, the lawsuit alleges that the firm’s Webtime Server, an application attorneys use via any web browser to remotely log in and record their time, was based on the 2005 version of JBoss, a Java-based application server.  The Complaint alleges that the 2005 version of JBoss has been identified by the National Institute of Standards and Technology as having an exploitable vulnerability.  Plaintiffs also allege that hackers have taken advantage of this vulnerability in other situations to conduct ransomware attacks.

Second, the lawsuit alleges that the firm’s virtual private network (VPN) server contained a vulnerability.  Companies use VPNs to allow their employees to remotely access company information in an encrypted, secured manner.  The secured nature of a VPN connection allows companies to feel comfortable providing access to highly sensitive internal resources and databases.  Sometimes a temporary disconnection occurs while an employee is using a VPN connection.  The Complaint alleges generally that when the firm’s VPN sessions were disconnected, the renegotiation (the re-connection of the VPN session) was insecure, making it vulnerable to a “man-in-the-middle” attack: a cyberattack in which the hacker secretly intercepts communications between two parties to eavesdrop and steal confidential information.

Finally, the Complaint alleges that the firm’s email system was vulnerable because it supported version 2.0 of SSL.  Secure Sockets Layer (SSL) is a form of technology that creates an encrypted tunnel between a web server and a browser to ensure that information passing through the tunnel is protected from hackers.  Version 2.0 was replaced by version 3.0 in 1996.  In 1999, Transport Layer Security (TLS) was introduced to replace SSL entirely.  Since then, TLS has been updated at least twice.  According to the Complaint, the use of SSL 2.0 made the firm susceptible to a DROWN (Decrypting RSA with Obsolete Weakened Encryption) attack that could allow hackers to access the contents of the firm’s emails and attachments.  The Complaint claims that the Panama Papers breach was a result of a similar attack.
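
For context on what remediating these protocol-level issues can look like, here is a minimal sketch (my own illustration, not the firm’s actual configuration) of hardening a TLS server context in Python: it refuses the obsolete protocol versions exploited by downgrade attacks like DROWN, and it disables renegotiation, the weakness alleged against the VPN server.  The certificate file paths are hypothetical.

    # Sketch of a hardened TLS server configuration (Python 3.7+).
    import ssl

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # Refuse SSL 2.0/3.0 and TLS 1.0/1.1, closing off DROWN-style
    # downgrade attacks that rely on obsolete protocol support.
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    # Disable TLS renegotiation entirely (requires OpenSSL 1.1.0h+),
    # eliminating the insecure-renegotiation path used in
    # man-in-the-middle attacks like the one alleged here.
    context.options |= ssl.OP_NO_RENEGOTIATION
    context.load_cert_chain(certfile="server.crt", keyfile="server.key")  # hypothetical paths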

Notably, the Complaint does not allege that the firm actually suffered a compromise of sensitive information, that a successful cyberattack occurred, or even that a cyberattack was attempted.  In other words, the lawsuit is based on the firm’s alleged state of security, which may make it vulnerable to an attack in the future.

Who is the class?  Plaintiffs (Jason Shore and Coinabul, LLC) are former clients of the Johnson & Bell firm.  The firm defended Plaintiffs in a class action lawsuit alleging that Plaintiffs defrauded consumers by accepting payments in the form of bitcoins while refusing to ship gold or silver ordered by customers.  See Hussein v. Coinabul, LLC, No. 14 C 5735 (N.D. Ill. 2014).  Plaintiffs define the class as all of the firm’s clients within the statute of limitations period except insurance companies and clients operating in the healthcare industry.  Why insurance and healthcare companies are excluded from the proposed class is not evident from the allegations.  It could be that those industries are more highly regulated in privacy and data security and therefore would have had a greater duty to ask the firm questions about its information security practices.  It is not clear, though, why financial institutions, the most highly regulated sector in data security, were not also carved out.

The Complaint is based on four causes of action:

  1. Breach of implied contract – Plaintiffs allege that, as a term of the engagement agreement, the firm promised to keep a file for the work they performed on Plaintiffs’ matter.  The Complaint claims there was an implied promise that the firm would use reasonable methods to keep Plaintiffs’ information confidential, which was breached by the firm’s security vulnerabilities.
  2. Negligence – Plaintiffs claim the attorney-client relationship automatically created a duty to adopt industry standard data security measures, which was breached as evident by the alleged vulnerabilities.
  3. Unjust enrichment – Plaintiffs argue that a portion of the attorney’s fees they paid to the firm was for the administrative cost of data security to maintain the confidentiality of client information.  Plaintiffs seek return of that amount of the fees paid.
  4. Breach of fiduciary duty – Plaintiffs claim that the failure to implement industry standard data security measures and resulting vulnerabilities were breaches of the firm’s fiduciary duty to Plaintiffs.

What is the injury? Plaintiffs allege they were injured because the security vulnerabilities created (1) a diminished value of the services they received from the firm, and (2) a risk that their sensitive information may be compromised at some point in the future (which could result in damages from that theft).  Plaintiffs measure their damages as the portion of fees paid to the firm that were meant to be for the administrative cost of securing client information.  Plaintiffs have also asked the court to require an independent third-party security audit of the firm’s systems.

Is a Vulnerability by Itself Enough to Meet Standing Requirements?

In my opinion, the lawsuit is fatally flawed because there was no attack or attempted attack on Plaintiffs’ information, let alone actual unauthorized access or acquisition of the information.  The firm’s security system was analogous to an unlocked door to a home that nobody burglarized.  The plaintiffs indisputably suffered no financial damages as a result of the alleged vulnerabilities, and the vulnerabilities were identified (albeit by this lawsuit) and addressed before any actual harm occurred.

If the mere risk of harm at some point in the future is enough to allow a lawsuit to proceed, then every company in America should be concerned.  Most companies probably have similar unknown vulnerabilities in their systems.  The challenge with information security is that it is like a game of “Whack-A-Mole”: fast-paced and constantly changing threats and defenses mean that new vulnerabilities are always emerging, so it is almost impossible to eliminate them entirely.  The floodgates will open if a lawsuit based only on the mere existence of a vulnerability is considered actionable.

That said, the Edelson firm is one of the most creative plaintiffs’ privacy and data security firms in the country.  They have made their name by doing things differently from their peers and are known for pushing the envelope and expanding the boundaries of liability in privacy and data security law.  For example, in Resnick v. AvMed, they were the first firm to persuade a U.S. Circuit Court of Appeals to apply an unjust enrichment theory to data breach class actions.  Other courts have since applied that theory in allowing data breach class action lawsuits to proceed.  The Resnick case subsequently settled for over $3 million.

In In re LinkedIn User Privacy Litigation, No. 5:12-cv-03088 (N.D. Cal. 2012), at a time when other plaintiffs’ firms were pursuing data breach liability based on a failure to adopt reasonable security safeguards, they persuaded the court of a new theory: that the gravamen was not the failure to adopt certain security safeguards, but the misrepresentations in consumer-facing statements about the safeguards that were actually in place.  The LinkedIn case settled for $1.25 million.

In Spokeo v. Robins, a case that was appealed all the way to the U.S. Supreme Court, the Edelson firm argued to the Court that the mere violation of a privacy statute without other damages or harm is sufficient to confer standing on a plaintiff.  The Court’s decision gave plaintiffs a roadmap for circumventing the standing problem.

But no case has gone this far – to hold that a mere vulnerability without a compromise of information, an attack, or an attempted attack, is actionable.  Doing so would essentially change the data security class action litigation “ball game” once again.

The Impact on Everyone Else

This lawsuit is important because of its potential impact on several key groups.  The first is other law firms.  Every firm should immediately determine whether it has the same vulnerabilities alleged in the Complaint.  Law firms should be concerned that similar vulnerabilities could lead to similar lawsuits, whether or not an actual attack has occurred.  They should be prepared to respond to client inquiries explaining what safeguards they have adopted to protect sensitive client information, consistent with their legal and ethical obligations.  (For a discussion of these obligations, read my July 2013 blog post on the subject.)  Firms should also review and update their engagement letters for promises and disclaimers to their clients about information security.

This leads to the second group of impacted individuals:  the law firms’ clients.  Every company should have in place a vendor management program that incorporates information security as part of the due diligence process, and law firms are service providers like the rest of the companies’ vendors.  Companies should be asking their outside counsel as part of the due diligence process how they protect client data:  what administrative, technical, and physical safeguards are in place?  Has the firm obtained an independent third-party certification (like ISO 27001) or performed a risk assessment by an information security expert?  (I was pleasantly surprised to see the Complaint refer to Shook, Hardy & Bacon’s ISO 27001 certification as an example of what law firms should be doing).

Beyond asking questions, clients need to identify what they expect from their law firms in terms of specific security requirements and communication about vulnerabilities or notifications of data incidents.  This lawsuit may have been avoided if the engagement letter had required notice of material vulnerabilities.  The questions clients should be asking their law firms can (and will) be the focus of an entirely separate blog post.

The third group impacted by this lawsuit will be the service providers law firms use for information security services.  Small firms commonly outsource most or all of their information security to these providers.  Even large firms use service providers for information security services that include threat detection, data loss prevention, firewall implementation, and cloud storage.

Firms also purchase licenses for applications that may present security risks, similar to the alleged vulnerability in the Webtime Server.  These applications require separate security vetting by the law firm before they can be used.

I suspect this is the first of what will be a series of lawsuits relating to law firm security brought by the Edelson firm and plaintiffs’ firms that follow their lead.  It will be interesting to see whether courts allow a lawsuit based on a security vulnerability alone to proceed or dismiss it for lack of standing.



The SEC recently agreed to a $1,000,000 settlement of an enforcement action against Morgan Stanley for its failure to have sufficient data security policies and procedures to protect customer data. The settlement was significant for its amount, but the true noteworthiness lies not in the end result but in the implications of how it was reached: (1) the “reasonableness” of a company’s data security safeguards will be judged in hindsight, and (2) almost any data breach could give rise to liability. The SEC has left no room for error in making sure that your cybersecurity procedures and controls actually and always work.

What Happened?

Morgan Stanley maintained personally identifiable information collected from its brokerage and investment advisory customers on two internal company portals. Between 2011 and 2014, an employee unlawfully downloaded and transferred confidential data for approximately 730,000 accounts from the portals to his own personal data storage device/website. It is unclear whether the transfer was for the employee’s personal convenience or a more nefarious purpose. Soon thereafter, however, the employee suffered a cyberattack on his personal storage device, leading to portions of the data being posted to at least three publicly available Internet sites. Morgan Stanley discovered the leak through a routine Internet sweep, immediately confronted the employee, and voluntarily brought the matter to law enforcement’s attention.

The employee who transferred the information to his personal device was criminally convicted of violating the Computer Fraud and Abuse Act by exceeding his authorized access to a computer; he was sentenced to 36 months’ probation and ordered to pay $600,000 in restitution. He also entered into a consent order with the SEC barring him from association with any broker, dealer, or investment adviser for five years.

Morgan Stanley entered into a consent order with the SEC pursuant to which Morgan Stanley agreed to pay a $1,000,000 fine, but did not admit or deny the findings in the order.

SIDE NOTE TO COMPLIANCE OFFICERS READING THIS BLOG POST – if ever you need a way to deter your employees from sending corporate information to their personal devices or email accounts, tell them about this case.

What Does The Law Require?

Federal securities law (Rule 30(a) of Regulation S-P, the “Safeguards Rule”) requires registered broker-dealers and investment advisers to adopt written policies and procedures reasonably designed to:

  1. Insure the security and confidentiality of customer records and information;
  2. Protect against any anticipated threats or hazards to the security or integrity of customer records and information; and
  3. Protect against unauthorized access to or use of customer records or information that could result in substantial harm or inconvenience to any customer.

Here, the SEC based Morgan Stanley’s liability on the fact that Morgan Stanley:

failed to ensure the reasonable design and proper operation of its policies and procedures in safeguarding confidential customer data. In particular, the authorization modules were ineffective in limiting access with respect to one report available through one of the portals and absent with respect to a couple of the reports available through the portals. Moreover, Morgan Stanley failed to conduct any auditing or testing of the authorization modules for the portals at any point since their creation, and that testing would likely have revealed the deficiencies in the modules. Finally, Morgan Stanley did not monitor user activity in the portals to identify any unusual or suspicious patterns.

In other words, the authorization modules did not work in this instance, there was no auditing to test for and identify the problem, and Morgan Stanley had not invested in monitoring that would have flagged the employee’s suspicious activity.
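
For illustration only, here is a minimal sketch of the two controls at issue: an authorization check that limits report access to the customers an employee actually supports, plus an audit log of every attempt that testing and monitoring could later review. Every name in it (Employee, run_report, the branch and FA-group fields) is a hypothetical stand-in, not a description of Morgan Stanley’s actual modules.

    # Illustrative sketch of an authorization check with audit logging.
    import logging
    from dataclasses import dataclass

    logging.basicConfig(level=logging.INFO)
    audit_log = logging.getLogger("report_access")

    @dataclass
    class Employee:
        user_id: str
        branch: str
        fa_group: str  # the financial-advisor group the employee supports

    def run_report(employee, customer_branch, customer_fa_group):
        """Allow a report only for customers the employee supports, and
        record every attempt so audits can detect failures or abuse."""
        authorized = (employee.branch == customer_branch
                      and employee.fa_group == customer_fa_group)
        audit_log.info("user=%s branch=%s fa_group=%s authorized=%s",
                       employee.user_id, customer_branch, customer_fa_group, authorized)
        return authorized

    csa = Employee(user_id="jdoe", branch="042", fa_group="A7")
    run_report(csa, "042", "A7")   # authorized=True
    run_report(csa, "099", "Z1")   # authorized=False, but still logged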

Why Should Companies Worry?

The most concerning part of the Morgan Stanley consent order is this paragraph, which describes some robust safeguards Morgan Stanley had implemented before the incident occurred:

MSSB [Morgan Stanley] adopted certain policies and restrictions with respect to employees’ access to and handling of confidential customer data available through the Portals. MSSB had written policies, including its Code of Conduct, that prohibited employees from accessing confidential information other than what employees had been authorized to access in order to perform their responsibilities. In addition, MSSB designed and installed authorization modules that, if properly implemented, should have permitted each employee to run reports via the Portals only with respect to the data for customers whom that employee supported. These modules required FAs [Financial Advisors] and CSAs [Client Service Associates] to input numbers associated with the user’s branch and FA or FA group number. MSSB’s systems then should have permitted the user to access data only with respect to those customers whose data the user was properly entitled to view. Finally, MSSB installed and maintained technology controls that, among other things, restricted employees from copying data onto removable storage devices and from accessing certain categories of websites.

Lesson learned: it doesn’t matter how robustly designed your policies and procedures may be; if they don’t actually work as designed, you could be liable under the Safeguards Rule.

Commentary

The standard applied by the SEC in this enforcement action is higher than a “reasonableness” standard. It is easy, after the fact, to find a weakness that could have been exploited; indeed, it is unusual if you cannot identify such a vulnerability after the fact. If a criminal or departing employee is set on unlawfully accessing sensitive information, he can likely do so no matter what hurdles you place in his way. A company should not be held liable for failing to stop every data incident. Some may argue that a company like Morgan Stanley must be held to a higher standard because of the known threats to the financial services industry and the potentially significant consequences to consumers when a financial services company suffers a data breach. Nevertheless, the law as written requires policies and procedures that are only “reasonably designed” to protect sensitive information; the law does not require that these policies and procedures be perfectly designed or effective 100% of the time, nor could it.

Hindsight is 20/20, and regulators would be hard-pressed to find any organization that could show its policies and procedures are always followed. Could audits and testing have detected that Morgan Stanley’s authorization module was not preventing the type of unauthorized access and transfer of sensitive information at issue here? Possibly, depending on the depth of the audit and the foresight of the auditors. But little benefit appears to have inured to Morgan Stanley from the fact that it actually had an authorization module, data security training for its employees, policies and restrictions regarding employee access to information, and controls that prevented the copying of data onto removable storage devices, or from the fact that it voluntarily brought this matter to law enforcement’s attention.

Is there a risk now that the SEC’s interpretation of “reasonableness” will be applied similarly by state Attorneys General, the Health and Human Services’ Office of Civil Rights, the Federal Trade Commission, or other regulators? All of this reminds us that the definition of reasonableness in the context of data security is subjective, and that subjectivity is a risk that companies must address.

Takeaways

There are some important practical takeaways for companies from this settlement: (1) perform a risk assessment to determine whether your organization could suffer from a similar risk (an employee transferring corporate information to a personal device); (2) implement an authorization module and other policies and procedures to limit access (and identify unauthorized access) to sensitive information to those who have a legitimate business need; and (3) make sure you audit and test these controls to ensure that they actually work. Additionally, CISOs, compliance officers, and in-house counsel would be well served to make the story of this enforcement action part of their organization’s onboarding and annual data security training.




A significant change is happening to payment card technology, and any company that accepts credit cards as a form of payment needs to know about it if it intends to continue accepting them. The technology is called “EMV” (Europay, MasterCard, Visa). The card brands hope that EMV technology will significantly reduce the amount of fraud in transactions where the payment card is present. This blog post discusses how EMV works, why it was adopted, how merchants can comply with its requirements, the incentives to adopt the technology (and the penalties for failing to do so), and the security pros and cons of EMV.

SPOILER ALERT: EMV is effective in reducing the risk of fraud from counterfeit payment cards used for in-person transactions, but the best way to minimize payment card fraud is through implementation of point-to-point encryption and tokenization. The liability shift is an appealing incentive to adopt the technology, but many merchants have been reporting difficulty finding EMV software that works properly with the EMV hardware.

What is EMV?  EMV is a payment method that combines a plastic card with a microchip. Unlike a typical credit card, a credit card with an EMV chip generates a different code with every purchase a consumer makes. The code is shared with the issuing bank as part of the transaction to authenticate that the card is legitimate. Because the code changes with each transaction, even if a thief steals the information contained on the magnetic stripe of the credit card, he cannot create a counterfeit card, because he cannot replicate the codes generated by the microchip. It is this inability to make counterfeit cards that makes EMV technology so important to the card brands and issuing banks.
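
A simplified sketch shows why stripe data alone is useless for counterfeiting. Real EMV chips compute a cryptogram (an ARQC) inside the chip using issuer-derived keys and a transaction counter; the Python stand-in below, with hypothetical names and values throughout, only illustrates the principle that each purchase produces a fresh code that cannot be reproduced without the chip’s secret key.

    # Illustrative only: the per-transaction code idea behind EMV.
    import hashlib
    import hmac

    def transaction_code(chip_key, counter, amount_cents, terminal_id):
        """MAC over transaction data.  The code changes every purchase
        (the counter increments) and cannot be forged without chip_key,
        which never appears on the magnetic stripe."""
        message = f"{counter}|{amount_cents}|{terminal_id}".encode()
        return hmac.new(chip_key, message, hashlib.sha256).hexdigest()[:16]

    # The issuing bank knows the same key and recomputes the code to
    # authenticate the card; a counterfeiter with stripe data cannot.
    print(transaction_code(b"chip-resident-key", counter=41, amount_cents=1999, terminal_id="POS-7"))
    print(transaction_code(b"chip-resident-key", counter=42, amount_cents=1999, terminal_id="POS-7"))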

Why was EMV created?  EMV was created because criminals and hackers were stealing credit card information, selling it, and using it to create counterfeit credit cards. Those counterfeit cards are often sold or used as part of identity theft and dark-web crime rings. By requiring a microchip that generates a random code for each transaction, the card brands have made it almost impossible to create a counterfeit card. EMV technology, however, is only helpful in preventing fraud where the card is present. Online transactions, for example, do not benefit from the technology because an e-commerce transaction usually does not require that a card be inserted into or swiped at a point-of-sale terminal (which would allow for the microchip’s unique code generation). There are, however, other technologies, like point-to-point encryption and tokenization (discussed below), that could potentially eliminate payment card data breaches.

How does a merchant become EMV-compliant?  To become EMV-compliant, a merchant must install EMV-enabled point-of-sale terminals and obtain certification from its acquiring bank that its payment application for each card network is certified for EMV. The cost of a new EMV-compliant terminal can be between $250 and $500, depending on whether the merchant wants one that will also accept near-field communications payments like Apple Pay. The merchant also needs to ensure that EMV-compliant software is installed in these terminals. Several of my payment card forensic contacts and merchant clients have told me that they are having issues implementing the software solutions.

What if a consumer wants to swipe her card instead of using the chip feature?  Assuming the consumer is using an EMV card at an EMV-enabled terminal, the terminal will require the consumer to use the chip instead of swiping the card.

Is a signature or PIN required to complete a transaction?  Each issuing bank will have different requirements. Visa has said that a signature accompanying the chip is sufficient. MasterCard, however, appears to prefer use of a PIN with the chip. If a merchant does not support the “Chip and PIN” system but a transaction could have been performed with a PIN, the merchant may be responsible for chargebacks related to that transaction.

Will merchants pay lower interchange fees if they adopt the EMV-compliant terminals?  No, and there are no current plans to change that, though the card brands could change their minds if EMV is not adopted quickly enough.

What is the “liability shift”?  Until recently, issuing banks were responsible for card-present counterfeit fraud losses. To encourage merchants to adopt EMV, the card brands have implemented a liability shift as a carrot-and-stick approach. For most merchants, as of October 1, 2015, if they have been certified through their acquiring banks as EMV-compliant and they subsequently suffer a breach, they are not responsible for card-present counterfeit fraud losses. MasterCard requires that 95% of its transactions originate from EMV-compliant POS terminals for the liability shift to apply to 100% of the charges; the shift applies to only 50% of affected MasterCard transactions if only 75% of MasterCard transactions originate from EMV-compliant POS terminals. Merchants are not required to be EMV-compliant, but compliance gives them the protection of this liability shift. The liability shift likely applies to both magnetic stripe cards and EMV cards that are compromised, but the card brands have released public statements that create ambiguity.

Are there any exceptions to the October 1, 2015, deadline for the liability shift?  Yes. The liability shift does not apply to automated fuel dispensers (gas pumps) until October 2017. Also, MasterCard is shifting its liability to ATM owners in October 2016; Visa is shifting that liability in October 2017. The EMV software for fuel dispensers and ATMs has been particularly lacking, making it extremely challenging for merchants in those categories to fully implement EMV technology. Small businesses that accept mobile payments (like Square) will need to purchase new EMV readers. (Square has been assuming the liability until its customers purchase the EMV readers.) Unfortunately, these delays may harm merchants in these categories because they could see a spike in fraud as criminals shift their focus to targets that are easier to compromise without EMV technology.

Besides the liability shift, why else should merchants move quickly to become EMV-compliant?  First, EMV has been shown to significantly lower fraudulent activity for card-present transactions. Second, fraud migrates to non-EMV-compliant terminals. Third, if 75% of transactions are processed through EMV-enabled terminals that support both contact and contactless transactions, the annual PCI DSS compliance validation with a QSA is no longer required. Fourth, a merchant may be protected from card brand assessments arising from a compromise of magnetic stripe cardholder information if 95% of its card-present transactions come from EMV-capable terminals in the 30 days before the start of the compromise event. Fifth, from a public relations standpoint, you do not want to be known as a company that doesn’t take customer security seriously. Finally, EMV-capable POS terminals also allow the merchant to accept contactless payment devices, a feature the merchant may not currently offer.

Are there security weaknesses to EMV?  Yes. As mentioned earlier, EMV reduces fraud only where the payment card is present during the transaction; online purchases and other e-commerce would not be protected by EMV (for the time being). Also, some payment card security experts have observed that an EMV-compliant merchant still possesses the personal account numbers for the cards it accepts, because EMV merely attaches the randomly generated code to the personal account number. If a hacker gains access to unencrypted transaction data, the hacker could still obtain the payment card information simply by scrubbing the code from the personal account number.
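
Here is a minimal sketch of that weakness, under the assumption that the attacker has reached unencrypted transaction data. The field names below are illustrative placeholders, not an actual EMV message format:

    # Illustrative (not real) EMV authorization data: the PAN is static,
    # while the cryptogram is randomly generated for each transaction.
    emv_auth_record = {
        "pan": "4111111111111111",         # personal account number
        "cryptogram": "9F26A1B2C3D4E5F6",  # one-time, per-transaction code
        "amount": "42.00",
    }

    def scrub_cryptogram(record):
        """What an attacker with unencrypted data could do: discard the
        one-time code and keep the reusable personal account number."""
        return {k: v for k, v in record.items() if k != "cryptogram"}

    print(scrub_cryptogram(emv_auth_record)["pan"])  # PAN exposed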

Are there better ways to secure payment card transactions?  Absolutely. EMV is a “fraud reducer,” but point-to-point encryption (P2PE) and tokenization are “fraud eliminators.” P2PE encrypts the personal account number through the entire transaction process, so the merchant never possesses unencrypted personal account numbers and does not hold the keys necessary to unlock the encrypted information. Tokenization takes security one step further by replacing the personal account number with a different number that is worthless to a hacker, so the merchant never possesses this valuable information. Apple Pay is an example of tokenization: when you pay for goods or services with Apple Pay, you are not providing the merchant with your credit card number, but rather a random number that would be useless in any other context.
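
Here is a minimal sketch of the tokenization concept, assuming a hypothetical token “vault” held by the payment processor rather than the merchant (the class and method names are illustrative):

    import secrets

    class TokenVault:
        """Hypothetical processor-side vault: the merchant sees only
        tokens; the PAN never leaves the processor."""

        def __init__(self):
            self._vault = {}  # token -> PAN, held by the processor

        def tokenize(self, pan):
            token = secrets.token_hex(8)  # random value, worthless to a hacker
            self._vault[token] = pan
            return token

        def detokenize(self, token):
            return self._vault[token]  # only the processor can reverse a token

    vault = TokenVault()
    token = vault.tokenize("4111111111111111")
    # The merchant stores and transmits only `token`; a breach of the
    # merchant's systems yields a number useless in any other context.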

In short, companies that accept credit and debit cards as a form of payment should move quickly to become EMV-compliant. While EMV is not a panacea for payment card fraud, it can significantly reduce that fraud and, just as importantly, provide other benefits to a company, like the liability shift. Companies that want to take their security to the next level, however, should consider implementing P2PE and tokenization.

DISCLAIMER: The opinions expressed here represent those of Al Saikali and not those of Shook, Hardy & Bacon, LLP or its clients. Similarly, the opinions expressed by those providing comments are theirs alone, and do not reflect the opinions of Al Saikali, Shook, Hardy & Bacon, or its clients. All of the data and information provided on this site is for informational purposes only. It is not legal advice nor should it be relied on as legal advice.

 

In 2014, the Food and Drug Administration (“FDA”) articulated its expectations for how device manufacturers should address cybersecurity before a device reaches market in Content of Premarket Submissions for Management of Cybersecurity in Medical Devices. Recently, the FDA released complementary draft guidance, Postmarket Management of Cybersecurity in Medical Devices. In the new guidance, the FDA explains what constitutes an effective cybersecurity risk management program, how manufacturers should evaluate postmarket cybersecurity vulnerabilities, and when manufacturers must report cybersecurity risks and improvements to the FDA. Comments on the draft guidance are due by April 21, 2016.

Here are the key takeaways from the guidance:

  • Cybersecurity programs should be documented, systematic, and comprehensive.
  • Consider medical device cybersecurity throughout the device’s entire lifecycle.
  • When evaluating a medical device’s cybersecurity, consider a broad range of quality information and focus on cybersecurity threats that may compromise a device’s essential functions.

Components of an Effective Cybersecurity Risk Management Program

The new guidance exhorts manufacturers to create a cybersecurity risk management program that will address a device’s cybersecurity from the drawing board to the dustbin.

Premarket, manufacturers should account for cybersecurity by designing cybersecurity-related inputs for their devices and incorporating a cybersecurity management approach that determines (A) assets, threats, and vulnerabilities; (B) how threats and vulnerabilities may affect device functionality and end users/patients; (C) the likelihood of threats and exploitation of vulnerabilities; (D) risk levels and suitable mitigation strategies; and (E) residual risk and risk acceptance criteria. (The FDA gave the same recommendations in its 2014 premarket guidance.)
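
One way to picture items (A) through (E) is as fields in a risk-register entry. The structure below is an illustrative assumption, not an FDA-prescribed format:

    from dataclasses import dataclass, field

    @dataclass
    class PremarketRiskEntry:
        asset: str                    # (A) what is being protected
        threat: str                   # (A) who or what might attack it
        vulnerability: str            # (A) the weakness that could be exploited
        impact: str                   # (B) effect on device function/end users
        likelihood: float             # (C) probability of exploitation, 0 to 1
        risk_level: str               # (D) e.g., "low", "medium", "high"
        mitigations: list = field(default_factory=list)  # (D) strategies
        residual_risk_accepted: bool = False             # (E) acceptance sign-off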

Adequate postmarket cybersecurity management requires a program that is systematic, structured, documented, and consistent with the Quality System Regulation (21 C.F.R. Part 820), and that incorporates the National Institute of Standards and Technology’s (NIST) Framework for Improving Critical Infrastructure Cybersecurity (cybersecurity guidelines NIST created pursuant to a presidential executive order and with input from public and private stakeholders). Key components include:

  • monitoring quality cybersecurity information sources—such as complaints, service records, and data provided through cybersecurity Information Sharing Analysis Organizations (“ISAOs”)—for identification and detection of cybersecurity vulnerabilities and risk;
  • establishing, communicating, and documenting processes for vulnerability intake and handling;
  • understanding, assessing, and detecting the presence and impact of vulnerabilities;
  • clearly defining essential clinical performance to develop mitigations that protect against, respond to, and recover from cybersecurity risk;
  • adopting a coordinated vulnerability disclosure policy and practice; and
  • deploying mitigations that address cybersecurity risk early and prior to exploitation.

Assessing Postmarket Cybersecurity Vulnerabilities

Acknowledging that not all vulnerabilities threaten patient safety and that manufacturers may not be able to identify every threat, the guidance advises manufacturers to identify a device’s “essential clinical performance” and focus on identifying and resolving risks to that performance. Manufacturers should define a device’s essential clinical performance by considering the conditions necessary for the device to operate safely and effectively. Manufacturers should assess a vulnerability’s risk by evaluating both its exploitability and the health dangers that would result from its exploitation. The draft guidance recommends tools for each evaluation: the Common Vulnerability Scoring System v3.0 for exploitability, and the standards in ANSI/AAMI/ISO 14971:2007/(R)2010, Medical Devices – Application of Risk Management to Medical Devices, for the health dangers caused by exploitation.

The guidance divides risks into two groups and recommends manufacturers do the same. Low or “controlled” risk exists when, after accounting for existing controls, there is an acceptable amount of risk that the device’s essential clinical performance could be compromised by a cybersecurity vulnerability. High or “uncontrolled” risk exists when insufficient controls and mitigations create an unacceptable amount of risk that the device’s essential clinical performance could be compromised by a cybersecurity vulnerability.
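
A minimal sketch of that two-bucket triage follows. The way exploitability and severity are combined, and the acceptability threshold, are illustrative placeholders; the FDA points to CVSS v3.0 and ISO 14971 for the real scoring:

    def classify_risk(exploitability, severity, acceptable_threshold=0.5):
        """Return 'controlled' or 'uncontrolled' based on residual risk to
        the device's essential clinical performance (inputs assumed to be
        normalized to a 0-1 scale)."""
        residual_risk = exploitability * severity  # placeholder combination
        if residual_risk <= acceptable_threshold:
            return "controlled"    # acceptable residual risk remains
        return "uncontrolled"      # insufficient controls and mitigations

    print(classify_risk(exploitability=0.9, severity=0.8))  # uncontrolled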

Reporting Mitigation

A risk’s classification affects whether a manufacturer may address the risk without reporting the risk and its remediation to the FDA under 21 C.F.R. Part 806, which obligates manufacturers to report to the FDA when they repair, modify, or adjust a device to reduce the device’s health risk. Manufacturers may ameliorate controlled risks without reporting the risk or the enhancement under Part 806. (For Class III devices, however, manufacturers must disclose the risk and the remediation in their periodic reports to the FDA under 21 C.F.R. § 814.84.) Uncontrolled risks are a different matter: manufacturers must report them and their remediation unless (A) there are no known serious adverse events or deaths associated with the vulnerability; (B) within 30 days of learning of the vulnerability, the manufacturer identifies and implements device changes and/or compensating controls to bring the residual risk to an acceptable level and notifies users; and (C) the manufacturer participates in an ISAO.
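
The reporting logic can be expressed as a short decision rule. In this minimal sketch, the function and parameter names are hypothetical, but the conditions track the exception criteria (A) through (C) above:

    def must_report_under_part_806(risk, serious_adverse_events,
                                   remediated_within_30_days,
                                   users_notified, participates_in_isao):
        if risk == "controlled":
            # No Part 806 report (Class III periodic-report disclosure
            # under 21 C.F.R. § 814.84 may still apply).
            return False
        # Uncontrolled risk: reportable unless ALL exception criteria hold.
        exception_applies = (
            not serious_adverse_events     # (A) no known serious events/deaths
            and remediated_within_30_days  # (B) residual risk made acceptable...
            and users_notified             #     ...and users notified
            and participates_in_isao       # (C) manufacturer belongs to an ISAO
        )
        return not exception_applies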

What the Draft Guidance Means for Device Manufacturers

Device manufacturers should not delay assessing the strength of their cybersecurity management programs. The U.S. Department of Health and Human Services, Office of Inspector General has identified the cybersecurity of medical devices as one of its priorities for 2016. And the draft guidance explains that the FDA may consider devices with uncontrolled risk to violate the Federal Food, Drug, and Cosmetic Act (FDCA) and be subject to enforcement action.

To see how their programs measure up to what the draft guidance describes, device manufacturers should start by asking themselves these key questions:

  • Is our cybersecurity management program addressing cybersecurity throughout each device’s lifecycle?
  • Is our program proactive?
  • Are there quality data security sources, such as ISAOs, we have not used but should?
  • Do we need to develop and deploy new training or messaging to colleagues about cybersecurity?
  • Are we practicing good cyber hygiene?

When deciding how to move forward with strengthening a cybersecurity program, manufacturers will want to keep in mind the need to safeguard devices against both malicious attacks and non-malicious threats. Vulnerable devices may become infected by malware that cannot tell the difference between a personal computer and a pacemaker. That example is not farfetched: J.M. Porup recently reported for Slate that malware designed to steal credit card information infected and disabled vulnerable fetal heart monitors.

Ever wonder how your credit card gets compromised and how the bad guys get your information? This report on tonight’s episode of 60 Minutes provides an overview of what happens from the moment you swipe your card at a point-of-sale terminal, to the moment the card number is compromised and sold on a black-market website, to the moment the criminal who buys your card number uses it to create a counterfeit card. The report also investigates why most companies learn of these breaches from third parties rather than from their own information security teams. I highly recommend it to anyone interested in learning about this risk as this year’s holiday season begins.

 
