Published by Al Saikali

The SEC recently agreed to a $1,000,000 settlement of an enforcement action against Morgan Stanley for its failure to have sufficient data security policies and procedures to protect customer data. The settlement was significant for its amount. The true noteworthiness here, however, lies not in the end result but in the implications of how it was reached: (1) the “reasonableness” of a company’s data security safeguards will be judged in hindsight, and (2) almost any data breach could give rise to liability. The SEC has left no room for error in making sure that your cybersecurity procedures and controls actually and always work.

What Happened?

Morgan Stanley maintained personally identifiable information collected from its brokerage and investment advisory services customers on two internal company portals. Between 2011 and 2014, an employee unlawfully downloaded and transferred confidential data for approximately 730,000 accounts from the portals to his own personal data storage device/website. It is unclear whether the transfer of information was for the employee’s personal convenience or a more nefarious purpose. Soon thereafter, however, the employee suffered a cyberattack on his personal storage device, leading to portions of the data being posted to at least three publicly available Internet sites. Morgan Stanley discovered the information leak through a routine Internet sweep, immediately confronted the employee, and voluntarily brought the matter to law enforcement’s attention.

The employee who transferred the information to his personal device was criminally convicted of violating the Computer Fraud and Abuse Act by exceeding his authorized access to a computer; he was sentenced to 36 months of probation and ordered to pay $600,000 in restitution. He also entered into a consent order with the SEC barring him from association with any broker, dealer, or investment adviser for five years.

Morgan Stanley entered into a consent order with the SEC pursuant to which Morgan Stanley agreed to pay a $1,000,000 fine, but did not admit or deny the findings in the order.

SIDE NOTE TO COMPLIANCE OFFICERS READING THIS BLOG POST – if ever you need a way to deter your employees from sending corporate information to their personal devices or email accounts, tell them about this case.

What Does The Law Require?

Federal securities regulations (Rule 30(a) of Regulation S-P – the “Safeguards Rule”) require registered broker-dealers and investment advisers to adopt written policies and procedures reasonably designed to:

  1. Insure the security and confidentiality of customer records and information;
  2. Protect against any anticipated threats or hazards to the security or integrity of customer records and information; and
  3. Protect against unauthorized access to or use of customer records or information that could result in substantial harm or inconvenience to any customer.

Here, the SEC based Morgan Stanley’s liability on the fact that Morgan Stanley:

failed to ensure the reasonable design and proper operation of its policies and procedures in safeguarding confidential customer data. In particular, the authorization modules were ineffective in limiting access with respect to one report available through one of the portals and absent with respect to a couple of the reports available through the portals. Moreover, Morgan Stanley failed to conduct any auditing or testing of the authorization modules for the portals at any point since their creation, and that testing would likely have revealed the deficiencies in the modules. Finally, Morgan Stanley did not monitor user activity in the portals to identify any unusual or suspicious patterns.

In other words, the authorization modules did not work in this instance, there was no auditing or testing that might have identified the problem, and Morgan Stanley had not invested in monitoring applications that would have flagged the employee’s suspicious activity.
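
For illustration only, here is a minimal sketch of the kind of user-activity monitoring the SEC faulted Morgan Stanley for lacking. The log format, field names, and threshold are hypothetical assumptions of mine, not anything described in the consent order.

```python
from collections import Counter

# Hypothetical access-log entries: (user_id, number_of_customer_records_returned).
# In practice these would come from the portals' report logs.
access_log = [
    ("fa_1001", 45),
    ("fa_1001", 60),
    ("csa_2002", 30),
    ("fa_3003", 25_000),   # an unusually large pull worth reviewing
]

# Illustrative threshold: flag any single report that returns far more
# records than an advisor's own book of business would normally contain.
RECORDS_PER_REPORT_THRESHOLD = 5_000

def flag_suspicious_activity(log):
    """Return users whose individual report pulls exceed the threshold."""
    flagged = Counter()
    for user_id, records_returned in log:
        if records_returned > RECORDS_PER_REPORT_THRESHOLD:
            flagged[user_id] += 1
    return flagged

if __name__ == "__main__":
    for user, count in flag_suspicious_activity(access_log).items():
        print(f"Review {user}: {count} report(s) exceeded {RECORDS_PER_REPORT_THRESHOLD} records")
```

Even a crude rule like this, run against portal logs, is the sort of “unusual or suspicious patterns” monitoring the SEC said was missing.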

Why Should Companies Worry?

The most concerning part of the Morgan Stanley consent order is this paragraph, which describes some robust safeguards Morgan Stanley had implemented before the incident occurred:

MSSB [Morgan Stanley] adopted certain policies and restrictions with respect to employees’ access to and handling of confidential customer data available through the Portals. MSSB had written policies, including its Code of Conduct, that prohibited employees from accessing confidential information other than what employees had been authorized to access in order to perform their responsibilities. In addition, MSSB designed and installed authorization modules that, if properly implemented, should have permitted each employee to run reports via the Portals only with respect to the data for customers whom that employee supported. These modules required FAs [Financial Advisors] and CSAs [Client Service Associates] to input numbers associated with the user’s branch and FA or FA group number. MSSB’s systems then should have permitted the user to access data only with respect to those customers whose data the user was properly entitled to view. Finally, MSSB installed and maintained technology controls that, among other things, restricted employees from copying data onto removable storage devices and from accessing certain categories of websites.
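
To make the quoted description concrete, here is a minimal sketch of the kind of entitlement check the consent order describes, in which a user may run reports only for customers tied to his or her own branch and FA group. The data model and function names are hypothetical, not Morgan Stanley’s actual modules.

```python
# Hypothetical entitlement records: which branch / FA group each portal user supports.
USER_ENTITLEMENTS = {
    "csa_2002": {"branch": "014", "fa_group": "7731"},
}

# Hypothetical customer master: which branch / FA group each account belongs to.
CUSTOMER_ACCOUNTS = {
    "acct_A": {"branch": "014", "fa_group": "7731"},
    "acct_B": {"branch": "099", "fa_group": "5120"},
}

def can_run_report(user_id: str, account_id: str) -> bool:
    """Allow a report only if the account sits in the user's own branch and FA group."""
    entitlement = USER_ENTITLEMENTS.get(user_id)
    account = CUSTOMER_ACCOUNTS.get(account_id)
    if entitlement is None or account is None:
        return False  # deny by default
    return (entitlement["branch"] == account["branch"]
            and entitlement["fa_group"] == account["fa_group"])

# A deny-by-default check like this is only as good as its testing: a simple
# assertion suite run on every release would have exercised the modules the
# SEC found were never audited or tested.
assert can_run_report("csa_2002", "acct_A") is True
assert can_run_report("csa_2002", "acct_B") is False
assert can_run_report("unknown_user", "acct_A") is False
```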

Lesson learned: it doesn’t matter how robustly designed your policies and procedures may be; if they don’t actually work as designed, you could be liable under the Safeguards Rule.

Commentary

The standard applied by the SEC in this enforcement action is higher than a “reasonableness” standard. It is easy, after the fact, to find a weakness that could have been exploited; indeed, it is unusual not to. If a criminal or departing employee is set on unlawfully accessing sensitive information, he can likely do so no matter what hurdles you place in his way. A company should not be held liable for failing to stop every data incident. Some may argue that a company like Morgan Stanley must be held to a higher standard because of the known threats to the financial services industry and the potentially significant consequences to consumers when a financial services company suffers a data breach. Nevertheless, the law as written requires policies and procedures that are only “reasonably designed” to protect sensitive information; it does not require that these policies and procedures be perfectly designed or effective 100% of the time, nor could it.

Hindsight is 20/20, and regulators would be hard pressed to find any organization that could show its policies and procedures are always followed. Could audits and testing have detected that Morgan Stanley’s authorization module was not preventing the type of unauthorized access and transfer of sensitive information at issue here? Possibly, depending on the depth of the audit and the foresight of the auditors. But Morgan Stanley appears to have received little credit for the fact that it actually had an authorization module, data security training for its employees, policies and restrictions on employee access to information, and controls that prevented the copying of data onto removable storage devices, or for voluntarily bringing the matter to law enforcement’s attention.

Is there a risk now that the SEC’s interpretation of “reasonableness” will be applied similarly by state Attorneys General, the Health and Human Services’ Office of Civil Rights, the Federal Trade Commission, or other regulators? All of this reminds us that the definition of reasonableness in the context of data security is subjective, and that subjectivity is a risk that companies must address.

Takeaways

There are some important practical takeaways for companies from this settlement: (1) perform a risk assessment to determine how your organization could suffer from a similar risk (an employee transferring corporate information to a personal device); (2) implement an authorization module and other policies and procedures to limit access (and identify unauthorized access) to sensitive information to those who have a legitimate business need; and (3) make sure you audit and test these controls to ensure that they actually work. Additionally, CISOs, compliance officers, and in-house counsel would be well served to ensure that the story of this enforcement action becomes part of their organization’s data security training during onboarding and annual training.

 


 

In 2014, the Food and Drug Administration (“FDA”) articulated its expectations for how device manufacturers should address cybersecurity before a device goes to market in Content of Premarket Submissions for Management of Cybersecurity in Medical Devices. Recently, the FDA released complementary draft guidance, Postmarket Management of Cybersecurity in Medical Devices. In the new guidance, the FDA explains what constitutes an effective cybersecurity risk management program, how manufacturers should evaluate postmarket cybersecurity vulnerabilities, and when manufacturers must report cybersecurity risks and remediations to the FDA. Comments on the draft guidance are due by April 21, 2016.

Here are the key takeaways from the guidance:

  • Cybersecurity programs should be documented, systematic, and comprehensive.
  • Consider medical device cybersecurity throughout the device’s entire lifecycle.
  • When evaluating a medical device’s cybersecurity, consider a broad range of quality information and focus on cybersecurity threats that may compromise a device’s essential functions.

Components of an Effective Cybersecurity Risk Management Program

The new guidance exhorts manufacturers to create a cybersecurity risk management program that will address a device’s cybersecurity from the drawing board to the dustbin.

Premarket, manufacturers should account for cybersecurity by designing cybersecurity-related inputs for their devices and incorporating a cybersecurity management approach that determines (A) assets, threats, and vulnerabilities; (B) how threats and vulnerabilities may affect device functionality and end users/patients; (C) the likelihood of threats and exploitation of vulnerabilities; (D) risk levels and suitable mitigation strategies; and (E) residual risk and risk acceptance criteria. (The FDA gave the same recommendations in its 2014 premarket guidance.)

Adequate postmarket cybersecurity management requires a program that is systematic, structured, documented, consistent with the Quality System Regulation (21 C.F.R. Part 820), and incorporates the National Institute of Standards and Technology’s (NIST) Framework for Improving Critical Infrastructure Cybersecurity (cybersecurity guidelines the NIST created pursuant to a presidential executive order and with input from public and private stakeholders). Key components include:

  • monitoring quality cybersecurity information sources—such as complaints, service records, and data provided through cybersecurity Information Sharing Analysis Organizations (“ISAOs”)—for identification and detection of cybersecurity vulnerabilities and risk;
  • establishing, communicating, and documenting processes for vulnerability intake and handling;
  • understanding, assessing, and detecting the presence and impact of vulnerabilities;
  • clearly defining essential clinical performance to develop mitigations that protect against, respond to, and recover from cybersecurity risk;
  • adopting a coordinated vulnerability disclosure policy and practice; and
  • deploying mitigations that address cybersecurity risk early and prior to exploitation.

Assessing Postmarket Cybersecurity Vulnerabilities

Acknowledging that not all vulnerabilities threaten patient safety and that manufacturers may not be able to identify every threat, the guidance advises manufacturers to identify a device’s “essential clinical performance” and focus on identifying and resolving risks to that performance. Manufacturers should define a device’s essential clinical performance by considering the conditions necessary for the device to operate safely and effectively. Manufacturers should assess a vulnerability’s risk by evaluating its exploitability and the health dangers resulting from its exploitation. The draft guidance recommends tools for each evaluation: the Common Vulnerability Scoring System v3.0 for exploitability, and the standards in ANSI/AAMI/ISO 14971:2007/(R)2010, Medical Devices – Application of Risk Management to Medical Devices, for health dangers caused by exploitation.

The guidance divides risks into two groups and recommends manufacturers do the same. Low or “controlled” risk exists when, after accounting for existing controls, there is an acceptable amount of risk that the device’s essential clinical performance could be compromised by a cybersecurity vulnerability. High or “uncontrolled” risk exists when insufficient controls and mitigations create an unacceptable amount of risk that the device’s essential clinical performance could be compromised by a cybersecurity vulnerability.
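
As a rough illustration of this two-tier assessment, the sketch below combines an exploitability score (the guidance points to CVSS v3.0) with a judgment about essential clinical performance to bucket a vulnerability as “controlled” or “uncontrolled.” The numeric threshold and the binary severity input are my own illustrative assumptions, not values set by the FDA.

```python
def classify_vulnerability(cvss_base_score: float,
                           could_compromise_essential_clinical_performance: bool) -> str:
    """
    Illustrative triage, not the FDA's rule: treat a vulnerability as
    'uncontrolled' when it is both readily exploitable and capable of
    compromising the device's essential clinical performance.
    """
    HIGH_EXPLOITABILITY = 7.0  # assumed cutoff on the 0-10 CVSS scale

    if could_compromise_essential_clinical_performance and cvss_base_score >= HIGH_EXPLOITABILITY:
        return "uncontrolled"   # residual risk unacceptable; remediate and likely report
    return "controlled"         # residual risk acceptable under existing mitigations

# Example: a hard-coded credential in a network-facing service that could
# alter therapy delivery would likely land in the uncontrolled bucket.
print(classify_vulnerability(9.8, True))   # -> uncontrolled
print(classify_vulnerability(3.1, False))  # -> controlled
```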

Reporting Mitigation

A risk’s classification affects whether a manufacturer may address the risk without reporting the risk and its remediation to the FDA under 21 C.F.R. Part 806, which obligates manufacturers to report to the FDA when they repair, modify, or adjust a device to reduce the device’s health risk. Manufacturers may ameliorate controlled risks without reporting the risk or the enhancement under Part 806. (But for Class III devices, manufacturers must disclose the risk and the remediation in their periodic reports to the FDA under 21 C.F.R. § 814.84.) Uncontrolled risks are a different matter: manufacturers must report them and their remediation unless (A) there are no known serious adverse events or deaths associated with the vulnerability; (B) within 30 days of learning of the vulnerability, the manufacturer identifies and implements device changes and/or compensating controls to bring the residual risk to an acceptable level and notifies users; and (C) the manufacturer participates in an ISAO.
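
The reporting logic in that paragraph boils down to a short decision rule. A hedged sketch follows: the three exemption conditions are taken from the draft guidance, while the names and data structure are assumed for illustration.

```python
from dataclasses import dataclass

@dataclass
class UncontrolledRisk:
    serious_adverse_events_or_deaths: bool               # condition (A) fails if True
    remediated_and_users_notified_within_30_days: bool   # condition (B)
    manufacturer_participates_in_isao: bool              # condition (C)

def part_806_report_required(risk: UncontrolledRisk) -> bool:
    """An uncontrolled risk must be reported unless all three exemption conditions are met."""
    exempt = (not risk.serious_adverse_events_or_deaths
              and risk.remediated_and_users_notified_within_30_days
              and risk.manufacturer_participates_in_isao)
    return not exempt

# Example: prompt remediation plus ISAO participation, with no associated
# serious adverse events, would fall within the exemption.
print(part_806_report_required(UncontrolledRisk(False, True, True)))   # -> False (exempt)
print(part_806_report_required(UncontrolledRisk(False, False, True)))  # -> True (report)
```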

What the Draft Guidance Means for Device Manufacturers

Device manufacturers should not delay assessing the strength of their cybersecurity management programs. The U.S. Department of Health and Human Services’ Office of Inspector General identified cybersecurity of medical devices as one of its priorities for 2016. And the draft guidance explains that the FDA may consider devices with uncontrolled risk to violate the Federal Food, Drug, and Cosmetic Act (“FDCA”) and be subject to enforcement action.

To see how their programs measure up to what the draft guidance describes, device manufacturers should start by asking themselves these key questions:

  • Is our cybersecurity management program addressing cybersecurity throughout each device’s lifecycle?
  • Is our program proactive?
  • Are there quality data security sources, such as ISAOs, we have not used but should?
  • Do we need to develop and deploy new training or messaging to colleagues about cybersecurity?
  • Are we using good cyber hygiene?

When deciding how to move forward with strengthening a cybersecurity program, manufacturers will want to keep in mind the need to safeguard devices against malicious and non-malicious attacks. Vulnerable devices may become infected by malware that cannot tell the difference between a personal computer and a pacemaker. That example is not farfetched: J.M. Porup recently reported for Slate that malware designed to steal credit card information infected and disabled vulnerable fetal heart monitors.

A client recently asked me to identify the next wave of data privacy litigation.  I said that with so much attention on lawsuits arising from data breaches, particularly in light of some recent successes for the plaintiffs in those lawsuits, the way in which companies collect information and disclose what they are collecting is flying under the radar.  This “failure to match” what is actually being collected with what companies are saying they’re collecting and doing with that information could lead to the next wave of data privacy class action litigation.

Here’s an example.  A privacy policy in a mobile app might state that the app collects the user’s name, mailing address, and purchasing behavior.  In fact, and often unbeknownst to the person who drafted the privacy policy, the app is also collecting information like the user’s geolocation and mobile device identification number, but that collection is not disclosed to the user in the privacy policy.  The collection of the additional information isn’t what gets the company into trouble.  It’s the failure to fully and accurately disclose the collection practice and how that information is used and disclosed to others that creates the legal risk.
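
Here is a hedged sketch of how a company might surface that kind of mismatch: compare the data categories the privacy policy discloses against the fields the app is actually observed transmitting. The category names and the inventory of “actually collected” fields are hypothetical and would in practice come from a traffic capture or a review of the app’s analytics configuration.

```python
# Categories the privacy policy says the app collects (hypothetical).
disclosed_in_policy = {"name", "mailing_address", "purchasing_behavior"}

# Fields actually observed leaving the app, e.g., from a traffic capture or
# a review of the analytics SDK configuration (hypothetical).
actually_collected = {"name", "mailing_address", "purchasing_behavior",
                      "geolocation", "mobile_device_id"}

undisclosed = actually_collected - disclosed_in_policy
if undisclosed:
    print("Collected but not disclosed in the privacy policy:", sorted(undisclosed))
```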

What is the source of this problem?  In an effort to minimize costs, small companies often slap together a privacy policy by cutting and pasting from a form provided by a website designer or found on the Internet.  Little care is given to the accuracy and depth of the policy because there is little awareness of the potential risk.  Larger companies face a different problem: the left hand sometimes doesn’t know what the right hand is doing.  Legal, privacy, and compliance departments often do not ask the right questions of IT, web/app developers, and marketing, and the latter may not do a sufficiently good job of volunteering more than what is asked of them.  This problem can be further exacerbated when app/website development and maintenance is outsourced.  This failure to communicate can, unintentionally, result in a “failure to match” a company’s words with its actions when it comes to information collection.

We have already seen state and federal regulators become active in this area.  The Federal Trade Commission has brought a significant number of enforcement actions aimed at making sure that companies live up to the promises they make to consumers about how they collect and use their information.  Similarly, the Office of the California Attorney General recently brought a lawsuit against Delta Air Lines alleging a violation of California’s Online Privacy Protection Act for failure to provide a reasonably accessible privacy policy in its mobile app.  Additionally, the California Attorney General’s Office has issued guidance on how mobile apps can better protect consumer privacy, which includes the conspicuous placement and full disclosure of information collection, sharing, and disclosure practices.  As the use of mobile apps and the collection of electronic information about consumers increase, we can expect to see a ramping up of these enforcement actions.

What sort of civil class action liability could companies face for “failure to match”?  Based on what we’ve seen in privacy and security litigation thus far, if the failure to match a policy with a practice is intentional or reckless, companies could face exposure under theories of fraud or under deceptive trade practice statutes that provide a private right of action (e.g., state “Little FTC Acts”).  Even if the failure to disclose is unintentional, the company could still face a lawsuit alleging negligent misrepresentation, breach of contract, or statutory violations, including violations of Gramm-Leach-Bliley, HIPAA’s privacy rule, or California’s Online Privacy Protection Act.  Without weighing in on the merits of these lawsuits, I would venture to guess that the class actions with the greatest chances of success will be those where the plaintiffs can show some financial harm (e.g., they paid for the apps in which the deficient privacy policy was contained) or there is a statute that provides set monetary relief as damages (e.g., $1,000 per violation/download).

What can companies do to minimize this risk?  Companies should begin by evaluating whether their privacy policies match their collection, use, and sharing practices.  This process starts with the formation of a task force, under the direction of counsel, made up of representatives from legal, compliance, IT, and marketing and dedicated to identifying: (1) all company statements about what information is collected (on company websites, in mobile apps, in written documents, etc.); (2) what information is actually being collected by the company’s website, mobile app, and other information collection processes; and (3) how the information is being used and shared.  The second part requires a deep dive, perhaps even an independent forensic analysis, to ensure that the company’s statements about what information is being collected are correct.  It’s important that the “tech guys” (the individuals responsible for developing the app/website) understand the significance of full disclosure.  Companies should also ask, “do we really need everything we’re collecting?”  If not, why take on the additional risk?  Also remember that this is not a static process.  Companies should regularly evaluate their privacy policies and monitor the information they collect.  A system must be in place to quickly identify when collection, use, and sharing practices change, so the policies can be updated promptly where necessary.
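
Because collection practices change over time, the same comparison can be rerun on a schedule against a snapshot of the last-reviewed policy. A minimal sketch of that ongoing check follows; all names, the snapshot mechanism, and the alerting step are assumptions for illustration.

```python
def detect_policy_drift(policy_snapshot: set, observed_fields: set) -> set:
    """Return any newly observed data fields not covered by the last-reviewed policy."""
    return observed_fields - policy_snapshot

# Hypothetical scheduled check (e.g., run after each app release):
last_reviewed_policy = {"name", "mailing_address", "purchasing_behavior", "geolocation"}
fields_seen_this_release = {"name", "mailing_address", "purchasing_behavior",
                            "geolocation", "contacts_list"}

drift = detect_policy_drift(last_reviewed_policy, fields_seen_this_release)
if drift:
    # In practice this would trigger legal/compliance review before release.
    print("Update the privacy policy or stop collecting:", sorted(drift))
```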

 

DISCLAIMER:  The opinions expressed here represent those of Al Saikali and not those of Shook, Hardy & Bacon, LLP or its clients.  Similarly, the opinions expressed by those providing comments are theirs alone, and do not reflect the opinions of Al Saikali, Shook, Hardy & Bacon, or its clients.  All of the data and information provided on this site is for informational purposes only.  It is not legal advice nor should it be relied on as legal advice.