Published by Al Saikali

The Illinois Supreme Court’s decision last week in Rosenbach v. Six Flags may have closed the first of what will be several chapters in class action litigation arising from the Illinois Biometric Information Privacy Act (BIPA).  The court addressed the very narrow issue of what it means for a person to be “aggrieved” under BIPA.  Ultimately, the court held that a violation of the notice, consent, disclosure, or other requirements of BIPA alone, without proof of actual harm, is sufficient for a person to be considered “aggrieved” by a violation of the law.

There are several important issues, however, that were not before the court and remain to be litigated.  One of those issues is implied notice and consent. Defendants will argue that the plaintiffs who checked in and out of work using fingerscan timekeeping systems (the fact pattern in nearly all of the almost 200 class action lawsuits filed in state court) knew that the fingerscans were being collected and used by their employers for timekeeping purposes, and voluntarily provided that information.

Federal courts have dismissed such lawsuits, reasoning that plaintiffs effectively received notice and gave consent.  In Howe v. Speedway LLC, for example, the court in a fingerscan timekeeping case held that the plaintiff’s “fingerprints were collected in circumstances under which any reasonable person should have known that his biometric data was being collected.”  Similarly, in Santana v. Take-Two Interactive Software, Inc., the U.S. Court of Appeals for the Second Circuit held that plaintiffs essentially received the notice and consent contemplated by BIPA because “the plaintiffs, at the very least, understood that Take-Two had to collect data based upon their faces in order to create the personalized basketball avatars, and that a derivative of the data would be stored in the resulting digital faces of those avatars so long as those avatars existed.”  And in dismissing for lack of standing, the McGinnis court reasoned that the plaintiff “knew his fingerprints were being collected because he scanned them in every time he clocked in or out of work.”

Another significant defense is constitutional standing.  Federal courts have recently dismissed BIPA lawsuits on the ground that they do not meet Article III standing requirements.  Defendants in state court will argue that Illinois constitutional standing (which Illinois state courts have held should be similar to federal law) requires a level of harm that, at a minimum, should be what Article III of the U.S. Constitution requires. To hold otherwise would lead to a different result for a party based entirely on whether the lawsuit is filed in federal or state court.

Defendants will argue that most of the claims are barred by the one-year statute of limitations that applies to claims involving the right of privacy.  Assuming that the one-year statute of limitations is applied, the classes of affected individuals will shrink considerably.

Defendants will also contend that the information collected and stored by the timekeeping devices is not biometric information under BIPA.  There is no library of fingerprints stored by these timekeeping devices.  Instead, the devices measure minutiae points and convert those measurements into mathematical representations using a proprietary formula that cannot be used to recreate a fingerprint.  More security is layered on top of that: the mathematical representation is encrypted.  For these reasons, no plaintiff in any of these biometric cases has been able to point to a single data breach involving biometric information.  The technology is essentially tokenization (similar to Apple Pay): if a hacker were to access the actual device, he would find nothing there to steal, because the valuable thing (the credit card number or, in this case, the fingerprint) is not stored on the device but is instead replaced by a numerical representation.
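For readers unfamiliar with the concept, the storage model described above can be sketched in a few lines of code. This is a simplified, hypothetical illustration only: real fingerscan devices use proprietary algorithms and fuzzy matching tolerant of scan-to-scan variation, and every name below is invented.

```python
import hashlib
import hmac
import os

def make_template(minutiae_points: list, secret_key: bytes) -> str:
    """Convert minutiae measurements into a keyed, one-way token.

    Like a payment token, the stored value cannot be reversed into the
    original measurements, let alone into a fingerprint image.
    """
    serialized = ",".join(f"{x}:{y}" for x, y in minutiae_points)
    return hmac.new(secret_key, serialized.encode(), hashlib.sha256).hexdigest()

def matches(candidate: list, stored_token: str, secret_key: bytes) -> bool:
    """Authenticate by recomputing the token and comparing; no fingerprint
    image or raw measurement database ever needs to exist."""
    return hmac.compare_digest(make_template(candidate, secret_key), stored_token)

key = os.urandom(32)                     # secret held by the device/vendor
enrolled = make_template([(10, 52), (47, 9), (81, 33)], key)
print(matches([(10, 52), (47, 9), (81, 33)], enrolled, key))  # True: same scan
print(matches([(11, 52), (47, 9), (81, 33)], enrolled, key))  # False: different points
```

Even if an attacker copied `enrolled` off the device, he would hold only a hex string that is useless without the key and cannot be turned back into a fingerprint, which is the point the defendants make.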

Plaintiffs will also have to prove that the defendants didn’t just violate BIPA, but did so negligently or intentionally.  This is not an easy standard to meet, especially if the trier of fact determines that these are “gotcha” lawsuits, meant to catch companies off-guard about a little-known and rarely used state law.

Assuming the plaintiffs jump all these hurdles, they must still demonstrate that these cases are appropriate for class certification. The cases involve different facts regarding whether individual plaintiffs received notice, whether they gave consent, whether they used the fingerscan method of authentication or another method such as a PIN or an RFID card, whether they enrolled in Illinois, and whether their claim involves a violation of BIPA beyond collection or storage. Given these differences between plaintiffs, it will be difficult for them to meet the commonality and fairness requirements for class certification.

To be sure, some Defendants will face their own challenges.  A line of cases has held that where companies used their time-clock provider’s cloud service to store or back up timekeeping information from the clock, they may be in violation of BIPA’s prohibition against disclosure of biometric identifiers to a third party.  But at least one court has disagreed with that logic, stating that not all disclosures to a third party automatically present a concrete injury, and whether the third party has strong protocols and practices in place to protect data is relevant to the inquiry.

Defendants need only win one of these (or several other) defenses.  Plaintiffs must win them all.  In the meantime, plaintiffs must hope that the Illinois legislature does not notice that hundreds of BIPA lawsuits are flooding the Illinois state court system, creating potentially crippling liability for companies that tried to adopt more secure methods of authentication.  If it does, the legislature could amend the law to make it more consistent with its original intent.

DISCLAIMER:  The opinions expressed here represent those of Al Saikali and not those of Shook, Hardy & Bacon, LLP or its clients.  Similarly, the opinions expressed by those providing comments are theirs alone, and do not reflect the opinions of Al Saikali, Shook, Hardy & Bacon, or its clients.  All of the data and information provided on this site is for informational purposes only.  It is not legal advice nor should it be relied on as legal advice.

On Friday afternoon an Illinois intermediate appellate court decided that the bar for a plaintiff bringing a class action lawsuit under the Illinois Biometric Information Privacy Act (BIPA) is low, creating a conflict with its sister intermediate appellate court. The Illinois Supreme Court is expected to resolve the conflict early next year. How the court resolves the conflict will significantly impact companies doing business in Illinois.


BIPA requires companies to provide notice and obtain consent from Illinois residents before collecting their biometric information. It also limits what companies can do with biometric information and requires the adoption of certain security safeguards. Any person “aggrieved by a violation” of the law may sue for actual damages or statutory damages ranging from $1,000 to $5,000 per violation. You can learn more about BIPA from my earlier blog post.

Beginning in the fall of 2017, Illinois businesses of all sizes were hit with “gotcha” class action lawsuits brought by former employees looking for reasons to sue their former employers. Those companies used timekeeping systems that required employees to scan their fingers to punch in and out of work. Ironically, the timekeeping systems improved security by reducing fraud and strengthening authentication. Nevertheless, many companies were not aware of BIPA or the possibility that it might apply to their timekeeping systems. The plaintiffs’ bar was quick to pounce. Over 150 class actions were filed by former employees claiming that they did not receive BIPA’s requisite notice and consent (despite the fact that the employees voluntarily placed their fingers on these devices every day). The lawsuits in aggregate seek tens of millions of dollars from companies doing business in Illinois.

Requisite Harm for a Private Cause of Action

A key question in the BIPA litigation is what it means to be “aggrieved by a violation.” Is it enough that an employee doesn’t receive notice and consent, or must they show that they suffered some actual harm (e.g., financial loss or identity theft) as a result of the violation, as would be necessary in a typical data breach lawsuit?

In December of 2017, the Illinois Appellate Court (Second District) in Rosenbach v. Six Flags Entertainment Corp. held that an aggrieved person must allege some actual injury, adverse effect, or harm. The outcome makes sense because BIPA does not say that the data subject can sue “for a violation.” It requires two things: a violation of BIPA and that someone be aggrieved by it.

Nevertheless, last week the Illinois Appellate Court (First District) weighed in on the issue and reached the opposite conclusion, holding that a mere violation of BIPA, without additional harm, is all that is necessary to meet the “aggrieved by” standard for a private cause of action. The case, Sekura v. Krishna Schaumburg Tan, Inc., was brought against a tanning salon that used finger scans to admit members into its salons. The court rejected its sister court’s ruling in Rosenbach and held that aggrieved means only the deprivation of a legal right. The court further held that disclosure of biometric information to a third party (e.g., storing the information in the cloud) was sufficient to meet the “aggrieved by” standard, as was an allegation of mental injury. In short, under the First District’s ruling, the bar for meeting the “aggrieved by” standard is incredibly low.

What’s Next and When?

Presumably, the Sekura decision will be appealed quickly and joined with the Rosenbach case already pending at the Illinois Supreme Court. It is unclear what impact Sekura will have on the timing of a ruling from the Supreme Court on the issue, as briefing in the Rosenbach case was finished in September and the parties were simply awaiting the scheduling of an oral argument. It’s possible the court will wait for briefing to be perfected in the Sekura case before scheduling oral argument, or an expedited briefing process may take place because the issues in the two cases are so similar.

Substantively, one of the most significant consequences of the Sekura decision is that it could give the Illinois Supreme Court something to cite if it were inclined to reverse Rosenbach. I would argue that the reasoning in Rosenbach actually appears stronger in contrast to the Sekura decision. For example, the Sekura analogy equating the disclosure of encrypted biometric information to a third party with the disclosure of someone’s AIDS status under the AIDS Confidentiality Act is misplaced. Similarly, the Sekura reasoning renders the words “aggrieved by” meaningless, because under the decision a mere violation of the statute is all that is necessary to bring a private cause of action.

A Final Observation

Most concerning to me about the BIPA litigation generally is that it appears to be based on an unfounded fear and misunderstanding of the underlying technology companies use to collect, store, and share the subject information. Businesses are not collecting, storing, or sharing images of fingerprints, which might be accessed without permission and/or potentially misused. The finger scanning machines in question measure minutiae points and turn them into mathematical representations, which cannot be reverse engineered into a fingerprint. As a belt on these suspenders, the information is encrypted.

Two facts in the biometric privacy context are particularly telling and dispositive. First, no plaintiff or amici in any briefing in the more than 150 BIPA class actions has identified an example where biometric information was compromised. Why? Because the manner in which the finger scan information is collected is much like tokenization (a technology companies use to replace credit card numbers with valueless characters) – if a bad guy breaks in, all he can steal is a random set of characters that have no value.

Another important fact: all state data breach notification laws exempt encrypted information from the definition of personal information and the obligation to notify if it is the subject of a data breach. Why? Because there is no risk that a hacker can access the information and misuse it. Here, the subject information is encrypted so there is no risk of harm to the individuals bringing these lawsuits. The lawsuits are instead based on an unfounded fear of what could happen.

I wonder what impact a fuller explanation of the technology would have on the outcome of these cases. In the meantime, companies continue to spend significant sums of money defending these lawsuits, and they face the risk of millions of dollars in potential liability.

In three months, the EU’s General Data Protection Regulation (GDPR), one of the strictest privacy laws in the world, will go into effect.  It will apply to companies that collect or process personal data of EU residents, regardless of whether the company is physically located in the EU.  Companies that violate the law can be fined up to the greater of 4% of their annual worldwide revenue for the preceding financial year or €20,000,000.  Is your organization ready?
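To make the penalty cap concrete, the "whichever is greater" rule reduces to a simple calculation. The revenue figures below are purely illustrative, and the function name is invented for this sketch.

```python
def gdpr_max_fine(annual_worldwide_revenue_eur: float) -> float:
    """Upper tier of GDPR administrative fines: the greater of 4% of
    annual worldwide revenue or a flat EUR 20,000,000."""
    return max(0.04 * annual_worldwide_revenue_eur, 20_000_000)

# A company with EUR 100M in revenue: 4% is only EUR 4M, so the flat floor applies.
print(gdpr_max_fine(100_000_000))    # 20000000.0
# A company with EUR 1B in revenue: 4% (EUR 40M) exceeds the floor.
print(gdpr_max_fine(1_000_000_000))  # 40000000.0
```

The floor means even small companies face material exposure, while for large multinationals the 4% prong scales without limit.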

Shook’s Privacy and Data Security Team regularly counsels multinational companies to comply with international privacy laws like the GDPR.  In an effort to help in-house lawyers understand whether the GDPR applies to their organizations and how to minimize its risks, we have prepared a webinar that provides tips on developing a GDPR compliance program.  The webinar is on-demand and complimentary.  Check it out here, and feel free to leave comments.


Does your company collect biometric information?  Are you not entirely sure what “biometric information” means?  Would you like to understand the differences among the various state biometric privacy laws?  Do you want to know why more than 50 companies were hit with class action lawsuits within a period of three months as a result of their biometric privacy practices?

If the answer to any of these questions is “yes” then check out this complimentary, on-demand webinar on Biometric Privacy prepared by Shook’s Privacy and Data Security team.  Then feel free to get in touch with any of the members of our Biometric Privacy Task Force (contact information at the end of the webinar).  Feel free to leave comments below.

While the privacy world is focused on the Equifax data breach, another development is taking place that could have a more lasting effect on privacy law.  In the last month, plaintiffs’ lawyers in Illinois have filed over 20 lawsuits against companies that authenticate their employees or customers with their fingerprints.  The lawsuits are based on the Illinois Biometric Information Privacy Act (BIPA), which requires companies that possess or collect biometric information to provide notice to and obtain a written release from individuals whose biometric information the companies collect.

Why Do These Lawsuits Matter?

Companies are increasingly collecting biometric information from their customers and employees (“data subjects”) because this information helps authenticate users with greater accuracy.  It allows the company to provide customers a more seamless, secure, and tailored experience.  It also allows employees to securely and conveniently punch in and out of work by placing their finger on an electronic reader, which has the additional benefit of minimizing “buddy punching” (where employees ask their colleagues to check them in/out of work improperly).

What Is Biometric Information?

BIPA applies to “biometric identifiers” and “biometric information.”  A biometric identifier is a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry.  Biometric identifiers do not include, among other things, writing samples, written signatures, photographs, human biological samples used for valid scientific testing or screening, demographic data, tattoo descriptions, or physical descriptions such as height, weight, hair color, or eye color.  Biometric information means any information, based on an individual’s biometric identifier, that is used to identify an individual.  Because BIPA does not treat biometric identifiers differently from biometric information, this blog post refers to both categories collectively as “biometric information.”

To Whom Does BIPA Apply?

BIPA applies to companies in possession of biometric information or companies that collect, capture, purchase, receive through trade or otherwise obtain biometric information about Illinois residents.  BIPA does NOT apply to entities governed by HIPAA or GLBA.  Nor does it apply to state or local government agencies or any court of Illinois.

What Does BIPA Require?

Companies that possess biometric information must develop a written policy, made available to the public, that establishes a retention schedule and guidelines for permanently destroying biometric information when the initial purpose for collecting or obtaining the information has been satisfied, or within three years of the individual’s last interaction with the private entity, whichever occurs first.  The company must comply with this retention schedule and destruction guidelines, unless a valid warrant or subpoena issued by a court of competent jurisdiction provides otherwise.  The company must also adopt reasonable security safeguards to protect the storage and transmission of biometric information.  These safeguards must be at least the same as or more protective than the manner in which the private entity stores, transmits, and protects other confidential and sensitive information.
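The retention rule above reduces to a "whichever occurs first" date calculation. Here is a rough sketch; the function name is hypothetical, and three years is approximated as 3 × 365 days (the statute speaks in years, not days):

```python
from datetime import date, timedelta
from typing import Optional

def destruction_deadline(purpose_satisfied: Optional[date],
                         last_interaction: date) -> date:
    """Sketch of BIPA's retention rule: destroy biometric data when the
    initial collection purpose is satisfied, or within three years of the
    individual's last interaction with the entity, whichever occurs first."""
    three_years_out = last_interaction + timedelta(days=3 * 365)
    if purpose_satisfied is None:  # purpose not yet satisfied
        return three_years_out
    return min(purpose_satisfied, three_years_out)

# An employee last scanned in on Jan 15, 2019, and employment (the purpose)
# ended June 1, 2020 -- the earlier of the two dates controls:
print(destruction_deadline(date(2020, 6, 1), date(2019, 1, 15)))  # 2020-06-01
```

A written retention policy essentially commits the company to this logic in advance, which is why the statute requires the policy to be public.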

Companies that collect, capture, purchase, receive through trade, or otherwise obtain a person’s biometric information must:  (1) inform the subject in writing that biometric information is being collected or stored, and the specific purpose and length of term for which the information is being collected, stored, and used; and (2) obtain a written release executed by the subject of the biometric information.

What Conduct Does BIPA Prohibit?

Companies that possess biometric information are not allowed to sell, lease, trade, or otherwise profit from a person’s biometric information.  Additionally, disclosure, redisclosure, and other dissemination of the information is prohibited unless:  (1) the data subject consents to the disclosure; (2) the disclosure completes a financial transaction requested or authorized by the data subject; (3) the disclosure is required by law; or (4) the disclosure is required pursuant to a valid warrant or subpoena issued by a court of competent jurisdiction.

Can My Company Be Sued For Violating BIPA?

Any person “aggrieved by a violation” of BIPA can sue the violating company.  He or she may be entitled to $1,000 in liquidated damages for a negligent statutory violation or $5,000 in liquidated damages for an intentional statutory violation.  (If actual damages are greater, the plaintiff may seek those instead, but for the reasons discussed below, this is not usually the case).  Additionally, the prevailing party (plaintiff or defendant) may recover attorney’s fees and costs.
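The liquidated-damages figures are what drive aggregate exposure in the class context. A hypothetical back-of-the-envelope calculation (note that whether each scan counts as a separate "violation" is itself a contested question in these cases):

```python
NEGLIGENT_PER_VIOLATION = 1_000    # liquidated damages, negligent violation
INTENTIONAL_PER_VIOLATION = 5_000  # liquidated damages, intentional violation

def aggregate_exposure(class_size: int, violations_per_person: int,
                       per_violation: int) -> int:
    """Hypothetical aggregate liquidated-damages exposure for a class."""
    return class_size * violations_per_person * per_violation

# A 500-employee class, counting one violation per person:
print(aggregate_exposure(500, 1, NEGLIGENT_PER_VIOLATION))    # 500000
print(aggregate_exposure(500, 1, INTENTIONAL_PER_VIOLATION))  # 2500000
```

If a court were to treat every scan as a separate violation, the multiplier on the middle term makes the figures balloon quickly, which explains the settlement pressure these suits create.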

What Is This Latest Wave Of BIPA Lawsuits All About?

Between BIPA’s enactment in 2008 and a couple of months ago, there were relatively few lawsuits based on violations of BIPA.  Within the last couple of months, however, the Illinois plaintiffs’ bar has filed over 20 BIPA lawsuits.  Almost all of those lawsuits are based on the same underlying factual scenario:  an employee places his or her finger on a time clock to authenticate himself or herself when checking in or out of work.  In addition to suing the employers, plaintiffs are also suing the companies that sell and distribute the time clocks that use fingerprint readers.

Given the timing of the lawsuits and their almost identical language, this is surely a coordinated effort by the plaintiffs’ bar to obtain quick settlements from risk-averse companies that would prefer to avoid, or cannot afford, the cost of litigation.  It is also a shotgun approach: flood the courts with these lawsuits in the hope that one or two of them will result in favorable precedent that can be used to file more lawsuits.  I don’t see this trend ending anytime soon.

Do The Lawsuits Have Merit?

No.  You can expect to see strong arguments by the defendants on the underlying technology and the meaning of biometric information.  But these lawsuits are meritless primarily because the plaintiffs didn’t suffer any real harm.  The lawsuits appear to be filed by former employees with axes to grind against their former employers.  Setting that aside, however, the arguments in the complaints are not persuasive.

The complaints allege that BIPA was designed to ensure that the plaintiffs receive notice that their biometric information is being collected, and that the plaintiffs should have been asked to sign written releases.  This lack of notice argument is silly when you remember that these individuals were essentially receiving notice every day by placing their fingers on a time clock to log in and out of work.  This latest wave of cases does not present the situation, as other BIPA cases have, where biometric information is being collected without the data subject’s knowledge.

The complaints also allege that the plaintiffs were not provided a policy explaining the use of their information. If we assume first that the plaintiffs would have read these policies (because we all read policies provided to us during the onboarding process), then what would those policies have told the employees?  Anyone familiar with the technology will tell you what they would say: the company does not actually collect fingerprint images at all; there is no database of employee fingerprints somewhere; to the extent the company has access to numerical representations of the fingerprints, those representations are useless to anyone else because they cannot be reverse-engineered; and the information is not shared with third parties (primarily because it would serve no use to them).

The complaints are also significant in what they do NOT allege.  They do not allege, for example, that unauthorized third parties (like hackers) accessed the information.  Nor do the complaints allege that the employers shared the information with any authorized third parties.  So again, what is the harm suffered?

For these reasons, most courts that have addressed the lack of harm argument in the BIPA context have dismissed the lawsuits.  See, e.g., McCollough v. Smarte Carte, Inc. (N.D. Ill. Aug. 1, 2016); Vigil v. Take-Two Interactive Software, Inc. (S.D.N.Y. Jan. 27, 2017).  Those courts concluded that even if there was a technical violation of BIPA, the plaintiffs were not “aggrieved by those violations.”

What Can Companies Do To Minimize These Risks?

First, determine whether BIPA even applies to you.  This may require consulting with counsel knowledgeable in the requirements of BIPA and the underlying technology.  Even if you are not currently collecting biometric information from Illinois residents, could you in the future?  Additionally, while Illinois is currently the only state that creates a private right of action for violation of its biometric information privacy statute, other states have similar laws enforced by their respective Attorneys General.

Second, if BIPA applies, use experienced counsel to ensure that you comply with BIPA – draft a BIPA retention policy, prepare and obtain written releases, and evaluate the security and use of the information.  This process may require coordination with your information technology staff and the vendor you use for your authentication devices.

Finally, if your company has already been sued, there are strategies that counsel should immediately bring to your attention that will lower the cost of litigation, increase the likelihood of success, and help you identify traps for the unwary.



The consequences of a data breach reached new heights last week when Yahoo announced the resignation of its General Counsel in response to a series of security incidents the company suffered.  A fuller explanation of the security incidents and Yahoo’s response can be found in item seven of the company’s 10-K, but here are the highlights:

  • Yahoo suffered three security incidents from 2013 to 2016, one of which involved the theft of approximately 500 million user accounts from Yahoo’s network by a state-sponsored actor in late 2014. The stolen information included names, email addresses, telephone numbers, dates of birth, hashed passwords, and encrypted or unencrypted security questions and answers.  (Note that under most, but not all, data breach notification laws, unauthorized access to these data elements would not create a legal obligation to notify affected individuals).
  • An independent committee of Yahoo’s board of directors undertook an investigation with the assistance of a forensic firm and outside counsel.
  • The committee concluded that Yahoo’s information security team knew of the 2014 security incident at that time, but the incident was not disclosed until September 2016.
  • “[S]enior executives did not properly comprehend or investigate, and therefore failed to act sufficiently upon, the full extent of knowledge known internally by the Company’s information security team.”
  • Yahoo knew, as early as December 2014, that an attacker had acquired personal data of Yahoo users, but it is not clear whether and to what extent this information was conveyed to those outside the information security team.
  • The legal team, however, “had sufficient information to warrant substantial further inquiry in 2014, and they did not sufficiently pursue it. As a result, the 2014 Security Incident was not properly investigated and analyzed at the time, and the Company was not adequately advised with respect to the legal and business risks associated with the 2014 Security Incident.”  (Emphasis added).  The 10-K does not identify the “sufficient information” or explain what “further inquiry” would have been required (or why).
  • The committee found “failures in communication, management, inquiry and internal reporting,” which all contributed to lack of understanding and handling of the 2014 Security Incident.
  • The committee also found that Yahoo’s board of directors was “not adequately informed of the full severity, risks, and potential impacts of the 2014 Security Incident and related matters.”

It’s not clear from the 10-K exactly why Yahoo’s General Counsel was asked to step down.  It’s highly unusual for a General Counsel to be held directly (and publicly) responsible for a data breach.  Nevertheless, the outcome raises a couple of questions:  (1) will this represent a new trend for in-house counsel generally, and (2) how will this outcome affect how companies approach investigations of data incidents in the future?

Regarding the latter question, a colleague at another firm suggested that this outcome may make corporate legal departments less inclined to involve themselves in breach response or direct investigations of suspected data breaches.  I disagree.  Looking the other way or sticking one’s head in the sand is never the right response to a data incident.  In fact, the legal department would create bigger problems if it did little or nothing.

So what can a corporate legal department do to minimize its own risks?  Here are a few suggestions:

  • Retain a forensic firm through the legal department or outside counsel in advance of an incident to ensure that resources are available to begin an investigation immediately, and to maximize the applicability of the attorney-client privilege and work product doctrine.
  • Engage outside counsel skilled in privacy and data security law and experienced in helping similarly situated companies prepare for and respond to data incidents. There is a growing glut of lawyers who hold themselves out as privacy experts, so I recommend asking for and contacting references.  Most clients are happy to speak about their level of satisfaction with their outside counsel while avoiding details of the incident that led to the engagement.
  • Prepare written protocols, with the cooperation of your information security department, to guide your investigation when an incident occurs. These protocols are different from incident response plans; they focus specifically on the process of initiating, directing, and concluding an investigation at the direction of legal counsel for the purpose of advising the company on its compliance with privacy and data security laws.  They include rules on communication, documentation, and scope.
  • Engage in real dialogue with the information security officer(s) before an incident occurs, in an effort to identify appropriate rules of engagement for when the corporate legal department should be involved in incident response. Some companies involve legal in every data incident (that’s too much); some don’t involve legal at all and maintain that data incidents are entirely within the purview of information security (that’s too little . . . and creates significant legal risks).  The challenge lies in defining the middle ground.  It is easy to say “legal gets involved when Information Security determines that an incident is serious,” but it is often difficult to know at the outset of an incident whether it will become serious, and by the time you’ve figured that out it may be too late.  There is, however, a way to strike that balance.
  • Test, test, test – regularly simulate data incidents to test the protocols, rules of engagement, and incident response plans. I’ve been involved in some phenomenal tabletop exercises, which clients have used to benchmark their response readiness against other similarly situated companies.  I’ve been consistently impressed with one particular forensic firm in this space.  Legal and information security departments can and should work together to undertake these exercises.

Information security officers will not be the only high-level executives to have their feet held to the fire when a data breach occurs.  I predict that C-level executives and boards of directors will increasingly hold corporate legal departments responsible (at least in part) for how the company investigates and responds to a suspected data breach.  So it will be important for legal departments to proactively educate themselves on the legal issues that arise when an incident occurs, identify their roles in the incident response procedure, and prepare to act quickly and thoroughly when the time comes.



Ever wonder how your credit card gets compromised and how the bad guys get your information? This report on tonight’s episode of 60 Minutes provides an overview of what happens from the moment you swipe your card at a point-of-sale terminal, to the moment the card number is compromised and sold on a black-market website, to the moment the bad guy who buys your credit card number online uses it to create a counterfeit card. The report also investigates why most companies learn of these breaches from third parties rather than their own information security teams. I highly recommend it to anyone interested in learning about this risk as this year’s holiday season begins.


My last post described what the recently passed Florida Information Protection Act (FIPA) will do.  This post analyzes how FIPA differs from Florida’s existing breach notification law and explains why those differences will hurt or help companies that maintain information about Florida residents.  Florida’s Governor must still sign the FIPA into law, but his signature is expected given the bill’s unanimous support in the state legislature.  Once signed, the law will go into effect on July 1, 2014.  So what do businesses need to know about FIPA?

Attorney General Notification

The first significant difference between FIPA and Florida’s existing breach notification law is that, with some limited exceptions, breached entities will be required to notify Florida’s Attorney General within 30 days of any breach that affects more than 500 Florida residents.  Until now, Florida has been part of the majority of states that do not require notice of a breach to the state Attorney General.

The law also requires breached entities to notify the Attorney General’s office even when the entities decide notification to affected consumers is not necessary because the breach will not likely result in harm to affected individuals.  It remains to be seen whether this change in the law will result in a flood of “non-notifications” to the Attorney General’s office.

The FIPA provides teeth for the Attorney General’s Office to enforce it.  A violation of FIPA may be automatically considered a violation of Florida’s Deceptive and Unfair Trade Practices Act.  Though the FIPA does not create a private cause of action, we could see the Attorney General actively enforce the law against breached entities that fail to meet the law’s requirements.

Broader Definition of PII

Another significant change in Florida law as a result of the FIPA is the expansion of the definition of personally identifiable information (PII).  PII will now include a username or email address in combination with a password or security question and answer that would permit access to an online account.  This change reflects the realization that consumers are increasingly storing information online and, unfortunately, often reusing the same usernames and passwords.  The net result, however, will be an increase in the number of incidents that qualify as reportable breaches under the law.

Shortening the Breach Notification Period

FIPA also shortens the time a breached entity has to notify affected individuals of a breach.  Currently, breached entities must notify affected individuals “without unreasonable delay” but they have up to 45 days.  The new law requires breached entities to notify affected individuals “as expeditiously as practicable, but no later than 30 days after the determination of the breach or reason to believe a breach occurred,” unless a waiver or authorized delay is issued.

This change raises a couple of concerns for breached entities.  First, while in most instances 30 days may be enough time to notify affected individuals of a breach, in some cases it will not be.  There are many steps that must take place as part of the notification process, including determining the source and scope of the intrusion, identifying what information is affected, identifying who is affected and where they live, and ensuring that the threat has been eliminated.  Adopting a bright-line deadline may end up punishing breached entities that are working as quickly as possible to respond to a breach.

Second, it is not clear under the FIPA what starts the clock running on the 30 days.  When is “determination of the breach” triggered?  Is it when the breached entity reasonably believes an intrusion has occurred?  Is it when the entity knows that PII has been affected?  Is it when the entity knows whose PII has been affected?  I would argue that the clock shouldn’t start running until the entity knows that the PII of a Florida resident has been affected, but we are left to guess how regulators will interpret this requirement.

Notification by Email

A welcome change that the FIPA will usher in is breach notification by email.  This will help significantly reduce the cost of breach notification in matters that involve a large number of Florida consumers.  It is also recognition that the best contact information a company may have for its customers is their email address.

Be Prepared to Turn Over Your Incident and Forensic Reports

Perhaps the most significant change is that the FIPA purports to require breached entities to provide incident reports, data forensic reports, and the company’s policies in place regarding breaches, if the Florida Attorney General’s Office requests them.  These documents sometimes contain unintentionally damaging statements or proprietary information about a company’s security infrastructure that the company would not want made public.  And, once disclosed to the Attorney General’s office, the documents may become subject to a public records request, though this bill (which also awaits the Governor’s signature) tries to limit that risk.  As a result of this change, we could see breached entities either not requesting reports at all (out of concern that they will have to disclose them to third parties), or requesting two versions – a sanitized version that contains little information and can be produced to the Attorney General, and a more complete version for internal use.  Neither result could have been what the legislature intended when it passed this law.  It will also be interesting to see how the FIPA will affect the work product and self-critical analysis privileges that apply to data forensic reports prepared at the direction of counsel.

Proactive Security Requirements

The FIPA adds a new type of protection of PII:  it requires that an entity maintaining PII adopt “reasonable measures” to protect and secure the PII.  With this change, Florida joins the minority of jurisdictions that statutorily require entities maintaining PII to adopt safeguards regardless of whether the entity ever suffers a breach.  To be sure, adopting safeguards to protect PII is a good idea regardless of whether it is statutorily required, and the failure to adopt those safeguards could expose a company to an enforcement action by the FTC or state attorney general under the FTC Act or “little FTC Acts,” respectively, even in states where those safeguards are not required.  But the FIPA provides no guidance as to what is meant by “reasonable measures.”  Does this mean encryption?  Password protection?  Are written policies and training required?  Does it differ depending on the size of the breached entity?  Again, we are left to guess.

Some Final Observations

A few closing observations about the FIPA:

  • The definition of a breach is still limited to electronic personal information, so a breach involving purely paper records may not trigger the statute.
  • A violation of the statute is automatically considered a violation of Florida’s Deceptive and Unfair Trade Practices Act, but that violation appears to be enforceable only by the Florida Attorney General and not a private cause of action.
  • A breach now means unauthorized “access” of PII, where before it was defined as unauthorized “acquisition” of PII.  This change broadens the number of scenarios that could be considered a breach.

In short, the FIPA is generally a consumer-friendly law that will increase the number of breaches that require notification, shorten the time by which notification must take place, require that the Attorney General be included in the breach notification process, and demand that companies adopt security safeguards to protect PII regardless of whether they ever suffer a breach.


The Florida Legislature recently passed the Florida Information Protection Act of 2014 (FIPA).  This post describes the FIPA and analyzes the advantages and disadvantages to businesses governed by the new law.  The FIPA must still be signed by the Governor, but the law received unanimous support in the legislature, so his signature is expected.  Once signed, the law would go into effect in less than two months.

What is the FIPA?  The FIPA will replace Florida’s existing data breach notification law.  It has a reactive component (what companies must do after a breach) and a proactive component (what companies must do to protect personally identifiable information they control regardless of whether they ever suffer a breach).  The FIPA governs “covered entities.”  A covered entity is a commercial entity that acquires, maintains, stores or uses personally identifiable information.  A “breach” triggering the FIPA is the unauthorized access of data in electronic form containing personally identifiable information (PII).  The FIPA applies only to PII in electronic form, though an argument can be made that the secure disposal requirement under the FIPA applies to PII in any form given its use of the term “shredding.”

What is PII?  PII is defined as a first name or first initial and last name in combination with any of the following:

  • social security number;
  • driver’s license or ID card number, passport number, military identification number, or other similar number issued on a government document used to verify identity;
  • a financial account number or credit or debit card number, in combination with any required security code, access code, or password that is necessary to permit access to an individual’s financial account;
  • information regarding an individual’s medical history, mental or physical condition, or medical treatment or diagnosis by a health care professional; or
  • an individual’s health insurance policy number or subscriber identification number and any unique identifier used by a health insurer to identify the individual.

PII also includes a username or email address in combination with a password or security question and answer that would permit access to an online account.  The FIPA does not apply to PII that is encrypted, secured, or modified such that the PII is rendered unusable.
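For readers who think in code, the statutory definition above can be sketched as a simple check.  This is purely illustrative: every field name below is hypothetical, the logic is my rough paraphrase of the statute as summarized in this post, and it is not legal advice.

```python
# Illustrative sketch of the FIPA PII definition described above.
# All field names are hypothetical, not from the statute.

SENSITIVE_FIELDS = {
    "ssn",                  # social security number
    "government_id",        # driver's license, passport, military ID, etc.
    "financial_account",    # account/card number plus required access code
    "medical_info",         # medical history, condition, treatment, diagnosis
    "health_insurance_id",  # policy/subscriber number plus unique identifier
}

def is_fipa_pii(record: dict) -> bool:
    """Rough sketch: does this record match the FIPA definition of PII
    as summarized above?  Not legal advice."""
    # Encrypted, secured, or otherwise unusable data is excluded.
    if record.get("encrypted"):
        return False
    has_name = bool(record.get("last_name")) and (
        bool(record.get("first_name")) or bool(record.get("first_initial"))
    )
    has_sensitive = any(record.get(f) for f in SENSITIVE_FIELDS)
    # Online-account credentials count even without a name.
    has_credentials = bool(
        (record.get("username") or record.get("email"))
        and (record.get("password") or record.get("security_qa"))
    )
    return (has_name and has_sensitive) or has_credentials
```

Note how the username/email branch stands on its own, mirroring the statute’s treatment of online-account credentials as PII without an accompanying name.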

Do covered entities have to notify the Florida Attorney General’s Office of a breach?  Yes.  Covered entities must notify Florida’s Department of Legal Affairs (i.e., the Florida Office of the Attorney General) of any breach that affects more than 500 people.  Notice must be provided as expeditiously as practicable, but no later than 30 days after determination of the breach or reason to believe a breach occurred.  An additional 15 days is permitted if good cause for delay is provided in writing to the Attorney General within 30 days after determination of the breach or reason to believe a breach occurred.
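The threshold and timing rules just described can be expressed as a short function.  The 500-person threshold, 30-day deadline, and 15-day good-cause extension come from the statute as summarized above; the function itself and its parameter names are my own illustrative invention, not legal advice.

```python
from datetime import date, timedelta

def ag_notice_deadline(breach_determined: date,
                       affected_florida_residents: int,
                       good_cause_extension: bool = False):
    """Sketch of the FIPA Attorney General notice timing described above.
    Returns the notice deadline, or None if AG notice is not required.
    Illustrative only -- not legal advice."""
    if affected_florida_residents <= 500:
        return None  # AG notice applies to breaches affecting more than 500
    deadline = breach_determined + timedelta(days=30)
    if good_cause_extension:
        # An additional 15 days if good cause for delay is provided in
        # writing to the Attorney General within the initial 30-day window.
        deadline += timedelta(days=15)
    return deadline
```

For example, a breach determined on July 1 affecting 501 Floridians would, under this sketch, require AG notice by July 31 (or August 15 with a written good-cause extension).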

The notice to the Attorney General must include:

  • a synopsis of the events surrounding the breach;
  • the number of affected Floridians;
  • any services related to the breach being offered without charge to the affected individuals (e.g., credit monitoring) and instructions as to how to use such services;
  • a copy of the notice sent to affected individuals or an explanation as to why such notice was not provided (e.g., there was no risk of financial harm); and
  • the name, address, telephone number, and email address of the employee or agent of the covered entity from whom additional information may be obtained about the breach.

Additionally, if the Attorney General asks for any of the following, the covered entity must provide it:

  • a police report
  • an incident report
  • a computer forensics report
  • a copy of the policies in place regarding breaches
  • steps that have been taken to rectify the breach

When must affected individuals be notified?  Notice to affected individuals must be made as expeditiously as practicable and without unreasonable delay.  The law allows covered entities to take into account the time necessary to allow the covered entity to determine the scope of the breach of security, to identify individuals affected by the breach, and to restore the reasonable integrity of the data system that was breached.  But even with those considerations, notice to affected individuals cannot take longer than 30 days after determining or having a reason to believe that a breach has occurred.

Two exceptions can permissibly delay or eliminate the obligation to notify affected individuals.  One exception is an authorized delay, which occurs when law enforcement determines that notice to individuals would interfere with a criminal investigation.  The determination must be in writing and must provide a specified period for the delay, based on what law enforcement determines to be reasonably necessary.  The delay may be shortened or extended at the discretion of law enforcement.

The second exception is a waiver, which occurs where, after an investigation and consultation with law enforcement, the covered entity reasonably determines that the breach has not and will not likely result in identity theft or any other financial harm to the affected individuals.  If a waiver applies, the covered entity must document it, maintain the documentation for five years, and provide the documentation to the Attorney General within 30 days after the determination.

How must notice to affected individuals take place and what must it include?  Direct notice to affected individuals can take one of two forms:  it can be in writing (sent to the mailing address of the individual in the records of the covered entity) or it can be by email to the email address of the individual in the records of the covered entity.  In either form, the notice must include:  (a) the date, estimated date, or estimated date range of the breach; (b) a description of the PII that was accessed; and, (c) information that the individual can use to contact the covered entity to inquire about the breach and the PII that the covered entity maintained about the individual.

Can a covered entity provide substitute notice to affected individuals?  If the cost of direct notice would exceed $250,000, more than 500,000 individuals are affected, or the covered entity does not have a mailing or email address for the affected individuals, then substitute notice can be provided.  The substitute notice must include a conspicuous notice on the covered entity’s website and notice in print and to broadcast media where the affected individuals reside.
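Because any one of the three substitute-notice conditions suffices, the eligibility test reduces to a simple disjunction.  Again, the function and parameter names are hypothetical illustrations of the rule described above, not legal advice.

```python
def substitute_notice_allowed(direct_notice_cost: float,
                              affected_count: int,
                              has_contact_info: bool) -> bool:
    """Sketch of the FIPA substitute-notice conditions described above.
    Satisfying any one condition permits substitute notice.
    Illustrative only -- not legal advice."""
    return (direct_notice_cost > 250_000       # direct notice too costly
            or affected_count > 500_000        # too many affected individuals
            or not has_contact_info)           # no mailing or email address
```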

What if the covered entity is governed by HIPAA or some other federal regulations?  Notice provided pursuant to rules, regulations, procedures, or guidelines established by the covered entity’s primary or functional federal regulator is deemed to be in compliance with the notice requirement to individuals under the FIPA.  However, a copy of that notice must be timely provided to the Attorney General.  For example, if a company is governed by HIPAA, then its notice pursuant to the Breach Notification Rule will be sufficient to meet the requirements under the FIPA, but a copy of that notice still must be sent to the Attorney General.

Do covered entities have to notify credit reporting agencies?  If more than 1,000 individuals are affected, then notice to all consumer reporting agencies must be provided without unreasonable delay.

What if the breach occurs with a third-party agent (e.g., a vendor)?  A third-party agent is an entity that has been contracted to maintain, store, or process PII on behalf of a covered entity or governmental entity.  If a third-party agent suffers a breach, it must notify the covered entity within 10 days following the determination of the breach or reason to believe the breach occurred.  Upon receiving notice of the breach, the covered entity must then comply with the requirements to notify affected individuals and the Attorney General.  In that case, the third-party agent must provide all information necessary for the covered entity to comply with its notice requirements.  The third-party agent may notify affected individuals and the Attorney General on behalf of the covered entity, but the agent’s failure to provide proper notice is deemed a violation against the covered entity.

Are there obligations other than notification after a breach?  In addition to the reactive component of the FIPA (actions covered entities must take after a data breach), the FIPA also has a proactive component that imposes obligations on covered entities regardless of whether they ever suffer a breach.  Specifically, covered entities must take reasonable measures to protect and secure PII.  Covered entities must also take reasonable measures to dispose, or arrange for the disposal, of customer records containing PII within their custody or control when the records are no longer to be retained.  Such disposal must involve shredding, erasing, or otherwise modifying the PII in the records to make it unreadable or undecipherable through any means.

Who enforces the FIPA and how?  A violation of the FIPA is an unfair or deceptive trade practice subject to an action by the Attorney General under Florida’s Deceptive and Unfair Trade Practices Act against the covered entity or third-party agent.  A covered entity that does not properly notify affected individuals or the Attorney General may be fined up to $500,000 per breach, depending on the number of days in which the covered entity is in violation of the FIPA.  The law creates no private cause of action, nor does the presumed FDUTPA violation for the Attorney General appear to apply to a private action under FDUTPA.

The law will become effective on July 1, 2014 if it is signed by the Governor.



A client recently asked me to identify the next wave of data privacy litigation.  I said that with so much attention on lawsuits arising from data breaches, particularly in light of some recent successes for the plaintiffs in those lawsuits, the way in which companies collect information and disclose what they are collecting is flying under the radar.  This “failure to match” what is actually being collected with what companies are saying they’re collecting and doing with that information could lead to the next wave of data privacy class action litigation.

Here’s an example.  A privacy policy in a mobile app might state that the app collects the user’s name, mailing address, and purchasing behavior.  In fact, and often unbeknownst to the person who drafted the privacy policy, the app is also collecting information like the user’s geolocation and mobile device identification number, but that collection is not disclosed to the user in the privacy policy.  The collection of the additional information isn’t what gets the company into trouble.  It’s the failure to fully and accurately disclose the collection practice and how that information is used and disclosed to others that creates the legal risk.

What is the source of this problem?  In an effort to minimize costs, small companies often slap together a privacy policy by cutting and pasting from a form provided by a website designer or found on the Internet.  Little care is given to the accuracy and depth of the policy because there is little awareness of the potential risk.  Larger companies face a different problem: the left hand sometimes doesn’t know what the right hand is doing.  Legal, privacy, and compliance departments often do not ask the right questions of IT, web/app developers, and marketing, and the latter may not do a sufficiently good job of volunteering more than what is asked of them.  This problem can be further exacerbated where app/website development and maintenance are outsourced.  This failure to communicate can, unintentionally, result in a “failure to match” a company’s words with its actions when it comes to information collection.

We have already seen state and federal regulators become active in this area.  The Federal Trade Commission has brought a significant number of enforcement actions against organizations seeking to make sure that companies live up to the promises they make to consumers about how they collect and use their information.  Similarly, the Office of the California Attorney General recently brought a lawsuit against Delta Air Lines alleging a violation of California’s Online Privacy Protection Act for failure to provide a reasonably accessible privacy policy in its mobile app.  Additionally, the California Attorney General’s Office has issued guidance on how mobile apps can better protect consumer privacy, which includes the conspicuous placement and full disclosure of information collection, sharing, and disclosure practices.  As the use of mobile apps and the collection of electronic information about consumers increase, we can expect to see a ramping up of these enforcement actions.

What sort of civil class action liability could companies face for “failure to match”?  Based on what we’ve seen in privacy and security litigation thus far, if the failure to match a policy with a practice is intentional or reckless, companies could face exposure under theories of fraud or deceptive trade practice statutes that provide a private right of action (e.g., state “Little FTC Acts”).  Even if the failure to disclose is unintentional, the company could still face a lawsuit alleging negligent misrepresentation, breach of contract, and statutory violations, including violations of the Gramm-Leach-Bliley Act, HIPAA’s Privacy Rule, or California’s Online Privacy Protection Act.  Without weighing in on the merits of these lawsuits, I would venture to guess that the class actions with the greatest chances of success will be those where the plaintiffs can show some financial harm (e.g., they paid for the apps in which the deficient privacy policy was contained) or there is a statute that provides set monetary relief as damages (e.g., $1,000 per violation/download).

What can companies do to minimize this risk?  To minimize the risks, companies should begin by evaluating whether their privacy policies match their collection, use, and sharing practices.  This process starts with the formation of a task force under the direction of counsel that is comprised of representatives from legal, compliance, IT, and marketing and that is dedicated to identifying: (1) all company statements about what information is collected (on company websites, in mobile apps, in written documents, etc.); (2) what information is actually being collected by the company’s website, mobile app, and other information collection processes; and (3) how the information is being used and shared.  The second part requires a really deep dive, perhaps even an independent forensic analysis, to ensure that the company’s statements about what information is being collected are correct.  It’s important that the “tech guys” (the individuals responsible for developing the app/website) understand the significance of full disclosure.  Companies should also ask, “do we really need everything we’re collecting?”  If not, why are you taking on the additional risk?  Also remember that this is not a static process.  Companies should regularly evaluate their privacy policies and monitor the information they collect.  A system must be in place to quickly identify when these collection, use, and sharing practices change, so the policies can be updated promptly where necessary.
