The world of digital marketing has grown exponentially in the last two decades.  In fact, it was estimated that in 2020, despite the global pandemic, approximately $332.84 billion would be spent on digital advertising worldwide.[1]  Not surprisingly, sophisticated algorithms (such as real-time bidding and programmatic ad buying) have been built in recent years to master the science of digital marketing and customer segmentation, also known as target marketing.  While none of the current U.S. privacy laws explicitly prohibits target marketing based on electronically obtained consumer data, this space is getting overpopulated and overregulated, and the landscape is changing.  And so we ask the obvious question: can target marketing withstand the emerging privacy regulations?  Our answer is probably, with certain notable caveats.

Target marketing is an old but powerful marketing strategy.[2]  It used to involve breaking consumers into defined segments, with each segment sharing one or more characteristics, such as gender, age, buying power, demographics, or income, and then designing marketing campaigns around those shared characteristics.  Approaches have changed over time.  Nowadays, target marketing has been narrowed to the point of profiling each individual consumer or household and designing marketing campaigns for that particular consumer or household.  Target marketing is often the key marketing tool used to attract new business, increase sales, or strengthen brand loyalty.[3]  Despite its success, with the massive amount of consumer data now being used to target consumers and the emerging data privacy laws and regulations, marketers have to tread carefully to avoid landing themselves in (legal) hot water.

How do marketers access consumer data?  And why is it potentially problematic?

Let’s first address consumer data.  Marketers can acquire data themselves (aka “first party data”).  This includes data from behaviors, actions or interests demonstrated across websites or apps, as well as data stored in a business’ customer relationship management system (“CRM”).[4]  By contrast, “second party data” or “third party data” is data acquired from another source.  It could be someone else’s first party data, or it could be data collected by outside sources that are not the original collectors of the data.[5]

The most common method for obtaining consumer data (first, second or third party) over the internet has been through cookies stored on our digital devices.[6]  (For recent litigation involving the use of cookies in the context of kids’ privacy rights, see this prior post.)  Cookies are used to track the activities of devices as users visit particular web pages, allowing advertisers to build profiles of a device’s online activities; these profiles can then be used to create targeted advertising tailored to the user of that device.[7]
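To make the mechanics concrete, here is a minimal, purely illustrative sketch (in Python) of how an ad server might key a behavioral profile to a tracking cookie.  The cookie name, URLs, and in-memory storage are hypothetical simplifications, not any particular platform’s implementation.

```python
# Illustrative sketch only: a toy model of cookie-based profile building.
# The cookie name ("ad_id"), URLs, and in-memory storage are hypothetical.
import uuid
from collections import defaultdict

profiles = defaultdict(list)  # cookie ID -> pages observed for that device

def handle_request(cookies: dict, page_url: str) -> dict:
    """Assign a tracking cookie on first contact and log the visit against it."""
    cookie_id = cookies.get("ad_id") or str(uuid.uuid4())
    profiles[cookie_id].append(page_url)
    # In a real HTTP response, this identifier would be returned via a Set-Cookie header.
    return {"ad_id": cookie_id}

# The same device visiting different pages accumulates a cross-site profile.
cookies = handle_request({}, "https://shoes.example/womens-heels")
handle_request(cookies, "https://news.example/travel-deals")
print(profiles)  # the profile later informs which ads that device is served
```

The point is simply that the profile is keyed to the device’s cookie rather than to a name, which is why cookie data sits at the center of the “sale” and “sharing” debates discussed below.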

Marketers are also able to obtain data through social media platforms.  Most of us using social media are aware of the personal information we submit when we create our accounts.  This information may include some personally “identifiable” information, such as our name, address, or date of birth, but there is other personal information which is not considered “identifiable,” such as our gender, age, or postal code.  Marketers can then partner with social media platforms to create marketing campaigns based on consumer segments built from each individual’s personal information.  Ever wonder why your husband is not seeing ads for women’s shoes, or why you are receiving ads for products or services you have not shopped for but may be interested in?  It is target marketing.  (And of course, as CPW has covered, data can also be harvested from social media platforms through scraping.)

So what?  Well, until recently (with a few notable exceptions such as the Fair Credit Reporting Act (“FCRA”)), laws regulating companies selling or acquiring consumer data were sparse and preceded the advent of new technologies.  Compare Trans Union LLC v. FTC, 536 U.S. 915, 917 (2002) (stating that “the FCRA permits prescreening—the disclosure of consumer reports for target marketing for credit and insurance. . . .”) with FTC I, 81 F.3d 228 (D.C. Cir. 1996) (holding that selling consumer reports for target marketing violates the FCRA).

In many respects, corporations were thus able to use consumer data to create complex marketing campaigns.  This practice recently came up in the context of the Capital One data breach.  See, e.g., In re Capital One Consumer Data Sec. Breach Litig., 2020 U.S. Dist. LEXIS 175304, at *28 (E.D. Va. Sep. 18, 2020) (discussing plaintiffs’ allegation that “Capital One created a massive concentration of [personally identifiable information], a ‘data lake,’ in which Capital One ‘mines [customers’] data for purposes of product development, targeted solicitation for new products, and target marketing of new partners—all in an effort to boost its profits.’”).

The tide is starting to change.  With the emergence of more recent data privacy laws, such as the California Privacy Rights Act of 2020 (“CPRA”), the California Consumer Privacy Act of 2018 (“CCPA”) and the General Data Protection Regulation (“GDPR”), “covered entities” can no longer use personal information carte blanche for advertising purposes.  However, it bears noting that the statutory definition of personal information remains much narrower than one might assume.  The CCPA, for example, defines personal information as “…information that identifies, relates to, describes, is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household…”  California Consumer Privacy Act of 2018 § 1798.140(o)(1).

Thus, information about one’s gender and income, without more, would not fall under this definition.  Are consumers comfortable having this information used without their consent?  Do they even have a choice?  It depends.  Although common law tort principles, such as invasion of privacy, embarrassment or emotional distress, may allow some legal remedies, case law is sparse and, for obvious reasons, has trended towards permitting corporate use of such data.  See, e.g., Bradley v. T-Mobile US, Inc., 2020 U.S. Dist. LEXIS 44102 (N.D. Cal. Mar. 13, 2020) (rejecting claim that use of consumer data, including age, for target marketing concerning online job postings constituted age discrimination and violated various federal and state laws).

At least insofar as California is concerned, there have been some interesting developments concerning target marketing of late.  Under the CCPA, some businesses engaged in target marketing interpreted “sales” as excluding the exchange of personal information, such as cookie data, for targeting and serving advertising to users across different platforms, on the purported basis that no “sale” (as defined in the statute) was involved because no exchange for “valuable consideration” had occurred.  The CPRA, which was approved by California voters in November, utilizes the concept of “sharing” and seemingly eliminates this potential loophole (although that doesn’t mean there won’t be future litigation regarding this issue).

The concept of “data clean rooms” has also (re)surfaced as a way to bypass the issues related to sharing customer data.  Data clean rooms allow companies, or divisions of a single company, to bring data together for joint analysis under defined guidelines and restrictions that keep the data secure.[8]  Whether a clean room contains PII or anonymized data, data privacy practices are critical.  If the anonymized data can be deanonymized (tied back to actual people through creative analytics), the data would be subject to most privacy laws (and certainly the GDPR).
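As a purely illustrative sketch (and not any vendor’s actual implementation), the toy example below captures the basic idea behind clean-room matching: each party replaces direct identifiers with salted hashes before the data sets meet, and only aggregate results leave the joint environment.  The shared salt, identifiers, and figures are hypothetical.

```python
# Illustrative sketch only: a toy "clean room"-style match on salted hashes.
# The shared salt, identifiers, and spend figures are hypothetical.
import hashlib

SHARED_SALT = b"agreed-upon-secret"  # hypothetical value negotiated by both parties

def pseudonymize(email: str) -> str:
    """Replace a direct identifier with a salted hash before data leaves each party."""
    return hashlib.sha256(SHARED_SALT + email.strip().lower().encode()).hexdigest()

# Each party hashes identifiers locally; raw emails never enter the shared environment.
advertiser_spend = {pseudonymize(e): amount
                    for e, amount in [("a@example.com", 120.0), ("b@example.com", 45.0)]}
publisher_audience = {pseudonymize(e) for e in ["b@example.com", "c@example.com"]}

# Only an aggregate statistic is released from the joint analysis.
matched = [amount for h, amount in advertiser_spend.items() if h in publisher_audience]
print(f"Matched customers: {len(matched)}, total spend: {sum(matched):.2f}")
```

Note that a salted hash is pseudonymous rather than anonymous: anyone holding the salt can recreate the mapping, which is precisely why, as noted above, “anonymized” clean-room data can still fall within most privacy laws once it can be tied back to actual people.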

What does the future look like for digital advertising?  With the spike in US state regulations relating to consumers’ online privacy, such as the CPRA, the Nevada Senate Bill 220 Online Privacy Law (2019), and the Maine Act to Protect the Privacy of Online Consumer Information (2019),[9] it remains fluid.  There have also been changes in cybersecurity, data security and data breach notification laws (although we will table discussion of the specifics for another day).  The bottom line is that marketers now have to pay extra attention not only to each state’s regulations before obtaining and/or processing consumer information, but also to the consent obtained.  The free rein marketers once had to use unlimited consumer data to create complex algorithms for the optimal marketing campaign is slowly coming to a halt.

To mitigate litigation risk, entities in the marketing industry will have to take a jurisdiction-specific approach that accounts for recent developments.  And as the scope of these new laws and regulations is tested via litigation, CPW will be there every step of the way.  Stay tuned.

[1] https://www.emarketer.com/content/global-digital-ad-spending-update-q2-2020

[2] https://www.acrwebsite.org/volumes/8572/volumes/v29/NA-29

[3] https://www.thebalancesmb.com/target-marketing-2948355

[4] https://www.lotame.com/1st-party-2nd-party-3rd-party-data-what-does-it-all-mean/#:~:text=First%20party%20data%20is%20the,you%20have%20in%20your%20CRM

[5] Ibid.

[6] Swire, Peter and Kennedy-Mayo, DeBrae, “U.S. Private-Sector Privacy,” Third Edition, p. 130.

[7] Ibid.

[8] https://www.snowflake.com/blog/distributed-data-clean-rooms-powered-by-snowflake/

[9] https://www.csoonline.com/article/3429608/11-new-state-privacy-and-security-laws-explained-is-your-business-ready.html

2020 has been a year for the record books, and the area of data breach litigation is no exception.  Several key developments, whether considered individually or together, will likely make breach litigation a top-of-mind data privacy issue going into the next year.  So fasten your seatbelts and read on as CPW recaps what you need to know going into 2021.

Overview of Industries Impacted by Data Breach Litigation in 2020

What industries were impacted by data breach litigations in 2020?  The short answer: all of them.

Despite the widespread adoption of cybersecurity policies and procedures by organizations to safeguard their proprietary information and the personal information of their clients, consumers, and employees, data breaches are all too common.  CPW has covered previously how “[t]echnical cybersecurity safeguards, such as patching, are obviously critical to an effective cybersecurity program.  However, many of the most common vulnerabilities can be addressed without complex technical solutions.”  Top five practical recommendations to reduce cyber risk can be reviewed here.

In fact, the number of data breaches in 2020 was more than double that of 2019, with government, healthcare, retail and technology among the most frequently targeted industries.  In this instance, correlation equals causation: as more and more companies experienced crippling security breaches, the number of data breach litigations rose in turn.

What Has Changed with Data Breach Litigations in 2020?

Besides increasing in frequency, data breach litigation has also grown increasingly complex.  This is due to several factors.

First, plaintiffs bringing data breach litigations have continued to rely on common law causes of action (negligence and fraud, among others) in addition to asserting new statutory claims (although of course there are exceptions).  The reason boils down to the fact that while nearly every state has a data breach statute, many do not include a private right of action and are enforced by the state attorneys general.  Hence plaintiffs’ reliance on common law and tort-based theories.  Insofar as statutory causes of action are concerned, the California Consumer Privacy Act (“CCPA”) has only been on the books since the start of this year, but it has already emerged as a focal point for data breach litigations (be sure to check out our CCPA Year-in-Review coverage).  The first CCPA class action settlement was announced last month and will likely serve as a benchmark going forward (keep a close eye on organizations agreeing to adopt increased security and data privacy controls, as has been done on the regulatory front).

Second, there was a monumental development in the spring that sent shockwaves through the data breach defense bar.  A federal judge ordered production of a forensic report prepared by a cybersecurity firm in the wake of the Capital One data breach.  The report was found not to be protected as attorney work product despite having been prepared at the direction of outside counsel.  [Note: A forensic report is usually prepared by a cybersecurity firm following a thorough investigation into a company’s cyberattack.  The report will address, among other areas, any vulnerabilities in the company’s IT environment that enabled the cyberattack.  While these findings can help a company defend itself in subsequent litigation and mitigate risk, the utility of the forensic report can cut both ways: plaintiffs can also use this information to substantiate their claims.]  This ruling reaffirmed several key lessons for companies facing cyber incidents.  Among them: to shield a forensic report as work product, a company must demonstrate that the report would not have been created in essentially the same form absent litigation.  Notably, this burden is more difficult to meet where the company has a pre-existing relationship with the cybersecurity vendor that prepares the report.

And third, as seen from a high-profile case earlier this year, the legal fallout from a data breach can extend to company executives.  A company’s former Chief Security Officer (CSO) was charged with obstruction of justice and misprision of felony for allegedly trying to conceal from federal investigators a cyberattack that occurred in 2016, exposing the data of 57 million individuals.  Although an outlier, it is a significant reminder for companies and executives to take data breach disclosure obligations seriously, notwithstanding the murkiness in the law regarding when these obligations arise.

What Changed With Standing in Data Breach Cases in 2020?

Experienced litigators may be familiar with the classic requirements for standing, but even the most experienced among them are likely not familiar with standing as it applies to data breach litigation.  The reason for this discrepancy is simple:  although standing case law is generally straightforward, it has not caught up to the unique challenges posed by data breaches.  This, combined with the absence of national-level data privacy legislation, has created a hodgepodge of circuit splits and differing interpretations.

As you will recall, Article III standing consists of three elements:  (1) an injury-in-fact that is concrete and particularized, as well as actual or imminent; (2) the injury must be fairly traceable to the defendant’s act; and (3) it must be “likely” that a favorable decision will compensate or otherwise rectify the injury.

When a data breach occurs, the ultimate standing question is whether the theft of data may, by itself, constitute a sufficient injury.  Is there an injury when leaked personal information is not copied or used to facilitate fraud or another crime?  Is there an injury only when certain types of personal information, such as Social Security numbers, are leaked, or may the disclosure of other types of information, such as credit card numbers or addresses, suffice?  These questions are at the heart of data breach litigation, and 2020 brought us a few notable cases that are worth reflecting on at this time of the year.

Given the absence of uniform causes of action in data breach litigation, plaintiffs often employ a number of strategies when drafting their complaints.  One strategy has been to allege a negligence cause of action.  This year, that strategy drew increased attention when Wawa, a convenience store chain, moved to dismiss a class action lawsuit filed against it by a group of credit unions regarding an alleged data breach.  In In re Wawa Inc. Data Security Litigation, No. 2:19-cv-06019 (E.D. Pa.), the credit unions alleged that Wawa’s failure to abide by the PCI DSS (the payment card industry’s data security standards) should serve as the standard of care for their negligence claim.  In opposition to the motion, the plaintiffs argued that Wawa had an independent common law duty to use reasonable care to safeguard the data used by credit and debit cards for payments.  The court heard oral argument in November and a decision remains pending.  Our previous coverage provides more information.

While some commentators have reported a trend this year towards a more plaintiff-friendly view of standing in data privacy cases, at least one court pushed back on that trend.  In Blahous v. Sarrell Regional Dental Center for Public Health, Inc., No. 2:19-cv-00798 (N.D. Ala.), a group of patients filed suit against a dental provider over an alleged data breach.  After conducting an investigation, the defendant determined that there was no evidence that any breached files had been copied, downloaded, or otherwise removed.  That factual finding was included in the notice that the defendant sent to its patients.  The plaintiffs nonetheless argued, in substance, that the breach itself was injury enough to confer standing.

The court rejected the plaintiffs’ argument and granted the defendant’s motion to dismiss.  Crucial to the court’s opinion was that there were no allegations suggesting any disclosure of the acquired data, “such as an actual review by a third party,” had occurred.  The court stated that “the fact that the [b]reach occurred cannot in and of itself be enough, in the absence of any imminent or likely misuse of protected data, to provide Plaintiffs with standing to sue.”  The court looked to the notice of the data breach and observed that “[t]he [n]otice upon whose basis the Plaintiffs sue, included as exhibits to their own pleading, denies that any personal information was copied, downloaded, or removed from the network, despite Plaintiffs’ mistaken belief to the contrary.”

Perhaps the biggest takeaway of Blahous is that the disclosure of a patient’s Social Security number and health treatment information was not sufficient for standing.  This stands in contrast to other decisions in which the absence of a Social Security number from a data breach specifically led a court to conclude there was no injury.  See Antman v. Uber Technologies, No. 3:15-cv-01175 (N.D. Cal.) (holding allegations insufficient where the complaint alleged “only the theft of names and driver’s licenses. Without a hack of information such as social security numbers, account numbers, or credit card numbers, there is no obvious, credible risk of identity theft that risks real, immediate injury.”).

Another case highlighted the current circuit split concerning injury in data breaches.  In Hartigan v. Macy’s, No. 1:20-cv-10551 (D. Mass.), a Macy’s customer filed a class action lawsuit after his personal information was leaked through a breach of Macy’s online shopping platform.  The court granted Macy’s motion to dismiss, giving three reasons for its holding:  (1) the plaintiff did not allege fraudulent use or attempted use of his personal information to commit identity theft; (2) the stolen information “was not highly sensitive or immutable like social security numbers”; and (3) immediately cancelling a disclosed credit card can eliminate the risk of future fraud.

Hartigan has at least two takeaways.  First, the pushback reflected in Blahous may be an anomaly.  In Blahous, the court found no standing even though a Social Security number was disclosed.  The Hartigan court, however, specifically cited the absence of any disclosed Social Security numbers as a reason why the plaintiff did not suffer an injury.  Although it was issued later in the year, the Hartigan opinion did not cite Blahous or any opinion from within the Eleventh Circuit.

Second, Hartigan highlighted the current circuit split regarding standing in data breach cases.  The court’s analysis was based on First Circuit precedent issued prior to the Supreme Court’s decision in Clapper v. Amnesty International USA.  The court then looked to six other circuits for guidance.  It cited opinions from the D.C. and Ninth Circuits suggesting that the disclosure of “sensitive personal information,” like Social Security numbers, creates a substantial risk of injury.  It then looked to opinions from the Fourth, Seventh, and Ninth Circuits suggesting that post-theft criminal activity created an injury.  Finally, it noted that the Third, Fourth, and Eighth Circuits found no standing in the absence of criminal activity allegations, even when Social Security numbers were disclosed.

Finally, no year-in-review would be complete without additional discussion of the CCPA (including in the area of standing).  At least one notable standing opinion highlights what may be to come.  In Fuentes v. Sunshine Behavioral Health Group, LLC, No. 8:20-cv-00487 (C.D. Cal.), a Pennsylvania resident filed suit against an operator of drug and alcohol rehabilitation treatment centers regarding an alleged data breach.  A significant issue was whether the plaintiff, a Pennsylvania resident who stayed in one of the defendant’s California facilities for one month, could be a “consumer” under the CCPA for standing purposes.

The defendant seized on the plaintiff’s residency for its motion to compel arbitration or, in the alternative, to dismiss.  The defendant argued that the plaintiff’s one-month stay at a California treatment facility did not make him a “consumer.”  The CCPA defines a “consumer” as “a natural person who is a California resident,” as defined by California regulations.  Cal. Civ. Code § 1798.150(h).  That part of the California Code of Regulations includes in its definition of “resident”:  (1) individuals who are in California for other than a temporary or transitory purpose; and (2) individuals domiciled in California who are outside the state for a temporary or transitory purpose.

Unfortunately, the court did not evaluate this issue because the parties voluntarily dismissed the suit prior to a decision.

Trends in 2021

The nation’s political landscape and the pending circuit split will likely fuel developments in 2021.

With a new Congress arriving shortly, most eyes are watching to see whether the 117th Congress will finally bring about comprehensive federal data privacy legislation.  Of the previously introduced federal legislation, one point of difference has been whether there should be a private cause of action.  The CCPA, which permits private causes of action for California residents, may be one source of influence.  Should federal legislation recognize a private cause of action, cases like Fuentes may foreshadow a standing argument to come.

The change of administration will also likely influence data privacy trends.  The Vice President-Elect’s prior experience with data privacy issues may place her on point for any federal action.  As Attorney General of California, the Vice President-Elect took an active interest in data privacy issues.  In January 2013, her office created the Privacy Enforcement and Protection Unit of the California Attorney General’s Office, charged with enforcing laws related to data breaches, identity theft, and cyber privacy.  The Vice President-Elect also secured several settlements with large companies, some of which required the creation of specific privacy-focused roles within the settling companies, such as a chief privacy officer (mirroring recent trends discussed above).

2021 may also be the year of the Supreme Court.  In recent years, the Supreme Court has denied several cert petitions in cases involving data breaches.  2021, however, may be the year when we see the nation’s highest court decide who has standing in a data breach and when an injury occurs.  Several high-profile data privacy matters, such as the recent creation of two MDLs, have increased the public’s attention to data issues.  Additionally, the circuit split referenced in Hartigan may be coming to a head.  Finally, the implementation of the CCPA and the possibility of federal legislation may make this the year of data privacy.

CPW will be there to cover these developments, as they occur.  Stay tuned.

A financial institution has asked a Virginia federal court to overturn a magistrate judge’s order requiring it to disclose the forensic report detailing its 2019 data breach.  If your company experiences a data breach, it is imperative to immediately retain outside counsel who understands the nuances of cybersecurity events and attorney work product privileges.  Here we provide the following practical takeaways: Continue Reading Data Breach: Is Your Forensic Report Privileged?

Originally posted on Squire Patton Boggs’ Capital Thinking blog by David Stewart, Ludmilla Kasulke and Dominic Braithwaite.


On March 11, 2024, US President Joe Biden released his Fiscal Year (FY) 2025 budget request, which included proposals on U.S. Artificial Intelligence (AI) development and efforts to implement the Biden Administration’s Executive Order (EO) on AI. The budget identifies the National Science Foundation (NSF) as central to U.S. leadership in AI, requesting $10.2 billion in funding for the agency. $2 billion of that total would be dedicated to research and development (R&D) in accordance with CHIPS Act priorities, including AI, and $30 million would support the National AI Research Resource pilot program. The budget also requests $65 million for the Commerce Department “to safeguard, regulate, and promote AI, including protecting the American public against its societal risks.” This funding would include directing the National Institute of Standards and Technology (NIST) to establish the U.S. AI Safety Institute. The institute would be responsible for operationalizing “NIST’s AI Risk Management Framework by creating guidelines, tools, benchmarks, and best practices for evaluating and mitigating dangerous capabilities and conducting evaluations including red-teaming to identify and mitigate AI risk.” Further, the Department of Energy (DOE) Office of Science, which is responsible for implementing aspects of both the CHIPS Act and the AI EO, would receive $8.6 billion under the President’s proposed budget.

Continue Reading Biden Budget Proposal Advances AI Priorities

As state legislation increasingly regulates sensitive data, and expands the concept of what is sensitive, the Federal Trade Commission (“FTC” or “Commission”) is homing in on sensitive data processing as it expands its unfairness authority in the privacy enforcement context. The FTC’s recent enforcement activity regarding location-aware data is a good example. As we have previously reported here and here, Kochava, an Idaho-based data broker, is currently embroiled in a federal lawsuit with the Commission that has the potential to redefine, at the federal level, the legal bounds of sensitive data collection, use and sharing, as well as the data brokering industry.

Continue Reading Sensitive Data Processing is in the FTC’s Crosshairs

On January 18, during a luncheon fireside chat at the California Lawyers Association’s UCL Institute event in Los Angeles, Federal Trade Commission (“FTC”) Bureau of Consumer Protection Director Samuel Levine shared his insights on what data practices are of concern to him and to the FTC.  Companies should take heed of his comments, the highlights of which include:

For FTC watchers, none of this should come as any surprise.  While the upcoming election could usher in an FTC with very different perspectives and priorities, it is a sure bet that the current FTC will look to advance its agenda this year.  For more information contact the authors or your usual firm contact.

Disclaimer: While every effort has been made to ensure that the information contained in this article is accurate, neither its authors nor Squire Patton Boggs accepts responsibility for any errors or omissions. The content of this article is for general information only, and is not intended to constitute or be relied upon as legal advice.

On Devil’s Night, two significant AI developments were announced. First, the White House released its Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (“AI EO”). Second, the Group of 7 (“G-7”) announced its International Guiding Principles on Artificial Intelligence (“G-7 Principles”) and companion Code of Conduct for AI Developers (“G-7 Code”). All three are broad strokes; the devil will be in the details.

Following is a short summary of each but please check back soon for more analysis and key takeaways for businesses and their AI governance programs.

Continue Reading Two Significant AI Announcements:  Spooky for AI Developers?

CPW’s Kristin Bryan joins two of Squire Patton Boggs’ policy experts – Beth Goldstein and Jeffrey Turner – to discuss one of the most critical pieces of privacy legislation in years, the American Data Privacy and Protection Act (ADPPA), for Lexology’s Masterclass series. This game-changing privacy legislation not only has potentially far-reaching impact, but it could also be in effect within the next year. Join us for an insightful look at what this legislation means for businesses and consumers.

Wednesday, December 7, 2022

11 a.m. ET

More details and registration

Key topics:

  • Current policy and political landscape in Congress and in state capitals
  • Main provisions of the ADPPA
  • Recent state legislative developments driving Congressional action
  • Limitations on the Federal Trade Commission’s power to regulate privacy in the absence of federal legislation
  • Ongoing litigation and future risks
  • Sovereigns vs. corporate distinctions

We hope you can join us on December 7!

On October 17, 2022, the California Privacy Protection Agency (“CPPA” or “Agency”) published Modified Text of Proposed Regulations (“Modified Regs”) and an Explanation of Modified Text of Proposed Regulations (“Explanation of Modified Regs”). The CPPA’s review of the Modified Regs has been postponed and is now scheduled to take place during the October 28-29, 2022 public meeting.

Recall that earlier this year, on May 27, 2022, the CPPA published the first draft of the proposed CPRA Regs and initial statement of reasons. The Agency commenced the formal rulemaking process to adopt the Regs on July 8, 2022, and the 45-day public comment period closed on August 23, 2022. The comments submitted in response to the first draft of the Regs are available here. Continue Reading Revised Proposed CPRA Regs To Be Considered At October 28, 2022 Meeting