Join us on January 21, 2021 at 12pm EST/9am PST for a complimentary webinar – Understand and Prepare for the California Privacy Rights Act.

Panelist Glenn Brown of our Data Privacy & Cybersecurity Practice will provide an overview of the CPRA and its interplay with the CCPA and will also address what can be done now to start preparing for compliance.

Additional information and registration are available here.

This program is pending 1.0 hour of CLE in AZ, CA, GA, NJ and NY. The program is also approved for 1.0 hour of CPE by IAPP.

As 2020 drew to a close, the Ninth Circuit gave the CFPB a victory in Consumer Fin. Prot. Bureau v. Seila Law LLC, 2020 U.S. App. LEXIS 40572 (9th Cir. Dec. 29, 2020), upholding the CFPB’s civil investigative demand (CID) to Seila Law.  The case was on remand from the United States Supreme Court, which held that the statute establishing the CFPB violated the Constitution by placing leadership of the agency in the hands of a single Director who could only be removed for cause.  Seila Law LLC v. CFPB, 140 S. Ct. 2183 (2020).  The Supreme Court, however, concluded that the for-cause provision of the statute could be severed and did not require the invalidation of the entire agency; it then remanded the case back to the Ninth Circuit to determine whether the CFPB’s ratification of its earlier decision to issue the CID to Seila Law was valid.  Just over a month after hearing oral argument on the ratification question, a unanimous panel of the Ninth Circuit held that on July 9, 2020, the CFPB’s current Director, Kathleen Kraninger, validly ratified the agency’s earlier decision to issue a CID to Seila Law.

The Ninth Circuit quickly disposed of the two primary arguments put forth by Seila Law to challenge Director Kraninger’s ratification of the Seila Law CID.  First, relying on Federal Election Commission v. NRA Political Victory Fund, 513 U.S. 88 (1994), Seila Law argued that because the agency lacked the authority to issue the CID back in 2017, Director Kraninger’s 2020 ratification of such action was not valid.  In other words, according to Seila Law, an action that was void at the time taken could not be later ratified.  Finding the argument “largely foreclosed” by its earlier decision in Consumer Fin. Prot. Bureau v. Gordon, 819 F.3d 1179 (9th Cir. 2016), the Ninth Circuit concluded that “the constitutional infirmity relates to the Director alone, not the legality of the agency itself” and that the defect with the provision relating to the removal of the Director did not “render[] all of the agency’s prior actions void.”  As the Ninth Circuit noted, if that were the case, then there would have been no reason for the Supreme Court to remand the ratification question back to the Ninth Circuit.

The Ninth Circuit concluded that Seila Law’s second argument—that the ratification took place outside of the limitations period for bringing an enforcement action—was premature.  The statutory limitations period relied upon by Seila Law applies only to the bringing of an enforcement action, which has not happened here.  “The only actions ratified by Director Kraninger are the issuance and enforcement of the CID” against Seila Law.  Whether Seila Law could successfully bring a statute-of-limitations defense to any future enforcement action has no bearing on the validity of the Director’s ratification of the CID to Seila Law.

The Ninth Circuit’s decision confirms our earlier blog (LINK HERE) that defendants seeking to challenge Bureau actions taken before the Supreme Court invalidated the statute’s removal provision have an uphill battle.  The ratification issue is teed up in other cases around the country, so stay tuned to see whether any court sees the ratification issue differently than the Ninth Circuit.

The technology that science fiction promised us has finally arrived, but accompanying it are new duties, liabilities, and causes of action.  Smart homes, or homes interfaced with internet functionality, are growing in popularity.  In a smart home, features like door locks and appliances may be connected to the internet, allowing consumers to remotely control them or perform tasks from wherever they can maintain an internet connection.  Many consumers have embraced the promised convenience of smart homes, and experts predict a coming business surge: in the U.S., some forecast that the industry’s revenue could reach $141 billion by 2023.  Yet although we may find ourselves on the cusp of new lifestyles and conveniences, consumers and industry would be well-advised to look at a recent opinion that previews the liability issues that will inevitably emerge.

In Doty v. ADT, LLC, No. 20-cv-60972, 2020 U.S. Dist. LEXIS 245373 (S.D. Fla. Dec. 30, 2020), the court granted in part and denied in part a motion to dismiss a consumer’s class action lawsuit arising from misconduct involving her smart home system.  The plaintiff had her home outfitted with smart home technology, including cameras inside and outside of her home and locks that could be controlled through an internet connection.  The trouble began when the technician who installed the system gave himself remote access.  According to the opinion, the employee accessed the plaintiff’s account over 70 times.  He allegedly viewed and downloaded footage from the security cameras inside and outside of her home.

The plaintiff filed a class action lawsuit on behalf of herself and all customers “whose security systems were remotely accessed by an employee or agent” of the defendant “without authorization from the customer”.  The plaintiff alleged several state law causes of action—including breach of contract, negligence, violations of the Texas Deceptive Trade Practices Act, intrusion upon seclusion, intentional infliction of emotional distress, privacy monitoring, and negligent hiring, supervision, and retention—and one federal claim arising under the Computer Fraud and Abuse Act at 18 U.S.C. Section 1030.

Doty has a number of takeaways, but three stood out to us.

First, the court’s reasoning behind its decision to not dismiss the breach of contract claim suggests an implied duty to protect consumers from invasions of their privacy.  Although the plaintiff’s contract contained an express waiver of implied covenants, the court found an implied covenant “to supply a security system reasonably secure from unauthorized access.”  The court agreed with the plaintiff’s argument that “the contract necessarily implie[d] an agreement that the security monitoring services would be secure from intrusion” by the defendant’s employees, opining that, “A contract for a security monitoring service that is itself unsecure is a contract for nothing at all.”  (Internal quotations omitted).

Second, in upholding most of the negligence claims, the court recognized a duty to protect consumers from unauthorized intrusions of their privacy and found that physical injury was not required for a damages award.  Specifically, the court stated the defendant had a duty to “reasonably protect Plaintiff from invasions of privacy through unauthorized access of that system and that Plaintiff may recover damages for mental anguish caused by a breach of that duty, even in the absence of physical damages.”  The court did not discuss what actions would satisfy this duty or the defendant’s business practices, which leaves an area that may be explored on summary judgment.

Third, the court recognized that there was no cause of action for privacy monitoring, but did not completely close the door on injunctive relief.  The plaintiff requested injunctive relief, which included requiring the defendant to “create a fund sufficient to cover the costs of commercial and/or legal services needed to remedy the invasion of privacy that they have suffered”.  The court granted the defendant’s motion to dismiss this claim, finding it was not a viable cause of action, but recognized that “injunctive relief may be an available equitable remedy in the event” the defendant “is held liable” on other claims.

If the experts are correct, smart home technology will only continue to proliferate.  Doty, however, suggests technology will not be the only thing growing: liabilities, duties, and causes of action will likely expand as the industry develops.  Doty is a case that we will be watching.

The world of digital marketing has grown exponentially in the last two decades.  In fact, it was estimated that in 2020, despite the global pandemic, approximately $332.84 billion would be spent on digital advertising worldwide.[1]  Not surprisingly, sophisticated algorithms (such as real-time bidding and programmatic ad buying) have been built in recent years to master the science of digital marketing and customer segmentation (aka target marketing).  While none of the current U.S. privacy laws explicitly prohibit target marketing based on electronically obtained consumer data, this space is getting crowded, and increasingly regulated, and the landscape is changing.  And so we ask the obvious question: can target marketing withstand the emerging privacy regulations?  Our answer is probably, with certain notable caveats.

Target marketing is an old but powerful marketing strategy.[2]  It used to involve breaking consumers into defined segments, each sharing some similar characteristic, such as gender, age, buying power, demographics, or income (or a combination of a few shared characteristics), and then designing marketing campaigns around the shared characteristic(s).  Approaches have changed with the passing of time.  Nowadays, target marketing has been narrowed to the point of profiling every individual consumer or household, and designing marketing campaigns for each individual consumer or household.  Target marketing is often the key marketing tool used to attract new business, increase sales, or strengthen brand loyalty.[3]  Despite its success, given the massive amount of consumer data now being used to target consumers and the emerging data privacy laws and regulations, marketers have to tread carefully to avoid landing themselves in (legal) hot water.
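The traditional segment-based approach described above can be sketched in a few lines of Python.  This is purely an illustration of the concept: the segment names, thresholds, and sample consumers are all hypothetical, not drawn from any real campaign.

```python
def assign_segment(consumer):
    """Place a consumer into a coarse marketing segment based on a few
    shared characteristics (the traditional, pre-individualized approach).
    The thresholds and segment names here are purely hypothetical."""
    if consumer["age"] < 30 and consumer["income"] > 60_000:
        return "young-professional"
    if consumer["age"] >= 30 and consumer["income"] > 100_000:
        return "established-high-income"
    return "general"

consumers = [
    {"id": 1, "age": 26, "income": 75_000},
    {"id": 2, "age": 45, "income": 120_000},
    {"id": 3, "age": 60, "income": 40_000},
]

# Each segment would then receive its own campaign creative.
segments = {c["id"]: assign_segment(c) for c in consumers}
print(segments)  # {1: 'young-professional', 2: 'established-high-income', 3: 'general'}
```

Modern target marketing replaces these few coarse buckets with a profile per individual consumer or household, but the underlying mechanic of matching campaigns to characteristics is the same.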

How do marketers access consumer data?  And why is it potentially problematic?

Let’s first address consumer data.  Marketers can acquire data by themselves (aka “first party data”).  This includes data from behaviors, actions or interests demonstrated across website(s) or app(s), as well as data stored in a business’ customer relationship management system (“CRM”).[4]  By contrast, “second party data” or “third party data” is data acquired from another source.  It could be someone else’s first party data, or it could be data collected by outside sources that are not the original collectors of the data.[5]

The most common method for obtaining consumer data (first, second or third party) over the internet has been through cookies stored on our digital devices.[6]  (For a recent litigation involving the use of cookies in the context of kids’ privacy rights see this prior post).  Cookies are used to track the activities of devices as users visit particular web pages, allowing advertisers to build profiles of a device’s online activities; these profiles can then be used to create targeted advertising tailored to the user of that device.[7]
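The profile-building mechanics described above can be sketched with a short Python example.  This is a simplified model, not any ad network’s actual code; the cookie IDs, page paths, and interest mapping are all hypothetical.

```python
from collections import defaultdict

# Hypothetical mapping from visited pages to interest categories
# that an ad network might maintain.
INTEREST_BY_PATH = {
    "/running-shoes": "athletics",
    "/garden-tools": "home-and-garden",
    "/mortgage-rates": "finance",
}

def build_profiles(events):
    """Group page visits by cookie ID into per-device interest profiles.

    `events` is an iterable of (cookie_id, path) pairs, mimicking the
    visit log a server could assemble from cookie headers."""
    profiles = defaultdict(lambda: defaultdict(int))
    for cookie_id, path in events:
        interest = INTEREST_BY_PATH.get(path)
        if interest:
            profiles[cookie_id][interest] += 1
    return profiles

def pick_ad_category(profile):
    """Target the interest seen most often for this device."""
    return max(profile, key=profile.get) if profile else None

events = [
    ("abc123", "/running-shoes"),
    ("abc123", "/running-shoes"),
    ("abc123", "/mortgage-rates"),
    ("xyz789", "/garden-tools"),
]
profiles = build_profiles(events)
print(pick_ad_category(profiles["abc123"]))  # athletics
```

The key point for privacy purposes is that the cookie ID ties otherwise scattered visits to a single device, which is what turns isolated page views into a behavioral profile.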

Marketers are also able to obtain data through social media platforms.  Most of us using social media are aware of the personal information we submit when we create our accounts.  This information may include some personally “identifiable” information, such as our name, address, and date of birth, but there is other personal information that is not considered “identifiable,” such as our gender, age, and postal code.  Marketers can then partner with social media platforms to create marketing campaigns based on consumer segments created through each individual’s personal information.  Ever wonder why your husband is not seeing ads for women’s shoes, or why you are receiving ads for products or services you have not shopped for but may be interested in?  It is target marketing.  (And of course, as CPW has covered, data can also be harvested from social media platforms through scraping.)

So what?  Well, until recently (with a few notable exceptions such as the Fair Credit Reporting Act (“FCRA”)) laws regulating companies selling or acquiring consumer data were sparse and preceded the advent of new technologies.  Compare Trans Union LLC v. FTC, 536 U.S. 915, 917 (2002) (stating that “the FCRA permits prescreening—the disclosure of consumer reports for target marketing for credit and insurance. . . .”) with FTC I, 81 F.3d 228 (D.C. Cir. 1996) (holding that selling consumer reports for target marketing violates the FCRA).

In many respects, corporations were thus able to use consumer data to create complex marketing campaigns.  This practice recently came up in the context of the Capital One data breach.  See, e.g., In re Capital One Consumer Data Sec. Breach Litig., 2020 U.S. Dist. LEXIS 175304, at *28 (E.D. Va. Sep. 18, 2020) (discussing plaintiffs’ allegation that “Capital One created a massive concentration of [personally identifiable information], a ‘data lake,’ in which Capital One ‘mines [customers’] data for purposes of product development, targeted solicitation for new products, and target marketing of new partners—all in an effort to boost its profits.’”).

The tide is starting to change.  With the emergence of more recent data privacy laws, such as the California Privacy Rights Act of 2020 (“CPRA”), the California Consumer Privacy Act of 2018 (“CCPA”) and the General Data Protection Regulation (“GDPR”), “covered entities” can no longer use personal information carte blanche for advertising purposes.  However, it bears noting that the statutory definition of personal information remains much narrower than what one might assume.  The CCPA, for example, defines personal information as: “…information that identifies, relates to, describes, is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household…”  California Consumer Privacy Act of 2018 § 1798.140(o)(1).

Thus, information about one’s gender and income, without more, would not fall under this definition.  Are consumers comfortable having this information used without their consent?  Do they even have a choice?  It depends.  Although common law tort principles, such as invasion of privacy, embarrassment or emotional distress, may allow some legal remedies, case law is sparse and, for obvious reasons, has trended toward permitting corporate use of such data.  See, e.g., Bradley v. T-Mobile US, Inc., 2020 U.S. Dist. LEXIS 44102 (N.D. Cal. Mar. 13, 2020) (rejecting claim that use of consumer data, including age, for target marketing concerning online job postings constituted age discrimination and violated various federal and state laws).

At least insofar as California is concerned, there have been some interesting developments concerning target marketing of late.  This is because under the CCPA, some businesses engaged in target marketing interpreted “sales” as excluding the exchange of personal information, such as cookie data, for targeting and serving advertising to users across different platforms.  This approach was on the purported basis that no “sales” (as defined in the statute) were involved because no exchange for “valuable consideration” had occurred.  The CPRA, which was approved by California voters in November, utilizes the concept of “sharing” and seemingly eliminates this potential loophole (although that doesn’t mean there won’t be future litigation regarding this issue).

The concept of “data clean rooms” has also (re)surfaced as a way to bypass the issues related to sharing customer data.  Data clean rooms allow companies, or divisions of a single company, to bring data together for joint analysis under defined guidelines and restrictions that keep the data secure.[8]  Whether a clean room contains PII or anonymized data, data privacy practices are critical.  If the anonymized data can be deanonymized (tied back to actual people through creative analytics), the data becomes subject to most privacy laws (and certainly the GDPR).
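The clean-room matching idea, and the deanonymization caveat, can be illustrated with a small Python sketch.  This is not any vendor’s implementation: the shared salt, sample emails, and join logic are invented for illustration, and real clean rooms layer on access controls and aggregation thresholds well beyond simple hashing.

```python
import hashlib

def pseudonymize(email, salt):
    """Replace a raw identifier with a salted hash so two parties can
    join records without exchanging plaintext emails."""
    return hashlib.sha256((salt + email.lower()).encode()).hexdigest()

SALT = "shared-secret-salt"  # hypothetical value agreed inside the clean room

party_a = {pseudonymize(e, SALT): spend for e, spend in
           [("ann@example.com", 120), ("bob@example.com", 80)]}
party_b = {pseudonymize(e, SALT): clicks for e, clicks in
           [("ann@example.com", 5), ("carol@example.com", 9)]}

# Joint analysis runs only on the overlap; neither side sees the
# other's raw customer list.
overlap = party_a.keys() & party_b.keys()
print(len(overlap))  # 1

# The deanonymization caveat: anyone who can enumerate candidate emails
# and knows the salt can re-derive the hashes (a dictionary attack), so
# this data may still count as "personal data" under laws like the GDPR.
guess = pseudonymize("ann@example.com", SALT)
print(guess in party_a)  # True
```

The dictionary-attack step at the end is why hashing alone is generally treated as pseudonymization rather than anonymization.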

What does the future look like for digital advertising?  With the spike in U.S. state regulations relating to consumers’ online privacy, such as the CPRA, the Nevada Senate Bill 220 Online Privacy Law (2019), and the Maine Act to Protect the Privacy of Online Consumer Information (2019),[9] it remains fluid.  There have also been changes in cybersecurity, data security and data breach notification laws (although we will table discussion of the specifics for another day).  The bottom line is that marketers now not only have to pay extra attention to each state’s regulations before obtaining and/or processing consumer information, they also have to pay extra attention to the consent obtained.  The free rein of using unlimited consumer data to create complex algorithms for the optimal marketing campaign is slowly coming to an end.

To mitigate litigation risk, entities in the marketing industry will have to take a jurisdiction-specific approach that accounts for recent developments.  And as the scope of these new laws and regulations is tested via litigation, CPW will be there every step of the way.  Stay tuned.

[1] https://www.emarketer.com/content/global-digital-ad-spending-update-q2-2020

[2] https://www.acrwebsite.org/volumes/8572/volumes/v29/NA-29

[3] https://www.thebalancesmb.com/target-marketing-2948355

[4] https://www.lotame.com/1st-party-2nd-party-3rd-party-data-what-does-it-all-mean/#:~:text=First%20party%20data%20is%20the,you%20have%20in%20your%20CRM

[5] Ibid.

[6] Swire, Peter and Kennedy-Mayo, DeBrae, “U.S. Private-Sector Privacy,” Third Edition,  Pg 130

[7] Ibid.

[8] https://www.snowflake.com/blog/distributed-data-clean-rooms-powered-by-snowflake/

[9] https://www.csoonline.com/article/3429608/11-new-state-privacy-and-security-laws-explained-is-your-business-ready.html

It has become commonplace for government agencies and law enforcement, particularly in large metropolitan areas, to use facial recognition software.  These practices, though, have garnered recent public attention and some controversy.  In response to concerns raised by media coverage of Clearview’s practices, three cities last year banned their governments from using facial recognition technology, and another banned all corporate uses of facial recognition technology in public spaces.  However, for the most part, government utilization of facial recognition software has continued unabated.

But using such software is not without risks, as shown by a lawsuit recently filed against law enforcement officers and prosecutors.  In Parks v. McCormack, et al., Case No. L-003672-20 (N.J.), the plaintiff alleges he spent ten days wrongfully imprisoned after facial recognition software used by a New Jersey police department mistakenly identified him as a suspect in a criminal investigation.  This was allegedly notwithstanding that the plaintiff’s fingerprints and DNA did not match those left at the scene of the crime, and that the plaintiff provided an alibi at the time of his detention.  The complaint alleges that the police department involved was relying solely on facial recognition technology in issuing the warrant for the plaintiff’s arrest.  Plaintiff filed suit against the police, the prosecutor, and the municipality involved for false arrest, false imprisonment and violation of his civil rights.  The lawsuit comes almost one year after New Jersey’s attorney general asked state prosecutors to stop using Clearview AI’s app and announced an ongoing investigation into it and similar facial recognition software.

The plaintiff is the third person reported to have been falsely arrested based on an incorrect facial recognition match.  Notably, in all three instances the individuals mistakenly identified by the software were Black men—underscoring racial bias concerns previously raised about the adoption of facial recognition technology by government bodies.  The Parks lawsuit names as defendants the officials and government entities involved in the plaintiff’s allegedly wrongful detention and imprisonment.  However, due to the doctrine of governmental immunity, which shields the government from liability for the actions of state or federal employees under certain circumstances, future litigations may also seek to bring direct claims against the manufacturers of such software – and one such manufacturer, Clearview, is certainly no stranger to privacy litigation.  Stay tuned.

In case you missed it, below is a summary of recent posts from CPW.  Please feel free to reach out if you are interested in additional information on any of the developments covered.

The California Consumer Privacy Act (“CCPA”) – 2020 Year in Review | Consumer Privacy World

Data Breach Litigations: 2020 Year in Review | Consumer Privacy World

Child’s Play: Federal Judge Shuts Down Privacy Litigation Brought Against Tech and Toy Companies Alleging They Violated Kids’ Privacy Rights | Consumer Privacy World

BREAKING NEWS: Federal Court Grants Preliminary Approval of First CCPA Settlement | Consumer Privacy World

Data Breach Litigation Without a Data Breach? Not So Fast Walmart Says… | Consumer Privacy World

Rounding out 2020 a federal court right before Christmas squelched a significant litigation concerning alleged violations of children’s privacy rights brought against the operator of a video sharing platform and channel operators (including Cartoon Network, Inc., DreamWorks Animation LLC, Hasbro Studios LLC, and Mattel, Inc., among others).  Hubbard, 2020 U.S. Dist. LEXIS 239936 (N.D. Cal. 2020).  The court held that plaintiff’s common law privacy and other state law claims were preempted by the Children’s Online Privacy Protection Act (“COPPA”).  However, it was not a complete win for defendants—the court allowed the plaintiffs “the opportunity to amend the complaint to allege facts showing that Defendants’ conduct amounts to more than solely a violation of COPPA’s requirements.”  Read on below.

Particularly with COVID-related shutdowns, the popularity of video sharing platforms has reached an all-time high this year.  For many of them, as was the case in this litigation, any individual can share videos through use of various social media accounts without registering with the video sharing platform itself.  Often, there is no age verification required to view videos.  As parents already know, it is commonplace for companies that manufacture and market products for children (e.g., toy companies) to upload content such as music videos, videos of kids unwrapping toys, and the like.  As parents may not already know, however, the operators of those video sharing platforms collect the personal information of users through the use of cookies.  These can track websites a user has visited and the amount of time spent on those websites, among other things.  This information is then utilized and sold for advertising purposes (as allegedly occurred in Hubbard with the personal information of kids viewing video content produced by defendants).

Which is where COPPA comes in.  What is COPPA?  Generally speaking, it provides that “[i]t is unlawful for an operator of a website or online service directed to children, or any operator that has actual knowledge that it is collecting personal information from a child, to collect personal information from a child in a manner that violates the regulations prescribed [by the Federal Trade Commission].”  15 U.S.C. § 6502(a).  COPPA applies to any operator of a commercial website or online service directed to children under thirteen years of age that collects, uses, and/or discloses personal information from children.

Additionally, the FTC has interpreted COPPA’s definition of “website or online service” to include individual channels on a general audience platform—according to the FTC, “content creators and channel owners” are both “standalone ‘operators’ under COPPA, subject to strict liability for COPPA violations.”  (emphasis added).  In order to determine whether a website or online service is “directed to children” the FTC is to “consider [the website’s or online service’s] subject matter, visual content, use of animated characters or child-oriented activities and incentives, music or other audio content, age of models, presence of child celebrities or celebrities who appeal to children, language or other characteristics of the Web site or online service, as well as whether advertising promoting or appearing on the Web site or online service is directed to children.”  16 CFR § 312.2.

COPPA contains a preemption provision: “[n]o State or local government may impose any liability for commercial activities or actions by operators in interstate or foreign commerce in connection with an activity or action described in this chapter that is inconsistent with the treatment of those activities or actions under this section.”  15 U.S.C. § 6502(d) (emphasis added).

Well, in Hubbard the parties agreed for purposes of defendants’ motion to dismiss that all of plaintiff’s claims are premised on violations of COPPA (although plaintiff argued that the claims allege independent state law violations fully consistent with, but not identical to, COPPA).  Defendants contended that plaintiff’s claims were preempted under Section 6502(d) of COPPA.

While plaintiff asserted that: (1) COPPA’s statutory text only preempts state laws that are “inconsistent” with COPPA, (2) the state law claims brought by plaintiff were not “obstacles” to the enforcement of COPPA, and (3) the state laws at issue did not make it “impossible” for defendants to comply with both COPPA and the state laws, the court disagreed.  The reason for this was simple: the court held that the plain text of the statute “clearly indicates Congress’s desire to expressly preempt plaintiffs’ state law claims.”

Another day, another privacy litigation bites the dust at the pleadings stage (although stay tuned to see if plaintiff amends the complaint).  While there is not yet a broadly applicable federal privacy law, the patchwork of federal privacy laws (including COPPA) will continue to impact the course of privacy litigations involving state law claims.  CPW will be there.  Stay tuned.

In November the high-end children’s clothing retailer Hanna Andersson agreed to pay $400,000 and implement new security measures as part of a class action settlement arising from litigation brought in the wake of a widespread data breach.  The lawsuit stems from a security incident where hackers accessed Hanna Andersson’s (“Hanna”) third-party e-commerce platform and gained access to customers’ personal information (“PII”).  The breach affected the PII (including names, shipping and billing addresses, payment card numbers, CVV codes, and expiration dates) of over 200,000 customers who made online purchases using the Hanna website between September 16 and November 11, 2019.  The hackers then exfiltrated and used this information to make fraudulent purchases using Hanna’s customers’ credit cards.  Hanna notified its customers of the breach on January 15, 2020.

Well, yesterday a federal court in California granted preliminary approval of the settlement and certified a settlement class under Federal Rule of Civil Procedure 23(a), consisting of “[a]ll individuals residing in the United States who made purchases on the Hanna Andersson website from September 16, 2019 to November 11, 2019.”  As part of this ruling the court found that “[t]he terms of the Settlement Agreement do not improperly grant preferential treatment to any individual or segment of the Settlement Class and fall within the range of possible approval as fair, reasonable, and adequate.”  Settlement Class Members who (1) wish to opt-out and exclude themselves from the Settlement Class or (2) object to the settlement must provide notice by April 28, 2021.  A final approval hearing was scheduled for June 17, 2021.

Assuming no objectors derail final approval next year, the Hanna Andersson case will be the first class action settlement under the CCPA.  Other CCPA litigants will be sure to look to the enhanced security requirements and monetary payout to the class as a starting point for CCPA settlements going forward.  CPW will be there to cover those developments as they occur.  Stay tuned.

The Lavarious Gardiner v. Walmart Inc. et al. case is anything but typical.

As a recap, back in July 2020, plaintiff filed a class action complaint against Walmart alleging that Walmart suffered a data breach that it never disclosed. As evidence of the breach, plaintiff claimed that the personal information associated with his Walmart account had been discovered on the dark web and presented the results of security scans performed on Walmart’s website, which allegedly show certain vulnerabilities.

In other words, plaintiff filed suit on the suspicion that Walmart’s systems had been breached, which Walmart denies.

On December 12, Walmart filed a Motion to Dismiss all of plaintiff’s claims (which include, among others, a claim under the California Consumer Privacy Act (“CCPA”) and a claim under the California Unfair Competition Law (“UCL”)), arguing that plaintiff failed to state viable claims. In addition to the specific arguments discussed below for the CCPA and UCL claims, the motion presents several additional arguments, including the allegation that plaintiff “cannot make the requisite showing of cognizable harm.”

Specifically with respect to the alleged CCPA violation, Walmart argues that plaintiff failed to allege when the breach occurred, which makes it impossible to determine if the CCPA even applies. The CCPA expressly provides that it is not operative until January 1, 2020, and it contains no express language establishing that it applies retroactively.[1] Walmart’s motion argues that the court should follow the precedent set by Judge Koh in In re Yahoo! Inc. Customer Data Sec. Breach Litig., which reached the conclusion that a claim under the recently amended California Customer Records Act (“CRA”) had to be dismissed where the plaintiffs failed to allege when the alleged violation occurred.[2]

The motion also urges the court to dismiss the UCL claim on the grounds (among others) that a UCL claim cannot be based on alleged violations of the CCPA. On its face, the CCPA states that “nothing in this title shall be interpreted to serve as the basis for a private right of action under any other law.”[3] Furthermore, during negotiations for the passage and the amendment of CCPA two separate California Senate Judiciary Committee reports acknowledged CCPA eliminates the possibility of a private right of action outside the narrow claim related to data breaches.[4]

In sum, the resolution by the court of the motion to dismiss could shed light on two interesting questions related to CCPA litigation: (1) whether CCPA could be read to apply to data breaches that occurred before its effective day but were subsequently discovered; and (2) whether CCPA may allow for a private right of action outside of the narrow provision on data breaches.

As always, CPW will be there to discuss additional developments in this and other data privacy litigation cases.

Stay tuned.

[1] See, Cal. Civ. Code § 1798.198(a).

[2] 2017 WL 3727318, at *38 (N.D. Cal. Aug. 30, 2017) (“Because the CCAC does not allege when Defendants discovered the 2013 Breach, the Court cannot determine which version of the CRA was in effect at the time that Defendants allegedly violated the CRA . . .”)

[3] See, Cal. Civ. Code § 1798.150(c).

[4] See, Cal. Sen. Judiciary Committee Report on 2018 CA A.B. 375, June 26, 2018, p. 22. (acknowledging before the passage of CCPA’s Section 1798.150(c) that CCPA “eliminates the ability of consumers to bring claims for violations of the Act under statutes such as the [UCL]”) and Cal. Sen. Rules Committee Report on CA A.B. 1355, September 12, 2019, p. 6. (stating that section 1798.150 is the “only enforcement mechanism made available to consumers pursuant to the CCPA . . .” )

On 24th December 2020, the UK and the EU finally agreed on the terms of a Brexit deal, including an interim solution to the issue of personal data transfers from the EU to the UK.  This interim arrangement gives some much-needed breathing space to European organizations with UK affiliates or that use UK service providers, and renewed hope for an eventual adequacy decision from the European Commission covering transfers of personal data to the UK.

The interim solution agreed allows companies and organisations that transfer personal data from the EU to the UK to continue to do so for up to six months, giving the European Commission time to approve an adequacy decision in favour of the UK (under Article 36(3) of Directive (EU) 2016/680 and Article 45(3) of Regulation (EU) 2016/679).

Continue Reading Brexit Updated: Interim Deal Reached on EU-UK Data Transfers