Litigation

In case you missed it, below are recent posts from Privacy World covering the latest developments on data privacy, security and innovation. Please reach out to the authors if you are interested in additional information.

Balancing the Scales: How to Use “Legitimate Interest” to Process Personal Data “Fairly”

Court Ruling in China on Personal Data Transfer by International Hotel Chain

On October 9, 2024, the European Data Protection Board (EDPB) unveiled its much-anticipated Guidelines on using legitimate interest (Article 6(1)(f) of the GDPR) as a lawful basis for processing personal data. These Guidelines set out clear criteria for data controllers and will therefore be most welcome.

For years, legitimate interest has been among the go-to options for organizations, on the theory that it offers more flexibility (as long as you comply with the inherent requirements of its use). High-profile cases, like the Court of Justice of the European Union’s (CJEU) decision in Royal Dutch Tennis Association (KNLTB), acknowledged that commercial interests may qualify as legitimate, but also crystallized the tension around its use among supervisory authorities and privacy advocates.

Continue Reading Balancing the Scales: How to Use “Legitimate Interest” to Process Personal Data “Fairly”

In September 2024, the Guangzhou Internet Court released its ruling, originally issued in September 2023, on a civil dispute involving the transfer of personal data outside mainland China. The judgment is reportedly the first judicial decision on cross-border data transfers.

In this case, an international hotel group based in France, as the defendant, was found liable for illegally transferring the personal data of the plaintiff, an individual Chinese customer, to third parties outside of China for marketing purposes, without obtaining the customer’s separate consent prior to providing the data.

Continue Reading Court Ruling in China on Personal Data Transfer by International Hotel Chain

The ICO has fined the Police Service of Northern Ireland (“PSNI”) £750,000 in what it has described as the “most significant data breach that has ever occurred in the history of UK policing”[1]. The ICO imposed its largest ever fine on a public body following the unauthorised disclosure of an Excel spreadsheet containing the personal data of 9,483 police officers and staff. Given that the ICO’s stated policy for public authorities is to deter and remedy data breaches through reprimands and enforcement notices, with fines reserved for the most egregious cases, the level of fine imposed is, at first glance at least, surprising. The fine also comes with a word of warning to private sector data controllers: they would not have benefited from the reduction afforded to public sector enforcement and could have faced a fine of up to £17.5 million.

Background

On 3 August 2023, the PSNI received two Freedom of Information (FOI) requests from the website WhatDoTheyKnow (WDTK) requesting details of the number of officers and staff at each rank or grade. This data was compiled by the PSNI’s Workforce Planning Team by downloading and editing existing HR Excel spreadsheets. After preparation, the responsive spreadsheet was sent to the Head of the Workforce Planning Team for quality assurance checks. Once reviewed, it was forwarded to the FOI Decision Maker, who, due to technical issues, chose to disclose the Excel file in its original format rather than convert it to a Word document.

Continue Reading Data Breaches and Spreadsheets: How to Avoid Fines When Excelling

In case you missed it, below are recent posts from Privacy World covering the latest developments on data privacy, security and innovation. Please reach out to the authors if you are interested in additional information.

Join SPB’s Privacy Team for Two Strafford Webinars in December

Cancel Culture: New Requirements for Automatic Renewal and Other Negative

As we predicted a year ago, the Plaintiffs’ Bar continues to test, in courtrooms across the country, new legal theories attacking the use of Artificial Intelligence (AI) technology. Many of the complaints filed to date have included the proverbial kitchen sink: copyright infringement; privacy law violations; unfair competition; deceptive acts and practices; negligence; right of publicity, invasion of privacy and intrusion upon seclusion; unjust enrichment; larceny; receipt of stolen property; and failure to warn (typically, a strict liability tort).

A case recently filed in Florida federal court, Garcia v. Character Techs., Inc., No. 6:24-CV-01903 (M.D. Fla. filed Oct. 22, 2024) (Character Tech), is one to watch. Character Tech pulls from the product liability tort playbook in an effort to hold a business liable for its AI technology. While product liability is governed by statute, case law or both, the tort playbook generally involves a defective, unreasonably dangerous “product” that is sold and causes physical harm to a person or property. In Character Tech, the complaint alleges (among other claims discussed below) that the Character.AI software was designed in a way that was not reasonably safe for minors, that parents were not warned of the foreseeable harms arising from their children’s use of the Character.AI software, and that as a result a minor committed suicide. Whether and how Character Tech evolves past a motion to dismiss will offer valuable insights for developers of AI technologies.

Continue Reading Artificial Intelligence and the Rise of Product Liability Tort Litigation: Novel Action Alleges AI Chatbot Caused Minor’s Suicide

In a previous article, we considered the overlap between data protection claims and defamation claims, highlighting two high-profile cases that demonstrated it: Noel Anthony Clarke v Guardian News & Media Limited [2023] EWHC 2734 (KB) and Donald J. Trump v Orbis Business Intelligence Limited [2024] EWHC 173 (KB).

We now take

The Office of the Attorney General of Texas (“OAG”) announced a “first-of-its-kind healthcare generative AI” settlement with Pieces Technology, Inc. (“Pieces”). The settlement relates to the Texas OAG’s allegations that Pieces’ advertising and marketing claims about the accuracy of its generative artificial intelligence (GenAI) products violated the Texas Deceptive Trade Practices – Consumer Protection Act (“DTPA”), Tex. Bus. & Com. Code Ann. § 17.58. The Texas OAG states in its press release that the Pieces investigation is a “First-of-its-Kind Healthcare Generative AI Investigation.”

Continue Reading Texas Attorney General Settles with Healthcare AI Firm Over False Claims on Product Accuracy and Safety

In case you missed it, below are recent posts from Privacy World covering the latest developments on data privacy, security and innovation. Please reach out to the authors if you are interested in additional information.

2024 Data Privacy Thought Leadership Series

The Trade Practitioner Blog Features Post on Key Takeaways from the Proposed August 2024

SPB’s Gabrielle Martin authored a piece on the recently passed Illinois HB 3773. The bill amends the Illinois Human Rights Act to protect employees against discrimination from, and require transparency about, the use of AI in employment-related decisions. Head over to Employment Law Worldview for an in-depth discussion of the bill, including a contrast