In case you missed it, below are recent posts from Privacy World covering the latest developments on data privacy, security and innovation. Please reach out to the authors if you are interested in additional information.

CA Legislators Charge That Privacy Agency AI Rulemaking Is Beyond Its Authority

Data Processing Evaluation and Risk Assessment Requirements Under California’s Proposed CCPA Regulations

Light at the End of the Tunnel – Are You Ready for the New California Privacy and Cybersecurity Rules?

Join Team SPB this Spring for Three Engaging Webinars

As we have covered, the public comment period for the California Privacy Protection Agency (CPPA) draft regulations on automated decision-making technology, risk assessments and cybersecurity audits under the California Consumer Privacy Act (the “Draft Regulations”) closed on February 19, 2025. One comment in particular stands out among those that have surfaced (the CPPA has yet to publish the comments): a letter penned by 14 Assembly Members and four Senators. These legislators essentially charged the CPPA with getting out over its skis, calling out “the Board’s incorrect interpretation that CPPA is somehow authorized to regulate AI.”

Continue Reading CA Legislators Charge That Privacy Agency AI Rulemaking Is Beyond Its Authority

As we have previously detailed here, the latest generation of regulations under the California Consumer Privacy Act (CCPA), drafted by the California Privacy Protection Agency (CPPA), has advanced beyond public comment and is closer to becoming final. These include regulations on automated decision-making technology (ADMT), data processing evaluation and risk assessment requirements, and cybersecurity audits. Recently, Privacy World’s Alan Friel spoke on the evaluation and assessment proposals at the California Lawyers Association’s Annual Privacy Summit at UCLA in Westwood, California (Go Bruins!). Separately, Privacy World’s Lydia de la Torre, until recently a CPPA Board Member, spoke on artificial intelligence laws and litigation. A transcript of Alan’s presentation follows:

Continue Reading Data Processing Evaluation and Risk Assessment Requirements Under California’s Proposed CCPA Regulations

In case you missed it, below are recent posts from Privacy World covering the latest developments on data privacy, security and innovation. Please reach out to the authors if you are interested in additional information.

Light at the End of the Tunnel – Are You Ready for the New California Privacy and Cybersecurity Rules?

Join Team SPB this Spring for Three Engaging Webinars

Ch-ch-ch-ch-changes… for the UK Competition and Markets Authority

Join SPB’s Alan Friel for an Automated Data Mapping Webinar Hosted by Today’s General Counsel

The ReAIlity of What an AI System Is – Unpacking the Commission’s New Guidelines

Court: Training AI Model Based on Copyrighted Data Is Not Fair Use as a Matter of Law

A New Era: Trump 2.0 Highlights for Privacy and AI

Key Insights on President Trump’s New AI Executive Order and Policy & Regulatory Implications

After what seems like forever, the most recent (and last?) public comment period for the draft California Consumer Privacy Act (CCPA) regulations finally closed on February 19, 2025. (Read Privacy World coverage here and here.) 

Following an initial public comment period on an earlier draft, the formal comment period for the current version of the proposed CPPA regulations (Proposed Regulations) began on November 22, 2024. The Proposed Regulations include amendments to the existing CCPA regulations and new regulations on automated decision-making technology, profiling, cybersecurity audits, requirements for insurance companies and data practice risk assessments. The California Privacy Protection Agency (CPPA) may either submit a final rulemaking package to the California Office of Administrative Law (OAL, which confirms statutory authority) or modify the Proposed Regulations in response to comments received during the public comment period.

Continue Reading Light at the End of the Tunnel – Are You Ready for the New California Privacy and Cybersecurity Rules?

Join Team SPB’s Alan Friel, Julia Jacobson and Kyle Dull for three informative webinars addressing key topics including AI-driven decision-making technologies, the development of terms of service and privacy policies, and best practices for the responsible use of AI and associated risk management.

A limited number of complimentary passes are available to clients for each webinar. For more details on free passes, please reach out to Julia Jacobson.


Continue Reading Join Team SPB this Spring for Three Engaging Webinars

By repeating “ch-ch-ch-ch-changes” in his famous song, David Bowie was reportedly trying to mirror the stuttered steps of growth. January 2025 was a month full of changes for the UK Competition and Markets Authority (CMA). As with any changes, it is difficult to predict their effect precisely; only time will tell. Although we do not have a crystal ball, our longstanding and in-depth experience in UK competition law gives us unique insights into what to expect and, most importantly, how to adapt. In this update, we cover some of these key changes, including:

  • The entry into force of the Digital Markets, Competition and Consumers Act (DMCCA) and related updated guidance.
  • An anticipated reform of the UK concurrency regime to extend to consumer protection.
  • The exercise by the CMA of its new DMCCA powers to designate companies with Strategic Market Status (SMS).
  • Last but not least, perhaps the changes that grabbed the most headlines: the appointment of a new interim Chairperson at the CMA and the UK government’s “steer” to the CMA’s CEO.

Continue Reading Ch-ch-ch-ch-changes… for the UK Competition and Markets Authority

Join Alan Friel for an insightful webinar, hosted by Today’s General Counsel, where industry experts will discuss the critical role of automated data mapping in effective data governance. The panel will cover strategies for creating defensible data maps that safeguard sensitive information, and will be moderated by Rebecca Perry, Managing Director of GTM Strategy & Operations for Privacy & Data Governance Solutions at Exterro.

Continue Reading Join SPB’s Alan Friel for an Automated Data Mapping Webinar Hosted by Today’s General Counsel

The European Commission has recently released its Guidelines on the Definition of an Artificial Intelligence System under the AI Act (Regulation (EU) 2024/1689). The guidelines were adopted in parallel with the Commission’s guidelines on prohibited AI practices (which also entered into application on February 2), with the goal of providing businesses, developers and regulators with further clarification on the AI Act’s provisions.

Key Takeaways for Businesses and AI Developers

Not all AI systems are subject to strict regulatory scrutiny. Companies developing or using AI-driven solutions should assess their systems against the AI Act’s definition. With these guidelines (and those on prohibited practices), the European Commission is delivering much-needed clarification on a core element of the Act: what is an AI system?

The AI Act defines an AI system as a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment. The system, for explicit or implicit objectives, infers from input data how to generate outputs – such as predictions, content, recommendations or decisions – that can influence physical or virtual environments.

One of the most significant clarifications in the guidelines is the distinction between AI systems and “traditional software.”

  • AI systems go beyond rule-based automation and require inferencing capabilities.
  • Traditional statistical models and basic data processing software, such as spreadsheets, database systems and manually programmed scripts, do not qualify as AI systems.
  • Simple prediction models that use basic statistical techniques (e.g., forecasting based on historical averages) are also excluded from the definition.

This distinction ensures that compliance obligations under the AI Act apply only to AI-driven technologies, leaving other software solutions outside of its scope.

Continue Reading The ReAIlity of What an AI System Is – Unpacking the Commission’s New Guidelines

In what may turn out to be an influential decision in the burgeoning sphere of AI technology, the court in Thomson Reuters v. Ross Intelligence ruled that creating short summaries of law to train Ross Intelligence’s artificial intelligence legal research application not only infringes Thomson Reuters’ copyrights as a matter of law but also that the copying is not fair use. Critically, it did so (1) despite finding that a number of these issues were for the jury and (2) even though only two of the four fair-use factors weighed in favor of the court’s ruling. It should also be noted that the technology at issue in this case is not “generative AI” (like ChatGPT). Nevertheless, this ruling is seemingly a strong victory for rightsholders against the AI “revolution.”

Find out more from SPB’s Joe Meckes and Joseph Grasser in their recent article, “Court: Training AI Model Based on Copyrighted Data Is Not Fair Use as a Matter of Law,” hosted on the Global IP & Technology Law Blog.

Disclaimer: While every effort has been made to ensure that the information contained in this article is accurate, neither its authors nor Squire Patton Boggs accepts responsibility for any errors or omissions. The content of this article is for general information only and is not intended to constitute or be relied upon as legal advice.