Artificial Intelligence

As companies begin to move beyond large language model (LLM)-powered assistants into fully autonomous agents—AI systems that can plan, take actions, and adapt without a human in the loop—legal and privacy teams must understand the use cases and the risks that come with them.

What is Agentic AI?
Agentic AI refers to AI systems—often built using LLMs but not limited to them—that can take independent, goal-directed actions across digital environments. These systems can plan tasks, make decisions, adapt based on results, and interact with software tools or systems with little or no human intervention.

Agentic AI often blends LLMs with other components like memory, retrieval, application programming interfaces (APIs), and reasoning modules to operate semi-autonomously. It goes beyond chat interfaces and can initiate real actions—inside business applications, internal databases, or even external platforms.

For example:

  • An agent that processes inbound email, classifies the request, files a ticket, and schedules a response—all autonomously.
  • A healthcare agent that transcribes provider dictations, updates the electronic health record, and drafts follow-up communications.
  • A research agent that searches internal knowledge bases, summarizes results, and proposes next steps in a regulatory analysis.
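The first example above can be reduced to a simple plan, act, record loop. The sketch below is a minimal illustration only: the `plan`, `execute`, and `Memory` names are hypothetical stand-ins, and a real agent would call an LLM planner, retrieval systems, and live APIs rather than the hard-coded stubs shown here.

```python
class Memory:
    """Stores results of prior steps so later decisions can adapt."""
    def __init__(self):
        self.events = []

    def record(self, event):
        self.events.append(event)


def plan(goal):
    # A real planner would ask an LLM to decompose the goal;
    # here the steps for the email-triage example are hard-coded.
    return ["classify_request", "file_ticket", "schedule_response"]


def execute(step, memory):
    # A real agent would invoke a software tool or API for each step
    # (e.g., a ticketing system or calendar), then log the outcome.
    result = f"{step}: done"
    memory.record(result)
    return result


def run_agent(goal):
    # The agentic loop: plan, act on each step, and accumulate state.
    memory = Memory()
    for step in plan(goal):
        execute(step, memory)
    return memory.events


print(run_agent("handle inbound email"))
```

Even in this toy form, the loop highlights what matters for legal and privacy review: each `execute` call is an action taken without human sign-off, and `Memory` is a growing store of potentially sensitive data.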

These systems aren’t just helping users write emails or summarize docs. In some cases, they’re initiating workflows, modifying records, making decisions, and interacting directly with enterprise systems, third-party APIs, and internal data environments. Here are a handful of issues that legal and privacy teams should be tracking now.

Continue Reading: What is Agentic AI? A Primer for Legal and Privacy Teams

With the entry into force of the AI Act (Regulation (EU) 2024/1689) in August 2024, a pioneering regulatory framework for AI was established.

On February 2, 2025, the first provisions of the AI Act became applicable, including the definition of an AI system, AI literacy obligations, and a limited number of prohibited AI practices. In line with Article 96 of the AI Act, the European Commission released detailed guidelines on the application of the AI system definition on February 6, 2025.

Continue Reading: Understanding the Scope of “Artificial Intelligence (AI) System” Definition: Key Insights From The European Commission’s Guidelines

The European Commission published its long-awaited Guidelines on Prohibited AI Practices (CGPAIP) on February 4, 2025, two days after the AI Act’s articles on prohibited practices became applicable.

The good news is that, in clarifying these prohibited practices (and those excluded from its material scope), the CGPAIP also addresses other, more general aspects of the AI Act.

(Updated May 12, 2025)

Since January, the federal government has moved away from comprehensive legislation on artificial intelligence (AI) and adopted a more muted approach to federal privacy legislation (as compared to 2024’s tabled federal legislation). Meanwhile, state legislatures forge ahead, albeit more cautiously than in preceding years.

As we previously reported, the Colorado AI Act (COAIA) will go into effect on February 1, 2026. In signing the COAIA into law last year, Colorado Governor Jared Polis (D) issued a letter urging Congress to develop a “cohesive” national approach to AI regulation that would preempt the growing patchwork of state laws. Absent a federal AI law, Governor Polis encouraged the Colorado General Assembly to amend the COAIA to address his concerns that its complex regulatory regime may drive technology innovators away from Colorado. Eight months later, the Trump Administration announced its deregulatory approach to AI, making federal AI legislation unlikely. At that time, the Trump Administration seemed to consider existing laws – such as Title VI and Title VII of the Civil Rights Act and the Americans with Disabilities Act, which prohibit unlawful discrimination – sufficient to protect against AI harms. Three months later, a March 28 Memorandum issued by the federal Office of Management and Budget directs federal agencies to implement risk management programs designed for “managing risks from the use of AI, especially for safety-impacting and rights-impacting AI.”

Continue Reading: States Shifting Focus on AI and Automated Decision-Making

Our very own Alan Friel, Julia Jacobson, Kyle Dull and Samuel Marticke will be featured in a series of upcoming CLE webinars designed to equip legal professionals with practical strategies for drafting enforceable terms of use, managing privacy risks in AI, and navigating the latest state data privacy laws.

Continue Reading: Join Us in April for Three Upcoming Strafford Webinars

In case you missed it, below are recent posts from Privacy World covering the latest developments on data privacy, security and innovation. Please reach out to the authors if you are interested in additional information.

FCC Seeks Comment on Quiet Hours and Marketing Messages

New Class Action Threat: TCPA Quiet Hours and


CA Legislators Charge That Privacy Agency AI Rulemaking Is Beyond Its Authority

Data Processing Evaluation and Risk Assessment Requirements Under

As we have covered, the public comment period closed on February 19 for the California Privacy Protection Agency (CPPA) draft regulations on automated decision-making technology, risk assessments, and cybersecurity audits under the California Consumer Privacy Act (the “Draft Regulations”). One comment that has surfaced (the CPPA has yet to publish the comments) stands out in particular: a letter penned by 14 Assembly Members and four Senators. These legislators essentially charged that the CPPA is over its skis, calling out “the Board’s incorrect interpretation that CPPA is somehow authorized to regulate AI.”

Continue Reading: CA Legislators Charge That Privacy Agency AI Rulemaking Is Beyond Its Authority

As we have previously detailed here, the latest generation of regulations under the California Consumer Privacy Act (CCPA), drafted by the California Privacy Protection Agency (CPPA), have advanced beyond public comment and are closer to becoming final. These include regulations on automated decision-making technology (ADMT), data processing evaluation and risk assessment requirements, and cybersecurity audits. Recently, Privacy World’s Alan Friel spoke at the California Lawyers Association’s Annual Privacy Summit at UCLA in Westwood, California (Go Bruins!) on the evaluation and assessment proposals. Separately, Privacy World’s Lydia de la Torre, a CPPA Board Member until recently, spoke on artificial intelligence laws and litigation. A transcript of Alan’s presentation follows:

Continue Reading: Data Processing Evaluation and Risk Assessment Requirements Under California’s Proposed CCPA Regulations


Light at the End of the Tunnel – Are You Ready for the New California Privacy and Cybersecurity Rules?
