Compliance

Join (or re-join) Lyn Lustig, Associate General Counsel, Global Partners LP, and Julia Jacobson, Partner, Squire Patton Boggs, on July 24th at 1:00 pm ET, for a rescheduled webinar about the tricks and traps of effectively incorporating terms and conditions into commercial agreements, particularly technology agreements, online agreements and adhesive contracts. In this 90-minute

In a much-awaited decision, the U.S. Supreme Court (Supreme Court) has ruled that in civil enforcement proceedings under the Telephone Consumer Protection Act (TCPA), whether brought by the Government or in private civil suits, the Federal district courts (District Courts) are not bound by the Federal Communications Commission’s (FCC) interpretation of the TCPA. Rather, the

As companies begin to move beyond large language model (LLM)-powered assistants into fully autonomous agents—AI systems that can plan, take actions, and adapt without a human in the loop—legal and privacy teams must be aware of the use cases and the risks that come with them.

What is Agentic AI?
Agentic AI refers to AI systems—often built using LLMs but not limited to them—that can take independent, goal-directed actions across digital environments. These systems can plan tasks, make decisions, adapt based on results, and interact with software tools or systems with little or no human intervention.

Agentic AI often blends LLMs with other components like memory, retrieval, application programming interfaces (APIs), and reasoning modules to operate semi-autonomously. It goes beyond chat interfaces and can initiate real actions—inside business applications, internal databases, or even external platforms.
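
To make that pattern concrete, here is a minimal, illustrative sketch of an agentic loop in Python. It is not any vendor's implementation: call_llm is a hypothetical placeholder for a hosted model call, and the "tools" are plain functions standing in for the memory, retrieval, and API components described above. The point is the structure: the model chooses an action, the system executes it, and the result is fed back in until the agent decides it is done.

```python
from typing import Callable

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to a hosted model; not a real API."""
    # First turn: "decide" to consult the knowledge base; later turns: finish.
    if "ACTION" not in prompt:
        return "search_knowledge_base: data retention policy"
    return "done: draft summary ready for human review"

# Tool registry: the only actions this agent is allowed to take on its own.
TOOLS: dict[str, Callable[[str], str]] = {
    "search_knowledge_base": lambda query: f"3 documents found for '{query}'",
    "file_ticket": lambda summary: f"ticket created: {summary}",
}

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    """Plan-act-observe loop: the model picks a tool, its output feeds back in."""
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        decision = call_llm("\n".join(history))      # model chooses the next action
        tool_name, _, argument = decision.partition(":")
        tool = TOOLS.get(tool_name.strip())
        if tool is None:                             # no recognized tool -> stop
            history.append(f"FINAL: {decision}")
            break
        observation = tool(argument.strip())         # execute the action
        history.append(f"ACTION: {decision} -> OBSERVATION: {observation}")
    return history

if __name__ == "__main__":
    for step in run_agent("Summarize our data retention obligations"):
        print(step)
```

From a governance perspective, the tool registry and the step limit in a loop like this are the natural places to impose controls such as scope restrictions, logging, and human-approval gates.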

For example:

  • An agent that processes inbound email, classifies the request, files a ticket, and schedules a response—all autonomously (see the sketch after this list).
  • A healthcare agent that transcribes provider dictations, updates the electronic health record, and drafts follow-up communications.
  • A research agent that searches internal knowledge bases, summarizes results, and proposes next steps in a regulatory analysis.
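
As a simplified, hypothetical illustration of the first bullet above, the sketch below strings together an email-triage flow. The helpers classify_request, file_ticket, and schedule_response are invented stand-ins for an LLM classifier, a ticketing API, and a scheduler; the key point is that every step runs without a human approving it.

```python
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    body: str

def classify_request(email: Email) -> str:
    # A real agent would ask an LLM to label the message; a keyword rule stands in here.
    return "data_subject_request" if "my data" in email.body.lower() else "general_inquiry"

def file_ticket(category: str, email: Email) -> str:
    # Stand-in for a call to a ticketing system's API.
    return f"TICKET-001 [{category}] from {email.sender}"

def schedule_response(ticket: str) -> str:
    # Stand-in for a scheduling or outbound-mail API.
    return f"auto-reply queued for {ticket} within 48 hours"

def handle_inbound(email: Email) -> str:
    """Runs classification, ticketing, and scheduling with no human approval step."""
    category = classify_request(email)
    ticket = file_ticket(category, email)
    return schedule_response(ticket)

if __name__ == "__main__":
    print(handle_inbound(Email("customer@example.com", "Request", "Please delete my data.")))
```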

These systems aren’t just helping users write emails or summarize docs. In some cases, they’re initiating workflows, modifying records, making decisions, and interacting directly with enterprise systems, third-party APIs, and internal data environments. Here are a handful of issues that legal and privacy teams should be tracking now. Continue Reading What is Agentic AI? A Primer for Legal and Privacy Teams

With the entry into force of the AI Act (Regulation 2024/1689) in August 2024, a pioneering regulatory framework for AI was established.

On February 2, 2025, the first provisions of the AI Act became applicable, including the definition of an AI system, AI literacy obligations, and a limited number of prohibited AI practices. In line with Article 96 of the AI Act, the European Commission released detailed guidelines on the application of the definition of an AI system on February 6, 2025. Continue Reading Understanding the Scope of “Artificial Intelligence (AI) System” Definition: Key Insights From The European Commission’s Guidelines

The rulemaking process on California’s Proposed “Regulations on CCPA Updates, Cybersecurity Audits, Risk Assessments, Automated Decisionmaking Technology, and Insurance Companies” (2025 CCPA Regulations) has been ongoing since November 2024. With the one-year statutory period to complete the rulemaking or be forced to start anew on the horizon, the California Privacy Protection Agency (CPPA) voted unanimously on May 1 to move a revised set of draft regulations forward to public comment, which opened on May 9 and closes at 5:00 pm Pacific on June 2, 2025. The revisions cut back on the regulation of Automated Decisionmaking Technology (ADMT), eliminate the regulation of AI, address potential constitutional deficiencies in the risk assessment requirements, and somewhat ease cybersecurity audit obligations. The CPPA projects that this substantially revised draft will save California businesses approximately $2.25 billion in the first year of implementation, a 64% savings from the projected cost of the prior draft. Continue Reading Revised Draft California Privacy Regulations Lessen Impact on Business

On April 14, 2025, the European Data Protection Board (EDPB) released guidelines detailing how to process personal data using blockchain technologies in compliance with the General Data Protection Regulation (GDPR) (Guidelines 02/2025 on processing of personal data through blockchain technologies). These guidelines highlight certain privacy challenges and provide practical recommendations. Continue Reading From Blocks to Rights: Privacy and Blockchain in the Eyes of the EU Data Protection Authorities

The European Commission published its long-awaited Guidelines on Prohibited AI Practices (CGPAIP) on February 4, 2025, two days after the AI Act’s articles on prohibited practices became applicable.

The good news is that in clarifying these prohibited practices (and those excluded from its material scope), the CGPAIP also addresses other more general aspects

(Updated May 12, 2025)

Since January, the federal government has moved away from comprehensive legislation on artificial intelligence (AI) and adopted a more muted approach to federal privacy legislation (as compared to 2024’s tabled federal legislation). Meanwhile, state legislatures forge ahead – albeit more cautiously than in preceding years.

As we previously reported, the Colorado AI Act (COAIA) will go into effect on February 1, 2026. In signing the COAIA into law last year, Colorado Governor Jared Polis (D) issued a letter urging Congress to develop a “cohesive” national approach to AI regulation preempting the growing patchwork of state laws. Absent a federal AI law, Governor Polis encouraged the Colorado General Assembly to amend the COAIA to address his concerns that the COAIA’s complex regulatory regime may drive technology innovators away from Colorado. Eight months later, the Trump Administration announced its deregulatory approach to AI regulation, making federal AI legislation unlikely. At that time, the Trump Administration seemed to consider existing laws – such as Title VI and Title VII of the Civil Rights Act and the Americans with Disabilities Act, which prohibit unlawful discrimination – as sufficient to protect against AI harms. Three months later, a March 28 Memorandum issued by the federal Office of Management and Budget directs federal agencies to implement risk management programs designed for “managing risks from the use of AI, especially for safety-impacting and rights-impacting AI.” Continue Reading States Shifting Focus on AI and Automated Decision-Making

SPB’s Tokyo/Shanghai Partner Scott Warren, along with New York Partner Julia Jacobson, will be speaking on and moderating panels at the Society for the Policing of Cyberspace (POLCYB) Global Cybercrime Management Executive Roundtable in Vancouver, Canada, on May 30, as well as at the LegalPlus 8th Annual Shanghai International Arbitration & Corporate Fraud Summit. Continue Reading Join Us This Summer in Vancouver and Shanghai for Insights on Cybercrime and Cross Border Data Transfers

The Ministry of Electronics and Information Technology (MeitY) has recently released the much-awaited draft of the Digital Personal Data Protection Rules, 2025 (Rules) for public consultation. These proposed Rules provide important insights into the upcoming implementation of India’s new data protection law, which has been under development for some time.

The enactment of the Digital