Artificial Intelligence

Inside AI Policy reports that a survey of U.S. office workers indicates that, across industries, approximately half of respondents said they do or would use AI contrary to company policy to make their jobs easier, including 42% of security-sector workers. The study, published on August 20, 2025 by CalypsoAI, found that while 87% of respondents indicated their employers had AI governance policies, 52% are not prepared to follow the restrictions; 28% admitted to submitting sensitive or proprietary data or documents so AI could complete a task; 29% used AI to generate something sent without, or with minimal, review; and 25% used AI without knowing whether the use case was permissible. The results for highly regulated industries are no better, and in some cases worse. For instance, 60% of employees in financial services and banking indicated that they use AI tools regardless of company policy, and 36% “don’t feel guilty about it.”

Continue Reading Rogue AI Usage and High-risk Data Processing Runs Rampant

Recently, Squire Patton Boggs Tokyo and Shanghai Partner Scott Warren was cited in an article in Lexology Pro, written by Victoria Hudgins, comparing the US and China AI action plans. Both plans set largely aspirational goals while establishing a framework for AI security. Read the article (subscription required).

In addition, Scott will be speaking

On June 30, 2025, the California Civil Rights Council (CRC) secured final approval for regulations addressing employment discrimination resulting from the use of artificial intelligence and other algorithms it collectively refers to as Automated-Decision Systems. Shortly after that, on July 24, 2025, the California Privacy Protection Agency Board approved its own long-anticipated regulations on cybersecurity

The EU AI Act is entering into force in stages. While most of its provisions will not apply until August 2026, key requirements for general-purpose AI (GPAI) models took effect much earlier, starting on August 2, 2025.

In anticipation of this earlier milestone, the Code of Practice for General-Purpose AI Models was published on the European Commission’s website on July 10, 2025. It is a voluntary tool, prepared by independent experts in a multi-stakeholder process involving nearly 1,000 participants (general-purpose AI model providers, downstream providers, industry organizations, civil society, rightsholders and other entities, as well as academia and independent experts). The Code represents an initial effort to translate the AI Act’s GPAI-specific obligations into practical measures.

It focuses on three central areas (Transparency, Copyright, and Safety and Security) and offers a framework that developers of GPAI models may rely on to demonstrate responsible practices in line with the EU’s evolving regulatory approach.

Continue Reading The EU’s Voluntary GPAI Code: Reflecting on Strategic Choices in an Evolving Regulatory Context

On July 23, 2025, the Trump Administration released Winning the Race: America’s AI Action Plan, signaling a decisive departure from the AI governance strategy set forth by the Biden Administration’s Executive Order 14110 (November 2023). While the previous framework focused on risk mitigation, civil rights, and regulatory oversight—particularly of advanced AI systems—the new plan

As companies begin to move beyond large language model (LLM)-powered assistants into fully autonomous agents—AI systems that can plan, take actions, and adapt without a human in the loop—legal and privacy teams must be aware of the use cases and the risks that come with them.

What is Agentic AI?
Agentic AI refers to AI systems—often built using LLMs but not limited to them—that can take independent, goal-directed actions across digital environments. These systems can plan tasks, make decisions, adapt based on results, and interact with software tools or systems with little or no human intervention.

Agentic AI often blends LLMs with other components like memory, retrieval, application programming interfaces (APIs), and reasoning modules to operate semi-autonomously. It goes beyond chat interfaces and can initiate real actions—inside business applications, internal databases, or even external platforms.

For example:

  • An agent that processes inbound email, classifies the request, files a ticket, and schedules a response—all autonomously.
  • A healthcare agent that transcribes provider dictations, updates the electronic health record, and drafts follow-up communications.
  • A research agent that searches internal knowledge bases, summarizes results, and proposes next steps in a regulatory analysis.
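The plan-act-adapt loop that distinguishes these agents from a single chat completion can be sketched in a few lines. The sketch below is purely illustrative: the function names are hypothetical, and the stubs stand in for real LLM calls and enterprise APIs (the first bullet's email-triage example, under those assumptions).

```python
# Minimal sketch of an agentic loop for the email-triage example above.
# All function names are illustrative stand-ins, not a real product API.

def classify_request(text: str) -> str:
    """Stand-in for an LLM call that classifies an inbound email."""
    return "support" if "help" in text.lower() else "other"

def file_ticket(category: str, text: str) -> dict:
    """Stand-in for a call to a ticketing-system API."""
    return {"id": 1, "category": category, "body": text}

def run_email_agent(email_body: str) -> dict:
    """Plan -> act -> adapt, with no human in the loop by default."""
    category = classify_request(email_body)       # decide (LLM reasoning)
    ticket = file_ticket(category, email_body)    # act on an external system
    ticket["needs_review"] = category == "other"  # adapt: escalate edge cases
    return ticket
```

Even in this toy form, the loop shows why such systems raise the concerns discussed below: the agent writes to an external system (the ticket) based on its own classification, so any review happens after the action, not before it.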

These systems aren’t just helping users write emails or summarize docs. In some cases, they’re initiating workflows, modifying records, making decisions, and interacting directly with enterprise systems, third-party APIs, and internal data environments. Here are a handful of issues that legal and privacy teams should be tracking now.

Continue Reading What is Agentic AI? A Primer for Legal and Privacy Teams

With the entry into force of the AI Act (Regulation 2024/1689) in August 2024, a pioneering regulatory framework for AI was established.

On February 2, 2025, the first provisions of the AI Act became applicable, including the AI system definition, AI literacy and a limited number of prohibited AI practices. In line with article 96 of the AI Act, the European Commission released detailed guidelines on the application of the definition of an AI system on February 6, 2025.

Continue Reading Understanding the Scope of “Artificial Intelligence (AI) System” Definition: Key Insights From The European Commission’s Guidelines

The European Commission published its long-awaited Guidelines on Prohibited AI Practices (CGPAIP) on February 4, 2025, two days after the AI Act’s articles on prohibited practices became applicable.

The good news is that, in clarifying these prohibited practices (and those excluded from its material scope), the CGPAIP also addresses other more general aspects of the AI Act, providing much-needed legal certainty to all authorities, providers and deployers of AI systems/models in navigating the regulation.

Continue Reading The European Commission’s Guidance on Prohibited AI Practices: Unraveling the AI Act

(Updated May 12, 2025)

Since January, the federal government has moved away from comprehensive legislation on artificial intelligence (AI) and adopted a more muted approach to federal privacy legislation (as compared to 2024’s tabled federal legislation). Meanwhile, state legislatures forge ahead – albeit more cautiously than in preceding years.

As we previously reported, the Colorado AI Act (COAIA) will go into effect on February 1, 2026. In signing the COAIA into law last year, Colorado Governor Jared Polis (D) issued a letter urging Congress to develop a “cohesive” national approach to AI regulation preempting the growing patchwork of state laws. Absent a federal AI law, Governor Polis encouraged the Colorado General Assembly to amend the COAIA to address his concerns that the COAIA’s complex regulatory regime may drive technology innovators away from Colorado. Eight months later, the Trump Administration announced its deregulatory approach to AI, making federal AI legislation unlikely. At that time, the Trump Administration seemed to consider existing laws – such as Title VI and Title VII of the Civil Rights Act and the Americans with Disabilities Act, which prohibit unlawful discrimination – as sufficient to protect against AI harms. Three months later, a March 28 Memorandum issued by the federal Office of Management and Budget directed federal agencies to implement risk management programs designed for “managing risks from the use of AI, especially for safety-impacting and rights-impacting AI.”

Continue Reading States Shifting Focus on AI and Automated Decision-Making

Our very own Alan Friel, Julia Jacobson and Kyle Dull will be featured in a series of upcoming CLE webinars designed to equip legal professionals with practical strategies for drafting enforceable terms of use, managing privacy risks in AI, and navigating the latest state data privacy laws.

Continue Reading Join Us in April for Three Upcoming Strafford Webinars