In 2025, India’s approach to AI shifted significantly from “Will AI change the way business is done?” to “What is the best way to adopt it to enable business expansion?” Guided by the principles of People, Planet, and Progress, “Safe and trusted AI for all” has become the motto governing India’s approach to AI. Evolving digital infrastructure, sector-specific regulation, a techno-legal philosophy, the strength of the Global South, and a strong inclusion narrative are the cornerstones of India’s AI journey.

AI and Global Governance

Several basic models for AI governance are emerging globally.

The European Union

The EU AI Act establishes, in essence, a single horizontal regime for AI. It classifies systems according to the level of risk posed to the end user: unacceptable, high, limited, and minimal. It prohibits certain specific practices outright and imposes strict documentation, compliance, and penalty regimes on high-risk AI. For instance, Chapter II, Article 5 of the EU AI Act provides that deploying subliminal, manipulative, or deceptive techniques to distort behavior and impair informed decision-making, causing significant harm, is prohibited and classified as unacceptable under the Act.

United States

The United States currently lacks comprehensive federal AI legislation, except where AI affects national security. However, several states have enacted their own AI laws. Colorado, Utah, Illinois, California, and Texas all have AI Acts that tend to focus on bias, discrimination, and civil rights in hiring and employment, as well as profiling. Much of this is already prohibited by various state privacy laws. Notably, President Trump has just issued an Executive Order aimed at preventing states from passing further AI-focused laws. It will be interesting to see how this Order affects state legislation going forward.

APAC

Many APAC jurisdictions aim to balance AI-driven innovation with safeguards against potential abuse.

  • South Korea: new AI provisions align closest with the EU’s risk-based approach. 
  • China: legislation emphasizes the risk of social unrest and other national security threats posed by generative AI, although courts have also weighed in on private-sector misuse.
  • Japan: the focus is on allowing free development, but it also recently established a Cabinet-level office to monitor AI deployment and usage to adapt its policies as needed. 
  • APEC: adopted the APEC AI Initiative (2026-30) on November 4, 2025, which prioritizes AI infrastructure development in the region over restrictions on AI. 
  • Australia: released its National AI Plan on December 2, 2025, which moves away from an EU-style approach and instead emphasizes regulating AI under existing laws, supported by a newly formed regulatory body, the National AI Safety Institute.

India’s 2025 AI Governance Guidelines

India’s AI Governance Guidelines reflect its development priorities, diversity, and evolving digital capabilities. They are structured into four main sections:

  • An action plan outlining short-, medium-, and long-term actions, such as creating institutions and incident databases, launching sandboxes, and amending existing legislation;
  • Practical guidelines for entities to adopt;
  • Six pillars of governance: a set of recommendations under which India aims to increase data access, propose targeted amendments, devise risk tools tailored to India, define liability, and develop AI-focused institutions. They span:
    • Infrastructure
    • Capacity Building
    • Policy and Regulation
    • Risk Mitigation
    • Accountability
    • Institutions
  • Seven “sutras,” a set of guiding principles for AI development:
    • Trust is the Foundation
    • People First
    • Innovation Over Restraint
    • Fairness & Equity
    • Accountability
    • Understandable by Design
    • Safety, Resilience & Sustainability

What’s distinctive about India’s approach?

Several factors seem to set India’s strategy apart from others. For example, India has already been regulating AI through existing laws, new guidelines, and sector-specific rules, such as those issued by the Reserve Bank of India (RBI) and the Securities and Exchange Board of India (SEBI), rather than a single or overarching AI statute.

With innovation a core focus, India intends to follow a “hands-off” approach to encourage new AI development while addressing harms through existing laws. The country’s strategy is to leverage AI for economic growth by focusing on the application of AI and using existing laws for specific issues like data privacy and discrimination.

The principle behind the sutras is that innovation should take precedence over preventative restrictions, while obligations related to safety, accountability, and fairness are maintained. The key word is accountability. India’s goal is to strike a more direct balance between risk and growth than, perhaps, the obligations imposed under the EU AI Act.

India has already rolled out various extensive and unique Digital Public Infrastructure (DPI) platforms, which it hopes to leverage in this implementation. These include digital solutions such as Aadhaar (equivalent to a citizen ID or social security number), UPI, DigiLocker, and various data exchanges. This approach intends to utilize existing, shared digital rails for identity verification, payments, and data exchange to efficiently deliver services in crucial sectors such as healthcare, agriculture, education, and welfare, with a particular focus on low-income and rural communities.

The guidance emphasizes the importance of embedding legal requirements directly in AI system architecture. Examples include privacy-enhancing technologies, content authentication standards (like C2PA-style watermarking), and DEPA for AI training. This idea of “compliance-by-design,” similar to the “privacy-by-design” principle codified in the GDPR, goes beyond the obligations stated in many other AI frameworks to date.

Further, India plans to set up a Technology Policy Expert Committee, an AI Governance Group (AIGG) for high-level collaboration and coordination, and a dedicated AI Safety Institute to test models, set standards, and participate in international safety networks.

The guidelines provide for a risk assessment and classification system focused on national security issues and harms to vulnerable groups (for example, deepfakes targeting women, child safety, and language and caste bias) rather than generic risk grids. This social-context approach is thought to be a better fit for India’s population and diversity than many one-size-fits-all models from around the world.

India focuses on voluntary commitments, self-certifications, transparency reports, and third-party audits before imposing strict obligations on AI, supported by stronger incentives like sandbox access, reputational badges, and targeted support. This systematic use of incentives to promote voluntary protections appears more prominent than in many other regimes.

India’s role in the governance of AI worldwide

It appears India aims to leverage AI governance as a diplomatic tool, particularly within the Global South, while also fostering local economic growth. The recommendations place India’s balanced, DPI-enabled, inclusion-first model at the center of global discussions, calling for active engagement in multilateral forums such as the G20, UN, OECD, and other similar bodies.

Through these efforts, India seeks to shape international standards in areas like child safety, content authentication, and safety testing – supported by initiatives such as hosting an AI Impact Summit and joining networks of AI safety institutes. At the same time, India aims to demonstrate that open, interoperable platforms can deliver solutions that can be adopted widely. A combination of normative leadership (e.g., guiding principles and safety norms) and practical infrastructure (DPI, the AI Mission, GPUs, and AI Kosh datasets) is what sets India apart in its approach to AI governance.

Disclaimer: While every effort has been made to ensure that the information contained in this article is accurate, neither its authors nor Squire Patton Boggs accepts responsibility for any errors or omissions. The content of this article is for general information only, and is not intended to constitute or be relied upon as legal advice.

Stay Ahead on Consumer Privacy News

Not a subscriber yet? Subscribe here to be among the first to receive timely updates on the fast-moving world of data privacy, security, and innovation—delivered straight to your inbox.

Looking for deeper insights and expert analysis? You can also subscribe here to our privacy attorneys’ marketing communications for thought leadership and rich content when you need a more comprehensive perspective.

In case you missed it, below are recent posts from Privacy World covering the latest developments on data privacy, security and innovation. Please reach out to the authors if you are interested in additional information.

Germany Implements NIS2: Registration portal will open on January 6, 2026

2025 State Privacy Roundup: Key Trends and California Developments to Watch in 2026

2025 Mass Arbitration Year in Review

Extra Large PII-zza: Court Allows California Privacy Class Action to Proceed for Use of AI Phone Call Assistant

California Federal Court Urges California Legislature to Clean Up “Total Mess” of State Wiretap Act, Dismisses Claim for Website Tracking

Federal Court Dismisses “Trap and Trace” Lawsuit for Plaintiff’s Lack of Injury

Federal Court Holds That Button-Click Data From Public Website Can Disclose Patient Status in Violation of the ECPA

Second Circuit Undercuts Plaintiffs’ Threats of Mass Arbitration Fees, Often Used In Asserting Privacy Claims

Attention Privacy World Readers!  Do you need CLE? We have some options for you!

The 2025 legislative cycle marked a pivotal year in US privacy law, defined not only by continued nationwide expansion into Artificial Intelligence (AI) governance, children’s and teen privacy and online safety, and emerging data categories, but also by a major restructuring of California’s privacy enforcement infrastructure. California’s introduction of the Delete Request and Opt-out Platform (DROP), the nation’s first centralized, statewide platform for managing consumer deletion requests, combined with sweeping reforms to the Consumer Privacy Fund, will materially increase CalPrivacy and attorney general enforcement capacity on a recurring, self-replenishing basis. These developments accompany completion of a far-reaching rulemaking package that imposes detailed obligations for Data Protection Impact Assessments (DPIAs, or risk assessments), cybersecurity governance, and Automated Decision-Making Technology (ADMT). At the same time, states beyond California have enacted targeted statutory reforms addressing neurotechnology, data-broker practices, and minors’ online safety, underscoring that – absent federal preemption – state-driven models will continue to shape the national privacy compliance landscape in 2026. By January 2026, there will be 20 state consumer privacy laws in effect, several with unique material obligations. We detail what enterprises need to prepare for in 2026 and explain why we believe next year will be a watershed period for consumer privacy in the US.

Continue Reading 2025 State Privacy Roundup: Key Trends and California Developments to Watch in 2026

A Domino’s customer may proceed in her putative class action for violations of the California Invasion of Privacy Act (CIPA) against ConverseNow for its provision of an AI virtual assistant that processes restaurant telephone orders. In Taylor v. ConverseNow Technologies, Inc., Case No. 25-cv-00990-SI, 2025 WL 2308483 (N.D. Cal. Aug. 11, 2025), the Court held that a communication software provider that could potentially improve its software by collecting communications was plausibly violating CIPA even though it had an agreement with the business receiving the communications. This ruling serves as a cautionary note both to software companies and – because of potential aiding and abetting liability – to companies that use those technologies.

Case Background

According to the complaint, ConverseNow provides AI voice assistants to clients like Domino’s to answer calls and process orders. Plaintiff Eliza Taylor alleged she called Domino’s to place a delivery order, was routed to ConverseNow’s virtual assistant without notice, and then provided personally identifiable information (including her payment information and delivery address). Taylor alleged “ConverseNow has the capability to use caller communications” to improve its products and develop new ones. Taylor brought claims under CIPA, seeking statutory damages for herself and a putative class.

CIPA is an anti-wiretapping statute that imposes criminal and civil penalties. Cal. Penal Code §§ 631(a), 632(a).  Section 631(a) prohibits, among other things, (1) unauthorized wiretapping, (2) intercepting the contents of any wireline communication, or (3) using or attempting to use any information so obtained. Section 637.2 authorizes a private right of action and imposes statutory damages of at least $5,000 per violation without requiring proof of actual damages.

Court Adopts Capability Test To Uphold CIPA Claims Against Software Provider

Critically, CIPA exempts parties to a conversation from liability. In other words, both Taylor and Domino’s could “intercept” communications with each other or use a tape recorder to record communications. ConverseNow moved to dismiss on this basis, arguing that its AI voice assistant was simply an extension of its client, Domino’s, who was a party to the conversation.

The Taylor Court disagreed and held that ConverseNow was an intercepting third party and not covered by the exemption for Domino’s. The Court discussed two different approaches adopted by California federal courts: the “extension” test and the “capability” test. 

Under the extension test, a software provider is not liable under CIPA where it is a tool used by a party to the communication (akin to a tape recorder) and does not use communication for the software provider’s own purposes. 

Under the capability test, whether the software provider did use the communication for the software provider’s own purposes is irrelevant; the inquiry is whether the software provider had the capability to use the communication for its own purposes. 

Citing “[a] growing number of district courts,” the Taylor Court adopted the capability test as the better interpretation of CIPA. Applying the capability test, the Court held that Taylor sufficiently alleged ConverseNow is a third party based on its capability and actual use of data from customers’ calls “to improve its own product.”

After concluding that ConverseNow was a third party to the conversation, the Court quickly disposed of the defendant’s other CIPA arguments. The Court found that there were sufficient allegations of “interception” because Taylor did not realize her phone call was connected to a party other than Domino’s. Taylor’s complaint also satisfied the intent element of CIPA because it alleged that ConverseNow’s business model depended on recording conversations. Finally, the Court held that plaintiff alleged a “confidential” conversation for purposes of a Section 632 claim by alleging disclosure of her personally identifiable information and personal financial information.

Conclusion

Not all decisions addressing CIPA claims have reached similar outcomes – many in fact have been dismissed. However, as this decision demonstrates, CIPA poses significant risk for software providers and website operators, particularly when it comes to training AI models using real human interactions. Moreover, all businesses using or developing AI-powered platforms to provide services to customers should take this ruling into consideration. Although AI software providers may primarily offer tools for their customers to use, state wiretapping laws like CIPA can extend liability to the providers themselves based on the software’s capabilities. Given the proliferation of AI across industries – and state efforts to regulate its use – additional litigation activity is anticipated going into 2026. Privacy World will keep you in the loop on further developments in this space. Stay tuned.

This fall, a federal court in California granted summary judgment in favor of a website operator for alleged violations of the California Invasion of Privacy Act (CIPA). In its decision, the Court emphasized that it was “virtually impossible” to apply CIPA to internet communications and urged the California legislature to “step up” and “speak clearly” about how internet activity should be treated under the statute in light of a deluge of claims that have been filed recently against website operators.

Continue Reading California Federal Court Urges California Legislature to Clean Up “Total Mess” of State Wiretap Act, Dismisses Claim for Website Tracking

Did we miss you on November 20th for our Navigating the App Store Age Verification Laws webinar? Not to worry! See the link below for the recording.

Watch Kyle Fath, Partner (Los Angeles), in a webinar discussion with Hailun Ying (Head of PrivSec Legal, Roblox) and Amy Lawrence (Head of Legal, Chief Privacy Officer, SuperAwesome), where we dove into the burning topics that are (or should be) top of mind for app stores and companies that own or operate mobile apps.

Click here to watch the full webinar.

December 3, 2025, 10:00 am – 5:00 pm ET

National Business Institute is holding a live webinar “Business Data Privacy and Cybersecurity Tool Kit” on December 3rd. Join Julia Jacobson, Partner (New York) and Matthew Flora, Managing Director for the Ankura Consulting Group, LLC, for the following sessions:

II. Identifying Vulnerabilities: Business Data Audits 11:00 am ET

VI. Breach Response and Incident Reporting: What to Do First and Next 2:45 pm ET

For more information, click here.

December 8, 2025, at 1:00 pm ET

Join Alan Friel, Partner (Los Angeles) and Lydia de la Torre, Of Counsel (Palo Alto) on the panel for “Essential Elements of a Privacy Notice for Connected Devices” at PLI California (455 Market Street, San Francisco) as part of PLI’s Advanced Internet of Things 2025: Deeper Dive, Practical Wisdom program.

January 7, 2026, at 1:00 pm ET

Join Alan Friel, Partner (Los Angeles), Kyle Fath, Partner (Los Angeles), and Jennifer Oliver, Shareholder with Buchanan Ingersoll & Rooney PC, for “New California Cybersecurity and Data Privacy Laws,” a Strafford Live CLE Webinar.

For more information, click here.

February 20, 2026, 10:00 am – 1:15 pm ET

Join Julia Jacobson, Partner (New York) and the National Business Institute for a live webinar discussion on “Data Security: A Business Attorney’s Guide.” Julia will be speaking at the following sessions:

I. “Core Data Security Concepts and Current Laws” 10:00 am ET

IV. “Breach Response and Litigation” 12:30 pm ET

For more information, click here.

We have previously covered the recent changes to the California Consumer Privacy Act (CCPA) regulations, and summarized the changes companies need to make to be 2026-ready under them and other state consumer privacy laws that have recently or will soon become effective.  In a recent guidance document, CalPrivacy highlights “seven things businesses should know and prepare for,” which are:

Continue Reading CalPrivacy Highlights Regulatory Changes for 2026

The California Consumer Privacy Act (CCPA) requires that privacy notices be updated annually and that the detailed disclosures it prescribes reflect the 12-month period prior to the effective (posting) date. Interestingly, failure to make annual updates was one of several alleged CCPA violations that resulted in a recent $1.35 Million administrative civil penalty by the California Privacy Protection Agency (CPPA) against retailer Tractor Supply Company. Also, three more state consumer privacy laws go into effect on January 1, 2026, which will require notice and consumer rights intake changes, if applicable. Additionally, new and amended CCPA regulations will bring new obligations for businesses starting the first of the year that need to be addressed between now and then. A general checkup, with particular attention to enforcement priorities, is also recommended. Here are some things to do in preparation for 2026:

  • Assess which of the 20 state consumer privacy laws (CPLs) apply to your business, and update notices and rights request processes to identify which apply and address material differences in what each requires.
  • Consider new or modified data practices initiated in 2025, or under consideration to be introduced in 2026, complete risk assessments on them, and update the privacy notice to reflect at least the preceding 12-month period.
  • Implement a data processing risk assessment program, or revise the current process to reflect the new CCPA requirements, effective January 1.
  • Confirm you have contracts in place containing data protection terms required by CCPA and other CPLs with parties that receive (or access) your personal data – an ongoing California enforcement priority. Have these organized by service provider / processor or third party and be prepared to produce them upon regulatory inquiry.
  • Employers, especially in California, need to address the use of automated decision-making tools. This will become an even more complex and time-urgent matter for California employers if Governor Newsom does not veto SB-7 (the “No Robo-Bosses” Act), which would become effective January 1 and add even further requirements and restrictions on technology-assisted HR decision-making. (Note: An inadequate privacy notice and rights request process for personnel was another basis for the Tractor Supply penalty.)
  • Review your tracking technologies and cookie banner(s) and preference tool(s) to support a defense to wiretapping (e.g., CIPA) claims and comply with CPL notice and opt-out requirements, including browser privacy control signals, as explained here.
  • If you process personal data of minors, consumer health data, precise location data, biometric data, or other sensitive personal data, consider the legal requirements and limitations that have been evolving in recent years and the growing application of consumer protection law principles to limit unexpected uses.
  • Revisit and update your information governance roadmap or project plan and seek budget for 2026 initiatives.
  • Consider Privacy Powered by SPB forms, templates, and guidance materials to help support your program, and conduct a stakeholder survey to assess actual practices and knowledge of policies and procedures.

Many companies go on website code lock in mid-November, and Q4 is a hectic time between year-end financial closings and the holidays, so give yourself enough time to get revisions to notices, policies, and tools updated and published. Update your information governance roadmap for 2026 to reflect new laws, regulations, and enforcement trends and be sure your budget for next year reflects these needs.

For more information, contact the author or your Squire Patton Boggs relationship partner.

On September 25, the California Privacy Protection Agency (CPPA) Board advanced OAL-approved updates to the California Consumer Privacy Act (CCPA), the process of which we covered in detail here and here, that include long-awaited regulations on cybersecurity audits, risk assessments, and automated decision-making technology (ADMT). The CPPA Board also approved a $1.35 Million settlement with Tractor Supply Company, officially announced this week. At last week’s meeting, staff reported that there were hundreds of investigations and enforcement actions in progress, many of which were at a stage that the applicable businesses were not yet aware that they are a target. 2026 will bring new privacy obligations for businesses and greater repercussions for half-baked compliance efforts.

So, California businesses, brace yourselves: the CCPA has undergone a major update at the same time the CPPA is turning up the heat on businesses. Following years of civic discussion, multiple hearings, and hundreds of public comments, the CPPA Board has adopted a batch of regulations impacting businesses’ data privacy obligations. On September 23, the California Office of Administrative Law (OAL) approved new regulations on cybersecurity audits, risk assessments, ADMT, and edits to existing CCPA regulations, which the CPPA Board confirmed last week.  These regulations impose new obligations on businesses to comply with strengthened consumer privacy rights, some of which will phase in over time:

  • Cybersecurity Audits

Businesses required to complete annual cybersecurity audits must submit certifications to the CPPA by:

  1. April 1, 2028, if the business makes over $100 million;
  2. April 1, 2029, if the business makes between $50 million and $100 million; or
  3. April 1, 2030, if the business makes less than $50 million.
  • Risk Assessments

Businesses subject to risk assessment requirements must conduct them on timelines that depend on whether the processing activity was initiated before or after January 1, 2026:

  1. For new processing activities initiated on or after Jan. 1, 2026, assessments must be completed prior to beginning such new processing activities.
  2. For processing activities that began before January 1, 2026, and that continue after that date, assessments must be completed no later than December 31, 2027.

By April 1, 2028, they must submit to the CPPA:

  1. An attestation that required risk assessments were completed in compliance with the regulations, and
  2. A summary of their risk assessment information for 2026 and 2027 (and thereafter annually).

California now joins Colorado with very detailed obligations for how assessments must be conducted and documented, which unfortunately have material differences from the Colorado mandates.

  • Automated Decisionmaking Technology (ADMT)

Businesses that use ADMT to make significant decisions must comply with the ADMT requirements beginning January 1, 2027. While the final regulations are far less burdensome than originally proposed, they bring new considerations and obligations and include material differences from other states.

  • Substantive Changes Unrelated to Cybersecurity Audits, Risk Assessments, and ADMT go into effect Jan. 1, 2026.

The CPPA is also making it clear that existing regulations will be vigorously enforced. We have covered the evolution of CCPA enforcement here, here and here. The latest case addresses issues that have proven to be of particular concern to regulators: properly effectuating opt-outs of sale/share for cookies and other tracking technologies that facilitate targeted advertising or whose providers otherwise do not qualify as service providers, enabling browser privacy control signals to automatically convey and implement such opt-outs, and having contracts in place with service providers, contractors, and third parties that include CCPA-mandated contract provisions appropriate for the nature of the processing relationship. We have already delved into how to meet these requirements in detail here. Interestingly, Tractor Supply is the first published enforcement action that addresses CCPA compliance in the context of job applicants and current and former employees. California’s is the only state consumer privacy law that applies in the human resources and business-to-business contexts. The CPPA also brought claims for failing to update the posted privacy notice annually and for not clarifying that the description of privacy practices in the notice reflected processing activities for the 12 months prior to the effective date. As businesses prepare for their year-end notice updates, they should assess overall compliance, with particular attention to the issues that have led to recent enforcement actions.

To help you prepare, we follow with a summary of the changes for businesses under the new and revised CCPA regulations:

CCPA Regulatory Updates – ADMT, Cybersecurity Audits, and Risk Assessments

Automated Decision-making Technology (ADMT)

Scope

The regulations define ADMT as “any technology that processes personal information and uses computation to replace… or substantially replace human decision making.” Section 7001(e). This includes a business’s use of the technology’s output to make a decision without meaningful human involvement, including through profiling. Section 7001(e)(1) and (2). Profiling is defined as any form of automated personal information (PI) processing to evaluate, analyze, or predict personal aspects concerning—among others—a consumer’s intelligence, ability, aptitude, performance at work, economic situation, health (including mental health), interest, behavior, and location. Section 7001(ii).

The use of ADMT is regulated insofar as it is used to make a significant decision, defined as a decision that results in the provision or denial of financial or lending services, housing, education enrollment or opportunities, employment or independent contracting opportunities or compensation, or healthcare services. Section 7001(ddd).

Notably, the final regulations departed from prior efforts to regulate ADMT used merely to facilitate significant decisions, and the scope of significant decisions was substantially narrowed from what had been proposed. However, other states take a broader approach to both issues. Despite calls to track Colorado’s detailed regulations on profiling, California’s ADMT regulations are in some ways more, and in other ways less, burdensome. Accordingly, companies will need to either take a high-water-mark approach or address ADMT and profiling on a state-by-state basis.

Consumer Rights

Consumers will have the following rights with respect to ADMT:

  • Right to opt out of ADMT: businesses must provide consumers with the ability to opt out of the use of ADMT to make a significant decision concerning the consumer. Section 7221. However, this right is limited as follows:
    • If an appeal right is provided (see below); or
    • For certain educational and human resources decisions, if the ADMT (i) works as intended and (ii) does not discriminate. Section 7221(b)(2) and (3).
  • Right to access ADMT: upon request, businesses must provide the consumer information about the business’ use of ADMT, including information about the logic used and how the ADMT processed PI to generate an output with respect to them and what specific outputs were used, as well as information about the outcome of the decision and the role of human involvement in reaching the decision.  Section 7222.
  • Request to appeal ADMT: if the business provides consumers a process to appeal its use of ADMT for a significant decision to a human reviewer with authority to change the outcome, it may avoid providing the opt-out right. Section 7221(b)(1).
  • A previously proposed notice of adverse decision requirement was abandoned and is not part of the current regulatory scheme.

Pre-Use Notice

Additionally, businesses using ADMT must provide consumers with a prominent and conspicuous Pre-Use Notice informing them of the specific purpose for the business’ use of ADMT; their rights to opt out (where appeal rights are not provided, and excepting the exempt HR and educational uses) and to access ADMT; and the prohibition on retaliating against consumers for exercising those rights. Sections 7010(d), 7220 and 7221. The Pre-Use Notice must also contain an opt-out link for ADMT use, if opt-out is required.

HR Context

As mentioned above, the use of ADMT to make a significant decision about a consumer includes employment or independent contracting opportunities or compensation, though certain exceptions to opt-out apply. These updates to the CCPA are one part of a larger effort to regulate the use of AI in the employment context, including regulations by the California Civil Rights Council (CCR) addressing employment discrimination resulting from the use of AI, effective October 1, 2025. These regulations expand the reach of existing law—such as the California Fair Employment and Housing Act (FEHA)—to cover AI employment tools, opening the door for plaintiffs seeking to allege harms from algorithmic discrimination. We analyzed the impact of these regulations on employers processing data for HR purposes and the interplay between the CCPA and CCR regulations in this report.

Cybersecurity Audits

To comply with the new cybersecurity regulations, businesses must: (1) conduct an annual cybersecurity audit; (2) submit an audit report; and (3) certify completion of the audit.

Audit

A business whose processing of consumers’ PI presents a significant risk to the security of that PI (including HR and B-to-B data) is required to complete an annual audit of its cybersecurity program. Along with assessing the business’ cybersecurity program overall, the audit must assess specific components, including authentication, encryption of PI, account management and access controls, hardware and software security, vulnerability scans and, importantly, systems to inventory and maintain all PI and the hardware and software that process PI. This last requirement essentially mandates data mapping and management, following Minnesota’s approach.

Report

The audit must produce a report with certain information, such as a description of the business’s information system, audit criteria, evidence examined to make the assessments, and the policies, procedures, and practices assessed by the audit.

Certify

After completing the annual audit, businesses must submit a written certification of completion to the state no later than April 1 of the following year.

Risk Assessments

In addition to conducting a cybersecurity audit, a business whose processing of consumers’ PI presents a significant risk to consumers’ privacy is required to conduct a risk assessment before initiating that processing. Section 7150(a). This includes sale/sharing of PI, processing of sensitive PI, profiling, the use of ADMT for significant decisions concerning a consumer, and the use of PI to train ADMT or biometric data technology. Section 7150(b).

Businesses engaging in these activities must prepare and maintain a “risk assessment report” documenting much of the required assessment process. Significantly, the risk/benefit analysis that the regulations require be part of the assessment process need not be included in the published report, a welcome departure from the approach of other states. Certainly, this is an attempt to avoid the First Amendment compelled-speech challenges that brought down the California Age-Appropriate Design Act assessment requirements. The report must include the business’ purpose for processing consumers’ PI, the categories of PI to be processed, the operational elements of the processing (including seven specific types of operational details, which for ADMT include the logic used and the intended usage of outputs produced), safeguards to address potential negative impacts, the persons involved in the assessment, whether the activity will be initiated, and who approved that determination and when. Section 7152. An aggregate summary of assessments for each calendar year, accompanied by a certification of completion, is to be filed annually with the CPPA. Section 7157(c).

Finally, businesses must review and update their risk assessments at least once every three years. Section 7155(a)(2). Reports, and updates, are to be retained for as long as the processing continues, or five years after completion, whichever is longer.  Section 7155(c). The individual reports, and updates, are subject to inspection.  Section 7157(e).

Other Substantive Changes to the CCPA Regulations

The CPPA also revised the existing regulations and made material changes, often revisiting issues it had originally considered in prior rulemaking but pulled back to give businesses time to adapt.  Other changes reflect concerns regarding implementation and attempt to avoid ambiguity or more clearly establish consumer protection intent.

Symmetry of Choice

The new regulations refine consent requirements by illustrating asymmetry of choice in more detail, an issue that has been raised in enforcement actions. According to Section 7004(a)(2), a consumer’s path to a more privacy-protective option should not be longer, more difficult, or more time-consuming than the path to a less privacy-protective option. The regulations detail that the number of steps to opt out of sale/sharing should be the same as or fewer than the number of steps to opt in. Similarly, a “yes” button that is more prominent than a “no” button—whether in size or color—is not an equal or symmetrical choice. Significantly, the provision that had addressed symmetry only where opting back in after an opt-out required more steps has been amended to apply that principle to an opt-in request in the first instance, not just where an opt-out is being overridden. Section 7004(a)(2)(A). This reflects concerns regarding the configuration of cookie banners that have been raised in enforcement actions.

Businesses must also abide by new design requirements to avoid consumer confusion about choice. For instance, the regulations prohibit businesses from using double negatives, misleading statements or omissions, or deceptive language when asking for consent. Businesses are also prohibited from obtaining consumer consent without affirmative action or by silence. Finally, businesses are prohibited from designing their choices in a way that impairs the consumer’s ability to provide freely given, specific, informed, and unambiguous consent. For instance, businesses cannot rely on a consumer’s acceptance of general or broad terms of use to constitute consent for a particular purpose. Section 7004(a)(4)(C).

Confirmation of Opt-Out Processing

Section 7026(g) will now require businesses to “provide a means by which the consumer can confirm that their request to opt out of sale/sharing has been processed by the business.” The regulations also now require the same with respect to honoring of opt-out preference signals. See Section 7025(g)(6). Previously, these were optional. The regulations provide that the same example notice can suffice to meet both requirements: “For example, the business may display on its website “Opt-Out Request Honored” … and display in the consumer’s privacy settings through a toggle or radio button that the consumer has opted out of the sale/sharing of their personal information.”
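The regulations are technology-neutral, but as a concrete illustration, a common opt-out preference signal is Global Privacy Control, which browsers convey via a `Sec-GPC: 1` request header. The sketch below, with hypothetical function and field names not drawn from the regulations, shows honoring such a signal and persisting a confirmation status that a privacy-settings page could later display:

```python
# Minimal sketch: honor a Global Privacy Control signal and record a
# confirmation the consumer can view in their privacy settings.
# apply_opt_out_signal and the consumer_prefs fields are hypothetical
# illustrations; the regulations do not prescribe an implementation.

def apply_opt_out_signal(headers: dict, consumer_prefs: dict) -> dict:
    """Treat a Sec-GPC: 1 request header as an opt-out of sale/sharing."""
    if headers.get("Sec-GPC") == "1":
        consumer_prefs["sale_sharing_opted_out"] = True
        # Persisting the status lets the settings page display
        # "Opt-Out Request Honored", per the regulations' example.
        consumer_prefs["confirmation"] = "Opt-Out Request Honored"
    return consumer_prefs

prefs = apply_opt_out_signal({"Sec-GPC": "1"}, {})
# prefs["sale_sharing_opted_out"] is True
```

A toggle or radio button bound to the stored status would satisfy the “confirm that their request … has been processed” example quoted above.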

Timing of Processing Sale/Sharing Opt-Outs

Section 7026(f) requires businesses to cease selling and sharing PI with third parties “as soon as feasibly possible, but no later than 15 business days from the date the business receives the request.” It also requires notifying all third parties to whom the business has sold or shared the consumer’s PI, after the consumer submits the request to opt-out of sale/sharing and before the business complies with that request, that the consumer has made a request to opt-out of sale/sharing (along with directing them to comply and forward the request downstream).

The regulations provide helpful examples interpreting these obligations, addressing advertising/marketing use cases – one involving “programmatic advertising technology” on a website that can “restrict the transfer of personal information instantaneously” where the regulations state taking 15 business days to comply would not be compliant – and another involving the disclosure of PI lists to a marketing company that addresses the timing and notification requirements.
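To make the 15-business-day outer limit concrete, the sketch below computes the latest compliance date by counting weekdays only. It is an illustrative assumption of this sketch that no holidays intervene; a production calculation would need an actual holiday calendar, and, as the examples above show, “as soon as feasibly possible” can require acting much sooner.

```python
from datetime import date, timedelta

def opt_out_deadline(received: date, business_days: int = 15) -> date:
    """Latest date to cease sale/sharing: 15 business days after receipt.

    Counts Monday-Friday only; holidays are not handled (an assumption
    of this sketch, not a reading of the regulations).
    """
    d = received
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Mon=0 .. Fri=4
            remaining -= 1
    return d

# A request received Monday 2026-01-05 yields a deadline of 2026-01-26.
```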

Colors of the Opt-Out Icon

There was previously a lack of clarity regarding whether the blue and white opt-out icon could be changed according to a website’s branding or otherwise. The regulations now state, “Businesses may adjust the color of the icon to ensure that the icon is conspicuous. For example, if the webpage background is the same color of blue as the icon, the business may invert or change the colors of the icon to ensure visibility.” Section 7015(b)(3).

Privacy Policy Requirements

The amended regulations include several changes to the required accessibility and content of privacy policies.

First, mobile apps must now include a link to their privacy policy. Previously, it was optional to include a link to the “privacy policy” in the mobile application settings menu. It will now be required as of Jan. 1, 2026. The defined term “privacy policy” refers specifically to the CCPA’s required disclosures; as a result, companies should consider including a direct link to their CCPA or state-specific privacy notice in their app settings menu, if they have not already done so. Section 7011(d).

Second, businesses must comply with the following requirements regarding the content of their privacy policies:

  • When identifying categories of sources and categories of third parties (sale/sharing recipients), the regulations clarify that the categories “shall be described in a manner that provides consumers a meaningful understanding of” where the information is collected and the parties to whom the information is sold or shared, respectively. Section 7011(e)(1)(B) and (E).
  • Previously, businesses were required to associate the specific business or commercial purpose for disclosing PI to service providers as to each category of PI collected. Businesses no longer need to associate the purposes with specific categories of PI. See Section 7011(e)(1)(I).
  • Instead of referring to the right “not to receive discriminatory treatment,” businesses now must state that consumers have the right “not to be retaliated against for exercising privacy rights conferred by the CCPA, including when a consumer is an applicant to an educational program, a job applicant, a student, an employee, or an independent contractor.” Section 7011(e)(2)(H).

New Categories of Sensitive PI

The definition of “sensitive personal information” has been expanded to include the PI of consumers that the business has actual knowledge are less than 16 years of age. A business that willfully disregards a consumer’s age is deemed to have actual knowledge of it. This means that the processing of PI of consumers under 16 is subject to the right to limit. Sale/sharing of such data, however, requires the consumer’s consent.

Additionally, “sensitive personal information” now includes a consumer’s neural data, or information generated by measuring the activity of a consumer’s central or peripheral nervous system.

Updated Notice of Right to Limit

The Notice of Right to Limit requirements have been updated largely to align with the Notice of Right to Opt-Out (e.g., how to present the notice when interacting with consumers online vs. offline). Section 7014(e)(3).

Expansion of Access Rights Trailing Period

Under Section 7024(h), businesses are only required to “provide all the personal information it has collected and maintains about the consumer during the 12-month period preceding the business’s receipt of the consumer’s request.” However, reflecting CPRA changes, a consumer may request PI from beyond such period, as long as it was collected on or after January 1, 2022. The prior regulations did not require notifying consumers of that right.

Businesses now must “include a means by which the consumer can request that the business provide personal information collected prior to the 12-month period preceding the business’s receipt of the consumer’s request. For example, the business may ask the consumer to select or input the date range for which the consumer is making the request to know or present the consumer with an option to request all personal information the business has collected about the consumer.” Section 7020(e).

Authorized Agent Requirements

The regulations now explicitly prohibit, in connection with obtaining proof that the consumer gave the agent signed permission, businesses from requiring consumers to resubmit their request in their individual capacity. Section 7063(a).

Conduct Year-End Updates and Compliance Checks and Develop 2026 Project Plans and Budgets

Prior to year-end, businesses should (1) confirm PI practices and update their privacy notices to reflect practices from the prior 12 months; (2) update policies and procedures, especially regarding consumer choice, to reflect amendments to the regulations and issues raised in enforcement actions; (3) prepare to implement a data processing risk assessment program that meets the new regulations’ requirements for new 2026 processing activities before they are initiated, and develop a roadmap for assessing ongoing processing prior to December 31, 2027; and (4) develop a project plan to prepare for the upcoming ADMT and cybersecurity audit (including data mapping) requirements. To help you do so, we have developed guidance materials, including a data processing risk assessment tool kit. More information is available here, or by contacting the authors or your Squire Patton Boggs relationship partner.

Disclaimer: While every effort has been made to ensure that the information contained in this article is accurate, neither its authors nor Squire Patton Boggs accepts responsibility for any errors or omissions. The content of this article is for general information only, and is not intended to constitute or be relied upon as legal advice.
