As companies begin to move beyond large language model (LLM)-powered assistants into fully autonomous agents—AI systems that can plan, take actions, and adapt without a human in the loop—legal and privacy teams must be aware of the use cases and the risks that come with them.

What is Agentic AI?
Agentic AI refers to AI systems—often built using LLMs but not limited to them—that can take independent, goal-directed actions across digital environments. These systems can plan tasks, make decisions, adapt based on results, and interact with software tools or systems with little or no human intervention.

Agentic AI often blends LLMs with other components like memory, retrieval, application programming interfaces (APIs), and reasoning modules to operate semi-autonomously. It goes beyond chat interfaces and can initiate real actions—inside business applications, internal databases, or even external platforms.

For example:

  • An agent that processes inbound email, classifies the request, files a ticket, and schedules a response—all autonomously (a minimal illustrative sketch follows this list).
  • A healthcare agent that transcribes provider dictations, updates the electronic health record, and drafts follow-up communications.
  • A research agent that searches internal knowledge bases, summarizes results, and proposes next steps in a regulatory analysis.
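
To make the first example concrete, below is a minimal, hypothetical sketch (in Python) of such an agent loop. It is not any vendor's actual product or API; the LLM classifier, ticketing call, and scheduling call are stubbed placeholders, and every function name is an assumption for illustration only.

```python
# Hypothetical sketch of an email-triage agent: classify an inbound request,
# file a ticket, and schedule a response. All systems below are stand-ins.
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    body: str

def classify_request(email: Email) -> str:
    """Placeholder for an LLM call that labels the request (e.g., 'billing' or 'support')."""
    return "support" if "error" in email.body.lower() else "billing"

def file_ticket(email: Email, category: str) -> str:
    """Placeholder for a ticketing-system API call; returns a ticket ID."""
    return f"TICKET-{abs(hash((email.sender, category))) % 10000}"

def schedule_response(ticket_id: str, category: str) -> str:
    """Placeholder for a calendar/workflow API call."""
    return f"Response for {ticket_id} ({category}) scheduled for the next business day"

def run_agent(email: Email) -> str:
    # A real agent may re-plan based on intermediate results; this one runs a fixed plan.
    category = classify_request(email)        # decision (an LLM call in practice)
    ticket_id = file_ticket(email, category)  # action taken in a business system
    return schedule_response(ticket_id, category)  # follow-up action, no human review

print(run_agent(Email("user@example.com", "Login issue", "I see an error at sign-in")))
```

Even in this toy form, the loop illustrates why these systems raise new questions: the classification, the ticket, and the scheduled response all occur without a human reviewing each step.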

These systems aren’t just helping users write emails or summarize docs. In some cases, they’re initiating workflows, modifying records, making decisions, and interacting directly with enterprise systems, third-party APIs, and internal data environments. Here are a handful of issues that legal and privacy teams should be tracking now.

Continue Reading What is Agentic AI? A Primer for Legal and Privacy Teams

With the entry into force of the AI Act (Regulation 2024/1689) in August 2024, a pioneering framework for the regulation of AI was established.

On February 2, 2025, the first provisions of the AI Act became applicable, including the AI system definition, AI literacy and a limited number of prohibited AI practices. In line with article 96 of the AI Act, the European Commission released detailed guidelines on the application of the definition of an AI system on February 6, 2025.

Continue Reading Understanding the Scope of “Artificial Intelligence (AI) System” Definition: Key Insights From The European Commission’s Guidelines

The rulemaking process on California’s Proposed “Regulations on CCPA Updates, Cybersecurity Audits, Risk Assessments, Automated Decisionmaking Technology, and Insurance Companies” (2025 CCPA Regulations) has been ongoing since November 2024. With the one-year statutory deadline to complete the rulemaking (or be forced to start anew) on the horizon, the California Privacy Protection Agency (CPPA) voted unanimously on May 1 to move a revised set of draft regulations forward to public comment, which opened May 9 and closes at 5 pm Pacific on June 2, 2025. The revisions cut back on the regulation of Automated Decision-making Technology (ADMT), eliminate the regulation of AI, address potential constitutional deficiencies in the risk assessment requirements and somewhat ease cybersecurity audit obligations. The CPPA projects that this substantially revised draft will save California businesses approximately $2.25 billion in the first year of implementation, a 64% savings from the projected cost of the prior draft.

Continue Reading Revised Draft California Privacy Regulations Lessen Impact on Business

On April 14, 2025, the European Data Protection Board (EDPB) released guidelines detailing how to process personal data using blockchain technologies in compliance with the General Data Protection Regulation (GDPR) (Guidelines 02/2025 on processing of personal data through blockchain technologies). These guidelines highlight certain privacy challenges and provide practical recommendations.

Continue Reading From Blocks to Rights: Privacy and Blockchain in the Eyes of the EU Data Protection Authorities

The European Commission published its long-awaited Guidelines on Prohibited AI Practices (CGPAIP) on February 4, 2025, two days after the AI Act’s articles on prohibited practices became applicable.

The good news is that in clarifying these prohibited practices (and those excluded from its material scope), the CGPAIP also addresses other, more general aspects of the AI Act, providing much-needed legal certainty to authorities, providers and deployers of AI systems/models in navigating the regulation.

It refines the scope of general concepts (such as “placing on the market”, “putting into service”, “provider” or “deployer”) and of the exclusions from the scope of the AI Act, provides definitions of other terms not expressly defined in the AI Act (such as “use”, “national security”, “purposely manipulative techniques” or “deceptive techniques”), and takes a position on the allocation of responsibilities between providers and deployers using a proportionate approach (establishing that these responsibilities should be assumed by whoever is best positioned in the value chain).

It also comments on the interplay of the AI Act with other EU laws, explaining that while the AI Act applies as lex specialis to other primary or secondary EU laws with respect to the regulation of AI systems, such as the General Data Protection Regulation (GDPR) or EU consumer protection and safety legislation, it is still possible that practices permitted under the AI Act are prohibited under those other laws. In other words, it confirms that the AI Act and these other EU laws complement each other.

However, this complementarity is likely to pose the greatest challenges to both providers and deployers of the systems. For example, the European Data Protection Board (EDPB) has already clarified in its Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models (adopted in December 2024) that the “intended” purposes of AI models at the deployment stage must be taken into account when assessing whether the processing of personal data for the training of those AI models can be based on the legitimate interest of the providers and/or future deployers. The European Commission, by contrast, clarifies in Section 2.5.3 of the CGPAIP that the AI Act does not apply to research, testing (except in the real world) or development activities related to AI systems or AI models before they are placed on the market or put into service (i.e., during the training stage). Similarly, the CGPAIP provides some examples of exclusions from the prohibited practices (i.e., permitted practices) that are unlikely to find a lawful basis in the legitimate interests of providers and/or future users of the AI system.

The prohibited practices:

  1. Subliminal, purposefully manipulative or deceptive techniques (Article 5(1)(a) and Article 5(1)(b) AI Act)
    This prohibited practice refers to subliminal, purposefully manipulative or deceptive techniques that are significantly harmful and materially influence the behavior of natural persons or group(s) of persons, or exploit vulnerabilities due to age, disability or a specific socio-economic situation.

    The European Commission provides examples of subliminal techniques (visual and auditory subliminal messages, subvisual and subaudible cueing, embedded images, misdirection and temporal manipulation), and explains that the rapid development of related technologies, such as brain-computer interfaces or virtual reality, increases the risk of sophisticated subliminal manipulation.

    When referring to purposefully manipulative techniques (exploiting cognitive biases, psychological vulnerabilities or other factors that make individuals or groups of individuals susceptible to influence), it clarifies that for the practice to be prohibited, either the provider or the deployer of the AI system must intend to cause significant (physical, psychological or financial/economic) harm. While this is consistent with the cumulative nature of the elements contained in Article 5(1)(a) of the AI Act for the practice to be prohibited, it could be read as an indication that manipulation of an individual (beyond consciousness) that is not intended to cause harm (for example, for the benefit of the end user or to be able to offer a better service) is permitted. The CGPAIP refers here to the concept of “lawful persuasion”, which operates within the bounds of transparency and respect for individual autonomy.

    With respect to deceptive techniques, it explains that the obligation of the provider to label “deep fakes” and certain AI-generated text publications on matters of public interest, or the obligation of the provider to design the AI system in a way that allows individuals to understand that they are interacting with an AI system (Article 50(4) AI Act) are in addition to this prohibited practice, which has a much more limited scope.

    In connection with the interplay of this prohibition with other regulations, in particular the Digital Services Act (DSA), the European Commission recognizes that dark patterns are an example of a manipulative or deceptive technique when they are likely to cause significant harm.

    It also provides that there should be a plausible/reasonably likely causal link between the potential material distortion of the behavior (significant reduction in the ability to make informed and autonomous decisions) and the subliminal, purposefully manipulative or deceptive technique deployed by the AI system.

  2. Social scoring (Article 5(1)(c) AI Act)
    The CGPAIP defines social scoring as the evaluation or classification of individuals based on their social behavior, or personal or personality characteristics, over a certain period of time, clarifying that a simple classification of people on that basis would trigger this prohibition and that the concept of evaluation includes “profiling” (in particular to analyze and/or make predictions about interests or behaviors) that leads to detrimental or unfavorable treatment in unrelated social contexts, and/or unjustified or disproportionate treatment.

    Concerning the requirement that it leads to detrimental or unfavorable treatment, it is established that such harm may be caused by the system in combination with other human assessments, but that at the same time, the AI system must play a relevant role in the assessment. It also provides that the practice is prohibited even if the detrimental or unfavorable treatment is produced by an organization different from the one that uses the score.

    The European Commission states, however, that AI systems can lawfully generate social scores if they are used for a specific purpose within the original context of the data collection and provided that any negative consequences from the score are justified and proportionate to the severity of the social behavior.

  3. Individual Risk Assessment and Prediction of Criminal Offences (Article 5(1)(d) AI Act)
    When interpreting this prohibited practice, the European Commission outlines that crime prediction and risk assessment practices are not outlawed as such, but only when the prediction that a natural person will commit a crime is made solely on the basis of profiling of that individual, or on assessing their personality traits and characteristics. In order to avoid circumvention of the prohibition and ensure its effectiveness, any other elements taken into account in the risk assessment must be real, substantial and meaningful in order to justify the conclusion that the prohibition does not apply (therefore excluding AI systems that support a human assessment based on objective and verifiable facts directly linked to a criminal activity, in particular when there is human intervention).

  4. Untargeted Scraping of Facial Images (Article 5(1)(e) AI Act)
    The European Commission clarifies that the purpose of this prohibited practice is the creation or enhancement of facial recognition databases (a temporary, centralized or decentralized database that allows a human face from a digital image or video frame to be matched against a database of faces) using images obtained from the Internet or CCTV footage, and that it does not apply to any scraping AI system tool that can be used to create or enhance a facial recognition database, but only to untargeted scraping tools.

    The prohibition does not apply to the untargeted scraping of biometric data other than facial images, or to databases that are not used for the recognition of persons (for example, databases used to generate images of fictitious persons). The European Commission also clarifies that the use of databases created prior to the entry into force of the AI Act, which are not further expanded by AI-enabled untargeted scraping, must comply with applicable EU data protection rules.

  5. Emotion Recognition (Article 5(1)(f) AI Act)
    This prohibition concerns AI systems that aim to infer the emotions (interpreted in a broad sense) of natural persons based on their biometric data and in the context of the workplace or educational and training institutions, except for medical or security reasons. Emotion recognition systems that do not fall under this prohibition are considered high-risk systems and deployers will have to inform the natural persons exposed thereto of the operation of the system as required by article 50(3) of the AI Act.

    The European Commission refers here to certain clarifications contained in the AI Act regarding the scope of the concept of emotion or intention, which does not include, for example, physical states such as pain or fatigue, nor readily apparent expressions, gestures or movements unless they are used to identify or infer emotions or intentions. Therefore, a number of AI systems used for safety reasons would already not fall under this prohibition.

    Similarly, the notions of workplace, educational and training establishments must be interpreted broadly. There is also room for member states to introduce regulations that are more favorable to workers with regard to the use of AI systems by employers.

    It also clarifies that authorized therapeutic uses include the use of CE marked medical devices and that the notion of safety is limited to the protection of life and health and not to other interests such as property.

  6. Biometric Categorization for certain “Sensitive” Characteristics (Article 5(1)(g) AI Act)
    This prohibition covers biometric categorization systems (except where purely ancillary to another commercial service and strictly necessary for objective technical reasons) that individually categorize natural persons on the basis of their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.

    The European Commission clarifies that this prohibition, however, does not cover the labelling or filtering of lawfully acquired biometric datasets (such as images), including for law enforcement purposes (for instance, to guarantee that data equally represents all demographic groups).

  7. Real-time Remote Biometric Identification (RBI) Systems for Law Enforcement Purposes (Article 5(1)(h) AI Act)
    The European Commission devotes a substantial part of the CGPAIP to the development of this prohibited practice, which refers to the use of real-time RBI systems in publicly accessible areas for law enforcement purposes. Exceptions, based on the public interest, are to be determined by the member states through national legislation.

The CGPAIP concludes with a final section on safeguards and conditions for the application of the exemptions to the prohibited practices, including the conduct of Fundamental Rights Impact Assessments (FRIAs). FRIAs are defined as assessments aimed at identifying the impact that certain high-risk AI systems, including RBI systems, may have on fundamental rights. The CGPAIP clarifies that FRIAs do not replace the existing Data Protection Impact Assessment (DPIA) that data controllers (i.e., those responsible for processing personal data) must conduct; they have a broader scope (covering not only the fundamental right to data protection but also all other fundamental rights of individuals) and complement, inter alia, the required DPIA, the registration of the system and the need for prior authorization.


Disclaimer: While every effort has been made to ensure that the information contained in this article is accurate, neither its authors nor Squire Patton Boggs accepts responsibility for any errors or omissions. The content of this article is for general information only and is not intended to constitute or be relied upon as legal advice.

(Updated May 12, 2025)

Since January, the federal government has moved away from comprehensive legislation on artificial intelligence (AI) and adopted a more muted approach to federal privacy legislation (as compared to 2024’s tabled federal legislation). Meanwhile, state legislatures forge ahead – albeit more cautiously than in preceding years.

As we previously reported, the Colorado AI Act (COAIA) will go into effect on February 1, 2026. In signing the COAIA into law last year, Colorado Governor Jared Polis (D) issued a letter urging Congress to develop a “cohesive” national approach to AI regulation preempting the growing patchwork of state laws. Absent a federal AI law, Governor Polis encouraged the Colorado General Assembly to amend the COAIA to address his concerns that the COAIA’s complex regulatory regime may drive technology innovators away from Colorado. Eight months later, the Trump Administration announced its deregulatory approach to AI regulation, making federal AI legislation unlikely. At that time, the Trump Administration seemed to consider existing laws – such as Title VI and Title VII of the Civil Rights Act and the Americans with Disabilities Act, which prohibit unlawful discrimination – as sufficient to protect against AI harms. Three months later, a March 28 Memorandum issued by the federal Office of Management and Budget directs federal agencies to implement risk management programs designed for “managing risks from the use of AI, especially for safety-impacting and rights impacting AI.”

Continue Reading States Shifting Focus on AI and Automated Decision-Making

SPB’s Tokyo/Shanghai Partner Scott Warren, along with New York Partner Julia Jacobson, will be speaking on and moderating panels at the Society for the Policing of Cyberspace (POLCYB) Global Cybercrime Management Executive Roundtable in Vancouver, Canada on May 30 as well as the LegalPlus 8th Annual Shanghai International Arbitration & Corporate Fraud Summit.

Continue Reading Join Us This Summer in Vancouver and Shanghai for Insights on Cybercrime and Cross Border Data Transfers

The Ministry of Electronics and Information Technology (MeitY) has recently released the much-awaited draft of the Digital Personal Data Protection Rules, 2025 (Rules) for public consultation. These proposed Rules provide important insights into the upcoming implementation of India’s new data protection law, which has been under development for some time.

The enactment of the Digital Personal Data Protection Act, 2023 (DPDP Act) marks a significant shift in India’s data privacy landscape, laying the foundation for a comprehensive framework governing the collection, use and management of personal data.

Key aspects of the draft Rules:


Phased Implementation

The draft Rules outline a gradual implementation strategy. Initially, provisions relating to the establishment of the enforcement body – the Data Protection Board (DP Board) – will come into effect immediately upon publication of the final version of the Rules in the Official Gazette. These include appointing the DP Board’s chairperson and members, as well as establishing regulations on compensation, meeting protocols and employment terms. More substantive provisions, including Rules 3 to 15, 21 and 22, will come into effect at a later date, as specified within the Rules.

Consent Is a Must

The Sensitive Personal Data or Information (SPDI) Rules require explicit written consent before collecting sensitive data. The DPDP Act builds upon this by mandating that data fiduciaries provide a clear and comprehensive notice to data principals before collecting personal data. This notice must include specific details about the data being processed, its purpose and the entities involved. Additionally, it must inform the data principal of the rights available to them under the DPDP Act. The draft Rules further stipulate that the notice should be in clear and plain language, which is easy to understand, itemized and include specific information about the goods or services resulting from the data processing.

The consent provided by the Data Principal must be free, specific, informed, unconditional and unambiguous. It should involve clear affirmative action, indicating agreement to the processing of their personal data solely for the specified purpose and limited to such personal data as is necessary for such specified purpose.

Reasonable Security Safeguards

The SPDI Rules already require businesses to implement security measures that protect sensitive personal data in line with global standards like ISO/IEC 27001. Similarly, the draft Rules require Data Fiduciaries to adopt baseline security measures, such as encryption, obfuscation, masking and access control, to protect personal data from breaches. Data fiduciaries must also ensure that contracts with data processors include provisions to maintain these safeguards.

Data Breach Notification

Under the IT Act and SPDI Rules, there has been no obligation to notify data owners or processors in the event of a data breach. However, the DPDP Act mandates breach notifications to both the DP Board and affected data principals. The draft Rules specify that these notifications must be clear, concise and timely, outlining the nature, scope, timing and impact of the breach, along with mitigation steps. Data fiduciaries are required to notify the DP Board within 72 hours of discovering a breach. Although not a part of the DPDP Act, we note there are also obligations to notify the Indian Computer Emergency Response Team (CERT-In) within six hours of discovering a breach.

Data Retention

While the SPDI Rules limit the retention of sensitive data to the period necessary for its intended purpose, the DPDP Act introduces similar provisions, stating that personal data should be erased when consent is withdrawn or when it is no longer needed for the specified purpose. The draft Rules set a three-year retention period for certain types of data fiduciaries, such as e-commerce platforms, online gaming services and social media intermediaries, provided they meet user thresholds outlined in the Rules.

Data Protection Officers

The SPDI Rules mandated the appointment of a grievance officer. The DPDP Act goes further, requiring significant data fiduciaries to appoint a data protection officer (DPO) based in India. Smaller data fiduciaries can either appoint a DPO, or designate an individual to handle data processing queries. The draft Rules also mandate that businesses display the DPO’s contact information on their website and in communications with data principals.

Children and Their Personal Data

While the IT Act and SPDI Rules did not specifically address children’s personal data, the DPDP Act introduces more stringent provisions. Data fiduciaries must obtain verifiable parental consent before processing children’s data and are prohibited from using such data for specific purposes, like targeted advertising. The draft Rules clarify how consent should be obtained, including requirements for verified identity and age verification.

Cross-border Data Transfer

The SPDI Rules allowed the transfer of sensitive data outside India, provided that the receiving party adhered to adequate data protection standards. The DPDP Act imposes stricter restrictions on cross-border data transfers, requiring the government to issue guidelines outlining when such transfers are permissible. The draft Rules specify that data fiduciaries in India may transfer personal data abroad only in compliance with conditions set by the government.

Consent Managers

The DPDP Act introduces the concept of consent managers—entities that facilitate the management of consent between data principals and data fiduciaries. These managers must be registered with the DP Board and provide user-friendly platforms for individuals to manage their consent. The draft Rules provide detailed requirements for these consent managers, including financial and operational thresholds, security measures and record-keeping. The DP Board will also have the authority to audit their operations.

Conclusion

The DPDP Act represents a significant advancement in strengthening data privacy and security in India. The draft Rules provide further clarity on the law’s implementation, particularly around consent, data retention, security, breach notifications, children’s data and cross-border data transfers. While there are still areas that remain unclear, such as the practical implementation of consent managers and the impact of cross-border restrictions, the draft Rules pave the way for more robust data protection. Businesses must stay informed about the evolving regulatory framework to ensure compliance and protect the rights of data principals in this increasingly digital world.

For more information, please contact the authors or your Squire Patton Boggs relationship attorney.

Disclaimer: While every effort has been made to ensure that the information contained in this article is accurate, neither its authors nor Squire Patton Boggs accepts responsibility for any errors or omissions. The content of this article is for general information only and is not intended to constitute or be relied upon as legal advice.

In case you missed it, below are recent posts from Privacy World covering the latest developments on data privacy, security and innovation. Please reach out to the authors if you are interested in additional information.

State Privacy Enforcement Updates: CPPA Extracts Civil Penalties in Landmark Case; State Regulators Form Consortium for Privacy Enforcement Collaboration | Privacy World

FCC Approved Limited, One Year Waiver of Key Element of New TCPA Consent Revocation Rules | Privacy World

The Future for California’s Latest Generation of Privacy Regulations is Uncertain | Privacy World

Companies in all industries take note: regulators are scrutinizing how companies offer and manage privacy rights requests and looking into the nature of vendor processing in connection with the application of those requests. This includes applying the proper verification standards and how cookies are managed. Last month, the California Privacy Protection Agency (“CPPA” or “Agency”) provided yet another example of this regulatory focus in a March 2025 Stipulated Final Order (“Order”) against a global vehicle manufacturer (referred to throughout this blog as “the Company”). We discuss the case in detail and provide practical takeaways below.

On the heels of the CPPA’s landmark case against the Company, various state AGs and the CPPA announced a formal agreement to promote collaboration and information sharing in the bipartisan effort to safeguard the privacy rights of consumers. The announcement by Attorney General Bonta of California can be found here. The consortium includes the CPPA and the State Attorneys General of California, Colorado, Connecticut, Delaware, Indiana, New Jersey and Oregon. According to an announcement by the CPPA, the participating regulators established the consortium to share expertise and resources and to coordinate in investigating potential violations of their respective privacy laws. With the establishment of a formal enforcement consortium, we can expect cross-jurisdictional collaboration on privacy enforcement by the participating states’ regulators. On the plus side, perhaps we will see the promotion of consistent interpretation of these seven states’ various laws, which make up almost a third of the current patchwork of U.S. privacy legislation.

CPPA Case – Detailed Summary

In the case against the Company, the CPPA alleged that it violated the California Consumer Privacy Act (“CCPA”) by:

  • requiring Californians to verify themselves where verification is not required or permitted (the right to opt-out of sale/sharing and the right to limit) and provide excessive personal information to exercise privacy rights subject to verification (know, delete, correct);
  • using an online cookie management tool (often known as a CMP) that failed to offer Californians their privacy choices in a symmetrical or equal way and was confusing;
  • requiring Californians to verify that they gave their agents authority to make opt-out of sale/sharing and right to limit requests on their behalf; and
  • sharing consumers’ personal information with vendors, including ad tech companies, without having in place contracts that contain the necessary terms to protect privacy in connection with their role as either a service provider, contractor or third party.

This Order illustrates the potential fines and financial risks associated with non-compliance with the state privacy laws. Of the $632,500 administrative fine lodged against the Company, the Agency clearly spelled out that $382,500 accounts for 153 violations – $2,500 per violation – alleged to have occurred in the Company’s consumer privacy rights processing between July 1 and September 23, 2023. It is worth emphasizing that the Agency lodged the maximum administrative fine – “up to two thousand five hundred ($2,500)” – available to it for non-intentional violations for each of the incidents in which consumer opt-out/limit requests were wrongly subjected to verification standards. It is unclear what the remaining $250,000 in fines was attributed to, but it is presumably the other violations alleged in the Order, such as disclosing PI to third parties without contracts containing the necessary terms, confusing cookie and other consumer privacy request methods, and requiring excessive personal data to make a request. The number of incidents involving those infractions is unclear, but based on likely web traffic and vendor data processing, the fines reflect only a fraction of the personal information processed in a manner alleged to be non-compliant.

The Agency and the Office of the Attorney General of California (which enforces the CCPA alongside the Agency) have yet to seek the truly jaw-dropping fines that have become common under the UK/EU General Data Protection Regulation (“GDPR”). However, this Order demonstrates California regulators’ willingness to demand more than remediation. It is also significant that the Agency imposed the maximum administrative penalty on a per-consumer basis for the clearest violations that resulted in denial of specific consumers’ rights. This was a relatively modest number of consumers:

  • “119 Consumers who were required to provide more information than necessary to submit their Requests to Opt-out of Sale/Sharing and Requests to Limit;
  • 20 Consumers who had their Requests to Opt-out of Sale/Sharing and Requests to Limit denied because the Company required the Consumer to Verify themselves before processing the request; and
  • 14 Consumers who were required to confirm with the Company directly that they had given their Authorized Agents permission to submit the Request to Opt-out of Sale/Sharing and Request to Limit on their behalf.”

The fines would likely have been greater if applied to all Consumers who accessed the cookie CMP, or who made requests to know, delete or correct. Further, it is worth noting that many companies receive thousands of consumer requests per year (or even per month), and the statute of limitations for the Agency is five years; applying the per-consumer maximum fine could therefore result in astronomical fines for some companies.

Let us also not forget that regulators have injunctive relief at their disposal. Although the injunctive relief in this Order was effectively limited to fixing alleged deficiencies, it included “fencing in” requirements such as use of a UX designer to evaluate consumer request “methods – including identifying target user groups and performing testing activities, such as A/B testing, to access user behavior” – and reporting of consumer request metrics for five years. More drastic relief, such as disgorgement or prohibiting certain data or business practices, is also available. For instance, in a recent data broker case brought by the Agency, the business was barred from engaging in business as a data broker in California for three years.

We dive into each of the allegations in the present case further below and provide practical takeaways for in-house legal and privacy teams to consider.

Requiring consumers to provide more info than necessary to exercise verifiable requests and requiring verification of CCPA sale/share opt-out and sensitive PI limitation requests

The Order alleges two main issues with the Company’s rights request webform:

  • The Company’s webform required too many data points from consumers (e.g., first name, last name, address, city, state, zip code, email, phone number). The Agency contends that requiring all of this information forces consumers to provide more information than is necessary to exercise their verifiable rights, considering that the Agency alleged that the Company “generally needs only two data points from the Consumer to identify the Consumer within its database.” The CCPA and its regulations allow a business to seek additional personal information if necessary to verify to the requisite degree of certainty required under the law (which varies depending on the nature of the request and the sensitivity of the data and potential harm of disclosure, deletion or change), or to reject the request and provide alternative rights responses that require lesser verification (e.g., treat a request for a copy of personal information as a right to know categories of personal information). However, the regulations prohibit requiring more personal data than is necessary under the particular circumstances of a specific request. Proposed amendments to Section 7060 of the CCPA regulations also demonstrate the Agency’s concern about requiring more information than is necessary to verify the consumer.
  • The Company required consumers to verify their Requests to Opt-Out of Sale/Sharing and Requests to Limit, which the CCPA prohibits.

In addition to these two main issues, the Agency also suggested (but did not directly state) that the consumer rights processes amounted to dark patterns. The CPPA cited the policy reasons behind the differential requirements for Opt-Out of Sale/Sharing and Right to Limit requests; i.e., so that consumers can exercise Opt-Out of Sale/Sharing and Right to Limit requests without undue burden, in particular because there is minimal or nonexistent potential harm to consumers if such requests are not verified.

In the Order, the CPPA goes on to require the Company to ensure that its personnel handling CCPA requests are trained on the CCPA’s requirements for rights requests, which is an express obligation under the law, and to confirm to the Agency that it has provided such training within 90 days of the Order’s effective date.

Practical Takeaways

  • Configure consumer rights processes, such as rights request webforms, to only require a consumer to provide the minimum information needed to initiate and verify (if permitted) the specific type of request. This may be difficult for companies that have developed their own webforms, but most privacy tech vendors that offer webforms and other consumer rights-specific products allow for customizability. If customizability is not possible, companies may have to implement processes to collect minimum information to initiate the request and follow up to seek additional personal information if necessary to meet CCPA verification standards as may be applicable to the specific consumer and the nature of the request.
  • Do not require verification of do not sell/share and sensitive PI limitation requests (note, there are narrow fraud prevention exceptions here, though, that companies can and should consider in respect of processing Opt-Out of Sale/Sharing and Right to Limit requests).
  • Train personnel handling CCPA requests (including those responsible for configuring rights request “channels”) to properly intake and respond to them.
  • Include instructions on how to make the various types of requests that are clear and understandable, and that track what the law permits and requires.

Requiring consumers to directly confirm with the Company that they had given permission to their authorized agent to submit opt-out of sale/sharing and sensitive PI limitation requests

The CPPA’s Order also outlines that the Company allegedly required consumers to directly confirm with the Company that they gave permission to an authorized agent to submit Opt-Out of Sale/Sharing and Right to Limit requests on their behalf. The Agency took issue with this because under the CCPA, such direct confirmation with the consumer regarding authority of an agent is only permitted as to requests to delete, correct and know.

Practical Takeaways

  • When processing authorized agent requests to Opt-Out of Sale/Sharing or Right to Limit, avoid directly confirming with the consumer or verifying the identity of the authorized agent (both of which are permitted only in respect of requests to delete, correct and know). Keep in mind that what agents may request, and agent authorization and verification standards, differ from state to state.

Failure to provide “symmetry in choice” in its cookie management tool

The Order alleges that, for a consumer to turn off advertising cookies on the Company’s website (cookies which track consumer activity across different websites for cross-context behavioral advertising and therefore require an Opt-out of Sale/Sharing), the consumer must complete two steps: (1) click the toggle button to the right of Advertising Cookies and (2) click the “Confirm My Choices” button.

The Order compares this opt-out process to that for opting back into advertising cookies following a prior opt-out. There, the Agency alleged that if consumers return to the cookie management tool (also known as a consent management platform or “CMP”) after turning “off” advertising cookies, an “Allow All” choice appears. This is likely a standard configuration of the CMP that can be modified to match the toggle-and-confirm approach used for opt-out. Thus, the CPPA alleged, consumers need take only one step to opt back into advertising cookies when two steps are needed to opt out, in violation of an express requirement of the CCPA that opting back in require no more steps than opting out.

The Agency took issue with this because the CCPA requires businesses to implement request methods that provide symmetry in choice, meaning the more privacy-protective option (e.g., opting-out) cannot be longer, more difficult or more time consuming than the less privacy protective option (e.g., opting-in).

The Agency also addressed the need for symmetrical choice in the context of “website banners,” also known as cookie banners, pointing to an example of insufficient symmetry in choice cited in the CCPA regulations – i.e., using “‘Accept All’ and ‘More Information,’ or ‘Accept All’ and ‘Preferences’ – is not equal or symmetrical” – because it suggests that the company is seeking and relying on consent (rather than opt-out) to cookies, and where consent is sought, acceptance and rejection must be equally easy to choose. The CCPA regulations further explain that “[a]n equal or symmetrical choice” in the context of a website banner seeking consent for cookies “could be between ‘Accept All’ and ‘Decline All.’” Of course, under the CCPA, consent is not required even for cookies that involve a Share/Sale, but the Agency is making clear that where consent is sought there must be symmetry in acceptance and denial of consent.

The CPPA’s Order also details other methods by which the Company should modify its CCPA request procedures, including:

  1. separating the methods for submitting sale/share opt-out requests and sensitive PI limitation requests from verifiable consumer requests (e.g., requests to know, delete, and correct);
  2. including the link to manage cookie preferences within the Company’s Privacy Policy, Privacy Center and website footer; and
  3. applying global privacy control (“GPC”) preference signals for opt-outs to known consumers consistent with CCPA requirements.

Practical Takeaways

  • It is unclear whether the company configured the cookie management tool in this manner deliberately or if the choice of the “Allow All” button in the preference center was simply a matter of using a default configuration of the CMP, a common issue with CMPs that are built off of a (UK/EU) GDPR consent model. Companies should pay close attention to the configuration of their cookie management tools, including in both the cookie banner (or first layer), if used, and the preference center, and avoid using default settings and configurations provided by providers that are inconsistent with state privacy laws. Doing so will help mitigate the risk of choice asymmetry presented in this case, and the risks discussed in the following three bullets.
  • State privacy laws like the CCPA are not the only reason to pay close attention and engage in meticulous legal review of cookie banner and preference center language, and proper functionality and configuration of cookie management tools.
  • Given the onslaught of demands and lawsuits from plaintiffs’ firms under the California Invasion of Privacy Act and similar laws – based on cookies, pixels and other tracking technologies – many companies turn to cookie banner and preference center language to establish an argument for a consent defense and thereby mitigate litigation risk. In doing so, it is important to bear in mind the symmetry-of-choice requirements of state consumer privacy laws. One approach is to make clear that acceptance is of the site terms and privacy practices, which include use of tracking by the operator and third parties, subject to the ability to opt out of some types of cookies. This can help establish consent to the use of cookies through use of the site after notice of cookie practices, while neither suggesting that cookies are opt-in nor creating a lack of symmetry in choice.
  • In addition, improper wording and configuration of cookie tools – such as providing an indication of an opt-in approach (“Accept Cookies”) when cookies in fact already fired upon the user’s site visit, or suggesting that “Reject All” opts the user out of all cookies, including functional and necessary cookies that in fact remain “on” after rejection – present risks under state unfair and deceptive acts and practices (UDAP) and unfair competition laws, and make the cookie banner notice defense to CIPA claims potentially vulnerable because the cookies fire before the notice is given.
  • Address CCPA requirements for GPC, linking to the business’s cookie preference center, and separating methods for exercising verifiable vs. non-verifiable requests. Where the business can tie a GPC signal to other consumer data (e.g., the account of a logged-in user), it must also apply the opt-out to all linkable personal information (a simplified illustration of honoring a GPC signal appears after this list).
  • Strive for clear and understandable language that explains what options are available and the limitations of those options, including cross-linking between the CMP for cookie opt-outs and the main privacy rights request intake for non-cookie privacy rights, and explain and link to both in the privacy policy or notice.
  • Make sure that the “Your Privacy Choices” or “Do Not Sell or Share My Personal Information” link gets the consumer to both methods. Also make sure the opt-out process is designed so that the number of steps required to opt out is no greater than the number required to opt back in. For example, linking first to the CMP, which then links to the consumer rights form or portal, rather than the other way around, is more likely to avoid the extra-steps issue just discussed.
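
To illustrate the GPC point in the takeaways above, here is a minimal, hypothetical sketch of honoring the Global Privacy Control signal server-side. The “Sec-GPC: 1” request header comes from the GPC specification; the Flask framing and the lookup and opt-out helpers are assumptions for illustration only, not a description of any particular company’s implementation or of what the CCPA requires as a technical matter.

```python
# Hypothetical sketch: honoring a Global Privacy Control (GPC) opt-out signal.
# Browsers with GPC enabled send the "Sec-GPC: 1" header on every request.
from flask import Flask, request

app = Flask(__name__)

# Stand-ins for real systems (assumptions for illustration only).
def find_known_consumer(req):
    """Return an identifier for a known/logged-in consumer, or None."""
    return req.cookies.get("session_user")  # hypothetical session lookup

def record_opt_out(consumer_id, scope):
    """Persist the opt-out of sale/sharing for all linkable personal information."""
    print(f"Opt-out recorded for {consumer_id}: {scope}")

@app.before_request
def apply_gpc_signal():
    if request.headers.get("Sec-GPC") == "1":
        consumer_id = find_known_consumer(request)
        if consumer_id:
            # Apply the opt-out beyond the current browser where the signal
            # can be tied to other consumer data (e.g., a logged-in account).
            record_opt_out(consumer_id, scope="sale_sharing")

@app.route("/")
def home():
    return "ok"

if __name__ == "__main__":
    app.run()  # local testing only
```

Legal and privacy teams do not need to write this code, but reviewing with engineering how the signal is detected and how far the resulting opt-out propagates (browser only vs. all linkable personal information) is a useful compliance check.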

Failure to produce contracts with advertising technology companies

The Agency’s Order goes on to allege that the Company did not produce contracts with advertising technology companies despite collecting and selling/sharing PI via cookies on its website to/with these third parties. The CPPA took issue with this because the CCPA requires a written contract meeting certain requirements to be in place between a business and PI recipients that are a CCPA service provider, contractor or third party in relation to the business. We have seen regulators request copies of contracts with all data recipients in other enforcement inquiries.

Practical Takeaways

  • Vendor and contract management are a growing priority of privacy regulators, in California and beyond, and should be a priority for all companies. Be prepared to show that you have properly categorized all personal data recipients and have implemented and maintained processes to ensure proper contracting practices with vendors, partners and other data recipients, including a diligence and assessment process to confirm that the proper contractual language is in place with each data recipient based on that recipient’s data processing role. To state it another way, it may not be proper as to certain vendors to simply put in place a data processing agreement or addendum with service provider/processor language. For instance, vendors that process for cross-context behavioral advertising cannot qualify as a service provider/contractor, and this determination is also necessary to correctly categorize cookie and other vendors as subject to opt-out or not.
  • Attention to contracting is particularly important under the CCPA because different contracting terms are required depending on whether the data recipient constitutes a “third party,” a “service provider” or a “contractor.” Further, in California, the failure to include all of the required service provider/contractor contract terms will convert the recipient into a third party and the disclosure into a sale.

Conclusion

This case demonstrates the need for businesses to review their privacy policies and notices, and to audit their privacy rights methods and procedures, to ensure that they comply with applicable state privacy laws, which have some material differences from state to state. We are aware of enforcement actions in progress not only in California but also in other states, including Oregon, Texas and Connecticut, and these states are looking for clarity as to what specific rights their residents have and how to exercise them. Further, it can be expected that regulators, potentially in the kind of multi-state actions that have become common in other consumer protection matters, will start looking beyond obvious notice and rights request program errors to data knowledge and management, risk assessment, minimization, and purpose and retention limitation obligations. Compliance with those requirements requires going beyond “check the box” compliance as to public-facing privacy program elements and calls for a mature, comprehensive and meaningful information governance program.

If you have any questions, or for more information, contact the authors or your SPB relationship attorney.

Disclaimer: While every effort has been made to ensure that the information contained in this article is accurate, neither its authors nor Squire Patton Boggs accepts responsibility for any errors or omissions. The content of this article is for general information only and is not intended to constitute or be relied upon as legal advice.