According to the 2023 ACC CLO Survey, legal teams are facing unique and growing data-related challenges in this ever-changing regulatory and threat landscape. Data requirements for privacy and compliance continue to become more complex and confusing, and the risk of resulting litigation continues to rise.

Join team SPB, in partnership with Exterro, for a lively luncheon that will explore key areas of risk and provide tools and tips to mitigate those risks and establish defensible compliance to help insulate your organization in the event of a privacy breach, regulatory action or litigation. This in-person CLE program will be held next Wednesday, November 8, from 12:00 to 2:00 pm MST (AZ) in SPB’s Phoenix, Arizona office.

Topics include:

  1. Insight into best practices for managing your company’s compliance, risks and data defensibility
  2. Data from the 2023 ACC CLO Survey, exploring how CLOs/GCs are shifting their focus to effectively managing these data-related challenges
  3. Important updates on how laws governing data are evolving

Speakers:

  1. Dan Christensen, Data Protection Officer, PrivaCyber, LLC
  2. Rebecca Perry, Director of Strategic Partnerships, Exterro
  3. Elizabeth Spencer Berthiaume, Data Privacy, Cybersecurity & Digital Assets Associate, Squire Patton Boggs

This program is approved or pending approval for 1.0 general credit hour of CLE in Arizona, California, New Jersey, New York, and Utah.

Registration is available here.

Scott Warren, Partner, Tokyo/Shanghai, has been busy this week at various speaking events in Tokyo.  On 25 October 2023, he spoke at two events:

  • 9th International Arbitration, Corporate Crime and Anti-Trust Summit, moderating and presenting a panel entitled “Digital Crimes, AI and Cyber Incidents: What You Need To Know for Compliance”. The panel discussed the nature of the threat landscape for companies, legal issues relating to GenAI implementation and the challenges relating to cross-border data breaches. Preparatory solutions (such as data mapping, employee training, having an incident response plan, conducting cyber-preparatory exercises and broadly assessing the GenAI risk landscape) and post-incident handling (such as utilizing global cyber-breach experts who can quickly help you assess regulatory requirements) were discussed. The key takeaway was to prepare in advance (including through practice sessions) and then utilize expert global resources to help handle the matter effectively.
  • “Generative AI: How It Will Shape Businesses Tomorrow and Is Transforming Legal Work Today”, hosted by the American Chamber of Commerce in Japan, featuring speakers from Microsoft and LegalOn Technologies. Scott moderated this panel, which looked at the rollout of GenAI solutions across the broad economy, with a particular focus on the legal industry. It was reported that studies have shown GenAI may affect up to 60% of legal work, especially in the contract drafting space. Microsoft discussed its ethical rollout, which features transparency, responsible design and cooperation in implementing appropriate regulatory frameworks. Microsoft also addressed its new initiative to provide users of its new Copilot AI service with protection from copyright claims. The panel contrasted Japan’s approach to GenAI, which avoids ‘over-regulating’ the space while noting concerns (such as GenAI use for criminal activity, personal/confidential information dissemination, disinformation and copyright violation), with China’s, which cites many of the same concerns but specifically prohibits GenAI rollout to the public unless certain conditions are ensured (such as that the results will not sow dissension, disseminate anti-government sentiment or create materials that have not been approved by regulatory clearance entities). For more information on the China law, please see China Generative AI New Provisional Measures | Privacy World.

On 27 October 2023, Scott led a panel at the 13th International Cybersecurity Symposium hosted by Keio University and the MITRE Corporation.  This event brings together global governments, academics and corporations to discuss collaboration and policy approaches to help protect society, companies and people from cyber-attacks.  Scott’s panel was entitled “The Threat Landscape and Addressing Issues in AI, Cybersecurity Transformation and Incident Notification”.  Speakers from CrowdStrike, AWS, Splunk and Cisco shared, along with Scott, their perspectives on these issues.  Key items addressed were:

  • The threat landscape for Japan involves much more than personal information theft; it is also heavily focused on confidential business information;
  • The key to cybersecurity transformation is understanding that it is not the purchase of a particular device, but the implementation of a more holistic threat security plan;
  • AI can be an important solution, but it must also be understood as a tool of threat in the hands of hackers; and
  • Because of the increasing complexity companies face in cross-border cyber threats, they must prepare for cyber threats BEFORE they happen, including not only creating a Cyber-Incident Response Plan but also testing it.

This event may be available for streaming at a later date.  If you are interested in watching this, please reach out to Scott, or your Squire Patton Boggs contact person.  

Disclaimer: While every effort has been made to ensure that the information contained in this article is accurate, neither its authors nor Squire Patton Boggs accepts responsibility for any errors or omissions. The content of this article is for general information only, and is not intended to constitute or be relied upon as legal advice.

We have limited places left at our in-person roundtable, which will gather a select group of industry leaders for a high-level discussion focused on the legal and public policy challenges surrounding the EU’s proposed Artificial Intelligence Act, AI Code of Conduct and AI Pact. This will be an opportunity to discuss shared issues and opportunities with your peers in a trusted and neutral environment.

Date: Tuesday 14 November 2023

Time: 12:30 – 2:30 p.m.

Venue: Squire Patton Boggs, Avenue Louise 523, 1050 Brussels

Duration: 2 hours

To register for this in-person event, click here.

We originally published an article in July 2022, and have refreshed it to include new information below.

There is increasing public pressure on internet companies to intervene with content moderation, particularly to tackle disinformation, harmful speech, copyright infringement, sexual abuse, automation and bias, terrorism and violent extremism. The new Online Safety Act is the British response to such public demand.

The Online Safety Act received Royal Assent on 26 October 2023, giving Ofcom powers as online safety regulator in the UK. Online platforms around the world will get the first detail of requirements for complying with the Online Safety Act on 9 November, when Ofcom says it will publish its first draft codes of practice and enforcement guidance for consultation. Ofcom has published a timeline with a comprehensive implementation schedule extending over three years.

Continue Reading UPDATED BLOGPOST: Online Safety in Digital Markets Needs a Joined-Up Approach with Competition Law in the UK

On October 27th, the Federal Trade Commission (the “FTC”) announced that it approved an amendment to the Safeguards Rule promulgated under the federal Gramm-Leach-Bliley Act (the “Safeguards Rule”) requiring non-bank financial institutions subject to the FTC’s jurisdiction to report to the FTC data breaches affecting 500 or more people (the “Amendment”). 

The Safeguards Rule requires non-bank financial institutions, such as mortgage brokers, motor vehicle dealers, and payday lenders, to develop, implement, and maintain a comprehensive security program to keep customer information safe. In the process of adopting certain amendments to the Safeguards Rule in October 2021, the FTC also sought comment on a proposed supplemental amendment to the Safeguards Rule that would require financial institutions to report certain data breaches and other security events to the FTC. The Amendment is the final version of the 2021 proposed supplemental amendment.

The Amendment requires financial institutions to notify the FTC as soon as possible, and no later than 30 days after the discovery of a security breach involving the information of at least 500 people. A security breach will trigger the notification requirement if unencrypted “customer information” has been acquired without the authorization of the individual to whom the information pertains. The Safeguards Rule defines “customer information” as “any record containing nonpublic personal information about a customer of a financial institution, whether in paper, electronic, or other form, that is handled or maintained by or on behalf of [the financial institution or its] affiliates.” Note that the terms “nonpublic personal information” and “customer” have nuanced definitions in the Safeguards Rule.

The Amendment provides that unauthorized acquisition will be presumed to include unauthorized access to unencrypted customer information unless there is reliable evidence showing that there has not been, or could not reasonably have been, unauthorized acquisition of such information.
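By way of illustration only, the trigger and the 30-day deadline can be sketched as a simple decision rule. The field and function names below are our own hypothetical simplification of the rule as described above, not the FTC’s terminology, and any real determination will turn on the Rule’s nuanced definitions:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class SecurityEvent:
    """Hypothetical, simplified model of a security event under the Amendment."""
    people_affected: int        # individuals whose customer information was involved
    info_encrypted: bool        # whether the customer information was encrypted
    unauthorized_access: bool   # unauthorized access to customer information occurred
    rebutted_acquisition: bool  # reliable evidence that no acquisition occurred
    discovered_on: date         # date the event was discovered

def ftc_notice_required(event: SecurityEvent) -> bool:
    # Unauthorized access is presumed to be unauthorized acquisition
    # unless reliable evidence shows otherwise.
    acquisition = event.unauthorized_access and not event.rebutted_acquisition
    return (
        event.people_affected >= 500
        and not event.info_encrypted
        and acquisition
    )

def ftc_notice_deadline(event: SecurityEvent) -> date:
    # Notice is due as soon as possible, and no later than 30 days after discovery.
    return event.discovered_on + timedelta(days=30)

# Example: an unencrypted breach affecting 600 customers, discovered today.
event = SecurityEvent(600, False, True, False, date.today())
assert ftc_notice_required(event)
print("Notify the FTC no later than:", ftc_notice_deadline(event))
```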

The notice to the FTC required by the Amendment must be submitted electronically on a form found on the FTC’s website, and it must include certain information about the event, including: 

  • a description of the types of information involved;
  • the date or date range of the data breach (if known);
  • a general description of the data breach; and
  • the number of consumers affected or potentially affected.
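Purely as an illustration, the required notice contents listed above map naturally onto a simple record. The field names here are our own; the actual submission is made through the form on the FTC’s website, whose fields may differ:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class FTCBreachNotice:
    """Hypothetical record mirroring the notice contents listed above."""
    information_types: list[str]               # types of information involved
    breach_dates: Optional[tuple[date, date]]  # date or date range, if known
    description: str                           # general description of the breach
    consumers_affected: int                    # number affected or potentially affected
```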

The Amendment becomes effective 180 days after publication in the Federal Register.

With the trilogues on the draft EU AI Act entering what is probably their final phase, and with the idea spreading that procuring AI cannot be done lightly, organizations are often confronted with hard choices, including how to source AI responsibly and protect against liabilities under an uncertain, developing legal framework. Contractual language is one answer, but what was done in the field had no authoritative reference point to benchmark against. That has now changed with the renewed proposal for standard EU model contractual AI clauses for procuring AI, released in early October by a multi-stakeholder group within the European Commission.

What Are the Model Clauses?

Two templates have been developed that use the risk classification of the draft EU AI Act as a marker. One set is developed for non-high-risk (and non-prohibited) AI uses (AI Clauses Light Version). The other is for high-risk (and non-prohibited) AI systems (AI Clauses High-Risk Version, together with the AI Clauses Light Version, the “Clauses”). The Clauses are designed to work as schedules to an existing contractual agreement; they are not standalone. They are available here.

The Clauses contain provisions setting (i) essential requirements that an AI system must meet before its delivery, such as the implementation of appropriate risk management systems by the supplier, the implementation of data governance models in connection with the datasets used for training the AI system and compliance with ethical standards; (ii) obligations for the supplier, such as the implementation of quality management systems to ensure compliance with the provisions set in the Clauses; and (iii) stipulations on the use of the data sets by the acquiring company/public authority, the supplier and third parties.

The Clauses include several annexes aimed at providing details on (i) the AI system and its intended purpose; (ii) the description of the data sets used; (iii) the technical documentation requirements; (iv) the instructions for use; and (v) the measures adopted to ensure that the AI system complies with ethical standards.

The AI Clauses High-Risk Version includes additional obligations for the supplier, such as carrying out a conformity assessment to ensure compliance with the provisions of the Clauses prior to delivery, the implementation of corrective actions in case of non-compliance and the need to cooperate in audits. The supplier’s liability in the event of claims by third parties is also addressed.

What Are They Not?

The Clauses only contain provisions specific to AI systems, addressing items relevant under the (upcoming) EU AI Act. They exclude obligations or requirements that may arise under other applicable legislation, such as the GDPR. The Clauses do not aim to provide a comprehensive set of prescriptive drafting; they place on the contracting parties the responsibility to assess the adequacy and proportionality of each section and to customize them further based on the specific context.

Further, as a matter of their initial scope, the Clauses are solely meant to address issues faced by public authorities procuring AI systems.

How Are the Clauses Relevant for Businesses?

Although these EU model contractual clauses are aimed at public organizations, they are useful beyond public authorities. They can serve as a good basis against which private organizations can benchmark the contractual provisions they have developed internally until now.

The Clauses reinforce the need for organizations to set innovation-oriented procurement procedures and focus throughout the procurement process on mechanisms for ensuring accountability and transparency of the solutions provided by their AI providers. Their adoption is fully voluntary.

Relying on the Clauses will be only one facet of tackling the many complex challenges inherent in sourcing AI technology in a quickly evolving regulatory landscape. At a minimum, you can now see whether the work you have done so far is going in the right direction.

Disclaimer: While every effort has been made to ensure that the information contained in this article is accurate, neither its authors nor Squire Patton Boggs accepts responsibility for any errors or omissions. The content of this article is for general information only, and is not intended to constitute or be relied upon as legal advice.


Last week, the House of Representatives’ Committee on Energy and Commerce kicked off its first in a series of hearings surrounding the burgeoning topic of artificial intelligence (AI) with a hearing titled “Safeguarding Data and Innovation: Building the Foundation for the Use of Artificial Intelligence.”

While this was the first AI-focused Energy and Commerce hearing this year, Chair Cathy McMorris Rodgers (R-WA) noted it was the Committee’s seventh hearing focused on data privacy. Consumer data privacy is a major priority for Chair Rodgers. This first hearing demonstrated that she is keenly focused on the intersection between data privacy and the use of AI in the private sector. She emphasized the need for a national data privacy standard as a “first step towards a safe and prosperous AI future.” “Data is the lifeblood of artificial intelligence,” she said. “As we think about how to protect people’s data privacy, we need to be considering first and foremost how the data is collected and how it is meant to be used, and ensure that it is secured.”

Chair Rodgers is reportedly updating the bicameral, bipartisan consumer data privacy bill from last Congress, the American Data Privacy and Protection Act (ADPPA). While Representative Nancy Pelosi (D-CA) is no longer Speaker of the House and thus cannot block floor action to ensure California’s more stringent privacy standards are not eclipsed by federal action, the delay in selecting a new Speaker of the House and the likely backlog of other legislative action that will consume the House floor for the rest of the year suggests that the ongoing debate over technology policy will continue into next year.

Notably, the all-encompassing nature of AI technology means Energy and Commerce is not the only House committee examining AI-related issues under its jurisdiction. Last week, the House Committee on Science, Space, and Technology also held a hearing on risk management, and the House Committee on the Judiciary held a hearing on intellectual property.

On the Senate side, the “Gang of Four” – consisting of Majority Leader Chuck Schumer (D-NY) and Senators Todd Young (R-IN), Martin Heinrich (D-NM), and Mike Rounds (R-SD) – is rushing to present a workable framework for AI-focused legislation. The gang hosted its first “Insight Forum” to examine AI technology with industry powerhouses in September, and it is expected to host additional topic-based forums in the weeks and months to come. The second one, held this week, focused on “Innovation,” including AI’s potential to unlock transformational innovation – from healthcare to food supply – and the need to ensure that innovation is sustainable.

In the weeks leading up to the New Year, many lawmakers are working to present draft legislation on AI. Congressional committees and leaders will examine a host of issues related to the technology, its potential impacts on society, and how Congress can best legislate against its most ominous capabilities. Members of Congress will also continue to introduce bills addressing more targeted AI-related issues, such as recent legislation related to “deepfake” content and political ads. Meanwhile, expect President Joe Biden to use the powers at his disposal, including by executive order, to begin to shape the federal government’s approach to AI development and regulation without Congress.

Last week, the Attorney General for California filed a notice of appeal to overturn a federal court ruling that the state’s Age-Appropriate Design Code Act (“CAADCA”) likely violates the First Amendment.  The appeal will put the constitutionality of California’s act before the Court of Appeals for the Ninth Circuit.

Following unanimous votes by the California legislature and signature by the Governor, California enacted the CAADCA in September 2022 as a measure purportedly “aimed at protecting the wellbeing, data, and privacy of children using online platforms.”  Industry group NetChoice soon turned to federal court and sought an injunction to prevent the law from being enforced on the grounds that it violates the First Amendment and the dormant Commerce Clause of the United States Constitution and is preempted by other federal statutes addressing online child safety, including the Children’s Online Privacy Protection Act (“COPPA”).  Last month, the court granted a preliminary injunction in favor of NetChoice, holding that the CAADCA likely violates the First Amendment.  Specifically, the court reasoned that the law regulates expression by limiting the use and sharing of (personal) information and that California’s justifications did not rise to the level required to regulate expression under the U.S. Constitution.

Privacy World is following this appeal and will be here to keep you in the loop.  Stay tuned.

Data breaches are an all-too-familiar issue, affecting businesses of all sizes and across all industries. Beyond dealing with the operational and reputational impacts and other resulting fallouts of a data breach, businesses also face enhanced class action litigation risk.

A recent high-profile case serves as a valuable reminder that companies should consider relying on a well-established mechanism for mitigating class action litigation risk. In In re Marriott International, Inc., Consumer Data Security Breach Litig., 78 F.4th 677 (4th Cir. 2023), the Fourth Circuit Court of Appeals reversed the district court’s certification order in a data breach class action dispute due to the effect of a class action waiver signed by all putative class members. The Marriott decision demonstrates how class action waivers can be utilized as a core strategy for mitigating heightened data breach litigation risks.

Continue Reading Recent Marriott Data Breach Class Action Decision Underscores the Importance of Class Action Waivers

Mr. Philippe Latombe, a French member of Parliament, beat privacy activist Max Schrems to the punch! Despite Mr. Schrems’ many statements against the EU-US Data Privacy Framework (DPF), Mr. Latombe was the first to file a request in the EU’s General Court to seek the annulment of the DPF and, separately, an interim measure to suspend the DPF pending the General Court’s decision.

However, the Court of Justice of the European Union (CJEU) (the EU’s highest court) rejected Mr. Latombe’s request to suspend the DPF in an interim decision dated October 12, 2023.

The conditions for obtaining interim relief are stringent, and such relief is rarely granted. The applicant must demonstrate that the grant of the interim measure (i.e., the suspension of the DPF) is (i) prima facie justified in fact and in law; and (ii) urgent, i.e., that the applicant needs the measure to avoid serious and irreparable damage before the decision on the merits is rendered.

The CJEU’s decision found that Mr. Latombe did not demonstrate the necessary urgency because he could not establish that he would suffer serious harm if the DPF was not suspended. The CJEU found that Mr. Latombe’s arguments were too broad and that he did not sufficiently set out the reasons why, in his particular case, transfers of his personal data, on the basis of the DPF to a DPF-certified business in the US, would cause him serious harm, especially considering that, under certain conditions, transfers of personal data to the US already are permitted based on the transfer tools provided for in Articles 46 and 49 of the GDPR.

We will be monitoring closely actions against the validity of the DPF and will bring further updates here and in our DPF FAQs.

Disclaimer: While every effort has been made to ensure that the information contained in this article is accurate, neither its authors nor Squire Patton Boggs accepts responsibility for any errors or omissions. The content of this article is for general information only, and is not intended to constitute or be relied upon as legal advice.