The Federal Trade Commission (“FTC” or “Agency”) recently indicated that it is considering initiating pre-rulemaking “under section 18 of the FTC Act to curb lax security practices, limit privacy abuses, and ensure that algorithmic decision-making does not result in unlawful discrimination.” This follows a similar indication from Fall 2021, in which the FTC signaled its intention to begin pre-rulemaking activities on the same security, privacy, and AI topics in February 2022. This time, the FTC has expressly indicated that it will issue an Advance Notice of Proposed Rulemaking (“ANPRM”) in June, with the associated public comment period to end in August, whereas it was silent on a specific timeline when it made its initial indication in the Fall. We will continue to keep you updated on these FTC rulemaking developments on security, privacy, and AI.

Also, on June 16, 2022, the Agency issued a report to Congress (the “Report”), as directed by Congress in the 2021 Appropriations Act, regarding the use of artificial intelligence (“AI”) to combat online problems such as scams, deepfakes, and fake reviews, as well as other more serious harms, such as child sexual exploitation and incitement of violence. While the Report is specific in its purview—addressing the use of AI to combat online harms, as we discuss further below—the FTC also uses the Report as an opportunity to signal its positions on, and intentions as to, AI more broadly.

Background on Congress’s Request & the FTC’s Report

The Report was issued by the FTC at the request of Congress, which—through the 2021 Appropriations Act—had directed the FTC to study and report on whether and how AI may be used to identify, remove, or take any other appropriate action necessary to address a wide variety of specified “online harms.” While the Report spends considerable time addressing the prescribed online harms, offering recommendations regarding the use of AI to combat them, and cautioning against over-reliance on AI tools, it also devotes significant attention to signaling the Agency’s thinking on AI more broadly. In particular, citing specific concerns raised by the FTC and other policymakers, thought leaders, consumer advocates, and others, the Report cautions that the use of AI should not necessarily be treated as a solution to the spread of harmful online content. Rather, recognizing that “misuse or over-reliance on [AI] tools can lead to poor results that can serve to cause more harm than they mitigate,” the Agency offers a number of safeguards. In so doing, the Agency raises concerns that, among other things, AI tools can be inaccurate, biased, and discriminatory by design, and can incentivize reliance on increasingly invasive forms of commercial surveillance, perhaps signaling areas of focus in forthcoming rulemaking.

While the FTC’s discussion of these issues and other shortcomings focuses predominantly on the use of AI to combat online harms through policy initiatives developed by lawmakers, these areas of concern apply with equal force to the use of AI in the private sector. Thus, it is reasonable to posit that the FTC will focus its investigative and enforcement efforts on these same concerns in connection with the use of AI by companies that fall under the FTC’s jurisdiction. Companies employing AI technologies more broadly should pay attention to the Agency’s forthcoming rulemaking process to stay ahead of the issues.

The FTC’s Recommendations Regarding the Use of AI

Another major takeaway from the Report is the series of “related considerations” that the FTC cautions will require the exercise of great care and focused attention when operating AI tools. Those considerations include (among others) the following:

  • Human Intervention: Human intervention is still needed, and perhaps always will be, in connection with monitoring the use and decisions of AI tools intended to address harmful conduct.
  • Transparency: AI use must be meaningfully transparent, which includes the need for these tools to be explainable and contestable, especially when people’s rights are involved or when personal data is being collected or used.
  • Accountability: Intertwined with transparency, platforms and other organizations that rely on AI tools to clean up harmful content that their services have amplified must be accountable for both their data and practices and their results.
  • Data Scientist and Employer Responsibility for Inputs and Outputs: Data scientists and their employers who build AI tools—as well as the firms procuring and deploying them—must be responsible for both inputs and outputs. Appropriate documentation of datasets, models, and the work undertaken to create these tools is important in this regard. Consideration should also be given to potential impacts and actual outcomes, even though those designing the tools will not always know how they will ultimately be used. And privacy and security should always remain a priority, such as in the treatment of training data.

Of note, the Report identifies transparency and accountability as the most valuable direction in this area, at least as an initial step. Allowing researchers and others to see behind platforms’ opaque screens (in a manner that takes user privacy into account) may prove vital for determining the best courses for further public and private action, especially given the difficulty of crafting appropriate solutions when key aspects of the problem are obscured from view. The Report also highlights a 2020 public statement on this issue by Commissioners Rebecca Kelly Slaughter and Christine Wilson, who remarked that “[i]t is alarming that we still know so little about companies that know so much about us” and that “[t]oo much about the industry remains opaque.”

In addition, Congress also instructed the FTC to recommend laws that could advance the use of AI to address online harms. The Report, however, finds that—given that major tech platforms and others are already using AI tools to address online harms—lawmakers should instead consider focusing on developing legal frameworks to ensure that AI tools do not cause additional harm.

Taken together, these recommendations suggest that companies should expect the FTC to pay particularly close attention to these issues as it begins to take a more active approach to policing the use of AI.

FTC: Our Work on AI “Will Likely Deepen”

In addition to signaling its likely areas of focus moving forward, the FTC veered outside the purview of Congress’s mandate to highlight its recent AI-specific enforcement cases and initiatives, describe the enhancement of its AI-focused staffing, and comment on its intentions as to AI going forward. In one notable sound bite, the FTC notes in the Report that its “work has addressed AI repeatedly, and this work will likely deepen as AI’s presence continues to rise in commerce.” Moreover, the FTC specifically calls out its recent AI-related staffing enhancements, highlighting the hiring of technologists and additional staff with expertise in, and specifically devoted to, the subject matter.

The Report also highlights the FTC’s major AI-related initiatives to date.

Conclusion

The recent Report to Congress strongly indicates the FTC’s overall apprehension and distrust as it relates to the use of AI, which should serve as a warning to the private sector of the potential for greater federal regulation of AI tools. That regulation may come sooner rather than later, especially in light of the Agency’s recent ANPRM signaling the FTC’s consideration of initiating rulemaking to “ensure that algorithmic decision-making does not result in unlawful discrimination.”

At the same time, although the FTC’s Report calls on lawmakers to consider developing legal frameworks to help ensure that the use of AI tools does not cause additional online harms, it is also likely that the FTC will increase its efforts in investigating and pursuing enforcement actions against improper AI practices more generally, especially as it relates to the Agency’s concerns regarding inaccuracy, bias, and discrimination.

In light of these developments, companies should consult with experienced AI counsel for advice on proactive measures that can be implemented now to get ahead of the compliance curve and put themselves in the best position to mitigate legal risks, as it is only a matter of time before regulation governing the use of AI is enacted, likely sooner rather than later.

Legislatures, regulators, and enforcement agencies across the United States and in Germany have turned up the heat on subscription plans within the past year by updating their automatic renewal laws (ARLs). California and Germany have new ARL requirements starting July 1, 2022. Generally, an automatic renewal or negative option is a paid subscription plan that automatically renews at the end of the term for a subsequent term, until the subscribing consumer cancels. Many US states and the US Federal Trade Commission (FTC) require businesses offering subscription plans to obtain the consumer’s affirmative consent to the subscription plan terms, send confirmation emails with the subscription terms, send renewal notices within a set number of days before the plan automatically renews, and allow consumers to easily cancel their subscriptions, among other requirements. The FTC’s enforcement power for automatic renewals rests in several laws and rules, such as Section 5 of the FTC Act, the Restore Online Shoppers’ Confidence Act (ROSCA), and the Telemarketing Sales Rule. Although most state ARLs target business-to-consumer contracts, some states have ARLs that regulate business-to-business contracts (e.g., New York and Wisconsin). We take a look at the varying requirements of the more stringent state ARLs regulating business-to-consumer contracts below. New or updated ARLs have taken effect in Colorado, Delaware, New York, and Illinois. Notably, California’s new, more stringent requirements for businesses that offer consumers automatic renewals take effect July 1, 2022.

In Europe, the EU has had several Directives relating to consumer contracts, including the Unfair Contract Terms Directive, Consumer Rights Directive, and most recently, the Digital Content Directive and Sale of Goods Directive. However, in addition to these Directives, Germany passed the Fair Consumer Contracts Act, which places stricter regulations on automatic renewals in e-commerce. An important new practical requirement is the cancellation button, the design of which is subject to detailed requirements. Non-compliant businesses will be subject to injunctive relief from both competitors and consumer protection associations. Further, consumers can cancel contracts at any time if the business is non-compliant. Some of the provisions of the Fair Consumer Contracts Act entered into force on October 1, 2021; however, implementation of the cancellation button is mandatory as of July 1, 2022, the same effective date as California’s updated ARL.

Updates to Laws

United States

Last year, New York strengthened its business-to-consumer ARL to include additional consent, disclosure, and cancellation requirements. In addition to this updated business-to-consumer ARL, New York’s original ARL covers business-to-business contracts “for service, maintenance or repair to or for any real or personal property” where the renewal period is longer than a month. New York’s enhanced ARL, which went into effect in 2021, has some notable new requirements for businesses that we have seen in other state consumer protection laws, including omnibus privacy laws:

  1. Obtain “affirmative consent” to the terms, including the cancellation policy (which must be clearly and conspicuously disclosed in “visual” or “temporal” proximity to the consent mechanism), prior to charging a consumer for an automatic renewal. Failure to obtain this consent will deem the “goods, wares, merchandise, or products” to be “unconditional gifts to the consumer, who may dispose of the [gift] in any manner he or she sees fit without any obligation whatsoever on the consumer’s part to the business.” §527-a(6).
  2. “Clear[ly] and conspicuous[ly]” disclose the “terms, cancellation policy, and information regarding how to cancel in a manner that is capable of being retained by the consumer.” §527-a(1)(c). Think of this as a requirement to send a confirmation email or letter to the subscribing consumer. If the subscription includes a free gift, the business should provide the ability and include instructions in the confirmation for the consumer to cancel before being charged for the good or service.
  3. Allow cancellation online of subscriptions purchased online, as well as “cost-effective, timely, and easy-to-use mechanism for cancellation” for subscriptions not purchased online. §527-a(2)-(3).

Indicating that automatic renewals are an enforcement priority, New York Attorney General Letitia James issued a consumer alert in November 2021, reminding consumers and businesses that New York has updated its ARL for business-to-consumer contracts.

In October 2021, the FTC issued an enforcement policy statement “warning companies against deploying illegal dark patterns that trick or trap consumers into subscription services.” The enforcement policy states that sellers should obtain a consumer’s unambiguous affirmative consent for the automatic renewal. You can read our other coverage of dark patterns here.

Also in October 2021, California enacted its enhanced ARL, which has an operative date of July 1, 2022. In the enhanced ARL, California imposes additional consent, disclosure, and cancellation requirements on businesses that offer automatic renewals. Notably, California’s ARL will soon require:

  1. Businesses must provide a notice (i.e., an email or letter to the consumer stating that the subscription will automatically renew) that clearly and conspicuously discloses (a) that the renewal will occur “unless the consumer cancels,” (b) the length of the additional term, (c) how the consumer may cancel, (d) if sent electronically, a link that directs the consumer to the cancellation process or another electronic method to cancel, and (e) the contact information for the business. §17602(a)(4).
  2. Notice timing.
    1. Notice must be provided 3 to 21 days before the expiration of a free gift or trial period lasting more than 31 days. §17602(b)(1).
    2. Notice must be provided 15 to 45 days prior to the renewal for automatic renewals with subscriptions one year or longer, under certain conditions. §17602(b)(2).
  3. Easy-to-use cancellation. Consumers subscribing online must be allowed to cancel online, “at will, and without engaging in any further steps that obstruct or delay the consumer’s ability to terminate” the subscription immediately. Businesses shall provide (a) “a prominently located direct link or button” in the account profile, or in device or user settings; or (b) a preformatted termination email that the “consumer can send to the business without additional information.” §17602(d)(1). Businesses can require account authentication prior to cancelling the account online, but consumers can still cancel through the other methods outlined elsewhere in California’s ARL.
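The notice-timing windows above lend themselves to a simple date calculation that compliance teams may find useful when scheduling renewal reminders. The following is an illustrative sketch only, not legal advice; the function names are our own, and it simplifies the statute (e.g., it ignores the "under certain conditions" qualifiers in §17602(b)):

```python
from datetime import date, timedelta

def renewal_notice_window(renewal_date: date) -> tuple[date, date]:
    # §17602(b)(2): for subscriptions of one year or longer (under certain
    # conditions), the notice must be sent 15 to 45 days before renewal.
    return renewal_date - timedelta(days=45), renewal_date - timedelta(days=15)

def trial_notice_window(trial_end: date) -> tuple[date, date]:
    # §17602(b)(1): for a free gift or trial lasting more than 31 days,
    # the notice must be sent 3 to 21 days before the trial expires.
    return trial_end - timedelta(days=21), trial_end - timedelta(days=3)

# Example: an annual plan renewing December 31, 2022 — the notice
# would have to go out between November 16 and December 16, 2022.
earliest, latest = renewal_notice_window(date(2022, 12, 31))
print(earliest, latest)  # 2022-11-16 2022-12-16
```

Whether a given subscription actually triggers these windows depends on the statutory conditions, so any such helper should be reviewed with counsel before being relied upon.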

Many other states and Washington, D.C. have similar consent, disclosure, and cancellation requirements in their existing or recently updated automatic renewal laws. For instance, Colorado’s ARL became effective January 1, 2022, and requires notices be sent to consumers 25 to 45 days prior to the “first automatic renewal that would extend the contract beyond a continuous twelve-month period,” as well as any subsequent renewal that would extend the contract past the additional twelve-month period. Delaware also enacted an ARL which has specific notice and disclosure requirements. Illinois’ enhanced ARL, which became effective January 1, 2022, now includes a requirement for cancellation instructions and mechanisms in the renewal notice, and requires an online cancellation option for consumers that subscribe online.

Germany

With the passage of the Fair Consumer Contracts Act (Gesetz für faire Verbraucherverträge), the German Civil Code (Bürgerliches Gesetzbuch – “BGB”) was amended to include stricter rules on tacit contract renewals (automatic renewals) for certain businesses. Sect. 309 No. 9 lit. b BGB. Notably, as of July 1, 2022, businesses offering subscriptions must provide a cancellation button on their websites. There are specific requirements including:

  • The button must be legibly labeled with a phrase such as “Cancel contract here.”
  • The button must lead the consumer to a confirmation page that meets specific requirements, such as allowing the consumer to provide identifying information, cancellation reason, and subscription end date.
  • The button and confirmation page must be permanently available, and immediately and easily accessible (i.e., clear and conspicuous).
  • The business must allow the consumer to document the request for termination (e.g., by means of a downloadable summary of the date and time the cancellation button was pressed) and provide the consumer with an electronic receipt of the request, including the date of the cancellation request and the date on which the subscription is to be cancelled.
  • If the consumer does not specify a time for cancellation, the termination date must be the earliest date possible.

If a business fails to follow these cancellation requirements, a German consumer may terminate a contract at any time and without observing a notice period.

Enforcement and Class Action Threat

Violations of automatic renewal laws are typically addressed by government enforcement actions. However, there have been a number of large class action settlements over the past few years alleging illegal automatic renewal programs in newspaper and magazine subscription programs. Recently, a lawsuit alleging violations of state consumer protection laws, as well as California’s ARL, based on a wellness company’s deceptive trial periods and consumers’ difficulty in cancelling and getting a refund, settled for over $50m. Although this class action alleged a violation of California’s ARL, several courts have found there is no independent private right of action in the California ARL. See Johnson v. Pluralsight, LLC, 728 F. App’x 674, 676 (9th Cir. 2018); Lopez v. YP Holdings, LLC, 2019 WL 7905748, *4 (C.D. Cal. Jan. 23, 2019); Mayron v. Google LLC, No. H044592, 2020 WL 5494245 (Cal. Ct. App. Sept. 11, 2020). Private litigants may attempt to bring automatic renewal lawsuits under different consumer protection statutes, such as California’s Unfair Competition Law. See Morrell v. WW Int’l, Inc., 551 F. Supp. 3d 173, 182 (S.D.N.Y. 2021).

As to state government enforcement, the state attorney general usually enforces the ARL. In California, the state Attorney General, District Attorneys, County Attorneys, City Prosecutors, and City Attorneys can enforce the state’s ARL. But as noted above, private litigants may still try to bring an ARL claim under another consumer protection statute, such as a law prohibiting unfair or deceptive trade practices. Some states explicitly allow private rights of action in their ARL (e.g., Virginia).

The ramifications for failing to comply with a state ARL vary by state. Some states, such as New York and Connecticut, have clauses in their ARLs providing that failure to comply with certain requirements renders the good or service an unconditional gift, which prevents the non-complying business from collecting from the consumer for non-payment. Florida, for example, provides that a violation of its ARL “renders the automatic renewal provision void and unenforceable.”

In addition to state enforcement, it is likely that the FTC will be looking more closely at automatic renewal programs in 2022 based on the October 2021 enforcement statement. For example, on March 8, 2022, the FTC announced a settlement with an online investment site for more than $2.4m based on allegations of bogus stock earnings claims and hard-to-cancel subscription plans, in violation of Section 5(a) of the FTC Act and Section 4 of ROSCA. The FTC’s press release notes that the settlement “continues the FTC’s crackdown on false earnings claims, returning millions to consumers and requiring click-to-cancel online subscriptions,” signaling that more enforcement actions may be on the horizon and that online cancellation is an FTC requirement for online subscriptions.

Recommendations

The consent, disclosure, and cancellation requirements vary by state, and businesses should be vigilant in complying with state-specific requirements. Businesses that offer subscription plans should ensure that customers are notified of the automatic renewal provision prior to beginning the transaction. Businesses should obtain a subscribing customer’s affirmative consent to the automatic renewal provision and send the subscriber a descriptive confirmation email after the initial purchase. Consumers should also receive a renewal notice prior to the subscription automatically renewing. Finally, businesses must be cautious of the difference between clever marketing and dark patterns in the subscription process.

These enhanced ARL requirements are already the law in certain states, and will soon be required of businesses selling automatic renewals to Californians. Businesses should implement the best practices outlined above as soon as possible, and prior to July 1, 2022, if subject to California’s law.

In Germany, we recommend that businesses review their subscription terms and conditions to ensure that no stipulations can be construed to bar consumers from using the cancellation button, and ensure that the cancellation flow complies with Germany’s specific requirements, prior to July 1, 2022.

For more information, please contact the authors or your usual point of contact at Squire Patton Boggs.

In case you missed it, below are recent posts from Consumer Privacy World covering the latest developments on data privacy, security and innovation. Please reach out to the authors if you are interested in additional information.

CJEU Rules Consumer Associations Can File Data Infringement Class Actions Without a Consumer Mandate

CPW’s Scott Warren Joins Faculty of PrivSec Focus: Enterprise Risk | Consumer Privacy World

German Supervisory Authorities: Online Traders Must Allow Guest Access for Customer Orders | Consumer Privacy World

BREAKING: FTC Nominee Bedoya Confirmed | Consumer Privacy World

A Do-Not-Miss CLE Webinar: Cross-border Data, the Cookiepocalypse and Standard Contractual Clauses | Consumer Privacy World

Google to Require Apps to Display “Data Safety” Information by July 20, 2022 | Consumer Privacy World

Connecticut and Utah Latest States to Jump On Consumer Privacy Bandwagon | Consumer Privacy World

Noteworthy Information in the French Data Protection Authority‘s (CNIL) Newly Published 2021 Annual Report | Consumer Privacy World

California Privacy Protection Agency Continues Rulemaking Focus on Automated Decision-Making and Profiling in Stakeholder Sessions | Consumer Privacy World

Episode 2 Out Now: APAC Partner Scott Warren Discusses Data Privacy Laws in China | Consumer Privacy World

NOW AVAILABLE: Lexis Practical Guidance Releases CPW Team Member David Oberly’s “Mitigating Legal Risks When Using Biometric Technologies” Biometric Privacy Practice Note and Biometric Privacy Compliance Checklist | Consumer Privacy World

Registration Open: CPW’s Kyle Fath and Kristin Bryan to Discuss Artificial Intelligence and Biometrics in New IAPP Virtual Event | Consumer Privacy World

Aerojet Rocketdyne Cybersecurity Trial and Settlement | Consumer Privacy World

“Dark Patterns” Are Focus of Regulatory Scrutiny in the United States and Europe | Consumer Privacy World

Webinar Recording Now Available: 2022 Developments and Trends Concerning Data Breach and Cybersecurity Litigation and Related Matters | Consumer Privacy World

Third Circuit Issues Order in WaWa Data Breach | Consumer Privacy World

California Privacy Regulator to Hold Stakeholder Sessions First Week of May | Consumer Privacy World

In case you missed it, below are recent posts from Consumer Privacy World covering the latest developments on data privacy, security and innovation. Please reach out to the authors if you are interested in additional information.

NOW AVAILABLE: Lexis Practical Guidance Releases CPW Team Member David Oberly’s “Mitigating Legal Risks When Using Biometric Technologies” Biometric Privacy Practice Note and Biometric Privacy Compliance Checklist

Registration Open: CPW’s Kyle Fath and Kristin Bryan to Discuss Artificial Intelligence and Biometrics in New IAPP Virtual Event

Aerojet Rocketdyne Cybersecurity Trial and Settlement

“Dark Patterns” Are Focus of Regulatory Scrutiny in the United States and Europe

Webinar Recording Now Available: 2022 Developments and Trends Concerning Data Breach and Cybersecurity Litigation and Related Matters

Third Circuit Issues Order in WaWa Data Breach

California Privacy Regulator to Hold Stakeholder Sessions First Week of May

CPW’s David Oberly Talks to Legaltech News About the Tech Industry’s Increased Focus on Enhancing Privacy Protections Over Consumers’ Sensitive Personal Information

Indiana Amends Data Breach Notification Law

Connecticut General Assembly Passes Comprehensive Privacy Bill

Federal Trade Commission Proposes Adjustments to Telemarketing Sales Rule, Including B2b Telemarketing Calls

Law360: Anti-Bias Considerations For Finance Cos. After CFPB Notice

Court Rejects Demand for “Corrective” Notice in Blackbaud Data Breach MDL

APAC Partner Scott Warren Featured in New Multi-Part Global Privacy Podcast Series

Double Trouble: Why Organisations Need to Consider the Legal Consequences of Ransomware and DDoS Attacks

BREAKING: Federal Judge in Illinois Affirms that BIPA Extends to Information Derived from Photographs

FCC Chairwoman Announces Proposed New Rules to Combat International Scam Robocalls 

Fourth Circuit Rules on Data Privacy and Trade Secret Claims Brought in Context of Former Employee/Employer Dispute

FDA Publishes Draft Cybersecurity Guidance for Medical Devices

Badgerow v. Walters: Supreme Court Holds that “look through” Approach Does not Apply to Requests to Confirm or Vacate Arbitral Awards Under the FAA

SEC Chair Reiterates New Potential Cyber Regulations at Financial Sector Meeting

Implications of Judge Jackson’s Confirmation for Data Privacy and Cybersecurity Litigations Going Forward

Congratulations to CPW’s Kyle Dull on Being Named to the 2022 Law360 Consumer Protection Editorial Board!

FCC Seeks Letters of Intent to Serve as Traceback Consortium on Suspected Unlawful Robocalls

hiQ Labs v. LinkedIn : Ninth Circuit Indicates Tort-Based Claims, Not CFAA, Appropriate Legal Theory for Attacking Data Scraping Practices

CPW on the Speaking Circuit in April: Scott Warren to Present on Digital Crime


As part of its continued preliminary rulemaking activities, the California Privacy Protection Agency (“CPPA”) will be holding stakeholder sessions Wednesday, May 4 through Friday, May 6 to provide an opportunity for stakeholders to weigh in on topics relevant to upcoming rulemaking. The agenda for each of the sessions, which are slated to last an entire day, is available here.

Connecticut is gearing up to be the next state with a comprehensive privacy law. On April 28, 2022, the Connecticut General Assembly passed SB 6, “An Act Concerning Personal Data Privacy and Online Monitoring,” which is currently with the governor awaiting signature. Of the state laws that have passed, SB 6 is most similar to the Colorado Privacy Act (“CPA”), Virginia Consumer Data Protection Act (“CDPA”), and Utah Consumer Privacy Act (“UCPA”). For example, under SB 6, the terms “controller,” “processor,” and “personal data” have similar definitions as under the CPA, CDPA, and UCPA.

Privacy regulators in California and Colorado recently made announcements regarding rulemaking for their respective state privacy laws. Last week, the California Privacy Protection Agency (“CPPA”) announced that it will hold its next public meeting this Thursday, February 17, during which it will discuss updates on the rulemaking process, including a timeline. On January 28, Colorado Attorney General Phil Weiser publicly announced the intent of the Colorado Office of the Attorney General (“COAG”) to carry out rulemaking activities to implement the Colorado Privacy Act (“CPA”), providing an indication of focus areas and a rough timeline. We discuss each of these developments in further detail below.

In case you missed it, below is a summary of recent posts from CPW.  Please feel free to reach out if you are interested in additional information on any of the developments covered.

Federal Court Partially Grants FTC Summary Judgment on Claims Brought Against MyLife for Alleged Privacy Violations, Awards Statutory Injunction and $34 Million in Consumer Redress – Consumer Privacy World

NYC’s ‘Peculiar’ New Delivery App Law Raises Data Breach Fears-Kyle Fath Talks to Bloomberg Law – Consumer Privacy World

Ninth Circuit Urged to Reconsider Ruling Narrowly Limiting “Public Injunctive Relief” Exception to Arbitration in Privacy and Data Collection Class Actions – Consumer Privacy World

Complimentary Webinar November 3rd from 12-1 pm PST The Future of Cybersecurity – When will we be one step ahead of threat actors? – Consumer Privacy World

Save the Date March 3-4, 2022 ABA’s Second Biannual Cybersecurity and Data Privacy Conference – Consumer Privacy World

BREAKING: FTC Announces It Will Ramp up Enforcement Against “Dark Patterns” Directed at Consumers – Consumer Privacy World

Elliot Golding Recognized by Bloomberg Law’s They’ve Got Next Series on Privacy Law Stars – Consumer Privacy World

The Federal Trade Commission (FTC) has made it clear: data privacy and cybersecurity are now a priority, and will be for years to come. In the wake of PrivacyCon 2021, the FTC’s sixth annual privacy, cybersecurity and consumer protection summit, held this summer, the FTC finally took official and sweeping action on privacy and cybersecurity. In particular, the Commission recently designated eight key areas of focus for enforcement and regulatory action, three of which directly implicate privacy, cybersecurity, and consumer protection. Below, we discuss the FTC’s action and what it means for businesses; the three key areas of interest to consumer privacy that are now in the FTC’s spotlight; and those areas’ relation to state privacy legislation and anticipated impact on civil litigation. Full details on PrivacyCon 2021 and the FTC’s resolutions following the summit can be found on the FTC’s website, linked here for your convenience.

The FTC’s Actions and Areas of Focus

In mid-September, the FTC voted to approve a series of resolutions directed at key enforcement areas, including the following, each discussed in further detail below:

  • Children Under 18: Harmful conduct directed at children under 18 has been a source of significant public concern; FTC staff will now be able to expeditiously investigate allegations in this important area.
  • Algorithmic and Biometric Bias: This resolution allows staff to investigate allegations of bias in algorithms and biometrics. Algorithmic bias was the subject of a recent FTC blog post.
  • Deceptive and Manipulative Conduct on the Internet: This includes, but is not limited to, the “manipulation of user interfaces,” including but not limited to dark patterns, also the subject of a recent FTC workshop.

The approval of this series of resolutions will enable the Commission "to efficiently and expeditiously investigate conduct in core FTC priority areas." Through the passage of the resolutions, the FTC has now directed that all "compulsory processes" available to it be used in connection with COPPA enforcement. This omnibus resolution mobilizes the full force of the FTC for the next ten years and gives FTC staff full authority to conduct investigations and commence enforcement actions in pursuit of this goal. The FTC has offered very little elaboration, however, regarding how it will use such "compulsory processes," which include subpoenas, civil investigative demands, and other demands for documents or testimony.

What does seem clear, however, is that the FTC is buckling down on the enforceability of its own actions. Previous remarks by Chair Lina M. Khan before the House Energy and Commerce Committee expressed frustration at the frequent hamstringing of the agency at the hands of courts in its past enforcement efforts. With this declaration of renewed energy, the FTC is summoning all the power it can to do its job, and we should expect to see an energized FTC kick up its patrol efforts in the near future. Businesses that conduct activities implicating these renewed areas should be aware of the FTC's focus and penchant for investigations and enforcement in such areas.

Children Under 18

The FTC's mandate to focus on harmful conduct directed at children under 18 is a signal that the Commission plans on broadening and doubling down on its already active enforcement efforts in this area. Areas of the Commission's prior and current focus on children include marketing claims, loot boxes and other virtual items that can be purchased in games, and in-app and recurring purchases made by children without parental authorization. Most importantly, the FTC is the main arbiter of children's online privacy through its enforcement of the Children's Online Privacy Protection Act ("COPPA"), but that law only applies to children under 13 (i.e., 12 and under). With this new directive to focus on children under 18, we can certainly expect the FTC to address consumer privacy issues broader than COPPA, covering children ages 13 to 17 as well.

Algorithmic and Biometric Bias

The FTC already has enforcement capabilities to regulate the development and use of artificial intelligence ("AI") and its associated algorithms. These include Section 5 of the FTC Act, which prohibits "unfair or deceptive acts or practices," the Fair Credit Reporting Act, which rears its head when algorithms impact lenders' decisions to provide credit, and the Equal Credit Opportunity Act, which prohibits the use of biased algorithms that discriminate on the basis of race, color, sex, age, and so on when making credit determinations. In using these tools, the FTC aims to clarify how algorithms are used and how the data that feeds them contributes to algorithmic output, and to bring to light issues that arise when algorithms don't work or feed on improper biases.

Bias and discrimination arising from use of biometrics will also now be a focus of the FTC. Interestingly, much recent research and criticism has pointed out that algorithms and biometric systems are biased against faces of color. This has arisen in many contexts, from the iPhone’s FaceID feature to the 2020 remotely-administered bar exam that threatened to fail applicants of color because their webcams could not detect their faces. These are just some of the issues that arise when companies turn to algorithms to try to create heuristics in making business decisions. The FTC has not let these concerns go by the wayside, and after preliminarily addressing them in an April 2021 blog post, has now reestablished that algorithmic and biometric bias is a new focus for the upcoming years.

Notably, AI and other automated decision-making, particularly that which results in legal and/or discriminatory effects, will also become regulated under omnibus privacy legislation in California, Virginia, and Colorado, forthcoming in 2023.

Deceptive and Manipulative Conduct on the Internet (Including “Dark Patterns”)

The sinisterly nicknamed practice of "dark patterns" happens constantly to online consumers, albeit in ways that tend to seem benign. For example, shoppers contemplating items in their cart may be pressured to complete the sale if they receive a notification like, "Hurry, three other people have this in their cart!" More annoyingly, online consumers who wish to unsubscribe from newsletters or email blasts may find themselves having to click through multiple pages just to free their inboxes, rather than being offered an easily identifiable and quickly accessible "unsubscribe" button. "Dark patterns" is the term coined for these sorts of techniques, which impair consumers' autonomy and create traps for online shoppers.

Earlier this year, the FTC hosted a workshop called "Bringing Dark Patterns to Light," and sought comments from experts and the public to evaluate how these dark patterns impact customers. The FTC was particularly concerned with the harms caused by dark patterns, and how they may take advantage of certain groups of vulnerable consumers. The FTC is not alone in its attention to this issue; in March, California's Attorney General announced regulations that banned dark patterns and required disclosure to consumers of the right to opt out of the sale of personal information collected through online cookies. These regulations also prohibit companies from requiring consumers who wish to opt out to click through myriad screens before achieving their goals. On the opposite coast, the weight-loss app Noom now faces a class action alleging deceptive acts through Noom's cancellation policy, automatic renewal schemes, and marketing to consumers.

With both public and private entities turning their eyes toward dark patterns, the FTC has now declared the agency will put its full weight behind seeking out and investigating “unfair, deceptive, anticompetitive, collusive, coercive, predatory, exploitative, or exclusionary acts or practices…including, but not limited to, dark patterns…” Keeping an eye on this work will be important—just as important as keeping an eye on which cookies you accept, and which are best to just let go stale.

In addition to being in the crosshairs of the FTC, dark patterns are also a focus of regulators across the globe, including in Europe, and will be regulated under California’s forthcoming California Privacy Rights Act.

Anticipated Litigation Trends

With the FTC declaring its intent to vigorously investigate these three aforementioned areas, we now turn to what the agency's new enforcement priorities mean for civil litigation. As practitioners in this field already know, it is unlikely that they will result in an influx of new litigation. The FTC's enforcement authority exists pursuant to Section 5(a) of the FTC Act, which outlaws "unfair or deceptive acts or practices in or affecting commerce," but does not contain a private right of action, so plaintiffs cannot bring new suits based on the new enforcement priorities, as they have no private right to enforce those priorities.

However, these areas of focus could influence broader trends in civil litigation, even if, on their own, they do not create any new liability. Successful enforcement actions by the FTC could bring about new industry standards with respect to algorithmic bias, dark patterns, and other areas of focus. These standards, in turn, could be cited in consumer privacy class action complaints. New civil actions could also stem from enforcement actions by the FTC and the information revealed in settlements resulting from such actions. For example, the FTC announced a settlement with fertility-tracking app Flo Health Inc. in January; this month, a consolidated class action complaint was filed against Flo Health, stemming from seven proposed class actions filed against it this year, alleging that the app unlawfully shared users’ health information with third parties.

Although the FTC's new enforcement priorities seem ambitious, recent developments may impede its capability to bring enforcement actions in these areas. The agency was dealt a blow in April of this year, when the Supreme Court ruled in AMG Capital Mgmt., LLC v. FTC that the agency lacks power to seek monetary recovery under Section 13(b) of the FTC Act. Legislation to restore this power to the agency passed the House, but is awaiting a Senate vote. More recently, the Senate voted to advance the nomination of Rohit Chopra, currently a Democratic Commissioner, to lead the Consumer Financial Protection Bureau. The White House announced that President Biden will nominate Alvaro Bedoya, a privacy scholar with expertise in surveillance and data security, to fill Commissioner Chopra's seat. As Commissioner, likely priorities for Bedoya include the FTC's enforcement of various privacy laws, including the Fair Credit Reporting Act and the Gramm-Leach-Bliley Act, which could further impact litigation brought under those statutes.

In case you missed it, below is a summary of recent posts from CPW.  Please feel free to reach out if you are interested in additional information on any of the developments covered.

Federal Court Gives Preliminary Approval of $92 Million TikTok MDL Settlement Over Objections – Consumer Privacy World

California Privacy Agency Announces Appointment of Executive Director Ashkan Soltani – Consumer Privacy World

Ex-FTC Tech Leader Picked To Head Calif. Privacy Agency-CPW’s Alan Friel Talks to Law360 – Consumer Privacy World

Key Themes From California Attorney General’s Examples of CCPA Non-Compliance: Join CPW’s Alan Friel October 15 at 1:45 pm EST – Consumer Privacy World

Cyberattacks and the Energy Sector: CPW’s Kristin Bryan Talks to ITPro – Consumer Privacy World

Federal Court Grants Motion for Judgment on the Pleadings In FCRA Litigation – Consumer Privacy World

California Privacy Agency Moves Forward With Rulemaking Process – Consumer Privacy World

European Data Protection Board Establishes Cookie Banner Taskforce, Which Will Also Look Into Dark Patterns and Deceptive Designs – Consumer Privacy World