In case you missed it, below are recent posts from Privacy World covering the latest developments on data privacy, security and innovation. Please reach out to the authors if you are interested in additional information.

The EU Approach to AI Regulation: Texts That Generative AI Will Not Come Up With | Privacy World

Singapore Open-sources World’s First AI Governance Testing Framework and Toolkit | Privacy World

Hong Kong Initiates Privacy Compliance Checks on All Credit Reference Agencies | Privacy World

Montana’s Comprehensive Privacy Law Signed by the Governor | Privacy World

Singapore’s Central Bank and Google Cloud Collaborate on Responsible Generative AI | Privacy World

Uncloaking Dark Patterns: Identifying, Avoiding, and Minimizing Legal Risk | Privacy World

South Korea Looks to Tighten Biometrics Laws Amid Generative AI | Privacy World

FTC’s New Policy Statement on Biometric Information Provides Clear Warning to Companies on Increased Scrutiny of Facial Recognition & Related Biometrics Practices | Privacy World

The Philippines Consults on Draft Consent and Private Identification Cards Guidelines | Privacy World

Southeast Asia and the EU Publish a First-of-its-Kind Interoperability Guide for Data Transfers | Privacy World

Changes to Spanish Data Protection Laws | Privacy World

Navigating Data Privacy Assessments Amid New State Laws | Privacy World

The Philippines and Hong Kong Sign Data Protection Mutual Assistance Agreement | Privacy World


The Philippines’ National Privacy Commission (NPC) has released for public comment two sets of draft guidelines on:

  • Consent as a basis for processing personal data (Consent Guidelines)[1]
  • The issuance and use of identification cards by private organizations (ID Cards Guidelines)[2]

Consent Guidelines

Consent is acknowledged as the most common criterion for processing personal data. The NPC has therefore determined that further guidance is needed on the concept and use of consent as a lawful basis for processing personal data.

Data Privacy Principles 

The Consent Guidelines set out the following data privacy principles that must be adhered to:

  • Transparency
  • Legitimate purpose
  • Proportionality
  • Fairness

There is a minimum level of information that must be provided to data subjects in a clear and concise[3] manner. This includes the purpose, nature, extent, duration and scope of processing, the identity of the organization, the existence of data subject rights, and how these can be exercised.

Where there is further processing of personal data for additional purposes beyond those for which the data was initially collected, a compatibility assessment should first be conducted, considering:

  • A clear and reasonable link between the original and new purposes of the processing
  • The context in which the data was collected, and any reasonable expectations on further use based on the parties’ relationship
  • The nature of the data and the impact of its further processing on the data subject
  • The existence of appropriate security measures accorded to the processing

Where the additional purpose goes beyond what a data subject might reasonably expect, then consent is required.

Elements of Consent 

The elements of valid consent are as follows: it must be freely given, specific, informed and an indication of will, and it must be evidenced by written, electronic or recorded means.

Public bodies – The use of consent as a basis for processing by public authorities is permitted where the processing activity is unrelated to what is required by law or regulation.

Contracts of adhesion – Where a party imposes a ready-made form of contract on the other party (known as a contract of adhesion in the Philippines), consent is only valid if the contract contains all the information necessary to demonstrate transparency, and the processing is necessary, is for a legitimate purpose, is not excessive, and is fair and lawful.

Quality of consent – Consent must be granular and not bundled. At the same time, organizations must avoid consent fatigue by properly identifying the lawful basis for processing before any data collection; if another lawful basis applies, consent should not be requested. Implied consent is not valid. On the other hand, if all the elements of consent are present, a data subject’s continued use of a specific service may constitute an assenting action that signifies consent.

Format of consent – There is no differentiation among formats or media for capturing consent. An organization must, however, keep evidence of the consent, including the date it was obtained, the method of obtaining it, who obtained it, and what information was given to the data subject. Deceptive design (dark patterns) and other forms of coercion will void consent however it is obtained, with the NPC making such determinations on a case-by-case basis.

Withdrawal of consent – Consent may be withdrawn at any time and without cost to the data subject, subject to any limitations prescribed by law or contract, and withdrawing consent must be as easy as giving it. When consent is withdrawn, an organization must stop processing without undue delay and delete the personal data if there is no other lawful basis justifying its continued processing. The data may still be retained post-withdrawal, but only for a reasonable period based on industry standards and other relevant considerations.

Specific Processing

Direct marketing – Consent is required for direct marketing where it would significantly affect the rights and freedoms of a data subject. The guidelines list the following as examples: analyzing or predicting the personal preferences, behavior and attitudes of the data subject to inform subsequent decision-making; tracking and profiling for direct marketing; behavioral advertising; data brokering; location-based advertising; tracking-based digital market research; and other analogous activities. However, direct marketing may be treated as a legitimate interest for which consent is not required, to be determined on a case-by-case basis.

Data sharing – Where data sharing is based on consent, the data subject must be given specific information about the sharing arrangement.

Research – Research is recognized as important to nation-building and in the public interest. Consent can be obtained within a reasonable time after the conclusion of the data gathering, if obtaining consent prior to collection will affect the research results. Where research is done only through observing public behavior, or where the results will be fully anonymized, consent is not required.

Publicly available information – Significantly, the guidelines clarify that the fact that personal data is provided by a data subject on a publicly accessible platform does not mean that blanket consent has been given for its use for any purpose whatsoever. Ultimately, organizations bear the responsibility of identifying and proving that their processing rests on a lawful basis under Philippine data privacy law.

Profiling and automated processing – Data subjects must be informed of any profiling or automated processing of their personal data. There must be safeguards against discriminatory outcomes affecting, or unfair treatment of, data subjects. Consent must be obtained for automated processing that solely determines any decision that has legal ramifications or a significant impact on a data subject.

Miscellaneous provisions – The processing of sensitive personal data through a contract between an organization and a data subject will be regarded as one that is based on consent. Hence, the requirements for consent must be complied with. Further, any waiver by a data subject of their privacy rights, including the right to file a complaint, will be void.

ID Cards Guidelines

This set of guidelines will apply to any private organization that issues an identification card to a data subject. Such cards may be in a physical or digital format, and include company IDs, school IDs, insurance cards, membership cards, and even rewards or loyalty cards.

The requirements imposed for these ID cards are:

  1. They must capture only such personal data as is necessary for the purpose of identifying the data subject. However, other personal data may be included if explicitly required by law.
  2. The organization that issues the ID cards must implement appropriate safeguards to protect personal data on these cards, which must be on par with technological advancements, best practices and industry standards.
  3. The organization issuing the cards bears the ultimate burden of demonstrating that the inclusion of any personal data is proportionate to a legitimate purpose.

Violation of the above carries criminal, civil and administrative liability as set out in the Philippines’ data privacy law.

Effective Date 

Each set of guidelines will take effect 15 days after it is published in a newspaper or a gazette, and affected organizations will have 90 days from the effective date to comply.

Public Consultation 

Comments on either of these guidelines must be submitted to policy@privacy.gov.ph no later than June 9, 2023, with the subject: “Public Consultation – Consent” or “Public Consultation – ID Cards,” as the case may be.

Privacy World will continue to cover developments. For more information, contact your relationship partner at the firm.

Disclaimer: While every effort has been made to ensure that the information contained in this article is accurate, neither its authors nor Squire Patton Boggs accept responsibility for any errors or omissions. The content of this article is for general information only, and is not intended to constitute or be relied upon as legal advice.

[1] https://privacy.gov.ph/wp-content/uploads/2023/05/DRAFT-Circular-Guidelines-on-Consent-For-Public-Consultation.pdf.

[2] https://privacy.gov.ph/wp-content/uploads/2023/05/DRAFT-Circular-on-ID-Cards-For-Public-Consultation.pdf.

[3] Language used must not be confusing or complex.


Yesterday, Gov. Spencer Cox signed Utah’s Social Media Regulation Act (“SMRA”) into law.

The SMRA applies to businesses that provide a social media platform with at least five (5) million account holders worldwide. The definition of “social media platform” is broad but includes 24 exceptions that generally narrow the SMRA’s scope to a lay-person’s typical understanding of a social media platform.

It goes into effect on May 3, 2023, with numerous compliance requirements and prohibitions for social media platforms coming into force beginning March 1, 2024. Continue Reading Utah’s Social Media Regulation Act Signed by Governor

Several months ago, you may have seen social media filled with artistic renditions of your connections as paintings, cartoons, or other artistic styles. These renditions came from Lensa, an app by which users upload “selfies” or other photos, which the app processes to generate artistic images of the user. Lensa, which is owned by Prisma Labs, Inc., is the latest subject of a putative class action brought under the Illinois Biometric Information Privacy Act (“BIPA”).

In Flora, et al., v. Prisma Labs, Inc., No. 5:23-cv-00680 (N.D. Cal.), Plaintiffs—a group that includes a minor child—are residents of Illinois who used the Lensa app to create artistic images of themselves. Plaintiffs allege that they used Lensa in December 2022, after the app exploded in popularity in November 2022 due to the launch of the “magic avatars” feature, which requires users to upload at least eight images of themselves (and up to 20 images) to create artistic, stylized “avatars” of the user’s face. The app can also be used to upload images of others and create avatars based on those images. Plaintiffs allege that Lensa’s privacy policy as of December 2022 did not inform users that their facial geometry would be collected to create the avatars, and that several oblique references to Lensa’s use and processing of users’ images led users to believe that their biometric data is “anonymized” and does not leave the user’s device—which seemingly contradicts Lensa’s model of collecting users’ images and generating avatars based on those images. The Complaint also alleges that Lensa’s privacy policy temporarily disclosed that “face data” would be used to “train” its “neural network algorithms,” but that the provision was subsequently removed, and that the policy never explained how that data would be protected or disclosed.

Based on the allegations in the Complaint, Plaintiffs seek to represent a class of “All persons who reside in Illinois whose biometric data was collected, captured, purchased, received through trade, or otherwise obtained by Prisma, either through use of the Lensa app or otherwise.” Plaintiffs bring seven causes of action under Sections 15(a), 15(b)(1), 15(b)(2), 15(b)(3), 15(c), 15(d), and 15(e) of BIPA, as well as an additional claim for unjust enrichment based on Lensa’s paid subscription service.

The Complaint also raises additional concerns about Lensa’s business model and methods of generating images. For example, upon downloading the app, a user is prompted to begin a seven-day trial subscription with Lensa; the Complaint alleges that the app uses dark patterns to prompt users to choose this option, rather than closing out of it and declining the trial subscription. The Complaint also alleges that Lensa generates images using Stable Diffusion, an open-source AI model trained on over 2 billion images, including images that are protected by copyright. As alleged in the Complaint, the system could violate the intellectual property rights of artists who own the copyrights in the images used to train the AI model.

Flora is similar to past BIPA class actions brought against apps that allow users to virtually “try on” makeup, clothing, or other beauty items, as well as class actions brought against entities that use images to “train” AI models. Plaintiffs are represented by Loevy & Loevy, which notably prevailed in the first BIPA case to go to trial, Rogers v. BNSF Railway Company. Privacy World will continue to keep an eye on how this case develops for you.

746 years. That is the total prison time to which criminal defendants have been sentenced in consumer fraud cases that the Federal Trade Commission (FTC) has referred to prosecutors over the past five years. The FTC’s Bureau of Consumer Protection Criminal Liaison Unit (Bureau) highlighted these figures in its recently published Criminal Liaison Unit Report. Notably, the report emphasized the FTC’s growing enforcement concern over the use of deceptive negative option marketing (or dark patterns) and its intention to refer egregious cases to prosecutors in the future. The Criminal Liaison Unit Report (the Report) is consistent with the FTC’s November 4, 2021 Enforcement Policy Statement Regarding Negative Option Marketing, and the Report outlines four key takeaways for companies going forward. Continue Reading FTC Signals More Criminal Referrals for Negative Option Fraudsters

On October 17, 2022, the California Privacy Protection Agency (“CPPA” or “Agency”) published Modified Text of Proposed Regulations (“Modified Regs”) and an Explanation of Modified Text of Proposed Regulations (“Explanation of Modified Regs”). The CPPA’s consideration of the Modified Regs has been postponed and is now scheduled for the October 28-29, 2022 public meeting.

Recall that earlier this year, on May 27, 2022, the CPPA published the first draft of the proposed CPRA Regs and initial statement of reasons. The Agency commenced the formal rulemaking process to adopt the Regs on July 8, 2022, and the 45-day public comment period closed on August 23, 2022. The comments submitted in response to the first draft of the Regs are available here. Continue Reading Revised Proposed CPRA Regs To Be Considered At October 28, 2022 Meeting

In case you missed it, below are recent posts from Consumer Privacy World covering the latest developments on data privacy, security and innovation. Please reach out to the authors if you are interested in additional information.

Passage of Federal Privacy Bill Remains Possible This Year, Remains a Continued Priority | Consumer Privacy World

Webinar Registration Open: Mitigating Cybersecurity Class Action Litigation Risks: Policies, Procedures, Service Providers, Notification, Damages | Consumer Privacy World

Kyle Fath appointed to Connecticut Privacy Legislation Working Group | Consumer Privacy World

FCC Adopts Rulemaking Proposal to Protect Consumer Privacy From Invasion by Unwanted Text Messages | Consumer Privacy World

Update on the California Privacy Protection Agency: Still No Date Certain for the CPRA Regulations | Consumer Privacy World

“Delaware Ruling Highlights Challenges Of Data Breach Biz Disputes” Article, Co-Authored by CPW’s Kristin Bryan, Jesse Taylor and Caroline Dzeba, is Published on Law360 | Consumer Privacy World

Third Circuit Announces Standard for Determining Accuracy of Credit Reports Under FCRA | Consumer Privacy World

2023 State Privacy Laws: How to Assess and Ensure Readiness by Year-end

Malcolm Dowden and Niloufar Massachi Discuss Vendor Contracting Requirements Under New US Privacy Laws and the GDPR

New topic for EDPB’s coordinated enforcement action: the DPO

Dark Patterns under the Regulatory Spotlight Again

CPW’s Shea Leitch and Kyle Dull to Speak at ACC South Florida’s 12th Annual CLE Conference

CPW’s David Oberly Examines Recent Major Changes to Consumer Privacy Legal Landscape in Latest Issue of the Cincinnati Bar Association’s CBA Report Magazine

CPW’s Kristin Bryan Discusses Session Replay Software Litigation Trends With The Seattle Times

Office of Management and Budget Takes Action to Enhance the Security of Software Supply Chain

CPW’s Kristin Bryan, Jesse Taylor and Shing Tse Co-Author Chapter for Lexis Practical Guidance on Privacy, Cybersecurity and Data Breach Litigation: Key Laws and Considerations

Data Protection and Digital Information Bill Delayed – Aspects to Consider While We Wait

CPW’s David Oberly Analyzes the FTC’s Largest FTC Contact Lens Rule Settlement to Date in Law360


In case you missed it, below are recent posts from Consumer Privacy World covering the latest developments on data privacy, security and innovation. Please reach out to the authors if you are interested in additional information.

2023 State Privacy Laws: How to Assess and Ensure Readiness by Year-end

New topic for EDPB’s coordinated enforcement action: the DPO

Dark Patterns under the Regulatory Spotlight Again

CPW’s Kyle Dull to Speak at ACC South Florida’s 12th Annual CLE Conference

CPW’s David Oberly Examines Recent Major Changes to Consumer Privacy Legal Landscape in Latest Issue of the Cincinnati Bar Association’s CBA Report Magazine

CPW’s Kristin Bryan Discusses Session Replay Software Litigation Trends With The Seattle Times

Office of Management and Budget Takes Action to Enhance the Security of Software Supply Chain

CPW’s Kristin Bryan, Jesse Taylor and Shing Tse Co-Author Chapter for Lexis Practical Guidance on Privacy, Cybersecurity and Data Breach Litigation: Key Laws and Considerations

Data Protection and Digital Information Bill Delayed – Aspects to Consider While We Wait

CPW’s David Oberly Analyzes the FTC’s Largest FTC Contact Lens Rule Settlement to Date in Law360

Congratulations to CPW’s Kristin Bryan on Being Named a 2022 Cybersecurity & Privacy MVP by Law360!

FCC Reportedly Issues Letters of Inquiry Seeking Further Information on Wireless Providers Data Privacy Practices

Webinar Registration Open: Navigating Cross-border Challenges Relating to HR Data Protection and Employee Right-to-Work Compliance

HR and B-to-B Data Compliance Deadline Looming – Legislative Efforts to Extend California Consumer Privacy Act Exemptions Fail

For years now, California has set the standard for privacy and data protection regulation in the United States. Recently, as calls for greater controls over the addictive nature of social media grow louder, legislators in the Golden State have moved closer to enacting a new, first-of-its-kind privacy law that would prohibit the development and use of “addictive” features by social media platforms. At the same time, state legislators also advanced a second bill that would put in place stringent online privacy protections for minors.

Businesses should monitor the progress of these bills closely, as their enactment—combined with an increased focus on children’s privacy by both federal lawmakers and the Federal Trade Commission (“FTC”)—may have a ripple effect in other states and municipalities, with legislators following close behind to enact similar children’s online privacy laws.

Continue Reading California Moves Closer to Enacting More Stringent Online Privacy Protections for Children

The Federal Trade Commission (“FTC” or “Agency”) recently indicated that it is considering initiating pre-rulemaking “under section 18 of the FTC Act to curb lax security practices, limit privacy abuses, and ensure that algorithmic decision-making does not result in unlawful discrimination.” This follows a similar indication from Fall 2021, when the FTC signaled its intention to begin pre-rulemaking activities on the same security, privacy, and AI topics in February 2022. This time, the FTC has expressly indicated that it will submit an Advance Notice of Proposed Rulemaking (“ANPRM”) in June, with the associated public comment period to end in August, whereas it was silent on a specific timeline in its initial indication in the Fall. We will continue to keep you updated on these FTC rulemaking developments on security, privacy, and AI.

Also, on June 16, 2022 the Agency issued a report to Congress (the “Report”), as directed by Congress in the 2021 Appropriations Act, regarding the use of artificial intelligence (“AI”) to combat online problems such as scams, deepfakes, and fake reviews, as well as other more serious harms, such as child sexual exploitation and incitement of violence. While the Report is specific in its purview—addressing the use of AI to combat online harms, as we discuss further below—the FTC also uses the Report as an opportunity to signal its positions on, and intentions as to, AI more broadly.

Background on Congress’s Request & the FTC’s Report

The Report was issued by the FTC at the request of Congress, which—through the 2021 Appropriations Act—had directed the FTC to study and report on whether and how AI may be used to identify, remove, or take any other appropriate action necessary to address a wide variety of specified “online harms.” While the Report spends a significant amount of time addressing the prescribed online harms and offering recommendations regarding the use of AI to combat them, as well as caveats about over-reliance on AI tools, it also devotes significant attention to signaling the Agency’s thinking on AI more broadly. In particular, due to specific concerns that have been raised by the FTC and other policymakers, thought leaders, consumer advocates, and others, the Report cautions that the use of AI should not necessarily be treated as a solution to the spread of harmful online content. Rather, recognizing that “misuse or over-reliance on [AI] tools can lead to poor results that can serve to cause more harm than they mitigate,” the Agency offers a number of safeguards. In so doing, the Agency raises concerns that, among other things, AI tools can be inaccurate, biased, and discriminatory by design, and can also incentivize reliance on increasingly invasive forms of commercial surveillance, perhaps signaling areas of focus in forthcoming rulemaking.

While the FTC’s discussion of these issues and other shortcomings focuses predominantly on the use of AI to combat online harms through policy initiatives developed by lawmakers, these areas of concern apply with equal force to the use of AI in the private sector. Thus, it is reasonable to posit that the FTC will focus its investigative and enforcement efforts on these same concerns in connection with the use of AI by companies that fall under the FTC’s jurisdiction. Companies employing AI technologies more broadly should pay attention to the Agency’s forthcoming rulemaking process to stay ahead of the issues.

The FTC’s Recommendations Regarding the Use of AI

Another major takeaway from the Report is the series of “related considerations” that the FTC has cautioned will require great care and focused attention when operating AI tools. Those considerations include (among others) the following:

  • Human Intervention: Human intervention is still needed, and perhaps always will be, in connection with monitoring the use and decisions of AI tools intended to address harmful conduct.
  • Transparency: AI use must be meaningfully transparent, which includes the need for these tools to be explainable and contestable, especially when people’s rights are involved or when personal data is being collected or used.
  • Accountability: Intertwined with transparency, platforms and other organizations that rely on AI tools to clean up harmful content that their services have amplified must be accountable both for their data practices and for their results.
  • Data Scientist and Employer Responsibility for Inputs and Outputs: Data scientists and their employers who build AI tools—as well as the firms procuring and deploying them—must be responsible for both inputs and outputs. Appropriate documentation of datasets, models, and work undertaken to create these tools is important in this regard. Consideration should also be given to potential impacts and actual outcomes, even though those designing the tools will not always know how they will ultimately be used. And privacy and security should always remain a priority, such as in the treatment of training data.

Of note, the Report identifies transparency and accountability as the most valuable direction in this area, at least as an initial step: being able to see, and conduct research, behind platforms’ opaque screens (in a manner that takes user privacy into account) may prove vital for determining the best courses for further public and private action, especially given the difficulty of crafting appropriate solutions when key aspects of the problems are obscured from view. The Report also highlights a 2020 public statement on this issue by Commissioners Rebecca Kelly Slaughter and Christine Wilson, who remarked that “[i]t is alarming that we still know so little about companies that know so much about us” and that “[t]oo much about the industry remains opaque.”

In addition, Congress instructed the FTC to recommend laws that could advance the use of AI to address online harms. The Report, however, finds that—given that major tech platforms and others are already using AI tools to address online harms—lawmakers should instead consider focusing on developing legal frameworks to ensure that AI tools do not cause additional harm.

Taken together, these considerations suggest that companies should expect the FTC to pay particularly close attention to these issues as it takes a more active approach to policing the use of AI.

FTC: Our Work on AI “Will Likely Deepen”

In addition to signaling potential areas of future focus in addressing Congress’ mandate, the FTC veered outside its purview to highlight its recent AI-specific enforcement cases and initiatives, describe the enhancement of its AI-focused staffing, and provide commentary on its intentions as to AI moving forward. In one notable sound bite, the FTC notes in the Report that its “work has addressed AI repeatedly, and this work will likely deepen as AI’s presence continues to rise in commerce.” Moreover, the FTC specifically calls out its recent AI-related staffing enhancements, highlighting the hiring of technologists and additional staff with expertise in, and specifically devoted to, the subject matter.

The Report also highlights the FTC’s major AI-related initiatives to date.

Conclusion

The recent Report to Congress strongly indicates the FTC’s overall apprehension and distrust regarding the use of AI, which should serve as a warning to the private sector of the potential for greater federal regulation over the use of AI tools. That regulation may come sooner rather than later, especially in light of the Agency’s recent ANPRM signaling the FTC’s consideration of initiating rulemaking to “ensure that algorithmic decision-making does not result in unlawful discrimination.”

At the same time, although the FTC’s Report calls on lawmakers to consider developing legal frameworks to help ensure that the use of AI tools does not cause additional online harms, it is also likely that the FTC will increase its efforts in investigating and pursuing enforcement actions against improper AI practices more generally, especially as it relates to the Agency’s concerns regarding inaccuracy, bias, and discrimination.

Taken together, these developments suggest that companies should consult with experienced AI counsel for advice on proactive measures that can be implemented now to get ahead of the compliance curve and put themselves in the best position to mitigate legal risks moving forward, as it is only a matter of time, likely sooner rather than later, before regulation governing the use of AI is enacted.