Compliance

Building a customer base is time-consuming and expensive. Engaging existing customers is often easier and more profitable than acquiring new ones. In the US, email and other targeted marketing is a low-cost, high-ROI way to foster this engagement, which makes collecting customers’ email addresses (and other personal information) a high priority for marketers. But, marketers beware: laws in California and Massachusetts that limit the collection of email addresses (and other personal information) at the point of purchase are an increasingly popular source of class action legal risk. While the California and Massachusetts laws are popular with plaintiffs’ counsel now, several other states have similar laws that apply to different categories of information (e.g., some state laws cover only address and telephone number) and different transactions, with varying enforcement mechanisms (e.g., criminal penalties or state attorney general enforcement).

Key Takeaways

  • Ensure that retail location staff understand that providing personal information not required to complete a transaction must be the customer’s choice. Requesting a customer’s email address or other contact data during the purchase process – such as for tailored discounts and rewards – is permitted as long as the customer knows it is voluntary, i.e., not required to complete the purchase. Further, to avoid errors and discourage claims, clearly delineate subscriptions from transactions by separating sign-ups from purchases.
  • Check that etailer (i.e., e-commerce store) purchase transaction flows do not require personal information beyond what is necessary to complete the transaction, and clearly disclose to customers what is and is not required.
  • Beware of personal information collection by cookies, pixels, and similar technologies active on purchase transaction web pages.
  • Implement written policies and procedures – whether online or off – to document which personal information collected is mandatory and which is voluntary.

Continue Reading Collecting Personal Information during Checkout: Balancing Consumer Rights with Business Marketing

Please join us at these upcoming events to hear the latest trends, updates and insights in data privacy. For more information, contact the presenters or your relationship attorney.

  • Shanghai: On September 5, 2024, Scott Warren and the Squire Patton Boggs Shanghai office are hosting a “Tea at Three PM” cyberbreach training for

On August 22, 2024, the Singapore Computer Society, with support from the Infocomm Media Development Authority (IMDA), released the AI Ethics & Governance Body of Knowledge Version 2.0 (BoK 2.0). This latest edition represents a significant advancement in the ongoing effort to guide the ethical and responsible implementation of artificial intelligence (AI) technologies.

Background

BoK 2.0 was developed in response to the rapid advancements in AI technologies and their increasing integration into everyday applications and solutions. The updated framework addresses practical issues related to human safety, fairness, privacy, data governance and general ethical values in AI deployment.

Continue Reading Singapore Strengthens Her Commitment to Responsible AI with the Release of AI Ethics and Governance Body of Knowledge Version 2.0

In case you missed it, below are recent posts from Privacy World covering the latest developments on data privacy, security and innovation. Please reach out to the authors if you are interested in additional information.

Are Data Practice Risk Assessments at Risk in the US?

FCC Moves Forward with Proposed Rules for Use of Artificial Intelligence with Robocalls and Political Advertisements

We have previously reported on the requirements of the California Age Appropriate Design Code Act (CAADCA or Act), including mandatory risk assessments, and that the Act was enjoined by a federal District Court as a likely violation of publishers’ free speech rights under the First Amendment of the U.S. Constitution. The 9th Circuit has upheld that decision, but only as to Data Protection Impact Assessments (DPIAs), and has gone further to find that such assessments are subject to strict scrutiny and are facially unconstitutional. See NetChoice, LLC v. Rob Bonta, Attorney General of the State of California (9th Cir., August 16, 2024) – a copy of the opinion is here. The Court, however, overruled the District Court as to the injunction of other provisions of the CAADCA, such as restrictions on the collection, use, and sale of minors’ personal data and how data practices are communicated. Today, we will focus on what the decision means for DPIA requirements under consumer protection laws, including the 18 (out of 20) state consumer privacy laws that mandate DPIAs for certain “high-risk” processing activities.

Continue Reading Are Data Practice Risk Assessments at Risk in the US?

The Federal Communications Commission (“FCC” or “Commission”) continues its regulatory focus on Artificial Intelligence (“AI”) in the communications world, with the issuance of new proposed regulations designed to protect consumers from harmful AI-generated communications, targeting robocalls, automated texting, and political advertising.

The FCC has formally moved forward with a combined Notice of Proposed Rulemaking and Notice of Inquiry (“NPRM/NOI”) “to protect consumers from the abuse of AI in robocalls alongside actions that clear the path for positive uses of AI, including its use to improve access to the telephone network for people with disabilities.”

The NPRM/NOI, released on August 8, 2024, seeks public comment on many of the major provisions that Squire Patton Boggs previously reported on in the draft proposal, albeit with some changes. These include, for example:

Continue Reading FCC Moves Forward with Proposed Rules for Use of Artificial Intelligence with Robocalls and Political Advertisements

In case you missed it, below are recent posts from Privacy World covering the latest developments on data privacy, security and innovation. Please reach out to the authors if you are interested in additional information.

Singapore Unveils Guide on Synthetic Data Generation: A Strategic Resource for AI Decision-Making

Singapore Consults on Cybersecurity Guidelines for AI

Singapore has published and is inviting public feedback on two proposed sets of guidelines for securing AI systems.

The first is the Guidelines on Securing AI Systems, intended to help system owners secure AI throughout its life cycle. These guidelines are meant to provide principles to raise awareness of adversarial attacks and other threats that could compromise AI system security, and guide the implementation of security controls to protect AI against potential risks.

The second is the Companion Guide for Securing AI Systems, intended as a community-driven resource to support system owners; the Cybersecurity Agency of Singapore (Agency) will work closely with AI and cybersecurity practitioners to develop it.

Noting that AI “offers significant benefits for the economy and society”, including driving “efficiency and innovation across various sectors, including commerce, healthcare, transportation, and cybersecurity”, the Agency also stressed that AI systems must “behave as intended”, and that the outcomes must be “safe, secure, and responsible”. Such objectives are put at risk when AI systems are vulnerable to adversarial attacks and other cybersecurity risks.

Continue Reading Singapore Consults on Cybersecurity Guidelines for AI Systems

In case you missed it, below are recent posts from Privacy World covering the latest developments on data privacy, security and innovation. Please reach out to the authors if you are interested in additional information.

Continue Reading Privacy World Week in Review

Context

Businesses are under pressure from a range of internal and external stakeholders to create and maintain genuinely diverse and inclusive workplaces. Consequently, more and more businesses want to collect and track Diversity and Inclusion (“D&I”) data about their staff. This may include information about gender, sexual orientation, race, ethnic origin, religion, socio-economic background, health, and disability. This information may help organizations better understand the current profile of their workforce, assess the impact of their equal opportunities policies, determine what steps they may need to take to address any barriers to change, and measure progress against any objectives/targets set.

However, in some countries, the collection and tracking of such data is regulated by various laws, and it may be socially and culturally inappropriate to ask certain questions in this area.

In France, various regulations and case law, including the EU General Data Protection Regulation (“GDPR”), restrict the collection of such data. There is particular sensitivity in relation to origin/race/ethnicity data (as notably stated in a decision of the French Constitutional Council of 15 November 2007 censuring the collection of such data in this context).

Draft recommendation

To guide organizations wishing to implement diversity measurement surveys, the CNIL is submitting a recommendation for public consultation until September 13, 2024 (the “Draft Recommendation”).

It notably includes GDPR-specific recommendations that were not in the guide “Measuring to progress towards equal opportunities” that the CNIL had published with the Defender of Rights twelve years ago (the “Guide”).

The recommendation addresses the following issues in relation to diversity surveys.

Continue Reading Measuring Diversity at Work in France: the CNIL Launches a Public Consultation on a Draft Recommendation