The California Privacy Protection Agency (“CPPA”) has published revised draft regulations detailing what it proposes businesses will be required to do under the California Consumer Privacy Act (“CCPA”) to assess, mitigate and document risk before engaging in specified types of processing of California residents’ personal information. On March 8, the CPPA is set to vote on advancing the draft regulations to the public comment stage of rulemaking.

Continue Reading More Detail on U.S. Data Processing Assessment Requirements

In 2023, we analyzed the laws in Arkansas, Texas and Utah that require age verification and parental consent before allowing minors to create accounts on social media and other interactive platforms. A similar law – the Secure Online Child Interaction And Age Limitation (SOCIAL) Act – was passed in Louisiana, with an in-force date of July 1, 2024. Ohio legislators also enacted the Parental Notification by Social Media Operators Act (Ohio Act). All of these laws have requirements that are similar to the proposed federal law titled the Kids Online Safety Act (KOSA), which we explain in a companion post.

Continue Reading Protecting Kids Online – Part II

Protection for minors online continues to top the list of U.S. regulatory and legislative priorities in 2024. So far in 2024, legislators in California introduced several bills focused on minors; Congress held hearings and advanced federal legislation protecting minors online; and constitutional challenges to 2023 state laws focused on minors’ social networking accounts advanced in the courts. Congress and the Federal Trade Commission (FTC) are looking to update the Children’s Online Privacy Protection Act and corresponding Rule, as detailed in another post. However, the proposals explained in this post extend far beyond online privacy concerns, and we believe more focus on minors’ online safety is on the way.

Continue Reading Protecting Kids Online: Changes in California, Connecticut and Congress – Part I

Online privacy and safety of children and teens are hot legislative topics this year. In a companion post we provide an update of federal and state legislative efforts to fundamentally change how online content and advertising are delivered to children and teens. We have previously discussed legislation in California and Connecticut to require assessments of online privacy impacts on minors. In this post we focus on proposed regulatory and legislative changes to the 1998 Children’s Online Privacy Protection Act (COPPA) (effective in 2000) and its corresponding regulations (COPPA Rule), which were last updated in 2013.

Continue Reading Federal Children’s Privacy Requirements to Be Updated and Expanded

The Digital Services Act (DSA) entered into full force on 17 February 2024. This is a monumental EU regulation, containing 93 articles and 156 recitals, which is intended to impose:

  • A framework for the conditional exemption from liability of providers of online intermediary services (i.e. companies that are conduits for, cache or host third-party online content)
  • Rules on specific due diligence obligations tailored to certain specific categories of providers of intermediary services
  • Rules on implementation and enforcement, including as regards the cooperation between the competent authorities

It is applicable across the whole EU and EEA, and has extraterritorial reach.

Part of the DSA has already been in force since 2023 for some designated providers. However, since 17 February 2024, the remainder of the DSA now applies to all online intermediary services providers that offer their services in the EU/EEA, regardless of whether or not such providers have an establishment in the EU/EEA. A more detailed overview of the DSA is available on our Digital Markets Regulation page.

VLOPs and VLOSEs

The part of the DSA that was already in operation prior to 17 February applied only to designated “very large online platforms” (VLOPs) and “very large online search engines” (VLOSEs). The current VLOP and VLOSE list includes platforms and search engines such as Amazon Store, App Store, LinkedIn, Facebook, Instagram, Pinterest, Snapchat, X, and Google Search, Google Play, Google Maps, Google Shopping and YouTube. Alibaba, TikTok, and Booking.com are also among the listed platforms and search engines.

Intermediary Services

The remainder of the DSA, which entered into force on 17 February, contains broader rules that are applicable to all “online intermediary services providers”, defined as providers of “mere conduit”, “caching” or “hosting” services, whether or not they are established in Europe.

While the DSA may, at first blush, seem to cover only Big Tech, many ordinary and smaller online services, including apps and websites that facilitate the sharing of user-generated content, may come under the definition of intermediary services. The definition of “intermediary services” spans a wide range of economic activities that take place online and that develop continually to provide for transmission of information that is swift, safe and secure, and to ensure convenience of all participants of the online ecosystem. For example:

  • “Mere conduit” intermediary services include generic categories of services such as internet exchange points; wireless access points; virtual private networks; domain name system (DNS) services and resolvers; top-level domain name registries and registrars; certificate authorities that issue digital certificates; and voice over IP and other interpersonal communication services.
  • Generic examples of “caching” intermediary services include the sole provision of content delivery networks, reverse proxies or content adaptation proxies. Such services are crucial to ensuring the smooth and efficient transmission of information delivered on the internet.
  • Examples of “hosting services” include categories of services such as cloud computing, web hosting, paid referencing services or services enabling sharing information and content online, including file storage and sharing.

Intermediary services may be provided in isolation, as a part of another type of intermediary service, or simultaneously with other intermediary services. Whether a specific service constitutes a “mere conduit”, “caching” or “hosting” service depends solely on its technical functionalities, which might evolve in time, and should be assessed on a case-by-case basis.

Conditional Exemption

The DSA exempts these intermediary services providers from content liability subject to the following conditions:

  • For mere conduit services, the exemption conditions are that the provider “(a) does not initiate the transmission; (b) does not select the receiver of the transmission; and (c) does not select or modify the information contained in the transmission”.
  • For caching services, the exemption conditions are that the provider “(a) does not modify the information; (b) complies with conditions on access to the information; (c) complies with rules regarding the updating of the information, specified in a manner widely recognised and used by industry; (d) does not interfere with the lawful use of technology, widely recognised and used by industry, to obtain data on the use of the information; and (e) acts expeditiously to remove or to disable access to the information it has stored upon obtaining actual knowledge of the fact that the information at the initial source of the transmission has been removed from the network, or access to it has been disabled, or that a judicial or an administrative authority has ordered such removal or disablement”.
  • For hosting services, the exemption conditions are that the provider “(a) does not have actual knowledge of illegal activity or illegal content and, as regards claims for damages, is not aware of facts or circumstances from which the illegal activity or illegal content is apparent; or (b) upon obtaining such knowledge or awareness, acts expeditiously to remove or to disable access to the illegal content”.

This is similar to, but materially different from, the immunity offered to intermediaries under Section 230 of the US Communications Decency Act, and companies familiar with that regime should note the differences. One of the most crucial differences is that hosting services must act upon knowledge in order to qualify for the exemption, which is not required under US law.

Due Diligence Obligations

The DSA imposes specific due diligence obligations (including positive information obligations vis-à-vis consumers and business partners) tailored to each specific category of providers of intermediary services. Concerns about insufficient transparency, the nontraceability of online traders, automated machine-based decision-making, content tailoring, and the excessive power of large intermediaries, permeate the DSA and have significantly shaped these obligations. For an overview of these obligations, check our DSA Compliance Tracker.

New Enforcers and Compliance Roles

The DSA also introduces new enforcers and new compliance roles. For example, each EU member state will designate a digital services coordinator and each intermediary service provider without an EU establishment will need to appoint a representative in an EU member state who will need to be able to liaise with the designated digital services coordinators. One of the sticky practical implementation aspects for this requirement, however, is that the appointed representative will have direct liability for DSA noncompliance, without prejudice to the liability and legal actions that could be initiated against the provider of intermediary services.

International Ramifications

The DSA’s “rights-driven” model of internet governance seeks to chart something of a middle way between the US “market-driven” model and China’s “state-driven” model. Some commentators have described the EU model as more proactive and risk averse than the US model but also more mindful of privacy and individual rights than China’s model. As an analytical framework, this categorisation is compelling, though it has worrisome implications arising from the dangers of a splinternet.

Because it applies to all intermediary services providers that offer their services to recipients located in the EU/EEA, whether they are established inside or outside the EU/EEA, the DSA will affect US and UK intermediaries servicing the EU/EEA market. As has been the case with the EU General Data Protection Regulation (GDPR), some spillover from the EU legislation will likely be felt in the US and UK. As the GDPR has shown, such spillover can result in US and UK intermediaries being targeted by EU enforcement actions, and in US and UK intermediaries adjusting their operations pursuant to the EU legislation, including inside the US and UK. Spillover may also result in US legislators looking to the EU legislation for ideas about their own legislative actions; the California Consumer Privacy Act is a prime example of GDPR spillover. The UK has already passed its Online Safety Act on content liability for online intermediary services, and its regulator, Ofcom, is said to be cooperating with the EU Commission towards a coherent application of the Online Safety Act and the DSA.

Conclusion

The novel framework introduced by the DSA and its international ramifications present both opportunities and risks for online intermediary services providers active in Europe. To check what you need to do to ensure compliance with the DSA rules that are applicable to all online intermediary services, check our DSA Compliance Tracker or contact the authors.

Disclaimer: While every effort has been made to ensure that the information contained in this article is accurate, neither its authors nor Squire Patton Boggs accepts responsibility for any errors or omissions. The content of this article is for general information only, and is not intended to constitute or be relied upon as legal advice.

In case you missed it, below are recent posts from Privacy World covering the latest developments on data privacy, security and innovation. Please reach out to the authors if you are interested in additional information.

Deep Fake of CFO on Videocall Used to Defraud Company of US$25M | Privacy World

Address Cyber-risks From Quantum Computing | Privacy World

FCC Clarifies and Codifies TCPA Consent Revocation Rules | Privacy World

Asia Data Privacy/Cybersecurity and Digital Assets Partners to speak at the Global Legal ConfEx in Singapore, 20 February 2024 | Privacy World

Potential CCPA Fines “Significant”, California AG’s Office “Plotting” and Other Takeaways From Privacy Regulators during Privacy Summit in Los Angeles | Privacy World

FCC Rules Voice-Cloned Robocalls Are Covered by the TCPA as Artificial/Pre-Recorded | Privacy World

Ten Things About Artificial Intelligence (AI) for GCs in 2024 | Privacy World

CCPA Regs Effective Immediately, No One-Year Delay for Future Regs: Court of Appeal Sides with California Privacy Protection Agency in Regulations Delay Case | Privacy World

Sensitive Data Processing is in the FTC’s Crosshairs | Privacy World

ASEAN and EU Finalise Implementation Guide for Cross-border Data Transfers | Privacy World

The Product Security and Telecommunications Infrastructure (PSTI) Act FAQ | Privacy World

Connecticut Attorney General Report: CTDPA Enforcement Insights & Takeaways | Privacy World

Cyber executive fraud scams have been rampant for years. These scams trick an employee into transferring large sums of money into the fraudster’s bank account. In the past, these often involved using a high-level executive’s hacked email account (or an email appearing to be from them) to ask the employee to quickly and secretly transfer money for a ‘special project’ that no one else should know about. They play on an employee’s desire to please the requesting executive and their unique position to quickly do so. The average value of these scams used to be around US$100,000, but they have been steadily growing more sophisticated and costly, often involving the hackers conducting a detailed inspection of the executive’s email to identify information that makes the request sound more believable (such as current projects, when the executive is likely to be unavailable for a call, and even how to craft the email to sound more like the executive).

Recently, the risk grew far greater when it was reported that a deepfake videocall showing AI-generated likenesses of a multinational company’s CFO and other co-workers was used to convince a Hong Kong branch employee to make 15 transfers totaling HK$200M (approximately US$25M) into 5 local Hong Kong bank accounts. Reports indicate that the initial email request seemed suspicious to the employee, but she was then invited to a videochat, purportedly over a common personal communications app, where the deepfake of the CFO, and apparently of other employees, was used to instruct her to make the transfers. The deepfakes were apparently AI-generated videos created from past videochat recordings of the individuals. From the reports, the deepfakes were more like recordings: they would not have been able to interact and respond to questions, and may have had somewhat limited head movement. It would appear that at least one of the hackers was a live participant orchestrating things, so that after allowing the Hong Kong employee to introduce herself, the deepfake images instructed her to make the transfers. It was only after the 15 transfers were made that the employee contacted the company’s UK headquarters, only to be informed there was no such instruction.

Gen-AI also seems to be involved in other incidents where deepfake images of individuals are used to contact their loved ones to request ‘urgent funds’, including by claiming they have been kidnapped or are otherwise in dire need. Further, we are increasingly seeing deepfake images of celebrities and even public officials. Moreover, it appears hackers are using AI to sift large digital datasets to identify more convincing approaches for their scams, as well as weaknesses in software coding or network security.

So, what can a company do to avoid such a loss?  Here are some things that can help:

  • Awareness and Training: It is essential to make employees, especially those with the ability to transfer money, aware that such sophisticated fraud exists. Just as we train employees to be suspicious of phishing emails that do not look right (misspellings, an incorrect email address, a message sent from outside the organization, etc.), financial staff should strongly suspect any urgent, secret request for a transfer, especially a large one. In this case, the platform used for the video was likely not the usual internal company communications platform, but instead a personal communications platform. In addition, the employee could have asked questions on the call that would have fallen outside the deepfake’s recorded content.
  • Separately Verify the Instructions: Without using links or contact details provided in the email, the employee should contact the executive and/or others on the call directly – if not in person at the office, then perhaps by phoning the number listed in the corporate directory.
  • Implement Protocols to Prevent: Companies should implement procedures that prohibit large financial transactions from occurring without approval from multiple executives. Companies may also issue signed encryption keys to appropriate employees to be used before such transactions are approved. Just as it is now common to use two-factor authentication to allow an employee computer access, why shouldn’t we also do so to allow a large financial transaction to occur? Also, ensure robust passwords are being used. A lot of hacks currently occur through the use of lists of leaked passwords that individuals have reused across multiple accounts.
  • Broaden the Scope of Concern: This fraud involved financial assets. However, it would not be difficult for the fraudsters to instead seek key business secrets and/or key customer data. For example, the ‘executive’ could just as easily indicate they are away and unable to access the network, and ask the employee to send an attachment listing all key clients, the latest business plan, or similar. Businesses should take steps to identify such key data and secure it from being provided through such fraud, such as by limiting access, restricting export and encrypting the contents.
  • Act Quickly: Especially when financial assets are involved, as soon as possible after determining you have been scammed, reach out to your bank and the bank the funds were transferred to, asking them to halt the transaction and freeze the funds while you seek a judicial order to have them returned. On several occasions, our firm has worked with defrauded parties in Hong Kong and elsewhere to recover funds left in the bank accounts where the money was initially transferred. In one case, although we were only brought into the matter a week after the transfer of US$1M, we were able to recover nearly half of the funds. Most likely this is because the onward transfer of large sums of money, especially abroad, often raises red flags in the banking system, forcing the hackers to move the money more slowly.
  • Don’t Quit Too Soon: When improper access to email and/or the network has been identified, make sure you conduct a detailed review, likely utilizing a cyber investigations expert, to determine where the hacker went in the system and whether they built in backdoors, stole other information, or otherwise affected the system. Simply changing the affected email password is very likely insufficient to protect your system. Further, in many countries, data privacy and other laws will require a detailed assessment of what happened in order to determine whether any data privacy or other notifications are necessary.
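The multi-approval protocol suggested above can be sketched in a few lines of code. This is a minimal illustration only – the threshold, required approver count and function names are hypothetical, not a real payments control:

```python
# Minimal sketch of a dual-approval gate for large transfers.
# Threshold, approver count and names are illustrative assumptions.

LARGE_TRANSFER_THRESHOLD = 50_000  # e.g. USD; set per company policy
REQUIRED_APPROVERS = 2

def transfer_allowed(amount, approvers):
    """Allow small transfers outright; large ones need multiple distinct approvers."""
    if amount < LARGE_TRANSFER_THRESHOLD:
        return True
    # Deduplicate so one person cannot approve twice under the same identity.
    return len(set(approvers)) >= REQUIRED_APPROVERS

# One voice on a videocall is not enough for a large transfer:
assert transfer_allowed(10_000, ["cfo"])
assert not transfer_allowed(25_000_000, ["cfo"])
assert transfer_allowed(25_000_000, ["cfo", "treasurer"])
```

The point of the sketch is the design choice, not the code: a single instruction, however convincing its source appears, should never be sufficient to move a large sum.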

In the end, although the increasing use of AI to enhance cyberfraud is clearly troubling, taking the steps above can help prevent these scams from impacting your company. Should you have any questions, please feel free to reach out to your Squire Patton Boggs contact or one of our global AI contacts listed here.


The Monetary Authority of Singapore (MAS) has issued an advisory[1] to financial institutions on quantum computing and the cybersecurity risks that it could pose, including potentially breaking commonly used encryption and digital signature algorithms.

Similar concerns have been raised elsewhere. Some related and ongoing developments include:

  • National Institute of Standards and Technology’s (NIST) initiation of a global standardization process for post-quantum cryptography[2]. Recognizing that large-scale quantum computing could disrupt public-key cryptosystems that are currently in use, and “seriously compromise the confidentiality and integrity of digital communications on the internet and elsewhere,” NIST’s project is aimed at developing cryptographic systems that are secure against both quantum and classical computers, and can interoperate with existing communications protocols and networks.
  • The World Economic Forum is conducting research into quantum key distribution technology to establish secure communication channels for distributing encryption keys.[3]

In its advisory, MAS stressed that financial institutions need to be crypto-agile to efficiently migrate away from vulnerable cryptographic algorithms to post-quantum cryptography without compromising their IT systems. MAS’s recommended measures include:

  • Maintaining an inventory of cryptographic solutions used, and carrying out a risk assessment to classify these based on sensitivity, criticality and risks.
  • Having proper governance in place (including with third-party vendors) to understand, assess and mitigate the potential threats of quantum technology, and supporting quantum security solutions.
  • Planning for contingencies where risks materialize ahead of predicted timelines.
  • Engaging with industry and research institutes to share know-how.
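The first of these measures – an inventory of cryptographic solutions classified by risk – might be sketched as follows. This is an illustrative outline only; the algorithm lists, field names and categories are our assumptions, not MAS guidance:

```python
# Illustrative sketch of a cryptographic inventory with a simple
# quantum-risk classification. Algorithm names and categories are examples.

QUANTUM_VULNERABLE = {"RSA-2048", "ECDSA-P256", "DH-2048"}  # public-key schemes at risk from large-scale quantum computers
POST_QUANTUM = {"ML-KEM", "ML-DSA", "SPHINCS+"}             # NIST post-quantum selections

def classify(entry):
    """Flag systems still relying on quantum-vulnerable algorithms."""
    algo = entry["algorithm"]
    if algo in QUANTUM_VULNERABLE:
        return "migrate: quantum-vulnerable"
    if algo in POST_QUANTUM:
        return "ok: post-quantum"
    return "review: unclassified"

inventory = [
    {"system": "customer-portal-tls", "algorithm": "RSA-2048", "criticality": "high"},
    {"system": "internal-signing", "algorithm": "ML-DSA", "criticality": "medium"},
]
report = {e["system"]: classify(e) for e in inventory}
```

Even a simple classification like this gives an institution the crypto-agility MAS describes: once vulnerable systems are enumerated, migration can be prioritised by criticality.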

Should you have any concerns, feel free to reach out to your usual contact at the firm.



[1] Advisory on Addressing the Cybersecurity Risks Associated With Quantum, MAS

[2] Post-quantum Cryptography, NIST

[3] Transitioning to a Quantum-secure Economy, World Economic Forum

At its February 19, 2024 Open Meeting, the Federal Communications Commission (“FCC”) adopted an array of changes and codifications to its Telephone Consumer Protection Act (“TCPA”) rules to “strengthen consumers’ ability to revoke consent” to receive robocalls and texts after deciding that they no longer want them. The agency’s Report and Order and Further Notice of Proposed Rulemaking (Order) is designed to make consent revocation “simple and easy” and adopts requirements “for callers and texters to implement revocation requests in a timely manner.”

Continue Reading FCC Clarifies and Codifies TCPA Consent Revocation Rules

Scott Warren (Japan/China) and Charmian Aw (Singapore) will be speaking in Singapore at the Global Legal ConfEx full-day conference. Scott will be moderating two panels, one entitled Breach Notification: A Deep Dive Into What, When and Who to Notify, and the other Strategies for Managing Risks and Ensuring Compliance with Anti-Bribery and Anti-Corruption Measures. Charmian will be a featured panelist on the topic Managing Third-Party Risk Throughout the Lifecycle. For more information on the in-person event, please see Global Legal ConfEx – Singapore 20 Feb 2024 | Events 4 Sure. Scott and Charmian have a few complimentary seats available for those interested.