Background

On March 1, 2024, Singapore’s Personal Data Protection Commission (Commission) issued its advisory guidelines (Guidelines) on the use of personal data in AI systems that provide recommendations, predictions and decisions.

The Guidelines follow a public consultation held in July and August 2023, in which a number of organizations submitted comments[1] on the proposed draft Guidelines. After considering the feedback received, the Commission published the Guidelines in final form on March 1, 2024.

Do the Guidelines Apply to All AI Systems?

These Guidelines apply to discriminative AI systems that are used to make decisions, predictions and/or recommendations. They do not apply to generative AI, and the Commission is likely assessing the need for a further set of guidelines that address data processing in generative AI development and deployment specifically.

Do the Guidelines Impose Mandatory Requirements?

The Guidelines are advisory and are not intended to impose any legally binding obligations on the Commission or on any other organization or individual.

In the same vein, the Guidelines are meant to be read together with the Commission’s other advisory guidelines, for instance those on key concepts in Singapore’s comprehensive data protection legislation, the Personal Data Protection Act (PDPA).[2]

The Guidelines do, however, provide regulatory guidance and certainty by clarifying how the PDPA applies when organizations use personal data to develop and train AI systems. They also provide consumers with assurance by setting out baseline guidance and best practices for organizations to be transparent about their use of personal data when deploying AI.

How Are the Guidelines Structured?

The Guidelines are organized based on the typical stages of implementing AI systems, as follows.

  1. Development, testing and monitoring
  2. Deployment
  3. Procurement

What Do the Guidelines Recommend?

1. Development, testing and monitoring

Organizations may use personal data in developing AI models, whether in-house or by engaging third-party service providers. Consent should be sought from the individuals whose personal data is used. Alternatively, the following exceptions to consent may be considered:

(a) Business Improvement

The business improvement exception is relevant when:

  • An organization has developed a new product or is enhancing an existing one
  • An AI system is intended to improve operational efficiency by supporting decision-making
  • An AI system is intended to offer more or new personalized products or services by offering recommendations to users

However, this exception can only be relied upon for data sharing among related companies within a group, or inter-departmental sharing within a single company.

Additionally, the following pre-requisites must be met:

  • The purposes cannot be reasonably achieved without the personal data being in an individually identifiable form
  • The use is what a reasonable person would consider appropriate in the circumstances

Some examples cited in the Guidelines include:

  • Recommendation engines in social media services that offer users relevant content based on their browsing history
  • Job assignment systems that allocate jobs to platform workers
  • Internal HR systems to recommend candidates for jobs
  • Use of AI systems to provide new product features and improve competitiveness
  • AI system testing for bias, i.e., to “de-bias” datasets for model training

(b) Research

The research exception is relevant when an organization conducts commercial research and development to advance science and engineering generally, without the need for any immediate application to products, services, business operations or the market.

In contrast to the business improvement exception, the research exception does allow for data sharing between unrelated companies for joint commercial research to develop new AI systems.

However, the following conditions must be met:

  • The purposes cannot be reasonably achieved unless the data is in an individually identifiable form
  • There is a clear public benefit
  • The results will not be used to make any decision that affects the individual
  • Published results must not identify the individual

(c) Anonymization

Organizations are encouraged to anonymize their datasets as far as possible. While anonymization involves trade-offs, for example in model accuracy and in the repeatability or reproducibility of results, an organization should document its reasons for choosing to use personal data over anonymized data, and adopt appropriate governance measures. Other considerations include the following (a simple illustrative sketch follows the list):

  • Whether the anonymization method chosen is reversible
  • The extent of disclosure of the dataset and any intended recipients
  • Whether a motivated individual can find means to re-identify anonymized datasets
  • Whether sufficient controls are in place to prevent re-identification
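
As a minimal, purely illustrative sketch of the reversibility and re-identification considerations above (it is not drawn from the Guidelines, and the field names, salt handling and age banding are all hypothetical), a salted hash remains linkable by anyone holding the salt and the original identifiers, whereas generalizing a quasi-identifier into a band is irreversible:

```python
import hashlib
import secrets

SALT = secrets.token_hex(16)  # must be kept secret and access-controlled

def pseudonymize_id(user_id: str) -> str:
    # Replaces a direct identifier with a salted hash; anyone holding the salt and the
    # original IDs can recreate the mapping, so this is reversible in practice.
    return hashlib.sha256((SALT + user_id).encode()).hexdigest()[:16]

def generalize_age(age: int) -> str:
    # Irreversibly coarsens an exact age (a quasi-identifier) into a 10-year band.
    lower = (age // 10) * 10
    return f"{lower}-{lower + 9}"

record = {"user_id": "u-1029", "age": 37, "movies_watched": 412}
anonymized = {
    "user_ref": pseudonymize_id(record["user_id"]),
    "age_band": generalize_age(record["age"]),
    "movies_watched": record["movies_watched"],
}
print(anonymized)
```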

2. Deployment

Organizations are reminded to comply with the consent, notification and accountability obligations in the PDPA.

In particular, organizations are encouraged to provide the following information to users (an illustrative example follows the list):

  • The function of their product requiring the processing of personal data, e.g., recommending movies
  • A general description of the types of data that will be processed, e.g., past movies watched
  • How the processing of personal data is relevant to the product feature
  • The specific data features that are likely to influence the product feature, e.g., the number of times a movie is viewed
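
Purely by way of illustration (this is not a format prescribed by the Guidelines, and the wording and field names are hypothetical), the disclosure elements above could be captured in a structured notice, for example for a movie recommendation feature:

```python
# Hypothetical structure for the recommended disclosure elements; not prescribed by the Guidelines.
movie_recommendation_notice = {
    "product_function": "Recommends movies you may enjoy",
    "data_types_processed": ["past movies watched", "ratings given"],
    "relevance_to_feature": "Your viewing history is used to rank titles similar to those you watched",
    "influential_data_features": ["number of times a movie is viewed", "genres watched most often"],
}

# Render the notice as simple user-facing text.
for field_name, value in movie_recommendation_notice.items():
    print(f"{field_name}: {value}")
```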

Organizations should also consider capturing the following in policies:

  • Measures taken during model development and testing to achieve fairness and reasonableness in recommendations, predictions and decisions for the benefit of consumers (these may include bias assessments and ensuring training data quality)
  • Technical and other safeguards for protecting personal data, such as anonymization of datasets
  • For outcomes with a greater impact on individuals, how human agency and oversight have been implemented

Organizations may also consider technical tools such as Singapore’s AI Verify[3] to validate the performance of their AI systems, together with Singapore’s Model AI Governance Framework[4] for managing stakeholder interaction.

3. Procurement

Finally, systems integrators and other service providers engaged to develop and deploy bespoke AI systems are generally considered to be data intermediaries for the purposes of the PDPA.

Good practices that such intermediaries may adopt include the following (a simple sketch of a provenance record follows the list):

  • Data mapping and labeling of training datasets
  • Maintaining a provenance record of data sources and tracking the data as it is transformed during the data preparation stage
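
As a hypothetical illustration only (the Guidelines do not prescribe any particular format, and all names below are invented), a provenance record for a training dataset might capture the data source, whether it contains personal data, and each transformation applied during data preparation:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """Illustrative provenance record for one training dataset (hypothetical structure)."""
    dataset_name: str
    source: str                   # where the data came from, e.g., a vendor or internal system
    contains_personal_data: bool  # supports data mapping and labeling of training datasets
    transformations: list = field(default_factory=list)

    def log_transformation(self, step: str) -> None:
        # Append a timestamped entry for each data preparation step applied to the dataset.
        self.transformations.append(
            {"step": step, "at": datetime.now(timezone.utc).isoformat()}
        )

record = ProvenanceRecord(
    dataset_name="viewing_history_2023",
    source="streaming platform export",
    contains_personal_data=True,
)
record.log_transformation("removed direct identifiers")
record.log_transformation("generalized exact ages into 10-year bands")
print(record)
```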

Ultimately, service providers should support their customer organizations in complying with the latter’s consent, notification, accountability and other applicable obligations in the PDPA insofar as they apply to the development and deployment of AI. With that said, organizations bear the primary responsibility for ensuring that any AI system they have chosen to use can meet their obligations under the PDPA.

How Can We Help?

We can advise on any regulatory implications applicable to the various stages of AI development, deployment and procurement, as well as negotiate any relevant agreements with the respective counterparties to a transaction. As governance and compliance also come “hand in glove” with AI, should your organization require support with any of this, feel free to reach out to your usual contact at the firm.

Disclaimer: While every effort has been made to ensure that the information contained in this article is accurate, neither its authors nor Squire Patton Boggs accepts responsibility for any errors or omissions. The content of this article is for general information only, and is not intended to constitute or be relied upon as legal advice.


[1] https://www.pdpc.gov.sg/guidelines-and-consultation/2023/07/public-consultation-for-the-proposed-advisory-guidelines-on-use-of-personal-data-in-ai-recommendation-and-decision-systems/responses-received-on-31-august-2023

[2] https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/advisory-guidelines/ag-on-key-concepts/advisory-guidelines-on-key-concepts-in-the-pdpa-17-may-2022.pdf

[3] https://aiverifyfoundation.sg/what-is-ai-verify/

[4] http://go.gov.sg/ai-gov-mf-2