Andrea Otaola

The European Commission has recently released its Guidelines on the Definition of an Artificial Intelligence System under the AI Act (Regulation (EU) 2024/1689). The guidelines were adopted in parallel with the Commission's guidelines on prohibited AI practices (which also entered into application on February 2), with the goal of providing businesses, developers and regulators with further clarification on the AI Act's provisions.

Key Takeaways for Businesses and AI Developers

Not all AI systems are subject to strict regulatory scrutiny. Companies developing or using AI-driven solutions should assess their systems against the AI Act's definition. With these guidelines (and those on prohibited practices), the European Commission is delivering much-needed clarification on the core element of the act: what is an AI system?

The AI Act defines an AI system as a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment. The system, for explicit or implicit objectives, infers from input data how to generate outputs – such as predictions, content, recommendations or decisions – that can influence physical or virtual environments.

One of the most significant clarifications in the guidelines is the distinction between AI systems and “traditional software.”

  • AI systems go beyond rule-based automation and require inferencing capabilities.
  • Traditional statistical models and basic data processing software, such as spreadsheets, database systems and manually programmed scripts, do not qualify as AI systems.
  • Simple prediction models that use basic statistical techniques (e.g., forecasting based on historical averages) are also excluded from the definition.

This distinction ensures that compliance obligations under the AI Act apply only to AI-driven technologies, leaving other software solutions outside its scope.

Continue Reading: The ReAIlity of What an AI System Is – Unpacking the Commission's New Guidelines

On October 9, 2024, the European Data Protection Board (EDPB) unveiled its much-anticipated guidelines on using legitimate interest (Article 6(1)(f) of the GDPR) as a lawful basis for processing personal data. The guidelines set out clear criteria for data controllers and will therefore be most welcome.

For years, legitimate interest has been among the go-to options for organizations, on the idea that it offers more flexibility (as long as you comply with the inherent requirements of its use). High-profile cases, like the Court of Justice of the European Union's (CJEU) decision in Royal Dutch Tennis Association (KNLTB), acknowledged that commercial interests may qualify as legitimate, but also crystallized the tension over its use among supervisory authorities and privacy advocates.

Continue Reading: Balancing the Scales: How to Use "Legitimate Interest" to Process Personal Data "Fairly"

1. Introduction

The Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law was concluded by the Council of Europe (CoE) Committee on Artificial Intelligence on March 24, 2024, finally landing a decisive blow with a provisional agreement on the text of a treaty on artificial intelligence and human rights (the Treaty).

This Treaty is the first of its kind and aims to establish basic rules governing AI that safeguard human rights, democratic values and the rule of law among nations. As a CoE treaty, it is open for ratification by countries worldwide. It is worth noting that in this epic bout, the CoE members stand in one corner of the global arena, while in the opposite corner, representing various nations such as the US, the UK, Canada and Japan, are the observers, eyeing the proceedings, ready to pounce with their influence. Although lacking voting rights, their mere presence sends shockwaves through the negotiating ring, influencing the very essence of the Treaty.

Continue Reading: Heavyweight Fight, Did the US or EU KO the AI Treaty?

On February 13, 2024, the European Data Protection Board (EDPB) released its opinion on the notion of the main establishment of a controller in the EU under Article 4(16)(a) of the GDPR and the criteria for applying the "one-stop shop" mechanism, in particular regarding the notion of a controller's "place of central administration" (PoCA) in

On January 15, 2024, the European Commission (EC) published its report on 11 adequacy decisions adopted under the Data Protection Directive. This is the first review of its kind in the GDPR era for adequacy decisions that had been living their own existence without many troubles (leaving the US one aside). A periodic checkup is foreseen in the most recent adequacy decisions (Japan's last review was published in April 2023), but not much had been done for the older ones; this is now remedied.

Continue Reading: Adequate One Day Keeps Personal Data Transfer Problems (Forever) Away? Let's See What the EU Doctor Just Said

On December 8, 2023, after some intense rollercoaster rides, the European Union (EU) institutions reached a political agreement on the EU Artificial Intelligence Act (EU AI Act). The compromises reached after sleepless nights still need to find their way into a final text (which might only be available after the holiday season), but

With the trilogues on the draft EU AI Act entering what is probably their final phase, and the idea spreading that procuring AI cannot be done lightly, organizations are often confronted with hard choices, including how to source AI responsibly and protect against liability under an uncertain, developing legal framework. Contractual language is one

According to the latest draft of the EU cybersecurity certification scheme for cloud services (EUCS), dated August 2023 (and leaked by POLITICO), the data localisation requirement, which was heavily criticised by the industry, will now apply only to the highly critical "high+" level. Should the EUCS be approved as such, data localisation would not apply to the category 3 ("high") level. This might not be the end of a debate that has run wild between industry (with major cloud providers not keen on the idea) on one side, and, on the other, member states defending a level of sovereignty, such as France, Italy and Spain, together with EU institutions (such as the European Data Protection Board and ENISA).

Continue Reading: Fewer Clouds on … Cloud: The EU to (Finally) Drop Most Data Localisation Requirements in the EUCS