The Digital Services Act (DSA) has now moved from abstract framework to concrete enforcement. Two recent cases involving very large online platforms show how the same law, applied to similar types of conduct, can produce dramatically different outcomes. The difference lies less in the substance of the infringements and more in how each platform chose to respond once the EU Commission intervened.

In the first case, the Commission opened a formal investigation and raised concerns about several aspects of the platform's compliance. The platform had introduced a prominent interface element that suggested some form of account verification or trustworthiness, although it did not appear to be backed by a substantial verification process. At the same time, the Commission considered that the platform's advertising repository did not consistently include key information, such as the content and topic of ads and the identity of the sponsoring entity, and that it was structured in a way that may have reduced its usability for researchers and civil society. Access to public-interest data also seemed to be constrained by technical and contractual factors, including restrictive terms of service and procedures that could be viewed as burdensome.

Following preliminary findings, the Commission ultimately adopted a full infringement decision. It confirmed non-compliance with key transparency and design provisions under the DSA, imposed a substantial fine and set strict deadlines for remedial measures, with the possibility of periodic penalties if the platform failed to comply. There was no mention of commitments; the procedure ran its course to formal sanction.

The second case began in a strikingly similar fashion. Here too, the focus was on transparency in advertising: whether the platform’s ad repository was complete, accessible and genuinely usable for regulators, researchers and civil society. The Commission’s preliminary concerns again revolved around the ability to see which ads were being shown, to whom, under what targeting criteria, and how quickly that information became available.

But once the Commission communicated its initial findings, the platform took a very different path. It entered into extensive dialogue, provided detailed information and did not treat the investigation as a purely adversarial process. Instead, it presented a package of binding commitments designed to address the Commission’s concerns in full. These commitments included publishing the complete content of ads as they appear in users’ feeds, updating the repository within short, clearly defined timeframes, adding richer data on targeting and aggregate user characteristics, and enhancing search and filtering tools so the repository could genuinely support scrutiny and research.

The Commission concluded that these commitments adequately resolved the issues and decided to accept them under the DSA’s commitment mechanism (Art. 71 DSA). The case was closed without imposing a fine, with the focus shifting to monitoring implementation against agreed timelines.

What explains this contrast? On the publicly available information, it does not appear to be primarily the nature or gravity of the underlying concerns: both platforms were examined for similar types of deficiencies in transparency, design and data access. The full decisions and case files are not public, so any explanation can only be inferred from what the Commission has chosen to disclose. Taken at face value, however, the decisive variable seems to be each platform's conduct once the Commission's concerns were on the table.

For other regulated actors, the lesson is straightforward. Under the DSA, cooperation is not a peripheral, "soft" factor; on the available record, it is a key determinant of outcomes. A platform that treats the Commission as a counterpart with whom problems can be solved is not guaranteed a commitments-based resolution, but it is clearly in a better position to obtain one. Conversely, a platform that approaches the process as a game of brinkmanship appears to increase the likelihood of a formal infringement decision, significant fines and intrusive remedial obligations.

There is also a broader reputational dimension. Because the substance of infringement decisions under the DSA becomes public, those decisions shape how platforms are perceived by users, advertisers, researchers and policymakers. A pattern of limited cooperation or narrowly framed compliance efforts may, in practice, prompt closer attention from regulators in subsequent interactions, although this ultimately depends on the circumstances of each case. By contrast, a visible pattern of constructive engagement and credible remediation can help to build trust and dampen calls for harsher regulatory intervention.

Ultimately, DSA compliance is not only about “how much it might cost” in fines. It is about institutional credibility and long-term image. Money can settle a penalty, but it cannot erase the narrative that the decision creates.

Disclaimer: While every effort has been made to ensure that the information contained in this article is accurate, neither its authors nor Squire Patton Boggs accepts responsibility for any errors or omissions. The content of this article is for general information only, and is not intended to constitute or be relied upon as legal advice.
