February 2026
The European Union’s ex ante digital competition law, the Digital Markets Act (DMA), has been in effect for almost two years. The DMA requires companies designated as gatekeepers to comply with ex ante competition rules, preventing alleged anti-competitive practices before they occur. As such, it contrasts sharply with the traditional ex post enforcement of competition policy.
The European Commission has already pursued investigations under the DMA and imposed fines totaling €700 million: on Apple for an alleged breach of an anti-steering obligation, and on Meta for allegedly failing to provide a version of its service that requires less personal data from users. The Commission is also investigating whether Amazon Web Services and Microsoft Azure cloud computing services should be designated as gatekeepers, and whether the DMA effectively addresses unfair practices in cloud computing, such as interoperability barriers, bundling, and restricted data access.
A further case, the European Commission’s late-2025 decision to open a new investigation into Google’s Site Reputation Abuse Policy (SRAP), provides an instructive case study of how the Commission’s enforcement of the DMA can have unintended consequences. Indeed, the case raises questions about whether DMA enforcement, conducted in the name of competition, risks constraining gatekeepers’ ability to fight spam, scams, and deceptive content, thereby weakening Europeans’ online safety. The case also highlights potential conflicts between the DMA and other landmark European laws such as the Digital Services Act (DSA) and the General Data Protection Regulation (GDPR).
Google’s SRAP was introduced in March 2024, in response to a rapidly escalating form of manipulation of online searches, known as “parasite SEO” or “site reputation abuse.” This practice exploits the reputation of high-authority websites, such as news outlets, universities, or government platforms. By hosting affiliate-heavy or deceptive content on these trusted domains, bad actors trick search engines into ranking low-quality products (like fraudulent health or financial schemes) higher than they deserve. Because these host sites already enjoy high trust, parasitic content can rank immediately for competitive keywords and bypass the normal process of earning credibility through quality and relevance. It can thus mislead users who reasonably assume that content appearing on trusted domains has been vetted. Google’s SRAP is aimed at preventing such misleading content and protecting users online.
Taking a step back, the Commission’s case against Google illustrates a growing problem in Europe’s digital regulatory landscape: conflicts between the DMA’s drive for competitiveness and other EU laws, such as the GDPR and the DSA, that seek to ensure users’ safety online. DMA enforcement risks penalizing gatekeepers’ efforts to comply with these other EU laws and protect end users. With gatekeepers hamstrung, the burden of vetting online services, content, and apps falls on European consumers themselves.
The purpose of this paper is to discuss how DMA enforcement and some of the law’s provisions risk conflicting with the Commission’s privacy and online safety objectives, and to propose policy approaches that could better balance consumer safety with the DMA’s enforcement.
What, then, could be done? This paper offers three recommendations:
- Ensure the DMA’s competition objectives do not trump privacy, security, and consumer safety. Competition policy should not override fundamental consumer protections, including privacy, cybersecurity, and device safety. Mandating access to sensitive user data, system APIs, or device capabilities may expose ordinary Europeans to harm in their daily online activities. DMA enforcement should explicitly adopt a “security-first” principle: the Commission could introduce exemptions for gatekeepers where technical evidence shows that specific interoperability or data-sharing obligations would undermine device integrity, encryption, or user safety.
- Establish a safe harbor for anti-spam ranking measures. The Commission could establish a formal safe harbor for anti-spam and ranking measures, ensuring that search integrity and Internet users’ safety are prioritized. Interventions targeting deceptive content should be deemed presumptively lawful under the DMA when they are based on objective indicators, such as affiliate or malware density. This approach could also support the Commission’s contestability goals, as it would protect legitimate publishers from being marginalized by AI-driven, industrial-scale manipulation.
- Promote transparency in the fight against spam and deceptive content, and establish processes to resolve disputes over commercial outcomes. Search engines should not only be allowed but expected to deploy safeguards that are necessary and proportionate to mitigating deceptive content and spam. The Commission could prioritize transparency and accountability by requiring search engines to spell out clear criteria for anti-spam measures, timelines for resolving disputes, and pathways for appeals. Such a process-oriented framework would enable rapid security responses while providing commercial clarity and remedies.

