What general counsel should know about AI and competition law

AI is rapidly moving from an innovative technology to a standard tool that is fully embedded in core business operations. AI systems increasingly influence pricing, demand and supply chains, product assortment and customer targeting. These are all areas that sit at the core of competition law. When used effectively, AI can substantially strengthen a company’s competitive position. AI can intensify competition, but it can also amplify coordination and foreclosure risks. Where are the boundaries for using AI under competition law? When does AI create compliance risks? When can it negatively affect competition, foreclose markets or lead to exploitative behaviour? Developments are evolving rapidly and regulators are paying close attention. This contribution highlights the most common competition law concerns and risks for businesses.

AI and the cartel prohibition

Article 101(1) TFEU prohibits agreements and concerted practices that restrict competition such as price fixing, market sharing and other forms of coordination.

AI and algorithms are increasingly used by companies to analyse markets, to monitor competitors and to adapt their own commercial strategy, often without human intervention. While their use is not illegal as such, self‑learning systems may give rise to competition concerns.

In straightforward cases, AI tools can be used to facilitate traditional cartels. For example, in a traditional price‑fixing cartel, AI may be used to maintain agreed price levels or to monitor deviations. The AI tool limits the need for communication between the cartel members, making it more difficult for authorities to detect the cartel. While price‑fixing cartels are nothing new, AI tools can enable them to operate in more sophisticated and less visible ways than earlier software solutions.

More complex issues arise where competitors use the same AI tools or external providers to determine their commercial behaviour, such as prices:

  • When AI tools process competitively sensitive data from these competitors, this could result in an unlawful exchange of information (a hub-and-spoke scenario).
  • It is also possible that the shared pricing tool is set up in such a way that it facilitates collusion. A pricing rule instructing the tool to match the lowest price on a particular online platform or shop plus 5%, or to match the price of a particular competitor minus 5%, can be seen as a concerted practice and is likely to violate Article 101 TFEU.
  • In distribution chains, automated pricing and monitoring tools can increase the risk of unlawful resale price maintenance.

The most difficult category involves autonomous, self‑learning AI tools that converge on stable pricing outcomes without explicit instructions to coordinate. Under EU competition law, mere parallel conduct – in the absence of unlawful information exchange or illegal pricing practices – remains lawful. However, when companies have, through system design or deployment choices, effectively abandoned independent market behaviour, the resulting conduct could still be caught by Article 101 TFEU.

AI and market power (dominance)

AI can also raise issues under Article 102 TFEU, which prohibits the abuse of a dominant market position. AI systems can amplify traditional abuse-of-dominance risks. Data advantages from AI tools and large-scale data collection can make it hard for smaller competitors and startups to enter the market, reducing competition.

Examples of compliance risks include:

  • Data access and interoperability: a dominant company controlling access to its platform, or to data that business users or competitors need in order to compete, and refusing or limiting that access (e.g. to an app developed by a third party). See the Android Auto case.
  • Self‑preferencing: businesses with a dominant position using ranking or recommendation algorithms that favour their own products or services over those of competitors. See the Google Shopping case. Personalised, AI‑driven advertising or search rankings can give businesses an unfair advantage over others.
  • Unfair terms: see the European Commission’s recent investigation into Google, which examines whether Google is distorting competition by imposing unfair terms and conditions on publishers and content creators, or by granting itself privileged access to such content, thereby placing developers of rival AI models at a disadvantage.
  • Tying and bundling: practices allowing a dominant business to leverage power from one market into another by forcing or incentivising customers to purchase products together (tied or bundled). Algorithmic targeting amplifies this by steering less price‑sensitive customers exclusively toward bundled offers, foreclosing competitors and strengthening the business’s market power.
  • Personalised pricing and price discrimination: the use of algorithms by a dominant undertaking may facilitate exclusionary strategies where pricing selectively targets customers critical for competitors. Personalised pricing algorithms can also lead to price discrimination, where different consumers are charged varying prices for the same product based on their individual characteristics or behaviour.
  • Rebate schemes: enabling dominant firms to optimise loyalty‑inducing rebates by targeting customers at risk of switching.

Competition authorities and enforcement

AI is changing how competition authorities enforce the rules. Authorities increasingly use AI tools to screen markets, analyse pricing patterns and identify anomalies that may warrant further investigations.

During investigations, AI can support e‑discovery by reviewing large volumes of data quickly to identify evidence and reduce investigation timelines. The use of AI tools makes enforcement more targeted and effective, and potentially lowers the practical threshold for authorities to open investigations.

The AI Act introduces documentation, transparency and reporting obligations that may generate information relevant for competition enforcement.

What does this mean for general counsel?

AI does not fundamentally change competition law risks, but it does require a refined compliance approach. Traditional compliance programmes focused on human conduct alone are no longer sufficient.

Practical considerations:

  • Include the use of AI in competition compliance programmes and training, including for non-legal teams (pricing, marketing and sales, data).
  • Identify where AI systems influence pricing, marketing, ranking, preferencing, contract formation, or other competition-sensitive parameters.
  • Assess what data is used to train and operate the AI tools, including any competitively sensitive information and competitor‑related inputs.
  • Assess governance: are AI outputs advisory or effectively determinative?

Document the decision-making process when deploying specific AI tools and selecting service providers, including why their use is considered lawful.

About the author(s)

Tosca Bokhove | Kennedy Van der Laan
Martijn van Bemmel | Kennedy Van der Laan