Platform Algorithmic Abuse in the Age of Artificial Intelligence and Its Antitrust Regulation: Postprint
Fu Lin
Submitted 2025-06-24 | ChinaXiv: chinaxiv-202506.00292

Abstract

[Objective/Significance] With the advent of the artificial intelligence era, algorithms have risen to prominence, propelling rapid economic and social development. Concurrently, algorithm-induced social issues are increasingly emerging, among which platform algorithm abuse represents a significant concern. Examining the current landscape of legal regulation and practical dilemmas surrounding platform algorithm abuse can provide valuable reference points for its antitrust regulation.

[Method/Process] Employing journal articles from CNKI and judicial decisions from China Judgments Online as data sources, this study identifies the principal hotspots and research frameworks concerning platform algorithm abuse through systematic analysis, and undertakes a comparative analysis of domestic and international research themes while offering prospects for antitrust regulation.

[Result/Conclusion] Based on an analysis of three categories of platform algorithm abusive behaviors, this paper proposes solutions across several dimensions: platform algorithm auditing, regulation of data collection and utilization, refinement of antitrust legislation targeting platform algorithm abuse, and strengthening of antitrust regulatory mechanisms for such abuse. These recommendations aim to achieve a balance between competition regulation and competition promotion.

Full Text

Antitrust Regulation of Platform Algorithm Abuse in the Age of Artificial Intelligence

Fu Lin
School of Law, Southeast University, Nanjing 211189, China


Keywords: artificial intelligence; platform economy; algorithm abuse; antitrust regulation

0. Problem Statement

With the rise of the Fourth Industrial Revolution, information technologies centered on intelligence and digitization—such as big data, industrial internet, cloud computing, and artificial intelligence—have flourished, propelling the digital economy to become a crucial engine for high-quality social development. As a key component of the digital economy, the platform economy, characterized by distinctive features including network effects, multi-sided markets, and dynamic innovation, continuously spawns new business models while facing a series of antitrust issues and challenges, among which internet platform algorithm abuse is particularly prominent. Examples include "algorithmic price discrimination," where platform operators use AI algorithms to charge different prices for the same goods or services; "algorithmic self-preferencing," where platform companies leverage their market advantages to prioritize their own products and services in search results to gain more traffic; and using specific algorithms to set parameters that elevate target information in ranking order, triggering a series of legal regulation issues. How to effectively regulate platform algorithm abuse through antitrust measures has become a focal point for antitrust supervision worldwide. For instance, the United States' American Innovation and Choice Online Act and Algorithmic Accountability Act both contain specific provisions on platform algorithm abuse, while the European Union's Artificial Intelligence Act proposes regulatory approaches for platform algorithms. In China, the 2021 Anti-Monopoly Guidelines for the Platform Economy Sector (hereinafter referred to as the Platform Anti-Monopoly Guidelines) explicitly prohibits operators in the platform sector from excluding or restricting competition through data, algorithms, platform rules, or other means to form monopoly agreements or implement discriminatory treatment. 
The 2022 amended Anti-Monopoly Law of the People's Republic of China (hereinafter referred to as the Anti-Monopoly Law) Article 9 similarly prohibits operators from using algorithms to exclude or restrict market competition. Additionally, the Law on the Protection of Consumer Rights and Interests and the Regulations on the Management of Algorithmic Recommendations for Internet Information Services (hereinafter referred to as the Algorithmic Recommendation Management Regulations) also regulate internet platform operators' use of algorithms to exclude or restrict competition.

Platform algorithm abuse has also become a hot topic in theoretical research. Existing research offers three main perspectives on regulating such behavior. The first advocates improving the algorithm accountability system, with Su Yu [2] arguing for clearer delineation of specific responsibilities for developers, operators, and third parties in specific cases. The second emphasizes strengthening protection of individual rights, with Zhang Shuling [3] suggesting the use of online platforms for education and training to raise awareness and avoid excessive dependence on algorithms. The third calls for enhanced antitrust review of algorithms, with Meng Yanbei and Zhao Zeyu [4] proposing stronger algorithmic antitrust review based on monopoly leverage effect theory, and Yu Zuo and Li Siming [5] suggesting review based on factors such as market dominance and platform dependency. Overall, although provisions regulating platform algorithm abuse such as "big data price discrimination" are scattered across multiple laws and regulations, and existing research focuses on technical regulation of algorithm abuse, new forms of algorithmic price discrimination, algorithmic self-preferencing, and algorithmic collusion continue to emerge, with legal frameworks still unable to provide effective support for antitrust enforcement. How to better regulate the persistent algorithm abuse by internet platforms from a legal perspective is an urgent issue to be resolved amid the flourishing development of the digital economy.

This paper endeavors to analyze this problem from an antitrust law perspective. First, it defines platform algorithms, noting that they have deeply intervened in human social life with implicit value positions, thus possessing legal regulability. Second, it analyzes three typical types of algorithm abuse in the current platform economy, pointing out that these behaviors may lead to anti-competitive effects such as excluding or restricting competition, infringing upon consumer rights, and triggering data resource monopolies. Third, it examines the current state of antitrust legal regulation of platform algorithm abuse in China, identifying existing problems. Finally, it offers recommendations and suggestions for better regulating platform monopolies caused by algorithm abuse in the future.

1.1 Definition and Regulability of Platform Algorithms

Before analyzing algorithm abuse in the platform economy, it is necessary to clarify the definition, characteristics, and regulability of platform algorithms.

1.1.1 Definition of Platform Algorithms

In the narrow sense, algorithms refer to rules derived from mathematics and computer science for solving mathematical and computational problems, which can be regarded as pure science or technology. From a broader perspective, moving beyond mathematics and computer science into the social sciences, algorithms are not merely automated decision-making rules related to machines, but are more broadly defined as all decision-making steps and procedures [6]. Given that algorithms currently frequently intervene in all aspects of social life, this paper adopts a moderate definition, treating algorithms as various forms of decision-making in machine-human interaction. Therefore, platform algorithms in this paper refer to the complete set of mechanisms that internet platform enterprises actively implement in their daily operations, using the platform as a carrier, through data computation, parameter code setting, and automated machine judgment during interactions such as transactions with consumers and operators within the platform.

1.1.2 Characteristics of Platform Algorithms

Overall, algorithms in the platform economy exhibit characteristics of generality, effectiveness, finiteness, black-box nature, and regulability. Generality means platform algorithms have wide application scope, covering different problems and scenarios. Effectiveness means platform algorithms can produce valid results or solutions to pending problems within a reasonable time. Finiteness means algorithms must terminate after executing a finite number of steps. Black-box nature refers to the fact that the internal working mechanisms and decision-making principles of algorithms are unknown to users, who can only operate according to the algorithm's input and output requirements without understanding its complex internal logic and calculation processes. In practice, it is precisely this "black-box" characteristic of platform algorithms that enables operators to use algorithms to manipulate pricing and implement collusion, making such behavior difficult to detect and prove. The regulability of platform algorithms means that AI algorithms, represented by platform algorithms, have deeply intervened in human society and become an inseparable part of social value judgment, losing their original independence and neutrality, thus necessitating regulation.
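The black-box characteristic described above can be made concrete with a minimal, hypothetical sketch (all function names and numbers here are invented for illustration and are not drawn from the paper or any real platform): an outside observer such as a consumer or regulator can probe a pricing function's inputs and outputs, but the internal logic that produces a price gap stays hidden.

```python
# Illustrative sketch of the "algorithmic black box": observers see only
# input/output pairs, never the internal pricing logic. Hypothetical code.

def opaque_quote(user_profile: dict) -> float:
    """Stand-in for a proprietary pricing model: callers see only the
    returned price, not the feature weights or decision rules inside."""
    base = 100.0
    # Hidden logic: loyal, less price-sensitive users are quietly charged more.
    if user_profile.get("orders_last_year", 0) > 20:
        base *= 1.15
    return round(base, 2)

# An outside observer can only probe input/output pairs:
new_user = {"orders_last_year": 2}
loyal_user = {"orders_last_year": 35}
print(opaque_quote(new_user))    # 100.0
print(opaque_quote(loyal_user))  # 115.0 -- the gap is visible, its cause is not
```

Even in this toy case, proving *why* the two prices differ requires access to the function body, which is exactly what platform operators are unwilling to disclose.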

1.2 Anti-Competitiveness of Platform Algorithm Abuse

As algorithm abuse has become one of the principal manifestations of platform monopoly, the anti-competitive effects of platform algorithm abuse warrant examination in turn.

1.2.1 Platform Algorithm Abuse Infringes Consumer Rights

On August 1, 2024, the China Consumers Association released its consumer rights protection hot spots for the first half of 2024, in which "frequent big data price discrimination in the platform economy" remained a persistent difficulty for consumers: examples include "gold members paying more than regular members for hotel bookings," "diamond members on ride-hailing apps being charged higher prices than new members," and price differences of over 900 yuan for the same flight class purchased through three different accounts [7]. Such platform algorithm abuse seriously undermines consumers' right to know and their capacity for independent decision-making. In recent years, judicial cases involving platform algorithms have also been increasing. To further examine how improper platform algorithm behaviors affect consumer rights, the author searched databases such as China Judgments Online and Beida Fabao using keywords including "platform algorithm" and "online unfair competition disputes," systematically compiling 20 typical cases from 2021 to 2024 (Table 1 [TABLE:1]).

Case analysis shows that among these 20 representative cases, the technical behaviors involved are, in descending order of frequency: forced exclusivity ("choose one of two," 7 cases), fake traffic (6 cases), excessive data scraping (3 cases), algorithmic personalized push (2 cases), and algorithmic malicious price comparison (2 cases). These behaviors seriously harm the legitimate order of online platforms and infringe upon consumer rights.

1.2.2 Platform Algorithm Abuse Excludes and Restricts Competition

Platform algorithm abuse not only damages consumer rights but also enables digital platforms to continuously establish monopolistic positions using their platform advantages to pursue maximum benefits, employing algorithms to exclude and restrict competition and seriously undermining free and fair market competition. Specifically, platforms use their advantages in algorithms and data to treat algorithms as agents for pricing and price adjustment, engaging in unfair competition. Additionally, platform operators may use algorithms to implement differentiated services, conduct price forecasting, and optimize self-operated businesses to gain competitive advantages. Combined with the inherent cross-network effects of the platform economy, this easily leads to monopolistic risks such as high entry barriers, user lock-in, and concentrated market power, thereby amplifying the resulting competitive harm [8].

1.2.3 Platform Algorithm Abuse Triggers Data Resource Monopoly

With the rapid development of modern digital technology, platforms use their advantages in data acquisition, management, and institutional control to continuously strengthen their dominant position, triggering data resource monopolies. Currently, large digital platforms frequently engage in data trading wars using data resources and algorithmic technology, thereby affecting the fairness of platform economic competition. The phenomenon of monopolizing data through algorithm abuse is becoming increasingly common, with data resources becoming a new competitive advantage for platforms [9]. Furthermore, super platforms with massive amounts of data can easily use their data advantages to implement abuse of market dominance, such as refusing competitors access to data resources and using real-time data analysis to monitor competitors' algorithms.

1.3 Typical Platform Algorithm Abuse Behaviors

The anti-competitive use of platform algorithms primarily exploits data advantages. China currently lacks a legal framework governing data property rights, circulation, trading, and transfer, leaving insufficient legal support for combating the anti-competitive use of platform algorithms. Meanwhile, the distinctive nature of data and the deep integration of data and algorithms tend to spawn new forms of monopolistic behavior unknown in traditional markets, such as platform algorithmic price discrimination, platform algorithmic self-preferencing, and platform algorithmic collusion, making effective supervision of platform algorithms difficult to implement.

1.3.1 Platform Algorithmic Price Discrimination Behavior

The concept of "price discrimination" originally comes from economics, referring to situations where "the same operator charges different prices for the same product to different consumers or to the same consumer based on purchase quantity or order" [10]. It is typically classified into first-degree, second-degree, and third-degree price discrimination. First-degree price discrimination assumes that the operator knows the maximum price consumers are willing to pay and sells goods accordingly [11], allowing the seller to capture all consumer surplus and maximize profits. Second-degree price discrimination means the selling price depends on the quantity purchased, where operators give discounts to consumers who buy in large quantities, similar to the "small profits but quick turnover" business concept in daily life. Third-degree price discrimination refers to operators implementing differentiated pricing for different consumers in different markets, specifically by dividing consumers into different groups based on external characteristics such as age, gender, and social identity, then charging lower prices to price-sensitive users and higher prices to less price-sensitive users, such as "free for seniors" or "half-price for students" pricing strategies. For example, in the case of Liu Quan v. Beijing Sankuai Technology Co., Ltd., Liu Quan sued a merchant for implementing "differentiated delivery fees" for customers in the same community purchasing the same package [12]. Another case involved Zheng Yugao v. Shanghai Ctrip Commerce Co., Ltd., where a customer suspected the platform of using its network advantage to unilaterally change and manipulate ticket prices [13].

In the platform economy, price discrimination is usually combined with algorithmic technology, manifesting as internet platforms analyzing consumers' personal information through AI algorithms to form user profiles and set different prices for different consumers, achieving "thousand people, thousand prices." Platforms often charge higher prices to familiar customers with stronger purchase intentions, commonly known as "big data price discrimination," while offering price discounts or subsidies to new users to achieve economies of scale and scope, showing a "preference for new over old" pattern. When determining the nature of big data price discrimination, the more common academic view is to identify it as price discrimination, similar to first-degree and third-degree price discrimination in economics.
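The pricing patterns just described can be illustrated with a rough sketch (hypothetical code, not drawn from any real platform; all function names, profile fields, and figures are invented): third-degree discrimination prices by externally observable group, while "big data price discrimination" prices per individual behavioral profile, approximating first-degree discrimination at scale.

```python
# Hypothetical sketch of the price-discrimination patterns discussed above.

def third_degree_price(list_price: float, segment: str) -> float:
    """Third-degree discrimination: one price per observable group."""
    multipliers = {"student": 0.5, "senior": 0.0, "regular": 1.0}  # e.g. seniors free
    return list_price * multipliers.get(segment, 1.0)

def big_data_price(list_price: float, profile: dict) -> float:
    """'Big data price discrimination': per-user pricing from a behavioral
    profile -- the "thousand people, thousand prices" pattern."""
    price = list_price
    if profile.get("is_new_user"):
        price *= 0.8   # subsidy to acquire new users ("prefer new over old")
    elif profile.get("high_purchase_intent"):
        price *= 1.2   # mark up loyal, less price-sensitive customers
    return round(price, 2)

print(third_degree_price(100, "student"))                   # 50.0
print(big_data_price(100, {"is_new_user": True}))           # 80.0
print(big_data_price(100, {"high_purchase_intent": True}))  # 120.0
```

The sketch makes the legal distinction visible: the first function keys prices to public group traits, while the second keys them to covertly profiled individual behavior, which is what the academic view likens to first-degree discrimination.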

1.3.2 Platform Algorithmic Self-Preferencing Behavior

Platform algorithmic self-preferencing refers to the covert, automated systematic behavior where platform enterprises, based on massive data collection and analysis, use algorithms to favor and give preferential treatment to their own businesses while differentially treating competitors' businesses. From the perspective of behavioral effects, platform algorithmic self-preferencing has certain legitimacy, as it can exercise autonomous power for internal governance through algorithmic technology, thereby achieving positive network effects and attracting more consumers. However, such behavior also carries the risk of causing competitive harm, such as hindering healthy competition in platform markets and forming high market entry barriers [4].

Common types of platform algorithmic self-preferencing can be divided into three categories. (1) Altering Search Rankings to Implement Self-Preferencing: Globally, cases of large platform enterprises artificially changing search ranking algorithms to favor their own businesses can be traced back to the EU's "Google Shopping Case" in 2015. In this case, the European Commission accused Google of systematically favoring its own products in general search results across the European Economic Area through manual intervention in its search ranking algorithm, thereby stifling competitors' innovation motivation. Additionally, in June 2024, South Korea's largest e-commerce platform Coupang was fined heavily by its national antitrust enforcement agency for manipulating search rankings. Investigations revealed that from February 2019 to July 2023, Coupang manipulated its search algorithm to ensure that 64,250 of its private label and directly sold products consistently appeared at the top of website search results. This search ranking manipulation was identified as unfair competition that obstructs consumers' reasonable choice rights and distorts market circulation order. (2) Using Algorithmic Blocking to Implement Self-Preferencing: Algorithmic blocking refers to platform operators using algorithms to identify competitors and then permanently or selectively refuse their access to or use of the platform's facilities. A typical case is Facebook's algorithmic blocking of Vine. In 2013, after Facebook identified through algorithmic technology that the social application Vine had copied its core news feed function, it immediately implemented algorithmic blocking on Vine on the day of its release, cutting off its access to the API and preventing Vine users from searching for their Facebook friends within the application. 
A similar case occurred in December 2022 when Twitter released a policy prohibiting users from promoting their other social media accounts on the platform, meaning users could no longer include links to other social platforms in their Twitter profiles or send tweets directing other users to their Facebook or Instagram accounts. (3) Manipulating Algorithms to Seize Competitor Information for Self-Preferencing: In the platform economy, low-value-density data must be combined with systematic and agile algorithmic technology for analysis and processing to be transformed into competitive resources and advantages. However, this is by no means easy for newly arrived merchants on the platform. In contrast, large platform enterprises that have already built digital ecosystems, being both operators providing platform services and market participants competing with merchants on the platform, possess dual identities as both "referee" and "player," enabling them to track market dynamics and development trends of competitors on the platform through algorithms and formulate sales strategies for their own businesses accordingly [14]. For example, Amazon, as the largest U.S. electronics retailer and cloud computing company, has repeatedly been embroiled in lawsuits alleging abuse of platform merchant data to profit from its own product sales. In July 2019, the European Commission found during its investigation into allegations of Amazon manipulating algorithms to seize competitor data that Amazon employees frequently used algorithms to obtain business data of merchants on the platform, thereby giving its own business more favorable delivery and advertising services in competition.

1.3.3 Platform Algorithmic Collusion Behavior

In market activities, collusion is an extremely common anti-competitive behavior, and the rise of algorithms in the AI era has provided a new model for collusion—algorithmic collusion. The term "collusion" originates from U.S. antitrust law, and its connotation is equivalent to "monopoly agreement" in China's Anti-Monopoly Law, mainly referring to "two or more operators monopolizing the market through coordinated behavior to exclude or restrict competition, which can be manifested as manipulating market prices, limiting output, and adopting other strategies affecting market competition" [15].

Compared with traditional collusion, algorithmic collusion in the AI era presents new characteristics of lower implementation thresholds, stronger concealment, and broader impact. First, platform algorithmic collusion has lower implementation thresholds. Traditional collusion formation has high requirements for market environment, typically needing concentrated market share, limited number of operators, and high concentration. In contrast, AI-guided algorithmic collusion breaks through these market condition limitations, enabling collusion between operators and between operators and producers to be easily achieved through convergence algorithms for optimal strategies even in normal, open market environments. Second, platform algorithmic collusion has stronger concealment. Algorithms themselves exist in virtual cyberspace with weak connections to real entities, and the black-box nature of platform algorithms makes it harder to detect and identify operators' collusion behaviors using algorithms. Third, platform algorithmic collusion has broader impact. Given that digital market products have differentiated strategies and dynamic pricing characteristics, once operators reach algorithmic collusion, the impact will quickly expand to markets for different products and services, leading to persistently high prices for products and services in specific markets and squeezing other or potential market competitors.

In 2017, the OECD hosted a roundtable on digital economy titled "Algorithms and Collusion," categorizing algorithmic collusion into four types: monitoring algorithmic collusion, parallel algorithmic collusion, signaling algorithmic collusion, and self-learning algorithmic collusion. (1) Monitoring Algorithmic Collusion refers to operators having expressed intention to collude and using algorithms to collect competitors' business information in real-time to achieve coordination with competitors on price and output. (2) Parallel Algorithmic Collusion refers to collusion types where competing operators use the same algorithm to reach uniform prices, with the typical case being hub-and-spoke algorithmic collusion, where operators achieve price coordination through pricing algorithms developed by third parties. (3) Signaling Algorithmic Collusion refers to algorithms automatically sending price signals to competitors based on collected data analysis and reaching collusion after the signals are received [16]. A typical case of signaling algorithmic collusion is the E-turas case heard by the European Court of Justice in 2016. As an online travel booking platform in Lithuania, E-turas sent a signal in August 2009 to travel agencies on its platform capping discounts at 3%. After the information was released, no travel agency on the platform expressed opposition. The European Court of Justice held that the travel agencies' failure to take an evasive attitude after receiving the limited discount signal from platform system manager E-turas should be regarded as participation in this signaling algorithmic collusion. (4) Self-Learning Algorithmic Collusion refers to algorithms analyzing and self-learning market changes based on collected data information and formulating prices without human guidance to maximize profits.
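The hub-and-spoke variant of parallel algorithmic collusion in the OECD taxonomy above can be sketched as follows (hypothetical code; the pricing formula and all figures are invented for illustration): two rivals independently license the same third-party pricing algorithm, and uniform prices emerge without any direct communication between them.

```python
# Toy sketch of hub-and-spoke ("parallel") algorithmic collusion:
# identical third-party pricing logic yields aligned prices with no
# explicit agreement between the competing sellers. Hypothetical code.

def third_party_pricer(demand_index: float, cost: float) -> float:
    """The 'hub': one vendor's pricing formula, licensed to many sellers."""
    return round(cost * (1.5 + 0.5 * demand_index), 2)

# Two rival 'spokes' with identical costs feed the same inputs to the hub.
seller_a = third_party_pricer(demand_index=0.8, cost=40.0)
seller_b = third_party_pricer(demand_index=0.8, cost=40.0)
assert seller_a == seller_b   # uniform prices emerge algorithmically
print(seller_a, seller_b)     # 76.0 76.0
```

This is why Article 8 of the Platform Anti-Monopoly Guidelines, discussed below, treats the third-party algorithm provider as a potential organizer of the agreement: coordination is embedded in the shared code rather than in any communication between the spokes.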

2.1 Current Status of Antitrust Regulation of Platform Algorithm Abuse in China

China issued the Outline for Promoting Big Data Development in 2015, explicitly requiring research on data opening systems and standardized management of data collection, transmission, storage, and utilization. In 2016, the Notice on Organizing and Implementing Major Projects for Promoting Big Data Development further clarified industry management regulations for big data application, development, and other processes. In 2020, the Opinions on Building a More Complete System and Mechanism for Market-oriented Allocation of Production Factors proposed "exploring the establishment of unified and standardized data management systems, improving data quality and standardization, and enriching data products" [17]. Subsequently, many laws and regulations concerning data were introduced. Since the implementation of the Anti-Monopoly Law in 2008, China has formed a socialist antitrust legal system with Chinese characteristics, with the Anti-Monopoly Law at its core, supplemented by 1 administrative regulation, 8 State Council antitrust guidelines, and 6 departmental rules [18]. Among them, the Anti-Monopoly Law, Platform Anti-Monopoly Guidelines, Guidelines for Defining Relevant Markets, and other laws and normative documents provide the main basis for antitrust regulation in the platform economy sector. In addition, there are specialized laws and regulations for different types of algorithm abuse.

First, regarding algorithmic price discrimination, most Chinese scholars believe that Article 22(1)(6) of the Anti-Monopoly Law on "differential treatment" should be the regulatory basis. Furthermore, to better adapt to platform economic development, the Platform Anti-Monopoly Guidelines contain more specific provisions on this algorithm abuse behavior. Article 17(1)(1) clarifies the specific manifestation of algorithmic price discrimination in the platform economy, namely "implementing differential transaction prices or other transaction conditions based on big data and algorithms according to transaction counterparties' payment capacity, consumption preferences, usage habits, etc." [19]. Subsequently, the Provisions on Prohibiting Abuse of Market Dominance enumerated typical types of differential treatment by operators with market dominance and further clarified the concept of "transaction counterparties under the same conditions."

Second, regarding regulation of platform algorithmic self-preferencing, academic circles generally believe that refusal to deal, tying, and differential treatment provisions in the Anti-Monopoly Law on abuse of market dominance should apply. Additionally, Articles 14, 16, and 17 of the Platform Anti-Monopoly Guidelines further clarify the above three behaviors. Article 14 states that platform operators with market dominance setting unreasonable restrictions and obstacles in algorithms and other aspects that make it difficult for transaction counterparties to conduct transactions may constitute abusive refusal-to-deal behavior. Article 16 enumerates specific content of digital platform tying behaviors, including using specific algorithmic technology to make transaction counterparties accept additional goods or services provided by the platform in a manner that cannot be chosen, refused, or changed. Article 17 expands the interpretation of "other transaction conditions" in Article 22(1)(6) of the Anti-Monopoly Law in combination with digital markets [20], making this differential treatment behavior include not only price discrimination but also implementing differential algorithms, standards, rules, payment conditions, or payment methods. This precisely matches the non-price differential treatment characteristics of platform algorithmic self-preferencing.

Finally, regarding regulation of platform algorithmic collusion, Article 5 of the Platform Anti-Monopoly Guidelines includes it in the category of "other concerted practices," while Articles 6 and 7 incorporate it into the regulatory scope of traditional horizontal and vertical monopoly agreements. Simultaneously, Article 8 of the Platform Anti-Monopoly Guidelines introduces the concept of "hub-and-spoke agreements" for the first time in China's antitrust regulation system, providing a policy basis for regulating hub-and-spoke algorithmic collusion. In addition, Article 19 of the Anti-Monopoly Law prohibits operators from assisting in monopoly agreement implementation, which can be regarded as a prohibition on algorithmic collusion behavior, meaning that operators providing the same or similar pricing algorithm services to other market operators may be regarded as organizing or assisting behavior.

2.2.1 The Current Antitrust Legal System Cannot Fully Regulate Platform Algorithm Abuse

Regarding the regulation of platform algorithm abuse, the current antitrust legal system cannot provide complete coverage. First, applying the existing antitrust basis, the "differential treatment" stipulated in Article 22 of the Anti-Monopoly Law, to platform algorithmic self-preferencing remains only partially apt. The main reason is that differential treatment targets "transaction counterparties," i.e., third parties unrelated to the platform itself, whereas the objects of algorithmic self-preferencing cannot meet this condition, because the favored self-operated businesses are necessarily affiliated with the platform enterprise. Second, factual determination is difficult. For instance, establishing factual elements such as "the same counterparty" and "the same conditions" under the Anti-Monopoly Law is far from straightforward. Moreover, Article 22 of the Anti-Monopoly Law targets operators with market dominance abusing that dominance, yet in enforcement, platform algorithms and core data are key factors for platform survival and development, and operators are unwilling to disclose them, creating difficulties in evidence acquisition and factual determination. Finally, an operator's market dominance fluctuates with market dynamics. Unlike traditional static, price-oriented markets, the platform economy exhibits network effects and dynamic competition, and market power depends more on resources such as data, traffic, and algorithms than directly on market share. Traditional methods of determining market dominance based on market share are therefore difficult to apply to the platform economy.

2.2.2 "Algorithmic Black Box" Exacerbates Difficulties in Identifying Platform Algorithm Abuse

Traditional algorithms pursue defined objective functions with clear, traceable logic. Big data algorithms based on artificial intelligence, however, have opaque processes between input and output, exacerbating the "algorithmic black box" problem and making the computational process difficult for consumers to challenge [21]. Given that algorithms are highly complex and specialized, the public remains largely ignorant of their data collection and mining procedures, computational decision-making methods, and so on. This black-box characteristic makes algorithm abuse in the platform economy more concealed and harder to identify. Platform enterprises can use algorithms to conceal monopolistic conduct within marketing strategies, so that even when consumer rights and the market competition order are infringed, black-box characteristics blur the boundary between illegal monopolization and legitimate competition [22]. Taking platform algorithmic price discrimination as an example, the application of algorithmic technology diversifies the size, timing, scenario, and scope of specific infringements, further amplifying the regulatory challenge of the "algorithmic black box": it becomes harder to identify and trace whether the conduct constitutes differential treatment under the Anti-Monopoly Law, complicating the apportionment of fault and the determination of tort liability. For instance, when Goldman Sachs and Apple jointly promoted a credit card in 2019, its credit limits were suspected of gender discrimination, but the responsible personnel attributed the problem to the algorithmic black box. Given the complexity of algorithmic liability determination, it is difficult to establish whether the fault lies in natural persons' intent or in the technology itself [23].
Even where the discrimination reflects the subjective intent of algorithm decision-makers, victims find it difficult to collect evidence. Meanwhile, unlike traditional pricing, platform algorithmic price discrimination uses specific algorithmic models to identify target products from massive data, set target prices, and continuously optimize and adjust those prices over time [24]. This dynamic character of algorithmic pricing makes it difficult for antitrust enforcement agencies to ascertain the prices consumers actually pay at the terminal, and hence to judge their reasonableness. In addition, platforms can use algorithms to obscure price and cost mechanisms from consumers, ostensibly providing free goods and services while in fact inflicting invisible, non-economic harm on users' privacy and personal information, harm that enforcement agencies likewise find difficult to identify because of algorithmic black-box characteristics.
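The dynamic, personalized pricing just described can be sketched with a minimal hypothetical model. The profile attributes and multipliers below are invented for illustration, but they show why two consumers can face different terminal prices for the same product at the same moment, leaving enforcement agencies no single price to examine:

```python
# Illustrative sketch (hypothetical model): personalized dynamic pricing.
# Two users querying the same product simultaneously receive different
# terminal prices, which is the observability problem regulators face.

def quote_price(base_price: float, profile: dict) -> float:
    """Adjust the base price by a profiled willingness-to-pay multiplier."""
    multiplier = 1.0
    if profile.get("loyal_customer"):   # repeat buyers tolerate higher prices
        multiplier += 0.15
    if profile.get("price_sensitive"):  # comparison shoppers get discounts
        multiplier -= 0.10
    if profile.get("premium_device"):   # device model used as a wealth proxy
        multiplier += 0.05
    return round(base_price * multiplier, 2)

base = 100.0
user_a = {"loyal_customer": True, "premium_device": True}
user_b = {"price_sensitive": True}

print(quote_price(base, user_a))  # 120.0
print(quote_price(base, user_b))  # 90.0
```

In a real system the multipliers would themselves be continuously re-learned from behavioral data, so even this per-user price is a moving target, compounding the evidentiary difficulty noted above.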

2.2.3 The Supervision Mechanism for Platform Algorithm Abuse Remains Imperfect

In recent years, against the backdrop of the government optimizing the business environment, antitrust enforcement agencies have played more of a service provider role, delegating management authority to platforms themselves. However, regarding supervision of platform algorithm abuse, both external supervision by antitrust enforcement agencies and internal governance by platform enterprises have many problems.

First, China's antitrust enforcement agencies still suffer from inconsistent enforcement, insufficient experience, and weak antitrust awareness in supervising platform algorithm abuse. Second, regarding internal platform governance, platform rules should give operators and users within a platform an effective means of supervising it, but at present the vast majority of platform rules offer operators and users no genuine choice; instead they constitute a "malicious consent" mechanism that coerces agreement, since users who do not accept the rules cannot continue using the platform's services. Moreover, China's regulatory mechanism for platform algorithms themselves still needs improvement. For example, Article 23 of the Algorithmic Recommendation Management Regulations, which stipulates "establishing an algorithm classification and grading safety management system," classifies and manages only algorithm service providers without directly grading the risk levels of algorithms. Finally, supervising platform algorithms places high demands on the professional competence and technical proficiency of China's antitrust enforcement personnel: given the complexity and specialization of algorithms, personnel who lack solid algorithmic literacy will find it difficult to review and supervise algorithm operations promptly when they deviate.

3.1 Legal Regulation of Platform Algorithms

Since platform algorithm abuse may infringe numerous rights and interests, including the sound market competition order, consumers' legitimate rights, users' privacy and personal information security, and data security, antitrust regulation should be coordinated with laws such as the Law on the Protection of Consumer Rights and Interests, the Personal Information Protection Law, and the Data Security Law of the People's Republic of China.

First, the current Anti-Monopoly Law and supporting rules should be refined to regulate algorithmic collusion, expanding the scope of subjects who can form monopoly agreements. Specifically, algorithm designers and users could be introduced into the scope of subjects who can form monopoly agreements. If algorithm developers embed their own value biases into algorithms during the design phase, causing the algorithms to have collusive anti-competitive effects when put into use, they can be identified as subjects of algorithmic collusion. Meanwhile, for collusion that occurs during subsequent algorithm application due to self-learning or changes in the business environment when no subjective bias was embedded during the design phase, if designers are aware of the collusive behavior but do not take any remedial measures, they can also be identified as subjects of algorithmic collusion.

Second, the "digital clause" of Article 9 of the current Anti-Monopoly Law should be further improved. The 2022 amendment added Article 9, the "digital clause," which in principle prohibits platform operators from using algorithms to implement monopolies. However, faced with the rapid iteration of AI algorithms and the constantly evolving means by which platforms use algorithms to monopolize, the clause's provisions are neither specific nor clear enough to meet current regulatory needs. Consideration could therefore be given to adding, within this clause, explanations of the particular character of monopolistic conduct driven by data and algorithms, and to clarifying the relationship between algorithm-enabled monopolization and traditional monopolistic conduct. Additionally, reference could be made to the tenth amendment to Germany's Act Against Restraints of Competition (GWB), which treats self-preferencing as an independent form of abuse: when the Platform Anti-Monopoly Guidelines are improved in the future, typical platform algorithm abuse behaviors could likewise be regulated in independent clauses with specific, detailed identification standards, forming a logically consistent antitrust theoretical framework.

Finally, the legal subject status of platform enterprises should be further clarified in liability determination. Designating platform enterprises as bearers of defined legal responsibility can effectively regulate their business conduct, urge them to guard against the negative effects of "algorithmic black boxes," and constrain the subjects of algorithm development, design, and operation. At the same time, it clarifies who bears responsibility when consumer rights are infringed and prevents the escalation of damage.

3.2 Regulation of Platform Algorithm Review

"Compared with the traditional economy, the digital characteristics of the platform economy lead to obvious monopolistic tendencies in this field" [25]. Therefore, to prevent AI algorithmic power from becoming alienated and turning against human interests, regulating platform algorithm review is urgent.

First, reference can be made to extraterritorial legislative experience such as the EU's Artificial Intelligence Act and General Data Protection Regulation to further integrate existing legal norms and establish clear platform review and accountability systems. Specifically, during the algorithm design process, experts should be engaged to evaluate and review algorithm compliance and transparency. New algorithms developed by platform enterprises, especially the source code and training data of large platforms' algorithms, should be filed and reviewed, and information such as the algorithm's developers and actual controllers, development time, application scenarios, technical standards, and risk-prediction analysis should be registered and filed. At the same time, platform enterprises' responsibilities should be further consolidated so that algorithm results can be reviewed and traced when determining platform liability.

Second, specialized algorithm supervision agencies and teams can be established to strengthen algorithm supervision and management, with personnel composed of professional technical personnel, state agency personnel, people's representatives, and legal and ethical experts. The functions of algorithm supervision agencies include improving algorithm design and operation rules, clarifying supervision and review procedures, accepting consumer complaints, formulating specific measures to constrain regulators, and publicizing relevant laws and regulations. Through methods such as holding joint meetings, open and effective accountability and supervision mechanisms can be formed to ensure fair and transparent algorithm operation processes.

Finally, the role of industry associations should be fully utilized. Internet industry associations can formulate unified algorithm technical standards for typical, high-frequency platform algorithm abuse behaviors to enhance algorithm transparency and reduce the negative impact of abuse. In addition, platform enterprises can draw on the EU Artificial Intelligence Act's approach of classifying AI systems by assessed risk: evaluate the potential negative impacts of the algorithms they develop on consumers' legitimate rights, personal information protection, and the market competition order; grade algorithms by risk level; increase compliance guidance for enterprises developing high-risk algorithm systems; and list reasonable, feasible emergency and remedial measures.
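A risk-tiered classification in the spirit of the EU Artificial Intelligence Act could be sketched as follows. The scoring criteria and tier labels are illustrative assumptions, not the Act's actual legal tests:

```python
# Illustrative sketch: tiering algorithms by assessed risk, loosely
# modeled on the EU AI Act's tiered idea. The risk factors, weights,
# and thresholds here are hypothetical.

def classify_risk(uses_personal_data: bool,
                  affects_pricing: bool,
                  scale_users: int) -> str:
    """Map simple risk factors to a compliance tier."""
    score = 0
    if uses_personal_data:          # personal data raises privacy risk
        score += 2
    if affects_pricing:             # pricing algorithms raise competition risk
        score += 2
    if scale_users > 1_000_000:     # scale amplifies any harm
        score += 1
    if score >= 4:
        return "high-risk: pre-deployment filing and audit"
    if score >= 2:
        return "limited-risk: transparency obligations"
    return "minimal-risk: self-regulation"

print(classify_risk(uses_personal_data=True,
                    affects_pricing=True,
                    scale_users=5_000_000))
```

The design point is simply that compliance obligations scale with assessed risk, so that a large platform's personalized-pricing algorithm would face filing and audit while an ordinary recommendation widget would not.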

3.3 Regulation of Platform Data Collection

In the context of rapid digital economy development, data as a new production factor has become a strategic resource contested by platform enterprises, with personal information the most fiercely contested core resource. Operators have accordingly used algorithms and platform rules to compel consumer authorization and to collect user data beyond the authorized scope, seriously threatening the security of consumers' personal information. Antitrust regulation can therefore begin at the source: data collection.

Cooperation between antitrust enforcement agencies and industry regulatory departments such as cyberspace administration, telecommunications, and finance is the cornerstone for building a data antitrust defense line from the source. During platform data collection, data characteristics and competitive situations vary across industries. Financial sector data involves sensitive information such as user assets and credit, and without regulation on the scope, methods, and purposes of data collection, data monopoly risks may arise, such as large financial platforms using massive user data advantages to implement unfair pricing or market exclusion in credit and insurance businesses. Through multi-department joint formulation of normative documents, regulatory expertise across industries can be integrated to clarify the legal boundaries for platforms to obtain user data, comprehensively covering antitrust supervision of data collection in different forms, and effectively curbing data-driven monopolistic behavior.

3.4 Regulation of Platform Data Protection

Improving the data classification and grading protection system of the Data Security Law is a key strategy for data antitrust regulation. Different levels and types of data carry unevenly distributed value and risk in market competition. For example, monopolization of core data involving national security and critical infrastructure may cause serious public-security and economic-security problems, while monopolization of sensitive data such as users' personal identities and consumption preferences would infringe consumer rights and distort market competition. Formulating catalogs of important data helps accurately identify the data resources with key impacts on market competition, implement focused protection and antitrust monitoring, and strengthen platform enterprises' responsibilities in data management, access control, and security auditing for high-value, high-risk data, preventing data abuse and monopolistic hoarding. Meanwhile, reasonable classification and grading provides a scientific basis for allocating antitrust enforcement resources, enabling enforcement forces to concentrate on investigating monopolistic conduct in key data areas, improving the efficiency of data antitrust regulation, and maintaining competitive balance in data factor markets.
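The classification-and-grading idea can be illustrated with a minimal sketch. The category names, grades, and dataset names below are hypothetical examples of how grading could steer enforcement priority toward the highest-risk data:

```python
# Illustrative sketch: grading data categories for protection priority,
# in the spirit of the Data Security Law's classification-and-grading
# idea. All categories, grades, and dataset names are hypothetical.

PROTECTION_GRADES = {
    "core":      3,  # national security / critical infrastructure data
    "important": 2,  # sensitive personal identity, consumption preferences
    "general":   1,  # ordinary business data
}

def enforcement_priority(datasets: list[tuple[str, str]]) -> list[str]:
    """Sort dataset names by protection grade, highest risk first."""
    return [name for name, category in
            sorted(datasets,
                   key=lambda d: PROTECTION_GRADES[d[1]],
                   reverse=True)]

holdings = [("ad_click_logs", "general"),
            ("user_credit_records", "important"),
            ("grid_control_data", "core")]
print(enforcement_priority(holdings))
# ['grid_control_data', 'user_credit_records', 'ad_click_logs']
```

Such a grading table, however rough, gives enforcement agencies an explicit, auditable basis for concentrating scrutiny on the data whose monopolization would do the most harm.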

Meanwhile, during data classification and protection, reference can be made to the Data Protection Officer (DPO) system in the EU's General Data Protection Regulation to improve China's Chief Data Officer (CDO) system. Within platform enterprises, the CDO serves as the core role in enterprise data management, responsible for supervising data collection and use processes, ensuring legal data sources and legitimate use, and preventing data from becoming a tool for platform monopolies. Improving the CDO system can promote the integration of internal enterprise data culture and antitrust culture, drive compliant practices in data openness and protection, break data monopoly barriers, enhance market competition vitality, improve the effectiveness and sustainability of data antitrust regulation from the internal governance level, and achieve positive interaction between data utilization and market competition.

3.5 Administrative and Judicial Regulation of Platform Algorithms

Article 11 of the Anti-Monopoly Law explicitly requires "improving the connection mechanism between administrative enforcement and judicial proceedings." China's antitrust law is implemented mainly through administrative enforcement and judicial litigation. In administrative enforcement, antitrust agencies investigate suspected monopolistic conduct under the relevant legal provisions and impose corresponding penalties, i.e., public enforcement of antitrust law. In judicial litigation, to remedy the difficulties private parties face in bringing antitrust suits, such as the burden of producing evidence and high litigation costs, Article 60 of the 2022 amended Anti-Monopoly Law added provisions on procuratorial public interest litigation, authorizing people's procuratorates at or above the municipal level to file antitrust civil public interest suits with the people's courts when operators' monopolistic conduct harms the public interest. Given that new types of monopoly cases in the platform economy, typified by algorithm abuse, often involve interdisciplinary knowledge of the internet, big data technology, economics, and law, cooperation and exchange between antitrust enforcement agencies and procuratorial organs should be strengthened, the scope of cases accepted in antitrust procuratorial public interest litigation clarified, and the clue-transfer system improved. Talent development should also be enhanced: enforcement personnel should deepen their algorithmic knowledge, study the laws, regulations, and typical cases of algorithm antitrust, and distill enforcement experience. Additionally, experts with multidisciplinary backgrounds in law, economics, and computer science can be engaged to provide professional opinions for platform algorithm antitrust supervision.

As the driving force of a new round of scientific and technological revolution and industrial transformation, artificial intelligence plays a key role in driving revolutionary breakthroughs in science and technology, innovative allocation of production factors, and transformation and upgrading of traditional industries, and has become an important engine for developing new quality productive forces and promoting high-quality economic and social development. However, at the same time, cases of artificial intelligence technology represented by algorithms infringing upon consumer rights and disrupting market competition order occur from time to time, manifesting in the platform economy as algorithmic price discrimination, algorithmic self-preferencing, and algorithmic collusion. Therefore, it is necessary to properly handle the relationship between competition regulation and competition promotion, improve the existing problems in current platform algorithm antitrust regulation, and build a platform algorithm antitrust regulation mechanism involving multiple subjects including antitrust enforcement agencies, judicial organs, industry associations, platform enterprises, and consumers to safeguard sound market competition order.

References

[1] Wang Xianlin, Cao Hui. Three Key Issues of Antitrust in the Platform Economy Sector [J]. Exploration and Free Views, 2021(9): 54-65, 178.
[2] Su Yu. On the Value Objectives and Mechanism Design of Algorithm Regulation [J]. Journal of Dialectics of Nature, 2019, 41(10): 8-15.
[3] Zhang Shuling. Cracking the Black Box: Algorithm Power Regulation and Transparency Mechanism in the Age of Intelligent Media [J]. China Publishing Journal, 2018(7): 49-53.
[4] Meng Yanbei, Zhao Zeyu. Reasonable Regulation of Super Platform Self-Preferencing Behavior Under Antitrust Law [J]. Journal of Central South University (Social Sciences Edition), 2022, 28(1): 70-82.
[5] Yu Zuo, Li Siming. Anti-Competitive Effects of Digital Platforms' Cross-Market Self-Preferencing—Taking Search Engine Platforms as an Example [J]. Economist, 2024(2): 111-119.
[6] Ding Xiaodong. On the Legal Regulation of Algorithms [J]. Social Sciences in China, 2020(12): 138-159, 203.
[7] Yan Mi. China Consumers Association Reviews First Half-Year Consumer Rights Protection Hot Spots [N]. International Business Daily, 2024-08-07(005).
[8] Chen Canqi. Legal Regulation of Platform Abuse of Algorithm Power [J]. Journal of Hunan University of Science and Technology (Social Sciences Edition), 2023, 26(6): 154-161.
[9] Cheng Zengwen. Autonomous Algorithm Abuse and Antitrust Regulation in the Platform Economy Sector [J]. South China Finance, 2021(10): 87-96.
[10] Liang Zheng, Zeng Xiong. Policy Responses to "Big Data Price Discrimination": Behavior Qualification, Regulatory Dilemmas, and Governance Solutions [J]. Science Technology and Law (Chinese-English), 2021(2): 8-14.
[11] Zhu Chengcheng. Analysis of the Illegality of Big Data Price Discrimination and Exploration of Legal Regulation—Based on the Perspective of Consumer Rights Protection [J]. South China Finance, 2020(4): 92-99.
[12] Hunan Province Changsha City Intermediate People's Court (2019) Xiang 01 Min Zhong 9501 Civil Judgment.
[13] Shanghai No. 1 Intermediate People's Court (2020) Hu 0105 Min Chu 9010 Civil Judgment.
[14] Bi Jinping, Zhang Yu. Monopoly Analysis and Regulatory Path of Platform Algorithmic Self-Preferencing [J]. Journal of Ningxia University (Humanities and Social Sciences Edition), 2023, 45(5): 102-109.
[15] Chen Bing. Risks of Algorithmic Collusion in the Digital Economy and Antitrust Regulatory Approaches [J]. Legal Forum, 2024, 39(4): 80-90.
[16] Yin Jiguo. Antitrust Regulation of Algorithmic Monopoly Behavior in the Age of Artificial Intelligence [J]. Journal of Comparative Law, 2022(5): 185-200.
[17] CPC Central Committee and State Council Opinions on Building a More Complete System and Mechanism for Market-oriented Allocation of Production Factors [EB/OL]. (2020-03-30) [2025-05-21]. http://www.gov.cn/gongbao/content/2020/content_5503537.htm.
[18] Wang Xianlin. Toward Sustainable and Normalized Antitrust Enforcement in China [J]. China Price Supervision and Anti-Monopoly, 2022(3): 22-25.
[19] State Council Anti-Monopoly Committee Anti-Monopoly Guidelines for the Platform Economy Sector [EB/OL]. (2021-02-07) [2025-05-21]. http://www.gov.cn/xinwen/2021-02/07/content_5585758.htm.
[20] Ye Ming, Zhang Jie. Challenges and Responses of China's Antitrust Enforcement by Big Data Competition Behavior [J]. Journal of Central South University (Social Sciences Edition), 2021, 27(3): 26-39.
[21] Wang Huaiyong, Deng Ruohan. The Realization Dilemma and Legal Response of Financial Fairness in the Algorithm Era [J]. Journal of Central South University (Social Sciences Edition), 2021, 27(3): 1-14.
[22] Liu Dongying, Wang Yanan. Platform Economy Antitrust: Analysis of the Relative Independence of Market Structure and Market Behavior [J]. Journal of Hebei University of Economics and Business, 2023, 44(2): 45-52.
[23] Wang Anshu, Kong Lingxue, Li Yifei. Legal Risk Review and Response for Digital Finance Algorithm Black Box [J]. Journal of Financial Development Research, 2024(11).
[24] Yu Ling, Lan Jianghua. Antitrust Regulation of Algorithmic Personalized Pricing: Based on the Perspective of Consumer Segmentation [J]. Social Sciences, 2021(1).
[25] Wang Xiaoye. Some Thoughts on Antitrust Supervision of the Digital Economy [J]. Science of Law (Journal of Northwest University of Political Science and Law), 2021, 39(4).
