In the age of artificial intelligence, Wikipedia faces a disturbing paradox: its content is consulted more than ever, yet its sustainability is at risk. AI systems consume its data on a massive scale, while human users interact with the platform less and less. This imbalance jeopardizes a project that, with no advertising and no public funding, depends exclusively on donations to survive.
“Our content is free, our infrastructure is not”
By: Gabriel E. Levy B.
Since its creation in 2001, Wikipedia has established itself as the most consulted free encyclopedia in the world. Its collaborative and non-profit model, managed by the Wikimedia Foundation, has been funded mainly by small donations from users and contributions from some technology companies.
However, the rise of artificial intelligence has upset this balance. Since January 2024, Wikipedia’s bandwidth consumption has increased by 50%, driven mainly by automated bots scraping data to train AI models. These bots pull multimedia content and rarely visited pages, bypassing caching systems and driving up data-center costs.
Rebecca MacKinnon, vice president of global advocacy at the Wikimedia Foundation, summed up the concern: “Our content is free, our infrastructure is not.” The growing pressure AI places on Wikipedia’s technical resources threatens its viability as a free knowledge platform.
The Traffic Paradox: More Queries, Fewer Users
Despite the increase in data consumption, Wikipedia has seen a decline in human visitors. According to data from Similarweb, in March 2025 Wikipedia.org received 500 million fewer visits than ChatGPT.com. Over the last three years, it has lost more than 1.1 billion monthly visits, a drop of 22.7%.
This decline is not an isolated data point: in January 2022, Wikipedia drew approximately 4.9 billion monthly visits, but by March 2025 that figure had fallen below 3.8 billion. In parallel, AI platforms grew at a dizzying pace.
ChatGPT alone went from fewer than 600 million monthly visits to more than 4.3 billion over the same period, an increase of more than 600%.
The explanation is that users now reach this information through AI interfaces, such as ChatGPT or virtual assistants, which query Wikipedia in the background without the user ever interacting directly with the platform.
In addition, according to a DataReportal report, 62% of online users worldwide prefer to receive instant answers from conversational search engines or artificial intelligence apps rather than visit traditional websites.
As a result, although Wikipedia’s content is more widely used than ever, direct interaction with the site is falling sharply, weakening the two mechanisms the project depends on: donations and community participation.
Who pays for free knowledge?
Wikipedia’s model, based on collaboration and donations, faces an unprecedented challenge. Big tech companies, such as OpenAI and Google, make massive use of its content to train AI models, yet their financial contributions remain limited.
In 2019, Google donated $3.1 million to the Wikimedia Foundation and provided machine learning tools to facilitate content creation. However, these contributions do not offset the growing cost of AI-generated traffic. The Wikimedia Foundation has launched Wikimedia Enterprise, a commercial service for companies that require bulk access to its data, but so far only Google has agreed to pay for it.
The lack of fair compensation by companies that profit from Wikipedia’s content raises questions about the sustainability of the free knowledge model in the age of artificial intelligence.
Case Studies: The Impact of AI on Wikipedia
The surge in AI-generated traffic not only raises infrastructure costs but also affects the quality of the content itself.
A recent study revealed that more than 5% of new English-language articles contain snippets generated by artificial intelligence, which are often biased, contain factual errors, or are written for covert promotional purposes.
In many cases, these automated texts end up displacing more rigorous human contributions, creating an information-saturation effect that complicates editorial verification within the community itself. Volunteer editors warn that, given the growing volume of suspicious entries, manual review has become almost unsustainable, especially in less closely monitored languages such as Swedish, Cebuano or Waray-Waray, where automated bots have been proliferating for years.
In addition, the Wikimedia Foundation has observed that AI bots consume 65% of the most expensive technical resources, including bandwidth, redundant storage, and request processing through complex APIs, despite accounting for only 35% of total page views.
This disproportion not only increases operational costs but also degrades overall performance for human users, compounding the pressure on an infrastructure funded almost entirely by individual donations.
The situation has reached the point where, during AI-generated traffic spikes, temporary outages have been reported in sister projects such as Wikimedia Commons and Wikidata, affecting researchers and editors who depend on these platforms to create new free content.
In response, the Wikimedia Foundation has released a structured dataset on Kaggle, under open reuse licenses, with the goal of making it easier to train AI models without overloading its servers.
This data package attempts to funnel some of the automated traffic toward more sustainable sources, reducing the need for constant real-time crawling.
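For teams building training pipelines, the practical idea is to pull content from that published snapshot instead of crawling live pages. The sketch below illustrates this under stated assumptions: it uses the kagglehub client, a hypothetical dataset slug (wikimedia-foundation/wikipedia-structured-contents), and illustrative field names that may not match Wikimedia’s actual release.

# Minimal sketch: read Wikipedia content from the Kaggle snapshot instead of
# crawling live pages. The dataset slug and field names are assumptions and
# may differ from Wikimedia's actual release.
import glob
import json
import os

import kagglehub  # pip install kagglehub

# Download (and locally cache) the structured-contents snapshot.
dataset_dir = kagglehub.dataset_download(
    "wikimedia-foundation/wikipedia-structured-contents"  # hypothetical slug
)

# Assume the snapshot ships as JSON Lines files, one record per article.
# Stream them from disk rather than requesting pages from wikipedia.org.
for path in glob.glob(os.path.join(dataset_dir, "**", "*.jsonl"), recursive=True):
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            article = json.loads(line)
            # Field names below ("name", "abstract") are illustrative only.
            title = article.get("name", "")
            summary = article.get("abstract", "")
            # ... feed (title, summary) into a training corpus builder ...

Because the snapshot is downloaded once and cached locally, repeated training runs add no further load to Wikipedia’s servers, which is precisely the traffic the Foundation is trying to divert.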
However, this measure, while technically useful, is only a temporary patch given the scale of the problem.
It does not address the critical point: the absence of a global policy of financial compensation from the technology companies that rely on Wikipedia’s content, at massive scale, as a primary source.
According to the Online Queso platform, only a fraction of the companies that benefit directly or indirectly from the free knowledge hosted on Wikipedia have contributed financially to the maintenance of the project.
The “everything for free” logic that has governed the relationship between AI and the knowledge commons may be approaching its breaking point.
In conclusion, Wikipedia is at a critical crossroads.
While AI systems consult its content more than ever, its donation-based funding model faces serious threats.
To ensure the sustainability of free knowledge, it is imperative that big tech companies take responsibility commensurate with their use of this invaluable source of information.