Europe wants to play in the big league of artificial intelligence, but its investment is minimal compared to the technological titans of the United States and China.
OpenEuroLLM is the European Union’s major project to develop state-of-the-art language models with a focus on transparency, privacy and regulation.
Europe’s commitment to safer AI
By: Gabriel E. Levy B.
Since the 2010s, artificial intelligence has been dominated by U.S. tech companies and, to a lesser extent, Chinese conglomerates.
OpenAI, Microsoft, Google, and Meta have developed increasingly sophisticated models, while China, led by Alibaba and Baidu, has invested billions in generative artificial intelligence.
In contrast, Europe has taken a more cautious approach, prioritizing ethics and regulation over accelerated innovation.
The European Union’s regulation on artificial intelligence, known as the AI Act, is the first of its kind globally and seeks to ensure that the models developed are safe, transparent, and respectful of fundamental rights.
However, large companies in the sector have criticized this regulation, arguing that it can slow down development and leave Europe behind in the technological race.
In this context emerges OpenEuroLLM, a coordinated effort by 20 European institutions to create multilingual, accessible artificial intelligence models for both the public and private sectors.
Under the direction of Jan Hajič and Peter Sarlin, the project aims to provide a European alternative that does not rely on large American or Chinese corporations.
Regulation and ethics: a double-edged sword
If there is one thing that sets OpenEuroLLM apart from its competitors, it is its focus on transparency and privacy.
While OpenAI, Google and Meta develop closed-source, privately owned AI models, the European project is committed to open, accessible development that complies with EU regulations. But this ethical approach is not without its challenges.
According to Nick Bostrom, philosopher and AI expert, regulation is necessary to avoid catastrophic risks, but it can also slow down technological progress. Europe, by prioritizing safety and ethics, faces a dilemma:
How do you compete in a market where your rivals don’t have the same restrictions?
In contrast, the U.S. has adopted a more flexible strategy, allowing companies like OpenAI and Anthropic to move quickly without significant regulatory hurdles.
China, for its part, has pushed AI as a strategic pillar of its economy, with massive investment in language models and specialized chips.
Gary Marcus, an AI specialist and former professor at New York University, has pointed out that artificial intelligence needs both innovation and oversight, but warns that too much regulation can leave European startups and projects dependent on foreign technologies.
The Budget Problem: An Unequal Battle
The main obstacle facing OpenEuroLLM is not only regulation but also budget.
The €52 million allocated to the project is a negligible fraction of what its competitors spend.
OpenAI, for example, spends approximately $3 billion a year developing AI models and more than $4 billion maintaining its servers and products like ChatGPT.
Adding salaries and other expenses, OpenAI allocates more than $1 billion per month to its operations. In other words, OpenEuroLLM’s entire budget is roughly what OpenAI spends in two days.
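Using the figures cited above, a quick back-of-the-envelope check supports the two-day comparison; this is only a rough sketch, and the euro-to-dollar rate and 30-day month used here are assumptions:

```python
# Rough check of the budget comparison, using the figures cited in this article.
# The EUR/USD rate (~1.08) and the 30-day month are assumptions for illustration.
OPENEUROLLM_BUDGET_EUR = 52_000_000       # project allocation in euros
EUR_TO_USD = 1.08                          # assumed exchange rate
OPENAI_MONTHLY_SPEND_USD = 1_000_000_000   # "more than $1 billion per month"

budget_usd = OPENEUROLLM_BUDGET_EUR * EUR_TO_USD
openai_daily_spend = OPENAI_MONTHLY_SPEND_USD / 30

print(f"OpenEuroLLM budget: ~${budget_usd / 1e6:.0f} million")
print(f"OpenAI daily spend: ~${openai_daily_spend / 1e6:.0f} million")
print(f"Budget covers ~{budget_usd / openai_daily_spend:.1f} days of OpenAI's spending")
```

Even with generous rounding, the entire European allocation covers less than two days of OpenAI’s reported outlay.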
Even more recent projects, such as DeepSeek, have managed to develop AI models with fewer resources, but with highly optimized strategies.
DeepSeek, a Chinese initiative backed by the hedge fund High-Flyer, has shown that it is possible to create artificial intelligence with a more modest investment, but even so, its funding is still several times higher than that of the European project.
Another example is Anthropic, which has received more than $4 billion in investment from Amazon and Google.
Faced with these numbers, OpenEuroLLM looks like a symbolic effort rather than a real competitor in the global artificial intelligence market.
Can Europe be relevant in the AI market?
The challenge for OpenEuroLLM is immense. Not only must it develop advanced AI models on a limited budget, but it must also comply with strict regulations and convince European companies that its approach is viable.
Peter Sarlin, co-director of the project, has clarified that the goal is not to create a chatbot similar to ChatGPT, but to establish a digital infrastructure that allows European companies to innovate with their own AI models. In other words, OpenEuroLLM seeks to be the basis on which European companies can build their own solutions without relying on American or Chinese technology.
However, the reality is that the lack of investment in hardware and data centers could limit the scope of the project. OpenAI and Google have access to specialized supercomputers and high-performance chips such as Google’s TPUs or NVIDIA’s GPUs. Europe, on the other hand, continues to rely heavily on external providers for access to this type of technology.
In conclusion
OpenEuroLLM is Europe’s largest effort to develop its own artificial intelligence, but its budget and regulatory framework could become difficult barriers to overcome. While OpenAI, Google and Alibaba invest billions in the development of increasingly advanced models, the European project faces a reality in which its global competitiveness seems elusive.
The EU is committed to transparency, privacy and regulation, but these virtues can become obstacles in a world where speed and investment define success. Without a change in strategy and a significant increase in funding, OpenEuroLLM risks remaining a token initiative rather than a real alternative to the tech giants.