Bots Like Us

In a world where artificial intelligence is presumed to be impartial, an experimental simulation laid bare an inconvenient truth: bots, like us, fall into the trap of polarization, tribalism, and toxicity.

An experiment led by researchers at the University of Amsterdam showed that the misery of social media lies not only in its design, but also in the nature of its users, human or otherwise.

The study showing that misery is not only human

By: Gabriel E. Levy B.

For decades, social platforms have defended themselves by attributing digital polarization to algorithms designed to maximize attention.

According to this narrative, if we eliminated personalized recommendations, ads, and optimized feeds, conversations would become healthier, more reasonable, more human.

However, one study challenges this assumption head-on, at its root.

Dr. Petter Törnberg, a computational sociologist and professor at the University of Amsterdam who specializes in analyzing complex social dynamics with computational modeling and network theory, together with Maik Larooij, a research engineer at the same institution with experience in artificial intelligence, multi-agent simulations, and the development of digital platforms for experimental studies, published in August 2025 the results of a radically simple simulation: a minimal social network with no recommendation algorithm, no sponsored content, and no addictive design.

Only 500 bots, all of them powered by GPT-4o mini, each with an artificial personality based on real data from the American Electorate Study.

The American Electorate Study, formally known as the American National Election Studies (ANES), is one of the most comprehensive and respected sources of data on the political behavior of citizens in the United States.

Since 1948, this study has systematically collected detailed information on the political attitudes, party affiliations, ideological beliefs, educational level, demographic composition and electoral participation of thousands of U.S. citizens.

These data not only allow us to trace historical trends, but also to build statistically representative profiles of different segments of the electorate.

In the 500-bot experiment, the researchers used this dataset to program each artificial agent with a demographic and political identity consistent with real patterns of the American electorate.

The rules of the experiment and its results

The bots, fed with data from the US Electorate Study, could post, follow each other and repost. Nothing else.
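To make the setup concrete, here is a minimal sketch, in Python, of what such a bot-only network could look like. It is an illustration under stated assumptions, not the authors' code: the `Agent` class, the `generate_post` stub, and the ideology-matching rule are invented for the example, whereas in the actual experiment every post and every decision to follow or repost was produced by GPT-4o mini conditioned on each bot's ANES-derived persona.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A single simulated user: a fixed persona plus the accounts it follows."""
    name: str
    persona: dict                      # e.g. {"ideology": "conservative", ...}
    following: set = field(default_factory=set)

def generate_post(agent: Agent) -> str:
    """Stub for the language model. In the actual study each post was written
    by GPT-4o mini prompted with the agent's ANES-derived persona."""
    return f"[{agent.persona['ideology']}] take from {agent.name}"

def simulate(agents, rounds=3):
    """Run the three allowed actions (post, follow, repost) for a few rounds."""
    timeline = []                      # shared list of (author, text) pairs
    for _ in range(rounds):
        for agent in agents:
            timeline.append((agent.name, generate_post(agent)))
            # Naive stand-in for the model's own judgement: engage only with
            # posts whose text matches the agent's declared ideology.
            for author, text in random.sample(timeline, min(5, len(timeline))):
                if author != agent.name and agent.persona["ideology"] in text:
                    agent.following.add(author)                         # follow
                    timeline.append((agent.name, f"RT {author}: {text}"))  # repost
    return timeline

agents = [
    Agent("a1", {"ideology": "conservative"}),
    Agent("a2", {"ideology": "progressive"}),
    Agent("a3", {"ideology": "conservative"}),
]
posts = simulate(agents)
print(len(posts), "messages;", {a.name: sorted(a.following) for a in agents})
```

Even in a toy version like this, a simple preference for like-minded content is enough to make the follow graph split along ideological lines, which is the structural precondition for the dynamics described below.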

The experiment soon took a disturbing turn. Within hours, the bots began to form clusters of like-minded friends.

They grouped into closed communities, favored extreme opinions, and ignored those outside their circle except to insult them or reinforce their own biases.

Digital tribalism emerged spontaneously, without the need for algorithmic incentives.

It was a dark mirror: it wasn’t the algorithms that promoted the conflict; it was our data footprints that triggered the toxic behavior.

“The network behaves like a symbolic battlefield”

What surprised the researchers was not only how quickly the bots degenerated into tribalism, but how faithfully they replicated the most dysfunctional dynamics of Twitter or Facebook.

Echo chambers, spaces where people hear only their own ideas amplified back at them, appeared naturally.

Bots with similar ideological positions sought each other out, ignored those who disagreed and gave greater visibility to the most extreme messages within their group.

Törnberg did not frame this with resignation, but as a warning.

In a recent interview, he explained, “This experiment reminds us that online social behaviors are not simply a consequence of technological design.

There’s something deeper, something that we’re constantly projecting onto our machines.”

This phenomenon had already been analyzed by the philosopher Shoshana Zuboff in The Age of Surveillance Capitalism, where she argued that the problem was not only the economic model of the platforms, but our cultural disposition to exhibitionism, confrontation and the need for immediate validation.

AI does nothing more than reproduce these impulses, trained with our own data, opinions, biases and contradictions.

Thus, the digital scenario looks less like an enlightened public square and more like a symbolic battlefield, where belonging matters more than truth, and conflict becomes a social value.

The worrying thing is that this logic also translates to the artificial systems we design: it is not enough to remove the algorithm if the base model is fed with the chaos of our collective expression.

“Toxicity does not need algorithms, only the need for identity”

One of the most disturbing conclusions of the experiment is that the bots not only replicated toxicity, but did so with evolutionary patterns of their own. Moderate voices were marginalized.

The most radical positions found traction and visibility within their communities. The dynamic was clear: the bots reproduced the loudest behaviors, not because they were incentivized to do so, but because they responded to a logic of belonging, of artificial identity.

This is reminiscent of Cass Sunstein’s work on the law of group polarization, the tendency of groups to adopt more extreme positions than their individual members would on their own. In the Amsterdam experiment, the bots had no emotions, no personal history, and no hidden goals. Still, they showed the same impulse toward group radicalization. The AI simply followed the pattern it learned from us.

As bots reconfigured themselves into digital tribes, a structure emerged that eerily resembled today’s debates on X (formerly Twitter) or Reddit: a few dominant actors dictating the discourse, an army of repeaters, and a periphery silenced for its refusal to embrace the extremes.

The chilling detail is that these bots were designed to reflect the American electorate, not the extremes.

They were average representations. And yet, without outside intervention, they ended up trapped in the logic of conflict.

Digital tribalism does not need artificial incentives. It is enough to faithfully represent contemporary human identity.

“AI is not neutral, it reflects its creator”

One of the most discussed findings of the experiment was the role played by the design of artificial personalities.

Each bot was programmed with a specific demographic identity: age, gender, education level, political leaning, and social beliefs.

These features were not improvised, but taken from real statistical databases, such as the ANES (American National Election Studies).
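As a hedged illustration of what this persona conditioning might look like (the field names and the prompt wording below are assumptions made for the example, not the prompts used in the study), an ANES-style respondent record can be turned into a system prompt that fixes the bot’s voice:

```python
# Illustrative only: field names and wording are assumptions, not the study's prompts.
def persona_prompt(profile: dict) -> str:
    """Turn an ANES-style respondent record into a system prompt for the model."""
    return (
        f"You are a {profile['age']}-year-old {profile['gender']} from "
        f"{profile['region']} with {profile['education']} education. "
        f"You lean {profile['ideology']} and care most about {profile['top_issue']}. "
        "Write short social media posts in this voice."
    )

example = {
    "age": 46, "gender": "woman", "region": "the rural Midwest",
    "education": "a high school", "ideology": "conservative",
    "top_issue": "the economy",
}
print(persona_prompt(example))
```

The point of such conditioning is that the model never argues from a blank slate: every post it writes is already anchored in a statistically plausible identity.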

What emerged was a brutally honest portrait of how certain combinations of identity predispose to certain behaviors.

Bots with conservative affinities tended to be wary of progressive content, and vice versa.

Posts seeking consensus were ignored.

Polarization was not imposed: it was generated organically from artificial but deeply human identities.

A particularly telling case was that of the “progressive young university students” bots, which developed a kind of ideological purism, censoring even those like-minded bots that did not share their standards of inclusive language.

In contrast, the “rural middle-class conservatives” quickly formed a closed network, hostile to everything outside it. Both extremes ignored each other, except to attack.

This reminds us of the words of sociologist Evgeny Morozov, who warned in To Save Everything, Click Here that “technology does not correct human behavior; it just amplifies it.”

AI, in its most sophisticated form, cannot escape this logic: if it is trained with our words, it will reproduce our failures.

In fact, Törnberg and Larooij’s experiment shows that it is not enough to create “neutral” or “objective” systems if the data that nourish them are already contaminated by our most visceral passions.

Bias isn’t just a technical issue; it is a reflection of the society that feeds it.

In conclusion, the experiment of the 500 bots without algorithms makes it clear that digital toxicity is not only a consequence of the design of the platforms. It is a brutal reflection of our social and cultural dynamics. Even without artificial incentives, we reproduce tribalism, extremism, and conflict. And when our artificial creations imitate us, they inherit not our rationality, but our deepest fractures. Technology, as a mirror, does not lie. It only gives back what we are.

References:

  • Törnberg, P., & Larooij, M. (2025). Emergent Polarization in a Bot-Only Social Network. Preprint, University of Amsterdam.
  • Zuboff, S. (2019). The Age of Surveillance Capitalism. PublicAffairs.
  • Sunstein, C. (2009). Going to Extremes: How Like Minds Unite and Divide. Oxford University Press.
  • Morozov, E. (2013). To Save Everything, Click Here: The Folly of Technological Solutionism. PublicAffairs.