The challenge of self-managed cities

The idea of a self-administered city sounds like science fiction, but it is an increasingly real possibility.

Distrust of governments has led citizens and large technology conglomerates to look for alternatives.

In the age of smart cities, algorithms and artificial intelligence promise to manage traffic, safety, and public services with a precision that humans have not achieved. However, this automation entails risks: what happens when humans are left out of decision-making? Can we trust that algorithms will work for the benefit of everyone, and not just of those who program them?

The automation of power: an old ambition with new technology

By: Gabriel E. Levy B.

The idea that a city can function without human intervention is not new. From the urban utopias of the 19th century to the most recent smart city projects, automation has always been a promise. Philosophers such as Thomas More imagined societies where perfect order was possible, and twentieth-century technologists such as Norbert Wiener, the father of cybernetics, dreamed of automated control systems that would optimize human life.

But the difference today is that technology makes it possible.

In countries such as China and Singapore, smart cities have already taken steps towards self-administration with sensor networks that regulate everything from traffic to energy consumption in real time.

Companies like Google and Amazon have developed urban models where data replaces traditional policy decisions.

Toronto offers a cautionary example: Google’s Sidewalk Labs project aimed to create a smart neighborhood managed by algorithms, but it collapsed amid concerns about privacy and a lack of democratic oversight.

While automation promises efficiency, it also raises a central question: who controls the city when its government becomes programming code?

When algorithms decide for us

Smart cities are moving forward with a basic premise: data is the key to improving quality of life. Artificial intelligence algorithms can analyze information in real time and make decisions faster and with less margin for error than humans.

This translates into smooth traffic, efficient energy systems, and immediate responses to emergencies.

But there’s a fundamental problem: algorithms aren’t neutral. They are designed by companies and governments with interests of their own. Shoshana Zuboff, author of The Age of Surveillance Capitalism, warns that in automated cities, those who control the data not only regulate traffic or security, but also gain sweeping access to citizens’ private lives.

In a city self-managed by artificial intelligence, every movement is recorded, analyzed, and used to predict behaviors.

Another risk is algorithmic bias. Cathy O’Neil, in her book Weapons of Math Destruction, explains how algorithms can reproduce and amplify social inequalities. In an automated surveillance system, for example, neighborhoods with more crime reports may receive greater police attention, even if those reports are influenced by racial or class bias.

Thus, a self-administered city could consolidate inequalities instead of eliminating them.
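The feedback loop O’Neil describes can be made concrete with a toy simulation. In this hypothetical sketch (the neighborhoods, numbers, and allocation rule are all invented for illustration), two districts have the same underlying crime rate, but one starts with more historical reports; because patrols are allocated in proportion to past reports, and more patrols produce more observed incidents, the initial bias compounds rather than corrects itself:

```python
import random

random.seed(0)

# Hypothetical scenario: two neighborhoods with the SAME true crime rate,
# but "B" starts with more historical reports (a reporting bias).
true_rate = {"A": 0.10, "B": 0.10}
reports = {"A": 10, "B": 30}   # biased historical record
patrols = {"A": 0, "B": 0}

for year in range(10):
    total_reports = reports["A"] + reports["B"]
    for hood in ("A", "B"):
        # Allocate 100 patrol units proportionally to past reports.
        share = reports[hood] / total_reports
        patrols[hood] = round(100 * share)
        # More patrols -> more incidents observed -> more reports filed,
        # even though the underlying crime rate is identical in both places.
        observed = sum(random.random() < true_rate[hood]
                       for _ in range(patrols[hood]))
        reports[hood] += observed

# After a decade, "B" still absorbs most patrols purely because of
# where the historical record started, not because it is less safe.
print(patrols)
```

The point of the sketch is not the specific numbers but the structure: the system’s outputs become its own inputs, so an automated city that trusts this data uncritically would ratify the original bias as if it were evidence.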

In addition, there is a moral dilemma: if an algorithm makes a wrong decision, who is responsible? In 2018, an Uber autonomous vehicle struck a pedestrian in Arizona because the system failed to correctly identify her in time to brake.

If in the future a fully automated city makes similar mistakes, who is held accountable?

Cases where automation replaces governments

Some cities have taken concrete steps towards technological self-administration.

Songdo, in South Korea, is an example of a smart city where traffic, energy consumption, and garbage collection are managed by artificial intelligence. However, despite its futuristic design, citizens have not embraced the city as expected: the lack of human interaction and algorithmic hyper-regulation have turned it into a cold and impersonal space.

Another case is Shenzhen, in China, where facial recognition and surveillance systems monitor and shape citizen behavior.

Cameras with artificial intelligence detect violations and issue automatic fines, while a social credit system decides who can access public benefits. Although efficient, this model has been criticized for eroding privacy and individual rights.

In Europe, Amsterdam has implemented algorithms to manage public services more equitably. But, unlike the Asian models, it has included democratic control mechanisms that allow citizens to intervene in technological decisions. The case shows that automation can coexist with citizen participation, although maintaining that balance remains a challenge.

The most radical case is Neom, the futuristic city that Saudi Arabia is building in the desert. Designed to be managed by artificial intelligence, it is touted as a model of efficiency and sustainability. However, its planning ignores social complexity: a city is not only infrastructure, but also culture, history and human relations.

In conclusion

Self-managed cities offer innovative solutions, but they also pose ethical and political risks. Automation can make services more efficient, but if human decisions are removed from the process, cities risk losing their sense of community and social responsibility. Algorithms can manage a city, but the question remains: who controls them, and under what values do they operate? Technology can improve urban life, but without democratic oversight, it can become a tool of exclusion and mass surveillance.