How Artificial Intelligence Challenges Democracy, Concentrates Power in Opaque Algorithms, and Forces Us to Rethink Regulation, Transparency, and Public Debate

Modern democracies are built on a simple yet powerful pillar: individual and collective decisions alike answer to a higher authority, the "sovereign": the body of citizens.

At a time when much of the world confronts authoritarianism, artificial intelligence raises the risk of a return to historical moments when no thought outside the official culture was permitted.

It is striking that a technology that promises the future can also introduce something deeply unsettling. We will not see boots marching in the streets or uniforms displaying weapons, but we may become subject to invisible forces operating through influence, repetition, and opacity.

AI is not, by definition, an enemy of democracy. The problem arises when its development and deployment move faster than our ability to understand, regulate, and publicly debate it.

At a recent event on Artificial Intelligence and the Financial Industry, Colombo, N5's CEO, reflected on a troubling issue while responding to a question about automation and unemployment:

“As AI replaces jobs, many people will lose their source of income.
Suppose the system addresses this with universal basic income — the most commonly suggested solution. That would not replace the other dimensions of work. Work is not only about financial compensation. It is difficult to be fulfilled without producing, creating, and shaping one’s reality.

And this assumes governments actually WANT to solve the problem of human trivialization. History shows that, at times, the incentives of authorities are loosely connected to those of the governed.

In Greece and Rome, citizens gained many of their rights when the State needed them — for war, production, or expansion.

In today’s world, where wars are won with drones and labor can be performed by robots, it is difficult not to foresee a drift away from democratic values. Why seek votes when military or economic power depends more on machine obedience than on human will? That is why strengthening institutions that keep humans at the center is indispensable.”

Colombo also pointed to an antidote:

“In contemporary terms, no democratic country has the margin to allow AI to replace public transportation drivers, for example. Why? Because no political party wins by a margin larger than the number of citizens who would lose their jobs. In that case, democracy acts as a regulator. That is precisely why it is more essential than ever.”

Concentrated Power, Invisible Decisions

Hannah Arendt warned that power becomes dangerous when it ceases to be visible and debatable. Something similar happens today with many AI systems: they concentrate power in a few actors — large technology companies or states — and exercise it through opaque algorithms that are difficult even for experts to audit.

When an important decision is justified with “the algorithm decided,” political responsibility dissolves. Who is accountable if the system discriminates, fails, or manipulates?

Democracy depends on identifiable responsibility. AI's black boxes, for now, keep their means, their ends, and those responsible hidden from view.

Public Space Under Pressure

One of the most concrete risks appears at the heart of democratic life: public debate.

AI enables highly segmented political messaging tailored to individual fears, desires, and biases. Two citizens may receive completely different — even contradictory — messages from the same candidate without knowing it.

Yuval Harari has pointed out that, for the first time, technology makes it possible to influence millions of people in a personalized and automated way.

When the public sphere is no longer shared, democratic deliberation fragments dangerously.

A Simple Example

Imagine an election in which every citizen is targeted by a powerful AI system generating campaign content.

An undecided voter receives emotionally charged messages designed to provoke fear; another receives optimistic promises; a third is exposed to carefully calibrated misinformation. None sees what the others see. There is no debate — only personalized stimuli. A sophisticated version of “divide and rule.”

The troubling reality: this is already happening.

Speed vs. Deliberation

Democracy is inherently slow. It requires debate, friction, and institutional time.

AI specializes in acceleration. It decides in milliseconds, scales effortlessly, and rewards efficiency over reflection.

The risk lies not only in which decisions become automated, but in the speed at which they unfold before our eyes.

The Uncomfortable Question

As researcher Timnit Gebru reminds us, the AI debate is not merely technical — it is profoundly political: who designs these systems, with what data, and for whose benefit?

The question is no longer whether AI will be used, but under which democratic rules it will operate. Without transparency, public oversight, and accountability, AI can amplify inequalities and erode trust in institutions.

Democracy does not die when machines learn to think. It dies when citizens stop understanding, questioning, and deciding.

Artificial intelligence could strengthen our societies — but only if we accept that its governance is not a matter for isolated experts, but an urgent public conversation.
