Between inflated expectations and real adoption: how to understand the so-called artificial intelligence bubble
We call a bubble anything that promises perfection and then vanishes as quickly as it appears, leaving nothing behind. In today’s debate, many are asking whether artificial intelligence (AI) is following that same path.
In the short term, the so-called “AI bubble” is already showing signs of disappointment. Some companies have invested heavily in developing their own solutions while others have opted for off-the-shelf technologies, yet the outcome is much the same: no one seems to have achieved what self-proclaimed visionaries promised when they preached immediate, transformative impact.
Yet what we may be witnessing is not the failure of technology, but the symptom of something else — something deeply human. A kind of emotional condition that, paradoxically, specialized AI itself is beginning to address: anxiety.
More than a classic technology bubble, the debate around artificial intelligence exposes a growing tension between the real pace of innovation and our urgency to see immediate results.
The AI bubble: technological disappointment or human impatience?
The dominant narrative around artificial intelligence was built on the idea of infinite acceleration: more efficiency, more productivity, more automation, more results. When that impact does not arrive as quickly as expected, enthusiasm turns into suspicion and expectations give way to frustration.
The question, then, is not only whether there is an AI bubble, but whether we are demanding miracles on calendar timelines.
Artificial intelligence and human evolution: a change without precedent
From a historical perspective, this moment demands pause and reflection. We are facing a milestone in human evolution with no clear precedent.
Humans have never been the strongest, the fastest, or the most versatile species. But until now, they have been the hardest to surpass intellectually. Today, for the first time, a technology has emerged that can alter that equation.
Artificial intelligence promises something no previous tool ever could: the multiplication of human intelligence. Not merely through access to information, but through the ability to operate using processes similar to those we employ to reason, classify, learn, decide, and solve problems.
AI does not just have “all the books.” It appears to be approaching something far more unsettling: all possible mental models, combined with the accumulated memory of humanity.
As Geoffrey Hinton, the cognitive psychologist and computer scientist regarded as one of the pioneers of deep learning, has warned, “it may be that these things become smarter than us.” This is not a call for paranoia, but a reminder of the true scale of the transformation underway.
And in the face of such change, a sense of vertigo is only natural.
AI does not change the world in the lab — it changes it in everyday life
No matter how groundbreaking an innovation may be, its impact is not determined in laboratories, research papers, or keynote presentations. History shows that technological revolutions do not take hold when they become possible, but when they are adopted.
A blunt saying captures this reality: nothing truly changes “until it reaches everyday life.”
In this case, “everyday life” is not a demo or an academic paper. It is the time and experience of ordinary people, average professionals, real users. That is where it becomes clear whether AI will become a practical tool or remain an illusion.
So the core question is not only what AI can do, but something far more fundamental:
How quickly can anyone learn to use it?
Conspiracy narratives and technological distrust
Some, driven by conspiratorial thinking, argue that a small group of powerful actors is deliberately shaping AI adoption to keep the broader population uninformed while extracting unlimited benefits.
From this perspective, recurring questions arise:
How fast is real AI access being enabled beyond closed circles?
How transparent are its true objectives?
And how well does the general public understand its limitations?
Even when extreme, these concerns point to a legitimate issue: the widespread lack of clarity about what artificial intelligence truly is — and what it is not.
A real gap
Two undeniable truths coexist. On one hand, AI continues to improve steadily. On the other, most people still do not clearly understand what it can and cannot do.
Moreover, that boundary shifts constantly.
This gap fuels a collective perception that everything was just a brief illusion — a fragile bubble of optimism, as fleeting as soap bubbles blown by a child.
But that perception may not reflect reality. We may not be facing a technological bubble, but rather a period of transition.
The most important questions are not whether AI works, but deeper ones:
Who will have access to its full potential?
For what purposes will that access be granted?
And who will be left out of the adoption process?
It is possible that, for now, we are experiencing not a technology bubble, but an inclusion bubble — the belief that everyone has access to AI, when in fact that is not yet true.
Expanding that inclusion depends on each of us. Engaging actively in this transition, through education and adaptability, becomes almost a collective responsibility if we are to avoid the risks that more cautious voices have long anticipated.