Nassim Taleb analyzes artificial intelligence with skepticism: risks, bubbles, and antifragility as the key to navigating the future.

Nassim Taleb’s critical view on artificial intelligence
Artificial intelligence (AI) has become the great technological promise of our time. Companies, governments, and experts present it as the force that will transform entire industries and even redefine what it means to be human. However, mathematician and essayist Nassim Nicholas Taleb, author of The Black Swan and Antifragile, warns that behind all this enthusiasm lie risks, fragilities, and misleading expectations.
The scalability fallacy in AI
One of his best-known critiques focuses on what he calls the scalability fallacy. The industry celebrates ever-larger models, trained on massive amounts of data, believing that this will inevitably lead to artificial general intelligence (AGI). Taleb remains skeptical: quantitative growth does not automatically yield qualitative breakthroughs. A library with more books doesn’t necessarily understand the world better; likewise, inflating models doesn’t make them sentient—it only extends their reach.
The risk of a new tech bubble
Drawing on his background in finance, Taleb warns that the current AI hype strongly resembles past bubbles. The meteoric rise of companies like Nvidia, or the multibillion-dollar bets of Silicon Valley, could turn out to be a house of cards. The rapid emergence of leaner, cheaper competitors such as China’s DeepSeek shows how quickly the balance can shift. The dot-com bubble looms as a reminder: what looks unstoppable today may collapse overnight.
AI and the future of work: The Lindy Effect
When it comes to employment, Taleb invokes the Lindy Effect: for non-perishable things such as ideas, technologies, and professions, the longer something has already survived, the longer its expected remaining life. Manual professions such as plumbing, nursing, or cooking seem more resilient than many white-collar jobs, which may look prestigious today but are fragile in the face of digital automation. Time, Taleb argues, is a more reliable judge than technological fashion.
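The Lindy Effect is often formalized with a power-law (Pareto) survival curve, under which expected remaining lifetime grows in proportion to age. A minimal simulation sketch of that idea (the tail exponent alpha = 1.5 and the sample size are illustrative assumptions, not figures from the article):

```python
import random

random.seed(42)

# Illustrative assumption: lifetimes follow a Pareto distribution
# with tail exponent alpha and minimum lifetime 1 "year".
alpha = 1.5
lifetimes = [random.paretovariate(alpha) for _ in range(200_000)]

def expected_remaining(t):
    """Average remaining lifetime among items that survived past age t."""
    survivors = [x - t for x in lifetimes if x > t]
    return sum(survivors) / len(survivors)

# Under a power-law survival curve, E[T - t | T > t] = t / (alpha - 1):
# the longer something has already lasted, the longer it is expected to last.
for t in [1, 2, 5, 10]:
    print(t, round(expected_remaining(t), 2))
```

The simulated conditional expectation rises with age, which is the Lindy intuition: a profession that has survived centuries of shocks has, statistically, more life ahead of it than a newly minted one.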
Antifragility: learning from errors
Perhaps Taleb’s most valuable contribution to this debate is his concept of antifragility. It is not enough for systems to be robust and merely withstand shocks; the real value lies in their ability to learn from stress and emerge stronger. AI should be designed to metabolize errors, much like muscles grow stronger after strain. A rigid technology, incapable of absorbing failure, introduces fragility into an already uncertain field.
Black swans in artificial intelligence
Taleb also reminds us that AI can bring about black swan events: unpredictable, high-impact occurrences that defy expectations. History shows that the unknown can produce devastating consequences. For this reason, he insists on applying the precautionary principle: do not push blindly toward irreversible outcomes without first weighing the risks.
Beyond technopessimism
Taleb’s view is not technophobic but critical. He urges us to distrust triumphalist narratives and pay attention to hidden fragilities. AI is neither an oracle nor a monster; it is a mirror reflecting our illusions and weaknesses. The true challenge is not to build machines that know everything, but to create systems that can grow stronger through uncertainty.
Final reflections: AI as a means, not an end
As Julián Colombo, CEO of N5, points out, artificial intelligence is not an end in itself but a means to unlock human potential:
“The real challenge is not in machines that learn, but in organizations that dare to unlearn.”
Through engaging and accessible podcasts, Colombo highlights the value of Taleb’s insights as a compass for understanding both the risks and opportunities of our time.