Painful AI Mistakes


AI is changing the world, but in these early stages it carries the permanent risk that the change will not always be for the good.

Gisela Colombo *

Artificial intelligence has shown immense potential to transform industries, optimize processes, and create new opportunities. However, it has also made serious mistakes. Many AI models (such as deep neural networks) are black boxes: it is not easy to understand why they make certain decisions. This is especially problematic in areas such as medicine, finance, or justice.

Some banks use AI models to evaluate credit applications. There have been cases in which applicants with excellent credit histories were rejected, and neither the bank nor the customer could understand why, because the model's decisions could not be explained clearly.

Another common error arises when collecting data: training data often carries prejudices and social behaviors that, although officially rejected in our contemporary culture, have never been fully eradicated and remain latent in our societies.

For this reason, racial or gender discrimination in hiring systems, facial recognition, or judicial decisions is one of the undesirable products of AI.

At other times, human foresight fails to anticipate danger, AI cannot operate safely, and the consequences are painful. In 2018, an Uber self-driving car fatally struck a woman in Arizona. The AI classified the pedestrian as a “bicycle” and did not apply the brakes; the emergency braking system had been deactivated, and the car did not react in time. The same can happen in health, transport, or defence.

In recent times there has been much talk of “deepfakes”, a form of disinformation produced by this new technology. Generative AIs are perfectly capable of creating remarkably realistic fake images, audio, and video. The danger is that they are not used merely for recreation: they are also a tool for manipulating public opinion, generating fake news, and even committing identity theft. In the coming years, every country will have to craft rigorous legal responses to this threat to the world of mass communication.

An example of this mechanism is the fake video released in 2022 in which Ukrainian President Volodymyr Zelensky appeared to ask his troops to surrender definitively. The material was quickly debunked, but it still generated confusion and demonstrated the power of these technologies to manipulate at scale. Medical misdiagnosis due to poorly trained models is another serious category of failure.

Hallucinations.

Among the mistakes made by AI are those that have been called “hallucinations”: falsehoods that artificial reasoning asserts with confidence. A model can give false quotes or references, and invent historical facts or scientific data.

One emblematic case illustrates the problem: an American lawyer used AI to help him draft a court document. The model generated invented legal references. The judge found that the cited cases did not exist and sanctioned the lawyer for presenting false information.

While this technology greatly facilitates knowledge acquisition and preventive intervention, some authoritarian regimes use it in ways that conflict with ethics and human rights. Such is the case of China, where it has been used for mass surveillance of minorities such as the Uyghurs: facial-recognition cameras, geolocation, and behavioral analysis are combined to control and restrict their movements, without transparent legal process.

Job displacement must also be counted among the possible effects of AI. The classic example occurred in Japan: manga and animation companies began to use AI art generators for tasks that were previously done by human illustrators. Many freelance artists lost commissions because AIs could produce “acceptable” work far faster and cheaper.

Faced with this complex panorama, responsibility, judgment, and a human approach become essential.
At an event that brought together major systems companies, we had the opportunity to consult an authoritative voice, a key figure in the construction of ethical and effective AI. The CEO and founder of N5 put it in these terms: -It is important to offer transparent and explainable AI, so that every decision can be understood and audited. AI does not replace man; it is to him what the mechanical arm was, and still is, to the automotive industry: an instrument to accelerate, optimize, and lower the cost of doing the same task.

To do this, we train models with well-curated data, reducing the risk of bias. Sometimes we even avoid searching for data beyond the own database of each company that hires our solutions. This keeps the data reliable and, in addition, further ensures the security of our customers.

In short, we develop solutions that comply with international ethical and regulatory standards.

Because innovation is not enough. You have to do it right…

In a world where AI can be both a solution and a problem, its leaders show that innovation can also be responsible, transparent, and deeply human.

– The key is not only to use artificial intelligence, but to use intelligence responsibly.
