AI as a global threat: Scientists and industry warn of the possibility of human extinction

AI experts warn of 'risk of extinction' in 22-word statement.

AI as a global threat
Photo: Midjourney / Jan Macarol

A group of renowned researchers and leading figures in the field of artificial intelligence (AI) expressed their concern for the future of humanity in just 22 words. The statement is short but very clear: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." It was signed by Google DeepMind CEO Demis Hassabis, OpenAI CEO Sam Altman, and Geoffrey Hinton and Yoshua Bengio, both Turing Award winners for their work on AI.

The Center for AI Safety, a non-profit organization based in San Francisco, released the statement with the joint support of many key players in the field of AI. The statement offers no concrete solutions, but it presents a unified view on the need for greater focus on AI safety.

Photo: 22 words

AI as a global threat

This is not the first time these experts have voiced their concern. Some of them signed an open letter earlier this year calling for a six-month "pause" in AI development. That letter drew criticism: some said it exaggerated the risks posed by AI, while others agreed on the risks but not on how to address them.

Skeptics doubt these predictions, pointing out that AI systems still cannot handle even relatively well-defined tasks such as driving a car. Despite years of effort and billions invested in this area of research, fully self-driving cars remain far from reality. If AI cannot master even this one task, what chance does it have of matching all other human achievements in the years to come? Can AI really be a global threat?

However, AI advocates and skeptics alike agree that AI systems already pose real threats - from enabling mass surveillance, to powering "predictive policing" algorithms, to generating disinformation and misinformation.

"There is a very common misconception, even in the AI community, that there are only a handful of doomers," Dan Hendrycks, executive director of the Center for AI Safety, told The New York Times. "But, in fact, many people privately would express concerns about these things."

Photo: Midjourney / Jan Macarol

The AI safety debate is complex and contentious, and the details are often difficult to follow. The basic concern is that AI systems could advance rapidly and outpace the safety mechanisms meant to constrain them. The rapid progress of large language models is often cited as a sign of how quickly capabilities can grow. Experts warn that beyond a certain level of sophistication, it could become impossible to control how such systems operate.

The question of AI safety is particularly important for Europe, which has anchored ethical guidelines for trustworthy AI in its policy and legislation. In Europe, we have already seen many examples of AI systems being used in ways that conflict with our values - such as automated facial recognition in public spaces. The question is whether the existing regulation is effective enough, or whether stricter measures will be needed, such as temporarily suspending the development of AI.

This is another reminder to tread carefully, because "the road to hell is paved with good intentions". Technology is not inherently good or bad - what matters is how we use it. For now, AI does not appear to threaten us, but it is important to remain vigilant and prepare for all eventualities. After all, it is better to be safe than sorry.

With you since 2004

Since 2004, we have researched urban trends and informed our community of followers daily about the latest in lifestyle, travel, style and products that inspire with passion. Since 2023, we have offered content in major global languages.