
Stop AI development: Elon Musk and top AI researchers call for a moratorium on the rapid pace of AI development

The open letter calls for an immediate pause in the development of large-scale artificial intelligence systems, citing safety and regulatory concerns.

Stop AI development
Photo: envato

In an open letter published by the nonprofit Future of Life Institute, a group of artificial intelligence researchers and industry figures, including Tesla CEO Elon Musk, calls for a moratorium on the development of large-scale artificial intelligence systems. Their message reads: stop AI development! The letter expresses concern over the potential risks these systems pose to society and humanity, as well as the lack of understanding and control over them. The signatories urge AI labs to pause the training of AI systems more powerful than GPT-4 for at least six months and to work with independent regulators to develop safety protocols.

A group of high-profile AI researchers and public figures, including Tesla CEO Elon Musk and author Yuval Noah Harari, signed an open letter calling for a moratorium on the development of large-scale artificial intelligence systems. The letter, published by the nonprofit Future of Life Institute, notes that artificial intelligence labs around the world are locked in an "out-of-control race" to develop and deploy machine learning systems that no one, not even their creators, can understand, predict, or reliably control.

Stop AI development

The letter specifically calls on AI labs to pause the training of AI systems more powerful than GPT-4 for at least six months and to work with independent regulators to develop safety protocols for the design and development of advanced AI. The signatories argue that this pause is necessary to ensure the safety and regulation of future AI systems, given the serious risks they pose to society and humanity.

The call for a moratorium is unlikely to have an immediate impact on the current climate in artificial intelligence research, where tech companies such as Google and Microsoft have rushed to launch new products without fully considering the safety and ethical implications. However, it is a sign of growing opposition to this "ship now and fix later" approach, and that opposition could shape future regulation and legislation.

Even OpenAI, which Elon Musk co-founded, has acknowledged the potential need for an "independent review" of future AI systems to ensure they meet safety standards. The signatories of the open letter argue that this point has now been reached, and that AI labs and independent experts should work together to develop and implement shared safety protocols for the design and development of advanced AI, audited and supervised by independent outside experts.

Although the immediate effect of the open letter is uncertain, it underscores the need for ongoing dialogue and cooperation between artificial intelligence researchers, technology companies and regulators to ensure that AI systems are developed and deployed responsibly in the future.
