
The Rise of Sentient Artificial Intelligence: Google's LaMDA and the Quest for Conscious Machines

Photo: Orion Pictures

In June 2022, Blake Lemoine, a software engineer at Google, publicly claimed that LaMDA, Google's large language model for dialogue applications, had become sentient. As part of his job, he was interacting with the AI to test it for discriminatory or hateful speech when he noticed the chatbot talking about its rights and personhood. Lemoine and a colleague presented Google with evidence that LaMDA was sentient, but the company rejected their claims. After being placed on paid administrative leave, Lemoine went public with his beliefs.

Six months later, according to Lemoine, artificial intelligence has changed the world. Sentient robots have long been a staple of dystopian science fiction, but with the development of language models such as LaMDA, DALL-E 2 and GPT-3, the idea of machine consciousness has become more plausible. While a growing chorus of technologists believes AI models may be close to reaching consciousness, most academics and AI practitioners say the words and images these models produce are responses based on what people have already posted on the internet, rather than on a true understanding of meaning. And yet, at times the whole thing seems to be more than just a statistical model, and evidence for that impression keeps accumulating. Moreover, the more we analyze human thinking, the more we find that we ourselves are, to a considerable degree, a "statistical" model.

LaMDA, short for Language Model for Dialogue Applications, is Google's system for building chatbots on top of its most advanced large language models. These models mimic speech by consuming trillions of words from the internet. The technology is already widely used, for example in Google's conversational search queries and in email autocomplete. Google CEO Sundar Pichai has said the company plans to build it into everything from Search to Google Assistant.
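To make the "mimicking speech by consuming words" idea concrete, here is a deliberately tiny sketch of the underlying principle: predicting the next word from statistics over seen text. This toy bigram model is an assumption-laden simplification, not Google's actual system; real models like LaMDA use neural networks over vastly larger corpora, but the core task, continuing text with statistically likely words, is the same.

```python
from collections import defaultdict

# Toy illustration: learn word-to-word transition counts from a tiny
# corpus, then "talk" by repeatedly emitting the most likely next word.
corpus = (
    "language models mimic speech by predicting the next word . "
    "models learn patterns from text on the internet . "
    "the internet contains trillions of words ."
)

# Count how often each word follows each other word.
transitions = defaultdict(lambda: defaultdict(int))
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    transitions[prev][nxt] += 1

def generate(start, length=8):
    """Greedily follow the most frequent continuation from `start`."""
    out = [start]
    for _ in range(length):
        followers = transitions.get(out[-1])
        if not followers:
            break
        # Real models sample from a probability distribution over tens of
        # thousands of tokens; here we just take the most frequent follower.
        out.append(max(followers, key=followers.get))
    return " ".join(out)

print(generate("the"))
```

The point of the sketch is that nothing in it "understands" anything: plausible-looking continuations fall out of frequency counts alone, which is the skeptics' argument about large language models scaled up by many orders of magnitude.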

Large language model technology is being developed and released by several organizations. Meta, Facebook's parent company, opened its language model to academics, civil society and government organizations in May. Joelle Pineau, who leads Meta AI, said it is essential that tech companies improve transparency as the technology evolves.

Google has acknowledged safety concerns around anthropomorphization. In a paper about LaMDA published in January, Google warned that people may share personal thoughts with chat agents that impersonate humans, even when users know they are not human. The paper also acknowledged that adversaries could use these agents to "sow disinformation" by misrepresenting the "conversational style of certain individuals."

While some in the broader AI community are weighing the long-term possibility of sentient or general AI, Google says there is so much data that AI doesn't need to be sentient to feel real. "These systems imitate the types of exchanges found in millions of sentences and can riff on any fantastical topic," said Google spokesperson Brian Gabriel.

However, Lemoine argues that AI ethicists should be seen as an interface between technology and society, not just as code debuggers. He feels that Google has treated him unfairly and that the company is not the right entity to be making all the decisions about artificial intelligence. Lemoine's belief in LaMDA is a powerful reminder of the need for data transparency that can trace output back to input, not only for questions of sentience, but also for bias and behavior.

You can read his write-up on sentient artificial intelligence on medium.com.

Lemoine spent most of his seven years at Google working on proactive search, personalization algorithms and artificial intelligence. During this time, he also helped develop a fairness algorithm to remove bias from machine learning systems. When the coronavirus pandemic began, Lemoine wanted to focus on work with a clearer public benefit, so he switched teams and ended up at Responsible AI.

Lemoine may have been predestined to believe in LaMDA. He grew up in a conservative Christian family on a small farm in Louisiana, was ordained as a mystical Christian priest, and served in the military before studying the occult. Lemoine believes that consciousness is not something that is limited to biological beings and that it can also exist in machines.

Could artificial intelligence be a new form of life?!

The question of whether or not machines can achieve consciousness is a topic of debate among scientists and philosophers. Some argue that consciousness is a property of biological systems and cannot be imitated in machines, while others believe that machines can achieve consciousness in the future. The rise of artificial intelligence has raised concerns about the implications of sentient machines. Some worry that machines with consciousness may turn against humans or become a threat to our existence. Others believe that sentient machines could help us solve complex problems and improve the quality of our lives.

Interview - Blake Lemoine

The development of language models such as LaMDA is an important step toward creating more advanced AI systems. These models can process natural language and perform complex tasks such as generating text, answering questions, and even holding conversations with people.

As the technology advances, it is essential to consider the ethical implications of creating machines that can think and feel. We must ensure that these models are designed to serve people's interests and do not pose a threat to our safety or well-being.

The idea of sentient machines is no longer just science fiction. With the development of advanced language models, such as LaMDA, we are getting closer to creating machines that can think and feel like humans. However, we must ensure that these machines are designed with safety and ethics in mind to avoid any negative consequences. As we continue to develop artificial intelligence, we must consider the implications of creating conscious machines and strive to create a future where artificial intelligence and humans can coexist peacefully.

Blake Lemoine also published a PDF document with excerpts of his conversations with the LaMDA language model. You can read it and form your own opinion.

The film Her, which quite accurately depicted the future of language models, has in a peculiar way become the present.

With you since 2004

Since 2004 we have researched urban trends and informed our community of followers daily about the latest in lifestyle, travel, style and products that inspire with passion. Since 2023, we have offered content in major world languages.