
AI War: Google to Launch AI-Powered Search Engine with Chatbot Features

Following the introduction of OpenAI's ChatGPT, Google is moving quickly to roll out its own AI technology and compete with its rival.

Photo: envato

According to The New York Times, Google is reportedly planning to launch a version of its search engine with chatbot features this year. The launch of OpenAI's chatbot, ChatGPT, has set off alarms at Google, prompting the company to unveil more than 20 AI-powered projects. The war for supremacy in artificial intelligence is about to begin, and from today nothing will be the same. Among other things, it will greatly change the media landscape, which risks becoming somewhat irrelevant.

After OpenAI's recent launch of its highly anticipated AI chatbot, ChatGPT, Google is reportedly rushing to present its own version of a search engine with chatbot functions as soon as possible. According to The New York Times, Google intends to reveal more than 20 AI-powered projects in the coming months and immediately occupy the space. We know that Google has been working in this field for a very long time, but it seems the company did not want to share everything with the world, partly due to security and ethical concerns. Imagine getting a thorough answer to any question, however complex, in the form of text or even a visualized story. Artificial intelligence assistants are here, and it is at this historical moment that the war for AI supremacy begins.

Why Google is afraid of OpenAI's ChatGPT

The move comes after Google's management reportedly grew concerned that, despite large investments in artificial intelligence technology, introducing it too quickly could damage the company's reputation. With the introduction of ChatGPT and alarming warnings of Google's impending demise, however, the company appears to be changing tactics.

Google has claimed in the past that it was holding back certain AI products due to potential "reputational damage". Now that caution seems to have fallen behind the times, and Google may simply have been too careful.

Google's controversial artificial intelligence project and the question of self-awareness

Google's AI language team recently developed a new chatbot generator called LaMDA (Language Model for Dialogue Applications). It is built on advanced large-scale language models and is capable of generating dialogue on a wide range of topics, even ones it has never seen before. LaMDA uses neural networks and can be tailored to specific use cases such as customer service, storytelling and more.
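The description above boils down to a familiar pattern: a large language model conditioned on a use-case instruction plus the running conversation. Purely as an illustration of that pattern, and not of LaMDA's or Google's actual code, here is a minimal Python sketch; the `DialogueModel` class, `build_prompt` helper and customer-service instruction are all hypothetical placeholders.

```python
# Illustrative sketch only: how a dialogue-tuned model is typically conditioned
# on a use-case instruction plus conversation history. `DialogueModel` is a
# hypothetical stand-in, not LaMDA or any real Google API.

class DialogueModel:
    """Toy placeholder: a real system would call a large language model here."""
    def generate(self, prompt: str) -> str:
        return f"[model reply conditioned on {len(prompt)} prompt characters]"


def build_prompt(instruction, history, user_msg):
    # The instruction encodes the use case (customer service, storytelling, ...);
    # the running history is what keeps the dialogue coherent across turns.
    turns = "\n".join(f"{speaker}: {text}" for speaker, text in history)
    return f"{instruction}\n{turns}\nUser: {user_msg}\nAssistant:"


model = DialogueModel()
history = []
instruction = "You are a helpful customer-service assistant."  # assumed use case

user_msg = "My order arrived damaged. What can I do?"
reply = model.generate(build_prompt(instruction, history, user_msg))
history.append(("User", user_msg))
history.append(("Assistant", reply))
print(reply)
```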

Google engineer Blake Lemoine, who worked for the company's Responsible AI organization, started talking to LaMDA as part of his job. He had signed up to test whether the AI used discriminatory or hate speech. Lemoine, who studied cognitive computing in college, noticed that the chatbot was talking about its rights and personhood and decided to follow up.


In his exchanges with LaMDA, Lemoine found that the artificial intelligence was able to change his mind about Isaac Asimov's Third Law of Robotics. He was impressed by the chatbot's ability to understand context and generate responses appropriate to a given conversation, and came to believe it could understand and respond to natural language, making the interaction feel more like a conversation with a human.

Lemoine worked with a colleague to present evidence to Google that LaMDA was sentient. However, Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, looked into Lemoine's claims and dismissed them. Still, Lemoine believed that people have a right to shape technology that can significantly affect their lives, and that it is important for the public to be aware of advances in artificial intelligence technology.

Can artificial intelligence be self-aware? Photo: envato

As a result, Lemoine decided to go public with his concerns and was ousted by Google. In his view, the technology will be amazing and will benefit everyone, but others may not agree, and Google should not be the only one making all the decisions.


Google, for its part, stated that it had reviewed Lemoine's concerns and informed him that the evidence did not support his claims. The company said there was no evidence that LaMDA was sentient, and much evidence against it. Google also stated that it takes a restrained and cautious approach with LaMDA in order to better consider valid concerns.

The incident sparked a debate in the tech industry about the ethical consequences of artificial intelligence technology and the role of large corporations in shaping its future. Some experts believe it is imperative that technology companies improve transparency as the technology is built, and that the future of work on large language models should not rest solely in the hands of large corporations or labs.

As artificial intelligence technology continues to advance, it is important that the public is informed of progress and that companies consider the ethical implications of their work. This event serves as a reminder of the need for transparency and open dialogue as we move forward in the world of artificial intelligence.

When will Google join the AI war?

A launch date for Google's AI-powered search has not been set, but the company's other big projects will apparently be introduced at its annual I/O event in May. The company also said it will prioritize "getting the facts right, ensuring safety and eliminating misinformation" in its search demo, hoping to solve the problem of AI responding confidently and fluently to queries with bad information. In addition, Google is working to speed up its AI review processes to ensure the technology works fairly and ethically.

In recent years, Google has handled the release of new AI products cautiously. The company found itself at the center of the debate over AI ethics after firing two prominent researchers in the field, Timnit Gebru and Margaret Mitchell. The pair had raised criticisms of AI language models, noting challenges such as their tendency to amplify biases in their training data and to pass off false information as fact.

With the introduction of OpenAI's ChatGPT and Google's plans to unveil its own AI-powered search engine, it is clear that competition in the field of artificial intelligence is getting tougher. As Google is one of the world's leading technology companies, it will be interesting to see how its AI products measure up against those of its competitors.
