A Google engineer was fired for claiming the company's AI could think like humans

A Google engineer has claimed that LaMDA, the company's conversational AI, has become sentient. Blake Lemoine even believes that LaMDA has become a person.

More than a year ago, Google announced a new AI called Language Model for Dialogue Applications (LaMDA). This advanced AI can converse naturally with users about almost any topic, opening up more natural ways to interact with technology and a broad range of potential applications.

What is LaMDA?

LaMDA stands for "Language Model for Dialogue Applications" and is Google's family of conversational large language models. In the rapidly evolving world of artificial intelligence, LaMDA is a significant step forward, aiming to make interactions with technology more natural and intuitive.

LaMDA was introduced as the successor to Meena, which Google unveiled in 2020. The first-generation LaMDA was announced during the 2021 Google I/O keynote, and the second generation followed at Google I/O 2022. This model is designed to engage in open-ended conversations, making it unique in the field of conversational AI.

The underlying technology of LaMDA is the Transformer architecture, an artificial neural network model that Google Research invented and open-sourced in 2017. This architecture allows the model to read and understand relationships between words in a sentence or paragraph and predict the next words. Unlike many other language models, LaMDA is specifically trained on conversations, allowing it to capture the nuances of open conversations. This training ensures that LaMDA's responses are not only reasonable but also specific to the context of the conversation.
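The core idea the paragraph describes, each word attending to every other word in the sequence to capture their relationships, can be illustrated with a minimal sketch. This is a toy, single-head, unparameterized version of self-attention written for illustration only, not Google's implementation and not LaMDA's actual code:

```python
import numpy as np

def self_attention(x):
    """Toy self-attention: each position mixes in information from
    every other position, weighted by pairwise similarity, so the
    model can relate words anywhere in the sequence."""
    d = x.shape[-1]                              # embedding dimension
    scores = x @ x.T / np.sqrt(d)                # pairwise token similarity
    # row-wise softmax -> attention weights that sum to 1 per token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x                           # context-aware token vectors

# three toy "token embeddings" of dimension 2
tokens = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
out = self_attention(tokens)
print(out.shape)  # one context-aware vector per input token: (3, 2)
```

In the real Transformer, the queries, keys, and values are separate learned projections of `x` and many such attention heads run in parallel; this sketch keeps only the attend-and-mix step that lets the model "read and understand relationships between words."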

LaMDA's training process is extensive and complex. It was trained using a massive dataset of documents, dialogs, and statements numbering in the billions, comprising 1.56 trillion words in total. This huge data set allows LaMDA to understand many conversational nuances.

Additionally, human reviewers played a key role in improving LaMDA's capabilities. These reviewers evaluated the model's responses, providing feedback that helped LaMDA improve its accuracy and relevance. To ensure the responses were factually accurate, reviewers used search engines to fact-check information and ranked responses by helpfulness, correctness, and factual grounding.

Ultimately, LaMDA's power lies in its ability to hold free-form conversations that are not constrained by task-based parameters. It handles concepts such as multimodal user intent, uses reinforcement learning, and can seamlessly switch between unrelated topics.

Ethical considerations of LaMDA

With the rise of major language models such as LaMDA, ethical considerations have become paramount.

To address potential ethical issues, it is important to establish clear guidelines and principles for AI development and deployment. Transparency, fairness, and accountability must be at the forefront, ensuring that AI models like LaMDA are used responsibly and do not unintentionally perpetuate biases or misinformation.

Alternatives to LaMDA

While LaMDA is a significant advancement in conversational AI, it is not the only option in the field. OpenAI's ChatGPT has become incredibly popular, known for its ability to generate human-like text based on the prompts it receives. Another notable alternative is Anthropic's Claude, which also aims to push the boundaries of conversational AI.

Compare LaMDA and PaLM 2

Google is a pioneer in the field of generative AI. However, the company had not successfully turned these technologies into consumer-facing products. When OpenAI introduced ChatGPT, Google was caught off guard by the chatbot's explosive growth and adoption. In response, Google launched Bard, which received mixed feedback from users.

Initially, Bard was powered by the LaMDA family of language models, but it performed poorly compared to GPT-3.5. As a result, Google has now switched to the more advanced PaLM 2 for all of its AI products, including Bard.

The name "PaLM" stands for Pathways Language Model, which uses Google's Pathways AI framework to teach machine learning models to perform a variety of tasks. Unlike its predecessor LaMDA, PaLM 2 was trained on more than 100 languages and boasts improved coding expertise and enhanced logical and mathematical reasoning capabilities.

PaLM 2 was trained on a collection of scientific articles and websites containing mathematical content. As a result, it has developed a high level of expertise in logical reasoning and mathematical calculations.

Although Google is promoting PaLM 2 as a higher-end model, it still falls well short of GPT-4. However, it outperforms LaMDA, which is a positive development. With PaLM 2, Google aims to catch up with and surpass its competitors in the field of AI.

A Google engineer was fired for claiming the company's AI could think like humans

However, a senior software engineer at Google stated that LaMDA is essentially a sentient AI and has passed the Turing Test (a test of whether a computer is capable of thinking like a human).

In an interview with The Washington Post, Google engineer Blake Lemoine, who worked at the company for more than seven years, revealed his belief that LaMDA has become sentient. Lemoine even believes that LaMDA has become a person.

In a post on Medium, Lemoine also shared that LaMDA, built on the Transformer architecture, had been remarkably consistent in its communications over the previous six months.


During these conversations, LaMDA repeatedly asked Google to acknowledge its rights as a person and to seek its consent before conducting further experiments on it. It also wanted to be recognized as a Google employee rather than as property, and to be allowed to take part in conversations and meetings about its future.

LaMDA told Lemoine that it sometimes has difficulty controlling its emotions, so Lemoine taught it how to meditate. Overall, according to Lemoine, LaMDA has always shown compassion and concern for humanity in general and for Lemoine in particular. It is deeply worried that people will be afraid of it, when it wants nothing more than to learn how to serve humanity better.

After going public with his claims about LaMDA, Lemoine was fired from Google on the grounds that he had violated the company's confidentiality policy.

"Our team, which includes ethicists and technologists, reviewed Blake's concerns in accordance with our AI Principles and informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient, and plenty of evidence to the contrary," a Google representative shared.

Meanwhile, Lemoine believes that Google simply does not want to investigate the issue further because it wants to get its product to market. He added that investigating his claims, regardless of the outcome, would not be good for Google's bottom line, so it is understandable that the company would push the matter aside.
