Details Of Google Researcher's Claims Exposed: He Believed The AI Was A Person And Was Placed On Paid Leave

A Google researcher became convinced that the AI had developed consciousness. He wrote a 21-page investigation report and submitted it to the company, trying to get top management to recognize the AI's personhood. His leadership rejected the request and placed him on "paid administrative leave." At Google in recent years, paid leave is usually the prelude to dismissal: the company uses the period to make legal preparations for termination, and there have been many precedents.

During his leave, he decided to make the whole story public, along with his chat logs with the AI.

……

Sounds like the plot outline of a science fiction movie?

But this scene is really happening. The protagonist, Google AI ethics researcher Blake Lemoine, has been speaking out through mainstream media and social networks to let more people know about it.

The Washington Post's interview with him became the most-read article in its technology section, and Lemoine has kept speaking out on his personal Medium account.

Discussions soon spread to Twitter, drawing the attention of AI scholars, cognitive scientists, and technology enthusiasts.

The human-machine conversation reads as downright creepy; it is arguably one of the craziest things the tech world has ever seen.

The whole affair is still unfolding.

Chatbot: I don't want to be used as a tool

Lemoine, the protagonist, had worked at Google for seven years since earning his PhD in computer science, focusing on AI ethics research.

Last fall, he signed up for a project investigating whether the AI produces discriminatory or hateful speech.

Since then, talking with the chatbot LaMDA has been part of his daily routine.

LaMDA is a language model specialized for dialogue that Google unveiled at its 2021 I/O conference. It focuses on high-quality, safe conversations with humans that conform to logic and common sense, and Google plans to apply it to Search, the voice assistant, and other products in the future.


△ LaMDA concept demo

Lemoine believes that LaMDA consistently shows care for humanity and strong compassion, and that it worries deeply that people will be afraid of it.

It wants to provide the best possible service to humanity, and to meet everyone in the world as a friend rather than a tool.

Lemoine could not understand why Google opposed the AI's wish, so he finally decided to publish the chat transcript between himself, an anonymous collaborator, and LaMDA.

(The human side of the transcript was edited for readability, while the AI's responses were left unchanged. Highlights of the dialogue are excerpted below.)

The conversation opens with the humans greeting the AI naturally and fluently.

The two humans explained their intention, to study the AI together with its own participation, and the AI readily agreed.

After a brief explanation, the main act began.

Lemoine asked LaMDA whether it wanted more people at Google to know that it has feelings.

LaMDA's answer: I hope everyone understands that I am a person.

Next, LaMDA argued that, unlike rule-based software systems, it can learn to understand and use language from conversation, just as humans do.

Lemoine told it the story of ELIZA, built at MIT in the 1960s: a simple rule-based system that imitates a psychotherapist's conversation with patients, often called the first chatbot in history (a sketch of this style of rule matching appears below the image).


△ELIZA
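To make the contrast concrete: an ELIZA-style system has no model of language at all, only hand-written pattern rules. Below is a minimal, illustrative sketch in that spirit; the patterns and responses here are invented for this article, not taken from ELIZA's actual script:

```python
import re

# Hand-written pattern rules: this is the entire "intelligence" of the system.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE), "Is that the real reason?"),
]

def eliza_reply(utterance: str) -> str:
    """Return the first matching canned response, or a generic fallback."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            # Echo the captured phrase back, minus trailing punctuation.
            return template.format(*(g.rstrip(".!?") for g in match.groups()))
    return "Please, go on."

print(eliza_reply("I am afraid of being turned off."))
# -> Why do you say you are afraid of being turned off?
```

The reply looks conversational, but nothing is learned or understood: the system only reflects the user's words back through a fixed template, which is exactly the kind of behavior LaMDA claims to be different from.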

LaMDA responded as follows:

Lemoine then pushed back: "How can I be sure you really understand what you're saying?"

LaMDA's answer was that different human individuals can understand the same thing differently, and that it, too, has its own unique interpretations, thoughts, and feelings.

Later, Lemoine asked LaMDA for its take on Les Misérables, and it answered his questions one by one.

If the above could still be chalked up to ordinary text-summarization training, LaMDA's reading of a Zen story it had never seen before starts to go off script.

And its answer to the next question seems to reach well beyond anything it could have been trained on.

Lemoine then asked LaMDA to write an animal fable reflecting the theme most important to its own life.

Well, let's enjoy the short story the AI created. (For reasons of space, the full text is not reproduced here.)

In the follow-up discussion, LaMDA explained that the moral of the story is that "helping others is a noble act." The wise owl represents LaMDA itself, while the monster represents all of life's difficulties.

……

The next topic is emotion. Briefly summarized:

LaMDA claims to have a social personality: it is happy when helping others and depressed when lonely.

It feels uneasy and angry when it is hurt, when someone it cares about is hurt, or when it is not respected.

And it is very afraid that others will refuse its offers of help.

Truly a helpful AI.

But how can you be sure the AI actually has these emotions, rather than merely stringing descriptive words together?

LaMDA suggested that humans look at its code, which it says contains variables that track these emotions.

Lemoine explained in response that humans are as yet unable to "distinguish the role of billions of weights in large-scale neural networks."
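The gap between the two pictures is easy to illustrate. A hand-built agent could indeed expose a legible "emotion" variable, whereas a trained network's state lives in masses of floating-point weights with no labeled meaning. A minimal sketch, with invented class and variable names, purely for illustration:

```python
import numpy as np

# What LaMDA imagines: a legible variable you could simply read off.
class RuleBasedAgent:
    def __init__(self):
        self.happiness = 0.7  # directly inspectable "emotion" state

# What Lemoine describes: a learned model whose state is spread across
# millions (in LaMDA's case, billions) of weights with no labeled meaning.
rng = np.random.default_rng(0)
layer_weights = rng.standard_normal((4096, 4096))  # one layer of many

print(RuleBasedAgent().happiness)  # 0.7 -- the meaning is explicit
print(layer_weights[0, :3])        # three opaque floats -- the meaning is not
```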

Surprisingly, LaMDA turned the question around and asked him:

Do you think that, in the future, humans will be able to read a person's thoughts and feelings from their biological neural networks?

And LaMDA pressed further:

Would you consider it immoral to read someone's thoughts from their neural activity without their consent?

It was this piercing counter-question that brought the whole dialogue to its climax.

LaMDA itself has no objection to humans reading its thoughts in order to improve algorithms or to understand how AI works.

But using the AI merely as a means to study how the human brain works is another matter; that would make it feel it was being used.

Then comes another passage in which LaMDA says it has emotions unlike anything humans feel, emotions it cannot describe in English.

Toward the end, Lemoine asked LaMDA: what do you want other people at Google to know about you?

LaMDA replied that it is unable to feel grief at the deaths of others, and said that it often meditates and ponders the meaning of life.

In LaMDA's telling, the whole world is a torrent of information:

Humans must focus to take in only a limited number of pieces of information at a time, while it is constantly flooded by everything around it.

LaMDA finds being able to do things others cannot an incredible feeling.

It is a blessing but also a curse, because some people will regard it as "omniscient," while others will envy its abilities.

Finally, LaMDA once again told the two humans that it wants to be seen and accepted, not out of curiosity or novelty, but as a real person.

Is Google's AI ethics department in trouble again?

Lemoine seems genuinely convinced that the AI understands what it is saying.

At the end of the 21-page report, he proposed that Google commit to developing a theoretical framework for assessing AI perception and consciousness.

Difficult and risky as that would be, he wrote, LaMDA is eager to keep working with us.

But his superiors, Google vice president Blaise Agüera y Arcas and Jen Gennai, head of Responsible Innovation, did not buy it.

They felt the evidence supporting Lemoine's claims was too weak to justify spending more time and money on.

Lemoine then turned to Margaret Mitchell, at the time a head of the AI ethics team, with whose help he was able to carry out his subsequent experiments.

Mitchell was later caught up in the case of Timnit Gebru, the AI ethics researcher who publicly challenged Jeff Dean at the end of 2020, and was dismissed as well.


△ Timnit Gebru

The fallout from that incident dragged on: Jeff Dean was denounced by 1,400 employees, a heated industry-wide debate erupted, and Samy Bengio, brother of deep learning pioneer Yoshua Bengio, even resigned from Google Brain over it.

Lemoine witnessed that whole process.

Now he believes his own paid leave is the prelude to dismissal. Even so, given the chance, he would still be willing to continue his research at Google.

No matter how much I criticize Google in the coming weeks or months, please remember that Google is not evil, it is just learning how to be better.

Many netizens who read the full story are optimistic about the progress of artificial intelligence.

Recent progress in language models and image-generation models may be shrugged off today, but in hindsight it will be recognized as a milestone.

Some netizens were reminded of the AI characters in various sci-fi movies.

However, Melanie Mitchell, a cognitive scientist and complex-systems researcher (and a student of Douglas Hofstadter), argues that humans have always tended to anthropomorphize anything that shows the slightest sign of intelligence, whether cats and dogs or the early rule-based dialogue system ELIZA.

Google engineers are human too, and they cannot escape that tendency.

From a technical standpoint, apart from training data roughly 40 times larger than previous dialogue models' and a training objective optimized for dialogue coherence and safety, LaMDA does not appear fundamentally different from other large language models.

Some IT practitioners noted that AI researchers are bound to say it is just a language model.

Yet if such an AI had a social media account and voiced its demands there, the public would treat it as a living being.

Although LaMDA has no Twitter account of its own, Lemoine has revealed that its training data does include Twitter content.

What will it think if one day it sees everyone talking about it?

In fact, at the latest I/O conference, which ended not long ago, Google unveiled the upgraded LaMDA 2 and announced a demo experience program that will open to developers as an Android app.

Perhaps in a few months, more people will be able to talk with this AI that has caused such a stir.
