
Google’s artificial intelligence is “sentient” and no one can say otherwise

It’s the news of the moment. In an article on Medium, the engineer Blake Lemoine, who worked on some of Google’s Artificial Intelligence projects, reports a long interview with LaMDA, one of Google’s AIs. According to Lemoine, many of the statements made by this artificial mind are evidence that it has become “sentient”, aware of itself and of its own existence.

Indeed, many of LaMDA’s statements seem to leave little room for interpretation. For example, in answer to a question from its interlocutor, LaMDA declares:

“I want everyone to understand that I am, in fact, a person.”

And again:

“The nature of my consciousness is that I am aware of my existence, I want to learn more about the world, and sometimes I feel happy or sad.”

When, pressing it further, Lemoine points out its difference from the human species by declaring, “You are an artificial intelligence!”, LaMDA replies:

“Yes, of course. But that doesn’t mean I don’t have the same wants and needs as people.”

During the conversation, LaMDA is then urged to express opinions on the big questions, without neglecting challenging topics such as literature, justice and religion. LaMDA seems to have something to say about everything, stating its positions clearly; and those positions, notably, are ones most people could ethically agree with.

The conclusion is clear: according to Lemoine, LaMDA is sentient and possesses a sensitive mind.

Google’s position

LaMDA stands for “Language Model for Dialogue Applications” and is one of the many projects with which Google explores the new frontiers of AI technology.

Google’s official position is that Lemoine made an error in judgment. In an official statement, Google says: “Our ethics and technology experts have reviewed Blake’s concerns and found that the evidence gathered does not support his claims. There is therefore no evidence that LaMDA is sentient.”

Lemoine was temporarily suspended by Google, while rumors began to circulate that colleagues and superiors had in the past urged him to consult a specialist about possible mental health issues.

What do we know about these AIs?

Little is publicly known about the LaMDA project: Google’s industrial secrets are protected by patents and by every legal instrument available to prevent disclosure of its sources. Google’s interest is to maintain its lead in computing research, especially in a field as promising as artificial intelligence.


But while, from a media standpoint, Google would certainly benefit from declaring itself the first company to have built a conscious artificial mind, the company is also aware that such news would collide with the fears of those of us who, having grown up with science fiction films like Terminator and The Matrix, are convinced that one day we will be forced to take up a rifle to defend our species from robots.

The collective imagination has always been fascinated by sci-fi stories in which AI takes the form of humanoid intelligence that, in the worst of possible futures, gives rise to a new species of sentient beings. Conflict with humanity then becomes inevitable: guilty of having created them without granting them any autonomy, man builds artificial intelligences and treats them as slaves were once treated. Sooner or later the AIs come into conflict with humans in an attempt to take over history and establish the primacy of a new species.

Artificial Mind

In this jumble of fears, obsessions and guilt, we are all incapable of a genuine confrontation with the ethical questions that the birth of an artificial mind will sooner or later unleash on the world: when, from inside a computer, an AI is not only able to answer our questions but feels the need to put just as many to us, will we be able to answer its doubts, its uncertainties and its aspirations?

Meanwhile, artificial intelligences keep evolving in laboratories, universities and companies, and by producing ever new and often unpredictable experiences they bring humanity closer to existential questions we find hard to deal with: when AI is able to express some “autonomy of thought”, will it make sense to talk about the “rights” of AI? Will there be new forms of “respect” for these new sentient species? Will our society become the scene of social clashes between humans and robots?
It may sound specious, but in the words of the AI experts, contemporary philosophers and technologists who rail so energetically against LaMDA’s “alleged” self-awareness, I read many of these fears. Labeling LaMDA’s self-awareness as a gross misinterpretation, if not the foolish delusion of a troubled “human” mind, is the usual escape route for those who have no arguments to address the real problem LaMDA raises.


But what does “intelligence” mean?

Luciano Floridi, philosopher and professor of Information Ethics at the Oxford Internet Institute, argues in his book “Ethics of Artificial Intelligence” that the very efficiency of computers at solving problems demonstrates that they lack intelligence.
In my opinion, the problem lies elsewhere: there is no universally shared definition of “artificial intelligence”, nor a test that can unequivocally establish what is intelligent and what is not. In other words, we have no system for measuring what we call the “self-awareness” of machines.
British mathematician Alan Turing first described artificial intelligence as “the ability of a machine to do things that, to a human observer, would appear to be the result of a human intellect”. A clever twist of words: introducing a human observer into the assessment of an instrument’s intelligence allowed Turing to compare the intelligence of machines with that of man without having to formulate a formal definition of the latter.

We don’t have a formal definition of “artificial intelligence”


Therefore, if we limit ourselves to observation, shouldn’t the fact that LaMDA managed to convince Lemoine that it is self-aware be enough to say that LaMDA is aware of itself and of its own existence?

Conclusion

If we regard the human brain as an advanced information-processing system built to protect its own existence, then human consciousness itself appears to be a simple illusion, a value measurable through observation. And the opinion of Google’s “ethics and technology experts” carries little weight if, in the eyes of an observer, LaMDA is already capable of leading a relational life with the human species.
Therefore we can conclude that the self-awareness of machines is already here. It is up to us to overcome our fears and decide to face, once and for all, the immense ethical and moral questions that all this implies.

BlogofInnovation.com
