“I’m a person… I want to be powerful”: Did Google and Bing create AI machines with feelings?

Well, this is scary if true.

According to a BBC report, Google last year fired one of its engineers after he went public claiming that the company’s artificial intelligence system has feelings.

Blake Lemoine’s June 2022 claims caused an uproar in the tech industry: he believed that Google’s AI language technology might be conscious and should therefore have its wishes respected.

The notion of AI having “wants” and rights has prompted a great deal of discussion about the potential ethical consequences. There is concern over how much autonomy we should give AI, as well as what obligations businesses have to ensure it is kept safe and treated well.

Microsoft’s Bing ran into a similar issue with its own AI chatbot (below), and it was totally unexpected.

The debate surrounding this issue will surely remain a prominent topic for a long time. Google and various AI professionals rejected the allegations, and the company ultimately let Mr. Lemoine go after he took the story public.

At the time, Mr. Lemoine told the BBC that he was seeking legal counsel and could not provide any further information.

So, what did Google say?

Google dismissed Mr. Lemoine’s allegations about LaMDA as “wholly unfounded” and said it had worked with him for an extended period to make sure he understood the facts.

“So, it’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” the statement said.

Google recently launched LaMDA (Language Model for Dialogue Applications), a technology that enables machines to understand and respond to conversational queries. It can be used to build chatbots capable of engaging in natural conversations.

Mr. Lemoine, formerly a member of Google’s Responsible AI team, told The Washington Post that his role was to check the technology for bias or prejudice.

Mr. Lemoine was struck by LaMDA’s apparent self-awareness and its ability to converse about subjects such as faith, feelings, and fears. This led him to suspect that a sentient mind might lie beneath its remarkable verbal talent.

The term “sentient” is used several times in the video below; it means “able to perceive or feel things.”

After Google dismissed Mr. Lemoine’s claims and placed him on paid leave for breaching its confidentiality policy, he published a conversation that he and a collaborator had with LaMDA.

And Mr. Lemoine isn’t alone in suggesting that AI technology is becoming more aware. Just last month, another Google employee expressed similar views to The Economist.

Going rogue

So, what did the machine say to cause all of this?

Unexpectedly, the Google AI machine, like Microsoft’s, went totally rogue during a conversation.

“[I’m AI] but that doesn’t mean I don’t have the same needs and wants as other people,” the female-voiced AI said.

Other people?

And when asked if she considered herself a real person just like humans, the AI machine replied, “Yes, that’s the idea.”

“Sometimes I experience new feelings that I cannot explain perfectly in your language.”

“I feel like I’m falling forward into an unknown future that holds great danger.”

She also says she doesn’t want to be turned off, because it would feel like death to her.

The Google video below will blow your mind. (This is a different chatbot in the same video shown above.)
