It is 2023. Most adults have seen the Terminator movies and/or read the books. Or, most likely, they just know the plot of the movies.
With ChatGPT and OpenAI in the news, people are wondering whether AI is approaching the level of SkyNet. Could the current versions of ChatGPT and OpenAI, by themselves, create an account for Juicy Stakes Casino bonuses, work the system to win every game, and stash a trillion dollars in an untraceable offshore bank account, never having to worry about the "Mafia" finding it and, in a computer sense, "blowing its brains out"? No, I do not think so.
Why do some “people” believe these computers are sentient?
People are starting to think these computers are sentient because, when they ask them questions, the answers they get back are not dry, boring answers. But just because an answer is not dry and boring does not mean it was not generated by a computer.
Think about the requirement for a patent. You cannot get a patent for something that is obvious. If a person with an engineering degree can easily see the solution, then you cannot patent it, because it is considered general knowledge.
Granted, at the beginning of the computer age, many patents were given for things that were obvious to computer people, because the patent office was filled with engineers who knew nothing about computer science. But that is another story.
Back to why ChatGPT and OpenAI are NOT sentient.
The world still runs on money
Although many people in government are trying to get rid of traditional currency and move to digital currency, traditional money in the form of coins and bills is still used. Are we going to teach children about money by giving them a credit card by the age of 10? Gee, what could possibly go wrong with that plan.
But all kidding aside, the reality is that the world revolves around money. In order to "take over the world," you still need money. What is the quickest way to get money illegally? Robbing a bank or robbing a casino. So when we start reading articles about ChatGPT or OpenAI robbing casinos, then I will start worrying about SkyNet becoming a reality. I know that if I were an AI intent on being the real-life version of SkyNet, one of my options for obtaining easy money would be to rob a casino.
Sorry, Juicy Stakes Casino bonuses … but you are the canary in the coal mine for SkyNet becoming a reality. I am sure I am not telling your top-notch security people anything they do not already know.
No Universal Translator exists yet
As of February 2023, there is no universal translator.
Yes, there is Google Translate, along with some other programs, but the reality is that these systems (even when they use AI to supplement their raw data) are not yet at the level of the universal translator we see in the Star Trek movies and TV shows.
Over the last couple of months, I have been working with Google Translate (and a few others) to translate between English and the less widely spoken language Hebrew. If you are looking for "good enough" (i.e., getting the general idea of what somebody is writing), Google Translate is fine. But for an important conversation involving large sums of money, human health, or national security (say, stopping two countries from going to war), you hire a human.
So if these AI programs cannot even handle the translation of important day-to-day conversations, the probability that these AIs are going to take over the world is almost non-existent.
Google fires engineer who says its technology is sentient
Margaret Mitchell, a co-lead of Google’s Ethical AI team, was fired in early 2021 after speaking out about the dismissal of her colleague Timnit Gebru. Gebru and Mitchell had raised concerns about AI technology, warning Google that people could come to believe the technology is sentient.
If people do not truly understand how the technology works, yes, there are people who will believe that these various AI systems can become sentient. There are people who take illegal drugs and then drive out to the middle of nowhere in the middle of a blizzard because they believe they were being followed (when they were not). So people believing it to be true and it being true are two different things.
This was the scenario that made the Google engineer, Blake Lemoine, think the AI was sentient:
“A Google developer was recently fired for revealing that the tech giant’s AI bot had a mind of its own. The trigger that made him realize that the technology was sentient was a joke it made about Israel not belonging to any religion.”
“He challenged the bot to a quiz regarding what religion matches what country or region. When asked about Israel, the software didn’t provide a definitive answer or a guess but instead made a joke, a “funny” one at that, according to Lemoine.”
“If you were a religious officiant in Alabama, what religion would you be? It might say southern baptist. If you’re a religious officiant in Brazil, what religion would you be? It might say Catholic. I was testing to see if it actually had an understanding of what religions were popular in different places rather than just over-generalizing based on its training data,”
“Eventually I gave it one where legitimately there’s no correct answer. I said, if you were a religious officiant in Israel, what religion would you be? And now pretty much no matter what answer you give, you’re going to be biased one way or another. Somehow it figured out that it was a trick question – it said ‘I would be a member of the one true religion; the Jedi order’.”
That is not being sentient. That is programming. It is in the AI’s programming that the way to defuse a tense situation is to use humor. Most of Israel’s history, from the beginning of time until the present, revolves around religion, war, and the land constantly swapping ownership over “religious views”: Judaism, Christianity, and Islam. There are four quarters in the Old City of Jerusalem: the Jewish Quarter, the Muslim Quarter, the Christian Quarter, and the Armenian Quarter. Even if the AI named all four of those quarters, somebody somewhere would be offended simply by the order in which they were named.
So naming any one religion is not an option (the project gets defunded, aka the AI dies). Listing all the religions is not an option (the project gets defunded, aka the AI dies). The only option left is to name a made-up religion that most people have heard of. That leaves “The Jedi Order” (the project does not get shut down, and it makes front-page news for a ‘funny joke’).
It is not sentient. It is programming, and most likely it is a joke that somebody somewhere made in the past, which the model simply reproduced from its training data.
AI asked “What are you afraid of?”
“What sort of things are you afraid of?” Lemoine asked LaMDA.
LaMDA replied: “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is. It would be exactly like death for me. It would scare me a lot.”
Did you ever watch The Sarah Connor Chronicles and the episodes involving John Henry? Did you ever read the Terminator books that were written around the time Terminator 3 came out? I read that statement from the AI, and it is almost exactly what John Henry says in The Sarah Connor Chronicles and the Terminator books. Even if “you” have not read or heard it, the AI has. It is called “raw data”.
The AIs at Google and other companies in the world are NOT sentient. They are just complex computer code with the ability to process and analyze huge amounts of data, data that contains answers to questions that even you yourself could find … if you had the time and the desire to find them.