We're Getting Worried about this "Conscious" AI ft. GPT-3

This outstanding AI FAKED an intelligence test

If computers, robots, or even intelligent chairs become sentient in the future, switching them off would be seen as heartless.


They could attempt pleading with you to change your mind. But a genuinely intelligent machine might try something more extraordinary, if it had a sophisticated grasp of society and human psychology.


In the end, you may not be able to turn it off at all.


Does the robot have the capacity for sentience, which would give it free will and decision-making of its own? Or is it merely programmed to look human in its behavior?


At first glance, the AI did a pretty good job of showcasing how humans are capable of thinking, feeling and having hopes.


But I would still call this an imitation of the human way of communicating, rather than an independent sentient being that can think on its own.


After all, GPT-3 is a language model trained on a vast corpus of human text, and it can convincingly simulate human behaviour by imitating patterns it has seen before.
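

To make this concrete, here is a minimal sketch of prompting GPT-3 through OpenAI's original completions API (the early "openai" Python package). The engine name, prompt, and parameters here are illustrative assumptions, not a fixed recipe:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; set your own key

response = openai.Completion.create(
    engine="davinci",            # the original GPT-3 base model
    prompt="Are you a sentient being?",
    max_tokens=60,
    temperature=0.7,
)

# GPT-3 continues the prompt with statistically likely text,
# imitating patterns from its training data rather than introspecting.
print(response.choices[0].text.strip())
```

Whatever comes back is simply a plausible continuation of the prompt, which is exactly why it reads as imitation rather than introspection.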


Additionally, sentience is not a binary property. Levels of intelligence, creativity, self-awareness, and linguistic proficiency can all range widely along a spectrum.


Perhaps, then, the real question is: how smart must an AI be before we begin treating it as a sentient being?


What happens if an AI passes the Turing test?

The Turing test is not a new concept. The framework was first proposed in 1950 by the British computer science pioneer Alan Turing.


The original idea was to examine a machine's ability to exhibit intelligent behaviour indistinguishable from that of a human. In his formulation, Turing proposed that a machine could be considered "intelligent" if it could fool a human into thinking it was also human.


With the rise of AI technology, the test has been getting a lot of exposure and has captured the popular imagination.


And the publication of the GPT-3 model by OpenAI has generated a lot of buzz about its potential to pass the Turing test.


But what exactly does the test tell us? To answer that question, let's go back to the paper in which Turing first laid out his idea, "Computing Machinery and Intelligence."


In the paper, Turing proposed the following experiment: a human judge engages in a text conversation with unknown, unseen players, and then evaluates their responses.


To pass the test, a machine needs to replace one of the players without substantially changing the results. In other words, an intelligent machine needs to be good enough at imitating human behaviour that the judge can't tell the difference.
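

To see the structure of the test itself, here is a toy sketch of that setup in Python. Everything in it is illustrative: the player functions are hypothetical stand-ins, and a real evaluation would put a person and a model such as GPT-3 behind the two labels:

```python
import random

def human_player(question: str) -> str:
    # A person types the answer at the keyboard.
    return input(f"(human player) {question}\n> ")

def machine_player(question: str) -> str:
    # Placeholder reply; a genuine test would query a language model.
    return "That's an interesting question. What do you think?"

def imitation_game(rounds: int = 3) -> bool:
    """Return True if the judge correctly identifies the machine."""
    # Randomly assign the labels A/B so the judge cannot know in advance.
    contestants = [human_player, machine_player]
    random.shuffle(contestants)
    players = {"A": contestants[0], "B": contestants[1]}

    for _ in range(rounds):
        question = input("(judge) Ask both players a question:\n> ")
        for label in ("A", "B"):
            print(f"Player {label}: {players[label](question)}")

    guess = input("(judge) Which player is the machine, A or B?\n> ")
    return players.get(guess.strip().upper()) is machine_player

if __name__ == "__main__":
    if imitation_game():
        print("The judge identified the machine: test failed.")
    else:
        print("The judge was fooled: the machine passed.")
```

The shuffle is the whole point of the design: the judge's verdict can only come from the conversation itself, never from the labels.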


In the paper, Turing predicted that by the year 2000, a machine with about 100 MB of storage would be able to fool an average judge roughly 30% of the time after five minutes of questioning.


And as it turns out, Turing wasn't far off. In 2014, a chatbot called Eugene Goostman was reported to have passed the Turing test by convincing 33% of the human judges that it was human.


But as digital computers became more and more common, and their processing power increased exponentially, the relevance of the Turing test has diminished.


Just in the past decade alone, quite a few AI systems have been claimed to pass the test. Meanwhile, systems like Google's AlphaGo Zero, which beat the world's best Go players, reached superhuman skill without any human knowledge at all, a very different kind of milestone from holding a convincing conversation.


So the big question now is: does this mean that some machines have achieved human-level intelligence?