Turing Tease: Computer Deception & Human Consciousness

Last Sunday, just as I was about to wrap up my Red Pills of the Week column, I received a Twitter notification that was both exciting & disturbing at the same time: The Turing test, that technological Rubicon dividing mindless Roombas from German-accented Terminators, had finally been passed! The news read that a computer program designed by a team of Russians had allegedly succeeded in convincing 33 percent of the judges at a test conducted at the Royal Society in London that, instead of a computer, they were chatting with a 13-year-old boy from Ukraine named 'Eugene'.

"Our main idea was that he can claim that he knows anything, but his age also makes it perfectly reasonable that he doesn't know everything," said Vladimir Veselov, one of the creators of the programme. "We spent a lot of time developing a character with a believable personality."

But just as I was getting ready to order some high-powered LED flashlights & a copy of Robopocalypse the next Monday --the book explains why flashlights would be essential to combat our silicon-based overlords; also check out this video-- all the online buzzing powered down faster than GlaDOS after taking a beating with a portal gun. The claim, it turns out, was no more real than the cake in Aperture Laboratories.

Oh, well. There's always the Zombie Apocalypse, right?

Nevertheless, all this online commotion & the readiness many people showed in accepting the news got me thinking: Why is it that we're so obsessed with the Turing test? Why do we even think it would be a valid assessment of Artificial Intelligence?

It was in 1950 that British mathematician & computer scientist Alan Turing (1912-1954) published Computing Machinery and Intelligence, in which he posed the question: Can a machine think? Turing answered in the affirmative, but in doing so he pointed to a bigger conundrum --if a computer could think, how could we tell? Here's where Turing proposed a solution: If a machine could hold a conversation with a person, and that person couldn't tell the difference between the machine & a human being, then from that person's point of view, the machine was capable of thinking.

We should also point out that there have been several iterations of Turing's test. The original version, in fact, originated from the premise of a man & a woman sitting in different rooms, with a third participant acting as the judge, whose job was to determine the gender of the persons conversing with him through a computer; the trick in the test was that the man would try to deceive the judge into believing he was the woman --and it doesn't take a computer genius to realize that Turing's concealed homosexuality was quite likely the reason he chose deception as proof of intelligence.
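For context on how contest entrants like 'Eugene' typically work: such chatbots have historically relied on keyword pattern matching and canned replies rather than any comprehension of the conversation. Here's a minimal, purely hypothetical sketch in Python --the rules below are invented for illustration and bear no relation to Eugene's actual code:

```python
import re

# Invented rules in the spirit of ELIZA-style chatbots: match a keyword,
# emit a canned reply, and deflect anything unrecognized back at the judge.
RULES = [
    (r"\bhow old\b", "I am 13 years old. Why do you ask?"),
    (r"\bwhere\b.*\blive\b", "I live in Odessa. It is a nice city in Ukraine."),
    (r"\bmusic\b|\bartist\b", "My favourite artist is Eminem."),
]

# When no rule matches, deflect with a question -- the 'teenager' persona
# makes gaps in knowledge seem plausible, as Veselov noted above.
FALLBACK = "Hmm, I am not sure. What do you think about it?"

def reply(message: str) -> str:
    """Return the canned response for the first matching rule, else deflect."""
    lowered = message.lower()
    for pattern, response in RULES:
        if re.search(pattern, lowered):
            return response
    return FALLBACK

if __name__ == "__main__":
    print(reply("How old are you?"))
    print(reply("What is the meaning of life?"))
```

Note that nothing in this loop understands the question; it is pure surface-pattern deception, which is exactly why passing such a test says so little.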

I was considering these ideas this afternoon, while listening to the latest episode of the Skeptiko podcast, in which Alex Tsakiris interviewed Princeton neuroscientist Dr. Michael Graziano, author of the book Consciousness and the Social Brain. As you may suspect, Dr. Graziano is a hardcore materialist, and the theory he's trying to elaborate views human consciousness strictly from a biological standpoint.

Alex Tsakiris: [...]Okay, Dr. Graziano, tell us what’s necessary and sufficient to create consciousness. That would be like a first logic, rationalist kind of thing. What’s necessary and sufficient to create human consciousness?

Dr. Michael Graziano: Well one way to put it, and I have often used this example as it kind of nicely encapsulates our approach. And it is certainly totally different from the perspective that you outlined that I think a lot of people take. So here is an example – I had a friend who was a psychologist and he told me about a patient of his. And this patient had a delusion, he thought he had a squirrel in his head. And that’s a little odd, but people have odd delusions and it’s not that unusual. Anyway, he was certain of it and you could not convince him otherwise. He was fixed on this delusion and he knew it to be true. Now, you could tell him that’s illogical and he would say yeah, that’s okay, but there are things in the universe that transcend logic. You could not argue him out of it. So there were kind of two directions you could take in trying to explain this phenomenon. One would be to ask okay, how does his brain produce a squirrel? How did the neurons secrete the squirrel? Now, that would be a very unproductive approach. And another approach would be to say how does his brain construct that self-description? And how does it arrive at such certainty that the description is correct? And how does the brain not know that it’s a self-description? Now, those things you can get at from an objective point of view. You can answer those questions.
And in effect, I think you could replace the word ‘squirrel’ with the word ‘awareness’ and I think that the whole thing is exactly encapsulated. I think almost all approaches to consciousness take the first direction, how does the brain produce a squirrel – it doesn’t.

Herein lies the reason why modern Science has embraced the Turing test: We should accept a deception from a computer as a sign of intelligence, because our own brains deceive us into thinking WE are conscious! I am a biological robot whose brain is tricking me into believing I'm Red Pill Junkie, and you are a biological robot tricked by your brain into believing a different identity. But it's ALL an illusion as far as modern Neuroscience is concerned; and since computer scientists also assume the brain is nothing but a data-processing system, this is the model they're currently working on in order to achieve the Holy Grail of A.I.

But what if they are wrong? What if Alex & many of the researchers he's interviewed on his podcast are right in pointing out that Consciousness is the ultimate test for materialistic Science, precisely because of Science's incapacity to adequately quantify & measure it? Ironic, considering how every intellectual achievement, including Science itself, originates in Mind --the one thing we cannot put under a microscope.

Dr. Graziano & other skeptics might accuse me of being an uncredentialed woo-woo trying to defend a magical belief system, and argue that even though Neuroscience hasn't fully explained the emergence of consciousness in our brain, it doesn't mean it won't do so in the future. I would point those skeptics to the work of Jaron Lanier, a fellow who IMO knows a thing or two about computers --after all, he's the one who coined the term 'virtual reality'-- and who is not only VERY skeptical of the Turing test's efficacy in measuring intelligence in an artificial system, but also shares my suspicion that human consciousness cannot be explained away from a purely mechanistic perspective:

But the Turing test cuts both ways. You can't tell if a machine has gotten smarter or if you've just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you've let your sense of personhood degrade in order to make the illusion work for you?

Which is *precisely* what happened with the judges testing the Eugene program. You see, in our rush to raise our expectations about artificial intelligence, we may have inadvertently lowered our expectations for teenage intelligence --the machines are not getting smarter; 'tis the meatbags who are getting dumber!

So fear not, fellow Coppertops, for even if tomorrow, a year or ten from now, we finally get the news that some geek managed to program a computer that passed the legendary Turing test, I highly doubt it would mean Skynet is about to wake up & purge the world of the human infestation.

...But keep the flashlights handy, just in case.

Comments

Rudyi Lis:

I always viewed the Turing Test through the lens of Thomas Nagel's essay, "What is it like to be a bat?" We have no way of accessing the subjective experience of another being, but we assume other creatures are conscious/have a subjective experience anyway. We perform the Turing Test every time we engage other human beings. It's impossible to know that your neighbor isn't really a philosophical zombie that has no experience of color or pain, but is just faking it (like the Chinese room argument). But you still treat your neighbor as if they're a person just like you (at least I assume so :)).

If a computer behaves just like a human being does, we have to wonder if it's experiencing color and pain and emotions and has a sense of self too (never mind entities that could be conscious and have a subjective experience of reality that's completely alien to our own).

I don't believe "Eugene" has the same experience of the world as a human 13-year-old Ukrainian. It's probably more in line with the Chinese room argument, where there's no comprehension of the questions being asked of it. People saying the Turing test has been passed are completely missing the point.

red pill junkie:


I dunno. Perhaps there's something going on that allows us to recognize when we're communicating with a sentient being; something beyond receiving intelligible answers to our questions.

It's not the depth of the rabbit hole that bugs me...
It's all the rabbit SH*T you stumble over on your way down!!!

Red Pill Junkie
_______________
@red_pill_junkie

emlong:

The Psychopath Test

http://cassiopaea.com/cassiopaea/psychop...

"Also, read Cleckley's speculations on what was "really wrong" with these people (psychopaths). He comes very close to suggesting that they are human in every respect - but that they lack a soul. This lack of "soul quality" makes them very efficient "machines." They can be brilliant, write scholarly works, imitate the words of emotion, but over time, it becomes clear that their words do not match their actions. They are the type of person who can claim that they are devastated by grief who then attend a party "to forget." The problem is: they really DO forget.

Being very efficient machines, like a computer, they are able to execute very complex routines designed to elicit from others support for what they want. In this way, many psychopaths are able to reach very high positions in life. It is only over time that their associates become aware of the fact that their climb up the ladder of success is predicated on violating the rights of others. "Even when they are indifferent to the rights of their associates, they are often able to inspire feelings of trust and confidence.""

red pill junkie:

You raise an interesting idea, and I'm sure it will give me much food for thought in the future.

Though I'm not really sure that describing psychopaths as 'machine-like' is entirely adequate. After all, aren't they driven by the unrelenting purpose to fulfill their desires? The fact that they care little for others' emotions doesn't necessarily mean *they* themselves don't have emotions of their own --though I have no problem in considering that they experience emotions in a different manner, or at a different scale, than a person with a more balanced sense of empathy.

So this is another thing I think about when considering the possibility of Artificial Intelligence: 'intelligence' by itself is not enough; you ALSO need to add emotions or desires in order to have a drive to achieve a goal --otherwise the machine is merely an automaton, acting by impulse instead of self-reflection.

Take for instance the character of GlaDOS in the Portal videogames: she (or it, if you prefer) is the machine controlling every single aspect of the giant Aperture Laboratories facility, and at first she is driven by the merciless obsession of putting the human guinea pig (the player) through incredibly dangerous test courses; by the second game, GlaDOS' motives have morphed into tormenting the player for having 'killed' her at the end of the 1st game.

And yes, GlaDOS fits the description of 'pathological' perfectly.

You see, there's this notion that machine intelligence would conduct itself by pure logic, like a Silicon-based Vulcan; but that IMO doesn't really make sense. For a machine to rise to the level of true consciousness it would not only need to understand the command you give it --it would also need to CARE enough in order to perform the task.

And there's the problem. Suppose it doesn't WANT to follow your order?

But perhaps I'm falling into the fallacy of understanding intelligence from an anthropocentric perspective --or at least a biological one.

And someone might also point out that I keep using the words 'intelligence' & 'consciousness' interchangeably, although I hope it's understood that on this matter I'm considering intelligence as something more than a calculator's ability to solve a math problem --I'm talking about the intelligence of *understanding* the result.


Red-walker:

Loved the Closer to Truth interview here!

Jaron Lanier seemed to be touching on something, an insight if you will, that occurred to me a few weeks ago. That is: "What would the universe be like without consciousness?" And my conclusion was that, in a sense, it would not exist, because who would be around to verify its existence? It could never be experienced.

Now, I think Lanier's conclusion was much better and more logically sound than mine, but we seem to touch on the same points. The universe would be both exactly the same and radically different if consciousness did not exist, which then creates another mystery for consciousness.

P.S. RPJ, you always seem to link to the best thinkers. ;)

Question everything.

red pill junkie:

Thanks! I became fascinated with Lanier's way of thinking back when I used to buy Discover magazine religiously, and he wrote a monthly column for them.


emlong:

Bringing up the psychopath wasn't intended to dwell on whether a psycho was a machine - it was meant mostly to raise the question of any dissembler being able to fool the Turing test. The inverse of that would be an autistic person answering the questions in such a way that the test actually labeled that person a machine instead of a person.

red pill junkie:

Again, very interesting points.

Which highlights the inadequacy of conversation as a medium to gauge intelligence.


Rudyi Lis:

I was going to say that most of our interactions with other people involve seeing their physical body and all its gestures and behaviors, so we get more input than simply text on a screen, but then my thoughts went off in another direction.

Is there anyone you speak to regularly that you've never met beyond Internet posts? What would you think if you found out "they" were an algorithm? (I'm not arguing for a particular position. I'm just curious what your opinion is since you doubt the usefulness of conversation as a test.)

red pill junkie:

I can tell you that the 1st time I used the Alice chatbot, several years ago, I was really surprised at how 'human' its responses were at times --even though I was well aware from the beginning of its artificial nature. For a VERY fleeting moment it felt as if I was chatting with a 'person.'

CGI artists are aware of a phenomenon called the Uncanny Valley, which describes how, with our current special effects, the closer a synthetic human character gets to photorealism without quite reaching it, the more 'fake' and unsettling it seems. Perhaps computer scientists will face similar problems in their pursuit of A.I.s that interact with you only through text messages or 'spoken' language.


Rudyi Lis:

As long as people are creating chatbots that can only process text, I don't believe the algorithm will have any comprehension of what it's talking about, even if the output crosses into the uncanny valley. Eugene said his favorite artist was Eminem. I would be far more willing to consider consciousness in a system that actually processed audio files and decided on its own that Eminem was its favorite than in one that is really good at constructing sentences based on statistics. (In other words, Alice and Eugene and the rest of those programs have no semantic meaning attached to the syntax they manipulate.)

My opinion is that it means a lot more if the Turing test is passed by an artificial general intelligence that can take in sensory data, associate ideas with experiences, and illustrate emergent behavior than a chatbot that's essentially performing a magic trick.

red pill junkie:

The ultimate paradox of such tests is that maybe an A.I. clever enough to pass them would instead choose to PLAY DUMB, in order not to upset the Meatbags around it :P


Rudyi Lis:

:)

http://www.penny-arcade.com/comic/2004/0...

red pill junkie:

You get the Internetz for linking to one of my favorite webcomics ^_^


Rudyi Lis:

From other articles I've read, human participants have been labelled as the computer in these tests.