Surely everyone has heard that IBM’s Watson virtually crushed its meatbag counterparts in the epic trivia game Jeopardy. What? You haven’t? I suppose you’re not nerdy enough. Lucky for you…I am!!
But first a bit of background. Alan Turing is often considered the father of Artificial Intelligence (AI). Besides being absolutely brilliant he was one of the first to really recognize the power that computing could have on our world. Many of his predictions in AI have been very accurate. In 1950, Turing introduced a concept that has become known as the “Turing Test.” The Turing Test is considered to the be the grand challenge of AI.
Here’s the scenario. There are three players, A, B, and C. Player A is a computer, and players B and C are humans. Players A and B are not visible or otherwise physically distinguishable to player C. Player C (the interrogator) gets to ask any question to players A and B. If, via these questions, player C cannot deduce which player is the computer and which is the human, then the computer is said to have passed the Turing Test.
It is fairly clear that this grand challenge is still many years away. Most computers, even those having extremely sophisticated AI algorithms, can typically only carry on approximately a five minute conversation with a human before the human gets bored and gives up (Emacs psychiatrist anyone?). But, in a particular narrowly defined scope, we have many machines that have passed the Turing Test. For example, in 1996 Gary Kasparov (a world chess champion) lost to IBM’s Deep Blue, a supercomputer. It is easily demonstrated that, at least in the narrowly defined world of chess playing, Deep Blue passed the Turing Test. That is, a human observer, based on the chess playing alone, would not be able to distinguish between the world champion chess player, and Deep Blue.
This leads us to IBM’s Watson, and the recent Jeopardy episodes featuring him. It’s fairly obvious that in the narrow world of Jeopardy, Watson passed the Turing Test (beating the two most successful champions). In fact, not only did he beat them, he demolished them, ending with a total of $77,147 for the two game total, compared with Ken Jennings’ $24,000, and Brad Rutter’s $21,600.
It is by means an overstatement to say that Watson is an amazing supercomputer and that his AI reaches into new territory never before seen in the computing world. The natural language processing, capture of statistically significant information, in conjunction with (most of the time) correct inference is absolutely incredible!! When one considers what it takes to use 1s and 0s to represent, process, and store the kind and amount of information we’re talking about, and how much ambiguity there is in natural language, it is remarkable how far we have come.
From a theoretical AI standpoint, it really is a new era of computing in which computers cannot only provide us with loads of information (a la Google), but it can also give us the right information. The ability to interpret questions, even ones riddled with metaphor, idioms, and other tom-foolery opens doors of analysis with speed previously unknown. Further, the computer can reach into its vast database of knowledge (I call them Google Dumps), and somehow retrieve the right information (think about how long it sometimes takes you to find the information you want on Google). Being able to command that amount of information at the speed Watson is capable of has limitless possibilities.
The Not so Remarkable
Now for a bit more skeptical view. Let’s face, Jeopardy is impressive because of it’s breadth. It has virtually no depth. The questions are all simple enough to be answered by a High School Sophomore who has access to Wikipedia. Sure, when you watch Jeopardy you probably don’t know most of the answers. But if a topic comes up in which you’re an expert, it’s obvious the questions are terribly trivial. What makes Ken and Brad so amazing is their ability to remember and quickly recall trivia that spans the spectrum of topics. Nevertheless, with Wikipedia, I contend virtually anyone could correctly answer almost all Jeopardy questions on a single show in a relatively short period of time.
Furthermore, from watching the show it is clear that all three players knew the answers to almost all the questions. So why did Watson win? It was not because he knew the answers to questions that his decimal-crunching competitors did not (that would be impressive and would really inaugurate the takeover of our robotic overlords)! No, he won because he could ring his buzzer faster. The fact that a computer can respond quicker than a human is not at all a remarkable feat. That has been happening since the dawn of the computer age.
All in all, I was thrilled to watch the match. I was delighted to see Watson completely botch some super simple questions (like Final Jeopardy in game 1 in which Watson thought that perhaps Toronto was a U.S. city). Is this a major leap for AI and computing in general? I think so, even while acknowledging that in many ways it is not particularly impressive and certainly Watson is still light years behind having the kind of intelligence, reasoning, and consciousness that even the most uninspiring bipeds have.
So what say ye? Are you concerned about the influx of our increasingly intelligent binary slurping creations? Is Watson remarkable to you? Are we reaching the limits of what computers can accomplish, or only beginning? Perhaps the most relevant question of all, is this a step toward robotic consciousness, or is such a thing unachievable?
I think Watson was a glitsy display of what computers and AI are capable of doing. However, the real application of this technology are going to be in the workforce, not gameshows.
The computer technology, now in it’s infancy but developing rapidly, will replace a lot of jobs out there. And it could be applied to a lot of jobs – from engineering to accounting to health care. A computer will be able to do the job faster and better with fewer mistakes than humans. That, I think, is the real implication of Watson.
Not concerned. It just does a better job than it’s ancestors and storing and recalling things; and, processing things it has been programmed to do.
It is not a Child of God. It does not have a Spirit that can commune with the Holy Spirit. That is what separates us from any other life form.
I spent most of my younger years programming computers and I find the advancements impressive.
Another implication is the mystery of intelligence and consciousness – that people usually ascribe to God – is being unlocked by mere mortals. If we can reproduce truly intelligent machines that are self-aware and “conscious”, then what does that mean for the idea of God-granted consciousness? What does it mean for the notion of a soul; a ghost in the machine?
I think a lot of dualists chalk our consciousness up to a spirit. However, if computer programers and engineers can produce it “artificially”, then it challenges the notion that consciousness is a supernatural God-granted abilty that we alone can possess.
Already, there is plenty of data that other highly evolved animals (like dolphins and chimps) are self-aware and conscious of themselves. The advent of super-smart, intelligent, conscious machines, will only further challenge our perceived hegemony of consciousness.
I’m very confused by how you’re defining a “pass” for the Turing Test.
It seems like you’re saying, “As soon as an AI may beat a human at some task, then it passes the Turing Test.”
But this doesn’t seem like it would necessarily follow. It doesn’t follow that because an AI chess player could beat a grand master that the AI is indistinguishable from the grandmaster. Computers are technically better at a great many more things than humans, but this doesn’t make them convincing humans (even at those limited tasks.)
For example, there is an online game of 20 Questions under the guise of some genie or whatever. This thing is SCARY accurate, often getting what you think are really obscure things.
But that doesn’t make me for a second think it is human…because I know that its scary accuracy is not something the ordinary human would have. (Furthermore, when you DO stump it, you realize that it is really going based on databases and millions of previous attempts from players. All of a sudden, that answer you thought was obscure…well, it had been tried by 3000 people before you.)
In fact, the rest of your post mentions exactly this. Watson’s ability isn’t because he is “more human” than an actual human, but because he is taking advantage of extra-human capabilities.
“Already, there is plenty of data that other highly evolved animals (like dolphins and chimps) are self-aware and conscious of themselves.”
When they design, engineer and build the microchip, supersonic engine or Watson, we’ll talk. Heck, for that matter, let’s talk when they can start a fire. They are animals – unintelligent, instinctive, pre-programmed animals. They are here for the use and benefit of man – for friendship, food and clothing or to keep other lower life forms in check.
They are not children of God. They have never, or will never, have the intellectual capacity of God’s children. It is the fact that we are God’s in embryo that has lead to these great advancements. Dolphins, Chimps or AI will never evolve, or will never be on the same playing field, as Man.
Kinda bouncing off Will’s ideas (and subverting it at the same time), humans are not just “mere mortals.” Humans are gods in embryo.
So it makes sense that humans could unlock secrets of intelligence and consciousness, because this is our birthright.
This is entirely consistent with eternal progression, and the idea of God being a scientist and architect. Maybe what we consider to be “spirit” is simply very advanced technology? And maybe the difference between Watson and us is that Watson is that technology applied to metal, and we are that technology applied to meat.
Will: do you think human animals (we’re obviously mammals and primates more specifically) evolved just like every other animal on the planet? Or do you think humans were an act of special creation, created just the way we are now?
Will: I’ll let you answer the above question at your leisure (but will assume that you have issues with human evolution).
You say: “They (animals) are not children of God. They have never, or will never, have the intellectual capacity of God’s children.”
OK, so humans are special because of our intellectual ability. However, if intellectual ability is the criteria by which humans should be treated more ethically than mere animals (not butchered for our food and clothing) then why should we treat infants, mentally retarded adults, or our grandparents with dementia special?
I’ll assume you would answer it’s not because of their intelligence, but because they are human that we should treat them special. Do you see your circular argument?
And what if machines also develop human-like intellectual ability. Maybe machines should have human-like civil rights too?
A few things from pop culture:
#6 Andrew S:
Reminds me of a line from the upcoming Thor movie: “Your ancestors called it magic…
but you call it science. I come from a land where they are one and the same.”
The line isn’t as clear are you think in what defines us as “special”. Perhaps read a book like “Revelation Space” – a science fiction novel – and your mind will be opened to all of the possibilities out there.
Just as a thought experiment, what if a scientist could replace a single defective neuron in your brain with a chip that performed the exact same function (actually possible, just not scaled down yet in size). No problem. Still human. What if they did it again and again. Eventually, your entire brain might be replaced by “machine”. At what threshold would you consider yourself “not human”?
This is what I find most interesting as well, and why I’m drawn to AI. Consciousness is poorly understood. In AI (and even in this post and in the comments) we are referring to machines as “him” and “her.” We often speak as if they have intelligence (and they do). I think it’s very fascinating.
Re Andrew S
I was hoping someone would say something like this. You’re exactly right. The Turing Test is as I described it. The extension of the Turing Test to a narrowly defined scope is no longer a Turing Test (by definition). I am merely extrapolating and drawing some rather unfounded conclusions in my claim that they passed the “Turing Test” in that scope. I think an argument can be made that when a machine can beat the best human in something there’s reason to consider the machine “intelligent” in that particular area. This of course boils down to a definition of what is intelligent. In Watson’s case, his ability to buzz in faster than humans is unremarkable (hence the win is less interesting). But it is extremely interesting that he can get the right answer almost every time faster than a human. However, I digress, suspicion is certainly raised in light of his silly mistakes. If you were to only listen to the show (and didn’t take into account Watson’s electronic voice) would you be able to deduce that Watson wasn’t a human? Perhaps because of his silly mistakes, but otherwise I don’t think you could. Hence the case for him passing the Turing Test in that scope.
The goal of the Turing test is to fool a human observer. In that sense lots of machines have fooled human observers in certain tasks, but never in an interrogation. The fact that you do not believe (in your genie example) that a human is on the other end speaks more to your knowledge of the internet, the modern world, and the way things work. It’s not a fair test for the AI agent as you are leveraging a priori information about the situation itself, not merely evaluating on your interaction with the agent. The real question is, if we put a human behind the curtain, and the computer, and you interacted with them, could you tell the difference? As I stated, this is currently not possible.
Also, there are problems with the Turing Test and not everyone in AI accepts it as valid. There are valid criticisms of it. But it has been the grand challenge for many years and is certainly useful.
Re Mike S
Brilliant thought experiment. I love those! This is why defining what makes a person a person, what death is, what consciousness is, etc. is EXTREMELY difficult.
To merely state we have “spirits” is just a statement of belief that may or may not be true and leaves a great many questions unanswered. You take for granted that the scriptures, prophets, etc. teach it, and perhaps you have had a witness it’s true. That’s fine, but it doesn’t make it true. BTW, I’m not saying it’s not, just that your response doesn’t further our discussion, merely states a belief. There are good arguments for the existence of a spirit that answer a great many questions.
This is the direction I’d like to see the comments go. What is intelligence? The Turing Test was designed to answer that question. Is it even a good test? Is intelligence the ability to process information, reason, feel, sense, make decisions? Computers can do all those things at some level, yet none of us consider them to be conscious.
I reply with the Chinese Room.
Can the person in the “Chinese Room” be said to be “intelligent” (or “understand”) Chinese?
(I am aware, that a Chinese Room may pass the Turing Test, but this is a different thing than what I was talking about before or what you’re talking about here.)
It wouldn’t be perhaps because of his silly mistakes, but because of his inhumane speed on the buzzer in the first place (his “successes”.) In the same way, some humans may “fail” the Turing Test in a way. (Consider when certain world records in sports events are broken. We might become incredibly suspicious and say, “A human couldn’t have done that.” But we usually phrase it a little differently: “Was his taking performance-enhancing drugs? Was his swimsuit the cause of such performance gains? Etc,”)
I’m saying, quite simply, that proficiency isn’t a good substitute for humanity. A machine can be quite proficient, and as a result, expose itself as not being human.
No, I don’t think that is the case. I think it speaks to my understanding of humans: namely, humans just don’t work that efficiently at games of 20 questions. So to see a human “guess” consistently right tens of times, or hundreds of times…that would clue me that it wasn’t a human.
Before the end of the game, I couldn’t figure out HOW the machine was doing it. So, I wasn’t leveraging a priori information, as you say.
I think I COULD tell the difference: the human would have significant limitations. The 1) human doesn’t know everything (e.g., doesn’t have access to google in his brain) and 2) doesn’t have an efficient way of culling down in 20 questions.
So, to summarize, I’m not challenging the Turing Test. I’m challenging your interpretation of what passing the TT would mean. I don’t think, “The machine can beat a human at x technical task” equates to “The machine can trick an observer into thinking it is human.”
…To put it another way, there was an article I read about AI and Turing Tests. I don’t have the link, but maybe someone can find it. AIs performed better in conversations when they responded as if they had negative personality traits (e.g., moodiness, obnoxiousness, etc.,) Think about the implications of that: humans expect WEAKNESS of humans, not STRENGTH.
Sorry if I threadjacked your OP; I felt I needed to take a brief detour and address Will’s comment.
I think the fact that other animals have similar intelligent capabilities (self-awareness, take care of their young, engage in altruistic behavior) gives us a hint of the source of our own intelligence. The things that make us intelligent are evolved traits, shared in degree with other evolved animals. For that reason, I’m not sure that computers/machines will ever possess it, unless they too can reproduce and evolve as we do.
What is intelligence?
I watched a TV show last night about Orca whales. This seal that they wanted to eat was sitting on a block of ice. The ice was too big to ram and break — so two of them swam away while the third stayed and watched the seal.
Then the other two swam towards the block of ice just below the surface so as to created a wave of water above their heads. Just before arriving at the block of ice — they dove and the wave hit the ice and broke into.
I was impressed — I started thinking that perhaps you could consider that to be intelligence.
However, the seal was still on one of the broken pieces of ice. And now there was a lot of little pieces of ice surrounding the one he was on — so the same wave trick wouldn’t work again.
So the whales surrounded the piece of ice the seal was on and collectively pushed it out to open water, so they could do the wave trick again, it knocked the seal into the water, and they ate him.
I told you all of that to tell you that it wasn’t until that last part that I was convinced that those animals are intelligent. I would tie any definition of “intelligence” that one formulates with the idea of what you do when things don’t go according to plan [or programming as it were].
Watson was smart b/c humans programmed it how to be smart. What would it do if something beyond the scope of what it had been told to expect happened [e.g. the 1st wave didn’t knock the seal off]. Those whales thought of another idea. They figured something out. I think that’s key to an idea of “What is intelligence?”
Actually, the pop culture reference this has me thinking of is the recent Chrysler commercial that says something to the effect of, “Cars that drive themselves using a search engine? We’ve seen this movie before, and it ends with humans being harvested for energy.”
Very well put
Re Andrew S
Absolutely. I do not mean to imply that it is.
If you could tell the difference, then the machine would not pass the Turing Test. But this is silly, I can easily program a computer to randomly get the wrong answer or take varying time intervals between answering questions. That’s certainly not intelligence. According to what you’ve described I’d be surprised if many humans would pass your test. I have to be smart, but not too smart, fast, but not too fast, etc.
Andrew, what test would you propose to judge the intelligence of a machine?
No, it’s fine, I was applauding your comments on asking those questions. Answering Will’s question is VERY relevant to the conversation. I think that was an example of unintelligent commenting on my part.
BTW, Andrew, your chinese room example means that you don’t really buy into the Turing Test. That’s totally valid. I think the definition of intelligence is up for debate. The Chinese room example is a good one. However, my conjecture to the Chinese room experiment is that many humans wouldn’t pass or would be thought of as potentially a machine.
Personally, I think the problem we run into whenever we (not us necessarily) talk about this is that we really have a holistic view of what it means to be intelligent. It’s not just being able to do a specialized task well (though that is part of it). Being able to respond to problems, re-plan when things go wrong, sense, emote, etc.
Justin brings up a good point that Watson was really only good at one thing. If displaced, or if faced with a different plan it was not equipped to deal with that.
I think they should have rigged up a little mechanical arm with a mallet, so Watson would have to physically hit the button like the flesh-and-blood guys.
Also, I wonder if “Jeopardy” may have monkeyed with the questions and categories a bit, to make them more computer-friendly.
Right. You’re the one who is saying that the machine passes the Turing Test because it can beat a grandmaster (which says nothing about whether it makes a convincing human.)
Then do that…and then you’d be closer to a TT-passing machine.
You say, “That’s certainly not intelligence” as if it has any bearing.
1) The Turing Test has *nothing* to do with intelligence, except that to the extent a human would recognize another human by his intelligence, the Turing Test has to *mimic* that level of intelligence. (You address this criticism in the a later response, when you address the Chinese Room, but I want to stress that this isn’t my only point.)
2) The Turing Test isn’t about proficiency, except that to the extent a human would recognize another human by his proficiency, the TT has to *mimic* that level of proficiency. (This is my other point. Even granting the TT as being valid…it’s valid not because an AI/machine is proficient at some task, but because it seems to be human-like in the completion of the task.)
3) Proficiency isn’t about intelligence. To *mimic* proficiency (or even to *mimic* intelligence) can be done in a decidedly unintelligent way.
I don’t see why this is the case. “Smart, but not too smart” EXACTLY describes most humans. (Humans who are TOO smart become marginalized. We don’t treat supergeniuses very well because we can’t relate to them.) “fast, but not too fast” exactly describes humans. (Hence, we have used tools throughout the history of civilization…horses, wagons, bikes, trains, cars, airplanes.) Humans are a bundle of limitations that we overcome through the use of non-human tools.
This line actually gives us a great question to pursue your final question:
1) Not a Turing Test. Turing Tests just describe “human mimicry.” There’s no guarantee that being able to mimic a human is the same as being a human. (This really gets into questions of qualia.)
2) Not a trivia test like Jeopardy or a game like Chess. These describe proficiencies (in information recall and computation of possibilities and things like that), but we know that humans aren’t maximally proficient at a great deal of things (especially not those things just mentioned). We aren’t maximally proficient at speed, so we have transportation. We aren’t maximally proficient at computation, so we have computers, and so on.
what is the uniquely human attribute that we might call intelligence behind this? It is the ability to recognize our weaknesses and adapt through the creative use and construction of tools.
So, maybe a machine becomes “intelligent” when it can overcome its own limitations?
jmb, I partially addressed this response in 21, but yeah I’m going to change a few things.
A quick check to Wikipedia gives me this first line:
I’m sorry then. I was under the impression this entire time that the Turing test was the test of a machine’s ability to seem humanlike (to the extent that a human would think, “That’s a human.”)
But if the TT assumes that it shows intelligence, then YEAH, I have some really deep-seated problems with it.
I agree with the rest of your message in 19, and with Justin’s point. In fact, I’d probably agree that Justin gives a good example of intelligence (and shows how animals have it.)
Re Andrew S (sorry, I keep thinking of things)
I disagree. It speaks to your experience of humans on average. As you yourself say, some humans would be thought by you to be a machine (if they were exceptional in one particular task and you had no other information with which to judge).
But this is exactly what the Chinese Room experiment does – challenge the validity of the Turing Test.
I see what you’re saying.
(sorry I’m spamming with so many comments in a row)
Actually, upon further review of the wikipedia page, I’d say these capture my reservations
“maybe a machine becomes ‘intelligent’ when it can overcome its own limitations”
By this definition, some might question whether humans are truly intelligent, since we seem incapable of overcoming our own speciest biases. Just two examples:
1. Humans, as just 1 of millions of species, monopolize over 40% of all the planet’s available energy stores – leading to human caused global warming.
2. Humans destroy other species at an astronomical rate as we consume the planet’s energy, thereby destroying our land, rivers, lakes, oceans, rainforests, and atmosphere (it’s the Tragedy of the Commons that we can’t seem to escape).
It seems that this is one of our greatest challenges to our intelligence: Are we going to destroy not just other species, but even ourselves with it?
But this then becomes a matter of pessimism or optimism. Many people believe (maybe naively) that history is a progressive march forward. Optimists say we CAN overcome our own speciest biases (not to mention the other biases we have).
Even if you’re not totally optimistic, you still might PUSH for people to change their behaviors before it is too late. You don’t, for example, throw up your hands, and say in despair, “Goodbye, world.”
…or maybe you do. 😉
Plus, if humans ended up destroying ourselves and our planet, then why would we want to consider ourselves intelligent anyway?
I’m sorry if I sounded too pessimistic. I’m actually an optimist; I think we will figure out a way to solve our biggest problems we face today. It’s our ability to reason and understand the consequences of our actions that will help. We sure are taking a long time though. Maybe some of that quick computer computing ability would come in handy now 🙂
Sorry, Andrew, I think we’re talking past each other a lot here.
I think there are two underlying issues here.
1) what is intelligence
2) what does it mean to be human
I certainly agree with you that passing the TT isn’t a good measure of whether something is human. But I do think it gives some indication of intelligence, especially if the experiment is carried on for “long enough.” Mimicry is certainly not the only indicator of intelligence but I do not see how to distinguish genuine learning from mimicry. Writing a computer program to recite a speech is trivial, but writing a computer program that learns to process natural language, synthesize it, and use that knowledge to respond with a speech is not trivial. In both cases the outcome might be the same.
I think that’s pretty good. But clearly some humans cannot do this (or rather, they could, but their psychology prevents it perhaps so they won’t), but I still think they are intelligent human beings. In part you’re describing learning, and there are machines that learn and overcome weakness and limitations and adapt through creative solutions (though I confess that currently this is done in a limited scope). However, I don’t know of any that construct tools…yet!
All I’m saying is that as amazing as human intelligence is, it has limitations. And one of those limitations is to see everything from a human-centric speciesist view. I think this is destructive. We’ve got to use our intelligence for constructive purposes and fix long-existing problems that stem from our short-sightedness (just part of the way we have evolved).
Absolutely, I agree with those reservations. The TT is not a proof of anything, that’s for sure. I think it can be an indicator depending on your definition of intelligence.
The TT has been a grand challenge in AI, but it is not the end all be all for sure. And I didn’t mean to conflate proficiency with humanness.
No problem. The thing I get from this scenario though is…it’s possible to see someone going either way.
Yeah, I got that feeling too, and I think it’s unfortunate.
I agree with you on the issues that you’ve formulated, but based on the rest of the answers, I guess there’s still a bit of disagreement :3
For example, you say:
I think that if I had to pick ONE OR THE OTHER: which does the TT show, whether something is human or being intelligent, I’d pick the *former*.
Because I think when something passes the TT, the observer says, “Oh, that’s human!” He doesn’t necessarily say, “Oh, that’s intelligent!” Because to pass off as human, you have to mimic human idiocy and the limits to human intelligence as well.
This actually creates a conundrum. For a machine to mimic human idiocy and the limits to human intelligence, it does so because of *intelligence*, whereas humans do so because of their *humanity*.
…so, I can see your point about learning natural language and then synthesizing and using that knowledge to respond with a speech.
To the contrary, I point out that to be an “intelligent human being” (emphasis on “human being”) is to include constraints and limitations on intelligence. To be an intelligent human being is to lack super-human or inhuman intelligence (like consciously changing neurology and psychology through willpower.)
Nevertheless, here’s where human ingenuity comes back into play. Psychology prevents us from doing a lot. And then we (HUMANS) invent things (drugs, therapy, surgery) to BYPASS and CHANGE psychology. Hence, we have antidepressants. We have mood inhibitors. And so on. Maybe we’ll come to a point where we can change neurology through willpower?
The fact that you can point of the flaw of human-centric speciesist views (and others, like Singer, can make arguments against speciesism) suggests to me that such a limitation isn’t as strong as we think.
If we RECOGNIZE this bias, then why can’t we overcome? I’d say a really insurmountable limitation would be one that we cannot perceive and analyze.
In this vein, I’ll give my best shot at defining the terms.
Intelligence: I do think that the ability to perform something better than an average human makes something intelligent in performing that thing. Clearly the scope of the intelligence has to be well established. I consider the avionics system in an aircraft to be VERY intelligent, as is my laptop I’m in front of. As the scope of the “something” being performed grows to encompass virtually everything, then I would begin to wonder about its humanness.
Being Human: To me, one has take a holistic approach to this. It naturally resists careful definitions that use reductionism precisely because of the dynamic nature of the system that is a human. I think it includes proficiency in some things, but not too much, it includes overcoming, replanning, reasoning, and problem solving. It also includes emoting, sensing, and acting. Most importantly, I think it includes language, learning and adapting, and self-awareness (I think these are perhaps the primary traits of humans and what separate us from most of the animal kingdom).
As a result, as the scope of intelligence for a machine expands, when it eventually encapsulates the breadth that human consciousness encompasses, I will have a hard time denying that such a machine is human.
Re: “destroying the planet” via global warming, even the most (grossly unrealistic) apocalyptic models have us warming up to basically the temperatures of the Jurassic, when life went chugging along just fine.
That would of course wreak havoc on complex civilizations, and probably crash the human population — but “deep” environmentalists tend to think that would be a good thing. There would probably be fewer walruses and more monkeys overall, but life and the Earth are more resilient, relative to our destructive ability, than we sometimes tend to think.
Of course that doesn’t mean that we’re excused from being good stewards of the Earth. Just that we should panic only so much as is actually justified, and no more.
For me, I tend to think that accepting an increase of 1 Celsius in global temperature, from doubling atmospheric CO2 concentrations, is an acceptable tradeoff for avoiding a return to the energy consumption levels of the Middle Ages — with all the return to medieval mortality rates that would probably entail.
Well, disagreement is great as I love to learn from the viewpoints of others. But talking past each other is a pain.
Yes, indeed. Though I’m scratching my head as to what it means to pit “intelligence” against “humanity.” A machine acts like an idiot because of something that goes on in its thinking process (you happened to label it as “intelligence”). A human acts like an idiot because of something that goes on in its thinking process (which you happen to label “humanity”). It’s certainly not clear to me where our intelligence stops being that and starts being humanity since they both originate in my brain (or so I presume).
I think non-human intelligences are common, and are to be valued for their non-humanity as well as for their intelligence.
As I suggested here:
there are theories of the nature of consciousness as it applies to humans ourselves that suggest it exists on a continuous spectrum everywhere.
Early Mormon scriptures are actually quite comfortable with this, at least until the theology of the physicality of God became well established during the Nauvoo period.
Ugh, I had a long post, but it got deleted.
It was basically a critique of your definition of intelligence…once again, you’re getting at proficiency. But an additional point is, with your definition, phrases like “Humans are an intelligent species” don’t make sense, because intelligence is defined in comparison/contrast to the “average human.” Interestingly, the humanity of a machine then becomes defined as the point when its intelligence (that is, its proficiency ABOVE that of an average human in a particular task or field) expands in a number of directions and fields.
So, a machine becomes closer to humanity when it becomes intelligent enough to encapsulate the breadth that consciousness encompasses…but the definition of intelligence for a machine is being *more* proficient than a human at x task. So, a machine becomes human when it becomes *more* proficient at consciousness than the average human?
Notwithstanding any “one” thing like global warming, my concern is…it’s not just walruses, monkeys, and humans. Rather, plenty of other species are being endangered or wiped out.
The thinking processes are very different, however.
A human acts like an idiot because its thought processes are incapable of not acting like an idiot in some instances. (That is, until we figure out ways to overcome biases, incompetence, and so on.)
A machine acts like an idiot not because its thought processes are incapable of not acting like an idiot in some instances…but because its thought processes are very much capable of outperforming the human, but it handicaps itself. The machine engineers itself to have biases that it doesn’t natively have.
Consider a stage imitator giving an impression. The person he is imitating/providing an impression of naturally acts like that, because that is his default nature.
But the stage imitator’s default nature is different. So the stage imitator has to recognize the differences between himself and who he is imitating and try to put on an affectation…certain mannerisms, idiosyncrasies, etc.,
Does that make sense?
We may confuse one for the other. Some people give REALLY good impressions, after all. This is akin to passing the Turing Test. BUT one person does it “naturally” and the other person does it as an affectation.
“Will: do you think human animals (we’re obviously mammals and primates more specifically) evolved just like every other animal on the planet? Or do you think humans were an act of special creation, created just the way we are now?”
I don’t care either way. It makes no difference to me whether our physical form evolved from a lower life form, or whether God framed us, installed the HVAC and electrical, and coated us with skin like a contractor. It is irrelevant. What is relevant is the scripture that “God breathed the breath of life” into Adam. It was at this point that Michael (Adam’s spirit) entered the tabernacle of clay known as Adam. At this point, a Child of God entered a human body.
My 11 year old son has a language learning disability.
After discussing his language weaknesses in a meeting this morning, I realized that my son seems to process language almost like Watson. He’s given a sentence, and rather than truly processing the language and understanding the meaning, he is often guessing based on the words he manages to process. My son is faking it, just like Watson is faking it. You can get pretty good at faking it if you have a lot of practice. That’s my hope anyway, that my son gets better at guessing with practice.
What do you think about Watson’s ability to understand language?
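The kind of surface-level guessing described above — matching on recognizable keywords rather than on meaning — can be sketched as a toy heuristic. To be clear, this is only an illustration of the idea, not how Watson (DeepQA) actually works; the function name and the candidate answers here are invented for the example:

```python
# Toy "faking it" heuristic: pick the candidate answer whose words
# overlap most with the question, with no understanding of meaning at all.
def keyword_guess(question, candidates):
    q_words = set(question.lower().split())
    # Score each candidate by how many of its words also appear in the question.
    return max(candidates, key=lambda c: len(q_words & set(c.lower().split())))

guess = keyword_guess(
    "Which city hosted the 1996 chess match between Kasparov and Deep Blue?",
    ["Philadelphia hosted the 1996 Kasparov match", "Paris fashion week"],
)
print(guess)  # the first candidate wins on keyword overlap alone
```

With enough candidates and enough practice data, this sort of shallow matching can look surprisingly competent, which is exactly why it can pass for understanding in narrow settings.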
I was always taught that you don’t fully understand a language until you get the jokes.
Chinese room is a crock. Searle is off base on just about everything. It wouldn’t be the neurons in the brain that are intelligent, it would be the whole freakin’ system combined.
If a computer could ‘pass the Turing Test’ the truth is we’d have no reason to deny it the label of intelligence any more.
It really isn’t about mimicking a human; it’s really about how we grant others existence as agents with wills. In other words, the reason we believe other humans have minds (just as we believe we do) is because they ‘pass the Turing Test’ on a regular basis.
Presumably the only way to pass the Turing Test is to be as fully conscious as we are, or perhaps more so. We are, in many ways, only barely self-conscious now. Most of what we are is a black box to our consciousness.
That’s what I’ve heard as well. I actually took a short class about it (multilingualism and multiculturalism).
re 40, 41
I don’t know about YOU, but I’m definitely zombie-ing Wheat & Tares up over here.
It just seems to me that the Turing Test doesn’t actually figure out any of what you said it would. After all, it could very well be that a zombie would pass the Turing Test. In fact, that’s the entire point, isn’t it? We presume that the only way to pass the Turing Test is to be as fully conscious as we are (or perhaps more so), but THAT is the one thing in the equation we really have no clue about. The Turing Test just determines the point in which we can’t tell the difference any more.
Does the Chinese Room *system* find Chinese jokes “humorous”? Or does it find *anything* “humorous”? Is there a subjective response to anything? If it doesn’t do this, then can it still be called intelligent? (I am aware that maybe “intelligent” is just a different idea.)
I think we just assume other people have minds because if we’re wrong, then that turns us into colossally self-centered jerks — and as people who personally have minds, being perceived as a jerk (even internally) sucks. (unless you like being a jerk, and then you go about treating other people as mindless anyway.)
But we also have a lot of speciesist intuitions that prevent us from granting this status of privilege to others. We are CHILDREN OF GOD, and machines/animals aren’t — so not only is it ok to be “jerks” to machines/animals, but in fact, it is impossible to do so, because they are like clockwork, and do not sufficiently “feel” the effects of our would-be jerkiness.
No amount of systems tuning or whatever will create the breath of God, allegedly.
P.S. This is awkward. Why are you on that side and why am I on this side of the argument?
“It just seems to me that the Turing Test doesn’t actually figure out any of what you said it would. After all, it could very well be that a zombie would pass the Turing Test”
The concept of a ‘zombie’ is a silly concept. It’s an emulation of self-consciousness. But the emulation of intelligence *is* intelligence. Likewise, the emulation of self-consciousness *is* self-consciousness. I.e. a ‘zombie’ that can emulate self-consciousness completely enough to pass the Turing Test would be fully as self-aware as we are. A ‘zombie’ is therefore a simple contradiction. We should discard it and ignore people that take it seriously, at least until they can put the concept together in a non-contradictory way.
“Does the Chinese Room *system* find Chinese jokes “humorous”?”
What I just heard you ask me is if the system of neurons in the brain can find jokes humorous. What do you think my answer is?
“I think we just assume other people have minds because if we’re wrong, then that turns us into colossally self-centered jerks ”
Oh, come on. Just how often do you think normal people (or any people) walk around thinking “probably all these other people don’t have minds, but if I’m wrong about that, then I’ll be a giant self-centered jerk.”
The real reason we believe other people have minds is because we have a pretty good intuitive (if vague) concept of ‘mind’ that we can easily check with others by talking with them. I.e. other minds pass the Turing test because they are self aware. It’s very easy to tell when you aren’t talking to a mind because our detection system is so good.
“jerks to animals and machines”
Actually, we have at least some level of concern about being jerks to animals. This seems to stem from two sources — the idea that they are at least partially minds and also the idea that they also have a portion of the breath of God.
“This is awkward. Why are you on that side and why am I on this side of the argument?”
You’ve followed my blogging for a while now. Are you really shocked? Stop and think about it for a while. Think especially about my epistemology writings on Wheat and Tares. Did you really think I’d make an assumption one way or the other on this subject? Besides, doesn’t the fact that I’m a theist pretty much mean what I am saying above isn’t biased? Also, read my comments on the Myth of the Framework post on M*.
I am not sure whether we can make a computer be conscious or not. It depends on several factors we aren’t very sure of yet. But I am completely positive that we can write off the Chinese room example, and that we can write off the idea that the emulation of intelligence is somehow not intelligence. If an emulation of intelligence comes up with the next physics theory, writes your favorite song, or films your favorite movie, are you really going to differentiate any more?
I’ll respond here by responding elsewhere…
The fact that you’re a theist is what’s throwing everything around, at least *on this issue*. (For whatever it’s worth, some of your other positions tend to elicit the same kind of response. Especially epistemology writings.)
Um, maybe I’m just abnormal, but I definitely did this by default until I was 10 or 11. It was at 10 or 11 that I realized what I was doing, felt guilty about presuming that I was “the only one around,” and started thinking about the other whole lives others could be living.
Even today, I lapse. For example, if I get angry on the road, I have to slow down and CONSIDER that the other drivers have lives, feelings, and so on.
You’re probably going to think, “You’re a sociopath” now.
But we have varying opinions based precisely on those metrics (how partial are the minds? how small the portion of the breath of God?). What about those who say, “Animals don’t have any rights and don’t deserve any. The earth is our dominion [so let’s do whatever we want]”?
“The fact that you’re a theist is what’s throwing everything around, at least *on this issue*. (For whatever it’s worth, some of your other positions tend to elicit the same kind of response. Especially epistemology writings.)”
I take seriously the idea of truth. But I’ve come to realize the concept of “Truth” is a fully Theistic concept. I’ve embraced that. The concept of Truth has no place in a Lovecraftian world, which is the only real rational alternative available.
“Um, maybe I’m just abnormal, but I definitely did this by default until I was 10 or 11”
It’s well known now that human beings come pre-wired to see ‘agency’ in others. If anything, we overdo it, assuming a thing to be an agent when it is not. But we do seem to be pre-wired to see the existence of other wills around us.
Oh, on the animals thing. You are missing the point. Yes, people have different moral views on animals. But there seems to be a near universal consensus that animals and machines are not the same. Everybody* seems to accept that there is a range of self-awareness.
*I am using “everybody” in the technical sense that Joel Spolsky uses it. He takes ‘nobody’ to mean ‘under 100 million’ and ‘everybody’ to mean ‘all but under 100 million.’ 😉