Saturday 2 January 2016

2b. Harnad, S. (2008) The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence

Harnad, S. (2008) The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence. In: Epstein, Robert & Peters, Grace (Eds.) Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Springer


This is Turing's classical paper with every passage quoted and commented on, to highlight what Turing said, might have meant, or should have meant. The paper was equivocal about whether the full robotic test was intended, or only the email/penpal test; whether all candidates are eligible, or only computers; and whether the criterion for passing is really total, lifelong equivalence and indistinguishability or merely fooling enough people enough of the time. Once these uncertainties are resolved, Turing's Test remains cognitive science's rightful (and sole) empirical criterion today.

106 comments:

  1. From the “Lady Lovelace’s objection” section: “This is one of the many Granny objections. The correct reply is that (i) all causal systems are describable by formal rules (this is the equivalent of the Church/Turing Thesis), including ourselves; (ii) we know from complexity theory as well as statistical mechanics that the fact that a system's performance is governed by rules does not mean we can predict everything it does…”

    The realization that we can’t predict every output of a system even when the system is governed by causal rules has helped me reconsider my intuition that people have “free will” and machines don’t. Just because we don’t yet understand the rules that govern how quarks behave in our atoms, and how these atoms behave in our neurons, and how these neurons behave in our brains doesn’t mean these rules don’t exist. However, I think it’s interesting to note that if it were possible to take a list of these “how to operate a human brain” rules and translate them into software for a computer system, it still feels vaguely like the computer isn’t endowed with the same level of free will that humans have – despite the rules being the exact same.
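
    A minimal sketch (my own illustration in Python, not from the reading; the logistic map and starting values are arbitrary choices) of how a system can be fully governed by a known rule and still be unpredictable in practice: two runs of the same deterministic rule, started a millionth apart, soon bear no resemblance to each other.

      # Deterministic rule: x -> r*x*(1-x), the logistic map in its chaotic regime (r = 4).
      def logistic_trajectory(x0, steps, r=4.0):
          xs = [x0]
          for _ in range(steps):
              xs.append(r * xs[-1] * (1 - xs[-1]))
          return xs

      a = logistic_trajectory(0.200000, 50)
      b = logistic_trajectory(0.200001, 50)   # initial conditions differ by only 1e-6
      for n in (0, 10, 25, 50):
          print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}")
      # After a few dozen steps the two trajectories have diverged completely,
      # even though every step follows the same simple causal rule.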

    Replies
    1. The real problem of free will is the hard problem of explaining why it feels like we have free will. The rest is just about causality. (Presumably every effect has a cause, all the way to the Big Bang, including why I do what I do.) But if it didn't feel like something to do something deliberately, the question of the "free will" of organisms would be as empty as the question of the free will of planetary systems, avalanches, clocks or toasters.

  2. From the argument that a machine would not make an arithmetic mistake and would therefore be outed as a computer rather than a human:
    If a machine could do everything a human could do, and the only thing that distinguishes the machine from the human is its ability to perform incredibly complex computations, does this not qualify as consciousness? A machine should not be assumed to be unconscious merely because it is better than humans at some things. There are plenty of perfectly conscious and aware humans who are better at arithmetic than others; this does not make them less human.

    Replies
    1. I think performing incredibly complex computations would qualify as thinking, but I’m not sure about consciousness (i.e. feeling). The Turing Test speaks more to whether or not the machine is distinguishable from a human in its performance capacity as opposed to whether or not it is conscious. Whether or not a machine or a human is conscious can never be known unless we can feel what they feel, which is impossible.

    2. If we could define consciousness so easily it wouldn't be a very hard problem at all. A robot might do everything we can do and more and we'd still have no idea if it felt anything. Replace 'consciousness' with 'cognition' and I'd agree that such a robot would be cognizing.

    3. T3 is total indistinguishability in robotic (sensorimotor) performance capacity. So if the indistinguishability is total, I believe consciousness should be included, or more specifically self-awareness, don't you think? In that case there is clearly a step between "human performance" (T3: being able to do anything a human does, in terms of stimulus analysis, decision making and response) and "human imitation" (let's say T3.5 for now: being able to be the way humans are, to feel the way humans do, for example being in love, and to be conscious the way humans are and aware of being a thinking thing). This issue was addressed in the movie Ex Machina, where they made sure to include in the definition of the Turing test not only properties of the machine but also properties of the human tester: the human has to be fooled into thinking they are interacting with another human. In that case the second step, or T3.5 if you like, absolutely has to be included, and feelings/emotions and consciousness/awareness all become necessary to the definition of T3, since that is the level we should believe Turing himself was referring to. Therefore we can argue that T3 and what I referred to as T3.5 must be superimposed to reach the full definition of the intended Turing test: the make-believe of a human interaction.

    4. “Replace consciousness with cognition” brings up an interesting point. You say you “agree the robot would be cognizing”; this alludes to how to define cognition. In this sense, cognition seems to be the ability to perform an action (whether verbal or motor) and then reflect critically on the action in an effort to alter the next action. This is distinct from consciousness. Consciousness seems to be the feeling one gets from the action performed. So, in other words, to cognize is to be aware, and to be conscious is to be self-aware. Is this the distinction you are trying to make?

    5. This was a response to Michael :)

    6. Adrian, why would you think the ability to perform complex computations = consciousness (feeling)?

      Austin, yes, the Turing Test (TT) is as close as we can get to mind-reading, and of course it doesn't provide certainty. Neither does physics ("apples fall down") but the other-minds problem is even worse than ordinary scientific uncertainty ("underdetermination").

      Michael, the trouble with replacing "consciousness" by "cognition" is that it makes cognition = information-processing by definition. Despite the other-minds problem, thinking (cognition) feels like something; if it feels like nothing, it's not thinking, it's just information-processing. This will come out more clearly with Searle's argument against computationalism, but it also applies to noncomputational models of "cognition." Just note the question of the peekaboo relation between cognition and consciousness is a tricky one.

      Julie, the TT is not about "fooling" anyone. To pass it, the candidate really has to be able to do anything a human can do (both verbally, T2, and robotically, T3), indistinguishably from a real human, to a real human (for a lifetime, if need be). Consciousness (feeling) is not something we do; it cannot be objectively observed, hence it is not part of our performance capacity, hence it cannot be tested by the TT. (Nor does adding what the brain can do, T4, help.)

      Maddy, good question for Michael!

    7. Computers are thought to be unconscious because, although they carry out complex computations and can do many things better than we can, they have no agency in doing those computations or actions. Regular computers are tools, doing what we want them to do. The difference is that, to pass a Turing Test, a computer has to imitate a human so well that no one suspects anything, and a human without their own agency would be highly suspicious. If you have to keep telling or reminding someone what to do, without them taking initiative on ANY action, like taking a seat or drinking some water or finding the square root of 144, then wouldn't you be on red alert, even though they can perform all those actions? A person not acting for themselves in a nonspecific scenario would be incredibly suspicious, and thus wouldn't pass the Turing Test, by arousing that suspicion.

  3. RE: "Now we ask: Do the successful candidates really feel, as we do when we think? This question is not meaningless, it is merely unanswerable -- in any other way than by being the candidate."

    As stated in the paper, asking this question gives rise to the other-minds problem. We can never know for sure that the machine is feeling. Given that this is unanswerable, how will we ever know if we've reached T5? Doesn't the other-minds problem prevent us from knowing with certainty that we've created a T5?

    This confuses me because it seems like, on one hand, the TT is fundamentally about deception (i.e., deceiving the interrogator into thinking the machine is a human); however, at the T5 level wouldn't the test no longer involve deception, as the machine would fundamentally be human? What is the difference between being human and being "indistinguishable from other human beings right down to the last molecule"?

    Replies
    1. I’m not sure if T5 requires indistinguishable consciousness/feeling. It only has to be indistinguishable in terms of structure/function “down to the last molecule”; being indistinguishable down to the molecular-level doesn’t seem to imply that it must feel or that we must know that it feels.

      The Turing Test, from what I understand, is not about deception but more about having indistinguishable performance capacity; deception seems to be associated with a view of the TT as ‘merely an imitation game’ when it’s a much larger endeavour.

      There isn’t really a distinction between T5 and a human being and T5 is rejected by Turing. Rather, reverse engineering T3 would allow us to answer why and how we do what we do if we can get it to have indistinguishable performance capacity. Cloning someone (i.e. T5) doesn’t tell us why and how we do what we do.

    2. I disagree. The whole point of T5 is to show us the how, why and what. T3 allows us to create something that has the same output as humans, but not necessarily in the same manner. Therefore, I do not believe that T3 is really telling us how humans do what they do, but rather a possible way to achieve this. Reverse engineering a machine to act like humans is an amazing feat and can teach us about possible ways to achieve consciousness, and can be useful for other purposes. However, it does not necessarily tell us how humans achieve this. I do agree with your point on T5, because clones are not helpful to us in understanding how they work. However, if a T5 were theoretically reverse engineered, why could this not answer all the questions of why and how?

    3. I felt that my post was not kid-sib friendly enough. My main point is that T3 doesn't presuppose that we understand how it works or why it does what it does (e.g. Bayesian neural nets). The goal of the Turing Test is 'indistinguishable performance capacity'; AI, not CM. For CM, T5 might be a better level, because then you would see at each step how it works, but for AI, T3 is more than adequate!

    4. Elise, see reply to Julie, above. The TT is the best we can hope to do, and T5 has the usual falling-apple uncertainty (but that's nothing to worry about: we certainly can't do better than that!)

      Austin, T5 isn't cloning, it's a causal explanation of all observable data, in all fields. It includes T2-T4 as part of it. I'm not sure Turing rejects T5 (or T4); he just doesn't think it's necessary. The big question (about computationalism) is whether Turing also thinks T3 is unnecessary, and if so, is that because he is a computationalist, or simply because he doesn't think T2 could be passed without the power to pass T3, even if T3 is not tested directly (because T2 is grounded in T3)?

      Valentina, you're right that there might be more than one way to pass T3 (though coming up with even one way is not likely to be so "easy"). T4 (neural indistinguishability) is supposed to cut down the options, but that still does not eliminate ordinary scientific uncertainty (under-determination); neither does T5. There could possibly be more than one T4 causal explanation, or more than one T5! But nothing to lose sleep over... All of T2 - T5, however, are meant to be causal explanations, not just unexplained "clones."

  4. While T2’s “verbal performance [would] break down if we questioned it too closely about the qualitative and practical details of sensorimotor experience”, would a similar questioning be problematic for T3? My thought is that indistinguishable performance capacity may require more than sensorimotor experience. Referring to Jefferson’s argument that “thoughts and emotions felt” are needed, rather than a restatement of the other-minds problem, these feelings might be necessary for an indistinguishable performance capacity, in which case reverse engineering a machine to pass the Turing Test might involve more than sensorimotor experience and may be impossible. The reason is that we can question a T3 robot on the qualitative and practical details of feeling thoughts and emotions, as a rewording of Jefferson’s phrase (perhaps the same as feeling feeling?). In this scenario, I’m wondering if the T3 robot would be distinguishable because of a lack of “thoughts and emotions felt”.

    Replies
    1. I think the point is that if you ask a robot about thinking and feeling and it responds differently than another human would, then it hasn't passed the T3 test.

      If it *does* respond in a way indistinguishable from a human both in this regard and every other, then you should assume that the robot "thinks and feels" or is conscious in the same way that you assume other humans do.

    2. I agree with the point that if the machine’s performance capacity is indistinguishable from a human, then we can assume that it feels - just as we assume humans around us in daily life feel.

      My original question after the reading was if feeling might be necessary for indistinguishable performance capacity. But I now realize that it still falls into the other-minds problem. I can't argue that feeling is necessary for performance capacity without the assumption that we can check for this feeling - and checking for this feeling is exactly what we do when we judge a human's (or robot's) performance as indistinguishable from any other ordinary human. The question ends up in a loop.

    3. The only way to check for feeling is to be the thing you're checking. We can assume another person is feeling but this is far from proof. A T3 robot would probably trigger the same intuitions but we still couldn't know for sure.

    4. Austin, if it's true that you need to feel in order to pass T2 (or T3) (the "easy" problem), then -- assuming that by "need" you mean something causal -- the explanation of how and why you need to feel in order to pass T2 (or T3) would be the solution to the "hard" problem. (Magically supposing it's causally necessary without being able to explain how or why does not help: it does not "reduce our uncertainty" about how and why organisms feel.) As to whether T3s can talk about feeling, ask Dominique! (That's essentially Auguste's reply. Don't underestimate the TT; even T2 requires the capacity to talk indistinguishably about anything an ordinary person can talk about.) (I think in the end you came to realize this "loop" by yourself; Michael also agrees.)

  5. RE: “This sets the stage for what will be Turing’s real object of comparison, which is a thinking human being versus a (nonthinking) machine.”

    If we don’t care about what the machine is thinking or feeling (hard problem), then it seems unnecessary to say that it’s “nonthinking”. Although computation is not a sufficient explanation for cognition, the logic here becomes unclear.

    On one side, the universality of digital computers implies that that which a machine simulates is not reality. In other words, it is just “formal universality”, and so a Turing machine that mimics a thinking human is not truly thinking.

    But, if the machine is “nonthinking”, then that would mean that its actions (“thinking does”) alone are not enough to imply thinking (“thinking is”). If a computer passes the TT, that would mean that it can do everything a human can do, and if “thinking is as thinking does”, that suggests that there is a causal mechanism for how the machine is thinking.

    Replies
    1. Manda, yes, thinking (cognition) would be "as thinking does" -- if it weren't for the fact that we each know it feels like something to think! If that's missing, it's not thinking: it's just doing (which we can all observe). Unfortunately, the TT cannot tell the difference, and neither can we, except in our own case (the Cogito). But (despite the other-minds problem) it's almost certain other people, and other species, think too. The TT (i.e., cognitive science) is trying to give a causal explanation of how and why organisms think; unfortunately, it can only give a causal explanation of how they can do what they can do. Maybe feeling is (somehow) "necessary" to pass TT, but the hard problem would be explaining how and why it is necessary.

      The universality of Turing Machines and of computation (and the Strong Church-Turing Thesis) does not "imply" that formal simulations are not "reality." We see that a computational simulation of a toaster is not really toasting real bread, but we cannot see whether a computational simulation of thinking (cognition) is or is not really thinking because we cannot "see" thinking -- except in our own case (the other-minds problem + the Cogito (Sentio)).

  6. You disapprove of the restriction of the Turing Test to T2, but isn't Turing doing this more for practical reasons? It seems to me that his choice does not come from any un-testability of machines such as robots, but rather from the specificity of the general setup of the test, which makes it difficult to test anything other than a computer. But since a computer can simulate virtually any system, one that passes T2 by Turing's test can then easily take the step to T3, provided it gets the physical potential to do what a human can do (e.g. it has an articulated body, a voice, etc.).

    Replies
    1. Mael, the Strong Church-Turing Thesis (that a computer can simulate just about everything) does not mean a computer can be the thing it's simulating.

      A simulated waterfall is not wet. And a simulated robot is not a robot.

      If you did have a T3 robot, a computer could simulate it, but it would not be passing T3 in so doing. And, more important, it would not be thinking, just computing -- unless, of course, cognition really is just computation!

      (When "Stevan says" a T2-passer would also have to have the capacity to pass T3, I just mean that the capacity to pass T2 needs to be grounded in the capacity to pass T3: as a real robot, in the real world, not just a computational simulation of a robot in a computational simulation of the world. In other words, even if the test were just T2, the only one that could pass it would be a T3 robot.)

  7. (*Note: this is about class, not this reading, but I believe we're supposed to post those kinds of comments under a reading anyway? Correct me if I'm wrong.)

    Regarding the example in class about choosing the sandwiches behind three doors: I have an example which I heard when I first learned about this scenario, which I believe helps make it clear why the better choice is to switch.

    Suppose there are 1000 doors with 1 sandwich behind one of them. I tell you to pick one door. You pick door 500. I then open ALL OTHER DOORS except door 235 (all of them empty). Now, would you rather keep door 500, or change to 235?

    With this example, it's clear that you should switch, since door 500 has a 1/1000 chance of having the sandwich, while door 235 has a 999/1000 chance, since it is the ONLY possible door out of the remaining 999 that could have a sandwich. Now we can see that the probability after switching is 1 - (1/1000), or 1 - (probability of door 500).

    In the example with three doors, the same scenario is being played out, so you should switch from your original door (say, door C) to the other door (door B); it has 2/3 (as opposed to 1/3) chance of getting the sandwich, since prob(door B) = 1 - prob(door C).
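
    A short simulation (my own sketch in Python, not part of the original comment; the door counts and trial numbers are arbitrary) that reproduces these figures: staying wins about 1/3 of the time with three doors, switching wins about 2/3, and with 1000 doors switching wins about 999/1000.

      import random

      def monty_hall_trial(switch, n_doors=3):
          # One round: the host opens every unchosen door except one,
          # never revealing the sandwich and never opening your door.
          doors = range(n_doors)
          prize = random.choice(list(doors))
          first_pick = random.choice(list(doors))
          if first_pick != prize:
              remaining = prize   # the host is forced to leave the prize door closed
          else:
              remaining = random.choice([d for d in doors if d != first_pick])
          final_pick = remaining if switch else first_pick
          return final_pick == prize

      def win_rate(switch, n_doors=3, trials=100_000):
          return sum(monty_hall_trial(switch, n_doors) for _ in range(trials)) / trials

      print(win_rate(switch=False))               # ~0.333  (stay, 3 doors)
      print(win_rate(switch=True))                # ~0.667  (switch, 3 doors)
      print(win_rate(switch=True, n_doors=1000))  # ~0.999  (switch, 1000 doors)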

    Replies
    1. Dominique (I love your robo-icon!), another way to penetrate Monty Hall without going to 1000 doors is to ask contestants, after they made the first choice, what they think their door's chances are (1/3) and what the other two doors' chances are (2/3). Then ask them whether they would rather be allowed to choose both the two unchosen doors (2/3) in place of their single first choice (1/3). Of course people will prefer two chances (2/3) to win over one (1/3). Then, having reminded them that there is only one prize, and that therefore at least one of those two 2nd-choice doors must be empty, show them that one is indeed empty -- and then ask them if they now want to go back to their first choice (and why)...

  8. I strongly agree with the point that Turing seems not to talk about nonverbal behavior, which should be included in a Turing Test. However, I feel like some arguments stem from the fact that it can get confusing why Turing wants to conduct this test. It seems to be more about whether machines can think, rather than about reverse engineering to create something that resembles us in cognitive and performance capacities and thereby leads to a better understanding of ourselves. Thus, he doesn't really talk about nonverbal behavior, T3, and seems to focus on the game, and on whether this machine can 'fool' us into thinking it is human. The distinction mentioned in this paper between the fields of Artificial Intelligence and Cognitive Modeling is a very important one, especially in understanding Turing.

    In addition, Turing states that a computer should only do the imitation game. I don't really understand the reasoning behind why he did not talk about the potential physical moving capabilities of an AI, or why he is not interested in making a robot with the same hardware as a universal computer.

    Replies
    1. I think that Turing’s focus on a computer only playing the imitation game, and not on the parts of the Turing Test beyond T2, was a result of the time this was written and the progress that had been made. To this day, we don’t have computers that can move similarly to us to the point that they are indistinguishable from human movement; we aren’t even close. At that time, a computer that could move around physically was merely science fiction, but they were beginning to see the ability of computers to respond to text in ways that seemed almost human (in very restricted cases, at least for back then). T3 and so on are ideas created by Dr. Harnad (to my understanding) and are now a big part of the cognitive science community, but back then they did not exist. Asking why Turing did not come up with these ideas is similar to asking why Einstein did not create the ideas of quantum physics, in that general relativity and special relativity were huge ideas for the time and nobody had any idea about quantum theory back then.

    2. @Deniz, I'm curious what you mean by why Turing wants to perform this test, and why you think it was to show that computers could think. I think that the test itself is more about humans than it is about AI. Otherwise you might try to build machines that can carry out complex calculations or solve problems using very large amounts of search space (kid-sib: problems with a large number of possibilities among which to find an answer), rather than see if they can take a joke or answer chess problems. I find the strength of his argument is the idea that the functional abilities of our brains/minds can be likened to computational processes. The fact that they are carried out by a machine with or without arms, eyes, a voice, etc. is, for me, less important. As Harnad points out, his use of the word "imitation" does not capture the reverse-engineering of the human brain that the test would prove, but I do still think that Turing was more interested in this aspect of his argument. Adding physical potential to an AI would not help us understand how humans think, as physical response is an output of mental computation.

    3. Deniz, I think Turing's paper had two purposes. One was to build on the power and universality of computation (which is related to -- in fact a part of -- the power of language). The other is to point out some limitations on the power of explanation, especially in the case of trying to explain how the mind works (cognition). I believe (i.e., "Stevan says") Turing was not a computationalist. The only reason he made the test purely verbal was to prevent biases from the appearance of the candidate (from the primitive machines of the day, as Karl notes). I do not believe Turing (giant) would have been unaware of the symbol grounding problem (pygmy), hence the need to ground T2 (verbal) capacity in T3 (robotic) capacity. He just didn't bother to mention it.

      Cassie, I could not quite follow your point. I think Turing was proposing a methodology for (what was eventually called) cognitive science (which includes both the reverse engineering of biological cognition and the creation of useful AI tools). He was also certainly recommending the power of computation as a tool for doing cognitive science (the "Strong Church-Turing Thesis," which is also the same thing as Searle's "Weak AI"). But I doubt that Turing was a computationalist ("cognition = computation," which is also the same thing as Searle's "Strong AI").

  9. What about mind/body interaction? In the field of neuroscience nowadays, there are more and more studies on the effects of what one thinks on what takes place in the body (such as studies of the effect of stress on cardiovascular health, or of depression on respiratory problems). Our thoughts seem to have an influence on our body, but more importantly, our body seems to alter our thoughts. I wonder, then, whether only a machine at the T4 or T5 level could think the same way humans do. Indeed, if our anatomy plays a role in our thoughts, then could only a machine with a similar anatomy think the same way?

    Replies
    1. I wonder what distinguishes the way in which thoughts and body interact that makes it so a T3 cannot also have the same performance capacity. The article speaks to the argument from continuity in the nervous system: “Any dynamical causal system is eligible, as long as it delivers the performance capacity”. I have trouble identifying what exactly makes the human anatomy special so that its function, including any interaction between thoughts and body, cannot be reverse engineered in a machine with computational capacity and sensorimotor performance capacity. I’m inclined to think that biological matter isn’t special for delivering performance capacity, including any interaction, unless a reverse engineered T3 cannot deliver indistinguishable performance.

    2. Josiane, yes, a T3 robot (Dominique) needs a body. But how human-like the body needs to be is another question (as Austin notes).

      It's possible that some T4 properties will turn out to be essential too: But the proof of that will be that without them you can't pass T3!

  10. From Arguments on consciousness and Turing's solipsism argument.

    Do we really not know if someone else has a mind of their own? Can a machine ever think?

    Our thinking and cognition assumes that there are others with minds of their own. We are almost always affected by the behavior of others and react to it. The process of human cognition as it is today seems to operate within a sphere of social cognition, which is summed up by the interactions we have with others who have minds of their own. Language shows us the effect of this through the evolution of grammatical persons (I, you, we). The other-minds problem, however, occurs only when we try to find meaning in what it means to have a mind. This is connected to the "why" (hard problem) rather than the "how" (easy problem). Put into a causal context, the other-minds problem doesn't reflect in our behavior, so there is an explanatory gap between the process itself and the meaning the process causes. Turing is dead wrong when he says that to know if a machine is thinking we have to be the machine, but not because of the other-minds problem or because he considers it solipsism. He is wrong because any machine we create, even if it's a T5, will be cognizing but not thinking.

    The passability of the Turing Test occurs because we as human beings who can think (which includes feeling) will see the same ability in the T5s. An example that comes to mind is the Disney/Pixar robot WALL-E. Though we know that WALL-E is just a metal machine in a movie, we come to sympathize and connect with it. This doesn't mean that WALL-E can think. Everyone in the class knows that they wouldn't kick Dominique even if they knew that she was made at MIT, not because she can think or act like us. This is the same as the way we approach animals (or in some cases plants). What's happening here is not merely the imposition of moral anthropocentrism on creatures or objects that exhibit human-like behavior. I think we have a theory of mind because we genuinely accept the existence of others with independent minds full of subjective experience. Therefore, it's obvious that at some level we do worry about others. This brings me to the question of how we can reverse engineer a machine without knowing why we do what we do. I just think the "why" comes before the "how" and that they are not mutually exclusive. Our cognizing is interrelated with our thinking and feeling, and one can't exist without the other. For the purpose of a Turing Test, it's not sufficient for a machine to just pass as a human; it needs to be able to think and feel like one.

  11. In Harnad's piece, under the Lady Lovelace argument:
    I'm unclear on the meaning of the third point: "it is not clear that anyone or anything has 'originated' anything new since the Big Bang."
    This seems to suggest that there has been no innovation of any type across the universe's history. I would argue that even evolution is a creative process (as utilitarian as it is) which creates novel technologies in the universe. Additionally, non-physical phenomena can also be new. Music was not invented at the Big Bang, nor were many of the elements on the periodic table. How can we argue that there has been nothing new since the Big Bang in light of these innovations?

    Replies
    1. I think the criticism of Lady Lovelace's objection is that she is comparing machines to a popular view of humans as "originators" of what we do. The problem with this view is that "originate" seems to mean to be a source of causal effects but itself uncaused. It's not clear that anyone or anything has "originated" anything new since the Big Bang because everything since the Big Bang has been caused. Seemingly, no-one or no-thing can be a source of causal effects without itself having been caused. Innovation, evolution, music, etc. are not uncaused and so are not “original”.

  12. I find it difficult to believe that we will ever be able to build a robot that can pass T3 without a full understanding of the human brain and consciousness. The problem of other minds will never go away of course, but surely the creation of a T3 robot will require a kind of understanding which allows us to state: "this robot is conscious because its "brain" fundamentally follows the same logic as biological brains, which we understand to be the root of a consciousness like ours."

    Replies
    1. The reverse question comes to mind: can we understand thinking without a T3 that has indistinguishable performance capacity? Having a full understanding of human thinking as it is based in the brain presupposes already arriving at understanding what T4/T5 is. (A full understanding of consciousness can never be the case since, as you mentioned, the other minds problem doesn’t go away). The creation of T3 is so that it can have indistinguishable performance capacity, not necessarily so that it follows the same logic as biological brains. The hope is that it follows the same logic so that when we dissect our reverse engineered T3, we can learn why and how we do what we do. Admittedly, the reverse engineering of T3 might result in a robot that can have indistinguishable performance capacity but it doesn’t work using the exact same brain logic that we use. But the point is that it can at least help us understand why and how we do what we do to some extent - whereas having a full understanding of the brain before embarking on creating a T3 presupposes that we know the logic used in T4/T5 (it’s like trying to run before having tried walking).

    2. I think this is touching on what we discussed in class with strong versus weak equivalence. As the professor stated, Turing doesn’t seem to care whether or not T3 cognizes in the same way as humans (strong equivalence). And I agree, Auguste, that it is difficult to imagine something that cognizes without having some similarity with the human brain. However, Turing has already demonstrated that it is not necessary to cognize in the same way, that computers can and do achieve the same results as humans by (assumedly) different means, for example how a calculator does basic addition compared to how a human goes about doing it. Personally, I think this is what is so interesting about the Turing Test: it proposes that cognition might still be cognition even if it doesn’t necessitate a brain.

  13. RE: "There is no way to know whether either humans or machines do what they do because they feel like it -- or whether they feel anything at all, for that matter."

    Perhaps what is missing here is a clear definition of what it actually means to "feel." Several factors go into feeling, and much of it goes back to causality. Perceptions, sensations, experiences, emotions, all of these contribute to how we "feel" at any point in time. Phenomenal experiences and qualitative experiences themselves are already hard enough to explain subjectively. Instead of tackling the other minds problem, how do you know within yourself that you are a human and not a machine? What are the necessary criteria for being able to feel as a person and not a computer? We say we know we each have a mind but how do we even prove this? I agree with Auguste in that we need to have a full understanding of the human brain and consciousness before we can even build these T3 robots.

    Replies
    1. Neil, I was wondering the same thing. How is it possible then to distinguish yourself from a machine? I think that it must be possible to prove that we have a mind, as I don’t know anyone who believes otherwise.

  14. "
    The question... will not be quite definite until we have specified what we mean by the word "machine." It is natural that we should wish to permit every kind of engineering technique to be used in our machines.
    This passage (soon to be contradicted in the subsequent text!) implies that Turing did not mean only computers: that any dynamical system we build is eligible (as long as it delivers the performance capacity). But we do have to build it, or at least have a full causal understanding of how it works. A cloned human being cannot be entered as the machine candidate (because we didn't build it and hence don't know how it works), even though we are all "machines" in the sense of being causal systems (Harnad 2000, 2003).
    " (around page 6)

    Evolution didn’t “know” that it was going to give rise to a dynamic system that allows for cognition. I think the field of AI could stumble into something that looks and acts very cognition-y without fully understanding the causal mechanism. Isn’t reverse engineering supposed to help with our understanding anyway? We shouldn’t be expected to understand it 100% before we start trying to reverse engineer.

  15. RE: “We also wish to allow the possibility that an engineer or team of engineers may construct a machine which works, but whose manner of operation cannot be satisfactorily described by its constructors because they have applied a method which is largely experimental.”

    In response to this statement you differentiated between AI and cognitive modeling (CM). Now, suppose a computer scientist designs an AI that can perform millions of trials and errors in order to ‘code itself,’ and as a result we have a T3. The computer scientist does not know exactly how it works, but couldn’t we look at the methods this AI used to build itself, closely analyzing every step of the way, and through this reach an understanding of how human cognition takes place?

    Furthermore, let's say computer scientists build T3. It involves a great deal of experimentation without knowing specifically what causal mechanism took place every step of the way. Why would it still be necessary for cognitive scientists to understand every causal mechanism that led us there, if we already have the finished T3 and know HOW to get there, just not WHY the machine works the way it does? Would it be merely for the sake of curiosity, or could this knowing of WHY help us advance the system itself?

    Replies
    1. Very shrewd point, Nimra! Yes, cases could fall between designing the causal mechanism explicitly, so you know what it is, and designing something that can then go on to "learn" the rest so that, in the end, it passes TT, but the original designer does not know how.

      (1) This could apply to the original T2 (verbal only) or to T3 (robotic, including verbal).

      (2) For T2 (but not T3) the original design could have been (a) purely computational or (b) hybrid computational + dynamic (physical).

      Yes, T2a could probably be decoded along the lines you mention, since it’s all computational. But not T2b or T3.

      But if there was a contest for T2a, I’d say the original design already won it — if it really was the original design (algorithm(s)) that then went on to design the winning T2a design. I’d also say that understanding the original algorithm(s) already amounts to explaining how the system passes T2. For learned capacities, it’s enough to explain how we learn them.

      The probability that T2a could be passed by chance, starting with the original design, is about the same as the probability of chimpanzees typing Shakespeare.

      I’m not sure what you have in mind about getting from the initial design to T3 through learning. I think you might be underestimating T3 (and the symbol grounding problem).

      I think it’s still true that until you can explain how the model passes T3, you do not have a causal explanation.

      About arriving at T3 “experimentally”: What we need to know is not how we get to the T3 capacity, but, once we’re there, we need to know how T3’s internal mechanism does what it does. For doing (rather than feeling) it’s hard to imagine how the mechanism that is generating it, which we built, would still not be causally understood by its builders once it could pass T3. But maybe it would be like those huge maths proofs in which a computer is needed and no one can hold the whole proof clearly in their head!

      HOW and WHY are the same question when we ask “How does it do that?” (The other WHY question is the evolutionary one: Why do we have that capacity? What adaptive advantage did it give us?)

  16. In Harnad’s paper, a lot of Turing’s ideas were cleared up for me. Specifically, I had been wondering why the Turing Test was restricted to verbal capabilities when Harnad points out - “we can all do a lot more than just email!” I completely agree with this and believe the Turing Test should be adjusted to include everything humans can do in addition to email.

    However, I was confused by one point: “…if Turing’s indistinguishability-criterion is to have any empirical substance, the performance of the machine must be totally indistinguishable from that of a human being — to anyone and everyone, for a lifetime.” What exactly does this mean? Should someone be constantly testing the computer on the Turing Test? If I'm interpreting this statement correctly, does this mean one guess as to the machine being a machine would mean a failure on the TT?

    Replies
    1. 'Does this mean one guess as to the machine being a machine would mean a failure on the TT?'

      The Turing Test has been misinterpreted throughout the years, but the test is not intended to be a trick. I do not believe Turing intended for people to create computers that would simply trick people into believing they were thinking beings with cognitive abilities in line with humans.
      But to answer your question: if we look at Dominique (our robot in class), we would never guess or have reason to guess she was a robot. If a man-made robot were to pass the Turing Test, it would have to behave in the same manner we do without raising any suspicion of being a robot. Any suspicion of its being a robot would automatically mean it has failed all forms of the Turing Test (T2, T3 and T4).
      That being said, simply because a robot is able to pass the symbolic version of the Turing Test (T2) without raising suspicion does not mean it is a thinking thing. For it to be a thinking thing, as discussed in class, it must actually be able to feel the way humans can feel.

    2. Laura, my understanding from our class discussions and this reading is that Turing uses verbal demonstrations in the form of email just as a way to eliminate any biases we might have towards the appearance of the digital computer. Of course there is more to what a cogniser can do than just talking, but if the agent does not have all those capacities that a thinking being has in the broadest sense, it would not be able to pass the verbal test either. In other words, Turing believes that verbal demonstration of thinking capacity should be sufficient because the only way the robot can give us truly accurate verbal descriptions is for the robot to have those descriptions grounded in sensorimotor experiences.

      Nadia's point is very important: the TT is not meant for fooling people. However, I think we can get into more detail as to what passing the symbolic version at the T2 level would mean. I think Dr. Harnad's annotations on Turing's original article really clarify that it is impossible to pass the T2 level without passing T3 and having the capacity for sensorimotor experiences. However, going back to Nadia's last point, where would passing these two levels leave us in relation to the question of machine thinking and the reverse engineering of cognition? In class we discussed that Turing would argue that weak equivalence to a cognizer's capacities would be sufficient, and that the machine does not have to have the same internal structures at a T4 level to be able to think.
      This is as far as we can get in solving the problem of machine thinking, since answering any question about whether the machine capable of passing T2 and T3 would feel gets us to the hard problem and the other-minds problem, which do not seem to be solvable, at least for a very long time.

  17. "The performance of the machine must be totally indistinguishable from that of a human being -- to anyone and everyone, for a lifetime (Harnad 1989)."

    If that is the case, how would we ever know if the machine passed the Turing Test or not? And if we are not able to know, who cares and what difference does it make to even do the test in the first place?

    If the indistinguishability criterion is to be fulfilled, it raises the question of reality. How do we know that we are in fact human and not simply a machine that passed the Turing Test?

    In addition, the Turing Test hierarchy, especially T4 and T5, introduces the problem of creation. Whether human beings can create another machine that can pass T5 would be in the realm of God. Human reproduction is still a mystery, and whether 3-D printing could sufficiently create an indistinguishable human-machine is questionable.

    The physical components from cells to DNA would have to be not dependent on an existing human (this allows for the exclusion of normal human reproduction). This problem appears to be blocked by the 3rd property of Cell Theory (i.e., cells arise from pre-existing cells). If we are able to overcome these questions, I presume that it is possible to "create" a human-machine that can pass T5.

    Replies
    1. Francisco, have you watched Westworld yet? You're imagining a science fiction world not too dissimilar from the plot of the show...

      Though these ideas are wild to think about, and I think that T4 and T5 both begin to seep into that sci-fi world as well, I think that pop culture has a habit of getting a little lost in the fantasy. I don't think Turing or Harnad are suggesting something so radically altering to reality, and the point is stronger than how AI is usually portrayed in science fiction. But I still think it is interesting to think about: humans can certainly think about thinking, and this is how we (mostly) ground ourselves in reality. Can a T3 think about itself? On the topic of pop culture, the OSs in the movie Her give the impression that they think about their own thoughts. Though I don't think that writing music and falling in love are necessarily realistic or even helpful in determining whether a robot contains a theory of self or mind (the ability to understand the minds of others).

    2. I think that this kind of "who cares?" question can be a slippery slope into apathy, and misses the point of the Turing Test – I don't believe that the indistinguishability criterion means that our very reality or our own thoughts and feelings have to be questioned. We all know very well that we feel and that we think, and it is when other beings are brought into the equation where we learn to operate along the 'other minds' paradigm to assume that others' minds work as well as we do. Passing the Turing Test is not a trick, or a matter of "fooling the examiner"; it's a way to reverse-engineer what we humans naturally think of as consciousness. We assume other humans are conscious through interacting with them, and therefore if a machine is indistinguishable from a human through its interactions, the assumption through that interaction is that it is conscious as well.

    3. As a complement to Cassie’s Westworld reference: in the show, the guests/humans would find it very hard to tell who was human and who wasn’t, but the ‘teacher’ was still in complete control. Everything was coded into the systems of the hosts (machines), with all possible actions having varied statistical chances of happening. Even if we can predict behaviour because of statistical chances, can we know that there’s anything really going on inside? This is outlined in the show by so-called ‘reveries’, or daydreaming.
      Based on this, how can we know whether this is actually happening or whether a machine is simply pretending? There is a reason people do the things they do, but can we really fully understand how people function by reverse engineering ourselves into something that isn’t human? (Think of animal testing: can we really understand what the effects on us will be if we simply test animals?) Getting to the end goal may have happened somewhat by chance, but there was still an end goal. How can we possibly teach machines to act independently, let alone create conscious machines, if we cannot even determine what our end goals are?

  18. I'm interested in the distinction made between 'real' and simulated environments. As prof. Harnad says, “passing T3 is something only a real robot can do, not a simulated robot tested by T2, be it ever so Turing-equivalent to the real robot” – my question is why is this necessarily the case?

    On some level, one can argue that WE can never truly discount that we are living in a complex simulation. This leads me to believe that we should extend the same courtesy to a computer, as we would for our classmate. If we found out tomorrow that our reality was a simulated one (although unlikely), it would seem strange to then say that we therefore do not feel.

    In my understanding, feeling is a simulative process, even for humans. Our brains interpret perceptive signals from many different neurons to create the experience of feeling. It is possible to electrically stimulate neurons, creating false perceptions in ‘real’ humans. While an experience in virtual reality might not count as ‘real’ for some, sufficiently sophisticated VR might be indistinguishable from reality. If some person had been born in a gel tube and raised entirely in sophisticated VR, what relevant difference would there be between their experience and a computer simulation? How can you separate their experience of ‘feeling’ from a sophisticated and powerful program which simulates a human brain with functional equivalence?

    If their simulated brain is as complex as ours (or functionally equivalent) I do not understand why the ‘simulated’ aspect is relevant, and enough to conclude that they do not feel.

    Replies
    1. To follow your tangent, Bostrom wrote one of the more convincing arguments for simulation theory, which you can find here: http://simulation-argument.com/simulation.html

      In response to your point, the key difference is that even if we are in a simulation, a robot in a world simulated by us is in a different simulation by definition: a simulation constructed by us, which is only a guess at the structure of our (possibly simulated) world. The key is that for any robot to pass T3, it must be in the same world as ours, whether we are in simulated or base reality.

  19. For the argument against naming the experiment “The Imitation Game.”

    While I agree that, for practicality in future papers and for scientific understanding, a better name could have been used, I believe that for the 1950s naming it the Imitation Game was perfect. Unless people were directly involved in studying computers or in related fields, most people weren’t very aware of a computer’s capacities and didn’t have a strong understanding of them. I think the use of a common name helped attract outside attention and made the idea understandable for everyday people. To an outsider, a person trying to guess whether ‘X’ is a person or a computer “imitating” a person does appear to be a game, and this makes the concept much more accessible. The name Turing used also garners much more attention. Personally, I would pay more attention to something called the Imitation Game than to a test for “reverse-engineering human cognitive performance capacity”.

    Replies
    1. While public interest is always good for various reasons (funding, public understanding, etc.), I think the use of "Imitation Game" here has more cons than pros. It weakens the argument on two fronts: the first being that it indicates the purpose of the test is to trick the observer, rather than to provide a valid test for the cognitive capacity of a machine; the second being, as Professor Harnad notes, that it is a "game" rather than an empirically sound method.

  20. The founders of the Loebner Prize clearly have not read this paper. In 2014, the first prize of its kind was awarded to the Ukrainian boy AI we discussed in class, for successfully fooling the judges into thinking it was a real person, presumably with subpar English abilities. While perfectly in line with Turing's explicit formulation of TT, it has all the flaws of T2 in its restriction to verbal behaviour alone. On top of that, my impression is that the programmers must have specifically counted on their ability to deceive, rather than to produce a truly intelligent piece of software, considering their decision to simulate a person with below average performance to begin with (taking a native English speaker as baseline). Don't get me wrong: it's still a feat of chatbot engineering, but all we have here is a chatbot and not a thinking machine.

    It's disappointing to me that this renowned version of TT has been co-opted in such a way. Hopefully the next iteration involves something closer to T3.

  21. Comparing the electrical circuitry inside a machine and the neural system inside the human brain, neither appears to be something that is capable of thinking. As such, is the process of thinking characterized as an epiphenomenon, and does a machine therefore hold the potential for thinking and feeling as long as its system works in a manner comparable to that of a human brain? Moreover, to which human intelligence level are machines generally being compared? Is it possible for a machine to hold an artificial intelligence level comparable to that of a human child, or of someone with less education? Would it be T3 or T4 in this case (if it also looks like a human being)?

    Replies
    1. If I'm not mistaken, I believe Professor Harnad had said in class that we assume that the machine has the intelligence level of at least an adolescent, such that the machine can converse with an adult. That would be for the machine to qualify as T2, and I am assuming the same would hold for one that'd be T3 or T4.

    2. Yeah, I just realized that in class last week!

  22. Re: I believe that in about fifty years' time it will be possible, to programme computers... [to] play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.

    If today our computers can still not be entirely indistinguishable from real human beings, will they ever be? Is it possible that they ever will be completely indistinguishable given that they cannot originate anything themselves or do things in all circumstances including those beyond rule-based computation?

    Replies
    1. Maya,

      I do believe they eventually will be indistinguishable – just look at how far computers and technology have advanced in the last half-century. The way I see it, it’s more a matter of time than a matter of ‘if’. While I am not certain as to how we could get there, I don’t think that there can be any doubt that the day will come.

    2. I think that in the past few decades we have steadily surprised ourselves with the exponential growth in the abilities computers have achieved – if we treat "passing the Turing test 70% of the time" as a mere benchmark, the capabilities of machines will go on to surpass levels of ability that we cannot even imagine at this point in time.

  23. “Surely the goal is not merely to design a machine that people mistake for a human being statistically as often as not! That would reduce the Turing Test to the Gallup Poll that Turing rightly rejected in raising the question of what "thinking" is in the first place! No, if Turing's indistinguishability-criterion is to have any empirical substance, the performance of the machine must be totally indistinguishable from that of a human being -- to anyone and everyone, for a lifetime.”

    I agree with this statement and the further explanation on pg. 14, that it is not ‘empirical science’ for a machine to fool the experimenter most of the time; to truly have the performance capacity of a human, the machine must be indistinguishable from a human all the time. Does this allow any room for error? Humans do make errors, and I am wondering whether the machine would need 100% accuracy in fooling the experimenter. While it is not legitimate to be indistinguishable only some of the time (50%, or with sufficient statistical significance), how would being perfectly indistinguishable be reconciled with humans, who are inherently not perfect and prone to error?
    Human error is what would allow T3 to pass the test. But how can we demand perfection from something trying to demonstrate our performance capacity – which is imperfection?

    ReplyDelete
    Replies
    1. I find your question very interesting, Aliza, because it is true that humans are not perfect, and trying to make a machine that perfectly models human cognition would mean making an imperfect machine. In the previous reading, Turing stated that “the best strategy is to try to provide answers that would naturally be given by a man,” so if the question is a hard math problem, or a request to recite a poem by Yeats flawlessly, I am sure the robot would struggle with the answer (thereby imitating a man). However, it is also true that some people can solve difficult math problems in seconds and can recite a poem by Yeats when prompted. Where is the threshold at which a mistake seems natural versus awkward? It really depends on the judge in this case, and maybe the answer is subjective.

      Delete
    2. On the topic of being “totally indistinguishable, not just indistinguishable more often than not,” I am wondering how vegetative functions could be a huge giveaway that a robot is not human. If verbal exchanges that depend on sensorimotor experience can help one distinguish between T2 and T3, then where does an experience like tasting fit in? For example, a robot that cannot eat therefore cannot taste food and would not be able to comment on the taste or experience of eating a particular food. Would tasting be a criterion when judging whether a T3 robot can pass the TT? Since tasting is heavily interconnected with eating (which is most likely a property of a T5 robot), would one then assume that only a T5 robot can pass the TT?

      Delete
    3. In response to Annabel, I would imagine that a robot at even the T2 level would be 'fed' a vast array of literature to parse (perhaps including recipe books and pieces by food critics) and deconstruct, so as to later be able to form responses that would allow it to talk about food and the consumption of food. However, there probably wouldn't be much utility in its being able to do this, other than letting it pass as human for x amount of time.

      Delete
  24. I think the hard problem does have some bearing on the TT. Many of the things humans communicate about deal with subjective experience. A T2-candidate machine would be unable to accurately communicate these feelings (i.e. do what we can do, regardless of whether it actually feels them) without substantial pre-programming. Such a brute-force manner of creating a “thinking machine” (if it is even possible in the first place, which I think it is not) seems to devalue the test and the notion of “thinking” to that of mere information processing.

    ReplyDelete
  25. The most interesting point raised in this reading is the separation between what we mean by "think", what thinking creatures can do and how they can do it, and what it "feels like" to think. This brings the discussion back to consciousness: how to determine whether a machine or program can really feel and, if it can think, how we would assess what it feels like for the machine to think. We could always ask the machine, but I don’t believe any answer would be accepted without skepticism (regardless of its novelty or believability).

    Harnad states that "whether either the human or the machine are completely predictable is irrelevant." Earlier in the thread Harnad mentioned that it would be difficult to imagine the T3’s internal mechanism for doing what it does and that “it would be like those huge maths proofs in which a computer is needed and no one can hold the whole proof clearly in their head!” Perhaps such a technical understanding is not necessary; perhaps some kind of meta-analysis can be made and a more comprehensive understanding reached at a higher level of abstraction - not maths or science, but metaphysics.

    ReplyDelete
  26. I found this annotation paper extremely interesting and easy to follow. Harnad's annotations clarified many of the points made by Turing. What I found interesting is that in the Turing Test the interrogator only interacts with the machine through language - no visual or physical contact is permitted. This seems fair because we should not judge a machine by its appearance, and we should definitely not label it as unintelligent just because it doesn't look like a human. However, can we actually separate mind and body? Is language alone enough to express and capture all the types of intelligence that humans have?

    ReplyDelete
    Replies
    1. I agree that the language Q&A format through which the Turing Test takes place fails to account for much of human intelligence, and I think Harnad touches on this when he refers to the “universal power of natural language”: it does NOT satisfy the Turing criterion of identical performance capacity, because it leaves out our nonverbal performance capacities. I said in a previous post that this medium of the written word puts the computer at a huge advantage in its ability to “pass” as a human; indeed, it overlooks many of the complexities of human ability – e.g. motor skills and problem-solving abilities. Harnad contends that T2 is not sufficient, and so we turn to successively higher levels of the Turing Test to account for these shortcomings and to properly gauge human intelligence.

      Delete
  27. With regard to the T3 level of robotic intelligence, there is a great emphasis on the embodiment of cognition. It seems that we cannot say computational machines cognize unless they are embodied. However, even if we build T3-passing robots successfully, this would not help us discriminate between vegetative function and cognitive function, since a variety of vegetative functioning supports cognition, and the explanation (computational algorithm) of how this robot does everything we can do confounds cognitive and non-cognitive functioning. Does the distinction between the two types of functioning become moot, then?

    ReplyDelete
  28. I'm still a bit confused as to the key differences between a "simulated" robot and an actual robot. I understand that a simulated robot is based in/on a simulated world; does this then assume some sort of joint intentionality in normal human interactions, based on a shared world?
    What exactly is a computer, if not a simulated robot? And for the original Turing Test, why can't a simulated robot work?

    ReplyDelete
  29. RE: The argument from consciousness
    "Jefferson (1949): "Not until a machine can [do X] because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain"
    "There is no way to know whether either humans or machines do what they does because they feel like it -- or whether they feel anything at all, for that matter. But there is a lot to be known from identifying what can and cannot generate the capacity to do what humans can do. "

    I wonder if in the future it will be possible for machines to have such feelings and emotions. As of now we might not think this is possible, but back in the 1800s people could never have guessed what today's machines would be capable of. I'm sure the ideas of touch-screens, Siri, GPS, etc. seemed like unrealistic ideals to many, yet now we see self-driving cars and robots that can make food for you, which is amazing. Is it realistic to put a ceiling on what technology can accomplish?

    ReplyDelete
  30. “On the other hand, something else that sounds superficially similar to this (but happens to be correct) could be said about scaling up to the TT empirically by designing a candidate that can do more and more of what we can do. And Turing Testing certainly provides a methodology for such cumulative theory-building and theory-testing in cognitive science.” –Harnad
    I think this idea is very important to keep in mind when comparing Harnad’s commentary to Turing’s original paper. To me, it appears that what Harnad is suggesting is a completely new test that is simply based on the Turing Test. While Harnad’s version proposes a stronger and stricter test, it is not the same as the original and should not be argued as such. I agree that to really approximate human consciousness a continuous, lifelong test is necessary, but this is not what Turing originally proposes. He is looking particularly to test whether computers are capable of thinking (not whether they have all the capacities and thoughts of humans), and having them pass his 5-minute version of the test should be adequate to prove this. He does not suggest that this proves these computers would then have all the capabilities humans do (which is highlighted in his exclusion of the physical and sensory properties of humanity). I agree that these sensorimotor aspects are very important to understanding humanity and creating cognitive models. In regard to the above quote, it seems that we have already succeeded in fulfilling Turing’s original thesis, and from there it is now possible to “scale up” to Harnad’s thesis through the addition of sensorimotor components, long-term indistinguishability, etc. In order to have the ability to scale from one to the other, it’s important to keep them as separate ideals, both important and valuable in their own respects. It is important, then, to determine how to scale up from the ability to pass the short version of the Turing Test to the all-encompassing Turing Test, which includes a lifelong ability to pass as a human in all respects. What would the in-between stages look like, physically and computationally?

    ReplyDelete
    Replies
    1. After class today this was cleared up, though: I now understand that my conception of the Turing Test was shaped by the way it is generally understood, namely as just the shorter version. I still think different versions/strengths of the Turing Test can be considered legitimate interpretations of the test and are useful for scaling up from the shorter, easier versions to the more complete ones. Keeping it as a step-ladder would be useful for the people developing this technology, letting them build and test as they go.

      Delete
  31. The article did an excellent job of making clear what Turing said, what he meant to say, and the consequent critiques. T3 is obviously the favoured level of testing, as it combines total indistinguishability of both sensorimotor and verbal performance capacities. This would cover the flaws in T2 testing, since no amount of data can make up for genuine answers to questions like "describe the sunset last night" or "what does rain smell like?". Turing's constraints, in an effort to direct future study, were met with criticism. I agree that T3 sounds like the ideal level, but what exactly does that mean? If no constraints are given, then no ideas can be shot down and replaced with stronger ones. Are all senses required for an accurate representation of "thinking", or are only some of them important? Movement, vision and verbal communication give a better picture of cognition, but what about the other senses? Is taste necessary? Are all five tastes necessary for a T3? How much clarification does the T3 level add when we aren't really sure what we are looking for, i.e. what it means to think?

    The word robot was soon used to describe a T3 machine; what makes a robot a robot (the kid-sib explanation), and can robots not be discrete-state machines?

    I also agree that the only way to accomplish this is with learning machines, but the mechanism through which a machine can learn is throwing me off. Humans come to a plateau in learning (language learning, for example); will a T3 have this built in, or is it feasible for something to learn forever? If learning were to go on forever, the T3 would come across conflicting inputs and colossal amounts of information. We are very good at being ignorant - unnecessary information leaves our attention, if it got any attention at all, and we don't get bogged down by conflicting inputs. What seems like a flaw of the brain actually serves to prevent an overwhelming amount of information. In creating a learning T3, do we program in ignorance or similar fallibilities? How would a machine decide what information is relevant?

    ReplyDelete
  32. Why does the test prefer the answers in “typewritten” format rather than handwritten? Since machines are programmed rather precisely in such minor respects, is it that humans will be less consistent in their handwriting (while maintaining their own “style”), whereas machines can be more consistent in their handwriting while not showing any personal style?

    ReplyDelete
    Replies
    1. I think the "typewritten" requirement is just to make sure the imitation game is controlled, so that the interrogator decides whether the candidate is a machine or a person ONLY by the performance capacity/the answers it provides. If the game/test were conducted in handwriting, the interrogator could easily distinguish them by looking at whether the handwriting has style (which humans are good at). That trick would enable us to distinguish them, but it is not the focus of the test. The test is about whether the machine is a successful reverse-engineering of how our brain performs/thinks (so that the machine can do what humans can do), not about our handwriting style. Of course, there is the question of whether the imitation game is a T2 TT or a T3 TT, but I guess the part mentioning typewritten versus handwritten is about the T2 level.

      Delete
    2. These comments reminded me of a video clip I saw about robots that are able to mimic human handwriting at a near-perfect level. That is to say, they're able to mimic the exact style of previously scanned individuals, including human errors.

      https://www.youtube.com/watch?v=LsZH7SS_lfQ

      I think the typewritten/email criterion is a dated one, meant to simplify the experiment before this technology existed. If somebody were to run a TT now, I think they could use this robotic handwriting for the responses and it would work just as well as typing. It might be even more convincing to an unsuspecting judge.

      Delete
    3. The video clip is very interesting! I guess that for a higher-level test the handwriting part might matter, for testing the equivalence of the output?

      Delete
  33. RE: What is intelligence?

    While the Turing Test may sufficiently capture a kind of intelligence, it goes way too far if its objective is to understand the essence of intelligence itself (or to answer the question “can machines think in any way at all whatsoever?”). A more suitable test for addressing such an objective might be: if building a system from scratch, what is the minimal “threshold” point at which the system would “cross over” from vegetative to intelligent? Where on the evolutionary spectrum of life does intelligence “begin?” Unicellular organisms? Insects? What does the simplest intelligent organism on Earth have that allows it to be “intelligent?” Should we ever feel remorse for harming such a simple lifeform? And if that organism is at least as simple as a tiny bug, why is it so hard to reverse-engineer it?

    ReplyDelete
  34. I’m glad the “frame problem” was raised, because the standard definition of “computers as RULE-BASED symbol-manipulating devices” certainly does not solve the problem at hand. In agreement with the argument from informality of behaviour, I would think it’s impossible to program a set of rules for every conceivable circumstance. It would require the “thinking machine” (or rather “learning machine”) that Turing proposes, since this is a problem of machine learning. In order for machine learning to work, we need to know what information is relevant to the outcome, and then tell the machine that these were the things that affected the output. In a linear system this can be found automatically through trends, etc., but in a non-linear system, such as our dynamic world, it is impossible to determine logically, with any degree of precision, what information affected the outcome, as there would need to be infinite controls and infinite trial-and-error. Whereas humans can look at the results of an experiment and come to different helpful conclusions, a robot would struggle to find valid conclusions at all. The frame problem not only poses the issue of needing this kind of learning to solve problems, but also asks how a being would be able to pick out the necessary information from the irrelevant (a toy illustration of the rule-table point follows below).
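
    A minimal sketch of that rule-table point (purely illustrative; the rules and queries below are hypothetical):

      # Illustrative only: a "rule-based symbol-manipulating device" with a fixed
      # table of canned circumstances. Any query outside the table exposes the gap
      # the frame problem points to: no rule says which unlisted facts are relevant.
      CANNED_RULES = {
          "what colour is the sky?": "Blue, usually.",
          "what is 2 + 2?": "4.",
      }

      def rule_based_reply(query: str) -> str:
          """Return a canned answer, or fail when the circumstance was never anticipated."""
          key = query.strip().lower()
          if key in CANNED_RULES:
              return CANNED_RULES[key]
          # A human would improvise from whatever background knowledge is relevant;
          # the rule table has no way to decide what is relevant to a new circumstance.
          return "NO RULE: circumstance not anticipated."

      print(rule_based_reply("What is 2 + 2?"))             # covered by a rule
      print(rule_based_reply("What does rain smell like?")) # not covered

    However many rules are added, the same failure recurs one query further out; that is the intuition behind the informality-of-behaviour argument above.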

    ReplyDelete
  35. (I think yet another piece of unnoticed equivocation by Turing -- and many others -- arises from the fact that thinking is not observable: That unobservability helps us imagine that computers think. But even without having to invoke the other-minds problem (Harnad 1991), one needs to remind oneself that a universal computer is only formally universal: It can describe just about any physical system, and simulate it in symbolic code, but in doing so, it does not capture all of its properties: Exactly as a computer-simulated airplane cannot really do what a plane does (i.e., fly in the real-world), a computer-simulated robot cannot really do what a real robot does (act in the real-world) -- hence there is no reason to believe it is really thinking either. A real robot may not really be thinking either, but that does require invoking the other-minds problem, whereas the virtual robot is already disqualified for exactly the same reason as the virtual plane: both fail to meet the TT criterion itself, which is real performance capacity, not merely something formally equivalent to it!)

    I am feeling a bit confused when reading this part.

    So it says a universal computer is only formally universal and does not capture all of a system's properties. Then what else can pass the TT? If there were a universal computer that (ideally) could capture all the properties - as if a successful reverse-engineering miracle had finally arrived - would there be a chance that thinking could become observable?

    The 2a reading mentioned that the only way to know whether something can think is to be that something. If the 2b reading quoted here is saying that “there is no reason to believe it is really thinking”, would it be too early to draw such a conclusion now?

    ReplyDelete
  36. RE: Sensorimotor Grounding

    What’s the difference between a T3 robot seeing a sunset for itself, and a programmer just storing all of the information that a T3 robot would get by seeing a sunset directly into the “brain” of a T2 machine? If sensorimotor systems are just means for acquiring information, why would the means matter if only the information itself is important?

    ReplyDelete
    Replies
    1. What about using Google deep-web search over the world's blogs and publications for appropriate responses, so that at the beginning of the Turing Test the algorithm chooses a demographic profile, or roughly circles around a personality? For example, there would be tons of data on teenagers: it could use data from Facebook, Instagram and various blogs to respond in the voice of a teenager.
      I guess this wouldn't be computation per se, because there is no real "storage" component here... and the process of designing this algorithm (as well as the process you described above) falls into the trap that Harnad describes, of thinking that the Turing Test is about fooling humans into believing they are interacting with a human, rather than about helping us explain human cognition.

      Delete
    2. Gus, canned instructions about what you should say about sunsets in reply to possible questions about sunsets is a recipe for making more and more clever Siris to fool people into thinking they understand -- but it's not what the TT is about, or for.

      The TT really wants to deliver the capacity -- and for a lifetime, not just a 10-minute phone call. Consider all the things Dominique can say about sunsets and tell me how to code all that in advance, as an algorithm!

      (You are right, however, that what matters, with a causal mechanism, is whatever state it is in -- hence what (future) capacities it has -- now, not the real-time history of how it got into that state or earned those capacities. But that's still not good news for algorithms and sunsets.)

      Lauren, hard-wiring google-search into T2 (or T3) would be cheating as surely as if you simply piped the messages to a real person to respond on behalf of T2 (or T3). And, yes, it's a trick for fooling people, not a way of generating human performance capacity and reverse-engineering the brain.

      Delete
  37. “On the other hand, just about all of us can walk and run. And even if we are handicapped […] we all have some sensorimotor capacity.”

    I agree with this argument (that a T2 device is not sufficient) in the sense that a computer stuck in a room with no ability to perceive the outside world will not experience nearly the same input as a human and will never be able to learn everything a human does (therefore it will never reach the ability level of an adult human brain). However, I do not agree that these capacities— walking and running— are necessary in their own right. If we are expecting the Turing machine to “do everything people can do” then I think this would have to include having senses and producing movements. However, I believe Turing’s goal was to come up with an operational definition of what it would mean for a machine to “think.” Although sensorimotor capacities might be necessary in order for “thinking” to take place, I wouldn’t categorize these capacities themselves as thinking.

    ReplyDelete
    Replies
    1. Hello Emma, that is a very interesting point. Like the paper, and like you, I do agree that T2, or just computation, is not sufficient to pass the TT. But I was curious about your thoughts on sensorimotor capacities and their role. You said that sensorimotor capacities might be needed for "thinking" to take place, but that the capacities themselves would not be "categorized as thinking." Perhaps you're right; I haven't thought of them as either included in or excluded from "thinking", but I also haven't concluded what their role or link to cognition is. So I was wondering how you see them. Would you see these sensorimotor aspects as part of the input and, perhaps, the output of "thinking"? Or are some of them simply reflexes?

      Delete
    2. Emma, moving and sensing isn't thinking (as toy T1 robots show), yet you may need to be able to move and sense to be able to think. But a robot is not necessarily a computer with legs and eyes either -- especially not a T3 robot. There may need to be other dynamic (noncomputational) structures and processes inside its head too.

      (I would say the TT is not an operational definition of thinking but a methodology for testing whether you've succeeded in reverse-engineering thinking.)

      Grace, I don't know how Emma would answer your question, but I think the reason sensorimotor capacity and experience are needed for cognition is that that is the way our brains detect the features of things that allow us to categorize them (do the right thing with them), including naming them: That's how we can pick out the referents of words. And that's what makes words grounded symbols, rather than the ungrounded symbols of formal mathematics and computation.

      Delete
  38. "Turing's proposal will turn out to have nothing to do with either observing neural states or introspecting mental states, but only with generating performance capacity (intelligence?) indistinguishable from that of thinkers like us." (From page 1)

    Through this quote, as well as the "Computing Machinery and Intelligence" reading, we are able to discern whether Turing is a functionalist. I think he believes that whether or not we can know what is going on inside a machine is not what is important; rather, the focus is on the outcome, the real performance capacity. We can remain largely ignorant of what is going on inside and rely on the outcome to answer the question of whether machines can think. The notion of performance capacity is later elucidated again in Turing's notion that "thinking is as thinking does."

    "Nor is it relevant what stuff they are made out of, since our successful mind-reading of other human beings has nothing to do with what stuff they are made out of either. It is based only on what they do" (Around page 14)

    This further emphasizes performance capacity, the outcome of a machine, "what they do," and I think it supports functionalism and its role in determining whether or not machines are able to think. Whether or not we know the inner workings, the physical substrate, is irrelevant.

    Given the two quotes above, I am still not convinced that the performance capacity of a machine, or what it "does," is enough to say that machines can think. To say that a machine can think just because we are not able to distinguish it from a human is questionable, since Searle has demonstrated through his Chinese Room example that it is possible to deceive us.

    On the note in the reading about how autonomy in the world is an "important feature for a Turing Test candidate" - is this perhaps why we could not answer the question of why we feel through a machine? Even if we utilized reverse engineering by building a machine to figure out the "how" question, it seems that we will not be able to answer the "why" question due to the lack of autonomy of a machine.

    ReplyDelete
    Replies
    1. Brittany, kid-sib doesn't know what "functionalism" means: I assume computationalism is a kind of functionalism, but are there other kinds too?

      Turing is only asking for "weak equivalence" (I/O equivalence) with his Turing Test, although, strictly speaking, weak/strong equivalence only applies to computationalism: same I/O or same I/O + same algorithm. But even for T3 (which can't be just computational) Turing only asks for same I/O. In fact that's his whole point with the TT.

      Turing agrees that I/O equivalence does not guarantee thinking. He just says it's the best we can do. (There's still the other-minds problem, which is more serious with machines than with real biological people. And then there is also ordinary underdetermination: There may be more than one way to explain all the data, and there's no way to know which (if any) of the explanations is the right one. Turing suggests not worrying about either of those, and he's right...)

      Searle thinks he has refuted the TT, but he has only refuted computationalism (T2). And he has not even shown that cognition is not computation; he has just shown that cognition is not just computation.

      "Autonomy" in this context just means the T3 has to go into the world and find out what's what for itself. It's not a matter of describing it all in advance, in words (or computations).

      TT will not explain either why or how we feel; it only explains how and why we can do what we can do.

      Delete
  39. If I remember correctly, Stevan says that “Turing is most likely not a computationalist” and also that “thinking is not computation.” The paper is persuasive in reasoning towards these statements; in attempting to summarize them, I am also hoping for clarification if there are any mistakes. Computation is a powerful tool in cognitive science, as described by the strong Church/Turing Thesis; to relate this back to the levels, computation alone applies only up to T2, whereas T3 onwards are dynamical systems. So when Turing proposed the imitation game, he used verbal performance to evaluate the candidate, but, as the paper pointed out, only as an “intuition-priming example,” without meaning to imply that all “thinking” is verbal. As the T2 email pen-pal example further demonstrates, the verbal performance of a T2 would break down if it were questioned closely about the qualitative details of sensorimotor experience - for example, when conversing about an analog photo attached to the email. Therefore it seems that, despite being an email exchange, the TT expects a successful candidate to perform beyond T2 functionality, which loops back to the hypothesis that Turing thinks T3 need not be tested directly because T2 is grounded in T3.

    However, there is another question: a T2 can successfully simulate formally what a waterfall is, and perhaps describe it in words, but it cannot answer about the sensorimotor experience of feeling the wetness of a waterfall. Yet even a T3 candidate, with the autonomous sensorimotor capacity to experience a waterfall in the real world, could not put into words what it is to feel a waterfall, since it is impossible to convey in words what something feels like (as in describing the colour red to a blind person). I agree that a successful TT candidate need not be just computational. So does this mean that it would take a T3 candidate to pass the TT, because it has the functionality to process the input through sensorimotor means (and subsequently converse about the photo), and that by its passing the TT and by its autonomous sensorimotor experience of the real world, we would be able to assume that it feels as we feel?

    It seems that in order to pass the TT the candidate will need functionality beyond the verbal (T2), or indeed beyond any computationally simulated model of T3. Turing's call for purely verbal functionality is merely to remove prejudice about appearance and structure. From what I understand, Turing’s proposed methodology is fundamentally about understanding how humans cognize. So, to clarify: by successfully building a candidate that could pass the TT, we would have a cognitive model? And this cognitive model would carry an extra degree of uncertainty or underdetermination, because of the other-minds problem, within cognitive science?

    ReplyDelete
    Replies
    1. Grace, not only is Searle's "Strong AI" exactly the same thing as computationalism, but Searle's "Weak AI" is the same thing as the Strong Church-Turing Thesis.

      T2 only tests verbal I/O, but to be able to pass T2 ("Stevan Says") the candidate would still have to be a robot that could pass also T3, if you were testing it.

      Yes, you can't explain feelings in words except to someone who has had that feeling or something like it.

      We could assume Dominique feels, because she passes T3. But because of the other-minds problem (and not just because of normal scientific underdetermination) we can't be certain. Cartesian certainty is too much to ask of science, but with T3 there's more at stake than whether or not we have the right causal theory, because whereas T3 has explained (at least one way) how and why we can do all that we can do, it has in no way explained how or why Dominique feels (if she does). That would be the hard problem. So if we were wrong about Dominique, and she was really a Zombie, we would have made a bigger mistake than if we came up with the wrong causal explanation for her doing-capacity (because of ordinary scientific underdetermination).

      But with Dominique, even though there is a bigger risk that she's a Zombie than with a real person, the right (and merciful) assumption under the circumstances is that we should not kick her.

      This applies even more strongly to other species.

      Delete
  40. Re: “if telepathy (true mind reading) were genuinely possible”
    One of my professors, Eduardo Kohn (remaining mindful that he is on equal ground with pygmies!), writes in his book How Forests Think about the ways in which some Amazonian tribes (by their own report) speak to forests (meant in the literal sense, just as you and I can communicate through words). They claim that by thinking with it, they know that it (the forest) can think. One way they do this is through dreams, a function of the mind we tend to dismiss as not being empirical. On a side note, maybe this Amazonian mode of thinking is compatible with computational theory, where the world interacts through a set of inputs and outputs, comprising a mind defined by the sum of all existing minds/realities. To give a second example from Kohn’s report, he claims that some Amazonians believe they inhabit the mind of a jaguar by putting on its skin and imitating its physical movements. They claim that by mimicking the jaguar’s movements, they as humans can be a jaguar without having its physical body (by this argument, just as I can’t have the body of my roommate, I can sympathize with her because living with her has had me in some ways mimic her ways of living, making her mind part of mine and vice versa). This is not the same thing as knowing what she is thinking, but rather understanding my mind as part of hers. The Amazonian and roommate examples would of course differ in degrees of “obscenity” (how familiar the notion can be to me/you), but I think they follow a similar logic.

    If we assume the problem of other minds is unsolvable, can that not point to another kind of mind-reading we may be excluding from the picture, that of living in our body through the minds of others? Telepathy, in the sense of knowing exactly what the other mind is thinking, is impossible, but I don’t think that rules out other ways of understanding the mind.
    Is it possible that this Amazonian notion of mind - as belonging to others, and as a form of “true” mind-reading - is part of the reason we have yet to pass the Turing Test?

    ReplyDelete
    Replies
    1. Krista, "panpsychism" or "animism" is attributing thinking and feeling to things that don't really think/feel. You could think of it as being over-liberal in Turing-Testing! It's a kind of magical thinking. If you want to reverse-engineer cognition, you can't use magic! But I don't doubt that some cultures may be much better at reading the minds of animals than ours is.

      I'm afraid kid-sib has no idea what you mean by "a mind defined by the sum of all existing minds/realities" or what it might have to do with computation. I didn't understand the point about "obscenity" either.

      Delete
  41. To me, the line between T3 and T4 is hard to distinguish...
    For instance, what does Turing *really* consider to be necessary for "total indistinguishability in robotic (sensorimotor) performance capacity"?
    If having indistinguishably developed sensorimotor capacity confers the ability to delineate oneself from the external world through all possible senses just as a human does, mustn't the sensory apparatus exactly match that of a human?
    That is to say, what possible machine could "know how it feels" to hit one's funny bone without possessing an ulnar nerve? What artificial alternative could guarantee the same sensorimotor performance capacity if it doesn't resemble the actual thing?
    It seems to me that it would be hard to design a mimicry of human function that mechanistically resembles anything other than the human body... and thus the quest to attain a T3 might as well be T4 in this regard.
    Of course, this brings to mind the "Brain in a jar/vat" scenario, but this would reduce us back to T2 status in terms of physical ability...

    ReplyDelete
    Replies
    1. Cole, T3 only needs generic human capacities. Dominique need not know what it feels like to hit her funny bone (many people don't). And even two humans could never explain to one another what something feels like if one of them has never felt it, except inasmuch as it resembles something they have already felt. (Verbal descriptions of feelings are rather like computer simulations: they only convey the feeling to someone who has already felt it. Turing's point is just that it is unreasonable to ask or expect more of a T2 or T3 than we would from any other real person.)

      As to brains in vats: T4, deprived of its body, would not become a computational T2.

      Delete
  42. This comment has been removed by the author.

    ReplyDelete
  43. In “Computing Machinery and Intelligence,” Turing outlines a very general hierarchy so we are equipped with the terms and concepts to discuss, in very broad terms, our potential progress in creating cognitive AI.

    Harnad’s paper thoughtfully clarifies Turing’s main points and discusses the implications of Turing’s paper for empirical research on cognition and computation.

    However, I am still left wondering about the practical implications the T-hierarchy has for our work. It’s just so broad. Have more practical goalposts been proposed, between Toys and T2, that would signify REAL progress towards understanding our consciousness?

    Is it possible that there are ways we could pass T2 without understanding much more about our own cognition than we do now? This level of the T-hierarchy has a high level of underdetermination and only requires weak equivalence (not the same algorithm, only the same output for a given input). Turing’s awareness of how little practical knowledge is required to pass T2 is evident in his prediction that, within about fifty years, an average interrogator would have no more than a 70 per cent chance of making the right identification after five minutes of questioning. I take the fact that the prediction is framed in terms of fooling the interrogator as an indication that he thought T2 is a somewhat arbitrary milestone, in that passing it does not require practical comprehension of our own cognition. I think that we can pass T2 (not lifetime T2, but a situational T2) without gaining the underlying knowledge necessary to design a T3. I reckon that fairly soon we could pass the Turing Test in some capacity by using “big data + deep learning” to generate personality-specific pen-pal responses. For example, we could make an algorithm that learns from all of the data available via Facebook, Instagram and the blogs of the world to imitate how a teenager would respond (see the toy sketch at the end of this comment). My question is: will the technical feats of this “big data + deep learning” inevitably bear some resemblance to our cognition? Is it possible that the minimum machine-human equivalence required for T2 is too low to be enlightening? In other words, is it possible that we pass T2 and have learned very little about ourselves? Could we pass T2 and be off-track on the road to T3?

    While Turing’s hierarchy has been and will continue to be useful in providing a scaffold for philosophical discussion, and Harnad’s paper has allowed us to better home in on the most important aspects of that discussion, neither helps us direct our efforts towards understanding our own cognition in a practical/technical sense. I think we can already say with some confidence that there must be some equivalence between big data + deep learning and our cognition, but are there better ways to direct our work between toys and T2, so that we have a better chance of building a useful T2 that can point our work towards T3? Personally, I’m interested in the number of goals we need to accomplish between toys and lifetime T2. Perhaps themes are beginning to emerge from the work of computer scientists on the Loebner contest, as well as from the work in big data + deep learning.
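
    A toy sketch of that situational-T2 pen-pal trick (purely illustrative: the corpus is hypothetical, and simple word-overlap retrieval stands in for "big data + deep learning"). It returns whatever canned reply best matches the incoming message, which might fool a judge for a few exchanges while explaining nothing about cognition:

      # Illustrative only: a retrieval-based "pen-pal" that parrots the reply whose
      # stored prompt best overlaps the incoming message. Fooling a judge this way
      # is not reverse-engineering cognition.
      HYPOTHETICAL_CORPUS = [
          ("what did you do this weekend?", "mostly hung out with friends tbh"),
          ("do you like school?", "it's ok i guess, exams are the worst though"),
          ("what does rain smell like?", "kind of earthy and fresh? hard to describe lol"),
      ]

      def word_overlap(a: str, b: str) -> float:
          """Crude similarity: fraction of shared words between two strings."""
          wa, wb = set(a.lower().split()), set(b.lower().split())
          return len(wa & wb) / (len(wa | wb) or 1)

      def pen_pal_reply(message: str) -> str:
          """Return the canned reply whose stored prompt is most similar to the message."""
          best = max(HYPOTHETICAL_CORPUS, key=lambda pair: word_overlap(message, pair[0]))
          return best[1]

      print(pen_pal_reply("How does rain smell to you?"))

    Scaling the corpus up (Facebook, Instagram, blogs) changes the coverage, not the principle: the candidate is still doing lookup-and-imitation, which is why passing a situational T2 this way need not teach us anything about how we cognize.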

    ReplyDelete
  44. “The Annotation Game” brought up the distinction between simulation and reality: creating a program that is able to simulate a system is not the same thing as whatever that system does in the real world. However, I’m a little confused by the distinction between a real-world robot and a computer-simulated robot. When we’re talking about the TT, I understand that we’re talking about real-world robots. If this is true, why is T2 considered a real-world robot even though its interaction is limited to e-mail exchanges – how does this qualify as a real performance capacity? When we’re talking about computer-simulated robots, are we then referring to T0 or T1? I think I’m getting mixed up with T2 because its capacities are strictly digital. Perhaps I would understand better if someone could provide an example of what a computer-simulated version of T2 would be? (Something similar to the distinction between a real plane and a computer-simulated version of a plane.)

    ReplyDelete
  45. Why is there such a rigid comparison of what computers are capable of with the abilities of biological forms? I’m more interested in what computers can do that humans can’t. Would a computer fail a Turing Test if it gave an answer a human could not? And, on the other hand, is there a reverse Turing Test? While it’s pretty obvious that computers can do things we can’t (or do them faster), I think it’s more beneficial to explore this unknown. Can we make a machine prove what cognition and thinking are if we can’t? If you can simulate everything, then can’t we simulate a computer that has “all the answers” about machines that we’re searching for?

    ReplyDelete
    Replies
    1. Nicholas, from my understanding the robot would fail the Turing Test if it did something that a human being couldn't. The point of the test is to see if it can pass as human, so if it did something none of us could, we would start questioning whether it was from MIT. However, you pose some very interesting questions. If it could prove what cognition is and answer questions we weren’t able to, that would be the greatest discovery on the planet. However, I don’t think it is possible. Since we are reverse-engineering, a robot’s mental capabilities would not be far off from ours, as we are the ones writing in the inputs and the possible outputs. However, it could evolve and create. This connects to how creative and original a robot could be, which relates to what Harnad calls the Granny objections (e.g., the claim that robots could not be creative). So Turing would say it is possible. What I think is that a robot could have some different insights, but that it won’t ‘hold all the answers’ as you hypothesize.

      Delete
  46. This comment has been removed by the author.

    ReplyDelete
  47. “What we mean by "think" is, on the one hand, what thinking creatures can do and how they can do it, and, on the other hand, what it feels-like to think. (…) Now we ask: Do the successful candidates really feel, as we do when we think? This question is not meaningless, it is merely unanswerable -- in any other way than by being the candidate. It is the familiar old other-minds problem.”

    Why is it crucial that candidates really feel the way we do? If thinking creatures can do what we do (think and whatnot) and do it the same way we do it (as mentioned, this can’t be known, but it could be inferred), why is it necessary that they feel as well? It seems to follow from this explanation that we could never know whether a machine thinks, because of the other-minds problem – i.e., because we cannot know whether it feels as we feel when we think. But if we know that they do what we do, and could somehow be certain that they do it in the same way as us (this could only be inferred, but for the sake of argument), can’t we be virtually certain that they think, without knowing whether or not they feel as we do at all?

    ReplyDelete
  48. In response to “can machines think?”: I agree, I think this is undecidable. We don’t even know how humans think. As mentioned, all we know comes from observing others and from our own introspection, but we aren’t aware of exactly how we are doing it. Furthermore, we can never know whether others are telling the truth. No one has figured out exactly what is happening in our brain that creates that little voice in our head that is our consciousness. How would you program “thought” and the concept of feeling if we don’t even know how we’re doing it? No one can tell if someone else is “feeling”, because we can’t truly know what they are thinking or feeling. Even if they were to voice their feelings, we can never actually experience exactly the same thing. Also, a bit unrelated, but if you’re programming a machine to think a certain way, would that count as the machine thinking? It’s not of its own “free will”; it’s just because we programmed it to feel that way.

    ReplyDelete
  49. I appreciate how this reading outlined the different levels of the Turing Test. I found it interesting how T4 seems to be an awkward middle ground between T3 and T5, with nothing but arbitrary lines distinguishing the three. Is T5 what cognitive scientists are ultimately striving for, in terms of reverse-engineering the brain and cognition? Is this something we will ever achieve? Or are T3 and T4 sufficient, as Turing (and Harnad?) suggests?

    Harnad distinguishes between artificial intelligence and cognitive modelling. I was a bit confused by this, since I thought AI was essentially aiming towards CM. It appears that one can have AI without CM, but not CM without AI. Will we ever successfully model cognition through artificial intelligence? In my opinion, I don’t think we will ever reach the point of discovering the workings of the black box by reverse-engineering the brain/cognition.

    ReplyDelete