Saturday 2 January 2016

10d. Harnad, S. (2012) Alan Turing and the “hard” and “easy” problem of cognition: doing and feeling.

Harnad, S. (2012) Alan Turing and the “hard” and “easy” problem of cognition: doing and feeling. [In special issue: Turing Year 2012] Turing100: Essays in Honour of Centenary Turing Year 2012, Summer Issue.


The "easy" problem of cognitive science is explaining how and why we can do what we can do. The "hard" problem is explaining how and why we feel. Turing's methodology for cognitive science (the Turing Test) is based on doing: Design a model that can do anything a human can do, indistinguishably from a human, to a human, and you have explained cognition. Searle has shown that the successful model cannot be solely computational. Sensory-motor robotic capacities are necessary to ground some, at least, of the model's words, in what the robot can do with the things in the world that the words are about. But even grounding is not enough to guarantee that -- nor to explain how and why -- the model feels (if it does). That problem is much harder to solve (and perhaps insoluble).

80 comments:

  1. RE: “The hard problem is explaining how and why we feel—the problem of consciousness”

    In order to explain the hard problem, one must find a way to explain “why” doing is accompanied by feeling. What function in and of itself does feeling serve? It is possible that feeling is just an epiphenomenal experience that serves no inherent function, but this just seems like a dismissive way to get around the hard problem.

    It is possible that the only way to answer the questions of feeling is to find a way to “transform” the “feeling” that a person feels into third-person, interpretable data that can be objectively studied (similar to William James’ ‘I vs. me’ distinction). This is what Chalmers is trying to do with his construction of a subject’s “heterophenomenological world”. I am sceptical, however, that this transformation is really possible. A study was conducted (see link below) revealing distinct neural populations for different frames of self-reference, which suggests that the experience of the actual “feeling” may not be equivalent to the “interpretation” of the felt feeling.

    https://oup.silverchair-cdn.com/oup/backfile/Content_public/Journal/scan/2/4/10.1093_scan_nsm030/3/nsm030.pdf?Expires=1490120906&Signature=dD7U07dJOKpFNlzJHGWlpc7~RaES8PkuNjqAswu2sjPFLTjU2Lynene4qdjqoqF-2rlHxYWYOgBz2S72-j7YAUmKRdAYoD2ULAvjMDRBE7XUn1GAy6Tsk9aXmkbpJCRtNd2vKXvfRG9VGt9ZHg88U2U7Hfa-t-3iPhgmKRP6pQIf7z0yXebyTvZlPe4JX6-wqfHf-qe54~W34qnMMKDB5Ijfp8e47FbP58zGDnEb5q6KekfFUPkv8ePvceB8KWBciwfKR4VdjdLhut83leDxnpT8CcEFFhojxergjW2JNU1pQq7-sEBB~djvr6sSy4m52nOKp4nGgSOzTYMRK~9sLQ__&Key-Pair-Id=APKAIUCZBIA4LVPAVW3Q

    1. Manda, how do the correlates of feeling (when and where) explain how or why we feel (the hard problem) when they cannot even explain how and why we do (the easy problem)? And the hard problem is not how we interpret what we feel, but how and why we feel anything at all.

      ("Heterophenomenology" was Dennett; Chalmers was just calling the "hard problem" hard...)

  2. First off, I think this piece provided a very thorough yet concise summary of most of the topics we’ve covered so far in class, so if someone asks me what the course is about later I’ll tell them to read this (plus one of the readings on categorization probably).

    Secondly, this piece helped remind me that Turing was the giant who started it all: “Turing was perfectly aware that generating the capacity to do does not necessarily generate the capacity to feel. He merely pointed out that explaining doing power was the best we could ever expect to do, scientifically, if we wished to explain cognition.”

    I totally believe the argument that Turing wasn’t a computationalist, and I don’t doubt that he was one of the first to realize that maybe all we can explain is our doing capacity as opposed to our feeling capacity. However, I’m curious to know how much of this Turing actually wrote about and how much is scholarly interpretation (or “Stevan says”). Is there a paper of Turing’s (besides “the Imitation Game” one) where he discusses cognition specifically, or are most people guessing?

    1. Sorry I also should have added that if anyone does know of a paper and where to find it online, please link me!

    2. Olivia, I wonder what I would feel if someone sent me this reading before registering for PSYC 538. I think I would (rightfully) feel extremely intimidated by the level of abstraction at which some of these topics are presented (though necessarily so!). This is a nice moment to reflect on how many concepts feel like second nature to us (e.g. the pillars of computationalism, the Turing hierarchy, etc.) that may have sounded like utter jargon on the first day of class.
      In any case, I was also curious about Turing’s true intentions with all this. Since he was so brilliant, it would be shocking to think he ever thought cognition was just computation, or that the verbal T2 robot in its pure (ungrounded) form would be enough to pass the TT. I just can’t help but think about how many giants’ works are scrutinized by later scholars, only for the giant to respond “hahah LOL no, that is not what I meant at all”. It reminds me of a joke where an English class spends an hour discussing the symbolism behind a red curtain in chapter 7, to which the author rolls over in her grave, since the curtains had no further meaning whatsoever. Though I am not criticizing Professor Harnad or anyone else who stipulates these things about Turing (since we all agree it is the reasonable conclusion), I would love to hear what Turing himself would think about all this. I bet he would write a hell of a response to Searle (“The Alan Turing Reply”).
      Alright, that is all for now. I realize none of my post was particularly insightful. As you said, this reading was basically a summary of a lot of things covered. I’ve had fun though!

    3. I really liked the placement of the paper this far into the course because it provided a nice summary of how what we have learned so far this semester is connected. However, it would've been nice to also receive this paper at the start of the year (though maybe just as something to read, not do a skywriting on). To be honest, I had no clue what I was getting into when I signed up for the course, and I think this paper explains things clearly enough that I would have been able to grasp at least the basics of computation and Turing's beliefs.
      I do appreciate, however, that we were able to form our own opinions on computationalism and grasp a better understanding of the course before Harnad delved into why he didn't think Turing was a computationalist. After what we've learned, I agree with this belief, and while I had to read this sentence about 4 times to understand it, I think it phrases what Turing's goal was perfectly: "Turing's contribution was to make it quite explicit that our goal should be to explain how we can do what we can do by designing a model that can do what we can do, and can do it so well that we cannot tell the model apart from one of us, based only on what it does and can do." Turing isn't trying to say that he can fully recreate a human, but that he can recreate the processes/outputs.

    4. Hi Olivia, I had the exact same thought when I read that “Turing was not a computationalist” – did Turing explicitly state that T2 would be insufficient to explain cognition or is this Harnad’s interpretation of Turing?
      Given that Turing actually formulated the Turing Test as “purely verbal, via exchange of written messages, with the candidate out of sight” (T2), while conceding that any TT-passing machine would have cognition (i.e. be a “thinking machine” – recall Article 2A), it does seem plausible to argue that Turing was a computationalist. After looking back at Article 2A on our reading list, Turing says that “the condition of our game makes these disabilities [i.e. physical appearance & capacity] irrelevant” and focuses primarily on the “intellectual capacities of man”. Therefore, Turing’s intention in formulating TT as computational was to separate the physical and mental capacities of man, and focus on the latter. In this way, is Turing suggesting that physical capacities (i.e. sensorimotor capacity, dynamic capacity, movement) are irrelevant to one’s intellectual capacities, or just ridding physical appearance from the picture so as not “to penalize the machine for its inability to shine in beauty competitions”? Seeing as Article 2A is the only work of Turing’s that I’ve read, from what I gather, Turing thinks that computation is sufficient to convey cognition, but I don’t think he’d go so far as to say that cognition is computation. In other words, as this article posits, Turing was a proponent of the physical version of the Church-Turing Thesis, in that he “believed any physical, dynamical structure or process could be simulated and approximated by computation as closely as we like”. Frankly, I agree with Jessica that Turing probably did not intend for us to read so deeply into it.

    5. Personally, I struggled with some of the topics when they were presented in the first readings. I think if this had come first, I would have been better able to read and understand the Turing and Searle readings. I feel like this was the perfect "Kid Sib" explanation of those topics and helped me better understand the relationship between Searle's room and computationalism.

  3. This article was a great summary of much of the course’s content so far. However, I take issue with the emphasis on: “The contribution of Descartes' celebrated "Cogito" is that I can be absolutely certain that I am cognizing when I am cognizing. I can doubt anything else, including what my cognizing seems to be telling me about the world, but I can't doubt that I'm cognizing when I'm cognizing.”

    Cognition is as cognition does. It is not some isolated thing within the walls of one’s head. Descartes’ formulation of the absolutely certain cogito, I think, has hurt cognitive science more than it has helped.

    I prefer to think of cognition in the pragmatist tradition: “We can begin with Peirce's canonical statement of his maxim in ‘How to Make our Ideas Clear’.
    Consider what effects, which might conceivably have practical bearings, we conceive the object of our conception to have. Then, our conception of those effects is the whole of our conception of the object. (EP1: 132)”

    Referring to the above quote, our ‘object’ is cognition. It of course has practical bearings. Cognition is as cognition does. Talk of cogito and certainty only muddies the water.

    Moreover, the Turing Test does away with talk of cogito and certainty by focusing on what can be done: performance capacity. This view of cognition is much more productive, aligning with William James’ conception of truth (in this case, getting the truth of what cognition is):
    “Ideas … become true just in so far as they help us to get into satisfactory relations with other parts of our experience. (1907: 34) Any idea upon which we can ride …; any idea that will carry us prosperously from any one part of our experience to any other part, linking things satisfactorily, working securely, saving labor; is true for just so much, true in so far forth, true instrumentally. (1907: 34)”

    1. In defence of Descartes, the phrase "cognition is as cognition does" is no more illuminating than "cogito". In particular, I have always had trouble with the definition of cognition as everything that a person can do. It is evident that cognition is behavioural capacity, but not all behavioural capacity is cognitive. We have identified certain vegetative functions that are not cognitive. I believe Descartes was trying to illuminate this boundary, which we have not much discussed in this class, since thought is clearly cognitive. I do believe that cognitive robotics will eventually create a T3-passing, behaviourally indistinguishable robot that has all of our capacities. However, I wonder if we will truly have a causal mechanism of cognition if we do not address the boundary between cognitive and vegetative function. It seems we would simply have a causal mechanism for all behavioural capacity, some of which is cognitive but much of which is vegetative. Is it not part of cognitive science's project to specifically circumscribe the "object" of cognition rather than the much larger set of human behaviour?

    2. @YiYangTeoh I agree with you. I too have trouble with the approach to cognition as everything that a person can do. Not only are there vegetative functions, but what about habitual functions? When actions are converted by basal ganglia structures to an “auto-pilot” behavior, these actions are no longer subject to cognition. Driving, for example, is accomplished by basal ganglia structures. When you see a squirrel run in front of your car you reflexively slam on the brakes. Then, after a delay, you begin to cognize about what just happened. The neurocircuitry supports this. There are direct connections from the neocortex to the neostriatum. This path allows the initial sensory information, which has been processed by thalamic structures, to be sent directly to the neostriatum (subcortex), which will produce rapid motor action. This motor action is taken rather quickly (slamming on the brakes) rather than waiting for feedback from the neocortex to provide a “top-down” influence. There are no direct connections from the neostriatum to the neocortex, rather many indirect pathways. This allows for the rapid action of slamming on the brakes rather than the influence of cognition to slow down and think about what course of action should be taken. Honestly, after taking Human Cognition and the Brain with Dr. Petrides, I was sad to see how much cognitive neuroscience was dismissed in this course.

      If the brain is the root of the mind, I am still lost at the idea that studying the brain will tell us little about the mind. Perception and cognition go hand in hand; studying how our brain divides the universe, and what we offload to habitual behavior and vegetative functions, all seems relevant to understanding our capacity to cognize. Cognition is not the brain's main function. In fact, I would argue that human cognition is a micro aspect of all the brain's operations. However, human cognition is the root of *human* existence. Cognition is what separates humans from the rest of the animal kingdom. When we take a step back from the social utopia we are so often blinded by, the truth is we are talking animals. We are no better than a monkey throwing poo or a spitting alpaca. Our brains and behaviors are natural and barbaric (animalistic). We forget this because we have rooted our existence in our small capacity to categorize and communicate (cognition). Computational models and all this reverse engineering separate our ability to cognize from the rest of the brain's abilities, when in fact the system operates as a whole. I believe that if you want to understand how and why we cognize, you have to understand every other aspect of what our brains are doing. Separating the function of cognition from the rest of the brain's abilities is, I believe, another example of how we are so often blinded by this social utopia into which we have submerged our true animal selves. (I believe human cognition is what many have deemed working memory, and I wish we had been able to discuss this more in class)

  4. This may be a bit nit-picky. However, Searle’s main objection is based on what it feels like to ‘know’ something. This is also supported by Descartes when he says, ‘I can be absolutely certain that I am cognizing when I am cognizing’. These all relate to overt feelings. However, I would argue that large portions of cognitive capabilities are based on intuition and not on knowing overtly. For example, when playing games like Go, it is possible to win based solely on intuition, making moves without ‘thinking’ rationally. How do these fit? The same phenomenon is seen with unconscious priming and mentalism tricks: the participant will give an answer without feeling as if they ‘know’ why. Would these processes no longer count as cognitive then?

    1. I think they do count as cognitive, but perhaps on a different level of consciousness. Instead of ‘thinking’ and focusing mental resources, intuition is, in my experience, the phenomenon where somehow, innately, the answer or appropriate action surfaces – without too much reasoning initially. Rather than calling this subconscious, I do think it represents cognition. Some part of the brain is dredging through information and experience to come up with this intuitive capacity. In my experience, I sometimes question the origin of my intuitions, and once I get ‘deep enough’ into figuring out where this feeling came from, I realize it’s just the result of accumulated experiences and past decisions – all of them made rationally and with thought – so that future ones become more innate. Again though, this is just my opinion on how intuition works.

    2. Introspection (remember?) doesn't tell us where thinking (cognition) comes from or how it works (remember Mrs. Pouley (sp. ?), the 3rd grade schoolteacher?). "Intuition" doesn't tell you either. But if, while you are "intuiting" something, it feels like something to be intuiting, then that's just yet another example of feeling, yet another instance of the hard problem.

    3. Hello Valentina, regarding the phenomenon you suggested with unconscious priming and mentalism tricks – that "the participant will give an answer without feeling as if they ‘know’ why" – these unconscious know-hows only seem to be performance capacity and, like Amar said, are not considered to be cognition.

  5. I found that this short, pleasant read concisely summarized many of the topics we have covered in class over the past few weeks. From the Church-Turing theses, to the Imitation Game and Turing Tests, to the Symbol Grounding Problem and more, we’re given something akin to a ‘lite’ version of our reading materials. I believe a strength of this article comes from the intertwining of many cognitive science ideas, presented with the presumed opinions of Turing, as he may have thought them. One quote sums up the current problems in the field – hard and easy: “Generating the capacity to do does not necessarily generate the capacity to feel”. Instead, only explaining how we do things, how we are able to go through the motions, might be as close as we’ll get to solving the ‘easy’ problem.

    1. I feel like the easy problem for the mind is already difficult to solve because, for example, according to Fodor’s paper we read before, our brain imaging methods are not advanced enough to accurately solve the where/how problem. As such, we still have a long way to go on the path to solving the hard problem.

    2. Zhao, I agree Fodor's paper sheds light on our inability to solve the easy problem based on brain imaging. However, we do have more information regarding associations in how our brain operates. Certain "probable" neural pathways for certain behaviors and emotions are coupled with certain neurotransmitters and hormones. These associations postulated by science do bring us a bit closer to the easy problem but will never be able to answer the hard one.

    3. Zhao, I don't think that Fodor is suggesting that our brain imaging methods are not advanced enough. Instead, I think he is pointing out that where things happen in the brain does not tell us anything about how they happen. I find his car engine analogy really useful: he says that if you know that the purpose of a carburettor is to aerate the petrol, what other information is revealed by knowing where in the engine it is? (unless of course you want to take it out)

    4. Peihong, I agree that we have a long path to solving the hard problem, but an alternative point of view is that this path is not necessarily long, because the solution to the easy problem precedes it. Some view solving the easy problem as a way forward to solving the hard problem (which might be unsolvable regardless), while others view them as separate, unrelated problems. It is up to you how you want to frame this, and I would wager that there are compelling arguments for both sides, but perhaps we should just concentrate on solving the easy problem, as Turing suggested in the first place.

  6. Very nice summary. This paper also correctly points out that Turing set the goal for cognitive science--solving the easy problem. He was a true Giant.

    If we create a machine that does everything we do, we have answered the how and why of cognition. However, solving this has nothing to do with the hard problem. I am tending to agree that the hard problem is unsolvable through causal answers. That's why cognitive science should not attempt to address this problem.

    1. Soham, do you mean “do everything we can do” through just actions (a T3 without language capacity), or through language as well (like T2)? If you were referring to a T3 without language, I would be inclined to agree with you that we wouldn’t solve the hard problem. But to do -everything- we can do, including using language in more fluid ways than simply following algorithms, would solve the hard problem, since to have meanings in language is to have referents, senses, groundings, and finally, feelings, for every word. If we can find a way to create a robot that understands meanings, and thus can pass the Turing Test, then I think we’ll be well on our way to solving it.

  7. I liked this succinct article - it was a great overview of a lot of the themes we have reviewed in the course so far. It reminds me how much Turing has contributed to the field of cognitive science. After all, where would the field of cognitive science be today if Turing hadn’t developed the Turing Test all those years ago?

    1. Hi Laura, I definitely agree! As a few other students have mentioned, this reading summed up many of the most important points of the class very succinctly! Also, it felt very nice to be able to read through the article and follow along with ease; it showed me truly how much I've learnt over the past few weeks. As you said, cognitive science might not be as far as it is today without Turing's contributions. It suffices to say that we might not have had this class without him, so thank you Turing!

  8. Ramachandran, in his popular science book Phantoms in the Brain on neurology and neuropsychology, suggests that the hard problem is not so hard after all. He proposes that, using future technology, we will someday be able to read and perfectly replicate the complete brain state of one person into another, thereby temporarily replicating the feelings of the first in the second. In so doing, we would have answered the Hard Problem: feelings are just a pattern of neural activity across the brain.

    But that only tells us what we already suspected: that feelings are (perfectly) correlated with brain activity. It does nothing to explain how/why feelings arise therefrom. I think it does, however, give us a second potential way to cross the other-minds barrier (the first being Searle's Periscope), by telling us what it feels like to be another person. On the other hand, it still wouldn't guarantee that the other person is feeling at all, so perhaps there too it falls short.

    1. Very interesting point, Michael! I am confused as to how ‘accessing’ the feelings of others, or reducing them to patterns of neural activity, would answer the hard problem. Would we then be able to determine why/how these patterns of neural activity arise in the brain?

    2. I’m confused as to how this would help answer the hard problem – this second person would feel what the first felt but any researcher on this future experiment still wouldn't be able to do anything more than someone today. Apart from giving us a concrete glimpse through the Other Minds Barrier, I think the incredible thing in this hypothetical situation would be the technology required to transfer such brain states between individuals. Wouldn't it make sense that if we would do something as amazing as that, we’d have already discovered some crucial things about consciousness?
      Or perhaps it’s the other way around: developing the tech is what’ll lead us to answering the hard problem.

    3. I think Ramachandran is confused too. What he proposes doesn't answer the hard problem at all. It would certainly be an amazing technological feat, but if the HP is as hard as Harnad says it is, then no technology will help to solve it.

    4. I agree with you, Michael. This is an interesting proposal, but I don't think this answers the hard problem at all.

      That said, I do think this might itself be a reformulation of Searle's periscope or something very similar. If we are able to "read and perfectly replicate the complete brain state of one person into another" this is essentially analogous to (or maybe the same as) causing two entities to be in the same computational state and thus the same mental state. Through this, we would be able to be certain of another's mental state (and whether or not they have a mental state).

    5. Hi Michael, I'm really excited that you brought up Ramachandran, because I loved learning about his research in PSYC 410! However, I understood his position about the hard problem a little differently. If I'm not mistaken, he was actually saying that replicating patterns of neural activity would NOT be sufficient to replicate feeling. In essence, if I were to replicate, in my own brain, the exact patterns of firing that occur in your brain when you look at the colour red, there is no way of knowing that my subjective experience or feeling of the colour red is the same as yours! Ramachandran refers to this subjective experience as "qualia" (although I know that Prof. Harnad asked us to refrain from referring to these terms at the beginning of the course). In this sense, I actually wouldn't say that his explanation crosses the other minds barrier! However, I do agree that, regardless of interpretation, the idea that feeling might be correlated with patterns of brain activity does not explain WHY or HOW feeling is generated.

  9. “Turing was perfectly aware that generating the capacity to do does not necessarily generate the capacity to feel. He merely pointed out that explaining doing power was the best we could ever expect to do, scientifically, if we wished to explain cognition. The successful TT-passing model may not turn out to be purely computational; it may be both computational and dynamic; but it is still only generating and explaining our doing capacity. It may or may not feel.”

    If this is the best we can ever expect to do, I understand that solving the not-so-easy ‘easy problem’ should be our main focus at the present time. However, since this will eventually be answerable with advances in technology and AI (according to Chalmers and many others), where can we turn next?

    If we accept that the hard problem is unanswerable, as suggested by Harnad and Turing, how can we be sure that, once the easy problem is further answered and elucidated, we will still have no inkling of how to solve the hard problem? Is it naive to think/hope that solving the easy problem could open doors to elements of the hard problem that we cannot currently conceive of, even if in its entirety it remains unanswerable?

    1. @ Aliza. I see two streams of thought regarding what you are saying:

      The first is that solving the easy problem will not get us any closer to solving the hard problem. Suppose we solve the easy problem and have exhausted all the degrees of freedom: we have simulated every single performance capacity and created a T3, yet there is still nothing that gives us a hint about feeling. Perhaps this can be seen as an “all-or-nothing” scenario where, no matter how accurate we are at simulating performance capacity, it doesn’t get us any closer to finding the how/why of feeling, because of the same problem we currently have: the other-minds problem. This stream sees the endeavor of making a T3 as completely irrelevant to explaining feeling. In short, solving the easy problem is not related to solving the hard problem; they are very different problems.

      The second stream, I think, consists of those who believe that solving the easy problem is a first step to thinking about the hard problem. They believe that reverse engineering a T3 and learning in greater detail about performance capacity may create insights about feeling.

      I see the perspective of both – and I suppose time will tell how the hard problem will unfold.

  10. I loved how this article was a brief, clear and straightforward walkthrough of the entire class. Harnad discusses how Turing gave the field of cognitive science its direction. That being said, cognitive science studies a causal mechanism for how/why we do what we do and how/why we feel. The first is called the easy problem and is actually what the Turing Test will help answer. The Turing Test involves email/verbal communication (such that looks don't matter) between a robot and a human. This test occurs over a lifetime, and communication occurs about everything that humans communicate about. The test is passed if the robot is indistinguishable from a human in its communication with the human. The test can't be passed by computation (symbol manipulation: if you see X, do Y) alone. This was shown by Searle's room, where he memorised a rulebook of Chinese input and output symbols. He passed the TT by being indistinguishable from a native Chinese speaker but lacked an understanding of what he was saying. Searle did not know Chinese, or feel what it is like to know what a symbol means. Harnad has called this the symbol grounding problem – where meaning is grounded in sensorimotor experience (but how?). For instance, you can describe a zebra as a horse with stripes, but for that to mean something you have to know what both horse and stripes mean. Thus, we can say it would require a T3 robot (verbal and sensorimotor capacity) to pass the TT.
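    The "see X, do Y" rulebook can be sketched in a few lines of Python – a toy stand-in for Searle's thought experiment, not the real thing, with invented placeholder symbols:

```python
# A toy "see X, do Y" rulebook, in the spirit of Searle's Chinese Room:
# replies are produced purely by matching the *shape* of the input
# against stored rules. The symbols below are invented placeholders.

RULEBOOK = {
    "SQUIGGLE SQUOGGLE": "SQUAGGLE",
    "SQUOGGLE": "SQUIGGLE SQUAGGLE",
}

def room_reply(input_symbols: str) -> str:
    """Return whatever output the rulebook pairs with this input shape."""
    # Fall back to a default symbol if no rule matches the input.
    return RULEBOOK.get(input_symbols, "SQUIGGLE")

# From outside, the exchanges may be perfectly interpretable; inside,
# nothing knows (or feels) what any of the symbols is about.
reply = room_reply("SQUIGGLE SQUOGGLE")
```

    However long the rulebook grew, the lookup would stay pure shape-matching – which is exactly why passing a verbal test this way would show nothing about understanding.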

    1. I agree with you Kathryn. This article was so refreshingly concise and kid-sibly.

      I enjoyed Prof. Harnad's interpretation of whether Turing was a computationalist. It seems that Turing was a computationalist in the sense that he "believed that just about any physical, dynamical structure or process could be simulated and approximated by computation". But, in the truest sense, Turing wasn't a computationalist, because he would have been willing to accept that cognition is not only computation (i.e. that it must be grounded in sensorimotor experience). I would love to hear what Turing would have to say in our discussions with Prof. Harnad about the easy and hard problem.

      Also, I highly recommend the Imitation Game! I just watched it and it is brilliant. :)

  11. This is a clear and concise article that highlights a lot of the topics we have discussed in class, even in chronological order. It starts off by describing what cognitive science aims to achieve: the reverse-engineering of the capacity of animals to think, whatever thinking means. Turing's idea was all about designing a model that can do what we do. Turing designed a purely verbal test, cleverly named the Imitation Game, in an attempt to explain cognition. However, this would only require successful manipulations of symbols, which Searle argues to be meaningless and thus not accurately representative of cognition. Searle goes on to show, through his Chinese Room Argument, that cognition is not just computation, which leads us to Harnad's Symbol Grounding Problem. Among a set of symbols to be manipulated, some have to be learned through direct sensorimotor experience; without this step, there would be no basic set of symbols to manipulate. In other words, passing the verbal Turing Test was not just something that could be done by a T2, but rather required the sensorimotor capacities of a T3 machine. So what else could be added to the Turing Test in order to more accurately depict what it is we humans do when we cognize? How do we explain what "right now" feels like in the way Descartes describes his cogito? If only the cognizer feels something, how does the cognizer know he is actually feeling something, even in a simulated world? This is a concept I still cannot grasp, but I am convinced that as we continue to solve the easy problem, we will get closer to understanding our capacities to feel.

    ReplyDelete
  12. I particularly enjoyed the section of this paper where Harnad points out that Searle is effectively passing the Chinese Turing Test. I had never thought of the Chinese Room as a kind of Turing Test, but now it is clear that it is. A Chinese speaker on the outside would never suspect the system in which Searle operates to lack the feeling of understanding Chinese, and yet it does. This is another good way of showing that the Hard Problem is untouchable by Turing Tests.

    ReplyDelete
  13. Concerning Descartes’ Cogito: I think, therefore I am. I never fully understood the importance of the present tense, and therefore of the current moment. Indeed, memory is fallible, and even if I believe I woke up this morning, there is no way of being fully sure of it. It is almost as if the other-minds problem could also apply to myself, to myself 2 hours ago, or 10 years ago. This goes along with Prof. Harnad’s example of himself handing over $30 at the gas station, and how convinced he was that he had indeed paid the man. Feeling can only be experienced in the first person, and it can also only be experienced in the now, in the present.

    ReplyDelete
    Replies
    1. Hi Josiane,
      Yes – Descartes’ Cogito certainly goes hand-in-hand with the “Other-Minds Problem”, considering that the only reason we can be sure that we are cognizing is that the very act of doubting (cognizing) one’s own existence is proof of the reality of one’s mind (cognition). If I’m understanding your point correctly, the importance of the “present tense” (as you put it) is that the very “present” moment in which we question whether we, ourselves, are “thinking” is evidence that we are in fact “thinking beings”. Only through this act of introspection can we be sure of the existence of our own cognition. More importantly, and with respect to the “Other-Minds Problem”, since introspection is confined to our individual minds, we can never be entirely certain that others are cognizing too, since the only way to be certain that another entity cognizes would be to be that entity, cognizing.

      Delete
    2. Hi all,

      I really like your point about present tense: since the "Cogito" is a moving phenomenon, its truth depends on how you orient yourself in time. Descartes's statement cannot apply to thoughts you've had in the past or ones you expect to have in the future. "I think, therefore I am" holds rightly, and only, in the present. This is interesting when you consider our continuous conscious experience: the certainty of our own thoughts hinges on our existing at one point in time, and feeling what it's like to feel at that particular moment. We can revisit cognitive states, but not conscious ones. By that I mean that we can think about apples we've seen, or will see, but we can only know the feeling of holding an apple while we are actually holding it.

      Delete
  14. As with a point I made in another thread, I am still struggling with an aspect of this argument. Why is it that we can completely discount the possibility that a simulated neural construct could think/feel? We shift the frame to feeling and say that it is necessarily a physical process – we need sensorimotor capabilities to feel, and therefore a simulation cannot feel. However, compare the following two situations.

    A) Imagine a human child, whom we will call Simone. From the moment of her birth, she has lived in a room supplied with food, water, and a computer. Every need she has had, from education and companionship to nail-clipping, is taken care of by robotic assistants. Assume also that the room has sophisticated scanners which can observe and record every synaptic firing that Simone experiences. Putting aside the ethical problems with this, she was lucky to be born in 2025 and finds all the companionship she needs online – as much as any teenager in a box can be, she is perfectly happy. Perhaps she has a pet (a rescue from a shelter).

    B) Simone’s best friend online is named Rodger. He is best able to relate to Simone because he, too, was raised and has lived in a box. Unbeknownst to both of them, Rodger does not have a physical body. He exists in a large gray box full of whirring microchips and solid-state drives, which is running a very good simulation of a brain. Assume that 8 years from now (to Jerry Fodor’s dismay) we have mapped and studied every aspect of brain function, and in 2025 created an autonomous, functioning simulation of a child’s brain, from the neuron up. Having the sensory inputs from Simone, we provide Rodger’s simulated brain with the same richness of sensory experience. His simulated neurons learn the same way that Simone’s do: X motor-cortex firing triggers Y muscle which leads to Z pain, etc…

    Rodger and Simone would obviously pass all the same levels of the Turing Test, apart from the ability to meet them in person; otherwise we have not designed the experiment well enough. By the logic we have seen in class, Rodger is not real: his feelings are not real, because nothing can mean anything to him. Thus, we can turn him off at any point, and we can provide kicking sensations to his simulation without any cruelty. We could kill his simulated rescue animal without remorse, because when his simulated brain simulates sadness we shouldn’t care.

    As far-fetched as this case is, I conclude that Rodger should be considered a living, feeling person. In answer to the simulated-waterfall response: if it looks like a waterfall, and feels like a waterfall, I think it has passed the Turing Test. We have all the same evidence that he would think and feel as Simone would, and if you disagree I challenge you to say what we do differently. Our brains do not touch the world; our nerves tell our brain that our hands do. Rodger’s do too, and only a thought experiment about a man in a box says otherwise. I cannot think my way around this; if someone has any insight I would be very appreciative.

    ReplyDelete
    Replies
    1. Edward the problem is exactly the same for computer-simulated flying as for computer-simulated feeling, except you can see that a plane simulation is not really flying but you cannot see that a simulated feeler is not really feeling (because of the other-minds problem). Feeling is invisible (except to the feeler).

      But you don’t have to go that far: A simulated feeler is not even grounded, let alone feeling anything: As long as all it can do is manipulate symbols (no matter how those symbol manipulations are interpretable by us, and no matter how closely correlated they are with what they are interpreted by us as being and doing), a simulated feeler is not passing T3 (nor T4). T3 and T4 are real world tests. Grounding is a causal connection between symbols (in a language of thought) and the objects to which they refer. There is no such thing as “simulated grounding.” Symbols are just symbols — squiggles and squoggles — unless grounded in the real world.

      To put it another way, if we had a real T4 robot, neither a simulation of the robot nor a simulation of its brain would be grounded or feel. A grounded T3 robot with a (mostly) computational brain might feel, but that’s not what you’re asking about.

      I am pretty sure you are being fooled — by the invisibility of feeling — into imagining that an isolated brain simulation would feel, and would be grounded. (By the way, I doubt that even an isolated brain would feel or be grounded, but we’re not talking about that.)

      Neither Simone nor Rodger nor their wiring nor their interconnections resolve (or even test) anything. What any T3 or T4 needs is the capacity to do whatever we can do in the real world — just like Dominique. That’s why we wouldn’t kick her. And that’s all Turing ever meant.

      (Btw, you should leave out questions about “necessity” in this, because we are talking about science rather than mathematics. So we’re just talking about (high) probability on the basis of the available evidence. Maybe a real brain in a vat, or a simulated brain or a simulated robot — or even a purely T2-passing computer, a star or a waterfall or a rock could be feeling. Turing just points out that there’s no way we could ever know one way or the other, on any evidence — and Descartes reminds us that we can’t even be sure that apples will always fall down rather than up.)

      Delete
  15. “The contribution of Descartes’ celebrated “Cogito” is that I can be absolutely certain that I am cognizing when I am cognizing. I can doubt anything else, including what my cognizing seems to be telling me about the world, but I can’t doubt that I’m cognizing when I’m cognizing.”

    I followed the rest of this article, but the argument in this paragraph seems unconvincing. Why is it that we cannot doubt that we are feeling? Why are we so certain? I agree that it seems highly likely – perhaps more certain than anything else we “know” – but it still seems premature to take it as fact that feeling, as we experience it, is really “real.”

    For example, wouldn’t we be inclined to say the same thing of something like having the conscious, felt will to do something? Wouldn’t most of us say, “I am moving my hand because I want to move my hand at this moment. I can doubt anything else, but I cannot doubt that.”? However, didn’t Libet’s research on the readiness potential call this into question? If we can’t take for granted that feeling has the causal properties most of us think of as fundamental to the experience of feeling (in other words, if it is questionable that our felt ‘conscious’ will to do something actually causes us to do it), why do we accept that our experience of feeling and cognizing is certain and cannot be doubted?

    ReplyDelete
    Replies
    1. You cannot doubt that you are feeling, in the same way that it doesn't feel like anything not to be feeling. As soon as you doubt that you are feeling, that doubting itself IS a kind of feeling/thinking/cognising. You can, however, doubt the causality involved in the thinking or feeling. Why you feel a certain way, or where that feeling came from, can be doubted, but never the fact that you are feeling *something* in this present moment.

      Delete
  16. The paper gives a great overview of everything we have covered in the class throughout the semester – especially about the easy and hard problem. Turing pointed out that explaining doing is the best we could do, scientifically – because once all doing is explained, feeling seems superfluous.

    From the Turing Test (TT) at the beginning of the course, we discussed in class the significance of passing each level of the TT. With Searle’s Chinese Room Argument, we know that cognition is not all computation. So we come to the understanding that the sensorimotor level is ultimately needed to ground formal symbols, and hence that T3 is the right level at which to test cognition and to answer the easy question.

    ReplyDelete
  17. It is interesting to consider Searle’s Chinese Room Argument in the context of the hard and easy problems of cognitive science. The article states that Searle’s thought experiment demonstrated that a being could not be purely computational and would need sensorimotor grounding capacities to pass the Turing Test; however, I don’t think Searle showed this. Searle did not argue that a purely computational device could not do everything a human can do (speech-wise). He merely showed that such a device would not feel the way a human would, the same way Searle didn’t feel as if he understood Chinese. Therefore, Searle’s periscope only provides evidence that computationalism is wrong if what we are aiming for is a being with feelings. The symbol grounding problem, on the other hand, makes it clear that some form of sensorimotor capacity is needed to perform any non-verbal human functions and to link words to their real-world referents.

    ReplyDelete
    Replies
    1. I agree, Emma! I think Searle only really went half-way in showing why computationalism fails as a model of cognition - or why a purely computational (and not dynamical) system wouldn't succeed as a model of cognition. He did emphasize that Searle/the machine didn't "understand." Maybe if he had taken it a bit further he would have run into the symbol grounding problem, but it didn't seem like he was headed in that direction.

      Delete
    2. I have to disagree with you, Emma! Searle showed that he could not understand inside the room. This consequently showed that a human being could not be computational, because he knows that he doesn’t understand Chinese. It is just symbol manipulation that he is doing inside the room. However, he knows that he does understand English, and acts accordingly in the world. His point was not to show that a computational device could pretend to be human. I believe that he also wanted to show that we cannot be that computational device, because of the Cogito. I know I feel something when someone talks to me in English; when I speak English, I don’t only do symbol manipulation, but actually understand what is going on. You are right that he did not touch upon the dynamical aspect. However, I believe that he did a pretty good job of explaining why we are not T2 robots. I think the older thinkers missed a point here and there as they focused on different parts of the easy/hard problem, because the conversation was just getting started by Turing. The contemporary thinkers are pulling the ideas of these older thinkers together and expanding on them.

      Delete
  18. This article concisely summed up the gist of the course, and brings us full circle to concepts that were brought up at the beginning. Turing devised the Turing Test as an attempt to reverse-engineer cognition. If engineers successfully produce a TT-passing machine, that would provide a causal mechanism for our “doing capacity” and, in turn, an explanation for our thinking, understanding or “cognitive” capacity. I enjoyed reading Harnad’s argument that Turing was not a computationalist. We have explored at length why cognition cannot be reduced to computation. As evidenced by Searle’s CRA, no “understanding” can truly be achieved through manipulation of symbols based on their shape. Understanding requires that at least some of the symbols be grounded directly through sensorimotor induction. Surely Turing would agree that computation alone would not provide a mechanism for cognition – it would only provide a close approximation of the “physical, dynamical structure” we wish to engineer (Church-Turing Thesis). It would be very interesting to know what Turing would have thought about the scaled-up versions of the TT.

    ReplyDelete
    Replies
    1. As Professor Harnad argues, Turing knew that cognition was not computation alone. Sending words in and words out is not the only thing that people can do, and Turing knew that there was more to cognition than simple input and output. In lecture, Harnad talks about T3 as breaking into "the dynamic world" and emerging into sensorimotor robotics, where there is interaction with the world so that sensorimotor grounding can take place. Turing only created the verbal version of the Turing Test so that the appearance of the robot would not distract from decisions about whether the robot was considered to be cognizing or not. I do not think Turing would have any issue with a robot that had sensorimotor capabilities. In fact, I think Turing would encourage this line of thinking. I believe Turing would agree that T2 and T3 are necessary for proof of cognition but T4 would be overdetermination. It is overdetermination in the sense that we require more from T4 robots than we require when we assess whether each other (human beings) are cognizing organisms or not. All this being said, what I have gotten from the readings and Harnad’s lecture is that Turing was interested in the doing capacity of cognition, but Turing knew that passing the TT was not sufficient for proving phenomena like feeling.

      Delete
  19. This article was the summary I needed. In giving a simplified and short re-cap of the main concepts covered in the course, it in a way helped further “ground” the course concepts. Although it was to be avoided, I did feel this course had quite a bit of jargon, like categorization being “doing the right thing with the right kind of thing”. I do think those rigid definitions were necessary, though, to get away from misleading weasel words and keep us all on the same page. Although definitions in class were often repeated, for me they tended to get stranded apart from other concepts. This article helped string those concepts together. Although the lectures similarly connect the jargon together, they are interrupted with questions and other attentional distractions. Other readings were not so much a summary as a particular argument, rebuttal or analysis of another article. There is something about a slow reading of a short and familiar summary like this one that is refreshing and reassuring.

    ReplyDelete
    Replies
    1. I agree, Krista. I really enjoyed this reading. It was very clear and easy to understand, which was a refreshing change. It also summed up a lot of the course really nicely.
      My experience reading the article also made it very evident how jargony and unclear some of the other readings are in comparison. Using kid-sibly language is very valuable. It prevents misunderstanding and makes the content a lot easier to follow and grasp.

      Delete
  20. As many other students have stated above, this article does a really good job of summarizing what we’ve learned throughout the semester. I think it also was fitting to read this article now, as a way of tying everything together. This piece acted as a reminder for me that Turing Machines weren’t built to enact cognition, but rather to perform human functions that, for us, are linked with the ability to feel. Looking back now, Turing Machines demonstrate an important differentiation between the ability to do something and the ability to cognize. This also shows the power of computation, which is present in machines and robots that emulate human functional capacities as well as in simulations of physical objects, like planes or waterfalls. Any of these things may be able to do what the real objects or humans can do, but even if they can do such things, they still aren’t the same things and don’t have the same properties. Furthermore, computation allows the simulation of objects, such that when simulating human capacities the machine/robot can do what a human can do, but it still isn’t a human and therefore has no ability to cognize, since cognition is not done through computation alone. However, computation is still capable of emulating our performance capacities, as the Turing Machine showed.

    ReplyDelete
  21. Regarding: "Turing was perfectly aware that generating the capacity to do does not necessarily generate the capacity to feel. He merely pointed out that explaining doing power was the best we could ever expect to do, scientifically, if we wished to explain cognition."

    The Turing Test, and what cognitive scientists have done so far, has been aimed at solving the easy problem. For the hard problem, we just don't have an idea of how to solve it yet. Turing was aware that a successful TT-passing model still may or may not feel. The quote above says that generating the capacity to do does not necessarily generate the capacity to feel. But what if the capacity to do indeed has some effect on the capacity to feel? What if consciousness is a by-product of cognition in humans? I wonder if the specificity of our cognitive capacities contributes to giving us our consciousness/feelings. Gamma brainwaves are thought to be involved with our sense of conscious awareness. And we only (mostly) feel when we are awake. And plants cannot feel anything. It seems to me that, unlike cognition and neuroimaging, which barely have a connection, consciousness and feelings might be a little more related to how our brain is structured. I wonder if T4- or T5-passing models will give us any hint about that.

    ReplyDelete
  22. This is a question that I wanted to ask in class: are we conscious of priming effects? Are we feeling what we are being primed for?

    ReplyDelete
  23. (1) RE: The Church-Turing thesis
    The Church-Turing thesis states that “any physical, dynamical structure or process (including planetary motion, chemical reaction, and robotic sensorimotor dynamics) could be simulated and approximated by computation as closely as we like.” Before, I was confused as to how this is not the same as saying that everything is computation; however, the explanation was simpler than I realized. With the example of the plane, the author shows that yes, it can be simulated on a computer, but it’s still not a plane. For some reason I had a hard time getting my head around the idea that computation could approximate everything and yet this doesn’t mean that “everything in the physical world is just computation.” This example really made it clear for me.

    ReplyDelete
    Replies
    1. I agree that having these types of concrete examples has really helped solidify the key points throughout course. From providing us with the zebra example for symbol grounding to the plane example that you mentioned for explaining simulation vs reality, this paper really does a great job at walking us through important ideas in the field and really reinforces why Turing was so influential. In the end, we see the limits of a TT-passing model in that it may or may not feel, ultimately making it seemingly impossible to solve the hard problem.

      Delete
  24. Explaining how and why we can do what we can do has come to be called the "easy" problem of cognitive science (though it is hardly that easy, since we are nowhere near solving it). The "hard" problem is explaining how and why we feel -- the problem of consciousness -- and of course we are even further from solving that one.

    To explicate the selected passage: this reading captures the essence of the course and the core problem of Cognitive Science. It is clear that studying the when/where of our brains does not explain the how/why of what we do, yet researchers cannot change their projects because of the pygmy-like fear of losing funding and their careers. It seems that the researchers and funding committees are doing the spadework lest they be asked to solve the Hard problem. Perhaps the difference between a pygmy and a giant is in having the courage to ask questions, then changing the question once it is known to be the wrong question -- the Hard problem will not be solved by solving the easy problem. Ironically, the easy problem is nowhere close to being “solved.” It may be wise not to refer to the Hard and easy problems as “problems” at all. It gives a false sense of hope that they are solvable.

    The biggest hurdle to the advancement of Cognitive Science is overcoming the sunk-cost fallacy that is being committed. It is understandable, because of the astronomical amount of dollars, time, and graduate degrees that were dedicated to (or wasted on) research projects that cannot explain how/why we do. It is tragic how we are stumbling over a hurdle that we have placed in our own path.

    ReplyDelete
    Replies
    1. Hi Francisco,

      I think it’s interesting how you question the naming of the easy/ hard problems. I think our framing of the concepts is often shaped by how we name them, and it’s interesting to consider if this is the best way we can name cognitive science's problems with respect to the main goals of the field.

      This is the way I see the goals of the field at the moment. It breaks down into four. (Someone please correct me if I’m off-base)

      1) The doing problem (the Turing agenda for cognitive science): How do we do all that we can do, vegetative functions and ‘cognitive’ functions? We can measure our progress behaviourally via the Turing Test, and perhaps eventually by mimicking some of our functions via T4 tech/computational simulations of the brain.

      2) The feeling problem: How is it that we feel? What is the causal mechanism of our capacity to feel? (This requires lots of progress on the doing problem.)

      3) Why do we feel, #1: What is feeling’s purpose with respect to each individual organism’s behavioural output? Is feeling a significant step in the causal chain of doing what we can do? (This requires 1 and 2.)

      4) Why do we feel, #2: Why did we evolve to feel? What is the evolutionary purpose of feeling? (I don’t know what the real point of this is.)

      In addition, I think that calling the easy problem “easy” undermines it, making it seem like we’ll be able to solve it by merely continuing to tinker around in the lab. In reality, solving it will require movement on all fronts: robotics/engineering, computer science, neurophysiology, linguistics, studying animal models, studying cognitive development in babies AND some gutsy giants like Chomsky to shift how we conceptualize cognition. There is nothing easy about all of this. It’s a huge, interdisciplinary feat.
      Calling the hard problem (why are we not zombies) THE HARD PROBLEM makes it sound like some huge, dark and mysterious, sci-fi-like thing. Its cool name simultaneously grips people’s imaginations and (perhaps unfortunately) makes them get all metaphysical and sci-fi loopy... If we had called it something less looming, maybe people would take on a more pragmatic lens and just think: well, yes, that’s insoluble now and probably forever -- maybe we should focus on the easy problem for the next fifty years at least. Maybe people would be more likely to push the hard problem aside and concentrate on figuring out HOW it is that we feel - reverse-engineering all that we can do, and also reverse-engineering the mechanism of feeling.

      Delete
    2. Francisco, Turing is not a pygmy, and he proposed solving the easy problem because the hard problem is not solvable.

      Lauren, Chalmers, a pygmy, named the hard problem "the hard problem." Before that it was called the mind/body problem and a lot of other names. I would have called the two problems the "doing" problem and the "feeling" problem. But I don't think our success or failure has much to do with what we call them.

      Delete
    3. This comment has been removed by the author.

      Delete
    4. I think that studying the hard problem is completely tied up with being human. Attempts at the hard problem are present throughout history, long before the Enlightenment and modern philosophers (Descartes, Hume, etc.) got hold of it. Trying to explain the experience of life is pretty important to being human. I think what this course discusses is the limitations of cognitive science, and of science in general, in answering that problem. Literature, art, music, religion, etc. have all approached the problem with a different tool kit. Artists are able to evoke feelings of experience through their art; I don't think it's a large stretch to see them as working to explain how we feel what we do through a different medium. I think it's interesting to consider how the hard problem has permeated so many different disciplines, and yet, because of the firm divide we have placed between arts and sciences, we're looking at it strictly from a scientific perspective. And regardless of how we look at the problem of feeling, introspection alone cannot answer it.

      Delete
    5. I completely agree! I didn't mean to imply that the hard problem isn't worthy of our time and attention. I think the hard problem is a huge impetus for artistic inquiry and expression, and the pursuit of its explanation has added, and will continue to add value to the world. The wonder it evokes is central to the human experience. I think approaching the hard problem with a pragmatic, empirical lens, as cognitive science endeavours to do, is interesting, yet fundamentally limited. In keeping with cognitive science's goals, the how, is more important, and valuable than the why.

      Delete
    6. Professor Harnad, I did not mean to say that Turing is a pygmy, but rather that we are not able to see what Turing intended. More often than not, we hear that the Turing Test has been passed, but the confusion comes from software engineering, which misleads those who are not entirely familiar with Turing's "imitation game." Also, I agree that Turing proposed solving the easy problem because the hard problem is not solvable.

      Lauren, the easy problem is easy relative to the Hard problem, but the easy problem is still hard. It is the same as comparing two infinities. One infinity can be smaller than the other, but they are still both infinite.

      Delete
    7. re: Cassie and Lauren - I agree that the hard problem is basically the problem of human nature: humans will always be interested in how we feel and what makes us different from other creatures. I also agree that cognitive science should (at least for now) focus on the easy problem before even thinking about the hard problem. It's interesting what you said, Cassie, about the hard problem having "permeated so many different disciplines," and it makes me wonder whether there are other ways of approaching the hard problem that cognitive science, robotics, and neuroscience ignore. I'm not as certain as Harnad that the hard problem will never be solved. Perhaps after cognitive science has solved the easy problem, a combination of cognitive-scientific theories with other disciplines could make progress on the hard problem.

      Delete
  25. This was a nice overview of much of the course material, much of which I’m familiar with by now (thankfully,) but it's interesting how sometimes reading a different wording of a concept can make you see it in a new light. What gave me pause here was the quotation:
    “some words have to be grounded directly in our capacity to recognize, categorize, manipulate, name and describe the things in the world that the words denote. This goes beyond mere computation, which is just formal symbol manipulation, to sensorimotor dynamics, in other words, not just verbal capacity but robotic capacity,” particularly, some words have to be GROUNDED DIRECTLY IN OUR CAPACITY to recognize, categorize, manipulate and name the things in the world ….
    I realize that the symbol grounding problem denotes the missing sensorimotor/ neural mechanism between arbitrary symbols and a cognizer’s real world referents… I’m contemplating Harnad’s choice of wording here. Why did he say “grounded DIRECTLY in our capacity,” as opposed to INDIRECTLY. What would the difference be? What exactly is Harnad imagining here?

    How can words be grounded DIRECTLY in our categorization capacity? What does this mean? I’m wanting a diagram. I think I’ll return to the week 4 to see if I’m forgetting something…

    On another note, this course makes me upset that Turing is not more famous. It's really upsetting that most people don’t know his name! Sheesh..

    ReplyDelete
    Replies
    1. Lauren, grounding by sensorimotor induction is direct grounding; grounding by verbal instruction is indirect grounding. (Yes, Turing deserves a lot more credit than he has been given, yet. But, in general, posthumous credit is no consolation... He has been, however, recognized as a giant, both during his lifetime and since. What I regret is those giants -- and even some pygmies, maybe -- who did not manage to do all they might have done because they were not recognized, or not given the opportunity...)

      Delete
  26. I feel like this article would have been difficult for me to read at the beginning of the semester, but now that so many of the concepts it refers to are familiar to me, I found it very satisfying to read.

    As Prof Harnad explained in one of the early readings we did, Turing referred to his test as the "Imitation Game," which was a poor name for it, because it was actually a serious scientific approach to determining whether a system is thinking, as a human does.

    One of the popular misunderstandings that arises from this naming is that the system just has to temporarily fool a judge in order to pass the test. On the contrary, the above article reminds the reader that the system has to actually be indistinguishable from other humans for a lifetime.

    So much misinterpretation and confusion can come out of a poorly named concept. It’s interesting that the above film about Turing propagates the imitation-game misnomer. For the duration of this course, we renamed the "Imitation Game" the "Turing Test" because it is a more neutral and accurate name. I am personally in favour of renaming things in order to make ideas more accessible, and I doubt that Turing would have objected to us doing this, but it could possibly be seen as disrespectful to the legacy of someone who devoted their life to pioneering these ideas.

  27. I wonder if a TT-passing robot with no consciousness would think that it is cognizing, based on what it has learned/been programmed to think, from us. Would it not also think that it might be just a creation of ours, maybe not even exist, maybe not really feel the way humans do, maybe that it is programmed to say that it feels when it doesn’t, but be sure that it is feeling in its own way? We might not call that feeling, but the robot might. I guess these are questions inspired by sci-fi movies, and since the AI won’t actually be feeling, they are useless to ask… But nonetheless, if we come to that technology in our lifetimes, these will be questions that everyone will be wondering about. Like we asked in the first class, is it okay to kick Dominique if we confirm that she was made at MIT?

    It is also quite powerful to connect all the themes to each other, and go back to Turing. I wonder if he ever knew he would be starting such powerful conversations, and that eventually we’d go back to his test to explain what we can and cannot do. We have tried to expand on what he started, but he had already set certain limits on what we would be able to explain, at least for a long while. We always think that in X years we will have come a long way on a certain topic. However, with the hard problem, I am not so sure anymore.

  28. I almost wish I had read this once at the start of the course and then again at the end to summarize; it gives a concise and clear perspective on our main topics.
    The issue of "feeling" has been the most difficult for me to conceptualize (I guess that's why it's the hard problem), and although my need for concrete answers hasn't been and probably cannot be satiated, it now resides in a somewhat sustainable place. Coming to terms with the apparent insolubility of the hard problem leads to questioning whether there is even a place for the study of cognition with regard to feeling.
    I still think it is valuable to pursue the computational and physical modelling of human sensorimotor capacities: much remains to be discovered about our mechanisms of perception and grounding that could be valuable for clinical application, and perhaps the asymptotic relationship between ever-better computational and machine models of human capacity and a true feeling "consciousness" will reach a critical threshold for something resembling a TT-passing machine, feeling or not.

  29. This paper made me think about a question I asked during one of the first lectures: Is it possible that we might pass the TT with T3, and then subsequently go back and realize that we are able to create the exact same cognitive abilities with a T2? I realize after the CRA that this is probably too computationalist to stand, but I think it’s probable that we might realize not all human-like features are necessary. In other words, once we determine what processes are necessary to create cognition, could we start hacking away to figure out which ones are not? Perhaps only vision and hearing are important; perhaps we only need to see monochromatically, or need only be able to hear within a certain decibel range. I think this is something interesting that we haven’t really touched on in class: which sensorimotor capabilities do we think are absolutely necessary? Which are not?

    Replies
    1. That's an interesting thought, Lucy, but I don't know how one would even go about testing which sensorimotor capabilities are necessary for cognition. Furthermore, because of the Other-Minds Problem, we would have no idea when cognition is even occurring anyway. It could be interesting, and more feasible, to test what is necessary for replicating our functional capabilities, though.

  30. I really appreciated this reading because it held my hand through the majority of the main topics and important concepts I need to grasp from this class. The main distinction emphasized throughout the class is that the easy problem is explaining how and why we can do what we can do, while the hard problem deals with how and why we feel. Turing provided what he felt was an answer for the former, through an entirely computational machine. Turing's test targets the capacity to do rather than the capacity to feel, which is an important distinction that proved of fundamental use to the field. The main misinterpretation by computationalists (that Turing is providing a purely computational answer to cognition) is one we’ve disentangled throughout the course, guided by Searle, who showed that a model requires some sensorimotor grounding to be successful. Ultimately, we haven’t answered much about the hard problem, but we have disentangled how the easy problem may be answered.

  31. I feel further away from seeing the light at the end of the "hard problem" tunnel after this. At the beginning of the course, I had thought that through Turing tests and the improvement of machines and computing it might one day be possible to have a model for feeling that we could use to study it, but now I see that that's not the case. So if computation isn't the way to go to understand it, what is? The mirror-neuron reading didn't seem to explain it any better. The language readings seemed to come the closest to tapping into this question, but it still seems impossible to answer (not just "hard"!)

    This reading also helped me to realize there are different "subtypes" of cognition and consciousness is just one.

  32. "Turing's contribution was to make it quite explicit that our goal should be to explain how we can do what we can do by designing a model that can do what we can do, and can do it so well that we cannot tell the model apart from one of us, based only on what it does and what it can do. The causal mechanism that generates the model's doing-capacity will be the explanation of thinking, intelligence, understanding, knowledge -- all just examples of, or synonyms for: cognition."

    I understand that Turing wasn't a computationalist about cognition, and simply believed that this is the closest we could get. My problem with this goal is: how do we know that the machine does what we do in the same way that we do it? To me, having the same inputs and the same outputs does not explain the how and why of cognition; it just shows one way a machine can do what we do. This isn't penetrating the black box of the mind.

  33. I really enjoyed this reading and felt that it effectively summarized and integrated the concepts we have learned thus far in the course. Harnad argues that the focus of cognitive science should be on passing the Turing Test and achieving T3 rather than on trying to solve the hard problem (or focusing entirely on correlative data, which cannot yield causal explanations). After reading Searle’s Chinese Room Argument I was originally unsure whether the Turing Test was still an applicable paradigm within cognitive science (since the Turing Test was related so closely to computationalism, and Searle showed that cognition must be more than just computation). However, I think that the solution Harnad proposes is simply to integrate Searle’s insight into the Turing Test paradigm. Thus we understand that sensorimotor information (and using this sensorimotor information to ground a certain core percentage of the words in a theoretical dictionary) is necessary in order to achieve T3.

    However, I think this by itself is not enough to reach T3. Evolutionary psychology shows us the importance of social interaction to human psychology. Responding to human emotions and showing signs of “feeling” are essential to “passing as human”. Therefore, a real T3 robot would need to be able to appear as if it feels in a believably human-like way. Now this is not the same thing as saying that the robot feels, but rather that it must act and react to situations as if it does feel. I think whether, at that point, the robot must also “feel” (and if so, whether it “feels” the same way you or I “feel”) provides really good fodder for sci-fi movies. However, I think speculating on it is probably pointless, especially since we have not yet achieved T3. As long as it acts human-like in all observable, measurable ways, from the perspective of the Turing Test it has passed T3 (since we only care about weak equivalence for T3).

    I do agree that the “why” part of the hard problem is unsolvable. However, in a theoretical future in which we have achieved not just T3 but also T4, would that not solve the “how” of the hard problem? In this case we would have created a synthetic brain that is connected and functions exactly the same way as ours does. At that point, isn’t it safe to say we have found the causal mechanism for feeling (as in, how we feel), based on the assumptions that (a) feeling is a part of human cognition and (b) human cognition can be explained by the brain? I agree that this doesn’t explain why we have feeling and why we are not all feelingless zombies; however, there appear to be two fundamentally different parts to the “hard problem”.

  34. I will have to echo everyone else’s sentiments about the pleasantness of this reading, though I hesitate to suggest that it should be the first reading presented in the course, as I feel the only reason we find it so pleasant is that it is simply a concise summary of the many topics we have covered at much greater length and complexity. I found the paragraph particularly interesting that details how the Church-Turing thesis has been stretched to the point that many have mistaken Turing himself for a computationalist, a misunderstanding I feel we have spent much of the course trying to unravel. While I think it’s inevitable that many very intelligent, credible folks will take it upon themselves to make this misunderstanding all the more complex and frankly somewhat pointless, the way that Turing’s easy and hard problems have been described throughout the course, and especially as succinctly as in this article, provides focus to the field that I think is much needed.

  35. I think I am a bit confused. So can grounding be done by robots? You can technically ground without feeling, as long as you can associate a word with its referent object, but that doesn’t guarantee that the model feels, does it?

    Replies
    1. The other minds problem makes it impossible to know whether the robot that demonstrably is at T3 level and passes the T2 level can feel! Since we can't even explain the causal nature of feeling in humans, we surely can't probe whether a robot that has a symbol-sensorimotor system feels.

  36. The idea of symbol grounding, and the fact that some dictionary words (symbols) need to be grounded for us to get out of the circular problem of not knowing the meanings of words, is clear. However, the peekaboo-unicorn example that we discussed in class made me wonder how many words need to be grounded for us to understand the dictionary, and whether these can be any words at all or must refer to categories that share a specific feature. Suppose that we need 500 words to be grounded, and that we can then infer the meaning of any other word from these 500. Could we choose any 500 words, ground their meanings, and then use them, or must it be a specific set of 500 words (a few more or a few fewer does not matter here) that everyone needs to know in order to experience that feeling of knowing a language and be able to use the dictionary? Is the feeling of knowing a language guaranteed to appear with the grounding of those 500 words?
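    The question of whether any set of grounded words would do can actually be played with computationally. Here is a minimal sketch (my own illustration, not from the article; the toy dictionary and the function name are made up): treat the dictionary as a graph in which each word's definition is a set of other words, and check which words become learnable from a candidate grounded set by repeatedly "understanding" any word whose definition uses only words already understood.

    ```python
    # Illustrative toy model of dictionary grounding (hypothetical example).
    # A word becomes learnable once every word in its definition is known.

    def reachable(dictionary, grounded):
        """Return all words learnable from `grounded` via definitions alone.

        dictionary: dict mapping each word to the set of words in its definition.
        grounded:   set of words assumed to be grounded directly (sensorimotorically).
        """
        known = set(grounded)
        changed = True
        while changed:
            changed = False
            for word, defn in dictionary.items():
                if word not in known and defn <= known:  # definition fully understood
                    known.add(word)
                    changed = True
        return known

    toy = {
        "animal":  {"living", "thing"},
        "horse":   {"animal", "large"},
        "unicorn": {"horse", "horn"},
        "horn":    {"thing"},
    }

    # Grounding {"living", "thing", "large"} eventually makes every word learnable,
    # including "unicorn", which is never seen but defined from grounded categories.
    print(sorted(reachable(toy, {"living", "thing", "large"})))
    # → ['animal', 'horn', 'horse', 'large', 'living', 'thing', 'unicorn']
    ```

    On this toy picture, different grounded sets of the same size can reach very different fractions of the dictionary (grounding only "thing" here gets you "horn" but never "horse" or "unicorn"), which suggests the 500 words could not be just any 500: they would have to form a set from which the rest of the dictionary is definitionally reachable. Of course, this says nothing about whether the *feeling* of knowing the language comes along for the ride.
    
    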
