Saturday 2 January 2016

(10d. Comment Overflow) (50+)

10 comments:

  1. Regarding the physical version of the Church-Turing thesis: Turing believed that a computer can simulate essentially any physical system. But in order for a machine to pass the Turing Test, it can’t just be a simulation; it has to be indistinguishable from us in what it can do. So was Turing a computationalist about everything except the machine that is supposed to explain cognition? I think the point being emphasized is that there is a very clear difference between a computer simulation and a T3 robot, which relies not only on computation but also on dynamic sensorimotor capacities, and so goes beyond merely “simulating” cognition toward actually explaining it.
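
    To make the simulation/implementation distinction concrete, here is a toy Python sketch (a hypothetical illustration with made-up numbers): the program manipulates symbols that are interpretable as a body falling under gravity, but nothing in the computer actually falls. Only a dynamical system, such as a T3 robot’s sensors and effectors, would be more than a simulation.

    ```python
    # Toy sketch: a purely computational "simulation" of a falling body.
    # The loop manipulates numbers interpretable as heights in metres,
    # but nothing here actually falls: that is the gap between a
    # simulation and the dynamical system it models.

    G = 9.81  # gravitational acceleration (m/s^2)

    def simulate_fall(height_m, dt=0.1):
        """Return successive simulated heights until the 'body' lands."""
        h, v, trajectory = height_m, 0.0, [height_m]
        while h > 0.0:
            v += G * dt                # simulated velocity update
            h = max(h - v * dt, 0.0)   # simulated position update
            trajectory.append(h)
        return trajectory

    print(simulate_fall(10.0)[:3])  # roughly [10.0, 9.90, 9.71]: just squiggles
    ```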

  2. I felt this article was a good review of the topics discussed so far in class, particularly the section summing up the Church-Turing thesis. It also helped me better understand why Turing was not a computationalist: his theory does not attempt to explain feeling at all; it only tries to provide the criteria for performance capacity.

    “The successful TT-passing model may not turn out to be purely computational; it may be both computational and dynamic; but it is still only generating and explaining our doing capacity. It may or may not feel.”

    This passage helped me put into words my own intuitions about doing capacity and feeling: namely that, as the Cogito reminds us, we can only be certain of our own feeling, not of the feeling of others. We may never know whether the TT-passing machine we reverse-engineer feels, but that was never something we could be certain about in other humans anyway.

    I was wondering whether the hard problem is worth thinking about beyond this, or whether we should set it aside and focus on reverse engineering for now… or should we just abandon it altogether? If we can’t be certain of anything except our own feeling, we wouldn’t know whether we had successfully reverse-engineered it…

  3. i) "Turing was perfectly aware that generating the capacity to do does not necessarily generate the capacity to feel...The successful TT-passing model may not turn out to be purely computational; it may be both computational and dynamic; but it is still only generating and explaining our doing capacity."

    In other words, the T3 Turing Test is a test of doing capacity and not a test of consciousness. That is not to say that a T3-passing robot necessarily lacks consciousness, but rather that the causal status of feeling remains unresolved. If an artificial agent is able to pass T3, then there is no reason to believe that its feeling capacity is any different from yours or mine. Although the hard problem (of how/why we feel) is distinct from the other-minds problem (not being able to know the internal states of anyone but yourself), I struggle to consider them independently because they are so closely related to one another.

  4. RE: "Turing was perfectly aware that generating the capacity to do does not necessarily generate the capacity to feel. He merely pointed out that explaining doing power was the best we could ever expect to do, scientifically, if we wished to explain cognition." Tying this back to the idea of a (possibly non-existent) fifth element that could be the causal mechanism for feeling: since there is no way to prove the existence of this extra element, let alone find its counterpart in an AI, why are we concerned with the hard problem at all? Why not accept that humans are different from robots because we feel and robots do not? Since we do not know why we feel, we will never be able to test for it in a non-human entity. What is the impulse that drives us to categorize between ourselves and the other, and does it have any real relevance to the goals of cognitive science, or is it an endless, unresolvable debate?

    Replies
    1. Hi Deboleena, I don’t think the fifth element as a cause of feeling, or the difficulty of finding such an element, is even the main concern. Even if there is no fifth element, and performance capacity is necessary and sufficient to give rise to feeling, we are in exactly the same place in our ignorance of how to solve the hard problem. You bring up an interesting point: however daunting or insoluble the hard problem may seem, it remains a very popular and sought-after question in science. Perhaps this reflects the fact that we consider feeling an essential part of being human, and giving up on the problem of consciousness would be admitting that we may never know how or why we experience the world the way we do.

  5. I agree with Professor Harnad when he says Turing did not really think cognition is entirely computation. I probably think this way because all my information about Turing comes from him, so we are likely to draw similar conclusions.
    To say Turing thought cognition is computation is to put words in his mouth, though with the novelty and growing popularity of computers at the time, I can see why people were tempted to do so. Turing merely said that cognition can be simulated to a close approximation by a system of squiggles and squaggles, so that it would seem as if cognition is as cognition does (the strong Church-Turing thesis and weak AI); a toy illustration of such symbol manipulation is sketched below.

    On another note, the insolubility of the hard problem is striking. We can derive conclusions from artificial simulations and apply them to the real world, provided the simulation models the right variables. Yet we cannot determine how and why we feel by modelling cognition, even if we find out what the important variables for the model are.
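
    To make “a system of squiggles and squaggles” concrete, here is a minimal Python sketch (an illustrative toy, not Turing’s own formalism): a rule is applied to symbols purely on the basis of their shape, and the result counts as addition only under our interpretation.

    ```python
    # Minimal sketch: computation as rule-based manipulation of symbol
    # shapes. The rule erases a '+' purely because of its shape; that
    # the result is "addition" is our interpretation, not the system's.

    def rewrite(tape):
        """Apply one shape-based rewrite rule: delete the first '+'."""
        return tape.replace("+", "", 1)

    tape = "||" + "+" + "|||"   # interpretable as 2 + 3; to the system, squiggles
    result = rewrite(tape)      # "|||||", interpretable as 5
    print(result, len(result))  # prints: ||||| 5
    ```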

  6. I’m not sure whether it’s how far we’ve progressed in the course or that Harnad is writing especially kid-sibly here, but this paper is another one that is helping to cement what I’ve learned in the course to date.
    I do find the example of a flight simulation strange. If a person flying a simulated airplane (while wearing the goggles and gloves) is convinced, is that equivalent to a candidate passing a restricted Turing Test? That is, if something is simulated purely computationally, does that mean it is not capable of cognizing?

  7. It’s so interesting that we don’t need to be taught how to think or how to associate things. You could be taught a word by someone pointing to an object and saying it, and you would probably form the association, but you then feel it; you feel that you know it. It’s strange to think that we do this, and that we are able to understand what determiners and articles mean, because they aren’t objects; they’re abstract concepts that we are somehow able to know and understand. All language is so abstract, and I find that fascinating.

  8. This article was a nice, concise recap of what we’ve covered this semester, and it really made clear to me that the hard problem is beyond the scope of the methods of cognitive science. Consider that the essence of cognitive science is “the reverse engineering of the capacity of humans to think” (building models that can carry out some human function in order to explain how humans can do that function), that the hard problem is explaining how and why we feel, and that the other-minds problem tells us we can’t know whether any entity besides ourselves is feeling. Even if we ever successfully reverse-engineered a thing that could feel, we would never be able to know it, so we couldn’t explain why or how feeling happens. Reverse engineering, then, can say nothing about the hard problem. If there is any chance of solving the hard problem (and there probably isn’t), what really needs to be examined is how to identify when other entities can feel.

    Replies
    1. The article also made me think about the “easy problem” and what exactly about cognition and ‘doing’ cognitive scientists are trying to explain when designing models. Are cognitive scientists interested in (1) the kind of cognizing that characterizes human experience specifically, or (2) the kind of cognizing that affords functional equivalence between some T3 robot, A, and a human, regardless of the route A takes to do the things humans do? If the point of the easy problem is to explain *human* doing specifically, not just the functions that humans can do, then (2) is not appropriate: (2) doesn’t explain why *we* can do what we do, it just explains how and why A can do what we do, which says more about the nature of the thing being done, and about A, than about human capacity. But for (1), I believe A must be a T4 or T5 (or we run into the same problem as with (2)), which means that if human capacity specifically is the topic of interest, cognitive scientists should focus on building robots and models made of the same materials as humans, e.g., tissue, skin cells, etc.
