Saturday 2 January 2016

(3b. Comment Overflow) (50+)

19 comments:

  1. In refuting computationalism we are saying that thinking and feeling are not simply computation. However, there are aspects of cognition that can at the very least be modelled by a computer. In this sense, am I wrong in thinking we still believe the brain is similar to a computer? If it is similar, in what ways is it specifically different?

    It seems to me that the main difference is the ability to draw inferences from memory and to self-modify. Would claiming that a computer with these abilities can think still be a computationalist claim? If not, which theory would it fall under? If so, where is the mistake, and what is the special aspect of the brain beyond this?

    Replies
    1. Edward, Searle showed that cognition can't be all computation, but he did not show that cognition is not computation at all. (Please read 2b.)

  2. RE: (2) Computational states are implementation-independent. (Software is hardware-independent.)

    How about things that are both software-independent and hardware-independent? For example, an XML document is independent of any particular program or machine: any conforming parser on any hardware, whatever its endianness, recovers the same structure from it. Maybe consciousness is like XML.
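
    As a rough illustration of the analogy (a minimal Python sketch; the document and tag names here are made up), the same XML text parses to the same logical tree on any machine with a conforming parser, regardless of the hardware's byte order, which is roughly what "implementation-independent" means for computational states:

      import xml.etree.ElementTree as ET

      # The same sequence of characters...
      doc = "<state><item>42</item></state>"

      # ...yields the same logical tree on any conforming parser,
      # whatever the underlying hardware's endianness.
      root = ET.fromstring(doc)
      print(root.tag, root.find("item").text)  # -> state 42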

  3. “For although we can never become any other physical entity than ourselves, if there are indeed mental states that occur purely in virtue of being in the right computational state, then if we can get into the same computational state as the entity in question, we can check whether or not it's got the mental states imputed to it.”

    Is this assuming that everyone’s “software” (i.e., mind) must follow the same basic principles (and thus be capable of entering the same computational states)? Is it not possible that some computational states cannot be attained by every individual? There could be multiple computational states that produce the same mental state by virtue of being functionally (but not compositionally) equivalent.

    Replies
    1. I think this lends support to the argument that cognition (or our software) cannot be solely a matter of computational states. If it were, I would argue that differing hardware would hinder people from reaching the same computational state: some hardware would simply be insufficient, or lack the capacity to attain a certain computational state.

    2. @Alex - I'm a bit confused by your point here. I think we know that the hardware is irrelevant, and only the software (algorithm) is necessary.

      @Colin - I don't necessarily think that that is what is being said. I think the quote is referencing the connection between algorithms and the mind, not algorithms and the anatomy.

  4. This post is in response to both papers, "Minds, Brains, and Programs" and "Minds, Machines, and Searle 2: What's right and wrong about the Chinese room argument"

    “[…] but it is Cartesian in the sense that it insists that what is specifically mental about the mind has no intrinsic connection with the actual properties of the brain.”

    I had not thought about dualism in relation to computationalism until reading Searle’s paper, and it now seems like one of the most obvious considerations when evaluating claims about AI. Searle’s point that we attribute mental states to computer simulations because of a residual dualism, and, as I see it, because of our tendency to liken processes in the world to our own, really resonated with me. There are many theories within cognitive science that push back against dualism, like Embodied and Extended Cognition, which treat external and physical processes as important to, and even constitutive of, cognition.

    Just as Harnad called for specifying whether D2 or D3 is what is implied in the decisiveness of the Turing Test, I believe we need a more specific definition, or at least an elaboration, of terms like “think” and “mind” when asking “Can machines think?” and whether machines have minds, especially with the advent of new theories of cognition. How narrowly or widely these terms are defined completely changes the discussion. For example, if “thinking,” “understanding,” and “intentionality” are defined in terms of how they are experienced in humans, then only a D4 could ever be said to have these capacities. On an Embodied view, thinking about something depends on how a system responds to and interacts with the particular thing in question; if the terms are defined in a human-centred way, a machine that is not built like us could never “think,” because the outputs of its thinking (results, responses, actions, etc.) are carried out in completely different ways. On the other hand, if thinking and understanding are defined as the ability to take input and produce a relevant output, then a D3 or even a D2 could be said to do the trick.

    Searle defines “understanding” as "both the possession of mental (intentional) states and the truth (validity, success) of these states." My question is whether understanding depends on the ability to verify (in some capacity) the truth of the state (or claim). If I believed there was life on Neptune, and an alien from Neptune somehow contacted me swearing that it was telling the truth, saying there is life similar to that on Earth but so microscopic that humans could never observe it, do I now understand what life on Neptune is like? Whether it really was an alien from Neptune telling the truth or a dressed-up stranger lying, I would have no way to validate it: so can I claim to have an understanding of life on Neptune? The same idea applies to D3/D2 machines: if the input they are receiving is true, but they lack the structural components ever to verify it, is the D3 really “understanding” the content in question? If a machine cannot evaluate the truth of a claim by itself, does it still understand? For example, a machine made of metal that cannot “see” could never verify what sunburn is like; even if it asked 100 humans about it, it would not know whether they were lying. So, to be able to understand, a machine must have at least some baseline capacities that resemble human capacities, underpinned by the relevant sensory and motor structures.

  5. RE: We're ready to believe that we "know" a phone number when, unable to recall it consciously, we find we can nevertheless dial it when we let our fingers do the walking.

    To elaborate on this point: would our “muscle-memory phone dialing” be akin to something like a withdrawal reflex in the presence of pain? In essence, both the unconscious finger dialing and the withdrawal reflex are the result of some computational state(s) (or system?). Keeping that in mind, when humans communicate with each other through language, this is also due to some computational state, albeit a more complex one. Yet the former is an unconscious state in a conscious being, while the latter is a conscious state. Could one of the factors that makes a computational state conscious within the human mind be its complexity?

  6. Harnad (2001) reformulates the third tenet of Strong AI in terms of computationalism to say that “there is no stronger empirical test for the presence of mental states than Turing-Indistinguishability; hence the Turing Test is the decisive test for a computationalist theory of mental states.” With the CRA, Searle specifically rejects the purely computational way of passing T2. Harnad (2001), however, raises the crucial point that Searle misses: although cognition cannot be purely computational, it may well be hybrid (partly computational). The distinction between T3 and T4 becomes important here because while Searle posits that an “understanding” machine must be biologically based (and thus T4), Harnad says that specification above T3 is unnecessary. The example of reverse-engineering a duck was useful for seeing how hazy the distinction between T3 and T4 may actually be, since the structure of a D3 would necessarily be constrained, to a certain extent, by its function.

  7. “But the TT is of course no guarantee; it does not yield anything like the Cartesian certainty we have about our own mental states.”

    Does this imply that the limitation of the TT is the other-minds problem? If the point of the Turing Test is just to establish functional and behavioural equivalence to human capability and cognition, this doesn’t necessarily imply that the biological machinery of the brain is what is needed to give rise to a system that could show this equivalence. Even if the hardware is extremely different, if the candidate is still indistinguishable from a human in its capabilities, that is the only criterion we have. Intentionality then becomes part of the hard problem, in that we will never be able to show definitively that there is intentionality, because of the other-minds problem. So the TT criterion and its limitation leave room for there to be more ways to generate mental states than just reproducing the biological machinery of the brain.

    Searle says that intentionality does not extend to the machine if it is someone else who is giving the input and interpreting the output; the machine is just manipulating symbols and representing the intentionality of the person who programmed it. But if, theoretically, we gave the machine the capacity to receive input independently through its own sensorimotor capacities, and it could adjust its behaviour and execute goal-directed behaviour accordingly, then there would be no way to tell whether the system had intentionality, in the same way that we cannot tell whether another person really has it. In that case, if a machine has some control over the input it receives and can act on and give itself feedback about its output, we would just have to assume understanding or intentionality.

  8. On Minds, Machines and Searle: I want to revisit Searle’s periscope. It seems that Searle’s CRA works for T2, but he cannot show that it applies to T3. According to tenet 2 in the paper, and all our previous class discussion, computational states are implementation-independent: if we are given software that can generate certain capacities, we can implement that software in any suitable physical device, which will then generate those capacities (a toy sketch of this implementation-independence appears after this comment). What is strong about Searle’s CRA is that he himself becomes the physical device that runs the symbol-manipulating software on the Chinese input and output, and in doing so he shows that he could pass the Turing Test at the T2 level without understanding anything.

    It seems that Searle does not run into the other-minds problem because T2 excludes sensorimotor capacities and is purely verbal, which is what (I believe) allows him to conclude that he is generating all of the system’s capacities himself even though he does not understand Chinese. However, as soon as we move up to T3, it becomes impossible for Searle to be the other mind. So Searle has only shown something about the purely computational, implementation-independent way of passing T2; because of the other-minds problem, we cannot say the same about T3. In other words, computation itself is implementation-independent, but once we incorporate sensorimotor capacity (to get to T3), Searle’s argument no longer goes through and the question becomes ambiguous. Going a level further is curious too: according to Searle, it seems the only way a robot could be indistinguishable from a human is for it to be a T4 robot; in other words, Searle thinks T4 is the only way to generate cognition. That claim obviously contradicts our case with Dominique.

    Given the impossibility of “being the other mind,” there seems to be no way of knowing whether a T3 robot’s mental states are computational states or not; they could well be a function of both sensorimotor capacity and computation. Therefore, Searle is wrong to conclude that cognition is not computation at all. Searle’s CRA helps us conclude that a purely computational T2-passer does not understand and that cognition is not all computation, but, because of the other-minds problem, we cannot draw the same conclusion about T3. Ultimately, T3 is the right level at which to test for cognition. And, just to entertain Searle’s claim about T4: if we already have T3, T4 is superfluous; the details of T4 are irrelevant, because whatever T4 properties are truly necessary will already be wrapped into T3.
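
    To make tenet 2 concrete, here is a toy Python sketch (my own illustration, not anything from Searle’s or Harnad’s papers; the rule table is invented): two physically different implementations of the same input/output rule table are computationally equivalent, and neither of them “understands” the symbols it manipulates.

      # Hypothetical rule book mapping input strings to output strings.
      RULES = {"ni hao": "ni hao ma?", "xie xie": "bu ke qi"}

      def lookup_implementation(symbol: str) -> str:
          # Implementation 1: a table lookup.
          return RULES.get(symbol, "dui bu qi?")

      def branching_implementation(symbol: str) -> str:
          # Implementation 2: explicit branching -- different machinery,
          # identical input/output mapping.
          if symbol == "ni hao":
              return "ni hao ma?"
          if symbol == "xie xie":
              return "bu ke qi"
          return "dui bu qi?"

      # Same inputs, same outputs: the "computational state" does not care
      # which physical implementation runs it.
      for s in ["ni hao", "xie xie", "zai jian"]:
          assert lookup_implementation(s) == branching_implementation(s)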

    Replies
    1. I don’t understand why Searle’s CRA works for T2 but not for T3. Now, I understand that Searle’s Chinese Room is a T2 test, not a T3 test. But if we are willing to bend Turing’s words and declare that he really meant (or should have meant) a T3, sensorimotor-inclusive test, can we not build that into Searle’s CRA as well? Is it really so hard to extend the Chinese Room into a Jaeger-esque, megazord-style giant robot? As Krista Liberio mentioned in the last section (3A), why not change Searle’s CRA into a GRA (Giant Robot Argument)? If we contend that T3 is the legitimate TT we want, shouldn’t we make the CRA T3-compatible? I don’t see how this would significantly change the validity of Searle’s argument.
      If Searle, stuck in the robot’s head, is just following a rule-book ledger, there is nothing to preclude him from (while obeying its instructions) writing new instructions for novel sensorimotor interactions. Obviously Searle does not see the outside world the robot is interacting with: he only receives the symbols that the robot’s sensorimotor transducers send him, and he has no understanding of what these symbols mean; he only knows how to manipulate them.

      Would such a T3 robot (with Searle inside) still run up against the symbol grounding problem?

  9. From p. 8: “But of course mind-reading is not really telepathy at all, but Turing-Testing -- biologically prepared inferences and empathy based on similarities to our own appearance, performance, and experiences.”

    My response is a little tangential this week, so I apologize in advance. I thought this was a really interesting point (although not central to the overall article) in Harnad’s discussion of Searle’s Chinese Room Argument. Mind-reading is the process by which we guess at the mental states of others, and this mind-reading process is essentially Turing-testing: we compare another person’s or machine’s perceived mental states with our own and then judge whether they are similar enough. At first, this made me question the objectivity of the Turing Test if it were applied as Turing laid it out (in an imitation-game setting): because the results of testing another person or machine rely on our ability to mind-read, would this subjectivity affect the results in a meaningful way from person to person? But after some pondering, I think the answer is no – the Turing Test tests for generic human indistinguishability, and the ability to mind-read falls squarely within that generic human capacity.

  10. In his Chinese Room Argument, Searle is saying that, unlike human thinking, computation does not involve intentionality (knowing what one is doing). In the case of a number, say 5, a computer does not know what that 5 means, while we do. However, according to Michael G. Dyer, there are computational systems, called “expert systems,” made up of multiple schemas and domains, where each schema contains knowledge of its domain. He argues that such a system can be designed to recognize that 5 belongs to the set of odd prime numbers. In some sense these networks do recognize the context of numbers and characters. Maybe this recognition is just a sequence of complex rules, but the ability to select and recognize certain symbols within a context does raise the question of whether Searle was completely correct with the Chinese Room thought experiment.

    Dyer’s article http://web.cs.ucla.edu/~dyer/Papers/Ijetai90Int.html
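
    To see what kind of claim Dyer is making, here is a deliberately toy, rule-based classifier (my own Python illustration, not Dyer’s actual COUNT system): it labels 5 as an odd prime purely by applying formal rules, which is exactly why one can still ask whether such rule-following amounts to understanding.

      def is_prime(n: int) -> bool:
          # Rule: n is prime if n > 1 and no integer from 2 to sqrt(n) divides it.
          if n < 2:
              return False
          return all(n % d for d in range(2, int(n ** 0.5) + 1))

      def classify(n: int) -> set:
          # Apply simple membership rules; return the matching categories.
          categories = set()
          if n % 2:
              categories.add("odd")
          if is_prime(n):
              categories.add("prime")
          return categories

      print(classify(5))  # -> {'odd', 'prime'} (set order may vary); rule-following, not understanding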

    Replies
    1. Dyer seems to assume a great deal for his "COUNT" system. If I am correct in thinking that he takes "understanding", "intentionality", and "having knowledge" to be roughly the same thing, then he is simply assuming these properties for the schemas in his system and then claiming that, because the system possesses them, it can understand math and numbers.

      By simply stating that "COUNT would also have natural language understanding and generation subsystems", the fact that it could then respond in English becomes trivial. It reminds me of the "ad hoc speculation" mentioned in a previous paper: simply ratcheting up the complexity of a system to a suitable degree (here, by implementing many schemas in semantic networks) is supposed to produce "intentionality" or "understanding". This does not seem to me to be a true counterargument.

  11. “For although we can never become any other physical entity than ourselves, if there are indeed mental states that occur purely in virtue of being in the right computational state, then if we can get into the same computational state as the entity in question, we can check whether or not it's got the mental states imputed to it.”
    This made me wonder, because there are certain mental states we can all agree on (being sad, being in love, being angry), yet the ways in which these are experienced, internally and externally, differ greatly from person to person. These states do not seem implementation-independent. To elaborate on ”by virtue of being in the right computational state”: if the same series of events happens to different people (same conditions), those events will be interpreted differently, and the people will react differently. Later, Harnad explains that
    “Moreover, it is only T2 (not T3 or T4 REFS) that is vulnerable to the CRA, and even that only for the special case of an implementation-independent, purely computational candidate.“ But since humans are not implementation-independent, is it even valid to compare Searle’s understanding to that of a computer?

  12. I think that likening consciousness to computer software fixes us on the idea that computation is how the brain works. I agree that we do compute, but not that computation is the answer we are looking for. In that sense, I feel that Searle’s arguments raise questions but don’t get us much closer to solving the hard problem.
    I don’t think the mind is like a computer program, nor that the brain is irrelevant. Even if we had only 40% of our brain and could still function, that would still be a biological phenomenon, as Searle maintains. The brain is the command centre that creates, sends, and executes the innate ‘programs’ that make us human and make us do what we do.
    But again, why do we keep looking to machines to understand human beings? It seems illogical. If looking at animals doesn’t get us very far, do we really think it will be different with machines? How can we begin to create something based on ourselves when we can barely figure out how we function in the first place?

  13. I found Harnad’s explanation of Searle’s Chinese Room Argument compelling; it helped break down some concepts that had been unclear to me. First, he distinguishes between function and structure when reverse-engineering something: the Turing Test should essentially be framed as a test of functional equivalence (which seems reasonable enough). At its core, Searle’s Chinese Room Argument shows that computationalism cannot fully explain cognition, and it does so by turning computationalism’s own premise of implementation-independence against it. That is to say, simply reproducing a behaviour (i.e., a functional outcome) is not sufficient to explain the cognitive capacities that led to that outcome: the machine achieves the outcome via a series of implementation steps (symbol manipulations), whereas a person does it via some other process, one that involves “understanding” and that cannot be explained by the machine’s code.

    However, I am a bit confused as to where that leaves us with T2 and the Turing Test. Should we view the TT as a test of functional equivalence but not as a means of explaining cognition? Or perhaps as a means of capturing only the computational part?

  14. RE: Is the Turing Test just the human equivalent of D3? Actually, the "pen-pal" version of the TT as Turing (1950) originally formulated it, was even more macrofunctional than that -- it was the equivalent of D2, requiring the duck only to quack. But in the human case, "quacking" is a rather more powerful and general performance capacity, and some consider its full expressive power to be equivalent to, or at least to draw upon, our full cognitive capacity (Fodor 1975; Harnad 1996a).

    I understand that in computationalism it is the software, rather than the hardware, that we are trying to understand (tenet 2: computational states are implementation-independent). However, when it comes to testing cognitive capacity (learning, experience), does the hardware not play an important role?

    I would argue that cognitive capacity, expressed in human “quacking” (i.e., the ability to communicate through text, as in the TT), depends on the human’s structure and on its sensorimotor ability to interact with the physical world. If the TT asked the candidate to describe the feeling of running for the bus on a crisp winter morning, answering would depend on sensorimotor experience: the tactile experience of cold against the face, the physical feeling of a pounding heart, of legs pushing against the ground. How can we separate structure from function when function depends on structure? How can a T2 candidate have all the cognitive capacities that emerge from sensorimotor abilities if we ignore the structures that allow us to have those sensorimotor experiences?
