Saturday 2 January 2016

(3a. Comment Overflow) (50+)

18 comments:

  1. I found it very helpful how much time Searle spent addressing the concept of “dualism” in this paper. In popular media, Strong AI is often depicted as very behavioristic and operationalist: supercomputers and AI programs are non-human, and there is no indication that their “mental states” are in any way human, i.e., where their minds are concerned, the brain does not matter. I find myself instinctively accepting these ideas of Strong AI while overlooking intentionality. Now I can conceptualize the Chinese Room thought experiment: if there can be no understanding even when a human mind carries out the formal program of symbol manipulation, then without a brain- or mind-like machine (comparable to a human’s) there cannot be understanding and, more importantly, intentionality.

  2. Searle uses his Chinese Room Argument to show that computation is not cognition. He accomplishes this by using a monolingual English-speaking man -- a conscious being -- as the agent manipulating the Chinese symbols. Even though it is a conscious person performing the symbol manipulation, we can agree that symbol manipulation is not cognition, because the man cannot speak (and therefore does not understand) Chinese even though he can manipulate its symbols. Likewise, a program can only ever achieve this level of "understanding" and is therefore not capable of cognition.
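    A toy sketch of what I mean by rule-following without understanding is below (the symbols and the "rulebook" are invented just for illustration; a real T2-scale rulebook would be unimaginably larger):

    ```python
    # A toy "rulebook": it pairs input symbol strings with output symbol strings.
    # Whoever (or whatever) applies it never needs to know what any symbol means.
    RULEBOOK = {
        "你好吗": "我很好",            # pure shape-matching: no meanings are stored anywhere
        "你叫什么名字": "我叫王",
    }

    def chinese_room_reply(input_symbols: str) -> str:
        """Return whatever output the rulebook pairs with the input; lookup only."""
        return RULEBOOK.get(input_symbols, "对不起")  # a default string of symbols, equally unexplained

    print(chinese_room_reply("你好吗"))  # emits symbols whose meaning the program never has
    ```

    Executing something like this requires no understanding of Chinese, which is the only point the sketch is meant to make.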

    Replies
    1. Shanil, I would say Searle's Chinese Room Argument shows that cognition is not all computation; he did not say that cognition couldn't be partly computational. In addition, I don't think you could ever really know whether this program is capable of "understanding" -- you would just never be able to prove it.

  3. My initial thought was that the process of applying rules to different symbols in the Chinese Room experiment could be compared to understanding a story in English, because it was necessary to understand the rules and to “understand” the symbols in a way -- “understanding” the symbols in the sense of being able to tell them apart and to see which symbol goes with which rule, comparable to understanding the difference between a square and a circle. After reading further I’m starting to understand what Searle means when he says that these are not the same kind of understanding. The example that really stuck out to me was the automatic door, where he says that the sense in which it “understands instructions” from its photoelectric cell “is not at all the sense in which I understand English.”
    The systems reply doesn’t make that much sense to me. It says that although the individual does not understand the story, the whole system does, and that this should count. How can this reply just ignore the fact that the individual doesn’t understand the story? It feels integral that the individual who is part of the larger system should understand the story for this theory to be complete in some sense. Is there a larger part of the argument that I’m missing?

  4. I'm not sure I quite follow Searle's refutation of the brain simulator reply. He supposes that instead of a series of levers, it is a series of water pipes that corresponds exactly to the human brain's neural firing patterns. The person still follows the same pattern of input/output, and Searle concludes that this machine (assuming it were possible) would have no understanding of Chinese.

    However, this is a machine which exactly models the human brain, apart from the fact that instead of electrical/chemical firing it proceeds under the power of water in pipes. If we grant that such a system would have the capacity to self-modify like a human brain does (otherwise it is not an accurate simulation), why is it so unreasonable that the sum total of the parts may experience some form of understanding of Chinese?

    Is it that the inputs are too simple for a brain to learn from, similar to the unfortunate cat kept stationary in the room and blinded? If so, I propose we grant it sensory inputs and outputs similar to the human brain's.

    He says: "As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states."

    In my mind it seems likely that a highly accurate representation of the human brain, even one produced by water pipes, would have this causality, even though the man does not. While he may not understand Chinese, perhaps a single neuron does not understand what it is to feel either.

    Is this still a computationalist argument? If so, where have I made a mistake?
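    To make concrete what I picture "simulating only the formal structure of the sequence of neuron firings" to mean, here is a toy sketch; the units, weights, and threshold are all invented for illustration and are not a claim about real neurons (or about the water pipes):

    ```python
    import numpy as np

    # Toy "brain": 4 units with made-up connection weights and a single threshold.
    # The simulation tracks only which units fire after which -- the formal sequence
    # of firings -- with nothing corresponding to what any firing is about.
    WEIGHTS = np.array([
        [0.0, 0.9, 0.0, 0.0],
        [0.0, 0.0, 0.8, 0.3],
        [0.0, 0.0, 0.0, 0.7],
        [0.2, 0.0, 0.0, 0.0],
    ])
    THRESHOLD = 0.5

    def step(firing: np.ndarray) -> np.ndarray:
        """Advance one time step: a unit fires if its summed weighted input exceeds the threshold."""
        return (WEIGHTS.T @ firing > THRESHOLD).astype(float)

    state = np.array([1.0, 0.0, 0.0, 0.0])  # some initial pattern of firings
    for t in range(3):
        state = step(state)
        print(t, state)                     # the firing pattern propagates step by step
    ```

    Whether running something like this, at whatever scale and in whatever medium, could ever amount to the brain's causal power to produce intentionality is exactly what I am asking.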

  5. This comment has been removed by the author.

  6. RE: The Robot Reply: "Suppose we put a computer inside a robot, and this computer would not just take in formal symbols as input and give out formal symbols as output, but rather would actually operate the robot in such a way that the robot does something very much like perceiving, walking, moving about, hammering nails, eating, drinking …"

    Symbol manipulation alone is not sufficient to learn a new language. One must interact with the world in order to see the patterns of when certain words are used, thus grounding them in experience through perception. If Searle had been exposed to the world of Chinese speakers (as the robot reply suggests), he would have been able to make connections between the symbols and the world. The symbols would then have meaning because they would be grounded in experience. So I agree that 'thinking' is not just symbol manipulation, for these symbols have no meaning until they are correlated with experience in the world. My question is: would Searle be using his existing human cognition (in English) to ground the symbols of the new language?
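    Below is a toy sketch of what I mean by grounding a symbol in experience (the "sensed features" and the episodes are invented for illustration; real grounding would need far richer sensorimotor categories than ready-made feature labels):

    ```python
    from collections import defaultdict

    # Toy grounding: each time a symbol co-occurs with a sensed situation, record the
    # sensed features. Over time the symbol becomes linked to recurring features of
    # the world rather than merely to other symbols.
    groundings: dict[str, list[frozenset[str]]] = defaultdict(list)

    def observe(symbol: str, sensed_features: set[str]) -> None:
        """Pair an unfamiliar symbol with the features sensed when it occurs."""
        groundings[symbol].append(frozenset(sensed_features))

    def likely_meaning(symbol: str) -> set[str]:
        """Features present in every observed use of the symbol -- a crude stand-in for 'meaning'."""
        episodes = groundings[symbol]
        return set.intersection(*map(set, episodes)) if episodes else set()

    # Hypothetical episodes of encountering a symbol while perceiving the scene:
    observe("苹果", {"red", "round", "on-table"})
    observe("苹果", {"red", "round", "in-hand"})
    print(likely_meaning("苹果"))  # {'red', 'round'} (order may vary) -- linked to sensed features
    ```

    Even in this toy, the feature labels themselves are doing the grounding work, which is partly why I wonder whether Searle would be leaning on his English-grounded categories to do it.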

    Replies
    1. I had the same question as yours when we were reading Searle's Chinese Room Argument. But later I realized that grounding symbols in a new language requires an autonomous system that has the capacity to deal with sensorimotor experience and to understand semantics. (I personally think it would be possible to learn which Chinese character is associated with an action Searle performed inside the Chinese Room, but not the Chinese semantics.) Therefore Searle inside the Chinese Room will not be able to understand Chinese.

  7. Is Searle really talking about “consciousness” or “understanding”? Understanding does not have to be a binary phenomenon. On an epiphenomenalist view, could “consciousness” not be what emerges when a system has sufficient complexity? There could be different degrees of consciousness (as across the animal kingdom). Thus the Chinese Room, which is relatively mundane, would have a low degree of “understanding”, perhaps lower than what we would consider intelligent.
    If intentionality is a biological phenomenon which cannot be replicated by, for example, water pipes, then Searle must be assuming there is some property of the biologically abundant elements (namely carbon) that is not present in inorganic materials.
    I also do not fully understand what he means when he ends with, “Whatever it is that the brain does to produce intentionality, it cannot consist in instantiating a program since no program, by itself, is sufficient for intentionality.” Of course there would need to exist proper and adequate hardware in order to run the program, and the two must be ideally matched in order to produce intentionality, but I am unsure as to why the biological brain is considered the only feasible hardware for this endeavor.

  8. RE: The point is that the brain’s causal capacity to produce intentionality cannot consist in its instantiating a computer program, since for any program you like it is possible for something to instantiate that program and still not have any mental states. Whatever it is that the brain does to produce intentionality, it cannot consist in instantiating a program since no program, by itself, is sufficient for intentionality.


    The misunderstanding of computationalism is often exacerbated by how the media portray it. More often than not, we see “strong AI” in movies first passing the simple pen-pal version of the Turing Test (e.g., Tony Stark’s Jarvis), and even a machine capable of T4 (e.g., Andrew Martin from Bicentennial Man) is still not treated as equivalent to a human being. Therefore, clearing up this misconception would need to start with the media, given their large impact on the general public.

    Replies
    1. I somewhat agree, but I believe that by relating consciousness to computer software we begin to think that we're composed entirely of programs. Programs imply a programmer, so do we use the classic explanation that God programmed us? Or do we attribute it to some internal process we have yet to understand?
      I agree with Searle that consciousness is a result of biological phenomena, but I don’t believe that digging around and just slicing up the brain will solve the hard problem. This biological process is simply one means of getting to the answer; it seems unlikely to be the actual answer.
      With this in mind, it continually appears impossible to understand the Hard Problem simply by trying to create artificial intelligence. We need to figure out what real intelligence is first, and we’re restricting ourselves by continually focusing on a single area rather than on many at the same time. What if behaviour, computation, and whatever else we discover(ed) all function in symphony, but we’re so focused on one at a time that we keep avoiding the problem?

  9. RE: The single most surprising discovery that I have made in discussing these issues is that many AI workers are quite shocked by my idea that actual human mental phenomena might be dependent on actual physical/chemical properties of actual human brains.

    I understand Searle’s argument that even if a machine were able to replicate the thoughts, beliefs, and actions of a human being, it would be impossible to prove that it possesses intentionality. If such a machine were to exist, wouldn’t its program just be a projection of the “intentionality” of whoever its creators were? All it could do is mimic the concept of understanding that humans have.
    With regard to the creation of a machine we know to possess understanding: is Searle saying that one would have to directly replicate the structure and biochemical processes of the brain, because it is the only machine known to truly have understanding?
    Lastly, how can we be sure that at the most basic level the brain isn’t operating on some formal system that contains multiple levels of “understanding”, or layers of second-, third-, and fourth-order interpretations of the formal symbols? Is this what he is referring to with “human mental phenomena”?

  10. Searle’s response to the systems reply—which holds that even though the occupant of the room cannot understand Chinese, the system as a whole may be able to—depends on the individual’s ability to memorize and carry out formal operations on Chinese characters that he/she cannot understand. However, these operations are outlined by an English book of instructions, which the same individual can in fact understand. Since Searle’s CRA requires this basic level of understanding in order for the occupant of the room to be able to carry out any sort of formal symbol manipulation, isn’t it already assuming too much? Does this thought experiment imply that even formal symbol manipulation without “intentionality” (defined in class as what it feels like to know the meaning of words) might require a base level of understanding? Also, I understand that this is only a thought experiment, but I'm having trouble even imagining the possibility of a person with the cognitive capacity to be able to memorize/internalize the entire rulebook.

  11. “If strong AI is to be a branch of psychology, then it must be able to distinguish those systems that are genuinely mental from those that are not. It must be able to distinguish the principles on which the mind works from those on which nonmental systems work; otherwise it will offer us no explanations of what is specifically mental about the mental.”

    Is Searle contradicting his own argument when he makes this statement, given that he later refers to the task of separating the mental from the non-mental as a form of dualism? If we were to hold that there is a way of distinguishing the mental from the non-mental, and that mental states are independent of the hardware, this, Searle says, would be dualism. This brings me to think that the above statement is Searle playing devil’s advocate: he is showing that the problem with strong AI is the very hypothesis on which it bases itself, perhaps by showing how the argument folds in on itself, so that to believe there is a distinction between the two would be to admit to being a dualist.

  12. In response to Searle p. 12: “The distinction between the program and its realization in the hardware seems to be parallel to the distinction between the level of mental operations and the level of brain operations. And if we could describe the level of mental operations as a formal program, then it seems we could describe what was essential about the mind without doing either introspective psychology or neurophysiology of the brain. But the equation, "mind is to brain as program is to hardware" breaks down at several points.”
    I found Searle’s disassembly of the mind/brain, program/hardware analogy really helpful in understanding some of computationalism’s problems. Searle first points out that if we could create a program that perfectly captures mental activity and if we were to implement this program with human hardware (as in the Chinese room example), there would still be something (intentionality) missing. What was most enlightening for me was his reiteration that a program is not a product of a computer in the same way that the mind is a product of the brain. Even if we were to build a program that simulates understanding, it just doesn’t follow that the program itself understands anything.

  13. In class we talked about how it “feels like something” to understand. Personally I don’t understand why this “feeling” is a relevant critique of computationalism. If we can’t measure feeling in any objective manner, and we can never really be sure that someone else “understands” the way we “understand” (the other-minds problem), then I feel this lies outside the realm of our discussion (and of our understanding of the Turing Test). The most relevant point I take away from Searle’s Chinese Room Argument is the centrality of sensorimotor information to cognition and the problem of symbol grounding. Even if Searle can reply in perfect, grammatically correct Chinese, and the messages he writes make logical sense, he would still not be able to describe a real-world event in Chinese. Without real-world meaning we can conclude that Searle does not “understand” Chinese using any common-sense definition of the word “understanding”.

    My question, which others in the comment section have also debated, is whether sensorimotor information plus computation would be sufficient for cognition, and for passing T2.

  14. Do we even know how to truly define the word “understand”? What I believe is that, yes, you can teach anything to search for an answer and to write a sentence, but how do you truly define “understanding”? I think that we “know” things by doing just that -- searching for an answer that we’ve stored in our minds -- but just because a machine can give the correct answer or output, I wouldn’t say that it truly understands or knows what that answer means. I feel there has to be some sort of sentient connection to the answer (the idea of knowing implicitly versus knowing explicitly), but this goes back to not knowing whether or not something or someone “feels”.

  15. At first I was a little confused about the distinction between knowing a rule book for generating the correct Chinese output in response to every possible Chinese input and consciously knowing the Chinese language. I was thinking that maybe knowing a language Searle does in fact know, such as English, is just a highly internalized and automatized form of knowing all the rules for associating the symbols (English words) with their referents. In other words, Searle thinks he knows English because the computational process of getting from input to output has been rehearsed enough that it takes place at some subconscious level in his mind, without his active effort or attention. The difference between knowing a language and knowing a rule book of instructions for a language was clarified in the lecture, when we talked about how knowing a language should also be accompanied by the hard-to-explain feeling that you know the meanings of its words -- something far more than a set of instructions for picking out referents in the world.
