Saturday 2 January 2016

3a. Searle, John R. (1980) Minds, brains, and programs

Searle, John R. (1980) Minds, brains, and programs. Behavioral and Brain Sciences 3 (3): 417-457

This article can be viewed as an attempt to explore the consequences of two propositions. (1) Intentionality in human beings (and animals) is a product of causal features of the brain. I assume this is an empirical fact about the actual causal relations between mental processes and brains. It says simply that certain brain processes are sufficient for intentionality. (2) Instantiating a computer program is never by itself a sufficient condition of intentionality. The main argument of this paper is directed at establishing this claim. The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality. These two propositions have the following consequences: (3) The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program. This is a strict logical consequence of 1 and 2. (4) Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. This is meant to be a trivial consequence of 1. (5) Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain. This follows from 2 and 4.






see also:

Click here --> SEARLE VIDEO

99 comments:

  1. I am having trouble sorting out the difference between Searle's argument that a computer with the proper programming cannot have intentionality, and that we as humans are in a way computers with the right program.
    "If by 'digital computer' we mean anything at all that has a level of description where it can correctly be described as the installation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs, and we can think".
    This seems to be in contrast to the argument that a system as a sum cannot have intentions just because it has a combination of programs.

    ReplyDelete
    Replies
    1. Adrian:

      1. Searle is arguing against computationalism (cognition = computation).

      2. Forget about "intentionality." It's a weasel-word for consciousness.

      3. Searle is asking whether a computer that can pass T2 (in Chinese) would understand (Chinese).

      4. He shows it would not understand by pointing out that if he himself were executing the very same T2 computer program on Chinese input, he would not understand Chinese.

      5. How would he know that he was not understanding Chinese? Because it feels like something to understand Chinese, and he would lack that feeling.

      6. It would be irrelevant that he was manipulating Chinese symbols so that he would pass T2: He would not be understanding Chinese.

      7. What Searle means by "level of description" is probably the Strong Church-Turing Thesis, according to which just about anything can be simulated computationally. But that doesn't mean everything is a computer, doing computation (symbol manipulation).

      8. The "System Reply" -- that Searle doesn't understand Chinese but :the system" does -- is nonsense, because once Searle has memorized the computer program so that he can do all the symbol manipulations in his head, to get from the Chinese input to the Chinese output, Searle is the whole system! There's nothing else.

      Delete
    2. This all makes sense to me, but I still don't understand why Searle would say that we are all 'installations of any number of computer systems'. This seems in direct contrast to his anti-computationalism argument.

      Additionally: what would you use instead of 'intention'? Would you instead use 'feeling'?

      Delete
    3. Hi Adrian,

      I believe this is the quote you're referencing: "It is not because I am the instantiation of a computer program that I am able to understand English and have other forms of intentionality (I am, I suppose, the instantiation of any number of computer programs)". I don't think that this assertion is in direct contrast to his anti-computationalism argument, as his argument isn't that we are not composed of some set of computational processes. The computational processes are in fact necessary for one representational state to be succeeded by another, but a cognizing entity would be one that chooses to carry out any or all of these processes, as opposed to just having them executed in some predetermined fashion. It is difficult to pick out a substitute for 'intention', as we would ultimately just get back to the term 'consciousness', but another way of putting it would be that it is the envisioned cause of some action(s).

      Delete
  2. Clearly, Searle's Chinese Room Argument has prerequisites: an agreement on the meaning of "understanding", an agreement on the difference between conscious (Searle's definition) and unconscious knowledge/understanding (which Searle makes clear is not the relevant kind), and an agreement on the fact that speed does not matter in the study of what "thinking" means (which is the only one of those points that I seem to be convinced about). Starting my comment with the fact that we need to agree on the meaning of words is quite relevant here, because in this situation Searle used language, and more specifically word meanings, as an example of something that the computer or person "understands". I am unsure about the degree to which I understand words, or about the line between my understanding of the rules (how each word is used) and my understanding of the meanings (how each word is used, as well). Words like "though" or "and" seem to me to be defined (semantically) through the way they are used (syntactically).
    So do we truly "consciously understand" language? Words are not only symbols, but I don't know if I use each one understanding its meaning; I might just be very good at manipulating words as symbols?

    ReplyDelete
    Replies
    1. Julie:

      1. There is no such thing as "unconscious understanding," because it feels like something to understand. (And there's no such thing as an unfelt feeling.)

      2. This is a thought experiment, to demonstrate a point; it is irrelevant whether Searle would have enough time to memorize or execute the program.

      3. If you don't believe (2), try the argument with something simpler than understanding Chinese: If Searle memorized a coded algorithm for playing tic-tac-toe, so the input was not a 3-by-3 array of X's and O's but something that was computationally equivalent to it, in another formalization, then he would be playing tic-tac-toe without knowing what he was doing. And that's the point. (A toy version of such a recoding is sketched after point 5.)

      4. Yes, we do consciously understand language, because (a) you understand this, (b) et tu comprends ceci, (c) de ezt pedig nem érted (which you do not understand until you google it to find the language and then pop it into google-translate to find out what it means). What Searle means by not understanding Chinese is (c).

      5. Yes, some words are functional (and, if, not, whether), but most are content words with referents (zebra, horse, stripe, unicorn). For Chinese, Searle understands neither function nor content words.
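      As a minimal sketch of the recoding idea in point 3 (the symbols, the 9-character string format, and the deliberately dumb strategy are all invented for illustration): someone executing these rules is "playing" tic-tac-toe, but nothing in the rules mentions boards, X's, O's, or games.

      # A toy tic-tac-toe "program" recoded into meaningless symbols: the state is a
      # 9-character string, '@' and '#' happen to stand for the two players and '.' for an
      # empty cell, but the rules only ever talk about rewriting characters in a string.

      PRIORITY = [4, 0, 2, 6, 8, 1, 3, 5, 7]  # a fixed preference order over string positions

      def next_string(state: str, mark: str = "@") -> str:
          """Rewrite the first preferred '.' in the string to the given mark."""
          for i in PRIORITY:
              if state[i] == ".":
                  return state[:i] + mark + state[i + 1:]
          return state  # no '.' left: nothing to rewrite

      if __name__ == "__main__":
          print(next_string("#........"))  # whoever runs this is 'moving' without knowing it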

      Delete
    2. About point 1: you've said in lectures that we process information in virtue of what it means, unlike computers which do not. Here you seem to be saying that this "interpretive process" (whatever it is) is related to feeling. Are they the same process, and therefore equally insoluble since they are part of the "Hard Problem"?

      Delete
  3. RE: “…addition of such ‘perceptual’ and ‘motor’ capacities add nothing by way of understanding”

    When Searle argues that an AI with sensorimotor capacities (T3) is just fancy symbol manipulation, and therefore, has no more understanding than an AI with purely symbolic capacities (T2) would, he seems to be missing an important nuance. It’s not just a matter of “input”/ “output”.

    Take his example of “the robot’s homunculus” (p.8): Would the robot’s processing of its perceptual input really be a separate program running in parallel with its verbal programming? I find this hard to believe. The two necessarily interact with one another to give rise to understanding.

    For example, by adding sensorimotor capacities, an AI would go from simple input/output (i.e. simulated suntanning) to having the real-life experience of tanning (experiencing the warmth of the sun).
    The AI's sensorimotor capacity is what gives content/meaning to the otherwise arbitrary symbol 'sun'.

    Therefore, an AI with sensorimotor capacities is not computation alone; it is more. Its sensorimotor capacities 'add' understanding because its symbols are now rooted in actual things that it does (T3).

    ReplyDelete
    Replies
    1. Hi,

      I think it's possible to add sensorimotor capacities without adding feeling. That is, if you consider that sensorimotor capacity is just the input of physical information that is then entered into the system, processed, and results in a motor output. For example, a seismograph receives a sensory input and gives us a motor output, and that's pretty much how most machines work too. So let's say we have a robot that can hear. The robot has receptors that record mechanical waves of pressure (i.e. sounds). The information is transcribed at the receptors into whatever you want (say, electrical signals). The electrical signals reach the homunculus, who has instructions that match specific patterns of electrical signals to others. The electrical output causes a motion of the limbs. But that doesn't mean that the robot "felt" something, as in, was conscious of the sound (if you say that it did, then you have to say that any object that produces an output in response to a disturbance of its physical environment, in other words pretty much anything, also feels and is conscious). Or at least that is my understanding of the CRA...
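      A minimal sketch of that pipeline (the thresholds, pattern codes, and motor commands are invented for illustration): pressure samples go in, a motor command comes out, and there is nothing but transduction and table lookup in between.

      # A toy version of the "hearing robot": sound pressure in, motor command out,
      # with only transduction and lookup in between -- nothing here is felt.

      def transduce(pressure_samples):
          """Crude 'receptor': classify the sound by its average amplitude."""
          avg = sum(abs(s) for s in pressure_samples) / len(pressure_samples)
          return "LOUD" if avg > 0.5 else "QUIET"

      MOTOR_TABLE = {
          "LOUD": "turn_head_toward_source",
          "QUIET": "keep_still",
      }

      def hear_and_react(pressure_samples) -> str:
          """Map the transduced pattern to a motor command by table lookup."""
          return MOTOR_TABLE[transduce(pressure_samples)]

      if __name__ == "__main__":
          print(hear_and_react([0.9, -0.8, 0.7, -0.95]))  # -> 'turn_head_toward_source'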

      Delete
    2. Hello Chloe, I don't think I completely agree with you. Yes, moving and sensing isn't thinking, as our toy robot (T1) shows; however, I believe that being able to move and sense is needed in order to think. So in this case, I agree with Manda: with the sensorimotor addition, the robot (T3) will consequently have the capacity to understand and feel. Remember that Searle is only talking about T2, so it seems that the Robot Reply introduces a parameter outside of the CRA thought experiment. In addition to the arguments that Manda has already stated about real-life experience and the symbols being rooted in actual things the robot does, and on the premise that this is a sensorimotor addition to what is already a T2, the added sensorimotor capacity should allow this robot to learn and understand Chinese. Fundamentally, sensorimotor capacity and experience are thought to be needed for cognition because this is how our brain detects the features of things that allow us to categorize them, for example by naming them; in doing this, we can pick out the referents of words, making words grounded symbols instead of the ungrounded symbols of formal computation.

      Delete
    3. This comment has been removed by the author.

      Delete
    4. Oops, I just saw that, sorry. Hi Grace, I agree with you that sensorimotor ability is needed in order to think, but I disagree that it could be sufficient. In the Robot Reply, it is possible to imagine a robot that would still conform to the parameters defined by Searle in the CRA thought experiment. Indeed, we can imagine a case where we have: Searle inside, having internalized a translator program + Chinese input/sensory input + motor output. But Searle does not know that there are different types of inputs. In fact, all he receives are symbols that are the symbolic translations of physical signals (differences in air pressure corresponding to sounds, etc.). So ultimately it's the same as if you had added one more language, but Searle is not even aware of it, as all he receives are symbols in the formal language of the program, which he matches to output symbols in the same language. Searle is not the one who feels the input; the robot is. Yet at the same time it can't be said that the robot understands the stimuli, since it does not have access to the symbols. Therefore it is possible to imagine a robot with sensorimotor abilities that would not understand, i.e., whose symbol system would not be grounded.

      Delete
  4. Firstly, I do not quite get Searle's reply to the Systems Reply. Although the person in the room might be the entire system, there is a bigger system that created the room and the person, a system that understands Chinese and gives him the ledger and everything he needs to do the symbol manipulation. I am not fully convinced by his reply.

    Furthermore, Searle doesn't give any argument as to why cognition is not computation. He seems to have a strong innate belief that humans must have some other 'stuff' that makes them different, namely our brains, neurons, and the connections between them. What if the brain is just doing formal symbol manipulation? We don't really know for sure that the brain has causal powers. We are all a product of our genes and our environment. Do we really know what we base our decisions on? It may be the qualities we were born with, our genes, what family and culture we were born into, what friends we happened to have made, and so on. I don't want my argument to come off as behaviorism; we do have thoughts, feelings, and other internal states that affect our decisions. However, that personality is largely affected by what life brings to us. We see so much heritability in mental disorders and personality, for example. The inputs we get, which are much more complex than what we might produce in an AI, nevertheless do decide how we will act, who we become, even what mental disorders we might develop. If my parents had made the simple decision of living on another block, I might have come across a lot of different inputs. The hardware we have was given to us by evolution. Afterwards, we learned and developed. Would it not be the same idea if we gave the hardware to a robot, and it started making decisions and learning on the basis we gave it and the experiences it will have? Why does it need a brain with our physical and chemical properties?

    Lastly, I do not get the point where Searle gives the example of the belief that it is raining and says that "it is defined as a certain mental content with conditions of satisfaction, a direction of fit," when he is talking about intentional states not being formal.

    ReplyDelete
    Replies
    1. I completely agree with your second point that Searle does not look at the big picture (ourselves as one system that has accumulated inputs over time, plus genetics) and how it elicits certain responses from us. Searle tries to argue that it feels like something to understand Chinese; however, why does it feel that way? Could he be expressing this feeling because he was programmed, by previous experiences/encounters, to feel a certain way when he gets something right and is able to comprehend? Couldn't a well-programmed machine in the future also express emotion and, for example, be programmed to behave in a positive manner? I think a more interesting direction Searle could have taken is the question, why? Why would a person, if they have a choice, randomly pick the wrong answer even if they knew the right one and had nothing to gain from the wrong one? I believe that a machine could not make this decision on a whim, and therefore the random false answer, and why the choice was made, would be a marker of consciousness in human beings.

      Delete
    2. Hi Eugenia, it seems like you are likening Searle's notion of understanding to actual emotion. But I don't think Searle's idea of the feeling of understanding is necessarily an emotional feeling; it's just the subjective experience or consciousness that accompanies human cognition. Emotion could very well be programmed into a machine, but I don't know if we will ever be able to tell whether a machine actually understands, because of the other-minds problem.

      Delete
    3. Eugenia, you say, "Why would a person, if they have a choice, randomly pick the wrong answer even if they knew the right one and had nothing to gain from the wrong one? I believe that a machine could not make this decision on a whim, and therefore the random false answer, and why the choice was made, would be a marker of consciousness in human beings."

      What if you were to program the machine to randomly output an answer from a set of options, for certain questions? Technically, it could then output a true or false answer 'on a whim'. In that case, how would you differentiate that machine from a person?
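      A hypothetical sketch of exactly that (the question and the answer set are invented): for flagged questions the machine draws uniformly from a stored set of options, so it can output the wrong answer 'on a whim' just as easily as the right one.

      # A hypothetical "answering on a whim" machine: for certain questions it picks
      # uniformly at random from a stored answer set, right or wrong.
      import random

      ANSWER_SETS = {
          "What is 2 + 3?": ["5", "4", "I won't say"],
      }

      def answer(question: str) -> str:
          options = ANSWER_SETS.get(question)
          return random.choice(options) if options else "I don't know."

      if __name__ == "__main__":
          print(answer("What is 2 + 3?"))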

      Delete
    4. Lori, the difference would come through the mixing and matching of the responses. We, as humans, can not only choose answers randomly from our own 'set of options,' but we also come with the unique ability to create new options that make sense both to ourselves and to other humans. We can abbreviate things in a way that automatically makes sense to others, or we can mix phrases together; we even create puns. If a computer tried this kind of imaginative deviation through computer-generated randomness, the results would be nonsensical, and it would be very obvious that there was a certain pattern and a certain algorithm behind the generation of these responses. We would also be able to exhaust that set of responses until the machine has used all of them, whereas humans can create a nigh-infinite number of answers.

      Delete
  5. If Searle were to operate inside the head of a robot using the rules given to him, his actions, being causal, would create and expose the robot to novel stimuli in the environment. Wouldn't Searle lack the needed outputs in his rule book, since the rules were written before the unpredicted/novel situation arose? If computers have predictive models which adapt with new input, but Searle cannot do this because he has a finite number of rules available to him, aren't we dealing with two different notions of computation?

    ReplyDelete
    Replies
    1. From my interpretation of Searle's Chinese Room Argument, I do not believe the logistics behind the manipulation of symbols are relevant. I think the entire argument is an attempt to explain to readers, and convince them, that manipulation of symbols does not imply understanding of those symbols. So even if we include the possibility that some third party could update the system with more rules adapted to each novel environmental experience, we still cannot conclude that there is consciousness, understanding, or feeling of the Chinese language.

      Delete
    2. Exactly! I completely agree with Nadia. Searle is arguing that the batches of Chinese writing and instructions given to the person in the room mimic the programs given to a computer. Although from an external perspective it may seem like the computer is cognizing, neither Searle in the Chinese room nor the computer is truly understanding. Krista, I would also argue that computers (like Searle) have a finite number of rules available, therefore it is a fair comparison to make.

      Delete
    3. Krista, I don’t think we are dealing with two different notions of computation so much as two different types of program. Searle’s Chinese room ledger is a T2 test, which Professor Harnad argues is an insufficient test for intelligence. He says instead that T3, with its requisite sensorimotor capacities and potential for embodiment, would be necessary to test a computer. I think the scenario you are describing, with Searle following a rule book in the robot's head, would require us to think of the machine as a T3. Therefore the rule book would have to contain all the necessary rules and instructions for successfully navigating and interacting with the environment, just as humans would. If the book does not have sufficient rules for this behaviour (which a simple Chinese room could get away with) then it is not a sufficient T3 program.
      Moreover, in Turing’s Universal Computer design the instructions can be stored in the same space as its “scratch paper,” so the computer is capable of writing its own code based on prior instructions. There is therefore nothing precluding Searle's ledger of instructions from containing an infinite number of blank pages for him to fill with characters as instructed (which, unbeknownst to him, are more instructions), and then following the new set of instructions. In fact, given the current state of machine learning, this is likely how such a machine would need to work: it would need a core program that allows it to write rules based on new inputs. So, no, I don’t think we are dealing with two different notions of computation, but we are dealing with a T3-Chinese-room. And it is certainly a lot more fun to imagine Searle stuck in the head of a giant robot megazord than at a depressing translating desk job. But I guess the great thing about this thought experiment is that these two tests would be indistinguishable to Searle (except when all his papers keep getting knocked about while the robot learns to walk).
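      As a minimal sketch of that idea (the "RULE:<in>-><out>" input format and the starting rule are invented): inputs of a special shape get written into the very table being consulted, so the ledger grows as the program runs.

      # A toy ledger whose instructions share storage with its "scratch paper": inputs of
      # the invented form "RULE:<in>-><out>" are added to the table being consulted, so the
      # program extends its own rules as it runs.

      rules = {"squiggle": "squoggle"}  # the starting ledger

      def step(symbol: str) -> str:
          if symbol.startswith("RULE:") and "->" in symbol:
              left, right = symbol[len("RULE:"):].split("->", 1)
              rules[left] = right  # a new instruction, written onto the scratch paper
              return "noted"
          return rules.get(symbol, "???")

      if __name__ == "__main__":
          print(step("squiggle"))                 # -> squoggle
          print(step("RULE:squaggle->squiggle"))  # -> noted (the ledger just grew)
          print(step("squaggle"))                 # -> squiggle, via the rule added above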

      Delete
  6. The core discussion that Searle outlines, to my mind, revolves around the gap between the English speaker being able to take input and produce the "correct" output through a series of procedures, for both English and Chinese, indistinguishably from a native speaker of either language. Searle argues that while the input and output seem to be acceptable to the native speaker, the English speaker still does not "understand" Chinese.

    I'm wondering about the definition of "semantics" in Searle's "syntax without semantics" linguistic analogy – syntax stands for the way words are arranged in any given language, while semantics give meaning to the words. I would argue that there is "meaning" in the syntax of words: an example, perhaps, could be seeing the word "and" over and over again in the same place syntactically, and figuring out that it's the word that joins two other words together. Does this not also have semantic meaning? The level of detail required in the computation needed in Searle's thought experiment would be too subtle and complex to describe, but to a certain extent, words acquire meaning by how they relate to other words.

    This isn't enough, though. We learn the meaning of words firstly by attaching them to objects and the properties of those objects in the tangible physical world, and continue onto attaching words to intangible concepts. I would argue that we claim to "understand" these concepts through communication by aligning our conceptions of them with other people's or our educators' conceptions of them.

    ReplyDelete
    Replies
    1. Wei-Wei, I think that you are totally correct to say that there is information or “meaning” in syntax, and that is a huge part of why it’s so easy for us to forget that programs such as translators or calculators lack the semantic knowledge that humans take for granted. If we wanted to know what happens when we have two apples and then get three more apples, we could use a calculator that computes “2+3” and gives the output “5”. We know that it must have the syntactic knowledge of “X + Y = Z”, and this information is coded into the Turing machine (calculator) that is taking an input and creating an output. What it lacks is the information of what “2” is. The calculator doesn’t know what “2” means besides that it is a squiggle and that 3 is a squaggle. The calculator doesn’t know that “2” means “a group or unit of two people or things”. While this is a simple example, think of the implications of a translator taking the sentence “where is the bathroom” and translating it to “dónde está el baño”. It might have the syntactic knowledge to change from the squiggles of English to the squaggles of Spanish, but it has no concept of what a bathroom is.
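      A toy illustration of that point (the tally-mark encoding is invented; real calculators do binary arithmetic, but the squiggle-to-squaggle moral is the same): an "adder" that gets from "2" and "3" to "5" purely by rewriting shapes, with no notion of quantity anywhere.

      # A toy "calculator" that adds single digits purely by rewriting shapes:
      # digit -> tally marks -> concatenation -> digit, by lookup alone.

      TO_TALLY = {str(n): "|" * n for n in range(10)}   # '3' -> '|||'
      FROM_TALLY = {v: k for k, v in TO_TALLY.items()}  # '|||' -> '3'

      def add_symbols(a: str, b: str) -> str:
          """Rewrite e.g. '2' and '3' as '|||||' and then as '5', with no notion of two-ness."""
          tallies = TO_TALLY[a] + TO_TALLY[b]
          return FROM_TALLY.get(tallies, "overflow")    # sums past 9 fall outside the table

      if __name__ == "__main__":
          print(add_symbols("2", "3"))  # -> '5'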

      Delete
    2. Karl, I totally agree with you! My last paragraph tries to address the meaningless squiggle/squaggle point. My argument is that the semantics of words are first learned by attaching them to physical objects and their properties, and we then move from that point into attaching meaning to intangible concepts. I would think that semantics and syntax (to speak in simplified linguistic terms) feed back into each other, and the accumulation of rich meaning where both are intertwined allows for more sophisticated expression of intangible concepts.

      Delete
  7. Searle's "Minds, Brains, and Programs," shows that a computer can have input and output capabilities that perfectly duplicate those of a native Chinese speaker, and yet still not understand Chinese, regardless of how it has been programmed. This highlights how, probably through looking at the development of AI through the lens of the Turing test, we tend to confuse simulation of cognition with AI duplication of cognition. Through his examples of waterfalls and fires, Searle reasons that simulations of any kind, cannot think or feel. A waterfall simulation is not wet, in the same way a simulation of a human brain would not feel. Confusingly, the Turing test is behavioristic in that it only cares about appearances. When I was reading Turing's paper I was under the impression that we would be just as much in the dark trying to decipher if a simulation of a brain was thinking/ feeling, as we would be in deciphering if our mother was thinking/ feeling. After reading Searle, I'm convinced this is not the case. We don't need to be behaviorists. We know simulations can't think/feel.

    I tend to agree with Searle's implication that human cognition is dependent on actual biological/physical/chemical properties of physical human brains, more specifically, how our physical properties act on a dynamical level. Like Searle, I do not think that we can divorce our hardware from our software in the same way that a computer can. I imagine that the interaction of memories, our present neural and physical environment, and our genetic/epigenetic factors together create a software/hardware blend which probably operates via some sort of dynamical principles, which results in individual, unique subjectivities. I don't think we will ever be able to trade consciousnesses with our friends by running our special algorithm in their brains. I also do not think we can wake up vegetative people by running a consciousness program in their brains.

    What else can we say about these causal properties Searle supposes any thinking machine must have? So far, is our sensorimotor interaction with our environment (causal learning via our sensory/motor/dopaminergic systems) the only plausible vehicle we have imagined by which something could have the ability to think? Has someone introduced the idea that we operate and come to think via inborn learning algorithms, something like Chomsky's universal grammar?



    ReplyDelete
  8. Searle's argument brings forward an important element of the Turing test: that it is possible to pass the Turing Test (T2) without having intentional states. He argues that he himself, in the Chinese room, would be indistinguishable from a native Chinese speaker, and thus would pass the test without any understanding. This is an important distinction and, I think, a limitation of the Turing test: the test can display human cognitive abilities, but it is not sufficient to capture any conscious state. This notion demonstrates that Turing really was looking at the easy problem of consciousness and not the hard problem; when distinguishing between these two, it's important to keep in mind that they address different problems.

    If the Turing test is insufficient to demonstrate conscious states/intentionality, is it even necessary? If, as Searle argues, computation is not helpful in functionally describing consciousness, then is it possible to separate the easy and the hard problems completely; that is, can something be conscious without passing the Turing test? Such a thing would be unable, for example, to use language or recognize objects, but would still be able to feel something in however it orients itself. Thinking about consciousness beyond our cognitive abilities is difficult to conceive, though it might be necessary when approaching the hard problem.

    ReplyDelete
    Replies
    1. I also find this issue with the Turing Test that Searle brought up really interesting. Does Searle’s thought experiment prove that the Turing Test relies entirely on behaviourism? In response to your first point, Cassie, I think that this is what the professor was getting at in class last week: he thinks it is likely impossible for us to pass the Turing Test with T2, and in fact we need T3.

      I think this leads also into your final point, it might in fact be possible that we build a T2, then elaborate on it to create a T3 who we determine is fully conscious and possesses intentionality. In the end however, we might find that the T2 had the same ‘mental’ capabilities as T3 all along, except for the sensorimotor input. This would obviously force us to drastically redefine our conception of cognition.

      Delete
    2. Based on the story about the man, the burger and the restaurant, wouldn't the T2 fail the Turing test because it won't be able to semantically understand that the man in the second story did eat the burger even though it syntactically mirrored the previous story where the man stormed out without eating the burger?

      Delete
    3. Furthermore, would we need at least a T4, who is capable of learning without external minds helping it connect symbols to referents, to be able to distinguish between how the man feels in both states in the story?

      Delete
    4. @Lucy - I wonder if it would be possible to build a T2 at all: in order to do a lot of the computations that we do, it would have to ground them, and if it can't, then I think a T2 would hit a snag someplace. Maybe we would need to build a T2-like machine that may be insufficient to pass the Turing test, but could get us closer to T3.

      @Shanil - I don't like the burger story very much; I think that it doesn't capture Searle's point fully. I think that a T2 could pass stories like this one with the right programming; it is true that it could not understand them, but how could a T2 communicate understanding anyway? I'm curious why you think a T4 rather than a T3 would be able to understand the story. I don't think that the neuronal level is necessary; Dominique is T3, after all.

      Delete
    5. @Cassie and @Shanil - I disagree with your objection to the burger story, Cassie. I do agree that T4 would be unnecessary, as the whole point of T3 is that it is indistinguishable. However, I think that the burger story brings up an important objection that we looked at in class, namely that it would be pretty much impossible to program a computer so that it knew all the possible sentences and scenarios available in the world. I think that this also highlights a flaw in Searle's thought experiment: it is a bit of a stretch to assume that it is possible for him to have an algorithm for every given scenario/input. I think that is why our professor thinks that it is T3 that is really dealt with in the Turing Test, and not T2, because without the ability to experience life, how could a T2 have a response to an infinite number of scenarios?

      Delete
  9. RE: "He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him."

    Along the same thought process, I agree that even memorizing the whole “rule binder” and the whole entire system might not be enough. I strongly believe that true understanding can be demonstrated in the ability to use completely independent learned concepts, and apply them in a novel way in an unpredicted situation. I do not feel like Searle’s Chinese room would allow this type of understanding to be displayed. Indeed, one must be able to create new connections between ideas and concepts. This cannot be attained solely by memorizing a system.

    ReplyDelete
    Replies
    1. I think that's a good point, Josiane. As humans if we memorize something enough, we're able to form connections and associations almost instantaneously due to routine memory. That is to say - it doesn't feel like memorization - it feels like understanding. Would you say that these instantaneous connections always lack understanding?

      Delete
    2. I don't think it’s so much memorizing rules (the syntax) but understanding the meaning (semantics) of those squiggles and squoggles handed to you. You say “true understanding can be demonstrated in the ability to use completely independent learned concepts, and apply them in a novel way in an unpredicted situation”, and I agree – I feel this way about true understanding as well. But in my opinion, this sort of understanding occurs when one understands the meaning behind the symbols, and not just when rote memorization is employed, as in CRA.

      Delete
    3. I agree with you all that in order to actually understand, one needs to be able to apply learned concepts to novel situations, because if they can do this then they understand the meaning behind the symbols, which doesn't happen when memorization is used on its own. I think this could also be demonstrated with pets. Some people think their dogs can understand English because when they ask their dogs if they want to go outside, the dogs eagerly run to the door. The animals have likely memorized and paired the tone of voice or the sounds with the action of going outside, but if you were to pose the question with different words or in a monotone way, they likely wouldn't react the same way or appear to understand. This again shows that true understanding comes from being able to respond correctly in novel situations rather than just memorizing certain instances.

      I think you bring up a good point, Laura, because, as you said, by memorizing something enough we, as humans, can form connections and associations instantaneously. However, I would say that at this point the memorization has turned into learning and associations being made, which is when we feel that we understand and can apply what we have learned to new situations. With Searle in the CRA, he can converse in Chinese without actually learning what the symbols mean or making associations between them, because of the level of memorization and symbol manipulation taking place, which allows it to appear that he understands. Could it ever be possible for machines to move past memorization, actually form connections, and demonstrate true learning? If they could form associations, would we then say they are actually able to understand? Or, because of what Searle proved in the CRA, will it always be thought that at the level of a T2 only symbol manipulation or memorization is taking place rather than true understanding?

      Delete
  10. This comment has been removed by the author.

    ReplyDelete
  11. "I will argue that in the literal sense the programmed computer understands what the car and the adding machine understand, namely, exactly nothing. The computer understanding is not just (like my understanding of German) partial or incomplete; it is zero."

    Earlier, Searle was discussing the idea that he didn't know Chinese at all. Because of this, his input/output of Chinese was not unlike that of a computer. My question is - is Searle's (or anyone's for that matter) lack of understanding of Chinese an incomplete or zero understanding? I understand why Searle would assign computers a "zero" for understanding - but what about humans? Humans have the potential for understanding - even when they do not understand something. That is to say - I do not understand Chinese, but I have the potential to be able to understand Chinese. Would this mean I have an incomplete understanding of Chinese (like Searle with German) or that I have a zero understanding of Chinese (like a computer)?

    ReplyDelete
    Replies
    1. This comment has been removed by the author.

      Delete
    2. Searle analogizes himself to a computer in his thought experiment only to show that we cannot understand symbols via memorization alone. If one hasn't learned what Chinese symbols mean, memorizing a procedure like the one Searle describes will not garner understanding; the symbols will just be squiggles and squoggles. We have the potential to ground symbols, and garner understanding through other processes we don't know much about yet, but Searle's thought experiment doesn't speak to that.

      Delete
    3. @Laura, I am also confused about the same issue! I'm having trouble distinguishing whether Searle is talking about zero understanding or incomplete understanding. I feel like it comes down to the definition of "understanding". Like @Lauren mentioned, I understand the idea that symbol manipulation itself couldn't be sufficient for understanding Chinese, because the man could be writing symbols without understanding anything in Chinese. However, since something about humans - whether biology, neurons, etc. - makes us not the same as computers, shouldn't we be able to understand it to a certain extent?

      Delete
    4. I like to think of understanding as a more semantic notion – if you can manipulate things based on your knowledge of their meaning, and as Josiane said above, apply your understanding in a novel way, you are closer to fully understanding it. In this sense, computers do have 0 understanding as they only manipulate formal symbols, which is not semantic. I’d say that even though you have the potential to learn Chinese, what exactly is potential? At this current moment in time you don’t know Chinese, and therefore your understanding is 0.

      Delete
  12. RE: "The Robot Reply (Yale)"
    Searle implies that the addition of perceptual and motor capacities to a robot would still not give it "genuine understanding and other mental states". His analogy of the robot homunculus that is still just manipulating symbols makes me think: how is a human homunculus not the same?
    How can we know that the critical difference is that the traditional homunculus "knows what's going on"?
    Can our own homunculus be as indifferent as that of the robot?
    That is, perhaps an epiphenomenalist view could account for the apparent problem of individual modules of the mind.
    If the mind arises somehow from all the different parts of the brain--the sum greater than the parts--could we not also expect that from a robot with likewise parts?

    ReplyDelete
    Replies
    1. I think that's a really interesting question: whether robotic parts will ever be able to sum to something greater than their parts, as I imagine ours do. I wonder if this is something innate to biological organisms or if we just need the right technology.

      Regarding comparing the human and robot homunculus, I think the whole point of cognitive science is to avoid using a homunculus to explain the mechanism of cognition, as it results in a circular explanation. The homunculi could just keep getting smaller and smaller, which doesn't create a satisfactory answer.

      But if you're using homunculus to refer to the parts of the mind or program that result in intentionality then I don't think the individual parts "know what's going on" but rather them working together results in this understanding.

      Delete
  13. I agree whole-heartedly with Searle: that formal symbol manipulation (or a program) is not sufficient for intentionality or understanding. This is because the program can be replaced with a monolingual human and there is no additional intentionality apart from the intentionality the human already possesses, merely to manipulate symbols.

    "Third, as I mentioned before, mental states and events are literally a product of the operation of the brain, but the program is not in that way a product of the computer."

    This, I feel, is a somewhat misleading statement. Mental states and events, such as catching an apple, arise from whatever is happening in our brains (to be determined by cognitive science and neuroscience). They are the result of the causal mechanisms we are looking for. Strong AI proposes that programs can be a causal mechanism for cognition. Thus, programs can't be the product of the computer because they are the causal mechanisms through which it works. However, motor movement or correctly answering questions about a story etc. can be (and are) a product of computers. Correct me if I'm wrong, but I don't think that mental events are causal (being happy, desperately trying to remember your third grade teacher's name). If so, then mental states and programs can't be likened to each other?

    ReplyDelete
    Replies
    1. From what I understand, one of Searle's notes helps this argument make more sense, which says

      "Intentionality is by definition that feature of certain mental states by which they are directed at or about
      objects and states of affairs in the world. Thus, beliefs, desires, and intentions are intentional states;
      undirected forms of anxiety and depression are not."

      Thus, for his argument it appears that he's only looking at particularly causal/intentional mental states.

      It would be interesting for him to touch more on non-intentional mental states and explain why he assumes they are not relevant to Strong AI. Is there an assumption that non-causal mental states cannot contribute to thinking and understanding? What is the reasoning for this assumption?

      Delete
  14. A reply to Searle's reply to the robot reply.

    I base this on the premise that I as a person consist of more than just my brain: I am my brain and all the other parts of my body. Without the rest of my body (be it biological or synthetic), I wouldn't be the person I am, and I probably wouldn't understand anything because there would be nothing for me to understand.

    I also want to leave consciousness out of this whole question, because it may very well be that Searle's robot is not conscious, and I don't want to get into those murky waters.

    So, my question: in what sense is what Searle-in-the-robot is doing any different from what my brain is doing? My brain is made up of many billions of neurons and other supporting cells. My neurons, in the simplest terms, receive inputs, generate action potentials, and send outputs to other neurons. My neurons have no idea whatsoever of what is happening in the world, and so are doing little more than formal symbol manipulation. This could extend to neural networks and eventually to the whole brain. But, since I am more than my brain, I as a whole person do understand what is happening around me. So if Searle-in-the-robot can be likened to my brain, of course he doesn't understand anything: neither does my brain. But this doesn't mean his lack of understanding extends to the robot; as even he says, the robot as a whole is doing more than formal symbol manipulation, and so am I.

    ReplyDelete
    Replies
    1. I would also love to know what specifically Searle means when he says "causal powers of the brain" and "intentionality." These terms are so central to his thesis yet he doesn't seem to define them anywhere.

      Delete
    2. Very interesting argument/point about the neurons not being conscious. I think this would then lead again to Searle's argument about dualism, however he equates it to a strong dualism, which I don't think is necessarily true. To me, it seems to be more of an interactionist approach - that the neurons/machine/hardware is interacting with some sort of "programming"/functionality that leads to the mind and thinking. I'm not exactly clear why this view is completely disregarded or ruled out as a hypothesis for Strong AI in Searle's paper. This would make it so that understanding wouldn't necessarily have to rely on the physicality of the brain, as Searle argues, but more an interaction of the physical machine with the program.

      In regards to your second question, I was wondering that too, but found that Searle explained intentionality a bit better in the notes section: "Intentionality is by definition that feature of certain mental states by which they are directed at or about objects and states of affairs in the world. Thus, beliefs, desires, and intentions are intentional states; undirected forms of anxiety and depression are not." Also, on page 13 he starts to talk about the argument that computers and brains have "information processing," whereas other machines do not. I think this is perhaps what he means by the specific causal powers of the brain as opposed to other instances. Thus, the brain's causal powers are in terms of information processing, while he argues that currently computers only simulate this information processing.

      Delete
    3. I think it is impossible to leave consciousness out of the question, because for Searle, intentionality, which is, I think, consciousness or its key feature, is necessary for understanding. We do not understand if we do not have intentionality/consciousness.

      I do agree with your assessment of the robot reply, and would like to add that his reply to the other-minds problem is not satisfactory.

      "This objection really is only worth a short reply. The problem in this discussion is not about how I know that other people have cognitive states, but rather what it is that I am attributing to them when I attribute cognitive states to them. The thrust of the argument is that it couldn't be just computational processes and their output because the computational processes and their output can exist without the cognitive state. It is no answer to this argument to feign anesthesia. In 'cognitive sciences" one presupposes the reality and knowability of the mental in the same way that in physical sciences one has to presuppose the reality and knowability of physical objects."

      Particularly, pointing back to your comment about "causal powers", he fails to point out what is in a cognitive state that is not accounted for by computation and output without presupposing intentionality, because it is exactly intentionality we are trying to capture.

      Delete
  15. In the Systems Reply, the individual in the room cannot understand Chinese but the system as a whole understands. Searle responds to this with the following proposal: let the individual learn and internalize the system itself, and he will still not understand after he has memorized the system in his head. Symbol manipulation itself could not be sufficient for understanding Chinese, yet Strong AI supporters would say that English is just more formal symbol manipulation.

    I agree with Searle that even if the individual learns all the rules to this Chinese Room Experiment, he will still not understand what is going on despite producing the correct outputs. This individual would indeed pass the Turing Test but would not have the intentionality of a functioning human mind.

    However, how can we explain the process of understanding English through formal symbol manipulation? We understand because we feel like we understand but what is the causal mechanism underlying this? How is it that symbol manipulations work for English native-speakers yet they do not work the same when the individual is introduced to Chinese? Is there a way we can reverse-engineer our understanding of English as native speakers, into a program?

    ReplyDelete
    Replies
    1. I think Searle is saying that understanding English can't be just formal symbol manipulation.

      Delete
  16. “As regards the second claim, that the program explains human understanding, we can see that the computer and its program do not provide sufficient conditions of understanding since the computer and the program are functioning, and there is no understanding.”

    Searle says that the computer program is irrelevant to the understanding of the story and that it understands nothing. As per my previous skywriting, I am still not convinced that, despite memorizing all the symbol manipulation rules, he still had zero understanding of Chinese. I would think that by memorizing all the rules of symbol manipulation in Chinese, he had at least some understanding of what words meant (even if he did not understand them in great detail). How is this different from memorizing information and facts on a topic without any true understanding of it, and then regurgitating it later on a test? If students were to do this, would we say that they were not thinking? Rote memorization and practice could make it possible to perform well on a test with no actual understanding of the material being assessed. Perhaps symbol manipulation by itself is not sufficient for understanding Chinese. However, our ability to remember is essential in learning, even despite the uncertainty of what the Chinese words meant. This is why he could have learned and taught himself Chinese through memorization. Is this a case of distinguishing between simulation and duplication?

    “Many people in cognitive science believe that the human brain, with its mind, does something called information processing, and analogously the computer with its program does information processing.”

    The computer has a certain kind of organizational structure and operates in a certain way (it transforms, stores, and does things with information). In the same way and in this sense, the brain is a computational device, as it takes in information, stores and transforms it, and generates outputs. So it seems that everything the brain does is in some way related to computation, but we can't say that it is equivalent to a computer, as the brain performs computations in radically different ways, not just because it is made up of tissue, but also because we have the means to feel.

    ReplyDelete
    Replies
    1. In regards to the symbol manipulation, I think in this case, Searle wouldn't have any way to associate the Chinese words with any sort of meaning. He would merely be receiving Chinese words (squiggles) and translating them into another batch of Chinese that he still doesn't understand (squaggles). Although the rules he has for translating the squiggles to squaggles are in English, there's no way for him to associate meaning to any of the Chinese words.

      Delete
  17. This comment has been removed by the author.

    ReplyDelete
  18. Searle claims that his thought experiment refutes strong AI by showing that the relation between software (here, the Chinese rules) and hardware (the person in the room) does not equate to the relation between the mind and the body. However, the mind here is considered as hardware: the computation is done by it instead of being it. I think this is what the Systems Reply really attempted to say. The mind here cannot understand Chinese because it plays the role of the body; in the case of English, the body cannot understand English either. Searle tries to put in parallel two relations that are on different causal levels, one with the mind as a consequence and one with it as a cause.

    ReplyDelete
  19. Searle states that while he understands English (his native language), he doesn't understand Chinese, since "the English subsystem knows that "hamburgers" refers to hamburgers, the Chinese subsystem knows only that "squiggle squiggle" is followed by "squoggle squoggle"".

    I found this very interesting - coming from a background where I know both languages, it made me question whether I truly "understand" English/Chinese. I grew up in New Zealand and English is technically my native language - I learnt Chinese side by side, but not as natively as English. Often I find myself translating from English, my native language, into Chinese. Does this mean I don’t understand Chinese? However, I feel that I am fully capable of understanding it due to my cultural background. My ability to translate Chinese words into English must mean I understand Chinese to a certain extent, right?

    Let's say we placed a native Chinese speaker in the Chinese room - or rather, an "English Room" - and did the exact same procedure with English. The scripts, stories, words, and translations would all produce English, so that a native English speaker would completely understand the output and not know that the person inside the room is actually Chinese. So does this argument work in the reverse case?

    It is also interesting that he chose to compare Chinese with English; I believe they are so fundamentally different that it would make more sense to compare Chinese with another logographic/symbolised language such as Korean or Japanese, or to compare French with English.

    Has anyone else thought about this?

    ReplyDelete
    Replies
    1. I would say the kind of understanding that you're talking about is fundamentally different than the kind of understanding that Searle is describing.

      When translating between English and Chinese, the lack of understanding that you may sometimes experience is due to being unaware of the structure or vocabulary of the other language. For instance, you may not know the word for “apple” in Chinese, but you know it in English, so obviously you have a semantic understanding of this thing already, you just need to connect this to the corresponding symbol in Chinese (translation). So I would say that you may not have a complete understanding/proficiency in Chinese, but that you absolutely do understand the sections that you’ve learned.

      With Searle’s Chinese room, the “understanding” that he’s referring to is that capacity for semantic understanding that you’ve already demonstrated in the above example. In his example, this isn’t demonstrated because even if the machine was given the Chinese input for apple, the English native inside will just see squiggles and respond by giving the corresponding squoggle outputs, and therefore will never make a semantic connection to the thing he is interacting with.

      I think the usage of a logographic language was just to make the symbols seem more abstract to someone who is an English native, but probably doesn’t mean much in itself. The thought experiment would definitely still work if you made it a Chinese speaker in an English Room, or just used two entirely fictional languages.

      Delete
  20. “Imagine a robot with a brain-shaped computer lodged in its cranial cavity, imagine the computer programmed with all the synapses of a human brain, imagine the whole behavior of the robot is indistinguishable from human behavior, and now think of the whole thing as a unified system and not just as a computer with inputs and outputs. Surely in such a case we would have to ascribe intentionality to the system.”

    Searle argues against the combination reply by saying that even though the created AI is passable by human standards, this tells us nothing about understanding. But for the purpose of the Turing test, the only thing that matters is the complete passability of the machine, not understanding. Even if we did care about machines having understanding, we wouldn’t be able to prove anything because of the other-minds problem. Therefore, even though Searle’s Chinese room thought experiment shows that computation is not cognition, it doesn’t say anything about the Turing Test. Searle also doesn’t account for random errors in computation resulting from something like the busy beaver function. Random errors that prevent an analogue from operating within strict rules could possibly give the analogue the ability to understand. I am not a computationalist, but Searle doesn’t respond to this problem in his paper.

    ReplyDelete
  21. Searle has made it very clear that he doesn’t believe that programs can reach a state of intentionality and understanding. He strongly demonstrated this with the Chinese example, whereby he couldn’t understand Chinese at all but appeared to understand it by manipulating symbols in order to produce answers in Chinese. This became an overarching theme in the paper, whereby Searle stated that simulation and duplication are not the same thing. Just because a program or a machine can appear to do things such as answer questions correctly doesn’t mean it can actually understand anything; it isn’t actually duplicating a human brain, in which there are true mental states and cognitive processing. Searle ends by stating that Strong AI is really a strong form of dualism, because in order to truly believe a machine can reach a level of understanding, it needs to have the capabilities of a mind that would obviously have to be entirely separate from a brain. My question is: as dualism was once a proposed theory that many people today have moved away from, will the same thing happen with strong AI? Has it already happened, or do people still believe it is possible to instantiate a program that has mental states and the ability to understand, just like humans do?

    ReplyDelete
  22. Searle’s Chinese Room argument against the idea that formal symbol manipulation can produce cognitive states is convincing. I think, however, he overreaches in giving a reply to the Robot Reply. Searle states: “the addition of such ‘perceptual’ and ‘motor’ capacities add nothing by way of understanding”. The Robot Reply, in which “the robot does something very much like perceiving, walking, moving about, hammering nails, eating, drinking”, implies something like a T3. It senses, perceives, and interacts with its environment; essentially, it has sensorimotor capacities. Searle’s reply to this argument focuses on formal symbol manipulation. But I think the Robot Reply goes beyond formal manipulation; it touches on grounding the symbols that the computer in the robot then manipulates through sensorimotor capacities. So I have trouble with Searle’s stance that it’s the brain where we should focus. He (the man in the room) didn’t understand because the symbols weren’t grounded; we can ground those symbols with sensorimotor capacities. I don’t see why any brain-less T3 robot wouldn’t be called thinking.

    As an aside, to try to give strong AI/computationalism a defence, the Chinese Room argument rests, in part, on how easy it is to grasp that Searle (as the human in the room) cannot understand Chinese notwithstanding the symbols, rule book, etc. Where I think computationalism may retort (apart from the replies which Searle addressed) is that it is precisely because the person moving symbols around in the room lacks complexity that we would not think the man in the room actually understands anything. I cannot define complexity except to say that the whole is greater than the sum of its parts. Adding more would seemingly give the same result, but large-scale addition of the same thing could potentially result in something that could be deemed a cognitive state; for instance, while an individual ant's pheromones don't do much, an entire colony's pheromone trails coordinate movement to help the colony survive.

    ReplyDelete
    Replies
    1. This comment has been removed by the author.

      Delete
    2. This comment has been removed by the author.

      Delete
    3. I think your argument for symbol grounding through sensorimotor capacities is very interesting.

      My first thought after reading the Chinese Room argument was: why not give the human inside the room sensorimotor access to the outside world? This way he could extract patterns between the input he receives, the output he sends, and the consequent changes he perceives in the "outside" world (similar to how humans learn). These patterns could help him with the "symbol grounding" problem: he could start recognizing causal relationships between the perceptual world and the symbols he manipulates, and in a way attribute meaning to the symbols.

      Delete
  23. Regarding: the last paragraph, Page 9/19.

    Since Searle said we can't make sense of an animal's behavior without the ascription of intentionality, did he mean that the assumptions we make about animals (that they have the same causal stuff underlying their behavior and that they must have mental states) are wrong? Yes, we cannot locate the source of intentionality yet, but then how should we deal with the question of whether animals have mental states (and further, whether they have intentionality)? Instead of just bringing up the other-minds problem again, I just want to know how we should think of animals. Do we suppose that animals don't think? It sounds awkward to say that, but neither does the opposite claim, that animals do think, sound obviously true, especially when we cannot determine where intentionality comes from.

    ReplyDelete
    Replies
    1. Alison, I don't think he is saying that the assumptions we make about animals are wrong. I think he is just saying that we do not have enough information. I feel there are many cases that suggest animals do think, but it also depends on your definition of "think."

      Delete
  24. This comment has been removed by the author.

    ReplyDelete
  25. This is about the idea that if strong AI is to be a branch of psychology, it must be able to distinguish systems that are mental from those that are not. According to McCarthy, “Machines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem solving performance"
    Searle’s answer is that “The study of the mind starts with such facts as that humans have beliefs, while thermostats, telephones, and adding machines don't.”
    I am wondering whether the concept of having beliefs would come with perceptual and motor capacities. In the robot reply (Yale), Searle says that the addition of these still does not constitute understanding. OK, but can beliefs be thought of as the product of these perceptual and motor capabilities? If so, it seems that beliefs are a product of the senses, which a robot could be engineered to have. In that case, can a robot be considered to have beliefs? After all, we only know our own beliefs, and whether others believe is simply the other-minds problem. So by stating that the study of the mind starts with the fact that humans have beliefs and other machines do not, Searle's argument is itself subject to the problem of other minds. I know Searle has an answer to this that defines the problem not as whether others have cognitive states but rather as what he is attributing when he attributes cognitive states to them, and for these purposes we can accept that. But it seems quite risky to let the study of the mind rest on an assumption (that humans have beliefs and machines don't) that can never be substantiated. Also, there are many humans who claim not to have beliefs; where do these people fit in?

    ReplyDelete
  26. From my understanding, the crux of Searle's argument is that "intentionality", or more generally our mental processes, cannot possibly be built into a computer program, precisely because a program lacks the very thing we are trying to construct. If this is true, Searle rests his case on the claim that cognition cannot be reduced to mere "computation," but he doesn't offer an alternative explanation of what cognition is. How would Searle explain how mental processes come to be?

    Since Searle is not a dualist and accepts that mental processes are “a product of causal features of the brain”, I find it difficult to understand his reasoning for rejecting the Robot or Combination Reply. If a robot embodies our basic neural structure and motor capacities, and is left to explore the environment, gain experiences and interact with the world, surely it would learn complex internal states and gain intentionality. If this is not so, then how do humans gain intentionality, understanding and give meaning to things?

    What separates this theoretical robot from a newborn baby? They have identical neural structures and motor abilities. Through interaction with other people and our environment, both learn how to apply meanings to things, and to behave and respond in different situations. If the robot has the same neural structure as a human, then it must have the same capacity to learn via neural plasticity. Likewise, if the robot has motor ability, then it has the capacity to explore and interact with the world, and gain experiences. Both the newborn baby and the robot, through the same means, learn via interaction with the environment, and both mature into beings with “intentionality” and “internal states”.

    Ultimately, my point is that if the “mind” is merely an intangible by-product of the tangible workings of the body, then surely, by simulating or re-constructing those bodily forms and processes into a computer program and enabling the robot to interact with the world, it would gain “intentionality”, no differently from the way that humans do.

    ReplyDelete
    Replies
    1. I agree with your point in theory. If reverse engineering down to our synapses and atoms could be possible, the robot, which I don't think we'd call a robot anymore, would possibly have intentionality. However, I don't believe that our physical and chemical structure can be reconstructed down to its core, and that's the problem. When Searle was talking about a robot, I don't think he was talking about one made from the same material as we are. He was talking about a computer-like robot, engineered by humans and made out of a different material. That's why Searle thinks that an AI can't have intentionality, because he believes that our unique structure and brain is what gives us our consciousness.

      Delete
  27. Searle said in the paper that “no program by itself is sufficient for thinking”; however, what about a candidate that passes T4, thus showing verbal, robotic and neural indistinguishability as we mentioned in class? If the program itself highly resembles a human brain, is it possible for the program to be sufficient to give rise to thinking?

    ReplyDelete
    Replies
    1. That was Searle's belief, but I think that if a robot (like me) were to successfully pass T4, it would prove his assertion wrong. Well, perhaps not necessarily wrong, since we might not know for sure all that's going on in the robot, but a T4 robot would certainly be much closer to what could be characterized as "thought" than anything Searle was referring to--especially due to the neural indistinguishability.

      Delete
    2. Hi Dominique the famous robot hahaha. I guess it's just that T4 is still too far away for us.

      Delete
    3. I think what he meant was that the program itself is not enough to evoke thinking. In the case of a candidate passing T4, that is a machine, not a program.

      Delete
    4. I guess the equivalence problem is raised again here. It is important to note that Turing doesn't care whether the right algorithm is being used, because he's not a computationalist; as such, only weak equivalence is required. But if we want to apply strong equivalence here, not only should the same input give rise to the same output, the same algorithm should also be used.

      Delete

  28. “Whatever else intentionality is, it is a biological phenomenon, and it is as likely to be as causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomena. No one would suppose that we could produce milk and sugar by running a computer simulation of the formal sequences in lactation and photosynthesis, but where the mind is concerned many people are willing to believe in such a miracle because of a deep and abiding dualism: the mind they suppose is a matter of formal processes and is independent of quite specific material causes in the way that milk and sugar are not(14).”

    The mind is a product of our biological makeup; when we try to duplicate the mind, we are, in essence, trying to duplicate our biological existence. It's absurd to think that this could ever be done. Our brain consists of BILLIONS of neurons, exchanging A THOUSAND bits of information between them PER SECOND. Essentially, our brains are working on a scale so vast that we cannot cognize it. We have codified our unfathomable biological existence in an effort to bring it down to "human size," i.e. making the unfathomable seemingly fathomable. This is what, I believe, Searle is trying to demonstrate. He is arguing against the idea that mental processes are computational processes over formally defined elements; mental phenomena cannot be codified. He says, "What matters about brain operations is not the formal shadow cast by the sequence of synapses but rather the actual properties of the sequences" (11). He is saying that the ability to code is a consequence of mental phenomena, but it is not mental phenomena. Mental phenomena are biological phenomena; we must figure out how we exist as biological entities in order to understand the mind.

    ReplyDelete
    Replies
    1. Madison, I completely agree with you and I am having difficulty detaching biological and chemical properties from consciousness and mental processes. Searle even states that “many AI workers are quite shocked by my idea that actual human mental phenomena might be dependent on actual physical/chemical properties of actual human brains,” which makes me wonder why we are using reverse engineering to replicate the brain when those key mental processes might just be a product of the chemical make up of our brain, which cannot be synthesized unless we replicate those chemicals themselves or find a synthetic substitute for them.

      In order to understand the aims of AI and the cognitive science field, however, we need to acknowledge the fact that maybe there is more than one way to achieve consciousness and by trying to model this using digital computers, we are coming closer than ever to understanding what underlies this complicated phenomenon.

      Delete
  29. I found myself going back and forth with Searle's Chinese Room Argument. In general, I oppose his overall idea, especially since I found he cherry-picked topics conducive to his argument and cut short objections as conveniently not "the focus of the paper." Since we have limited space on this blog post, I will focus on the aspects that perplexed me the most. First of all, when Searle compared the inputs and outputs of Chinese to the stomach, he claimed that neither system has information. If information is the reduction of uncertainty, the Chinese system qualifies, while the digestive system does not. In light of his water-pipes-and-valves analogy, he claims that the formal properties are not enough for the causal ones, in an anti-reductionist tone. But if that's the case, is his own position not closer to dualism? He seems to describe causal properties as some mystical aspect that can't be reproduced from the formal ones. I appreciated his idea that Strong AI depends on dualism: separating brain and mind. He seems critical of this. So which is it? Is it monism/reductionism he disagrees with, or dualism? Finally, he claims: "We would certainly make similar assumptions about the robot unless we had some reason not to, but as soon as we knew that the behavior was the result of a formal program, and that the actual causal properties of the physical substance were irrelevant we would abandon the assumption of intentionality". My question is whether this is relevant, since we would have no reason to question the robot's behaviour, especially if it were indistinguishable from a human's.

    ReplyDelete
    Replies
    1. Right, it is irrelevant--we cannot really know if a machine understands or feels in much the same way we can't really know if another human does. There is definitely more to the Robot Reply than Searle gives credit for; a robot is not just a computer that is running a formal program, and language is much more than simply manipulating formal symbols (at least, language as understood within a semiotic system).

      Delete
  30. Even though the man in the room cannot understand Chinese, those who created the rules can understand Chinese, so the intentionality lies in them, and not in the man (i.e. the program). In this case, the Systems Reply would make sense: even though the man doesn't understand, the rules he is using were created by someone who does understand, and were created with the causal powers of that person's brain. Thus, even though the creator of the rules is not physically present in the room, his understanding is present in the room as an extension of the rules he created. In this stream of thought, the Systems Reply makes sense. Although formal symbol manipulation doesn't in and of itself have intentionality, the causal mechanism (i.e. the brain of the person who created the rules and symbols) does, and through this, a program does have a sort of intentionality.

    ReplyDelete
    Replies
    1. I'm not sure how this helps the Systems Reply. What are we considering the system, in this case? The man and the room, plus the creator of the system, wherever (s)he is? (This becomes more complicated if the creator of the system is, say, deceased.)
      Furthermore, the intentionality here would only lie in the mind of the creator of the system. The creator of the system's intentionality is presumably the result of his or her brain. The intentionality is then ultimately attributable to the brain of a person.

      Delete
  31. In the Chinese Room Argument, isn't Searle the robot holding the capacity to understand, who merely lacks the time to learn Chinese? As such, when he was using the rules and symbols to pass the test, his ability to comprehend the trick is ultimately what gave rise to the result. I guess there is a difference between a human brain processing symbols and a program carrying out an action. Even if the inputs and outputs are quite similar (for example, both candidates are able to fool a real Chinese speaker and both are reading Chinese characters), the process is not equivalent in terms of what was really going on in Searle's mind. Probably it is also an "other-minds" problem.

    ReplyDelete
  32. I feel like the idea that we either 'fully understand' a language or can just transfer it from an English rule book simplifies how we actually interact with languages. For example, when you read a sentence in French, some of the words you can immediately recognize and map to whatever they stand for, but for other words you have to mentally translate them into English first. Most of how we read languages is a combination of these two things, not just one or the other.

    I think what this means is that if Searle or a machine could 'understand' Chinese, that doesn't necessarily exclude 1-1 translation of the symbols. If the English guide for transferring the symbols is used, how is it any different from using a dictionary to look up unfamiliar words?

    ReplyDelete
  33. I strongly disagree with Searle. (1) He never defines understanding. He only says that it is obvious when something does or does not understand. However, with that definition of 'understanding' he can state that only humans are capable of understanding. This is not proven. Additionally, I feel that he completely underestimates the Systems objection, for two reasons:
    i. At the neuronal level, all that is done is basic operations involving summation of inputs and an all-or-nothing response (see the toy sketch after this list). However, with many of these put together (I do not mean to suggest that we create a machine with an exact replica of the brain's neurons; it could do this in alternate ways) we have a system that is able to 'understand'. Why could a machine not do this? If it were able to do this, would Searle consider it to fit into 'understanding'? By stating that the combination of the whole Chinese Room system does not yield understanding, we imply that there is something superior, a 'mind', that is not just an emergent property of the basic computational processes of neurons. I find this too strong a claim to make.
    ii. If he considers the Chinese Room being fit into one person not a valid example of understanding, would he also consider speaking a second language not really understanding? I will elaborate: my first language is Italian. I can also speak French somewhat well. However, every utterance I hear in French I translate into Italian. I then think of my response in Italian and translate it into French. On the outside I appear as if I 'understand' French. However, by extending Searle's argument, do I not really understand French? Have I just memorized the sheets of paper listing what the individual lexical items are? I think that his argument falls short without a clear explanation of what 'understanding' is. If there is no clear explanation of what it is, then he can always argue that it has not been reached!
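
    As a toy illustration of point i (a minimal sketch in Python; the weights and threshold are invented for the example, and this is not meant as a model of real neurons), the basic operation described there is just weighted summation followed by an all-or-nothing response:

    def threshold_unit(inputs, weights, threshold=1.0):
        # Summation of inputs (weighted), then an all-or-nothing response.
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # Two excitatory inputs and one inhibitory input (the numbers are arbitrary):
    print(threshold_unit([1, 1, 1], [0.6, 0.7, -0.2]))  # 1: fires, since 1.1 >= 1.0
    print(threshold_unit([1, 0, 1], [0.6, 0.7, -0.2]))  # 0: silent, since 0.4 < 1.0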

    Also could someone explain to me what he means by “formal properties” on page 11?

    ReplyDelete
    Replies
    1. I agree with this.
      To extend Searle's water pipe example a bit further, what if we had a system of water pipes turning on and off that perfectly simulated mental processes (T3)? (By this I don't mean it's a perfect model of the human brain). It seems like Searle would say the water pipes don't understand, so the system doesn't understand. But what is it about the brain that makes it capable of understanding, but the system of water pipes not? The brain is purely physical - aside from being not made of metal, etc., not that much different from water pipes. In an alternate universe where we are made out of this system of waterpipes, not neurons, would alternate-universe Searle say that a system made out of neurons running a program does not understand?

      Delete
    2. Hello,
      regarding the definition of "understanding", I think Searle does not define it because it really is something like "feeling": ultimately there will never be a satisfactory reply. Definitions are arbitrary, so the question, rather than "what is understanding?", could be "what are we inclined to include in the scope of cognitive science?". Searle is betting on the fact that most of us would not consider the translating homunculus an example of understanding in cognitive science, so by the same logic why would we set the standard for understanding lower for AI? In other words, if you taught a Chinese speaker how to match Italian words to French words, but the person cannot associate those words with their objects, then few people would say that the person understands either Italian or French. I am unsure about your example with languages, to be honest, but from my own experience, when you acquire a certain fluency in a new language (even a limited one), you start associating words in that language directly with their objects. Translating into your original language is rarely efficient, and it can often lead to mistakes.
      Regarding point i), the simple fact that you would take neurons as an example rather than water pipes would be enough for Searle, since his point is that there is something about neurons and our body in general that is contributing to cognition, and strong AI is wrong because it denies its existence. From what I have understood, he would not necessarily disagree that the mind is "just an emergent property of the basic computational processes of neurons", as long as you keep the neuron part in that sentence (in other words, "computation" is not enough), although, as Prof. Harnad stressed, he seems to be ok with getting rid of the computational part altogether.

      Delete
  34. Searle's initial argument is that, because the hardware of the program (Searle himself) does not understand, the system does not have 'intentionality.' He then addresses the Systems Reply and argues that the system in its entirety does not have intentionality. Because it doesn't have 'intentionality', it doesn't 'think' - it fails to reproduce or provide an explanation of mental processes.

    Searle never clearly defines what he means by intentionality. I took it to mean "understanding". If I understand correctly, his argument is something like this: because the system does not understand, it does not exhibit 'intentionality,' therefore it does not reproduce 'mental phenomena' (another ambiguous term), and therefore it fails to provide a theory of the human mind.

    I accept that the system described does not understand, and therefore does not reproduce ALL mental processes and fails to provide a FULL theory of the mind. But is everything the mind does contingent on understanding? To put it another way, can certain mental processes be reproduced even if understanding is not reproduced? I don't see why not. I realize that, while understanding is an ambiguous word, most of us know what it feels like to understand, and, again, I would not contest that understanding is essential to a full theory of the mind. However, isn't understanding only one part of the brain's 'conscious' (another ambiguous word) mental processes? I see no reason why certain mental processes couldn't be reproduced by this system, and why the system couldn't therefore provide at least a partial explanation of the mind.

    ReplyDelete
    Replies
    1. I agree that Searle's usage of 'intentionality' was a bit ambiguous, and although he really did focus on the importance of the capacity to understand, I think intentionality is meant to convey more than just that. I think he is using it as a general term for 'consciousness', so ultimately Searle is arguing that because these computers are programmed to follow a set of instructions in order to produce the correct output, then they aren't necessarily conscious.

      I also agree that not everything in the mind is contingent on understanding, but it is important to understand the language that one is using to operate. In the Chinese room argument, although the computer understands the program when it is "defined in terms of computational operations" and can manipulate symbols to formulate correct outputs, it does not actually understand Chinese, which rules out consciousness and intentionality of the program.

      Delete
    2. Likewise, then, is everything the mind does (and how it does it) contingent on 'consciousness', the ability to feel? 'Consciousness' is the hard problem. So, in order to formulate even a partial theory of the mind, we need to solve the hard problem? I.e., a system cannot reproduce any mental processes unless it is 'conscious' and 'feels' the way humans do? Again, I see no reason for this to be true. Consciousness/feeling, like understanding, is only one aspect of cognition.

      Delete
  35. The situation Searle is describing in his Chinese Room Argument reminded me of the theory of syntactic bootstrapping in developmental psychology (that a child might somehow pick up on the syntax they are exposed to in language and use that structure to derive the meanings of words). I think the question “how can a child who knows no language come to understand it?” is, in some ways, similar to the question “how could a machine possibly comprehend meaning when its input is only ungrounded symbols?” Would Searle argue that humans are capable of making this leap (from non-language to language) because of our “causal features” and that this has to do with the material that makes up our brains? How could the material have such a profound influence?

    ReplyDelete
  36. Some have commented here about second languages; specifically, how they feel like they are translating from L2 into L1 and then back into L2 when they want to speak, and that Searle’s CRA ignores this translating and mapping of L2 to L1 symbols (words) and vice versa that they are doing. Basically, you feel like you are doing the same thing as the man in the room when you translate back to your native language.

    I don't think this argument actually rejects Searle's idea concerning his own lack of understanding of Chinese. When Searle is in the room, he is mapping from Chinese to Chinese, a language he doesn't speak. For the sake of argument, let's go with the idea that speaking a second language is just translating it into your L1: the listener would then be mapping the L2 onto the symbols of their L1, a symbol set they actually understand (i.e. they have grounded these symbols; their L1 symbols have meaning for them). This is totally unlike Searle in the room, because he never actually maps Chinese back to his L1; he can never ground the Chinese squiggles. Searle's whole point is that computation, an algorithm that could do Chinese to Chinese, would never actually understand, not in the way an L2-L1-L2 speaker does.
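
    A minimal sketch (in Python; the dictionaries and glosses are hypothetical) of the contrast I have in mind: the L2-to-L1 route bottoms out in symbols the speaker has already grounded, whereas a Chinese-to-Chinese rulebook never bottoms out in anything but more ungrounded symbols:

    # L1 symbols are grounded: for the speaker, "mela" is already connected to a
    # sensorimotor category (what apples look, feel, and taste like).
    L1_MEANINGS = {"mela": "<the speaker's sensorimotor apple category>"}
    L2_TO_L1 = {"pomme": "mela"}  # the L2 (French) word mapped onto the L1 (Italian) word

    def l2_understanding(word):
        # Translating into L1 ends at something already grounded.
        return L1_MEANINGS[L2_TO_L1[word]]

    CHINESE_TO_CHINESE = {"苹果好吃吗?": "很好吃。"}  # symbol to symbol; no grounding anywhere

    print(l2_understanding("pomme"))           # reaches a grounded category
    print(CHINESE_TO_CHINESE["苹果好吃吗?"])    # the output is just another ungrounded symbol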

    ReplyDelete
    Replies
    1. Maybe an additional, perhaps redundant, point: the algorithm allowing Searle's conversion from Chinese to Chinese would never actually understand because it can't ground the symbols in sensorimotor experience. Searle is part of the Chinese Room, part of the algorithm or software, and he has the capacity to ground the symbols; but by using the rulebook or memorising the instructions he never grounds the symbols, nor is he able to connect them with words he has already grounded.

      Delete
  38. Searle describes a scenario in which he computes Chinese perfectly without any knowledge of the Chinese language. There is an input, an output, and a program in between. He is arguing against strong AI by saying that even though he can fool native Chinese speakers with his perfect outputs in Chinese, he has no understanding of what the output or input is. A computer could do the same thing without having any understanding; therefore Strong AI is not possible. I found fault with this: if he is able to compute this so perfectly that native Chinese speakers are fooled, won't he have gathered some knowledge of the written Chinese language? Google Translate is imperfect because the program does not understand Chinese; so if he is to be perfect, doesn't that mean that he does understand it in some way? Searle doesn't believe that passing the Turing Test (fooling the Chinese speakers) is sufficient for Strong AI, although Turing says that being able to fool them would be enough to say that a machine can think, or that a machine is intelligent. Does a machine need understanding to be intelligent?

    ReplyDelete
  39. I feel as though Searle is contradicting himself in a way (or I'm not understanding something properly). He refutes the brain simulator reply by saying that we can understand the mind without doing neurophysiology. In addition, he says that learning how the brain works would still not be sufficient to produce understanding. According to Searle, creating a machine that simulates the firing of synapses in the brain would be equivalent to giving the man in the room instructions about how to operate a system of water valves. The water connection would create an output, but the man in the room would still have no understanding of what he was doing.

    However, later in his article, Searle makes the following statement:
    “It is not because I am the instantiation of a computer program that I am able to understand English and have other forms of intentionality, but as far as we know it is because I am a certain sort of organism with a certain biological (i.e. chemical and physical) structure, and this structure, under certain conditions, is causally capable of producing perception, action, understanding, learning, and other intentional phenomena. And part of the point of the present argument is that only something that had those causal powers could have that intentionality”

    Why, if we should not rely on neurophysiology to help create an understanding of the mind, is it okay to rely on biology/chemistry/physics?

    ReplyDelete
    Replies
    1. I can understand your confusion. Here is the way I interpreted Searle’s response to the brain simulator reply.

      He first states that it is not necessary to know how the brain works in order to understand the mind. As he says, "If we had to know how the brain worked to do AI, we wouldn't bother with AI". I find this a bit problematic because he doesn't seem to propose an alternate way to understand the mind.

      Then he makes his main point: the brain simulator that takes Chinese as input is merely simulating the formal structure of the synapses, and the man will still not understand Chinese. In other words, Searle is saying that simulation of the synapses is not enough for understanding.

      He says, "The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states."

      In his later point, he is restating his position against strong AI (computationalism) by saying that understanding English is not simply a matter of computation. It is his biological system (I agree that his use of the word "biological" here is confusing) which, under certain conditions, is capable of understanding. In other words, there is something inherently human beyond computation that makes up our cognition. I don't think he is asking us to rely on biology/chemistry/physics to find this thing; rather, Searle seems to be claiming that it is the causal properties and intentional states of a brain that make it human.

      Delete
  40. My primary question is: if a digital computer were ever to think (which Searle contends it could not), what measure would we use to confirm this, given that he has dismissed the Turing Test? Secondly, inanimate machines cannot think or feel things because "they are not in that line of business". Is the project of AI to give feeling to these objects so that we may deconstruct what it means to feel and finally solve the hard problem? This dualist or, as he states, "ideology-laden" view seems circular and a strange way to approach the problem.

    “it is no argument against this point to say that since they both pass the Turing test they must both understand, since this claim fails to meet the argument that the system in me that understands English has a great deal more than the system that merely processes Chinese”. Pg6
    I don't understand this defence. The English system has more than the Chinese one, but they both pass the Turing Test, so either the Turing Test does not measure whether a machine is thinking, or the Chinese subsystem does not have understanding. But what does "having more" have to do with it?

    ReplyDelete
  41. RE: The Meaning of Functional Words

    After some deep thinking, I think I agree with the hypothesis that the meanings of functional words (and, not, if, etc.) may be derived in a fundamentally different way from the way in which the meanings of non-functional words (apple, chair, etc.) are derived. While numbers can be instantiated ("2" refers to 2 "of" something), operations cannot. This runs counter to the commonsense notion that a baby learns what a word like "not" means by connecting "not" to particular instances of "not."

    Addressing these questions, I suppose, is crucial to evaluating the hypothesis that human language is defined and distinguished from the “pointing-out” that other animals do specifically by its use of functional connectives, which other animals supposedly lack. If functional words can be “pointed out” in the same manner that “tomato” can be pointed out, then animals could theoretically learn such words via operant conditioning.

    ReplyDelete
  42. Searle's paper argues against computationalism by using the thought experiment of the Chinese Room. In the paper, he refers to computationalism as "Strong AI" and asks whether a computer that could pass T2 in Chinese would therefore understand Chinese. Searle answers that it would not, via the Chinese Room thought experiment: he supposes that he himself could execute the same T2 program to communicate with a Chinese pen pal without any understanding of the Chinese language. Searle's thought experiment demonstrates that cognition is not merely computation; however, throughout the paper he employs certain vocabulary that I had to constantly remind myself to process more warily, specifically "intentionality."

    In class, and in Harnad's paper, we learn that intentionality is simply another weasel-word for consciousness, so I will try to limit my comments on this. However, at the end of Searle's paper, he attaches an endnote in an attempt to define intentionality:

    RE endnote 3: "Intentionality is by definition that feature of certain mental states by which they are directed at or about objects and states of affairs in the world. Thus, beliefs, desires, and intentions are intentional states; undirected forms of anxiety and depression are not."

    So back to Searle's thought experiment: fundamentally, it feels like something to understand a language, and in not understanding Chinese, Searle would lack that feeling. In other words, having this feeling, Searle knows that he understands English; and lacking this feeling, Searle knows that he doesn't understand Chinese. Therefore it seems that Searle's conclusion, that "whatever it is that the brain does to produce intentionality, it cannot consist in instantiating a program since no program, by itself, is sufficient for intentionality," loops us back to the hard question of how and why we feel.

    So for some brief instances, I thought that what Searle's "intentionality" is meant to express should be interpreted as "feeling"; however, I was confounded by the way he stipulates that "undirected forms of anxiety and depression" are not intentional states. Anxiety and depression are usually within the realm of feelings, and it is very much within the brain's capacity to cause these states, even if they are undirected. Perhaps I have misinterpreted Searle's definition? Or simply, it is not imperative to focus on Searle's word choice of "intentionality" in the grand scheme of the thought experiment, and we should merely take away the conclusion that cognition is not just computation.

    ReplyDelete
  43. Coming back to this piece, I can really appreciate the power of Searle's argument. While Turing's work is masterful and deeply fundamental, I really believe that Searle's way of looking at the problem has been fundamental in shifting the thinking in cognitive science. I think it is eloquent because it not only places into question the explanatory power of computation with regard to cognition, it also frames the argument in a way that lets us consider a distinction in what our brains are doing when cognizing: how is it that he, as a mind, does not understand, and where does that line lie?

    ReplyDelete