Saturday 2 January 2016

3b. Harnad, S. (2001) What's Wrong and Right About Searle's Chinese Room Argument?

Harnad, S. (2001) What's Wrong and Right About Searle's Chinese Room Argument? In: M. Bishop & J. Preston (eds.) Essays on Searle's Chinese Room Argument. Oxford University Press.



Searle's Chinese Room Argument showed a fatal flaw in computationalism (the idea that mental states are just computational states) and helped usher in the era of situated robotics and symbol grounding (although Searle himself thought neuroscience was the only correct way to understand the mind).

84 comments:

  1. In the last bit of the Harnad article, he says "The CRA would not work against a non-computational T2-passing system; nor would it work against a hybrid, computational/noncomputational one".
    What is meant by non-computational? I can't grasp how any AI could be constructed in a non-computational manner. Perhaps because we have not yet even dreamed of a way to create a non-computational machine, it is difficult not to stray towards the computationalist argument.

    ReplyDelete
    Replies
    1. Adrian, "computation" means formal symbol manipulation, so anything that is not formal symbol manipulation is not computational. The word for the complement of computational is "dynamic": just any physical system that changes with time. Examples of dynamical (non-computational) systems are toasters, vacuum cleaners, solar systems, waterfalls, atoms... and computer hardware. What's computational is the software that's being executed by the computer hardware (or anything else executing the same symbol manipulations, because the hardware of a computation is irrelevant). Have a look at the lectures for 1a/b and 2a/b for what computation is (and isn't).

      Delete
    2. So would an example of such a potential dynamic/non-computational system be an analogue artificial neural network (assuming this NN is modelled on the traditional NNs used in programming and not actual neurons)? If such a dynamical system were developed, would it necessarily be an instantiation of symbol manipulation, or would it be a non-computational dynamical system?

      I suppose I’m imagining that such an analogue artificial neural network might be built out of a network of hydraulic tubes and nodes (as an example), where the hydraulic pressure carries the signal strength and the nodes have various thresholds that only release under enough pressure. (Though I’m not sure how back-propagation would work or how the threshold weights would be varied; let’s assume this system can do these things.) Anyway, provided such a system could function, would it be an example of an implementation-independent dynamical system? Or is implementation-independence the purview of computation only? Or could such a system be combined with a computational system to form a hybrid system?
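
      Just to make the threshold idea concrete, here is a toy sketch in Python (purely illustrative; the numbers and the function name are made up). Whether the quantities are voltages, hydraulic pressures, or symbols is exactly the implementation question I'm asking about:

      def threshold_unit(inputs, weights, threshold):
          # A toy "node": it fires (outputs 1) only if the weighted sum of
          # its inputs exceeds the threshold.
          total = sum(i * w for i, w in zip(inputs, weights))
          return 1 if total > threshold else 0

      # Hypothetical example: three input "pipes" feeding one node.
      pressures = [0.2, 0.9, 0.4]
      weights = [0.5, 1.0, 0.3]
      print(threshold_unit(pressures, weights, threshold=0.8))  # -> 1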

      Delete
  2. RE: “There are still plenty of degrees of freedom in both hybrid and noncomputational approaches to reverse-engineering cognition without constraining us to reverse-engineering the brain (T4)”

    Searle rejects the TT as a sufficient explanation for cognition because it is too underdetermined. Instead, he swings to the other side of the spectrum and argues in favor of an explanation at the level of T4. He does so, partly it seems, so as not to have to worry about the ‘other minds problem’. For Searle, the only way to be certain of how/why we do what we do is to reverse-engineer the brain itself. Only then will the “SAME ‘causal powers’ of the brain” be present.

    I agree with the author, Harnad (2001), that Searle is being overly cautious. He doesn’t want simulation; he wants “duplication”. But T3 cannot be so easily discarded as a viable explanation. A T3 means that the robot would be more than just input/output. As Turing (1950) seems to have implied, a machine can be made to be supercritical (do more than just what we tell it to do) if it were to receive a cascade of external information that would alter its programmed symbolic capacities. In other words, a T3 robot’s perceptual input/motor output would serve as a window for real-world experience. Not only would this give content to its internal symbols, but it would do so by interacting with the robot’s program, without being part of the program itself.

    Would this be enough for the robot to be “supercritical”? If yes, that would mean that the robot had a mind and would have independent control over its connection between its program and the world that it experiences.

    ReplyDelete
    Replies
    1. Manda, Searle is not just being cautious when he opts for the brain: he is rejecting computation and the TT altogether. He thinks his argument showed that cognition was not computation at all, and that the TT was insufficient. Actually, what his argument showed was just that cognition cannot be all computation. But it's true that if we could reverse-engineer the brain completely, and then build a synthetic brain using those causal principles, inside a synthetic body, it would pass T4. But if it were synthetic, there would still be room for doubt (not just ordinary underdetermination, but the other-minds problem). So the brain does not really escape the other-minds problem.

      The more interesting question is whether there is less room for doubt with T4 than with T3 (Dominique).

      The only thing that would reduce doubt to (almost) the size of normal scientific underdetermination would be T5 (a brain and body, built by a kind of protoplasmic 3-D printer applying the causal mechanism that we had successfully reverse-engineered).

      But ("Stevan Says") I'd already stop worrying about other-minds uncertainty with Dominique (T3) -- or even Dominique tested only via email (T2).

      All the T-levels are just I/O, however. Even if computationalism were true, T2 would not ask for the "right" algorithm, just anything that generates the same I/O ("weak equivalence"). T3 does not ask for the "right" internal dynamics either, just the right I/O. Even T4, with a synthetic brain, does not insist on exactly the right hardware. Only T5 insists on exactly the right dynamics, but even T5 is not completely immune to the other-minds problem, though some would say it's just about reduced to normal underdetermination with T5. (What do you think?)

      What's going on inside a robot is not just a program (if Searle's argument against computationalism is right). The "cognitively penetrable" Monty Hall problem (the three curtains and the prize) is already an example of input "changing our internal computations": in fact the input that successfully penetrates your cognition is also (partly) symbolic, involving formal reasoning (1/3, 2/3, etc.)! But if cognitive penetrability is what Turing meant by "supercritical," it certainly does not guarantee that Dominique feels! So that's not a way around the other-minds problem either. And there are already non-Turing robots that can learn and alter their software. That's just autonomy, not a mind (feeling), let alone free will!
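
      (As an aside, the 1/3 vs. 2/3 figures are easy to check with a toy simulation -- this is just an illustration, not part of the argument:)

      import random

      def monty_hall(switch, trials=100_000):
          # Estimate the win rate for always-staying vs. always-switching.
          wins = 0
          for _ in range(trials):
              prize = random.randrange(3)
              pick = random.randrange(3)
              # The host opens a door that is neither the pick nor the prize.
              opened = next(d for d in range(3) if d != pick and d != prize)
              if switch:
                  pick = next(d for d in range(3) if d != pick and d != opened)
              wins += (pick == prize)
          return wins / trials

      print(monty_hall(switch=False))  # ~0.33: staying wins about 1/3 of the time
      print(monty_hall(switch=True))   # ~0.67: switching wins about 2/3 of the time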

      Delete
    2. When Searle argues that “no purely formal model will ever be sufficient by itself for intentionality because the formal properties are not by themselves constitutive of intentionality…” is he not saying that cognition cannot be computation alone, rather than cognition is not at all computation?
      If Searle is arguing in favor of T4 (exact same causal mechanism), isn’t a T4 part computation (I/O)?

      On another note, I do not think that a T5 would be reduced to just about normal underdetermination (that of other sciences) because it seems as though the other-minds problem is inescapable.
      The fact that we are all active “players” in the TT with one another means that despite internal dynamics, we can only ever evaluate the other by what they can do (I/O).

      For all I know, my mother could be a T5 with the same I/O and exact same internal dynamics (strong equivalence). While I can confirm that she can do everything that I can do, I have no way of ever knowing if she truly feels. (I really hope she does, though!! :P)
      It seems, therefore, that a T5 does not result in a lesser degree of underdetermination for cognitive science (with regard to the other-minds problem).

      Delete
    3. Manda, yes Searle shows only that cognition cannot be all computation.

      Yes, T4 includes T3, just as T3 includes T2. But Searle is saying forget about the Turing Test, forget about computation, and just go study the brain. (We'll talk about that next week.)

      T3, like all science, is underdetermined. (There may be more than one way to pass it.) It's even more underdetermined because of the other-minds problem. But T4 may be over-determined: not all brain function may be relevant or necessary for cognition; some of it, for example, is vegetative rather than cognitive (temperature, heart-rate, balance, appetite).

      Besides, unlike the heart (which pumps blood), the brain (which pumps everything we do and can do) does not wear its function on its sleeve. So it's not clear that studying what the brain does as an organ, like the heart or the kidneys, will reveal how it does everything we can do.

      You are right about T5 too. It's more underdetermined than other scientific questions, because of the other-minds problem. Not to mention that the "hard problem" also leaves a huge explanatory gap when it comes to cognition, unlike any other biological trait.

      But worrying about the other-minds problem for T5 is as pointless as worrying about ordinary scientific underdetermination, since there's no way we can do better than T5, and there's no way to know whether even T4 isn't more than enough. (Would you kick Dominique, who is just T3?)

      Delete
  3. One thing I find puzzling about Searle's article, and the replies, is that no one addresses the fact that maybe if he spent enough time in this room, understanding could be achieved. For example, when an infant utters a word such as "apple" for the first time, even though he might have given the right output for the input, most would argue that he doesn't truly understand the meaning of the word. He might even simply be parroting back what he just heard. However, as he gets more exposure to apples, different kinds on different occasions, the meaning of the word sinks in deeper. The association between the word and the stimulus becomes clearer and "understanding" is achieved. Couldn't the same process take place in Searle's Chinese room?

    ReplyDelete
    Replies
    1. I could be wrong, but I don't think Searle has access to the stimulus; all he gets is an input symbol (just squiggles, not the sight of what that symbol represents) and the corresponding symbol to send out.

      Delete
    2. Josiane, neither the computer that executes the algorithm that passes the Chinese T2 -- nor Searle, when he "becomes" another computer that executes the algorithm that passes the Chinese T2 -- sees the objects (like apples) to which the Chinese words refer (as Krista notes). Only a T3 could do that.

      And that's the symbol grounding problem.

      Delete
    3. I believe Krista is right - Searle would only be receiving the symbols. It makes sense that a child could learn to associate and understand the word 'apple' if he's seen an apple, but if one merely receives "squiggles" with nothing to associate them with, it would be very hard for understanding to occur.

      Delete
    4. I don't fully understand how T3 gets us out of the issue of meaning that Searle is talking about. In fact, T3 sounds somewhat homuncular to me. To put this in terms of the Chinese Room, Searle could be in the head of a giant robot, receiving input from the robot's senses, for all he knows. Embodying computation just seems to be extending the physical mechanism underlying the computation to include causal events “outside” some arbitrarily defined boundary separating the “computing device” and the physical environment. It doesn't help us understand how reference/meaning comes about.

      Delete
    5. Re: Auguste's comment. I think T3 gets us out of this problem because the symbols that Searle was manipulating in the room weren’t grounded. The Chinese word for red had no meaning of ‘red’ for him at all. Vision (colour vision) solves this along with interacting with other red things in one's environment. I don’t think T3 means embodying computation; rather, the symbols that are ‘computed’ in computation have to mean something and these symbols’ meanings come from sensorimotor capacity. From what I understand, T3 doesn’t just extend computation but it grounds the symbols used in computation.

      Delete

    6. Gusto, I also can’t see how T3 gets us out of the homuncular problem. If Searle had input from the outside world (for example, by watching it on a screen), he could learn through reciprocity (seeing the consequences of his output / symbol grounding), but he would nonetheless remain apart from the “computing device”. This kind of learning, attributed to trial and error (Searle's output and the consequent input from the senses), doesn't tell us how the device itself thinks; it only concerns Searle's experience of thinking.

      Delete
    7. Laura, it's a lot more than learning to "associate": it's learning to recognize, categorize, identify, name and manipulate the referents of the symbols (words) in the world.

      Auguste (and Krista): Searle is just manipulating the symbols that pass T2. Robots are necessarily not just symbolic: they're real objects. If Searle were just a component in Dominique's brain, doing what a computer would be doing in there, that would show nothing one way or the other, either about T2, or T3, or computation. (Think about it!)

      Austin, yes, T3 is necessarily not just computation. At the very least, sensorimotor transduction is not computation: You can't replace sensory organs with simulated, symbolic sensory organs any more than you can replace an oven with a simulated, symbolic oven. (But don't confuse this with virtual reality, where your real sense organs are being tricked by the goggles and glove that are being controlled by the computer.)

      Delete
    8. @ Krista, the sort of learning you're describing sounds like supervised learning, in which case the way that the device itself 'thinks' is shaped by the rules that it learns based on reinforcement or punishment. In the case of a T3, that sort of thinking, in combination with unsupervised learning, isn't purely computational; as we learned in later lectures and discussions, symbol grounding cannot be done solely on a computational level (at the very least, not yet).

      Delete
  4. The article states that some people make the argument that, "when accelerated to the right speed, the computational may make a phase transition into the mental," and that it is just a matter of "ratcheting up to the right degree" of complexity. I don’t know what refutes this argument. It is a speculation, just like every other theory that has been mentioned on cognition and artificial intelligence. Our brain, with its complexity, could be the answer to how we are doing it. How are we sure that the answer doesn’t lie there and that we can reverse-engineer without it? How are we sure that mental states are implementation-independent? An AI we create out of physical properties alone, without the chemical properties and the sublevels in our brain, might be just a simpler version of how we cognize, because of the properties it is lacking.

    In addition, for the Systems Reply: did someone not give the man in the room the ledger, and prepare it beforehand? Did someone not write the rules of how to manipulate the symbols -- someone who understands Chinese? Isn’t that person the whole system? Even though Searle can memorize it all, he doesn’t understand it, but something must have started the process in this example. In the case of an AI, it would be us, the people who wrote the program.

    Lastly, I feel like T3 could be vulnerable to the CRA. Even though the AI has the sensorimotor capabilities now, there could still be a function similar to the CRA, where the outputs are not only linguistic anymore, but also physical.

    ReplyDelete
    Replies
    1. Deniz, there is no need to refute pure speculation. But with computation, there were lots of reasons to think it might be up to the task of cognition.

      If there's something in T4 without which T3 can't be passed, we'll find out (but the test is still T3).

      It is not an argument against computationalism that someone has to write the program, any more than it's an argument against Newton that someone has to write the laws of motion. And we already know that someone can write a computer program that learns, and then goes on to learn things the writer of the program did not know, or, having picked up more data, to prove theorems that the writer of the program could not prove. (This already came up in connection with the Turing paper.)

      A causal mechanism that is not purely computational (symbol manipulation) is not vulnerable to the Chinese Room Argument.

      Delete
  5. Honestly, I found the clarifications of Searle's 3 tenets in this essay to be massively helpful in understanding Searle's argument. I also found the reframing of Searle's "intentionality" into "consciousness" a much more useful way of thinking about the problem.

    We assume consciousness in other people (and some animals) by a function of our own anthropocentrism – we see an input and an output of behaviour or language, and extrapolate a computational / mental state parallel to our own. Our own definitions of consciousness centre around perceiving a system to be similar to our own: this would be why we claim to understand other humans through communication. We evaluate consciousness through Turing testing every person that we encounter, including through pen-pal emailing (as in T2), but I’m wondering about evaluating consciousness with people / animals / things that cannot communicate through language. Perhaps this complicates the question a bit, but I think there are varying degrees of consciousness that we can assume in other beings. Plants, for example, are generally perceived to not feel, or be conscious. Mammals similar to us like dogs, for example, are generally perceived to have feeling and consciousness. What about all of the beings in between? Granted this answer may vary from person to person (some people would kick a fish, and others wouldn’t), but I think it’s fair to say that there’s a continuum of level of consciousness that we tend to see in other beings. What criteria can we use for evaluating consciousness in non-verbal creatures, or non-verbal humans?

    ReplyDelete
    Replies
    1. What if consciousness is not actually a gradient? As you describe it in your example, consciousness here is just varying levels of communication capability. Communication capability is just a BEHAVIOUR, and as humans we often rely on behaviour first to judge whether another being is conscious or not. This is why we may have an easier time believing animals like non-human primates or dogs are conscious, whereas we may have a harder time using behaviour to judge whether animals like fish are conscious. But a single behaviour (apparent communication capability) should not be sufficient to conclude whether a being is conscious or not.

      Perhaps we can categorize everything as being conscious or not conscious without the gradient. This may seem too black and white, as some may ask questions regarding deciding whether to sustain minimally conscious individuals on life support. With the Chinese Room Argument, Searle argues that the entire system is not conscious even though its output BEHAVIOUR may lead us into concluding that the system understands the foreign language. I guess the big question then is what behaviour (if there is a single behaviour) should we look at to decide if a being is or is not conscious? So far the only unvarying requirement that we have discussed is feeling.

      Delete
    2. I do not think it is possible to discern consciousness from behavior. While it may seem possible to view consciousness when a lost dog finds its way home or a fish follows the sight of food, it cannot be a confirming factor. Just because a person is paralyzed does not mean that they are not conscious. I think this relates to Searle’s Periscope in an interesting way: “although we can never become any other physical entity...if we get into the same computational state as the entity in question, we can check whether or not it’s got the mental states imputed”. As I think it will forever be impossible to completely get inside the mind of a fish, I believe it’s futile to look for a behavior or other sign that signals consciousness.

      Delete
    3. “The critical property is transitivity: If all physical implementations of one and the same computational system are indeed equivalent, then when any one of them has (or lacks) a given computational property, it follows that they all do (and, by tenet (1), being a mental state is just a computational property). We will return to this. It is what I have dubbed "Searle's Periscope" on the normally impenetrable "other-minds" barrier (Harnad 1991a) “
      I am not sure how Searle’s periscope ameliorates the other-minds problem, for some of the same reasons as Eugenia explained above. I agree that I do not think we can gauge consciousness by behavior. Could this be taken one step further? We ‘know’ we are conscious because we have conscious experience and display certain behaviours, but since we are not accepting Descartes’s “I think, therefore I am,” how do we ascertain our own consciousness before trying to ascertain that of a fish or other being/program? If we cannot pinpoint consciousness as one of our own behaviours, why would we ever be able to do so for another being/animal/program?

      Delete
    4. Perhaps it's not that different animals have varying magnitudes of consciousness, but that different species have very different qualitative experiences. We most likely have a "richer" experience through our language abilities, our propensity for long-term memory, and our ability for abstract thought, but I don't know if we are more conscious per se. Are toddlers non-conscious before they acquire theory of mind? What about adults of varying intelligence? I think our working definition in class is that something is conscious if it can feel what it's like to be that something. In line with what Aliza has said, I think it might be helpful to first pinpoint our own correlates for consciousness (behavioral or neurophysiological) before attempting to do so with animals. We at least can use self-report as a correlate.

      Delete
  6. Kid Sib is lost on the notion of unconscious understanding. Is your argument that since Searle in the Chinese room is in the same computational state as a machine but does not have any understanding, the computational state can't be unconscious but is NONconscious, which proves computationalism wrong? Then who is supposed to make the distinction? It seems to always come back to the other-minds problem.

    ReplyDelete
    Replies
    1. Let me see if I can rephrase it. Searle in the Chinese room does not consciously understand Chinese; the argument is that the system can unconsciously understand Chinese, which would allow computational systems to have mental states, just unconsciously. However, one can only have unconscious mental states if one is capable of being conscious; otherwise the system is non-conscious (not capable of being conscious) and therefore could not have unconscious mental states at all, so the system in the CRA would have to be conscious. In order to determine whether a thing is conscious or unconscious, the only way would be to enter the same computational state and check its corresponding mental state. Computationalism argues that since the systems are implementation-independent, we would be able to do exactly that, which is a special case, because otherwise (in a non-computational system), like you said, the only other way to determine unconscious vs. non-conscious runs into the other-minds problem. If the system is computational, then it would be possible to confirm that Searle in the room does not consciously understand Chinese. To avoid this, computationalism must give up on mental states totally (making the state nonconscious).

      Delete
  7. I'm a bit confused on one section of the article:

    "For although we can never become any other physical entity than ourselves, if there are indeed mental states that occur purely in virtue of being in the right computational state, then if we can get into the same computational state as the entity in question, we can check whether or not it's got the mental states imputed to it."

    How would it be possible to get into the same computational state as the entity in question? My interpretation of this section is that we would need to change our computational state to that of the machine's to see if it were experiencing a mental state. But how would we change our computational state? I think I must be misinterpreting this... but isn't our computational state set?

    ReplyDelete
    Replies
    1. I think this is more of a hypothetical – I don’t see how we would actually get into the same computational state but the way I understood this was if there were some way of ‘thinking’ differently (or, likening our thought processes to algorithms, if there was a way of changing them), we might be able to get our computational states to match the state of whatever it is we’re trying to learn more about. I don't think it’s actually possible, but more of a point that was being made. If we could do this, then I figure it would solve or give insight to the solution to the Other Minds problem – it wouldn't be a problem anymore if we could somehow ‘enter their computational states’. Of course, this isn’t a thing.

      Delete
  8. “But there is also a sense in which the System Reply is right, for although the CRA shows that cognition cannot be ALL just computational, it certainly does not show that it cannot be computational AT ALL”

    The idea that the System Reply could be partially right is confusing me a bit. Is this saying that it can be partially right because the entire system isn’t all computational, given that Searle is involved in the system and he is in fact conscious and capable of understanding, even though the system as a whole would be unconscious? However, in the CRA, Searle was still only manipulating formal symbols and not actually understanding the Chinese at all, so even though he is in a conscious mental state, the system isn’t actually capable of understanding.

    In order for the system to reach a state of intentionality, if it is partially computational, does the part that is conscious and non-computational (Searle in this case) actually need to be able to understand (which wasn't true in the case of the CRA)?

    ReplyDelete
    Replies
    1. From my understanding, it is trying to say that cognition does not equal computation. However, cognition can include computation. We can have some parts that are just computational, but probably there is more to us than that. There is a part of us that does not understand and only does computation, and a bigger part that does understand what is going on, and has 'consciousness'. Searle does not understand in the CRA; he is only doing formal symbol manipulation. The argument says that the room and the system as a whole, with the ledger and the data banks etc., may understand the story.

      Delete
  9. “Can we ever experience another entity’s mental states directly? Not unless we have a way of actually BECOMING that other entity, and that appears to be impossible --“

    If someone claims that they are conscious, there is no way to prove or disprove it. We turn to “mind reading” or making inferences with animal species like us, which is essentially the TT as per the reading. “But the TT is of course no guarantee; it does not yield anything like the Cartesian certainty we have about our own mental states.” So is this the flaw of computationalism about cognition? We cannot physically become another person to experience their mental states directly, whereas this would be possible if mental states occurred purely through computation. If mental states were really synonymous with computational states, wouldn’t the other-minds problem be “solved”?

    ReplyDelete
    Replies
    1. You make an interesting point Brittany. A big idea that Harnad tries to get across in this paper is that “Searle's right that an executing program cannot be ALL there is to being an understanding system, but wrong that an executing program cannot be PART of an understanding system,” which I think relates back to your question. I think the idea here is that mental states are not purely synonymous with computational states, however, computational states are a component of mental states. We may never be able to figure out why we cognize purely by using computation, which also means we may not be able to solve the other minds problem purely through computationalism either.

      Delete
    2. Even if computational states were synonymous with mental states, the other-minds problem would not be solved. How would you verify the congruency of another person's computational state with your own? Sure, you might have two computers with identical computational states, but what we're trying to figure out is whether that computational state is identical to our own computational states: something we don't have explicit access to. So the other-minds problem would still persist, because we cannot really directly access our own computational states.

      Delete
  10. I think it is totally spot on to say that the synonymy of the conscious (intentional) and the mental is at the heart of the CRA. The CRA doesn’t distinguish between understanding (I'll call it U1) and the feeling of understanding (U2) … it conflates them. We understand things that we are not conscious of; that is, we can compute answers to problems without being aware that we understand how to do it. Our program knows what to do, but "we," the person at home, don't. The Systems Reply gets at this. The whole room, together with Searle, can U1, but then Searle fights back with the notion that in the Chinese room HE wouldn't FEEL that HE understands Chinese (U2). The CRA proves that computation alone cannot produce U2; it doesn't prove that computation isn't responsible for U1. I think it's cool how Searle discredited pure computationalism; nice little baby step. I'm interested in the "hybrid road of grounding symbol systems in the sensorimotor (T3) world with neural nets" mentioned at the end of the paper.

    ReplyDelete
  11. Why would “not purporting to be in a mental state purely in virtue of being in a computational state” necessitate reverting to ‘implementationalism’?

    Does implementationalism claim that a computational state is NOT equal to a mental state, and that the mental state would have to be dependent on the specific hardware that implements it? Is this a reference to Pylyshyn’s Strong Equivalence argument, where the algorithm (program) needs to be specific / the same as that of the brain in order for it to generate the same causal powers and mental states?

    ReplyDelete
    Replies
    1. Manda, no, "implementationalism" would demand even more than strong equivalence. It would require full I/O capacity, the same computer program, and the same hardware. And if it denied that the right computer program alone was enough, "implementationalism" would be denying computationalism. (In fact, if it denied that I/O equivalence was enough, it would be denying the Turing Test.) But as far as I know, no one is an implementationalist...

      Delete
  12. In the Chinese Room Argument, it is said that what Searle did was only execute the computer program. However, he did so by matching every character in Chinese with symbols and meanings in English. As such, it is not that he could not understand language, because he surely had to understand English in order to comprehend the rules of the Chinese room. Thus, I feel like the Chinese Room Argument might not be a perfect demonstration of how a program works, because Searle himself does hold the capability of conscious thinking.

    ReplyDelete
    Replies
    1. Searle never denies that he can understand English. He uses it to do formal symbol manipulation, in which he does not understand the symbols, so any input he gets and any output he produces, he produces by manipulating symbols he doesn't understand. English is his 'program': he uses it to interpret the instructions for the procedures -- Chinese symbol manipulation, in this case. He doesn't translate Chinese into English. Searle in the CRA example does not understand what the symbols mean or what he is doing; he simply uses his English to understand the rules of what he is supposed to do, in a very mechanical way, with a bunch of symbols whose meaning he has no idea about. He only correlates symbols between the batches he is given. This is my understanding of it; I hope it helps!

      Delete
    2. I just realized this after class. So Searle is not performing any tasks involving “thinking” here, I misunderstood when I first read about this argument. He’s merely imitating a computer program by giving an output according to the input, not like he’s understanding or learning any language from the rules.

      Delete
    3. "He’s merely imitating a computer program by giving an output according to the input, not like he’s understanding or learning any language from the rules."

      Exactly. If you tell a computer program to print "Hi, my name is Bob," it will do so. But the program does not understand the semantics of the word print, nor does it understand the semantics of what it printed. It just knows that, when it 'reads' the command, it has to output what follows.
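
      A minimal sketch of that kind of blind symbol matching (the command name and the message are arbitrary):

      def execute(command):
          # The "computer" only matches the shape of the command string;
          # nothing here involves knowing what "print" or the message means.
          if command.startswith("print "):
              return command[len("print "):]

      print(execute("print Hi, my name is Bob"))  # -> Hi, my name is Bob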

      But that leads me to the following question: suppose you are trying to compile and run a language in a system that doesn't understand it (aka a program it wasn't built to run--such as C++ with regards to Eclipse or Dr. Java). Does that have any relation to this? Or is this a completely different scenario because at least Searle is able to accept any symbols as input, while computer programs are limited to processing only the languages they are built to run?

      Delete
  13. I found Harnad's unpacking of tenet 3 eye-opening. Reading Searle, I began to look at the Turing Test as a behaviorist, operationalist test that doesn't get at structure and so doesn't do much for us empirically. I began to see the TT as something a bit more old-fashioned and less useful than I had when reading Turing's paper. However, I hadn't considered that we have no better means to empirically test our 'cognitive' machines than purely looking at their functionality, because there is no structure to investigate in computation. It was strange of Searle to dismiss the TT as old-fashioned without putting forward a better alternative, or at least admitting it is the best empirical measure we have at the moment.
    This got me thinking about possible tests/milestones between toys and T2. Do we have theories yet that can guide us towards possible right track(s) to reverse-engineering cognition? I'm not in cognitive science, but I figure that scientists doing the nitty-gritty technical work of computer science/robotics/cognitive science must be formulating some plausible theoretical stepping stones between toys and T2. Do we know of any yet?

    ReplyDelete
    Replies
    1. Lauren, I also appreciated Harnad's point that the TT is not a guarantor of the presence or absence of consciousness, but that it is so far the best way to determine whether something is conscious, because behaviour and functions are observable and can therefore be empirically tested.

      Regarding theories that actually attempt at reverse-engineering consciousness, I don't know a lot about this. I imagine though that there are few attempts at this for pragmatic reasons. Society is more interested in developing AI to carry out functions that serve us, not to better understand our minds. However, investigation into consciousness is a by-product of this industrial need for AI.

      Delete
  14. After reading this article, it made me think about the current quest by researchers around the world to create a Strong AI. Dr. Harnad’s interpretation (and mine, now too) is that cognition comprises something other than computation, along with computation, and my question is: what needs to be added to modern AI in order to bridge that gap?
    My intuition is that sensorimotor capabilities are the answer to this question, because that would be the beginning of grounding the symbols for the AI to then interpret. When we teach modern-day neural networks to identify dogs, we expose them to tons of photos of dogs in order to teach them via induction, and eventually the network wires itself to recognize pictures of dogs even if it hasn't seen that specific breed before. Is this the capability that was missing, and therefore have we already achieved it, or is there still more that we do not understand?
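
    To make the kind of induction I mean concrete, here is a toy nearest-neighbour sketch (the "features" and labels are invented; real image networks are of course far more complex):

    # Toy "learning by induction": label a new feature vector by its
    # nearest labelled example.
    examples = [
        ((0.9, 0.8), "dog"),
        ((0.8, 0.9), "dog"),
        ((0.1, 0.2), "not dog"),
        ((0.2, 0.1), "not dog"),
    ]

    def classify(features):
        def distance(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(examples, key=lambda ex: distance(ex[0], features))[1]

    print(classify((0.85, 0.75)))  # -> "dog", even for an unseen "breed" of input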

    ReplyDelete
    Replies
    1. In my opinion, one of the most important things in establishing intelligence is creativity. It is one thing to be able to follow basic rules to produce outputs, but it is really something else entirely when a system is able to creatively produce its own novel solutions to problems. Sure, this is all based on underlying rules -- the same way that we learn, starting from birth, based on what we observe in the world. However, I believe that it's a closer step to "thinking" when computers can act creatively. A fascinating example of this is discussed in this article (https://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html) -- Google's AI system actually created its own language so as to better translate unfamiliar languages. That's pretty intelligent, if you ask me!

      Delete
    2. I totally agree Dominique! Your response made me think of this amazing TED Talk on creativity and neural networks. https://www.ted.com/talks/blaise_aguera_y_arcas_how_computers_are_learning_to_be_creative

      Delete
  15. I certainly appreciated this analysis of the CRA. Some aspects were complicated for me to grasp, so I look forward to discussing this further as a class. First of all, Searle’s claim that the pillars of strong AI assume that a thermostat can have mental states was something I had difficulty refuting logically. The first tenet of computationalism cleared that up: that mental states require implementation in a dynamical system.

    Secondly, I still have trouble understanding what Searle’s Periscope means. Can anyone explain? Finally, I enjoyed the idea that even if we settle for a machine at the functional level and ignore the structural, we have fewer and fewer degrees of freedom in structure as we emulate the function. In that sense, then, perhaps our best shot at reverse-engineering a thinking machine will be to emulate the structure off the bat? (Because the finished product’s function will not be too dissimilar from the brain/sensorimotor organs’ structure.)

    ReplyDelete
  16. RE: “Computationalism was very reluctant to give up on either of these; the first would have amounted to converting from computationalism to "implementationalism" to save the mental -- “

    The paper very clearly demonstrates that the implementation-independent quality of computationalism fails. I am having trouble grasping the idea that a T3 system works in an implementation-dependent fashion. I agree with Searle that the system must be a biological machine.

    ReplyDelete
    Replies
    1. I found that Searle's argument demonstrated that formal symbol manipulation alone cannot produce mental states. However, I thought he overreached when focusing on the brain and the role of biological substrates. The real problem, I think, is symbol-grounding. The symbols he (the man in the room) used simply weren’t grounded. Grounding symbols requires sensorimotor capacity; this capacity does not have to be biological at all. I don’t think the brain is needed as a prerequisite for mental states.

      Delete
  17. The article “Minds, Machines and Searle 2” raised a point that I found intriguing and troubling. It mentioned that “The synonymy of the ‘conscious’ and the ‘mental’ is at the heart of the CRA. … This is the ‘other minds’ problem.” So, it seems like Searle’s Chinese Room Argument demands that strong AI be able to show us that it has mental states over and above performance. Searle’s Periscope, assuming the same computational state and checking whether it has mental states, demands more from computationalism than from ordinary human beings in daily life. Searle replies to “The other minds reply” in his article, but the reply doesn’t seem to address that both sides are framing the question fundamentally differently. The thrust of computationalism lies, I think, partly in its focus on indistinguishable performance, while Searle actually wants us to get into the mental state of the machine to check. But I think what bridges this divide is that the symbols Searle (as the man in the room) manipulated weren’t grounded. It’s not that we need a Periscope to check.

    ReplyDelete
  18. “We're ready to believe that we "know" a phone number when, unable to recall it consciously, we find we can nevertheless dial it when we let our fingers do the walking. But finding oneself able to exchange inscrutable letters for a lifetime with a pen-pal in this way would be rather more like sleep-walking, or speaking in tongues (even the neurological syndrome of "automatic writing" is nothing like this; Luria 1972). It's definitely not what we mean by "understanding a language," which surely means CONSCIOUS understanding.”

    Searle compared Chinese (with zero understanding) to English (which he knows well) to show that executing the program in the Chinese room will not bring anyone to understand (consciously) the Chinese language. However, I am curious: what if we compare Chinese to the language of math?

    I would like to share an example about the “+” sign. It is not important whether it is called a plus or addition sign or whatever, but the way we naturally add up the two numbers before and after the “+” is what we learn to execute. In that case, do we say we understand math? Or do we say that math is not a language here? I just find that recognizing those Chinese squiggles (without knowing the language) and doing what I am supposed to do resembles how we deal with the symbol “+” in mathematics. It also resembles how trained dogs sit after they hear the word “Sit”. Yes, these might not be a full understanding of a language, as they lack syntax/sentence structure, but I wonder, can we say there is a little degree of (conscious) understanding? Could it be that we are just missing something in the Chinese room that enables conscious understanding of the language -- say, some input to the Chinese room that allows the one inside the room to induce syntactic knowledge about the Chinese language?

    ReplyDelete
    Replies
    1. The language of math (math itself is not a language, I believe), as you said, is just a convention, in the sense that the relationship of addition could be represented by another symbol and it would not matter. However, when we use "plus" (which can be said in English too, and not necessarily in a formal language) we usually understand the relationship it represents. The concept of 1 + 1 is as intelligible to me as the concept of an apple. At the level of quadratic equations, however, I think many of us (including me) would admit that we are applying the syntactic rules of maths, but without "understanding" the operations. It is difficult to pinpoint at which point "understanding" is reached in real life, but in the case of the CRA, it is very clear that Searle doesn't understand Chinese the way I do basic math, and that is all that is needed for his point. It may be possible to add something that would make Chinese understandable to Searle, but in that case you will have proven that "understanding = computation (what Searle does) + something else", and not "understanding = computation". Prof. Harnad agrees with you, however, that Searle ignored this possibility and concluded that "understanding = not computation at all".

      Delete
  19. There is a part of this whole argument that I don't understand. Actually, it is more about its application to today's world. In Searle's argument, a person in a room following instructions may be able to pretend they can speak Chinese, the same way a computer follows algorithms to perform tasks. And this supposedly proves that semantics is necessary and not only syntax -- syntax supposedly being rule-based and therefore a parallel to computationalism -- so computationalism may be necessary, but it is in no way sufficient for understanding the human mind.
    However, there is a point I do not understand in the definition of "instructions". Most artificial intelligence software programmed today relies mostly on the storage of hundreds of terabytes of data; indeed, a program with stronger algorithms but less data tends to perform more poorly than one with weaker algorithms but access to more data. Following that logic, the instructions that a computer program would follow in order to automatically answer a person's questions (say, in Chinese), without an understanding of the language, would be rules like "if --> then", and therefore not rules of syntax (verb before or after the subject). This would be consistent with the fact that semantics is necessary for true understanding, which computers don't do, but it does not prove that syntax is enough for artificial intelligence, and therefore does not show that computationalism is in any way necessary. In my understanding, the rules of syntax could be completely disregarded if the program were just a very long list linking each possible input sentence (input pile of data) with each possible corresponding answer (output pile of data). The program would then not be using syntax at all, and therefore no computation (in the sense of rule-based instructions), only memory storage and a pairwise set of "if-thens". Does that mean that the Chinese room argument is enough to completely refute the claim that any computation happens in the mind at all?
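
    To make the pairwise-list idea concrete, here is a toy sketch (the entries are invented; a real T2-passing table would of course have to be astronomically large):

    # A "Chinese room" reduced to pure table lookup: every possible input
    # string is paired with a canned output string. No syntax rules, no
    # understanding -- just storage and matching.
    rule_book = {
        "你好吗？": "我很好，谢谢。",
        "今天天气怎么样？": "今天天气很好。",
    }

    def answer(question):
        return rule_book.get(question, "对不起，我不明白。")

    print(answer("你好吗？"))  # the "room" replies without understanding either string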

    ReplyDelete
    Replies
    1. I don't know a whole lot about linguistics/computer science, but it seems to me like you are too narrowly defining syntax and computation. For instance, the Horswill reading provided the functional model of computation, which is basically taking inputs and producing outputs, and that seems exactly like what "if --> then" is doing. So even if we were to simplify the CRA to this level, I think we'd still be evaluating computation.

      I also don't really get why "if-->then" isn't considered syntax. It may be very simple, but it is still a symbol manipulation rule, so by my understanding it should be fine regardless of verb placement. If my idea of what syntax can be is too general, please let me know why!

      Delete
  20. It is interesting to consider the conscious and unconscious states of understanding. In my case, I speak Chinese to a certain degree. I can recognize the symbols better as part of a phrase than individually. I still claim that I can understand the Chinese language, but if I were presented with disjointed symbols I sometimes have no clue what they say, and only when they become part of a phrase do I have a Eureka moment. Is there a point at which understanding a language begins and formal symbol manipulation ends? Do I unconsciously understand the symbols in a jumble up until they form a sequence that I consciously understand?

    ReplyDelete
    Replies
    1. Your point is interesting in regards to Harnad’s mention of “conscious” vs. “unconscious” understanding, and Searle would say that without the “conscious,” there is no understanding, possibly since we cannot communicate our understanding unconsciously. This goes hand-in-hand with the “other minds problem”. To avoid this problem, Harnad concedes that we must enter into the “unconscious” (or nonconscious) computational state to see if the computer is experiencing the corresponding mental state. In order to salvage Searle’s CRA, one must renounce the conscious mental state for the unconscious computational state. However, in doing so, CRA loses the notion of understanding; can we truly separate “understanding” from the “conscious”?
      I think your comment also raises the overarching question of how we ascribe meaning to symbols. Based on the fact that you say you can only understand the meaning of individual words when sequenced in a phrase, I would say we ground meaning to things by the context in which they are placed. Regarding yesterday’s class discussion, the point was raised about how we ascribe meaning to non-referent words (i.e. “or” – waiter asks if you want soup or salad, and from the question, you infer that you can only have 1 of the 2 options). This is to say that unlike referent words to which we ascribe meaning through sensorimotor perception/experience, we learn the meaning of non-referent words by their application and context.

      Delete
  21. I am not sure I fully understood the shift from arguing against Strong AI to arguing against computationalism. Is the point simply that the propositions of computationalism are equivalent to those of Strong AI, but that computationalism is a viewpoint that more people identify with and whose propositions are better-explained?

    I also agree that Searle’s Chinese Room Argument does not leave neuroscience as the only way to learn about cognition. From what I can tell, studying the structure of the brain (short of maybe studying the brain a great deal at the molecular level), would not leave us with very much other than the water pipes Searle described in his response to the brain simulator reply. I think that producing a non-computational, grounded machine that could pass T3 is still a viable option for learning about cognition.

    ReplyDelete
    Replies
    1. I feel like, if we got to the point where we had a machine that was non-computational and grounded AND it passed T3, we wouldn’t have that much more to learn about cognition. We’d already have a machine that could, for all intents and purposes, do everything a human can do in such a way that it would never arouse suspicion of it being a machine at all. It seems backwards to suppose that the end product (a T3 robot) will teach us more about the necessary parts (cognition) utilized in making said end product (a robot that can pass T3).

      Delete
  22. RE: "Consider reverse-engineering a duck: A reverse-engineered duck would have to be indistinguishable from a real duck both structurally and functionally: It would not only have to walk, swim and quack (etc.) exactly like a duck, but it would also have to look exactly like a duck, both externally and internally."

    Harnad brings up a great point with this example of D4. Nobody could deny that a complete understanding of how this candidate D4 duck works would also amount to a complete understanding of how a real duck works.

    D3 introduces a duck that is indistinguishable from a real duck only functionally, not structurally as in D4. Harnad mentions a microfunctional continuum between D3 and D4, in which certain functions of a duck would require similar structural mechanisms, such as cellular function, appendages for walking, webbed feet for swimming, and so on. I agree with this continuum and find it hard to discern the point at which D3 becomes D4. Previously it was mentioned that mentality cannot be found in the hardware and that it's the software that matters. However, any given software will still require some form of hardware in order to run. The Turing Test simply calls for functional equivalence between the reverse-engineered candidate and the real thing. It ignores structural equivalence, yet structural similarities may contribute to similarities in function, as seen in the D3-D4 continuum. Are there any other continua involved in computationalism? Why is it that the Turing Test could leave out sensorimotor capacities (T3)?

    ReplyDelete
  23. I had a bit of a problem with tenet 2. While I understand what hardware and software are in machinery, what is the software in humans? I understand how software can be implementation-independent, but what is the correlate in human brains? Would this not imply that humans have some program initially stored within them? What would this be?
    Also, is this claim true? I would argue that a very large portion of cognition is the ability to learn and change (whether it be ways of thinking about things or explicit actions). These changes are in turn driven by physical ‘hardware’. This leads to the computational states being implementation-dependent, not independent.

    ReplyDelete
    Replies
    1. We don't know what the software might be yet, but it's definitely going to be very complicated, and maybe even unique to each individual. As we learn, our neural networks form to reflect the learning. Our experience changes the structure of our neural networks, based on genetics, epigenetic factors, the environment, and the experience itself.
      I'm not a computer scientist and don't know much about the latest AI, but I reckon that, in some ways, our software may reflect how 'Ethereum' operates - just in terms of the level of complication and the use of dynamical computation. It has a totally different function than us (it's a cryptocurrency) but it has a decentralized virtual machine, uses network nodes as well as a token system to operate.
      https://en.wikipedia.org/wiki/Ethereum

      Delete
  24. I found the following very interesting. The Turing Test is not infallible, but rather it is the only means we have right now of testing for cognition or mental states. It happens to collect the easiest data to obtain, which is output -- behavioural or functional. But those do not constitute proof of whether a machine thinks. I also thought the article made it really clear what the CRA showed: that cognition can't be all computation, or else there would be no understanding/consciousness. However, it doesn't prove that computation can't be involved at all.

    I followed the rest of the article, but for some reason the paragraph about Searle's Periscope lost me -- what is it saying exactly?

    "Can we ever experience another entity's mental states directly? Not unless we have a way of actually BECOMING that other entity, and that appears to be impossible -- with one very special exception, namely, that soft underbelly of computationalism: For although we can never become any other physical entity than ourselves, if there are indeed mental states that occur purely in virtue of being in the right computational state, then if we can get into the same computational state as the entity in question, we can check whether or not it's got the mental states imputed to it. This is Searle's Periscope, and a system can only protect itself from it by either not purporting to be in a mental state purely in virtue of being in a computational state -- or by giving up on the mental nature of the computational state, conceding that it is just another unconscious (or rather NONconscious) state -- nothing to do with the mind."


    ReplyDelete
  25. RE: Is the Turing Test just the human equivalent of D3?
    I think that the Turing Test as an analogy of D3 is arguable. In the duck example, the D3 duck is indistinguishable only functionally. It can do anything a duck can do, without the structure. A Turing Test candidate isn’t able to do everything a human can in this sense. A human can have emotions, thoughts, can walk, can talk, etc. Is it enough for a Turing Test candidate to possess some aspects of the function that a human has, or is it necessary that it possess all aspects of functionality? If only some aspects are necessary, how do you evaluate which factors are the most important for this test to be recognized as the human equivalent of D3?

    ReplyDelete
  26. "By the same token, it is no use trying to save computationalism by holding that Searle would be too slow or inept to implement the T2-passing program. That's not a problem in principle, so it's not an escape-clause for computationalism. Some have made a cult of speed and timing, holding that, when accelerated to the right speed, the computational may make a phase transition into the mental (Churchland 1990). It should be clear that this is not a counterargument but merely an ad hoc speculation (as is the view that it is all just a matter of ratcheting up to the right degree of "complexity")."

    I found this passage very interesting and a bit perplexing. It seems to imply that one of the main features of the human mind is its speed and uninterrupted implementation. It would seem to follow that if the mind were missing this feature, it would not have the capacity (as it is currently conceptualized) for understanding/thinking. However, this seems intuitively wrong. Is there a way for the speed hypothesis to be true while its converse is false?

    I'm also a bit confused about what was meant by "it is all just a matter of ratcheting up to the right degree of complexity". Is this saying that speed and complexity are synonymous?

    ReplyDelete
  27. RE: Searle’s Periscope and “Unconscious Understanding”

    From what I can tell, Searle's Periscope turns on the fact that understanding is a conscious (felt) state, and therefore being in just the computational state of understanding (as Searle is when he is in the Chinese Room) is not sufficient for understanding itself. In other words, in order to know (in the sense that speakers of Chinese know Chinese), we have to both be in the computational state of knowing (have the appropriate symbols and symbol-manipulation rules) and be aware of this computational state of knowing (know that we know). Is there anything missing from this formula (K_total = K_computational + K_awareness)?

    You bring up the example of someone not being able to recall a phone number off the top of their head, but finding that they are able to dial it via procedural motor memory. This person in some sense knows the phone number, but does not "know that they know." In a more extreme case, patient H.M. "knows" how to do the mirror-tracing task once he has learned it via procedural memory, but does not know that he knows how to do it. So, according to the argument outlined above, procedural memory is a computational state of understanding which, on its own, lacks the self-knowledge or "awareness" (and, crucially, nothing else!) needed for it to be "true" conscious understanding.

    ReplyDelete
    Replies
    1. Gus, it's not at all clear that there is a "computational state" of "understanding" (or "knowing"). There may be a T3 state of grounded know-how, but that wouldn't/couldn't be just computational.

      No reason to believe procedural know-how is just computational either.

      And explaining the causal role of feeling is the "hard problem."

      Delete
  28. Posting this after class, but I want to make sure that I have this correct:
    Searle thought that he had shown (a) that cognition is not computation at all and (b) that the TT is not a sufficient test?
    From Harnad 2001

    “…it is only T2 (not T3 or T4 REFS) that is vulnerable to the CRA, and even that only for the special case of an implementation-independent, purely computational candidate. The CRA would not work against a non-computational T2-passing system; nor would it work against a hybrid, computational/noncomputational one (REFS), for the simple reason that in neither case could Searle be the entire system; Searle's Periscope would fail. “

    Does the CRA fall short on hybrid systems because it only captures T2 (verbal)? I don't think I understand why Searle would not be the entire system in these hybrid models; wouldn't the homunculus-y Searle be providing both the computation and the 'other stuff'?

    ReplyDelete
    Replies
    1. vɪktoɹiə, if Searle were just part of the system rather than the whole system, all bets would be off, the "system reply" would be right, and computationalism would not be under test (see my earlier reply to Auguste).

      Delete
    2. I mean my reply to Auguste, not Augustus (Gus).

      Delete
  29. “…it is only T2 (not T3 or T4 REFS) that is vulnerable to the CRA, and even that only for the special case of an implementation-independent, purely computational candidate. The CRA would not work against a non-computational T2-passing system; nor would it work against a hybrid, computational/noncomputational one (REFS), for the simple reason that in neither case could Searle be the entire system; Searle's Periscope would fail. “

    After reading other people's comments and the article, I'm still confused about the concept of "Searle's Periscope" -- what exactly does it mean, and what are its implications? From what I understand, it assumes that mental states are computational by nature and that a person's mental state at a given time corresponds to a certain "computational state" (correct me if I'm wrong). Does this mean we could experience the same mental state as that person if we were in the same computational state? But how are we able to get into the same computational state?

    ReplyDelete
    Replies
    1. Fiona, "Searle's Periscope" on the other-minds problem only works if computationalism is true (because computation is hardware-independent). If computationalism were really true, then just by "becoming" the hardware that is executing the T2-passing computer program, Searle would have to have all the cognitive (mental) properties of a thinking mind, for example, understanding Chinese (if the program can pass the Chinese T2). But he would not be understanding the Chinese by executing the T2-passing program (if such a program had existed), so computationalism ("Strong AI") is wrong. (By not understanding Chinese when implementing a Chinese TT-passing program, Searle "penetrates" the other-minds barrier just for this one special case, according to which passing the Chinese T2 means understanding Chinese: Searle shows it doesn't, by "becoming" the hardware that passes T2, without understanding Chinese. That's "Searle's Periscope.")

      But ("Stevan says") only a T3 could pass T2! So this is all counterfactual.

      Delete
    2. I am confused as to whether computationalism and strong AI are equivalents. Is computationalism only different from strong AI insofar as it is a clarified version?

      Is one a subset of the other (i.e., does computationalism contain the hypothesis of strong AI, or vice versa)? My intuition was that computationalism is simply the belief that cognition is computation, whereas Strong AI is the belief that consciousness/intentionality could be created artificially. While the obvious route to creating artificial consciousness would be through computation, I did not take Strong AI to necessarily preclude the possibility of hybrid or artificial dynamical systems that could create artificial intentionality/consciousness.

      Am I wrong in this assumption?

      Delete
  30. "There are still plenty of degrees of freedom in both hybrid and noncomputational approaches to reverse-engineering cognition without constraining us to reverse-engineering the brain (T4). So cognitive neuroscience cannot take heart from the CRA either. It is only one very narrow approach that has been discredited: pure computationalism."

    Kid-sib is having a hard time understanding the first sentence here. I understand that Searle's CRA discredits pure computationalism, yet, according to Harnad, it does not show that computation plays no role in cognition at all: computation may be part of cognition even though it cannot be all of it.

    Professor Harnad, can you explain which "degrees of freedom" you are referring to?

    ReplyDelete
  31. I’m still sort of having trouble determining what exactly the professor’s main issue with Searle’s CRA is. I understand how Searle was wrong in saying that computation did not play a role at all in cognition, but I get lost with the following argument:

    "Searle thought that the CRA had invalidated the Turing Test as an indicator of mental states. But we always knew that the TT was fallible; like the CRA, it is not a proof. Moreover, it is only T2 (not T3 or T4 REFS) that is vulnerable to the CRA, and even that only for the special case of an implementation-independent, purely computational candidate. The CRA would not work against a non-computational T2-passing system; nor would it work against a hybrid, computational/noncomputational one (REFS), for the simple reason that in neither case could Searle BE the entire system; Searle's Periscope would fail. Not that Systematists should take heart from this, for if cognition is hybrid, computationalism is still false"

    Why does the CRA not necessarily work for T3? What is the importance of the "box" in the CRA? Couldn't we imagine that Searle was not in a box and had memorized all the rules, as well as the sounds, for speaking Chinese? Wouldn't he then be T3? Even though he was indistinguishable from a Chinese speaker, he would have no idea what he was really talking about.

    ReplyDelete
    Replies
    1. @Lucy. From my understanding, the CRA is right in saying that cognition is not all (or just) computation, but Searle goes further and claims that computation isn't useful for finding out about cognition at all. Harnad is saying that, yes, cognition can't be just computation, since cognition involves internal causal powers, intention, and understanding, while computation doesn't; however, we cannot completely dismiss computation as useless for understanding cognition, because computation may be an important part of cognition (without being the whole of it).

      Delete
  32. According to computationalism, "Mental states are just implementation-independent implementations of computer programs." And "If all physical implementations of one and the same computational system are indeed equivalent, then when any one of them has (or lacks) a given computational property, it follows that they all do."

    This means that if Searle, in implementing the Chinese Room algorithm, does not understand, then no system implementing the same algorithm would understand either. Presumably this can be extended to any algorithm operating on pure formal symbol manipulation, thereby disproving strong computationalism.

    But it seems to me, as Harnad points out as well, that the argument can no longer apply when the system is doing anything more than just computation. This is the strength of the robot reply.

    For if the brain is doing any computation at all -- and neurons at the very least appear to be -- then Searle's argument, as he puts it, would imply that no one understands anything. That is clearly not the case. I wonder how Searle would reply to Harnad? (A toy sketch of the implementation-independence point follows this comment.)

    ReplyDelete
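    The implementation-independence point in the comment above can be made concrete with a small toy sketch in Python. This is a hypothetical illustration only: the rule table, symbols, and function names are invented, and of course no table this small could actually pass T2. The point is just that two different "hardwares" executing the same formal symbol-manipulation rules are computationally equivalent, so whatever purely computational property one has (or lacks), the other does too.

    # Toy illustration of implementation-independence (not an actual T2-passing
    # program): the same formal symbol-manipulation rules, executed by two
    # different implementations. The rule table maps input symbol strings to
    # output symbol strings with no reference to what any symbol means.

    RULES = {
        "你好吗": "我很好",    # invented input/output pairs; meanings play no role
        "你是谁": "我是笔友",
    }

    def room_dict(symbols: str) -> str:
        """Implementation 1: a direct table lookup."""
        return RULES.get(symbols, "请再说一遍")

    def room_scan(symbols: str) -> str:
        """Implementation 2: a linear scan over the same rules."""
        for pattern, response in RULES.items():
            if pattern == symbols:
                return response
        return "请再说一遍"

    # Both implementations produce identical input/output behaviour, so any
    # purely computational property one has, the other has too.
    for msg in ["你好吗", "你是谁", "早上好"]:
        assert room_dict(msg) == room_scan(msg)
        print(msg, "->", room_dict(msg))

    Searle's argument amounts to noting that he himself could serve as yet another implementation of the same rules, with the same input/output behaviour, and still understand nothing.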
  33. Which structural capacities are necessary in the reverse-engineering process, and which capacities are merely "decoration"? I would consider sensorimotor capabilities to fall partly under the structural category (correct me if I'm wrong). I feel like sensorimotor capabilities are definitely important in terms of allowing an entity to explore and learn from its environment. However, I think we need to be careful when trying to determine whether an entity can really understand once we step away from the T2 pen-pal interaction and are able to actually see the entity in question walking and moving around the room, for example. This kind of comes back to the "other minds" problem: I would more easily conclude that Dominique can understand simply because she appears much more human-like than a robot does.

    ReplyDelete
  34. Searle attempts to argue that mechanisms don't necessarily have a mind and cannot produce understanding, even while achieving a task meant to demonstrate understanding. My understanding is that the flaws in Searle's metaphor can be broken down as follows: (1) in both a computer and a brain there is a capacity for change/adaptation and pruning of mechanisms that isn't elaborated in his Chinese Room, and this makes a big difference to the origin of understanding; (2) understanding is carried through the system even though the individual nodes don't necessarily understand; (3) there is ambiguity about who understands or does not understand Chinese -- is it Searle, the room itself, or the entire system including the people outside the room receiving the outputs?; (4) any form of understanding that can be adequately assessed must go beyond performing a linear, straightforward task with one set of rules; intelligence may be definable, but understanding is not necessarily an all-or-nothing state that can be achieved.

    ReplyDelete
  35. RE: reverse-engineering a duck
    I have a little confusion when it comes to the functionalist settling for less, as in only requiring the outer appearance of equivalent functionality (even if by different mechanisms).
    We previously talked about implementation-independence, but does this extend to individual differences between brains?
    Most humans can functionally achieve the same behaviours, movements, etc., and we can assume it is by the same general functions internally, but even just anatomically there is such a breadth of variation. The degrees of freedom shrink as we point (perhaps pointlessly) towards analogous, and then even more so homologous, structures, until we can point to the basal ganglia across humans and say, "despite the immense variation in shape, size, activity, etc., this is functionally and structurally consistent."
    I tend to get caught up in the biology because of my studies, but for the TT we only really care about performance capacity, so does this line up with your second tenet of computationalism?

    ReplyDelete
  36. RE: Reverse engineering a duck

    Because of the old saying "if it looks like a duck and acts like a duck, it's a duck," I'm wondering: if a reverse-engineered duck, or a computer passing as a human, is completely indistinguishable from the real thing, didn't you just make a duck or a human? If everything is completely equal, then I don't really think it's a computer that is 'cognizing'; it's just a human. If something can pass T3 or T4, I think that it would (hypothetically speaking) become a human being, especially if a human is just whatever we say it is.

    ReplyDelete
    Replies
    1. That's a great point, bringing up the saying "if it looks like a duck and acts like a duck then it's a duck." I think your conclusion that, if we can reverse-engineer a human, then with everything being equal it's just a human rather than a computer, is a strong one. From what we've discussed in class, it would seem that if something passed T3 then it would just be human, or rather we would have no reason to believe it not to be (the other-minds problem). Dominique is a robot, yet we have no problem calling her a human, and if that information hadn't been provided we would never question whether she was human or could cognize.

      Delete
  37. “Unconscious states in nonconscious entities (like toasters) are no kind of MENTAL state at all.”

    To ask a kid-sibly question, I wonder what exactly counts as a mental state. I notice certain terms in this response, and in Searle's original argument as well as Turing's, that are being used as if they were equivalent/synonymous, or mutually required. In this argument, the terms cognition, conscious understanding, and mental states are all being used. The central tenet of Searle's argument was that the machine (Searle) cannot understand, and therefore it fails to think, or to succeed as a model of cognition. Both this argument and Searle's require that something must understand in order to cognize. Why is this necessary? What does it mean to understand, and what does it mean to 'have cognition' or 'be in a mental state'? Can something 'think' without understanding?

    ReplyDelete
  38. Perhaps I don’t understand properly what the Searle Chinese Room is, but wouldn’t there be some sort of learning going on between Searle and Chinese? I know that he wouldn’t learn how to read or write per se, but would he not make associations through the different characters just by having the program there to teach him how to answer questions? I imagine that you would see some patterns emerge through answering lots of questions even if you don’t put actual meaning to the characters.

    ReplyDelete
    Replies
    1. I had a similar thought, Anthea. I agree that, by answering enough questions, we would probably be able to see patterns emerge and begin assigning meaning to the symbols. However, for the purposes of the CRA, I believe the point was that Searle was able to manipulate the formal symbols in a way that made it appear he understood the language, when at this stage of manipulation he really didn't, and hadn't made any associations. It demonstrated to us that through computation alone one can appear to understand while all one is really doing is manipulating symbols.

      Delete
  39. To be honest, I began to get a bit lost near the end of this reading. What I THOUGHT I understood was that Searle views all mental states as conscious (otherwise they are not mental) and all mental states as nothing but computational states. Does that mean Searle believes computation contains cognition? If a mental state is conscious, is a computer conscious? Is strong AI the assumption that artificial intelligence really does have consciousness?
    In addition, I was unclear on the distinction between strong AI and computationalism. Is there a difference?

    ReplyDelete
  40. It is very interesting that Searle uses the condition of "implementation-independence" to show that computationalism cannot explain cognition. Harnad argues that although Searle is right that computation is not all there is to cognition, his complete rejection of computation as an explanatory model for cognition is too strong, and computation may in fact be an important part of cognition. I'm wondering what pieces of evidence could support the argument that cognition is partly computation. Are we using computation to explain parts of cognition because we do not have any more powerful model at hand, or is there any concrete evidence from brain function that would lead us to believe that computation is involved in cognition?

    I also found the part on conscious understanding of the language a bit unclear. I agree with the argument that "Unconscious states in nonconscious entities (like toasters) are no kind of MENTAL state at all," but I don't think that unconscious states in conscious entities must be brief to count as mental states. The brevity condition on unconscious states, in my opinion, does not add any useful insight into the nature of mental states or conscious understanding.

    ReplyDelete