Saturday 2 January 2016

1b. Harnad, S. (2009) Cohabitation: Computation at 70, Cognition at 20

Harnad, S. (2009) Cohabitation: Computation at 70, Cognition at 20. In Dedrick, D. (Ed.), Cognition, Computation, and Pylyshyn. MIT Press.


Zenon Pylyshyn cast cognition's lot with computation, stretching the Church/Turing Thesis to its limit: We had no idea how the mind did anything, whereas we knew computation could do just about everything. Doing it with images would be like doing it with mirrors, and little men in mirrors. So why not do it all with symbols and rules instead? Everything worthy of the name "cognition," anyway; not what was too thick for cognition to penetrate. It might even solve the mind/body problem if the soul, like software, were independent of its physical incarnation. It looked like we had the architecture of cognition virtually licked. Even neural nets could be either simulated or subsumed. But then came Searle, with his sino-spoiler thought experiment, showing that cognition cannot be all computation (though not, as Searle thought, that it cannot be computation at all). So if cognition has to be hybrid sensorimotor/symbolic, it turns out we've all just been haggling over the price, instead of delivering the goods, as Turing had originally proposed 5 decades earlier.

90 comments:

  1. I found this reading particularly eye-opening, and it helped me further understand last Friday’s class, as this is the first time that I have ever really asked the “how” question. (How is it that we see our third-grade teacher?) One part that I wish had been elaborated on (we touched a bit on it in class) was emotions, and how a machine would attempt to compute them. How does seeing a picture of someone make you feel a certain way, especially when these feelings may change over time? What changes internally, and how would a machine compute these emotions?

    ReplyDelete
    Replies
    1. First of all, we will soon learn that computationalism ("cognition is just computation") is wrong. But even if it were right, explaining how and why the brain "computes" emotions would be the "hard problem." This course is on the "easy problem" of how and why the brain causes our capacity to do all the things we can do. Feeling is not something you do but something you feel. We'll learn something about the behavioral and brain correlates and predictors of feeling, but not how or why the brain causes feeling. And that's just as hard a problem for cognitive science, whether or not computationalism is true.

      Delete
    1. The question of 'how is it that we see our third grade teacher?' led me to thinking about how computers store memory. For example, a computer could allocate a space in memory for a reference point, which would point to two other spots in memory: one that stores an image of your third grade teacher, and one that stores her name. Referencing this specific reference point would bring up either her image, her name, or both.

      With time, it would need to free up space in memory (garbage collection: deleting information it doesn't need, much as people forget facts they have no use for), so maybe one of those spots in memory would be deleted. Maybe the computer would still be able to conjure up an image of your third grade teacher, but not her name, or vice versa.

      Granted, I don't think this is necessarily how people think/store information, but this is where my thoughts went.
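
      (To make the analogy concrete, here is a toy sketch in Python of what I mean. It is purely illustrative: the names and the dictionary structure are my own inventions, not a claim about how memory actually works in brains or even in real databases.)

      # A "reference point" linking two stored items about the same person.
      teacher = {
          "image": "bitmap_of_third_grade_teacher.png",  # stand-in for a stored picture
          "name": "Ms. Smith",                           # hypothetical name
      }

      # Crude "garbage collection": the reference survives, but one attribute is freed,
      # so only part of the information can still be retrieved.
      teacher["name"] = None

      print(teacher["image"])  # still retrievable: the picture without the name
      print(teacher["name"])   # None: the name has been "forgotten"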

      Delete
  2. It seems like the main issue in all these attempts is that consciousness is both the subject to work on and the tool to work with. We are always trying to explain consciousness with what our own consciousness provides us, which always ends up begging the question. This results at best in incomplete explanations (as for computationalism) and at worst in superficial or empty descriptions (as for introspection and behaviourism).

    ReplyDelete
    Replies
    1. Let's distinguish trying to explain cognition (the "easy" part: doing) and trying to explain consciousness (the hard part: feeling).

      Everything we explain, whether in physics or chemistry or biology or psychology is done with our own brains. No escaping that.

      Introspection and behaviorism have failed, but has computationalism failed?

      Delete
    2. I'm sorry, I had cognition in mind. But even for computation, in the end it is a concept our mind invented. Could something invent itself?

      Delete
    3. Mael, don't get too carried away by this "inventing itself" paradox. It's not a paradox. People can reverse-engineer the correct causal explanation of how a toaster toasts, how muscles move, how people solve quadratic equations, and how they do everything else too.

      Delete
  3. This comment is on both ‘What is Computation?’ and ‘Cohabitation’. One of Horswill’s claims that interested me most is that arriving at a definition of computation is far harder than one might think. The takeaway for me seems to be that in reality any physical process can potentially be a computation, as long as we can map a logical symbol manipulation process to those physical processes. It just so happens that certain physical processes involving silicon (which are used in my computer) are more suited for the job. So all physical processes are in some sense potentially computations, but nothing is intrinsically a computation. If I’m correct, this underscores Prof. Harnad's argument in Cohabitation that cognition cannot be computation, because the ‘meaning’ of the output of a computation, what it actually represents, is assigned by us. We 'ground' the output pattern, thereby linking it to something real in the world. So thought or cognition cannot be computation because that would require a separate homunculus external to our neural computations to ground or assign the reference to the biophysical process in our brains. But then you need another homunculus separate from the first to ground its cognition, thereby inviting infinite regress.

    I suppose that what is so misleading about the computational metaphor is that we presuppose that the mapping is intrinsic to the computation when it isn’t.

    ReplyDelete
    Replies
    1. Computation is the rule-based manipulation of symbols on the basis of their shapes. But water flowing, or a furnace heating, or a vacuum cleaner sucking is not computation: not everything is computation (in fact most things aren't). However, computation is so powerful that it can formally simulate just about anything. (This is called the "Strong Church/Turing Thesis." We'll get to it next week.) But formal simulation is just symbolic. A computational simulation of an airplane can't fly. It's just 0's and 1's that are interpretable as flying. Simulated water is not wet, even if it is piped into a Virtual-Reality device (which is also not a computer) that fools your senses into thinking it's wet.
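
      (A toy illustration -- not from the reading, and with a made-up rule and made-up numbers -- of what "symbols interpretable as flying" amounts to. All that happens physically when this runs is that bits change state.)

      def climb_rule(a):
          # a formal rule applied to a symbol purely on the basis of its "shape" (its value)
          return a + 100

      altitude = 0  # a symbol we choose to interpret as altitude in feet
      for step in range(5):
          altitude = climb_rule(altitude)

      print(altitude)  # 500: interpretable as "the plane climbed to 500 feet", but nothing flew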

      Delete
    2. "But formal simulation is just symbolic. A computational simulation of an airplane can't fly. It's just 0's an 1's that are interpretable as flying."

      But isn't this true for all computation, not just simulation? Fundamentally what's going on in my computer is just a very organised electromagnetic process that has been designed to map onto the kinds of rule-based symbolic manipulations that are useful to me. The physical patterns that are output are useful to me because I know enough about what my computer is doing to allow me to interpret the pattern, i.e. ground it to something in the world. Without my (or another human's) process of interpretation there's nothing fundamentally different between the physical organisation in my computer and any other kind of physical organisation, is there?

      This of course then begs the question as to why the biochemistry in my brain is any different. But at least we can say that it has to be, otherwise we get the infinite regress of homunculi interpreting patterns that I mentioned in my first comment.

      Delete
    3. I think my point/question is actually quite trivial. I'm merely trying to better understand how we can distinguish cognition from computation.

      Within the category of organised, low entropy physical phenomena that we find on Earth, the fact that the one going on between my ears can interpret/ground Shannon information fundamentally sets it apart from other organised, low entropy phenomena such as my computer, a car engine, a snowflake, a whirlpool. Further dividing that latter category into man made organised stuff and natural non-living organised stuff, and then further categorising those, is another point.

      Delete
    4. Yes computers are just computing.

      But not everything is just computing.

      A toaster is not.

      And a computer-simulated toaster is not toasting your bread.

      And no one knows whether the brain is just computing, or something else.

      What else is possible? All of physics and chemistry: dynamics. Neither billiard balls in a pool game nor molecules in a chemical reaction are computing (though both are computer-simulable -- but with no real motion, nor chemistry).

      Delete
    5. Am I doing computations when I use an abacus?
      Does it still count as a computation if a monkey is using the abacus?
      What if wind is blowing and moving the pieces?

      I'm trying to stretch the definition of computation to better understand it.

      Delete
    6. A computation is being executed if symbols are being systematically manipulated according to an algorithm, whether you are doing it, a monkey is doing it, or the wind is doing it. That a monkey would factor a quadratic equation is about as likely as that it would write Shakespeare, but if it did, it would be computing. Ditto for the wind.

      Delete
    7. As Dr. Harnad mentioned, computation is rule-based manipulation of symbols, and many things aren't computation, like water flowing or a furnace heating; but computation is powerful enough to simulate these things. I'm wondering whether, in order to understand computation without really solving the hard problem, we need to accept that a machine may not actually be thinking and cognizing the way humans are, but that if the machine can simulate this ability and pass the T2 (verbal) level of the test, then that's what matters.

      Delete
  4. "Not just the email version of the TT, based on computation alone, which has been shown to be insufficient by Searle, but the full robotic version of the TT, in which the symbolic capacities are grounded in sensorimotor capacities and the robot itself (Pylyshyn 1987) can mediate the connection, directly and autonomously, between its internal symbols and the external things its symbols are interpretable as being about, without the need for mediation by the minds of external interpreters."

    In the full robotic version, would we then say that consciousness involves the ability to perform actions? I believe it was said in class that the robot would be able to be passed objects and required to manipulate them as specified. However, clearly one can be conscious without being able to perform actions (for example, a paralyzed person). What is the complete set of requirements for this robotic Turing test?

    ReplyDelete
    Replies
    1. First of all, unless I have misremembered the name of this year's T3 robot, it is you, Dominique! And a T3 robot must be able to do everything an ordinary person can do. (But remember the T-test is only a test of doing capacity ("easy"), not feeling ("hard"). And the goal -- in order to understand how and why people can do what they can do -- is to model capacity, not incapacity!)

      Delete
  5. Regarding how we learn the rules to categorize things (p.3) in relation to the symbol-grounding problem (p.8), a question I have is: where does perceiving end and categorizing begin? Is there a distinction between perceiving and categorizing, and if so, is it on a continuum or strictly on two sides? To the extent that categories are perceived without or with less ‘processing’ (I’m not sure what exactly this would be), symbols and the world seem more closely/immediately related (perhaps symbols copying or reflecting the world). To the extent that categories are perceived through ‘processing’, the connection between symbols and the world is more distant (perhaps symbols as derivatives of the world). How and why we learn categorizing rules seems to differ depending on where we draw the line, if there is a line at all.

    ReplyDelete
    Replies
    1. Perception is passive while categorization is active, no? I would say categorizing relies on perceptual input, but the two processes are distinct. Agnosia, for example, lets you perceive the world while failing to categorize what you perceive.

      Delete
    2. Avoid the preferred jargon of the priestly class:

      Organisms (and machines) can see and hear. This means they can detect and respond to sights and sounds (that's something they can do, and calls for the "easy" explanation). Also, for many organisms (but not for machines, so far) it feels like something to see and hear (that's the "hard" problem).

      Among the things people (and machines) can do with their senses and bodies (seeing, hearing, touch, movement) is to categorize (do the right thing with the right kind of thing).

      "Perception" is a wesel-word: it can mean detecting and responding or it can mean felt detecting and responding.

      The rest is about what internal "processing" is needed to be able to do what.

      Delete
  6. “The imagery theorists stressed that, for example, the way I remember who my third-grade school-teacher was is that I first picture her in my head, and then I name her, just as I would if I had seen her.”
    I am wondering if there would be a parallel explanation utilizing the other senses for how a blind person remembers people/things/places they have previously encountered. Would the imagery theorists have used the same logic: smell – how a place smelled; audition – the sound of a particular person’s voice; touch – the different feel of one shirt versus another? Or did their theory rely on logic or criteria that required the visual domain and the sense of sight?

    While, as explained in the reading, this reasoning is merely a ‘non-explanation masked as an explanation’, I am wondering if looking at the process posited by the mental imagery theory across the various senses could shine light on an underlying mechanism. Could analyzing the commonalities between how the different senses perform recall operations (such as the ‘mental imagery theory’ describes) help answer the ‘real functional questions’, or is the exercise futile for the same reasons described in the reading?

    ReplyDelete
    Replies
    1. Yes, "imagery" explanations include images in any sensory modality. And none are explanatory. But later in the courses you will see that the sense (and movement) are involved in the causal explanation after all.

      Delete
    2. I am of the opinion that it would be an exercise in futility, for the exact reasons described in the reading. The question for the other senses would still be 'how' they manage to do what they do, leaving us, once more, begging the question. I think the problem here goes beyond (or deeper than) one or all of the senses -- something farther up the hierarchy of processing where we actually cognize, something deeper in the brain rather than just perception.

      Delete
    3. Amar, can you be more specific about what you mean by farther up or deeper?

      Delete
    4. I think he might mean that split-instant journey where, physically, the percept leaves the thalamus and heads to its association cortex. Would that not be the next step in cognition? We have that first step, actually perceiving someone say "What is your third grade teacher's name?" and then we have all the sensations associated with her, but they're eternally on stand-by until we call them up like reserve troops. Couldn't cognition be that moment where, metaphorically, someone comes over to that group of soldiers and says, "If you had a CO named 'third grade teacher', then come forwards," and all the little associations, for 'third' and 'grade' and 'teacher' in every combination come forward? And in that moment, in recognition and matching to heuristics, we have cognition?

      Delete
    5. I apologize for not seeing these comments earlier - but yes Allie, that's exactly what I meant in terms of deeper/farther up.

      Delete
    6. Hi Allie and Amar, I agree with what you are saying to an extent. But I am curious as to whether cognition really is that moment of realization and understanding. Is it the moment when we are able to understand the question that is asked of us and correctly recall and identify our third grade teacher, or is cognition instead everything that came before that -- the processes that allowed us to understand the question at hand? Is cognition the mechanisms that allow us to gain access to stored information (hard problem) so that we can react to it (easy problem)? Regardless, simply identifying the exact moment that cognition (or consciousness; our own awareness of understanding and our ability to come up with an appropriate response) takes place does not resolve the “cognitive impenetrability” problem. By correctly reporting who our third grade teacher is, we are not “explaining the functions themselves” (Harnad, 2005). By this I mean that we are simply reporting the correct answer, and by verbally expressing the information that our consciousness has allowed us access to, we are reacting to an internal state of awareness.

      In this course I am assuming we will try to understand the ‘easy problem’: questions about integration of information, our ability to categorize, discriminate and perform behaviours (Chalmers, 1995). I am curious to see more explanations about how we are affected by internal states and react to them in different ways.

      Delete
  7. The paper states "The gist of the Turing Test is that on the day we will have been able to put together a system that can do everything a human being can do, indistinguishably from the way a human being does it, we will have come up with at least one viable explanation of cognition."

    But as Dominique and Professor Harnad have discussed, the Turing test is “only a test of doing capacity (‘easy’), not feeling (hard).” Therefore, the Turing test cannot “set the agenda”: a test that is indistinguishable from the human equivalent cannot really give us a viable explanation of cognition. That criterion is still insufficient to explain consciousness (feeling).

    The T3 robot as a solution to the symbol grounding question therefore poses a few questions. If the “symbolic capacities are grounded in sensorimotor capacities and the robot itself can mediate the connection autonomously”, are we then able to appropriately explain consciousness, or even determine whether the T3 robot “feels”? How can we avoid homuncularity in the design and analysis of the robot?

    ReplyDelete
    Replies
    1. Yes, a causal mechanism that can pass the Turing Test does not solve the hard problem, "only" the easy one. But that's still a lot (and we're nowhere near it yet); and ("Stevan Says" -- echoing Turing) it's also the best we can do. The "Other Minds" problem (which is not the same thing as the hard problem!) also prevents us from knowing whether or not T3 feels. (But even if she -- Dominique -- does feel, her functional mechanism does not explain how or why she feels, so it does not solve the hard problem.)

      Delete
  8. Trying to study the universe is particularly difficult because we can't take a look at it from the outside; our viewpoint is always from within. It seems like in studying cognition we come across this same problem. Although immensely interesting, especially as the brain is studied by so many disciplines, do you think that we will ever be able to solve the 'how', whether it be computation or a combination of computation and 'other' (like emotion)? Behaviourism didn't ask the right questions; introspection glided over blind spots; in class, MRI and other brain-imaging techniques were discussed as providing data but no answer to the how; and AI seems to fail to account for Searle's Room and 'understanding'. What is the next step in understanding cognition, and what part does computation play in it?

    ReplyDelete
    Replies
    1. I think the answer for what is the next step might be related to a quote in the article: "If cognition has to be hybrid sensorimotor/symbolic, it turns out we've all been just haggling over the price, instead of delivering the goods".

      It looks as though it is an empirical question of reverse engineering a "full robotic version of the Turing Test, in which the symbolic capacities are grounded in sensorimotor capacities".

      We can then examine the robot to see where computation plays a role. But as the paper ends with “Does it really matter?” - as long as cognitive science can explain why and how we do what we do.

      Delete
    2. Kathryn, what does "look at from the outside" mean, if neither studying the universe nor studying cognition is "looking from the outside"? I think we can solve the easy problem, but not the hard one.
      Austin, yes, whatever successfully passes the TT will show us how the computational part is grounded in the noncomputational part.

      Delete
  9. According to the paper by Harnad (2005), Searle’s Chinese room experiment shows “cognition cannot be all computation,” however I don’t believe Searle does show this. Searle’s thought experiment takes three assumptions for granted: (1) Searle is able to pass the Turing test, (2) Searle does so given the (finite) stack of Chinese writing he is given, and (3) Searle still does not speak Chinese. However, I don’t believe this is possible. If Searle were to pass the Turing test, while not speaking Chinese, this would require an infinitely large collection of Chinese stories that he could use to answer the questions. However, when Strong AI talks about computers being able to think and understand, I do not think they mean with an infinite amount of input. Passing the Turing test requires something else— the ability to learn and thus apply what we gain from our limited inputs to new situations. In other words, the only way for Searle to pass the Turing test in the situation given would be to teach himself Chinese. Whether this can be done with computation is a different question, and not one that can be answered by the Chinese room experiment (given that Searle, a human, is the “machine” in the example).

    ReplyDelete
    Replies
    1. You haven't quite understood the Turing Test or Searle's argument, but Weeks 2 and 3 will clear that all up. It's partly based on whether we mean the verbal Turing Test or the robotic one. But even just the verbal one is not just question-answering. It's anything you say when you are texting -- with Dominique. The premise of computationalism is that a computer program alone (symbols, manipulated according to rules) can pass the verbal TT. Searle just points out that if he were executing the same rules, and the TT was being conducted in Chinese, he would not be understanding Chinese, even though he was passing the (verbal) TT (also called T2; T3 is the robotic TT).

      Delete
    2. Professor Harnad, thank you for your response. I didn't mean to say that the verbal Turing test was only answering directed questions; I just thought that the situation Searle set up was presented in this way -- that he was manipulating symbols in Chinese in order to answer questions posed to him in Chinese from outside the room. What I meant about Searle's argument was that, although following the rules of symbol manipulation might allow him to pass the verbal Turing test, the "programming" would have to be much more complex than the task he is performing. An example might be if (through functions made up of symbol manipulation) he were able to identify patterns in the characters that, over time, led to educated guesses about their role in the "script." In a case like this, I think he would have a chance at passing the TT, although I don't think it could be said that he didn't understand any Chinese.

      Delete
  10. In light of discharging the homunculus, and not being fooled by decorative cognitive correlates that do NOT provide functional definitions for cognition as empowered by Zenon, I question some of the methods of cognitive science. From my interpretation, understanding cognition can be achieved by modelling computation and thus elucidating its function. But by working in this way, how do we avoid simply modelling/simulating the decorative cognitive correlates, as opposed to cognition’s true form? Somehow I feel like working backwards, from modelling/simulation to understanding cognition, runs a higher risk of reifying and reinforcing cognitive correlates since they are already so misleading and tempting when explanations are needed.

    ReplyDelete
    Replies
    1. Jessica, would you call a device that could do and say anything a human can do and say, indistinguishably from a human, to a human, for a lifetime -- i.e., Dominique -- merely decorative?

      Delete
    2. Professor,
      No, certainly not. However, in the example you provided, it is implied that Dominique's programmers captured the function of humans, i.e. they were not fooled by decorative correlates at all. My comment simply highlighted the potentially heightened challenge of avoiding decorative correlates when using computational modelling, because it may reinforce (or re-model) decorative correlates that are intrinsic to the modelling process. Does this make any sense? It is difficult to convey what I mean. I am sorry, kid-sib!

      Delete
  11. “…the trouble with ‘picture in the mind’ ‘just-so’ stories is that they simply defer our explanatory debt: How did our brains find the right picture? And how did they identify whom it was a picture of?”

    As written in the article, an idea proposed in answer to this question had to do with “the little man” in one’s head. This is very clearly a roundabout answer, which doesn’t actually answer the question. My question is: how will one ever know they’ve reached the most direct answer? Will it ever be clear how one identifies one’s 3rd grade teacher? Or, instead, will one always be wondering what explains a specific answer? Say it were determined that “x” is the reason we’re able to identify our teacher: what is it that explains why “x” can do so? Is this just an endless question?

    ReplyDelete
    Replies
    1. Laura, would you agree that once you've reverse-engineered and built a device that can actually do that (and everything else) then you will have explained how? and without a homunculus...

      Delete
  12. "symbolic capacities are grounded in sensorimotor capacities and the robot itself (Pylyshyn 1987) can mediate the connection, directly and autonomously, between its internal symbols and the external things its symbols are interpretable as being about"

    This section suggests that our abstract representations of information are contingent on the limitations of physiological hardware, but how does this address the question of semantics posed by Searle's Chinese room argument?

    Is it impossible for meaning/semantics to be syntactically defined? I.e., the meaning of input A is that it relates to B as output? Is this refutation of syntactically defined meaning of symbols a prioritisation of humanlike sensorimotor capabilities as a "special case" of input-output necessary for cognition that differs fundamentally from computational principles? Is sensorimotor capability not simply a computational process?

    I think that this sensorimotor mediation of semantics we ask of intelligence is clouded by our feeling of our own intelligence. I believe the key component of this process of meaning-creation is the autonomy required.

    In particular, if a non-Chinese speaker is provided with a minimal set of Chinese rules and interacts with Chinese texters, and slowly acquires more rule-sets that give them the ability to form more input-output interactions without ever gaining semantic access the way native Chinese texters have it, I would be inclined to concede that they have an alternative semantics to their syntactic rule sets that is symbolically inter-defined. Otherwise, we would just be privileging human-specific physiological configurations like sensorimotor circuitry, which is not the cognitive process we are investigating but merely that through which the cognitive process of learning occurs. However, I do concede that I don't know what an autonomous non-sensorimotor entity would look like.

    ReplyDelete
    Replies
    1. Yi Yang, you sound like you're talking to a philosopher, not to kid sib!

      You ask what is the meaning of "X" and someone replies it's Y when G and Z when H. Now, with that extra syntax, do you know what "X" means?

      Delete
    2. So are we saying that there is a true referent that gives a particular symbol its real meaning? Or is it the way we refer to the referent that makes it the correct meaning?

      What about numbers? As abstract things, do they have a true referent we can verify all our symbolic representations of numbers against?

      Delete
  13. To me, the idea of universal human anosognosia is fascinating.
    Despite our self-assuredness of our memories, mental constructions, and general perceptions, everything points to these being wholly unreliable.
    The commonly cited proposition that many or most of our childhood memories are probably not true, and are simply fabricated and re-fabricated upon recall, is disturbing in that it threatens our personal identity.
    However, if the article can provide any reassurance, it is that this anosognosia of perception and memory may be what defines our unique intellect.
    A computer will have perfect recall of stored information, and although that information can be overwritten, the mechanisms are different than the uncontrolled plasticity of our own memories when invoked. Perhaps this too could be emulated, perhaps not.

    ReplyDelete
    Replies
    1. Anosognosia is not knowing what you're missing, but it doesn't explain how you know what you do know.

      Computation can be as flexible as you like...

      Delete
    2. The fact that we don’t notice everything in our surroundings (and the overall malleability of memory) is certainly an adaptive quality in humans. But couldn't a computer built just like us also have this quality? In explaining anosognosia in his paper, the professor states that it is the “picture completion effect,” so if we have a computer programmed to perceive/feel things the way we do, then it would also have trouble noticing everything in its environment and it also wouldn’t be aware of this deficit.

      Computers themselves have limited capacity when it comes to storage. For example, when a PC has too many programs open or too many things downloaded onto it, it doesn't perform as quickly or fails to complete tasks. This could be similar to the reason why our brains don't retain or notice everything that we encounter: it would otherwise be overwhelming. There must then be some mechanism in the brain (not a homunculus, of course) that selectively chooses important or salient information that is then triggered upon recall.
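
      (To make the idea concrete, here is a toy sketch -- entirely my own illustration, with made-up items and salience scores -- of a capacity-limited store that keeps only the most "salient" items. I am not suggesting the brain literally ranks and truncates like this.)

      CAPACITY = 3  # hypothetical limit on what can be retained

      experiences = {                      # item: salience score (invented values)
          "teacher's face": 0.9,
          "colour of the hallway": 0.2,
          "fire alarm": 0.95,
          "what was for lunch": 0.4,
          "friend's joke": 0.7,
      }

      # Keep only the most salient items; everything else is "forgotten",
      # and (as with anosognosia) nothing marks the loss.
      retained = dict(sorted(experiences.items(), key=lambda kv: kv[1], reverse=True)[:CAPACITY])
      print(retained)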

      Delete
  14. “The classical TT is conducted by email; it is basically a test -- life-long, if need be -- of whether the system has the full performance capacity of a real pen-pal, so much so that we would not be able to tell it apart from a real human pen pal. If it passes the test, then it really cognizes; in particular, it really understands all the emails you have been sending it across the years, and the ones it has been sending you in reply (Harnad, in press)”

    Does the Turing Test really measure a computer’s “intelligence” and “understanding”, or does it simply measure the computer’s ability to mimic human responses? If the conversation consists merely of everyday exchanges, or even questions such as “how do you feel about…?”, it’s not difficult to program relevant responses. Even if the content of the conversation has depth, the computer has the capacity to scour the Internet for a response that mimics a human exchange. If a computer beats a chess master, does it possess the capacity to “think”, or is it just conducting a series of calculations at impressive speeds across a multitude of choices - a process that is more mechanical than cognitive? The major failure of the Turing test is that it is conducted by email, a medium which, by its nature, is one-dimensional, flat, and lacks emotional subtlety or intonation. What largely separates a human from a machine is the human’s ability to identify and manage one’s own emotions and the emotions of others, with empathy and compassion, or his possession of “emotional intelligence”. The medium of the back-and-forth of the written word puts the computer at a huge advantage in its ability to “pass” as a human.

    ReplyDelete
    Replies
    1. Texting with Dominique you can text about anything, for a lifetime: is that just mimicry?

      Yes, algorithms are mechanical. But they're very powerful too. And they can do things that we can do too. Maybe everything we can do, including learning and explaining and understanding?

      I don't know what "EQ" is, but feeling is the hard problem; texting is the easy problem (but a big one!).

      And texting ability is not everything we can do; there's more.

      Delete
  15. PS Jaime, could you please attach your picture to your Google login so I can connect your skywriting voice with your earth-talking face when I am trying to evaluate how well you're grasping the course content? (The freshman pictures are already out of date, and it's easier, for prosopagnosia, to have it right by each text!)

    ReplyDelete
  16. Searle's thought experiment makes sense to me. Just because a program can pass the Turing Test via email communication does not mean that the person that is executing the program actually understands (if you are following instructions, generating output in Chinese characters, and you do not know Chinese, you will not understand the conversation you are having with your pen-pal).
    It was then suggested that cognitive science focus on "scaling up to the Turing Test," instead of abandoning it. I'm not sure what was meant by this. It seems to me that Searle's argument will still be applicable. The Turing Test seems "behavioralist" or "functionalist" (for lack of a better word) in that it depends on the output of the program (and then the judgment of an observer). If we are judging based on output, based on our observations of the program, how would it be any more possible to know whether the program *understands* than it would be if the program was just sending emails in Chinese? Take Dominique, for example. Let's say she was a program being run by a person with no understanding of the meaning of anything Dominique says or does or encounters (no understanding of English, social interaction, body language, etc.) but this person is able to execute the program and make it appear as if she does. As with the Chinese pen-pal example, there is no understanding except in the heads of Dominique's observers. How can this flaw be overcome? How can cognitive science "scale up" to the Turing Test?

    ReplyDelete
    Replies
    1. Regarding your 'homunculus in Dominique's head' example. We can agree that the person running Dominique's program would not truly understand what was happening around her. But do our brains really understand what is happening around us? If our brains are just neurons blindly passing along action potentials based on inputs from other neurons, do they really understand English, social interaction, body language, etc.? If not, does this give us grounds to claim that we as people don't understand anything?

      Delete
    2. You bring up a good point. I think this brings the mind/body problem into it. If we say that we (and, thus, our cognition) ARE our brains, and nothing more, we might be removing mind from the equation. If we are our brains, and our brains are action potentials between neurons and thus are incapable of understanding, then we as people also are incapable of understanding. But I think this phrasing is misleading - 'we as people' implies the existence of a self / selves - and, through that, I would argue the existence of feeling and understanding. 'We as people' don't exist by this logic.

      I think the article presupposes that people are capable of understanding. If people do not understand, what does it mean to understand? Our only reference for understanding is the human mind. If we say humans don't understand, what do we mean when we ask if something understands?

      Delete
    3. I think this is where mind-brain identity falls short. Of course our brains are part of us but I don't think they tell the whole story. Cognition takes place in a body, and the body takes space in an environment. Let's take the nature of self out of the question for simplicity's sake, and talk about a person as a unified biochemical system. It certainly feels like I as a person understand, whether I can define it or not. Unless you're an eliminativist who says it's all illusions all the way down, this understanding (or at least the seeming of understanding) must come from somewhere. If not from our neurons, then where does it arise?

      I touch on some of this in my comment below. I feel like sensorimotor processes play at least a large part in this. Ultimately I think a sort of functionalist view can overcome the difficulty we're running into. That is, if we view the mind as a piece of software, a sort of higher, abstract level of activity being run by our neural and biochemical hardware, can we still say that we understand without our individual parts understanding? And where does this leave us with regard to Searle?

      Delete
  17. Regarding the dichotomy between Pylyshyn's computationalism and Searle's position (dynamicism? analogism?): this seems to me to be a false dichotomy, parallel to others in the history of science, in that the two sides are not mutually exclusive. For example, in the question of nature vs. nurture, there seems to be a little bit of both in just about everything. In this sense, I agree with Harnad that cognition is some part computation and some part dynamics.

    When we look closely at the brain, we see that neurons are like simple processors, highly predictable and (relatively) easily simulable in terms of their input/state/output relationships. Put enough simulated neurons in a room and you get the best chess player in the world, so at least some part of cognition is computation.
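
    (For instance, here is a deliberately crude toy "neuron" written as an input/state/output rule -- my own illustration with invented weights, not a claim about real neurons or real chess programs.)

    def toy_neuron(inputs, weights, threshold=1.0):
        # weighted sum of inputs; "fires" (outputs 1) only if it crosses the threshold
        activation = sum(i * w for i, w in zip(inputs, weights))
        return 1 if activation >= threshold else 0

    print(toy_neuron([1, 0, 1], [0.6, 0.9, 0.5]))  # 1: fires
    print(toy_neuron([0, 1, 0], [0.6, 0.9, 0.5]))  # 0: stays silent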

    But neurons on their own lack intentionality. They have no idea about the real-world object of their computations, blindly manipulating symbols (action potentials) with no idea that a game of chess is even happening. But the person whose neurons they are is explicitly aware of chess, of their opponent, of at least some of the things happening around them, so intentionality must arise somehow.

    I suppose my question is one about symbol-grounding. Presumably, an active brain in a vat, with nothing to look at and no body to see, could not ground its symbols in anything. In a sense, it would be analogous to Searle in his Chinese room. If this is so, then it seems that sensorimotor activity is necessary for symbol-grounding, acting as the link between blind information processing and real-world happenings in and around us. But is sensorimotor activity sufficient? If not, then where else could we look? And if so, would a truly intelligent robot not be truly cognizing, even for Searle? On a related note, whether or not it was cognizing, would it matter for our moral intuitions or for morality in general?

    ReplyDelete
    Replies
    1. In sum, if not sensorimotor processes, what else could be our symbol-grounder?

      Delete
  18. Something I found interesting is the idea of 'cognitive blind spots' and how I think they're required to allow us to function. Evolutionarily, I think that we have these blind spots to allow us to make sense of the world in the least overwhelming way possible. They are present in almost every domain of function: attention, memory, perception...

    This idea reminds me of Stuart Firestein's book "Ignorance: How It Drives Science", which discusses how ignorance, rather than knowledge, is the thing that science tries to understand. I think this is a good comparison for how cognitive science has an interest in explaining not the things we ARE aware of, but rather the blind spots themselves. This leads us to the "how" questions, such as how you remember your third grade teacher.

    ReplyDelete
    Replies
    1. Your comment on cognitive blind spots reminded me of another article I read in another psychology class. It argued that "only single-patient studies allow valid inferences about normal cognitive processes from the analysis of acquired cognitive disorders" (Caramazza & McCloskey, 1988). By explaining and learning about the abnormal (i.e. blind spots or cognitive disorders), we can then know what the norm (i.e. what we are aware of, or normal cognitive processes) is and have more insight into and understanding of it. But in this case, instead of being unaware of cognitive blind spots, we are unaware of the norm until we realize/learn about the abnormal. Only by looking at what's abnormal can we define what is "normal."

      Delete
  19. While the Chinese room thought experiment highlights some of the important shortcomings of the Turing Test, I must agree that Searle “went too far” in abandoning it altogether. In its original form, the TT may not truly be able to test whether a machine has the capacity to “understand”, but this doesn't mean that the test is conceptually invalid at its core. For an incarnation of the TT to be valid, the “symbol-grounding problem” is a very important issue at the intersection of computation and cognition that must be addressed; can a machine lacking the ability to connect symbols with their real world representations/meanings really be said to cognize? Syntactic manipulation does not imply semantic understanding. The Winograd Schema, although problematic in other regards, starts to address this issue by asking the computer to interpret ambiguous sentences (i.e. the computer has to determine the referent of an ambiguous pronoun). This shift towards meaning and semantic understanding seems to be a step in the right direction.

    ReplyDelete
  20. RE: "Vocabulary learning – learning to call things by their names – already exceeds the scope of behaviorism, because naming is not mere rote association: things are not stimuli, they are categories. Naming things is naming kinds (such as birds and chairs), not just associating responses to unique, identically recurring individual stimuli, as in paired associate learning. To learn to name kinds you first need to learn to identify them, to categorize them (Harnad 1996; 2005).''

    I find the human ability of categorisation extremely fascinating. Categorisation is an important way of bringing meaning to the world and allows us to make sense of things around us. However, I feel that simulating how humans learn to categorise with a computer would require a huge amount of inference from non-linguistic cues. Replicating categorisation through computation seems like an extremely complicated task, because for computation to replicate these categories it would take a lot of inference over existing elements.

    I think that non-verbal cues (such as tone of voice, movement, pointing, etc.) are important for learning and categorisation, since they require additional internal knowledge structures. However, the human brain is highly complex, and it seems as though trying to replicate that through computation is even harder. It would be interesting to study the dynamic functions of cognition that enable infants the capacity to learn and categorise being born with almost no pre-existing experience of the world.

    ReplyDelete
    Replies
    1. This comment has been removed by the author.

      Delete
    2. I agree with you, Fiona, that categorization is essential for understanding. However, I think replicating categorization through computation could be done, and in some sense is already being done. Computer programs can scan, identify items and group them together based on similarity.

      RE: "It would be interesting to study the dynamic functions of cognition that enable infants the capacity to learn and categorise being born with almost no pre-existing experience of the world. "

      In the case of human infants, I think much of what they categorize is based on salience. They become fine-tuned to their environment and learn that being able to categorize and differentiate between two faces is far more important than between two pieces of clothing. In a similar way, I think you could program a machine to categorize objects according to importance based on the frequency of their occurrence in the input.
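
      (Here is a minimal sketch of what I mean -- a made-up example in which a program ranks kinds of input by how often they occur and treats the most frequent kinds as the most important categories to learn first. It is only an illustration, not a model of infant learning.)

      from collections import Counter

      # Stream of inputs the "infant machine" might encounter (invented labels)
      stream = ["face", "face", "shirt", "face", "cup", "face", "shirt", "face"]

      counts = Counter(stream)
      # The most frequent kinds become the highest-priority categories
      priorities = [kind for kind, n in counts.most_common()]
      print(priorities)  # ['face', 'shirt', 'cup'] -- faces get fine-tuned first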

      Delete
  21. It was useful for me to understand Searle's "Chinese pen pal" thought experiment through a similar process I went through while learning the Korean alphabet: over an afternoon on the Korean-language Wikipedia page, I learned what phonetic sound each Korean symbol corresponded to, and subsequently learned to read Korean phonetically. Being able to read Korean phonetically definitely does not mean that I know what the words I am (clumsily) saying mean, and this, I think, is the fundamental problem that Searle's "Chinese pen pal" thought experiment tries to highlight. It's easy, from the limited lens of a human being, to think of artificial intelligence as being successful when it successfully fools another human being into thinking it's a human being, but then that just means that the computer has learned how to decode a human's words and spit back a sentence we interpret as meaningful through the lens of a human. I am reminded of how limited and selfishly anthropocentric we are as humans, and I wonder if "scaling up" the Turing Test means removing its "sociolinguistic code" aspect, and if that's even possible, or the point.

    ReplyDelete
  22. In terms of Searle’s Chinese Room problem, it does not quite make sense to me that he would have absolutely no understanding in that scenario. In order to be able to come up with replies to emails, some intelligent (some would argue creative) activity would have to be involved, in which a unique combination of thoughts, ideas, responses, etc. is produced. Therefore, I propose that Searle would have to be translating the Chinese into a language he understands, interpreting/comprehending the information, formulating a response, and then translating that response into Chinese. So while he does not understand Chinese, he does have understanding and thoughts regarding the information being processed. In terms of a Turing machine, then, it seems that if a machine could pass this test, perhaps the machine would be translating whatever human language is being input into a (computational) language that the system can understand. It would then formulate a response in its own language and translate the information back into a human language, in much the same way that Searle must be doing in this thought experiment. Overall, this suggests there must be more understanding involved for these “symbol” or “functional” systems to interact seamlessly with humans.

    Alternatively, Searle is suggesting that there are machines that can answer emails based on some arbitrary language input and come up with a response based solely on the symbols and not on interpreting them. If this was actually true, what would a program like that look like and how can the programmer be sure that the responses would make sense or be a logical response to the “pen-pal” on the other side?

    ReplyDelete
    Replies
    1. I see Searle's Chinese Room in another way that could possibly count as having Searle not understand Chinese. Searle had to get the symbol manipulation rules somewhere, say from a programmer. The programmer of this Chinese Room had to understand Chinese to be able to provide Searle with all the appropriate symbol manipulation rules in the first place. This way, one could possibly argue that Searle had no understanding of Chinese.
      But what exactly is understanding a language? When does language acquisition happen? It seems useless to say that language acquisition is solely about understanding a language, since humans are social animals. It is perhaps more useful to include being able to communicate with others as part of language acquisition. This way, one could argue that Searle does indeed know Chinese, since he can communicate with Chinese speakers fluently.
      Also, since language is an ever-changing thing, it may be hard for Searle's Chinese Room to pass the Turing Test over and over if Searle was told to never break the symbol manipulation rules given to him from the start. Certain words could have new meanings and certain new words may be added to the Chinese language. A machine that can learn and adapt accordingly is more useful in this case. Perhaps if Searle were a learning and adapting machine, he'd simply sound like someone's very old grandfather, trying to learn what the new "lingo" means.

      Delete
  23. “We are unaware of our cognitive blind spots - and we are mostly cognitively blind”

    This point was particularly interesting to me, as it is something I haven’t thought about before. I believe it means that because we are able to do certain things so easily, like thinking about what our third-grade teacher looked like, we don’t think about how we were able to do it. This is why we are mostly cognitively blind: for most of what we can do in our minds, we don’t actually know how things like imagery, subtraction/addition or vocabulary learning are being done.

    ReplyDelete
  24. Regarding Page 8/11 of 1b reading:

    “So Searle simply proposes to conduct the TT in Chinese (which he doesn’t understand) and he proposes that he himself should become the implementing hardware himself, by memorizing all the symbol manipulation rules and executing them himself, on all the email inputs, generating all the email outputs. Searle’s very simple point is that he could do this all without understanding a single word of Chinese.”

    It reminds me of the very first days when we are trying to learn some basic daily phrases in a new language. Aren't we doing the same thing: memorizing the symbols and executing them? For example, we may learn the word "bye" (or its equivalent in other languages) by seeing the bunch of things that always happen after it: we hear "bye", see our friend waving to us, and he/she then leaves. Then we know the symbol "bye" means this bunch of things that are about to happen; we know “bye” is about leaving or separating. Another example is how dogs learn to respond to their names. Executing unknown Chinese rules seems to resemble how we or dogs learn a word, from scratch. I would like to ask: is there really NO understanding of a single word of Chinese? I guess no one is sure. However, to firmly say the hardware inside the Chinese room knows NO Chinese, it would be more convincing if we had more evidence to show that it really has no understanding of Chinese.

    ReplyDelete
    Replies
    1. I think there is a difference between what humans consider to be "understanding" and merely responding to input. My interpretation of the reading was that understanding is uniquely human, while symbol manipulation (i.e. if x then y) is computation and can be replicated by a machine. Therefore, as is stated in the readings, cognition is not just computation. The machine could compute Chinese without having a human-like understanding of it.

      Delete

  25. If I understand correctly, Searle’s Chinese Room argument proposes that because the man manipulating the Chinese symbols does not himself understand Chinese, he is not cognizant of the communication. Extending this to what we learned in class about cognition and consciousness requiring feeling, it is interesting to wonder how Searle’s Chinese Room argument might apply to animals. If, for example, we were able to create a computerized dog that was in truth just a human in a dog suit, who, upon hearing a bark, was able to determine the correct output by a series of translation tools without having ever understood what the bark meant, is this identical to the human example? I personally believe yes. However, I’m not sure the same would apply for a less intelligent animal such as a bee. If we apply Searle’s Chinese Room argument to the bee dances used to communicate and locate honey, is it reasonable to argue that the person in the bee suit understands just as much as the other bees? Perhaps, because (unlike with dogs) it is questionable whether or not bees feel? So unlike a dog, who knows what it feels like to hear a certain threatening bark, bees do not know what it feels like to be alerted to the location of food. Clearly this is difficult to determine, which might undermine the comparison altogether. But perhaps this is a useful way to determine sentience in animals.

    ReplyDelete
    Replies
    1. Hmm this is interesting. I guess Searle’s CRA could apply to a dog or robot in the room. I think it doesn’t matter who or what is manipulating the symbols, but as long as they are able to produce the correct output (i.e. Chinese), the question still stands of whether the machine can cognize/think.

      Delete
  26. This reading clarified a key concept for me that was touched on in class. Prof Harnad explained how one way to find out how something works is to try to model something that works just like it. Zenon's attempt to explain cognition through computation is, I believe, an attempt to understand cognition by creating a model. I found the metaphor between mind/body and software/hardware very compelling, although I see why Harnad does not believe this is a solution to the mind-body problem; this essentially returns us to the problem of the homunculus. Still, I believe that mental states as computational states is a better comparison than mental states as a little man in one's head.

    ReplyDelete
  27. This comment has been removed by the author.

    ReplyDelete
  28. This comment has been removed by the author.

    ReplyDelete
  29. "The only way to do this, in my view, is if cognitive science hunkers down and sets its
    mind and methods on scaling up to the Turing Test, for all of our behavioral capacities."

    The section this quote is taken from makes me question a particular cognitive process (designated as being cognitive for now): that of creativity. Having carried out a study on creativity for a research course last year, I had the opportunity to see how individuals presented with just a square containing a semicircle, a dot, a curved line, and a dotted line were able to create a gamut of drawings. Symbol grounding seems relevant here - the images the participants were presented with could evoke associations with very few or a plethora of other images that could inspire extensions of the incomplete drawing before them. Admittedly, symbols in the context of the paper at hand refer to arrangements of letters or numerals that have come to take on meaning, but I wonder if a kindred approach can be applied to random shapes on paper. To look at an incomplete drawing on the spot and continue the drawing requires conjuring associations derived from past experiences and perceptions. On the one hand, this sort of thing is done computationally with machine learning; bots fed images of any object can parse the images and begin ‘identifying’ smaller fragments, shapes, and colors with an array of more cohesive images. In a bit of a tangent, I’m getting at what Professor Harnad prodded at in his 1990 paper: can a robot that is dealing with grounded symbols “autonomously establish semiotic networks”? I do not have the answer for this question yet, but the images generated a few years back from the “dreams” of Google’s AI (http://www.popsci.com/these-are-what-google-artificial-intelligences-dreams-look) are worthy of exploring in this regard.

    ReplyDelete
  30. Cohabitation: Computation at 70, Cognition at 20

    RE: Dynamical vs Computational Systems
    Using the mental rotation example, you describe how computational systems operate discretely while dynamical systems are continuous in nature. However, you never provide an explicit definition of what a dynamical system is, or of the other ways in which one might differ from a computational system. To what extent might the brain, being at least partially dynamical, be considered deterministic?

    RE: Scaling Up the Turing Test
    How might a robot ground its symbolic capacities in sensorimotor capacities? Does a percept become symbolic the moment it is transduced into a physical impression (such as a shadow on the retina or a vibration in the eardrum)? Or is the percept itself the physical impression, and therefore, insofar as the chair I am seeing *is* the shadow of the chair on my retina, it is grounded in a way that a neural representation of the shadow of the chair on my retina is not?

    ReplyDelete
    Replies
    1. The relevant difference between dynamical and computational is not discreteness but the fact that dynamical systems are just following the laws of physics, whereas computational systems are also executing an algorithm: manipulating symbols according to formal rules.

      The Turing Test is just about the easy problem: doing, not feeling. So we can't speak of "percepts," only of inputs, outputs, and whatever processing goes on in between.
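
      For concreteness, here is a minimal Python sketch of the contrast just described (the rotating point and the toy rewrite rule are purely illustrative stand-ins, not anything from the paper). A dynamical system's state just evolves under a physical law; a computational system applies a formal rule to arbitrary symbols. Of course, running the "dynamical" case on a computer is itself a simulation, i.e. computation; the code only puts the two descriptions side by side.

          import math

          # Dynamical (analog): a point rotating continuously under a physical law
          # (constant angular velocity). Its state just evolves; no symbols, no rules.
          def rotate_point(x, y, omega, dt):
              theta = omega * dt
              return (x * math.cos(theta) - y * math.sin(theta),
                      x * math.sin(theta) + y * math.cos(theta))

          # Computational (symbolic): manipulating arbitrarily shaped symbols
          # according to a formal rule. Toy rewrite rule: every "A" becomes "AB".
          def rewrite(s):
              return s.replace("A", "AB")

          print(rotate_point(1.0, 0.0, math.pi / 2, 1.0))  # approximately (0.0, 1.0)
          print(rewrite("AXA"))                            # "ABXAB"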

      Delete
    2. Are you sure that it is possible to solve the easy problem without solving the hard problem along with it? Might "hard" consciousness be a necessary prerequisite for, or consequence of, the ability to do all of the "easy" behaviors that people do?

      Delete
  31. This article also explains, almost linearly, how and why we went from armchair introspection to behaviourism to computationalism, each time because a piece was missing. It therefore made me want to categorize and list clearly what in our brain's functioning could or could not be explained by computationalism, and then extrapolate as to what might be explained by a later, more advanced computationalism, and what would require a new theory added to the previous ones to explain a new chunk of the mind. So for now I have: creativity and imagination; fact checking (as in checking the name of the 3rd grade teacher after having had a mental image of her, rather than only going by the address of her name to retrieve the information); imperfect memory (remembering something we had previously forgotten; if the info were stored in a computer, that couldn't happen, right?); and consciousness. Some of these feel like they could still be modelled with more advanced views of computation, new ways of manipulating symbols. Which do you think couldn't?
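
    To make the retrieval distinction above concrete, here is a minimal Python sketch (the names, records, and the two retrieval styles are my own illustration, not a claim about how memory actually works): looking a fact up by its "address" (a key) versus recovering it from a partial description, a crude stand-in for calling up her image first and reading the name off it.

        # Hypothetical records, invented for illustration.
        by_name = {"Ms. Rivera": {"grade": "3rd", "hair": "red"}}

        # 1. Address-style lookup: go straight to the entry via its key (her name).
        print(by_name["Ms. Rivera"])

        # 2. Content-addressable retrieval: recover the name from partial features.
        records = [("Ms. Rivera", {"grade": "3rd", "hair": "red"}),
                   ("Mr. Chen",   {"grade": "5th", "hair": "black"})]

        def recall(cue):
            """Return the first stored name whose features match every item in the cue."""
            for name, features in records:
                if all(features.get(k) == v for k, v in cue.items()):
                    return name
            return None  # a "forgotten" name: no stored record matches the cue

        print(recall({"grade": "3rd", "hair": "red"}))  # "Ms. Rivera"

    Neither style is claimed to be what the brain does; the sketch only makes explicit the difference between retrieving by address and retrieving via an image or description.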

    ReplyDelete
  32. On The Language of Thought
    What this paper has shown is that the computational theory of the mind is slowly becoming the new behaviourism, in that computation cannot be the only factor in determining what goes on in the mind, much like the stimulus/reward idea. It appears not to capture the entirety of cognition, and to be simply another top-down method of attempting to figure out what is going on in the mind. What is the next step for cognitive science? Is it really viable to keep looking at Turing Tests? At what point does imitation become cognition?
    I was thinking about the idea of I-language, the ‘language’ that babies must have in their heads before they’ve acquired a language. Babies cannot pass the Turing Test, so does that mean they’re not conscious? It has been shown that they do not simply imitate their parents when learning a language, meaning that they must be able to process stimuli and compute them while simultaneously being able to relay something back to a parent or caretaker. Is there more to this than the problem of symbol grounding?

    ReplyDelete
    Replies
    1. It is illuminating to think about the Turing Tests as setting the limit of our discoveries. Like you mentioned, at what point does imitation become cognition? But even if it got as close as it possibly could, it might never be the same. Borrowing from limits in mathematics: a sequence can get arbitrarily close to its limit without ever actually reaching it.

      Delete
    2. Although I think your point is completely valid, I personally think it is possible for imitation to eventually become cognition one day. Every day we seem to see computers get more advanced and more human-like, and I don't see this progress stopping any time soon. For example, we are seeing the emergence of robots that are able to take care of humans; I think they could eventually be programmed to have a sense of cognition similar to ours.

      Delete
  33. “Behaviorists had rightly pointed out that sitting in an armchair and reflecting on how our mind works will not yield an explanation of how it works”
    I agree that we cannot use simple reflection to understand cognition. However, the very basis of the hard problem is qualia, the feeling of it, and feeling is accessible only through reflection. So what is the use of trying to answer the hard problem?
    Let’s say we make an AI. It is able to pass the T3 level of the Turing Test. We build this AI to have the same structure as us, down to the synaptic level. How can we know whether it feels? When you consider that everyone has their own experience of feeling, how can you compare them or say that it has been modelled correctly?
    Is it even valid to consider the hard problem?
    Couldn’t it just be the case that the feelings we get are a by-product of a narrative illusion?

    ReplyDelete
    Replies
    1. I think this is a pretty interesting point. There is really no way for us to express or compare the feelings of things at all. I must assume that others also experience feelings, though I don't think they necessarily need to be the same. There's the classic colour argument: the red I see may not be experienced the same way that you experience red. It's interesting to consider whether, if a T3 robot were constructed, it would have an experience of red at all. I think it's possible it would, but that it would not resemble what I experience.

      I think the hard problem is still valid to study despite our not being able to determine whether a T3 robot feels or is merely mimicking feeling. Perhaps this is another example of where cognitive science has little to say about the hard problem at all, and the Turing Test would only be able to address the easy problem.

      I think it would be useful to kid-sib down your last point. What do you mean by narrative illusion? Also, if feelings are a by-product, how are they produced, and why? Would the T3 have a narrative illusion, and would there be any way to prove that it does or doesn't?

      Delete

  34. I think symbolic thought is (a) fundamental to human experience, (b) not measurable, and (c) therefore not explainable.

    In the name of science, I understand the limitations of subjectivity (it yields unreliable data) from the standpoint of empiricism, and why it is seen as unable to explain cognitive function. But how do we make sense of denying introspection as a reliable method of investigation if it is central to the experience of the very object of study (the cognizer)? It seems an endless search if scientists deny the intuitive experience of the very phenomenon they seek to understand.

    I think Searle’s Chinese Room argument addresses the nature of this problem. From my understanding, it suggests that a computer cannot attach meaning to the symbols it manipulates (as shown by the man's ability to communicate in Chinese without understanding the language), making it an insufficient model of cognitive function.

    I think that in abandoning (rightly or not) a dualistic (mind/body) theory of cognition, we are not to assume that a monist theory is then necessarily attainable. Searle affirms that it has not yet been possible. I would say that, by the nature of the question, it cannot be.

    ReplyDelete
    Replies
    1. Krista, if you're right that we can't explain thinking, then that's bad news for cognitive science! But why can't we explain thinking? And what is thinking (cognition)? (You seem to be using "symbolic" the way it is used in anthropology and poetry, but here we are talking about what it means in computation: arbitrary shapes that are manipulated according to rules called algorithms.)

      Introspection is not being denied. What is being denied is that introspection reveals either how and why we can do what we can do (the easy problem) or how and why it feels like something to think and do (the hard problem).

      We will discuss Searle's paper, but first you have to read it!

      "Dualism" is the idea that there are two kinds of "things" in the universe: (1) things and what things do and (2) feeling. Cognitive science is not dualistic. It is a branch of biology according to which there are living organisms that can do certain things. Cognitive science tries to figure out how and why they can do what they can do. The ability to do what we can do (think, learn, speak) is a biological trait that cognitive science is trying to reverse engineer. We have another biological trait: we feel, rather than just do. Cogsci is trying to explain that too, but that's much harder.

      Computation is symbol manipulation. "Computationalism" is the thesis (of Pylyshyn and many others) that the mechanism that explains what people can do is computation: Cognition is computation. That's the reading you're commenting on right now.

      Searle tries to show that's wrong.

      Now it's time to read Searle (rather than what has been written about Searle). And then read what's wrong and right about what Searle says.

      Delete
  35. A very similar point can be made about Zenon’s celebrated paper with Jerry Fodor, which pointed out that neural nets were […] subcognitive if they could be “trained” into becoming a symbol system (which then goes on to do the real work of cognition)

    This sentence makes me wonder about Searle and the Systems Reply. Say we were to believe that Searle does not understand anything by himself, but that it is the system as a whole that “understands”: would that mean that Searle himself would also be considered subcognitive?

    ReplyDelete
  36. In his paper Harnad reviews the different ways psychology has tried to explain cognition. He points to both introspection and behaviourism as lacking. More specifically, he references the work of Searle and others and argues that cognition cannot be only computation (as was suggested by Turing and others). Cognition also involves sensorimotor and visuospatial information and real-world meaning. Our minds don’t simply manipulate symbols according to a set of syntactic rules (e.g., 2 + 2 = 4). The “meaning” those symbols represent is relevant. For example, in order to understand what is meant by “cat on a mat” we must know what those words mean in real-world terms and be able to identify and categorize those objects. This “dynamical” aspect of cognition cannot be explained by computationalism. He proposes that we focus on a functional explanation of cognition, and that both mental-imagery explanations (which are homuncular) and simulations (which depend on computationalism) are inadequate explanations of dynamical functions. He concludes by returning to the symbol grounding problem, that is, how a symbol system is connected to real-world meanings. He posits that explaining how we do this, beyond the “easy explanations” of “association”, is important to understanding cognition.

    My first question is about the idea of the software being “implementation independent”. My understanding is that this assumption is rooted in the theory (computationalism) that computation is a possible and sufficient explanatory mechanism of cognition. If cognition is more than computation (as Harnad discusses in reference to Searle’s work), does this assumption still hold? And if not, does that not mean we must view the “hardware” as being inherently relevant to understanding cognition? Or, to put it more simply: in order to understand how we do what we do, we must understand how the brain does what it does.

    Also, is it possible that computers could simulate (i.e., approximate) more than just the computational aspects of cognition? Could machines make use of sensorimotor/visuospatial information, thus overcoming the limitations of computationalism? This does not seem too far-fetched considering current technology, including facial-recognition software. Such a system would conceivably incorporate both computation and dynamical processing. Is this what is meant by a “functional model of cognition”? And is that how we should envision the Turing Test, i.e., passing the TT would mean creating a machine that is able to do everything we can do (both dynamical and computational functions)?

    Like others have mentioned, I still feel uncomfortable equating a computer with a human being without solving the problem of sentience/“feeling”. Even if a computer were able to function like a human, this does not mean it feels like a human. I know this leads into the philosophical dilemma of “other minds” but, personally, I believe it’s clear that our subjective interpretation and feelings affect our interactions with the world in so many unknown ways and, in my opinion, this is what it means to “be human”. Emotions and feelings also undoubtedly influence human behaviour and how we cognize about the world around us. Maybe it is impossible to explain “feeling” in any empirical manner; however, without doing so, I feel that our ability to fully understand human cognition and behaviour will always be limited.

    ReplyDelete
  37. I never thought of how we know the things that we know. It’s true that introspection can only tell us what we already explicitly know. But how are we to model the mind if we don’t even know how it fully works? Certain actions just happen automatically due to the way our body works (e.g. breathing, blinking, etc.), but that doesn’t explain how our minds work and what our consciousness is a product of. How are we to know how our mind retrieves memories? In a computer system, you could program it to store information in a bank and then search for it, but in humans, is there a physical bank of memories located in our brain? Is there any way we can actually know how our mind works to find memories?

    ReplyDelete
    Replies
    1. Hey Anthea, I think that might be possible through reverse engineering. When we build something that can do the things we can do, we will have answers to some of these questions. However, I’m curious as to how much that would be the case. I believe that our physical and chemical structures are important in explaining how we do the things we do, like storing and retrieving memories. So when we build a robot that passes the Turing Test without our structure, will it really have the capacity to explain how we do things?

      Delete
  38. “Hebb’s point was about question-begging: behaviourism was begging the question of “how?” How do we have the behavioural capacity that we have? What makes us able to do what we do? The answer to this question has to be cognitive; it has to look into the black box and explain how it works. But not necessarily in the physiological sense. Skinner was right about that. Only in the functional, cause/effect sense.”

    If cognitive science is about looking into the black box and explaining how it works, but without relying on the physiological sense, does that not mean that we are already bound to a computationalist framework at the outset? Cognitive science, being a science of how rather than of what (the physical substrate), must therefore rest on the assumption that cognition is hardware-independent. Since computation is one of the few (relatively) well-defined phenomena that possess this necessary feature of hardware independence, I think this assumption is a strong indicator of why computationalism is such a compelling model of cognition. It would be very difficult to imagine a form of cognition that is hardware-independent but not computation.

    ReplyDelete
  39. Searle's very simple point is that he could do this all without understanding a single word of Chinese. And since Searle himself is the entire computational system, there is no place else the understanding could be. So it's not there.

    I think this eloquently summarizes the power of Searle’s argument. My struggle with how I’ve read about and interpreted Turing so far is the idea that “machines can do, thus can think,” but Searle establishes the difference between doing something and actually understanding or being cognizant of it.

    ReplyDelete