Saturday 2 January 2016

(1a. Comment Overflow) (50+)

(1b. Comment Overflow) (50+)

27 comments:

  1. "The only way to do this, in my view, is if cognitive science hunkers down and sets its mind and methods on scaling up to the Turing Test, for all of our behavioral capacities. Not just the email version of the TT, based on computation alone, which has been shown to be insufficient by Searle, but the full robotic version of the TT, in which the symbolic capacities are grounded in sensorimotor capacities and the robot itself (Pylyshyn 1987) can mediate the connection, directly and autonomously, between its internal symbols and the external things its symbols are interpretable as being about, without the need for mediation by the minds of external interpreters."

    My question is: could this only be possible if the robot were acting out of fear of death?

    I ask because it seems as if "all of our behavioral capacities" are rooted in our effort to self-preserve.

  2. Symbol grounding problem: where do the symbols get their meaning?

    I was thinking about this problem, and it got me to think about the over-used fun fact about Inuktitut: that the vocabulary has something like 16 different words for snow (12? 13? I was recently told that whatever the popular number is, it is actually inaccurate, but the example worked for my thought process...)

    So, I have met and I know Dominique, but as soon as the professor proposed that she could be a fully functional, operating-only-on-computation AI, I didn’t really have much trouble imagining a world where that was possible.

    And then, when I thought back to the language speaker who has 16 different names for some object A where I have only 1 name, I realized that I immediately assume, without hesitation, that this speaker is absolutely identical to me in terms of their ability to ‘do’ cognition, even though the way they attach meaning to the objects in the world is very different from the way I would do it.

    This is kind of perplexing. There’s a theory in linguistics (Sapir-Whorf hypothesis or Linguistic relativity - I’m not even going to begin to pretend that I 100% understand it) concerning how the interaction between the language and the world shapes the way a speaker sees/interprets and interacts with the world around them. So how can I know that this other language speaker actually cognizes the same way I do if they are using a different system of symbol manipulation? How can I be sure that their cognition is the same as my cognition? We both perform some daily computations here and there, but how can I know that they are also “doing cognition” as I assume I am? At what point do I make the leap from a computation machine to a cognizing one, and when/why do I assume others have, too?

    Replies
    1. "concerning how the interaction between the language and the world shapes the way a speaker sees/interprets and interacts with the world around them. So how can I know that this other language speaker actually cognizes the same way I do if they are using a different system of symbol manipulation?"

      I find this aspect interesting as well, and it has led me to think about whether speakers of languages that have 10+ words for the same thing cognize differently from those who speak languages that have several meanings for one word. For example, in Q'anjob'al, a Mayan language, the word 'ixim' (corn) is used for anything that is corn-based, such as corn, tamales, etc. And in Russian, 'arm' and 'hand' are the same word (рука), and 'please' and 'you're welcome' are the same word (пожалуйста). Do speakers of these languages cognize somewhat differently because they seem to be more context- and syntax-dependent, rather than relying only on the semantic meaning of a set of symbols?

  3. I am confused about what Searle believed. Did he believe that passing the Turing Test meant that one was able to cognize? When Searle (hypothetically) conducted the TT in Chinese, he did so by memorizing all the symbol-manipulation rules, and his point was that he "could do this all without understanding a single word of Chinese." Does he then agree that memorization is a form of cognition? I would think that by memorizing all the rules of symbol manipulation in Chinese, he would have at least some understanding of what the words meant (even if he did not understand them in great detail).

    Replies
    1. To Searle, in order to have cognition, you have to have some understanding of the language. I think the rules he received were simple enough to let some argue that he had no understanding of Chinese at all. For example, a rule could be "if you see … on the paper, reply with …" (where the "…" would be in Chinese). But in this case, others could also argue that he does have an understanding of Chinese, since he knows exactly what to respond with in Chinese when faced with certain combinations of Chinese characters, allowing him to communicate with Chinese speakers.

    2. Hi Brittany!
      I think Searle's position is that computation does not necessarily equal cognition. He's saying that just because a computer can behave intelligently, we can't equate this behaviour with "thought" or with the program/machine having a mind of its own.
      From what I understood, he's using the example of conducting the TT in Chinese to show that trying to understand cognition by using computational systems can also lead us to homuncularity. You could imagine that you have a homunculus in your head, receiving a bunch of inputs in Chinese and creating outputs by matching the symbols to ones that the homunculus sees on a rule sheet. The rule sheet would say something along the lines of "if you see X, reply Y" (as Chien said). Technically, this homunculus would be able to pass the TT by following a set of rules in order to manipulate a set of Chinese symbols. To an outsider (or to the person on the other side of the TT) the output would be completely comprehensible, but Searle's point is that the homunculus has absolutely no comprehension or understanding of what it's doing; it's simply manipulating symbols.
      Hope that made sense/that I got it right!

  4. A system that is able to respond to an email does not necessarily understand the content and implied meaning of the words or emotions. This is also what Searle’s Chinese Room argument shows. As such, computation is not enough for cognition; what we also need is the dynamic process underlying our interpretations of the world. But is the human brain itself able to perform conscious behaviors without the support of its body? Undoubtedly the physical appearance of a brain does not look like a living thing that is able to think. Thus, maybe both its functions and the intelligent performance of an artificial system can be explained as epiphenomena. And how could people explain the intelligence of animals? For example, if a machine is able to replicate the sensorimotor and computational functions of an animal but not those of a human, does it show AI or not?

    Replies
    1. For your last question, I would say it does show AI. If a machine is able to replicate the sensorimotor and computational functions of an animal, then it could be a successful reverse-engineering of animals' cognition, and even of (say, part of) humans’ cognition.
      There are some AIs (e.g., AlphaGo) that are good at playing Go or chess. They are able to replicate the computational function of humans in playing Go or chess (and do it even better). However, they are not good at doing other things that humans can do. So if they are a successful reverse-engineering of how we think when playing Go and chess, then they are AIs that show part of our cognition.
      Those animal functions that the machine you mentioned can replicate could be functions that humans also have; in that case, I guess it is a type of AI.

    2. So after all the discussions of Turing’s tests, we know that there is a hierarchy that goes from Toy through T2 (email), T3 (robot), and T4 (indistinguishability of total internal microfunction) to T5 (total indistinguishability in every empirically discernible respect). As such, it would be hard to identify the correct Turing level for an animal, because of both the other-minds problem and a lack of grounded reference. Would animals’ intelligence probably be put at the Toy level?

  5. This comment has been removed by the author.

    Replies
    1. Sorry, there seems to be no way to edit after noticing a typo:

      Could you argue that the Chinese box is in fact thinking?
      Could you say that Searle sitting in the box is more representative of a neuron (or a component) than the entire thing?

      It seems that a single neuron does not feel, and we remain conscious (sometimes without even noticing) if some of them are taken out. However, the brain as a collection of neurons and transmitters and fluids and gut microbes DOES feel. Perhaps thousands of Searles and thousands of Chinese boxes all connected could form an awareness that they wouldn't have otherwise? If they developed the ability to learn (which computers can do), then given a large amount of time, is it possible?

      This suggests to me that Searle has not ruled out computation = cognition, just problematized the account.

      Would we have to recognize that collective as conscious, even though it is formed through computational interactions?

    2. I agree that Searle has not ruled out computation = cognition, but has just problematized the account; I even think that the Chinese box thought experiment challenges human sentience more than computer sentience. I argue that even if there is an X in a Chinese box that does not understand Chinese, this does not mean that X isn’t thinking or does not have a mind. Searle demonstrated that computation of Z does not necessarily mean that X understands Z in the intended way, but this does not show that X doesn’t understand or think at all. From the perspective that we understand and interact with the world through the model that we make of it, this same logic of the Chinese room can be applied to humans. Chinese could be the world as it is, and the "rule books" that indicate how we interact with the external world are our DNA and neurobiological makeup. Although we may not understand what we are interpreting, and could be completely misinterpreting what we are doing/seeing, we still believe that we are thinking and have a mind. For example, let’s say you took a drug F, and F warped your reality. Your friend Joe sees you walking down the street and has an entire conversation with you. Little does Joe know, you think you are flying through outer space and having a little chat with a universe world leader. Joe just thinks it’s a Tuesday afternoon. In this case, your rule book (your neurochemicals) is interpreting the world in a completely different way, and you do not understand what you are interacting with (in the way everyone else does), yet you are still thinking and still have a mind. We have no way of knowing that AI, computers, and programs don’t experience the world in a similar way.

  6. RE: Explaining how we learn vocabulary and rules of categorization: "The stimuli need to be processed; the invariant features of the kind must be somehow extracted from the irrelevant variation, and they must be learned, so that future stimuli originating from things of the same kind can be recognized and identified as such, and not confused with stimuli originating from things of a different kind."

    I have three things to say about this:
    1) From my understanding, when children are learning categories (syntactic, object features, etc.), they first develop rules (which might be incorrect) based on their experience (e.g., all round things are balls). This rule is only modified internally if feedback is given (say, by correction from an adult telling them the moon is not a ball). Could we integrate this idea into machine learning? (See the toy sketch after this list.)

    2) Does it matter how we learn these categories and rules? If a computer were theoretically programmed to extract the invariant features from the irrelevant ones, it would not necessarily have the same understanding of the world as us, but its behaviour/output would mirror that of humans. Skinner would be satisfied with this.

    3) I would argue that not all humans have the same categories and rules for partitioning and understanding the world. Are there certain categories or rules that do apply to all humans? Where is the overlap in our "human cognition" and our collective understanding of the world across cultures? When we try to model "human cognition", who is the "human" we are modeling?
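    Here is a minimal sketch of the feedback-driven rule learning described in point 1. It is my own toy illustration (in Python, with made-up feature names), not a model from the readings: the learner starts with an over-general rule ("anything round is a ball") and narrows it only when one of its predictions is corrected.

      # Hypothetical toy learner: corrective feedback narrows an over-general category rule.
      class FeedbackLearner:
          def __init__(self, required):
              self.required = set(required)   # features a member must have
              self.forbidden = set()          # features a member must not have

          def predict(self, features):
              features = set(features)
              return self.required <= features and not (self.forbidden & features)

          def correct(self, features, is_member):
              """Feedback from the 'adult': only a false positive changes the rule."""
              features = set(features)
              if self.predict(features) and not is_member:
                  # Forbid one feature the counterexample has but the rule did not require.
                  self.forbidden.add(next(iter(features - self.required)))

      ball = FeedbackLearner(required={"round"})
      moon = {"round", "in the sky"}
      print(ball.predict(moon))            # True: the over-general rule calls the moon a ball
      ball.correct(moon, is_member=False)  # the adult says the moon is not a ball
      print(ball.predict(moon))            # False: the rule has been narrowed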

  7. This passage made me think back to the paper’s earlier discussion of “cognitive blindspots”: those inner cognitive processes (such as creating a mental image of your 3rd grade teacher) that we carry out all the time, every day, but that leave us saying “I don’t know” when someone asks how we did it. If we were able to create this “full robotic version of the TT” with the same behavioral capacities as humans, would it make sense to program the robot with cognitive blindspots? Would this robot’s ability to mediate the connection between its internal symbols and the things those symbols stand for include self-knowledge?

  8. Hebb argues that behaviourism fails to consider the processes that underlie our capacity for demonstrating behaviours regulated by learning:
    "Behaviorism was begging the question of “how?” How do we have the behavioral capacity that we have? What makes us able to do what we can do? The answer to this question has to be cognitive; it has to look into the black box and explain how it works."

    The ideas used here to object to the view of behaviorism reminded me of the definition of “procedure” in the context of computation: "A procedure (aka an algorithm, program, routine, or subroutine) is a specific method for determining an output value from a set of input values. The difference between procedures and functions is that functions only specify what their outputs are, whereas a procedure specifies HOW to compute them. "


    I think what is missing from the behaviorist view is the “procedure” that leads from environmental cues as inputs, through the manipulation of representations (in the form of previously learned memories or mental imagery), to the manifestation of behaviors as outputs. My impression from this association is that the computational view, although insufficient for explaining cognition entirely, might have explanations for some of the "blind spots" that behaviorism ignored.
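    Here is a small sketch of that function/procedure distinction (my own example in Python, not from the reading): the two procedures below compute the same function (the same "what": n maps to n factorial), but they specify different ways of computing it (different "how"s).

      # Two different procedures for one and the same function.
      def factorial_iterative(n):
          result = 1
          for k in range(2, n + 1):   # multiply up from 2 to n
              result *= k
          return result

      def factorial_recursive(n):
          return 1 if n <= 1 else n * factorial_recursive(n - 1)

      # Identical input/output behaviour (the function), different procedures (the "how"):
      assert all(factorial_iterative(n) == factorial_recursive(n) for n in range(10))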

  9. If cognitive science can’t be done by introspection because we are cognitively blind, and we must therefore come up with explicit procedures to explain how we are able to do what we can do, what does that mean for studies that rely on subjects rating their own performance or subjectively describing their feelings and experiences? Do a large sample size and statistical significance allow us to rely on evidence based on people’s introspective responses, or does this signify our inability to ever truly overcome the hard problem of psychology (that is, describing feeling without subjective, anosognosic evidence)?

    Replies
    1. Hi Julia!
      I definitely had a similar thought process when doing the reading. What I understood was that there's a difference between using introspection to identify/describe our thoughts and feelings and using introspection to identify where those thoughts and feelings come from. According to the reading, I think you can definitely still use introspection as a tool to say that you feel sad, but it won't help us understand how/where the feeling of sadness is generated in the mind. Does that make sense?

  10. Regarding the definition of computation as “rule-based symbol manipulation”, how does this apply to the computational theory of mind? What are the fundamental “symbols” that must somehow be embodied by the neurons? If they are, for example, action potentials, isn’t their “meaning” simply the fact that they are physical processes which travel through various areas of the brain and body and may, at some point, cause a particular behavior?

  11. Regarding Searle’s Chinese Room argument, it seems he argues that the fundamental difference between a machine passing a Turing test and a sentient being is the concept of “understanding”. Given the complexity of the world, we have yet to devise a machine that can make decisions about what seem to us like relatively simple tasks. We have a unique ability to accumulate a vast amount of knowledge and integrate it into some sort of “understanding”. Douglas Hofstadter makes a very interesting analogy in his book Gödel, Escher, Bach, likening the concept of understanding to a tree. The above-ground trunk represents the real-world thought processes that depend on the invisible roots. These roots symbolize the complex unconscious processes of the mind that we are unaware of. Machines are able to chop off the top of the tree and rely solely on the roots to process information. However, there seems to be no way to chop the trunk off real-world thinking and rely only on the roots.

  12. Regarding 1a: What is a Turing Machine?
    I am a bit confused about whether numbers like pi and e, which lack a pattern in their decimal representation, are computable numbers or not. Since these numbers have infinitely many decimal digits without a pattern, the Turing Machine would have to write out the decimals digit by digit, so it seems that any function taking pi as one of its inputs would be a never-halting task. In that case, do we still say it is computable? It would take forever for the answer to be written out.

    The reading says an input x with an infinite decimal representation can be represented in the form of a program, such that the Turing machine can calculate x digit by digit. “However, Turing was able to prove that not every real number is computable.” I am curious about this part. If we can represent those infinitely long decimals by a program, what real numbers remain non-computable?
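    Here is a sketch (my own example in Python, using Machin's formula, not from the reading) of why pi still counts as computable despite its patternless digits: computability only requires that, for any n, some program outputs the first n digits and then halts; no program ever has to write them all out at once. (And since there are only countably many programs but uncountably many reals, most real numbers have no such program, which is one way to see why non-computable reals must exist.)

      # Print the first n decimal digits of pi and halt, using only integer arithmetic
      # and Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239).
      def arctan_inv(x, scale):
          """Floor of arctan(1/x) * scale, summing the Taylor series with integers."""
          power = scale // x                 # scale / x**(2k+1), starting at k = 0
          total = power
          k = 1
          while power:
              power //= x * x
              term = power // (2 * k + 1)
              total += -term if k % 2 else term   # alternating signs of the series
              k += 1
          return total

      def pi_digits(n):
          """Return the first n decimal digits of pi as a string; halts for every n."""
          scale = 10 ** (n + 10)             # ten guard digits absorb truncation error
          pi_scaled = 16 * arctan_inv(5, scale) - 4 * arctan_inv(239, scale)
          return str(pi_scaled)[:n]

      print(pi_digits(30))   # 314159265358979323846264338327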

    ---------------------------------

    (Sorry that I am posting this again because my comment disappeared! I posted this on 11 January, and I remember some friends replied to my question, but I just couldn't find it.)

  13. RE: “Although we’re a long way from being able to fully simulate a brain, computational neuroscience, in which scientists try to understand neural systems as computational processes, is an important and growing area of biological research. By modeling neural systems as computational systems, we can better understand their function.” (p 17, What is Computation).

    This paragraph stood out to me: saying that we are now trying “to understand neural systems as computational processes” through the development of biological research seems curious to me. I understand that perhaps they are talking about using our biological systems as clues to how we can simulate and reverse-engineer, or as they said “fully simulate,” a brain, which in itself was also somewhat ambiguous. By simulating a brain, I wonder whether they meant creating a T4, or having a computational machine that simulates it such that the brain itself is represented in symbols; it seems that the latter would not be any advancement toward figuring out the easy question of cognitive capacity.

    Replies
    1. I think that the author's main point is that the computational approach is useful to examine specific functions of the brain (like auditory perception), but questions relating to T4 are outside the scope of the article. In other words, it is possible to apply the computational approach to some brain functions, but the author does not generalize this claim to all brain functions, and thus not to intelligence (which is what is at stake in the Turing test).
      Moreover, in the same paragraph the author defines computation in terms of behavioural equivalence. Therefore, I believe that to "fully simulate" the brain is just to produce the same behavioural output, regardless of the internal mechanisms of the computer. Since the particularity of a T4 is that it produces the same behavioural output and is internally identical to a human, one could "fully simulate" a brain without having to reach the T4 stage, as long as the behavioural output is the same.

    2. In response to your last couple of sentences:
      When asking whether a T4 is necessary or whether you could just have a computational system that provides the same outputs as a human mind, the question you are really asking is whether we need strong equivalence in order for the results to be of any meaning.

      The Turing Test can be passed with only weak equivalence, which is as much as we can ask for, seeing as conversational interaction is the best we can do to gauge whether any given person around us is in fact human (the problem of other minds). Because of this, I would say that weak equivalence is an informative step toward figuring out the easy problem.
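      Here is a minimal sketch of weak (input/output) equivalence (my own toy example in Python, not from the readings): the two "systems" below have completely different internals, yet a test that only compares their outputs, which is all the Turing Test can do, cannot tell them apart; telling them apart would require looking inside, i.e. strong equivalence.

        # Two systems with different internals but identical input/output behaviour.
        def system_lookup(message):
            # Pure table lookup: no rules, just memorized input/output pairs.
            table = {"hello": "hi there", "how are you?": "fine, thanks"}
            return table.get(message, "sorry, I don't follow")

        def system_rules(message):
            # Rule-based: branches on the form of the input.
            if message == "hello":
                return "hi there"
            if message.endswith("?") and "how are you" in message:
                return "fine, thanks"
            return "sorry, I don't follow"

        # A purely behavioural test passes even though the internals differ.
        for probe in ["hello", "how are you?", "nice weather"]:
            assert system_lookup(probe) == system_rules(probe)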

  14. After reading Pylyshyn's "Computing in Cognitive Science", I am unsure of the difference between Marr's three levels and the three levels in the classical view of computationalism. Marr's three levels are meant to be three levels at which cognition can be studied: computation (function/goal and constraints), algorithm, and implementation. The three levels in the classical view are levels of organization of the mind and of computers, and thus also three levels at which cognition can be studied: the knowledge level, the symbol level, and the physical level. My guess is that the physical level of the classical view integrates Marr's algorithm level, whereas the symbol level is Marr's "computation" level, which describes the input and output of the cognitive process. So in other words, from top to bottom we would have: 1) the knowledge level, defining goals and beliefs and explaining relatively non-specific behavior; 2) the computation/symbol level, defining input, output, and representations for specific cognitive processes; 3) the algorithm/physical level, where the symbols defined in 2 are manipulated to obtain the right output given the right input; 4) the implementation/physical level, where the algorithm is physically implemented (a neural system, a computer, etc.).

  15. Reposting a comment that previously got deleted:

    In class, we discussed how the field of cognitive science entails working backwards from a device already built by Darwinian evolution, and how paradoxically, one way to elucidate the cognitive process in humans is by trying to build something that can simulate it. Although controversial, the brain-computer analogy has become a pervasive notion surrounding cognition. Horswill describes computation as “an idea in flux…”(2); while we are quite familiar with computers as physical devices, it is difficult to formalize a comprehensive definition of computation. The concept of the Turing machine was essential in helping to lay the foundations for modern computing and computational theories of the mind. Two related questions that emerge for me are: (i) how does thinking of the brain solely in computational terms limit progress in the field of cognitive science, and (ii) at the societal level, how can we take advantage of the gap between what humans can do and what computers can do?

  16. How does the computationalist approach to understanding cognition compare with other methods? I imagine that computationalists were more revered in the age of Turing, when the information age was just picking up speed. It seems to me that computationalism has shown us what cognition is not, and that we can at best get a close functional approximation of consciousness with computation. Now that we have established, with the help of Searle, that computation is not cognition, I find other avenues, such as evolutionary psychology, more likely to fill the holes in our understanding of cognition.
