Saturday 2 January 2016

(1b. Comment Overflow) (50+)

(1a. Comment Overflow) (50+)

34 comments:

  1. I too found Horswill’s work insightful and digestible (phew!). Notwithstanding my gratitude for his explanation of the basic aspects needed to participate in the discussion surrounding computation, I’m having trouble with the idea of computational neuroscience. I understand the value of utilizing computational systems to “try to understand neural systems”, with an ultimate goal of simulating the brain (and thus understanding its neuronal functioning/computation). However, Horswill’s financial-engineering example got me thinking about random events and occurrences (e.g. the market fluctuating). Even if computational neuroscience manages to simulate the brain’s neural systems by capturing the essence of their function, it seems it would still fail to simulate the functioning of the brain as a whole, because it cannot simulate the random events that befall individuals, which surely affect the prototypical computation within their brains. Returning to the financial market as an example: even if a computer could simulate its behaviour (because financial analysts had nailed down how the market functions in reality), that simulation would be rigid, tracking how the market usually behaves and not the random/unpredictable changes it experiences. How then can computational neuroscience simulate the brain without the dynamic, nuanced, random events that happen to individuals and contribute to their computation?

    ReplyDelete
    Replies
    1. Hi Jessica!

      I really like your insight, because I had a very similar thought when coming across his example of financial engineering! This may be slightly outside of this course's main focus, but the 2007 financial crisis was due, in part, to the misleading assumption by economists that all market participants in the global economy behave rationally at all times! (Source: http://knowledge.wharton.upenn.edu/article/why-economists-failed-to-predict-the-financial-crisis/)

      I feel that computational neuroscience, although very useful, could be problematic in some situations, because (from what I understand) it starts from the assumption that behaviour is predictable enough to model. As you mention, many have argued that human behaviour does not operate solely on rational calculation, and I wonder whether the shortcuts that we use to make quick decisions (heuristics, etc.), which sometimes lead to mistakes and approximations, could be modelled by a computer.

      Perhaps there exist computational models of the brain that could incorporate irrational human strategies/errors into their procedure in some way: a form of learning algorithm where this dynamic, random aspect is taken into account?

      Delete
    2. Julia, thanks for your response. What you articulated is exactly what I was thinking as well. I feel as though if it is possible to make computational models of normal behaviour, then there must be ways to model irrational behaviour/errors. However, we are not aware of all the heuristics and shortcuts that humans make. Scientists may have identified certain strategies in general, but that does not mean that every person will use that particular shortcut/error in that situation, or in similar situations (e.g. psychology experiments that reveal heuristics in problem solving still have people in their sample who used a different, and unpredictable, heuristic). So I again feel stuck!

      Delete
    3. Hi there,
      A bit late to the conversation, but what you are discussing is very interesting. Regarding the random events that happen to individuals, the solution is very likely, as Julia suggests, to just feed more data into the programs. Random events that could potentially affect our decisions (loss of a relative, accidents, work achievements, etc.) usually follow specific patterns at the scale of a population (that is how insurance companies set their fees, after all; even crowd movements can be accurately modelled now). As for the decisions we take following those events, again there are often well-defined patterns that a program could use. Whether those decisions are "rational" or not is of little interest (in fact, in many cases they are not). The program just needs to know which decisions are most likely to be taken under which conditions; obviously, like any model, there will be approximations, but that is fine, as we are not trying to predict individual outcomes.
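
      To make the insurance point concrete, here is a toy Monte Carlo sketch (my own illustration, not from the reading; the 3% event rate is a made-up parameter). Each individual outcome is random, yet the aggregate is stable enough to model:

        import random

        random.seed(1)
        population, p_event = 100_000, 0.03  # hypothetical yearly event probability

        for year in range(3):
            # each person independently does or doesn't have the event this year
            hits = sum(random.random() < p_event for _ in range(population))
            print(f"year {year}: {hits} events (~{100 * hits / population:.2f}%)")

      No single person's outcome is predictable, but the population rate hovers tightly around 3%: exactly the kind of regularity an insurer (or a brain model fed enough data) can exploit.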

      Delete
  2. In class, we discussed how the field of cognitive science entails working backwards from a device already built by Darwinian evolution, and how paradoxically, one way to elucidate the cognitive process in humans is by trying to build something that can simulate it. Although controversial, the brain-computer analogy has become a pervasive notion surrounding cognition. Horswill describes computation as “an idea in flux…”(2); while we are quite familiar with computers as physical devices, it is difficult to formalize a comprehensive definition of computation. The concept of the Turing machine was essential in helping to lay the foundations for modern computing and computational theories of the mind. Two related questions that emerge for me are: (i) how does thinking of the brain solely in computational terms limit progress in the field of cognitive science, and (ii) at the societal level, how can we take advantage of the gap between what humans can do and what computers can do?

    ReplyDelete
  3. I enjoyed Horswill's reading, and I particularly appreciated that he started from basic concepts. It was really insightful! From the readings, however, I had trouble understanding the difference between representations and levels of abstraction.

    From what I understand, levels of abstraction relate to the model used to describe biological systems and computers, while representations relate to the amount of detail given in a procedure for computation.

    I understood that representations provide different amounts of detail in a procedure. A representation affects which simpler procedures you can take for granted. Taking an arithmetic example, one representation would break down the steps of an addition, while another would command, in one step, that an addition be performed.
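
    Here is a toy sketch of what I mean (my own illustration, not from the reading): the same addition at two levels of detail, one taking "+" for granted, the other spelling out the grade-school digit-and-carry steps.

      # High-level representation: addition is a single primitive step.
      def add_high(x, y):
          return x + y

      # Lower-level representation: the same addition broken into
      # digit-by-digit steps with explicit carries.
      def add_low(x, y):
          xs = [int(d) for d in str(x)][::-1]
          ys = [int(d) for d in str(y)][::-1]
          digits, carry = [], 0
          for i in range(max(len(xs), len(ys))):
              a = xs[i] if i < len(xs) else 0
              b = ys[i] if i < len(ys) else 0
              carry, d = divmod(a + b + carry, 10)
              digits.append(d)
          if carry:
              digits.append(carry)
          return int("".join(str(d) for d in digits[::-1]))

      assert add_high(478, 356) == add_low(478, 356) == 834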

    On the other hand, it seems to me that levels of abstraction correspond to the scale at which a biological system is described. For instance, one level of abstraction may describe the brain at the neural level, while another will describe it at the chemical or behavioral level.

    I am not too sure about these concepts; they are new to me. Have I understood them correctly? Thank you!

    ReplyDelete
  4. Regarding 1a: What is a Turing Machine?

    I am a bit confused about whether numbers like pi and e, which lack any pattern in their decimal representation, are computable numbers or not. Since these numbers have infinitely many decimals without pattern, the Turing Machine would have to write out the decimals digit by digit, so it seems any function taking pi as one of its inputs would be a never-halting task. In that case, do we still say it's computable? It is going to take forever for the answer to be written out.

    The reading says an input x with an infinite decimal representation can be represented in the form of a program such that the Turing machine can calculate x digit by digit. “However, Turing was able to prove that not every real number is computable.” I am curious about this part. If we can represent those infinitely long decimals by programs, which real numbers remain uncomputable?
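
    One thing that helped me see how pi can be "represented as a program": there are known algorithms that emit pi's decimal digits one at a time, each after finitely many steps. This is Gibbons' unbounded spigot, as best I can transcribe it, so treat it as a sketch. Pi counts as computable because any requested digit arrives in finite time, even though the program as a whole never halts:

      def pi_digits():
          # Gibbons' unbounded spigot algorithm: yields 3, 1, 4, 1, 5, 9, ...
          q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
          while True:
              if 4*q + r - t < n*t:
                  yield n                      # the next digit is settled
                  q, r, n = 10*q, 10*(r - n*t), 10*(3*q + r)//t - 10*n
              else:
                  # not enough precision in hand yet: consume another series term
                  q, r, t, k, n, l = (q*k, (2*q + r)*l, t*l, k + 1,
                                      (q*(7*k + 2) + r*l)//(t*l), l + 2)

      digits = pi_digits()
      print([next(digits) for _ in range(10)])  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]

    The uncomputable reals are then the ones for which no such digit-producing program exists at all; since there are only countably many programs but uncountably many reals, most reals are uncomputable, though by the same token none of them can be written down explicitly.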

    ReplyDelete
    Replies
    1. I agree with what you're saying about numbers like pi and e. The reading didn't elaborate at all on the idea of a "program" for inputting "a computable number having an infinite decimal representation." A "program" simply seems like another symbol system to shorten the length of certain numbers (e.g. pi). I imagine that if a number such as pi were input as a program, then during the actual computation of the function the machine would first have to convert pi back into its original form from the program form. Then the function would be computable, since all of its numbers would be in the same symbol system, such as Arabic numerals. This conversion seems impossible, as it would cause the machine to write out the function itself forever, never getting to the actual computing part. Therefore, I'd also argue numbers such as pi are not computable numbers.

      Correct me if I'm completely wrong on this, but perhaps 2/0 is an uncomputable real number. Real numbers include those that "can be expressed as a ratio of integers," and 2 and 0 are both integers, which would make 2/0 a real number. What makes 2/0 uncomputable is that any number divided by zero is undefined and logically impossible.

      Delete
    2. Alison, I have had the same question! One idea as to why an irrational number like pi can be computable might come from the fact that pi has geometric significance and was introduced to simplify the answers to some geometric questions in the first place. Maybe if we follow the steps of those original problems, we will have a set of instructions for a program that would generate the number pi, and therefore make pi computable. However, the idea of such a program running eternally would be problematic. I think we can add another layer to this question:
      Even if we assume that irrational numbers like pi are computable, how can a Turing Machine perform addition on two computable irrational numbers with infinitely many digits in their decimal places? These numbers would need to be presented to the machine as inputs, via a finite set of instructions for a program generating them. But completing the very first step in the process of addition, generating the number, would take forever, so completing the addition would not be possible. Let me clarify with an example: the Turing Machine has to run eternally to generate the decimal sequence of pi. How can it perform the addition pi + pi, which comes as a later step, when the very first step, generating the decimal places of pi itself, never ends?
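
      One way I can imagine this working (my own sketch, using sqrt(2) as a stand-in computable real because its digit truncations are easy to get exactly): to get the first d digits of a sum, you never need "all" the digits of the operands. Finitely many, plus a couple of guard digits to absorb carries, always suffice, so the sum is itself computable digit by digit:

        from math import isqrt  # Python 3.8+

        def sqrt2_scaled(d):
            # floor(sqrt(2) * 10**d): the first d decimals of sqrt(2), exactly
            return isqrt(2 * 10 ** (2 * d))

        def sum_first_digits(d, guard=2):
            # d digits of sqrt(2) + sqrt(2): compute each operand to d + guard
            # digits, add, then drop the guard digits
            a = sqrt2_scaled(d + guard)
            return (a + a) // 10 ** guard

        print(sum_first_digits(20))  # 2.8284271247461900976... scaled by 10**20

      The machine never finishes generating either operand "first"; it interleaves digit generation and addition, producing each digit of the sum after finitely many steps.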

      Chien, I don't think 2/0 is a real number, since one of the conditions for a ratio of integers to define a real number is that the denominator cannot be 0. I think 2/0 can only be expressed as the limit of a function growing to infinity, and the need for the notion of limits here comes precisely from the fact that 2/0 is not a real number.

      Delete
    3. Thank you Chien and Dorsai,

      Now I have an idea. I guess those irrational real numbers like pi, e and the square roots do give a hint at how important computational symbols (e.g. 0 and 1) are in the machines' world.

      Delete
  5. My reflection is primarily based on the Artificial Intelligence reading. The section described two levels of abstraction that "seem to be common to both biological and computational entities":
    1. The knowledge level
    2. The symbol level

    This immediately struck a parallel for me with David Marr's tri-level hypothesis of vision as an information-processing system. In his configuration, there are three levels:
    1. Computational
    2. Algorithmic
    3. Implementational/Physical

    The knowledge level in kid-sib terms seems to be hard facts: things which you can say unequivocally that you do or do not know.

    The symbol level seems to be how you do the things that you do.
    In Marr's terminology, the algorithmic level is equivalent to the symbol level.

    But Marr's levels go further in the sense that they consider the system's motivations for doing a particular thing (at the computational level), something that would be rich in human analysis but probably rather flat with regard to AI. Moreover, Marr discusses the physical level, which would be neurons in the case of vision and hardware-related for an intelligent machine. Thirty years later, his co-researcher Tomaso Poggio added a fourth level above the computational one: learning. He said that "Only then may we be able to build intelligent machines that could learn to see—and think—without the need to be programmed to do it." It is interesting to see how terminology initially designed to explain vision can be so seamlessly transferred to the workings of an AI machine and its potential for mimicking these behaviours.

    A tangential thought I had while reading the first reading about computation was the question of unconscious processes: the fact that there is a plethora of activities which our bodies and minds perform to survive and function, to which we pay no attention. Is this something that distinguishes man from machine? That we passively receive, manipulate, and act upon information from the environment?

    ReplyDelete
  6. Horswill illustrates computation as something based on a predictable model. In other words, computational programs are made with a predictable output in mind. “If it’s predictable, then it should be possible to write a computer program that predicts and simulates it, and indeed there are many computational models of different kinds of neurons” (pg 17). This, in my opinion, is where computation and cognition branch. Yes, there exist many neuronal functions that are predictable. Most of our sensory systems are topographically organized in the brain. Humans have exploited the brain's natural mapping and created things like artificial limbs, which move according to neuronal activity in the brain's motor cortex. As Horswill mentions, we have also created cochlear implants that work by transducing the frequency of a sound wave into a neuronal signal. It seems as though our senses are organized and function in a predictable fashion, and, as highlighted, we can run these sensations in an artificial computational manner.

    However, these computational models mimic brain functions that are sensory in nature, not perceptual. The idea of cognition, or subjective perception, is in my opinion too elusive to capture and compress into a single brain area or neuronal circuit. One can view cognition as something that is not strictly a brain entity, but rather a culmination of both brain and environment. How I perceive is influenced by both neuronal and environmental activity. The environment is NOT predictable, and therefore my thoughts are NOT predictable. We are only PARTIALLY pre-programmed. My thoughts are not frequency-based like my hearing is; my thoughts are random. Thoughts can be influenced or primed, they are fluid in nature, and therefore cannot be condensed into a predictable computational model.
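
    For what it's worth, here is the flavour of the neuron models Horswill alludes to: a minimal leaky integrate-and-fire sketch (a standard textbook model; the parameters are my own toy choices). Under a constant input it fires with clockwork regularity, which is exactly the predictability in question:

      def lif_spike_times(current, steps=200, dt=1.0, tau=20.0,
                          v_rest=0.0, v_thresh=1.0, v_reset=0.0):
          # Integrate the input current with a leak toward rest;
          # emit a spike and reset whenever the threshold is crossed.
          v, spikes = v_rest, []
          for t in range(steps):
              v += dt * (-(v - v_rest) + current) / tau
              if v >= v_thresh:
                  spikes.append(t)
                  v = v_reset
          return spikes

      print(lif_spike_times(current=1.5)[:5])  # evenly spaced, fully predictable

    A noisy, environment-driven input term is precisely where the unpredictability I am describing would have to enter such a model.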

    ReplyDelete
  7. Re: What is a Physical Symbol System?

    According to the reading, "A model of a world is a representation of the specifics of what is true in the world or of the dynamic of the world." If both computers and human minds can use physical symbol systems to model the world, how can we use their respective models to understand how robots and humans are different? There must be some difference in how they give symbols meaning. After all, computers don't feel like we do.

    Additionally, I still do not fully understand the distinction between the knowledge level and the symbol level. An example illustrating the two levels may help to clarify how they are different.
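
    My own attempt at one, in the spirit of the reading's delivery-robot example (the rooms and the code are hypothetical). At the knowledge level we say only what the agent knows and wants: it knows which rooms adjoin which, and it wants a route from the lab to the office. The symbol level is one particular encoding and manipulation of that knowledge, here a Python dictionary and a breadth-first search:

      from collections import deque

      # Symbol-level encoding of the knowledge "room A adjoins room B"
      adjoins = {
          "lab": ["hall"],
          "hall": ["lab", "office", "storage"],
          "office": ["hall"],
          "storage": ["hall"],
      }

      def find_route(start, goal):
          # Symbol-level procedure: manipulate room symbols until the goal appears
          queue, seen = deque([[start]]), {start}
          while queue:
              path = queue.popleft()
              if path[-1] == goal:
                  return path
              for nxt in adjoins[path[-1]]:
                  if nxt not in seen:
                      seen.add(nxt)
                      queue.append(path + [nxt])

      print(find_route("lab", "office"))  # ['lab', 'hall', 'office']

    The same knowledge-level description could be implemented by a completely different symbol level (an adjacency matrix and Dijkstra's algorithm, say) without changing what the agent knows.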

    ReplyDelete
  8. According to the physical symbol system hypothesis, intelligence is shown by being able to interpret physical symbols in the real world. However, I wonder whether psychological factors, such as emotions and cognitions, have physical symbols that can be encoded by intelligent beings. Moreover, it is said that “choosing an appropriate level of abstraction is difficult” and sometimes “you may not know the information needed for a low-level description”. But to what extent is a being considered intelligent? That is, if a robot is able to successfully complete tasks using a high-level description while ignoring the low-level abstraction, is it still considered intelligent? According to the theory of the operation of a Turing machine, a finite number of inputs can actually give rise to an infinite number of outputs. As such, is it possible that not being able to attend to all the low-level abstraction can still lead to intelligent actions?

    ReplyDelete
    Replies
    1. I guess I should mention the equivalence problem here, which also matters. Weak equivalence is solely the same output for the same input, whereas strong equivalence is not only the same output for the same input but also the same algorithm.
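
      A toy illustration of the difference (my own sketch, not from the reading): the two sorting procedures below are weakly equivalent, since they always agree on input/output, but not strongly equivalent, since they reach the answer by different symbol manipulations:

        def sort_by_selection(xs):
            # repeatedly extract the minimum (roughly n**2 comparisons)
            xs, out = list(xs), []
            while xs:
                out.append(xs.pop(xs.index(min(xs))))
            return out

        def sort_by_merging(xs):
            # split in half, sort each half, merge (roughly n log n comparisons)
            if len(xs) <= 1:
                return list(xs)
            left = sort_by_merging(xs[:len(xs) // 2])
            right = sort_by_merging(xs[len(xs) // 2:])
            out = []
            while left and right:
                out.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
            return out + left + right

        data = [3, 1, 4, 1, 5]
        assert sort_by_selection(data) == sort_by_merging(data) == [1, 1, 3, 4, 5]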

      Delete
  9. 1) RE: "We’re used to thinking of the arithmetic we do as being a “mental” thing that takes place “in” our heads.  But in this case, it’s spread out between a person’s head, hands, pencil, and paper."

    I would argue that it is still a mental thing. The hard arithmetic problem is broken down into easier arithmetic problems that are still carried out in our heads. The paper is simply a placeholder; it is not "doing" arithmetic and would never yield an answer on its own. Further, it is in our heads that we interpret the results. So both the arithmetic and its interpretation are "mental" things.

    2) RE: Functional Model of Computation "This is effectively a simple animation procedure.  It makes an X move up and down on the screen, or in this case, the paper.  It never produces an output in the sense of the functional model of computation."

    I would argue that every time the computer draws or erases an X, it is producing a "temporary" output that is then modified by the following instruction. So the computer never halts, yet it is constantly producing outputs. Does this make sense under the functional model of computation?
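
    To make the contrast concrete, here is a sketch of Horswill's X-animation in the imperative style (hypothetical code; the details are mine). Under the functional model this procedure is useless, since it never returns a value; under the imperative model its observable behaviour, the sequence of drawn frames, is the whole point:

      import itertools, time

      def animate_x(height=5):
          # never halts and never returns: behaviour, not output
          path = list(range(height)) + list(range(height - 2, 0, -1))
          for row in itertools.cycle(path):
              frame = "\n".join("X" if r == row else "." for r in range(height))
              print("\033[2J" + frame)  # ANSI clear-screen, then redraw
              time.sleep(0.2)           # each frame is a "temporary output"

      # animate_x()  # uncomment to run forever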

    3) RE: "Behavioral equivalence is absolutely central to the modern notion of computation: if we replace one computational system with another that has the “same” behavior, the computation will still “work.”"

    Say you were asked to compute 10 divided by 3 using two methods: a calculator and long division on paper. Theoretically, both are computational systems, yet they will never yield the exact same answer. The calculator will give you a number with infinitely many decimal places (although you will never see the whole thing), whereas the long division will always have a finite number of decimal places, regardless of how much time you spend on it. If "computation is all about behavioural equivalence", what can we say about these different results?

    In the example above (3), the method of division on paper mirrors the second example (2), where the computational system, i.e. our brain, as explained in (1), will never halt or produce an output under the functional model of computation.

    ReplyDelete
    Replies
    1. 1) We'll be discussing "extended cognition" and "distributed cognition" in week 11. You're right that when I am doing computations on paper, I'm using my head. But we already knew that computations on paper were just computations. The question remains about how my brain knows how to do computations on paper, and also how my brain knows what the computations mean: Is that just computations too? (The symbol grounding problem.)

      2) "Moving" an X "up" or "down" is computation: is that all that's going on in your head? Computation is super-powerful (the Church-Turing Thesis), but can it do anything and everything we can do? (Can it even move!)

      3) There is weak computational equivalence (same output for the same input) and strong computational equivalence (same output for the same input using the same algorithm -- same symbol manipulation rules). But both weak and strong equivalence are computational. Is computationalism correct? Can the brain generate the capacity to do everything we can do using computation alone?

      Delete
  10. From "What is Computation?" :

    “But it’s also been argued that the real universe is effectively a computer, that quantum physics is really about information, and that matter and energy are really just abstractions built from information.”
    I’m a bit unclear about how physical entities can be “abstractions built from information.” One direction I can think of for answering this question is to use the example from the text and compare the physical universe to the virtual environment of a video game, where binary codes underlie the existence of an environment that can be captured by our senses.
    But does the idea of the universe as a computer tie into the concept of a physical symbol system? Does expanding the domain of computation to the entire universe suggest that matter and energy are physical representations that can be manipulated by the universe-as-computer following a set of rule-based instructions in a physical symbol system?

    ReplyDelete
  11. Re: What is a Turing Machine? + What is Computation? + What is a Physical Symbol System?

    I would like to challenge Horswill’s statement that “the difference between what we’ve called functional and imperative models above is largely a matter of what aspects of the procedure’s behaviour are considered relevant to the task at hand,” as well as the definitions of “computation” that he provides near the end of his paper.

    For me, whether or not a definition of computation can be shared across systems of any material rests on a key difference between the functional and imperative models, regarding purpose and output. For the functional model, a computational problem is a "set of possible questions, each of which has a desired outcome.” The functional model is all about the output, whereas the imperative model treats computation as a series of commands that manipulate representations. The procedures of a computer have unambiguous rules and goals: there will always be a defined (and knowable) set of imperatives and goals. For example, a human can always know and predict the goals and outputs of a Turing machine by taking it apart and viewing the machine table of instructions or the code inscribed on the tape. Human computation, on the other hand, is extremely ambiguous in regard to rules and goals; for example, there is evidence that goal-directed behaviours can proceed entirely outside of human consciousness, as seen in the effects of environmental priming. The functional model is then not applicable to human computation, which renders it different from computer computation because of a lack of purposely programmed (and knowable) rules or goal states.

    Near the end of his paper Horswill provides a definition of computation centred on behavioural equivalence, defining computation as “the process of producing some desired behaviour without prejudice as to whether it is implemented through silicon, neurons or clockwork.” I would like to modify this statement to attempt a universal definition of computation, applicable to any material, humans and computers alike: computation is any interaction (manipulation of a representation) the system is involved in that prescribes some response or action. In this view, computation is defined by both the system’s manipulations and its outcomes, but the outcomes do not have to be formally indicated or known. I would argue that the computation has to result in a response of some kind, or else it could just be thought of as random noise in the system. This definition of computation makes sense with the physical symbol system hypothesis, which holds that "an intelligent agent can be seen as manipulating symbols to produce action” and that the system’s actions (like model building) are judged not on "whether they are correct, but by whether they are useful.”

    ReplyDelete
  12. This comment has been removed by the author.

    ReplyDelete
  13. According to the “What is Computation?” reading, passing the classic Turing test appears to define intelligence as a behavioural phenomenon: the mechanism need not recreate the exact procedure of the human brain; what actually matters is behavioural equivalence. Horswill writes that if there is “behavioral equivalence” between two systems, “in some sense they’re interchangeable.” However, according to the class lecture and Searle’s Chinese Room experiment, there are complications with a solely behavioural definition of intelligence, in that it doesn’t imply understanding or true interpretation of one’s surroundings. If intelligence becomes about the ability to do anything humans can do, how will we know when computers have reached true intelligence if we don’t even truly know the fundamental limits of human knowledge?

    ReplyDelete
  14. From “What is Computation?”:
    “By modeling neural systems as computational systems, we can better understand their function”

    Trying to understand how neural systems operate by using computational systems as a model seems reasonable. Computational systems can be more easily manipulated by humans, and they allow for a greater margin of trial and error when testing different theories of cognition.
    Let’s say we are one day successful in programming a system that is behaviorally equivalent to neural systems; we could consider the output of this system “intelligent”. Now, if we were to run this system on a machine, we could consider the machine intelligent. However, how would things work if we took this same system and programmed it into a new machine? Would this machine operate in exactly the same way as the initial one? Would both machines respond in an identical way to the same e-mail?
    What I’m wondering is where individual identity comes from. At the beginning of “What is Computation?”, the author discussed that computation is linked to thought, and that thought is linked to individual identity.
    So, if we were able to create two identical computational systems that we considered able to produce intelligent thought, would there be anything that could differentiate them? In other words, if we focus on this idea of modeling neural systems by using computational systems, how do we program individuality?

    ReplyDelete
    Replies
    1. 0 “Intelligence” is just a synonym of “cognition.”

      1. Two systems are weakly equivalent if they give the same output for the same input. They are strongly equivalent if they give the same output for the same input and they do it the same way.

      2. We are asking now whether it is true that cognition = computation. The question of weak vs strong equivalence makes most sense about computation: Are the two systems executing the same algorithm (strong)? Or are they just giving the same I/O (weak)?

      3. Computation is implementation-independent: The same software (computation, algorithm) can be run on different hardwares. If it’s the same software, they are still strongly equivalent (same I/O + same algorithm).

      4. Hardware equivalence would be stronger than either strong or weak equivalence. And if cognition = computation, it’s not necessary.

      5. We are not talking about duplicating individuals. We are talking about reverse-engineering capacities. Two different individuals can have (roughly) the same capacities. Both pass T2 (or T3), but that doesn’t mean they are the same individual.

      6. Both the verbal Turing Test (T2) and the robotic Turing Test (T3) test only weak equivalence (between the capacities of the T2 (or T3) candidate and the capacities of a person).

      7. The Turing Test is based on total capacity, not little fragments of capacity: There may be many ways to do a tiny bit of what a person can do, but fewer and fewer ways to do more and more of what a person can do.

      8. Weak (I/O) equivalence between a computer and a real brain in some fragment of capacity is something, but not very much.

      Delete
    2. If we had a program simulating the firing of action potentials in a neural network so as to generate the same thought that was generated in the human being whose neural network was being simulated, would this model count as strong equivalence? In this case it's a program that is generating this thought, and the program itself is rule-based, so computation is generating thought, which could have meaning. Would this argument hold as a possibility for the strong Church-Turing thesis?

      Delete
  15. RE: What is a physical symbol system?

    I am confused about the concept of a physical symbol system. If the comment section in 1a wasn't full, I would have replied to Dominique's comment about how a symbol may cease to exist if it's not being thought about, kind of like how a tree may not make a sound when it falls if there are no ears to perceive the sound waves.

    Thoughts and abstractions created within the mind can surely be symbols, which we can mentally manipulate, etc., but they do not demonstrably exist in physical reality, so I don't see how these internal things are "physical". Neurons may be firing, but the thoughts don't physically exist.

    To me, the physical symbol system--if it's including these internal symbols--is just a symbol system, not a physical symbol system. Could somebody help me figure out why they've defined the hypothesis this way?

    ReplyDelete
  16. This comment has been removed by the author.

    ReplyDelete
  17. RE: What is a Turing Machine?

    With regard to Turing machines, the Church-Turing thesis states that if a sentient being is able to follow some method (e.g. sorting numbers), and this task can be completed in a finite amount of time (it halts), then there exists some Turing Machine that can complete the same task as the sentient being. If one could theoretically produce a Universal Turing Machine, it could mimic all of the cognitive processes that a sentient being is capable of following, allowing the machine to act and behave as if it were a living being. However, even though this machine could behave in a manner indistinguishable from a human and possess all of our cognitive abilities, could we say that it has consciousness? I would argue that one could never prove whether the machine was actually sentient. How would you know whether the machine “feels” anything, or is simply acting on programmed responses to certain stimuli (inputs)?
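
    Since "some Turing Machine" can sound abstract, here is a minimal simulator (my own sketch, not from the reading): a handful of transition rules that invert a binary string and then halt, one concrete instance of a finite method realized as a TM.

      def run_tm(rules, tape, state="start", blank="_"):
          # rules: {(state, symbol): (write, move, next_state)}
          # assumes the head never moves left of the first cell
          tape, head = list(tape), 0
          while state != "halt":
              symbol = tape[head] if head < len(tape) else blank
              write, move, state = rules[(state, symbol)]
              if head == len(tape):
                  tape.append(blank)
              tape[head] = write
              head += 1 if move == "R" else -1
          return "".join(tape)

      invert = {
          ("start", "0"): ("1", "R", "start"),
          ("start", "1"): ("0", "R", "start"),
          ("start", "_"): ("_", "R", "halt"),  # hit a blank: stop
      }
      print(run_tm(invert, "10110_"))  # -> 01001_

    A Universal Turing Machine is then just a machine of this kind whose tape holds another machine's rule table as data.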

    ReplyDelete
    Replies
    1. Hi Alex, I think that even if a Universal Turing Machine were produced, it still wouldn't be able to mimic all the cognitive processes a sentient being is capable of, just the computational cognitive processes. The Church-Turing thesis says that anything computable can be computed by a Turing Machine, but, as we've been discussing, it is likely that there is more to cognition than computation. I do agree with you that we would never know whether a machine "feels" anything, but we would be able to tell whether it is indistinguishable from humans in its cognitive capacities, based on its behaviour and ability to interact with the world.

      Delete
  18. As the paper and the in-class discussion said, by finding a causal mechanism that can pass the TT, we would have solved only the easy problem: how and why the brain causes the capacity to do all that we do. And, if I remember correctly, we discussed that despite being far from answering the easy question, this will be the best we will be able to do. Throughout the course we have discussed cognition as a hybrid of computation and sensorimotor experience, and from here the brain causes feelings. We can perhaps learn from behavioural correlates and predictions of feelings, but, as Harnad said in class, we can have an ‘undone action’ but not an ‘unfelt feeling’. So it seems that we can’t solve how and why the brain causes feeling, rendering the hard problem unsolvable...

    ReplyDelete
  19. If I understand Pylyshyn's argument about "cognitive impenetrability" correctly, then what he argues is that 1) cognition is computation and 2) anything that cannot be directly explained by computation is not cognition (although it could be "translated" into symbol manipulations, which itself would be cognition). At first, it enables Pylyshyn to avoid answering questions that he deems irrelevant to cognitive science (questions relating to decision theory, an example he uses in "Computing in Cognitive Science"), but his definition of "cognition" then becomes a tautology: "Cognition is only computation, since anything that is not computation is not cognition but sub-cognition". Moreover, the study of the translation of "sub-cognitive processes" into symbolic manipulations (symbol grounding) was attributed to neither cognitive science nor any other field, a weakness that could not go unnoticed (Harnad, Cohabitation, 2005).

    ReplyDelete
  20. How to solve the symbol grounding problem:
    RE: In order to ground symbol systems, one must have the “symbolic capacities grounded in sensorimotor capacities [where] the robot itself can mediate the connection, directly and autonomously…”

    I found this point to be very interesting, as it asks for the robot to interact with its environment and integrate newly acquired information. This encompasses computation, and then goes further by asking the robot to learn.

    The comment can be broken down into two parts: (1) the means for learning and (2) the transformation from “used” to “user”. The first part concerns the experience the robot acquires through its senses, which acts as a bridge between the machine and its environment. This enables the machine to learn.

    The second part relates to the resulting ability to directly and autonomously mediate that which was sensed. The robot’s capacity to process new information is a necessary component of its learning. This would then mean that the computer has become the little man itself/the user of the virtual architecture that is itself.

    ReplyDelete
  21. "Stop answering the functional questions in terms of their decorative correlates, but explain the functions themselves."

    Is this not subject to a problem similar to Block’s “Reductionist Cruncher”? No matter what question is answered, there can always be another poking at even more “simplistic” questions. Is there ever a point at which no further questions are needed or possible? Although it was said earlier that physiology is not necessarily the way to explain the black box, it seems that the logical progression of these questions will eventually reach the cellular/molecular/atomic levels.

    Furthermore, for each high-level question that is answered (ex. "How did our brains identify whom it was a picture of?"), a dense tree of new questions opens (ex. How did the brain extract the features? How did the brain learn these features in the first place?). Thus, even answering the "right" questions by correctly explaining the functions themselves leaves one with substantial "explanatory debt".

    ReplyDelete
  22. My comment deals mostly with the framework in which cognitive theory exists. I find that Turing proposes the most robust explanation of cognition: "cognition is as cognition does." I don't necessarily agree with him, but his theory is strong because a successful cognitive theory, according to this reading, must make implicit computation explicit and then test it. Turing does exactly this by defining cognition as functioning like a known-to-be-cognizant organism; therefore, we can observe and test its behaviour. Searle's argument that cognition is not simply computation is not as strong a theory, because he only shows what is not cognition. Furthermore, his argument does not necessarily produce anything that can be tested.

    ReplyDelete
    Replies
    1. I guess there are just contesting theories about whether cognition is just computation. Strong AI proposes that cognition is nothing more than computation, whereas weak AI holds only that cognition can be simulated by computation (along the lines of the strong Church/Turing thesis).

      Delete