Saturday 2 January 2016

1a. What is Computation?


Optional Reading:
Pylyshyn, Z (1989) Computation in cognitive science. In MI Posner (Ed.) Foundations of Cognitive Science. MIT Press 
Overview: Nobody doubts that computers have had a profound influence on the study of human cognition. The very existence of a discipline called Cognitive Science is a tribute to this influence. One of the principal characteristics that distinguishes Cognitive Science from more traditional studies of cognition within Psychology is the extent to which it has been influenced by both the ideas and the techniques of computing. It may come as a surprise to the outsider, then, to discover that there is no unanimity within the discipline on either (a) the nature (and in some cases the desirability) of the influence and (b) what computing is -- or at least on its essential character, as this pertains to Cognitive Science. In this essay I will attempt to comment on both these questions.


Alternative sources for points on which you find Pylyshyn heavy going. (Remember that you do not need to master the technical details for this seminar, you just have to master the ideas, which are clear and simple.)

Milkowski, M. (2013). Computational Theory of Mind. Internet Encyclopedia of Philosophy.


Pylyshyn, Z. W. (1980). Computation and cognition: Issues in the foundations of cognitive science. Behavioral and Brain Sciences, 3(01), 111-132.

Pylyshyn, Z. W. (1984). Computation and cognition. Cambridge, MA: MIT press.

114 comments:

  1. Theoretically, AI could become more intelligent than human beings. While AI is incredibly useful in advancing our own society, are there ethical issues (such as human job loss among other problems) in furthering research?

    Replies
    1. Let's not worry about more intelligent before we have as intelligent (i.e. passing the Turing Test)!

      Technology has always been putting people out of work, or giving them better things to do: Why should AI be different?

      (Thanks for the photo to compensate for my prosopagnosia, Eugenia!)

    2. I think AI crosses the boundary when it comes to technology putting people out of some types of work more than others. Delivery robots, for example, may not cross this boundary, but I find that for jobs requiring human interaction there is something really special about interpersonal connections. Technology certainly facilitates all types of jobs, but mostly as an aid to make performance in the workplace faster and more efficient. If engineers and cognitive scientists find the additional component to computation for creating robots that pass the TT and can form meaningful connections with human beings (a big reason I personally believe we are unique from anything else on the planet), then wouldn't this create a complete imbalance in society?

    3. Annabel, you mean Dominique creates an imbalance in our society? How? Why? (Remember? She's our MIT robot...)

      Harnad, S. (2014) Turing Testing and the Game of Life: Cognitive science is about designing lifelong performance capacity not short-term fooling. LSE Impact Blog 6/10 June 10 2014

      Harnad, S. (2014) Animal pain and human pleasure: ethical dilemmas outside the classroom. LSE Impact Blog 6/13 June 13 2014

    4. RE: AI is more intelligent than human beings and will take over our jobs.

      I think that we must be aware of all innovations in order for machines not to replace us. We will constantly learn new things and develop our skills as we advance along with developing machines. Also, I think that there are some fields that cannot be taken over by machines. In my opinion, translators cannot be replaced by machines, at least for another 100 years or so. Translation work is too complex for a machine, and translators have to take into account many complex factors to make a high-quality translation that enables the user to fully understand the translated work. Language is so complex and intricate that I don't think machines could ever translate in the same way human translators can. Thus I don't agree that AI could be more intelligent than humans and would take over society.

    5. The question of whether computers will "take over" is a fun one, but it's not really relevant to this course. Ditto for the question of whether computers will be "smarter" than us. We're concerned with whether and how computers or robots can help us reverse-engineer how people can do what they can do ("as smart as us").

      Machine-translation is considered an especially important goal because translating is one of the things people can do that, so far, computers can't (or at least not very well), and some people think that to really be able to translate, the mechanism has to be able to do almost all the other things people can do.

    6. In regards to the question of whether computers/robots can help us reverse-engineer how people can do/feel what they do, I am wondering where the aspect of autonomy comes in? Put simply, humans can (we mostly do not, but we have the ability to) make decisions that are free from external control. While a computer can continue a sequence of operations, it needs the initial input. How can computing help us recreate human ability if computers lack the capacity to make decisions that are free from external control? I do not mean to say computers cannot make unbiased decisions or that they cannot generate new answers, but that these answers are based on existing information. The computer does not choose to compute something novel or come up with a new idea like the human mind can.

    7. Aliza, are you sure our decisions aren't "based on existing information" too?

    8. I initially thought the same thing as you Aliza - that if computers need to be programmed and require an input to generate a particular output then they wouldn't be able to generate spontaneous thought or even creative thought. If the computers are modelled after human brains however, then we would assume that they would be able to encode some source of "inspiration" from their surroundings / experiences to generate new answers (in the way that humans do).

    9. @Fiona interestingly enough, Google Translate made incredible advances this year when Google Brain (their deep-learning researchers) came together with the translation team and completely changed how it was programmed. Apparently it's actually very good, and within a week the new Translate had made more advances than they had made in about 4 years. The New York Times Magazine article on it is really amazing, but it's a 40-minute investment of your time; if you get a chance I recommend it. https://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html . Though I feel language and the translation of it encompass huge social and cultural elements of the human world which are hard to imagine non-human things being able to use, I don't think translation itself is completely out of the question, especially after reading about Google's improvements.

    10. Firstly, isn't almost all learning based on existing information -- we take our past experiences to deduce how to behave in the future?
      Second, I want to say something about that last sentence, about computers not computing novel things. With neural nets, computers are capable of learning, and Bayesian neural networks are capable of integrating prior knowledge and new learning -- sometimes predicting things very strongly. Also, on the topic of computers making decisions -- is this not what chess-playing computers do? They begin at the problem, and go through operators to get to the goal state (the winning move). And this is without getting into language software or the work being done there, as Cassie talked about above.

    11. You are correct, professor; on second thought I definitely agree that our decisions are based on existing information. I guess I am perplexed as to the limits of a computer's capacity to act like a human when it is inherently an insentient being. That is what I am having trouble wrapping my mind around. Is there a way to design a program that enables a computer to 'feel', so that it could either feel an emotion or feel the desire to perform a function, rather than simply generate an output based on an input? A 'consciousness' or awareness of a choice to do a desired action, not because it has received an input to do so.

    12. Annabel, computational models of cognitive capacity have to explain what the brain can do (which is what people can do!). It's not clear whether it helps to "model" them on the brain, because although we know the brain has neurons, and action potentials, and neurotransmitters and connectivity, none of that tells us how the brain can do all the things it (we) can do. So although the brain is the mechanism, reverse-engineering its performance capacities is not so easy (even though it's the "easy" problem). (Kid sib doesn't understand what you mean by "inspiration" in this context...)

      Cassie, good point. But we know our brains are not doing deep learning in google-space to be able to speak with us the way Dominique does...

      Amar, good points, but so far computers' capacities are only "toy" capacities, compared to the total performance capacity of humans. (This point is not answered by saying, correctly, that they can also do things humans can't do.)

      Aliza, the puzzle is this: Yes, almost certainly, computers can't feel. So what they can do, they can do without feeling. But there are still lots of things we can do that computers can't. When it comes to Turing robots, like Dominique, they would be doing everything we can do, partly via computation, partly via dynamics. And because of the other-minds problem, we could not know for sure whether Dominique feels. But even if she does feel, there would still be the hard problem of explaining how and why. You seem to be turning that on its head, saying that because a computer (or a robot) does not feel, it can't/doesn't... what? Pass the Turing Test -- by being able to do everything we can do? But why do you think it could not? If you know why it needs to feel, you've solved the hard problem! So what's the explanation?

  2. I'm having trouble with the physical symbol system hypothesis, especially with the term "general intelligent action" as a capacity of physical symbol systems: it seems to reduce "intelligence" to computation. I admit this definition could work, but then how would we determine different types and levels of intelligence, if there are any at all? Is a more complex computation more intelligent? Does it necessarily imply that human and computer intelligence are fundamentally similar, since they are both physical symbol systems?

    Replies
    1. Yes, the physical symbol system hypothesis is the same as computationalism (the thesis that cognition is just computation). IQ tests measure cognitive capacity. If cognition is computation, that doesn't mean some people can't be more intelligent than others. Running is the same for everyone, but some people can run faster. But it's too early to worry about explaining individual differences in cognitive capacity, because we still have not explained basic cognitive capacity. (Besides, it will soon turn out that computationalism is wrong...)

      Thanks for the photo, Mael!

    2. @ Stevan Harnad: So I just want to double-check the relationship between cognition and computation. Is it that, according to Strong AI, cognition is nothing more than computation (computationalism), whereas weak AI is just the strong Church/Turing thesis (there's more to cognition than computation)?

  3. Re: psychological and social issues in computation, is there perhaps an increasing problem with a divorce between the rising use of computational technology and the public understanding of said computation/technology? Modern education doesn't seem to have caught up with the modern methods of technological advancement. Will a lack of understanding create barriers of growth in computational science?

    Replies
    1. For cognitive science, the interesting and important question about computation is whether cognition is (just) computation. In Week 2 (Turing) we'll find out what computation really is. The public doesn't understand what it is; even many computer programmers don't, unless they've taken a course on Turing machines. But the public doesn't really need to understand what computation is -- just how to use it. Same as with machinery. But in this course we need to understand, because computation is a serious candidate for being what cognition (thinking) is.

    2. This disconnect between society's views and understanding of computational science versus the field itself could actually be seen as both a positive and negative thing for scientific growth. On one hand, we have the kind of deficits where the public may try to impede the progress of computational science out of fear, resistance to change, a gap in public understanding, or some combination of all three. On the other hand, contrary to creating barriers of growth, a lack of full understanding could help us innovate new technology stemming from that deficit of knowledge. In regards to modern sci-fi media like Westworld and Ex Machina, and even older literary works (especially by Asimov), the gaps in adult human knowledge leave room for children to dream about what could be. Those children could then grow up to pursue their dreams, and permanently change the course of the AI field, since they didn't understand the former boundaries of the field, and unwittingly pushed them back further.

    3. This comment has been removed by the author.

  4. Regarding simulation & personal identity:
    Would simulating the human brain be sufficient for recreating one’s personal identity?

    I’m playing with the idea that there is a whole other dimension, crucial for one’s personal identity, that transcends the physical. This element is the “lived experience” and historical memory of every human being within a society.

    For example, we can talk about racialized identity. Some authors argue that the lived experience is at the heart of one’s self-identity.

    This leads me to wonder whether an AI would be able to have the emotional intelligence to empathize in a way that would allow them to truly understand this lived experience. Should emotional intelligence be included as an important factor in the Turing Test? In addition, if intelligence is behavioural, is memory a central factor? If yes, would this indicate a lived experience?

    Replies
    1. Would simulating the human brain be sufficient for recreating one’s personal identity?

      Depends what you mean by simulation. If you mean computer simulation (which is not quite the same thing as computation) then the answer is no, for the same reason that a computer simulation of water is not wet. (Not even a virtual-reality (VR) simulation of water is wet. It may just feel wet to the one wearing the VR glove.)

      Let me answer your question with something stronger than computer simulation: Supposing there were a way to clone you, during the night, maybe using a "3D printer" that could copy your genes and tissues. The clone would wake up with all your lifelong memories (because it would be identical to you). Would it have your "lived experience" (or would it only feel as if it did)? Would it be you? But then what about the other you?

      Anyway, this course can only deal with the "easy problem" of how to design something that can do anything you can do (pass the Turing Test), not the "hard problem" of designing something that can also feel.

      I’m playing with the idea that there is a whole other dimension, crucial for one’s personal identity, that transcends the physical. This element is the “lived experience” and historical memory of every human being within a society.

      You mean like my "lived experience and historical memory of what happened at the gas station in Princeton that christmas long ago that I described? But I still have no idea whether I really paid the $10 in advance or it was just misremembered. It's not my memory that could settle this; only an objective video could (if it was not photo-shopped!).

      (Watch out with "dimensions" that "transcend" the "physical"! Sounds like "dualism," which is a symptom of -- but definitely not the solution to -- the "hard problem."

      For example, we can talk about racialized identity. Some authors argue that the lived experience is at the heart of one’s self-identity.

      Yes, but now we've left computation behind and gone on to social psychology, personality, culture, and maybe even genetics.

      (I don't know of any other kind of experience than "lived" experience. If I didn't live it, how could I have experienced (felt) it?)

      This leads me to wonder whether an AI would be able to have the emotional intelligence to empathize in a way that would allow them to truly understand this lived experience.

      What people usually mean by "emotional intelligence" is the capacity to know (and sometimes even to feel) what someone else is feeling. But that's beyond this course too (except the 2nd to last week) because the course is on doing ability, leaving feeling an unsolved puzzle.

      Should emotional intelligence be included as an important factor in the Turing Test?

      The Turing Test only has to include what the average person can do. To the extent that people can guess what other people are feeling and thinking, the TT robot would have to be able to do that. But not much. People with severe autism have trouble knowing what others think and feel, yet we would not doubt that they were really cognizing. (In fact lots of professors have that problem too.)

      In addition, if intelligence is behavioural, is memory a central factor? If yes, would this indicate a lived experience?

      Memory can be a capacity to do something that you learned to do long ago (but don't remember when or how): "procedural memory."

      Or memory could be facts that you learned long ago (but don't remember when or how): "semantic memory."

      Or memory can be the capacity to remember something that happened long ago, and you remember what it felt like: "episodic memory."

      So maybe episodic memory (when it's faithful) would be what you're calling "lived experience." But without the video, you can't be sure. It could also be imagined experience; false memory.

    2. Absolutely true,

      Even if we grant that CTM is correct, there is the impossible task of factoring (coding) in millions of years of evolutionary history, including the social and cultural factors that shaped evolution (social brain hypothesis), into a program that would result in something that is both a replica of the human entity and has the lived experience of that human entity. The task itself seems to be impossible. So in theory I can't see how an ideal Turing machine will ever exist.

    3. Soham, your brain, today, presumably encodes all your past experience. If you want, you can add that it was also shaped by millions of years of evolution. But here it is now. How is the real time of your life history (and our species' history) playing a role in what your brain is and does and can do now? Yes, that's what got it into that state. But it's in that state now, isn't it? And, as discussed above, it would be in a clone of you too. And if computationalism were true, it would be in a computational duplicate of you too, one that had no real real-time history at all.

    4. If computationalism were true, evolutionary psychology would be pointless. However, findings like Chomsky's Universal Grammar and other evolutionary biases tell us that computationalism is not the whole story. Searle's Chinese room argument disproves computationalism as well.
      The question of whether a T3 can be made on "computationalism alone" remains to be seen, although "What is Computation" suggests that dynamical systems are likely necessary for a thinking AI. I think it is possible for a machine to be able to think without passing T3. Passing as a human pen pal calls for very complex features, like sociability, psychological biases rooted in evolution, and the ability to forget unimportant information. Randomness alone cannot account for these features; they will need to be built into the AI's neural net's rules for categorizing stimuli.

    5. Laura, while I agree with you that Searle's Chinese room argument suggests that computationalism does not tell us the whole story, I don't understand your claims (a) that computationalism negates evolutionary psychology, or (b) that Chomsky's UG and other "evolutionary biases" disprove computationalism. I really don't see how evolution plays into the computationalism debate. I think these issues are all quite compatible and there may be many compelling computationalist views of evolutionary psychology, or evolutionary explanations for the emergence of computation in the brain.

      As far as I understand it, the Chomskean program relies extensively on an assumption that language is more or less computational. Chomsky's view of grammar is as a rule-based system for manipulating discrete elements into what constitutes a grammatical sentence. It is in every way a computational approach to language, and he would argue that an important core of any individual's grammar -- the portion of grammatical rules for transformation 'hard-wired' into the brain -- is derived from the genetic endowment. Though I might feel that the evolutionary story behind Chomsky's theories is lacking, he claims that this rule-based and computational grammar is a product of human evolution. Therefore I see no reason why computationalism causes problems for evolutionary psychology.
      Now, just as Searle’s Chinese room argument does not prove that computation has no necessary place in cognition even if it is insufficient, so too, Chomsky’s view of language as a computational process is also compatible with a view that cognition is only in part computation. However, if Chomsky’s general view of language is reliable then we would need to accept that at the very least linguistic cognition is necessarily computational and therefore cognition must be at least in part computational.

  5. From: What is a Physical Symbol System?

    "The term physical is used, because symbols in a physical symbol system are physical objects that are part of the real world, even though they may be internal to computers and brains."

    I take this to mean that we can label symbols, as well as other ideas and abstract objects, as "physical" if we are thinking about them because of the various brain activity that makes those thoughts present in our minds. Somewhat similarly to the question of "if a tree falls in a forest and no one is around to hear it, does it make a sound?", does this mean that when nobody is thinking a certain thought (or symbol) it is not a physical object, and it only becomes one once someone is thinking it?

    Replies
    1. It has nothing to do with the unheard sound of trees falling! Symbols are just objects -- pebbles, gestures, 0's, 1's. They can be any object, because they are just objects whose shapes are arbitrary and we all agree to use them in a certain way. In arithmetic, "2 + 2 = 4." Computation is the manipulation of symbols, according to rules that operate on their (arbitrary) shapes, like the rules of arithmetic. In cognitive science there is the hypothesis that cognition (thinking) too is just symbol manipulation. The hypothesis is called "computationalism," and we will be looking at it closely for a few weeks.
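
      To make that concrete, here is a minimal sketch (mine, not from the reading) of a toy symbol system in Python: the rule table and the symbol shapes are made up; the point is only that the rules apply to the shapes of the symbols, never to what they mean.

```python
# A toy "symbol system": rules keyed only on the shapes of the symbols,
# not on their meanings. The rule table is hypothetical.

RULES = {
    ("II", "+", "II"): "IIII",    # what we read as 2 + 2 = 4
    ("II", "+", "III"): "IIIII",  # what we read as 2 + 3 = 5
}

def manipulate(symbols):
    """Look up a triple of symbol tokens purely by shape (string identity)."""
    return RULES.get(tuple(symbols))

print(manipulate(["II", "+", "II"]))  # -> IIII
```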

  6. From: What is Computation?
    I'm having difficulty sorting through the difference between the Functional Model of Computation, and the Imperative Model of Computation. The article says that the Functional Model's only important feature is the output, and the Imperative model cares about any manipulation of the output.
    Is the distinction that the Imperative model cares about the process of achieving the output whereas the functional model does not? For example, the functional model wouldn't care if the process of getting 3X5=15 entailed adding 5 to itself 3 times or relying on your memorisation of multiplication tables.
    If this is correct, how do we explain things under the Imperative Model if we do not fully understand the process of cognition. There is no way to definitively say that process Y occurs between asking someone 'what is your favourite colour?' and them responding 'blue'. Is the Imperative Model then simply a theory without any known examples in the field of cognition?

    Replies
    1. From: What is Computation?

      Re: Is the distinction that the Imperative model cares about the process of achieving the output whereas the functional model does not?

      I think the key distinction between the functional and imperative model is that the imperative model considers all kinds of manipulations of representations to be computational, while the functional model only determines a program to be computational when there are pre-designated inputs and outputs.
      The functional model only sees the output as important (not the procedure), while the imperative model doesn't require there to be an output in the functional sense at all. It considers animation procedures, procedures that manipulate the memory of your computer, and the software on your cell phone to be computational, even though there is never a clearly designated output.
      I reckon that you could use the functional model to determine a simulated brain's favourite colour, because you have an output in mind. It seems like we'd have to consider studying the stream of consciousness under the imperative model because it never really stops, procedures keep on going until we die or are sedated or in delta wave sleep... (maybe..)
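
      For what it's worth, here is a rough sketch in Python of how I read the contrast (the function names and the tiny examples are my own illustration, not from the article): the "functional" picture cares only about the designated input/output pair, while the "imperative" picture is a procedure that just keeps changing state, with no single designated output.

```python
# "Functional" picture: only the designated input/output pair matters.
def times(a, b):
    return a * b  # same output whether done by lookup or by repeated addition

# "Imperative" picture: a procedure that keeps manipulating state,
# with no single designated output (think of an animation loop).
counter = 0

def tick():
    global counter
    counter += 1  # the "result" is just the ongoing change to memory

for _ in range(3):
    tick()

print(times(3, 5), counter)  # -> 15 3
```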

    2. Don't worry about these "models" of computation. (They're uninteresting and unimportant.) But computation is important. And computation is the manipulation of symbols according to rules that are based on the symbols' shapes (but not their meanings).

      The best example of a symbol manipulation rule is the recipe for solving for "X" in a quadratic equation of the form aX**2 + bX + c = 0. The recipe is: X = (-b +/- SQRT(b**2 - 4ac)) / 2a

      We all learned that in high school. Maybe we still know what it means; maybe we knew it back then; or maybe we never knew what it meant; but we knew how to apply it in exams as a recipe to solve equations like "3X**2 - 9X + 7 = 0".

      That's computation. And it's also what enables computers to do all the things that they can do. Computationalists think that's what the brain does too.
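
      A minimal sketch of that recipe as a program (my illustration; cmath is used only so it also handles the example equation, whose roots happen to be complex):

```python
import cmath

def solve_quadratic(a, b, c):
    """Apply the high-school recipe for aX**2 + bX + c = 0, step by step,
    without caring what any of the symbols mean."""
    root = cmath.sqrt(b**2 - 4*a*c)                 # the part under the square root
    return ((-b + root) / (2*a), (-b - root) / (2*a))

print(solve_quadratic(3, -9, 7))  # the roots of 3X**2 - 9X + 7 = 0
```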

    3. The real distinction here is "weak equivalence" vs "strong equivalence." If you have something (say a child, doing long division) and a computational model of it, then they are weakly equivalent if for the same input they both give you the same output. They are strongly equivalent if they also both do it the same on the "inside" -- use the same rule internally, in the same order.

      Please raise the question of weak and strong equivalence in class because we will be discussing it in connection with Zenon Pylyshyn's computationalism.
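
      A toy illustration (mine, with made-up function names): two division routines that are weakly equivalent -- same input/output -- but not strongly equivalent, because they use different procedures internally.

```python
# Two ways to do whole-number division: same input/output behaviour
# (weak equivalence), but different internal procedures, so they are
# not strongly equivalent.

def divide_by_repeated_subtraction(n, d):
    q = 0
    while n >= d:
        n -= d
        q += 1
    return q

def divide_builtin(n, d):
    return n // d

print(divide_by_repeated_subtraction(17, 5), divide_builtin(17, 5))  # -> 3 3
```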

    4. Is Searle’s chinese room argument effectively arguing that in order to create a computer that can ‘understand’, we need to aim for stronger equivalence than standard computation could ever give us?

      If I understand Searle's analogy correctly, he reasons that if a brain is effectively a computer, then the process of memorizing all the procedural steps that a computer would undertake to process and respond to an email would not lead Searle to UNDERSTAND the Chinese message of his pen pal. Going off of Pylyshyn's computationalism and what you wrote in the final paragraphs of your paper, this leads us to thinking that there is more to cognition than computation, because in standard computation symbols are not grounded in meaning. To realize the 'understanding' component of cognition, we likely need to make simulations that go beyond procedural computation and mimic the dynamics of matter and energy in our brain -- design a simulation of the brain something like how modern meteorologists predict the weather?
      Did Searle "go too far" because he didn't see computation as useful to understanding cognition? How exactly did he "go too far?"

    5. Equivalence -- both weak (same I/O) and strong (same I/O and same algorithm) -- is about computation. Searle does not think cognition is computation at all.

      Searle's argument just shows that cognition cannot be all computation. Searle thinks it shows cognition can't be computation at all. (That's going too far.)

      Computation + simulation is still just computation, because simulation is just computation.

    6. Thank you for clearing that up. I guess Searle should have better defined his narrower vision of computation so as not to exclude simulations of dynamics (unless he didn't think simulations would help us either...)
      This speaks to Horswill's notion that the definition of computation is in flux.

  7. The article by Horswill discussed the fact that we are part of the "information revolution." I couldn't agree more with this statement -- information is available more than ever; one just has to glance down at his or her smartphone to prove so. This brings about an interesting question: is the educational system moving along at the same speed as the information revolution? In other words, why are students still required to memorize facts which are readily available at one's fingertips? This is not to say that learning should be minimized. In fact, the opposite is true -- learning should be a priority, with memorization on the back burner. It is too often that success in school relies upon the regurgitation of numbers and facts. In the current technological age, courses should be restructured in order to focus on learning, rather than memorization.

    Replies
    1. Here is an analogy: In a world where devices can carry us around, why should we still move on our own?

      Two answers:

      (1) "use it or lose it." Without moving, our muscles wither, and then if ever our devices failed us, we'd be lost.

      (2) We're not just interested in movement for transport. We want it for exercise and sport and dance. If we can't do it for ourselves, that's all gone.

      Learning how to find and manipulate data (information) is not "memorization." It's like practising the piano so that eventually we can play or even compose on our own. CDs alone can't do that for you.

  8. From: Cohabitation: Computation at 70, Cognition at 20
    Could Chomsky's theory of Universal Grammar be applicable to other human behaviour? For example, most cultures promote similar underlying moral codes but differ in the ways they execute them. Every culture promotes respect for the dead, but some do this by cremating their deceased, some by burying them and others by eating them.

    Additionally, how plausible is it that once we create AI that passes the Turing Test we will have a viable explanation of consciousness? Already, programmers have created self-teaching algorithms whose workings no one completely understands. Will it be possible to reverse-engineer an AI that has in part learned from itself?

    Replies
    1. Chomsky's Universal Grammar is something very special, for special reasons. It has nothing to do with cultural universals. We will discuss it in weeks 8 and 9. (But I have no secrets, so ask me for a quick preview in class; I just have too many comments to respond to now to be able to do it in writing here.)

      Passing the Turing Test provides a candidate explanation for cognition, but only the "easy" part -- a possible explanation of how people can do all the things people can do -- but not the hard part: explaining how and why they can feel (i.e. consciousness).

      If you write an algorithm, you know how it works, but given data or execution time it may go on to do things that the one who designed the algorithm didn't know or expect.

      The model that passes the Turing Test (Dominique) has to be able to do anything we can do (which, I suppose, includes "learning from oneself" -- though I'm not sure exactly what you mean by that).

  9. RE: "Everything is text"

    If we look at the brain (or some of the things the brain does) as computational, then is there a sort of unit that things (memories, language, imagination...) are encoded in like binary? Do neurons either firing or not firing correspond in some way to 0s and 1s in computers?

    If so, then because it's a physical thing that occurs and then is gone would the brain be able to store sequences in the same way as a computer does binary? I think that this is too simple of a correspondence but the text gives no other way of how the brain could encode information other than neurons (assuming the brain does encode).

    Replies
    1. If cognition is computation then, yes, the brain would have a symbolic code -- but not necessarily binary. For computationalism to be true, the brain would need to be like a computer, computing, but not necessarily an externally programmable computer like a PC or a Mac. More like a dedicated computer. And of course it would have to be able to store too.

    2. Ultimately, we know that there has to be some information organization system in the brain that corresponds to all of the things we know and the memories that we can conjure up (ex: your 3rd grade teacher's name).

      But, I think that the computer analogy falls apart because, to our current understanding, the brain works by up and down-regulating series of neurons that are connected (similar to "Neural networks" in computer science, but with much more complexity). All of the computers that we are familiar with do not use these neural networks, so that’s why I think the metaphor can get confusing at times.

      To answer your question, I think the brain must encode information, that information must be coded by the neurons, and computer scientists have tried modeling basic systems to imitate our neural biology (diagram attached below).

      https://upload.wikimedia.org/wikipedia/commons/thumb/4/46/Colored_neural_network.svg/300px-Colored_neural_network.svg.png
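
      For readers who want to see what the linked diagram amounts to in code, here is a minimal sketch (the weights, sizes, and input values are arbitrary placeholders, not a claim about real neurons): each unit just takes a weighted sum of its inputs and squashes it.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One feedforward layer: weighted sum of inputs, then squash."""
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

random.seed(0)
inputs = [0.2, 0.7, 0.1]
weights = [[random.uniform(-1, 1) for _ in inputs] for _ in range(2)]  # 2 units
biases = [0.0, 0.0]
print(layer(inputs, weights, biases))
```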


    3. Karl, your scepticism seems to be about computer hardware: but computationalism is about software. Computation is "hardware-independent" -- not in the sense that you don't need hardware to execute the algorithms, but in the sense that the physical details of the hardware are not relevant. The same computation can be implemented on many different kinds of hardware.

    4. RE: the computer/neural network analogy Nick mentioned; I thought I should just write here since I was going to comment something similar

      So we are pretty sure that there are networks of neurons in specialized brain regions that act in particular ways. For example, the hippocampus is really important for memory recall, and it may be that there are physical changes that happen there after an experience. So to push the computer-neuron analogy, I guess you could argue that the brain is "storing" these physical changes, in so far as the fact that these changes are occurring is a form of storage, just like a Turing machine would write down a '0' or '1'. And you could also argue that there must be some sort of pattern of physical neuron firings and patterns of these new physical features in membranes that would allow for memory recall, and that the stored physical changes may be representations of what happened in the real world when that memory was formed - representations that are created so that we re-experience/remember that real-world event. But even if these representations are more complex and of a more numerous base than binary, that doesn't mean that large-scale metacomputation is doing all the work here. It's not enough to add a degree or two to the system - ternary and quaternary and so on still wouldn't be sufficient to explain cognition - because cognition is more than just symbol manipulation (which is our definition of computation). And I don't think you can separate the computation from the hardware - yes, it's independent in that it can run on any hardware, but computation is constrained by the physical capabilities of the hardware it's currently running on.

      In any case, you can try to push the analogy as far as possible, but I think the analogy is mostly useful for describing a part of cognition, or one of the many functions of cognition (to perform computations) but the computation analogy - computationalism in general - cannot give us the whole picture of cognition

    5. vɪktoɹiə, yes, the brain could be the hardware for implementing the software that generates cognition: but is it? and can software generate cognition?

    6. It is apparent that there are several ways in which computation fails to explain cognition. Computation is simply symbol manipulation and although there may be correct output to a given input, computation alone does not allow a programmed robot to feel. At first I had questioned whether a purely computational robot can develop feeling capabilities. I came to the conclusion that a purely computational robot does not in fact feel, or develop these capabilities even after a long period of time (Searle’s Chinese room as an example), but can deceivingly emulate feeling so it is difficult to immediately decipher if the robot is actually feeling.

      So if we phrase ‘can software generate cognition?’ differently, I could ask ‘can feeling capabilities emerge from the software of the robot?’ Computation is symbol manipulation. Symbol manipulation can be programmed with an algorithm (or software). Yet symbol manipulation never becomes feeling or cognition. But is cognition a combination of symbol manipulation and another type of thing that allows for feeling of the symbols we manipulate? What may this other thing be and how does it cause feeling? Delving into this question would be trying to conquer the hard problem and that is not essential for this course.

    7. Nadia, it is interesting to think about whether computation could generate feelings in a robot. You say that if a robot exhibited feelings, it would be deceit. However, looking at it from the point of view of computationalism, what if feelings are in fact simpler than we thought they were, and are only products of inputs of information combined with past experiences, and outputs in terms of sweat, heart rate, and the past experiences that come to mind? Could it not be possible to make the robot think that it is feeling something, very similar to what we are feeling? What would be the difference, then, between what it is feeling and what we are feeling? What is the main ingredient of what makes a feeling real?

  10. From: “What is computation”

    “What if the computer could fool people into thinking it was human? In an important 1950 paper, Computing Machinery and Intelligence, Alan Turing (of Turing machine fame) argued that if a computer could fool humans into thinking it was human, then it would have to be considered to be intelligent even though it wasn’t actually human.”

    I would like to turn this suggestion the other around. Let’s say a human could not succeed at the Turing Test, and fool the tester into thinking it was human. Let’s say this person suffered from mental disabilities, would this mean that this person would have to be considered as non-intelligent, or deprived of cognition?

    Replies
    1. (1) Passing the Turing Test is not about "fooling" anyone. It is about really being able to do anything a person can do, indistinguishably, to a real person, from a real person. For example, Dominique.

      (2) The purpose of the TT is to reverse-engineer how the brain does what it does. It would be of no interest to design a robot that was indistinguishable in what it could do from what a person in a deep, terminal coma (a chronic, vegetative state) can do. We want to explain capacity, not incapacity (at least not yet)! (We don't mistake people with mental disabilities for machines! And the TT is only interesting if applied to something we've designed, so that we know how it works. But yes, we are always Turing-Testing one another, as we are doing with Dominique.)

    2. I believe it might be helpful to understand the Turing Test as a sufficient but not necessary indicator of intelligence, in the sense that no entity that passes the Turing Test may be considered unintelligent, but there may be many intelligent entities that fail to pass it.

    3. Let's replace "is it intelligent?" with "is it really thinking?" Passing the TT requires being able to do everything a real person can do. If it can't then it doesn't pass the TT. A real person that can't do everything a real person can do is still a real person, and probably still thinks. It's irrelevant because the TT is for testing models we've built. But if the model fails the TT there's no basis for inferring that it really thinks.

      So, TT capacity is not necessary for thinking. (But it isn't necessarily sufficient either! Dominique could be a Zombie....) It's just the best we can do.

  11. “But if our brains, and thus our thoughts, can be simulated, to what extent does that mean we ourselves can be simulated?” – page 17, “What is Computation”
    In reference to this question and the ideas of cloning oneself to have the exact same mode of intelligence, with all past memories and experiences, I would say this does not work for long. Even if the two were the same for a moment, it seems as if they would branch off and become two completely different beings. From the point of cloning, there are infinite possibilities as to future life experiences, and since the two cannot be in the same exact place at the same time, it is inevitable that their experiences would be different.
    Moreover, it seems that even if the same “thoughts” could be simulated, they would not actually be the same due to the “hard problem” (inability to simulate the specific feeling of consciousness). While the same exact neural connections, symbols, computations, etc. can be activated, it does not mean the same feelings and sense of being accompany that.
    This failure implies that the symbolism and computational accounts also fail. There must be something else in addition to computation or symbolism to create intelligence. This is a typical “the sum is greater than its parts” problem, where the brain is composed of neural connections that function as a computational system, but that the combination of all of them create more than a computer simulation of these networks. Perhaps what is missing in these accounts is an awareness of intelligence – that it’s not simply enough to be able to compute and abstract, but that the being must also be aware of its capability to do so. Hence, for the Turing test, it would not be completely believable that something is intelligent unless they believe it themselves and are able to portray that level of belief to us. This does not rule out the possibility that eventually (and even now) there are AI machines that have this level of intelligence.

    Replies
    1. A computational model is not the same thing as a clone: A clone is an exact physical copy; the computational model is just a computational copy. The difference is obvious when we think of the difference between a clone of a furnace (i.e., another furnace that heats) and a computational model of a furnace (which does not heat; it just formally models the furnace and the heat with symbols: 0's and 1's -- or, as Searle calls them, "squiggles and squoggles"). A robot, by the way, would necessarily be a hybrid between the two, since computers cannot move ("movement is not computation"): it would include peripheral devices such as sensors and effectors.
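
      A tiny sketch of that point about simulation (my own toy example, not from the reading): a computational "furnace" whose entire output is numbers labelled as temperatures. Nothing gets warm when it runs.

```python
def simulated_furnace(start_temp, minutes, degrees_per_minute=0.5):
    """Step a number labelled 'temperature' -- squiggles, not heat."""
    temps = []
    t = start_temp
    for _ in range(minutes):
        t += degrees_per_minute
        temps.append(t)
    return temps

print(simulated_furnace(18.0, 3))  # -> [18.5, 19.0, 19.5]
```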

      Yes a person and their clone would immediately start to diverge, because they no longer had exactly the same history. This would be equally true of a computational model or a robotic hybrid. But we're not trying to copy individual persons, just people's generic cognitive capacity (to do all the things they can do).

      I agree that computation cannot solve the hard problem of feeling. But neither can physics, physiology, or cloning! So that's not the relevant difference.

      You seem to be saying that we could not have full cognitive capacity (intelligence) without "awareness." Perhaps. But if you can explain how and why that's so, you've solved the hard problem! (And no one's done that.) On the other hand, a device getting "access" to what's going on inside itself is something that is trivially easy to do (whether through hardware or software). The hard part is to make it felt access. (That's why "awareness" and "consciousness" are weasel-words. They exploit the enormous difference between unfelt access -- easy, trivial -- and felt access: hard.)

      Belief? Do you mean felt belief or just having data? (Another weasel-word.)

    2. The distinction between 'unfelt' and 'felt' access seems to be just a different way of addressing the same problem, unless I'm not understanding this correctly. The Turing test demands the same evidence about 'feeling' for AI and Humans. (Hypothetically) If we developed sufficient mechanical/electrical/neuroscientific understanding of the activity in the human brain when we 'feel' something that we can identify it in each other, could we reasonably test an AI with the same (or a modified version of the same) paradigm? If we can accurately test humans, can we accurately test AI?

      In this case, where we understand the physical components of feeling, would you accept a test for AI instead of, or supplementary to, the Turing test?

      If not, is there a level of evidence where you would be comfortable making the distinction?


  12. For “What is Computation: The Limits of Computation”
    The infinite loop, running on the same input without stopping, leads to the halting problem, determining whether a program would halt if it ran on a given input. The halting problem seems to imply that the brain cannot be a computer; however, I don’t think it would be because “people can tell whether programs will halt and computers can’t” as the article described. Rather, there is no set of input that can be run by a program which is always computable. There must be a ‘hiccup’ within any given set of input that can be run. However, the brain does not get stuck in infinite loops in thinking about thinking while computers must necessarily at some point get stuck in computing computation. So the brain as computer idea is incomplete. It can be a computer in many ways, but it is more than a computer. Nonetheless, I’m not sure what exactly is missing (perhaps feeling, as in what we feel at every moment?) or maybe the brain as computer is a special computer able to deal with ‘hiccups’ in any set of input (but how?).
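
    (A side note on the halting problem itself: the standard argument can be sketched in a few lines of Python. The halts() function below is hypothetical -- the whole point of the argument is that no such general test can exist.)

```python
def halts(program, argument):
    """Hypothetical: pretend this could always tell whether program(argument) halts."""
    raise NotImplementedError("no such general test can exist")

def trouble(program):
    # Do the opposite of whatever the tester predicts about a program run on itself.
    if halts(program, program):
        while True:      # predicted to halt? then loop forever
            pass
    return "halted"      # predicted to loop? then halt immediately

# Asking halts(trouble, trouble) makes the tester wrong either way --
# that contradiction is the halting-problem argument.
```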

    Replies
    1. Neither the halting problem nor Goedel's incompleteness theorem has any implications for computationalism (the hypothesis that cognition = computation). Some (true) things are provably unprovable. That does not mean they are untrue. And unprovable means unprovable. Proof is computation. It means their truth cannot be computed: not by mathematicians, not by the mind (whether or not the mind is just computational).

      And it doesn't mean that "knowing" their truth -- i.e., having a proposition that states that they are true, and treating that proposition as true -- cannot also be computational.

      (In fact, until further notice, there's no reason to believe that feeling that they are true cannot be computational either: it's just that if it is, we cannot explain how or why it is.)

      If you want more on this, ask me in class about Lucas and Penrose.

    2. (I will definitely ask about Lucas and Penrose in class).
      I tried to read through the “Chomsky and Penrose” paper and understand as much as I could. So (a) is the refutation of using Godel’s proof on cognition that true unprovable statements can be computed? Furthermore (b), is Penrose wrong because the limits of proving a true unprovable statement are not the limits of cognition as computation?

      (a) Where I'm stuck is how true unprovable statements can be, in principle, computable. Having a statement 'Two' stating that statement 'One' is true, and treating statement 'One' as true, seems to create an infinite regress of statements referring to the next as true.
      (b) Referring to p.1 of the “Chomsky and Penrose” paper: “Penrose is probably right that feeling/intuition is not in fact implemented in the brain computationally. Rather, it is implemented dynamically – because of the symbol-grounding problem”.
      Does this mean that feeling gives meaning to true unprovable statements?

      Going on a tangent (and a wild thought) on “how can the symbols in a symbol system be connected to the things in the world that they are ever-so-systematically interpretable as being about: connected directly and autonomously” (Cohabitation: Computation at 70, Cognition at 20), can it be possible that it’s a ‘things-in-the-world-grounding problem’ rather than a symbol-grounding problem: where infinite regress in the manipulation of symbols actually creates the things to which symbols are connected?
      So rather than symbols being grounded, can things-in-the-world be grounded in symbols – where they aren’t really things-in-the-world? In principle, could this be a defensible stance at all?

    3. On "Meta-Programming"

      Assuming computationalism, could we practically build an AI that cognises indistinguishably from other humans? As a consequence of human programming and the representation of procedures in bit-string format, we have created intelligence capable of learning. But, analogous to Godel's incompleteness theorem, could we, as humanly intelligent beings, apprehend the meta-program that is the basis of this intelligence via rules prescribed by the meta-program?

  13. Do all symbols, at the basic level, have to have a referent? That is to say, even “other symbols […] useful concepts that may or may not have external meaning”. It is possible to imagine that no symbol is created that does not at its base have a referent. So, these symbols without apparent external meaning just refer to other symbols, which in turn do have external meanings. This would support Newell and Simon’s hypothesis that there are the necessary and sufficient means for gAI because of emergent properties. By having simple symbols with referents, more abstract symbols could be created referring to these, leading to eventual intelligence. (Imagine a recursive system with infinite time and resources. This would allow the ability of infinite symbols and systems of symbols. Therefore, at least one of these could be considered intelligent.) Furthermore, this could perhaps be extended to a theory of cognition as an emergent property of basic symbols with referents that through combination lead to cognition.
    More Kid Sib Friendly version: if every symbol we have is a representation of something in the world that we perceive through our senses, then abstract symbols just build these basic ones up. So, if a machine continues adding all of these together in different ways then it could be smarter than us, because it could make any thought (assuming a thought is just abstract symbols).

    Additionally, what is the purpose of AI? If it is to model cognition, then should we take Block’s reductionist cruncher approach and find the level at which to model the way cognition occurs in humans? Or is it possible that insight can be gained from alternate ways that machines acquire intelligence? If the gain is to model cognition without understanding how it’s done, we could employ artificial neural nets that ‘learn’ almost independently. However, then we can no longer understand how they reach their output. For example, the AI AlphaGo is able to employ intuitive methods and beat the world champion at Go, but we don’t understand how its ‘mind’ really works.

    I also do not understand Turing’s claim that “not every real number is computable”. Is this because Turing’s U machine could not divide? Then why can it write out pi?

    Replies
    1. Not all symbols have referents. "Chair" has chair, but what is the referent of "if" or "or"? Those words are called "closed class" and they are only about .001% of all words, very similar in all languages. They have logical and grammatical functions rather than referents. All other words are "open class": their numbers keep growing whenever we have new referents we want to name.

      Some (open class) words have a referent you can point to directly (e.g., chair). Others not (e.g., democracy). But all of them are abstractions, even "chair": they name categories with lots of members -- referents -- that all share certain features.

      Many words name categories whose features are also categories. This means they can be learned from a verbal definition or description (as long as you know the meanings and referents of the words used to define them). (This leads to the symbol grounding problem.) Their meaning is a recombination of prior, grounded categories: Zebra = Horse with Stripes.
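
      A toy sketch of that last point (the words and features here are invented for the example): a category like "zebra" can be grounded indirectly, by recombining the features of categories that are already grounded.

```python
# Directly grounded categories (features invented for the example):
grounded = {
    "horse":   {"four-legged", "maned", "hoofed"},
    "striped": {"has-stripes"},
}

# A category learned only from a verbal definition:
definitions = {
    "zebra": ["horse", "striped"],  # "a zebra is a horse with stripes"
}

def features_of(word):
    if word in grounded:
        return grounded[word]
    feats = set()
    for part in definitions[word]:   # recombine already-grounded categories
        feats |= features_of(part)
    return feats

print(features_of("zebra"))  # union of the horse-features and stripe-features
```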

      You haven't kid-sibbed "Block's Cruncher" (but I doubt it's worth the effort!)

      We are talking about the capacity to do anything and everything people can do. If a model we design can do it, then it's a candidate explanation of how we do it.

      If a neural net can learn things using a learning algorithm, that's fine: That's an explanation of how people do it (if we know the learning algorithm, and have shown it can do it).

      But this does not apply to toy pieces of human capacity (such as playing chess or go), which can be done in many ways. It applies to the full Turing Test: being able to do anything and everything we can do.

      You can keep computing real numbers till doomsday, but computing is discrete and finite, whereas real numbers are continuous and infinite. (In fact even integers are infinite, but a lower "degree" of infinity or "cardinality"...)

  14. "[...] indeed there are many computational models of different kinds of neurons. But if each individual neuron can be simulated computationally, then it should be possible in principle to simulate the whole brain by simulating the individual neurons and connecting the simulations together." - from What Is Computation
    If one were to do this - create an entire 'brain' out of computational representations of neurons - would this machine be sentient? Would this machine "feel"? As other posters have pointed out, AI presents an issue when it comes to the hard problem. While computer programs may be "intelligent" in that they are capable of performing difficult computations, or are even capable of behaving in a human-like manner, many of us would not consider them able to feel and be aware. For instance, if the program for Siri on our phones were advanced enough that talking to Siri felt like talking to a friend, many of us still would not regard Siri as something that feels.

    However, imagine if a program representing the entire brain - individual neurons, dendrites, axons, ion channels, and their connections, etc - was created computationally. Imagine that this program exactly resembled the human brain in all of its detail, but was artificial rather than biological. Would we still say that this machine does not feel and is not sentient? If so, then what more would be necessary for the machine to be sentient? If we conclude that the machine is indeed feeling and sentient, say we begin simplifying it. At what level of simplification do we conclude that the machine is no longer feeling and sentient, and is 'just a computer'?

    ReplyDelete
    Replies
    1. This comment was posted by me (Kara Smith.) I thought my Google profile was working - apparently not! I will try to fix that. Apologies!

      Delete
    2. Hi Kara.
      1. The easy problem is just to explain doing.
      2. If a model can do everything we can do, it's a candidate explanation of how our brains do it.
      3. The question of whether the model feels is the "other minds problem."
      4. If the model does feel, the question of how and why it feels is the "hard problem."
      5. 1- 4 apply whether or not the model is purely computational.
      6. Is Dominique purely computational?
      7. Simulating the brain is just simulating the brain (just as simulating the heart is just simulating the heart).
      8. The goal of cognitive science is to reverse-engineer the brain's capacity to do what people can do. (The heart model has to really pump liquid.)
      9. Whether the model does what it can do computationally or non-computationally, in a brain-like way or not, what we need for an explanation is a model that can really do what we can do.

      Delete
  15. From: What is a Physical Symbol System

    The knowledge level is what an agent knows and believes with regard to the external world, and the symbol level consists of information about how the agent is actually doing the reasoning that it is doing. It therefore seems that the symbol level has more detailed information than the knowledge level, so is it correct to compare the knowledge level to a high level of abstraction and the symbol level to a low level? It also seems that, if we were referring to a human, the knowledge level would be what a human is physically doing, while the symbol level would be what is going on inside the human’s brain to execute the actions and thoughts it has in the external world.

    Furthermore, the knowledge level is said to be sufficient to describe humans and robotic agents, so in what instances is the symbol level needed to describe a certain entity?

    ReplyDelete
    Replies
    1. "knowledge level" vs "symbol level":

      Kid-sib replies: "What on earth does that mean, and what are these levels of?"

      Objectively speaking we have behavior (what the organism does and can do: input/output capacity) and whatever is going on in its head that generates that input/output capacity.

      Computation is symbol manipulation according to rules (algorithms) based on symbols' shapes (not their meanings).
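
      To make "based on symbols' shapes (not their meanings)" concrete, here is a tiny illustrative sketch (Python, invented for this example): a rule that operates purely on the form of the tokens, with no idea what, if anything, they stand for.

          # Toy rewrite rule, applied purely to the SHAPE of the symbol strings.
          # The rule-follower has no idea what "rain" or "->" mean.
          def modus_ponens(premises):
              """If the list contains both X and "X -> Y", add Y (for any tokens X, Y)."""
              derived = list(premises)
              for p in premises:
                  for rule in premises:
                      parts = rule.split(" -> ")
                      if len(parts) == 2 and parts[0] == p and parts[1] not in derived:
                          derived.append(parts[1])
              return derived

          print(modus_ponens(["rain", "rain -> wet_streets"]))
          # ['rain', 'rain -> wet_streets', 'wet_streets'] -- the tokens could have
          # stood for anything (or nothing); the rule only matched their shapes.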

      If computationalism is true - if cognition is just computation - then the only relevant thing that's going on in the brain is symbol manipulation, and the task of cogsci is to find the algorithm(s) that generate the I/O capacity ("doing" capacity).

      The "symbol level" would then just mean computation. And perhaps the "knowledge level" could mean the capacity to do what the system can do (its "know-how") via computation.

      But the usual conflation that is made is to smuggle consciousness into the "knowledge level" -- because, in general, it feels like something to know something and to be able to do something (even if, as with remembering your 3rd grade school teacher, you don't really know how you know it or do it).

      We're going to try to avoid that, leaving aside the explanation of why and how it feels like something to know something (the hard problem) and focusing only on how we know how to do what we can do (the easy problem).

      That gets rid of the need to explain a mysterious relation between a "knowledge level" and a "symbol level": it's just the level of behavioral capacity and the level of the internal mechanism that generates the behavioral capacity.

      Hope this gets clearer as we discuss what computation is.

      But remember that computationalism is just a hypothesis: Maybe cognition is not just computation. Maybe there are other things going on in the brain, not just symbol manipulation.

      Delete
  16. "A physical symbol system has the necessary and sufficient means for general intelligent action." - Newell and Simon (1976)

    While computation does show its importance in generating intelligent action, there must also be other factors that cause something to be intelligent (not just having the ability to manipulate symbols). Human intelligence does not involve only the execution of a certain program or action; it also ties in other factors such as experiences, emotions, and awareness. AI programs can beat humans in games like Go and Chess by manipulating symbols but there are some human experiences AI will never understand. For instance, the feelings of paranoia, anxiety, and confusion during an exam cannot be reduced to simple computation. Emotional intelligence is a strong factor in how humans make their decisions and it requires more than just a physical symbol system. The ability to experience emotional intelligence and feel then becomes part of the hard problem, which goes beyond computation.

    So what are the capacities of physical symbol systems and how far can they go?
    What system, if any, can be used to explain non-computable phenomena (such as emotional intelligence)?
    How should intelligence be defined? Should it be further separated into different, more specific "forms" of intelligence?

    ReplyDelete
    Replies
    1. Hi Neil,

      There must be an underlying mechanism explaining even the most abstract and complex of things, namely emotional intelligence. I agree that humans are largely shaped by their experiences and that these experiences shape one's emotional intelligence and identity. However, I disagree that emotional intelligence goes beyond computation.

      As is stated in the reading, "An intelligent agent can be seen as manipulating symbols to produce action. Many of these symbols are used to refer to things in the world. Other symbols may be useful concepts that may or may not have external meaning. Yet other symbols may refer to internal states of the agent." If you were able to break down those experiences (i.e the internal states of the agent) into symbols, you could presumably code those symbols into computers to produce action. Would there be a difference between the "emotionally driven" action performed by the human and the identical action performed by AI? In my opinion, it is simply a matter of time before technology becomes advanced enough to replicate all human action.

      Delete
    2. I agree with Elise. Even the most complex procedures can be broken down into simpler procedures that contribute to how they happen. For example, when we fear something, there is a set of possibilities that induces that feeling. Some are more evolutionary, such as the fear of heights or spiders that is 'written in us' in a sense. Some are learned, such as the fear we get before an exam. The inputs are mostly predictable in one person, and thus could be replicated in AI, I believe. We see a certain stimulus, perceive a threat, react to it, and our bodily reactions of increased heart rate and sweating further heighten the anxiety. The word 'emotion' sounds complex, but although the pathway we use to get to some emotions might be complex and hard to understand at first, the act of feeling it and where it started from is actually not as complex, I believe.

      Delete
  17. Regarding What is A Turing Machine?

    Obviously this is not the main point of the article but nonetheless I was thrown off by the account describing how Turing could prove that not every real number was computable. Computation and numbers seem to come hand in hand so it was surprising that only a few real numbers, themselves, can be computed.

    'The decimal representations of some real numbers are so completely lacking in pattern that there simply is no finite table of instructions of the sort that can be followed by a Turing machine for calculating the nth digit of the representation, for arbitrary n.

    In fact, computable numbers are relatively scarce among the real numbers. There are only countably many computable numbers... (A set is countable if and only if either it is finite or its members can be put into a one-to-one correspondence with the integers.)'

    The proof for why so many real numbers are uncomputable seemed to be lacking. If pi with a seemingly infinite number of post decimal numbers can count as computable, what numbers are not computable? And what does the latter part of the explanation mean by 'one-one correspondence with integers'- perhaps that would clear up my confusion?

    ReplyDelete
    Replies
    1. I'm also struggling to wrap my head around non-computable numbers, but I can try to help! The best example I found online of a non-computable number is one built from the halting problem. Since you can encode every Turing machine (and its input) as a number, you can define a number whose nth digit says whether the nth machine halts; because no Turing machine can decide halting, no Turing machine can print out that number's digits, so it counts as a number that cannot be computed.

      As for the "one-to-one correspondence with the integers" part of your question, the easiest way for me to think about this kind of correspondence is to consider the set of numbers that exist on a number line between the integer 1 and the integer 2. There's 1.00001, 1.12, 1.0000004234, 1.2342342300003, 1.934735346534573453, and so on. If you were to try and count all of these numbers (i.e. assign each of these numbers an integer), you wouldn’t be able to – you’d run out of integers because there are more elements between 1 and 2 than there are integers (i.e. there isn’t a one-to-one correspondence between the two sets of numbers). George Cantor was the guy who proved this using the diagonality argument, and I found this page pretty helpful in explaining it: http://www.coopertoons.com/education/diagonal/diagonalargument.html.
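
      A worked miniature of Cantor's diagonal construction may also help (illustrative Python, with a made-up "list" of decimal expansions based on the examples above): whatever list you write down, you can build a number that differs from the nth entry at its nth digit, so no list paired one-to-one with the integers can contain every real.

          # Pretend this is the start of an attempted "complete" listing of the reals
          # between 1 and 2, written as their first ten digits after the decimal point.
          listed = [
              "0000100000",
              "1200000000",
              "0000004234",
              "2342342300",
              "9347353465",
          ]

          # Diagonal trick: make digit n differ from digit n of the nth listed number
          # (ignoring the 0.999... = 1.000... subtlety for this sketch).
          diagonal = "".join(str((int(row[i]) + 1) % 10) for i, row in enumerate(listed))
          print("1." + diagonal)  # differs from every listed number in at least one digit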

      Delete
  18. Does the Turing Test not limit what can be considered “intelligent” by requiring the entity to communicate with humans in a human-like way? I can imagine an artificial entity without any language function that can computationally perform any task a human can.

    Along the same stream of thought, according to Good’s idea of the “intelligence explosion”, a recursively self-improving machine would nearly instantly surpass human intelligence by several orders of magnitude (and continue getting smarter). If the difference between its intelligence and our own is similar to say, the difference between human intelligence and mice, why would such a machine have a desire to communicate with us?

    I do not disagree that language and the ability to communicate are essential for humans, but this is due to the fact we are social animals. Why should we presume artificial general intelligence will be as social?

    ReplyDelete
    Replies
    1. I wouldn't say that the Turing Test limits what is considered "intelligent" so much as sets the initial standard for the sort of human capacity that an artificially intelligent agent would have to emulate; in other words, the sort of recursively self-improving machine that you're describing in the second paragraph of your comment is a feat that won't be achievable in the near future (and certainly wasn't feasible during Turing's time), so we must first strive to create an AI that can at the very least perform as well as we can. Furthermore, I don’t think there’s an assumption being made that a prerequisite to a fully fledged AI is being social - that may be the doing of science fiction films and shows such as Ex Machina and Westworld. As I understand it, the Turing Test in its design is fairly simple; during a 5+ minute blind email conversation, human interrogators must be deceived at least 30% of the time. With the current goals that software engineers have for AI, it seems that the ability to process language is critical for AI as that will facilitate the generalizability of its capability in assisting humans in performing an amalgam of tasks (which is after all the main practical purpose of an AI at this time).

      Delete
    2. I am not sure why a general purpose AI created in order to assist humans would need the capacity to imitate a human being. I can concede communication would likely be necessary for such a machine, but not the imitation required by the test. As Turing suggests, in simple tasks such as arithmetic, the AI would have to wait a significant amount of time before giving the answer to difficult problems (or even say “I don’t know” if it determines the problem could not be solved by a human in a reasonable amount of time). This is not demonstrating true general intelligence but rather an understanding of the human psyche and its strengths/limitations.

      In regards to a self-improving machine being unachievable in both Turing’s and our own times, Turing himself wrote about the subject:

      “In this sort of sense a machine undoubtedly can be its own subject matter. It may be used to help in making up its own programmes, or to predict the effect of alterations in its own structure. By observing the results of its own behaviour it can modify its own programmes so as to achieve some purpose more effectively. These are possibilities of the near future, rather than Utopian dreams.”

      Many prominent modern-day AI researchers and scientists (I.J. Good, Vernor Vinge, Ray Kurzweil, Stephen Hawking) are also of the opinion that such a feat is possible.

      Delete
    3. Liza, I think as the professor pointed out, the real Turing Test is actually whether or not a human can be deceived indefinitely, not just for five minutes. So, going off of Colin's point, I think that if a computer were to deceive a human it would at least have to be able to feign emotions, if not actually have them. In that way, I think Colin's original point is quite interesting; maybe we will create a computer who is just as intelligent as humans, but who is uninterested in being social. Maybe then this points to a flaw in the Turing Test, as it shows how it focuses on intelligence that is specific to humans, rather than what intelligence might look like in different species.

      Delete
  19. RE: What is computation?
    Universality

    The idea that computers have the property of being Turing complete makes me question which other devices may also be classified as such, or even humans. To have the property of "universality" means to be able to simulate a Turing machine, according to the article, and I believe (if I remember correctly) that Turing machines are meant to mimic human intelligence. Does this mean that humans have the property of universality as well?

    ReplyDelete
    Replies
    1. From what I understand from that part in the reading, I guess universality refers to the ability of a machine (or computer) to simulate any known computer, and therefore to compute anything we know how to compute. This word seems to describe machines specifically.

      Yes, I agree that machines are first designed to mimic human intelligence, but it seems that the universality has the focus on being able to compute any function.

      (BTW I really liked your question, you made me think really hard on whether humans have some degree of universality, but then I asked myself, can humans solve all the functions, say, in math?…)
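
      As I understand it, "universality" just means that one machine can be programmed to step through the rule table of any other machine. A minimal illustrative sketch (Python; the toy machine and rule format are invented for this example):

          def run_turing_machine(rules, tape, state="start", max_steps=100):
              """Step through ANY machine given as a rule table -- that is the sense
              in which this simulator is 'universal' with respect to such machines."""
              cells = dict(enumerate(tape))
              head = 0
              for _ in range(max_steps):
                  if state == "halt":
                      break
                  symbol = cells.get(head, "_")
                  state, write, move = rules[(state, symbol)]
                  cells[head] = write
                  head += 1 if move == "R" else -1
              return "".join(cells[i] for i in sorted(cells))

          # One particular machine: flip every bit, halt at the first blank.
          flipper = {
              ("start", "0"): ("start", "1", "R"),
              ("start", "1"): ("start", "0", "R"),
              ("start", "_"): ("halt", "_", "R"),
          }
          print(run_turing_machine(flipper, "10110"))  # -> 01001_

      So universality is a claim about machines simulating machines; whether people "have" it in the same sense is a further question, though we can certainly follow any such rule table by hand, given enough paper and patience.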

      Delete
  20. After reading "What is computation?", I was wondering to what extent Turing's proof that there are uncomputable problems is comparable to Gödel's incompleteness theorem. Both proofs rely on the concept of "meta-computation" and self-reference. Is Turing's proof a sort of follow-up on Gödel's proof or am I comparing apples with oranges?
    Also, I am unsure of what "general" in "general intelligent action" means... Is it in contrast with a system that could perform only a specific type of action (in other words, does it refer to the almost unlimited range of decisions an intelligent being can take)?
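
    On the first question: the self-reference the two proofs share can be sketched directly (illustrative Python pseudocode; the halts() oracle is hypothetical and cannot actually be written, which is the point). If a halting decider existed, you could feed a program its own text and make it do the opposite of whatever was predicted, the same diagonal/self-reference trick Gödel applies to provability.

        def halts(program_text, input_text):
            """HYPOTHETICAL oracle: pretend it returns True iff the program halts on the input."""
            raise NotImplementedError  # no such function can exist

        def troublemaker(program_text):
            if halts(program_text, program_text):  # ask the oracle about self-application...
                while True:                        # ...then do the opposite of its prediction
                    pass
            return "done"

        # Feeding troublemaker its own source code yields a contradiction either way,
        # so no halts() can exist: the halting problem is uncomputable.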

    ReplyDelete
  21. “In Western culture, we tend to take our capacity for thought as the central distinction between ourselves and other animals” (Horswill, 2008).

    Why do people consider the idea that machines have the capacity for thought, without considering whether other animals do as well? And if other animals do have the capacity for thought, why should a machine need to imitate a human to pass the Turing test? If a computer could imitate an animal, what would this mean? On a side note, when I was younger, I was told that the circle was the only 2D shape that had more sides than vertices. I tried to draw very complex shapes to disprove this, but it never worked— all it did was make counting the sides and vertices more difficult. Similarly, by demanding that machines emulate humans instead of other animals, are we simply making the problem more difficult for ourselves without making it any more solvable?

    ReplyDelete
    Replies
    1. The problem is that we don't know how the brain cognizes yet (the easy problem/ how we do what we do). We can go about this problem in two ways:

      Bottom Up: The route of the Human Brain Project, simulating a mouse brain / smaller brains first, building a replica neuron by neuron.
      https://www.youtube.com/watch?v=ldXEuUVkDuw

      Top Down: Theorizing how cognition might happen, how it might be that we are different from other animals, to better guide our inquiry.

      I think you're right in that at the moment bottom-up is proving more fruitful, but some scientists argue that this type of inquiry (reverse engineering the computation of the brain) will never lead us to discover the dynamic nature of our cognitions. We need theories, like how Hebb made the switch from the behaviorist model to the cognitive psychobiological model.

      Delete
    2. I guess it’s partly a question of whether we can deem something to be “intelligent” and “understand” while not considering the hard problem as well— whether the machine would need to have some kind of subjectivity. If we’re only focusing on how we do what we do, then I see how it would be useful to examine the differences between humans and animals and produce theories about neural differences. However, I still think it would be interesting to perform a sort-of Turing test in which a machine imitates an animal over an extended period of time— because if this cannot be done effectively, there might be little use in having it attempt the more complex functions that are specific to humans.

      Delete
  22. Regarding: What is computation? (section on computational neuroscience)

    Computational models of neurons simulate how neurons behave individually and with respect to one another, but how can these models be used to show human behaviour or to explain consciousness at a broader level? If one goes about creating a computer that is a simulation of a brain, then do we assume that he/she would be working bottom up - building on simulating processes at the molecular level, then using these programs to simulate the cellular and eventually the behavioural level? Isn’t this method only as good as our understanding (or the programmer’s understanding) of the most fundamental molecular processes in the brain? Ultimately, isn’t it important that we first understand all of the molecular (or even quantum) processes in the brain before creating a computational model of the brain as a whole? Otherwise, it seems that our final product wouldn’t be an accurate simulation of how the brain works at all levels despite the fact that it may still give the same output.

    ReplyDelete
    Replies
    1. I had a similar question after reading the high-level abstraction vs. low-level abstraction part of the "Representations" page! I definitely initially agreed with you that, in making an exact model of the human brain, you would need to know the most fine-grained, low-level details before you started, and then work your way up from there. But where would you possibly start in building that kind of a model? At the neural level, each neuron is connected to a bunch of neurons that are connected to a bunch of neurons that might eventually connect back to themselves. That’s a lot of dependencies to factor in if you want to pattern a single neuron’s behavior. I think a more feasible strategy (and the strategy currently being used by computational neuroscience (but I don’t actually know for sure)) would be to start at a higher level of abstraction, like the network level, and keep working your way down until your model is behaviorally equivalent to the human brain. I still wonder if, using that strategy, we’d ever be able to create an artificial brain that functions just like a human brain at all levels, including the particle level.

      Delete
  23. This is really more of a question than a comment:

    You keep talking about the ramifications of an AI passing the Turing Test, but does that Turing Test have to be completely unrestricted? The classic T Test where a computer and a human are communicating via text (similarly to how we are communicating now) seems like it is restricted by the fact that it is only using text to communicate.

    If I could create a program that responded to each week's reading as a skywriting and you responded to it as though it were really me, would that qualify as passing the Test? I understand that this would probably qualify only as a restricted Turing Test. Although, even in an "unrestricted" test, there is only going to be a small sample of questions, so how could we ever devise a truly representative and passably “unrestricted” test?

    The reason I ask this is because many AIs have passed versions of restricted Turing Tests, but it seems to me like a "true" unrestricted test is more a theory than a reality (although I might be totally off base with this)

    ReplyDelete
    Replies
    1. To my knowledge, no AI has yet to pass the Turing Test as officially defined. Each year there is an organization that hosts a contest for computer scientists who aim to build the best chat bot. The goal is to fool a certain percentage of judges for a certain amount of time. I believe it's over 50% for five minutes but I could be wrong. Perhaps there are other versions of the Turing Test that some AIs have passed, but this is the most widely recognized and they have yet to deem any AI as passable AFAIK.

      Delete

  24. RE: Cohabitation: Computation at 70, Cognition at 20

    This article suggests that the object of cognitive science is how we connect symbols in our head to the actual objects in the world, rather than computation. So does this mean that computation refers only to the easy problem (i.e. how we do what we do), whereas the question that cognition or cognitive science is asking refers to the hard problem (i.e. why we do what we do)? Would this make the symbol grounding problem the ‘hard problem,’ as it is asking about why and how symbols in our mind can be connected to objects in the world?

    ReplyDelete
    Replies
    1. Sorry I'm trying to see if the photo uploaded (Test #1)

      Delete
    2. The hard problem is not "why we do what we do," in fact that is part of the easy problem. The hard problem is "how and why do we feel?" So symbol grounding is part of the easy problem.

      Delete
  25. Regarding: What is computation?
    "In Western culture, we tend to take our capacity for thought as the central distinction between ourselves and other animals."

    The concept of thought in this piece is too general and broad. Other animals have "thought" similar to that of human beings. It is not necessarily thought, but feeling (as discussed in class), which transcends mere thought, that is the central distinction between ourselves and other animals. As Charles Darwin wrote in his book, The Descent of Man, humans and animals only differ in degree, not kind. This may still be true for the thought/feeling between humans and animals. If that is the case, even the thought of making a distinction is filled with conceit. It may just be human egotism that is at the core of this debate.

    ReplyDelete
    Replies
    1. Are you saying animals don't feel?

      Delete
    2. I completely agree with the idea that thought is not what distinguishes us from animals and I'd like to add to this post.

      Firstly, how can we really know that this is the case? The only proven thing that differentiates us from animals is that we've managed to cull our way to the top of the food chain.
      Humans' innate abilities are there to ensure survival. They’re evolutionary traits, and the innate ability of computing would be no exception. Much like language, computing depends on an input to produce a useful and meaningful output, and language appears to be some form of computation.
      In the spirit of this thought, let's consider: if computing is innate, such as calculating and speaking a language, does the fact that we're able to handle speech at such a high rate and volume, as well as being able to process it with such efficiency and speed, mean that our survival has grown dependent on communication and interaction with each other?
      We as a species have developed to function much more efficiently in unison relative to animals, is that how Humans have managed to conquer the animal kingdom? Have we evolved simply to be able to effectively make connections with other people and to function efficiently as a community, much like a neural network?

      Delete
  26. There may be a part I have not understood, please help me clarify: According to the document on what computation is, the algorithm consisting of writing Xs on top of or under others and erasing them is not a computation. The reason given is that it does not clearly compute a function with an input and an output "value". However, according to the Turing machine document, a Turing machine is an idealized computing device. Yet, if I understood correctly, a Turing machine could technically go through infinitely many operations, such as calculating/displaying the digits of Pi. So then, irrespective of the validity of the constraints of the functional model, wouldn't the Turing machine fail to be a computing machine under the function definition? Or is it because each new decimal of Pi is technically a new output of the function?

    ReplyDelete
    Replies
    1. Only under the functional model is computing defined based on its input/output relation. The author uses the X animation example to illustrate some of the shortcomings of this model, since the algorithm used to animate the X's seems like it should be computation - and under the imperative model, it is. This model of computation is one based on the execution of commands according to an algorithm. In this sense, the Turing machine too can be considered computational even if it's not computing a function per se.
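
      A toy contrast may make the two models vivid (Python, invented for this example): the same squaring task seen functionally, as an input/output mapping, and imperatively, as a sequence of commands like the X-writing animation.

          # Functional view: a computation IS a mapping from input to output value.
          def square_functional(n):
              return n * n

          # Imperative view: a computation IS the execution of commands, step by step,
          # like writing (and erasing) X's; no single "output value" is essential.
          def square_imperative(n):
              marks = []
              for _ in range(n):
                  for _ in range(n):
                      marks.append("X")  # "write an X"
              return len(marks)

          print(square_functional(7), square_imperative(7))  # 49 49

      A Turing machine that prints the digits of Pi forever fits the imperative picture even though it never hands back one finished "value"; under the functional view you would instead treat each request for the nth digit as an input with its own output.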

      Delete
  27. On "What is Computation"
    "For one thing, if brains can be simulated by computers, then computers could be programmed to solve any problem brains can solve, simply by simulating the brain. But since we know that computers can’t solve certain problems (the uncomputable problems), that must mean that brains can’t solve the uncomputable problems either."

    Clearly, there are "problems" that cannot be solved by both human brains and modern computers.
    However, even if we were able build a computer able to simulate the human brain from a neuronal information capacity and synaptic plasticity standpoint (a cause for which computational neuroscience is hard at work as can be seen here https://en.wikipedia.org/wiki/Mind_uploading#Computational_complexity), and additionally assuming that somehow we were able to "program" it to emulate the same processes of human problem solving, does it stand to reason that it should be able to solve any problem a human can?


    To rephrase, are there hypothetical "problems" that a computer--no matter how emulatory of the function of the human brain--could not solve?
    With the proper apparatus of sensory input and other information, a computer could realistically identify and categorize "human-specific" experiential and affective traits and emotions (as some facial tracking programs have begun to show promise), but could a computer truly solve the "problem" of empathetic human experience solely off the fact that it functions at a chemical level and with the same architecture of a human brain?

    ReplyDelete
  28. On "What is Computation"
    "For one thing, if brains can be simulated by computers, then computers could be programmed to solve any problem brains can solve, simply by simulating the brain. But since we know that computers can’t solve certain problems (the uncomputable problems), that must mean that brains can’t solve the uncomputable problems either."

    Clearly, there are "problems" that cannot be solved by both human brains and modern computers.
    However, even if we were able to build a computer able to simulate the human brain from a neuronal information capacity and synaptic plasticity standpoint (a cause for which computational neuroscience is hard at work as can be seen here https://en.wikipedia.org/wiki/Mind_uploading#Computational_complexity), and additionally assuming that somehow we were able to "program" it to emulate the same processes of human problem solving, does it stand to reason that it should be able to solve any problem a human can?


    To rephrase, are there hypothetical "problems" that a computer--no matter how emulatory of the function of the human brain--could not solve?
    With the proper apparatus of sensory input and other information, a computer could realistically identify and categorize "human-specific" experiential and affective traits and emotions (as some facial tracking programs have begun to show promise), but could a computer truly solve the "problem" of empathetic human experience solely from the fact that it functions at a chemical level and with the same architecture as a human brain?

    ReplyDelete
    Replies
    1. I tried to comment on your question here but I was not very familiar with the system and I ended up publishing it as a separate comment. Feel free to take a look at my comment below.

      Delete
  30. There seems to me to be a fundamental difference in carbon- and silicon-based computation that goes beyond hardware. Neural computation is analog and continuous, generating action potentials against a background of spontaneous noise, vulnerable to modulation in excitability. On the other hand, microchips are discrete and binary, with only two possible states and a fixed rate of transmission. How, if at all, do these differences in computation translate to differences in functionality or computability?

    ReplyDelete
  31. Re: What is a Physical Symbol System?

    Whether the physical symbol system hypothesis is valid depends on what exactly is meant by “general intelligent action”. Human intelligence involves the capacity to solve infinite problems, to use prior knowledge from which to extract relevant details, and to think fluidly and dynamically to adapt to novel situations: their plasticity is their symbol system. Computers, despite their ability to “create, destroy and modify symbols”, are programmed for specific tasks and often encounter difficulty when faced with a problem within an “un-modeled world”. The article describes the “delivery robot” who can perform its specific task at a high level of proficiency. Even if it’s able to model its world with a low level of abstraction, and do its task flawlessly, if you were to ask anything beyond the scope for which it’s been programmed, it wouldn’t be able to answer, because its physical symbol system cannot create a model which allows it the ability to form an opinion on another subject matter. An observer might consider the 'intelligence' of the delivery robot as being high within its scope, but not beyond what it’s been programmed for. Could that same agent use its symbol system to model a completely different environment and have it solve an entirely new problem? If that is achievable then the hypothesis holds true, if it is not, does the agent truly have “the necessary and sufficient means for general intelligent action”? Ultimately, it depends on how one defines “intelligent action”: is intelligence how well you can complete a certain task or is it the ability to think abstractly, and thus, adapt to new or unexpected situations?

    ReplyDelete
  32. I find the comparison between brain and machine interesting, and it definitely brings up questions and thoughts leftover from watching the movie "Her". For those who haven't seen it, it basically outlines a possible near-future in which true AIs are developed as personal-assistant-esque software, leading to a society in which AIs are increasingly seen as real people to the extent of becoming friends and/or lovers. I think it's interesting to think of humans and machines being limited in very different ways. As there are problems that computers can't solve that humans can (as in the "uncomputable problems"), there are also things that humans can't do that computers can. The most obvious one I can think of is in terms of processing power – while human brains are unimaginably powerful, they are vastly overshadowed by computers when it comes to processing large amounts of data to orders of magnitude that we can't even imagine. I came away from watching Her with the tentative conclusion that what defines us as humans (when contrasted with AIs) is actually our limitations: we as humans construct our identities based on what we choose to think or do with our limited time and capacity. The greatness of a human concept such as love feels vastly cheapened when, as in "Her" (spoilers ahead), an AI reveals themselves to be talking to many people at once, and in love with many people at once. In a culture so built around monogamous love, in this context it seems that what makes any of our monogamous love so great is our singular devotion to it – in other words, that we choose to devote the entirety of our very limited existences to it. Perhaps, if we start thinking of our identities as humans through our limitations, we can delineate more clearly what it means to be a computer as well.

    ReplyDelete
    Replies
    1. To clarify, this was mainly in response to the "Who Am I?" tangent on page 17 of the "What is computation" reading.

      Delete

  33. When a computer is able to simulate a brain we say that they are behaviourally equivalent. However, this is assuming that the brain is a closed system, when in fact it has been shown that other components of the human body, such as gut microbes, can influence behaviour. This made me think of something I read this morning in the Economist about 2017 potentially being the year of the first successful brain transplant. Can we assume that the brain on body X will be behaviourally equivalent to the brain on body Y if the body Y is influencing the brain in different ways than body X? This relates as well to Pylyshyn’s piece where they discuss whether or not architecture influences how an algorithm is carried out. It will be interesting to see if the brain/body transplant is successful, and whether or not the man’s behaviour will be impacted.

    ReplyDelete
    Replies
    1. This comment has been removed by the author.

      Delete

    2. Lucy, the gut microbe point is very interesting to me. It makes me think of how the brain is more 'imperfect' than a machine could ever be, and for a machine to actually pass the Turing test it would have to be able to mimic these quirks that brains have. Especially because cognition can be influenced by so many things (like you said, microbes, or also mood, hunger, etc.)

      I think then the 'brain' Turing refers to would have to be a hypothetical perfect brain that doesn't have influence from these other factors in order for it to be comparable to a machine.

      Delete
  34. These three readings were my first introduction to the concept of computation. It is fascinating to me that while humans invented the computer, we don't fully understand what computation is, or how it works. The one thing I couldn't stop thinking about while doing these readings was, humans made the computer, but humans also are the computer (even the universe is a computer??). The second reading on "what is computation" was incredibly enlightening to me. The kid-sib-ly writing style was helpful to a naive thinker when it comes to computation. I appreciated the spelled out examples of describing arithmetic to children, especially when using the tip percentage example to explain representation.

    ReplyDelete
  35. Horswill, 2007/2008: “What is Computation?”

    RE: Metarepresentations.
    Computers encode different representations in the metarepresentation of binary, while brains encode different representations in the metarepresentation of neural activity. A neuron can be interpreted as having a binary function since it either fires (1) or it doesn’t (0). What are the limits (if any) to this analogy? How is binary encoding different from neural encoding?
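
    A small illustrative example (Python, invented here) of why binary on its own is only a code: the very same bit pattern stands for different things depending on how it is read, so the bits carry no "representation" without a scheme of interpretation.

        bits = 0b01000001_01001101               # one 16-bit pattern

        print(bits)                              # read as an unsigned integer: 16717
        print(bits.to_bytes(2, "big").decode())  # read as two ASCII characters: 'AM'
        print(list(bits.to_bytes(2, "big")))     # read as two "pixel intensities": [65, 77]

    Whether a train of action potentials is "encoding" in this sense, rather than just physical activity we choose to describe that way, seems to be part of what the question is asking.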

    RE: Fundamental limits to Human Knowledge
    Are humans as susceptible to the halting problem as Turing Machines are? If so, what does this entail about the “fundamental limits of human knowledge?” Gödel's Incompleteness Theorems state that a consistent system of axioms rich enough for arithmetic contains true statements it cannot prove, and cannot prove its own consistency from within. Are the limits to human knowledge defined specifically by knowing what we can or cannot possibly know?

    ReplyDelete
    Replies
    1. What is a "representation"? What is a "metarepresentation"? Binary is a code. Action potentials don't mean there's computation going on. (There might be.)

      The halting problem and the Goedel theorem are about what is and is not computable. If something is not computable, neither humans nor computers can compute it, whether or not cognition is computation. But uncomputable data can be gotten by means other than computation, e.g., luck.

      Delete
  36. This comment has been removed by the author.

    ReplyDelete
  37. With respect to today's discussion about cognizing and vegetative functions, are we assuming that the ability to perform vegetative functions is not a criterion of the Turing Test? Would a robot still be at the T3 level if it can't necessarily do everything that we can do?

    For example, if Dominique and I spent 48 hours together and I noticed that she never ate or slept, then this would automatically make me suspicious.

    ReplyDelete
  38. Through its explanation of meta-representations, the article lays out the logic of how we can define a computational problem. It is made explicit that this does not mean we know what computation is. This made me wonder: what kinds of events or situations can provide a direct example of computation at work (if any)? And could that be equated with the same logic that explains a computational problem?

    I am unsure as to the validity of this research (see link http://noosphere.princeton.edu/), but The Global Consciousness Project may be such an example. If a variation in consciousness precedes computational output, this may hint at the hard problem of consciousness of how our brain functions the way it does. At that, the meta-representation of shared consciousness (such as experienced stress during a natural disaster) may be deduced through other shared experiences (such as empathy, “mirror neurons” [although this theory has issues of its own…], down to basic cell homeostasis). It seems that if we are to explain how our brains function, we are to know why, and I don’t think any computation could give meaning to existence, but rather, prove its possibility. This involves retaining introspection as a natural product of existence, without denying the validity in the logic of computational problems used to explain the easy problem of consciousness.

    Perhaps we are looking at things backward if looking at models computation to explain cognition. Could it be that our cognition is our environment, and computation is a proof of that function?

    ReplyDelete
    Replies
    1. Krista, There are two questions:

      (1) What is computation? and

      (2) is that what cognition is?

      But here you're getting a bit ahead of yourself and mixing up the two.

      Computation is the manipulation of symbols based on their shapes, according to formal rules.

      Cognition is whatever is going on in people's heads that generates (causes) them to be able to do what they can do (like thinking, learning and language).

      Cognitive science is trying to "reverse-engineer" what is going on in people's heads: How and why are people able to do what they can do (like learning and language): What is the mechanism? That's the "easy" problem.

      Besides being able to do what they can do, they also feel (i.e., they are conscious). Explaining how and why people feel rather than just do is the "hard" problem. (Hard, because once all doing has been explained, feeling seems to be superfluous, causally.)

      Mirror neurons will come in two weeks. They don't explain anything.

      The "Global Workspace" is a (not very informative) attempt to explain consciousness. It does not explain how and why we feel. It just interprets what's going on in our heads in a homuncular way.

      Delete
  39. RE: Are there hypothetical "problems" that a computer--no matter how emulatory of the function of the human brain--could not solve?

    I'm assuming that you are referring here to a problem that the human brain CAN solve. In this case, I think this piece of text from "What is Computation?" might be very relevant to your question :
    "It’s always possible that one computer might have some kind of special command that simply cannot be simulated by the commands of another, and therefore that it might be able to compute functions that the other cannot."

      The text suggests that the problem of sufficiency of commands can be solved by the Universal Turing Machine. To go back to your question, I think if we build a computer as good as the Universal Turing Machine, in terms of the capacity of its commands, it should be able to solve any problem solvable by the human brain. In other words, I think your assumption that the computer is a perfect simulator would guarantee that it can reach perfect problem solving at the human level.

      That being said, as we discussed in class, there is more to human cognition than problem solving. I think the problem-solving aspect of cognition can be captured entirely by computation, as it entails executing commands and manipulating representations, which are the very definition of computation.

    As for the last part of your question and "solving the problem of empathetic human experience", I think you are thinking about the hard problem of cognition to which we might not have a satisfying answer for a while.

    Maybe we can put this hard question like this: If we succeed in discovering how the brain generates our cognitive capacities by reverse engineering, would the answer to this easy question help us understand something about the hard problem of cognition and shed some light on how we feel what we feel?

    ReplyDelete
    Replies
    1. Dorsai, I agree with you in that reverse engineering and understanding how our cognitive capacities are generated are essential in order to begin understanding the hard problem. However, the only concern I have is that the hard problem, having feeling, is much more complex in my opinion and affected by many different factors.

      While brain circuitry and cognitive processes may be fairly uniform in most people, how we experience the world and how we actually feel can be greatly affected by personal experiences and genetics. Because of this, even though I believe figuring out the easy problem will shed some light on how we feel what we feel, it will still be very challenging to replicate this ability to feel and be empathetic in a machine that simply won't have all of the experiences that humans have which contribute a lot to our ability to feel and be emotional.

      Delete
  40. It took me a while to grasp the full impact of these readings. As a cognitive science student somehow in all the courses I’ve taken that touched on cognitive processes I never came across the concept that computer models could give us insight into cognitive function and that this could be a valid methodology for understanding how our minds do what they do. Although now I have a much greater appreciation for the potential of this method I do still have some reservations. Nonetheless I find it very exciting to think about the possibilities!

    My primary point of contention hinges on the assumption of the hardware being “irrelevant” because the “hardware is independent from the software”. There are two reasons why I take issue with this point. The first is that in terms of practical applications, cognitive scientists (particularly those in the psychology or neuroscience streams) are often looking for ways to modify cognitive functions or fix cognitive deficits. How can this be done without an understanding of the underlying hardware? The hardware is thought to be irrelevant because of behavioural equivalence, that is, if two machines (A and B) are able to give identical, correct outputs, we don’t care about the processes. However, if there were an error in machine A, understanding how machine B works would not necessarily tell you how to fix the error, since the processes or “code” are different. I think this is clear to anyone who’s taken a computer science course and done complex programming. Of course I’m sure using some deductive logic and having a general understanding of the problem could give insight into the system (which is undeniably useful), however the greater the difference between the hardwares the more difficult this becomes. And in the case of human brains, cognitive deficits can be highly specific and bizarre (for example, very specific linguistic deficits) and this is often due to a disruption in a particular neural circuitry.

    I think that computer modelling is limited in application if it is not closely correlated with the existing patterns of neural circuitry in our brain. If you are able to model these circuits (even crudely) you have created a powerful experimental model to test the outcomes of various aspects of the circuit (and the effect of disrupting a specific part). However, in order to do this you need to have a basic working knowledge of the human brain and must draw upon other methodologies within neuroscience. I think this also demonstrates the importance of finding the proper “level of abstraction” for accurate modelling of a system. I think the point I’m trying to make is that if two hardware systems (ie the “how”) are vastly different, then even though the outputs (or “what”) may match it can only provide information about the functioning of the systems at the most abstract level, and I question if information that abstract is really useful. As well, this certainly does not do much in terms of passing the Turing Test if we are only able to model very specific cognitive functions independently when in fact the goal is to create a machine that can do everything we can do which would mean integrating numerous cognitive functions seamlessly.
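
    The behavioural-equivalence worry above can be made concrete with a toy sketch (Python, invented example): two procedures with identical input/output behaviour whose internal "processes" are entirely different, so knowing how to repair a fault in one tells you little about repairing the other.

        # Two behaviourally equivalent "machines": same I/O mapping, different internals.
        def sort_by_comparison(xs):
            xs = list(xs)
            for i in range(len(xs)):                 # repeatedly swap out-of-order neighbours
                for j in range(len(xs) - 1 - i):
                    if xs[j] > xs[j + 1]:
                        xs[j], xs[j + 1] = xs[j + 1], xs[j]
            return xs

        def sort_by_counting(xs):
            counts = {}
            for x in xs:                             # tally each value, then rebuild in order
                counts[x] = counts.get(x, 0) + 1
            return [x for x in sorted(counts) for _ in range(counts[x])]

        data = [3, 1, 2, 3, 0]
        print(sort_by_comparison(data) == sort_by_counting(data))  # True: identical behaviour

    At the level of input/output the two are interchangeable, but a "lesion" inside one would call for a completely different repair than a lesion inside the other, which is the point about cognitive deficits above.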

    ReplyDelete
  41. Brains and computers are similar in the sense that they both take input and execute programs to get an output. Hypothetically speaking, if we were ever to create a machine that has the exact same capacities as a functioning human brain, at what point would that machine be considered to have consciousness? How would we know that its thoughts were its own, and not just a result of the software that it's running?

    ReplyDelete