Saturday 2 January 2016

2a. Turing, A.M. (1950) Computing Machinery and Intelligence

Turing, A.M. (1950) Computing Machinery and Intelligence. Mind 59: 433-460

I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words. The new form of the problem can be described in terms of a game which we call the "imitation game." It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B. We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"




1. Video about Turing's work: Alan Turing: Codebreaker and AI Pioneer
2. Two-part video about his life: The Strange Life of Alan Turing (BBC Horizon documentary)
3. Le modèle Turing (video, in French)

107 comments:

  1. RE: An interesting variant on the idea of a digital computer is a "digital computer with a random element." These have instructions involving the throwing of a die or some equivalent electronic process; one such instruction might for instance be, "Throw the die and put the resulting number into store 1000." Sometimes such a machine is described as having free will (though I would not use this phrase myself). It is not normally possible to determine from observing a machine whether it has a random element, for a similar effect can be produced by such devices as making the choices depend on the digits of the decimal for π.

    This made me think of a somewhat tangential question to Turing's "Can machines think?" I think most people would agree that free will is a fundamental difference between humans and computers. While humans have agency, computers are simply responding to symbols according to rules. Does adding a random element to the digital computer account for free will? More simply, is free will random? In the same way that Turing states that we cannot determine by observation whether the machine is operating randomly, how can we determine whether other humans are operating randomly?
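
    To make that point concrete, here is a minimal sketch (my own, not from Turing's paper) of two "die-throw" instructions, one backed by a genuine random source and one backed by a fixed deterministic sequence (the digits of pi, the very device Turing mentions). Watching only the numbers that land in the store, an observer has no way of telling which machine contains the random element:

      import random

      # Hypothetical illustration of Turing's instruction:
      # "Throw the die and put the resulting number into store 1000."
      store = {}

      def throw_die_random():
          # draws from the system's random number generator
          return random.randint(1, 6)

      PI_DIGITS = "1415926535897932384626433832795028841971"  # digits of pi after "3."
      _i = 0

      def throw_die_deterministic():
          # a "die" with no random element at all: read off successive digits
          # of pi, keeping only those that fall in 1..6
          global _i
          while True:
              d = int(PI_DIGITS[_i % len(PI_DIGITS)])
              _i += 1
              if 1 <= d <= 6:
                  return d

      store[1000] = throw_die_random()         # machine with a random element
      store[1001] = throw_die_deterministic()  # machine with none at all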

    ReplyDelete
    Replies
    1. I would argue that our free will is far from random. While it might seem as if some of our behaviors or decisions are random, the way we behave is the product of every experience in our lives leading up to that moment. That's not to say that we are stuck with one specific response to a given stimulus; our free will does let us decide. However, even if our decision is a seemingly random coin flip, we CHOSE to respond with a random choice through our will, which in turn was driven by our past. This is different from a computer having random responses hard-coded into its "decision making".

      Delete
    2. But do we really have free will?

      https://vimeo.com/75647511

      To what extent are our actions determined by our genes, our past, the laws of physics, and to what extent are they authored by "us"? And by "us", are we just talking about a homunculus? How are its decisions authored?

      The infamous Libet Experiment has shed some neuroscientific light onto this question, with researchers being able to predict the decisions of participants using the so-called Readiness Potential, and this before the participants themselves were aware they had even made a decision.

      Perhaps you're talking about another sort of thing, what I'd call autonomy, the ability to make decisions free of coercion (e.g., a gun at your head) or compulsion (e.g., a brain tumor). This is certainly a real thing, but it doesn't seem like anything an appropriately-designed computer could not emulate.

      Delete
    3. I think that, generally speaking, free will in computers would be the ability to program themselves, what the reading describes as ‘control’. I think that for as long as humans program these controls, machines cannot have ‘free will’. However, I understand the argument that every experience a person has causes some epigenetic changes that alter the original ‘control code’, a.k.a. DNA, in some way or another, thereby showing humans to lack ‘free will’ as well. Additionally, there are many examples of machines learning and altering their ‘code’.
      I don’t know how productive this is for the imitation game. If a machine is able to ‘learn’ and produce the same output as humans, then shouldn’t that be enough?
      Additionally, Michael, what you were saying about the Libet experiment made me think of Laplace’s view from the reading. If you could find a more accurate way to measure global potentiation in the brain, could you theoretically predict every thought a person has and every action they will take?

      Delete
    4. Turing machines are discrete-state machines, and by extension so are computers, where "given the initial state of the machine and the input signals it is always possible to predict all future states." This fundamentally contradicts any human idea of free will, since the future state is always known if we know the current state and the input signals (see the toy transition table at the end of this reply). Even probabilistic mechanisms cannot allow for free will, because the outcome is determined by random probability, with no plausible alternative, and not by any kind of free exertion of will.

      As for epigenetics and DNA allowing for learning: the idea of a machine altering its own original programme implies that there must be some code written in the original programme that guides this alteration and provides the rules by which inputs change the original code, giving rise to learning. It follows that all possible learning must already have been stipulated in the algorithm. Considering that, free will has no room in machine learning either. Instead, within the program there must be a "Laplacian" computational table.

      I believe that invoking free will as a metaphysical commitment is not at all necessary for understanding or re-constructing human cognition.
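
      To make the determinism point in the first paragraph concrete, here is a toy discrete-state machine (my own illustration, not an example from the paper): once the initial state and the input signals are given, the transition table fixes every future state.

        # Toy discrete-state machine: given the initial state and the inputs,
        # the transition table determines every future state.
        TRANSITIONS = {
            ("q0", 0): "q0", ("q0", 1): "q1",
            ("q1", 0): "q2", ("q1", 1): "q0",
            ("q2", 0): "q1", ("q2", 1): "q2",
        }

        def run(initial_state, inputs):
            state = initial_state
            history = [state]
            for signal in inputs:
                state = TRANSITIONS[(state, signal)]   # no choice point anywhere
                history.append(state)
            return history

        # Same initial state + same input signals -> the same trajectory, every time.
        print(run("q0", [1, 0, 0, 1]))   # ['q0', 'q1', 'q2', 'q1', 'q0']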

      Delete
    5. The concept of "randomness" is interesting to me since some studies have shown that our conception of the quality of "randomness" is actually quite different from randomness in the strictest, statistical sense. For example, if you tossed a coin 100 times and recorded the sequence, and then asked a human to make up a sequence of 100 random coin tosses, the human would be biased towards shorter continuous strings of the same result (e.g., "oh, there have been 5 heads in a row, better do some tails"), skewing the end result away from a truly random string. I'm not entirely sure if this can be used as part of a Turing Test, however; we could quite possibly program these human biases into a computer's "randomness algorithm" to produce a kind of "humanly random" sequence.
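
      As a rough sketch of how that "humanly random" generator might look (my own toy model; the switch bias and the numbers are made up), one can simply make a switch more likely the longer the current run of identical results has gone on:

        import random

        def truly_random_flips(n):
            return [random.choice("HT") for _ in range(n)]

        def humanly_random_flips(n, switch_bias=0.7):
            # after a run of identical results, become increasingly likely to switch
            flips = [random.choice("HT")]
            for _ in range(n - 1):
                run_length = 1
                while run_length < len(flips) and flips[-run_length - 1] == flips[-1]:
                    run_length += 1
                p_switch = min(0.95, switch_bias * run_length / 3)
                if random.random() < p_switch:
                    flips.append("T" if flips[-1] == "H" else "H")
                else:
                    flips.append(flips[-1])
            return flips

        def longest_run(flips):
            best = cur = 1
            for a, b in zip(flips, flips[1:]):
                cur = cur + 1 if a == b else 1
                best = max(best, cur)
            return best

        print(longest_run(truly_random_flips(100)))    # often 6 or more
        print(longest_run(humanly_random_flips(100)))  # usually noticeably shorter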

      Delete
  2. Although I agree that machine learning gives good hope for thinking computers by letting them program themselves and giving them more personality, two issues can be raised. First, although a child machine may seem easier to reproduce, it only pushes the question further back instead of solving it: only superficial, experience-related behaviour is removed, while we still need to figure out what basic computing is needed to let the machine think. Second, and related to the first issue, relying too heavily on learning could lead back to behaviourist theories, which we have seen are not the solution for cognition.

    ReplyDelete
    Replies
    1. I agree that an undue focus on machine learning is just pushing the question back further. It seems that the question of “how can machines think” is substituted with “how can machines learn”. I also agree that using terms like punishments and rewards to explain the learning process makes this look more behaviourist. My question is: is there any way to defend Turing’s idea of machine learning in relation to his explanation of digital computers? It seems that his idea of learning machines can make sense in terms of programming. A digital computer could be programmed with techniques for solving problems; these techniques can be programmed to find, generate, or select other such techniques. An analogy would be ant colonies that use pheromones to eventually find the most direct paths to a food source. This might be a starting point to show that his idea of machine learning isn’t relying on behaviourism.
      Incidentally, Turing’s idea of a learning machine seems to also refute Lady Lovelace’s Objection, that “the analytical engine ... can do whatever we know how to order it to perform”. As in the reading for 2b “The Annotation Game”, having rules govern a digital computer system doesn’t mean that everything it does can be predicted. A machine doesn’t need to originate anything, but it can learn.
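
      To make the ant-colony analogy a bit more concrete, here is a very rough sketch (my own toy example, not anything from Turing) of one programmed technique selecting among other techniques by reinforcing whichever ones have recently worked, much as pheromone trails reinforce short paths:

        import random

        # Three hypothetical problem-solving techniques and a "pheromone" weight for each.
        def add_one(x):  return x + 1
        def double(x):   return x * 2
        def square(x):   return x * x

        pheromone = {add_one: 1.0, double: 1.0, square: 1.0}

        def meta_step(x, target):
            pick = random.choices(list(pheromone), weights=list(pheromone.values()))[0]
            if pick(x) >= target:          # stand-in for "this attempt worked"
                pheromone[pick] += 0.5     # reinforce, like laying pheromone on a good path
            else:
                pheromone[pick] *= 0.9     # let unhelpful trails evaporate

        for _ in range(200):
            meta_step(3, 9)

        print(max(pheromone, key=pheromone.get).__name__)   # settles on 'square' here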

      Delete
    2. We addressed something similar to the first issue in class, where we discussed to some extent the idea of a child machine. In this case, we are discussing a T3 robot with the capacities of a child. I don't necessarily agree that reverse engineering this pushes the question further back or replaces Turing's question of "can machines think". Yes, a child machine may seem simpler; however, it doesn't derail from the ultimate goal, which is to reverse engineer a robot that passes the TT, such that, because it passes the test, it is almost as likely that it thinks and feels as other people do. If I remember correctly from the discussion in class, in creating the robot there is no baseline to be drawn other than its cognitive capacity, and its passing the TT is what establishes that this capacity is comparable to humans'.
      For the second issue, I agree with Austin about Turing's proposed idea of machine learning. It seems that Turing's proposal doesn't depend on behaviourism, but on how the machine will react to "the changes it might undergo," which are time-invariant. As the machine is programmed to interact with its surrounding input and then modify its own programming in order to solve problems, perhaps that is the "learning" of the machine.

      Delete
    3. Creating a machine that can learn and adapt to changes is not sufficient to pass the Turing test. If you want it to learn to do what humans can do you need to presuppose and provide the potential to do all this. It should already "think", and learning would only make it think better. So we are back to the starting point.

      Delete
    4. One thing that I think needs to be clarified here is that this part is suggesting not that a "child" robot would be attempting to pass the Turing test by being tested against a child human, but that Turing is suggesting that a child robot can be created which can then learn just as a human would and achieve adult intellectual capacities to be able to pass the test as an adult, against an adult human. Hence, this is not exactly what we were discussing in class.

      Further, it seems to me that the feat of creating a child robot is viewed as easier than it actually is. I would argue that parts of it would be harder and others easier. It may be harder to program how a robot would learn each different aspect of intelligence that humans are able to learn. For example, it is still heavily debated which types of grammatical information humans have innately and which are learned. It would take a long time to fine-tune something like that to work as the human system does. Also, it would be difficult to ensure that the robot has the ability and capacity to learn everything that it needs to, which would still need to be programmed somehow. The main short-cut a child robot would provide would be to give it the ability to learn scientific facts, vocabulary, and other one-to-one information, instead of having it all programmed in directly.

      Delete
  3. This comment has been removed by the author.

    ReplyDelete
  4. From “Contrary views on the main question”: “I believe that in about fifty years' time it will be possible, to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.”

    I have a (long… sorry!) clarification question about Turing’s imitation game. In the above passage, he predicts that computers will be so successful at playing the imitation game that the interrogator will have less than a 70% chance at correctly identifying the computer as a computer and the human as a human. Let’s say we successfully designed this amazing computer that is totally indistinguishable from a human when you’re talking to (or when you’re emailing with) her. So to prove that this robot is intelligent, we have her play the imitation game against a real human. If the robot is just as good as a human when it comes to conversation, how could the interrogator have more than a 50% chance of being correct? In other words, if the computer and the human are truly matched in ability, wouldn’t this leave the interrogator at a 50/50 guess for deciding who’s who?

    ReplyDelete
    Replies
    1. Here he doesn't say the computers in question will be indistinguishable from humans. He says they will be able to "fool" the tester at least 30% of the time, which is a much weaker claim.

      Delete
    2. Hi Olivia!
      I definitely see your logic here, but this quote helped me to understand the idea a little better: "Surely the goal is not merely to design a machine that people mistake for a human being statistically as often as not! ....No, if Turing's indistinguishability-criterion is to have any empirical substance, the performance of the machine must be totally indistinguishable from that of a human being -- to anyone and everyone, for a lifetime". So in terms of the TT, it's not about whether the machine is able to trick the investigator into thinking they're human, because we're looking for REAL performance capacity. I think Turing complicates things when he starts talking about the chances of correctly identifying the computer as a computer; the whole point is that we're trying to build a machine whose performance capacity is indistinguishable from that of a real human. So the investigator should truly believe that they're interacting with two humans. Not sure if that clarified anything for you, hopefully it did!

      Delete
  5. RE: In response to Alan Turing’s question, “Can a machine be made to be supercritical?”

    From what I understand of the imitation game, for the machine to do what an average human can do would require a sort of domino effect to occur within the machine. In order for it to be supercritical it would have to have added together many ideas/experiences into a new whole that is independent of its programmed set of rules. In other words, it would need to be able to “learn”/think.

    In order to test this, Turing suggests that one should program a child’s mind and then teach it, in order to “simulate” an adult mind. A judge would then rate its capacity to learn.

    While trying to stay within the bounds of the “easy problem”, would the acquisition of compassion be considered necessary for adult intelligence? It can be argued that reasoned adult decisions cannot be made without this. For example, would a nurse AI, although knowing what steps to follow, be able to compute and make the best decision without this type of intelligence?

    ReplyDelete
    Replies
    1. The issue of compassion is interesting. Two issues come to mind. If, instead of asking this about machines, we asked this about the humans around us, I’m not sure we would answer for sure that the acquisition of compassion is necessary for ‘adult intelligence’/thinking. Second, relating to the class examples of Dominique, even if we considered compassion necessary, we would only be able to judge compassion in other (presumably) humans based on their performance. A machine could seemingly be programmed to perform as we all perform compassion (ideally); it would be indistinguishable.

      Delete
    2. I think it's wise to distinguish here between emotional and cognitive compassion. There is a feeling associated with being compassionate, but there are also ways to reason oneself into compassion in a dispassionate way and achieve the same effects. Perhaps we could program or teach this latter kind of compassion, or witness it emerge from more fundamental processes.

      Delete
    3. I think the capacity for compassion/ empathy/ sympathy is a necessary component of human social intelligence, as it seems to guide our social behaviour, but so does the fear of rejection, guilt, shame etc. I think you're opening up the can of worms of emotions with your question, and in essence asking "do we need to simulate emotions to create machines with human-like intelligence?" What about morals and value structures? We tend to feel as if our emotions, our values and morals, govern our behaviour. To create machines that are intelligent in all the ways we are (T3), do we need to create a learning algorithm that, likely through rewards and punishments, gives the computer the tools to learn the rights and wrongs of our social world? Anyway, I think we need to simulate basic categorization/ cognition before we add in the complexity of emotions/ social interaction.
      Also, do we need to make machines with this capacity for any reason other than to understand our own cognition? Humans make great nurses. I reckon that AI that tries to behave compassionately, that doesn't have the capacity to feel, would be pretty creepy, and fall into the Uncanny Valley.

      Delete
    4. I’m not sure if the ability to have compassion would be necessary as much as the ability to identify compassion, and other such feelings, would be. For T2 this could be difficult, since oftentimes written items carry fewer/differently interpreted emotions than their authors intended, but to teach the child computer how to identify different emotions could lead into story-interpreting, like the burger scenario where humans would know whether or not the man enjoyed his burger without the story explicitly stating so. Once a child machine has learned to pick up on those subtle context clues that denote anger and sadness and compassion, and of course others, more steps can be taken toward story interpretation without having to ‘give’ the machine emotions at all.

      Delete
  6. Since Turing's time, much progress has been made in the capacities of computers and machines. Humans interact with computers everyday and technological advances have made machines smarter, faster, and more powerful. With this in mind, would Turing still rephrase the question "Can machines think?" or better yet would he come up with a different, more updated question that takes into account all the technological advancements we have made in the area of Artificial Intelligence? Would there be more to add to the Imitation Game or would there be some components taken away?

    ReplyDelete
    Replies
    1. Personally I don’t think Turing would change the question on machine thought, or the Imitation Game, at all. From the start of the experiment he addresses the idea of futuristic scenarios where people may be “dressing up” machines in artificial flesh. In order to account for future sight or touch technologies, he ensures that the machines are all in separate rooms so that there is no way the interrogator can make inferences from appearance. I believe that in an Imitation Game setting, machine processing speed would be the biggest difference between machines in 1950 and machines today. To account for this, I think it would be important to have all answers (both from the person and the machine) be written responses, and potentially to have a set time for responses to be delivered (e.g., the interrogator receives the response 30 seconds after asking the question). As Turing specified that he only wanted digital computers used in the game, I believe this would be a reasonable track to follow, as most computers used today are digital.

      Delete
    2. RE: "Would Turing come up with a different, more updated question that takes into account all the technological advancements we have made in the area of Artificial Intelligence? "

      I think Turing takes these technological advancements into consideration when he argues that progress in the design of digital computers might change our vocabulary and our use of the word "thinking" with reference to machines. However, the main question underlying "Can machines think?" is "What is cognizing and how do cognisers do it?". To answer this question, Turing proposes the imitation game instead of an investigation of the use of language, which might be highly influenced by our advancements in technology and our culture of using them.

      I do not think that technological advancement would change the nature of the question the imitation game is trying to answer. The imitation game assumes that "cognition is as cognisers do", and I don't see any component of this single assumption that would require revision with advancements in technology, so the components of the imitation game should not change over time.

      However, I do agree with Eugenia that progress such as increases in processing speed would be a positive step towards eventual success in the imitation game. As we discussed in class, Turing also makes predictions based on such progress about the feasibility of success in the imitation game within the next 50 years. I think the imitation game supposes an ideal machine in order to investigate the nature of thinking, and our technological advancements are just steps towards this ideal picture.

      Delete
  7. The paper makes me question why, according to Turing, we would want to develop an AI. Are we not trying to understand whether a machine can think, and also to understand how we think, by reverse engineering? However, the more the paper talks about the Turing test, the more it leaves out things that it dismisses as not being important for passing the imitation game, and thus the more it leaves out things that could potentially help us understand cognition.
    In addition, I am also curious about the random choices an AI would make. The paper talks about the idea of a machine having free will by throwing a die and making a choice, having a random element. However, the more you spend time with this AI, applying the test, would you not start to see the irregularities in some of its answers? For example, everyone has different music, movie, and flower preferences. It is not even just the genre. Everyone has a different scene that is memorable to them, or a different passage in a song they get obsessed with. Everyone will choose a different flower from a shop, which has a lot of different qualities one might not even think to program in, and a machine might not know what inputs to draw from to pick a favorite. How can one explain why some people choose orchids while others choose daisies? Or, if one asks the AI how it is feeling every single day, would there be a pattern, a personality? Or would there be an input to act in an unconventional way from time to time, as humans tend to do? Would it suddenly feel down one day? These would be small but noticeable differences if you spent some considerable time with the AI.
    I believe the real test would be to leave out a part of the programming and see if the machine would develop it, or try to find it out, on its own. It would be interesting to consider whether a machine would start trying to explain an emotion it had never experienced or heard of before. I believe that Turing doesn’t do the best job of explaining the argument from various disabilities. The problem people have with those questions is in grasping how these complex actions can be imitated, as a lot of them seem to have many different possible explanations. For example, humor is not a "when someone says X, say Y" kind of function. It is spontaneous, learned, creative, and very personalized at the same time. Humor changes from culture to culture and from human to human. Or what about morality, and lying? How is a machine going to decide what decision to make? Does it always follow societal rules, or consider breaking them in special situations? How can you program a machine to answer a question such as "a train is coming; on one track there is one person you know, but on the other track there are 5 people you don’t know; which do you choose?" The roll of a die?

    ReplyDelete
    Replies
    1. Hi Deniz,

      I found your comment very interesting and I agree about your point of view on programming.

      "Is it true that by modifying this computer to have an adequate storage, suitably increasing its speed of action, and providing it with an appropriate programme, C can be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a man?"

      As mentioned in the reading, we could try to produce a programme to simulate a child's mind, as it would start from a blank page. Much like humans, it begins with only the initial state of the mind; however, the way that the world "programs" a real child's mind would be highly different from programming an artificial mind. Children use different senses, experiences, feelings, and emotions to develop their minds. As for a machine, even if it is programmed to do so, I believe it doesn't develop in quite the same way. For example, a baby's first step: to fall and to get back up again, to cry and know that you have to keep persisting to ultimately reach that goal. You could program a machine to walk, but would a machine know the feeling of a first fall? To make mistakes and get back up again is something that makes us human. I believe that learning from our mistakes and knowing how to improve is not simply "programmable". This adds onto the Theological Objection, where "thinking is a function of a man's immortal soul", and since "thinking" is unobservable, we would not be able to confirm that the machine is thinking about how painful the first fall was, or what is really running through the artificial mind. The human brain is so complex and intricate that we are far from understanding it completely. What makes us think we can program an artificial mind to mimic a human being's?

      Delete
    2. "How can you program a machine to answer a question such as the ‘a train is coming, on one track there is one person you know, but on the other track, there are 5 people you don’t know, which do you choose?’, kind of a question? The roll of a dice?"

      I think you pose an interesting question. Let's say you programmed the computer to choose the answer to such a question randomly, so that any time it was asked this question, it could output either to save the five strangers or to save the one person it knows. Regardless, would either answer have an effect on whether you believe the response to be from a machine or a person?

      Morality is a spectrum, and depending on what the machine outputs, you could easily be talking to 1. a person who would be entertained by the thought of five people dying, 2. someone who really despises the one person they know, or 3. someone who believes that saving five lives at the expense of one is morally correct.

      Delete
  8. Turing’s discussion of learning machines stuck out for me amidst his argument that the digital computer can do well in the imitation game. He mentions “We need not be too concerned about the legs, eyes …. Communication between teacher and pupil can take place by some means or other”. His example of Helen Keller doesn’t hold water as she learned through her tactile sense (i.e., feeling the world around her). Putting the specific example aside, his idea of learning based on evolution becomes problematic when he discusses applying a teaching process through “‘unemotional’ channels of communication”. It’s not clear how punishments and rewards make sense for machine learning without any grounding of what those punishments or rewards entail. I struggle with what he means by these “unemotional channels”. If we take up this idea, what does it mean for a machine to learn by rewards and punishments while the channels of communication are entirely “unemotional”? If we say that what he meant by “unemotional” is nonsensory, where does that leave his argument?

    ReplyDelete
    Replies
    1. Turing answers your first question here: “The machine has to be so constructed that events which shortly preceded the occurrence of a punishment signal are unlikely to be repeated, whereas a reward signal increased the probability of repetition of the events which led up to it.”

      I think he also says that “[t]hese definitions do not presuppose any feelings on the part of the machine” because although the system of punishments and rewards in the machine does not have the emotional dimension that the biological system in us has, the systems are still behaviorally equivalent and would have the same learning capacities (which would help the machine in the imitation game).
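
      Read computationally, the construction in that quote is just a probability update over whatever the machine did shortly before the signal arrived; a minimal sketch (my own, with made-up numbers), which indeed presupposes no feelings anywhere:

        import random

        # Unnormalised preferences over possible actions.
        actions = {"A": 1.0, "B": 1.0, "C": 1.0}

        def choose():
            return random.choices(list(actions), weights=list(actions.values()))[0]

        def punish(recent):
            for a in recent:
                actions[a] *= 0.5   # events preceding punishment become unlikely to recur

        def reward(recent):
            for a in recent:
                actions[a] *= 2.0   # events preceding reward become more likely to recur

        # e.g. a teacher rewards "B" and punishes "C" a few times:
        for _ in range(5):
            reward(["B"])
            punish(["C"])
        print(actions)   # "B" now dominates the machine's preferences
        print(choose())  # so the machine now mostly chooses "B"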

      Delete
    2. I had a similar sentiment while reading Turing’s passage on thinking machines. It is almost counterintuitive to think that reward-punishment learning, a process which acts on the unconscious and emotional motivation systems in humans, can take place over some “unemotional” channels of communication. Turing suggests “it is possible to teach a machine by punishments and rewards to obey orders given in some language, e.g., a symbolic language”. In other words, by assigning positive and negative valences to symbols (events and behaviors), the machine can learn which events or behaviors are to be repeated or not, according to their assigned valences. This reintroduces the problem of symbol grounding. Does the machine merely assign positive and negative values to rewards and punishments respectively, or does it actually understand the emotionally charged meaning behind the behavior for which it is being rewarded or punished? In the same way that Searle could carry out the algorithm with Chinese symbols without understanding Chinese, can the computer similarly carry out emotionally charged responses without understanding emotions? If so, a machine’s responses are not consciously meaningful, and are thus ungrounded, mere “mechanical” responses to a given input. In that case, is the machine “thinking”?

      Delete
  9. This comment has been removed by the author.

    ReplyDelete
  10. In attempting to imitate an adult human’s mind, the article by Turing suggests simulating the creation of an adult’s mind - starting with a child’s mind. I find this an extremely interesting concept. It is obviously true that an adult’s mind is the byproduct of a child’s mind, but I am still hesitant to believe a computer could simulate an adult’s mind from an initial programming of a child’s mind. After all, computers don’t have the biological framework and environmental experiences that we (as humans) have.

    Turing writes, “If this were then subjected to an appropriate course of education one would obtain the adult brain.” This logic makes sense, but I don’t understand how it would be possible to subject a computer to the appropriate course of education. Turing also says, “We normally associate punishments and rewards with the teaching process. Some simple child machines can be constructed or programmed on this sort of principle.” While reinforcements and punishments are certainly important in learning, might there be other factors equally important in one’s mind development?

    ReplyDelete
    Replies
    1. I think Turing’s general idea of machine learning, just as a child learns, is justifiable against the criticism of a lack of biological framework, but faces issues with the lack of environmental experiences, as you point out. Turing responds to the criticism from continuity in the nervous system, which the Professor summarizes in the article for 2b “The Annotation Game”: “Any dynamical causal system is eligible, as long as it delivers the performance capacity”. There doesn’t seem to be a reason for why biological material is necessary and why other materials can’t also deliver the performance capacity; that biology isn’t necessary seems to distinguish T3 from T4/T5. But on the point of environmental experiences, Turing’s idea of machine learning, I think, faces some problems. He dismisses “legs, eyes, etc.”, which are important for experiencing the environment. Sensorimotor performance is bracketed out but presumably, we need to experience the environment through sensorimotor performance to learn.

      The idea of accounting for machine learning through punishments and rewards could be defensible when we consider Turing’s position that “the imperatives that can be obeyed by a machine ... will be ones which regulate the order in which the rules of the logical system concerned are to be applied”. If we accept his notion of “unemotional channels of communication”, then I think punishments and rewards along with the imperatives could be sufficient to account for learning.

      Delete
    2. I find this concept interesting because it brings up the question of (1) what is learned through experience and (2) what is already coded or innate.

      1) What exactly is it about experience that we encode and learn from? Based on the cat experiment that we have talked about in class, the cats needed both sensory and motor experience to develop and learn to use their eyesight. But do we need emotion to learn? If a robot child were to follow every step of a human child, would it learn what the child learns? Would the robot child be T2 since it has "learned" through the behaviour of the child, but has never "felt" like the child has? Would the "unemotional" channels of learning reflect T2 but not T3?

      2) If an "appropriate course of education" or experience is sufficient to turn a child's brain into an adult's, what is it that already existed in the child's brain that allowed him/her to integrate that information about their experiences in order to learn? Chomsky's theory of an innate Universal Grammar could give us some guidance, but even this theory is vague.

      Delete
  11. The works and customs of mankind do not seem to be very suitable material to which to apply scientific induction. A very large part of space-time must be investigated, if reliable results are to be obtained. Otherwise we may (as most English children do) decide that everybody speaks English, and that it is silly to learn French.


    I found this analogy to be truly relevant to the question of machines and cognition. As a child, I believed that French was the only language that could be truly spoken. Therefore, while Spanish or Russian people seemed to be doing something resembling what I did when speaking to one another, I thought that true communication could never be achieved by them. My understanding was obviously not mature enough. Could it be the case that when we say that "true cognition" cannot be possible for machines or for some animals, we are simply applying the same logic as the naïve child that I was? I just thought this was a great analogy upon which to ponder.

    ReplyDelete
    Replies
    1. What you said is reminiscent of the meta-hard problem, namely, is there something about our psychology that might make correct solutions to the easy or hard problems unsatisfying or unacceptable to us?

      Delete
  12. This comment has been removed by the author.

    ReplyDelete
  13. I agree with Turing that machines can often perform "creative mental act[s]" and surprise the designer. I disagree, however, with his claim that the contrary view is due to the fallacy/assumption that "as soon as a fact is presented to a mind all consequences of that fact spring to mind".

    I feel that this is an overstatement of what most people assume to happen. It is not that all consequences of that fact spring to mind, but only a small sample of consequences. Other consequences can then be deduced and considered, some of them surprising. This supports the idea that machines can surprise, since our own deduced consequences can surprise even the most careful thinker. Therefore, I am inclined to think that the "surprise" offered by the machine is not a characteristic of the machine, but of us (the designers).

    So while I agree with Turing in that machines are capable of producing surprises, I have to in part agree with the critic that says “surprises are due to some creative mental act on my [the designers] part, and reflect no credit on the machine.”

    This leads me to consider "7. Learning Machines". Regardless of whether a program simulates an adult or a child mind, Turing argues (correctly, in my opinion) that a program can be taught. He also postulates that the instructor will be largely ignorant of what the learning process of the program actually is. In a way, the consequences/results of such a learning process would be considered a "surprise". But that surprise still lies in the instructor and human observer. I am not convinced that the program/machine itself can appreciate this sense of surprise, as all consequences would be just an output for it.

    ReplyDelete
    Replies
    1. I think Turing's description of how he himself has been surprised by his own machines (programming in a hurry) parallels the surprise you describe as characteristic of the designer.
      There might be a chicken-and-egg situation here, where the computer's surprises can be due both to its own ability to "think" (the becoming of an adult from a child mind) and/or to an algorithm overseen by the programmer. I think Turing accepts both possibilities (but I could be misinterpreting!).
      I am not too sure what you mean about the machine appreciating the sense of surprise.

      Delete
  14. This comment has been removed by the author.

    ReplyDelete
  15. Turing seems to be thinking 'kid-sibly' in his reasoning for straying from the question "Can machines think?". To answer this question, you would first have to define both 'machines' and 'think'. Defining 'think' can bring up the following as explanations: something that someone with consciousness does, being aware of things like ideas going on inside your head, something someone does when going about everyday actions. These in turn require further definitions or explanations of words like 'consciousness' and 'aware'. This question would be circular, much like the first day of class when we attempted to explain to kid sib what 'explanation' and 'conscious' were. How can you answer a question when you are not really sure what you are asking? Turing bypasses this altogether by replacing the circular question with one expressed relatively less ambiguously. This new question is "Are there imaginable machines which do well in the imitation game?" Assuming kid sib had a childhood and was either told or read stories, then he would have at least a vague notion of what 'imaginable' means: something that does not necessarily need to exist, but that you can come up with in your head to fulfil the requirements.

    The 'imitation game' is a game with a judge and two participants, a man (A) and a woman (B). The judge is trying to guess which one is which. A is pretending to be B while both answer questions in typed form. If the judge says that A is the woman, then A has done well in the imitation game. Now replace A with a discrete-state machine. If the machine does well in the game, then why would we assume it couldn't think? We assume that other humans can think, we assume that A can think, we assume that B can think, so if the machine can perform the same way, why would it not be thinking too? Notice we never have to answer what thinking is exactly, while we still get an answer as to whether the machine does it or not. Thus it would not matter how 'thinking' occurs, but rather that it does, even if the machine 'thought' one way and we 'thought' another.

    An analogy could be comparing two people: one is being tickled with a feather on her foot, and the other has crosswiring of her sensorimotor map such that tactile pressure on her knee is felt on her foot. They both feel as if something is touching them on the foot, but the manner in which it is happening is different.

    ReplyDelete
  16. There seems to be a difficulty with the Turing Test in that it measures only performance and not competence. We test what a machine can do without asking how it does it.

    To clarify: Block presents a hypothetical machine with "the intelligence of a jukebox" but which could nonetheless pass the TT. It would consist of a store of every possible sentence of English associated with an appropriate response. (Yes, this would require an infinite store, but we are talking about a hypothetical machine, not a real one.) Thus, it would be able to carry out a perfectly human conversation using only the simplest mechanisms which, once unveiled, could hardly be counted as "thinking." In this sense, the machine has perfect performance but remains almost completely incompetent.

    Of course, performance is all we can ever measure, even in our interactions with other people, but a TT with no measure of competence seems incomplete to me. Stevan says that a T2 TT is sufficient to measure intelligence, and realistically Block's jukebox can never be constructed, but his point seems to me pertinent nonetheless.
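
    A minimal caricature of Block's jukebox (my own sketch, not his actual construction) makes the performance/competence gap vivid: every conversation-so-far is just a key into a giant canned table, so flawless verbal performance needs no mechanism beyond retrieval.

      # Caricature of the "jukebox": an (impossibly large) table mapping every
      # possible conversation-so-far to a canned reply. Producing a human-sounding
      # answer requires nothing beyond lookup.
      CANNED = {
          ("Hello?",): "Hi! How are you doing today?",
          ("Hello?", "Hi! How are you doing today?", "Fine. Do you like poetry?"):
              "I do, though I confess I prefer a good detective novel.",
          # ...one entry for every possible finite conversation...
      }

      def jukebox_reply(conversation_so_far):
          # no parsing, no reasoning, no learning: just retrieval
          return CANNED.get(tuple(conversation_so_far), "Hmm, tell me more.")

      print(jukebox_reply(["Hello?"]))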

    ReplyDelete
    Replies
    1. Note: I wrote this before reading Harnad on Turing, so it may be that this text will enlighten me.

      Delete
    2. You touch on an interesting point here. You seem to be saying that it could (theoretically) be possible to make a T2 or T3 robot that really is just blindly simulating responses (given a near-infinite store of information and processing power). So would it be possible for us to find ourselves in a situation in which we have two robots (A and B), both of which pass T3, and yet only one of which is actually conscious? What would be the criteria for deciding this?
      It seems to me that the creation of T3 robots would require some fundamental knowledge about brains/dynamic systems that we do not have now, and that this knowledge would at least give the engineers the capacity to differentiate between the kinds of dynamics that give rise to "sentience", or the kind of "computation" that is grounded in semantics as opposed to mere syntax.

      Delete
    3. Hi Michael,

      The jukebox you are describing is actually really similar to the commonly cited counterargument to the TT – Searle's Chinese Room. Searle's goal was to demonstrate that the person in the room could 'blindly' perform computations and still not really understand the Chinese symbols they were manipulating. But I think the whole point of the TT is to demonstrate that if a machine is doing 'everything' that we can do, and we are convinced of this by its performance, then it must be cognizing (since cognizing falls into the category of the things we can do). The jukebox is only demonstrating the verbal stuff we can do – obviously that’s not all we can do.

      And Auguste – what do you mean by computation that is grounded in semantics as opposed to mere syntax? A computer doesn’t know if something is true or false in the real world – the computer only has what the programmer has given it, and this is the problem of the homunculus (since the human programmer has to provide the T and F judgments – the semantics – we can’t say that the computer did it).

      Delete
    4. Yes that's what I mean. Computers as we know them today do not process information by virtue of its meaning, unlike humans

      Delete
    5. What I got from your comment was that you said "sentience" = "a kind of computation",
      but our definition of computation is formal symbol manipulation, so for computation to happen we don't actually need the semantics. And sentience is really just consciousness, which is really the hard problem, so I don't think it’s a kind of computation. Computation may be a big part of consciousness, but to me that’s like saying a human is a kind of pattern of cells. That's not wrong, but it’s definitely not enough.

      You didn't really answer my question of what do you mean by "computation that is grounded in semantics as opposed to mere syntax". Do you mean to say that there's a type of computation with inherent meaning or are you trying to say computation + meaning = cognition?

      Delete
    6. In either case, I don't think it’s essential to understand the dynamics of consciousness (what you called sentience) in order to create a T3; biology and evolution didn't "understand" what they were doing. I think what you’re getting at is actually that, in order to be confident that we achieved cognition/consciousness (equivalent to what a human has) in our construction of T3, we would need to understand cognition. But I'm pretty sure Dominique the AI is conscious without me actually understanding those dynamics, so it looks like this understanding is also inessential to being confident that our T3/T4 is cognizing the same way we do.

      I hope I haven't lost your direction, please correct me if my interpretation is off! This was definitely a bit of ramble...I'm just trying to work through it!

      Delete
    7. Assuming it's even possible for a robot to be conscious, I would say yes, it is possible - but, as Victoria said we're getting into hard problem territory and it's hard for me to remain anything but agnostic in that regard. That said, performance being equal (let's say it is), perhaps the only way to distinguish between the jukebox and the sentient robot would be to dissect the mechanisms at play, though without an answer to the hard problem it's hard to say what positive criteria we would use. Negative criteria might be easier: it can't be a jukebox - but those alone seem incomplete. It may very well be that our T3 robot would be conscious without our even knowing it. Hell, it may be that this laptop is conscious, but how would I know?

      If I can respond for Auguste, I think what's at odds here is exactly the point Victoria brings up with the Chinese room. Computers as we understand them (and Searle in his room) blindly manipulate formal symbols with no understanding of what the symbols refer to - that is, there is no symbol grounding. In this sense, they operate on pure syntax. Humans, on the other hand, (and Searle out of the room) attach meaning to these (internal or external) symbolic operations: we know what 1 and 2 are when we say 1+1=2. In that sense, we have both syntax and semantics.

      Is there something in between? Perhaps a robot could be fully T3, doing all the things a cognizing system could do, while still being purely syntactic. In this case, there must be something missing from the robot which may also be the symbol-grounder in humans (consciousness?), but which is not essential for cognition.

      Or perhaps it would acquire its semantics solely by virtue of its immersion in the real world. In this case, something like sensorimotor capacity must be necessary, if not sufficient, for symbol-grounding. Consciousness could or could not come along for the ride, and may or may not be essential for cognition, but it seems all too early to tell.

      Delete
    8. I should clarify though: I'm almost certain a jukebox could not pass T3.

      Delete
    9. ...but Searle in a robot might.

      Delete
    10. This comment might fringe on a discussion of metaphysics and I might be contradicted but I would like to point out that it is not at all clear, to me at least, that humans ground symbols to their referent in ways that are not symbolically mediated.

      Delete
    11. What do you mean by symbolically mediated? And symbolically mediated as opposed to what?

      Delete
  17. This comment has been removed by the author.

    ReplyDelete
    Replies
    1. This comment has been removed by the author.

      Delete
  18. Turing first presented the question: “Can machines think?” But this is eventually replaced by the question: “Are there imaginable machines that can pass the imitation game?” From the follow-up question and Turing’s examples, one might interpret his claim as: if the machine passes the test, then it is thinking. However, my interpretation is that there is a distinction, and that Turing’s claim was rather that if the machine passes the test, then we cannot claim that it is not thinking. What do you think?

    This distinction carries over from thinking to feeling. In reading Turing’s discussion of the contrary views, more specifically the argument from consciousness, he acknowledges the unknown aspect of consciousness and the other-minds problem. It was interesting that Turing stated that the solipsist point of view may be the most logical, but that it makes communication of ideas difficult. So he suggested that instead of arguing over who is thinking, it is “usual to have the polite convention that everyone thinks”. Does this statement extend to feelings as well? It seems likely. By Descartes’s cogito, we each know exactly what it means for ourselves to feel. We may not be certain whether other people feel; however, in parallel to the previous case of thinking, there is no need to be “certain” whether the other is thinking or feeling. So if a robot could pass the TT, it is almost as likely that it feels as it is that other people feel.

    ReplyDelete
    Replies
    1. Hi Grace,

      I’m not sure about that first interpretation. The way I see it, couldn’t the Chinese Room argument be applied to it as well? Somehow an algorithm is constructed, a hypothetical set of perfect instructions, such that the machine could pass the imitation game without really ‘knowing’ what it’s doing. In this scenario the machine isn’t thinking, but it has passed the test. I’d like to think that the question should have been set up more as ‘if one passes, one has shown oneself indistinguishable from a human’ rather than ‘if one passes, one is thinking’.

      Delete
  19. This comment has been removed by the author.

    ReplyDelete
  20. I wonder if focusing on replicating human minds is not putting the cart before the horse. Our brains evolved from simpler mammal brains, and those mammals from even simpler creatures. Surely a more modest (and still significantly challenging) goal would be to replicate some fundamental features of all living things, such as autonomy. Even the simplest, brainless organism 1) generates and regenerates its own components and 2) acts in ways to maintain itself in the face of the 2nd law of thermodynamics, which disturbs its internal homeostasis.
    Is this irrelevant to cognition? How can it be, if it is what cognition evolved from after 3.5 billion years? How far back in evolutionary history can we trace the "seeds" of what would become cognition in simple neural systems, then small brains, and eventually our own brains?

    ReplyDelete
    Replies
    1. I do agree that it could be helpful to consider cognition and the mind in even simpler terms by looking back at their evolutionary history, but in order to create a model of how a human adult brain works, I think we would still have to focus on an advanced human brain. Moreover, I think it is most important to look at the development of the brain in its early stages – from birth to adulthood. When Turing stated, “Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's?” he really convinced me that this could indeed be the key to figuring out how we cognize throughout our lives. It would also tell us more about the learning process and why it is evolutionarily important. Additionally, I think it is beneficial to work backwards from an adult human brain rather than to build from the bottom up. Turing gives the “skin-of-an-onion” analogy, where he explains that “in considering the functions of the mind or the brain we find certain operations which we can explain in purely mechanical terms.” This is really tangible: if we want to create a machine that is a perfect model of the adult brain, then we need to peel back the layers and see how much we can encode based on our current knowledge of the brain and cognition. By stripping off this “skin” we then have the potential to find the “real” mind.

      Delete
    2. It is interesting to discuss where to start, and looking at what has been done in the field, that does seem true: progress appears to be going rather backwards. AIs are currently able to solve the hardest games in the world (an AI beat the world champion at Go this year), and yet they do pretty poorly at things that one-year-olds can do, like recognizing shapes and scenes.

      Though I disagree that evolutionary history would be much help for cognitive modelling. It's useful for understanding how biological mechanisms arose; however, the point of behavioural equivalence (from last week's readings) is that though regeneration is a necessary prerequisite for biological cognition, it wouldn't be for a computer. It's a bit awkward to compare the life of an AI to that of a human. For example, human brain material grows from embryonic cells over time and undergoes stages of development to reach full cognitive capabilities, but it would be very strange to design a robot to undergo the same developmental process. In the same way, at the level of human populations, what about our development is actually useful here?

      Delete
    3. Absolutely, Auguste, I do agree that evolutionary history could explain more of the causal mechanisms behind human cognition. You are also right in saying that there are some laws (entropy and homeostasis, for example) that govern life. Keeping this in mind, I do feel that understanding how we cognize is more of a bottom-up process than a top-down process (reverse engineering). The drawback with machine and computer learning is that they have absolutely no adaptability, and adaptability, as we all know, is the key to any form of evolutionary success. For example, little children don't know that fire can be harmful; either their parents have to teach them that fire is harmful, or a burn causes them to learn it. Perhaps a lot of adaptability is related to early sensory and motor learning. If we could code or create machines with biosynthetic materials that could react to and differentiate between environmental cues, we would have some starting point.

      Delete
    4. Continuing on this thread, I doubt free-standing cognition can be re-constructed without non-cognitive processes to support it but the question of which non-cognitive processes are required for cognition is an interesting one.

      Delete
    5. Interesting. I don't know if there are equivalent terms in cognitive science for what the field of linguistics calls synchrony and diachrony, but the tension between the two reflects on the issue you bring up. A diachronic approach to linguistic analysis would take into consideration the evolution of language through history; a synchronic one would not, and would be closer to the structuralist/Saussurean approach (a kind of "mapping technique" connecting signs to what they signify) we discussed in class last week. I wonder how that debate could inform the methods of cognitive science, which as of now seem to favour the synchronic. I think Chomsky's poverty-of-the-stimulus notion (the possibility of an innate language acquisition capacity, potentially inherited through evolution) raises an argument for both linguistics and cognitive science to consider how a synchronic approach may not suffice for replicating a model of the mind.

      Delete
    6. Cassie, I think that although it is true that the evolutionary history of the human mind might be irrelevant in determining how to move forward with AI, writing it off altogether is perhaps too extreme. We have already seen that modelling human cognition has proven to be helpful in programming computers to do the same things. You brought up a perfect example of this: the game Go. To program computers to beat human masters, they modelled their algorithms on our dopamine reward-signalling feedback. So, although I agree with you that in the end it might prove to be irrelevant how human minds work, I think it is too soon to write off the whole idea altogether.

      Delete
  21. In the " Argument from Consciousness", Turing summarizes Professor Jefferson Lister' objection from the perspective of Consciousness as follows;
    "No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants."
    "This argument appears to be a denial of the validity of our test. According to the most extreme form of this view the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking. One could then describe these feelings to the world, but of course no one would be justified in taking any notice."

    Basically, the description and appraisal of feeling is such an inherently subjective experience that the only way one could say that feeling is occurring is to be that other person. (This made me think once more of the Problem of Other Minds and how it is paralleled with regards to thinking.) Turing loses interest in the question of 'Can machines think?', but I wonder what makes the process of thinking so categorically different from feeling that we may attempt to answer the former, whereas the question of feeling is impossible to answer?

    ReplyDelete
    Replies
    1. I thought that this was a very interesting question. I guess the question of feeling has to do with "qualia", which in itself is hard to explain. Even if people go through the same kinds of experiences, it does not mean that they will have the same feelings throughout the experience. I think that it is impossible to understand exactly how someone else is feeling unless someone develops some crazy machine that allows you to do just that. So in my opinion the question of feeling is impossible to answer.

      Delete
  22. Digital (discrete) music usually has a sampling rate of 44.1kHz, but it’s actually really hard to tell the difference between sampling rates above this (essentially, 44.1 kHz captures all the info the listener needs for the music to be perceptually indistinguishable from analog – an LP or record or live performance).

    As we continue to build increasingly close approximations/simulations of human capacity, will there be a point where people can no longer tell the difference (i.e., is this the point where the TT is passed)? I drew this little diagram to help demonstrate:

    |Computer today-------------TT--------------cognition|

    So I guess what I’m asking is, even if there is a really close approximation of cognitive capacity to the point where our percept is indistinguishable from a human (passes TT) does it even matter if we get even closer or have we done it? Just like how a 55kHz sampling rate is indistinguishable from 56kHz and most likely also from analog music, do we really care if we pass the TT plus a little extra? I’m going with no, it doesn’t matter, because I don’t think you can have one cognition that is less cognition-y than another cognition – that just doesn’t make sense.

    |B----A-----------analog--------------Live concert|

    If digital file B is less like analog than digital file A, are we going to say that one is less musical than the other? No, probably not.
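
    A small sketch of the sampling point above, assuming only the standard Nyquist criterion (a sampling rate of fs Hz can represent frequencies up to fs/2) and the rough ~20 kHz ceiling of human hearing; the rates are the ones mentioned here plus one common "hi-res" rate for contrast:

      # Minimal sketch: why rates at or above 44.1 kHz are perceptually
      # interchangeable. Assumes the Nyquist criterion and a ~20 kHz hearing
      # limit; the specific rates are just illustrative numbers.
      HEARING_LIMIT_HZ = 20_000

      def nyquist_limit(fs_hz):
          """Highest frequency representable at sampling rate fs_hz."""
          return fs_hz / 2.0

      for fs in (44_100, 55_000, 56_000, 96_000):
          limit = nyquist_limit(fs)
          verdict = "above" if limit > HEARING_LIMIT_HZ else "below"
          print(f"{fs} Hz sampling -> content up to {limit:.0f} Hz ({verdict} the hearing limit)")

    Every rate in that list clears the audible band, which is why adding "a little extra" past the indistinguishability threshold changes nothing perceptible.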

    ReplyDelete
  23. I was quite puzzled by how cautious Turing was about rejecting the argument from extrasensory perception, considering how readily he seems to have dismissed some of the other objections (e.g. the theological one). If I understand correctly, his concern was that ESP—namely, telepathy—is a uniquely human trait, and so if it were to be factored into the TT, “machines” would thus be at a disadvantage. Instead of calling into question the validity of the “statistical evidence” for telepathy, Turing proposes “to put competitors into a ‘telepathy-proof room.’” Doesn’t this defeat the entire basis of the TT as a measure of a “machine” being able to do whatever a human can do, indistinguishably? Furthermore, if it were really the case that extrasensory perception was characteristic of human performance (e.g. in identifying the suit of a card), then why, by Turing's argument, shouldn’t a digital machine be able to simulate it?

    ReplyDelete
    Replies
    1. I found that section equally perplexing (and quite unexpected). I especially agree with the point that if ESP is possible (as he postulated it may be), then by the tenets of computationalism it should be replicable by a Turing-Test-passing machine. The "telepathy-proof room" tends to undermine the entire premise of the TT.

      Despite this, I think that now that we understand the evidence for ESP is extremely lacking, the TT can still stand on its own, and perhaps stands even more steadfastly than in Turing's original proposal.

      Delete
  24. “Godel's theorem…shows that in any sufficiently powerful logical system statements can be formulated which can neither be proved nor disproved within the system, unless possibly the system itself is inconsistent.”
    From what I understand, something that can neither be proved nor disproved implies that there is no way of proving it, but NOT that no proof exists. Turing goes on to say that even though it has been established that machines have limitations, we have not proven that humans have no limitations, we simply said so. Just because we can prove that machines have limitations does not mean we can assume that we, who are ‘unlike’ machines, have no limitations.
    Let’s suppose a human was playing chess against a machine (as Turing in the end of his article suggests would be a good demonstration). If the machine got itself into the position of stalemate, would it have won? (and vice versa for the human). In this position, any future move/computation would be losing/a mistake, but in the current position the player is undefeated. The player’s loss cannot be proven, as it is currently safe in its position, but its win cannot be proven either, as it has not gotten the opponent into checkmate. In this case, despite the fact that there is no proof of winning/losing, it does not mean that a win/a loss cannot happen, as stalemate is considered a draw (which could be interpreted as a win/loss for both players). Because the player has no legal next move, they cannot make any move but have done nothing wrong in getting to their current position, and they could get there either by accident or purposefully. The true winner of such a game could not be proven or disproven in this ‘system’ of chess, as the result is a tie. What would a stalemate in a game between a computer and a human imply?
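
    Just to make the stalemate scenario concrete, here is a quick check using the third-party python-chess library (assumed to be installed); the position is a textbook stalemate chosen for illustration, not one from the reading:

      import chess  # third-party package: python-chess

      # Black to move: Black king on a8, White queen on b6, White king on c6.
      board = chess.Board("k7/8/1QK5/8/8/8/8/8 b - - 0 1")

      print(board.is_check())         # False: the black king is not attacked
      print(list(board.legal_moves))  # []: no legal move is available
      print(board.is_stalemate())     # True
      print(board.result())           # "1/2-1/2": the rules simply declare a draw

    So within the "system" of chess the question has a definite answer (a draw); the open question is only what we take that result to imply about the two players.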

    ReplyDelete
    Replies
    1. I am not a math major so I could be wrong, but I think that when Gödel says "unprovable" he means that the proof cannot exist within a formal system that aims to "represent" the real world (as this was Hilbert's and many other mathematicians' goal in the 20th century). In other words, the constraints put on the mathematical models of interest to logicians are their own weakness. Another model might be able to handle those kinds of paradoxes, but it would lack the features that made the original models attractive in the first place, namely logical coherence. Since computationalists in Cognitive Science are hoping to model a brain using syntactic rules only (i.e. logical coherence), they would rather accept the incompleteness of their system (limits on the intelligence of digital computers) than incoherence.

      Delete
  25. Re. “I believe that in about fifty years' time it will be possible, to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning”
    While Dr. Harnad touched on this in class, I would like to further elaborate and pose my own question.
    Dr. Harnad said that an agent taking the Turing Test does not need to pass the test only a specific percentage of the time; it has to be passing the test constantly and never fail (an example of this being Dominique, who is constantly a rather convincing human, even if she were made in a lab at MIT). But my question is: if we had a program/AI that was able to pass the Turing Test convincingly on every even day of the month, and then failed miserably on every odd day of the month and clearly showed that it is only a computer program, what would we do? In the time that it is passing with flying colours, it seems like it is clearly thinking and feeling and we would never think of harming it, but in the times when it is failing, shouldn’t we still refrain from hurting it? If we know that it can cognize half of the time, and that it does so as well as humans do even part of the time, how should we treat it? Even if we do think of hurting it or treating it as inanimate, what if it “wakes up” to us treating it poorly and is as shocked and appalled as any of us would be if we regained consciousness only to find ourselves being hurt or worse by others? And to push this further, what if it only succeeds 1 out of every 4 days that it is tested?

    ReplyDelete
  26. With reference to Turing's "Argument from Consciousness," it would be better to ask the following question: given the technological advancements that address Professor Jefferson's concern, does it simply mean that machines equal brains? Or is there another step before we can surely say that is the case?

    Specifically addressing the fact that the machine should "write it but know that it had written it," it is difficult to say whether metacognition is an ability that can be learned or is innate. If metacognition is what keeps machines from being equal to humans, the problem would at least have a direction.

    What if we tested whether the machine undergoing the Turing Test is aware of the task it is given? Or whether it is aware that it is being tested to pass the Turing Test in order to be recognized and to make its identity legitimate?

    ReplyDelete
  27. This comment has been removed by the author.

    ReplyDelete
  28. On the point of machines being unable to “create”, or “surprise” humans with some output: surprise will often come about when one does not fully understand something (in this case, the machine’s abilities). It is based solely on the observer’s knowledge and does not seem to accurately reflect any information regarding the actual machine.

    When a new machine is created with a functionality that is the first of its kind, its functional output will be surprising to many (with perhaps the exception of the creator). However, over time, this knowledge will be incorporated by the observers and no longer seem so novel.

    ReplyDelete
    Replies
    1. It's very logical that over time, a new functionality of a machine would become a norm and no longer be perceived as surprising, if it happens again and again. But this does not really seem to be a concern when the question of the imitation game is if we can distinguish humans apart from machines (70% of the time). The "interrogator" wouldn't know which is the machine anyway, so nothing is expected of each "witness," therefore everything is somewhat of a surprise and not a surprise at the same time in the game. Also, actual humans surprise me all the time, so it's not a great way to distinguish humans from machines. For example, people who think females are less worthy than males certainly do surprise me. I'm curious as to what this idea is based on.

      Delete
  29. RE the abstract: “Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words”.

    The discussion has moved from the question of “Can machines think?” to the question of “Can a machine pass the imitation game?”. I think this is the wrong way to replace the question. We have spent so much time focusing on the nuances surrounding this silly game that I feel we have diverged too far from the objective, “can machines think”. To delve into this question we need to define thinking. When we define the aspects of thought we are trying to replicate, these questions, which have delayed our progress, should hopefully dissolve. The pitfall of the “imitation game” is the leap from A) creating a device that passes the game, to B) concluding the machine can think. We cannot make this leap because we don’t know what thought is! When we agree on what encompasses thought, we can then argue about whether the machine encompasses these qualities.

    ReplyDelete
    Replies
    1. Throughout the readings there are examples of how a computer could be "made to be like a human" by lowering its capacities, for example purposefully making it wait 10 seconds before answering a difficult arithmetic question. This example can actually be explained logically, because it makes sense that neural circuits are slower than the machine's circuits. But other examples make less sense biologically speaking, and cannot be explained by a logical understanding of the differences between computation and biology. One such example would be a computer statistically pretending not to know the answer to 1 in every 50 questions (when in reality the information is indeed stored in memory) so as to make the machine react in a human way (we sometimes forget things that we remember later, and that were stored in memory all along). But if we need to program a way to "dumb down" the machine to make it look like it is thinking, aren't we either redefining "thinking" as a "dumb" process, or redefining the purpose of the Turing Test? Because if the goal of the Turing Test (advanced versions, T3 and up) is to understand the way humans think by creating a model, then such degradations steer us away from the biological/human thinking process. Adding "dumbing-down" processes also lowers the Turing Test to a "simpler" goal: how close can humans get to making a machine that resembles them? In that case the understanding of the process of thought seems secondary. What do you all think about these processes that aim at making a powerful computer seem slower (when it is not slower in reality, not making the mechanism slow but artificially adding time) in order to make it look more human?
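
      For what it's worth, here is a hypothetical sketch of the "dumbing-down" tricks described above: an answering routine that artificially waits before replying and occasionally pretends not to know an answer it has stored. The function name, delay, and forget rate are all invented for illustration.

        import random
        import time

        def human_like_answer(question, answers, delay_s=10, forget_rate=1/50):
            """Answer from a lookup table, but act 'human' about it."""
            time.sleep(delay_s)                # artificial pause before a "hard" question
            if random.random() < forget_rate:  # feigned forgetting, about 1 question in 50
                return "Sorry, I can't remember."
            return answers[question]           # the information was there all along

        answers = {"Add 34957 to 70764.": "105721"}
        print(human_like_answer("Add 34957 to 70764.", answers, delay_s=0.1))

      Nothing in such a wrapper makes the machine think more like a human; it only makes the output look more human, which is exactly the worry raised above.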

      Delete
  30. This comment has been removed by the author.

    ReplyDelete
  31. Re: "The nervous system is certainly not a discrete-state machine. A small error in the information about the size of a nervous impulse impinging on a neuron, may make a large difference to the size of the outgoing impulse. It may be argued that, this being so, one cannot expect to be able to mimic the behaviour of the nervous system with a discrete-state system."

    I’m interpreting this quote to mean that it is hard for a discrete-state system to act as, and be indistinguishable from, a human being, due to the complexity of how humans and their nervous systems function. Since they are far more intricate than discrete-state machines and therefore quite hard to replicate, small errors in the transmission of information may be likely, and these would have a large and noticeable impact on the output of the machine, which would prevent it from acting like and being perceived as a human being.
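
    To make the "small error, large difference" contrast concrete, here is a toy sketch (the update rules and numbers are illustrative stand-ins, not a model of neurons): a continuous process can amplify an error of one part in a million, while a discrete-state process that quantizes its input never even registers it.

      def continuous_system(x, steps=30, r=3.9):
          # logistic map in its chaotic regime: a stand-in for a process
          # whose output is sensitive to tiny differences in its input
          for _ in range(steps):
              x = r * x * (1 - x)
          return x

      def discrete_system(x, steps=30):
          # quantize the input to one of two states, then update deterministically:
          # a stand-in for a discrete-state machine
          state = 1 if x >= 0.5 else 0
          for _ in range(steps):
              state = 1 - state
          return state

      a, b = 0.600000, 0.600001                          # inputs differing by one part in a million
      print(continuous_system(a), continuous_system(b))  # typically end up far apart
      print(discrete_system(a), discrete_system(b))      # always identical

    Turing's reply, of course, is that a discrete-state machine can still imitate the continuous system closely enough that an interrogator could not exploit the difference.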

    ReplyDelete
  32. Towards the end of the passage, Turing suggests the alternative approach of attempting to simulate a child's mind rather than an adult's. However I would argue that the development of a child's mind more closely resembles supervised machine learning than any sort of programmable "education" that Turing suggests. To me, the development of a child's brain seems less a notebook of blank pages to be written in, and more of a series of attempts to write correctly.
    Funnily enough, the purported analogy of the experimental process of teaching new mechanisms to the child brain translates well to supervised machine learning, arguably better than direct programming of behaviour. Turing is quick to point out the problems with a strictly "reward and punishment" style of teaching, but is that not how a child classifies the world around them?
    If a child is let loose into a kitchen with the impetus to touch/feel everything, it takes the mistake of touching a boiling pot on the stove to know that it will be hot... (this is obviously a huge simplification, but I am curious as to everyone's thoughts)
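
    To make the reward-and-punishment picture concrete, here is a minimal, hypothetical sketch of trial-and-error learning in the hot-pot example (the action names, learning rate, and reward values are all invented):

      import random

      value = {"touch_pot": 0.0, "ignore_pot": 0.0}   # learned value of each action
      alpha = 0.5                                     # learning rate

      def reward(action):
          # a burn is the "punishment"; leaving the pot alone is mildly fine
          return -1.0 if action == "touch_pot" else 0.1

      for trial in range(20):
          if random.random() < 0.2:                   # explore a little...
              action = random.choice(list(value))
          else:                                       # ...otherwise pick the best-valued action
              action = max(value, key=value.get)
          value[action] += alpha * (reward(action) - value[action])

      print(value)  # "touch_pot" ends up clearly negative: the burn has been learned

    Whether this is closer to supervised learning or to reinforcement learning is partly a matter of where the "teacher" sits: here the feedback comes from the environment itself, which seems to match the boiling-pot case better than direct programming does.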

    ReplyDelete
    Replies
    1. I agree with you on that. Perhaps Turing is suggesting that behavioral methods are partially responsible for children's learning but not sufficient?

      I think the blank notebook idea is wrong as well. I started to discuss this in an earlier thread - but to me, it seems there are blank "equations," systems, templates that children have that are then filled in, changed, specified more as the child learns and grows.

      Delete
    2. I think what Turing was getting at with this whole child-mind model was the idea of plasticity: being an open-enough system that it 1. does not succumb to rigid rules which cause logical fallacies and 2. can learn from its mistakes and experiences. With regards to your point about machine learning resembling the teaching of new mechanisms to the child brain, I'd like to break it down into the two big theories of categorization, namely

      1. Prototype Theory

      2. Exemplar Theory

      (I'm not going to describe them here because I'm pretty sure you've all seen them at some point and I realize this is becoming something of a tangent.) It seems like prototype theory would be more "Behaviourist" in the sense that you either match the prototype or not. Machine learning seems more akin to exemplar theory as you are going through all the thousands of instances of seeing a similar image and classifying it thus. So, even within the process of early learning/categorizations and computer comparisons, there are fine distinctions to be made.

      To offer a counterpoint to your statement that a child classifies their world around them primarily through a "Reward and punishment" style of teaching, I would argue for the social-facilitative and modeling behaviours through which children learn to emulate the actions of those around them (not necessarily by experiencing reward or punishment for doing so.)

      Thanks for giving me loads to respond to!
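
      To illustrate the distinction with a sketch (the data points and category labels are made up), prototype theory compares a new item to each category's average member, while exemplar theory compares it to every stored instance:

        import numpy as np

        # Two toy categories, each stored as a handful of two-feature exemplars.
        cats = {
            "bird":  np.array([[1.0, 0.9], [0.8, 1.1], [1.2, 1.0]]),
            "plane": np.array([[5.0, 4.8], [5.2, 5.1], [4.9, 5.0]]),
        }

        def prototype_classify(x):
            # nearest category mean (the "prototype")
            return min(cats, key=lambda c: np.linalg.norm(x - cats[c].mean(axis=0)))

        def exemplar_classify(x):
            # nearest single stored exemplar
            return min(cats, key=lambda c: np.linalg.norm(x - cats[c], axis=1).min())

        novel = np.array([1.1, 1.0])
        print(prototype_classify(novel), exemplar_classify(novel))  # both say "bird" here

      With such clean categories the two agree; they come apart when a category's members are spread out or oddly shaped, which is roughly where the fine distinctions mentioned above start to matter.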

      Delete
  33. Turing presents an interesting approach to simulating the adult human mind: emulating the very process by which the adult mind developed. In starting with the “initial state of mind” (the child-like mind) and expanding its knowledge base by way of education, I’m hesitant to conclude whether the desired adult-minded machine would result. Turing himself acknowledges that the processes that brought the adult mind “to the state it’s in [are its initial state of mind, education and other experiences]”. Without the experience component in the learning process, what can become of the machine’s “mind”? Surely it would lack intuition and abstract thought – especially since the only mode of learning suggested is reward-punishment learning, which merely teaches right from wrong. A problem with the reward-punishment approach is that the machine has to be able to flexibly decide whether the present situation fits a template before proceeding with the appropriate mode of response. This would firstly require an adequate means of exploring the environment, as reinforcement learning is founded upon receiving feedback from the environment. Lacking the adequate means (i.e. legs, eyes) with which to explore the environment, how can the machine's reward-punishment learning be achieved? Moreover, it would require abstract thought to apply its past experiences, each of which differs from the others (however infinitesimally), and to judge, based on these slightly different experiences, which is similar enough to the present situation for its learned response to apply.

    ReplyDelete
  34. "Thinking is a function of Man's immortal soul, God has given an immortal soul to every man and woman, but not to any other animal or to machines. Hence no animal or machine can think."

    The Theological argument is in conflict with Turing's view that "thinking" is simply classified as being able to succeed at the imitation game. The theological view states that only humans can think as the result of having a piece of the divine in them. Turing's view is that, within 50 years after writing his essay, machines will be able to perform as well on the imitation game as humans. I believe in a middle ground between these arguments. I don't believe that humans are the only ones capable of thought, but I also do not believe that thought for humans is indistinguishable from thought for machines. I am not a supporter of the argument that the electrical nervous system is no different from the electrical circuits of the digital computer. Perhaps it is not a divine soul, but there must be something that sets humans apart from the Turing machine.

    ReplyDelete
  35. Like robots, humans are not completely autonomous. Although we may not see our autonomy as comparable to that of AI, I have begun to philosophically question whether all our actions are of our own free will. That is, are we free to do whatever we please, and are our actions as humans free of any outside control? Much like the sort of step-wise reasoning that a programmed computer would have to go through ('if 0, go to place x'), people are simply made up of years of programmed experience, learned and neurally encoded through years of synaptic plasticity. The argument that free will should be a requirement for passing the Turing Test, and should be a requirement for cognition, is sometimes put forth, but do humans themselves really have free will? Or are we just slaves to the neurally encoded patterns in our brains, much like an AI's programmed code of instructions?

    ReplyDelete
    Replies
    1. Even assuming our own experience is determined, I think we also receive the consequences of being causal in the lives of others. I think one way to conceptualize free will is not just as one's choice of action, but as being a contributor to the lives of others and an active receiver of the consequences of those actions. Perhaps what makes us "free" (if at all) is our ability to create spontaneous/unexpected space as recipients of our own contributions.

      Delete
    2. How do we know free will is a requirement to pass the Turing test?

      Delete
  36. I found Turing’s comparison between the development of a “child machine” and evolution very interesting. I think the idea of producing a machine that has the capacity to learn (but not very much knowledge) seems like the most possible and efficient way to produce something that eventually thinks like an adult human (rather than trying to program in every bit of information a person comes across throughout their lifetime). However, I do not think the “rewards and punishment” Turing goes on to talk about later would be a sufficient teaching mechanism. As Tolman once pointed out, humans learn by methods other than the simple stimulus-response relationship, such as by observing others around them. In order for a machine to learn in this way, it would have to be able to manoeuvre around the world (like the T3 robot Harnad speaks of in the second paper). It would also need to have some mechanism through which it can observe input from the outside world, in order to observe events that are going on around it. As was pointed out by Harnad, a “computer” per se would not be able to take in this input, but I believe there is potential for people to create a “machine” that could learn in this way and therefore “grow up” to have the capacities of an adult brain.

    ReplyDelete
    Replies
    1. I also thought Turing's "rewards and punishment" idea was quite problematic. He talked about some orders being transmitted through "unemotional" channels so as to diminish the number of punishments and rewards required. This is certainly not how humans learn; we do not have "unemotional" channels. If this could really be implemented in machines to reduce how "sore" they'd feel after each mistake, then lots of the things the machine learns would have no emotional side to them. If we are trying to make humans and machines less distinguishable in the imitation game, this is certainly not the way to do so.

      Also, he suggested the orders be given in a "symbolic language" in order to reduce the number of punishments and rewards. This seems as if he was suggesting a symbolic language would be less emotional; it may be for most humans, as it's not the kind of language we grew up with and we therefore do not associate it with any emotions. But in the case of machines, if it's the first and only language they'll ever learn, who is to say there'd be no emotions attached to this symbolic language?

      Delete
  37. This comment has been removed by the author.

    ReplyDelete
  38. ‘An important feature of a learning machine is that its teacher will often be very largely ignorant of quite what is going on inside, although he may still be able to some extent to predict his pupil's behavior.’
    We can make machines that can win games or that can act like people, but the point still remains that they are following a set of instructions and rules that govern the game being played. The games of chess and Go require you to think many moves ahead, for both yourself and your opponent. You’re predicting behaviour by anticipating a set of moves your opponent could make according to certain rules. Humans constantly change their goals based on the rules that govern them. Sometimes they ignore the rules completely and live their lives by their own set of rules. Humans do things for specific reasons because something in the mind tells us to act or not to act in this world. On the other hand, no machine is programmed to do nothing. You can program a machine to act human, but at what point does imitation become cognition if it must act in a certain way? The question then becomes: is it possible to create a machine that has the innate capacity to adapt, act, and survive in a world with no specific rules it is programmed to follow?

    ReplyDelete
  39. Regarding The Argument from Informality of Behaviour:


    I’m curious how much experience plays a role in all of this. In Turing’s red-light example, the human knows what to do when the red and green lights are presented together based on experience: not necessarily from having seen a red and green light together, but from experiencing what other drivers have done in the past, and from the fact that proceeding with caution has worked in situations like this before.

    So my question then relates to what we discussed in class regarding the sensory perception of T3. Is one of the reasons that sensorimotor capacity is necessary to pass T3 (and thus probably to pass T2) not just that it allows you to do, but also that it allows you to experience? And in turn, do these experiences impact your future thoughts and actions?

    ReplyDelete
  40. "These choices make the difference between a brilliant and a footling reasoner, not the difference between a sound and a fallacious one."

    The above sentence shows the importance Turing places on the order of applying rules in a logical system. Most of the time, this may help machines mimic humans better. But actual humans hold fallacious beliefs all the time; this is why we have so much discrimination, such as racism, sexism, and homophobia, in the real world.

    Turing also gave an example of such a proposition: "If one method has been proved to be quicker than another, do not use the slower method." It is very easy to find instances of humans not applying this rule, as we are emotional creatures. For example, one may take a longer detour to school due to the better views the detour offers.

    ReplyDelete
    Replies
    1. Your comment reminds me of a class where I learned about the difference between artificial intelligence (or neural networks) and humans. I guess humans are beings that do not always proceed to the correct output the way machines do. When an AI learns an algorithm, it tries hard to reach the desired output and fix all the errors. However, humans are not the same: sometimes we prefer errors, or less optimized solutions, over the desired output. I have been curious about how AI could mimic humans in this.

      Delete
    2. This is really interesting because it makes me think of what the "correct output" is in different situations. Sometimes we make irrational decisions, and sometimes we choose the option that doesn't make sense in the long term but makes sense in the short term or vice versa.
      Are these decisions predictable? Or do they have an element of randomness?
      Turing mentions in the last paragraph, "it is probably wise to include a random element in a learning machine". Could adding an element of randomness make a machine's decision making more similar to ours?
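
      One simple way to picture that random element (just a sketch; the options and numbers are invented, echoing the scenic-detour example earlier in this thread) is a rule that usually takes the best-valued option but occasionally picks at random:

        import random

        def choose(values, epsilon=0.1):
            """Pick a key from `values`; with probability epsilon, pick at random."""
            if random.random() < epsilon:
                return random.choice(list(values))   # the random element
            return max(values, key=values.get)       # the usual "rational" choice

        options = {"short_route": 1.0, "scenic_detour": 0.7}
        picks = [choose(options) for _ in range(1000)]
        print(picks.count("scenic_detour"))  # roughly 50 of 1000: occasional "irrational" picks

      Whether that kind of built-in noise really makes a machine's choices more like ours, or just makes them harder to predict, seems like exactly the question being asked here.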

      Delete
  41. This comment has been removed by the author.

    ReplyDelete
  42. (Posted this in 2b. Comment Overflow by accident)

    In the section on learning machines, insights into Turing's views on cognition are made clear. Notably, through the following quote we get a sense of whether or not Turing was a computationalist.

    "Intelligent behaviour presumably consists in a departure from the completely disciplined behavior involved in computation, but a rather slight one, which does not give rise to random behaviour, or to pointless repetitive loops." (Under 7.Learning Machines)

    Here, Turing says that intelligent behaviour arises outside of the disciplined behavior involved in computation, such as programming and "special coaching." If we presume that thinking (or cognition) is an intelligent behaviour, he is saying that programming a machine with the appropriate instruction table alone will not result in what we know to be "thinking." So it seems that Turing was not a computationalist as he believes that cognition and computation may not be synonymous.

    On another note, the notion that no animal can think (under the section on the theological objection) does not seem to hold true today, given extensive research that seeks to bridge the gap between humans and animals based on the similarities they do share. One example I can think of is Jane Goodall's findings on chimpanzees and their ability to make tools that facilitate the gathering of food (i.e., stripping a stick thin enough to push through a hole for termites). Prior to this, humans were believed to be the only species capable of tool making. This is one example showing that animals can produce intelligent behaviour as well. Perhaps the prevalence of animal models in experimental work today also represents a shift in our beliefs about the differences between man and other animals.

    ReplyDelete
  43. RE: The "Heads in the Sand" Objection

    Something I thought was interesting (going off Turing's critique of this objection) is that machines created by people could be seen as an extension of their own "computer". If a machine is better at computation than a human, why would humans be afraid that it is superior to them if it's something that they created? While I don't think it's justified for humans to think that they are superior to animals and machines, I think it's silly that people are so fearful of something that can be viewed as a product of human thought.

    This view of machines also relates back to the main idea of the Turing Test and I think it's interesting to note that these machines are all created by people: someone writes the code and sets up the system.

    ReplyDelete
    Replies
    1. I think the fear may be in the idea that a virtual computer may develop in ways unforeseen by the (human) computer that created it. I suppose there isn't much difference from having a child and watching it grow up. Parents probably experience this to a certain extent; I wonder how and why that fear is more explicit in the case of creating virtual minds. Maybe there is a difference in the parent-child relationship, with its concern over care and maintaining an empathetic relationship? Perhaps, though, a good T3 could involve this parent-like relationship too...

      Delete
  44. This comment has been removed by the author.

    ReplyDelete
  45. I agree with Turing’s response to The Mathematical Objection. The proponents of this objection argue that the fact that there are some questions that a universal machine might not be able to answer proves a disability of machines to which the human intellect is not subject. I would disagree, as the human intellect is diverse. If we were to compare different individuals across a set of questions, there would certainly be some that not all of them could answer. I think that this ties into Turing’s reply to The Argument from Various Disabilities. Both of these examples illustrate that it is important to remember that the point of the Imitation Game is not to successfully create a machine that can do everything imaginable that any human in the world can do, but to create a machine that could fool a human interrogator into thinking they were interacting with another human. As all humans are unique and have different capacities, the fact that a machine participating in the Imitation Game might get an answer wrong every now and then, or might not be friendly, seems logical.
    Finally, I would like to address Lady Lovelace’s Objection. If AI can only perform whatever we know how to order it to perform, then, at least as a starting point for AI, there seems to be a limit on its ability. Obviously, we cannot currently order AI to perform capacities that we ourselves either do not understand or simply cannot perform. One would hope that we could one day build a program advanced enough to build on its own capacities and teach itself new things, all the while expanding humans' understanding of their own minds.

    ReplyDelete
    Replies
    1. I think the point you raised about the diversity of human intellect is an interesting one. This idea of a machine being “able to do what humans can do" assumes that we have a consensus on, and have properly defined, exactly what the "average human/most humans" is/are able to do, even though we know there is such large variation in ability between people. I think his reply to The Argument from Various Disabilities is generally a good one; however, in order to create a machine that would pass T2 we would still need to first define the various cognitive capacities that (most/the average?) human is able to perform and program the machine accordingly.

      As well, your point challenges the purpose behind the Turing Test. If we assume that for any given problem-solving task some people who attempt it will be successful and others will not, then what information (i.e. reduction in uncertainty) do we really gain if a given machine passes the task? Is the Turing Test meant to be about "fooling the interrogator" and passing as human? Or is it about a machine that is able to recreate ("imitate") our cognitive capacities, thus providing a mechanism to better understand how we think? Purposefully introducing calculation errors to fool the interrogator, so as not to be “unmasked by its deadly accuracy,” implies the former.

      Delete
  46. My struggle with the imitation game and the Turing Test is that the answer to 'can machines think' has become the benchmark for intelligence; and while Turing's input was groundbreaking, the act of emulating thinking is not equivalent to actually being capable of thinking. Turing was around when AI was only beginning to emerge as a field of study, and our understanding and even our definition of intelligence have most likely changed since that time. Chatbots have "beaten" the Turing Test (albeit in convenient conditions) since 2011.

    If the original thought that led to this question was the Church-Turing hypothesis (if there is a method for obtaining the values of a mathematical function, the function can be computed by a Turing Machine, i.e. a logical machine can solve anything governed by a rule), then machines/expert systems have long outperformed humans in this domain. It was previously unheard of that a computer would be able to beat a human at chess, and yet this is clearly no longer the case, which forces our conception of intelligence to be reconsidered as we progress.

    "Intelligent behaviour presumably consists in a departure from the completely disciplined behavior involved in computation, but a rather slight one, which does not give rise to random behaviour, or to pointless repetitive loops"

    We increasingly see machines that aim to optimize and improve human behaviour in targeted, non-arbitrary ways, yet this is not counted as intelligence. My main thought would then be: is the Turing Test, in its original form, a relevant benchmark anymore, or is there a need to redefine the boundaries originally considered? Is the Turing Test an insufficient benchmark, or are we just unsatisfied with the concept of a machine being intelligent at a base level?

    ReplyDelete
    Replies
    1. This comment has been removed by the author.

      Delete
      2. I have some thoughts about Turing’s question “Can machines think?”, or whether machines are intelligent. I think it’s actually asking “Can machines have cognition?” I believe this amounts to asking whether machines can move beyond being dynamical systems that only behave according to the laws of cause and effect, to being causal systems that not only behave but also feel. I think it is important to have this question, as it is what would distinguish a machine such as a T3 from other dynamical, causal systems such as a waterfall or a thermostat.

      All that the Turing Test is asking is that the performance of the machine be indistinguishable from a human’s performance capacity. So, if Dominique can stand up and sit down, talk to us, and do a variety of other things that don’t make us question her ‘human-ness’, she passes the TT. The passing criterion is quite intuitive in a way, and so I think it is a sufficient way to assess a T3. I'm not sure what you mean by a machine being intelligent at a "base level"?

      Delete
  47. “I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous, If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. (…) We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"’


    Turing makes a fairly major jump here. He replaces asking whether a machine can think with asking whether a machine can ‘pass’ (or, rather, be treated by the interrogator no differently than a man or woman) the imitation game. While he goes through many possible objections to his argument, he doesn’t sufficiently address the gap between these questions. Why are we to assume that a machine appearing to think (doing well at the imitation game) is virtually the same as a machine thinking?

    His explanations for this logical jump seem unsatisfactory. He objects to directly asking the question because of the difficulties in defining the terms ‘machine’ and ‘think,’ and concludes that the answer to the question would have to be found via a Gallup poll, which would be absurd. However, his alternative solution is somewhat comparable to a Gallup poll. Performing well on the imitation game is comparable to asking the interrogator whether what they are communicating with is a person / can think. Thus, whether or not the machine can think is ultimately a question decided by the opinion of others as to whether or not it can think. One could argue that, if a Gallup poll solution to the question is absurd, this also carries an element of absurdity.

    ReplyDelete