Saturday 2 January 2016

(2b. Comment Overflow) (50+)

30 comments:

  1. “Thinking is a function of man’s immortal soul. God has given an immortal soul to every man and woman, but not to any other animal or to machines. Hence no animal or machine can think.” (p8)

    Turing objected to this point of view, but conceded that there is a difference between humans and animals. As such, is it possible for a machine to have a level of intelligence that is comparable to that of an animal but inferior to that of a human being? Or, for example, if a machine has the physical appearance of an animal while still demonstrating superior artificial intelligence comparable to a human’s, how would the Turing Test hierarchy characterize it?

    ReplyDelete
    Replies
    1. PeiHong, yes, of course animals think and feel too. But we couldn't do a cat Turing Test because we don't know nearly well enough what cats can and cannot do, and we're not nearly as good at mind-reading cats as we are at mind-reading people. So a cat TT probably would not help us reverse-engineer cognition very much.

      Delete
    2. Dear professor, thank you for your reply! So would animals count as just “toy” models?

      Delete
    3. Zhao, animals do not come under toy. They have their own levels of T1 - T5 that are similar to human T levels but separate. We simply don't understand animals well enough to determine what characterizes a T4 cat machine, for example. This is because of the other minds problem: we don't know what it's like to be a cat.

      Delete
  2. In modern discussions of machine learning and AI, the idea of ‘singularity’ is inevitably brought up. Assuming limitless and exponential increases in the capability of machines to learn, eventually such a system would become more ‘intelligent’ than a human. This intelligence would doubtless be limited to the domain in which it could learn, but a generalizable ‘humanlike’ intelligence seems likely to develop with enough time and effort. Many popular figures such as Elon Musk believe that this will spell the end for humanity. Media like Ex Machina and Westworld portray similarly grim prospects for the interactions between humans and AI.

    My own personal belief is that a ‘robot apocalypse’ is unrealistic, and that those who believe such a future is inevitable are paranoid. That being said, with unlimited computational resources and storage space (setting aside the logistical hardware concerns), it seems reasonable to imagine that an AI which can simulate human intelligence could ‘feel’ at a much faster rate than we could. If its experience were similar to ours, or at least indistinguishable from the point of view of a Turing Test, how realistic is AI as a threat to humanity? It seems that such an AI is much more likely to be an ally than an enemy to us.

    Fear of AI seems to require three things: that AI is more intelligent/powerful than humanity, that it becomes ‘aware’ of us as inferior, and that it is therefore antagonistic towards us.

    The ability to feel does not mean that an AI would feel in the same way as humans. The desire to compete with and eliminate threats is a biological one, and something that I imagine could be left out of the programming. As for the fear of having AI that is more intelligent than your average human, I would say that we already have AI that can beat us in chess, can compute information much faster than we can, and can do all other manner of things that I personally cannot do as well, or even at all. The fear of the final part, awareness, stems from a concern that we have treated AI badly – or that they believe we are inferior – and that they therefore wish to harm us. This again assumes that Strong AI is similar enough to humans to share our flaws, which I believe to be unrealistic. For these reasons, I have no reason to expect that an exceptionally powerful AI would harbor any antagonism towards us.

    My question is in two parts: Firstly, how likely do you believe it is that an AI ‘singularity’ occurs? Secondly, would such an event be necessarily a bad thing for humanity?

    ReplyDelete
    Replies
    1. Edward, so far, all talk of the "singularity" is sci-fi fantasy; ditto for either computers or robots taking over the world. But what about the question at hand: Can computation alone generate cognition? (I doubt the capacity to feel has much to do with speed: why and how would speeding something up turn an unfelt state into a felt one? If you can explain that, you've solved the hard problem!)

      Delete
  3. RE: Can Machines Think

    Reading this part of the article made me question the limits of what machines can achieve. If machines, as the article suggests, will be able not only to think but to surpass humans in the imitation test, then what else will they be able to achieve, and is there a limit to how much they can learn? If in 50 years machines can achieve this feat, will they be able to surpass human cognition quickly enough that their cognitive capabilities will be unimaginable to humans?

    ReplyDelete
    Replies
    1. Jenna, before wondering whether "machines" (what's a machine?) can surpass us, how about just the "easy" problem of getting them to do what we can do? We're trying to reverse-engineer human cognition, not augment it...

      Delete
  4. From the argument from consciousness: “According to the most extreme form of this view the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking.”

    1. Well... Did Turing just answer his own question, “Can machines think?”? (I am seriously asking!)

    2. In the part about the “skin-of-an-onion” analogy, there is a question of whether we come to the “real” mind after we strip all the skin off, or whether we eventually come to skin which has nothing in it. So far, judging from history (and perhaps from my experience as a cog sci student), peeling off the onion skin, digging into the study of cells and neurotransmitters, does not seem helpful at all for knowing how we think. Turing said that if we eventually come to onion skin with nothing in it, then the whole mind is mechanical. But I wonder: seeing that there is nothing inside the onion skin is an observation made by outsiders. Since being the machine (or human) is the only way to feel oneself thinking, how can Turing make a statement saying that the brain is mechanical?

    ReplyDelete
    Replies
    1. Alison:

      1. Turing is talking there about the other-minds problem: The only way to know for sure whether anything thinks is to be that thing! (Yes, I think Turing thinks that machines can think, because he thinks people are machines, and they are! A "machine" is simply a causal system, and organisms are machines -- but they are thinking/feeling machines.)

      2. All Turing means by "mechanical" is that the brain, like the heart and the kidneys, is an organ, operating under the cause-effect principles of biology. Organs and organisms are special cases of physical systems, and we are trying to figure out how they work, by reverse-engineering them. (The only alternatives to "mechanical" are either random or magical.)

      Delete
  5. If a burnt child learns to avoid fire because of fear, is it possible for a machine with a high level of intelligence to learn anything from experience? Since machines are usually tested right after they are made (that is, without entering the real world of human beings), they are not able to develop any experience or processes of learning that teach them to approach or avoid certain things. As such, is it possible that machines’ lack of feeling is actually due to their lack of experience of living like a human being?

    ReplyDelete
    Replies
    1. PeiHong, yes, even simple machines can learn from experience. But why would the experience need to be felt experience? Apart from feeling, experience just means data and history.
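
      For instance (a toy sketch, with made-up names, just to illustrate that “experience” here need mean nothing more than logged data and updates):

        import random

        # Toy "burnt child" learner: action values start neutral and are updated
        # from outcomes alone -- data and history, with no feeling anywhere.
        values = {"touch_fire": 0.0, "avoid_fire": 0.0}
        learning_rate = 0.5

        def outcome(action):
            # The "world": touching fire hurts (-1), avoiding it is neutral (0).
            return -1.0 if action == "touch_fire" else 0.0

        for trial in range(20):
            # Explore occasionally; otherwise pick the action with the higher learned value.
            if random.random() < 0.2:
                action = random.choice(list(values))
            else:
                action = max(values, key=values.get)
            values[action] += learning_rate * (outcome(action) - values[action])

        print(values)  # "touch_fire" ends up negative, so the learner comes to avoid it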

      Delete
    2. Dear professor, but isn't that another instance of the "other-minds" problem? Or is it solely data?

      Delete
    3. Hey Zhao, I think Prof. Harnad meant in his answer that even if machines learn from experience, it would not mean that they feel anything. That is because this experience could be only formal: for digital computers at least, it comes from data fed into the system. And since machines do learn, it is not lack of experience per se that explains their lack of feeling.
      If a machine could learn and think as we do (i.e., succeed in the Turing Test), then the other-minds problem would be to decide whether thinking feels the same for the machine as it does for us.
      As for your other question ("is it possible that machines' lack of feeling is due to their lack of experience of living like a human being?"), I guess it would depend on what you include in the definition of "living like a human being". If you include "feeling", then that is already the other-minds problem.

      Delete
  6. RE: More Than One Method of Producing Thought?

    In section 2 of his paper, Turing raises the question, "May not machines carry out something which ought to be described as thinking but which is very different from what a man does?" Since he is only interested in what thinking things do, Turing chooses not to dwell on this question – but could it not actually be crucial to his objective? Assuming that intelligent life exists on distant planets with vastly different environmental conditions from those of Earth, the bodily mechanisms of these lifeforms would have to be very different from our own in order to withstand such conditions. But if there exists more than one of these lifeforms in the same place (and why wouldn’t there be?), they would have to use their vastly different organs to accomplish the same goals of communication that animals on Earth accomplish with theirs. If the brain (as we know it) is not the only thing that can produce intelligence (there’s nothing “special” about the brain), then there’s no reason why a machine couldn't produce it as well.

    ReplyDelete
    Replies
    1. Gus, before we worry about whether there is more than one way to pass TT, shouldn't we first worry about finding at least one way that does?

      Delete
  7. In the section on learning machines, Turing's views on cognition are made clearer. Notably, through the following quote, we get a sense of whether Turing was a computationalist or not.

    "Intelligent behaviour presumably consists in a departure from the completely disciplined behavior involved in computation, but a rather slight one, which does not give rise to random behaviour, or to pointless repetitive loops." (Under 7.Learning Machines)
    Here, Turing says that intelligent behaviour arises outside of the disciplined behavior involved in computation, such as programming and "special coaching." If we presume that thinking (or cognition) is an intelligent behaviour, he is saying that programming a machine with the appropriate instruction table alone will not result in what we know to be "thinking." So it seems that Turing was not a computationalist as he believes that cognition and computation may not be synonymous.

    On another note, the notion (under the theological objection) that no animal can think does not seem to hold true today, given extensive research that seeks to bridge the gap between humans and animals based on the similarities they do share. One example I can think of is Jane Goodall's findings on chimpanzees and their ability to make tools that facilitate the gathering of food (i.e., making a stick thin enough to push through a hole for termites). Prior to this, humans were believed to be the only species capable of tool making. This is one example showing that animals can produce intelligent behaviour as well. Perhaps the prevalence of animal models in experimental research today also reflects the shift in our beliefs about the differences between man and other animals.

    ReplyDelete
    Replies
    1. Brittany, good quote: I agree that Turing was not a computationalist (but here he might just have meant that creativity requires something more than just following an algorithm -- or at least Turing-scale (giant) creativity does).

      Although the notion of "speciesism" (by analogy with sexism and racism) is not coherent logically, it's certainly anthropocentric to keep telling ourselves that we're the only species that thinks.

      Delete
  8. [Did not post last week for some reason] I really appreciate this review of Turing’s work. It articulated some of my reservations, ones that feel unwarranted against a giant. I especially appreciated the distinction between the goals of artificial intelligence and cognitive modelling. Indeed, it is necessary to exclude humans as candidates, even though they are dynamic causal systems and thus machines, because the goal of CM is reverse-engineering: understanding how it works, not just creating a useful machine whose inner workings remain elusive. That being considered, I feel the criticism of Turing for calling it an “imitation game” perhaps went overboard. Introducing computing machinery and intelligence in that era was so overwhelming and theoretical that calling it a “methodology for reverse-engineering human cognitive performance capacity” would have alienated laymen and made the entire concept less digestible. Lastly, I absolutely agree that it was incorrect to exclude T3, and that the T2 level is easily distinguished from a human. In light of the idea that this stems from our verbal abilities being grounded in our non-verbal ones, is this not arbitrary? Yes, I agree that sensorimotor capacities enhance and contextualize our verbal abilities, but perhaps other human capacities (such as emotion and self-awareness) ground them just as essentially.

    ReplyDelete
  9. Thinking about the TT in terms of different hierarchical levels really helped me to make sense of some of the vague/unclear language used by Turing in his original article which has been subject to so much misinterpretation over the years, in large part because “…using T2 as the example has inadvertently given the impression that T3 is excluded too…” However, as Prof. Harnad goes on to explain, real world sensorimotor experience is necessary even for measures of verbal performance capacity. Thus, only a machine with sensorimotor capacities would be able to genuinely pass T2. Does this imply that the criteria of a scaled up version of the TT might actually be too strict?

    The “peek-a-boo unicorn” discussion in class further emphasized the importance of sensorimotor grounding; to cognize includes being able to understand even the most unlikely concepts in relation to things that we have actually experienced/given meaning to in the real world.

    ReplyDelete
    Replies
    1. In terms of the hierarchy of the Turing Test, I think the way the different levels are separated is fundamentally flawed. As Prof. Harnad suggested that only a machine with sensorimotor capacities would be able to genuinely pass T2, I'd also like to argue that passing T3 would need "indistinguishable external performance capacity" as required for T4. Passing T3 isn't about how good one's (or a machine's) sensorimotor skills are; it's about how indistinguishable those skills are from a real human being's. In order to know how indistinguishable the sensorimotor performances are, I'd imagine many of these tests would need to be monitored directly with the naked eye. In that case, it's quite impossible to discount what the machine looks like externally.

      On the "peek-a-boo unicorn" argument, it's also important to point out that these things are based on cultural, geographic and language differences a person has been subjected to. Not every real human being will know or even be able to understand such concepts. If we consider Turing's idea of making a "child machine," a machine definitely has the capacity to learn such interesting, unlikely concepts from enough exposure to them.

      Delete
    2. I agree that a T4 robot may be required to pass T3, but this is irrelevant to the question we are trying to address. True that passing T3 isn't about "how good a machine's sensorimotor skills are", but how could a cognitive agent ground symbols (and thus, even stand a chance at passing the TT) without the ability to interact autonomously with the referents of words in the world?

      Now that we have gone over both the SGP and categorization, I am better able to understand the power of the peek-a-boo unicorn as a well-defined category indirectly grounded in verbal explanations, which are in turn grounded in direct sensorimotor experience (at some level).

      Delete
  10. If a T3 robot would need to be able to do everything in the real world, not the virtual world, wouldn’t that mean that its ability to learn and to develop its linguistic capacities and knowledge needs to be grounded within societal culture and social interaction with other people? Thus, in creating an empirical T3 robot and situating it in the real world, the robot would face obstacles in the way it is embodied and physically represented in the real world. The T3 robot, even if not completely indistinguishable in all physical materials (e.g., skin), would still need to resemble humans in overall structure as closely as possible to avoid environmental biases or drastic disparities from the standard human experience.

    ReplyDelete
  11. RE: Gödel's theorem: Although it is established that there are limitations to the powers of any particular machine, it has only been stated, without any sort of proof, that no such limitations apply to the human intellect.

    Gödel showed that no consistent formal system rich enough to express arithmetic (for kid sib: a formal system is a kind of symbol-manipulation procedure, governed by a strict set of rules that determine what configurations can be obtained) is capable of proving all truths. Obviously this theorem applies to Turing Machines, or any machine, as they all operate as instances of a formal system. However, this only implies that machines adhere to such a system at the hardware level. But couldn’t machines contain a higher-level “informal” manipulation of symbols? To clarify my question, let’s suppose the central nervous system is the lower-level hardware that operates on a formal system. We can think about calculus or appreciate fine art (higher-level processes) while the lower hardware level is still functioning on a formal system. Therefore, some TT candidate could still be an instantiation of a formal system that is subject to Gödel’s theorem and still be “intelligent”.
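
    (Stated a little more precisely, in the standard formulation rather than as a quote from Turing or the article: for any consistent, effectively axiomatizable formal theory $T$ that includes basic arithmetic, there is a sentence $G_T$ such that
    $$T \nvdash G_T \qquad \text{and} \qquad T \nvdash \lnot G_T,$$
    so $T$ is entirely rule-governed yet cannot prove all arithmetic truths.)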

    ReplyDelete
  12. RE: "all causal systems are describable by formal rules... we know from complexity theory as well as statistical mechanics that the fact that a system's performance is governed by rules does not mean we can predict everything it does"
    I don't quite understand complexity theory, or what you mean by statistical mechanics. If it is a causal system, does it not mean that by defining the cause (described by formal rules) we can predict the outcome in future instances?
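
    One standard illustration of the quoted point (a sketch of my own, not from the article) is deterministic chaos: the rule below is completely specified, yet an unmeasurably small uncertainty in the starting state ruins long-range prediction in practice.

      # Logistic map: x_{n+1} = r * x_n * (1 - x_n). Fully rule-governed,
      # yet two starting states differing by one part in ten billion diverge.
      r = 3.9
      x, y = 0.5, 0.5 + 1e-10

      for step in range(60):
          x = r * x * (1 - x)
          y = r * y * (1 - y)

      print(abs(x - y))  # typically of order 1 after 60 steps: prediction has failed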

    ReplyDelete
  13. I couldn’t help but get stuck thinking about the arguments from various disabilities, specifically those regarding “feeling.” I know this is a “granny objection,” but considered in an alternative way it may reflect a more formidable objection to the idea that machines can indistinguishably do what thinkers like us can do. Although we don’t know whether humans or machines act because they feel, or whether either “feels” at all, we do know that computers are rule-based symbol-manipulating devices that will follow their rules unless specified not to. Even if a random element is programmed, the random element is part of the machine’s “book of rules.” The random element is no surprise to the machine itself, as the machine simply follows the rules it is given and then adjusts according to the random event.

    Human feeling influences behaviour (or we feel like it does), and many of these feelings start outside our awareness and only later become apparent. The way a person behaves, or changes their behaviour (or not) upon becoming aware of some feeling they have, is a large part of what composes a person’s particular personality. I believe that without feelings that affect behaviour unknowingly, later become apparent to the feeler, and subsequently cause an adjustment in behaviour, a machine will fail to fool a judge in a Turing Test in which the judge has an unlimited amount of time.

    This is not to be conflated with the other-minds problem: regardless of whether humans or computers can actually feel or are conscious, we know from observation that computers currently do not change their functioning based on a sudden (perception of) awareness of their internal state, unless they are programmed to do so. If they are programmed to do so, then the state and resulting behaviour arise not out of surprisal but out of adherence to rules, and so will not influence or produce action in the same way as in humans (assuming that the machine does not itself think it is conscious or has intentionality). For argument’s sake, even if machines do think they are conscious, like humans, I still think their program will never truly resemble the human experience or the resulting behaviour relevant to our discussion, because human programmers do not know enough about how this process happens, about our supposed sub/unconscious, to be able to create a sufficient code.

    I believe that an intelligent judge, adept at emotional manipulation, could design situations and tests that could pick out this difference between machine and human. In sum, I think a machine could not be programmed to emulate this human behaviour 100% of the time, and so cannot do what human thinkers do. The experience of not knowing that you are behaving in ways that are in line with hidden feelings is not insignificant, as it is observable to other people, especially to those involved in the hidden feelings. (For example, this kind of behaviour could be obvious to the person who is the subject of a crush that someone won’t admit to themselves, or doesn’t know, they have.)

    ReplyDelete
  14. Harnad makes an important point when he says that Turing did not intend his test to be a "game" whose object is to "trick" a human into believing the machine in question is a human. He used the allegory of an imitation game to illustrate his point that cognition is whatever output cognition can produce. Moreover, a machine can theoretically approximate very closely the output that a human can produce. (The machine can never perfectly simulate human cognition because it uses a discrete system of 1s and 0s, whereas a human's neural networks are dynamic and continuous.)
    The ability of a machine to trick a human is simply a more creative way of illustrating this, although the trick and game are both nonessential.

    ReplyDelete
  15. A few different points here:
    - “ So thinking, a form of consciousness, is already ostensively defined, by just pointing to that experience we all have and know.” In light of what we have already discussed about cognitive blindness and the unreliability of our descriptions of whatever mental processes are going on when we are “thinking”, this “ostensive definition” does not feel complete or robust. It doesn’t suffice to say that thinking is [those] experiences we all have and know when we are at such a loss to describe how those experiences arise.

    - If “performance capacity” is all we are concerned about to determine whether a machine thinks or not, this is reminiscent of behaviourist emphasis on observable results with a disregard for the underlying cognitive processes.

    - If T3 is the intended level of the test, the one in which all that matters is performance capacity, and it is not concerned with the internal workings (which only need to be identical at the next highest level), then how can this model help us understand anything about human cognition? If the T3 robot arrives at the same conclusion using a different method, what has that revealed about the nature of our own mental processes?
    “A device we built but without knowing how it works would suffice for AI, whose goal is merely to generate a useful performance tool”. Is this what Turing wanted? Later on, the intent of the Turing Test is described as a challenge to create a machine which “can generate our performance capacity, but by causal/functional means that we understand”; this sounds like he would have leaned towards the cognitive modelling account rather than AI.

    ReplyDelete
  16. I think this idea of intentionality/free will/consciousness that others have discussed is irrelevant to the Turing Test. If I’ve understood correctly from our lectures these are all synonyms for what it “feels” like to think and know something. But the Turing Test is not designed to test consciousness or feeling, it tests observable behaviours. If Dominique is able to do everything we can do and respond like a human, then she has passed T2 (we don’t need to know whether or not she “feels” nor do we have any empirical method to test that). To pass T2 it would not be enough though to do things like math, language and problem-solving, such a machine must also be able to navigate social situations and express/interpret emotions in a human-like manner. However, I don’t think the machine needs to “understand” or “feel” those emotions in order to pass as human. In the same way, Google Translate doesn’t need to know what “an apple” is to translate it to “une pomme”. However it is certainly intuitively hard to imagine a machine that is able to "think" and do what we do without having consciousness/feeling.

    On a different note I really agreed with the point that Harnad makes here: Taking a statistical survey like a Gallup Poll instead, to find out people's opinions of what thinking is would indeed be a waste of time, as Turing points out -- but then later in the paper he needlessly introduces the equivalent of a statistical survey as his criterion for having passed his Turing Test!

    I think this is an important contradiction in Turing’s paper. The Turing Test is meant to be an objective, empirical measure demonstrating that a machine is able to “think” or, as Harnad says, “do what ‘thinkers’ like us can do”. However, the way the “imitation game” is framed implies that the way to pass T2 is to trick the human interrogator into believing the machine can do everything humans can do, when in fact it is to design a machine that can actually do everything we can do.

    ReplyDelete
  17. “It is not doubted that computers will give a good showing, in the Gallup Poll sense. But empirical science is not just about a good showing: An experiment must not just fool most of the experimentalists most of the time! If the performance-capacity of the machine must be indistinguishable from that of the human being, it must be totally indistinguishable, not just indistinguishable more often than not.”

    I understand why it is desirable to say the computer must be indistinguishable from a human 100% of the time. Yet there seems to be a certain pragmatic necessity to involving statistics in a Turing Test, which I think Turing is aware of. If the T3 candidate is indistinguishable from a human, then a human interrogator should guess the right answer only 50% of the time. That is to say, if the computer is A and the mimicked human is B, the human interrogator should correctly guess that the computer is A 50% of the time. If the computer is so indistinguishable that it ‘tricks’ the interrogator into choosing B 100% of the time, then the Turing Test has taken on a wholly different tone. Moreover, such a computer would need not only to appear human-like; it would need to anticipate, predict, and in effect mind-read the human interrogator’s upcoming decision, and thus not only appear indistinguishable from a human but also actively convince the interrogator that B is the computer. Such a computer seems intuitively even more advanced than what T3 traditionally requires, since it must not only have the capacity to mind-read and dissemble, but be successful all the time. At that point the computer is going well beyond imitating human verbal output.
    However, once a Turing Test computer is out-performing its human counterpart and is identified as the computer less than 50% of the time, then at a meta-analysis level a secondary interrogator could determine which is the computer without even witnessing the conversations, simply by observing that the computer rarely or never gets selected. Therefore, the only way the computer could be truly indistinguishable from a human would be if it is selected as the computer in the Turing Test approximately 50% of the time. I think statistics are therefore a pragmatically necessary component of the Turing Test.
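
    A small simulation (hypothetical numbers, just to make the meta-level point concrete): if judges name the machine as ‘the computer’ much less often than half the time, an observer who never reads a single transcript can still single it out.

      import math
      import random

      rng = random.Random(0)

      def binom_cdf(k, n, p=0.5):
          # P(X <= k) for X ~ Binomial(n, p): how surprising it is, under the
          # "truly indistinguishable" (50/50) hypothesis, to be picked this rarely.
          return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

      n_games = 200
      for p_named in (0.5, 0.3, 0.1):  # probability a judge names the machine as the computer
          named = sum(rng.random() < p_named for _ in range(n_games))
          print(f"named as the computer {named}/{n_games} times; "
                f"P(this few or fewer, if 50/50) = {binom_cdf(named, n_games):.2g}")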

    ReplyDelete