Saturday 2 January 2016

(2a. Comment Overflow) (50+)

16 comments:

  1. [Did not post last week for some reason] Turing’s paper was certainly interesting. Though I didn’t agree with everything he wrote per se, I believe it is integral to read his original work. It is tempting to scrutinize his ideas through the lens of the present (what we now know about computation, etc.), but understanding his work means adopting the narrative and knowledge of his time. In fact, I couldn’t help but think about how the genesis of the computer essentially came from an exercise where a man answers questions in a “female way” and vice versa. Certainly this gendered exercise would not be as politically kosher today. In any case, when discussing Babbage’s analytical engine and how it was mechanical, I appreciated the explanation that most digital computers are electrical not because electricity is essential in the way it is to the biological nervous system, but because it happens to be the most efficient medium (which is why both systems, digital computers and the CNS, use electrical signals). This illustrates the idea of behavioural equivalence. It also serves to point out that if a machine is indistinguishable from a human, we must accept it as such irrespective of its inner workings. But since the idea here is reverse engineering, perhaps our best bet at achieving indistinguishability is emulating the human body and brain as closely as possible, since we know that is at least one way the human mind/body works (at least as a starting point). Lastly, I loved when Turing addressed people’s fears about machines becoming superior to humans and offered reincarnation as a consolation.

  2. For me, his objection to the mathematical argument is particularly interesting. It has been shown that there are limits to the powers of what machines can do, but these limits have not yet been found to apply to human intellect. Turing claims that just because these limitations haven’t been found yet doesn’t mean they don’t exist. However, if we are never able to discover limitations on human intellect, can we ever definitively answer the question “can machines think?” Does this question depend on eventually finding that there are limitations to human knowledge, or, perhaps, would it only be when a thinking machine is created that we could admit a limitation to human intellect? Also, I don’t understand how Turing can determine that only a “small fraction is used for higher types of thinking” and, therefore, that the storage capacity wouldn’t have to be close to that of a brain to pass the imitation game. If a thinking machine would need to account for sensorimotor relations to equate itself to the capacities of humans, wouldn’t machines also need to retain “visual impressions”? Is it really possible to just separate out the capacity for higher types of thinking, without any reliance on less conscious processes?

  3. RE: Computing Machinery and Intelligence
    Regarding the Turing tests and the claim that a candidate must pass T3 in order to pass T2: which of the following scenarios would be a valid question in a T2 test?

    1) Asking the candidate to name the color of a specific object in the real world, or asking the candidate to perceive some stimulus in the real world.
    2) Asking the candidate to name the color of an apple (or some other object), or describe the sounds a bird makes.

    If the questions could be of the former type, then I would argue that “real world” sensory perception would be a must to pass T2; therefore, if the candidate passes T2, it must also be able to pass T3. However, if the questions consist solely of the latter type, I think it would be feasible to program a machine to respond. If you type the question “what does a bird sound like?” to the candidate, a machine would be capable of responding using language: it could describe a bird call even though it does not “feel” sound. If the question types do not deviate from the second example, a machine passing T2 does not necessarily entail its passing T3.
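    The second question type can indeed be answered by pure symbol manipulation, with no sensing at all. A minimal sketch (the question/answer pairs below are invented for illustration, not taken from any actual T2 implementation):

    ```python
    # A toy T2-style responder: canned verbal answers, no sensors.
    # The question/answer pairs are invented for illustration.
    CANNED_ANSWERS = {
        "what does a bird sound like?": "A songbird produces a high, warbling trill.",
        "what color is an apple?": "Apples are typically red, green, or yellow.",
    }

    def respond(question):
        """Answer from stored verbal knowledge alone."""
        return CANNED_ANSWERS.get(question.lower().strip(),
                                  "I'm not sure how to describe that.")

    print(respond("What does a bird sound like?"))
    ```

    The first question type (“name the color of this object in front of you”) is exactly what no such lookup table could hold, which is the point of the distinction above.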

  4. LOST COMMENTS FOR LAST WEEK:


    “Thinking is a function of man’s immortal soul. God has given an immortal soul to every man and woman, but not to any other animal or to machines. Hence no animal or machine can think.” (p8)

    Turing objected to this point of view but conceded that there is a difference between humans and animals. As such, is it possible for a machine to have a level of intelligence that is comparable to that of an animal but inferior to that of a human being? Or, for example, if a machine has the physical appearance of an animal while still demonstrating superior artificial intelligence comparable to that of a human, how would the Turing system characterize it?

    Replies
    1. Peihong, it was not lost, the overflows for 2a and 2b just got mixed up. I replied in 2b.

  5. LOST COMMENT FOR LAST WEEK:

    If a burnt child learns to avoid fire because of fear, is it possible for a machine with a high level of intelligence to learn anything from experience? Since machines are usually tested right after they are made (that is, without entering the real world of human beings), they are not able to develop any experience or processes of learning that teach them to approach or avoid certain things. As such, is it possible that machines’ lack of feeling is actually due to their lack of experience of living like a human being?

    Replies
    1. Overflows for 2a and 2b just got mixed up. I replied in 2b.

  6. This comment has been removed by the author.

  7. I argue that Turing’s imitation game is a poor substitute for the question “Can Machines Think?” All the game shows us is that a human may be fooled by a machine that has been programmed "to provide answers that would naturally be given by man.” Success in this game is a good indicator neither of the original question nor of whether machines can do what human thinkers can do. Although the question “Can Machines Think?” may not be the most specific, I think that its general motivation is obvious, and that Turing’s replacement question and imitation game are not, as he says, “closely related to it” at all. I believe that the motivation of the original question largely concerns whether machines can behave and intuit flexibly and spontaneously like humans, without the need for human reprogramming after creation. Turing’s game is far from answering this. Besides the fact that the game is limited to verbal interaction, machine success in the game merely proves that a machine has certain capacities that could fool a human. I would like to propose an alternate version of the imitation game, which keeps the general rules (i.e., type of machine, interactions allowed, etc.) of Turing’s original game. In the alternate imitation game, the machine also participates as a judge, and it succeeds if its guesses about which player is human are similar to the human judge’s. One way the game could play out: a machine (the one in question) and a human each judge the classic imitation game 100 times, with each judge playing separately but with the same sets of players. The machines serving as players could be of varying design and complexity (they would have to include at least one machine that would pass the classic Turing test). If the machine’s judgments on the 100 trials are similar to the human’s judgments, then it succeeds at this alternate version of the game.
    I believe that this version is more indicative of the real motivation behind the question “Can Machines Think?” To me it is a better indicator of whether machines can think and act similarly to humans, because it involves reasoning and action that resembles (whether “true” or not) a certain human understanding implied in the question “Can Machines Think?”
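    The scoring rule in this proposed variant could be sketched as a simple agreement rate between the two judges’ verdicts over the shared trials. This is only an illustration of the proposal above; the pass threshold is my own assumption, not part of the proposal:

    ```python
    def agreement_rate(machine_verdicts, human_verdicts):
        """Fraction of trials on which the machine judge and the human judge
        named the same player as the human."""
        assert len(machine_verdicts) == len(human_verdicts)
        matches = sum(m == h for m, h in zip(machine_verdicts, human_verdicts))
        return matches / len(machine_verdicts)

    def passes_alternate_game(machine_verdicts, human_verdicts, threshold=0.9):
        """The candidate succeeds if its judgments track the human judge's
        closely enough. The 0.9 threshold is an illustrative assumption."""
        return agreement_rate(machine_verdicts, human_verdicts) >= threshold

    # e.g. over 4 trials, each verdict naming player 'A' or 'B' as the human:
    # agreement_rate(['A', 'B', 'A', 'A'], ['A', 'B', 'B', 'A'])  -> 0.75
    ```

    Note that “similar to the human’s judgments” could also be made stricter, e.g. requiring agreement on exactly which trials fooled each judge, not just the overall rate.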

  8. (put in 2b by accident)
    RE: The argument from consciousness
    "Jefferson (1949): "Not until a machine can [do X] because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain"
    "There is no way to know whether either humans or machines do what they does because they feel like it -- or whether they feel anything at all, for that matter. But there is a lot to be known from identifying what can and cannot generate the capacity to do what humans can do. "

    I wonder if in the future it will be possible for machines to have such feelings and emotions. As of now we might not think this is possible, but I’m sure that back in the 1800s people could never have guessed what today’s machines would be capable of. The ideas of touch-screens, Siri, GPS, etc. must have seemed unrealistic to many, yet now we see self-driving cars and robots that can make food for you, which is amazing. Is it realistic to put a ceiling on what technology can accomplish?

  9. What I find fascinating is that during the 18th century, with the construction of extremely sophisticated human-like automata, a very similar discussion about the nature of the human mind arose. The discussion went well beyond the scope of mechanics. Philosophers asked whether an automaton could be said to feel, given that it could produce, for example, music that itself seemed to contain emotion (although it was pre-programmed to do so). Indeed, human players' ability to convey emotions was often judged based on their movements during musical performances. Therefore, it made sense to judge automata by the same criteria (and as we know, the movements of some automata were extremely complex, like those of a flute player who not only breathed into the flute to produce sound, but whose chest moved along with the breath: https://www.youtube.com/watch?v=bLb54FCMt9o 22'30). This is very similar to the Turing test. Obviously, those machines had no learning ability, but this was an attempt to define feelings and consciousness in "non-mentalistic" terms, which is the defining feature of the Turing test according to Block.
    Another example in the debate over intelligence is the "Turk", an automaton that could play chess. It turned out that a human was actually hiding inside and guiding its movements, but before the deception was discovered, many considered playing and winning a chess game to be an appropriate test of intelligence.

  10. Anastassiadis' comment about 18th-century people wondering whether automata that could produce emotional music and movements could actually feel what they were creating got me thinking about non-Turing-test methods that I have applied in my own life to try to figure out whether those around me are human.

    One particular situation that comes to my mind is when I used to play the video game "Halo." The online multiplayer in this game puts you into a virtual environment with a dozen other player avatars who you see and either shoot or cooperate with. Sometimes I would be matched up with other players who were so good, that I would be convinced that they were actually "bots" who were controlled by computers. Because these players seemed to never make any human errors, and because I could not otherwise verify that they were human, my conclusion was that they probably weren't cognizant. I recently looked into the prevalence of bots in the Halo game, and found that they didn't really exist, which means that they actually were people...

    I think this scenario is interesting because while it is not really comparable to a Turing test due to all the limitations, it is a situation where I mistakenly judged another person as not being a person because they were too high in intelligence/skill to fit my idea of a typical person.

  11. I think it is important to note that the idea of an imitation "game" is irrelevant. I agree with Harnad that the TT is not really about fooling a human, it's about showing that cognition is as cognition does.

    However, I still don't understand WHY Turing can say that cognition is as cognition does. How would Turing reply to Searle's Chinese Room Argument which states that cognition is more than what cognition does. This ultimately boils down to the hard problem, but I'm genuinely curious to know how Turing would respond to Searle's convincing argument.

  12. I think a main concern with the question “can machines think?” is how we define “think”. Can machines give correct outputs when given a certain input? If programmed properly, then yes. But can they generate their own thoughts and can they truly understand what they know? How can we ever be certain of that? I would argue that even we don’t know if we’re truly generating our own thoughts. Hypothetically speaking, if we were in a world of machines that could simulate the mind, how would you ever know if someone was a human or a machine if we were purely looking at their responses to questions?

  13. “Provided it could be carried out sufficiently quickly the digital computer could mimic the behaviour of any discrete-state machine. The imitation game could then be played with the machine in question (as B) and the mimicking digital computer (as A) and the interrogator would be unable to distinguish them. Of course the digital computer must have an adequate storage capacity as well as working sufficiently fast. Moreover, it must be programmed afresh for each new machine which it is desired to mimic.”

    I really liked the way Turing built up his imitation game argument by first positing an imitation game where a physical discrete-state machine is mimicked by a digital computer, rather than jumping straight to human/machine imitation. It is an important proof of the equivalence between computers regardless of material. But I was struck by the fact that he left unclear how the mimicking computer is able to mimic the discrete-state machine. Was Turing imagining something like machine learning here? Or was he just imagining that a programmer sufficiently familiar with the inner workings of the discrete-state machine could program a digital computer with effectively the same program?
    Perhaps he was only concerned with the theoretical equivalence between the computations of the two machines, but I do wonder about the feasibility of developing an imitating computer (A) which observes only the inputs and outputs of another computer (B) and, after a sufficient number of exposures to these inputs/outputs, can program a set of functions that perfectly imitates computer B. (It may be necessary in such an imitation game, however, that computer A be given the ability to determine the inputs given to B, in order to test hypotheses.) Perhaps this has been done before, but it strikes me as a logical first step towards passing the Turing Test: first develop a computer that can, via observation, learn to mimic another computer; then apply the same technique to a human.
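    The “programmer sufficiently familiar with the inner workings” reading can be made concrete: given any discrete-state machine’s transition and output tables, one general-purpose program reproduces its input/output behaviour exactly. A minimal sketch (the three-state example machine below is invented for illustration, not Turing’s own example):

    ```python
    # A general simulator for any discrete-state (Moore-style) machine:
    # at each step, consume one input symbol, move to the next state,
    # and emit that state's output.
    def run_dsm(transitions, outputs, start, inputs):
        state, emitted = start, []
        for symbol in inputs:
            state = transitions[(state, symbol)]
            emitted.append(outputs[state])
        return emitted

    # Illustrative machine: three states cycling on input 'i1',
    # holding in place on input 'i0'; a lamp lights only in state q3.
    T = {("q1", "i1"): "q2", ("q2", "i1"): "q3", ("q3", "i1"): "q1",
         ("q1", "i0"): "q1", ("q2", "i0"): "q2", ("q3", "i0"): "q3"}
    O = {"q1": "lamp off", "q2": "lamp off", "q3": "lamp on"}

    print(run_dsm(T, O, "q1", ["i1", "i1", "i0", "i1"]))
    # -> ['lamp off', 'lamp on', 'lamp on', 'lamp off']
    ```

    The machine-learning reading, inferring the hidden tables purely from observed inputs and outputs, is a real research problem (automaton inference); notably, Angluin’s L* algorithm learns such machines when the learner is allowed to choose the inputs it feeds to B, exactly the hypothesis-testing ability suggested above.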

  14. This comment has been removed by the author.
