Saturday 2 January 2016

(10c. Comment Overflow) (50+)

9 comments:

  1. I find it fascinating that there is this popular idea/concern, expressed in movies and culture, surrounding AI and the question of consciousness and feeling. It seems that in some way, the criterion for a machine to be considered equal to a human is its ability to feel specifically, not just its ability to do. So if we are never actually able to tell whether a T3 that does everything we do can actually feel, what kind of moral/ethical dilemma does this bring us to? Do we put all our resources into trying to get closer to understanding the difference, or do people just have to get over their xenophobic discomfort and apply the same ethical and moral standards to a robot as we do to humans?

    Replies
    1. Hi Julia, interesting reflections. The problem also is that even if we were able to tell whether a T3 had feeling (which we can’t), and it happened not to have feeling, how would we even go about understanding the difference between the zombie without feeling and humans who have the same performance capacities but also feel? The hard problem becomes even more complicated, because then behaviour alone is not what gives rise to feeling, and something more is going on. Where would we even begin to research what could be accounting for feeling in that case?

  2. i) Even if it were possible, what would be the advantage of reverse-engineering an artificial agent with both our doing and feeling capacities versus just our doing capacity alone? For all we know, a TT-passing robot that can do everything we can do for a lifetime may or may not be conscious, so what is the point of spending time and resources trying to find a causal mechanism for something that is almost certainly "causally superfluous," when the other-minds problem prevents us from knowing whether anyone/anything else actually feels anyway? I was struck by a passage from the section on the other-minds problem:

    "First, it's not that there is any doubt at all about the reality of feeling in people and animals. Although, because of the 'other-minds' problem, it is impossible to know for sure that anyone else but myself feels, that uncertainty shrinks to almost zero when it comes to real people who look exactly as I do."

    Does this imply that scaling up the TT past sensorimotor function to T4 or possibly even all the way up to the T5 level would shrink uncertainty to the point where we could be confident that a T5-passing agent is conscious (i.e. feels)? (although this would completely overreach the purpose of cognitive science)

  3. RE: "Is our species not 'programmed' for our capacity to feel by our DNA, as surely as we are programmed for our capacity to breathe or walk?" This strikes me as a bit circular: if our genes code for feeling, if our genes are not comparable to a computer program, and if what distinguishes us from robots is our ability to feel, then our biological matter and constitution is what separates us from robots. But this is refuted in the example of the cuddly AI movie. My question is why it feels as if there is some fifth force when we make a decision that feels like free will (like spontaneously clenching a fist). Neuroimaging studies have shown that the brain activity initiating such an action occurs even before we have conscious awareness of having decided to do it. This boggles my mind and makes me wonder whether we lack the vocabulary to discuss why we feel, whether because of categorical perception, inaccurate measurement tools, or the limits of our understanding of phenomenology.

  4. Regarding why our capacities are not "functed," could the explanation lie in physics (how do we get felt states from unfelt ones) rather than in evolutionary theory (what purpose there is in feeling)? Maybe we feel because there is no other way than to feel; in other words, our biology (the same biology that enables us to live and act) physically causes our feelings in a deterministic manner. Yet that doesn't imply that feelings add any advantage; maybe they are here just because they could not not be here. Also, is it actually possible to study feeling, given that we do not know what not-feeling is? I can study different types of feelings (categories), but can I really study feeling itself, given that I have no non-member of the feeling category?

  5. “An entity that feels is conscious; an entity that does not feel is not. The rest is merely about what the entity feels. What it is feeling is what it is conscious of. And what it is not feeling, it is not conscious of.”

    As I was reading, I made a note on this part of the article because it stood out to me. This argument is often misinterpreted and used to justify eating animals. People tend to argue that certain animals, such as fish, probably don’t feel pain or aren’t aware of what is happening to them, which is a poor argument because they don’t actually know anything about what fish can or cannot feel. If fish are conscious beings, and consciousness and feeling seem to go hand-in-hand, shouldn’t people assume that fish can, in fact, feel? And if fish can feel hunger, which is something people can also feel, who are we to just decide that they can’t also feel pain?

    “Racism (and, for that matter, speciesism, and terrestrialism) is simply our readiness to hurt or ignore the feelings of feeling creatures because we think that, owing to some difference between them and us, their feelings do not matter.”

    This is an interesting point to bring up. I, personally, never made the connection to racism before, but after reading this broader framing, I completely agree with the viewpoint presented. Perhaps more people would be encouraged to adopt vegetarianism/veganism if they were presented with this less human-centric idea of what racism is?

  6. I’m very much convinced at the end of this that ‘doing’ and performance capacity should be the central goal of cognitive science. However, how can we be sure that performance capacity is a correlate of consciousness and feeling? The paper touches on this a few times and says that the system that passes the Turing Test is the most likely to be conscious but even there, we won’t know whether, how, or why [it feels].
    And so my question is: why wait on the Turing Test for consciousness? It seems like we might be waiting for something that might never materialize! If people have been attempting to explain consciousness by other means – namely religion and philosophy – for years now, why rely so entirely on a science that doesn’t seem to want, or be able, to answer the question? The only answer I can come to is that it’s not even a question worth attempting to answer, and that we ought to focus on the ones we can. I think I’m finally starting to understand some of the papers I’ve read in previous classes on the meta-hard problem (whether or not the hard problem is a problem worth chasing at all). I might have to dedicate a few weeks of summer to sitting down with some previous readings and wrestling with them…

  7. Up until this point I had totally settled into the position that the feeling/doing relationship (the causal role of feeling) is unknowable. But after reading the article, particularly the section on correlation and causation, I’ve become more confused than ever. My thought process was as follows:

    "We feel pain when we have been hurt and we need to do something about it: for example, removing the injured limb from the source of the injury, keeping our weight off the injured limb, learning to avoid the circumstances that caused the injury. These are all just adaptive nociceptive functions. Everything just described can be accomplished, functionally, by merely detecting and responding to the injury-causing conditions, learning to avoid them, etc.”

    Thought 1 — If not for the feeling of pain that is correlated with nociceptive function, how would we humans be able to “merely detect” what’s happening in order to respond? The pain feeling is our detector. Nociceptive function would be useless without the correlated pain feeling.

    "All those functions can be accomplished without feeling a thing; indeed, robots can already do such things today, to a limited degree. So when we try to go on to explain the causal role of the fact that nociceptive performance capacity's underlying function is a felt function"

    Thought 2 — Robots demonstrate that these functions can be accomplished without feeling, but isn’t this accomplishment specific to a robot of such design? We know the function *can* happen without feeling for an entity, but so what? The function isn’t successful for humans without the accompanying feeling of pain.
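    The quoted claim — that detection, withdrawal, and avoidance can all be accomplished without feeling a thing — can be illustrated with a toy sketch (my own example, not from the article; the function and signal names are hypothetical):

    ```python
    # A "nociceptive" controller in the article's sense: pure unfelt
    # signal processing. It detects an injury-causing condition,
    # selects a response, and "learns" to avoid the condition, with
    # nothing felt anywhere in the loop.

    def nociceptive_controller(damage_signal: float,
                               threshold: float = 0.5) -> str:
        """Map a damage signal to an adaptive response.

        Detection here is just a threshold comparison; the response
        is just a returned label. No pain is required for any step.
        """
        if damage_signal > threshold:
            return "withdraw"   # remove the 'limb' from the injury source
        return "continue"

    # The adaptive doing happens either way:
    assert nociceptive_controller(0.9) == "withdraw"
    assert nociceptive_controller(0.1) == "continue"
    ```

    Of course, as Thought 2 above asks, all this shows is that such a function *can* run unfelt in a system designed that way; it says nothing about why, in us, the same function happens to be felt.
    
    
    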

    "we cannot use nociception's obvious functional benefits to explain […] the fact that nociceptive function also happens to be felt”

    Thought 3 — Does this apply to humans? For humans, the function of nociceptors *is* the feeling of pain; if not, how else could we (given our specific internal design) detect the cause of the pain and respond appropriately?

    Replies
    1. From these three thoughts, I wonder: just because behaviours that follow from injury, like removing the body part from the source of injury, are adaptive bodily functions and CAN be carried out by certain entities without accompanying feeling, that doesn’t mean the same holds for humans, given the differences between human and robot design. So nociception's functional benefits (in humans) can explain why the function is felt — because otherwise, the function would not be useful. Maybe feeling is just a necessary function for certain life forms, given how we are made, and simply arises from the combination of all our internal happenings. Humans need to be able to ‘feel' in order to know how to respond; that is just the way our information processing works. If I’m in a room talking WAY too much, and everyone is giving me external signs telling me to stop, then unless I feel that I’m talking too much, even if I’m looking right at their faces, I won’t stop.

      Robots don’t NEED to feel, because their programming allows them to detect injury, detect emotion, etc. without it. They can process information and respond accordingly without having to feel anything, because their program allows them to do so. Our program, though, does not: we don’t ‘automatically’ detect injury unless we feel pain. So can't we just say that ‘feeling’ exists as a necessary adaptive function for organisms with a certain internal makeup to survive? Then isn’t the question “Well, why can’t humans detect injury, etc. (process certain information) without it having to be ‘felt' in order to respond, if other systems can?” similar to asking “Well, why can’t humans camouflage to increase our fitness if other systems can?” The answer to both is simply: the way we are designed and what that design allows us to do. Feeling, then, developed as a necessary byproduct of our (and certain other organisms’) specific biology and demands. And you can attribute the fact that other living things, like plants, don’t have feelings to the fact that they don’t share our design or have the same demands we do. But, after thinking through all of this and being very confused, I’m not quite sure whether any of it makes sense or matters for the hard problem…
