Saturday 2 January 2016

10c. Harnad, S. & Scherzer, P. (2008) Spielberg's AI: Another Cuddly No-Brainer.

Harnad, S. & Scherzer, P. (2008) Spielberg's AI: Another Cuddly No-Brainer. Artificial Intelligence in Medicine 44(2): 83-89.

Consciousness is feeling, and the problem of consciousness is the problem of explaining how and why some of the functions underlying some of our performance capacities are felt rather than just “functed.” But unless we are prepared to assign to feeling a telekinetic power (which all evidence contradicts), feeling cannot be assigned any causal power at all. We cannot explain how or why we feel. Hence the empirical target of cognitive science can only be to scale up to the robotic Turing Test, which is to explain all of our performance capacity, but without explaining consciousness or incorporating it in any way in our functional explanation.




92 comments:

  1. The authors point out that we are far from solving the easy problem, let alone explaining cognition as a whole. Perhaps by solving the easy problem first, one can hope to get closer to solving the hard problem.

    RE: “Are AI systems useful for understanding consciousness?”
    Model for consciousness: just I/O

    Wouldn’t scaling up a Turing robot to explain performance capacity get us closer to explaining feeling? If feeling is just a combination of the four causal forces that exist in our world, then perhaps a model for “doing” could result in an after-effect/emergent property of “feeling”.
    Does the explanation for why we feel have to be mentalistic in nature?
    The act of introspection, although not an explanation of how/why we feel, is a marker of feeling. So if a scaled up TT robot could not only do what we can do, but could also demonstrate verbally that it introspects, then it is possible that that could serve as a closer approximation of cognitive capacity as a whole.

    1. Hi Manda! I actually had the same optimistic thought that maybe feeling could emerge if we succeed in creating an easy-problem-solving, human-Turing-test-passing system. Harnad points out (in his answer to the question about conscious systems vs. unconscious systems) that maybe this emergence of feeling is possible, but even if it were, we still wouldn’t know what we did to get our TT-passing system to feel (i.e. we wouldn’t be closer to a how/why explanation) and we wouldn’t know how to verify if it’s feeling (because of the Other Minds problem).

      However, I think I disagree with the introspection part of what you said. Please correct me if I’m wrong, but I think your argument here is that if we had a TT-passing robot (which is capable of introspection because, by definition of “TT-passing”, it can do everything that we can do), we could just ask the robot “how’re you feeling today?” and see what they say. The issue for me here is the Other Minds problem again: we can never really know for sure if the robot is feeling spectacular, if that’s what they reply when we ask, or if they’re feeling anything at all.

    2. Hey Olivia & Manda,

      I actually commented on Olivia's post for 10b, and in that post I proposed that a T3 (maybe T4) passing AI that has sensory input could be asked about its feeling, and that would be a stepping stone in the right direction for understanding the why/how of feeling. While Dr. Harnad seems to be skeptical of this idea in this paper, I don't fully understand why.

      If you were to ask me how I was feeling and I told you "good, but a little cold and stressed", you have no way of knowing whether I am feeling that or not. You take my word that I’m feeling because I am similar to you, and you know that you feel (the cogito), but why do we put this special distinction between artificial and organic systems? If Dr. Harnad is willing to claim in this paper that even invertebrates feel, then why wouldn't a similar, human-created system be able to feel in the same way? I understand that simple multi-cellular organisms are still unbelievably complex, and it's possible that our artificial versions would not be comparable. But if they could be as complex at some point in the future, the only thing that would make them different is the idea of some special form of matter that constitutes the soul (Dualism), which has been discredited for a long time now.

    3. Karl, I believe that Harnad's argument is that feeling has no causal efficacy. It's essentially epiphenomenal, meaning that it arises from a system but does not then have a causal effect on the system. In this paper he says "Let us agree that to explain something is to provide a causal mechanism for it". Therefore, if explaining something is providing a causal mechanism and feeling has no causal mechanism, then it's unexplainable.

      As I mentioned in my reply to you on 10b, we can likely get to the point where we're quite certain that a specific kind of dynamical organisation particular to brains also gives rise to experience. But we can never be *certain*. That is the other-minds problem.

    4. To address Karl's reply briefly, I think the problem is not whether T3/T4 robots feel when they say they feel. The problem is how or why they feel. Because T3/T4 robots are reverse-engineered, we can explain how they function, or how they do what they do. It might be clearer if we distinguish doing something from feeling something. We do pump blood, but we don't feel the pumping of blood (other than the fact that we feel like we are alive). We can explain the doing easily in the case of the heart, and possibly in the case of T3/T4, but not the feeling.

    5. Karl, I think you make an error in your argument. The problem is not knowing whether you are feeling cold and stressed, but how you are feeling cold and stressed. We must take it for granted that your reports are accurate, because we cannot (due to the other-minds problem) ever know if you are feeling as you report. The hard problem does not address whether a feeling is occurring--we must assume that every conscious being is feeling--but how/why that feeling occurs.
      Auguste, I agree with your assessment of Harnad's argument. The problem is that observing behaviour will never give us a full answer to the hard problem, since the relationship is not causal. Feeling is not a force in and of itself, and therefore cannot cause events to occur (that would be telekinesis). Without a force, feeling can have no causal effect on the individual or the environment, and we cannot attribute behaviours to it. Furthermore, we cannot assume that if we reverse-engineer all the behaviours of a conscious being we will solve how/why they feel. This is why the TT can only address the easy problem, not the hard problem.

    6. Hi Olivia,

      Thank you for your comment.
      I am no longer as optimistic as I once was. Fazed by the razzle-dazzle of reading 10a, I believed that the hard problem was not as hard as it seemed to be.

      Regarding the other minds problem, I agree that it is inescapable. However, my point was not to say that one can bypass it, but rather, to stress the fact that there may be ways to reduce the level of underdetermination via closer approximation.

      The other-minds problem is not the main issue, however, since even if heterophenomenology were to translate feelings into third-person, the question of "how/why" there is any feeling at all would remain unanswered.

  2. Re: “But unless we are prepared to assign to feeling a telekinetic power (which all evidence contradicts), feeling cannot be assigned any causal power at all.”

    Regarding the causal role of feeling, I see two different positions that need to be squared.
    (1) “For feeling is not a fifth causal force. It must be piggy-backing on the other four, somehow. It is just that in the voluntary case it feels as if the cause is me.”
    (2) From John Searle’s TED talk, Our Shared Condition – Consciousness: “[Some say] maybe consciousness exists, but it can't make any difference to the world. How could spirituality move anything? Now, whenever somebody tells me that, I think, you want to see spirituality move something? Watch. I decide consciously to raise my arm, and the damn thing goes up. Furthermore, notice this: We do not say, ‘Well, it's a bit like the weather in Geneva. Some days it goes up and some days it doesn't go up.’ No. It goes up whenever I damn well want it to.”

    The question comes down to, from what I understand, how can we get felt states from unfelt states? This question is fundamentally unsolvable! But, I think, the kicker is that this question already assumes a false dichotomy.

    The mind/body problem assumes a false dichotomy between physical and mental, corresponding to unfelt and felt. The solution is anomalous monism (Donald Davidson) in my opinion. “In "Mental Events" (1970) Davidson advanced a form of token identity theory about the mind: token mental events are identical to token physical events. One previous difficulty with such a view was that it did not seem feasible to provide laws relating mental states, like believing that the sky is blue or wanting a hamburger, to physical states, such as patterns of neural activity in the brain. Davidson argued that such a reduction would not be necessary to a token identity thesis: it is possible that each individual mental event just is the corresponding physical event, without there being laws relating types (as opposed to tokens) of mental events to types of physical events. Davidson argued that the fact that no such a reduction could be had does not entail that the mind is anything more than the brain. Hence, Davidson called his position anomalous monism: monism, because it claims that only one thing is at issue in questions of mental and physical events; anomalous (from a-, "not," and omalos, "regular") because mental and physical event types could not be connected by strict laws (laws without exceptions).”

    Therefore:
    “The form of description — whether mental or physical — is thus irrelevant to the fact that a particular causal relation obtains. It follows that the same pair of events may be related causally, and yet, under certain descriptions (though not under all), there be no strict law under which those events fall. In particular, it is possible that a mental event — an event given under some mental description — will be causally related to some physical event — an event given under a physical description — and yet there will be no strict law covering those events under just those descriptions. My wanting to read Tolstoy, for instance, leads me to take War and Peace from the shelf, and so my wanting causes a change in the physical arrangement of a certain region of space-time, but there is no strict law that relates my wanting to the physical change. Similarly, while any mental event will be identical with some physical event — it will indeed be one and the same event under two descriptions — it is possible that there will be no strict law relating the event as described in mentalistic terms with the event as physically described. In fact, Davidson is explicit in claiming that there can be no strict laws that relate the mental and the physical in this way — there is no strict law that relates, for instance, wanting to read with a particular kind of brain activity.”

    1. Applying Davidson to the question posed in the paper: “Can we not just satisfy ourselves, then, with feeling as a "correlate" of function?”
      Not really: “This does not mean, of course, that there are no correlations whatsoever to be discerned between the mental and the physical, but it does mean that the correlations that can be discerned cannot be rendered in the precise, explicit and exceptionless form — in the form, that is, of strict laws — that would be required in order to achieve any reduction of mental to physical descriptions. The lack of strict laws covering events under mental descriptions is thus an insuperable barrier to any attempt to bring the mental within the framework of unified physical science. However, while the mental is not reducible to the physical, every mental event can be paired with some physical event — that is, every mental description of an event can be paired with a physical description of the very same event. This leads Davidson to speak of the mental as ‘supervening’ on the physical in a way that implies a certain dependence of mental predicates on physical predicates: predicate p supervenes on a set of predicates S ‘if and only if p does not distinguish any entities that cannot be distinguished by S’ (see ‘Thinking Causes’ [1993]).”

    2. I have come across Davidson's argument before, and you articulate it quite clearly, but there is an equivocation in his argument that relies on his different levels of description of the mental and the physical. Simply put, there are only the physical events, and the physical event of neurons firing can be "interpreted" either biochemically, within a physical framework, or mentally, as feeling. This is exactly the kind of equivocation Dennett makes, taking it one step further as a reductionist.

      Also a note on supervenience: such arguments usually run into causal overdetermination. If the neural event causes biophysical events (behaviour), then, given the often-assumed causal closure of the physical world, identifying the neural event already provides a sufficient cause; the mental description seems frivolous, and by supervening on the physical event it seems to be derivative of that physical event. So if mental events are dependent on physical neural events, and physical neural events already sufficiently cause biophysical behaviour, then what are mental events doing?

    3. I was not previously familiar with Davidson's argument on monism, however the mind/body problem is certainly a recurring theme throughout many of my psychology courses. Oftentimes when I've encountered the topic it's presented as “here is the mind/body problem and competing explanations”, however I have had little exposure to the arguments underlying each theoretical position. That being said I think it is of fundamental importance to anyone who wishes to study the brain and solve the easy or the hard problem.

      In this context, I would frame the question as “How much of human cognition (i.e. how much of our "mind") do we believe can be explained by understanding the brain itself?”. Personally I can see why one would argue that mental events are simply the product of neurophysiological changes in the brain. One might reasonably argue that if we perfectly understood all of the changes that go on in the brain (no easy feat considering its staggering complexity) then we could understand all aspects of human cognition (and if what we call “feeling” is part of human cognition then we should have explained that too). However, in the way that we have conceptualized the easy/hard problem in class, this would essentially be equivalent to solving the easy problem, which Harnad argues can only be done by reaching T3. I would argue that since we are concerned with understanding the internal processing and the specifics of the “how” we do what we do, we really want to achieve “strong equivalence” or T4. However, if I am understanding the logic of Davidson’s argument, what we call “feeling” should be encompassed in any system that achieves T3/T4, since monism posits that “mental events” are no different from their physical manifestations and are thus frivolous. The mental events “supervene” on physical events.

      However, I do not think monism (even if it is correct) can necessarily explain the hard problem. I’ll speak for myself and say that I have very strong subjective experiences of thought and feeling. And while (thanks to the other-minds problem) I cannot be sure that other humans have feelings, I am pretty sure that they do. If we assume that people have feelings (which I think most people who are not philosophers accept intuitively), then we are left with the question of why we have these feelings in the first place. Harnad frequently argues that the experience of feeling is not inevitable and it is not clear why it happens. Do we need to “feel” in order to produce the correct behavioural outputs to a given situation? What purpose does our subjective interpretation, or “mental events”, serve?

  3. “Will systems that can perform more and better be more likely to feel? The answer to that might be a guarded yes, if we imagine systems that scale up from invertebrate, to vertebrate, to mammalian, to primate to human performance capacity, Turing-scale.”

    While I tend to agree with this idea, it seems to suggest it is only a matter of increasing the complexity until something has the performance capacity of humans, and thus is more likely to feel. This seems to run counter to a point made in response to Searle’s CRA regarding the transition from the computational to the mental:

    “It should be clear that this is not a counterargument but merely an ad hoc speculation (as is the view that it is all just a matter of ratcheting up to the right degree of "complexity”). “

    Are these arguments, in fact, in contention with each other, or am I misunderstanding one or the other (or both)?

    1. For clarification, the second quote is from Harnad, S. (2001) What's Wrong and Right About Searle's Chinese Room Argument?

    2. Hey Colin,
      I think they are contending claims – both are about ratcheting up complexity until something happens and consciousness appears, but one is for it and the other against. I do want to say that the transition point Searle talks about, between the computational and the mental – I hardly think we can objectively categorize that (or say when exactly it occurs, or at which point in evolution an animal gains consciousness/thought beyond computation). My reply does line up with what Harnad presented in this paper – perhaps we ought to focus on working towards the robotic TT, instead of just focusing on models.

    3. They may seem contending, but the first quote is only talking about the potential for feeling, and this seems logical: you are more likely to expect feeling from a complex system than from a simple one. However, this is just a correlation and not a causation (we are still not able to find any causation!). Increasing performance capacity is not enough to create feeling (hence the argument in the second quote), yet we only find feeling in the most complex performers, with the richness and complexity of feelings increasing with performance capacity.

  4. I feel like we take something for granted when talking about feelings, which is that we don’t know what it is like to not feel. We know the opposite of most things, like black and white, good and bad, walking and sitting, seeing and keeping your eyes closed. But we don’t really know what it feels like to be asleep, or not to be conscious. The oppositions help us understand things better. So can we say that it is a separate entity without really knowing what functioning would be like without feeling?

    I also feel like there could be a fifth force that explains feelings, as it is probably the most abstract thing we have encountered. In addition, since we attribute feelings only to ourselves and animals, which are the most intelligent beings on earth, it could be a product of evolution that lets us make more complicated decisions for our survival.

    Lastly, with the readiness potential, why can it still not be a correlate? We know that feelings can’t exist without neurons, even if that is not all there is to it. However since they are somehow connected, feelings can accompany the neural firings afterwards, with a delay.

    1. I very much agree with your first paragraph. How do we know exactly what we are looking for (and subsequently how will we know if we find it) when we have never experienced functioning without feeling in the first place? Conversely, what about people who have severe alexithymia - the lack of ability to experience emotion? What is missing that allows us to feel emotion and not them? Because technically they cannot experience emotional feelings, are they "less" conscious? Is feeling really necessary for consciousness?

    2. Hi Deniz,
      The first point you make is definitely a good one to think about and I agree with where you’re coming from – not knowing what it feels like to not feel makes it harder to figure out what it means to feel. However, in response to Eugenia, I don’t know if emotion is really necessary for consciousness: perhaps it's a sufficient but not a necessary condition. And to give the opposite example of yours - what about people who are very empathetic and more emotional than others? Are they 'extra' conscious? And as far as forces go – we have four so far, but why couldn't there be more/another? Just a little while ago it was believed that all of physics had been solved, and then there was a major breakthrough. Maybe we’re just waiting for the next big discovery.

    3. Hi Deniz, as Eugenia and Amar said, I really appreciate your idea that a better understanding of what it is to be “unconscious”, “non-feeling” and “mindless” would enhance our understanding of “feeling” and perhaps even provide cognitive scientists and robotics engineers with some direction to reverse-engineer consciousness.
      Eugenia and Amar – also interesting points raised, but I think you both conflate emotion and feeling. Simply stated, feeling is consciousness, and it involves different facets. Article 10b outlines types of feelings such as “sensations, emotion/sensations (feeling pain), psychomotor states (willing an action), desire-states (wanting something), and complex feelings (believing, doubting, understanding)”. Lack of emotion does not affect consciousness, as emotion is a part of consciousness – consciousness (feeling) is what gives rise to emotion. Therefore, it begs the question when one asks “is feeling really necessary for consciousness?”, because feeling = consciousness – these are all synonyms for the same phenomenon.

    4. Hi all,

      I think what you guys are discussing relates well to what we discussed in class this week. Cognition is categorization, and categorization requires a right and wrong answer. Without encountering something that does not belong in a category (i.e. negative information), you can't determine the salient features of the category which indicate whether something belongs or not. This is the laylek story. The problem with feeling is that we do not know what it is like to not feel something, to not be conscious. Even given the many examples of things that we are unconscious of, there is still no negative evidence of non-feeling. This paper does a good job of opening up that issue and talking through what you guys are discussing here, and makes a good argument for why it doesn't make sense to talk about unfelt feelings and why that's a problem for consciousness. We can't deal with it in the same way that we deal with easy-problem cognition, because those problems are categorical and consciousness is not.

    5. I think Jaime makes a good point that this thread is wrongly assuming that feeling and consciousness are different. However, I think it’s still an interesting example to look at emotions in terms of their role in consciousness. Emotions are one type of feeling. I agree that this article seems to imply that more emotional people are more conscious and less emotional people are less conscious. In a way, I think this is true, because people who notice their own emotions and the emotions of people around them have more moments of feeling (consciousness) and are more aware of their own state and the states of those around them. As this article says, awareness is nothing else but another name for consciousness, so if these people are more aware then it would follow that they are more conscious.

    6. Hi all,
      I agree – and this ended up being pivotal in the course - that the lack of negative evidence about feeling is problematic when we are trying to think about a variation of the hard problem, i.e. how can we find the invariants of feeling (as in the layleks example). This is only part of what makes the hard problem so hard.

      To the discussion about alexithymia – emotion is not the only thing that is encompassed by feeling. Amar’s point about it being a sufficient but not necessary condition is interesting but also a little redundant, because if the thing we have reverse-engineered ‘feels’ emotions, then it does in fact have ‘feeling’. I think it sounds a little bit like saying ‘feeling is a necessary condition for feeling’ (which is also what Jaime said, and I realized that after I typed this).

      I think all of this goes back to the Cogito – we can’t ever be certain of the feelings or the absence of feeling in another person. The only thing we can be certain of is that we are feeling something right now. Even if we thought we had a better understanding of what it means to be “unconscious” or “unfelt”, would we really? As in the Cogito, if we can only be certain of what we feel right now, then we can’t be certain of what another person feels or non-feels.

    7. Also Rebecca, I am curious as to what you would classify as more or less "conscious".

      If awareness is another word for consciousness, and consciousness is another word for feeling, and we can only be certain of the feeling that we are feeling right now (i.e. not what anyone else is feeling, nor what we have felt or will feel --- Cogito/Sentio stuff), I think your logic runs into a bit of a problem...

      I'm hesitant to think anyone could be a lesser feeler or doing less feeling....since everything we do that we feel has a feeling, and we have no way of knowing about the things that weren't felt.

      People who are not aware of what other people are feeling are still going about their day feeling - just feeling different stuff. And people who are hyper aware are feeling that someone else is having feelings....but everyone is still feeling all the time....Are you trying to talk about an amount of feeling in the total lifetime as a cognizer? Or the amount of feeling at a single time point? Either of those would be susceptible to this problem

      Sorry that was so long, but I hope it makes sense!

    8. RE: what Eugenia said: “Conversely, what about people who have severe alexithymia - the lack of ability to experience emotion. What is missing that allows us to feel emotion and not them? Because technically they cannot experience emotional feelings, are they "less" conscious? Is feeling really necessary for consciousness?”

      Assuming that these people are awake and mobile, don’t they still know what it feels like, for example, to touch their hair? To blink their eyes? To hear a song? Just because they can’t experience an emotional response to these actions and others doesn’t necessarily entail that they don’t feel. Thus, I don’t believe that they are “less” conscious. They just experience certain facets of consciousness and not others, and certain types of 'feelings'. But I personally also don’t really see consciousness as a scale: either you are conscious, or you’re not.

      Reading this article kept bringing me back to the following questions: do we feel because we are conscious? Or are we conscious because we feel? As of right now, it doesn’t seem that we can have one without the other, so what exactly is the correlation between the two? Or are they really one and the same?

  5. If consciousness is feeling, how does it account for an unconscious mental state in a conscious entity, as in the system reply to Searle's Chinese Room? As such, when a candidate is unaware of his or her feelings, or has feelings while just not feeling the thing s/he is conscious of, does it mean that consciousness is not always equal to feeling?

    1. Hello Zhao, I think both the paper and what we have discussed in class address this. To be conscious is to be aware of something, which means to feel something. Consciousness is feeling. If you are not aware of it, you can't feel it, such that “an entity that feels is conscious.” There is no unfelt feeling and no unconscious knowing. When you talk about a candidate being ‘unaware/unconscious of his or her feelings’, whatever they are unaware of isn’t in their mental state and is presently unfelt – so, ultimately, it is not a feeling.

      Perhaps you are talking about someone feeling something as a result of a trigger (e.g. recalling the third-grade teacher at the hint of the first letter P, or the feeling of love upon seeing someone's face), but keep in mind that any words or hints that influence what you feel are no longer part of your memory state. What is going on in your mental state is what you are feeling presently…

      As for priming, forced choice, and blindsight (or any other unconscious know-how): these are fundamentally performance capacities of the brain – and have little to do with feeling.

    2. Yes, I asked our professor about this in class as well; now I understand that it’s contradictory to say “we’re feeling something while we’re not aware of it”, because I can’t feel something that I don’t feel, and vice versa.

    3. Hi Peihong, another example that might help clarify this for you (as it did for me) was to reframe all of this by just using the word “feeling” instead of any of the other synonyms that we use to loop around our main points. You cannot be unaware of your feelings, because to feel your feelings is to be aware of them. You might be conflating awareness of feelings with ability to articulate feelings, which are two very different things. Even if you are not sure of how to describe what you are feeling, the fact that you are feeling something is undeniable. I hope this helps.

  6. Our brains are biologically programmed by natural selection; however, we sometimes engage in behaviours that run counter to evolutionary forces, like altruistic behaviour and unconditional love. In my other psychology class, it was mentioned that the human species is slightly polygamous - I guess so that males can pass down more genes. However, our feelings of love teach us to be committed in a relationship. As such, is it possible that our feelings actually counteract that natural programming?

    1. I feel like our feelings still run parallel to our innate programming, since sexual desire and love aren’t necessarily the same thing. You can love someone completely and still feel sexual desire for another at some point. For example, take the Coolidge effect: the phenomenon whereby a male shows renewed sexual interest in a new female, even after he has just finished mating with a different female. Both men and women want life-long partners to help raise their young, and at the same time it’s a biological function for them to be sexually interested in someone else to pass on their genes, as you said. Same with altruism - ethnocentrism at its prime.

  7. I found this paper highly intriguing, easy to follow and read. Many of the points brought up solidified the class material and the main gist of this course.

    One point I found interesting: what is the point of even seeking how/why explanations of feeling?

    "is whether feeling is a property that we can and should try to build into our robots. Let us quickly give our answer: We can't, and hence we shouldn't even bother to try."

    I finally understood why the hard problem is so hard and insoluble. Even if we could successfully create a T3-passing robot, we still could not explain how or why it feels, because reaching T3 only requires explaining the how/why of what we can do, not of what we feel.

    Since the hard problem is literally insoluble, what is wrong with not having a how/why explanation of feeling, when we know feeling exists? Yes, perhaps we want to find out the how/why in order to potentially build robots with the same capacity to feel. However, I do not see the point of creating a T3-passing robot - it seems rather meaningless to me, as we keep going in circles trying to solve the hard problem when in reality we can't.

    1. Well I guess it’s still meaningful to create a T3 passing robot, because all these processes are part of reverse-engineering and we’re just trying to understand the hard problem (in human beings) better.

    2. Hi Fiona,

      I'm not sure if it was this or another paper, but I believe that Harnad argues that the hard problem is something we can never answer, but that this doesn't mean that what we're doing here (in cognitive science, with TTs) is not useful. To build a T3 robot would still give us a lot of information about the easy problem (which still hasn't been solved). I think this class has an underlying message that cognitive science should focus on the easy problem and ignore the hard problem, since we will never be able to answer it.

  8. I found the notion that ‘feeling is not a fifth causal force’ very interesting. It reminds me of the ‘user-illusion’, which says that all the things we ‘feel’ are just an emergent property of all the biological and environmental factors within us. Therefore, we have a response to a stimulus, and it is an illusion that our feelings are what produce an outcome. When I first learned about this theory, I thought it was silly. However, upon further reflection it seems that this theory has more to it and should be considered seriously. This is what is supported by Libet’s experiments. If feelings just provide the user with a narrative for their lives, what is the use of them? Why should we even bother with the hard problem? While these questions are relevant, I do think we need to model and understand feelings. This is because they undeniably make up a large part of human consciousness and it does not seem possible to pass T2 without them. In the same way that symbol grounding is necessary, I believe that this ability to feel will be as well; otherwise it seems impossible to be indistinguishable from a human.

    1. You bring up some really interesting points.
      While I agree that it may be difficult to pass T2/T3 without feeling (of course, this isn't something we could ever know), and it may be necessary, just like symbol grounding, I don't think concerning ourselves with modelling feeling is necessary or even useful. Our primary concern should be generating the performance capacity necessary for a machine to be able to pass T2/T3. Once we have something artificial that *can* pass T2/T3, maybe then we could somehow begin to investigate the how and why of feeling. But I don't see how understanding the how and why of feeling would help us create something with the performance capacity to pass T2/T3. (Even if it were useful, it would be a much more difficult task than generating performance capacity, if not impossible.)

      In other words, I think the only way to go is:
      Focus on generating performance capacity alone -> create something that passes T2/T3 -> maybe somehow attempt to understand feeling ONCE this T2/T3 is created

      and not:

      Understand feeling -> generate feeling -> generate performance capacity -> create T2/T3

    2. I think you bring up a really good point, Kara, that maybe we should just dissociate these two things for now and focus solely on our performance capacities and how to generate them. For humans, obviously it is clear that we have feeling. However, this phenomenon is correlated with our functional abilities and there is no clear causal explanation for it, so we cannot generate this ability in machines. However, perhaps the ability to feel isn’t necessary for our functional abilities to occur and it is just an additional experience that allows us humans to be more than just “doing” creatures and rather, creatures who can feel and cognize. Thus, my point is that looking at the causal mechanisms and the work that has been done on the easy problem of how we are able to do what we do could allow us to build machines that would pass T2/T3 and could replicate our functional abilities, just not with the ability to think or feel.

  9. Overall I really enjoyed this reading. It provided me with a more nuanced understanding of the hard problem in that it centres on the “causal role of feeling in generating (and hence in explaining) performance, and performance capacity.” I never thought about feeling as a mere correlate of performance capacity, so I was even more amazed to read about Libet’s “readiness potential” work, in which this process occurs even before the subject feels the intention to move. I hope we talk more about this in class because that is kind of freaky. It seems like humans are less volitional than we think. Although I guess that concern would totally negate this entire writing, since feeling is not a prerequisite of performance. I just mean that regardless of that fact, it is a difficult concept to digest: the irrelevance of feeling.
    On that note, I wanted to ask how organisms could still learn to avoid injury and danger without pain. Though I could think of a number of examples that highlight the possibility of performance capacity irrespective of feelings, this one in particular strikes me as one that really does rely on NEEDING the feeling of pain. I recall learning in Psyc 100 that individuals who cannot feel pain (a condition called congenital insensitivity to pain) usually do not live past childhood because they have no mechanism by which to avoid and recover from injury. Does this not prove the importance of pain in avoiding injury?

    1. This comment has been removed by the author.

    2. Pain and responding to pain is essentially nociceptors sending action potentials through axons and synapses up through the central nervous system, and then axons and synapses sending action potentials back down to the motor system, causing us to remove ourselves from the thing that's causing the pain. Everything going on here - or in the body/brain/nervous system at all - needs only the four causal forces mentioned in the article. So it isn't clear why we need to FEEL what it feels like to be in pain in order to avoid pain, so long as all of these receptors and axons and synaptic connections are functioning properly (see the sketch at the end of this comment).

      Regarding congenital insensitivity, these people don’t feel pain, but they do feel what it feels like to not feel pain. They cannot respond to painful stimuli because they are missing something physical in their nervous system. It seems like that is the crucial component: having or not having this physical thing in our nervous system that enables us to respond to pain and avoid injury. It is not clear to me at all why feeling itself is necessary here. Yes, pain itself is important, but what about feeling what it feels like to be in pain?
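
      A minimal toy sketch in Python (my own illustration; every name and number in it is made up, not taken from the paper) of the point above: a withdrawal "pain" pathway written purely as cause and effect, with nothing felt anywhere in the loop.

      NOCICEPTIVE_THRESHOLD = 0.7  # arbitrary illustrative firing threshold

      def nociceptor_fires(stimulus_intensity):
          """Transduce a noxious stimulus into a signal -- an 'action potential'."""
          return stimulus_intensity > NOCICEPTIVE_THRESHOLD

      def withdraw(limb):
          """Efferent motor command: move the limb away from the stimulus."""
          return limb + " withdrawn"

      def reflex_arc(limb, stimulus_intensity):
          """Afferent signal in, motor command out -- the whole avoidance story."""
          if nociceptor_fires(stimulus_intensity):
              return withdraw(limb)
          return limb + " stays put"

      print(reflex_arc("hand", 0.9))  # -> 'hand withdrawn', with no feeling anywhere in the loop

      The puzzle raised in the comment above is exactly that nothing in such a loop seems to require a felt state on top of the causal story.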

    3. Jessica,

      I was intrigued by Libet's findings as well. The findings about the readiness potential indicate that neural activity might precede our feeling of the intention to generate an action. This leads us to the idea that our sense of will and volition in life might actually be illusory. But I am interested in why we would experience these feelings in the first place, if they might not even be considered correlates of the cause of action. My thought is that, in order to make sense of our actions, our brain generates this feeling of intention. Without it, our actions might seem pointless. But then again, this relates to the idea (brought up by a few students) that we don't actually know what it feels like to be devoid of feeling. This is a difficult thought to conceptualize; I just think that Libet's findings shine an important light on the function that feelings might serve in our everyday lives.

    4. Kristina, I'm not sure if I agree with your idea of our brains generating a feeling of intention in order to make sense of our actions. It sounds a bit like a homunculus to me - who in the brain is deciding to generate this idea of feeling? Why do we need to "make sense of our actions"? This image makes me think of our bodies being just shells, with something like divinity or destiny ruling things, and that in truth we are just tricked into thinking we have volition. Personally that sounds a bit too sci-fi to me.

  10. "For the clenching of my fist to be caused by my willing it to be clenched, rather than by some combination of the usual four feelingless forces of nature, would require evidence of a fifth causal force -- a telekinetic force. And there is no such evidence, hence no such fifth force."

    This is related to an argument against Cartesian dualism (i.e., mind is an immaterial "substance" that controls the material body; mind is made up of different stuff) based on the First Law of Thermodynamics: energy cannot be created or destroyed. We (not me, but physicists) arrived at this through centuries of physics, and it's true with as much certainty as science can afford (i.e., quite a lot).

    For "mind stuff" to influence "body stuff" there would have to, at some point, be an injection of energy from mind into body that caused some part of the body to move. Descartes proposed that this should happen in the pineal gland, which is interesting for other reasons (ie, DMT), but it could be any other part for the sake of argument. The energy would have to come from somewhere, amounting to something like a telekinetic force, but since the mind is immaterial, it would essentially be adding this energy into the world. But, since energy can't be created, this doesn't make any sense.

  11. In some ways, I tentatively agree with this paper. Perhaps focusing on solving the easy/hard problems - what it means to feel and how we do it - is what’s causing us to get stuck. Coming at the problem another way would be to do what Harnad suggests in this paper – scale up the robotic TT, and perhaps once we get far enough, consciousness correlates might start to appear. Perhaps this method will allow us to come close enough to bridging the gap in our knowledge that some important conclusion about the problem of consciousness will make itself apparent, when our robots are at a higher level of TT. While I’m all for trying a different approach, I don’t (in my little undergraduate opinion) believe we should just not ‘even bother to try’ building feeling into our robots. Now, I say this knowing it’s not possible right now and may never be possible, but not bothering to even try reduces our chances to zero. Furthermore, do we have a ‘checklist’ of sorts for what constitutes consciousness (much like the 7 characteristics of life)?

  12. Professor Harnad, you write "the rest is merely about what the entity feels. What it is feeling is what it is conscious of. And what it is not feeling, it is not conscious of". I am not quite sure I agree. In fact, I think it is possible to feel (be aware) without something to be aware of. In that way csc is, strictly speaking, not necessarily instrumental. We are always aware but are not aware that we are always aware. If we say we are not aware, we already affirm our awareness. Meditative states are an excellent testimony to cultivating a state of meta-awareness where there is only feeling, but not feeling of. This example is a testimony to the fact that feeling is not causally dependent on cognition. However, when we study aspects of directed awareness like attention, there is an instrumental relationship with the object of awareness. In this way feeling can cause change in cognition. Attention to an emotion, for example anger, affects the outcome of the anger. Soham says there is some kind of top-down causal relation between csc and cognition.

    1. Soham I think you may see this more clearly if you drop the unnecessary words: We feel all the things we feel, from what it feels like to see yellow to what it feels like to feel hungry or what it feels like to understand Chinese or to believe that 2 + 2 = 4. All of those feelings feel like something. There are no "unfelt feelings". (Reflect on that before disagreeing!) Forget about "being aware" of feeling. Just stick with feeling something -- whatever is being felt. The hard problem is there, in full. Meditative states, too, feel like something. Only delta sleep feels like nothing.

  13. “…even less can we explain why adaptive functions are accompanied by feelings at all.”

    The question as to how and why we evolved feeling, instead of some type of detector that knows when to perform a function (as a robot may have), is puzzling; however, is it not acceptable to assume that feeling simply evolved randomly and was passed on because it was “good enough”? Is this answer too simple? Evolution is not always optimal, so the answer to the ‘why’ question might be that feeling (e.g. pain) arose randomly and was sufficient enough to be passed on. In terms of the ‘how’ question, I’m not sure whether to approach this from an evolutionary perspective or using an AI system/neural model (which “still does not provide a hint of causal explanation”).

    1. Descartes is credited with identifying what the hard problem is, although it was not called the ‘hard problem’ until much later, by Chalmers. Unlike dualistic accounts on which there is physical stuff and mental stuff, materialists do not doubt that the brain causes feeling. The question is how and why the brain causes feeling. Why the brain evolved feeling is still puzzling because, as Harnad points out, evolution cannot tell the difference between feeling and non-feeling Darwinian survival machines.
      Language is something that only humans possess, but feeling extends beyond humans to several other species in the world. Could the evolutionary capacity to feel have emerged the way language did? Language has allowed humans to surpass other species in terms of communication abilities. Could the evolutionary need to feel have the same kind of purpose? For example, the welfare of sentient beings seems to matter more than that of non-sentient organisms like plants; does mattering have an evolutionary advantage? Of course, I have not answered the question of why it is that we feel. I am just proposing possible reasons why feeling may have emerged after years of evolution.

    2. I really don’t think that it evolved randomly. I think that would be oversimplifying something so important in our lives - something that affects our decision-making and our survival, that makes us different from the rest, and much more that I can’t know, all of which would point to why it exists... You say that it could have been passed on because it was good enough. Good enough for what purpose, exactly? It is still a good question; I wish we had talked more about the hard problem and the possible meaning and explanations for the existence of feelings. The AI system would help us explain how, whereas an evolutionary perspective would help us generate theories of why. Maybe such performance capacity has to be accompanied by feelings to navigate such a complex mechanism, in an environment that we know the organism is going to make complex too.

  14. I think this paper did a great job of summing up what we learned in class today. It helped clarify for me the fact that consciousness is feeling. One thing that specifically interested me in this reading (and also in class) is the concept of a “fifth” causal force. As mentioned, there are four forces: electromagnetism, gravitation, strong subatomic forces, and weak subatomic forces. Because these are the only known forces, I don’t understand why feeling wouldn’t be encompassed within them. I understand that these forces are able to generate feeling. And I also understand that we don’t currently know why or how this happens… but that doesn’t mean we will never know. It’s just as was mentioned in the paper - “We once thought it was impossible to explain life without a fifth ‘vital’ force. But that turns out to have been wrong. Genetics, biochemistry, anatomy, physiology, and developmental and evolutionary biology are managing to explain all known properties of life using only the four known forces.” Why is there hesitation to the idea that these four forces will eventually be able to explain how/why for feeling, as well?

    1. I think you make a good point: why can't the other four forces be used to explain why and how we feel? In class it seemed as though we did not address this question but focused more on why feeling could not be the fifth force. So we concluded that it is not the case that we feel because 'feeling' is an independent causal force in the universe. Can we not eventually use the other forces of electromagnetism, gravitation, and the strong and weak subatomic forces to determine a causal mechanism for feeling? Harnad addresses this and simultaneously highlights another feature of the hard problem that makes it difficult to solve: the four forces that could potentially be used to explain feeling are all unfelt forces. Is it possible for something that is felt to emerge from, or be causally explained by, four unfelt forces? Or are the authors of the article alluding to the fact that the four unfelt forces will most likely be insufficient to explain how and why we feel? Also, I am still a bit unclear about why feeling cannot be considered the fifth force; the explanation given in class was that “feeling cannot be a force unto itself”, but I don’t find this fully explanatory.

  15. “An entity that feels is conscious; an entity that does not feel is not. The rest is merely about what the entity feels. What it is feeling is what it is conscious of. And what it is not feeling, it is not conscious of (Nagel 1974; Harnad 2003).”
    If any entity that feels is conscious then anytime you are feeling anything, you are conscious and aware of those feelings. What about when your conscious awareness and physical feelings do not match up?
    “A counterintuition immediately suggests itself: 'Surely I am conscious of things I don't feel!' This is easily resolved once one tries to think of an actual counterexample of something one is conscious of but does not feel (and fails every time).”
    What about when you cannot feel what you are supposed to feel? For example, when you have very cold hands, warm water feels like hot water. You are able to look at the faucet angle and consciously infer that the water is only lukewarm and should not feel painfully hot, yet you feel the water as much hotter than it is. You are aware of – you feel – a discrepancy between what you are physically feeling and what you are mentally conscious of.

    Harnad explains the case of anaesthesia, “Even if the involuntary spasm occurs while my hand is anaesthetized, I can see the spasm, and that feels like something. And even if my hand is behind a screen and anaesthetized, and someone simply tells me that my fist just clenched, that feels like something.”
    However, my curiosity lies in the discrepancy between felt feelings and the feeling/awareness of the conscious aspect of feeling. How do we reconcile this?

    1. I am also curious about what you brought up. I remember learning in my other courses about phantom limb pain, in which patients feel pain in a limb that has been amputated. It seems that feeling is much more than just perception, or neural pathways sending nociceptive information.

  16. This is most definitely not a causal answer but more a reflection on the why of feeling. Why do we do things feelingly and not just doingly? Why are we humans that feel, rather than zombies that just react without feeling? Why do we taste sugar as sweet rather than just eat it to increase energy, or when energy is low? Perhaps feeling acts to make us care (which, yes, is another feeling, so not a causal mechanism). Learning can occur just with negative and positive feedback, but it occurs more quickly and saliently when you care about the learning. Perhaps if we care it means we value our lives (and hopefully others') more. If you look at patients with problems in the frontal lobe area, they have bad foresight, don't care or think about the consequences of their actions, etc. These people tend to have shorter lifespans, often living only until early adulthood. Similarly, there are people who lack one type of feeling, pain, and they similarly have shorter lifespans than the population that does feel pain. Feeling seems to confer an evolutionary advantage, for whatever reason, since removing types of feeling decreases the chances of survival.

  17. I understand Prof. Harnad’s argument about how feeling is not the same thing as computation, but I’m still struggling with the final conclusion of this article that AI cannot feel. He states that feeling is an unexplained process, yet has completely concluded that anything with symbol manipulation cannot feel. If neurons combined together can feel, then why can’t a circuit board? For that matter, why can’t a simulation of a circuit board? Symbol manipulation, on its own, cannot be enough for cognition (by Searle’s experiment). However, isn’t a neuron firing just this? The intense interconnectivity of the brain seems to be the point at which feeling emerges from mere substance (admittedly, we do not know how or why), so why do we assume that a digital computer of sufficient complexity cannot do the same thing? If it is not something that has been demonstrated empirically one way or another, I feel uncomfortable jumping to the conclusion that it is absolutely, in all cases and situations conceivable, not possible. I worry that this logic could be applied to any living thing - if I open up someone’s skull and say “where is the feeling? Show it to me. You can’t, therefore it doesn’t exist” - no doubt you would all call me crazy. Yet we seem to be very comfortable making this leap with AI, based on a thought experiment.

    To put my objection more succinctly, it seems that we say two things:
    1. We do not know how feeling emerges from the brain
    2. Feeling cannot be symbol manipulation
    The second of which is a conclusion from a premise which is not itself proven.
    Why can it not though? Really, all we have to go on is a thought experiment, (admittedly, a convincing one) which is enough to SUGGEST a conclusion - but in my opinion, not to reach one. Hopefully someone can point me in the right direction, because everyone else seems convinced.

    1. Hey Edward, I don’t have all the answers, but I’d like to try to address a few of your points.

      I don’t think anybody has concluded that “anything with symbol manipulation cannot feel”. Searle’s conclusion was that a system that is *just* symbol manipulation cannot feel. This means that humans could be computational systems, but there’s an extra something present that allows for feeling. Searle did this by “becoming the machine” and showing that there is no way it could be feeling, even though the machine may be able to pass T2 (which suggests it is cognizing, by human standards). In doing this, Searle gets past the problem of other minds by becoming another mind. While it is just a thought experiment, I think it is logically conclusive.

      I also don’t think that Prof Harnad has in this article, or elsewhere, concluded that AI will not ever feel. The conclusion is that AI as it exists right now cannot feel, and that it is going to be very difficult or even impossible to get it to the point where it does. This is because the experience of feeling is not scientifically accessible. As such, humans can engineer robots that are increasingly able to do more human-like things and eventually consciousness may arise out of their circuit boards--as it does out of our neurons, but if and when this happens, we will still not understand how.

      In regards to feeling arising in a simulation of a circuit board, I think we have to be even more skeptical. As we discussed early in the semester, a simulation of a waterfall may be a 1:1 representation of the waterfall down to the molecular level, but it will still not be “wet.” No matter how closely it approximates the physical world, it will never have physical properties. If we assume a materialist view (which seems to be the scientific/philosophical consensus), then I think consciousness would need to be based in a physical, not simulated, reality in order to emerge. If we assume a dualistic view, then maybe it could emerge in a simulated reality.

      Delete
  18. Because we cannot explain how feeling and its neural correlates are the same thing; and even less can we explain why adaptive functions are accompanied by feelings at all. Indeed, it is the "why" that is the real problem

    On the adaptive function of feeling, we may try to imagine what a world evolving without it would look like. Humans would probably have evolved in a similar way, being able to learn from operant conditioning what is/isn’t dangerous, what food provides a positive reinforcer on our brain functions, etc. However, possessing feelings allows us to have a varying gradient of salience. Instead of things being black or white (good or bad) we have a gradient, we feel strongly about our own children, not so much about strangers. Because it feels like something to give birth to a child, you are likely to care for it, because of the love you feel. Not simply because of “vegetative instincts” inside of you. This added layer of salience (how you feel towards something) is a powerful asset. Even more so when it is reciprocated by someone else.

    ReplyDelete
    Replies
    1. Hi Josiane,

      I'm not sure I agree with the first few points that you made. In a world evolving without feeling, I don't think that humans would be able to learn from operant conditioning. The experience of positive reinforcement increases the occurrence of behaviour because it feels good to be reinforced. The ability to learn about what is and isn't dangerous in the environment relies on the ability to feel fear in the face of danger, and the ability to feel like you need to stay away from that situation! I definitely agree that feeling can have a gradient of salience, in the sense that you can feel more or less strongly about something, but I think it's really difficult to imagine how we would have evolved without it!

      Delete
    2. Hey Kristina,
      While it is difficult to wrap your head around at first, I think we would be fully capable of evolving and living exactly as we are now even if we did not have feeling.
      It is easy to show that operant conditioning does not actually need any feeling involved whatsoever. For example, we can build robots that sense certain things in their environment as being harmful to them. They can then interact with these harmful stimuli and learn to avoid them in the future (because the interaction led to an undesirable state). These are very simple toy robots that undeniably do not feel, yet are capable of operant conditioning (see the sketch below). Therefore, operant conditioning is not dependent on feeling. Organisms are also capable of mechanically adapting to environmental hazards or benefits. They do not have to feel fear or pleasure, but just gauge (automatically) whether doing X led to a favorable outcome and whether it is worth doing in the future.
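      A minimal sketch of the kind of toy robot I have in mind (this is my own illustration, not anything from the article; the stimulus labels and numbers are made up): a loop that updates action values from a reinforcement signal, so the agent comes to avoid the "harmful" stimulus without anything being felt.

# Toy "operant conditioning" without feeling: behaviour shifts purely
# through numerical value updates. All labels and values are illustrative.
import random

STIMULI = ["food", "heat_source", "shade"]
HARMFUL = {"heat_source"}          # hypothetical "harmful" stimulus
ACTIONS = ["approach", "avoid"]
LEARNING_RATE = 0.5

# expected outcome of taking each action toward each stimulus
values = {(s, a): 0.0 for s in STIMULI for a in ACTIONS}

def outcome(stimulus, action):
    # scalar reinforcement: negative for approaching harm, positive otherwise
    if action == "approach":
        return -1.0 if stimulus in HARMFUL else 1.0
    return 0.0  # avoiding costs nothing and gains nothing

def choose(stimulus, epsilon=0.1):
    # pick the currently best-valued action, with a little random exploration
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: values[(stimulus, a)])

for trial in range(200):
    s = random.choice(STIMULI)
    a = choose(s)
    r = outcome(s, a)
    values[(s, a)] += LEARNING_RATE * (r - values[(s, a)])

print(choose("heat_source", epsilon=0.0))  # 'avoid' after training
print(choose("food", epsilon=0.0))         # 'approach' after training

      Nothing in this loop feels anything: the "undesirable state" is just a number being subtracted from a table entry, which is exactly the point.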

      Delete
  19. This comment has been removed by the author.

    ReplyDelete
  20. I found the video above to be both disturbing and inspiring. "We suffer as equals, and in their capacity to suffer, a dog is a pig". Animals are conscious therefore they feel. Animals feel and therefore they are conscious. The video does a great job of reminding us why feeling matters. We cannot explain how or why we feel, but we certainly shouldn’t underestimate the importance of feeling. Thinking is also feeling. It feels like something to think of your 3rd grade teacher and it feels like something to believe in astrology.

    The reading was helpful in summarizing and integrating the concepts from class and makes the point that cognitive science will never be able to truly encompass consciousness/feeling. We can't reverse engineer feelings the way we can reverse engineer performance capacity. The Turing Test may be able to explain the easy problem (performance capacity) but never the hard problem. As Harnad says, we "can't and hence we shouldn't even bother to try" incorporating feeling into a robot. Robots simply do not feel. "Until further notice, neither AI nor neuroscience nor any other empirical discipline can even begin to explain how or why we feel." With this in mind, can we agree to retire the hard problem once and for all?

    ReplyDelete
  21. “Although, because of the "other-minds" problem, it is impossible to know for sure that anyone else but myself feels, that uncertainty shrinks to almost zero when it comes to real people, who look and act exactly as I do. And although the uncertainty grows somewhat with animals as they become more and more unlike me”.
    With respect to the Other-Minds Problem, is the certainty about whether another entity feels a function of how similar its behavior and appearance are to our own? Based on everything I’ve learned so far, I’m confident that an entity’s performance capacity, no matter how alike it may be to my own, cannot provide any certainty that the entity feels, because I am not that entity. The only way we can be certain that an entity feels is by being that very entity that feels. Framing the Other-Minds Problem as quoted above allows the possibility that cognitive robotics could potentially “achieve” consciousness if those robots are indistinguishable from us in performance capacity or appearance – and that is certainly NOT an answer to the Other-Minds Problem. I may be nit-picking here but I think this is a vital distinction that needs to be made.

    ReplyDelete
  22. The paper talks about the impossibility of cognitive science ever empirically explaining feeling, and argues that the best we can do is to answer the easy question first.

    First things first, though: the video of Philip Wollen’s TEDxMelbourne talk is powerful and left me quite dumbstruck at the end. One of the most striking moments was when he talked about our follies and said that good people “all genuinely want to change the world, as long as they don't have to change themselves.” In a world that advocates for higher education and leadership, it seems insensible and self-deluding to strive to better the world and aspire to leadership while knowingly contributing to crimes against other species and our environment. Isn’t the point of studying at the university level fundamentally to help, understand, and better the world? Yet have we been so blind to the fundamental reality of the crimes we have been supporting all along?

    In relating back to the paper, and still far from approaching the hard problem, but also knowing that feeling is all that matters - perhaps a small part of the functional aspect of feeling is fundamentally to help us connect across species in compassion and empathy. Maybe this sounds too melodramatic to be considered academic, but because we can feel what it is to be in pain (more than just nociceptive function), we have the capacity to relate to other species in suffering, and ultimately to work to protect those around us and our environment. Well, regrettably, we know it is not the case now, but perhaps a part of feeling was meant to be a causal advantage in helping us avoid the follies that Wollen talked about. Maybe feeling is meant to be a universal junction where we can relate inter- and intra-species and evolve holistically.

    Oh no, I know it sounds all grandiose and somewhat off-track. But perhaps we have accredited it all wrong: yes, there is feeling in eating (and unfortunately enjoying) a sacrificial meat dish, and there is feeling in fulfilling the greed that impinges on our environment. But maybe, when it comes down to its virtue, feeling was meant for some sort of connectivity across organisms. What do you think?

    I am sorry for the mess of thoughts above – that was by no means an answer to any aspect of hard questions but just a thought to entertain. And would love to hear what others have to say.

    ReplyDelete
  23. I am also left in awe from Philip Wollen's talk at TEDxMelbourne. As a student minoring in International Development Studies (IDS), I am ashamed to have been blinded by the theories that I was taught in the courses and conferences that I took. The problem and the solution of International Development lie not in adjusting fiscal policies and formulating new foreign aid strategies, but in having the insight to see the vulnerability of other species and protecting their rights as fellow organisms on the planet Earth which we cohabit. The solutions suggested by organizations trapped in their own False-Paradigm model cannot and will not "feel" that there is a connection between the problems that they are tackling and the protection of animal rights. Like Philip Wollen said, our forks and knives (and chopsticks and hands and mouths and our feelings) are weapons of mass destruction.

    Like you said Grace, "there is feeling in eating...a sacrificial meat dish." Just because we can feel does not mean that we should feel everything that we can feel. I cannot recall who it was, but "freedom consists not in doing what we like, but in having the right to do what we ought." We have the freedom to feel whatever we want to feel, but we shouldn't. There are certain rules and regulations in place to help us enforce that, and animal rights should be enforced in the same way, especially in the legal system that we live in.

    ReplyDelete
  24. I really liked the comparison in this article between life and feeling. In both cases, it seems as if something extra emerges from the system. Matter can exist without life, so why in some cases is life possible? And living organisms can exist without cognition, or feeling, so why do feelings occur? I would argue that, the same way life is not necessary for matter to exist, feelings are not necessary for survival or reproduction. Furthermore, I think the main reason it was possible to find the processes that produce life in an organism is that a clearer definition was chosen, one relating to dynamic processes within the organism rather than feelings and the other things we normally associate with living beings. I am not sure how to define feelings in a similar way, but I think this may be the first step to uncovering how feelings come to be.

    ReplyDelete
  25. After some more reflection, my lingering doubt about "feeling" vanished once it was explained in terms of the absence of a fifth causal force and its interaction with the other four (electromagnetism, gravitation, strong and weak subatomic forces) to “feel as if the cause is me.” This clarity, nonetheless, was accompanied by other questions which were discussed in class and in readings: if “feelings” are what matter, how can we study them (explanatory gap)? Or did we simply reach an “explanatory cliff” which we cannot surpass with the limited understanding or limited feeling we have as human beings? I am inclined to bring in recent findings in quantum physics and the theory that gravity is evidence for another hidden dimension which we cannot sense (https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.83.4690).

    Gravity is the weakest physical force and the only force which human beings can voluntarily overcome (e.g., lift your finger or jump up). The theory explains that there is a leakage of gravitational force in another dimension which human beings do not have the physical capability to “feel.” If the fifth force of feeling is in such a dimension, we will never be able to detect and observe the causal force of feeling.

    ReplyDelete
  26. In this paper, Harnad exemplifies in a clear manner that the problem isn’t the certainty of whether or not we and other animals experience feeling. There’s also no doubt about the functional abilities we have, referred to here as our performance capacity. However, what one needs to really understand about the complexity of the consciousness/feeling problem is that our ability to feel isn’t one of our performance capacities; rather, it is a correlate of that capacity. Therefore, by building Turing machines or reverse engineering, we are only digging into the functional capacities our brain has. Doing this will not answer our question of how or why we are able to feel. It will only allow us to replicate the computational mechanisms that humans house, independent of any ability to feel.

    ReplyDelete
    Replies
    1. I think the hope is that because feeling is a correlate of performance, if we can program performance, feeling may arise. You're right that it wouldn't tell us the how/why we are able to feel, but maybe that correlate would be strong enough for cognitive science to applaud itself for having solved the hard problem; where correlation between when/where is strong enough to suggest causation of the how/why.

      Delete

  27. I’m still trying to wrap my head around the purpose of building a robotic Turing test candidate without trying to program consciousness into it (not that I think feeling can be programmed into one). How does that project differ from the work of other bio-engineers? I thought what made cognitive science unique as a discipline was its goal to explain cognition, and I don’t really see how consciousness can be pulled apart from that project. Can feeling really be torn apart from performance capacity? It seems counter-intuitive to me, given that emotion is such a motivator for performance. What could we expect from a robotic Turing test beyond what we already know about the function we programmed into it? Models of consciousness may not be useful for AI and vice versa, but what use then does an AI have other than *hoping* for consciousness to arise out of function?

    ReplyDelete
    Replies
    1. Hi Krista, I don't think there really would be an attempt to program consciousness (or feeling) into a robot. I think the speculation is more that if one were to program and build a machine with the same performance capacities as humans, would feeling follow? I think feeling can certainly be torn away from performance capacity, as there would be no way to tell that it exists in a T3 or T4 machine. Additionally, AI can't explain why or how we feel, but it can outperform humans in certain cognitive tasks.

      Delete
  28. After reading Chalmers and Dennett, it’s clarifying to read something that hones in on the goals of cognitive science and resists getting tangential and philosophical. The questions raised get straight to the heart of Turing’s agenda for cognitive science. The article was really concise, and I don’t have much to critique or add at this point, but I am left with a lot of big, daunting questions.

    It did, however, make me wonder whether the hard problem (how it is that we feel) is potentially soluble without invoking a quantum or fifth-force phenomenon. The four forces and their interplay within the brain are a lot to work with already, and we have been able to make progress on every other conceivable problem using the four forces. Just because we haven’t conceived of a solution yet doesn’t mean it doesn’t exist. Regarding the quotation, “other systems are alive because they have the objective, observable properties of living systems. There is no further unobservable property of "living" about which there is some additional uncertainty -- no property whose presence you can only ascertain by being the system, as in the case of feeling,” I think that this speaks to the limits of our imagination, not the limits of the universe.

    This article also made me think about the practical use of asking why we feel, at this point in our scientific progress. Originally, I thought of the WHY as having no real point other than being a question to produce wonder and awe, and then, after week 7, a just-so-story generator. However (maybe I've been a little slow on this...) I’m now seeing it as a potentially fruitful point of inquiry. Maybe thinking critically about why we evolved the capacity to feel, framing that capacity as an adaptive advantage, will point us to the causal root of feeling. What advantages does feeling give us over a zombie? How does feeling make us more efficient?
    However, it is entirely possible that this capacity did not evolve for a ‘reason’. As we know from other readings, sometimes adaptations arise for a clear, tidy, logical reason, but sometimes they don’t. Sometimes a capacity is a byproduct of another adaptation, due to genetic drift, or just random happenstance. So the question of why could be useful if, and only if, feeling is adaptive. I guess we have enough reason to believe that it is, given that we all seem to do it, and many animals seem to do it too.

    ??

    ReplyDelete
  29. Regarding: "The research of Libet (1985) and others on the "readiness potential," a brain process that precedes voluntary movement, suggests that that process begins before the subject feels the intention to move. So it is not only that the feeling of agency is just an inexplicable correlate rather than a cause of action, but it may come too late in time even to be a correlate of the cause, rather than just one of its aftereffects."

    I found this concept really helpful in specifying that feelings are not a cause of action. From the reading, it is quite clear that feelings won't be able to be reverse-engineered, and that feeling is separate from performance capacity. We human beings can easily feel while we are performing cognition or actions; however, we don't have a state of "not feeling but still performing" to serve as a negative example. This part about the "readiness potential" reminds me of feelings that do not co-occur with, or are not even related to, the action/output we produce. Not only can a feeling come much later, after the completion of an action, but there are also sensations/feelings, like phantom limb pain, that lack any stimulus to trigger them and lack any (related) action/output. We know what it feels like to perform a memory recall of our elementary school math teacher, what it feels like to eat an apple, to see the colour red, etc. I find feelings very special in that they can happen without any stimulus or goal of action, and I wonder if this might be the reason why we are so lost in knowing how these feelings occur and how/why they are in us.

    ReplyDelete
  30. This comment has been removed by the author.

    ReplyDelete
  31. In the previous reading I was confused as to how we would tie consciousness to feeling. This reading really cleared it up for me, as I understand now that thinking is actually feeling. It’s not something that I would have thought of on my own, but now, thinking about it, I guess it’s really not possible to think of something without feeling it. I think my mindset was stuck on defining feeling as some sort of emotion, and I did not really understand how a sentence like “a cat is on the mat” would provoke something that could constitute feeling, yet I was conscious of it.

    ReplyDelete
  32. From this article it is evident that feeling (consciousness) and performance capacity are correlates of one another and nothing more. What we make of AI is limited to its performance capacities, and because of the other-minds problem, we will never find out whether AI can feel, no matter how similar to us AI becomes. Thus AI systems are not useful for learning about consciousness. There is nothing we can reverse-engineer to grasp the idea of feeling beyond what we already know. I agree with the statement that no empirical discipline can ever begin to explain how or why we feel. Despite helping us reverse-engineer many causal mechanisms in biology, science will not give us the answer to the hard problem. Only the entity itself can know whether it feels; no observer outside the entity in question could ever find out. Passing the Turing Test can only help to explain functional capacity, even that of neural systems, but feeling itself cannot be explained in this way. In fact, nobody even knows what could explain feeling. And by feeling we mean that indescribable phenomenon that occurs when we think while being conscious. Nociception is accessible to robots, but it is detection - function embedded within performance capacity - not feeling.

    ReplyDelete
  33. This article provided a clearer understanding of the hard problem. I found this part especially interesting:

    RE: “The film depicts how, whatever the difference is, our attitude to it [the robot] is rather like racism or xenophobia. We mistreat robots because they are different from us.”

    This made me think about how it doesn’t matter if the T3 feels or not, as long as it imitates perfectly our behavior when we feel. It’s not relevant for us that other people feel or don’t feel due to the other minds barrier. However, as long as other people or robots react in the way we do if someone kicked us (i.e. flinching, saying ouch etc.) we must assume that they do feel. Or as the article put it, as long as the feelings are “functed” and not felt in other people, it’s good enough for us to assume they feel so why should or would we create a different criteria for robots?

    ReplyDelete

  34. RE: "readiness potential," a brain process that precedes voluntary movement, suggests that that process begins before the subject feels the intention to move. So it is not only that the feeling of agency is just an inexplicable correlate rather than a cause of action, but it may come too late in time even to be a correlate of the cause, rather than just one of its aftereffects. Feeling itself is not performance capacity. It is a correlate of performance capacity.

    You are saying that there is an illusion of conscious will, for conscious thought does not cause action. I wonder, then, if you would agree with the statement that feeling is a story that simply provides a false reason for our bodies’ natural reactions to the environment. We seem to do things for a reason, but this reason is made up after the fact. Would you agree that conscious dialogue is just a false narrative that gives our lives substance? It is a way to justify the things we do - with no true function? Consciousness is the first-person perspective that creates a coherent narrative for our life - an after-the-fact story that our mind makes up to make sense of the “blooming, buzzing confusion”, if you will.

    What if this served a function? This narrative is something YOU (first person) feel. It provides a sense of personal volition that provides motivation to live. This first-person perspective is more complex than just "feeling".

    This is why I think it is important to separate feeling from consciousness. This internal, existential crisis of “what is cognition?”, “who am I?”, “what is my purpose?” is only felt when you are aware that you are feeling, like we are. I am not denying that animals feel, but are you really going to extend empathy to any organism with a central nervous system? There is definitely a difference between animals that move through life with personal volition and those that are just reacting to the environment haphazardly. Consider Camus and his thought that the one true philosophical problem is the question of suicide. Our species has a tendency to question its purpose, as if there is something bigger (a 5th force perhaps) that we need to feel in order to wake up in the morning. “And the problem is to answer this question using only the known four forces, all of them unfelt forces.” I do not understand the basis for this claim; is there no room for other forces to be discovered? Is it too much like religion, the soul, and the heebie-jeebie mish-mosh to believe in a 5th-force-like motivator? At first I thought that the problem of suicide was only a human problem, but it turns out that orcas at SeaWorld have tried to commit suicide too. I would like to know your response to suicide (as an existential crisis) as an example of feeling being performance capacity (assuming one was motivated enough by such a crisis to actually do it).

    I think my argument, amidst this word vomit, is: there is a motivating force to life, and if you deny that it is because we are feeling, what do you propose it to be?

    ReplyDelete
  35. RE: "The system that passes the human Turing Test has the best chance, but even there, we won't know whether, how, or why."

    We have discussed that fact that humans (and animals) have feeling, but cannot explain how it arises or why. This is also true for many of our other biological functions; I can say that I know I have two lungs and a heart and that my kidneys are filtering my blood as I write this, but I don't *feel* the mechanisms that are performing this function. Modern medicine and imaging can show me this, or a representation of it, but it is not a self generated understanding.
    An organism is not able to explain its own functional mechanisms from introspection or feeling.
    However when we talk about designing full-human-capacity TT passing robots, I am perhaps more curious/hopeful about the implications.
    The authors propose that *perhaps* a TT robot with approaching or meeting human capacity could feel, but that we would never know for sure, much less how or why.
    However if this robot could explain all of its other mechanisms from its own introspection of sorts e.g. "the contractile muscles in my 3D-printed heart function in this exact way to pump my blood", is it out of the question to think that, again IF it was possible to achieve feeling simply from generating all of our other capacities, that it could make the leap and understand its ability to feel?
    It’s an ironic position after such denial of the value of introspection in determining human consciousness/feeling, but what about the introspection of a robot?
    I may be misinterpreting here...

    ReplyDelete
  36. The danger of just scaling up the Turing Test to eventually stumble upon consciousness is that we will never know when it is achieved, and we will have no way of preventing it from happening. There is no way to measure consciousness (ie. Other minds problem) thus trying to build a full human TT is the best hope, but isn't the only potential model for consciousness. If it is a symptom of certain cognitive performances, then maybe the subset required is smaller than that of a full human. If we believe that animals can feel then this argument makes more sense. The problem is we will never know where and when it occurred in the building process.

    I really love science fiction movies depicting AI, but they often gloss over the question of consciousness or assume it’s there to begin with (like in the movie presented here). Recently, Westworld (the HBO TV series) has tried to come to terms with consciousness in TT-passing robots; the premise is that the robots are not able to feel, and the show goes through the process of them obtaining it. While shows like this are extremely popular and enjoyable, I really wish that a director would take on some of the more complex issues we've been talking about in class, as I've outlined above. I think that pop fiction usually skims the surface of AI, talking more about how AIs could love, show emotion, and deceive, and less about the other-minds problem, which I think would be a very interesting concept! Any future directors or writers in the class? Please go and make these conversations into science fiction.

    ReplyDelete
    Replies
    1. We won’t have a sure way of knowing as you said because of the other minds problem, but we will have to assume that since it has the exact same structure as we do, it does feel. You mentioned that maybe a subset would be enough if we are looking at certain cognitive capacities, and it makes sense if we believe that animals can feel. However, we would need to recreate our own species in order to make sure we created a T5 and do the Turing Test. We do not know what it is like to be a cat, so it would not make sense for us to reverse-engineer a cat. Also, I believe that if we come to a point where we are able to create a T5 robot, we will understand much more about feelings and how it works. Thus, the question of whether the robot feels, the other minds problem, will become easier to answer, as we will probably know the answer to how, and be on our way to why by then (hopefully).

      Delete
  37. RE: "Hence even when it feels as if I've just clenched my fist voluntarily (i.e., because I felt like it, because I willed it), the real cause of the clenching of my fist voluntarily has to be a lot more like what it is when my fist clenches involuntarily, because of a reflex or a muscle spasm. For feeling is not a fifth causal force. It must be piggy-backing on the other four, somehow. It is just that in the voluntary case it feels as if the cause is me."

    When we say that being conscious is to feel, I remain skeptical about consciousness being feeling, because feeling is a trigger that pushes us to do things. We feel hungry so we go eat. We feel happy so we smile. We feel sad so we eat chocolate. What does this tell us about the mind? Is there not cause before feeling?
    Perhaps this is where just functioning comes in, where we are simply ‘on’, much like when you power up a computer. We are idle, and we do specific things when we are idle to make sure that our minds can function, that there’s a proper environment for consciousness and cognition to take place. If so, identifying what our idle mechanisms are (what consciousness is not) and isolating them would tell us more about what feeling really is. At this point, feeling looks a lot like input and output rather than the software itself; it is the motivation for, or result of, something our minds respond to. If you don’t feel you don’t do; if you don’t do then you don’t feel and you die, or you’re an inanimate object. Animals feel hunger and react accordingly, plants need light and react to the sun, rocks don’t do anything. Which of those are truly conscious and which are not?

    ReplyDelete
  38. Two things really stood out to me in this article:
    (1) The discussion of Libet (1985) really affirmed my view that in order to successfully study the physiological/psychological/other-logical mechanisms underlying our doing capacity, we cannot be trying to model the feeling. Not because, as Dennett thinks, these feeling reports are inaccurate, but because the events that these feeling (Dennett: subjective/1st-person) reports accompany are related to them by mere correlation. We have no more reason to believe they are causally connected than we have evidence for them being the result of some 5th force.
    (2) In the correlation and causation section: evolutionary accounts cannot tell us why it is adaptive for a function to be felt, and we can’t be sure how the neural correlates are related to the feeling under investigation.

    Both these pointed out to me why making feeling the dependent variable, instead of some cognizing/doing capacity, is problematic when it comes to interpretation of correlation.

    ReplyDelete
    Replies
    1. The discussion of Libet (1985) stood out for me as well. Libet's finding was that the felt intention to move kicks in only after the brain process that initiates the movement (the readiness potential) has already begun. Therefore, feeling does not cause the action, but somehow coexists with it (or follows its cause).
      I will look into the study more to see how it was conducted and whether these findings are verifiable. But does anyone else have more knowledge about this study?

      If the study is accurate, then where does the intention to carry out an action come from? It is tempting to assume there is a mysterious fifth force. Furthermore, the evolutionary adaptive value of feeling becomes even more mysterious. Is feeling potentially a byproduct of something else? At the same time, feeling is unlikely to be a byproduct, since so many organisms have it even though they evolved in very different environments.

      Delete
  39. Although we’ve discussed it throughout the course, I think for some reason it took me until this paper to fully understand that we will never know for sure how or why something feels, or even whether it does at all. After the discussion of the answers to the AAAI Symposium questions this really became clearer to me, and I think it was because I considered the reverse of the CRA. When Searle put himself in the same computational state as a computer or another mind, he could only be sure that he was not in the corresponding mental state (he did not understand Chinese); he could never be sure that some mental state was present. In this way, the CRA only works to negate the possibility of feeling; it could never prove feeling.

    ReplyDelete
  40. “The film depicts how, whatever the difference is, our attitude to it is rather like racism or xenophobia. We mistreat robots because they are different from us. We've done that sort of thing before, because of the color of people's skins; we're just as inclined to do it because of what's under their skins. But what the film misses completely is that, if the robot-boy really can feel (and, since this is fiction, we are meant to accept the maker's premise that he can), then mistreating him is not just like racism, it is racism, as surely as it would be if we started to mistreat a biological boy because parts of him were replaced by synthetic parts. Racism (and, for that matter, speciesism, and terrestrialism) is simply our readiness to hurt or ignore the feelings of feeling creatures because we think that, owing to some difference between them and us, their feelings do not matter.”

    I think that there is an important parallel to be drawn here between Professor Harnad’s article and Philip Wollen’s TEDxMelbourne talk. While Professor Harnad was referring to the differences between a robot-boy and a human-boy, I think that this example can be extended to the differences between an animal and a human. We mistreat animals because they are different from us. We are presented with evidence that animals can feel: they cry and moan when they are tortured, they show signs of suffering. Because of this, mistreating animals is speciesism, which by the article’s own logic is no different in kind from racism. I think that this point is very well iterated in something that Wollen said: “When we suffer, we suffer as equals…..in their capacity to suffer, a dog is a pig, is a bear, is a boy”. This is an extremely powerful sentence, and it really resonated with me. If we would morally object to torturing a human child, then, by the same logic, we should morally object to the torture of animals.

    ReplyDelete
  41. RE: The 6 questions

    Given that AI isn't useful for understanding consciousness, why is computer science a field within cognitive science? While algorithms and an understanding of systems are obviously useful at a basic level, the pinnacle of CS seems to be artificial intelligence. While I always saw the direct relevance of Psych, Neuro (I disagree that it can't be used to show how we feel; I think the field isn't there yet but someday may be), Ling, and Phil, I can only see how CS is just barely useful for the field, and the end of the readings affirms this for me. I think the answers to the six questions mirror the "sexiness" of artificial intelligence as a replica of consciousness. We all want so badly for machines to be like in Ex Machina, and we seem to always ignore the vast differences between AI and the brain.

    ReplyDelete
  42. if the robot-boy really can feel... then mistreating him is not just like racism, it is racism, as surely as it would be if we started to mistreat a biological boy because parts of him were replaced by synthetic parts. Racism (and, for that matter, speciesism, and terrestrialism) is simply our readiness to hurt or ignore the feelings of feeling creatures because we think that, owing to some difference between them and us, their feelings do not matter.

    I really liked this, as well as Philip Wollen's TED talk. They both provide an eloquent way to think about feeling outside the anthropocentric POV with which we've approached a lot of the topics in this class. More than anything, they serve as potent reminders of the importance of understanding what we've studied. They encapsulate, deeply and emotionally, what feeling, or specifically in these cases what suffering, is. While we cannot answer the how or why, the existence of the phenomenon is so deeply ingrained in our state of being, and the state of being of animals, and maybe one day robots (in the fictional Spielberg sense), that we must be diligently aware that the science of its origin and function has nothing to do with its experience. This collection of videos plus the article articulates the hard problem in a way that not only makes that distinction but reminds us of the responsibility that comes with each of us being capable of feeling, and aware of others' feeling, regardless of the Other-Minds barrier.

    ReplyDelete
  43. “If we could explain what the causal advantages of feeling over functing were in the cases where our functions are felt (the ‘why’ question), and if we could specify the actual causal role that feeling plays in such causes (the ‘how’question), then there would be scope for an attempt to incorporate that causal role in our robotic modeling.”

    Can’t the AI robots themselves answer these questions? By that, I mean that AIs do not perform as well or as fluidly as humans, so it would seem the only thing missing is feeling. This suggests that the “why” of feeling is to make other human beings acknowledge you and the fact that you’re alive WITH feelings. Specifically, empathy and mirrored understanding are the main ways that humans determine which other creatures are conscious, meaning that if you are able to show emotion-related behaviors, and if human beings have these empathic capacities overall, then it would create a privilege among humans. This would ensure more survivability for our species, creating a way to differentiate human beings (and possibly a desire to do so). Although this is a just-so story, it fits in generally with this paper and the movie being discussed.

    ReplyDelete
  44. The question of “feeling vs function” seems similar to the question of reverse engineering to me. Harnad states, “We cannot explain how feeling and its neural correlates are the same thing” - for example, how the neural correlates of pain are the same thing as pain. I suppose this is what makes it the hard problem; there is no observable relationship between neural activity and what is felt, i.e., this is why it is more difficult to reverse engineer the brain than the heart.

    The question of whether it is adaptive to feel rather than just function is very interesting to me. Especially since, as Harnad noted, agency is an aftereffect of action, so maybe feeling evolved as an aftereffect of function? And it isn’t adaptive to feel at all. However, I can think of many ways that feeling could be evolutionarily adaptive, in that there is more motivation to perform life-saving behaviour if it feels like something to be afraid of death.

    ReplyDelete
  45. I am a little curious about the role that memory can play in consciousness. While this might be completely beside the point (and please let me know if it is!!), I’m sure a good number of us know what it feels like to be blackout drunk. While definitely an unhealthy state to be in, it raises some important questions for me: can we see somebody who is blackout drunk as someone who is feeling? I have had some unnerving experiences where the people around me had no idea that I was blackout drunk, since I responded and acted normally (normal enough to pass all levels of TT, even). Somebody who saw me in such a state would assume that I was conscious and able to feel. But I, personally, have no memory of those few hours, and when I come out of a blackout, I feel as if I have regained consciousness, as if I had just woken up from sleep (a state that is definitively unconscious, in that we do not feel during dreamless sleep). Have I, in those few hours, unwittingly become a Dominique? You would not kick me, as you would assume I was feeling, but I myself would not say that I was capable of feeling anything in those few hours. Is this because I was truly not feeling, or because I was feeling but could not commit those feelings to memory, and so a few hours later would claim that I could not feel? (In my weak defense of my own behaviour, I have not been that drunk since my first year of university. Not an experience I would recommend to most.)

    ReplyDelete
  46. Every time the fist-clenching problem resurfaced in this article, I kept thinking about the mechanism of breathing, because it can be both a conscious and unconscious process. Generally, we breathe unconsciously and the action is controlled by the brainstem. So, technically, we know what it feels like to breathe, even though we are not acutely aware of the action and don’t actually feel the inhale/exhale process. Is this different from knowing what it feels like to breathe when we are controlling when we inhale and exhale and actually being aware of the air we are taking in and releasing? Or is this latter situation one where we know what it feels like to breathe and, on top of that, what it feels like to be aware of our breathing?

    Or, in the case of unconscious breathing, do we not actually know what it feels like to breathe because we don’t know what it unconsciously feels like to not breathe? Thinking about it, we only know what it feels like to not breathe if we, or an external force, restrict our breathing and we are aware of the fact…

    ReplyDelete
  47. To be honest, I don’t really see why, in sci-fi, people are trying to recreate consciousness. What purpose would that serve us? Are people just trying to understand consciousness, or is there some practical use? Because I really don’t see it. It also seems just about impossible. Relating back to the other-minds problem, the only mind that you can access is your own, and I’m assuming that to get even anywhere near discovering how consciousness happens (even though we never will), you would have to have other people’s help, but you can never know what they know or what they are feeling, or even whether they’re telling the truth.

    ReplyDelete
  48. RE: “the problem is with the causal role of feeling in generating (and hence in explaining) performance, and performance capacity.” & “There is no evidence at all that feeling is or can be an independent causal force, even though it feels as if it is.”

    I’m thinking about the possibility that our feelings are essential for our performance capacities as cognizers. If that is true, could it be that we won't be able to solve the easy problem because we need to solve the hard problem first? The four known forces of physics cannot explain how it is that we feel something as we cognize, and if feeling is a necessary component of doing what we do, we won't be able to explain the doing part of the problem (the easy problem) either. What we have discussed so far has been in favor of going the other way around and solving the easy problem first, but what is our evidence (if any) for eliminating the possibility of the hard problem being a prerequisite for the easy problem?
    I think the work of Libet (1985) that was mentioned in the text is not enough to be this piece of evidence, because it shows only another correlation between the appearance of patterns of brain activity, feeling, and doing. Determining the order of appearance of these three does not necessarily give us any insight into their causal relationships.

    ReplyDelete