Saturday 2 January 2016

11b. Dror, I. & Harnad, S. (2009) Offloading Cognition onto Cognitive Technology

Dror, I. & Harnad, S. (2009) Offloading Cognition onto Cognitive Technology. In Dror & Harnad (Eds): Cognition Distributed: How Cognitive Technology Extends Our Minds. Amsterdam: John Benjamins 


"Cognizing" (e.g., thinking, understanding, and knowing) is a mental state. Systems without mental states, such as cognitive technology, can sometimes contribute to human cognition, but that does not make them cognizers. Cognizers can offload some of their cognitive functions onto cognitive technology, thereby extending their performance capacity beyond the limits of their own brain power. Language itself is a form of cognitive technology that allows cognizers to offload some of their cognitive functions onto the brains of other cognizers. Language also extends cognizers' individual and joint performance powers, distributing the load through interactive and collaborative cognition. Reading, writing, print, telecommunications and computing further extend cognizers' capacities. And now the web, with its network of cognizers, digital databases and software agents, all accessible anytime, anywhere, has become our “Cognitive Commons,” in which distributed cognizers and cognitive technology can interoperate globally with a speed, scope and degree of interactivity inconceivable through local individual cognition alone. And as with language, the cognitive tool par excellence, such technological changes are not merely instrumental and quantitative: they can have profound effects on how we think and encode information, on how we communicate with one another, on our mental states, and on our very nature. 

96 comments:

  1. This paper nicely put into words some of the thoughts I had when reading “The Extended Mind”. The idea that a mind can extend beyond the entity’s body leaves much room for ambiguity. Take the example of a telescope. When one is viewing a distant star, those in Chalmers’s corner may claim that the instrument is now part of the distributed sensorimotor state that includes the viewer’s physical body. However, when the viewer pulls back, the telescope has not been altered in any way. The same photons are entering the lens and emerging from the eyepiece as before, but now, the telescope is no longer considered part of the distributed state. What has changed to make this so? Is this change in the spatial dimension, or does it rely more on the perception of the viewer? While this is a sensorimotor example, I believe the same principles apply to the question of the mind and cognition.

    ReplyDelete
    Replies
    1. I agree with your point that the sensorimotor case informs why cognition is not extended. The case of the telescope helps make it clear that the telescope does not form part of our sensorimotor capacity; it just augments it. The input is still to my “skin-and-in sensorimotor state”. The analogy holds for cognition and, for example, using a calculator. The calculator does not form part of our mental states; it just augments them. We either feel something or we don’t, i.e., there is either a mental state or not. My interaction with the calculator is just an interaction, and the calculator, per se, does not form part of what it means for me to cognize. “It is still cognizers who cognize – the tool-users, not the tools”.

      Delete
    2. I do agree that this example makes the argument clearer; however, I think it's interesting to look at examples that are harder to classify as extended mind or not. For example, if we were able to add an extra memory drive into the brain, would that be considered extended hardware? Or, even more puzzling, if the brain can function without certain parts, do those parts classify as extended cognitive capacities that only "enhance" mental states? This is discussed in the paper. If they are considered extended mental states, it's interesting to ask which parts of the brain are not considered extended, that is, which parts of the brain are absolutely essential for cognition?

      This comes back to where and what in the brain, and I am sure Fodor would disagree. But I do think this argument brings up essential questions about the localization of cognition.

      Delete
  2. This paper does a great job of sorting the muddle in the Clark and Chalmers reading. Specifically, the point about the migraine is quite helpful:

    "There is no such thing as a distributed migraine – or, rather, a migraine cannot be distributed more widely than one head. And as migraines go, so goes cognizing too -- and with it cognition: Cognition cannot be distributed more widely than a head -- not if a cognitive state is a mental state."

    Cognition, in virtue of being embodied and in the context of a world, uses things outside of it ("cognitive technology") to do what cognition does. But at no point does the tool itself become the cognition or a cognizer itself. Cognition has a characteristic 'internal' aspect that it can extend relative to, i.e., there is always someone or something doing the cognizing. This is a crucially important point, as it is required for the extended mind thesis to differentiate between a mind and an ant colony. Part of figuring out the easy problem and building a T3 will likely require further insights into what exactly that distinction is.

    ReplyDelete
    Replies

    1. Hello Auguste!
      I strongly agree with this paper and the point of view you put forward. However, I wanted to try to play Devil’s advocate, as a mental exercise. In the example of a migraine there are distinct parts. What it feels like in that exact moment (we could call that the mental state) is something that cannot ever exist beyond that moment. This is because of what we mean by a mental state, and because a migraine that I had a year ago will be different from one I have today and one that I will have in a year. There is no way, with this definition, that one can argue for a distributed cognition, because we will eventually run into the hard problem of cognition: how and why we feel what we feel. However, memory seems like it may be different. Let us say I write down in as many words as is necessary what it felt like for me to experience this migraine. Then a year later I read this; it elicits a mental state, and in some ways I can feel what that migraine was like because of those words. Those words are shaping my mental state, and in some way they could be considered a part of it. Of course part of the process is done endogenously, but some is reliant on external factors. So I am not saying ALL parts of mental states are external, BUT if mental states can be elicited and shaped by exogenous factors, then why can we say beyond a doubt that cognition cannot be distributed?
      I think the killer argument here lies in the hard problem. Namely, maybe cognizers are the ones who feel and cognitive technology are the ones that do not.
      You argue that the tool itself can never become the cognition or the cognizer itself. I think that is a fair point; however, from that, could we argue that there is a ‘perfect cognizer’ who is completely undisturbed by cognitive technology? What would this look like? I would argue that it is impossible. A cognizer must in some way interact with its environment. Let’s take categorization as an example. If we take the definition that categorization is ‘doing the right things with the right kind of things’, then we quickly see that there need to be THINGS that are used. All aspects of cognition require something external, and if this is the case, then couldn’t we say that these external things are part of cognition itself? (I am not saying that the external things are cognizers.)

      Delete
    2. "What we describe as a mental state and because a migraine that I had a year ago will be different from one I have today and one that I will have in a year…. Let us say I write down in as many words as is necessary what it felt like for me to experience this migraine. Then a year later I read this, it elicits a mental state and in some ways I can feel what that migraine was like because of those words. Those words are shaping my mental state and in some way they could be considered a part of it”

      I agree with you when you say that a migraine today and a migraine last year will be different and unique from one another. Furthermore, yes, if you were to write down in sufficient detail how it felt to experience last year’s migraine, you likely would remember the pain, and to a certain degree feel the ‘ghost’ of that migraine (but importantly, you would not experience that migraine over again; you would only remember how it felt, without actually feeling those feelings). I disagree, however, that those words would become a part of your mental state. Yes, they would have the capability to shape it, or cause you to recall the emotions you felt at the time, but I do not believe those words would count as being a part of your cognition. I do think you have a point when you say this rests in the hard problem, with cognizers being the ones who feel and cognitive technologies being the ones that do not. However, when it comes to cognition that requires external ‘things’, I still do not see how those external materials are anything more than tools or resources.

      Delete
    3. The confusion that arises when we talk about what is or is not part of your mental state was put quite nicely to me the other day: apparently in Star Trek there's some kind of creature that is basically a hive or conglomeration of small flying parts. Every time it comes into contact with anything it engulfs it, breaks it down and makes it part of the hive mind. So it just keeps growing and growing indefinitely. The idea is that the extended mind is kind of like this: every time you interact with anything in any way, it becomes part of your mind. So by the end of our lives we each have these gigantic minds with connections all over the globe? What does that mean? How does that really help us understand the mind, besides merely stating what we already knew: the mind, in being part of my body, which is a living, organismal unit, exists in the world and requires a world to survive. It constantly offloads capacities to the environment both ontogenetically and phylogenetically (through Baldwinian evolution).

      Delete
  3. I thought this was a good choice as a final reading for the semester because it subtly captures the “concerns” and perplexities people may have towards cognitive science in general. In any case, my response to 11a expressed support for Chalmers’ idea of extended cognition because I love the idea of cognition being dynamic with the environment in equal parts. It seems more ecologically valid to me. However, this paper provided the important clarification that external features in the environment can extend our cognitive capacity, sure, but they cannot cognize themselves. They are not mental, in that they are not feeling. This aspect cannot be distributed beyond the mind, the same way a migraine headache cannot be distributed. In light of this, I must retract some of my excitement towards Clark and Chalmers’ idea. A dynamic interplay between mind and environment seems valid, but accepting these constituents as equal and causal when it comes to the organism’s behaviour implies that the notebook for long multiplication is a cognizer: an implication I do not support.

    Undoubtedly, the massive offloading power of technology has caused some people to blur the important distinction between a cognizer and the cognitive technology at its disposal. Thinking back to Searle’s paper: he suggests that if mental states are just computational states, then thermostats qualify as such. It is easy to see the absurdity of distributing our mental state onto something like a thermostat, but with technologies such as Google, Wikipedia and language (technologies that can themselves be distributed, forming massive networks accessible for cognizing) it becomes especially important to remind ourselves that mental states are feeling states and even the most sophisticated technologies are not cognizing.

    ReplyDelete
    Replies
    1. Hey Jessica,

      I agree with pretty much everything you said above, but the last line, where you said
      “…[I]t becomes especially important to remind ourselves that mental states are feeling states and even the most sophisticated technologies are not cognizing”,
      made me question what we would make of an organic human who becomes gradually more integrated with technology.

      I think part of Chalmers’ insight can be interpreted as questioning where the line is between the cognitive technology that we use and cognition itself. It’s easy to dissociate the two when we are talking about organic organisms, like humans, using technology such as a calculator. But what about a human with technological implants that enhance their cognitive capabilities or replace their damaged sensory capabilities?

      The example in 11a of the man with Alzheimer’s and his notebook functioning as his memory begins to straddle the line between cognizer and cognitive technology. If we continue this logic, and the man with Alzheimer’s continues to lose more cognitive capabilities, and with each loss a new technology replaces it, eventually all the tasks of his brain are replaced by technology. Is he still a cognizer, or just a sum of cognitive technologies? In a more relatable example, imagine someone who has been deaf since childhood and is then given a permanent cochlear implant to restore their hearing. Is it the cognitive technology that is doing the feeling, or the cognizer?

      Delete
    2. @Jessica: 11/10 summary, definitely agree!
      @Karl: I think your example of a person with a gradually-replaced, technology-based brain is really interesting, and I definitely agree that Clark/Chalmers were trying to get at this fine line with their own examples. As for your question about whether he’s still a cognizer, I want to say there can’t be an answer because of the Other Minds problem – we’d never know whether or not they’re having the feelings required for cognition. (Obviously this is just “Olivia says”)

      Delete
    3. In response to Karl: like Olivia, I find your example really interesting, although it doesn't quite make sense to me to draw a distinction between the cognitive technology and the cognizer. Wouldn't it make sense to say that the cognitive technology is now part of the cognizer, and the cognizer feels? Surely our arms and legs and sensorimotor capacities are not feeling, but they are constitutive of us, who feel. It is our biological "technologies" (capacities) that allow us to feel the things we feel, because they allow us to do the things we do. It feels like something to play baseball or ski, but I doubt I would be able to feel that if I did not have a baseball bat or skis and were not actively doing those things, because I wouldn't be able to do those things. It seems that when we are talking about the capacity to feel, we can't feel the things we cannot do; and if cognitive technologies extend our doing capabilities, they extend our feeling capabilities and become part of the cognizer, who does more and feels more.

      Delete
    4. @Yi & @Olivia:
      After taking a few days to think about the articles and reading your responses, I've come to agree that there isn't a need to draw the line between the cognizer and the cognitive technology. The reason that my example above was drawing that distinction was to show that eventually there is no difference between the cognizer and the technology. In a way, it reminded me of the Ship of Theseus. The analogy is usually used to question identity in a constantly changing body, but this time instead of the body it is the mind.

      Delete
    5. I have two issues with extending cognition to cognitive technology. The first is the issue of an autonomous system: where do we draw the boundary between the cognizing subject and the cognized object? Granted that we draw information from a certain perceptual spatiality, we cannot conclude that objects that fall into that space are solely the objects of cognition. Nor can we conclude that they become receptors for offloading our cognition. Cognition works on two kinds of objects: ones which are external to the senses and ones which are internal (i.e., objects of the mind). We tend to use both of these to offload information onto. Much like the notebook-and-brain example, we tend to offload cognitive tasks within the mind itself, for example through different memory consolidation processes. Unlike the notebook, which we do not perceive as a part of ourselves, we do acknowledge that mental objects are within us. So offloading from subject to object continuously happens, but only internal objects become a part of the mind, rather than what lies outside the subject's purview; in our case, the spatial identification of where our mind is located. The problem becomes very clear when you try to include other cognizing subjects: they set the boundaries for how extended our minds can be.

      Delete
    6. In response to Jessica: I definitely agree/understand that external means cannot be considered cognizers in themselves, since they do not experience feelings. However, I wonder what it means for cognitive science (if it means anything at all) when the external means that we use in the environment to extend our cognitive capacity are able to generate feeling in another cognizer. Does our ability to offload feelings onto other cognizers by means of cognitive technology have any importance in this whole discussion on cognizing?

      Delete
  4. This paper definitely helped clarify why I felt unsettled by the idea of extended cognition as outlined in 11a. In my response to the 11a reading, I basically said I didn’t get how feelings (which, as we’ve discussed in class, are important for cognition) could be housed in stuff outside of the mind. This piece outlined the important differences between cognizers (us) and cognitive tools (stuff we use to enhance cognitive capacities but that don’t themselves cognize), and raised some really interesting points about the past and the future of cognitive technologies.

    ReplyDelete
    Replies
    1. One of the things I disagreed with in the 11a article is outlined in this reading: that tools don't count as an extension of the mind, just a change in input/output or performance capability.

      “If you look at a star through a telescope…[is it the case that] your visual capacity is augmented by the telescope’s power…Or is it just input to your narrow, skin-and-in sensorimotor state – input augmented by the telescope?”

      If the telescope were a cognitive technology, the authors of the 11a article would seem to argue that it is therefore an extension of my mind – and so in this sensorimotor example, the counterpart analogy would be that it’s an extension of my eye and my eye’s abilities. The same analogy could be kept with the car, or the crane. However, when using sensorimotor technology to try and get Chalmers and Clark’s point across, the argument seems weaker than in the cognitive case, and it is easier to see why it doesn’t make much sense. The car, the crane and the telescope are not parts of me; they just augment what I can do by changing inputs to my senses, or outputs from me. Cognition, in my opinion (and it seems to be yours as well), remains firmly between the ears.

      Delete
  5. I found the idea of distributed cognition fascinating. However, I see the problem of “distributed cognition” vs. cognition within a single cognizing organism as related to finding out where and how the feeling of thinking takes place, i.e. the hard problem. The authors give TT robots interacting and collaborating as an example of “collaborative cognition” among individual TT-robot cognizers. Now, isn’t all cognition “collaborative”? We ‘feel’ thoughts because we “collaborate” or interact with the objects, people and ideas around us. Without this collaboration or interaction, we would be zombies or machines that just cognize without any feelings. I don’t think that ‘individualness’ starts and ends with skin, because we are always in some sort of context which our cognition relates to. I understand the uselessness of seeing absolutely no boundaries between one organism and another, or of the concept of Gaia. However, I'm also not certain how the individual can be totally separated as an "individual." And so I think distributed cognition doesn't necessarily have to mean that everything is affecting cognition, but rather that certain things in our environment are more salient to us (such as using social media to share certain news stories) and these are the kinds of things that lead to distributed cognition.

    I understand the idea of “autonomy” of a system, in that it is autonomous if it can do what it does “on its own.” But this is why I think cognition implies a collaboration, while something like breathing is more autonomous. Even in cases like doing multiplication, we need the numbers to have been grounded in sensory objects and so our “feeling” for the number seven is the result of a cognitive collaboration. I think to ask whether cognition is autonomous or distributed is in a way asking whether feeling is autonomous or distributed.

    ReplyDelete
  6. I found the argument about the migraine very interesting: that unless a system can feel as one organism, it is not a single cognitive system. I wonder, then, at what point did colonies of unicellular organisms become one multicellular organism? In the case of the amoeba, a group of them is considered an independent system in and of itself. Furthermore, I'm not sure if this is cleared up in the paper, but is the slime mould an independent cognitive system, or is it only an independent distributed life system?

    ReplyDelete
    Replies
    1. When I read the first article (11a), all I could think to write about in my skywriting was that I did not fully understand! There was something about the barrier between the cognizer and the things that they cognize about in the world that I could not see. This second article helped me understand a lot about that extended cognition concept, but reading your comment about unicellular and multicellular organisms made me realize that this whole principle of a "single cognitive system" (as opposed to what it is surrounded by and the tools it uses in the world) may actually not be as "singular" as I thought it was. I remember a lesson I had on the social life of bacteria: a single bacterium within a colony has the ability to "commit suicide" (apoptosis) in order to release substances that were previously produced within its body, so as to kill a competing bacterial colony and thus protect the rest of its origin colony. In other words, it saves other unicellular entities for no benefit of its own. This made me realize that really, the bacterium itself is not a rational cognizer, but the colony as a whole is! So with that example we can finally visualize the idea that cognition can be happening outside of the independent entity, and inside the cognitive life-system (the colony). So really the term "independent" does not refer to the individual but to the entity that is capable of cognition, which in this case might be the sum of individual bacteria and their "social" links, i.e., their lack of independence. Here, cognition = each self + the other selves around them. Please let me know if you think that a colony of bacteria is not a cognizer, but it seems to me like they are doing the right thing with the right kind of thing, whilst being capable of decision making and all.

      Delete
  7. It seems agreed upon in readings 11a and 11b that eventually we will have devices that, similar to telescopes enabling us to see further, will allow us to think faster and further and that these devices will enhance our natural performance and cognitive capacity.

    Dror and Harnad state, “Both sensorimotor technology and cognitive technology extend our bodies’ and brains’ performance capacities as well as giving us the feeling of being able to do more than just our bodies and brains alone can do.”

    If our goal, in terms of furthering humans’ ability to cognize efficiently and profoundly, is to design and incorporate these ‘cognitive technologies’, does this not demand an accurate and thorough understanding of the what/where in the brain? I would think these efforts would be needed if we were to eventually incorporate such systems. Knowing where a circuit is, what it is connected to, and how it malfunctions will enable us to design technologies that can enhance it.

    Therefore, while a dynamic interplay between external technologies and our brains does not give such technologies the ability to cognize on their own, incorporating them into our system of cognition requires deep knowledge of how our system operates. While the questions of how/why may eternally stand, studying what/where seems useful for eventually integrating technological systems into our innate ones.

    ReplyDelete
    Replies
    1. This is an interesting point, Aliza. You are suggesting that designing and implanting a device that would restore memories in someone with Alzheimer’s would require that we first understood where (and to some extent how) memory storage happens. This reminds me of the way we've been able to successfully design antidepressants without understanding the exact mechanism of depression and its manifestations in the brain. Perhaps designing a technology for memories will be another feat of trial and error.

      This also ties into a point made in the reading. The authors make a distinction between being conscious to remember something and being conscious of how you are remembering that something. As stated, "When we recognize a chair, or understand a word, or retrieve the product of seven and nine from our memory, the outcome, a conscious experience, is delivered to us on a platter. We are not conscious of how we recognized the chair, understood the word, or retrieved “63”. Hence the brain states that implement those cognitive functions are not conscious either." This implies that a technology serving to match the function of memory retrieval would also not be conscious.

      Although not conscious in itself, it would be part of our system and it would certainly affect our brain organization and capacities. The brain may even rewire: synaptic plasticity could allow us to adapt to the presence of this new technology and compensate within other areas of our brains.

      Delete
    2. Hi Elise,
      I'm glad I scrolled the comments and stumbled upon yours! I read an article a few years ago about a biomedical engineer/neuroscientist at USC who has been designing silicon chips that mimic the signal processing that neurons do, and these would eventually be implanted in the brains of patients who have disrupted neural networks and who are unable to form long term memories. While the neuronal processing and connectivity can be confirmed in vitro, I wonder if we need to solve the 'how' aspect of the hard problem (in this case, how we feel when forming memories) for the chips to eventually restore the cognitive function of forming long term memories. There is certainly a feeling associated with recalling memories, but would the formation of long term memories - and being in the cognitive state in which memories are formed - be plausible with artificial implants that computationally mimic electrical signals propagated by real neurons? Furthermore, would these implants thus count as cognitive technology that is external to us if they will be housed in our brain and physically connected to other parts of our brain? Apologies if this comment is a bit rushed and unstructured!

      Delete
  8. Re: "Living and feeling are not necessarily the same thing. There can be living organisms that have no mental states and there can be nonliving systems that do have mental states."

    Can there really be nonliving systems with mental states? I assume that the authors are referring here to a T3, for example, but what I want to contend is whether we can really consider a T3 to be nonliving.

    I think it comes down to how we define "life." If life is defined by the stuff it's made of (organic molecules, in the case of flora and fauna) then no, a T3 is not living. But if we define life functionally, in terms of what it does, I think we get a very different result.

    A dead body, for example, is made up of all the same stuff as a living one. None of its basic composition has changed, but what it does (apart from lying and decaying) has altogether stopped. So it seems like we should define life functionally as opposed to just structurally.

    If a T3 could do everything a human could do, then by definition it would be functionally the same. If you tried to turn it off, it would try to stop you. It would have all the same drives that make a living thing (want to) live, and it would do what it can to keep doing so.

    So, despite the material difference, can we really say it's not alive? Does life, too, have implementation-independence?

    ReplyDelete
    Replies
    1. Hi Michael,

      I agree with you! How can there really be nonliving systems with mental states? I feel like we would never know, because of the other-minds problem. If it takes a nonliving robot passing T3/T4/T5 to cognize, I definitely don't think we've reached that stage yet.

      Delete
    2. This is a very interesting point --

      I started to think about characteristics of life and what might distinguish all living things from a T3. One of those things is the fact that living things grow into their forms. All living things start from a few cells and grow, cell by cell, into their final state. A T3 agent would not be that way; it would have been created by humans, and there would be no way for it to "grow itself" from some seed of a T3.

      Or maybe I'm wrong here, maybe I just can't visualize far enough into the future, and one day we could have parent robots having kid robots. I think that if we ever reached that point, where T3s were generated without human construction, then they would have enough of the features of living things for me to consider them alive.

      Delete
    3. If anyone can explain how and why a nonliving T3 could not feel, they have solved the hard problem...

      Delete
    4. Dominique, I hadn't thought about that. The aspects of growth and reproduction seem definitive of life. I wonder if it would be different if a T4 "grew" from self-replicating nanobots. It would have to "eat" something to get its energy too, and could maybe even "reproduce". Since a T4 replicates the structure as well as the function of the human organism, just in another material, would that be alive despite the material difference?

      Delete
  9. I think there are different unconscious brain states, like when vegetative functions such as breathing and homeostasis are carried out, and automatic cognitive processes like habits or the pop-out of 7 × 9 = 63. As such, I just want to double-check: is it correct to say that unconscious brain states can still be mental, and that not all mental states require cognitive resources or effort?

    ReplyDelete
    Replies
    1. What would make unfelt internal (i.e. cerebral) states "mental" (other than the fact that they are occurring in the brain of an entity that can feel)? But then we might as well call circulatory, or digestive, or immunological states "mental" just because they are occurring in a feeling entity.

      Delete
    2. Thank you for your reply professor! It’s interesting because it’s all these vegetative functions that give rise to an epiphenomenon that ultimately make us “feeling entities”. Asking if brain states are mental or not is just like asking if cells are alive while we are alive, I’m just wondering if it’s possible that we are in fact conscious of all these seemingly automatic mental states, while we’re just so used to it that we tend to ignore them? For example, if we all focus on our breathing rate right now, would you find it annoying because you’re afraid of forgetting to breathe the next second?

      Delete
  10. I wonder whether "Google storage and retrieval" can act as active externalism, as mentioned in article 11a, because Google is just like a notebook for us. But for Google to be an active agent in the environment driving our cognitive process, does it have to be something that we knew already (but just cannot recall right now), such that we know there is an association between the fact we are looking for and the Google storage? And is this how we would link the computer with our brain (wide with narrow mental states)?

    ReplyDelete
    Replies
    1. I think trying to link computers to our brains would be the wrong way to look at it. When our brains are involved in any memory retrieval, we are not conscious of how we arrived at the retrieval of that memory, yet we are still considered to be cognizing because our minds still have the ability to feel. Although we are able to search things up on Google and use the cues that Google outputs (through unknown mechanisms, much like our own minds) to help our own retrieval capabilities, I don't think this links Google to our mind. Google is simply acting like an outside technology (like a microscope) that enhances our performance capability. Google itself is not cognizing and does not have the capacity to feel.

      Delete
  11. In the paper, the authors write, “There is DNA, which can help resolve (up to cloning) whether or not two bits are (or were) indeed parts of the same organism. But genetic relatedness is only relative, which is what allows some to argue that species are individuals and that Gaia is a mega-organism.”

    When the authors mention that genetic relatedness is only relative, what do they mean? I guess I just don’t understand why genetic relatedness wouldn’t be more determinative? Sure, it’s relative, but still certain DNA is more related than other DNA.

    ReplyDelete
    Replies
    1. More determinative of what? Nothing prevents us from declaring that all organisms that share DNA up to percentage P (pick your P) are all parts of one and the same "thing." We can also add (if we wish) that they are all one and the same thing if they can interbreed, and not part of the same thing if they cannot. But what does that "determine" other than the way we decide to use the word "thing"?

      Delete
  12. Another point I wanted to discuss is when the authors mention: “There is no such thing as a distributed migraine - or, rather, a migraine cannot be distributed more widely than one head.”

    But, how do we know this? Doesn’t the “Other-Minds” problem come into play here? I’m not trying to say that I believe inanimate objects are able to feel my migraine (I definitely do not believe this), but I guess I just don’t understand why the other-minds problem wouldn’t also apply here.

    ReplyDelete
    Replies
    1. Hi Laura,

      To my understanding, the migraine example was not meant to deny the possibility of feeling in any system other than oneself (the other-minds problem). "Maybe Gaia 'could' have a migraine" (p.12)

      Instead, the authors used the example to highlight the fact that living should not be conflated with cognitive capacity.

      Although, even if a bunch of living organisms (that we could somehow confirm cognize) were to make up a large system, the system itself would not be cognizing. There would be no such thing as a higher order/shared mental state that is composed of smaller mental states. Rather, the system would be made up of individual cognizers, since cognitive capacity belongs to a user/individual that has/feels their respective migraine.

      Delete
    2. Hi Laura,

      I also had some different concerns about the distributed migraine example. Mainly, I am not convinced that a migraine is a state of the mind, and instead see it as a state of the body. It is a physical sensation that affects not only your head, but your entire body. We wouldn’t say that a broken foot is only contained in the head and not the rest of the body. I still do not think that a migraine can technically extend beyond one person’s body, but I do know there are significant effects that one person’s pain can cause on the people around them (sympathy/empathy pain), and that a person can cause another person to have a migraine (by being loud, etc.).

      I think perhaps the Other-Minds problem could still be a valid point here, but that Harnad denies it due to the fact that it is a problem for all of cognitive science and is itself insoluble.

      Delete
    3. Rebecca, sure migraines are caused by the body. (So are undetected, benign tumors.) But a migraine is felt. Feeling, too, is caused by the body (trouble is we don't know why or how). Yes, it is possible that rocks feel. (That's left open by the other-minds problem -- but [I'm sure you'll agree] it's as likely as that apples will fall up instead of down.) And feelings are felt. So the real question is whether the feeler (hence the feeling) can be wider than a head. Is it any more likely than a feeling rock?

      Delete
    4. I have the same question Laura, and agree that this is a case of the other minds problem. Any conclusion that we come to in relation to other thinking, feeling minds is going to come up against it. When the authors say “a migraine cannot be distributed more widely than one head” there are two ways to interpret this comment, one of which I think is defensible.

      Firstly, they could be saying that in no possible case can individual components come together at a distance to feel. I think this claim is overreaching, especially when we do not understand what it is that induces feeling. I don't see how anyone can claim something is impossible without understanding the mechanics of the possibility in the first place. That being said, I do not think this is the authors' argument.

      Secondly, and I believe what they intended to say here, is that a migraine cannot be felt unless something is doing the feeling. As of now we have only encountered feeling things that exist in one head, and something must be able to feel in order to feel. If all things which feel exist in one head, then migraines cannot be felt outside of it. Regardless of the situation of the body (what causes the feeling) the end result is that one singular feeling thing feels the feeling.

      If they argue the second point, then the only aspect to debate is whether or not we should worry about the other-minds problem here. We can avoid it by shifting the burden of proof with common sense – saying it is very, very unlikely that a distributed network can feel if it is not in one head like ours, and that anyone who wishes to disagree should provide a different example which does not beg the question.

      Delete
  13. I really enjoyed this reading, as it clarified a lot of the uncertainties about the last one. I liked the example of VR to help explain which parts of our sensorimotor abilities extend beyond our mental states and which don't. I also really appreciated the part on consciousness, which clarified how consciousness is not the "mark of cognition." The abilities to balance and breathe are clearly cognitive functions, but this does not mean that they are necessarily conscious decisions: we are not conscious of how we control these mechanisms.

    One difficulty I had with this reading was the idea of "Gaia" - of earth being an organism, or even the idea of a species being an organism. I think perhaps maybe metaphorically this interpretation could be used in the sense that all parts of our ecosystem/planet work together to function properly, but I do not believe that it is a tangible idea. Living organisms have very different cognitive states than the earth. While I live in a house and in my house there are plants, people, and pets, I would not say that my house is an organism.

    ReplyDelete
    Replies
    1. And I suspect that the reason you don't think the house (including its contents) is an organism is because you don't think the house and its contents can feel. Only some of its contents, individually, can feel.

      But I think that's just the animism that is hiding inside vitalism: "all living things feel." But that's incorrect. Living and feeling are not the same thing, though up to a point (or rather starting from some point, beyond plants and microorganisms) they are closely correlated -- with having a nervous system. Trouble is, no one can explain how or why...

      Delete
  14. RE: "Being an organism was conflated, animistically, with having a mind. This is an error; living and feeling are not necessarily the same thing. There can be living organisms that have no mental states and there can be nonliving systems that do have mental states."

    I am puzzled as to how there can be a nonliving system that has mental states. Due to the other-minds problem, it seems as though we can't be sure whether non-living things have a mind - perhaps they do, perhaps they don't; we would never know. I could see how the reverse argument would go, that living organisms can have no mental states: vegetative states are unfelt states, therefore they are not mental states and cannot cognise. If the definition is mental states = cognitive (felt) states = being conscious while brain states are being implemented, the argument that nonliving systems have mental states seems to be flawed. Conversely, the difference between cognitive and vegetative states seems somewhat arbitrary, but seems distinguishable from the felt/unfelt distinction.

    ReplyDelete
    Replies
    1. Fiona, see the discussion of this point above. Internal states can be vegetative or cognitive. Both cognitive and vegetative states can be felt. "Mental" just means felt. Vegetative states can also be unfelt. But isn't the only reason it even makes sense to talk about unfelt "cognitive" states the fact that they are happening inside an organism that also has felt states?

      Delete
    2. This comment has been removed by the author.

      Delete
    3. The article reads that “consciousness itself cannot be the mark of cognition”. Cognition is whatever gives cognitive systems the capacity to do what they can do – but whether that entails “feeling” is still up for debate, especially when we conclude that the T3-passing robot is the product of successfully reverse-engineering cognition (as it pertains to the easy problem).

      Prof. Harnad, I just want to clarify whether feeling necessitates consciousness (Article 10c says so), because you say that “vegetative states can be felt” – can you explain how that is? How can a nonconscious state be felt?

      Delete
  15. The advent of cognitive technology definitely marks a new trajectory of brain development. The article does a convincing job of defining what it means to cognize, what it means to possess mental states, and how environmental tools can enhance or alter our minds. A frustration I had with the first externalist article was that it seemed like semantics after a while; the endless bracketing of "the criteria for individuating wider and wider forms of both physical substrates and life" does indeed become arbitrary and kind of pointless to discuss.
    A pure externalist view can never really avoid these hurdles, and so I find the latter article more poignant.
    The effect of cognitive technology is undoubtedly a big one. The explosion of ADD/ADHD and general attentional problems in children and adolescents has been thematically linked with the trend of children's cartoons becoming more and more fast-paced, erratic, and less contemplative (i.e. Sesame Street > Spongebob) (whether this is actually due to a changing televisual environment or changing clinical practices, who knows), and others have pointed to the emergent multi-tasking generation who do their homework while listening to music, texting, and having 10 browser tabs open simultaneously.
    These are, if not shallow and unproven, exemplar effects of changing cognitive technologies on the mind, and will surely be the first of many.

    ReplyDelete
    Replies
    1. Cole - I find you make a really interesting point here. Harnad concludes his paper by stating that cognitive technology truly has the power to "affect our brain development, organization and capacities", which could translate into changing how our minds work down to the basics of living, thinking and communicating (i.e. in the ways that you have mentioned). So, if I have the right idea, cognitive technologies are not themselves an extension of the mind, but it is agreed that they do play a huge role in extending cognition.

      Delete
    2. Hey,

      I'm curious about that link. I think it would be very difficult to prove that children's media (cartoons, TV shows, video games, etc.) with less contemplative motifs can be linked to ADHD in children. There are too many confounding variables in the environment that would also have an effect, like the increased use of texting and similar technologies, or multimedia in classroom settings - but I suppose that's a different argument for a different time.

      I think that changing cognitive technologies are particularly interesting in the 21st century, given the great number emerging, but I find it easier to look at similar historical examples. The advent of print has been shown by historians to have changed how people interacted with each other's ideas: instead of using the written word as a pure memory aid, they moved to communicating deeper information through text, decreasing the use of dictated speech. It's interesting to think how even printed technology could alter brain use and development. Moving now from printed text to online text, I find a similar parallel; it's interesting to think about how current paradigms in technology resemble changes that happened hundreds of years ago.

      Delete
    3. @Cassie

      Well said; it's basically impossible to prove a causal link between children's media and ADHD or other attentional disorders, because of environmental confounds or alternative possibilities ("It's government chemicals in the water!").
      In fact, some people consider this proposed causation of cartoons and ADHD to be a "tin-foil hat theory" of sorts.
      However, it remains that cognitive technologies are most certainly altering our capacities and tendencies, just as the written word led to a more academic tradition, and the internet and worldwide televisual media are likely changing our sense of self in relation to the rest of the planet.
      Whether or not our guesses at what these changes have been/will be are correct (or even likely), it would be foolish to think that there aren't changes happening.

      Delete
  16. Most, if not all, cognitive technology devices have improved our cognitive abilities. It could be efficiency (like speed of processing) or maximum level (like allowing us to do things that were not possible before), but all the effects seem to be positive. However, even without this technology, our most "basic abilities" can still be considered cognition. Maybe if we were looking for an "ingredient of cognition", we should look to decrease the efficiency and level of cognition (through lesions and the removal of cognitive technology). Because much as our current cognitive technologies are cognitive technologies, some of our current "basic abilities" (like more advanced brain areas) could have been considered cognitive technologies in the distant past, when the brain was much less developed. We can keep peeling away layers of the onion until we find the core of our cognitive abilities.

    ReplyDelete
    Replies
    1. Hello Jack, so what you suggest, in order to solve the easy question of how and why we do what we do, is to take away layers "until we find the core to our cognitive abilities"? So, in getting rid of the cognitive technology (any extension or enhancement) and aiming for what you called "basic abilities", you think that it will be somewhat easier to reverse-engineer a T3 robot and find the causal explanation? It's fascinating and, if I remember correctly, at the beginning of the course we did discuss the 13-year-old Ukrainian boy passing the available TT. Yes, we don't need a T3 with telescope vision or calculator processing speed to pass T3 and understand cognition - having a child robot pass T3 seems just as viable as any. Fundamentally, we are looking at the generic capacities of humans...

      You also brought up how our current basic cognitive functions are enhanced and can be seen as cognitive technologies relative to the past, when the brain was less developed. It is true that our brains have changed over time, and have definitely developed higher cognitive functions, but are those really cognitive technologies? Wouldn't they still count as part of the main cognitive process? It seems an interesting thought to entertain, but in looking only at the fundamental, stripped-down "core of our cognitive abilities", are you suggesting that we re-engineer a caveman T3 robot to understand cognition? And are you implying that this approach would be easier?

      Delete
  17. According to this article, cognitive tools influence our minds, but are not cognizing systems themselves.
    The question of distributed cognition is intricately entwined with the notion of living and cognitive capacity. The marker, however, is whether the system in question “feels” and is aware of its feeling (cognitive capacity) because a felt state is a mental state, and a mental state is a cognitive state.

    An inescapable issue in drawing the boundaries of the mind is that feeling is invisible (the other-minds problem). There is no way to know whether any system other than oneself is truly cognizing. One way we bypass the other-minds problem is through approximation: our assumption that a system cognizes just as we do is based on its appearance and affordances.

    Based on our mirror-neuron deductions, a system with distributed cognition (NOT distributed body) is highly improbable. The problem is that such a system would necessitate a distributed mind, but what is a “distributed mind”? Who would it belong to? New technologies, for example, that widen our knowledge (and enhance our cognitive capacity) do not act as an “extension” of our mind. They must still rely on a cognizer to use them.

    ReplyDelete
    Replies
    1. Manda, "aware of its feelings" is redundant, just as "feels its feelings" would be.

      And a felt state is a mental state. And cognitive states are felt states. But do we consider all felt states (e.g. feeling hot) "cognitive" states? Can't vegetative states be felt too, even though they are not "cognitive"?

      (But don't worry about it, because the vegetative/cognitive distinction is arbitrary, whereas the felt/unfelt one is not.)

      Turing's "approximation" is that the closest you can come to determining whether a system feels is on the basis of whether it passes T3 (or T2 or T4). That's also the closest you can come to explaining how and why it feels -- namely, nowhere at all, since all you can explain with T3 is how and why it can do what it can do...

      "Who would a 'distributed mind' belong to?" = "Who would be feeling the feeling?"

      Delete
  18. I was a bit skeptical of this paper at first because it seemed to completely disregard the interplay of the outer world and the inner mind, in its claims that the mind can only be internally contained. However, at the end the authors begin to discuss how other objects, such as cognitive technology, affect our brain development and change how we "think, learn, and communicate... reshaping our minds." I think this is a good way to reinterpret Clark and Chalmers' ideas of externalism: the mind doesn't actually extend beyond an individual, but the world outside the individual affects the cognition of their mind, and their mind and body in turn affect the world. This seems both more intuitively true and a better fit with empirical data and well-accepted conceptions of cognition and mental states.

    ReplyDelete
    Replies
    1. Right??
      It also seemed to get at the ideas of grounding that we've been discussing through the course. It seems reasonable to me that Harnad would return to grounding or the interaction of a cognizer with its environment to explain how cognitive technology reshapes brain development. Those final few paragraphs were such a relief - I was worried the topic wasn't going to be dealt with and I'd be left frustrated by an argument that had started out so promising haha.

      Delete
    2. You have a point – perhaps I should reinterpret Clark and Chalmers' paper in the way you put forth. However, in reading through their paper, I keep getting the impression that they believe the mind does extend beyond the individual; this was the main reason why I disagreed with it so much (I just don't understand how this could be possible). The way Harnad puts it in his paper, though, makes a lot more sense – yes, our environment affects how we think, and it definitely shapes some of the neural circuitry within us. But that still doesn't mean those environmental factors are now a part of my cognition. In short, I agree with you.

      Delete
  19. This change in perceived body image is indeed a change in mental state; but although its distal inputs and outputs certainly extend wider than the body (as all sensory inputs and all motor outputs do), the functional mechanism of that altered mental state is still just proximal -- skin and in – exactly as when it is induced by VR technology

    I wonder to what extent we can consider the body image to be restricted to skin and in. Indeed, anyone suffering from phantom limb syndrome will argue that even if there is no actual hand to be seen or held, it is certainly felt, and it alters the patient's mental state. Pushing this even further, if we were to put in a robotic hand to replace the amputated one, going beyond virtual reality, could this new and added extension be included in the mental states of the person?

    ReplyDelete
    Replies
    1. Josiane, a robotic replacement hand, assuming it can feel touch and grasp things in the exact same manner as a biological one, is just an extension of sensorimotor capability. The person would be able to feel the same somethings as one with a biological arm, but this feeling would be taking place in the mind rather than being an "extension of a mental state". I think it would be better to think of it as an extension of sensorimotor capacities.

      Delete
    2. Hi Josiane and Alex, actually a robotic arm has already been created for paralyzed patients, and scientists have been able to create the feeling of touch by directly stimulating the patient’s brain. Like you said Alex, by stimulating the brain, they are able to create the sense of feeling because feeling is a mental state. If you’re interested in reading more about this, here’s an article: http://www.theverge.com/2016/10/13/13269824/brain-implant-chip-feel-touch-robot-arm-paralyzed-tetraplegia

      Delete
  20. Yes! This is exactly what I needed to read following Clark and Chalmers' writing on cognitive technology. Dror and Harnad have put into words some of the misgivings I had about Active Externalism as I was reading the chapter.

    Conceiving of cognitive technology as a tool versus an integral part of oneself is much more intuitive and reasonable. Additionally, it allows us to maintain the boundaries of cognition and the self that seem crucial when considering what we've discussed in this class regarding cognition.

    Several people have noted that this is an excellent piece to end this semester on and I would agree! It feels like an application of the theories we've been considering and building over the course of the past few months.

    I found this to be particularly true when Dror and Harnad explored why cognitive states must be mental states.

    By de-coupling cognitive and mental states, we blur the line between user and tool, and cognitive states become "merely instances of functional states in general". We discussed functional states much earlier, in the context of the Turing machine working on states. If Searle had not poked at the underbelly of computationalism, I might not have had such trouble with this. After all, cognition would be implementation-independent! But instead, I'm left having to bring FEELINGS into all this.

    Cognitive technology does not have the means to pass the T3 Turing Test (we've agreed to dismiss T1 and T2). It has no way of grounding any of its cognition, and thus cannot be cognizing. (If it were grounding, it would be a T3 and would be a cognizer itself, not simply 'cognitive technology'!)

    It does feel like something to use cognitive technology - and I think that's part of why I initially found the notion of Active Externalism so appealing! When I use my phone or language or the internet, it does feel like I'm engaging with something larger than myself and, at times, more 'intelligent' than myself. But, as Dror and Harnad point out, this feeling is occurring within the bounds of my own brain - these are my impressions and cognitions - allowing me to feel as if I'm cognizing more or being aided in my cognition.

    ReplyDelete
  21. I found this reading much clearer than 11a in stating the relationship between cognition (what our brain does) and cognitive technology (what extends our performance). It emphasizes that cognitive technology can sometimes do some of what cognizers can do, but that does not make it a cognizer.

    I like the part saying that writing and speaking allow us to offload our knowledge and memory outside our own narrow bodies, rather than having to store it all internally. When we do arithmetic with big numbers, it is often easier to do it by hand than by the brain alone, because our working memory span has a limit when dealing with large numbers. Having language and math symbols allows us to visualize and work through the whole computation beyond our working memory span. I think language and tools are what arose to extend our abilities beyond our bodies. I agree that the cognizing capacities for using those tools are still inside our brain. The feeling of using a tool, the feeling of using language to communicate - those are all extensions. But a tool, even a robotic arm implant or a Google search engine, won't be able to feel, to experience a migraine headache, or to be a cognizer.

    ReplyDelete
    Replies
    1. I agree wholeheartedly, this aided me in soothing the doubts I felt, but couldn't refute, from the reading in 11a. Moreover, I also agree that tools cannot themselves feel, nor are they particularly useful without a cognizer. A tool is just a T1, something to be used by the cognizer to aid in its cognizing. Giving a calculator to someone who doesn't know how to use it is as effective as letting it be by itself.

      Delete
  22. This comment has been removed by the author.

    ReplyDelete
  23. This is a wonderfully written paper. I appreciate the straight-talk, and think it does a great job at exploring all that our definition of cognition could encompass.

    Perhaps I'm being too pragmatic, but I tend to like narrower definitions of cognition like "the causal substrate of all that we can do", or "the generating mechanism". Since we have not asserted that feeling what it is like to understand/know/think something has any real part in the causal chain of our completing a behaviour, perhaps we don't need to bring it into the definition of cognition yet - maybe it would be helpful if our definitions and theories parsed cognition from consciousness. It certainly feels like our mental states play a causal role in what we do, but we don't actually know empirically how much they do. Feeling could have a strong purpose, it could be an epiphenomenon, or it could be somewhere in between those poles. Since it is our nature as humans to be enthralled with our own mental states, perhaps we are biased into believing that our mental states are more crucial to our behaviour than they actually are. All this to say that maybe, for the time being, it could be more helpful for the definition of cognition to focus on causal substrates/mechanisms/algorithms, rather than attempting to encompass everything that we could possibly do, think or feel, let alone how far we could stretch our definition if we have our metaphysics hats on.

    I found the analogy drawn between categorizing biotic vs abiotic organisms, and cognizing vs non-cognizing organisms, a good model of how a narrower definition might be helpful. If our definition of cognition were narrower, maybe we could separate organisms based on three dimensions, instead of two.

    Separating organisms

    1) one biotic system from another
    - DNA
    2) cognizing system from another
    - one sufficiently complete nervous system from another
    3) conscious state from another
    - one migraine from another

    In the case of Siamese twins, there would be two cognizing systems blended together, but two entirely separate conscious/mental states. They have two migraines, but because they share a spinal cord they share fundamental constituents of their nervous systems. The system by which they do everything that they can do is partly shared. To some degree, they share the substrates of their performance capacity and, by the narrower definition that I'm proposing, they share components of their cognition. The fact that they do not share mental states could be considered irrelevant.

    Because of the other minds’ problem, the only way we can deem worms as non-cognizing organisms is because our mirror neurons reckon that there is nothing it feels like to be a worm. Maybe we could think about a worm’s cognition as how the worm does all its worm stuff via its worm central nervous system. If it’s all vegetative, that’s fine.

    Right now, our construct of cognition is attempting to encompass 2 and 3, while putting special emphasis on 3. Maybe our definition of cognition should be fundamentally linked to the central nervous system, and consciousness can be left out of it for now. We like to conceptualize the ability to cognize as the ability to experience a mental state; however, if we look at all of this with a pragmatic lens, this starts to look like a conflation we could do away with. Cognitive science asks how we do what we do, but we don't know if feeling has a role in how we do what we do. Just because mental states are our life doesn't mean they are how we live life.

    For obvious reasons, I'd like to believe that our mental states play a causal role in our behaviour… but what evidence do we have of this?

    ReplyDelete
    Replies

    1. Hi Lauren, I found your comment very interesting. You say that “Just because mental states are our life, doesn’t mean they are how we live life. For obvious reasons I’d like to believe the fact that our mental states play a causal role in our behaviour… but what evidence do we have of this?” If I understand your argument, you are saying that our mental state is not correlated to our biological actions? If so, this does negate any idea of conscious free will. I’ve seen two arguments of this sort, and I’m curious which side you fall on. Neither one holds much water for me, for the following reasons.

      Firstly, that we do not cause our own actions and that everything is predetermined by the interaction of particles and forces. Our history, mental state, etc. are predetermined, and thus so are our reactions to things. This is a sort of internalized behaviorism that is impossible to prove or disprove, yet I struggle with it for a very simple reason: I feel like I have freedom of choice and can do things which contradict my ‘programming’. Perhaps I was conditioned to be a contrarian; I don’t think there is a real answer to this sort of assertion, but it is impossible to prove (at present) and my experience suggests otherwise. Life very well could be an illusion and we could all be floating in Matrix-esque vats, but I have no reason to think so.

      Secondly, some have argued that our conscious actions are predetermined by unconscious neurological states, and that consciousness is just an accidental by-product of deterministic processes we do not control. Benjamin Libet’s experiment in 1983 demonstrated that predictive unconscious neurological activity precedes conscious ideation. The argument refuting this is nuanced and long, but the basic idea is this: we all have unconscious processes in the brain, but who is to say that our choices do not causally impact them? If my unconscious processes follow goals I have set, then aren’t they still my choices?

      The main problem I have with both arguments is that they assume we know enough about our own neurology (or causality) to begin refuting common-sense ideas, i.e., that I make my own choices. We don’t know how thinking and feeling come about, yet we are ready to assume that we do not control our actions? I’m sure any person who has been punished for breaking the law would love to hear this, especially those poor folks on death row. As for what evidence we have that “mental states play a causal role in our behavior”, we don’t have anything except personal anecdote, but I challenge back – what evidence is so strong that you’re willing to believe otherwise?

      Delete

  24. This article does a good job of explaining why we have little reason to consider our mind as extended beyond our own body. Most simply, if we don’t even know whether another has a mind or not, we can be even less sure about any postulation of what the geographic boundaries of the mind are. Also, if we wouldn’t consider conjoined twins (who share the same body) as having the same mind, then there is no reason to believe that two different bodies share the same mind. The migraine analogy puts this all together nicely; although the migraine may be externally caused, we can only infer that it is spatially located in the individual mind feeling that state. After all, the extended mind hypothesis is but another analogy that seems better encapsulated by the idea of offloading.

    ReplyDelete
  25. Another thought: what problems follow from accepting that our mind is bounded by our own body? I think the analogy of an ecology of minds, where thinking or feeling is done with and by others, helps us empathize so as to treat external organisms with the same respect we would attribute to ourselves. But if the analogy of the extended mind hypothesis is discounted, how then to enable a respectable amount of empathy with our shared environment? If we can’t feel with other entities, can we adequately ground feelings to our notions of them? Taking the animal cruelty discussion into consideration, could part of the reason we can accept the validity of the logic of veganism but not be vegan be that we don’t know what it feels like to be that dying animal? In watching a video of a fox being stripped of its skin alive, we may not think our mind is in fact theirs, but the analogy of being the fox, by imagining being in their place, may just be enough to stop us buying fur. Conversely, could there be danger to our empathic expressions in assuming our mind to be in part inanimate (as extended to the calculator or Google search engine)?

    ReplyDelete
  26. Overall, I agree with the idea that feeling, and therefore mental states, cannot be distributed to external cognitive technology. However, I still think there is a problem with limiting the “narrow state” to the brain, simply because we don’t experience feeling in the brain but in the body. When a person has a migraine, it is their head that hurts, not specifically their brain. This leads us to the question of what it truly means to “generate” a cognitive state: if we are not talking about where the feeling is experienced, but where it is “caused”, it is much more difficult to argue that it is not caused by external processes as well as the brain itself. Therefore, I think it is misleading to say that “the only place [cognition] is ‘distributed’ is within a single cognizer’s brain”; the entire nervous system has to be considered as a whole.

    Additionally, when asking “is there cognizing without consciousness,” how do we treat the moments of our lives during which we were performing “conscious processes” but did not feel conscious of what we were doing, for instance being in a daze due to a lack of sleep? It is difficult to say whether we were truly conscious of something at the time, since all we have are memories of our experiences— so do we simply assume we are conscious at every moment during which we are performing a cognitive process?

    Lastly, I don’t agree with Turing’s statement that was brought up in this paper: that “we have no more (or less) reasons to doubt that [a robot] has a mind than […] for doubting that other human beings have minds.” The reason we do have is that robots are made of a different material than we are, while other humans are not. For this reason, in order for a robot to have a mind, consciousness must not depend on the material from which a being is constructed (i.e., a conscious being could be T3 rather than T5).

    ReplyDelete
  27. The paper gives a pretty comprehensive response, especially to the arguments in Clark and Chalmers. In discussing extended mental states, system collectives, and distributed cognition, I understood and agree that distributed cognition is mostly about cognitive technology, which extends our bodies’ sensorimotor and cognitive performance powers in the outside world. To put it simply: if a cognitive state is a mental state, cognition cannot be distributed more widely than a head.

    Then the paper goes on to discuss living and nonliving systems and mental states. The authors say that “there can be living organisms that have no mental states and there can be nonliving systems that do have mental states.” I was a bit confused by the latter part of the sentence: how can nonliving systems have mental states? (I've also read other posts above with the same confusion.)

    Perhaps the best way to think of it is that we cannot know whether nonliving things have a mind or not, but we also cannot know that they don’t. When talking about consciousness, both in class and in the paper, being conscious means having a mind, which in turn means being able to feel. Maybe the point of the section and the quotation was merely to say that being alive does not equate with having a mind – that the two are not meant to be synonymous. What do you think?

    ReplyDelete
  28. I really enjoyed reading this article as it resolved most of my conflicts with Clark and Chalmers, while also making clear what is meant by the “extended mind” – specifically with the analogy of the migraine. The migraine serves as a proxy for a mental state (felt state), and what it means to have a “mind” at all. According to Dror and Harnad, to have a mind merely requires the capacity to “feel”. It does not necessitate performance capacity nor an understanding of the causal mechanism underlying that performance capacity – only the ability to feel what it’s like to have and execute that performance capacity. In the same way that a migraine cannot be distributed to others’ heads, cognition (or one’s felt state) cannot extend beyond the head of the feeler. Though I do appreciate, and wholly agree, that humans have become so accustomed to offloading much of their physical and cognitive burdens onto iPhones or other technologies, so much so that it has actually altered how we think and cognize – that is NOT to say, that the cognitive technologies onto which we offload acquire cognition. Cognitive technologies are mere extensions of our cognitive capacity, they are not, and cannot be, cognizers themselves.

    Ultimately, this article has settled why exactly the hard problem is insoluble: precisely because of the “other-minds problem” that creates uncertainty when we reflect on whether cognitive technologies are cognizing, on whether the people with whom we communicate are cognizing, or even on whether the superordinate system, “Gaia”, composed of all living beings, is itself cognizing. It all boils down to the “other-minds problem”, which produces uncertainty whenever we propose that any entity outside ourselves is feeling and cognizing.

    ReplyDelete
  29. Re: “We simply need to make the observation that what makes some of our capacities cognitive rather than vegetative ones is that we are conscious while we are executing them, and it feels like we are causing them to be executed – not necessarily that we are conscious of how they get executed”.
    This seems to be the main point, from both the reading and class discussion, since mental states are felt states. In order for cognition to be distributed, then it means that felt states can be distributed, which is just plainly absurd. Clark and Chalmers’ assertion about extended cognition assumes that “not every cognitive process ... is a conscious process”. This is akin to, as someone mentioned during class, the systems reply to Searle’s CRA. The problem with extended cognition is that it is the opposite of Searle’s response to the systems reply: rather than internalizing the room, extended cognition is like externalizing Searle himself. It’s just not tenable. That is just to say, as in the reading, e.g., that migraines can’t be distributed. They are either felt or unfelt by me or you. We can experience migraines at the same time, but the migraine is felt by me or by you, not as some esoteric distributed process. A distinction made in this reading, I think, explains the problem with Clark and Chalmers’ argument. They seem to assume that unconscious brain states are mental states. But what is obvious is that they are not because of feeling: mental states are felt states. When we acknowledge that extended cognition is just conflating unconscious brain states with felt states, then we can move onto the more productive task of seeing how cognitive technology can augment mental states – as opposed to constituting mental states.

    ReplyDelete
  30. Until about halfway through Dr. Harnad’s paper, I couldn’t quite understand what he meant when talking about distributed life and distributed cognition/mental states. The example with Siamese twins, however, cleared much of this up for me – but introduced a new thought. I agree with what is stated: yes, if I were to come across Siamese twins, I too would think of them as 2 distinct minds sharing the same body, as different cognizers sharing physical space much more closely than the average individual. The first image of Siamese twins in my mind was either that of twins conjoined at the hip (to a great degree – at the leg and torso too) or that of one large body with two heads. The point being that either way, though the twins share a body, there are 2 brains, 2 minds, 2 distinct cognizers. But what happens (as in the case of the conjoined twins presented in the link below) when the two share brain matter? Are there still 2 cognizers, or just 1 – 2 people in one cognizing mind, sharing thoughts and senses? I realize it’s a very specific situation, but I am still curious to see what others think of this.
    http://www.vancouversun.com/health/Through+sister+eyes+Conjoined+twins+Tatiana+Krista+were+extraordinary+from+beginning/7449226/story.html

    ReplyDelete
  31. Re: Sensorimotor Technology and Augmented Reality

    I wonder to what extent brain plasticity plays a role in these sensory experiences that we attribute to external components of our reality. As mentioned in the reading, one feels a distortion of reality when operating a crane (having the power to lift something heavy) or driving a car and squeezing through narrow passages. This reminds me of the fake hand illusion where one can become convinced that a fake hand belongs to them but then can also perceive that it was only an illusion. Is it possible that cognitive technology is the input that is then triggering a plasticity mechanism in part of the brain, altering one's cognitive state, which ultimately outputs the experience of augmented reality?

    ReplyDelete
  32. This paper does a great job of supplementing the paper by Clark and Chalmers with, in my opinion, a much more realistic approach to the function that external events and cognitive technologies have. As I mentioned in my post on The Extended Mind paper, there’s no doubt that environmental events or cognitive technologies contribute to our cognition, but the important thing to note is that this contribution does not attribute cognition to them. As Harnad clearly states in this paper, the ability to cognize is solely a conscious phenomenon, such that even if our interaction with things in our world enhances our cognition, because these things are not conscious – even if they are alive, like trees – they are still not cognizing. Furthermore, we even have performance capacities in our brain that we are not conscious of and that are thus not part of cognition. The only things that count as cognition are things we are conscious of, so even the mechanisms behind our cognitive processes that we aren’t conscious of are not considered cognition. Thus, the main conclusion here is that Clark and Chalmers’ idea that the mind is extended to external events that are also performing cognitive operations cannot be possible, as these external entities are not conscious and therefore cannot cognize. However, it is still very important to know that these external events strongly advance and contribute to our cognition.

    ReplyDelete
  33. “A system is autonomous if it can do what it does “on its own.” It’s just that systems differ in what they can do on their own. A toaster is an autonomous system that can only toast bread -- and that, only if a person plugs it in, puts in the bread, presses the switch. A person is an autonomous system that can (among other things) plug in a toaster, put in bread, and press the switch. And so it goes. Both autonomy and functional capacity look modular, and superordinate autonomous systems may include the distributed modularity of many component autonomous systems. “

    This article includes a discussion about where we draw the line – what is the highest level of superordinate system – for cognition. Similar to my comment in 11a, I think these superordinate distributed systems that include the notebook and tools as part of cognition aren’t actually necessary to explain cognition, so adding them into the recipe for cognition seems redundant. For example, including the notebook or a computer as part of distributed cognition does not help to explain the mechanism underlying mental states, i.e., the easy problem.

    ReplyDelete
    Replies
    1. I think sometimes looking a step ahead can help us understand the current step we're on. It opens up new perspectives on understanding consciousness if we look at superordinate distributed systems. They aren't necessary, but they're interesting and they force a more big-picture understanding of consciousness. Just like how we study cell biology but also psychology, this is just the next level in that progression.

      Delete
  34. I don't yet find the distinction of what could constitute an autonomous mind quite clear. I find the example of the Siamese twins really interesting. It is argued that Siamese twins are two separate beings because they have two separate minds, even if their bodies are completely attached (even sharing the same head). However, what does this mean for split-brain patients? Can it be argued that this is an example of two separate minds? I don't think so, and I believe this highlights a weakness in the absolute claim that autonomous minds are separate beings.

    ReplyDelete
  35. I really enjoyed this final reading, I think it was a great way to close the course and brings up a lot of the themes we've already examined. In the climate of current AI tech, I think the problem or discussion of extended cognition is incredibly interesting. I think that this quote really encompassed this issue for me:

    "In particular, can external cognitive technology serve as a functional part of our cognitive states, rather than just serving as input to and output from them?"

    There's no doubt that cognitive technology has extended what we are able to do. The range of inputs available for us to sense and perceive, as well as the range of outputs we are able to communicate and transmit has been greatly altered, and yet their functional role is what we must consider.

    This month's National Geographic featured an article on how humans have been shaping their own evolution using technology (here's the link, it's a fun read you should check it out: http://www.nationalgeographic.com/magazine/2017/04/evolution-genetics-medicine-brain-technology-cyborg/). The author begins by discussing a man who has supplemented his colourblindness with a fiber-optic sensor that converts light frequencies into vibrations at the back of his head. He claims he is able to sense colour and can now correctly identify things that are blue, but he can also sense UV light. The range of things he is able to detect has increased as a result of this cognitive technology, but can we say that the antenna is an extension of his mind? Is the creation of these new cognitive states more than new input to the brain, or does the technology act as a functional part of the cognitive state itself? I think that this type of technology really works into the problem because of how linked it is to the man's ability to perceive (cognize) colour. I argue that how the vibrations are used, understood, and categorized are still operations of the brain itself, and that the technology has still only provided more input to the system, and thus cannot be considered cognition.

    ReplyDelete
  36. This article really put everything from Clark and Chalmers in place. Sensorimotor technology, like driving a car, doesn’t directly give us the feeling, and I think this is an important distinction to make. It affects certain parts of our body, but we would not be able to feel it without our cognition. If the car were distributed cognition, it would have the ability to affect our felt states directly, but it doesn’t. It affects certain of our states, which then lead to felt states through our cognition.

    In addition, I agree with the statement that there can be living organisms with no mental states, but can there truly be nonliving systems that do have mental states? Would living not be a basic criterion for having cognition and mental states? Living and feeling might not necessarily be the same thing, but is living not a part of feeling? Furthermore, do we need to be conscious to be mental, and to feel? I feel like the answer is: not always. We have many bodily functions that operate consciously and unconsciously at the same time. Our unconscious processes affect our feelings and cognitive states all the time. When I think of the unconscious, it’s not only vegetative states like breathing. For example, consider the implicit attitudes test. It feels like something when we look at whatever the test is measuring, but we do not always know why. Our explicit attitudes are not always the same as our implicit ones. So that feeling comes from unconscious feelings and attitudes. Would that still not be considered cognitive, mental, and felt? It is still not the same as a toaster. Unconscious processes and states are still a part of cognition; they are just another side of it.

    ReplyDelete
  37. 11b. Is it such a stretch from the spatially continuous and tightly coupled causal interactions of the amoebae that constitute a slime mold to the only somewhat more spatially disjoint and less tightly coupled causal interactions of the ants that constitute a colony?

    I find this notion very interesting. In essence, humans are just a massive colony of otherwise unfeeling organisms (cells), yet somehow the structure comes together to produce some sort of feeling. The big question is when this feeling arises: at what complexity and structure of individual cells does one begin to feel? In addition, could this feeling come in degrees? We see ourselves as the ultimate “feelers”, capable of perceiving a multitude of stimuli. Consider a snail. It has a nervous system capable of sensing touch. If we were somehow able to experience the feelings a snail does and nothing else, would its feeling of pain be the same as ours (obviously this is impossible)? When does the very first and most basic form of feeling arise in organisms, and does this constitute a mind?

    ReplyDelete
  38. This paper did a great job of making clear the distinction between cognizing or feeling and the things that give you the ability to do that. Clark & Chalmers argued that a notebook and a person who heavily relies on the notebook should make up one cognitive/thinking system. Harnad et al. argue that cognition is merely feeling, and that tools such as telescopes, Google, books, and social interaction are extensions of our sensorimotor capacity, which can in turn affect our feelings. Language is a cognitive technology, and it allowed us to offload some of the burden of having to directly ground and learn things by trial and error. An example of this is the mushroom-picker simulation, where pickers that ‘stole’ information (used others’ trial and error to figure out which mushrooms were good or not) had higher success/survival than those that did the grunt work themselves. Language is not an extension of cognition or feeling, because a collection of people communicating do not all simultaneously ‘feel a headache’. Language allows us to offload information (reduction of uncertainty) and memories. Google is much the same way: its functions can affect our feelings, but they are not our feelings. Feelings seem to remain within our brains, yet our sensorimotor capacities can be extended.

    ReplyDelete
  39. Dror and Harnad propose this question before delving into the explanation of consciousness and how it relates to extended cognition and cognitive technology:

    “Does the fact that cognizing is a conscious mental state, yet we are unconscious of its underlying functional mechanism, mean that the underlying functional mechanism could include Google, Wikipedia, software agents and other human cognizers’ heads after all? That question is left open for the reader.”

    The authors disproved the extended mind theory using the extended migraine analogy, and this helped clarify why the plausibility of the extended mind seems very unlikely. The extended mind theory does not explain how things (the technology) outside our minds engage in cognition (other than extending our cognitive capacity, similar to sensory technology, e.g., telescopes that extend sensory capacity). I now understand that being able to feel is what is essential for cognition. This is not to say that it is only cognition if there are felt feelings, because there are internal states that are not felt. A system that is capable of cognizing should be capable of feeling and passing T3. For example, the actual mechanisms in the brain that are involved in retrieving the name of your 3rd grade teacher are not felt, despite there being a feeling that we ARE conscious of this retrieval. This is just something that technologies like Google or Wikipedia cannot do, and therefore they cannot be considered an extension of our cognition.

    ReplyDelete
  40. RE: "There is no such thing as a distributed migraine – or, rather, a migraine cannot be distributed more widely than one head. And as migraines go, so goes cognizing too -- and with it cognition: Cognition cannot be distributed more widely than a head -- not if a cognitive state is a mental state."


    I feel as though this statement represents cognition as something too static, whereas our mental states are too elusive and dynamic to be described like this. I agree that cognitive technology is not cognition, but it certainly influences mental states. When someone tells a story, you create a mental picture of it; you internalize the external and create cognitive thought. The way in which the story is told has a DIRECT influence on what mental state you will be in next. Cognitive technology actively changes your cognitive state. That is not to say they are the same, but they cannot exist without each other.

    ReplyDelete
    Replies
    1. Cognition is not static. It is interactive, it learns, and it offloads. I don't think the article is trying to say that we are not dynamical systems. However, I feel like you might still be confusing our cognitive states with the environment they interact with. Just because they affect each other does not mean they are related on any level. We have referred to ourselves as dynamical systems from the beginning, but never really considered cognition to be outside of us until Clark and Chalmers. Also, what do you mean by saying that cognitive technology and cognitive states cannot exist without each other? We, and thus our cognition and mental states, existed without cognitive technology for a long time. We created it, and with that came the dependency. However, we could have survived without it, and still could if we wanted to.

      Delete
  41. RE: "Gaia"

    This reaffirms my curiosity with the idea of extended consciousness that I had in my comment on the previous reading. Where does our mind stop if it's extended to the exterior environment? Is your mind part of mine?

    Physiologically I think it's interesting that everything is composed of smaller parts. Cells make tissues make organs make systems make organisms (make Gaia?). Could the hard problem of cognitive science be so hard because it's the result of the mounting complexity of all of these living systems? I think distributed consciousness can't be understood until we can fully get a grip on consciousness on the personal level.

    Anyway, could the idea of "hive mind" in social psychology be explained by distributed consciousness? Mob mentalities are very well documented and have a significant effect on individual people and this article makes me wonder if that would be an easy way to start to understand this theory.

    ReplyDelete
  42. I liked this paper much more than Chalmers's, as I feel it more adequately encompasses the relationship our minds have with the context and tools that surround us. The example of a migraine is a potent analogy and really makes the relationship between having a mind and feeling clear. Chalmers seems to think that this is a bidirectional relationship with equal pull either way, when this paper makes it clearer that there exists a hierarchy. The extensions of our 'minds', like cars and iPhones and notebooks, allow us to feel and engage parts of ourselves in their use, but without a mind to work with, we would be unable to make anything meaningful out of that relationship.

    ReplyDelete
  43. Fantastic paper on the significance of cognitive technology and what it can do for our cognitive resources. Ultimately, only an entity who possesses cognition can rightfully be called a cognizer. As cognizers we have the performance capacity to think, understand, and know about our internal states and what we perceive from our environments. With the advent of cognitive technology, we have expanded our abilities to do such thinking, and we have developed language as a powerful tool to access the minds of other cognizers. This is not to be mistaken for our ability to think; the cognitive tools that make our abilities to cognize more efficient cannot actually think or understand on their own. They exist as tools to serve our own intentions. It is still our minds as cognizers that do all the work, since the cognitive technologies we utilize do not have minds of their own. What is worth noting is that cognitive technology makes our lives easier by offloading our brain functions onto tools. In much the same way as a telescope can help us perceive and visualize objects at distances unseen by the naked eye, a cognitive technology such as the internet helps us communicate with one another in a space readily accessible by multiple cognizers. Books can even be a form of cognitive technology. For instance, William Shakespeare wrote poems in the 16th century, yet we can still access his ideas over 400 years later. We are able to retrieve cognitive states stored hundreds of years ago. It will be interesting to see what other forms of cognitive technology will be developed by us cognizers. Mental efficiency gained through cognitive offloading truly has a wide range of benefits. To what extent can we utilize both sensorimotor technology and cognitive technology to improve our performance capacities? What other feelings can we discover from using tools that exist outside of our brains and bodies?

    ReplyDelete
  44. I think it is interesting to consider how cognitive technology can affect how individuals see themselves, and how they go about transforming themselves or achieving their goals. Cognitive technologies such as the web allow us to interact and offload our cognitive functions onto other cognizers without necessarily needing to be in physical contact with them. While this may aid communication between cognizers in a way, it also increases opportunities for comparison between cognizers. We now have access to more information about the thought processes and lives of other individuals. It is interesting to think about how this could have either a positive or negative impact on people’s cognitive functions. It also makes me wonder whether our extended ability to cognize through the use of cognitive technology might actually be straining, or eventually end up straining, our own individual systems.

    ReplyDelete
  45. Reading this article solidified my thoughts stemming from the last article. Yes, it is clear that cognitive technology modifies the way we cognize. It allows us to do far more than we would be able to do with just our minds, and changes our self-concept. However, as in the telescope example of changing sensorimotor capacity, I think cognitive technology is just cognitive input, not extended cognition in itself. Even language, which has vastly enhanced our cognitive capacities, does not cognize in itself – it does, however, vastly increase the amount of input we are exposed to and the amount of output we can create. While language is information input, the cognizing still happens in our heads.

    One thing I thought of while reading the section on how cognitive technology changes our body image is phantom limb. People with phantom limbs have a self-concept that extends beyond the boundaries of their skin. If self-concept is a mental state, what does this say about the boundaries of our minds?

    ReplyDelete
  46. The extended mind problem is not really a problem. It tries to further complicate the hard problem without actually dealing with the hard problem because tools used to assist us don't have the feeling component of cognition. (This short Skywriting was written once I discovered there was no minimum word count for Skywriting)

  47. (1) The notion of an "extended mind" -- with mental states (i.e., felt states) “distributed” beyond the narrow bounds of the individual brain – is not only as improbable as the notion that the US government can have a distributed migraine headache, but arbitrary.

    I enjoyed this article and agreed with the points it made, especially the above statement. The passage makes an interesting point in saying “mental state (i.e., felt states).” Let’s consider the example of Otto with his guidebook. I can accept that the guidebook helps him cognize, and whether you consider this external thing that helps him cognize a part of his “cognition” depends on an arbitrary definition of what counts as part of one’s “cognition.” However, if it’s claimed that this is part of a mental state and therefore a felt state, that seems more difficult to accept. If it is part of a felt state, is it therefore feeling? If the book itself is not feeling, either on its own or in conjunction with the rest of your mind, what is it doing? How is it part of a felt state? Does the other-minds problem play into this at all? Just as you wouldn’t be able to know with certainty that the book isn’t feeling on its own, how could you know whether the book is part of your felt state?

  48. This was a well-articulated response to the frustration I felt with Clark & Chalmers’s paper in 11A. External features can help extend our cognitive capacity, but those external features do not feel, and they cannot cognise. There is a distinction between cognisers and cognitive tools – if we could, sometime in the future, plug in an extra short-term memory module to help us cognise, we would not argue that that module is itself cognising, or feeling. The way of thinking about it that helps me make this distinction is a hardware/software distinction – while cognition is implementation-independent, extra modules can extend its capacity. If we thought of cognition as the software and our brains or any extra modules as the hardware, maybe we could rid ourselves of arguments that use extra memory modules or hydrocephalus as evidence for conflating cognition with its implementation.

  49. I found that this article provided clarity on the idea of an “extended mind” or “distributed cognition”. I agree with Harnad that cognitive technology is better conceptualized as a tool that extends the power and scope of our cognition (i.e., it facilitates cognition) but that, at the end of the day, it doesn’t extend our “mind” beyond the boundary of our individual mental/cognitive states. Thinking of a group of people or a library as distributed cognition is a rather arbitrary and unhelpful notion. In my mind, cognitive technology provides information input to the brain. The brain can leverage this information as long as it is provided as some form of sensorimotor input that can be meaningfully interpreted by the cognizer. Thus I think we should think of cognitive technology in terms of sources of information (information being the reduction of uncertainty) within our environment.

    I also found the idea of offloading, for example through language, to be an interesting concept. It made me think about the potential of evolutionary “offloading” by encoding behaviours or traits. In theory, this can spare individual organisms hardship and facilitate survival. For instance, instead of learning from the experience of being poisoned by a snake (or watching a member of your group be poisoned by a snake), evolutionary psychologists posit that we have evolved an innate fear of snakes. This made me wonder whether the purpose of emotions and feelings could be to facilitate offloading in a way. In the case of a fear of snakes, it is the emotion of “fear” that is encoded and passed down, even in the absence of explicit knowledge that snakes are venomous. I believe evolutionary psychologists would posit that all emotions are examples of such evolutionary offloading. For example, a child’s emotional attachment to their mother could be an adaptive “feeling,” as could “hunger” or “loneliness.” Maybe these feelings provide shortcuts so that we can act to fulfill our basic needs in the absence of explicit knowledge or the time to cognitively process our situation. Thinking “I’m hungry” is simple and intuitive and does not require a complex calculation about the last time you ate, how much you ate, and when you next need to eat (even though we could similarly base our eating patterns on such cognitive calculations). However, as with many evolutionary explanations, I think it is difficult to draw definitive conclusions about such hypotheses, particularly as we often lack negative evidence (i.e., the absence of feeling) by which to better understand feeling.

  50. I found this paper much clearer than the last one. It helped explain that things from our environment can extend our cognitive capacity, but they do not themselves feel. We are able to use external objects to clear up “space” in our minds without losing that information, but that does not mean the object we used to offload our own cognition has a mind of its own. For example, if you’re using a voice recorder to offload some cognition, that tape does not have a mind. But going back to that tape recorder and re-listening to it causes you to think about whatever you’re listening to, and it might evoke emotions or thoughts that weren’t previously associated with that recording. Does this mean anything? I know that the tape recorder isn’t cognizing or feeling, but you might be feeling differently than you were at the time the recording was made. Would it almost feel like you are talking to someone else, aside from the fact that you won’t get a response? I guess this is pretty testable, and it also depends on the content of the tape.

  51. Several parts of the comparison between sensorimotor extensions (such as using a car or a VR experience) and cognitive extensions (such as offloading the cognitive process of name search to a phone book) captured my attention. These extensions can act as causal factors that activate the cognitive mechanisms in our brain in certain ways. The extensions provide inputs to the processes that are responsible for generating feelings (we don’t know anything about these processes, and this is the hard problem). Hence, a car, which changes our body schema or the way we feel about moving down a street, is providing inputs to the cognitive mechanism confined within the limits of our skin. Those mechanisms generate the feelings. Therefore, I think we can use the expressions “cognitive extension” or “sensory extension” in this sense, but not “distributed cognition” or “distributed sensation”.

    When it comes to the term “extended mind” we get into more complications. We can say with certainty that we define mental states to be felt states, but I don’t think the text offered any convincing evidence for the claim that cognition only happens if there are felt states. From what I understood, mental states cannot occur without the underlying cognitive capacities of an organism, but we do not have enough evidence to say that cognition cannot happen without mental states.

    I also think there is another line of argument that could use Searle’s CRA to rule out the extended mind argument. I tried to share this in class but I don’t think I was very clear about it. Let’s suppose that I, as a cognizer who has felt states, am using a computer to perform a cognitive task.

    Searle showed with the CRA that the computer is incapable of feeling the mental state that accompanies the task. In this system, there are only two components: me and the computer. Since the computer is not feeling anything, whatever the system as a whole feels is what I feel. Therefore, the felt mind is my mind, and there is no such thing as an extended mind in this case. The same is true for using any other object as a tool for offloading cognitive processes.

    We can start to talk about the union of the cognitive processes of several different cognizers, each capable of feeling mental states; in that case, I think the arguments about the arbitrariness of defining the units of living organisms would be useful examples to consider in conversations about joint cognition.

  52. "Whether we want to include in a cognitive state everything that can potentially enter into anyone’s cognizing or only what actually enters into someone’s cognizing, either way, on this extended view, cognition is looking exceedingly wide."

    I agree that if we consider all the things that can be cognized, cognition looks pretty wide and extends beyond what lies between the ears. I am curious, though, how “wide” this can go. Is it only “wide” when it is necessary? Say I never learned the multiplication table because I always use a calculator. Every time I want to do a multiplication, I don’t even bother to calculate it “in my head”; I just punch in the numbers and get my answer. As I obtain an answer, the calculator is being cognized by me. But when I am not using it, is it still “part of my cognition”? The calculator itself does not have a “mental state”, but it is certainly augmenting my mental state, though only when I am using it.

    "The migraine is just a stand-in, here, for our intuitions about what it is to have a mind at all. To have a mind is to be in a mental state, and a mental state is simply a felt state: To have mind is to feel something – to feel anything at all (e.g., a migraine)."

    Regarding the calculator example, as mentioned, the calculator does not have a mental (or felt) state because it doesn’t have a mind. As the paper notes: “Note that what is essential for having a mind is not having the performance capacity itself—nor is it essential to have an understanding of the causal mechanism of that performance capacity”. Since my mental state cannot “combine” with the mental state of a calculator (because the latter does not exist) into one felt system, this means we can’t be a single cognitive system. In other words, I am merely using a tool to augment my cognition, but the calculator cannot become part of my cognition.

    "Is language itself a form of distributed cognition? How does the knowledge in other people’s heads, conveyed to us auditorily, differ from the knowledge in books, conveyed to us visually? Both allow us to access information without needing to gather it the hard way, through our own direct, time-consuming, risky and uncertain sensorimotor experience. Writing and speaking also allow us to offload our knowledge and memory outside our own narrow bodies, rather than having to store it all internally."

    Let’s say that instead of always using a calculator, I learn the multiplication table from a teacher who instructs me, in person, with language. Since we assume that this teacher has mental states just as I do, when they distribute the information about the multiplication table to me, have we become one cognitive system? I would argue no, because we each have a separate mind of our own. Intuitively, I wouldn’t say that my teacher’s cognition has become part of my cognition. Since my teacher would know when they are having a migraine just as I would know when I am having one, one could argue that we have become one “system”. But I doubt that this system itself is cognizing. I am cognizing and my teacher is cognizing, but a shared mental state does not exist. Maybe we would share a mental state if the teacher got a migraine whenever I got a migraine, or, even better, if we could share a migraine. Since this is highly unlikely, our combined “system” is just made up of individual cognizers who each have their own independent mental states.

    I do think that this last quote really demonstrates the immense power of language. As university students, I’m sure we can all agree that it explains why we are here! We have instructors who can transfer information to us auditorily and visually. Each university class is composed of a “system of cognizers,” but the individual cognizers have their independent mental states. Language has the power of allowing us to understand what we learn. Moreover, language has the power of offloading: information can be stored outside of our own heads through language, and that seems quite powerful to me.
