Saturday 2 January 2016

(5. Comment Overflow) (50+)

39 comments:

  1. I agree wholeheartedly with the ideas behind the symbol grounding problem. In the (2003) paper, Harnad mentions Frege and his works in the philosophy of language. Frege was a big influence on Bertrand Russell, whose writings on language I was a big fan of.

    Of course we, as people, need to give symbols their meaning. We assign referents to these formal symbol systems (either static or dynamic). I do not believe machines or programs are able to give symbol systems such grounding.

    Replies
    1. We don't "assign" referents to words; we assign words to referents. But how do we recognize the referents in the first place? If not via computation, how?

  2. In the article on the symbol grounding problem, issues regarding “grounding” were raised. It is argued that a symbol system can be grounded when it has the capacity to interact with the world and to have “meaning”. As such, numbers in a mathematical equation or the letters of a word do not really refer to or interact with objects in the world, and are thus not grounded. However, I feel this argument fails to consider languages like ancient hieroglyphs or Chinese characters, where each character in fact carries a meaning referring to things in the world, and the characters combine to give rise to further meanings.

    Replies
    1. The problem of how spoken symbols connect to their referents is exactly the same as the problem of how written characters connect to their referents. And the problem is the symbol grounding problem.

      Chinese characters, like the signs in sign language, started out with some "iconic" similarity to their referents. But in both cases the similarity became much less important, because the shape of linguistic symbols is arbitrary. However, it is true that because of the remaining iconicity of Chinese characters, and because words are very re-combinatory in Chinese, it is more often possible for a Chinese speaker to guess the meaning of a brand new word that they have never seen before than for a speaker of an alphabetic language. (That's probably also true in sign language.) But it's no more mysterious than the fact that English speakers can understand the brand new word "double-meowing" because they already know what "double" means and because "meow" sounds like meowing, and because the new word is a combination of old words.

    2. Thank you so much for your reply, professor! I'm just wondering if ancient languages like hieroglyphs, or the ancestors of languages like Chinese, are more "grounded" since they look more like their referents? Or is it still just adding up symbols that show similarity to their referents?

    3. No, iconic symbols are not more grounded than arbitrary ones; they just might be learned faster in some cases, because of the similarity (and if the category is already learned). But it does not help if the category is hard, unless the icon is an icon of the invariant feature. But then it's a bit more like verbal learning, where the instructions explicitly tell you the invariant features.

      Apart from that, iconicity is irrelevant to (linguistic) meaning.

    4. I see, so iconic symbols are just like arbitrary ones, even if they look more like the “real thing”; but, for example, each vertical or horizontal line we “draw” in Chinese is just like an English letter, which is not grounded.

  3. “A computational theory is a theory at the software level; it is essentially a computer program. And software is "implementation-independent."”

    I understand that software is implementation-independent. But is the computational theory of the mind stating that human consciousness is not restricted to the physical body of human beings? I've mentioned this in a previous skywriting, but on a computationalist approach, consciousness would be no different from the execution of HTML, whereas a formal system would be that of XML.

    I would even suggest calling symbol grounding "human rendering." Without human rendering, even the information stored in our brain cannot be meaningful. I would like to point to IBM's research on "meaningful data." Even with Big Data, it still requires humans to make meaningful connections. It's the context and reference in which all things take on significance. For example, IBM has been trying to understand what makes addresses take on meaning. This was done in order to run analyses without using real data that are restricted by confidentiality agreements. IBM researchers asked: how do we know that an address is real without it being really real? Or rather, how is the address interpreted as being a possible address? They still have not addressed these questions, and they may never be able to until neuroscience and cognitive science understand what the brain is doing to generate meaning.
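
    For instance, the "possible address" question can be made concrete with a toy sketch (my own illustration, not IBM's actual method; the pattern and sample strings below are invented). A program can flag a string as a possible address purely from the shape of its symbols, and nothing in the program connects the string to any place in the world:

      import re

      # Accept anything shaped like "number street, city, two-letter state, 5-digit code".
      # Passing this check involves no understanding of streets or places, only pattern matching.
      ADDRESS_SHAPE = re.compile(
          r"^\d{1,5}\s+\w[\w\s.'-]*,\s*[A-Za-z .'-]+,\s*[A-Z]{2}\s+\d{5}$"
      )

      def looks_like_address(text: str) -> bool:
          """True if the string matches the surface pattern of a US-style address."""
          return bool(ADDRESS_SHAPE.match(text))

      print(looks_like_address("742 Evergreen Terrace, Springfield, IL 62704"))  # True
      print(looks_like_address("meaning lives in the interpreter"))              # False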

    Replies
    1. Consciousness is feeling. A computationalist would hold that a felt state is a computational state. Searle shows this is untrue for the felt state of understanding Chinese.

      The words on a piece of paper and the symbols in a computer program may have meaning in the heads of their interpreters, but cognitive science is trying to explain what is going on in the heads of those interpreters.

  4. What I don’t understand about discussing whether sensorimotor grounding is sufficient for meaning is whether we will ever even be able to know whether it is sufficient or not, or what the test for that would be. If a robot is built with sensorimotor capacities and can pass T3, does that imply sufficiency? Or, as I am more inclined to believe, will we simply never know the answer to whether sensorimotor grounding is sufficient for meaning because of the other-minds problem? But have we really explained cognition without including consciousness or knowing whether consciousness has been achieved? Can we really treat them as two entities, when in our minds they are automatically intertwined?

    Replies
    1. Julie, let me introduce you to Dominique (our T3), and you can take it from there. (And that's part of Turing's point.)

  5. From socphilinfo.org commentary on a second approach to dissolving the Symbol Grounding Problem, “A second approach is to abandon our assumption that there is a direct relation between symbols and referents. An alternative approach might be found in research in embodied cognition, which takes cognition to be highly dependent on the physical capacities and actions of an agent. Specifically, we can view meaning as a way of coordinating action to achieve certain goals.”

    Is the last sentence implying that meaning could simply consist of an agent’s actions based on some input and desired goal? Wouldn’t this be akin to behaviorism in that consciousness is only due to a specific set of learned behaviors in response to a certain stimulus in the environment, or motivational state?

    Replies
    1. Alex, I'm afraid that (apart from the specific case of Gibsonian affordances) all that "embodiment" stuff is just hand-waving. Of course you need a body to be a T3 in the world, and to ground the categories that are based on sensorimotor affordances (object shape and body shape) rather than just object shape. That's all there is to it. The rest of the body-talk is just rhapsodizing without providing a model (i.e., without reverse-engineering the magical powers of the body!).

    2. Nagel's 1974 article "What is it like to be a bat?" is a really poignant illustration of the principle of embodiment in philosophical terms. In it, he argues that we as humans can't possibly understand the experience of bats since they have such different bodies and sensorimotor capacities from us. This can be extended to the limits of artificial intelligence; in this view, it is impossible for a robot to share human experience or meaning. I understand the basis of this principle but it makes much too strong a claim.

  6. Reading Steels' paper answered all my questions, including those in my earlier posted comment. At first, I was wondering why robots can't solve the symbol grounding problem: if the Chinese Room debate accepted that semantic processing can be done by computational systems, and there are artificial systems that are able to retrieve features and map objects to descriptions, then why not? Steels' paper made me realize why robots were able to do that: the semantics that enable artificial systems to autonomously acquire grounded concepts come from humans. Instead of humans supplying the semantics, an autonomous system is required for robots to ground symbols in their meanings. This part of Steels' paper also made it clearer why Searle is correct that the person in the Chinese Room will never understand Chinese.

  7. Following up on the lecture discussion on deictic words, here I’m trying to square this with the symbol grounding problem. While we mentioned that language allows us to acquire new categories without going through sensorimotor experience, how exactly does the transition from icons to categories explain deictic words? It was mentioned that we can’t point to a deictic word but its meaning is contained in a statement such as “here is a chair”. Whereas other abstract categories (e.g. justice) can be approximated with combinations of grounded categories, I’m not sure how combinations of categories can explain deictic words. For something like justice, it is approximated with other categories that have been grounded; we have corresponding icons for these ‘base’ categories and justice inherits their grounding. The problem, in my view, is that while justice and deictic words are both abstract, deictic words seem to be grounded everywhere – which makes it not grounded at all. “Now” is everything that is right now; “you” is everyone apart from “me”. Although situations give context to deictic words, situations don’t seem to explain how deictic words are grounded. Any given situation can only contextualize a deictic word after a deictic word has been grounded. A grounded deictic category should be able to have its ‘base’ categories traced and ultimately, their icons. I don’t see where situations come into this pathway. So I am a bit at a loss as to how deictic words are grounded.

  8. I found myself more confused about the topic of “grounding vs. meaning” following the lecture. To know the “meaning” of something requires all of the following: a symbol and its referent, a sense (with which to connect the symbol to the referent), and the “feeling” that comes with knowing or understanding the meaning of something. I’m confused about how you would define this “feeling” and, moreover, why having this “feeling”, which I assume is undefinable, is even relevant to “meaning”? The obvious response to this question is that this “feeling” is the central basis of Searle’s CRA, in that Searle has this “feeling” when he speaks English, yet lacks it when he is mechanically receiving inputs, re-arranging symbols and producing an output in Chinese. I think it was said in class that grounding would include the first two requirements, while the presence of the “feeling” is what separates meaning from grounding. Please let me know if this is correct.
    If this is correct, I’m lost as to how this fits into cognitive science, since I thought the goal of cog-sci was to reverse engineer cognition. If this “feeling” is what distinguishes grounding from meaning, then why is this distinction necessary if the question of “feeling” falls into the scope of the hard problem which we have concluded is insoluble? From my understanding, the distinction between meaning and grounding (according to this definition) seems irrelevant, and we should focus more on “grounding” rather than “meaning”; or perhaps this definition is just insufficient. Any response would be greatly appreciated, since I fear I might have misunderstood the distinction.

    Replies
    1. I’m not a hundred percent sure about the feeling problem either. I guess it gives rise to the other-minds problem and to dualism. Perhaps there’s a biological property of feeling right there, but we haven’t found a good way to explain it.

  9. What does the symbol grounding problem actually solve? From what I understand, the problem asks what the symbols in our heads refer to in the real world, and therefore calls into question the meaning of these symbols. In terms of understanding cognition as the result of certain causes, I don't get how the symbol grounding problem accounts for innate capacities of cognition (for example, language acquisition according to the poverty-of-the-stimulus argument). I am not questioning the role of the symbol grounding problem in the claim that cognition is not just computation, but I am questioning whether or not there needs to be symbol grounding for a T3 to perform like a human. Since we are born with innate capacities, if a device was programmed with the same innate capacities, would it be able to pick out the referents for the symbols inside its program? I am still confused about how symbol grounding is a prerequisite for a T3; shouldn't it already be included in T2? Also, we are born cognizing agents, so symbol grounding should be a problem for the culture surrounding the agent rather than a problem for the agent.

    Replies
    1. Symbol grounding is not included in T2 because in T2 the candidate is merely manipulating symbols to generate an output, like what Searle did in the Chinese Room. However, nothing is grounded, because if you have a symbol system without a semantic interpretation, it doesn’t mean anything.
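
      Just as a sketch (the "rules" below are invented, not from the readings), this is the kind of pure shape-to-shape manipulation a T2 candidate could in principle rely on; the outputs can be perfectly appropriate without any symbol ever being connected to its referent:

        # Toy rule book in the spirit of Searle's Chinese Room: input shapes are
        # mapped to output shapes. Nothing here links any squiggle to the world.
        RULE_BOOK = {
            "你好吗": "我很好",           # "how are you" -> "I'm fine", but only we know that
            "你叫什么名字": "我叫小明",    # "what's your name" -> "my name is Xiaoming"
        }

        def t2_reply(squiggles: str) -> str:
            """Return the output shape paired with the input shape, or a stock reply."""
            return RULE_BOOK.get(squiggles, "对不起，请再说一遍")

        print(t2_reply("你好吗"))  # prints 我很好, with no understanding anywhere in the program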

  10. "Meaning is grounded in the robotic capacity to detect, categorize, identify, and act upon the things that words and sentences refer to"
    - Wiki Article

    Is this to say that when a T2 robot speaks solely about abstract and non-material things, there is no meaning? These subjects do not refer to anything that can be detected, categorized, identified, or acted upon by a sensorimotor capacity. For example, a T2 having a conversation regarding multi-dimensional mathematics would not need to point to anything in the real world.

    Replies
    1. As an addendum:
      How do symbols such as Chinese characters and tally marks fit into the symbol grounding problem? Their shape is not entirely arbitrary and can convey meaning in itself. I cannot see how these would not be symbol systems, despite these systems being defined by containing symbols whose shape is meaningless.

    2. Obviously I'm writing this a little later on in the course, after we've talked about the symbol grounding problem and categorization in a bit more depth, but I thought I’d take a stab at this!

      Firstly, from what I’ve understood from class discussions, a T2 robot (i.e. an email-capable robot without sensorimotor capacities) basically just manipulates symbols, kind of like Searle in the Chinese room. There’s no real knowing what a symbol means, let alone a feeling of knowing what something means.

      But for a T3 robot like Dominique, once sensorimotor stimuli are grounded through seeing, recognizing, manipulating, naming, and describing them (as described in the 6a reading), a T3 can abstract away from those through categorization. If a T3 wanted to learn about multidimensional math, she could acquire facts through hearsay.
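
      A minimal sketch of that hearsay route (my own toy example; the detectors below are stand-ins for categories the T3 would already have grounded through direct sensorimotor learning): a brand-new category can be defined entirely out of already-grounded ones and inherit their grounding.

        # Stand-ins for grounded sensorimotor category detectors the T3 already has.
        def looks_like_horse(thing: dict) -> bool:
            return thing.get("shape") == "horse"

        def is_striped(thing: dict) -> bool:
            return thing.get("pattern") == "stripes"

        # New category acquired by hearsay: defined from grounded categories,
        # so it inherits their grounding without new sensorimotor learning.
        def is_zebra(thing: dict) -> bool:
            return looks_like_horse(thing) and is_striped(thing)

        print(is_zebra({"shape": "horse", "pattern": "stripes"}))  # True
        print(is_zebra({"shape": "horse", "pattern": "plain"}))    # False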

  11. Harnad (2003) defines a symbol as an arbitrarily shaped object of a symbol system, which can be systematically manipulated and semantically interpreted. From this perspective, the symbol alone is not inherently connected to the object it represents. Instead, the grounding of symbols relies on the “mediation of brains.” Harnad describes language as an example of a formal symbol system, but this seems to be a point of contention in the literature. Belpaeme & Cowley (2007) argue that language cannot be formally represented and that the SGP should therefore be expanded to include the language acquisition process. Similarly, Vogt (2015) relies on a semiotic definition of “symbol” to emphasize the extent to which the development of meaning is entwined with the development of lexicon, and how this relationship needs to be incorporated into the SGP. However, I don’t see how symbol grounding with reference to language poses an issue when we are focusing on a T3 machine with sensorimotor capacities indistinguishable from ours because such a machine should, in theory, be able to learn and acquire a grounded language system.

    Belpaeme, Tony, and Stephen J. Cowley. "Extending symbol grounding." Interaction Studies 8.1 (2007): 1-16.

    Vogt, Paul. How mobile robots can self-organise a vocabulary. Language Science Press, 2015.

  12. This comment has been removed by the author.

  13. This comment has been removed by the author.

  14. Harnad (2003) seems to be concerned with articulating the SGP as it relates to symbols denoting concrete objects. I came across an interesting article online addressing the question of how cognitive agents might deal with more abstract notions like emotions and feelings (Mayo 2003). Mayo proposes that cognitive agents understand the vast amount of sensory information available to them at any given moment in reference to "task-specific sets...formed in order to solve specific problems in particular domains." Thus, a cognitive agent will organize the same sensory information differently depending on the particular activity it happens to be occupied with. The transition from task-specific icons to grounded abstract notions proceeds by a process of "decontextualization" whereby overlapping or “intersecting” parts of the iconic representation are generalized. As discussed in class, all concepts are abstract by nature, but this functional model allows us to talk about different degrees of abstraction based on specific tasks and problems confronting the cognizer at any given time.

    Mayo, Michael J. "Symbol grounding and its implications for artificial intelligence." Proceedings of the 26th Australasian computer science conference-Volume 16. Australian Computer Society, Inc., 2003.

  15. This comment has been removed by the author.

  16. It's important to reconsider the meaning behind symbols. The claim that we have meaning when we think of the word "apple", in a way that a machine processing the word "apple" does not, needs qualification. For example, if someone is telling a story and casually mentions an apple, we hardly ponder the apple. Its meaning exists only in the word's contribution to the greater story the person is relating. However, if we are made to sit and ponder "apple", we start to think about its taste, colour, and other feelings, and only then does it gain meaning.
    Another example would be numbers. There is meaning when we are first learning about numbers: we are shown pictures of objects, and the number of objects corresponds to the symbol that stands for that number. However, when we do operations such as addition now, for instance 7+2, I don't think there is any meaning in the numbers beyond the symbols and their manipulation. We simply output "9" without any thought to what the "9" means.
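
    That point can be made concrete with a toy sketch of my own (nothing from the readings): if numerals are just strings of "S"s applied to "0", addition becomes a rewrite rule, and the procedure below gets 7+2 right without any notion of quantity anywhere in it.

      def numeral(n: int) -> str:
          """Build the shape for n: 3 -> "SSS0". Used only to construct inputs."""
          return "S" * n + "0"

      def add(a: str, b: str) -> str:
          """Rewrite rule: peel an "S" off a and stick it onto b until a is "0"."""
          while a != "0":
              a, b = a[1:], "S" + b
          return b

      print(add(numeral(7), numeral(2)))  # "SSSSSSSSS0", i.e. the shape we read as 9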

    To summarise my point, perhaps we should rethink the framework in which we think about symbols and their meaning, because at times they have no meaning to us even though we are cognizant beings.

    Replies
    1. Shanil, you bring up an extremely interesting point. I think this can be especially observed when attempting to learn another language. Sometimes you hear a word you don't quite know and you have to refer to it in your native language, or think about the image or meaning of it. However, as you progress in learning the language this hardly happens: a sentence is spoken to you and you respond without breaking down the meanings of individual words or referring to what they're grounded in. So I guess: why, at some point, do we simply stop thinking of words (symbols) in terms of what they're grounded in? Does this really cease happening, or does it start to be taken over by subconscious mechanisms?

  17. RE: Wikipedia “What about the meaning of a word inside a computer? Is it like the word on the page or like the word in one's head?”
    I thought this was an interesting phrase because it ties together the questions we have had up until this point in the course, and indicates the shortcomings of the different theories we have discussed.
    On the one hand, it demonstrates the shortcomings of computationalism, in that it is clear to most people that the literal words on a page differ from the words in one’s head in how they come to have meaning; if the words inside a computer are meaningful only in the way the words on a page are, then we have missed the mark with computationalism, because we have not explained the meaning, or the origin of the meaning, of the words in the head.
    On the other hand, what is it about the meaning in the head that distinguishes it from the words on the page? This is the symbol grounding problem, the mechanisms of which are discussed through the lenses of many theories throughout the article.

  18. The benefits of a hybrid system?

    By eliminating the need for the autonomous symbolic module, Harnad (1990) writes that an intrinsically dedicated symbol system will emerge as a consequence of the bottom-up grounding. In this system, “Symbol manipulation would be governed not just by the arbitrary shapes of the symbol tokens, but by the nonarbitrary shapes of the icons and category invariants in which they are grounded”

    I don’t see why a computer couldn’t be built with a database of grounded elementary items, advanced feature detectors, and an integrated symbol-manipulating system. But I think something is missing from this hybrid system: using the nonarbitrary shapes of the icons and invariant features given to the computer in the proposed database during the symbol-manipulation process just seems like computation with added information and rules about which manipulations can happen. I don’t see how a symbol-manipulation system that is “intrinsically dedicated” to a particular task differs from the autonomous module insofar as it is executing computation.
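
    To make the worry concrete, here is a minimal sketch of the kind of hybrid system under discussion (my own toy, not Harnad's model; the prototypes and features are invented): a nonsymbolic stage maps sensor vectors onto category names, and a symbolic stage then manipulates those names. Whether the first stage makes the whole thing more than computation plus added machinery is exactly the question being asked above.

      import math

      # Nonsymbolic stage: learned prototypes (invariant features) for each category.
      PROTOTYPES = {
          "horse": [0.9, 0.1],
          "zebra": [0.9, 0.9],
      }

      def categorize(sensor_vector):
          """Map a raw sensor vector onto the name of the nearest prototype."""
          return min(PROTOTYPES, key=lambda name: math.dist(PROTOTYPES[name], sensor_vector))

      def describe(sensor_vector):
          """Symbolic stage: manipulate the category name delivered by the detector."""
          return f"I see a {categorize(sensor_vector)}."

      print(describe([0.85, 0.95]))  # "I see a zebra."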

  19. RE: Steels, L. (2008) The Symbol Grounding Problem Has Been Solved. So What's Next?
    “So does this mean that the symbol grounding problem is solved? I do not believe so. Even though these artificial systems now autonomously acquire their own methods for grounding concepts (and hence also symbols), it is still the human who sets up the world for the robot, carefully selects the examples and counterexamples, and supplies the symbol systems and conceptualizations of the world by drawing from an existing human language. So the semantics are still coming from us humans. Autonomous concept and symbol acquisition mimicking that of early child language acquisition is a very important step, but it cannot be the only one. The symbol grounding problem is not yet solved by taking this route and I believe it never will.” Steels

    My post is in regard to supervised learning experiments with artificial learning systems. I believe that Steels’s claim that semantics come from humans in these experiments is misguided, and that his logic doesn’t support the conclusion that humans are significantly more autonomous in the way we acquire semantics than AS are in this context.

    Steels asserts, “it is still the human who sets up the world for the robot, carefully selects the examples and counterexamples, and supplies the symbol systems and conceptualizations of the world by drawing from an existing human language. So the semantics are still coming from us humans.” The logic is flawed because it can be applied to humans -- to myself (Nancy)! I have neither set up the world I am in nor taken part in creating the symbol system I use. Although the contexts I find myself in are not “carefully select[ed]” in the exact same sense they are for the AS, whereby the “main approach is […] artificial learning system [are] shown examples and counterexamples of situations where a particular symbol is appropriate,” I don’t think the way the AS’s contexts vs. my contexts are set up and used to ground symbols is wildly different or significant.

    The way we (Nancy and the AS) learn to associate a symbol with its referent and use the symbol appropriately is, in both cases, by exposure to various situations. The situations are just of different complexity and over different time spans. I experience the contexts over the course of my lifetime, whereas an AS might be rapidly fed the situations in an experiment. Although the situations presented to the AS are ‘crafted’ by human experimenters and may be less complex, a large number of the situations and contexts I have been exposed to, which have led me to ground a concept, are also by and large crafted (in a sense) and have been guided by others.

    Steels’s statement that the human experimenter “supplies the symbol systems and conceptualizations of the world by drawing from an existing human language” is no different from the way that I use and have learned an already existing language that I did not help create. So his claim that “semantics are still coming from us humans” is not really meaningful if the statement aims to imply that an AS isn’t creating meaning as autonomously as I am. I guess the statement could also be interpreted in a way that considers the AS and the human as species: in this sense humans construct the AS, and though I haven’t crafted my symbol system or situation, some system like me (a human) did. But this doesn’t matter for the symbol grounding problem (SGP), since the question doesn’t concern whether the symbol system is created by the system, but rather whether a system can learn to directly link some symbol to its referent based on its own experience.

    Also, just because a human programs the AS and its ability to recognize symbols, the way the AS uses that ability to form its semiotic network is no less autonomous than the comparable process in humans. The AS is coded to have some baseline algorithm or statistical methods, which is like the way I am born with certain innate feature-detecting machinery that underlies the development of my semiotic system.
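
    For concreteness, here is a toy version of the supervised set-up at issue (my own sketch; the "cup" example and its two features are invented). The human supplies the labelled examples and counterexamples, but the learner adjusts its own weights and ends up with its own detector for when the symbol is appropriate:

      # Situations are feature vectors [has_handle, is_concave]; label 1 means the
      # symbol "cup" is appropriate, 0 means it is not (examples vs. counterexamples).
      examples = [([1, 1], 1), ([0, 1], 1), ([1, 0], 0), ([0, 0], 0)]

      weights, bias = [0.0, 0.0], 0.0
      for _ in range(10):  # simple perceptron updates
          for features, label in examples:
              prediction = 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0
              error = label - prediction
              weights = [w + error * x for w, x in zip(weights, features)]
              bias += error

      print(weights, bias)  # the learned weights single out "is_concave" as the invariant feature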

    Replies
    1. I don’t think Steels’s logic has real clout with respect to what he considers the key question for symbol grounding: “Harnad (1990): if someone claims that a robot can deal with grounded symbols, we expect that this robot autonomously establishes the semiotic networks that it is going to use to relate symbols with the world.” Steels implies that we don’t know if an AS can spontaneously create its semiotic networks without direct, specified exposure or the guidance of a human, but this notion and reasoning can also be extended to humans. The creation and upkeep of my semiotic network is heavily guided by others – I wouldn’t be able to ground English words or understand the roles of each note in a C major scale if no one taught or pointed them out to me. As such, I don’t think the issue of experimental situational complexity here goes against Harnad’s emphasis on “the difficulty of picking out the objects, events and states of affairs in the world that symbols refer to,” in that it should be expected that a system of different build and sensorimotor capabilities needs more specific guidance to identify symbols in the way humans do. The only thing the AS experiments have shown us is that an AS can ground symbols autonomously, and although the situations may be less complex and more directed, as technology improves there is no reason to think that, if an AS were given proper guidance akin to a child’s education, it couldn’t navigate a more complex environment or autonomously establish its semiotic networks in a way comparable to humans. I don’t think it’s fair for Steels to say the SGP can never be solved by these methods, because ultimately, if the evidence demonstrates that an AS can autonomously develop its own methods to ground symbols based on direct exposure, the key mystery is whether the symbol feels like a particular something to the AS, or whether what we are observing is a meaningless response to code.

  20. Firstly, I feel it is important to make the distinction that we created the symbols; they weren’t there for us to interpret. So the meanings were in our brains, and we found a way to express those meanings to others by forming symbols. Aren’t the ungrounded words on a page very similar to the grounded words in our brains? Without the right capacities (the means to interpret and say the words we want to say out loud meaningfully), we would not be able to express what is grounded in our brains. For example, people with Wernicke’s aphasia are able to produce language, but it won’t be meaningful. So aren’t the ideas grounded, in the way we used to think before we produced language, and not the words themselves? Thus, I feel we need to discuss this issue without talking about the symbols we are using. How would we think and act without language and symbols?

    In addition, although picking out referents is a dynamical property and it is grounded, is it not dependent on our past experiences and personality? We have a basic system composed of these, and when someone says a word, we interpret it in the most logical way, picking the referent with the highest probability of meaning in that context. Is Siri not accomplishing this even now? Siri might only be doing it on a basic level now, but would picking out referents really not be possible by programming several meanings, having the AI choose the most probable one, and continually updating the meanings and possible referents?
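
    As a sketch of that proposal (the words, candidate referents, and counts below are all invented), choosing the most probable referent from context and updating afterwards is certainly programmable; whether that amounts to grounding, rather than just more symbol manipulation over bigger tables, is the question at issue:

      from collections import Counter

      # Co-occurrence counts between context words and candidate referents of "bank".
      cooccurrence = {
          "bank": {
              "river_bank": Counter({"water": 8, "fishing": 5}),
              "money_bank": Counter({"loan": 9, "deposit": 7}),
          }
      }

      def pick_referent(word, context_words):
          """Choose the candidate referent that best matches the current context."""
          candidates = cooccurrence[word]
          return max(candidates, key=lambda r: sum(candidates[r][c] for c in context_words))

      def update(word, referent, context_words):
          """Keep updating: strengthen the chosen referent's association with this context."""
          cooccurrence[word][referent].update(context_words)

      print(pick_referent("bank", ["loan", "deposit", "water"]))  # "money_bank"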

  21. RE: To be grounded, the symbol system would have to be augmented with nonsymbolic, sensorimotor capacities -- the capacity to interact autonomously with that world of objects, events, properties and states that its symbols are systematically interpretable (by us) as referring to.
    Concrete words, e.g. table, are symbols that have referents that can be measured or observed, i.e. experienced through our physical senses. The word table has a physical table as a referent. When we talk about grounding symbols, we can say that these words gain their meaning through our sensorimotor interactions with the physical referents that enable us to have these experiences. How, then, are we able to ground abstract words, such as love, if we never have sensorimotor interactions with the referents? Could it be that these abstract words are grounded on other previously grounded concrete words?
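
    One way to picture that last suggestion (a toy of my own; the mini-dictionary below is invented): an abstract word can inherit grounding indirectly if its definition eventually bottoms out in words that were grounded directly through sensorimotor experience.

      # Words assumed to be grounded directly through sensorimotor experience.
      DIRECTLY_GROUNDED = {"person", "give", "thing", "want"}

      # Abstract words defined in terms of other words.
      DEFINITIONS = {
          "gift": ["thing", "person", "give"],
          "generosity": ["person", "want", "give", "gift"],
      }

      def is_grounded(word, seen=None):
          """True if the word is directly grounded or defined only from grounded words."""
          seen = set() if seen is None else seen
          if word in DIRECTLY_GROUNDED:
              return True
          if word in seen or word not in DEFINITIONS:
              return False
          seen.add(word)
          return all(is_grounded(w, seen) for w in DEFINITIONS[word])

      print(is_grounded("generosity"))  # True: grounding is inherited indirectly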

    A common finding within psycholinguistics is that unilingual adults process concrete words more easily than abstract words (e.g., table vs. love). According to Paivio’s Dual Coding Theory, the concrete-word advantage arises because concrete words' meanings are grounded both conceptually and perceptually, unlike abstract words, whose meanings are grounded only conceptually. Last year I was working on a project with a psycholinguistics professor. We measured response times and event-related potentials in bilinguals as they processed concrete and abstract words. We found that concrete words were generally easier to process, and this effect was enhanced in their second language.
    The symbol grounding problem is interesting to me because it discusses how a link between a symbol and its referent gives this symbol meaning. Kroll and Stewart’s Revised Hierarchical Model suggests that unbalanced bilinguals (with a dominant first language) have stronger conceptual links in their first language than in their second language, resulting in slower processing of second language words. This theory suggests that unbalanced bilinguals revert back to the stronger conceptual links in their first language to process words in their second language, resulting in the slower processing speed.
    I wonder how symbol grounding works when learning a second language. Are the new (second-language) symbols grounded in the same referents as the old (first-language) symbols? Is the sensorimotor experience and meaning grounded in the referent accessible to the new symbol? To what extent do these meanings overlap, considering that similar words in different languages can take on very different meanings depending on the context in which they were learned?

  22. “A symbol system alone, whether static or dynamic, cannot have this capacity, because picking out referents is not just a computational property; it is a dynamical (implementation-dependent) property.”

    While I agree that in order to be grounded a symbol system would need to be “augmented with nonsymbolic, sensorimotor capacities”, which would not be computational, I am still left wondering whether computation is the only phenomenon that is implementation-independent. If not, then there is no reason to necessarily assume symbol grounding is implementation-dependent. If computation follows a set of internally consistent rules, is it possible for other, non-computational dynamical systems to exhibit implementation-independent phenomena as well? I am really unsure about the answer to this question, but it seems to have significant ramifications for a lot of what we’re talking about. Moreover, my intuition from examples of multiple realizability suggests that many dynamical systems may manifest non-computational, implementation-independent properties.
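
    For what it's worth, the uncontroversial half of the question can be shown in miniature (a toy illustration only; it does not settle whether anything non-computational could share the property): computation is implementation-independent in the sense that the same formal procedure can be realized in arbitrarily different ways with identical input-output behaviour.

      # The same formal procedure (factorial) realized two different ways.
      def factorial_recursive(n: int) -> int:
          return 1 if n == 0 else n * factorial_recursive(n - 1)

      def factorial_iterative(n: int) -> int:
          result = 1
          for k in range(2, n + 1):
              result *= k
          return result

      # Identical input-output behaviour: the computation doesn't care which
      # realization (or which hardware) carries it out.
      assert all(factorial_recursive(n) == factorial_iterative(n) for n in range(10))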

  23. I found an interesting idea in this reading that parallels the answer given to a question in class about the nature of feelings and self-awareness. In class, we discussed how feelings need to be felt by a cognizer to be feelings, as an instance of the hard problem of cognition. Here there is an argument that meaning must be actively experienced in the brain, or in another entity capable of cognizing, to be meaning in the first place. The emphasis on the “mediating process” required for executing the rules that get from the “inner word” to the outer referent points out, I think, an important feature of studying cognitive science (namely, that it requires investigating the underlying mechanisms which give rise to cognitive capacities, such as meaning something, rather than just their manifestations). However, I’m wondering whether this mediating process involved in the ascription of meaning is consciousness, and whether consciousness is a tool for identifying and perhaps examining this mediating process.
