Saturday 2 January 2016

5. Harnad, S. (2003) The Symbol Grounding Problem

Harnad, S. (2003) The Symbol Grounding Problem. Encyclopedia of Cognitive Science. Nature Publishing Group / Macmillan.

or: Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1), 335-346.

or: https://en.wikipedia.org/wiki/Symbol_grounding

The Symbol Grounding Problem is related to the problem of how words get their meanings, and of what meanings are. The problem of meaning is in turn related to the problem of consciousness, or how it is that mental states are meaningful.


If you can't think of anything to skywrite, this might give you some ideas:
Taddeo, M., & Floridi, L. (2005). Solving the symbol grounding problem: a critical review of fifteen years of research. Journal of Experimental & Theoretical Artificial Intelligence, 17(4), 419-445.
Steels, L. (2008) The Symbol Grounding Problem Has Been Solved. So What's Next?
In M. de Vega (Ed.), Symbols and Embodiment: Debates on Meaning and Cognition. Oxford University Press.
Barsalou, L. W. (2010). Grounded cognition: past, present, and future. Topics in Cognitive Science, 2(4), 716-724.
Bringsjord, S. (2014) The Symbol Grounding Problem... Remains Unsolved. Journal of Experimental & Theoretical Artificial Intelligence (in press)

122 comments:

  1. The article (Harnad, 2003) helped tie together the lectures/class discussions so far on T3. Two general points came to mind regarding what has to be added to give symbols meaning, specifically, where the article “[suggests] one property, and [points] to a second”. First, regarding the suggested property, which is the capacity to pick out symbols’ referents, the role that sensorimotor capacity plays became clearer after I read the 1990 (Harnad) article. But I’m curious as to how we make the transition from iconic representations to categorical representations. The 1990 article states that “icons must be selectively reduced to ‘invariant features’ of the sensory projection”; what defines invariant features? I’m having difficulty with this idea because it reminds me of Wittgenstein’s attempt to define a game but ultimately failing to give an essential definition of game. So, would invariant features reflect what Wittgenstein called family resemblance? Related to this, I’m also wondering how we transition from categorical representations to higher-order symbolic representations. How exactly does “zebra” inherit the grounding from “horse” and “stripes”? Would we say that this inheritance of grounding is strictly additive (literally a horse with stripes) or does the combination of horse and stripes make zebra a category over and above being a horse with stripes? If so, how does inheritance generate something more than what it originally started with? Related to inheriting grounding, what exactly defines “the intrinsic grounding of the elementary set”? In my view, the symbol grounding problem can be solved through this elementary set. Where I struggle is how exactly it can provide grounding to other categorical representations. If it’s purely by inheritance, then it goes back to how inheritance works.

    Second, for the second property that was pointed to in the 2003 article, which is consciousness, the discussion on a zombie that could pass the TT reminded me of epiphenomenalism. I would suspect that once a T3 robot has symbol grounding along with the corresponding iconic, categorical, and symbolic representations, it must necessarily be conscious/feeling. The question, for me, then is whether this feeling is an irrelevant by-product (similar to epiphenomenalism) for passing the TT or whether it's needed to pass the TT. Which is to say I don’t care whether or not it’s actually feeling because, for practical purposes, I only care whether or not this feeling contributes to my judgment on whether it's feeling or not.

    ReplyDelete
    Replies
    1. (Feel free to ignore: extra comment regarding invariant features).
      More tangential but still related to the question of how invariant features can be derived, I’m also reminded of Bertrand Russell’s example of a table: “Let us concentrate attention on the table. To the eye it is oblong, brown and shiny, to the touch it is smooth and cool and hard; when I tap it, it gives out a wooden sound. Any one else who sees and feels and hears the table will agree with this description, so that it might seem as if no difficulty would arise; but as soon as we try to be more precise our troubles begin. Although I believe that the table is 'really' of the same colour all over, the parts that reflect the light look much brighter than the other parts, and some parts look white because of reflected light. I know that, if I move, the parts that reflect the light will be different, so that the apparent distribution of colours on the table will change. It follows that if several people are looking at the table at the same moment, no two of them will see exactly the same distribution of colours, because no two can see it from exactly the same point of view, and any change in the point of view makes some change in the way the light is reflected.” Putting aside philosophical/metaphysical quandaries, purely in terms of going from iconic representations to categorical representations, how would we account for something like what Russell described? What is invariant in a table that makes it a category? How would our answer to the table example generalize for all the things we interact with in our environment?

      Delete
    2. To answer the query about whether feeling is an irrelevant by-product for passing T3 or whether it's necessary: I think this goes back to the easy and the hard problem. This class and cognitive science are ultimately looking at answering the easy problem - how we do what we do. The hard problem is how we feel. The TT isn't all-knowing, as discussed by Searle and Harnad, but it is the only empirical test we have for studying the mind - whether or not a synthetic product can pass as a human (with a mind) for an entire lifetime. This fallibility pointed out by Searle shows that cognition can't only be computation. Something can pass the TT (the Chinese Room) but still lack feeling or understanding or consciousness. So to answer the question of whether feeling is needed to pass the TT, I would say it's not. The Chinese Room passes the TT but lacks feeling because it has not been grounded. Obviously that example isn't at the T3 level, but ultimately I think the TT is not rigid enough to require feeling to pass it - but it's the only test we have thus far.

      Delete
    3. Austin:

      1. Either there is something that distinguishes a game from a non-game or there isn't. If there isn't, then there's no such thing as a game. If there is, then that's the invariant. It might be complex (property X, or if not Y then Z...) but if it doesn't exist, the category doesn't exist. And if it exists, it's something that every member shares (in the way that every apple shares the property "red or green"; "family resemblances" are just such either/or properties).

      2. Grounding transfers when, knowing what A's are and what B's are, you also learn what C's are from being told C = A + B. "Higher-order representations" [I no longer use the weasel-word "representation"] are just verbal combinations of lower-order ones. The essential thing is that they have to exist to make it possible for you to recognize the referent. At the bottom, the category names have to be grounded directly through sensorimotor experience (induction) rather than through verbal recombination of grounded categories (instruction). (See the toy sketch at the end of this reply.)

      3. Nothing to do with "epiphenomenalism" (which means and explains nothing: it's just another way of saying we have no idea how to solve the hard problem). The hard problem is giving a causal explanation of consciousness; "epi" is just another way of saying we have no idea. But the question of whether T3 (Dominique) is or is not a Zombie is not the hard problem: it's the other-minds problem. We have no way of knowing that for sure either. It's certainly not "necessarily" true that Dominique is not a Zombie. Turing's point is just that since there's no way to tell, don't lose sleep over it. The truth, however, certainly does matter, but not to you: to Dominique, should you come to the wrong conclusion, and decide it's ok to kick her.

      That's why I'm a vegan.

      4. About the table's invariants, see 1, above.

      Kathryn, if you could explain how and why feeling is necessary to pass T2, you would have solved the hard problem. All I would venture to say is that T3 capacity (grounding) is needed to pass T2. But to be able to say feeling is needed (or not needed!) to pass T3, you would again have to solve the hard problem. Meanwhile, Turing points out that you may as well forget about the Other-Minds problem because you'll never solve that either!
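
      A toy sketch of points 1 and 2 (invented names and feature tests, not a model from the reading): directly grounded category names carry invariant-feature tests, possibly disjunctive like "red or green", while a name like "zebra" inherits its grounding through verbal combination of already-grounded names.

      # Toy sketch in Python: grounding by direct feature tests plus inheritance
      # through definition. All names and features here are made up.
      directly_grounded = {
          "horse":   lambda thing: thing.get("shape") == "equine",
          "stripes": lambda thing: thing.get("pattern") == "striped",
          "apple":   lambda thing: thing.get("color") in ("red", "green"),  # either/or invariant
      }

      # Indirectly grounded: defined by combining already-grounded names (instruction).
      defined = {
          "zebra": ("horse", "stripes"),   # "a zebra is a horse with stripes"
      }

      def is_member(name, thing):
          """True if `thing` passes the direct or inherited invariant-feature test."""
          if name in directly_grounded:
              return directly_grounded[name](thing)
          if name in defined:
              return all(is_member(part, thing) for part in defined[name])
          raise KeyError(f"'{name}' is not grounded, directly or by definition")

      print(is_member("zebra", {"shape": "equine", "pattern": "striped"}))  # True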

      Delete
  2. When discussing robotics in "The Symbol Grounding Problem," Harnad writes: "But if groundedness is a necessary condition for meaning, is it a sufficient one? Not necessarily, for it is possible that even a robot that could pass the Turing Test, "living" amongst the rest of us indistinguishably for a lifetime, would fail to have in its head what Searle has in his: It could be a Zombie, with no one home, feeling feelings, meaning meanings."

    Isn't this just the problem of other minds? Although it's true that a robot may not have in its head what Searle has in Searle's head, couldn't we argue the same thing about Searle? It's impossible to know what someone else is feeling, which is why we rely on assuming that if someone is like us, they feel like us. Shouldn't we be doing the same thing with robotics?

    ReplyDelete
    Replies
    1. I agree: we know that grounding plus feeling would give us meaning. So I think you're right that, precisely because of the other-minds problem, it seems impossible to know whether grounding is sufficient for understanding. We know that to understand means that it feels like something to understand, so although grounding is certainly necessary, we can never really know if it alone will be sufficient. It seems we would never know if Dominique actually understands anything when we talk with her, or if she has any feeling of knowing, despite her cognitive capabilities in grounding. Furthermore, in addition to the other-minds problem, I believe the hard problem also prevents us from explaining how or why grounding is sufficient, in the event that it is.

      Delete
    2. Identifying that lack of grounding is the main problem for Searle's "understanding" in the Chinese room is a helpful step for us in understanding the difference between Chinese-Searle and English-Searle. Upon first read, it might seem that both Searles are understanding equally well, if we base that on his actions. However, we can all grasp the concept that he is simply acting mechanically without using any true meaning of words, due to our own personal experiences with symbol grounding.

      However, it's true that we're relying on Searle to tell us which of these languages he understands and which he doesn't. If he is indeed following his instruction book perfectly, we have no way to know which language he understands, other than to trust what he tells us.

      Without using advanced brain imaging technologies, we can't know in which way he's responding to language stimuli--by true understanding, or by simple computational rules. And even with advances in brain imaging, can we truly trust that? Even if we learn that certain brain regions are for certain things (like the teapot area), much of this is based on subjects reporting things like "yes, I'm thinking of a teapot right now". It seems difficult, or perhaps impossible, to ever know for sure what others are thinking. Therefore, until we can, it seems impossible to determine whether I, or Searle, am actually feeling, or just following directions that hold no meaning to us.

      Delete
    3. I agree with Dominique, but I think the teapot example really should be retired. Someone with a severed corpus callosum can most likely think of a teapot - would this show up on both sides of the brain or just one? And would this be different from a regular person? Would it be a different location due to the lack of communication, or due to the patient being a different person with a different brain? I think thinking of neuroimaging as a study of where objects are found is really dangerous - how many objects are there in the world, and how big are our minds? The amount of space where 'teapot' is stored is microscopic, if it even exists at all; I don't think it does. These studies are phrenological in nature and do seem arbitrary, pointless, and non-scientific. But most studies are not trying to pinpoint exact spots for exact objects. And those studies looking at systems and at what's happening when you're undergoing a certain situation or reaction might actually be more relevant to this discussion.

      Delete
  3. Regarding: “One property that the symbols on static paper or even in a dynamic computer lack that symbols in a brain possess is the capacity to pick out their referents. This is what we were discussing earlier, and it is what the hitherto undefined term "grounding" refers to. A symbol system alone, whether static or dynamic, cannot have this capacity, because picking out referents is not just a computational property; it is a dynamical (implementation-dependent) property.”

    There are deep learning models trying to reach the capacity to pick out referents. Experts have been training computational models like Deep Boltzmann Machines to match images with text. For example, there is a model that is able to choose the most relevant/correct bird species after receiving an input--the description of that species from Wikipedia. By embedding images and text into a joint space, the model is able to choose the most relevant image to match a text description or a word, and to retrieve and generate captions when an image is given to the model. I don’t want to go too far because I barely know anything in this field, but I have heard of such an invention that enables machines to match a word to its referents. Even though not all the trials give correct text-image matches, it still makes me wonder whether such a thing (model/system/machine) really has no capacity to pick out its referents, as the reading suggested. One day (or maybe right now), if we have a computational model that has the capacity to pick out its referents, do we then say this model is “grounded”?

    Also, is it possible to have a symbol system that does not have the sensorimotor capacities to interact with objects, but is still able to pick out their referents when the system receives a word as an input?

    Searle’s Chinese Room Argument suggests that a system passing T2 does not necessarily mean it understands. If a system passes T2 and it has the capacity to pick out the correct referents to match with the word given, then can we say such a system understands? Or understands that language? (Leaving consciousness and the other-minds problem aside.)
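
    To make the matching step concrete, here is a minimal sketch (invented vectors, not the Boltzmann-machine model itself) of how joint-embedding matching reduces "picking out" to a nearest-neighbour search over vectors; whether that counts as picking out a referent in the world is what the replies below dispute.

    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def match_text_to_image(text_vec, image_vecs):
        """Index of the image embedding closest to the text embedding."""
        return max(range(len(image_vecs)), key=lambda i: cosine(text_vec, image_vecs[i]))

    # Pretend embeddings; in a real system these come from trained encoders.
    text_vec = np.array([0.9, 0.1, 0.0])                # description of bird species A
    image_vecs = [np.array([0.8, 0.2, 0.1]),             # photo of species A
                  np.array([0.0, 0.9, 0.4])]             # photo of species B
    print(match_text_to_image(text_vec, image_vecs))     # 0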

    ReplyDelete
    Replies
    1. I think that in the example of matching text with images, the grounding comes from the human who programs the system to recognise a certain image. For example, a human who knows what a cat looks like can tell a computer that this picture is an image of a cat, and then the computer can decompose the pixels and recognise a picture of a cat in the future.

      Delete
    2. Adrian, I could be wrong, but I think that in this case the computer is only given the text, i.e. a verbal description of a cat, and can then pick out the pictures of cats from a set of pictures of cats or non-cats. The programmer will tell the program if it's correct or not, but I don't think it is given a picture of a cat to begin with.

      That said, I still don't think we can say this deep learning model is grounded. Sure, it can match descriptions to images, but these descriptions and images are only so to an external observer, and only once presented on the monitor (which the computer can't see). In the computer, there are only 1s and 0s. It amounts to (impressive, but) straight computational symbol manipulation. Words on a page.

      If, on the other hand, this program were in a robot, and, given a description of a cat, could go up to an actual, physical cat, pick it up, and say "So this is what you were talking about?", then I'd say it was grounded. Until then, we're still in the realm of formal symbols. Without sensorimotor capacities, formal symbols is all you get.

      Delete
    3. I don’t believe Deep Boltzmann Machines are grounded – the way I see it, there’s a way to apply Searle’s Chinese Room argument to this as well, albeit in a different form. Instead of text coming in and going out, it’s images coming in and text as output (or vice versa), with some sort of program in between. The computer still doesn't know what these things are – as can plainly be seen when you take a look at its image-matching failures (the spectacular ones tend to be entirely unrelated to the given word). In my opinion a sufficiently advanced Boltzmann machine, one which doesn't make mistakes, still isn’t grounded; it just matches things better without ‘understanding’ what it’s doing. And also, replying to that last point: if I type ‘Tony Blair’ into a Google search, it will show me the former PM of the UK. Does this mean Google is grounded? I don’t believe so.

      The second point – whether it’s possible for a grounded system to pick out referents without having sensorimotor capabilities – I’m not entirely sure. I don’t think it is possible, and I go about thinking of it with one example: How would you go about picking out an apple vs. a tomato? Both are round, red fruit, about the same size, have varying tastes and are used in cuisine. Both are common in society. Tomatoes tend to be squishier, but what does it mean to be squishy? How does one understand any of this without having seen both fruits in order to make the distinction? Sensorimotor capabilities seem pretty crucial in order to be able to pick out referents, to me.

      Delete
    4. Thank you all for replying! Your comments make me understand the importance of really interacting with the world in order to pick the correct referent of a meaning. However, I still have a question: How about the case of picking out the referent of "unicorn"? (Or even some abstract nouns like "Justice")

      Since our sensorimotor capacities will not suffice to interact with a unicorn, when we decide that a horse-looking animal with a horn on its head is the correct referent of "unicorn", how different are we from a deep learning model that can choose the correct bird picture after receiving as input the description of the bird species from Wikipedia?

      Delete
    5. Alison, referents are things in the real world (kinds of things): apples, tables, people. Grounding is not matching symbols to symbols, nor even symbols to images. For grounding, you need a sensorimotor T3 robot in the world, able to do (and learn to do) the right thing with the right kinds of the things in the world, just as we can do: recognize them, identify them with their name (categorize them), and do all the other (robotic) things we can do with them. If you want to think of grounding, think of Dominique. Part of what's going on inside her (and us) might be the activity of "deep learning" nets, receiving our sensory inputs and learning to abstract their invariant features. But that's only a part of T3, hence only a part of symbol grounding.

      We can't have a sensorimotor interaction with a unicorn because there aren't any. But we can interact with a picture of a unicorn, or recognize a mechanical imitation of a unicorn. But our symbol "unicorn" is nevertheless grounded by the words "a unicorn is a fictional horse with one horn." "Justice" is grounded mostly by words too (but there are examples: "that was an unjust thing you did" etc.)

      Adrian, computers simulating deep-learning nets really can classify (digitized) image inputs and match them to words. But that isn't categorizing things in the world, and it's not T3.

      Michael, right.

      Amar, right (see above)

      Delete
    6. I think that if a system passes T2 and has the capacity to pick out the correct referents to match with the word given, we cannot necessarily say this system understands, because it only demonstrates that it has the ability to generate the same output from the same input, but not necessarily by the same processes (from a strong equivalence point of view).

      Delete
    7. "Searle’s Chinese Room Argument suggests that a system passing T2 does not necessarily means it understands. If a system passes T2 and it has the capacity to pick out the correct referents to match with the word given, then can we say such a system understands?"

      Maybe I'm thinking about this the wrong way or not understanding something, but wouldn't the system have been programmed by a human? And thus, wouldn't any symbol that it understands be a result of that symbol being defined by whoever programmed the system? Thinking in terms of computer science and whatnot, a program knows what command to execute, what space in memory to refer to, etc., based on what the programmer has defined or told it to refer to previously in the program.

      Given that, I agree with what Zhao above has said, namely that we cannot guarantee that the system understands, because it only demonstrates that it has the ability to generate a specific output given a specific input. But this can be predetermined by whoever created the system.

      Delete
  4. It seems the most important application of this article is the distinction between what is necessary and what is sufficient for cognition. Harnad argues that symbol grounding is necessary for cognition but potentially not sufficient. My question is if we can ever discover what is sufficient for cognition. We are aware that we have a brain, that we have senses and we have cognition but we do not know how the latter arises from the former two. Even if we can say that consciousness is the brain and the senses, we cannot know how.

    ReplyDelete
    Replies
    1. Leaving out consciousness, and focussing only on cognitive capacities, I think we will know once we're able to build something we understand that has all those capacities.

      Delete
    2. If what is sufficient for cognition includes a reduction of underdetermination such that one can be certain that the other is cognizing, then it seems as though one has reached the “hard problem” of cognition. @Michael, even if we were to build something with greater cognitive capacities, this would not solve the other-minds problem because it only deals with the “easy problem” (i.e. Cognition is as cognition does).

      For Searle, it seems as though the other-minds problem is a necessary component (along with grounding) for cognition. While Searle argues that computation is not cognition, he bases his reasoning on the fact that any simulation of the brain is not the brain itself. Therefore, it cannot possess the same causal mechanisms as the brain. For Searle, it seems as though the only way to be sure that an AI is truly cognizing as humans do, would be to have an explanation of the brain itself. This would then be a sufficient explanation for cognition because it dodges the other-minds problem all together.

      However, I am skeptical about this argument. I do not think it escapes the other-minds problem. We do not even know if other brains (aside from our own) cognize, let alone AI recreations of the brain. I agree with you, @Adrian. If meaning is not sufficient for cognition, and being certain that the other is cognizing is a necessary component for cognition, then it seems as though we are at a stalemate.

      Delete
  5. I found this reading to be a great summary of some of the concepts that we have previously discussed in class.

    RE: "So if Searle is right, that (1) both the words on a page and those in any running computer-program (including a TT-passing computer program) are meaningless in and of themselves, and hence that (2) whatever it is that the brain is doing to generate meaning, it can't be just implementation-independent computation, then what is the brain doing to generate meaning (Harnad 2001a)?"

    From what I understand, Searle believes that the meaning of a word inside a computer is the same as one on paper, in that they are both ungrounded. It is a bit unclear to me whether sensorimotor capacities are a necessary condition for picking out referents. Is grounding only possible via sensorimotor capacities? Must something be conscious to be capable of picking out referents?

    Although symbols can be manipulated by a computer to be weakly equivalent (same input-same output) to a human, I would think that the symbols would have no meaning without a human consciousness/interpretation. If the brain is generating meaning (and grounding) based on lived experiences, can we say for sure that a computer will never be capable of grounding?

    ReplyDelete
    Replies
    1. I think his reference to sensorimotor capacities is just to indicate there are other necessary (but not sufficient) qualities an entity must have before it can "ground".

      On your last point, many artificial neural networks rely on experience (supervised learning) before they are useful (i.e. can understand some inputs and give appropriate outputs). However, even if a sufficiently complex network is capable of grounding, groundedness may only be a necessary, but not sufficient condition for meaning (as Professor Harnad states at the end of his article).
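
      As a minimal sketch of that "experience" (supervised learning), with invented toy data rather than any particular network from the literature: a perceptron adjusts its category boundary only from corrective feedback on labelled examples, i.e. it learns the boundary by induction rather than being handed a definition.

      import numpy as np

      # Hypothetical training data: 2-feature inputs labelled +1 or -1.
      X = np.array([[1.0, 0.2], [0.9, 0.1], [0.1, 0.9], [0.2, 1.0]])
      y = np.array([1, 1, -1, -1])

      w, b = np.zeros(2), 0.0
      for _ in range(20):                        # repeated exposure with feedback
          for xi, yi in zip(X, y):
              if yi * (np.dot(w, xi) + b) <= 0:  # misclassified: nudge the boundary
                  w += yi * xi
                  b += yi

      print([int(np.sign(np.dot(w, xi) + b)) for xi in X])  # [1, 1, -1, -1]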

      Delete
    2. Could an artificial neural network alone really be capable of grounding? Even if it learns, if it only exists in a computer, then it's not doing anything more than formal symbol manipulation. In this sense, yes, I think you do need sensorimotor capacities for grounding. Conscious or not, I think it's necessary to be able to operate in the world and interact with physical things to be grounded, or else it's just 1s and 0s that are only interpretable by us.

      Delete
    3. @Elise
      RE: Must something be conscious to be capable of picking out referents?
      Yes, meaning is constituted by both the referent and the rules that allow one to pick out the referent. From what I understood, consciousness is essential, or else one would just be arguing in favor of the Systems Reply argument.

      Sensorimotor capacities are necessary for meaning in that one’s experiences allows for the referents to be grounded. In other words, “picking out the referent” is to connect the referents (symbols) to real life experience/doing (i.e. that which the symbols are referring to).

      Delete
    4. With regards to the question of whether something must be conscious to be capable of picking out referents, I think it is a loaded question that depends on what we mean by conscious.

      Since we only ascribe consciousness to things that behave indistinguishably from us, then the capability of picking out referents must be one of those behaviours. So, conscious beings must be able to pick out referents but they must also be able to do a whole other bunch of stuff.

      If by conscious you mean feelings on top of behavioural capacity, then we're moving onto address the other minds problem.

      Delete
    5. @Yi - I don't think that we only ascribe consciousness to things that behave indistinguishably from us. I think most (if not all) people would agree that many animals are conscious. I think what is important is that sensorimotor capacities are most likely necessary to cognize (which is why a T3 robot could pass the TT while a T2 computer could not).

      I think it is cool to note, however, that if we agree that other species are also conscious, this might change our definition of what consciousness might look like for AI. If we alter the TT so that the requirement is that the computer be indistinguishable from an ape rather than from a human, does this change how we define consciousness and intelligence in computers?

      Delete
  6. The question of symbols and their "interpretation" may be an explanation for why the Turing test cannot yet be passed by a robot using a computer with algorithms running on binary rules. The main difference between the way modern computers handle symbols and the way humans (or more broadly, "cognition-capable biological systems") handle them explains why computers/robots have not yet been able to pass the Turing test, and why many doubt they ever will.
    Humans recognize symbols, and can associate the recognized symbol with its referents (where it comes from, where it's been seen before, the other symbols it looks like or shares a meaning with…). Computers, on the other hand, only manipulate symbols; there is no recognition, because the symbol seen is not linked to its referent, which does not allow them to be used to make Turing-test-passing robots. The way that computers use symbols does not involve grounding of each symbol in a system of "all the symbols". It then leads to discrete categories for each symbol, where the symbol is or is not part of a category - a clear yes-or-no answer, binary, which is the system that computers are based on. For humans, though, in a specific context, a log can fit in the category of "chairs", and a human may decide to sit on a log! Because in the context of the symbol "chair" (understood as "thing to sit on"), a log may fit, whereas in another context, the log would count as a "necessary thing to feed the chimney". How can a result like that one (observed in humans) be implemented in a binary system such as the one computers are based on? If the symbol representing the log (word or image) is input to a human with the capacity to ground the symbol in a larger picture where each symbol is connected to every other (like in a multi-dimensional spider-web), then subtle human behaviors can arise (burning it or sitting on it), whereas when the same symbol is input to a computer with discrete categories (which do not allow for creativity of interpretation), some behaviors do not arise, which does not let robots perform EVERY human behavior, and therefore does not let them pass the Turing test.
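
    A toy illustration of the context-sensitivity described above (invented rules and features, not something from the reading): the same object gets categorized differently depending on the current goal. Whether hand-coded rules like these amount to grounding is, of course, the very question at issue.

    def categorize(obj, context):
        # Context-dependent category assignment over the same object.
        if context == "need to sit" and obj.get("flat_top") and obj.get("sturdy"):
            return "chair"                        # a log can count as a chair here
        if context == "need to feed the fire" and obj.get("combustible"):
            return "firewood"
        return "unclassified"

    log = {"flat_top": True, "sturdy": True, "combustible": True}
    print(categorize(log, "need to sit"))            # chair
    print(categorize(log, "need to feed the fire"))  # firewood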

    ReplyDelete
    Replies
    1. I completely agree with you!
      “So the meaning of a word in a page is "ungrounded," whereas the meaning of a word in a head is "grounded"” (2003). (The word on the page exists in and of itself; the word in the head exists only in the mind.)
      What is being proposed is a means of intentionality, i.e. how a physical phenomenon becomes a mental phenomenon.
      However, I do not believe that giving a robot the ability to categorize will result in such a phenomenon. As you basically described, categorical perception lacks content. It describes a mental state as being about a single object, or a single referent of an object. This means that under different contents a single object would produce the same mental phenomenon, which we know not to be the case. The mind can intend the same object under different contents and produce two different conscious perceptions. A classic example is the idea of Peter Parker and Spider-Man. Peter Parker and Spider-Man are two contents for one person (one object). When the person takes on these different contents, we intend different conscious perceptions. Under the content “Peter Parker” the object is perceived as a nerdy newspaper photographer, while under the content “Spider-Man” the person is perceived as a heroic savior. Would the robot proposed, which has categorical perception and has learnt the difference between a newspaper photographer and a superhero, be able to realize that both "objects" are in fact one "object"?
      As I have said before, it seems like we are trying to reduce an INFINITE system to ONE rule (or even a hybrid of a few rules), and this is just not possible. We cannot quantify infinity.

      “To be grounded, the symbol system would have to be augmented with nonsymbolic, sensorimotor capacities -- the capacity to interact autonomously with that world of objects, events, properties and states that its symbols are systematically interpretable (by us) as referring to. It would have to be able to pick out the referents of its symbols, and its sensorimotor interactions with the world would have to fit coherently with the symbols' interpretation.” (2003)
      The word "autonomous" in this passage puzzles me. How can you program a robot to be autonomous? The sentence itself seems contradictory.

      Delete
    2. This comment has been removed by the author.

      Delete
    3. To elaborate on why you cannot have a robot "interact autonomously with that world": I assume this would go into the free will debate. Humans seem to be preprogrammed by genes. Each person has specific dispositions, mannerisms, and pleasure systems - however, we do not know how much of this is attributable to genes and how much to experience. This transfers to T3: how much do we need to preprogram the robot, and how much do we leave to reward and punishment?

      We can even speculate over these questions - are genes and environment the only forces perpetuating humans in a meaningful way? Maybe the issue is with time and not space. Instead of having the robot interact “autonomously with that world of objects, events”, maybe the robot needs to be interacting with time. Have the robot categorize relative to the continuum of time, rather than the continuum of matter.

      Delete
  7. I am looking forward to discussing the symbol grounding problem more in depth on Friday. I am starting to see a certain pattern of writing in these cognition readings, in which an intuitive definition of a concept is first offered and then quickly dismissed as incomplete. In any case, I agree with the statement that it is “unreasonable to expect us to know the rule, explicitly at least… our brains need to have the “know how”…but they need not know how they do it consciously”. This seems like an obvious statement, as we are seldom consciously aware of the workings of a given bodily or mental process. It seems effortless. Yet the fact that there is debate over meaning and how we derive it provides those opposed to Turing and AI with a good weak spot to attack. However, just as scientists revealed the processes of the heart and enabled the creation of an artificial one, so too may neuroscience reveal the means by which something ungrounded becomes grounded in the head. I also think the distinction between systematic interpretability (notwithstanding its power) and meaning is an important one. A computer that formally manipulates symbols that are semantically interpretable (pillars of computationalism) cannot be said to derive meaning. I also want to ask whether those who are blind or deaf lack the process of meaning (as their symbol system is not grounded with “nonsymbolic sensorimotor” aspects).

    Finally, I read the paper by Steels (2008) that suggests the SGP has been solved by his team’s elegant robotic agents that “autonomously generate meaning, autonomously ground meaning in the world through sensori-motor embodiment…and autonomously introduce and negotiate symbols for invoking these meanings”. The paper is compelling but I am unsure. Perhaps the solution is a good one and the only elusive feature that remains is consciousness. Professor (and peers), would you agree that these agents are a solution to the SGP?

    ReplyDelete
    Replies
    1. Hi Jessica!
      To address your first question, I think that conceptualizing the symbol grounding problem with regards to those who are blind or deaf would be similar to conceptualizing the problem with regards to a language other than English. Every language has its intricacies and grounds meaning in unique ways. I don't speak sign language myself, but I feel as though it is even more reliant on symbolic sensorimotor aspects than most verbal languages are.

      Delete
    2. I agree with Kristina regarding deaf/blind people. If the grounding process did not apply to them the same way it does to non-deaf/blind people, then for them to be able to use language (sign language, for example) the way they do would mean that the symbols they use have some sort of intrinsic meaning - unless you want to argue that they don't understand the symbols they use. But since different symbols (including hand gestures) have different meanings in different places, I don't believe that this is the case.

      Delete
  8. The symbol grounding problem asks how we connect a symbol with its referent through sensory-motor interaction. It frames a discussion about what processes we have to attach meaning to symbols/ symbols to meaning.

    When you said "meaning is grounded in the robotic capacity to detect, identify, and act upon the things that words and sentences refer to," it got me to think if words, or symbols, need to be part of the problem just yet. Perhaps first, we need to look at other processes that garner meaning that do not have to do with words, or even symbols.

    We are basically born knowing what an embrace means. The action of being swaddled and cooed-to is innately attached to the meaning of safety. If a baby is yelled at, the loud noise and the aggressive face already, innately refer to the baby's emotional centres. It has a built in signal for when it is not safe. As we age, we build a movement-meaning framework via imitation, proprioception, and generation of motor movements via social interaction. We come to categorize more and more facial and gestural expressions as we grow up, and add more layers and nuances to our framework. In looking at how we develop, and looking at our evolutionary line with non verbal primates, our body language abilities clearly precede our verbal abilities.


    Perhaps important gestures and their most adaptive emotional referents are hardwired to be readily connective in the brain. For example, we are born with an easy-to-activate highway of nerves between the portion of the visual cortex that represents the sight of a growling mouth full of teeth, and the activation of the amygdala. We just need to wake up that connection through experiencing a situation that brings about that connection.

    Clearly, we have the capacity to manifest a complicated meaning-system of rules for interpreting gestures, but there are no symbols to be manipulated here. The meaning lies in the connection between the gesture and the emotion. There is no symbol --> syntax process. Gestures carry meanings, yet these meanings are not formed via symbol manipulation in the same way that we learn a new word. When we read a book, I think we are performing higher-order processes than we would be if we were watching a mime performance.

    Back to the symbol grounding problem, like you and many other people, I wonder what exactly "grounding" refers to. What is it that the symbols are being grounded into? We could just leave it to cognitive science and neuroscience like you said, but I wonder if we are in fact installing our verbal language into our innate, gestural-meaning framework; the cortex-limbic system connections that link experiences in our world via emotions. I think before we have 'cognitive referents', we have emotional referents. I think 'cognition' comes with verbal language ability, but I think our cognition referents as they pertain to verbal language are rooted in our gestural-meaning framework.

    I feel like studying the cognitive development of lower-order animals (respectfully…V…) could help us figure out the gestural meaning-framework that precedes the symbol grounding problem.

    ReplyDelete
    Replies
    1. Perhaps robotics could be another way to study this idea. I skimmed through the review of the literature by Taddeo and Floridi and found Varshavskaya's behavior-based model to be most in line with what I was thinking. There is a child-robot head at MIT named KISMET that Varshavskaya claims has emotional capacities.

      "Learning to communicate with the teacher using a shared semantics is for KISMET part of the more general task of learning how to interact with, and manipulate, its environment. KISMET has motivational and behavioural systems and a set of vocal behaviours, regulatory drives, and learning algorithms, which together constitute its protolanguage module. Protolanguage refers here to the “pregrammatical” time of the development of a language – the babbling time in children – which allows the development of the articulation of sounds in the first months of life. To KISMET, protolanguage provides the means to ground the development of its linguistics capacities. KISMET is an autonomous AA, with its own goals and strategies, which cause it to implement specific behaviours in order to satisfy its “necessities”. Its “motivations” make it execute its tasks. These motivations are provided by a set of homeostatic variables, called drives, such as the level of engagement with the environment or the intensity of social plays. The drives must be kept within certain bounds in order to maintain KISMET’s system in equilibrium." Kismet has “emotions” as well, which are a kind of motivation. I'm really interested how these "emotions", "motivations", and "homeostatic variables" work in KISMET.

      The authors criticize KISMET and say that without representations, KISMET is unable to connect a symbol to a category of data. They are skeptical that he will ever be able to solve the symbol grounding problem, and accuse Varshavskaya of "innatism" and "externalism". Why are innatism and externalism bad? Won't learning more about our innate hardwiring, and how exactly certain patterns of wiring 'come online' in our development, be instrumental in laying the groundwork to then solve the symbol grounding problem?


      Delete
    2. correction to beginning of 3rd paragraph above:

      *Perhaps important gestures* and its most adaptive emotional referent are hardwired to be readily connective in the brain.

      Delete
    3. Lauren, good points: Yes, categorization (both innate and learned) has to precede symbol grounding.

      But it is symbols (words) that have (linguistic) meaning (sense, reference; semantics). The symbol has to connect to the referent (category).

      Gestures can resemble what they are imitating (just as a painting can), but imitation is not categorization, nor is it referring, nor is it a true/false proposition.

      Even pointing is not the same as reference. Drawing attention to something is not referring to it -- except if the one pointing has language, and is pointing to express the proposition: "I am referring to that." (Besides, so far it looks as if only humans point.)

      And emotions are not the only referents of our words, nor are they the most frequent or typical referents. There are apples too. And anyone who can categorize apples (do the right thing with them) in some sense knows what apples are.

      Nonhuman animals also know what being embraced "means." But that "meaningfulness" is not a link between a word and its referent.

      You ask "What is it that the symbols are being grounded into?": The word (category name) is grounded in the capacity to "do the right thing with the members of the category to which the category name refers."

      I agree that understanding the way nonhuman animals learn categories is a very big component of symbol grounding. But it does not become symbol grounding till the symbols are the words of a language. And words are not words until they can be combined into true/false subject/predicate propositions describing categories by combining the names of already grounded categories to define or describe further categories (often by describing the named invariant features that distinguish the members from the non-members).

      KISMET is a trick, like Siri. Turing-testing is not a game; it is reverse-engineering of the real thing. What we need is a Dominique, not KISMET.

      Delete
  9. Regarding:
    “One property that the symbols on static paper or even in a dynamic computer lack that symbols in a brain possess is the capacity to pick out their referents… A symbol system alone, whether static or dynamic, cannot have this capacity, because picking out referents is not just a computational property; it is a dynamical (implementation-dependent) property…
    To be grounded, the symbol system would have to be augmented with nonsymbolic, sensorimotor capacities -- the capacity to interact autonomously with that world of objects, events, properties and states that its symbols are systematically interpretable (by us) as referring to.”

    Dr. Harnad puts forward the idea that for a cognizer to be grounded, it has to have motor capabilities in addition to sensory ones. My question is: isn't the motor-capabilities portion of this assertion a human-centric perspective? If there were an artificial intelligence that had full sensory capabilities and could influence the outside world through communication via a non-motor source (i.e. voice, text, indirect control of other robots and/or biological organisms), could it be grounded? Karl says that it would be grounded, because it still perceives the world, has the ability to create categories (e.g. chair, plate, human) that can then be used to instruct what to do with whatever it is, and can interact in a non-motor way with its environment.

    The two objections that I could foresee to this would be: 1. the experiment with two cats, where one walks around the circular environment and the other is passively carried through it, where this hypothetical example is similar to the passive cat; and 2. that saying it indirectly interacts with its environment via something else (as stated above) is the same as saying that I indirectly use my arm to interact with the world (which defeats the point).

    If anyone has any idea about whether this is incorrect or correct, I would be happy to hear some feedback and if you can present me with a good enough counterargument, I could change my position.

    ReplyDelete
    Replies
    1. Hi Karl,

      You bring up an interesting point (for me, at least). I do believe that a grounded cognizer needs both sensory and motor capabilities, but your entry made me rethink why, especially with the motor part. Perhaps there exists some alien species, capable of cognizing but entirely incapable of locomotion and motor sensation. I think that in this case, such a being (or system, in the case of your AI) would be able to pick out the referents that are associated with the senses it does have: vision, audition, etc. However, its ability to pick out referents may not be entirely the same as ours – what would squishy, velvety, hard, soft, mean to a system that has never felt them? You need to be able to grasp, touch, feel, in order to evaluate this. The AI you give as an example either can do this – in which case it has sensorimotor capabilities – or it can't, and it also has no idea what the above mean. I understand it's very human-centric to say this, but perhaps our capability to pick out the full range of referents (provided that's something we can do better than anything else) is due to the fact that in addition to our sensory capabilities, we have motor ones too. Not sure if it'll make you rethink as well, but I wanted to give you my thoughts.

      Delete
    2. Hey Amar,

      I really appreciate your response to my original idea and I think you make some really interesting points, especially regarding the alien species that challenges our ideas of sensory-motor capabilities.
      Regarding your point on textures such as velvety-ness or squishy-ness, it comes back to the affordances that we as humans are given based on our senses. You are right that a being without the sense of touch would never even be able to understand those things, but a person who is born blind would never be able to understand “greenness” or “brightness” in the same way. If a person were born without the nerves necessary for the sensation of touch, they just as well would never understand what “velvety” means, but their symbols could be grounded in the other senses that they have.

      Delete
  10. Comment on Natural Language and the Language of Thought (Harnad 2003)

    Thinking of language as a symbol system that is ungrounded made me recognize the universality of our language system in terms of implementation-independence representing different languages, dialects and cultures.

    “The symbols, in other words, need to be connected directly to (i.e., grounded in) their referents; the connection must not be dependent only on the connections made by the brains of external interpreters like us. The symbol system alone, without this capacity for direct grounding, is not a viable candidate for being whatever it is that is really going on in our brains”

    We use our brains to pick out referents and ‘ground’ our language, regardless of what language we are speaking. However, people who are translating between different languages, with different meanings and symbols/shapes, seem to be relying on multiple processes to accomplish this ‘grounding’, and these processes seem to demand more than simply picking out referents, as there is a translation involved in the computation. This implementation-dependent process seems to differ for different languages and for translating between them.

    Contrary to the set-up and claim in Searle’s Chinese room argument, translators do understand the words they are computing and translating. Therefore, in addition to picking out referents as we do for grounding language ordinarily, must there be an additional element of consciousness in operation for the seamless computations translating between languages?

    ReplyDelete
    Replies
    1. For people translating from one language to the next, I wonder if the process differs based on their fluency and how ingrained the languages are. If someone has been regularly speaking two languages from birth (say French and English), they may require the same amount of demand when picking out referents as someone who only speaks one language. I can however see how the demand would increase if the language were one recently learned or not as well comprehended.

      Delete
  11. My first question relates to the “systematic correspondences between symbols”. First we have to accept the premise that the meaning of the symbol exists in the mind of the human speaker or hearer. When you consider the relations that symbols have with each other, does the meaning affect the way they relate? More explicitly, do the semantics (meanings) of symbols affect our ability to relate symbols? If a machine does not have ‘grounding’ of symbols would the ability to understand the relations among symbols be also compromised?

    My second question relates to the notion, “directly grounded” proposed in the Encyclopedia of Cognitive Science. I don’t fully understand what this means. What are the limitations imposed on directly grounded symbols? If it is having seen or heard directly, then how can we say to know the meaning of ‘unicorn’? If instead, directly is looser then it is possible to imagine theoretically that by grounding very few essential symbols all others could be generated. This would be a much simpler goal.

    ReplyDelete
    Replies
    1. RE 2nd question: Drawing off of Harnad's assertion that one can “combine and recombine categorical representations rule-fully into propositions that can be semantically interpreted"(1990), it follows that it would be sufficient for a word such as 'unicorn' to be decomposed into directly grounded symbols such as 'horn' and 'horse', ones that we have functional referents for due to sensorimotor capacity.

      Delete
    2. Hi Valentina,

      Regarding your first question, I do think the semantics of symbols affect our ability to relate them (specifically our ability, as a syntax system wouldn't care about semantics). Relations are based off meaning – my relationship with my family is based on the fact that I am a son, a brother, etc., and what these relations mean. Were I instead just a work colleague to my sister, our relationship would be quite different. I understand this version of ‘relation’ is a bit different from what you meant but I look at it in the same way. Is an ungrounded machine, ie. a computer, even understanding the symbols running through it? Not semantically, no.

      Delete
  12. "But if groundedness is a necessary condition for meaning, is it a sufficient one? Not necessarily, for it is possible that even a robot that could pass the Turing Test, "living" amongst the rest of us indistinguishably for a lifetime, would fail to have in its head what Searle has in his: It could be a Zombie, with no one home, feeling feelings, meaning meanings."

    I'm struggling to understand this. A robot could have meaning, and therefore could pass T3, but not be conscious? I thought Harnad said that what we mean by meaning is precisely feeling. It feels like something to understand English, and it feels like something to not understand it. That makes sense to me, also because I have been convinced by other arguments that draw a tight connection between meaning and consciousness. However, Harnad seems to go against this here, arguing that you can ground something to a referent (have meaning) without consciousness. I'm not sure what it means to have grounding without meaning.

    ReplyDelete
    Replies
    1. In that case we would say that robots we build today are grounded, but have no meaning?

      Delete
    2. Based on the article, I don't think we have yet built a grounded robot, because that would imply that we have successfully built a hybrid symbolic/sensorimotor robot which could potentially pass T3. I agree that it's difficult to distinguish between meaning and consciousness, because when we understand or attribute meaning to something, we automatically consciously interpret and understand it. But I think the difference pointed out by Harnad is that sensorimotor grounding would allow a robot to have the capacity to interact with the objects, events, properties and states that the symbols refer to and thus make the association between the symbol and the referent. But maybe it's not sufficient, because just having sensorimotor capacities doesn't imply that there is an experience or feeling accompanying the perception of a referent.

      Delete
    3. From what I've gathered, I think grounding is necessary for passing T3 and probably necessary for meaning. But since we don't know if other people/T3s feel - which people probably do - this is more related to the other-minds problem. I feel like the Symbol Grounding Problem comes into play when we realise that the symbols in a T3 would only ever be squiggles and squaggles, not grounded in referents.

      Delete
    4. I did some searching, and found this paper: "Active Learning for Teaching a Robot Grounded Relational Symbols"
      https://flowers.inria.fr/mlopes/myrefs/13-ijcai-actsymblearn.pdf

      It seems to me that even if we haven't yet succeeded in building a true T3-passing robot, it might be imminent. Looking at the amazing progress that has been made in the past few decades with artificial intelligence, I think it won't be long before we have robots that are quite human-like, in their words and actions.

      Well, let me backtrack a little--perhaps it will be a while before we have a T3 robot that actually succeeds in tricking humans that it is also human, BUT I think that it won't be long before we have robots which we generally accept as acting as a human would act.

      And as this paper suggests, some sort of grounding for the robots is in the works.

      Delete
    5. @Auguste I believe your claim is right, that we could build grounded symbol systems (robots) that have no meaning. I think meaning is equivalent to consciousness in the sense that grounded robotic systems that fail to pass T3 are not conscious/have no meaning. But if such a robotic system passes T3, we will be hesitant to deny that the robot has meaning, just as we are hesitant to deny that other people have meaning.

      Delete
  13. From section 3.2 of the 1990 version: “Note that both iconic and categorical representations are nonsymbolic. The former are analog copies of the sensory projection, preserving its "shape" faithfully; the latter are icons that have been selectively filtered to preserve only some of the features of the shape of the sensory projection: those that reliably distinguish members from nonmembers of a category.”

    This section looks at the different human behavioral capacities that a cognitive theory must be able to account for, and specifically focuses on two: our ability to discriminate (to decide how different two inputs are) and our ability to identify (to give a name to an input once we have determined what kind of input it is). We are able to discriminate between a black horse and a white horse running in a field at different times, for example, by comparing their sensory projections (i.e. what they look like, including their coloring) and deciding that the difference in color between the projections is large enough for them to be different horses. This makes sense to me across types of sensory input, but it’s easiest for me to think about visually: we take the visual input of the black horse and overlay it with the visual input of the white horse, compare the two across a bunch of different visual features, and decide that the color feature is different enough to mean these are two separate horses. At the level of granularity that this paper discusses, I think this is a totally plausible account of our discrimination process. However, even though the connectionism section of this paper as well as the Fodor diary entry from last week both warn that we shouldn’t care too much about exactly how the brain does something when developing cognitive theories, I’m stuck wondering why this is. This paper did clarify how symbols are grounded theoretically (via iconic and categorical representations), but doesn’t offer any suggestions for how this could physically be done (which makes sense because that would be totally outside the paper’s scope). I guess my question is, given that symbol grounding hasn’t been physically achieved yet in something other than a human brain, doesn’t it make sense to look at the brain, determine how it physically stores/filters the color feature of horses, and then design a symbol grounding system that mirrors this deeper “how” process?
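
    (A toy sketch, in Python, of the discrimination and identification steps described above; all feature names, values and thresholds are invented for illustration, not a claim about how the brain actually does it.)

```python
# Discrimination: how different are two sensory projections?
# Identification: does a projection have the invariant features of a category?

def discriminate(projection_a, projection_b, threshold=0.5):
    """Return True if the two feature vectors differ enough to count as different things."""
    diffs = [abs(projection_a[f] - projection_b[f]) for f in projection_a]
    return max(diffs) > threshold          # e.g. a large colour difference -> two different horses

def identify(projection, invariant_features):
    """Return True if the projection matches every invariant (category-defining) feature."""
    return all(abs(projection[f] - v) < 0.1 for f, v in invariant_features.items())

# Toy "sensory projections": feature name -> normalized value (made up)
black_horse = {"size": 0.9, "legs": 1.0, "darkness": 0.9}
white_horse = {"size": 0.9, "legs": 1.0, "darkness": 0.1}

print(discriminate(black_horse, white_horse))   # True: colour differs enough

horse_invariants = {"size": 0.9, "legs": 1.0}   # colour has been filtered out as non-invariant
print(identify(black_horse, horse_invariants))  # True
print(identify(white_horse, horse_invariants))  # True: both still count as horses
```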

    Replies
    1. In response to Olivia’s comment, I think that trying to look to the brain to determine how it physically stores/filters the colour feature of horses would be attempting to tackle the ‘hard problem’ of cognition. We would be undertaking the task of determining how and why it feels like something to see black, and why and how we are able to experience that same feeling whenever we come upon referents of black.

      Although how and why we feel when we see colour may never be fully understood (at least not in the next few decades), I feel as though I am beginning to understand how we ground symbols (an explanation of the easy problem). Symbol grounding is our ability to take referents that we experience via our sensorimotor capacities and attach them to a given symbol such as a word, number, sign, etc.

      Understanding symbol grounding does allow us to appreciate its complex nature, and human language is a perfect example of this. We are able to connect words like ‘black’ and ‘horse’ to previous sensory experiences. We are able to use these previously grounded symbols to correctly identify and pick out a referent we have never seen before but that has been described to us using grounded symbols of language. Better yet, we can use previously grounded symbols to understand referents that may never actually be picked out or be capable of being interacted with through any sensory modality (like the peek-a-boo unicorn that has come up so frequently in class).

  14. This clarified a lot of Searle's points about the CRA. More specifically, it was interesting to grasp in more detail how he uses consciousness (the brain) to demonstrate that understanding cannot be just computation.

    "Meaning is grounded in the robotic capacity to detect, identify, and act upon the things that words and sentences"

    To clarify things, does that mean grounding is necessary for passing T3/T4? From what I understand, T3 is the level at which we solve the "other minds" problem in everyday life, and the one at which symbol systems can be grounded in the robotic capacity to manipulate objects. However, we don't know whether T3 robots can feel or have meaning, i.e. whether they cognise. Since understanding cannot be just computation, if T3 robots are conscious beings that can interact with the environment, does that mean T3 robots can cognise? I'm not sure if I'm going in circles, but I'm quite stuck on the SGP in regards to T3.

    Replies
    1. T3 has sensorimotor capabilities, so if it was built in such a way that required activation of that system, then it would be grounding symbols. I think Harnad does a great job of demonstrating that grounding would be necessary to pass T3. We know that meaning requires grounding, but it also (possibly) requires consciousness, which brings it back to the other-minds problem. We know that T3 cannot be purely computational, so if there were some way of knowing that T3 robots had conscious mental states in addition to their ability to ground symbols, then I think T3 robots could cognize. But how could we possibly know that another thing is conscious? Searle's CRA is a computational system which we know would not understand meaning, but a non-computational T3 robot using some other means of intelligence might be able to cognize; we would probably just need to assume it does because it passes the TT.

  15. Harnad mentions at the end of the article that groundedness is necessary for meaning but may not be sufficient, because even if we have a robot that is indistinguishable from humans, it still may not have what we have in our heads and therefore won’t have feelings or understand meaning. With that being said, it sounds like in order for meaning to occur the machine must be a conscious entity. I say this because if it isn’t enough for the machine just to be indistinguishable from humans, since its head may still be empty inside and it still won’t understand anything, then the only way we could truly know it understands meaning is if it is made up of the same things as we are. Would this then mean that the machine would have to pass T4 in order to really have meaning?

    Replies
    1. I'm just throwing this out there.....perhaps I'm completely wrong. But could we possibly consider 'meaning' in a similar fashion as we do for T1, T2, T3, etc.? For example, M1 could correspond to meaning at the level of a T1 robot, simply understanding enough of the content to produce a good verbal response, while M4 could be the full meaning that we attribute to human understanding, since T4 agents are said to be neurally indistinguishable from humans, so the "meaning" level could likely be equal as well.

      Just a thought, I'd like to hear others knock it down!

    2. Hi Dominique, but how do you distinguish these different levels of meaning? And how does separating meaning into levels give us any more insight into being able to know if the machine actually understands and has meaning, because we still face the other-minds problem just the same.

      Hey Maya, if a machine passed T4, we would assume it has meaning, but just as I don't know whether my friend or my parents or anyone else actually has meaning, we still wouldn't be able to definitively conclude that a T4 has meaning just because it's made up of the same stuff we are. Also, I'm starting to wonder what the point of aiming for T4 would be, because in order to create a T4 we would need neuroscience and biology to figure out all our biological mechanisms, and wouldn't that make the idea of reverse-engineering to understand cognition redundant?

  16. To follow up on my previous comment, according to Searle (1980), the reason we know that animals, like dogs for instance, understand and have mental states like us is that we can't make sense of their behavior without ascribing intentionality to it, and that they are made up of similar material to us. But if a robot can be made that is grounded and can therefore identify referents and interact with things in the world, and if the robot is made to look like a human, then why can't we attribute understanding to it, if we can do this for animals?

    Replies
    1. I'm not sure I agree that the reason we attribute understanding to animals is due to only being able to make sense of their behavior through ascribing intentionality. Was this Searle's argument? Human beings are known to attribute intentionality to a lot of inanimate objects that should not be given intentionality, and so to me it seems that humans have a tendency to attribute intentionality to most of life, to model the way we see the world based on the way we function. I'm not sure what it is that differentiates animals from robots in terms of consciousness, but I'm assuming that that is the hard problem and maybe we will never know. Your argument also sounds a bit like the other minds problem, so to me it doesn't really seem to have a clear answer. One reason for more attribution of understanding to animals could be societal influences, in that we are taught that animals are conscious beings whereas technology/robots are not.

    2. I got the same idea as Maya, that we can't attribute understanding to animals because we are lacking information about animals’ intentionality. I think she poses an interesting question; in my opinion it seems as though robots are modelled on humans. I don’t know much about the field, but if it’s true that scientists are basing certain models of robots on humans, wouldn’t they have the same intentionality? Or maybe the intentionality would be different – maybe it would be that of following instructions from the person controlling/programming the robot.

  17. The Natural Semantic Metalanguage (NSM) is a linguistic theory that can offer insight into solving the symbol grounding problem. The NSM approach to semantics suggests that there is a list of semantic primes – words that are directly translatable, not reducible and shared by all languages. The meanings of these words must then inherently be “programmed” into humans from birth. These words are broken into categories such as logical concepts (not, if, because), mental predicates (want, feel, see), time (now, before, after), etc. These are then manipulated in different ways to produce meanings that we as humans can understand.

    The “Chinese/Chinese Dictionary-Go-Round” problem is an issue with all dictionaries because dictionary definitions can often be circular. For example, if you look up the word “afraid” in the Merriam-Webster dictionary, you will find “fear” in its definition and vice versa. If one doesn’t know the meaning of either of these words, then one cannot grasp (or ground) these meanings. This is why NSM goes about ‘explicating’ words using only the universal semantic primes to create a unique definition that is assumed to be naturally understood.

    The important question, however, is “can a robot deal with grounded symbols?” Although the NSM approach doesn’t tell us how these representations are formed, I think it may be leading us in the right direction. If one can somehow program a T3 with this set of semantic primes, then we can see whether these truly are the building blocks to language by seeing whether other symbols/words can then be grounded and learned by the robot. Then one could also see whether the robot can autonomously generate meaning from these basic primes.
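
    (A toy illustration of the Dictionary-Go-Round mentioned above; the mini-dictionary below is invented, but it shows how looking words up only ever leads to more words, never out of the circle.)

```python
# Hypothetical mini-dictionary: each word is defined only in terms of other words.
toy_dictionary = {
    "afraid":  ["feeling", "fear"],
    "fear":    ["feeling", "afraid"],
    "feeling": ["fear", "afraid"],   # deliberately circular, like real dictionaries
}

def chase_definitions(word, dictionary, seen=None):
    """Follow the first word of each definition until we circle back to a word already seen."""
    seen = seen or []
    if word in seen:
        return seen + [word]          # the circle closes here
    return chase_definitions(dictionary[word][0], dictionary, seen + [word])

print(chase_definitions("afraid", toy_dictionary))
# ['afraid', 'feeling', 'fear', 'feeling'] -- we go round without ever reaching meaning
```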

    Replies
    1. Information on NSM:
      https://www.griffith.edu.au/__data/assets/pdf_file/0006/419064/Goddard_2010_OUP_Handbook_Ch18.pdf

    2. Annabel, you wrote:

            "The Natural Semantic Metalanguage (NSM) suggests that there is a list of semantic primes – words that are...not reducible and shared by all languages. The meanings of these words must then inherently be “programmed” into humans from birth."

      I had heard a little before about the semantic primitives of Anna Wierzbicka, in fact I think we corresponded a very long time ago, but I have never looked at it very closely because it didn't seem empirical to me. I mean it seems to be some a-priori hunches about what "basic meanings" might be, and then it becomes a kind of hermeneutic exercise in which all meanings are reduced to those primitives (the way Schenkerian analysis reduces all music to the cadence V/I).

      There may be something to it, but what's missing is how those primitives get their meanings. Saying they are "programmed" by evolution is not the answer: what is programmed, how, and why? What does it mean to "program" meaning?

      Our research is different, and data-based, in that we reduce real dictionaries to the smallest number of words whose meanings must already be known (somehow) and then all other words in the dictionary can be defined out of just those words. Those words turn out to be learned younger, more frequent in the language and more concrete. And about 1500 are enough. But they are not unique. Lots of minimal sets of 1500 words exist that can define all the rest.

      But our analysis only looks at the 99% of the words in a dictionary that are "content words," i.e., nouns, verbs, adjectives, and adverbs (chair, person, run, nice), which all have referents, and name categories, which have members. We ignore "function words", which have mostly syntactic or logical function (not, if, when, is), for which it might make sense that they are inborn formal constraints. But it's the content words that are needed for grounding.

            "These words are broken into categories such as logical concepts (not, if, because), mental predicates (want, feel, see), time (now, before, after), etc."

      The grounding problem concerns content words (which amounts to almost all words) rather than function words.

            "The “Chinese/Chinese Dictionary-Go-Round” problem is an issue with all dictionaries because dictionary definitions can often be circular."

      Not just "often"! All dictionaries are always completely circular. You can corral the circularity into a minimal grounding set, but to break the circularity, all those grounding words have to get their meanings other than through definition.
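
      (A rough sketch of the grounding-set idea just described, assuming a tiny invented dictionary and an arbitrary choice of "already grounded" words; the real analysis is carried out on the full definitional graphs of actual dictionaries.)

```python
# Toy dictionary: word -> words used in its definition (all data invented).
toy_dictionary = {
    "mare":    ["horse", "female"],
    "foal":    ["horse", "young"],
    "zebra":   ["horse", "stripes"],
    "stripes": ["dark", "lines"],
}

def definable_from(grounding_set, dictionary):
    """Iteratively mark as 'understood' every word whose defining words are all already understood."""
    understood = set(grounding_set)
    changed = True
    while changed:
        changed = False
        for word, definition in dictionary.items():
            if word not in understood and all(w in understood for w in definition):
                understood.add(word)
                changed = True
    return understood

# These words are assumed grounded some other way (e.g. sensorimotor learning), not by definition.
grounded = {"horse", "female", "young", "dark", "lines"}
print(definable_from(grounded, toy_dictionary) >= set(toy_dictionary))   # True: the rest can be defined
```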

    3.       "NSM goes about ‘explicating’ words using only the universal semantic primes to create a unique definition"

      But there "explicating" means "translating" words and sentences into just those semantic primitives, but without explaining how the primitives get their meanings -- so without solving the symbol grounding problem. (It's also not clear whether every word and sentence can be translated into those primitives: With our dictionary analysis, we know every word can be defined out of the words in any one grounding set.)

            "Although the NSM approach doesn’t tell us how these representations are formed, I think it may be leading us in the right direction. If one can somehow program a T3 with this set of semantic primes, then we can see whether these truly are the building blocks to language by seeing whether other symbols/words can then be grounded and learned by the robot."

      The grounding problem is a problem with "programming" (i.e., computation, symbol manipulation). Meaningless symbols and syntactic rules for manipulating them. How would "semantic primitives" get their meaning? That's the symbol grounding problem. (We'll get to that next week. No secrets: It's through sensorimotor category learning -- not "programming"... And it is based on inborn and learned sensorimotor feature detectors.)

    4. It may be a bit of a leap to assume that semantic primes can tell us what we need to know about meaning because you are right that we don’t know how they themselves get their meaning. It is however the first approach, to my knowledge, to try to break down language into such a basic form, which is why I find that this approach is going in the right direction. While NSM may not be purely empirical, corpora are used to find lexical correlations and frequency as evidence. This may be a bit of a stretch, but I wonder what you think of sound symbolism as it relates to iconicity? This linguistic approach is certainly more empirical, although it may be a bit abstract to prove that there is some type of a universal “sound-symbolic substrate” that is at the core of all language. I also see a flaw in this approach because it certainly doesn’t relate to all symbols/words and therefore may simply be suggesting a coincidence.

      It may be too easy to say that evolution did the work for us by ensuring that we were born with basic ideas and concepts that later were more susceptible to having symbols/names attached to them (these may be the 65 primes or the minimal sets of 1500 words that you mentioned) but up until now, this seemed to me like the only explanation. You are right though that this doesn’t explain exactly what is being programmed or how/why it is being programmed. It seems that we are given the tools (our brain and sensorimotor capacities) at birth to eventually be able to ground all of the meanings of content words that we encounter.

      So now, to create a robot that is Turing-indistinguishable, we need to create one that can ground symbols. I understand that having sensorimotor capacities is a key to this grounding process and therefore a robot that can see and hear can have a grounded perception of what these words mean. Here we can connect our class discussion on affordance and how our actions in relation to an object can relate back to its meaning (eg. a robot with opposable thumbs like ours can then understand what a round door knob is and what one does with it). Are we assuming that interacting with objects is our best way of grounding their names? If this is true and we create a T3 that can do all of these things, then have we discovered what the brain is doing to generate meaning (connecting an object/referent to how one would interact with it)?

    5. Annabel, I may be wrong, but I think much of the evidence for NSM is post-hoc and interpretive, which makes it more like the interpretation of texts -- or of dreams, or of the stars, or of the constitution (i.e., hermeneutics) -- than like the experimental testing of a hypothesis that can be shown to be wrong. When it comes to the explanation of meaning itself, hermeneutics (even with the help of statistical correlations) can be a false guide, yielding only confirmatory "evidence." That's the "hermeneutic circle."

      There is a bit of onomatopoeia in all (oral) languages (and gestural languages a lot more). But what makes words language is that their iconicity (resemblance to what they denote) is irrelevant (rather like a dead metaphor, like the "legs" of a table). But since I don't even believe the theory that language began orally, I certainly don't believe in any universal "sound symbolism." Even universal gesture symbolism across all sign languages would only be a quaint left-over side-effect, not related to meaning. Ditto for whatever residual iconicity there is in ideographic languages like Chinese. (See my reply to Peihong in the week 6 overflow.) So if there is any "universal" sound symbolism (some have suggested "mama" comes from the sound of nursing), it too is just a bit of left-over odds and ends, of no real significance for either grounding or the nature of meaning.

      The dictionary's minimal grounding sets are not like NSM's "primitives" because the 1500-word grounding sets are not unique (even though they tend to be learned earlier, more frequent, and some are more concrete): Very many different combinations of 1500 words can each ground the whole dictionary. And even if there were universal words, present in every grounding set, there is no reason to suppose their meanings were innate. The only words whose meanings might be partly innate are some of our function words (if, then, not, is) but those words have no referents and are not counted in our minimal grounding sets, which are only content words (i.e., category names).

      You're still using "programming" in a way I can't understand.

      To connect words to their referents means connecting them to the members of the categories that they refer to. That means most of grounding is not linguistic at all. (And it deepens the mystery of why other apes don't go on to have language.)

  18. Re: Natural Language and the Language of Thought.


    I know that we already discussed in class the Piraha tribe, and how this was an instance of Everett (a pygmy) critiquing Chomsky (a giant). However, I’m confused about how rationalism and empiricism fit into this article, T3, and the case of the Piraha tribe. If I understand correctly, Chomsky believes that language is innate, not some learned capability. But if this is true, then why does T3, as suggested in class, require sensorimotor feedback? Doesn’t sensorimotor feedback have to do with experience? And if language is an innate ability, doesn’t that suggest it is encoded somewhere in our brain? If language is innate, shouldn’t T2 be enough?

    Replies
    1. The capacity for language (combinatorial symbol manipulation) is innate, but specific languages (English, French, Chinese) are not. That means none of the words (at least, content words) are given but have to be learned through experience in the world. And, you can only experience the world via sensorimotor capacities. Otherwise, the symbols in language are just meaningless squiggles (like Chinese characters to a non-Chinese speaker). T3 needs sensorimotor capacity because it's robotic by definition. The suggestion is that T2 is not enough because, being purely computational, it would have nothing to ground its symbols in, and would therefore (it is suggested by Harnad) fall short in terms of its abilities to discuss anything about the physical world.

  19. "The symbols, in other words, need to be connected directly to (i.e., grounded in) their referents; the connection must not be dependent only on the connections made by the brains of external interpreters like us. The symbol system alone, without this capacity for direct grounding, is not a viable candidate for being whatever it is that is really going on in our brains (Cangelosi & Harnad 2001)."

    While this article was very helpful in bringing together all we have learned so far this semester, I am still struggling a bit with the necessity of sensorimotor capacities for symbol grounding and consciousness. While I understand that they are important to a person's understanding, I feel that they cannot be the only thing that matters for consciousness.

    Replies
    1. I agree with your confusion and argument. Perhaps we need to go a step further in elucidating what sensorimotor capacities allow for in terms of contributing to consciousness. In the next paragraph, Harnad writes, “But if groundedness is a necessary condition for meaning, is it a sufficient one? Not necessarily, for it is possible that even a robot that could pass the Turing Test, "living" amongst the rest of us indistinguishably for a lifetime, would fail to have in its head what Searle has in his: It could be a Zombie, with no one home, feeling feelings, meaning meanings.”

      From this quote I take it that feeling is what sensorimotor capacity is supposed to make possible, over and above ensuring that symbols are grounded. Meaning is something that cannot just be explained; it must be felt by the ‘being/robot’. Thus without sensorimotor capacities, while a robot could process literal and other forms of meaning, it would never feel meaning or significance, which is another dimension of meaning that is currently unique to human cognitive capacity.

    2. Re: your confusion about the necessity of sensorimotor capacities in symbol grounding and consciousness, I think that I interpreted it in a way that can be best explained through a historical anecdote which may or may not be accurate: anecdotally, Helen Keller (a blind and deaf scholar) was unsurprisingly non-verbal and behaviourally erratic until a time when her teacher put her hand under a stream of water and tapped out the word "water" in Morse code on her arm. I think that 'symbol grounding' lies in this initial connection between the symbol for water in Morse code (otherwise meaningless taps) and recognising the sensory experience of water.

    3. So if I'm understanding this correctly, the reason sensorimotor capacities are required for symbol grounding is to have this direct connection between the symbol and its physical referent, which is why Harnad mentions that the connections can't just be dependent on the connections in our (the interpreters') brains? Do our sensorimotor capacities allow us to actually understand and attach meaning to symbols, such that we wouldn't fully understand and be able to ground the words "tree" or "water" without knowing what these feel and look like? Then to answer your question, Eugenia, I think that with consciousness alone and without sensorimotor capacity we wouldn't be able to ground all of the symbols in our world, which is hard for us to imagine since we do have these sensorimotor capacities. To touch on what Wei-Wei Lin said, individuals with sensory deficits may therefore lack the understanding and meaning behind certain symbols, since they don't have the same interaction with the world and can't link referents to those symbols in the way we do, which prevents them from being grounded.

  20. Firstly, I feel it is important to make the distinction that we created the symbols; they weren’t simply there for us to interpret. So the meanings were in our brains, and we found a way to express those meanings to others by forming symbols. Aren’t the ungrounded words on a page very similar to the grounded words in our brains? Without the right capacities (the means to interpret and say the words we want to say out loud meaningfully), we would not be able to express what is grounded in our brains. For example, people with Wernicke’s aphasia are able to produce language, but it won’t be meaningful. So aren’t the ideas grounded, in the way we used to think before we produced language, and not the words themselves? Thus, I feel we need to discuss this issue without talking about the symbols we are using, which just simplify the way we think and the way we attach meaning to things. How would we think and act without language and symbols?

    In addition, although picking out referents is a dynamical property and it is grounded, is it not dependent on our past experiences and personality? We have a basic system composed of these, and when someone says a word, we interpret it in the most logical way, picking the referent with the highest probability of being meant in that context. Is Siri not accomplishing this even now? Maybe Siri is only doing it at a basic level, but would picking out referents really not be possible by programming several meanings and having the AI choose the most probable one, continually updating the meanings and possible referents?
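
    (A hedged sketch of the kind of context-based referent picking described above; the words, referent labels, counts and the crude "context boost" are all invented, and real systems use far richer statistical models.)

```python
from collections import defaultdict

# counts[word][referent] ~ how often, in past experience, this word picked out that referent
counts = defaultdict(lambda: defaultdict(int))
counts["bank"]["river_bank"] = 5
counts["bank"]["money_bank"] = 6

def pick_referent(word, context):
    """Pick the candidate referent with the highest (crudely) context-weighted count."""
    candidates = counts[word]
    def score(referent):
        boost = 2 if any(c in referent for c in context) else 1   # naive contextual boost
        return candidates[referent] * boost
    return max(candidates, key=score)

def update(word, referent):
    counts[word][referent] += 1   # keep updating from new experience, as suggested above

print(pick_referent("bank", ["river", "fishing"]))    # river_bank
print(pick_referent("bank", ["deposit", "money"]))    # money_bank
```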

  21. I disagree that the symbols in Searle's Chinese Room or in the Chinese/Chinese dictionary are meaningless. For symbols to be meaningful they just need to be logically significant, i.e. have the potential to designate something properly in the language they are written (or spoken) in. Meaning is not something we attribute to the symbol; it is inherent to the symbol. Searle's problem is accessing the meaning in order to link it to a referent. Meaning itself can't refer to anything and does not provide information: the issue is whether the person or computer receives information from the symbols, i.e. whether it can make the meaningful symbols it manipulates refer to something.

    Replies
    1. I don't think meaning is inherent in a symbol. The meaning we attach to symbols is perfectly arbitrary. So yes, the Chinese symbols have meaning to Chinese speakers, but only because they are familiar with the arbitrary relations between the Chinese symbol system and the world.

      I'll illustrate with an example in English. In English, the word 'apple' refers to a particular kind of red, round fruit growing on a particular kind of tree. But if I could get every English speaker in the world to agree to swap the meaning of the word 'apple' with the meaning of the word 'cat', then I could say "I'm petting my apple and eating a cat." It would make perfect sense, and vegans wouldn't give my statement a second thought.

      Then I could swap the meanings of 'pet' and 'eat' and say "I will eat my apple then pet a cat" and it would mean (in our version of English) "I will pet my cat then eat an apple."

      What I've done here is alter the meanings of symbols in a perfectly reasonable way. The only thing stopping my version of English from coming to fruition is social convention, and social convention is not inherent. Therefore, the meaning of symbols cannot be inherent.

    2. Here you are not changing the meanings of the words but what they refer to. The word "apple" now refers to a cat, but the way you refer to it, i.e. the meaning, has not changed; it is still "apple".

  22. "Meaning is grounded in the robotic capacity to detect, identify, and act upon the things that words and sentences refer to (see entry for Categorical Perception).

    But if groundedness is a necessary condition for meaning, is it a sufficient one? Not necessarily, for it is possible that even a robot that could pass the Turing Test, "living" amongst the rest of us indistinguishably for a lifetime, would fail to have in its head what Searle has in his: It could be a Zombie, with no one home, feeling feelings, meaning meanings."

    If I understand this passage correctly, Harnad is saying that grounding and meaning are distinct and potentially separable. In this sense, a T3 zombie could successfully ground its symbols through sensorimotor interaction with things in the world, but would have no meaning attached to those symbols. On the other hand, I both ground my symbols and know the meaning of the symbols.

    If the key difference here is that I feel but the zombie doesn't, then is meaning not just another word for feeling? Or perhaps meaning is something humans do, such that an animal can feel but knows no meaning? Of course, animals don't have linguistic symbols to mean anything, but they certainly interpret and interact with things around them in a coherent way, can learn categories, and have basic communication. Is this not also a form of meaning in a more basic (or at least non-linguistic) sense?

  23. From Harnad’s Symbol Grounding Problem, we understand that identifying individual referents by “association” does not seem to explain much; rather, categories and feature detection are what is necessary to explain how we do what we do. Take the example of a horse: identification by “association” would mean memorizing every sample horse you ever encountered, but horses come in endless variety of colour, size, and form. It seems we won’t be able to reverse-engineer a T3 by forming said “associations” between symbols and referents. That is why Harnad's paper talks about feature detection: we have to learn which things are and are not within a category (in this example, the capacity to detect what features are shared by horses that are not shared by other things). If we were to still stick to the word “association”, then I believe we can say that grounding always starts with referent-action associations, which, as Stevan says, is “learning to do the right thing with the right kinds of things.”

    There is one question that I have wondered about, though perhaps I have found my own answer: the case of bilingualism. Once I already have a grounded first language, any second or third language can also inherit this grounding through translation. Coming back to just one language, it seems there must be a point where we have sufficient grounded words to ground the rest of the remaining vocabulary. Yet in learning a new word we still usually experience a hybrid of verbal definition and explanation supplemented by sensorimotor examples. But in principle, it seems there should be a point at which we have enough grounded words, supported by previous sensorimotor experience, to learn new concepts and new words without further sensorimotor grounding?

  24. To present a counter-argument, I am going to apply the logic of symbol grounding to the problem of pedestrian navigation of the environment. Following the reasoning presented in the above sources, a street name (i.e. Duluth) refers to a specific street in Montreal. But knowing that this name refers to a specific street is not sufficient to confer meaning. The meaning of the street is not in the string of symbols "d-u-l-u-t-h". Rather, the meaning of the street is its physical location in relation to other streets. If someone told you to meet them at the bar at 200 Duluth and you didn't know where that was, they would give you cues based on relational information (above Napoleon, below Rachel). Sensorimotor ability is not necessarily a prerequisite for understanding the meaning of 200 Duluth. An agent given its geographic coordinates, receiving no other input, could still navigate its way there and know that, functionally, Duluth means the position between these other two positions (supposing that a category "street" has not been created). Now, if you were to ask this agent whether or not Duluth was a "good" street, presumably it would be stumped, as this goes beyond a purely utilitarian definition of the street.
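
    (A small sketch of this thought experiment; the coordinates below are invented placeholders, just to show that "between Napoleon and Rachel" can be computed from symbols plus numbers alone. Whether that counts as meaning is exactly what the reply below questions, since nothing here is grounded in sensorimotor experience.)

```python
# Hypothetical north-south positions for three street names (values invented).
streets = {
    "Napoleon": 45.516,
    "Duluth":   45.518,
    "Rachel":   45.520,
}

def is_between(street, south, north, positions):
    """Purely relational check: does 'street' lie between the other two positions?"""
    return positions[south] < positions[street] < positions[north]

print(is_between("Duluth", "Napoleon", "Rachel", streets))   # True
```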

    Replies
    1. I don't quite follow your line of thought here. You argue that sensorimotor ability is not necessary to ground the meaning of Duluth, but in the same paragraph you argue that the meaning of Duluth is 1) physically relational, which depends on a prior grasp of sensing the physical environment and being able to relate locations to one another; and 2) based on the ability to navigate to that street, which requires motor ability and previous experience of such motor ability as a way to ground what it means to navigate. As Harnad has argued in the past, for a machine to talk convincingly about sensorimotor aspects of the world (even at the T2 level, e.g. navigation and spatial relations), it would need to have those sensorimotor capabilities (T3) in order to have real experience and a grounded understanding of what it was talking about.

  25. RE: "There is a school of thought according to which the computer is more like the brain -- or rather, the brain is more like the computer: According to this view, called "computationalism," that future theory about how the brain picks out its referents, the theory that cognitive neuroscience will eventually arrive at, will be a purely computational one (Pylyshyn 1984). A computational theory is a theory at the software level; it is essentially a computer program. And software is "implementation-independent." That means that whatever it is that a program is doing, it will do the same thing no matter what hardware it is executed on. The physical details of the implementation are irrelevant to the computation; any hardware that can run the computation will do."

    Does this paragraph suggest that the meaning of a word inside of a computer is “grounded” or “ungrounded?” In cognitive science we learned from Ned Block that the mind is like the software of the brain. If this is true does that mean that the word is grounded in the software? In the above paragraph it says that the word is “grounded” in the head. Is “in the head” the same as or equivalent to the mind? Since the software is embedded in the computer, does that suggest that it is grounded in the computer as well?

  26. As far as I can tell, this article states that the brain is able to ground symbols by directly connecting them to their non-symbolic referents. What I see as left up for debate is exactly what sorts of referents are “non-symbolic.” Are percepts such as images or sounds non-symbolic, or are they actually code (mentalese?) for the truly non-symbolic sensations that they produce? The only way I could see groundedness being an independent property from consciousness (as it seems to be described in this paper) and T3 as theoretically capable of symbol grounding is under the assumption that percepts as well as sensations are non-symbolic. Otherwise, the “independent functional role” of consciousness could possibly be symbol grounding itself.

  27. Harnad offers an explicit explanation for how meanings are ascribed to things – first in the iconic representation acquired through direct sensorimotor interaction, and thereafter in categorizing, based on common features, into broader categorical representations. In other words, we ascribe meanings to things, in the most basic form, through our sensorimotor capacity. From our sensorimotor projections we extract relevant physical details, then selectively reduce them to invariant features to establish categories for a given thing or concept. In relation to the argument over sufficiency, Harnad’s explanation of symbol grounding is certainly sufficient for building a T3 robot and for solving the Easy Problem, since both are only concerned with behavioural output. If we say it is insufficient because “it’s possible that a TT-passing robot would fail to have in its head what Searle has in his: it could be a Zombie, with no one home, feeling feelings, meaning meanings,” then we again fall subject to the other-minds problem, and that debate is superfluous since the problem is insoluble. We will never know whether the T3-passing robot is merely a zombie, any more than we know it about each other; and therefore consciousness should NOT be required for symbol grounding.
    If Harnad’s account is insufficient, then it is insufficient only in that it neglects to describe the alternative means through which we ascribe meanings to things with our feelings – which is a Granny Objection in itself, and leads us to the Hard Problem (which is beyond the scope of this class). The insufficiency in his proposal is its lack of explanation for how we ground more abstract phenomena, like “love” or “peace” – things that are grounded not so much in our sensorimotor experience as in our emotional associations. But I suppose the answer to this attribution of meaning to abstract phenomena is further categorization? Not just grounding everything we perceive of the world directly with our senses, but also encompassing the contextual cues that accompany the referent to form the concept as a whole. In this way, symbol grounding should ONLY require the “capacity to pick out referents”.


  29. RE: The Symbol Grounding Problem Remains Unsolved (Bringsjord)

    In T&F’s proposed solution to the Symbol Grounding Problem they suggest creating two-machine artificial agents (AM2) using a theory of meaning called Action-based Semantics (AbS). The robot cannot rely on “innatism” for meaning; therefore it cannot be pre-wired to act on or perceive an object in a certain way. It cannot rely on “externalism” either, as this would imply that the meaning of the object comes to it from the external world. I find this completely related to the nature vs. nurture debate: whether meaning is something that is innate from birth, or whether it develops as we discover the world. I guess this also links to Chomsky’s ideas about universal grammar.

    Reading T&F’s proposal made me think about how a child comes to understand the function, purpose and meaning of an object. The child interacts with it and learns from it in the real world over time. I think time is a key element here, because implementing any semantic capacities beforehand (which T&F’s proposal says we shouldn’t do) reduces the effect that time has in creating meaning. For example, a child that interacts with a ball (arguably) doesn’t already know what a ball is and how it works. However, as they grow, they see people playing with a ball in the park or on TV, they observe their siblings using it in particular ways, and with time they create context-specific meanings that come together to form a holistic understanding of the object. To relate this back to the SGP, I think that if we resolved the nature/nurture debate and nurture won, then the SGP would also be solved. But feel free to contest this claim.

  30. I found this reading to be very helpful in how it delineates the referent from meaning in a way that is relevant to our class discussion, especially concerning the Chinese Room argument.
    I have been struggling to try and explain to myself why I am dubious of the Turing Test having value when it comes to the question of understanding.
    I think it is unlikely that we could even know for sure if a machine programmed to pass the Turing Test at the T3 level was performing with grounded, and therefore "meaningful" symbols; even with the sensorimotor capacities which seem to be necessary for proper grounding, it is nigh impossible to be certain when meanings take on meaning.

  31. Harnad states, "one property that the symbols on static paper or even in a dynamic computer lack that symbols in a brain possess is the capacity to pick out their referents...because picking out referents is not just a computational property; it is a dynamical (implementation-dependent) property." I interpreted this as saying that picking out a referent is NOT purely computational. If meaning requires picking out a referent, and this is a quality of the brain that is implementation-dependent, then consciousness must be more than computation alone.

  32. When considering the question of whether a word's meaning inside a computer is more like the word on a page or the word in the head, it’s necessary to take into account the level of Turing test we’re comparing it to. If we’re looking at the input/output aspect of T2, then the computer would be like the word on a page, because picking out referents cannot be done through computation alone. On the other hand, if we’re looking at T3, then it seems reasonable to think that a word's meaning in the robot resembles the word in the head, because even humans don’t always know how we pick out referents. And since a T3 robot can interact with the world the way a human does, this should allow it to pick out the referents of words the way humans do.

  33. In the section on Words and Meaning, it is mentioned that some have suggested that the meaning of a referring word is the rule or features one must use in order to pick out its referent. The example given is "Tony Blair" having the same meaning as "Cheri Blair's husband" or "The UK's current PM." This certainly cannot be the case since one can find counterexamples to the referents being picked out. For instance, if Cheri Blair had multiple husbands, how would you know that Tony Blair is the husband being referred to by the sentence? In such a case, "Cheri Blair's husband" would not come closer to the meaning than "Tony Blair," and the phrase could even refer to a second husband, let's call him "Tommy Blair."

    However, something strange occurs when we pick out referents. We take a word's meaning, pick out its referent, and then the meaning appears in our brains. A connection is made between the inner word and the outer object, and this is partially explained in the section on Formal Symbols. The given example is arithmetic being a form of symbol-manipulation system based on shape rather than meaning. Although the symbols make sense (i.e. "1" is one and "2" is two), that sense is in our heads and not in the symbol system. Meaning can only be found in our minds, where there are dynamic processes involving sensorimotor activity, making connections in real time. Sensorimotor interactions with the world account for symbol interpretations, given these assumptions. In the end, it is important to ask how squiggles and squoggles end up being things we understand and assign meaning to. If groundedness is a necessary condition for meaning, what element can we find to be sufficient for meaning? Does it lie within the realm of our sensorimotor capacities?

  34. It appears to me that the symbol grounding problem lies strictly within the realm of communication, and doesn’t break into that of consciousness (at least not to a large extent). If we were to take a single person, and that person were living in this world completely alone with only nature (so without other human beings), would there still be a symbol grounding problem? If not, does this mean that symbol grounding is based on communication alone, i.e. on interactions between more than one person?
    The person described wouldn’t be able to pass a Turing test, but they would still be conscious. Based on this, it seems logical to deduce that computation and symbol grounding are only relevant when you add other people into the mix, which complicates things, as there are more exogenous variables to consider. To what extent does this really contribute to understanding consciousness, if communication is what symbol grounding is about?

    Replies
    1. This is a really interesting train of thought. I think there would still be some sort of symbol grounding problem for an individual alone in the world. In that case there could not be any formal language, but I would assume that the individual would create some sort of internal dialogue/language in order to "communicate" with oneself, much the way that we do now with our internal dialogue. Either way, there would still need to be meaning of some sort attributed to the thoughts and cognitions of the person, even if there was no concrete language. This meaning just would not have a need to be outwardly expressed to others.

    2. I agree with Rebecca that symbol grounding would still take place in the mind of the individual even if they had no language to build up an internal dialogue with themselves. In the 1990 paper that Prof Harnad linked, he spoke about the "inert taxonomy" that humans develop in order to keep track of categories. The categories themselves are shaped by our history of behavioral interactions with the referents. The names of different categories in this taxonomy are arbitrary and, although they are usually socialized into us if we have other people around, we would likely still make them if we were isolated.

      I think you're right that the person described wouldn't pass the TT, but I think this is just because the TT is meant to test for the typical human being. A person in a case like this is so atypical that we would not assess their cognizing with the regular TT, for the same reason that we would not use it to assess a comatose person (even though the person in both scenarios might be conscious and cognizing).

  36. "A symbol system alone, whether static or dynamic, cannot have this capacity, because picking out referents is not just a computational property; it is a dynamical (implementation-dependent) property."

    I am not sure I am understanding correctly: is the SGP an argument against computationalism or functionalism? These theories exist at the software level, which is implementation-independent, but here it is stated that the capacity to pick out a referent is implementation-dependent; does that make consciousness a separate property, where the rules that pick out the referent object exist?

    Equally, why is it assumed that there is a direct link between a referent and its meaning, as if these were discrete things that can be mapped directly onto one another? As suggested at the beginning, the referent takes on different meanings depending on the context in which it is placed. So meaning seems related to the goals of the actor, which in turn depend on the external environment, rather than arising from an attempt to arbitrarily map each referent to its meaning for every situation, a mapping with seemingly infinite potential answers that would be very computationally difficult to prune over any period of time.

  37. This is referring to the section on words and meanings. Specifically, “It is probably unreasonable to expect us to know the rule, explicitly at least. Our brains need to have the "know-how" to follow the rule, and actually pick out the intended referent, but they need not know how they do it consciously. We can leave it to cognitive science and neuroscience to find out and then explain how.”
    Remembering or attaching meaning to a referent seems reminiscent of language; in particular, of how we form sentences and how much of language is learnt versus innate. Studies show us that words are learnt, but children don’t memorise word order: they learn the rules for it without ever being explicitly taught them. There are rules that we cannot verbally describe but that we nonetheless know and follow. Cognition seems much the same: we know we do it, and there must be some system of rules or some basics (like the words grounded in sensorimotor behaviour) that are followed for said cognizing to occur. Linguistics studied this by developing syntax, which also incorporates logic, philosophy, semantics, and phonetics. Syntax is the study of the internal structure of sentences and tries to get at those innate rules that we are born with and develop, which enable us to order subject, object and verb. It does this not just for English, but also tries to come up with universal patterns that describe all languages with minor logical adjustments. Is syntax (coming up with a theory of how sentence structure is formed) explaining enough of the 'how' that we are looking for in cognition? Is the internal structure of the sentence grounded the way the individual words are? What gives the order of words in every language its meaning: the grounded words or the rules themselves?
    "I bit the apple." English speakers know that 'I' is doing the biting and that 'apple' is the receiver of the bite. Compare that to "The apple bit me." The word 'bite' is most likely grounded by sensorimotor action, as it's done with something we all possess (teeth) and is an action we perform every day. Has the grounding of this word similarly grounded the words in its vicinity, giving them the implicit meanings of doer or receiver? Or is that not meaning associated with an object, but rather with a relation between objects?

    Replies
    1. I think from a linguistic perspective, children don't memorize word order, like you said. But from a psychological perspective, I think they learn the order through trial and error and by observing others use language.

      For example, a very young child could say something like "Apple bite I", and I think adults would understand what this means regardless of the order; however, the child would learn that this is the wrong order when later hearing an adult say "I bite the apple", or by being corrected (there are different hypotheses about how exactly this happens).

    2. Hey, yeah, I think my question got lost in translation, but basically let's take a simple sentence like 'Anna bit the apple'. Here we can ground the individual words in sensorimotor experience: Anna (physical appearance, familial relation, etc.), bit (action done with teeth), and apple (crunchy, taste, etc.). When they are placed in a simple sentence we get the additional meaning of who is experiencing the biting (the apple) and who is doing the action. This is essentially theta-role theory from linguistics, but I'm just wondering whether this additional layer of meaning is also grounded (in language/sentences, not in performing the action), or whether it's more of a software-type thing, like Searle's rulebook: an algorithm that, given the grounded symbols, gives us their relation?

    3. By the additional meaning I mean the semantics given by the relation of the words in a sentence.

  38. "But if groundedness is a necessary condition for meaning, is it a sufficient one? Not necessarily, for it is possible that even a robot that could pass the Turing Test, "living" amongst the rest of us indistinguishably for a lifetime, would fail to have in its head what Searle has in his: It could be a Zombie, with no one home, feeling feelings, meaning meanings. And that's the second property, consciousness, toward which I wish merely to point [...]"
    This section implies that "consciousness," as well as symbol-grounding, is necessary for meaning. Is this really the case? And if so, how do we know? If something cannot feel, can it 'understand' (for lack of a better word - not the best term to use because this implies feeling) meanings?
    Let's say a robot was able to flawlessly associate symbols with their referents at high levels of complexity (i.e. not just one symbol, "apple," but entire sentences / paragraphs): its symbol system is completely grounded. Let's also say we could somehow be absolutely positive this robot is not conscious / does not feel.
    Do we say that these words and sentences do not have meanings for the robot? What is it about feeling that makes it necessary to meaning? In order for something to have meaning, do we have to feel what it feels like to understand?
    One objection would be that the robot might not "know" the difference in meaning between "Tony Blair" and "Cheri Blair's husband" as they have the same referent. But let's say the robot could also formulate sentences which indicate it "knows" the difference in meaning. Would this help the robot's case? Or could Tony Blair and Cheri Blair's husband only have different meanings for the robot if it felt / was conscious?

    Replies
    1. If we take again the example of the CRA (I hope I'm not over-using it, but it seems relevant in this case), I believe it is possible to have a robot that 1) can perceive the same stimuli we do; 2) acts like us based on those stimuli (if it "sees" an obstacle, it steps aside, etc.); and 3) contains a Searle homunculus that translates physical stimuli (e.g. vibration of a tissue that acts like our eardrum --> electrical signal --> Searle) into physical outputs (Searle --> output electrical signal --> movement of the head). Then again, we have dissociated the homunculus from the robot/interlocutor/the thing that passes the Turing test. So the robot you describe might in the end pass the Turing test, but if you agree that in the first version of the CRA there is no understanding, I think in this case there would not be understanding either.

  39. The passage describes meaning as being made up of a head, a word inside it, an object outside it, and whatever processing is required to connect the inner word with the outer object. However, I think we need to consider whether, for a word to have meaning, it would need to have meaning for more than one person. This does not necessarily mean that a word could not have a different meaning for one person than it does for others (as in the case of someone misunderstanding a word), but that the word would need to mean the same thing for at least two people. I believe that, if a given symbol and its referent were only linked for one person, this would simply be matching two stimuli, as opposed to understanding a meaning. Therefore, I would argue that more than the head, word, object and processing are required for symbol grounding: there needs to be equivalent processing in more than one person’s head.

    Replies
    1. This is an interesting thought, I think it brings us back to the problem of others minds – can a word really mean the same thing for at least two people? I find it easier to imagine it being possible for mathematical language, but what about abstract concepts like love/fear? Maybe for these more abstract words we infer a shared meaning with the other by believing our projections onto others as true to their experience. An ability for empathic experience provides me the impression that I am feeling what others too feel, especially by effect of mirror neurons. But then I’d still argue that in this case each has their respective meaning.
      I'd also challenge your claim that a symbol would lack meaning if it were only linked to its referent for one person. I think poetic experience with words can be subjective, abstract, and still meaningful to that person. I do value your definition of meaning, though, because it suggests that our way of grounding in the world depends on the other, on shared experience, which I think is elementary to co-operation as a species.

      Delete
  40. In some of the readings from previous weeks, I had expressed concerns about how Searle could claim to have zero understanding of Chinese despite having memorized all the rules of symbol manipulation in Chinese. I think this stemmed from not fully comprehending what symbol manipulation means, which the section on the Chinese Room in this reading made clearer for me this time around. Searle's argument against the symbolic theory of mind is based on the notion that manipulating shapes is not the same as knowing their meaning: he is performing syntactic manipulation (on the forms of the symbols) without knowing the semantics (their meanings). So the TT is not being passed on the basis of understanding (the symbols are ungrounded), but on the basis of manipulating the shapes of the inputs. Can we then conclude that grounding is a necessary condition for understanding? Though I have to wonder whether it is enough.

    ReplyDelete
    Replies
    1. I think from this we can conclude that grounding is necessary (a bunch of squiggles and squoggles are just that until some action has been performed that they describe, ascribe, or build upon). Whether grounding is sufficient for understanding is a really interesting question. I think that might lead into the hard problem: is it sufficient to be able to perform actions, allocate symbols to represent them, and have an algorithm to manipulate them, without having the feeling of doing so? From the multitude of words available to describe emotions alone (one part of feeling), I would presume that perhaps grounding also requires feeling in addition to sensorimotor experience.

      Delete
  41. It seems as if the article is only talking about “meaning” with respect to symbols in language. However, I think it would be interesting to imagine an example in which an action could have meaning, without necessarily the intention of communicating it. For instance, if a person were to act in a certain way because they felt a certain emotion, wouldn’t the symbol (the action) be external, and the referent (the feeling) inside the head?

    ReplyDelete
  42. The articles on symbol grounding really helped synthesize concepts from previous lectures. As I was reading the 1990 article contrasting the symbolic model of the mind with connectionism, I felt that the strengths of connectionism (as well as the limitations of a symbolic model) were understated. I found the hybrid model and the explanation of symbol grounding satisfactory (i.e., a symbol system must be grounded in the real world through sensory inputs in order to have meaning). However, in my view, the limitations of a symbolic model go beyond the problem of symbol grounding and thus call for greater emphasis to be placed on connectionism (or another equally compelling theory) as an explanatory model for cognition.

    Here are the reasons I think this: Harnad (1990) states that symbolic models of cognition necessitate a set of "explicit rules": "It is not the same thing to 'follow' a rule (explicitly) and merely to behave 'in accordance with' a rule (implicitly)." However, I would argue that a great deal of human cognition (and thus behaviour) is rooted in these implicit rules. In fact, I would argue there are very few situations (such as arithmetic) in which we consider only explicit rules. Language does have a syntactic structure that can be reduced to a set of explicit rules, but it also has implicit rules that convey often ambiguous meanings. Similar ambiguities exist with any symbolic or categorical representation. If a zebra is a horse with stripes, but you paint a zebra so that it is entirely black, is it a zebra or a horse? If your judgment were based purely on the explicit rule, you would have to say it is no longer a zebra, because a zebra has stripes. In reality, however, there is clearly an implicit rule that painting a zebra black doesn't transform it into a horse. So, first, I think the symbolic model is problematic because, to be effective, it would need to define and make explicit the countless vague and ambiguous implicit rules we encounter in everyday life (I've put a toy sketch of this explicit/implicit contrast at the end of this comment).

    I also think that in reality humans receive a constant stream of sensory information and interpret and filter it in a subjective manner. If you see the word "tear," you are going to interpret this stimulus differently depending on its context within a sentence (e.g., "tear rolling down your cheek" vs. "tear your shirt"). It is in this way that symbolic modelling falls short and the advantages of connectionism become evident. For a symbolic (or computational) model of cognition to parallel the human brain (or surpass T3), it must be capable of facilitating continuous crossover between the abstract computations performed on symbols and the semantic interpretation of those symbols with sensory input from the real world, because that is how our brains operate in the real world. Symbol grounding alone is insufficient. We rarely make computations divorced from their contextual meanings, and those meanings can change as we receive new sensory input. And, of course, our sensory input is also filtered through our attention (we attend selectively to the stimuli we deem "important" for whatever reason and ignore others).

    In my opinion, overcoming these challenges would be necessary in order to achieve T3. We would first need to make all the implicit rules of human behaviour explicit (even things as complex and nuanced as emotional expressions). A machine would need to constantly integrate relevant sensory input with its computations and produce the appropriate outputs in order to "pass as human." Theoretically, this doesn't seem impossible (assuming the computational power of the machine were great enough). In fact, considering how often we as humans misidentify or miscalculate situations (e.g., misread someone's tone of voice or facial expression), it's even possible a robot could be better than us at "being human," since it would be more efficient at processing multiple sources of information and applying statistical analyses.
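    As promised above, here is a toy sketch of the explicit-rule vs. implicit-judgment contrast from the zebra example (the feature lists and the threshold are invented; this is only meant to make the contrast concrete, not to model how we actually categorize):

      # Hypothetical contrast between an explicit rule and a graded, implicit
      # judgment for the painted-black zebra example above.

      ZEBRA_PROTOTYPE = {"horse_shaped", "striped", "hooved", "african_savanna"}

      def explicit_rule(features):
          # rigid definition: zebra = horse with stripes
          return "horse_shaped" in features and "striped" in features

      def implicit_judgment(features, threshold=0.5):
          # graded overlap with the prototype; no single feature is decisive
          overlap = len(features & ZEBRA_PROTOTYPE) / len(ZEBRA_PROTOTYPE)
          return overlap >= threshold

      painted_zebra = {"horse_shaped", "hooved", "african_savanna", "all_black"}

      print(explicit_rule(painted_zebra))       # False: the rule says "not a zebra"
      print(implicit_judgment(painted_zebra))   # True: still more zebra-like than not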

    ReplyDelete
    Replies
    1. Oops I meant surpass T2 (in third paragraph).

      Delete
  43. In the notes section it's stated, “Similar considerations apply to Chomsky's (1980) concept of "psychological reality" (i. e., whether Chomskian rules are really physically represented in the brain or whether they merely "fit" our performance regularities, without being what actually governs them). Another version of the distinction concerns explicitly represented rules versus hard-wired physical constraints (Stabler 1985). In each case, an explicit representation consisting of elements that can be recombined in systematic ways would be symbolic whereas an implicit physical constraint would not, although both would be semantically "intepretable" as a "rule" if construed in isolation rather than as part of a system.”
    This raised two questions in my mind:
    How can physical constraints be semantically interpretable "as a rule," even in isolation?
    Is this saying that symbolic or physical constraints are ONLY semantically interpretable in isolation and not in a system?

    ReplyDelete
    Replies
    1. I'm going to take a shot at answering your first question: how can physical constraints be semantically interpretable "as a rule," even in isolation? By "physical constraint," I understood a constraint that is built into the mind, such as Chomsky's universal grammar. In that sense, examples of constraints, such as wh-islands, would be physically present and grounded in someone's mind, the proof being that these rules are present globally, across cultures and languages, and have been present for thousands of years. In this case, the rule does not have to be explicit or part of a system in order to exist; it can be interpreted in complete isolation.

      Delete
  44. One thing I found really interesting was the idea that someone could not learn a first language by only reading a Chinese/Chinese dictionary, although one could (with difficulty) learn a second language that way.

    First of all, it is unusual for there to be an advantage in learning a second language over learning a first language (within the critical learning period). However, it makes a lot of sense, seeing that a person with no prior language would have no previous knowledge of how symbols represent meaning and so could not use that knowledge to help make sense of the repetition of unknown symbols in a dictionary.

    In addition, this example could help explain why it is so difficult to learn a language past the critical period: possibly because the brain developed without an explicit system of symbolic meanings and representations used for communication. The human brain may at some point find other ways to ground meaning and may not be accustomed to mapping that grounding onto an explicit, rule-based symbolic system.

    ReplyDelete
  45. It seems to me that implementing pockets of knowledge (say, a wealth of knowledge about camels, but within the realms of cigarettes, animals, or music) would be useful for creating a viable TT candidate: different topics can overlap to help simulate the dynamic, implementation-dependent property of our own brains, and based on context the candidate can choose the correct application of a word or phrase, much as we do with priming. Even hot-and-cold guessing games simulate this kind of analysis, so even if the candidate doesn't understand the actual referents themselves, it can clue in to their meaning through more and more referents. Unfortunately, the machine still isn't understanding, just like Searle's Chinese Room, but it could be a step towards programming a candidate that can juggle information the way the human brain can.
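    A toy sketch of the kind of narrowing I have in mind (the "pockets" and words are made up; this is only an illustration, not a claim about how such a candidate would actually be programmed):

      # Hypothetical "pockets of knowledge": each topic lists the symbols it covers.
      # Context words narrow down which pocket (and so which use of "camel") applies,
      # in a hot-and-cold fashion: more overlapping clues = warmer.

      POCKETS = {
          "animals":    {"camel", "hump", "desert", "mammal"},
          "cigarettes": {"camel", "pack", "smoke", "lighter"},
          "music":      {"camel", "album", "progressive", "band"},
      }

      def best_pocket(context_words):
          # score each pocket by its overlap with the surrounding context
          scores = {name: len(words & context_words) for name, words in POCKETS.items()}
          return max(scores, key=scores.get)

      print(best_pocket({"smoke", "pack", "ash"}))     # -> cigarettes
      print(best_pocket({"desert", "hump", "water"}))  # -> animals

    Of course, nothing here understands anything; the point is only that overlapping referents can steer the candidate toward the right usage.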

    ReplyDelete
  46. In the 2003 article, Professor Harnad describes natural language as a formal symbol system, one where 'meaningless strings of squiggles' become 'meaningful thoughts'. Frege's "On Sense and Reference" distinguishes the 'referent' of an expression (the thing the symbol refers to) from its 'sense' (the way that referent is presented). For example, 'a measure of the average thermal motion of molecules' and 'temperature' have the same referent but different senses.

    An interesting extension of this topic is Donald Davidson's paper "A Nice Derangement of Epitaphs," where he argues that "there is no such thing as a language, not if a language is anything like what many philosophers and linguists have supposed." His argument rests on the same Fregean distinction between sense and reference, specifically on how we can understand someone who makes errors in speech if language is a formal system. For example, a malapropism (such as George Bush's famous "They have miscalculated me as a leader") is still understood by the listener to mean what the speaker intended. Davidson proposes that in any speech interaction the important part is not the formal language, but rather how we are prepared to understand and be understood. Given the frequent errors in the formal structure of language, and their lack of effect on understanding, he maintains that speaking is purely an interpretive process. The 'sense' of any given utterance is in this case irrelevant, as long as the listener can arrive at the same interpretation of the symbols as the speaker. Davidson concludes that these conventions exist only within a conversation, and therefore do not exist in a formal 'language' that we all share, because the structures are not conserved. Shared grammatical rules cannot, in his mind, be the foundation of understanding.

    While this argument is a bit extreme, in my mind it relates to the symbol grounding problem. If the formal language (squiggles and squoggles) is inconsistent, full of errors, and yet still interpreted without effort by humans, then there is an interpretive mechanism which allows understanding to occur. Not only are symbols meaningless on their own, they are inconsistently used.

    ReplyDelete
    Replies
    1. I think this argument further weakens the relevance of the symbols themselves: there are many examples where the word used is incorrect and yet understanding is still communicated (e.g., your friend makes a mistake and substitutes one word for another, yet because of the context of the conversation you understand what they meant; sometimes you don't even notice the mistake).

      In isolation, without other information from which your brain can make inferences, I think the integrity of the symbol is more important. For example, when reading the word "dog" alone on a piece of paper, the shapes themselves need to be pretty intact; but in the context of a book about animals, where other symbols confer other meanings, the exact symbols of the word "dog" would, I think, matter less. So the symbols themselves become less important when more information can be extracted from other, related symbols.

      Delete
  47. Maybe this will get clarified somewhat in next week's lesson on categorization, but for now I'm wondering if we have an "icon" grounding problem as well…

    "Icons" were loosely defined in class to mean gestural-symbols, sound-symbols, sight-symbols, etc. All of these were said to have an evolutionary bias, and so are not symbols. There are some instances I can think of where evolutionary bias is clear, for example, when we see a growling mouth of teeth we are biased by our evolution to experience fear, and therefore to have a somewhat predestined emotional-referent for this symbol.

    Yet there are other instances where it is not so clear. For example, how did we come to learn what a wink means? Presumably, there was no evolutionary/genetic bias guiding us to interpret this gesture as something in particular. Evolution didn't predict the adaptive value of expressing cheekiness. How did we come to ground the meaning of a wink?

    If we don't know how it is that we ground a new gestural symbol, let alone how it is that we come to express a more innate association like fear, why are we jumping ahead and trying to solve the "linguistic-symbol" grounding problem? Isn't grounding icons in meaning a lower-order process than grounding symbols in meaning? Isn't it plausible that we are integrating with our early icon-meaning groundings when we learn our first words? When we ground the word "apple," we are grounding it WITHIN the icon-grounding of the apple.

    I understand that solving the symbol grounding problem is interesting in its application to building AI that can, through symbol manipulation, pass T2. However, the course is pointing us to believe that cognition is not purely computation, so isn't figuring out how we categorize anything, both icons and symbols, relevant to the easy problem and to solving T2? Since we can study how animals learn categories, isn't the study of animal development a plausible route to solving the symbol grounding problem?

    ReplyDelete
  48. But that does not settle the matter, because there's still the problem of the meaning of the components of the rule ("UK," "current," "PM," "Cheri," "husband"), and how to pick them out.
    Perhaps "Tony Blair" (or better still, just "Tony") does not have this component problem, because it points straight to its referent, but how?


    I find it interesting that this article suggests that meaning somehow originates from its symbolic component (after being "processed" in our brains). Couldn't it be the other way around: the meaning generated in our brain is what leads to the creation of the symbol? For instance, there could be two or three different men named Tony in a room; "T-o-n-y" is not what points to the referent, since more than one person is associated with that symbol. In the same way, "grateful" and "thankful" are different symbols that point to the same concept. This is why I believe that saying that meaning arises from the symbol might be wrong.

    ReplyDelete
  49. As I understood it, Harnad's 2003 article explains consciousness as the mechanism that "grounds" words: that which associates the word with its referent to create meaning (p. 2). It is said that minds mediate these intentions of picking out referents and executing them. But what, here, is the difference between consciousness and mind? The article concludes by saying that grounding may not be a sufficient condition for meaning, because a T3 that can associate words with referents may still not experience meaning (as demonstrated by Searle). This leaves the assumption that consciousness may be the second property of meaning, but that doesn't bring me any closer to an understanding of what that consciousness is.

    ReplyDelete
  50. In response to the wiki article:
    Re: The symbols inside an autonomous dynamical symbol system that is able to pass the robotic Turing test are grounded, in that, unlike in the case of an ungrounded symbol system, they do not depend on the mediation of the mind of an external interpreter to connect them to the external objects that they are interpretable (by the interpreter) as being "about"; the connection is autonomous, direct, and unmediated. 

    This is untrue. Searle's Chinese Room showed us how symbol manipulation can be done (also without an "external mind") without any understanding of what those symbols mean. Since symbol grounding precedes meaning, there is no reason to believe that a T3 is capable of symbol grounding.

    Just another comment on the wiki article: I find Damasio's theory of consciousness interesting, as it goes beyond a black-box definition of consciousness. The somatic marker hypothesis fits well with symbol grounding theory, where knowing what it "feels like" to know something may be elementary to a symbol's respective meaning.

    ReplyDelete
  51. The difference between discrimination and identification is a contentious one in cognitive science.
    When looking at prosopagnosic patients, for instance, identification of faces is not intact, although discrimination can persist. The face icon seems to be innate because of the evolutionary necessity of face recognition, and yet we have a dissociation between the two functions. A domain-specific view would claim just this: that the invariant features of a face have a holistic and innate claim on our cognition. But, as we know, many icons cannot be innate and must be learned from experience (e.g., expertise in antique cars).
    The grounding gap can be explained by feedback and connectionism, a view I tend to share, but it's hard not to lean towards a hybrid system, especially as domain generality seems more and more likely.

    ReplyDelete
  52. I think the first step of the symbol grounding problem relates to Harnad's symbol/symbol merry-go-round: how do humans ground the first symbol, or set of symbols, that then allows them to ground new ones? But the problem goes much farther than this. If you take a referent that many different words can describe, all of those words need to be connected in some sort of node with the referent and also with each other. Those words then need to be connected with their individual meanings, and those meanings might in turn be attached to the particular contexts in which they're used, to make their interpretation simpler. Essentially, all of these connections would create our vocabulary. What I find interesting is how language develops over time to create new words. For example, neologisms are created so frequently today that it's almost difficult to keep up with them. Furthermore, for these words to enter our vocabulary, someone obviously needs to come up with them. My question is: what capacities allow us to generate new words that refer to pre-existing referents?
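    Here is a toy sketch of what I picture by "nodes" and inherited grounding (the symbols and definitions are made up; it is only meant to illustrate how a new word could inherit grounding from already-grounded ones):

      # Hypothetical sketch: a few symbols grounded directly (by sensorimotor
      # experience), and new symbols grounded indirectly by composition.

      DIRECTLY_GROUNDED = {"horse", "stripes"}           # learned from experience
      DEFINITIONS = {
          "zebra":   {"horse", "stripes"},               # zebra = horse + stripes
          "zebroid": {"zebra", "horse"},                 # defined via another defined word
          "flurg":   {"blicket"},                        # defined only via an ungrounded word
      }

      def is_grounded(symbol, seen=None):
          seen = seen or set()
          if symbol in DIRECTLY_GROUNDED:
              return True
          if symbol in seen or symbol not in DEFINITIONS:
              return False                               # circular, or never defined at all
          return all(is_grounded(part, seen | {symbol}) for part in DEFINITIONS[symbol])

      print(is_grounded("zebroid"))  # True: its definition bottoms out in horse/stripes
      print(is_grounded("flurg"))    # False: its definition never reaches the grounded set

    The merry-go-round question is then how anything gets into DIRECTLY_GROUNDED in the first place, which no amount of extra definitions can answer.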

    ReplyDelete
  53. RE: Wiki article

    There's an experiment (it was on SONA last year) where you choose one of 6 or so nouns and you have to define it. Then, you have to define each of the words you use in the definitions. You have to continue like this until you close the loop and all of the words you've used have been previously defined.

    Doing this showed me how hard it is to avoid "infinite regress." It's nearly impossible without using shortcuts, shifty definitions, or other tricks to close the loops. It's also interesting that some words in the experiment proved more useful than others; I wonder whether words/symbols/etc. have degrees of 'groundability' in relation to other words.
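    Here is a toy version of that exercise (the mini-dictionary is made up): define each word only in terms of other words in the dictionary, watch the regress turn into a loop, then count how often each word is used in other definitions as a rough measure of its 'usefulness'.

      # Hypothetical mini-dictionary: every word is defined only by other words,
      # so every definitional chain eventually loops back on itself.

      DICTIONARY = {
          "big":      ["large"],
          "large":    ["not", "small"],
          "small":    ["not", "big"],
          "not":      ["negation"],
          "negation": ["not"],
      }

      def chain(word, steps=8):
          # follow the first word of each definition and watch the loop appear
          path = [word]
          for _ in range(steps):
              word = DICTIONARY[word][0]
              path.append(word)
          return path

      def usefulness(word):
          # rough 'groundability' in relation to other words:
          # how many definitions this word appears in
          return sum(word in defn for defn in DICTIONARY.values())

      print(chain("big"))  # ['big', 'large', 'not', 'negation', 'not', 'negation', ...]
      print(sorted(DICTIONARY, key=usefulness, reverse=True))  # 'not' does the most work

    The loop never closes on its own; something outside the dictionary (sensorimotor grounding) has to break the regress for at least a few of the words.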

    ReplyDelete
  54. If we can create a software program that exactly simulates how a human brain works, then what would separate some computer with this program from a human (aside from physical appearance)? Would the computer be able to ground symbols at all? Would it also be able to read a passage and pick out an answer that is not explicitly stated?

    ReplyDelete
  55. This comment has been removed by the author.

    ReplyDelete