Saturday 2 January 2016

8b. Blondin-Massé et al (2013) Symbol Grounding and the Origin of Language: From Show to Tell

Blondin-Massé, Alexandre; Harnad, Stevan; Picard, Olivier; and St-Louis, Bernard (2013) Symbol Grounding and the Origin of Language: From Show to Tell. In Lefebvre, Claire; Cohen, Henri; and Comrie, Bernard (eds.) New Perspectives on the Origins of Language. Benjamins.


Organisms’ adaptive success depends on being able to do the right thing with the right kind of thing. This is categorization. Most species can learn categories by direct experience (induction). Only human beings can acquire categories by word of mouth (instruction). Artificial-life simulations show the evolutionary advantage of instruction over induction, human electrophysiology experiments show that the two ways of acquiring categories still share some common features, and graph-theoretic analyses show that dictionaries consist of a core of more concrete words that are learned earlier, from direct experience, and the meanings of the rest of the dictionary can be learned from definition alone, by combining the core words into subject/predicate propositions with truth values. Language began when purposive miming became conventionalized into arbitrary sequences of shared category names describing and defining new categories via propositions.
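The evolutionary advantage of instruction over induction claimed above can be illustrated with a toy simulation. This is a minimal sketch of my own, not the authors' actual artificial-life model: the category rule, feature encoding, and cost scheme are all invented for illustration.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# A "mushroom" is a tuple of 3 binary features; it is edible iff
# feature 0 is 1 and feature 2 is 0 (an arbitrary toy category).
def edible(m):
    return m[0] == 1 and m[2] == 0

def induction_trials(max_trials=10_000):
    """Learn the category by direct trial and error: sample mushrooms,
    try eating each new kind, and remember the outcome, until every
    feature combination has been encountered."""
    seen = {}
    errors = 0
    for trial in range(1, max_trials + 1):
        m = tuple(random.randint(0, 1) for _ in range(3))
        if m not in seen:
            if not edible(m):   # trying an inedible mushroom costs an error
                errors += 1
            seen[m] = edible(m)
        if len(seen) == 8:      # all 2^3 feature combinations sampled
            return trial, errors
    return max_trials, errors

def instruction_trials():
    """Learn the category by being told the rule: one 'hearing' of the
    verbal description suffices, with no risky sampling at all."""
    return 1, 0

ind_t, ind_e = induction_trials()
ins_t, ins_e = instruction_trials()
print(f"induction:   {ind_t} trials, {ind_e} errors")
print(f"instruction: {ins_t} trial,  {ins_e} errors")
```

The inductive learner must pay sampling time and error costs that grow with the number of feature combinations, while the instructed learner acquires the same category in a single step, which is the core of the simulated advantage.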

81 comments:

  1. RE: “A symbol is essentially part of a symbol system […] based on rules that operate only on the shapes of the arbitrary symbols, not their meaning.”

    I understand that the shapes of words (i.e., what they look like on a page) in symbol systems such as the English language don’t have any meaning in themselves. However, what about pictorial languages and symbols? I don’t doubt that hominids who had begun carving tools and weapons had the intelligence to start creating images of deer in the sand or on a cave wall. After many encounters with deer, and this new encounter with a two-dimensional symbol of a deer, the hominids could recognize the symbol as representing the animal itself, despite the difference between the sensory-motor interaction with the symbol and the sensory-motor interaction with the animal. In this way, yes, symbol grounding is necessary for the origin of language. However, just as most dictionary words are grounded in other words (except for the “kernel”), I think that the arbitrary, meaningless symbols were also once grounded in pictorial symbols that did have meaning in their very shapes.

    ReplyDelete
    Replies
    1. Nimra, we'll be talking about iconic symbols next week, and about the transition from iconicity to arbitrariness. Used as mime, icons are not words. Once they become words, even if they started out iconic (resembling their referents), the resemblance loses its function and they might as well have been arbitrary. Sometimes the obsolete iconicity can still give a hint as to the meaning of a new word; this is especially true in Chinese, where all (written) words are combinations of basic icons:

      "Altogether there are over 50,000 characters, though a comprehensive modern dictionary will rarely list over 20,000 in use. An educated Chinese person will know about 8,000 characters, but you will only need about 2-3,000 to be able to read a newspaper."

      "Stevan says" that language did not begin with drawing nor with writing nor even with speaking (which has really limited iconicity) but with gesture and mime, which have much more iconicity than any of the other modalities, including static drawing. Then once reference and propositions begin, the icons become (or have become) conventionalized and arbitrary. (Surely reference began before language, with pointing to draw attention to a referent.)

      What words are grounded in is the (learned) capacity to recognize the category (or individual) that they refer to. Assigning the category an arbitrary name is then trivial.

      Delete
  2. "We will try to avoid the pitfalls of both the over-wide and the over-narrow view of language by proposing not a definition of language but a test to determine whether or not something is a language at all."

    My question is - isn't a test to determine language kind of the same thing as the definition of a language? Also, could this be related to the Turing Test? For example, if a robot passed the T4 test, it would be indistinguishable from humans in every way. Could one then say that a T4 robot = the definition of a human?

    ReplyDelete
    Replies
    1. I would say no...even though T4s would be indistinguishable in behavior and appearance, that does not mean indistinguishable "in every way". It would take a T5 robot to meet that description.

      I looked up "definition of human" out of curiosity and got:

      "relating to or characteristic of people or human beings."

      which is not very helpful, but I think that to be considered human to the full extent, you would have to match all of the key characteristics, which includes anatomical makeup. This is why a synthetic organism such as a T4 would not be human, while a T5, which is an actual organism, could be.

      Delete
    2. Laura, a test for something is not a definition. A litmus test will tell you whether something is an acid or not, but it won't define what an acid is.

      There are "operational definitions," like the behaviorists' "definition" of "hunger" as "number of hours of food deprivation" -- but of course that's not hunger either, just a correlate or predictor of hunger.

      But remember that even a definition is only approximate ("a picture is worth a thousand words"), but that definitions have to be made out of already grounded words.

      Yes, people can pass T4 (and T3, and T2), but that does not define what people are. (And if a Zombie can pass the TT, then it isn't even a correct test for what is a person. Turing's insight was that, nevertheless, it's the best we can do.)

      Dominique, dictionaries are good for finding out what a word means and how it's used, but they're not the place to go to find out what the referents of the word really are scientifically (e.g., biologically). You need encyclopedias or textbooks for that.

      Btw, that definition of "human" you found was circular. Of course all dictionaries are ultimately circular (that's part of the symbol grounding problem) but the circle needs to be wider than the definition itself. It's not informative to be told that "an X is an X"!

      Delete
    3. Yes definitely, I meant to acknowledge the circularity (perhaps I was not clear enough in my "not so helpful" statement), but I was mostly interested in discussing the "characteristics" part of the definition.

      Delete
  3. I questioned how AIs would learn language after reading this article. If there are innate rules of grammar that we all share, and if they are necessary to produce language at our level, then would language ever truly make sense to an AI below T4, or maybe T5? Would they ever be able to create, and go beyond the simple input and output we have programmed them to produce, without having the fundamental capacity for language ingrained in their physical/chemical structure, given that language is the gateway to all of this?

    Furthermore, the explanation of how animals seem to lack motivation for learning language was really interesting, and it explains a lot. So where did this motivation that only our species was able to find come from? Could such a specific motivation ever be built into an AI? Motivation is such an abstract and innate concept to us now. It is hard to explain why we are motivated to do certain things. It is also hard to explain why someone thousands of years ago would be motivated to do something beyond reproduction and survival, such as pursuing language. It seems that we had to have motivation before we started seeing the real benefits of language to really pursue and learn it. I feel like it might have seemed more like an effort than something beneficial at the beginning. So where did this motivation come from in the first place, that we seemed to be able to find, but chimps can’t?

    ReplyDelete
    Replies
    1. “I feel like it might have seemed more like an effort than something beneficial at the beginning. So where did this motivation come from in the first place, that we seemed to be able to find, but chimps can’t?”

      It could be that chimps in the lab, without much context or motivation beyond food reward (which must at times lose its motivational power), don't see the point in taking the time and expending the energy necessary to learn, use and apply a language. It could be that there’s nothing motivating enough for them in the lab setting, and that they don’t entirely understand the point of communicating in the way we are teaching them – what would an early hominid do if suddenly (hypothetically) faced with a language? There’d be little point in thoroughly learning it, because no one else would understand; it would take time to teach it to others, and there likely already is a rudimentary system of communication. The language development process for humans was certainly slower than the amount of time it takes to teach a chimp sign language; perhaps more time is simply required before we see any improvement, or before the chimp sees any benefit. It’s not a hard answer to your question, but it’s all I can imagine would affect this motivational aspect of language.

      Delete
    2. I definitely agree with your point Amar. Chimps might not develop the ability to vocalize the categories that they learn, because this wouldn't really confer a significant advantage in a lab setting. In terms of what you said about the time it takes for the development of language to occur, this got me thinking about how these experiments on chimps are actually done. Baby chimps mature and develop much more quickly at the beginning of their lives than human infants do, and because of this I think it would be interesting to find out whether this kind of training has ever been done with baby chimps. Given the fact that they're still developing, their capacities/abilities are much more malleable, and this might have an influence on their ability to learn categories and potentially learn to vocalize them over time.

      Delete
    3. Regarding Deniz' inquiry about whether language would ever truly make sense to an AI below T4 or T5 - isn't the capacity for language derived from the capacity to have a symbol-sensorimotor system? In which case, a T3 would presumably have the foundation for understanding language, based on the following quote.

      "The natural candidate, of course, is direct sensorimotor experience: since words are the names of categories, we can learn which are and are not their members through trial and error induction."

      With a symbol-sensorimotor capacity, the AIs at T3 level might be able to categorize by both induction and instruction. Perhaps the way that the AIs would communicate would not be UG-compliant, but they might be able to communicate nonetheless.

      Delete
  4. “Our version of the very same criterion as Katz’s Glossability Thesis was the “translatability” thesis: Anything you can say in any natural language can also be said in any other natural language—though, it is important to add, not necessarily said in the same number of words.”

    While I agree that, technically speaking, anything in one natural language can be translated into another, there are sayings and phrases in some languages that simply do not exist in others. I am wondering how translatability reconciles these situations. Perhaps these seemingly untranslatable phrases can be explained by semantics and context, but the idea of translatability is puzzling in light of them. Even granting that translatability of language is not the translatability of individual words, certain phrases and jokes that make sense and have a particular semantic meaning in one language but are irreproducible in another make me wonder whether translatability covers only literal meaning.

    ReplyDelete
    Replies
    1. This is an interesting point Aliza! I definitely agree that there are some things that language conveys - humor, sarcasm, culture, beliefs - that are difficult to translate even with any number of added words. However, in its most literal form it is always possible to translate something from one language into another. In that sense, I think you are absolutely right that translatability means literal-meaning translation.

      In the end, some aspects of what someone is trying to get across (humor, etc.) do get "lost in translation". These aspects of language, however, are often tied to experience and although we can always "tighten the approximation" using more words, it is extremely difficult to be exact.

      Delete
    2. Hey Aliza, I think the important part here is that any proposition can be made in any language. So one language may not have exact words for 'the ostrich is break-dancing', but it can convey the fact that a large, silly-looking bird with a long neck is spinning on the ground to music. It's a silly example, but the point is that no matter what you want to say, you can say it in any language. Some languages might just do it in more or fewer words.
      This ability to express any proposition is what makes language distinct from what apes can do. They can sign for nouns, verbs, etc., but they can't create propositions using those signs.

      Delete
    3. Adrian summarized pretty nicely the “translatability” point between languages above, but I’d also like to add a brief musing on the ability of language to create meaning from its own form, as well. This applies most saliently in our daily lives through the beautiful joke format of puns – though a personal favourite of mine, I’m wondering how these fit into the framework of translatability between natural languages. If one natural language’s form (not necessarily the syntax, but the shapes of the arbitrarily-decided words used in the symbol system making up that language) is crucial to the underlying meaning of what is being said, how do we translate such things between languages?

      Delete
  5. [Comparing Humans to Chimps]
    So maybe that’s what evolution did with our species. Because we were more social, more cooperative and collaborative, more kin-dependent—and not necessarily because we were that much smarter—some of us discovered the power of acquiring categories by instruction instead of just induction, first passively, by chance, without the help of any genetic predisposition.

    I do agree with the author here, but I also feel that he is not mentioning the fact that our brains are also much more developed. Indeed, it has been argued that most monkeys do have the necessary structures (Broca’s, Wernicke’s, etc.) and the motor apparatus that would be necessary to produce speech. However, their brain structures definitely lack complexity, and the overall neural wiring seems to be missing. Could it be possible that our ability to categorize and our “universal grammar” abilities have more to do with our neuronal wiring and patterns than we would think?

    ReplyDelete
    Replies
    1. Josiane, I totally agree with you on your point regarding chimps’ brain structure and their cognitive capacities as the limiting factor for spoken language. In this way, the chimps are similar to programs attempting to accomplish T2, or Chatbot systems that have the capability to generate answers in a conversation. The Chatbot could easily print out responses that are fitting for the conversation, but its software/hardware is unable to generate/compute the correct answers based on our incredibly complex system of language. It’s possible that this lack of capability leads the chimps to be less motivated, because it is so difficult for them. This kind of thinking seems to contradict Dr. Harnad’s idea that motivation is the preventative factor in chimps speaking, instead claiming that there is a problem of cognitive capability.

      Delete
    2. Josiane, while it is true that in this excerpt the authors do not bring up our physical differences, I do not think that means they fail to acknowledge that our brains are more complex than apes’. Many authors, like Chomsky and Pinker, have suggested that our intelligence is the root of what gives us the propensity to take advantage of the possibilities that propositions, and a symbol system, can offer a species. This is a rather compelling idea that I think these authors acknowledge. However, these authors are offering another lens through which to view the question: motivation. Maybe language hasn't purely resulted from the natural selection of genes that code for the ability to use propositions, but rather from a more Baldwinian model.
      The development of language areas in the brain is a chicken/egg toss-up. We don't know, in the generations where language was first developing, whether the brain came before the ability, or the ability caused the brain to adapt. Like most adaptations, it was probably an interaction between environmental forces and physiology. The fact that we have a Wernicke's area and a Broca's area and apes don't does not tell us why we can speak and apes cannot. As was said in Fodor's article, the when/where of the brain does not give us the how/why.

      Delete
  6. Just a small comment (that is relevant to but not directly from the reading): I read in a study (I can't find it anymore, please let me know if you know which study I'm talking about) that the minimum number of words necessary to build a dictionary is around 100 in English (and French, and generally in Latin-based languages). So, starting with one word, defining it, and then defining all the words in that first word's definition, it is possible to "bite your own tail" and define every word using only words in the definitions of others, with a minimum of 100 words. So I was wondering what that means in terms of symbol grounding. If each word is a symbol, and each symbol has a meaning (semantic meaning based on the categorization principle, so a "kind of thing"), then does it mean that ultimately we understand the world with 100 symbols? It seems very counterintuitive to me, so maybe it would be less incorrect if I said: we can understand subsets of the world with a minimum of 100 symbols? But then it is still incorrect, because a dark room can be explained with 2 symbols (a binary for "room" vs non-room, and another binary for "dark" vs lit). Does anyone have insight on how this finding of the ~100-word rule can be explained cognitively?

    ReplyDelete
    Replies
    1. Hey Julie!

      I had a lot of similar questions during the week we spoke about symbol grounding (see my post in week 5). There are some theories that touch on this idea. Some say there are minimal grounding sets of 1500 words that can be used to define the other words, while others claim there are even fewer than that (65 in NSM). It is probable that some of these words are more cognitively salient (the Boolean connectives - and, or, not - if, then...). It could also be possible that some of these words are somewhat innate, while others are just easier to learn because they describe aspects of everyone's experiences and are encountered more frequently. Ultimately, those words that seem to be most salient are grounded through sensorimotor category learning, and then the rest builds from there.

      Delete
    2. As you said, the increased salience of function words/Boolean connectives makes a lot of sense when you consider the cognitive necessity. In some of these cases, they seem to be necessarily innate because you can't, for instance, ground symbols (even inductively) at a human capacity to create propositions without having some prior understanding of the verb "to be" with its associated tenses. It is a crude example but short of UG, the gradient of salience may at its extreme be absolute necessity.

      Delete
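The minimal-grounding-set idea discussed above can be sketched as a reachability check on a dictionary treated as a directed graph: each word points to the words used in its definition, and a kernel suffices if, starting from it, every other word eventually becomes learnable from definitions alone. The toy dictionary below is my own invented example, not the authors' actual dataset or algorithm.

```python
# Toy dictionary: each word maps to the set of words used in its definition.
toy_dict = {
    "animal": {"thing", "alive"},
    "dog":    {"animal", "bark"},
    "bark":   {"sound", "dog"},   # circularity: "bark" and "dog" define each other
    "sound":  {"thing"},
    "alive":  {"thing"},
    "thing":  {"thing"},          # irreducibly circular: defined only by itself
}

def grounds_everything(kernel, dictionary):
    """Return True if every word in the dictionary can be learned from
    definitions alone, starting from a sensorimotor-grounded kernel.
    A word becomes learnable once all words in its definition are known."""
    known = set(kernel)
    changed = True
    while changed:
        changed = False
        for word, defn in dictionary.items():
            if word not in known and defn <= known:
                known.add(word)
                changed = True
    return known == set(dictionary)

print(grounds_everything({"thing"}, toy_dict))                  # False
print(grounds_everything({"thing", "alive", "dog"}, toy_dict))  # True
```

Note that {"thing"} alone fails because "dog" and "bark" define each other, so neither can be bootstrapped from definitions; adding either one to the kernel breaks the circle. Finding the smallest such kernel is the graph-theoretic problem behind the ~100-word (or ~1500-word) estimates mentioned above.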
  7. This comment has been removed by the author.

    ReplyDelete
  8. I found the part discussing proto-languages very interesting, because I took a Historical Linguistics course last semester. The main problem with disproving the claim that is made is that there is not enough verifiably valid linguistic data. Most proto-languages are reconstructed via the comparative reconstruction method, which is the best method found. There are certain ways to decrease uncertainty in reconstruction; however, a large uncertainty remains. Therefore, the terms that can be reconstructed accurately tend to be the morphemes most resistant to change: kinship terms, functional connectives, etc. This does not give a complete enough picture to be able to say that words from modern Canadian English could not be explained in Proto-Indo-European. However, it does seem that it would be impossible to explain ‘surfing the web’ in Proto-Indo-European so that it would be grounded in the same way as it is for us today. The main problem being: we cannot accurately see whether the kernel words are present in proto-languages.

    Also, I found the notion of syntactic rules strange. It is true that syntax operates primarily on the symbols themselves, so you can say “colorless green ideas sleep furiously”; this was Chomsky’s demonstration that syntactic rules can function without considering semantics (meanings). However, there is interplay between semantics and syntax in many sentences, such that certain symbols can select different theta roles. Therefore, sometimes the way a symbol is grounded will affect the rules that structure the symbols into phrases or sentences. Then, on top of this, you have the roles of explicit semantics and pragmatics that make the utterance well- or ill-formed.

    Additionally, when considering sentences, I think there should be a distinction between truth-value and truth conditions. Generally, the truth-value tells you whether a sentence is true or false; truth conditions are what is needed to understand the meaning of the sentence. While it is hard always to know the truth-value of an utterance, it is easier to know its truth conditions.

    ReplyDelete
    Replies
    1. Adding on to your point regarding the grounding of words between proto-languages and modern languages, I think we can simply look at words whose meanings have shifted with cultural innovation. Simply take the example of "computer", which used to refer to a person who did computational work, but which now, for most individuals, is grounded in the man-made machine that allows me to type this response.

      From this possibility of semantic shift, I think your uncertainty about syntactic rules may be cleared up. While semantics constrain the understanding of certain syntactic structures, what syntactic rules do is allow for the possibility of concatenations of certain "kinds" of things. "Colorless green ideas sleep furiously" fails to make any sense to us, but it could make sense if the grounding of the words constituting the sentence changed, just as "computer" did: i.e., if the word "green" no longer referred to a colour, "ideas" referred to things that could sleep, and "sleep" were something you could do furiously, etc. I think it is a feature of any syntactic manifestation of universal grammar to accommodate all sorts of utterances that make no sense, simply because it would be inefficient to evolve a grammar constituted by syntactic rules that create only comprehensible sentences.

      Delete
  9. We can use computation to explain artificial formal languages such as mathematics, logic or computer programming languages because they are rule- and shape-based; but is natural language also solely a symbol system that operates only on arbitrary rules and shapes? If universal grammar is inborn, or selected by evolutionary forces as the authors argued in paper 8a, is it possible that different uses of grammar and syntax affect the cultural interpretation of the meaning of the symbols? For example, in English, or an Asian language like Chinese, we would express our liking for you as “I like you”, in which the verb conjugates with “I”. But in a Latin language like Spanish, it would be “me gustas tú”, in which the verb conjugates with “you (tú)”, as if it is some property of “you” that makes me like you. As such, is it possible that universal grammar varies among different cultural groups, and that the meanings the symbols or rules refer to change across ethnic groups? I don’t know if this question is very relevant, but it’s something I’ve always been curious about. For instance, would the use of language or grammar actually affect people’s socialization? If we still use “I like you” as an example, would a person whose mother tongue is English be more focused on his or her own feeling towards “you”, while a speaker of a Latin language focuses more on the properties of “you” that have attracted me?

    ReplyDelete
    Replies
    1. RE:"As such, is it possible that the universal grammar varies among different cultural groups and the meanings the symbols or rules refer to change across ethnic groups?"

      I think UG does not vary among different cultural groups, as UG is not about English/Chinese/Spanish/etc. but a set of innate syntactic rules that gives us the ability to communicate verbally. It is universal among all languages. If you think of the “translatability” thesis ("Anything you can say in any natural language can also be said in any other natural language"), you will see that no matter what language, or what sentence you want to make, "I like you" or "You are liked by me" will eventually get to the same meaning. The way a sentence is said might vary among spoken languages, but that might not be due to spoken languages shaping how people socialize or think; it could just be the most common way of saying it.

      Delete
  10. If language is really as powerful and necessary for our survival in an evolutionary sense, why don’t all animals evolve to be able to speak languages? As in the experiments with apes and chimpanzees: regardless of the training given, they never learned the languages and were not motivated. So does that mean that sensorimotor induction is enough for their survival, or that the body postures they use to communicate with each other are also like a language? Given that birds have different dialects in their songs, and whales make sounds in the sea, isn’t it too arbitrary to say that only systems with the inborn universal grammar we humans use are languages, and that the natural sounds other creatures make are no language at all?

    ReplyDelete
    Replies
    1. Hi Peihong,
      RE: “So does it mean that sensorimotor induction is enough for their survival or the body postures they use to communicate with each other is also like a language?”

      It seems as though the critical distinction to be made is between gestures and language. Language is thought to have evolved from gestures. Language has propositional power. It is the capacity to bypass show and go right to tell, by combining a finite set of grounded categories into new categories. Apes can name/categorize through sensorimotor induction, as well as communicate via gestures, but this is not the same as language. For some reason, they can’t go further to actually harness the power of language. The authors argue that this may be due to motivational differences between humans and primates.

      Delete
    2. @zhao peihong
      @Manda

      If it really were due to motivational differences between humans and primates that primates "can't go further to actually harness the power of language", I wonder what it was that made humans motivated. Apart from the explanation that human species were “more social, more cooperative and collaborative, more kin-dependent,” because these characteristics can be observed in the primates, what was the motivation factor? And did that motivation factor essentially lead to the development of the different brain structures and complexity that we observe in humans and primates?

      Delete
    3. So, according to our class on March 10th, our language possesses propositional power, which apes don’t have, and that’s why they can’t get involved in our class here even though they are capable of understanding a lot of things. Before language we have categorization, category learning, induction, imitation and intentional pointing; but language is propositional. Perhaps language had a predecessor that’s not quite language, like gesture?

      Delete
    4. I think some important aspects of evolution that help explain this are diversity and relative necessity. Genetic diversity comes from random mutation, so maybe somewhere along the line a human ancestor "got lucky" and had a precursor for language that an ape did not. Also, while language is evolutionarily useful for humans, it might not be for other animals. A silly example, but do snails really need to talk to each other? If a trait is either required for survival, or occurs naturally and doesn't harm survival, that's how it will pervade.

      Delete
    5. As Zhao said, our language has propositional power, which differentiates it from gestural communication. Language then must have evolved because as humans we developed a need for propositions in that propositions must have conferred a considerable advantage to the way humans can efficiently cooperate and socially interact. I wonder if the neural mechanism to process propositional power evolved as a byproduct of evolution, which then allowed us to develop language, or whether motivation/need for propositional language induced specific natural selection to allow our brains the capacity for this function? I suppose this question ties back to Pinker’s article as well, and whether Darwinian evolution can explain the complexity and transition from gestures to language or whether nonselectionists are right in asserting the complexity of language cannot be explained by natural selection.

      Delete
  11. I really like the “from show to tell” theory to explain the transformation and emergence of vocal language. Without knowing what language was really like before it became what it is today, or whether it was comparable to the forms of communication used by other species, I wonder if those species also have the capacity for the language that we have. We know that while great apes are extremely smart, they don’t have language and thus don’t have the ability to ground symbols and categorize, since language is essential for these. Different species often have their own non-verbal communication, though, and are able to communicate with each other. Why is it that we humans have transitioned to the “tell” stage, but other species haven’t, especially those as intelligent as apes? Is it purely because they don’t need the same level of communication that we do? Would they have the same capacity (by that I mean UG) had it been necessary for them to develop language, or is UG specific to humans?

    ReplyDelete
    Replies
    1. Furthermore, could it also be possible not only that apes, for example, are unable to perform symbol grounding or categorization because they don't have language, but that they don't have language because they don't have the same consciousness and cognitive processes that we have, such as feeling and thinking? Of course we can't figure this out, since it runs into the other-minds problem; but maybe it isn't a question of whether they have UG, or whether they have an evolutionary need for language as we do, but rather whether they even have the cognitive capacities that can utilize language.

      Delete
    2. I felt a similar way about the "show to tell" hypothesis, however I was a bit less convinced. While it makes sense for survival, why was vocal communication necessary for this? Many species are thriving because they are able to select food to eat that is safe for them. Other than having a more developed brain, what separated us as needing vocal communication? Is it the fact that we can more easily warn people - especially if they're not in touching distance? Or is it that certain animals may have a strong sense of smell/taste and are therefore able to distinguish something that may be poisonous to a better extent?

      Delete
    3. Eugenia : "But once a species picked up the linguistic ball in the gestural modality and began to run with it (linguistically, propositionally), the advantages of freeing its hands to carry something other than the ball, so to speak (rather than having to gesticulate while trying to do everything else)—along with the advantages of naming when the teacher was out of the learner’s line of sight, or at a distance, or in the dark—would quickly supersede the advantages of gesture as the start-up modality for language".
      This is a fair explanation of why language may have switched from gestural to vocal, but I don't think this is what Maya had in mind. The main puzzle is why do we have the ability for language (whether vocal or gestural) while other species, even very close to us in terms of intelligence, don't? If the ability to say just about anything provided by language is an evolutionary advantage, how come great apes are not even close to having it?
      One may say that our language is just a better, perfected version of communication and that with time apes will get closer to it, but this paper shows that language looks more like a trait distinct from other types of communication, one that cannot be reached by simply improving them. Whether other species will someday acquire the capacity for language is still a mystery...

      Delete
  12. My question is regarding the Pac-Man mushroom game experiment. I understand that it shows the power of language as instruction rather than induction. However, while reading about it, I thought to myself: would these Pac-Men be able to show innovation? For example, if it started snowing and mushrooms died quicker, would they be able to adapt?

    Is language enough to explain how it is we innovate? Or can language only facilitate? From what we have discussed it seems like the function of language is social collaboration and categorization. These are both, of course, extreme advantages to society, but they don’t explain how language could have resulted in house building or fire control. Did we have the capacity to do this before we named/categorized what we were doing? Is it possible that our intelligence and cognition skills came before the language?
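
    The trial-count advantage of instruction over induction that the mushroom experiment illustrates can be sketched in a few lines. This is a minimal toy, not the Cangelosi & Harnad model: the feature layout, the learner, and all names here are illustrative assumptions.

```python
import random

# Toy sketch (not the original simulation): mushrooms are binary feature
# vectors, and feature 0 alone determines edibility. An induction learner
# must discover this by risky trial and error; an instruction learner is
# simply told the distinguishing feature.

random.seed(0)
N_FEATURES = 5

def make_mushroom():
    return tuple(random.randint(0, 1) for _ in range(N_FEATURES))

def edible(mushroom):
    return mushroom[0] == 1

def induction_trials():
    """Risky encounters needed to isolate the distinguishing feature:
    each trial eliminates candidate features inconsistent with the outcome."""
    candidates = set(range(N_FEATURES))
    trials = 0
    while len(candidates) > 1:
        m = make_mushroom()
        trials += 1  # one sensorimotor trial (taste it and find out)
        outcome = edible(m)
        # keep only features whose value agreed with the outcome this time
        candidates = {f for f in candidates if (m[f] == 1) == outcome}
    return trials

def instruction_trials():
    """Hearing one proposition ('mushrooms with feature 0 are edible')
    transmits the category with zero risky encounters."""
    return 0

avg = sum(induction_trials() for _ in range(500)) / 500
print(f"average risky trials by induction: {avg:.1f}")
print(f"risky trials by instruction: {instruction_trials()}")
```

    Note that this only illustrates the savings in risky training trials; it says nothing about the slower per-instance reaction time of instruction-learned categories that the paper also reports.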

    ReplyDelete
    Replies
    1. Interesting questions! I personally think it’s likely that the cognitive components necessary for innovation, as you call it, were present before language began. As the authors of the piece point out, we are thought to have “affordance neurons”, which help us recognize what we can do with the things in our environment. To go off of your example, building a house without language might have involved detecting unusual affordances in the materials of the environment (e.g. that mud affords the building of high walls) and using those affordances to create something new, or innovative. Plus, other social species don’t have language capacities but might have affordance neurons, and I like to think that other species are just as innovative as we are.

      Delete
    2. Hello, I agree with Olivia that the cognitive capacity for innovation preceded language. Similar to Olivia's points about affordances, we were able, as inductive learners, to create and categorize in order to do the right thing with the right kinds of things before language emerged. We had the innovative capacity to create and build houses without needing to name the categories, as other species have also demonstrated.

      When we put a label on categories (through either gestures or a natural language), it ultimately serves to foster instruction learners, who can be more efficient by not having to do as much induction as those before them. So it seems that language fast-tracks our category learning, resulting in the advancement of our innovative creations, such as building complex housing or maintaining efficient fire control through communication.

      What do you think?

      Delete
    3. I agree that these cognitive components would have preceded language, because we could still learn categories through direct sensorimotor experience. As Olivia said, we already had certain affordances and could recognize what we could do with objects out in the world, and thus could already form categories and figure out what to do with things. However, we would not have had the advantage of learning by instruction or of creating the complex composite categories that language allows, so learning before language would have been more taxing and inefficient. Language fosters a unique social cooperation that greatly increases the capacity for innovation.

      Delete
    4. "Instruction requires far fewer training trials (in fact, a good instruction should allow correct categorization from the very first attempt), but it does take longer to apply the instruction to each instance encountered, at least initially; the reaction time for categorization when it has been learned by induction is faster than when it has been learned by instruction"

      This quote from Professor Harnad's reading seems to suggest that though instruction is the best way to optimize category learning, it is not as fast as induction in some cases. I am not quite sure why induction-based reactions are faster than instruction-based ones. Is it because instruction is based on language, which is evolutionarily more recent, whereas induction is an evolutionarily older process? Also, what advantage does induction-based learning have (after children acquire the core kernel of words that would help them define all other words)?

      Connecting this to Julia's point on cognitive components and affordances, could we suggest that humans are still better inductive category learners than instruction-based category learners because we have more affordances that support induction learning? For example, we tend to forget so much of what we have learnt through instruction but remember almost everything we have learnt through induction.

      Delete
  13. This comment has been removed by the author.

    ReplyDelete
  14. If it really is the case that the great apes are able to understand propositions but do not use them consistently/at all due to lack of motivation, could the same not apply to humans? There could be another element of language we are unable/unwilling to undertake that would provide us with some new capacity. A recent movie explored this idea, but I won't give any spoilers.

    ReplyDelete
    Replies
    1. Hi Colin,

      No, I don't agree that humans would lack motivation, because language is an evolutionary adaptation that is essential for survival. Proposing is adopted mutually between the learner and the instructor, which is unique to us human beings. To acquire new categories, we pick up new composite categories through observational learning and are then motivated to help others by sharing categories; this is how we get the proposition. Interestingly, chimps do not do this: they have the observational capacity and the ability to categorise, but they somehow lack the motivation to pass on categories the way we humans do. However, language is very important for survival, so I do not see under what circumstances we humans would lack the motivation to reciprocate.

      Delete
    2. The word motivation is used without any clear definition. I worry that it is just like the homunculus. It has explanatory power only in the sense of a black box. What exactly do we mean by motivation?

      Delete
    3. Austin, I really agree with you (if I understand your concern correctly). It seems to me that the notion of motivation is being used as a magical explanation for why Chimps, though technically capable, do not produce the propositional quality of human language (i.e. the quality that makes it so powerful and unique). I too would like a clarification. The way motivation is being discussed on this forum implies a sort of complacency amongst chimps. I don’t think it’s that simple. I also have seen some contradicting explanations from different students. My interpretation was that chimps can categorize (do the right thing with the right kind of thing) and arbitrarily name those categories through the process of induction. I thought it was clear that they cannot employ instruction because they do not possess the capacity of using propositions (“the apple is red” as opposed to just “apple” and “red” and “apple red”). These categories may be combined into “quasi propositional strings” but I thought there was a blatant capacity missing besides motivation here.

      Delete
    4. I think perhaps motivation can be read as something that would be evolutionarily advantageous to possess, and for chimps, for some reason, there was no evolutionary pressure pushing them to possess language. I think Colin's original idea is really interesting. If humans have been motivated by evolution to use language, and that has in turn enhanced our cognition, what other capacities might language have that we have not yet been able to tap into?

      Delete
    5. Hi Austin, I think that’s a very good point! The authors do seem to be throwing out the explanation of “motivation” without actually defining what it means. Motivation can be understood from a variety of different facets – i.e. general, educational, social, cognitive, etc. If, in this article, motivation is supposed to be taken as an overarching “umbrella” term, hosting a variety of different explanations, it actually takes away from the effectiveness of the authors’ arguments. It allows the freedom to fabricate many reasons for certain human behaviors, which ultimately undermines the argument's falsifiability. Moreover, the overarching “motivation” does not explain why this “disposition to learn” would actually affect language capacity specifically. So yes, it certainly does serve as a “black box”, or perhaps motivation is merely a euphemism for some other capacity that offers a more informative explanation. Nonetheless, a definition of motivation would certainly strengthen their argument.

      Delete
    6. I think something happening in this thread, and in the rest of the comment section for this paper, is the adoption of the false premise that language was created because it was necessary for survival. Language persisted because it may have increased the ability to survive, but it was not necessary at the time. (It could be argued that today language is necessary for our current situation, but that is not related to the evolution of language.)

      I also agree that motivation seems vague and a non-answer to the question. There seems no reason that motivation is the deciding factor for humans versus chimps. Where would this motivation have come from? How is it causal and not just a feeling (such as the motivation to move my arm, which Harnad argues is a feeling and has no causal powers)? And how can we be sure of the social differences between chimps and humans (the proposed source of greater motivation)? This brings us back to the other-minds problem: we can't be positive that the chimps' social system is not on a par with our own.

      Delete
    7. While motivation does seem to dodge the question, I interpreted the justification for chimps not “wanting” or being able to use propositionality in language, despite understanding it, as an evolutionary one: over time, natural selection favored those humans who wanted to use propositionality in language, as this dramatically enhanced survival. This led to the evolution of a species biologically predisposed to be motivated to use propositional language. To this extent, I wonder if breeding experiments have ever been proposed to elucidate such a claim, or if such an endeavor would even be ethical or achievable within a realistic timeline.

      Delete
  15. The author brings up kernel words and the Minimal Grounding Set (MGS) that were briefly mentioned in class last week. I was rather confused by the relationship between kernel, core and MGS. From what I interpreted, kernel words are words that can be used to define all other words, and they make up approximately 10% of the dictionary. But kernel words cannot be defined by other kernel words without using core words. On the other hand, core words define kernel words; however, the author does not go into much detail about exactly what core words are. Judging from the diagram in the article, it seems as though core words are a subset of kernel words grouped together? Is there a section in the dictionary that only pertains to core words?

    Also, I came across another confusion about the Minimal Grounding Set. The author uses the MGS to define kernel words, where "the smallest set of words is the minimal grounding set". I am confused about the difference between the two: are there many "minsets" in the kernel, or are they the same thing?

    ReplyDelete
    Replies
    1. From what I understand, there are three circles.
      (1 – largest) Kernel: words that define each other
      (2 – within Kernel) Core: not only define each other, but are not defined by any others
      (3 – Within Core) MGS: combinations within the core that can give rise to all other definitions
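
      The distinction between a grounding set and a minset can also be made concrete with a toy dictionary. Everything below is invented for illustration: a grounding set is any set of words from which every other word becomes learnable through definitions alone, and a minimal grounding set (minset) is a smallest such set. There can be several minsets, which is one reason "minset" and "kernel" are not the same thing.

```python
from itertools import combinations

# Invented toy dictionary: each word maps to the set of words in its definition.
defs = {
    "thing":  {"living"},   # "thing" and "living" define each other:
    "living": {"thing"},    # a circular core that grounding must break into
    "animal": {"living", "thing"},
    "big":    {"thing"},
    "female": {"animal"},
    "horse":  {"animal", "big"},
    "mare":   {"horse", "female"},
}

def grounds_all(known):
    """Starting from `known`, repeatedly learn any word whose entire
    definition is already known; succeed if every word gets learned."""
    known = set(known)
    changed = True
    while changed:
        changed = False
        for word, definition in defs.items():
            if word not in known and definition <= known:
                known.add(word)
                changed = True
    return known == set(defs)

def minimal_grounding_sets():
    """Brute force over subset sizes (fine for toys; in general NP-hard)."""
    words = sorted(defs)
    for size in range(len(words) + 1):
        hits = [set(c) for c in combinations(words, size) if grounds_all(c)]
        if hits:
            return hits
    return []

print(minimal_grounding_sets())  # → [{'living'}, {'thing'}]
```

      Here either "living" or "thing" alone suffices to ground the whole toy dictionary: each one breaks the circular core, after which every remaining word can be learned from its definition.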

      Delete
  16. This comment has been removed by the author.

    ReplyDelete
  17. Re: The disposition to propose: Intelligence or motivation? “So maybe that’s what evolution did with our species. Because we were more social, more cooperative and collaborative, more kin-dependent—and not necessarily because we were that much smarter—some of us discovered the power of acquiring categories by instruction instead of just induction, first passively, by chance, without the help of any genetic predisposition.”

    Two points of concern come to mind. (1) Motivation is akin to an empty explanation like the homunculus. What exactly constitutes motivation for the disposition to learn and use and recombine symbols?

    (2) I find it perhaps a bit too optimistic to characterize human beings as somehow more cooperative and collaborative than other creatures, such that we have language and they (seem) to have less language capacity. I would be more cautious in attributing the disposition to propose to motivation. I’m reminded of Hobbes: “Hobbes invites us to consider what life would be like in a state of nature, that is, a condition without government. Perhaps we would imagine that people might fare best in such a state, where each decides for herself how to act, and is judge, jury and executioner in her own case whenever disputes arise—and that at any rate, this state is the appropriate baseline against which to judge the justifiability of political arrangements. Hobbes terms this situation “the condition of mere nature”, a state of perfectly private judgment, in which there is no agency with recognized authority to arbitrate disputes and effective power to enforce its decisions.” I’m more inclined to believe that the State of Nature dominated and that people did live “nasty, brutish, and short” lives until the formation of government or state.

    This is why (going back to our class discussion) I am more inclined to find the basis for the disposition to propose within our sensorimotor capacity. It may be the simplest difference in our anatomy that affords a series of gradual ‘base’ capacities needed for language from which language coalesced. I conjecture that these base capacities include geometrical categorization as one of the primary places to begin. Geometrical categorization is underscored because it can reasonably be seen as necessary to ‘get about’ in the world that is full of objects. Geometrical categories then allow for the rule-based Boolean phrases to combine categories and to propose. I think this ultimately may trace back to a difference in vision in evolution. Therefore, I’m inclined to support the non-gradual change view of language, thus, not by natural selection.

    ReplyDelete
    Replies
    1. Austin, you are right that "motivation" is vague, but it does cover genetic tendencies like the duckling's disposition to follow the first moving object, or the child's disposition to name things during the vocabulary explosion. Baldwinian evolution works by enhancing the disposition to learn things quickly -- things that are predictably present in the organism's environment. It's part of the "laziness" of evolution.

      I agree that Hobbes got it backwards (but I don't really see the connection with Hobbesian speculation).

      It seems plausible that sequential (boolean) propositions evolved from pantomime -- and pantomime is both spatial and temporal (sequential). But I can't see the point of your special emphasis on geometry.

      Delete
    2. That bit of Hobbes was essentially pure conjecture based on zero data about the actual "state of nature". We now have tons of anthropological evidence from hunter-gatherer societies, which are our best estimates of what the human ancestral environment was like, and their lives are far from "nasty, brutish, and short." They are generally very happy and healthy, and collaborative by necessity, since this is the only way to survive in the wild. Christopher Ryan has written and spoken a lot about this if you're interested.

      (As something of a demonstration, take a walk in a forest somewhere and notice how much death and suffering you see versus how many animals living in peace. Of course, there is death and suffering, but let's try to think more proportionally.)

      Delete
  18. Re: the symbol grounding problem and the grounding kernel

    The symbol grounding problem is how words get connected to their referents. This connection can happen in two ways: through instruction or induction. As Blondin-Massé & Harnad have shown, a set of fewer than 1500 words needs to be grounded by sensorimotor induction in order for all the rest to be learned by instruction.

    To bring this back to T2, it is this kind of inductive learning that's essentially missing in purely computational models of cognition. The algorithm can use words, and can even learn new categories by combining ones it already knows, but the fact that it is lacking a grounding kernel (and, indeed, any categories learned by induction) is what makes it inadequate in capturing all of cognition. This would be why T3 at minimum is needed to explain what we can do.

    ReplyDelete
    Replies
    1. The article ultimately concludes that ‘tell’ supersedes ‘show’. The ability to use instruction appears far more beneficial than induction through trial and error. Not only is it faster to learn categories through instruction, but instruction using verbally communicated language allows us to create many other categories that cannot be made without the use of propositions. What makes the power of instruction over induction even more interesting is the fact that it can create categories that could never be learned through sensorimotor induction alone (for example, the peek-a-boo unicorn that is often mentioned in lecture).
      So in response to your comment, Michael, I completely agree with you: sensorimotor capacity is ultimately the reason why T3 is the minimum. We can create a category (like the peek-a-boo unicorn) that can never be attained through direct sensorimotor grounding, yet this category could not have been created without prior sensorimotor grounding of categories like animal, number, limbs, horn, visible, etc.

      Delete
  19. The paper convincingly demonstrates that language arose when attempts to communicate through miming became conventionalized into shared sequences of category names, which made it possible for humans to transmit new categories to one another, slowly allowing the shift from induction learners to instruction learners. This paper is comprehensive in that it ties together almost everything we have learned and discussed in this class.

    I also found helpful its distinction between formal languages, such as arithmetic or Python programming, and natural languages. Using the example 2+2=4, we know that these symbol manipulations are independent of meaning and purely formal. But once we start semantically interpreting them in our natural language, by knowing the meaning of “2”, “equals”, and “true”, we start to manipulate words based on their meaning. This leads to the understanding that formal languages are parts of natural languages.

    Bringing back what we learned at the beginning of the course about formal symbols and the TT, we know that T2 lacks the capacity for inductive learning and is thus insufficient to capture cognition, and we conclusively concurred that T3 is the right level at which to test cognition. We know perfectly well that we can describe a sensorimotor experience in words as closely as we like, but the description would still fall short of the experience itself (the "a picture is worth a thousand words" example). So it seems that the Strong Church-Turing Thesis, despite being about formal simulation and computation, is also applicable to natural language.

    Taking the points above and considering that formal language is a part of natural language, we could conclude that natural language would be, in the least, as strong as computation (formal). But does the fact that natural language is grounded in sensorimotor make it possibly stronger than computation? Or would it be irrelevant since computation is only concerned about symbol manipulation, and not their semantic interpretation?
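
    The point that formal manipulation is shape-based while interpretation is ours can be illustrated with a toy rewrite system. This is a hypothetical sketch: addition done as string rewriting over tally marks, where the rules consult only the shapes of the symbols, never their meanings.

```python
# Sketch of "symbol manipulation independent of meaning": addition as a
# purely shape-based rewrite on strings of tally marks. Nothing in the
# code "knows" what the marks mean; the semantic interpretation (that
# "||" is two, and that the result is true of quantities) is ours alone.

def rewrite_add(expr):
    """Rewrite 'x+y' by repeatedly moving one '|' across the '+',
    then returning the right side once the left is exhausted.
    The rules mention only symbol shapes, never quantities."""
    while True:
        left, right = expr.split("+")
        if left == "":
            return right
        expr = left[:-1] + "+" + right + "|"

print(rewrite_add("||+||"))  # → "||||"  (which we *interpret* as 2+2=4)
```

    The machine churning through these rewrites is doing exactly what a formal symbol system does: syntactically correct, semantically blind. That the output is interpretable as arithmetic truth is what grounding would have to add.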

    ReplyDelete
  20. I enjoyed this article, because it helped me understand an evolutionary emergence for language use that does not assume that, ‘it just happened’ or is irrelevant to understanding language use. I think this step is crucial, because gorillas in several aspects (spatial cognition, visual perception, ability to mimic, etc…) display similar intelligence to humans, but do not speak. Indeed, they do not seem capable of understanding language the way that we do (although it is difficult to confirm this without speaking to them). At some stage in our evolutionary past our ancestors selected for linguistic traits in a way that they did not, such that we can now instruct each other on this very distinction without needing to be in the same place, or period of time.

    Specifically, the transition from ‘induction learners’ to ‘instruction learners’ seems to be a good place to ground this inquiry. It has the additional benefit of answering how humans first learned to associate symbols with meanings, answering the symbol grounding problem for evolutionary history. As pointed out by the author and many others in the thread, a sufficient number of grounded categories allows us to define the rest, and thereby begin to teach them to each other. Insofar as this capacity conveys a survival benefit, it explains both the ‘how’ and ‘why’ of language development.

    I wonder, however, how we learned to infer the meaning of incorrectly spoken utterances. We can understand that George Bush means ‘peacemakers’ when he says something like “We'll let our friends be the peacekeepers and the great country called America will be the pacemakers”, yet this is not explicit in the language. Is the capacity for understanding malapropisms one of UG, or one that is learned? Perhaps something different altogether? I think that an understanding of language evolution needs to address the ability to successfully interpret incorrect (badly formed) utterances as well.

    ReplyDelete
    Replies
    1. Wow! Interesting point, Edward. I think humans make similar mistakes when it comes to incorrectly spoken utterances. As UG points out, there are certain types of errors that we NEVER make. Conversely, there is a subset of mistakes we are prone to make, in spelling, articulation, morphology, etc. I think this is mostly a result of genetic factors (how similar the vocal production mechanisms of two sounds are, mental mechanisms of sentence structure, patterns of memory recall, etc.).

      In your George Bush example, I think we are able to infer what he means because it is a phonological error and we know the correct form. Of course, someone who does not know what a peacemaker is would not be able to identify this error. In this way, the capacity to understand malapropisms is learned, because without knowing the correct form, you can't identify the error.

      Delete
    2. Hello Edward and Elise, interesting points and examples from both of you. We know that all languages are UG-compliant, so, as Elise said, UG violations are hardly ever made; this, we assumed to be innate. However, as for spelling, vocabulary, articulation, and other ordinary grammar, we learned these like anything else, with our capacity for categorization, right? We would get feedback as to correct or incorrect instances, and would therefore be able to identify malapropisms.

      Delete
    3. Hi Ted,

      I think it's a great point; mistakes are a large component of human language, and especially of learning another language. I think your example can be taken even further: at what point does a language become incomprehensible? What threshold of grammatical errors can I make before it makes no sense? For example, when I poorly communicate in French, there are tons of mistakes in situational use (I speak only proper grade-school French), accent, prosody, and grammar, and I don't speak with any particular emotional inflection as I do in English - in other words, there are a lot of hoops to jump through to understand what I mean to say. And yet these mistakes are trivial and predictable. Learning a second language likely uses many of the mechanisms from learning a native language, so it's interesting to think through the differences between learning your first language and the languages that follow it.

      Delete
  21. Re: "When we successfully learn a new category by sensorimotor induction, our brains learn to detect the sensorimotor shape of the feature(s) that reliably distinguish the members from the nonmembers."

    While I agree that sensorimotor induction is most often the initial means by which we learn to categorize, I would also like to point out that instruction may also allow us to notice new things that we may not have interacted with before. It seems as though the process of induction is slow but necessary, and that instruction can occur quite rapidly and has unlimited capacities only after sufficient induction has occurred. We interact with shapes and symbols in the environment through direct sensorimotor experience and later on form ideas and words about them. Such words can then be used in conjunction with other words to form new categories and new concepts.

    If we take a centaur from Greek mythology, for instance, one must imagine a mythological creature with the upper body of a human and the lower body of a horse. Before visualizing/imagining that this could even be possible, we would first have to categorize both humans and horses. And even before that, would we have to have been inducted or instructed to find out what humans and horses are? Why is it then, that we can imagine such a creature even though we know it is "not well formed" based on our sensorimotor experience (assuming nobody has been in contact with a centaur before)?

    Section 4.2 demonstrates an experiment by Cangelosi & Harnad (2001) that shows how instruction may surpass induction in the capacity to form new categories. The advent of language was the beginning of rapid instruction, as simple pantomiming required previous induction of objects in the world and experiences. This section says that you can't convey new categories by pantomime alone but I disagree since one could already do this with words alone. Induction of simpler categories would first have to occur, but instruction can form an endless list of new categories.

    ReplyDelete
    Replies
    1. You raise solid points about the potential advantages of instruction beyond the bypassing of the sensorimotor experience and associated risk necessary for induction.
      Instruction has the additional benefit of broadening the scope of grounded concepts to add new categories.
      The article mentions the lack of "motivation" amongst apes to continue to explore their linguistic capacity, or to "run with it", which makes me wonder if the training of sign language in apes can be considered instruction... It sounds like it is a pure exercise in symbol grounding, but more abstracted ideas (like your centaur) seem impossible to provide context for.

      Delete
    2. Neil, I think you may be confusing pantomime with formal languages, regarding pantomimes being used to categorize, and therefore containing propositions. To categorize objects, one works solely with propositions (what does and does not belong in that category), whereas words are different than pantomimes in that we use them to construct propositions. For example, there isn’t a gesture that means “to be/ is,” and if there was then it would have to be a part of a formal language since trueness and falseness can only come about after symbols are given meaning through their systems. Pantomimes don’t convey meaning, and can’t be either true or false, just simply ideas or objects.

      Delete
  22. RE: " In many ways, the origin of language amounts to the transition from show to tell". "How was language born? And why? And what was the adaptive advantage that it conferred?"

    Although it is obvious that both show and tell are essential, I am convinced that learning by instruction is evolutionarily advantageous (as demonstrated in the experiment). It is more efficient because it allows more combinatory power. We can convey more complex ideas and knowledge (e.g., there is a bear that comes to location x at time y) that cannot be conveyed by mime. As Prof. Harnad often says in class, evolution is lazy, and learning categories through instruction is less risky than discovering every category by induction (i.e., going to location x at time y and encountering the bear).

    The authors point out that before orality, language was non-vocal and likely local and interactive. This makes me wonder how our language will evolve as technology continues to advance and play a crucial role in our lives. We have gone from language without speech, to speech, and again to language without speech, where we can convey whole ideas, emotions, tone and intentionality entirely online. It is interesting to think of how much language has evolved, and how it may continue to do so.

    ReplyDelete
  23. This comment has been removed by the author.

    ReplyDelete
  24. I found section 5.1, “The disposition to propose: Intelligence or motivation?” to be the heart of the paper.
    The fundamental difference between humans and apes is our ability to utilize propositions, i.e., truth values. I think it’s interesting to consider the root of this difference as motivational rather than intellectual: as a product of Baldwinian evolution rather than pure natural selection. Pinker/Bloom frame the question as a difference in intelligence, in that we have a Chomskian inborn learning mechanism and a protolanguage, while Harnad et al. frame it more as a motivational difference. Harnad et al.'s alternate framing has given me food for thought.
    In the final paragraph, Harnad et al. state that instruction requires fewer training trials but that, at least initially, it takes longer to apply the instruction to each instance encountered. I hypothesize that, evolutionarily, this lengthy initial application period represents leisure time that apes simply have not had. Because we humans discovered a means to cook our food, we were able to garner far more calories for our efforts than monkeys ever could dream of. By cooking our vegetables, we didn’t have to spend a huge percentage of our caloric intake on digesting them. We built shelters so we didn’t need to waste our energy shivering in the cold. We made weapons and built traps so we didn’t have to kill animals painstakingly with our bare hands. I wonder if this all freed us up to take the gamble of investing time and energy in passing on knowledge to our kin, first gesturally, and eventually by developing a symbol system to more efficiently instruct our learned categories to kin. Harnad et al. also state that we were “more social, more cooperative and collaborative, more kin-dependent” than apes. Perhaps our dependence on our kin, our social nature, and our leisure time all coincided to give us a “propositional attitude”. Perhaps this attitude was amplified via Baldwinian evolution and became more or less dispositional over time.
    I wonder whether, if we made a monkey utopia, where gathering nutritious food was never a challenge and the environment was such that passing on knowledge was a surefire adaptive advantage, monkeys would acquire a “propositional attitude” over the course of many generations.

  25. Re: The power of language (according to our hypothesis) was, in the first instance, the power of acquiring a new composite category from other people who already knew the old categories out of which the new one was composed.

    I’m a bit confused about this. If it is true that there exists a language of thought, does the power of language necessarily have to be shown through instruction? The paper discusses an imaginary contemporary hunter-gatherer, who the authors say would be just as adept at communicating as we are. Is it possible that this person could also have made new categories out of old ones? For example, it seems reasonable to assume that they could have imagined a peek-a-boo unicorn. Does this not negate the importance of instruction (and thus of human-human interaction in general)? Or am I misunderstanding the argument?

  26. RE: A symbol is essentially part of a symbol system, a set of such objects, along with a set of rules for manipulating those objects— combining and recombining them into composite shapes, based on rules that operate only on the shapes of the arbitrary symbols, not on their meanings.

    Our language is a formal symbol system, one governed by a set of syntactic rules with an interpretable semantic meaning. So any well-formed set of symbols could be produced by a computational machine, meaning language is computable. However, Searle showed that cognition cannot be all computation. Does that mean that there are some aspects of our cognition that are simply inexpressible in a formal symbol system (language) and therefore cannot be communicated?

  27. I am intrigued by the idea that any natural language makes it possible to express any proposition, but I think this may be similar to the example of picking someone up at the train station: although you can be as specific as you want about their description, it can never be infinitely descriptive of the person, and there is always the possibility of more than one person meeting the description given. I have had a lot of experiences with people who speak other languages who have said that they cannot tell a certain story in English because it is only good in their other language. I don’t think this is only because it would take more words to express the same thing in English; I think it is because words are discrete units, and no number of them can be a proper replacement for a single word or phrase that is exactly right for the meaning a person wants to convey.

  28. On intelligence or motivation
    Perhaps humans are social beings because the lone-wolf traits died out. You can do much more when you can communicate effectively. Even just acknowledging something or saying ‘yes’ or ‘no’ is faster and more effective than trying to elicit the response you want by shouting or roaring. The motivation appears to be that we can work more effectively as a unit, but what would have motivated our species to speak unless there was some species-threatening issue such that humans needed to speak in order to survive? Apes didn’t harness this ability, yet they experienced the same environment our ancestors did. Was it simply that we were more effective at surviving as a unit? Or was language development bound to happen at some point? If it was, then it seems logical that language would emerge in other species at some point in time.

  29. Massé and Harnad’s hypothesis for the birth of the proposition seems possible to me. We already know of altruistic genes that cause kin to help other kin in order to advance their shared gene pool. If a mother learns a category, and this category is important for survival (e.g. edible mushrooms), of course she will communicate this category to her children. From this, it is possible for a name to become linked to that category: a way to refer to it so that those children can pass the knowledge on to their own children. Seeing this as a potential mechanism for the emergence of language is logical to me. The transition from iconic referents to symbolic referents is not an unimaginable leap to make.

  30. “How much induction do you have to do before language can kick in, in the form of instruction”

    The dictionary experiment to answer this question is genius, in my opinion. It is a clever way to show exactly how many words one must learn (the minimal grounding set) before one is able to learn the rest of the dictionary from the words in those definitions. It is astonishing how small this number is! Clearly, second-hand learning (instruction) is a powerful tool for knowledge transmission, and must have been an adaptive trait to be passed on through natural selection.
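    The grounding-set idea lends itself to a small sketch. Below is a toy illustration (my own, not the authors' graph-theoretic method, and the tiny `toy_dictionary` is invented for the example): a dictionary is modeled as a directed graph from each word to the words used in its definition, and a candidate set "grounds" the dictionary if every remaining word can eventually be learned once all the words in its definition are known.

    ```python
    # Toy dictionary: each word maps to the words in its definition.
    toy_dictionary = {
        "animal":  ["thing", "alive"],
        "horse":   ["animal", "big"],
        "stripes": ["thing"],
        "zebra":   ["horse", "stripes"],
        "thing":   ["thing"],   # circular: can only be grounded directly
        "alive":   ["thing"],
        "big":     ["thing"],
    }

    def grounds(dictionary, grounding_set):
        """True if every word can be learned starting from grounding_set."""
        known = set(grounding_set)
        changed = True
        while changed:
            changed = False
            for word, definition in dictionary.items():
                # A word becomes learnable once its whole definition is known.
                if word not in known and all(w in known for w in definition):
                    known.add(word)
                    changed = True
        return known == set(dictionary)

    print(grounds(toy_dictionary, {"thing"}))  # → True: all other words unwind from "thing"
    print(grounds(toy_dictionary, set()))      # → False: the circular word blocks everything
    ```

    Even in this toy case the grounding set (one word out of seven) is a small fraction of the vocabulary, echoing the paper's finding that a core of roughly 1500 words can ground a full dictionary.
    
    
    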

  31. “This means that with fewer than 1500 content words (category names) plus a few function words, such as and, not, if, etc., plus the all-important power of predication, we can define all the other words contained in our contemporary dictionaries … and hence all the words in our mental lexicon.”

    This article suggests that the connection between symbols and their referents occurs through two mechanisms: instruction or induction. The article ultimately concludes that ‘tell’ is a more potent mechanism than ‘show’, in that it is a more effective mechanism for learning categories and allows for the creation of broader categories that lie outside the realm of sensorimotor induction. This reinforces the idea of why T3 is the correct level for cognition: a T2 algorithm lacks the capacity to learn through induction, while T3 has the sensorimotor capacity to learn the basic categories required as the foundation. However, beyond sensorimotor experience, I wonder whether T3 can adequately capture the kind of cognition that goes beyond anything sensorimotor, like the justice/ethics examples described previously, which play an essential role in the scope of our consciousness.

  32. RE: "Extracting a Dictionary’s Grounding Kernel and Core"

    I participated in the Dictionary Study (Prof. Harnad's, I believe!) and this reading really helped to crystallize what I got out of that experience. While doing the study, I thought it was almost completely impossible. The only way to start closing the loop was to take shortcuts and provide the absolute bare minimum for definitions. Reflecting, I now realize that there were a handful of words that were very useful for this process, and I understand that these words had more use for "grounding". There were other words, however, that I would only use once or maybe twice. It's interesting to note the varying degrees to which words can be used to relate to other words.

  33. The distinction made between instruction and induction was very helpful for seeing just how far language has extended our cognitive abilities. Induction is the method of learning categories, or grounding words, through sensorimotor experience and trial and error. Instruction is learning categories by word of mouth. For instance, you can go out and learn which mushrooms are poisonous and which aren't by tasting them, or you can ask someone to describe/show/tell you which ones are poisonous. Instruction and the propositional properties of language have a nuclear effect on our ability to do things: we don't all have to physically go out and do complex experiments to deduce the law of gravity; we can listen to someone and come to the same conclusion and understanding. Something that was mentioned in class that I found particularly interesting was gullibility and hypnosis. Propositions have truth values, and you don't expect propositions to be true 50% of the time, false 20% of the time and inconclusive 30% of the time. Thus when someone makes a statement you assume it is true unless explicitly told otherwise. After all, what would be useful about propositions if they were never conclusive or didn't give you information (a reduction of uncertainty)?

  34. Where today “tell” has superseded “show”, we use words, rather than silent gestures, to communicate about our sensorimotor experiences (both individual and collective). This makes intuitive sense to me, as much of how I communicate is through words, and silences are often suggestive of the words that are absent. I was a little confused, though, about the analogy given to explain this. From what I understand, a presentation is mostly about telling, and a PowerPoint, which shows slides, can either throw the presenter off guard and distract the audience, or alternatively be a useful tool to keep the presenter on track. What confuses me is that a PowerPoint has highly relevant language on it, which comes off to me as being another way of “telling”. So I’m not too sure I understand the difference between showing and telling. Would an essay be tell or show? This article raises an interesting question as to whether show and tell are mutually exclusive, and how we tell them apart.

  35. From my understanding, a symbol system is a system by virtue of the fact that the symbols are both arbitrary and connected to each other (and those two features are themselves connected: icons by definition refer to a specific thing, and so they do not refer to anything outside of that). So essentially a symbol system is a dictionary.

    If I'm correct about this, my question is whether the degradation of iconicity in the transition from show to tell simultaneously involves the formation of a system. If not, does the system have to come before the degradation of iconicity, or does it come after, as a result of the degradation? Furthermore, does one need a symbol system for there to be syntax, or does syntax allow for icons to come together as a system? Or is my question just based on a misunderstanding about symbol systems?

  36. Now the likelihood of first discovering this boundless propositional power is incomparably higher in the gestural/pantomime modality, which already includes all the nonarbitrary things that we do with (concrete) categories; those acts can then be short-circuited to serve as the categories’ increasingly arbitrary names, ready for recombination to define new categories. But once a species picked up the linguistic ball in the gestural modality and began to run with it (linguistically, propositionally), the advantages of freeing its hands to carry something other than the ball, so to speak (rather than having to gesticulate while trying to do everything else)—along with the advantages of naming when the teacher was out of the learner’s line of sight, or at a distance, or in the dark—would quickly supersede the advantages of gesture as the start-up modality for language (Steklis & Harnad 1976). (One could almost hear the grunts of frustration if the intended learner failed to see, or the teacher’s hands were otherwise occupied—the “yo-he-ho” and “pooh-pooh” theories of the origin of language parodied by Max Mueller come to mind.)

    I very much enjoyed this article, and I thought it brings up many interesting points, but I wonder why it’s so much more likely that language began as something gestural/pantomimed. What is the basis for this? It seems to me that gesturing plays a fairly small and insignificant role in the normal communication of natural language (not including sign language) in modern humans. Rarely does it have any real “meaning”; pointing, waving, nodding, and shrugging seem to be about the extent of it. Do we see animals engaging in a lot of gesturing? It seems like the spoken part of language is more fundamental, and gesturing could just be an add-on. It seems just as possible that naming things with spoken words came first, and propositions developed out of these. Say early humans started yelling “ya” when they saw a bear and “ba” when they saw water, then eventually formed the proposition “ya ba”, meaning “there is a bear in the water/stream”. Why is gesturing a better explanation for the origin of language than this?

  37. RE: “the “translatability” thesis: Anything you can say in any natural language can also be said in any other natural language.”

    Is it really true that anything that can be said in one language can be said in another? If we focus on the transmission of meaning in the translatability argument, and we want the same meaning to be transmitted in both languages, we get into trouble. However, if we are talking about communicating a similar idea about the referents in both languages, then that can be done. The first interpretation of translatability is problematic because the meaning of each word in a language has a feeling component to it: the word you use for a specific category induces a certain feeling in you that cannot be replicated with another word in another language. The problem gets more difficult when you do not have an equivalent word in the second language and have to use conjunctions of words (and categories) from the second language to refer to the category or word that exists on its own in the first language. I don’t think it is impossible to communicate a similar idea about those categories in both languages, but I don’t think the meaning of the two propositions would be the same.
