Saturday 2 January 2016

9a. Pinker, S. Language Acquisition

Pinker, S. Language Acquisition. In L. R. Gleitman, M. Liberman, and D. N. Osherson (Eds.),
An Invitation to Cognitive Science, 2nd Ed. Volume 1: Language. Cambridge, MA: MIT Press.

The topic of language acquisition implicates the most profound questions about our understanding of the human mind, and its subject matter, the speech of children, is endlessly fascinating. But the attempt to understand it scientifically is guaranteed to bring on a certain degree of frustration. Languages are complex combinations of elegant principles and historical accidents. We cannot design new ones with independent properties; we are stuck with the confounded ones entrenched in communities. Children, too, were not designed for the benefit of psychologists: their cognitive, social, perceptual, and motor skills are all developing at the same time as their linguistic systems are maturing and their knowledge of a particular language is increasing, and none of their behavior reflects one of these components acting in isolation.
        Given these problems, it may be surprising that we have learned anything about language acquisition at all, but we have. When we have, I believe, it is only because a diverse set of conceptual and methodological tools has been used to trap the elusive answers to our questions: neurobiology, ethology, linguistic theory, naturalistic and experimental child psychology, cognitive psychology, philosophy of induction, theoretical and applied computer science. Language acquisition, then, is one of the best examples of the indispensability of the multidisciplinary approach called cognitive science.

Harnad, S. (2008) Why and How the Problem of the Evolution of Universal Grammar (UG) is Hard. Behavioral and Brain Sciences 31: 524-525

Harnad, S. (2014) Chomsky's Universe. -- L'Univers de Chomsky. À babord: Revue sociale et politique 52.

105 comments:

  1. In an earlier lecture, Prof Harnad introduced us to Chomsky's sentence "colourless green ideas sleep furiously" which is an example of a sentence that is grammatically correct, but semantically nonsensical. He then recited a poem by John Hollander:

    Curiously deep, the slumber of crimson thoughts:
    While breathless, in stodgy viridian
    Colorless green ideas sleep furiously.

    I was reading about John Hollander and Chomsky's famous line after the lecture, and I found out that there was a contest held by Stanford to try to come up with a poem that creates meaningful context for the line in 14 verses or less. This was the winning submission. I thought it was quite clever and that the class might enjoy reading it.

    Thus Adam's Eden-plot in far-off time:
    Colour-rampant flowers, trees a myriad green;
    Helped by God-bless'd wind and temp'rate clime.
    The path to primate knowledge unforeseen,
    He sleeps in peace at eve with Eve.
    One apple later, he looks curiously
    At the gardens of dichromates, in whom
    colourless green ideas sleep furiously
    then rage for birth each morning, until doom
    Brings rainbows they at last perceive.
    -D. A. H. Byatt

    1. Cute, though I think he/she strains a lot more than the Hollander poem, which is lyrical and reflective without effort. "D. A. H. Byatt" seems to have made something of a career of puzzle-poems from 1972-1996. Hollander wrote "Coiled Alizarine" in 1971. The Stanford contest was in 1985. "Byatt" might have been influenced by "curiously" (or maybe there aren't that many English words that rhyme with furiously -- though adverbs are in general easy to rhyme...)

  2. This article touches on so many aspects of language learning that I will concentrate on learnability theory. According to Pinker, the maturation of the brain allows for increasing info-processing and planning abilities. In order for these capacities to be used to learn language, there have to be four things: 1- a "class of languages" (the target to learn), 2- an environment (info extracted from the world), 3- a learning strategy (hypotheses about the language’s grammar), 4- a success criterion (confirming the hypothesis, keeping new rules in memory). The part I am not so sure about is the link from 3 to 4.
    Since non-standard English is still grammatical (creoles, Black slang), it must mean that people who speak non-standard varieties (I believe we are talking about ordinary grammar) still hear correctly grammatical standard English in their everyday life. So, as children, they have two sets of information available about which "strings of words are grammatical sentences in the target language" (positive evidence), as well as corrections and feedback (negative evidence). But they somehow discriminate between the UG-compliant and OG-non-compliant sentences they hear, "choosing" the former as positive evidence (to copy) and "ignoring" the latter, because they do not receive it as negative evidence (they hear standard English but do not copy it because they are not corrected toward it).
    The problem for me comes from the fact that I previously believed that negative evidence was pretty useless: some bilingual children start speaking one language, and later start speaking a "second" language (if they were taught both simultaneously) almost fluently, without ever having been corrected (so they learned 100% from positive evidence). That would be an example showing that negative evidence is not necessary.
    So is there a bottom line? Is negative evidence a necessary condition or is it not?
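Pinker's four components can be made concrete with a toy version of the learnability framework (Gold's paradigm). The sketch below is purely illustrative, and every name and the miniature language class are invented for the example: a nested class of languages, an environment of positive examples, a "smallest consistent hypothesis" learning strategy, and a success criterion of converging on the target.

```python
# Toy illustration of the four components of learnability theory.
# Hypothetical miniature setting, not a model of real acquisition.

# 1. A class of languages: L_n contains the strings "a"*1 .. "a"*n.
def language(n):
    return {"a" * k for k in range(1, n + 1)}

CLASS = {n: language(n) for n in range(1, 6)}

# 2. An environment: a stream of positive examples from the target language.
def environment(target_n):
    for sentence in sorted(CLASS[target_n]):
        yield sentence

# 3. A learning strategy: conjecture the *smallest* language in the class
#    consistent with everything heard so far (the "subset principle").
def learn(stream):
    seen = set()
    hypothesis = None
    for sentence in stream:
        seen.add(sentence)
        hypothesis = min(n for n, L in CLASS.items() if seen <= L)
    return hypothesis

# 4. A success criterion: the final conjecture matches the target.
assert learn(environment(3)) == 3
```

The "guess the smallest consistent language" strategy is what makes positive-only evidence sufficient here: a learner that ever conjectured a superset of the target could never be disconfirmed by positive examples alone, which is exactly the gap negative evidence would have to fill.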

    1. Julie, ordinary grammar (OG) is irrelevant. It is learnable and learned, by induction and instruction. Children make mistakes and are corrected. The problem is with Universal Grammar (UG): children never produce or hear UG errors, hence no UG corrections. That's the "poverty of the stimulus."

      UG-compliance is like a category that children can "do the right thing with" from the beginning. It's also like the "Laylek" example I gave in class: Everything is a member (positive evidence); they never hear or produce non-members (negative evidence) (till they are adult linguists and try to generate them deliberately). So the rules must be in their heads already at birth.

      Negative evidence is certainly not useless in (difficult) category learning. It's necessary so your brain can find the features or rules that distinguish the members from the non-members. Negative evidence is only unnecessary when the features/rules are so obvious that they can be picked up through unsupervised learning: merely being exposed to everything, without feedback as to what is what. Mountains and valleys may "pop out" from such a landscape; so might correlations (night follows day follows night follows day). But UG is much too complex to be learnable without negative evidence, through unsupervised learning. (Even OG learning needs supervision: positive and negative evidence; correction).

      All languages are UG-compliant. The only difference is the "parameter settings," and those are learned and learnable through ordinary unsupervised and supervised learning, as well as explicit instruction. So the difference between a first and second language is UG parameter settings (learnable) and OG (also learnable). They already share UG.
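A hedged sketch of what "learnable parameter settings" could look like, loosely in the spirit of trigger-based parameter-setting models (e.g. Gibson and Wexler's Triggering Learning Algorithm). Everything here is a simplifying assumption, including representing a sentence as nothing but the parameter values it requires:

```python
import random

# Each "sentence" is abstracted as the parameter values it requires in
# order to be parsable (a drastic simplification, not real syntax).
def parses(params, sentence):
    return all(params[p] == v for p, v in sentence.items())

def trigger_learn(sentences, n_params=2, steps=200, seed=0):
    rng = random.Random(seed)
    # Start from random binary parameter settings.
    params = {i: rng.choice([0, 1]) for i in range(n_params)}
    for _ in range(steps):
        s = rng.choice(sentences)
        if not parses(params, s):
            # Triggered by a parse failure: flip one random parameter,
            # keeping the flip only if it makes the sentence parsable.
            p = rng.choice(sorted(params))
            flipped = dict(params)
            flipped[p] = 1 - flipped[p]
            if parses(flipped, s):
                params = flipped
    return params

# Parameter 0 = "overt subject required" (1 = English-like, pro-drop off);
# parameter 1 = some other binary setting the input pins down to 0.
english_like = [{0: 1}, {0: 1, 1: 0}]
settings = trigger_learn(english_like)
assert all(parses(settings, s) for s in english_like)
```

Note that the learner converges from positive input alone: settings are only ever changed when an incoming sentence fails to parse, so once the parameters fit the input they stay fixed, with no correction from outside.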

    2. Hello Julie,
      I agree with what professor Harnad said. However, individuals who speak creoles, pidgins, dialects and non-Standard English do so because they are exposed to that particular language. The first thing is languages that are not compliant with Standard English OG are no less complex than Languages that do not. Therefore, the rules that they use to learn to speak non-Standard English is the same as someone who is learning Standard English, just that those two people are not getting the same input. So the distinction you are making is not fully relevant, you can think of people speaking different varieties of English as Bilinguals. A lot of research has been conducted on African American Vernacular English (AAVE) and they found that it differs on multiple linguistic levels such as syntax, morphology, phonology etc. This then brings me to the part on Bilingualism, there is a phenomenon called ‘code-switching’ or ‘code-mixing’ which is when you go back and forth between languages in different sentence, the same sentence or sometimes even the same word. Bilingual children tend to do this much more than adults, however, with exposure to the two languages (positive evidence) alone they are able to correctly set the UG parameters for each language and be fully bilingual. I had read a review on this topic a few years back that debated if it is harder to learn languages with very different parameter settings, however if I remember correctly they concluded that it was not the case. It does bring up an interesting question on over generative languages. Different languages allow for either having an explicit subject in a sentence always required as in English or French, option as in Italian or Spanish. This means that the pro-drop (pronoun-dropping) parameter has to be set as either on or off. As a bilingual speaker of Italian and English I had to learn that while in Italian it is okay to not have overt subjects sometimes, in English you always need it. 
It is hard then with Italian exposure to understand that you cannot do this in English and negative evidence does not appear to play a large part in language acquisition. However, I was able to set the parameters differently for the languages, presumably because I never heard a sentence in English without a subject when I was acquiring it. In contrast, my family, who acquired English later in life often make the mistake of not adding a subject into a sentence when speaking English. This may have to do with Pinker’s explanation of decreases in neural plasticity.
      I have also always found the notion of non UG languages being ‘un-learnable’ interesting. The rules that comprise UG have been greatly decreased since Chomsky first proposed the idea and while his idea that some part is innate and some part is the environment still remains largely accepted by the linguistic community, many of the rules have been discarded. Therefore, I wonder if a child was only exposed to a language that violates a part of UG, say X-bar theory, would they not be able to acquire that language?

    3. How can we learn UG rules if everything is UG compliant? To clarify from what I’ve read in the replies above and from what I understood, we know that the rules of UG are not learnable from the impoverished, positive data (“laylek” example), which are the only ones accessible to children learning languages.

      Yet the rules of UG, given that Chomsky was able to identify them and tell us what they are, are indeed discoverable – but it seems they have to be figured out explicitly, through trial and error. If I remember correctly, Stevan talked briefly about teams of linguists looking for possible UG rules by using their implicit knowledge of UG: if some sentence x violates UG and y doesn't, then they have the positive and negative evidence needed to infer the rules.

      But we know clearly this isn’t at all how children acquire UG – so it goes back to how UG is innate and the seemingly mysterious story of how it became integrated in our gradual evolution.

  3. As the chapter by Werker shows, language acquisition begins very early in the human lifespan, and begins, logically enough, with the acquisition of a language's sound patterns. The main linguistic accomplishments during the first year of life are control of the speech musculature and sensitivity to the phonetic distinctions used in the parents' language. Interestingly, babies achieve these feats before they produce or understand words, so their learning cannot depend on correlating sound with meaning. That is, they cannot be listening for the difference in sound between a word they think means bit and a word they think means beet, because they have learned neither word. They must be sorting the sounds directly, somehow tuning their speech analysis module to deliver the phonemes used in their language (Kuhl, et al., 1992). The module can then serve as the front end of the system that learns words and grammar.

    This ability to categorize different speech sounds is present not only from mid-babyhood, but actually directly after birth. Aldridge et al. (2001) showed that newborns split vowels into categories similar to the ones adults use to separate different vowel sounds.

    They show this through a preference test, where babies showed preference towards certain sounds, which roughly corresponded with adult vowel sounds. This is more evidence for certain innate features of language (and our brain's reaction to it), which lead to our ability as humans to learn languages from scratch as a baby.

    1. I think it's probably a very important feature of humanity that we begin to categorize linguistic speech sounds from the time of birth. I'm assuming this is a human-specific feature and, as Harnad has said, part of why categorization is so important for our ability to have language. Animals have shown some ability to categorize "speech" sounds, but it does not seem to be innate and must be taught to them. This seems similar to children who grew up in "wild" environments, who can no longer seamlessly categorize and learn different speech sounds the way infants can (although they are still privileged compared to animals, since the innate ability is still in their genes, just not used until later in development).

    2. RE: “At first glance prosody seems like a straightforward way for a child to break into the language system.But on closer examination, the proposal does not seem to work...the effects of emotional state of the speaker, intent of the speaker, word frequency, contrastive stress, and syllabic structure of individual words, are all mixed together, and there is no way for a child to disentangle them from the sound wave alone….Worse, the mapping between syntax and prosody, even when it is consistent, is consistent in different ways in different languages. So a young child cannot use any such consistency, at least not at the very beginning of language acquisition, to decipher the syntax of the sentence, because it itself is one of the things that has to be learned.”

      Noting how widespread prosody is in all languages, and how it changes when a mother speaks to her baby, I wish the paper talked more about its importance. Prosody is essential to language. As Dominique discusses, sound is a primary means of language acquisition. There must be a reason mothers naturally change pitch when talking to a baby. If it is not syntactic, could it aid in semantics? Just as certain music instinctively makes us feel a certain way, can we view prosody the same way? One can imagine reaching for a cookie and their father yelling “No!” in a deep ferocious tone: almost automatically you retract your hand and, possibly, shiver. It wasn’t the word “no”, per se, that made you realize you shouldn't take the cookie, but rather the way in which the word was said. Prosody of speech definitely conveys meaning, and I wonder how much of this meaning is retained during language acquisition.

  4. This paper highlights the strong importance of input from language in order for it to develop. This input needs to come from an interaction with the language, such as between two individuals conversing. It can’t come passively from hearing the TV or the radio. However, with sufficient input, children are able to acquire language by the time they’re three years old. This made me think a bit about our discussion on how sensorimotor capacities are essential in order to categorize and ground symbols, which is also required here with language to some extent. In order for an individual to learn and fully develop language, he/she needs to have an abundance of physical interactions using language – either hearing or speaking it. Thus, interaction seems to really be a prerequisite for the development of several aspects of human cognition. While there is clearly strong evidence for language development from input and exposure, there’s also evidence showing that language structure and the capacity to learn language are innate. This is primarily demonstrated by the fact that children know what to do grammatically with language, and therefore what not to do, even without possibly getting feedback about all of the incorrect and ungrammatical things that could be said. The only way to explain the fact that children don’t make grammatical mistakes on things they’ve never been told aren’t allowed in their language is that grammar is innate. I think another way to summarize these findings could be to say that UG and input are both necessary but insufficient on their own for the development of language. The combination of UG and abundant input will lead to a rapid acquisition of any language in any typically developing child.

    1. I agree (or at least I also believe) that UG and input are both necessary and insufficient on their own for language acquisition; however, I think that it takes more than just abundant input for rapid acquisition of speech. Rather, in addition to the innate brain structure that comprises UG, and the input from the environment, there has to be feedback as well, at least to some level. Without it, the ability of humans to learn languages so rapidly and with so little instruction seems nearly unbelievable.

    2. RE: "UG and input are both necessary but insufficient on their own for the development of language"

      That is a great way to summarize it. If you have input but not UG, in the case of chimpanzees for example, there will not be language development. If you have UG but no input, as in the case of "wild children" who are abandoned and deprived of input during their critical period, there will not be language development. Some input is required, although perfect input is not.

      Amar, I don't think feedback in the sense of direct positive/negative correction is required. The experiments detailed in the reading point out that negative feedback is not necessary for language acquisition. "The child must have some mental mechanisms that rule out vast numbers of "reasonable" strings of words without any outside intervention." I think that some version of "feedback" is necessary, but it comes in a different form than correction from parents. It comes in the form of prosody, semantics, and the organization of grammar (as shown by the blocking principle described in the reading) in the input. These aspects of feedback are all necessary but not sufficient for language acquisition.

      As stated by Pinker, "The child would set parameters on the basis of a few examples from the parental input, and the full complexity of a language will ensue when those parameterized rules interact with one another and with universal principles. " In other words, specific forms of feedback via input are necessary for turning the parameters of UG on or off.

  5. Review for anyone in PSYC 532: we took a look at neural networks that modelled the stages of learning the English past tense. Babies seem to learn regular and irregular past-tense verbs in 3 distinct stages: correct performance on irregulars, then an over-regularization phase where they tend to apply the regular verb ending (suffix -ed) to irregular verbs (e.g. "breaked" for "broke"), then correct verb-tense use. For any Germans out there, the German plural works the same way, strangely enough.

    The goal of the models is to simulate this learning process, including all the small psychological effects that appear to take place; for example, regular verbs are all quite similar to one another but heard less often than irregular verbs, which have fewer similarities to each other. The models we looked at did this with varying efficacy.

    My question is whether reverse-engineering this small piece of cognitive learning can give us any insight into how the mechanism works in the brain. Pinker does a good job of highlighting the complexity of language acquisition; I'm wondering if these models are too isolated to demonstrate enough likeness to the human brain. Even with the SGP aside, the input that children hear is still not fully understood, so what input strings to give the artificial model must be debated. Of course, for research purposes cognitive modelling has great utility, and I think that eventually combining these specific modelled functions in a larger, more powerful neural network would be really interesting. But for the purposes of generating a computational model to explain cognitive language learning, I think the models are pretty limited as of now.
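The three stages can be caricatured with a dual-route ("words and rules") toy model, in which a rote memory for irregular forms competes with, and eventually blocks, a default -ed rule. The numeric "strengths" and stage values below are invented purely for illustration:

```python
# Toy dual-route sketch of the U-shaped past-tense curve: rote memory for
# irregulars competing with a general "-ed" rule. Purely illustrative.

IRREGULARS = {"break": "broke", "go": "went", "sing": "sang"}

def past_tense(verb, rule_strength, retrieval_strength):
    # Blocking: if rote retrieval of the irregular form is strong enough,
    # it wins over the general rule.
    if verb in IRREGULARS and retrieval_strength >= rule_strength:
        return IRREGULARS[verb]
    # Otherwise the default "-ed" rule applies once it has been learned,
    # over-regularizing irregulars ("breaked").
    if rule_strength > 0:
        return verb + "ed"
    # Earliest stage: no rule yet, only rote forms.
    return IRREGULARS.get(verb, verb)

# (rule_strength, retrieval_strength) at three hypothetical stages:
stages = [(0.0, 0.3), (0.8, 0.3), (0.8, 0.9)]
print([past_tense("break", r, m) for r, m in stages])
# stage 1: rote "broke"; stage 2: over-regularized "breaked"; stage 3: "broke"
```

Regular verbs ("walk" becomes "walked") are unaffected throughout, which is the dual-route account of why only irregulars show the U-shaped dip.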

  6. The two points in this article that were the most convincing to me were Gordon’s “Rat-Eater” study and the two developmental syndromes, Spina Bifida and Specific Language Impairment, that show language capabilities to be separate from the rest of cognitive functioning. As I mentioned in my Skywriting for article 9b, the Gordon study was refuted by Pullum and seems to be inconclusive, but that leaves the two disorders to be convincing evidence.

    My question is: Is the double dissociation between the two disorders enough to satisfy the likes of Pullum, who exclusively questioned the linguistic arguments, to convince him of the inborn nature of UG? Additionally, studies could be done that observe patients of both syndromes to contrast their difficulties in order to tease out evidence for a Language Acquisition Device or UG. Does anybody else have biological or at least non-linguistic evidence of UG?

  7. This paper presents a lot of evidence for why it seems so baffling that children can learn language at all. But I wonder if we're looking at this the wrong way. This might sound completely contradictory, but hear me out:
    Given that brains have evolved for language, should we expect that language evolved for brains, specifically children's brains? Given that language evolves from generation to generation just like a genotype, at every generation its "reproduction" is dependent on getting into the heads of children. By a quasi-process of natural selection, should we not expect that language has adapted to the brains of children? After all, they're the only game in town. Does thinking about the problem of language this way help us at all?

    1. It seems like what you’re referring to is similar to the reading for 8a: 4.2 Constraints on Possible Forms.
      “Chomsky writes: In studying the evolution of mind, we cannot guess to what extent there are physically possible alternatives to, say, transformational generative grammar, for an organism meeting certain other physical conditions characteristic of humans. Conceivably, there are none -- or very few -- in which case talk about evolution of the language capacity is beside the point. (1972: 97-98).”

      The problem of the brain evolving for language is how. Pinker and Bloom write: “Changes in brain quantity could lead to changes in brain quality. But mere largeness of brain is neither a necessary nor a sufficient condition for language, as Lenneberg's (1967) studies of nanencephaly and craniometric studies of individual variation have shown. Nor is there reason to think that if you simply pile more and more neurons into a circuit or more and more circuits into a brain that computationally interesting abilities would just emerge. It seems more likely that you would end up with a very big random pattern generator. Neural network modeling efforts have suggested that complex computational abilities require either extrinsically imposed design or numerous richly structured inputs during learning or both (Pinker & Prince, 1988; Lachter & Bever, 1988), any of which would be inconsistent with Chomsky's suggestions.”

    2. I think that's an interesting way to think of the problem at hand, but I am not sure what is meant by language evolution. It is clear that language "evolves" in a pseudo-natural-selective way: linguistic novelty gets incorporated into the lexicon if picked up by enough users, and this is not limited to vocabulary but extends to syntactic structures. However, if we analyse these changes to specific spoken languages, it's not clear that any kind of "evolution" is happening at the level of universal grammar. It is conceivable that, through this kind of novelty, the Romance languages departed from each other even though they all descend from their Latin root. Yet all languages share universal grammar. So can we truly say that language has "evolved" on the time-scale of the human species if all recorded language complies with universal grammar? I think this way of thinking about it may be misleading as to how universal grammar, or rather the language capacity, has evolved.

      Another question that has been puzzling for me is the evolutionary advantage that language confers. A single mutation in a single person that grants linguistic ability, or even pseudo-linguistic ability, does not really confer any advantage, because language is a social tool. Since every speaker needs an understander, it seems that language must have emerged within a social group, and the 'language gene' must have been rather inconspicuous until it was present in a critical mass of the group's population.

    3. It’s an interesting thought, but I’m not sure how well the ‘language evolving for the brain’ part holds up. At a recent conference Dr. Jessica Coon (a professor of linguistics here at McGill who researches syntactic theory) talked about how human languages differ, as well as what it takes to analyze different languages. In explaining the similarities and groupings of speech across the world, she mentioned that language has not evolved very much – small things change, but overall it has stayed the same (she brought this up to demonstrate that linguists need not be afraid of emojis taking over our spoken or written language). While the example was amusing, the point still stands, and it’s the reason why I believe our brains evolved for language, and not the other way around.
      Interesting and unrelated point – Dr. Coon was a consultant for the movie Arrival, which was a fun watch.

  8. "Shortly before their first birthday, babies begin to understand words, and around that birthday, they start to produce them... Words are usually produced in isolation; this one-word stage can last from two months to a year. Children's first words are similar all over the planet. About half the words are for objects..."

    I find the topic of language acquisition very interesting. I do have one question regarding this passage, though. As mentioned in this passage, a child's first words are similar to another child's first words in another culture. Is this because these words are simple words within the language? Or is this because these words are used most often? Or.. perhaps is it an interaction of the two - are these words simple because they are used most often... which is then why children tend to learn them first regardless of their culture?

    1. I would think it's a combination of the two, with a larger emphasis on simplicity. By simplicity I mean the concreteness of the word; about half of a baby's first words are for objects. There are many other very frequent words, like "is" and "good", but they are more functional or descriptive and can be harder to understand without any knowledge of the words around them. Names of people ("mommy") and of things in the child's environment ("dog") can easily be pointed to.

    2. Discussion of the first words a baby utters is very interesting, because it appears that a majority of these first words can easily be grounded using the sensorimotor capacities of the child, and they are likely to be among the 1500 kernel words that make up the core of human language. It seems to me that even if these words are not part of the 1500 kernel words, the words that babies most frequently utter are not too distantly removed from the core words. Even without fully learning the syntactic rules of a language, children are able to form propositions. Children first form sentences in whatever language they have been exposed to from infancy, and although these do not conform to all the syntactic rules of the ordinary grammar of their specific language, children never produce sentences that disobey UG.

  9. "A key factor is the role of negative evidence, or information about which strings of words are not sentences in the language to be acquired. Human children might get such information by being corrected every time they speak ungrammatically. If they aren't -- and as we shall see, they probably aren't -- the acquisition problem is all the harder."

    This idea was touched upon in my Cognition class this semester. However, I am noticing a discrepancy between this reading and that class - in that class it was mentioned that there probably are a lot more examples of negative evidence that children are exposed to than previously thought. This is through what are called "adult reformulations." For example, if a child were to say, "I want butter mine," an adult will answer by saying, "Ok, I will put butter on it." Then, a child would learn to say, "I want butter on it." My textbook for that class even states, "Chouinard and Clark discovered that reformulations of erroneous utterances occurred between 50 and 70 per cent of the time when the child was about two years old. They also found that the children took up these reformulations as frequently as 50 per cent of the time..." Although this is only one study, I think it's interesting to note when studying this topic.

    1. This article by Pinker touches more on this topic and gives more evidence towards the conception that negative evidence is not very important.

      In particular, "Different parents react in opposite ways to their children's ungrammatical sentences, and many forms of ungrammaticality are not reacted to at all -- leaving a given child unable to know what to make of any parental reaction. Even when a parent does react differentially, a child would have to repeat a particular error, verbatim, hundreds of times to eliminate the error, because the parent's reaction is only statistical: the feedback signals given to ungrammatical sentences are also given nearly as often to grammatical sentences."

      Thus, even if reformulations occur in adult speech, they will also occur after correct utterances: a child says something correctly and the adult repeats or reformulates it slightly, into a different but also correct utterance. Thus it would be unclear to the child when they were being corrected.

    2. Hi Laura! I think your comment is really interesting. I remember learning about this in Cognition as well! To reiterate what Rebecca said, I think that the reading suggests that negative evidence from parental feedback seems to be insufficient in terms of directing children’s language acquisition. However, I wonder if it would make a difference if the correction was made more salient. For example, if the child said “I want butter mine” and the mother replied with a question to emphasize the mistake, such as “you want butter ON IT?”. In guiding the child in this way, they may reply “yes, butter on it”. Obviously, such corrections are less likely to happen naturally, in the sense that the mother/parent would need to be intentionally trying to teach the child proper grammar for this correction to occur. That being said, I’m not sure if it’s appropriate to consider this as an example of negative evidence, which might need to occur more naturally (without an explicit intention of teaching the child and waiting for them to make the correction) – but I would still be interested in knowing whether this type of correction would be more helpful in language acquisition.

      Delete
  10. From section 2.2: “Humans evolved brain circuitry, mostly in the left hemisphere surrounding the sylvian fissure, that appears to be designed for language, though how exactly their internal wiring gives rise to rules of language is unknown.”

    This short passage reminded me of Fodor’s “Why, why do people go on so about the brain?” piece, in which Fodor casts doubt on neuroscience’s ability to uncover the causal mechanism underlying human performance capacity just by looking at what’s active when people cognize. Similarly, even though we know which brain regions and which connections are relevant for language, we don’t know exactly how these neurons and connections actually give rise to our ability to produce and process language. Let’s say neuroscientists are able to isolate one neuron in the left hemisphere near the sylvian fissure that is totally specific for language processing: if you deactivate this neuron while someone is processing language they stop understanding, and if you activate it the person immediately starts understanding again. Even though the neuroscientists have “explained” the feeling of understanding language by saying that this neuron’s activity drives it, how does the neuron’s activity actually generate this feeling of understanding?

    ReplyDelete
    Replies
    1. I was thinking the same thing as I read that section; however, I do think that the localization of a neuron in this case is defensible. Would locating the neuron provide a perfect understanding? I think it would be foolish to claim so, but it may provide a path to understanding the way that the experience of speech is connected to the rest of the brain. If that neuron is exclusively what fires during the act of experiencing-meaning-through-language, could this not provide some key insight?

      For example, imagine that we isolate this neuron. This means that we can look at patients who have had damage to this specific area, and compare them to patients who have linguistic impairments: either they will 100% overlap, or we will find patients who experience linguistic difficulty with this area intact. In the former case, Jerry Fodor was right and we have been wasting our time. However, if there is not a correlation then a mystery remains, and if anything can be learned from history, mysteries are a good thing for scientific progress.

      That does not mean I believe that pure brain-mapping will provide the understanding that we seek. However, I do believe that it is the best place to begin an inquiry into broader questions of cognitive science. We know from both the ‘easy’ and ‘hard’ problems that both sides of cognition are mutually dependent. I suspect that a hermeneutical approach here is best: we must observe the parts in isolation in order to approach the whole, because the whole picture is simply too broad on its own. A holistic understanding in this case may rely on a foundation of neuroscience, or it may not. If not neuroscience, where (apart from speculation) should we begin our search?

      Delete
    2. I agree with you Ted; I think this might be where Fodor’s claim that brain localization studies are entirely frivolous falls short. I think one example is that brain localization studies have shown us that language that was learned before the age of five “resides” in a different part of the brain than language learned later in life. So when I speak English, a certain part of my brain shows activity whereas when I speak French (which I started learning when I was ten), a different area of my brain lights up. I think then if we were perhaps able to analyze the differences in these two brain areas, it might elucidate some new information about how secondary languages are processed in the brain. However, even if no differences exist, knowing that secondary languages reside in different parts of the brain already shows us that the brain processes these languages differently, and might explain why it is more difficult to learn a language after a critical period.

      Delete
  11. If the physical structure of our throats was selected by evolutionary forces for allowing us to pronounce certain sounds, like vowels, how does natural selection explain the growing complexity of languages? For example, in English we add “s” to express plurality, and, irregular words aside, we can still hear the “s” while talking to somebody. But for verb conjugations in French, we can’t really hear some of the differences while communicating with somebody in person. So what are the advantages of having more complex grammatical structures?

    ReplyDelete
    Replies
    1. Hi Zhao,
      I think the advantages of having complex grammar may not necessarily be an evolutionary question, in comparison to questions regarding the evolution of UG, for example. The progression to a more complex grammatical system seems to me like a product of human mastery of language, a progression in categorization. The creation of complex syntactic rules, for example the pronunciation of 's' at the end of words in different languages, has allowed humans to create categories of 'many' as opposed to 'one' or 'single.'

      Delete
    2. Hi Nadia and Zhao,

      I don't think language is growing in complexity. (Maybe this depends on how you define complexity.) Growing complexity, such as the development of plurality by adding an 's', implies that there was a protolanguage. However, we know this cannot be true because a language cannot be a language without having a mechanism for plurality. Don't forget the translatability of language: anything that can be said in any language, whether ancient or modern, can be said in another language. Also, Professor Harnad, in his reading about the transition from show to tell, says that language existed before it was vocalized. We probably initially learned how to pantomime language, and in this 'show' stage of language we would have had a concept of plurality. Furthermore, in Pinker's reading on language acquisition he says that one-word-speaking babies can understand syntax because they can provide the right output to sentences. This means that these babies have a concept of plurality even before they can voice plural words. Therefore, it is impossible for language to "evolve" to incorporate plurality.

      Delete
  12. In the “Motherese” section, the author states that “Speech to children is slower, shorter, in some ways (but not all) simpler, higher-pitched, more exaggerated in intonation, more fluent and grammatically well-formed, and more directed in content to the present situation, compared to speech among adults”. However, this conflicts with my previous psychology classes. I remember that parents don’t necessarily talk to their children in a different or more “motherese” way. Instead, they talk to them normally, just like how they would talk to adults.

    ReplyDelete
    Replies
    1. I think the way mothers/caretakers talk to their children really varies across cultures and households, but ultimately this doesn't seem to be a huge factor in children's language acquisition process. To support your point, the author gave the example of communities in New Guinea where mothers "coach the child as to the proper, adultlike sentences they should use." Ultimately, children must have some intrinsic way of learning language that is not necessarily correlated with input from their primary caretakers.

      Delete
    2. I have also learned that mothers tend to underestimate their child's linguistic level, so they will be producing sentences that are often too simple for the child. Thus, motherese is not sufficient input for language acquisition. I think the main reason that mothers tend to speak slower, with more emphasis, etc. is for the child's comprehension. Similar to the way we would speak to a foreigner whose first language is not English, we want to be as clear and simple as possible to make sure the listener can extract the content of our sentence. I think motherese likely serves more of a communicative function rather than a teaching one.

      Delete
    3. I agree; “Motherese” is probably characterized in the first place as “slower, shorter and more exaggerated”, just as lullabies differ significantly from other pieces of music and people across cultures can easily identify lullabies even when presented in different languages. Perhaps “Motherese” applies only to babies, not toddlers.

      Delete
  13. On the subset principle:

    The idea of such an order of parameter testing with a "default" case is very strange to me. It is true that it would explain how children avoid making some types of mistakes, but it contradicts the fact that all children learn language at roughly the same speed, no matter which one they are learning. In the case of word order, for example, if children need to go through every "parameter" before knowing all the possible word orders in Warlpiri, while in English the parameters are already all set correctly, then it would be much faster to learn English completely than Warlpiri (considering only this parameter).
    On a side note this may also be a politically and socially dangerous claim since one could use it to imply that some languages would be more adequate because they have more grammatical parameters set to default.

    ReplyDelete
  14. “The main linguistic accomplishments during the first year of life are control of the speech musculature and sensitivity to the phonetic distinctions used in the parents' language. Interestingly, babies achieve these feats before they produce or understand words, so their learning cannot depend on correlating sound with meaning.”

    While babies do not normally learn to speak before age 1, they produce sounds and vocalizations. As it says above, they have control over their speech musculature before they can speak, which is how they make vocalizations and speech sounds. It also says they are sensitive to their parents’ speech sounds. I am wondering whether these sounds, if based on the phonetic distinctions they hear and perceive, are similar to the vocalizations and plethora of sounds that different animals make to communicate.

    Assuming animals learn to make appropriate noises for communication and signaling similarly to how human infants learn (by feedback, trial and error, imitation, etc.), then this primitive stage of an infant’s vocalizations and sounds should be considered an attempt at emulating the noises in their environments and communicating in the way animals make noises. While there is a broad range of animal communication in the verbal/audio domain (whales being the most intricate), this seems to be a robust capacity of living beings that humans likewise have during early development. It is only that humans have developed this ability further, and this brings us back to the question of why we have language but chimps don’t, considering they still innately make sounds to communicate but have not mastered categorization as we have.
    But if categorization and our ability to combine symbols into an “unlimited set of combinations, each with a determinate meaning” (Pinker, Introduction, Language Acquisition) is what allows us to achieve the complexity of language, what explains the broad range in communicative ability and linguistic complexity among other animals? Does categorization play a role, or are there other genetic-by-environment factors involved, such as the physical capacity to produce sounds, the need to communicate, etc.?

    ReplyDelete
    Replies
    1. Hi Aliza. I think we could say that animals definitely know that vocalization A means "danger" and B is for mating etc etc...but these don't necessarily need to be the work of categorization - it could also be associative learning. Vocalization A could mean danger because a small rodent heard it lots of times while birds of prey were flying overhead. I don't know if we can actually tell if the rodents would categorize their noises but instead associate particular noises with particular events/stimuli. Humans are doing categorization and making propositions - "this group of words means this" and "I'm conveying that this idea is this" - and these are essential to convey the types of things we need to convey.

      I think this goes back to the idea of satisficing - to accept available options as satisfactory. Basically, does the available option do what it needs to do, and does it do enough? The animal vocalizations may just be satisficing for them. And humans definitely need the combinatory power of language to express the complex propositions we need to communicate.

      I guess the way I'm trying to answer the question you posed is to think of it in the context of satisficing: whether it's association or rudimentary categorization, is it getting the job done for animals (i.e., does it really matter which one it is if either could get the job done enough)? And when humans want to convey complex ideas, is the available option (language) getting the job done for us?

      (This is probably a total tangent, but I think it could be one way to try to answer your question...)

      Delete
    2. I think you raise some very interesting points and I certainly agree with the notion of satisficing and thinking about it in terms of 'getting the job done'. I am wondering if there is a sincere distinction between associations and categorizations. If the rodent associates the sound of birds flying overhead with danger, is it not categorizing those sounds as danger? If so, what is the extent of its categorization capacities, how are they learned differently from humans', and why are they so limited? Or are they not limited, and in the context of getting the job done for the rodent they are satisfactory and as powerful as needed for this species?

      Delete
    3. Going on a hunch here, but I feel like animals do categorize, or at least have the capacity to. Survival and reproduction are usually the main goals of organisms, and achieving them requires some level of categorization, for instance to tell the difference between poisonous and non-poisonous fruits. Apart from gestural communication, most of this categorization would have to be learned through trial and error. What animals lack is the ability to make propositions regarding these categories or string categories together.

      Delete
  15. To build onto the "What is Learned" section, some interesting research has come forward suggesting that the environment may play a large role in the formation of languages. Scientists were able to find a relation between climate and how many consonants/vowels are used in a language. Findings suggest that people altered the way they spoke in order to maximize sound transmission: people who tended to use more vowels (which are spoken at a lower frequency) were found in warmer climates with dense tree cover. Songbirds are believed to do a similar thing depending on the amount of vegetation present. While climate is not the only variable that affects language, it is interesting to think about the way our environment/climate may impact how we speak and the evolution of language worldwide.

    https://www.sciencedaily.com/releases/2015/11/151104095045.htm

    ReplyDelete
  16. In section 6.5, Pinker discusses the importance of context in learning a language. It would be impossible for a child to learn a language over a radio without any environmental context available. This sounds very similar to the symbol grounding problem: the words in a language are the symbols, and the context is what the symbols must be grounded in. Without this grounding, children would never be able to understand a language. As we have previously discussed, once enough words are grounded, we can ground all the words in a language by combining previously grounded words in a definition. Perhaps we can discover more about the symbol grounding process by observing language acquisition in very young children. Another question that arises is whether symbol grounding must take place in second language learning, or whether the symbols can be grounded through the first language. I suppose the difference would be learning through translation versus learning independent of reference to the native language. Perhaps if a child still in the critical period is learning a second language, they can ground the symbols directly in the environment, allowing them to achieve native-speaker fluency.

    ReplyDelete
    Replies
    1. Adrian,

      I also thought about the symbol grounding problem when reading this section – particularly when Pinker mentions that children seem to know the meanings of certain words before syntax is learned. I find that section 8.3 on Using Context and Semantics relates to last week’s discussion on how language is categorization because Pinker states, “If children assume that semantic and syntactic categories are related in restricted ways in the early input, they could use semantic properties of words and phrases as evidence that they belong to certain syntactic categories.” In other words, when a child learns nouns, he/she can put them together in a sentence with a verb to form a sentence that relates its subject to an object. Maybe once children have grounded enough words in the context-dependent phase, they can move to the next level of finding how to syntactically relate these words in order to express almost anything through constructing sentences alone. While the capacity for both grounding symbols and creating syntactically correct sentences may be innate, it must be built upon along the way through inputs related to context (maybe this is somehow “off-loading” on the environment?).

      Delete
    2. Hi Adrian, I had a similar thought process when reading this section. I really like your question about whether symbol grounding must take place in second language learning. I find it interesting because when learning a second language, people often suggest that watching TV shows or listening to music in that particular language might facilitate the acquisition process. While these suggestions may or may not have any basis in research, I think it's still important to consider whether the symbols and categories that may have been grounded as part of a child's native language are sufficient to support learning of a second language, such that less environmental context may be necessary.

      Delete
    3. Hi Kristina,
      A friend of mine learnt English almost completely from watching TV. His words were grounded in his native language. By seeing an object already grounded in his native language, he learnt to associate that object with both his native language and English. Vision is, after all, a sensorimotor capacity, so that does add a level of context. Can second language learning occur solely through verbal communication? It probably could, because the native language would already be grounded, but I'm assuming it would take a lot longer. Sensorimotor capacity grounds words, and that adds salience. Rather than hearing that miel is honey and having to use your mind to make that connection, seeing a tub of honey might make learning that word more effective.

      Delete
    4. Hi Adrian, I think this is an interesting question. I think that symbol grounding wouldn't be required in learning a second language because the words will mostly already have referents from learning the first language. Also, like Annabel said, when enough things are grounded, individuals can move to the next stage where they can essentially define something using other words. The reason I don't think symbol grounding needs to occur is that even if you need to learn a word, the object it refers to will already be present in your head somewhere, so if someone described or defined it to you in the language you already speak, you would most likely be able to make the connection. When learning a first language, however, you need to actually learn each of the referents, or at least a certain amount, before you can begin defining others through description. So while I don't believe symbol grounding is required in a second language, you still need abundant interaction to associate all of the words with their referents; but this can be done through pictures, whereas symbol grounding requires a level of sensorimotor interaction in order to really learn what the referents are.

      Delete
  17. Stromswold's 1994 demonstration makes for an especially compelling example against the notion that parental feedback is a necessary aspect of the language learning crucible.
    The "organization of grammar as a guide to acquisition" feels most intuitive to me; blocking (as resulting not from feedback but from exposure), and interactions between word meaning and syntax, as described by Gropen et al. seem to fit into the existing framework for language acquisition without much difficulty. Of course there are some instances where it gets ambiguous, but the transitive/intransitive syntactical distinction may account for this. Even nuances concerning the context or perspective for verbs seem ONLY learnable in such a fashion.

    ReplyDelete
  18. This article examines how one acquires language.
    Since universal grammar is at the heart of language capacity, it is a focus of study.

    In order to answer “how”, the author builds a schema depicting the bottom-up process of acquiring language. Firstly, input from the environment (sounds and situations) is necessary in order to make inferences about phrase structure. Since a child (learner) cannot experience everything, they must be able to extrapolate from the evidence that they do encounter to form a general set of rules. For this, the author proposes that a child (learner) makes two assumptions that constrain the possible relations among items: 1) that the child’s parents serve as positive role models (obey the rules), and 2) that the meanings in the parents’ speech are helpful for inferring structure.

    ReplyDelete
    Replies
    1. I think that while the article mentions parents’ speech as input, it is not the author’s focus. Rather, it seems that the focus is to “examine language acquisition in the human species by breaking it down into the four elements that give a precise definition to learning: the target of learning, the input, the degree of success, and the learning strategy.” One major problem is how structure arises despite the fact that parents’ speech as negative evidence doesn’t effect change in a child’s language acquisition.

      Delete
    2. Hello Manda and Austin. Pinker did spend a lot of time discussing the parents' speech input and context. But, perhaps as Austin suggested, the article focuses on language acquisition through certain elements of learning and, as Manda pointed out, is mostly context dependent.

      But in this, he mainly used examples and instances involving ordinary grammar and vocabulary, which are learned like any other categories, with corrective feedback. And, as Austin pointed out, Pinker, quite unsettlingly, discussed children acquiring language without really mentioning the poverty of the stimulus, which raises the question of how much of UG as a concept he really discussed.

      Delete
  19. Re: “The term "positive evidence" refers to the information available to the child about which strings of words are grammatical sentences of the target language. … Negative evidence refers to information about which strings of words are not grammatical sentences in the language, such as corrections or other forms of feedback from a parent that tell the child that one of his or her utterances is ungrammatical. … The boy's abilities show that children certainly do not need negative evidence to learn grammatical rules properly, even in the unlikely event that their parents provided it.”

    I see this as a reverse of the classic Problem of Induction. This is the Problem of Falsifiability. It is as if children do not care for input that falsifies their extracted patterns. I would argue, then, that the pragmatist response to the Problem of Induction may inform how the first rules are bootstrapped.

    First, I would argue these philosophical frameworks are relevant. Children are little scientists, extracting patterns from input. Moreover, “[M]ost problems connected with the growth of our knowledge must necessarily transcend any study which is confined to common-sense knowledge as opposed to scientific knowledge. For the most important way in which common-sense knowledge grows is, precisely, by turning into scientific knowledge. (Popper LSD, 18)”. Therefore, applying how scientific knowledge is gathered to language acquisition makes sense.

    “If children had to learn all the combinations separately, they would need to listen to about 140 million different sentences. At a rate of a sentence every ten seconds, ten hours a day, it would take over a century. But by unconsciously labeling all nouns as "N" and all noun phrases as "NP," the child has only to hear about twenty-five different kinds of noun phrase and learn the nouns one by one, and the millions of possible combinations fall out automatically.”
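    Pinker's arithmetic here is easy to verify. A quick sanity check in Python (the sentence count and listening rate are the figures from the quote; everything else is just unit conversion):

    ```python
    # Check the "over a century" claim: 140 million sentences,
    # one every ten seconds, ten hours of listening per day.
    sentences = 140_000_000
    seconds_per_sentence = 10
    listening_seconds_per_day = 10 * 3600  # ten hours a day

    days = sentences * seconds_per_sentence / listening_seconds_per_day
    years = days / 365

    print(f"{years:.1f} years")  # roughly 106.5 years: over a century, as claimed
    ```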

    I would argue that what specifically underlies this bootstrapping is the pragmatist view of induction as practical reason: “[S]ince no hypothesis is ever completely verified, in accepting a hypothesis the scientist must make the decision that the evidence is sufficiently strong or that the probability is sufficiently high to warrant the acceptance of the hypothesis. (Rudner 1953, 2) … Sufficiency in such a decision will and should depend upon the importance of getting it right or wrong.” What matters for bootstrapping, thus, can be captured in the pragmatist principle: “given a precise formulation in the injunction to act so as to maximize expected utility, to perform that action, Ai, among the possible alternatives, that maximizes
    U(A_i) = ∑_j P(S_j | A_i) · U(S_j ∧ A_i)
    where the S_j are the possible consequences of the acts A_i, and U gives the utility of its argument.”
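    The injunction quoted above can be made concrete in a few lines. A minimal sketch (the actions, probabilities, and utilities are invented for illustration; none of these numbers come from Rudner or Pinker) of choosing the act that maximizes expected utility:

    ```python
    def expected_utility(outcomes):
        """Sum of P(S_j | A_i) * U(S_j and A_i) over the consequences S_j."""
        return sum(p * u for p, u in outcomes)

    # Hypothetical learner deciding whether to generalize a tentative rule.
    # Each action maps to (probability, utility) pairs for its consequences.
    actions = {
        "generalize rule": [(0.8, 1.0), (0.2, -0.5)],  # usually right, sometimes wrong
        "memorize case":   [(1.0, 0.3)],               # safe but unproductive
    }

    best_action = max(actions, key=lambda a: expected_utility(actions[a]))
    print(best_action)  # "generalize rule": expected utility 0.7 vs. 0.3
    ```

    On these made-up numbers, generalizing wins because its expected utility (0.8·1.0 + 0.2·(−0.5) = 0.7) exceeds the safe option's 0.3, which is the pragmatist point: acceptance depends on how much getting it right is worth.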

    ReplyDelete
    Replies
    1. Following on our class today, I'm still confused as to what exactly UG is.

      We have defined it as not OG. But what exactly constitutes UG?

      We mentioned phrases with asterisks as grammatically unacceptable. But conceptually, would it be fair to characterize UG rules as: rules for OG rules?

      Delete
    2. Universal grammar is the basis of all languages. Languages that have Subject Verb Object word order and ones with Object Verb Subject word order all adhere to the same principles. An elementary way of thinking of the innate grammar that connects the two types is that the complex internal structure leading to that word order is merely a mobile that can be rotated to give the other word order. Thus, the grammar is the same (all originate from deep structure) but minor adjustments can account for what seems like a language (phonetic output) that should be completely differently structured (surface structure). Syntax studies languages with the end goal of figuring out universal grammar, what are the rules that we are all born with that can give rise to many different languages? The more you study languages, the more you notice how much they have in common.

      Delete
  20. I find child language acquisition fascinating. The process of children progressing from sorting out sounds, to stages of babbling, stating words and two word phrases, and finally forming coherent sentences is particularly interesting in that it is systematic and universal. I also find that children are extremely curious which is quite beneficial to their learning. Pinker notes that children are quick to “announce when objects appear, disappear, and move about, point out their properties and owners, comment on people doing things and seeing things, reject and request objects…”. What I am really interested in is the point at which children learn more nuanced forms of language and when they begin to filter the transition of their thoughts into speech (i.e. not announcing everything they notice, not commenting on people in public). Is this filter placed by social/cultural constructs? Or, is this stage correlated with slowing down of rapid growth of neurons and connections (possibly it is some sort of “fine-tuning”)? This could be particularly interesting to study in order to learn more about the relationship between language and thought/cognition.

    ReplyDelete
    Replies
    1. Hi Annabel, I think you pose a really interesting question! Initially, I would be more inclined to think that this change would be constrained socially/culturally, since the child would receive less and less evidence of these announcements as they age and participate in conversations with adults. But you raise an interesting point, that it could also be related to neurobiological development. I was just wondering what you think this would actually tell us about the relationship between cognition/thought and language? The behaviour of not-announcing happens for some reason - learned or neurobiological - but what would figuring that out tell us about cognition? I ask because I'm pretty sure those thoughts don't go away - we notice if something is missing from the table or if someone has walked in the room - we just don't announce it anymore. I can't see right away what answering this would tell us about cognition per se. What do you think?

      Delete
    2. I think this is a really interesting question.
      I think social constructs play a role, and I don't know whether the rapid growth of neurons does. But I would guess that this might have to do with the cognitive development of the child, and might have more to do with that than with language itself. For example, there's a stage in development where older children are very fond of playing imagination games ("house" and whatnot), but we wouldn't expect a teenager or an adult to be. Young children's announcing of what they see could be something like that.

      Delete
    3. I think the point where they start filtering their thoughts is definitely related to social/cultural constructs. What may be appropriate to say in one culture might not be in another. I don’t think the fine-tuning of the neurons has anything to do with it; in my opinion, that is more about the fact that children have learned what they need to survive and now don’t need as many connections and neurons. As the article states, it is also one of the reasons why it is so much harder for us to excel at something, whether it be language or the guitar, after a certain age. For language, if you have never spoken it by age 6, it is impossible.

      Delete
  21. Earlier in the course, in our discussion about genetics and the “efficiency” of genetic encoding, we were given an example of hypothetical “mole genes” that code for the height of a mole. In the example, it was argued that it would be more efficient for a mole to have a gene that says “grow until your head hits the roof of your burrow”.

    This analogy may be a bit of a stretch, but let’s assume our hypothetical mole never left its burrow in its lifetime – we could argue that this mole only has “positive evidence” for its particular burrow. It’s not like it explores other burrows and tunnels and was explicitly taught which constructions were too small or too big. By extension, this mole only has positive evidence of how to build tunnels and, indirectly, positive evidence for its height gene(s) to determine how high it has to grow. Positive evidence is really the only consistent evidence children face during language acquisition. It would make sense, following from our mole example, that some genetic predisposition for language would make use of this positive evidence in the environment.

    Parameter setting does not seem like the most realistic way for a cognitive process to develop. It does fit very nicely with the on/off way we often think about gene expression (either the gene makes a protein or it doesn’t), but even gene expression isn’t really just 0s and 1s. Why wouldn’t it be the case that, like the mole height gene that says to grow until your head hits the ceiling, a human has genes that encode the ability to utilise our positive language evidence as it comes in? The mole isn’t exposed to too-short tunnels or too-long tunnels, only to tunnels that vary slightly in height. If many of the tunnels are really short, in this analogy you would just have smaller moles so that they can fit comfortably in the smaller tunnels. It seems more efficient for a child to have a smaller set of genes that is receptive to probabilistic language evidence and usage in their environment, rather than a separate gene for each parameter that will either be turned on or off.
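    This alternative can be sketched in a few lines of code. Everything here is an illustrative assumption, not anything from Pinker: a single hypothetical "head-direction" parameter, input pre-labelled by phrase order, and a crude proportion-tracking update rule standing in for whatever the real learning mechanism might be.

```python
# Toy sketch: estimating a grammar "parameter" probabilistically from
# positive evidence alone, instead of flipping a binary switch.
# The parameter, the labels, and the update rule are all hypothetical.

def estimate_head_initial(utterances):
    """Estimate P(language is head-initial) from observed phrase orders.

    Each utterance is labelled 'head-first' or 'head-last'; the learner
    tracks the running proportion, with Laplace smoothing so the estimate
    starts at 0.5 (no commitment) before any evidence arrives.
    """
    head_first = sum(1 for u in utterances if u == "head-first")
    return (head_first + 1) / (len(utterances) + 2)

# Mostly head-initial input pushes the estimate toward 1; the learner
# never needs a negative example, only the mix of orders it hears.
english_like = ["head-first"] * 9 + ["head-last"] * 1
print(estimate_head_initial(english_like))  # ~0.83
```

    The design point is the commenter's: the learner's state is graded and tracks the statistics of the positive input, rather than being a switch that a single triggering sentence flips on or off.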

    ReplyDelete
  22. In his paper, Pinker explores whether language acquisition occurs through modularity, as if through a mental organ, or whether it is ‘just another problem to be solved by general intelligence’. He delves into human uniqueness, as well as our seemingly innate ability to learn language, to explore language acquisition and how it might extend to human cognition. After reading this paper, my belief that our language abilities lie in our genetics has been greatly strengthened. Pinker states that ‘Children’s two-word combinations are highly similar across cultures’ (perhaps like some universal emotion?); additionally, we evolved to learn language quickly and accurately, as evidenced by the language/vocab ‘boom’ that occurs in the toddler years. Naturally I believe environment also plays a significant role in language acquisition – what if saying something that sounds or feels ‘wrong’ or incorrect feels that way because we’ve never heard it said before? Perhaps acquiring grammatical language is just the process of remembering what you have heard the most (I broke it) vs. what you don’t hear very much (I breaked it). Additionally, context must surely play a role. If one utters a sentence and is met with stares of confusion, there must have been something wrong with what was said – and so begins a learning process. I remember, as a child learning my mother tongue, my parents would often correct me as I spoke, until I remembered how to say certain phrases/vocab words properly and grammatically. While this is an example of active feedback, if babbling nonsense to someone vs. speaking a grammatical phrase gets different results (such as getting the other person’s attention or not), that too would help with learning what’s right and what’s wrong.

    ReplyDelete
  23. In 2.2 Pinker makes the point that language ability must be distinct from general intelligence because even those who suffer from hydrocephalus, or have very low IQs as a result of Williams Syndrome, still have fluent language capabilities with perfect grammar and articulation. But the topics which hydrocephalic children fluently discuss are entirely a product of their imagination. This got me thinking about what can be classified as language ability. Namely, is there a difference between following all the rules of language (prosody, grammar, vocabulary) and being able to ground this speech in reality (say, being able to successfully express what one would like to, or correctly answer a question about one's environment)? Do the two come together to define "linguistic ability"? Because the former description in isolation sounds a little too much like symbols in, symbols out to really constitute meaningful linguistic capabilities to me.

    ReplyDelete
    Replies
    1. Hi Deboleena. I agree that the definition of language does require some refinement. If I’m understanding your point correctly, when you talk about “rules of language” and the ability “to ground this speech in reality”, are you referring to articulatory speech and something similar to semantics, respectively? If so, then both articulation and understanding are definitely necessary components of language. Although the article, by distinguishing language from general intelligence, makes language seem only concerned with syntactic computation, language definitely encompasses the ability to “ground speech in reality”. I’m confused how the authors could have come to the conclusion that all of language is distinct from GI – a better conclusion would be that “articulatory speech” is distinct from GI. Moreover, the article claims that categorization, extracting simple correlations to form grammatical categories, is necessary for language acquisition. Isn’t categorization also a component of intelligence? Providing a definition for general intelligence would definitely offer some clarity!

      Delete
    2. I agree that if we just consider the former it doesn't really constitute "meaningful linguistic capabilities" in a very satisfying way. But I think the "rules" of language are what Pinker means when he talks about language ability, and whether or not what is said makes sense or is true is not really relevant (at least, not for Pinker, in this context).
      However, I wonder what we/Pinker would say about a child's linguistic capabilities if the child not only spoke nonsense from their imagination, but didn't seem to understand the meaning of words -- for instance, a child pointing to a red apple and saying "the desk is blue" instead of "the apple is red" (or saying very complex and perfectly grammatical sentences while clearly not using words correctly). Has that child fully developed language capabilities? Since I think we're only really talking about grammar, I assume we would have to say it has. But I feel like this presents a bit of a problem.

      Delete
  24. Inputs to language come in two main forms: Positive Evidence and Negative Evidence. Here we discuss language acquisition in children based on these two inputs to language.

    Positive Evidence refers to the information available to the child about which strings of words are grammatical sentences of the target language. Parents play an important role in producing correctly formed sentences, because children pick up on these during the language-learning years. It's all about the environment in which the child learns the language. Exposure to a certain language is what determines whether the child learns it: children with French genes will not learn French any more easily than English if raised in a neutral environment. Dialects and pidgin languages operate in this way, seemingly complex but both dependent on direct input from the child's environment.

    Negative Evidence, on the other hand, refers to information about which strings of words are NOT grammatical sentences in the language, such as corrections or other forms of feedback from a parent that tell the child that one of his or her utterances is ungrammatical. A few examples in this section give evidence that negative evidence is not truly necessary for a child to acquire language. Findings suggest that parents do not actually understand their child's well-formed questions better than the poorly formed ones. Also, parental feedback elicits very little response in the child, since feedback signals are given almost equally to grammatical utterances and ungrammatical ones; children have a hard time figuring out what the parental reaction is responding to. Thus, some other mechanism must be responsible for preventing a child from overgenerating too large a language.

    The author continues to discuss how Motherese, Prosody, and Context each play a role in developing a child's language from such a young age.

    ReplyDelete
  25. “Benjamin Whorf (1956), asserts that the categories and relations that we use to understand the world come from our particular language, so that speakers of different languages conceptualize the world in different ways. Language acquisition, then, would be learning to think, not just learning to talk.” Pinker goes on to refute this idea with evidence across different disciplines (i.e., we think in images, thinking predates language acquisition). Of course, I agree that this assertion oversteps in the sense that thinking is not just the narrative stream of words I have in my head. Everything in our environment can be transposed into thought irrespective of language. That being considered, I am having trouble seeing why this assertion is “so over the line” when last week’s reading and our class discussions have illuminated the GIANT DIFFERENCE between “red apple” and “the apple is red”. Since the ability to utilize propositions represents a unique human quality that has differentiated us from other species, perhaps it is not absurd to propose that language transforms our world in a meaningful way and actually changes the quality of our thought. Maybe it isn’t a prerequisite per se, but it does enable a certain calibre of thought not otherwise possible.

    ReplyDelete
    Replies
    1. @Jessica

      I am also having difficulty understanding why Pinker's rebuttal is controversial. Conceptually, Whorf is suggesting the nurture aspect of language acquisition, whereas Pinker is closer to the nature argument. But it is not so easy to declare one side the one grand theory of language acquisition. Nurture grows nature. Whorf's suggestion may not hold for language shaping our view of the physical world, but it most likely holds for shaping social and cultural structures. For example, the onomatopoeia for a rooster crowing in English is "cock-a-doodle-doo," but in French it is "cocorico," and in Korean it is "k'ok'iyo." The rooster is not making different sounds in all the different countries of the world. It is a basic example, but it nonetheless demonstrates the different interpretations of exactly the same sound, and how language can change how the world is experienced.

      "Language acquisition, then, would be learning to think, not just learning to talk."

      Although the Sapir-Whorf Hypothesis is debatable, anecdotal evidence appears to support it. Some cultural values are only explicable through specific languages. For example, the honorifics that exist in East Asian languages allow for certain social relationships that do not exist in English-speaking countries. Because of the different social and cultural structures derived from language, it is difficult to say that language acquisition is not learning to think. By learning another language, one is able not only to talk, but also to think within the cultural and social structures of that language.

      Delete
  26. The second part of the article gives examples of why language must be innate, explaining that grammar is a combination of rules (learned, or innate) plus parameters to those rules (set from the examples heard, without any need for negative evidence). The first example is the existence of constraint levels: the fact that words can be modified multiple times (plural, suffixes, prefixes...) but only in one order. Children follow the constraints without having heard a sufficient number of examples, so if the learning "matter" is insufficient to lead to correct output, there must have been some previous (innate) knowledge. The theory is that there is a subset of rules, and each child is born assuming that their language requires the smallest number of rules and parameters; learning corresponds to expanding outwards. One example of this expansion is the Blocking principle: an irregular form, once memorized, blocks the application of the corresponding general rule. All of these explanations have the same bottom line: we are born with a simple seed that has the potential to become language, given input during our lifetime. In my opinion, there is a lot of straightforward evidence that this is true, but a very important lack of explanation as to how it might be true. The article just makes us believe the hypothesis by giving evidence for it, without much explanation of the mechanism, or of possible contradictions/problems.

    ReplyDelete
  27. I was surprised that Pinker didn’t discuss the poverty of the stimulus more throughout this chapter. It would seem relevant when he talks about positive and negative evidence as inputs for language acquisition. But perhaps it was never mentioned because Pinker doesn’t attempt to distinguish ordinary grammar from Universal Grammar (UG): the evidence of grammar learning he gives, such as past-tense learning or regular versus irregular verbs, he perhaps takes to be examples of UG.

    Yes, Pinker’s section on positive evidence is about what children hear and see in their environment, and how their linguistic surroundings and interactions develop their language. He then goes on to discuss whether negative evidence is also required to acquire language, that is, information about which strings are ungrammatical. Children hear and make grammatical errors in sentences, and the parent gives corrective feedback until the child eventually learns the rules. But these examples only fall into the realm of ordinary grammar, which is learnable, and demonstrably learned, by induction and instruction – children produce these mistakes and are consequently corrected. These are not UG errors and corrections, and there is no poverty of the stimulus in the examples Pinker provides.

    The poverty of the stimulus is that children never produce or hear UG errors, so there can be no UG corrections. Language acquisition doesn’t seem to proceed by learning UG, because children would need examples of both members and non-members of the category – but all the languages they hear are UG-compliant and contain no UG violations, so how could the UG rules be learned (like the incomplete-category "Laylek" examples in class)? This brings us back to why Chomsky concluded that UG is innate... So how did it come to be genetically encoded – gradually, through an adaptive advantage?

    ReplyDelete
    Replies
    1. @Grace

      Pinker avoids mentioning the "poverty of the stimulus" argument in his Section 9.2 which seems an obvious section to address it.

      "But this is not UG errors and correction and there is no poverty of the stimulus in the examples Pinker provided."

      Perhaps Pinker is committing confirmation bias. It is likely true, as Prof. Harnad mentioned in class, that Pinker doesn't really understand Chomsky's "poverty of the stimulus" argument.

      "So how do we come to have this genetically encoded, was it gradually through an adaptive advantage?"

      Pinker suggests in section 2.1 that there were "about 300,000 generations in which language could have evolved gradually in the lineage leading to humans." However, I doubt that this timecourse is sufficient for evolution to master such a complex characteristic of being human. There are then two questions to think about. First, we would have to explain the extreme selection pressure required for language to become this dominant and pervasive in the human species. What selection pressure could have been so extreme that only about 300,000 generations sufficed to acquire it? Second, it is difficult to answer the question of change within a species: how long did it take for language to become a "fixed" trait (i.e., present in every member of the human species)? This explanatory gap is a frustration for everyone. If language, or the origin of UG, cannot be explained through genetics, then what other explanations can we possibly find?

      Delete
  28. “Interestingly, children treated their own overregularizations, such as mouses, exactly as they treated legitimate regular plurals: they would never call the puppet a mouses-eater, even if they used mouses in their own speech.”

    This type of observation suggests the existence of UG. How else could we explain that the child is learning the rules without something innate? I wonder: if we can extract the rule that the child is overapplying, could we then write a computer program with those rules, such that it learns languages without installing language packs and other libraries to simulate language?
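    A minimal sketch of the kind of program this paragraph wonders about, using the rule-plus-memory ("words and rules") mechanism Pinker describes: a general past-tense rule, plus a memorized irregular lexicon that, once an irregular form is stored, blocks the rule (the Blocking principle). The tiny lexicon here is illustrative, not a real model of the child's vocabulary.

```python
# Sketch: regular rule + memorized irregulars, with Blocking.
# Before an irregular is memorized, the general "-ed" rule overapplies,
# which is exactly the overregularization stage ("eated", "mouses")
# that children pass through.

IRREGULARS = {"go": "went", "eat": "ate", "break": "broke"}

def past_tense(verb, memorized=IRREGULARS):
    # A memorized irregular form pre-empts (blocks) the general rule.
    if verb in memorized:
        return memorized[verb]
    # Otherwise the productive regular rule applies.
    return verb + "ed"

print(past_tense("eat", memorized={}))  # "eated": rule overapplies
print(past_tense("eat"))                # "ate": irregular blocks the rule
print(past_tense("walk"))               # "walked": regular rule, as intended
```

    Note how recovery from overgeneralization needs no negative evidence here: simply hearing and storing "ate" as positive evidence is what switches the output from "eated" to "ate".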

    “A grammar is not a bag of rules; there are principles that link the various parts together into a functioning whole. The child can use such principles of Universal Grammar to allow one bit of knowledge about language to affect another. This helps solve the problem of how the child can avoid generalizing to too large a language, which in the absence of negative evidence would be incorrigible. In cases were children do overgeneralize, these principles can help the child recover: if there is a principle that says that A and B cannot coexist in a language, a child acquiring B can use it to catapult A out of the grammar.”

    I’m still not quite sure what Universal Grammar is. For anybody familiar with computer science, it is the same feeling as not knowing what an object is while learning to program, or what Dasein is while reading Heidegger.

    ReplyDelete
  29. Re: 6.5 Context

    Is it really true that language must be learned in context? I’m not sure I understand this argument. It seems to me that I have learned a great deal from listening to French radio or watching French movies. To be fair, I already had some foundational knowledge from school, but I find it hard to believe that no one has ever learned a language this way. If babies can learn from observing their parents, what is so different about watching television? Is language necessarily interactive?

    ReplyDelete
    Replies
    1. Lucy, I have found the same while trying to pick up Spanish. All of my professors have suggested listening to radio and watching movies in order to improve grammar and vocabulary. In addition, it seems that even after 8 years of practicing grammar in a classroom setting, a month immersed in a foreign country would be more beneficial. So I suppose context contributes something to learning, but to say radio and television do not impact language acquisition at all seems strange.

      Delete
    2. Does age matter? I see in 6.5 that Pinker was talking about "children" so could it be that he was talking about the first-language acquisition of children? I am not sure how different first and second language acquisitions are, but I think there IS a difference.

      I wonder the same thing: whether language needs to be interactive to be learned. I guess at the very first stage of acquiring a language, we usually need stimuli that are relatively slow, linguistically simple, and repeatable (so you can keep hearing the same Bonjour over and over until you learn the phrase). That could be why it is easier to learn in a language school than through radio or television alone. And I personally found radio and television useful only AFTER I had acquired a language (having already taken a certain number of language classes and learned basic grammar rules), not as the only input for learning a completely new language.

      Delete
    3. Distinguish first-language learning from later languages. UG gets triggered by the first language (during the critical period for language learning in the child).

      But it's surprising (to me) how much language children can learn passively, just from hearing it, without speaking, in association with what's going on in the world, even at the first-language stage.

      Of course categorization, which precedes language, is mostly supervised (though there can be unsupervised and innate categories too, and they could be associated passively with their names, also unsupervised).

      It seems to me that this is similar to vision, which is also partly unsupervised. It's amazing how much of the actual shape of things in the world our brains have already encoded in advance, through evolution (not just natural landscape differences, like mountains and valleys, but colors and shapes).

      It seems that the lazy Baldwinian "preparation" for language is even more powerful and specific than just: "Learn Categories, Learn their arbitrary names, Use them in Subject/Predicate Propositions to describe/define further categories"...

      It will take a Giant or two to sort this all out, not just a pygmy like me...

      Delete
    4. I think what Pinker is getting at with the “Context” section of the article is the question of symbol grounding. You (Lucy and Alex) both described experiences learning second languages in which radio and movies have helped, but I would wager that prior to listening to the radio and watching movies, you had already grounded at least a few French words in your own native language. What radio and movies do after some grounding has occurred is help you practice understanding the flow of that language, but only given that you already understand some of the words. If I tried learning a language I had never encountered before (and hadn’t grounded in anything, or in any language I already knew) over the radio, it would essentially be like listening to a bunch of squiggles and squaggles. Pinker’s point that nobody has ever learned their first language through the radio is that you could never ground those symbols in their referents. A better case could be made for movies, but I think that is only because there is more opportunity for visual feedback to ground the spoken words in the objects seen on the screen.

      Delete
  30. "... it's puzzling that the English language doesn't allow don't giggle me or she eated given that children are tempted to grow up talking that way. If the world isn't telling children to stop, something in their brains is, and we have to find out who or what is causing the change."

    Pinker’s argument here is that if children do not receive negative feedback about their ‘incorrect’ uses of language (for example, adults don’t tell them they’re speaking wrong), something in their brain makes it so they no longer use these ‘incorrect’ terms. But does negative feedback have to be as explicit as the child being told that he or she is speaking wrong? Could the fact that the child never hears any adults say “Don’t giggle me” or “I eated” itself constitute a form of negative feedback? Or at least a form of purely environmental feedback that discourages the child from using those terms?

    As an example, let’s say I like wearing pink around campus, but no one else at the university ever wears pink in public. After a while I start to notice that no one else wears pink in public and begin to feel weird about wearing pink. I wear pink less and less often and eventually stop wearing pink altogether, even though nobody has told me not to. Could something similar be going on to explain language acquisition despite the apparent lack of “negative evidence”?

    ReplyDelete
    Replies
    1. Kara, I thought exactly the same at this section. I don’t believe that negative feedback necessarily has to be explicit – the absence of positive feedback, or the absence of ill-formed sentences in the input, could be sufficient. I’m not sure there could ever be such a thing as a total absence of negative feedback: if a child kept producing non-UG-compliant sentences, the resulting negative feedback would be that nobody understands what the child is saying. If all a child hears is UG-compliant sentences, then all the child will be able to understand are UG-compliant sentences, and from that simple fact the child will know that only UG-compliant sentences are “useful” ones, or ones that communicate meaning effectively.

      Delete
  31. Concerning the part about the importance of positive evidence, the author talks about abandoned or neglected children who failed to develop language as a consequence. He then contrasts these children with youth who grew up in a community with a very limited language, and who developed a whole new language of their own with their peers. I wonder if, rather than relying on positive evidence alone, a young person simply needs social interaction. Indeed, I believe it is human connection that fuels the need to speak, rather than simple input. Put a child alone with a radio, and I doubt he will go on to develop language successfully.

    ReplyDelete
    Replies
    1. I think that's a really interesting idea.
      Would social interaction not itself form a kind of "positive evidence" (or a source of important positive evidence)? If a child were in a room with a radio -- or let's say a television, because a radio would involve a symbol grounding problem, as there are no visuals -- I agree that he or she would struggle to develop language properly (or as well as in an environment with social interaction). This is probably because the child wouldn't have the opportunity to ask questions and have those questions answered -- which I think would be an important source of positive evidence itself. If the child is directly engaged with the language and the positive evidence they're receiving, and is able to "test things out", it makes sense that they'd be able to develop language more easily.

      Delete
  32. Pinker, here, appears to understand the UG and poverty-of-the-stimulus arguments quite well. I particularly liked his detailed explanation of positive and negative evidence in terms of set theory and learnability theory, which demonstrates a priori how language is unlearnable without an innate mechanism (UG). He also does a good job of showing what UG might look like, and how it constrains how languages are learned. Perhaps he didn't show it in his last article with Bloom on natural selection, but I got a very good picture of it here.

    ReplyDelete
  33. Re: Ervin-Tripp (1973) Study
    This study showed that children of deaf parents were unable to learn any speech from radio or television input. However, if the parameters for language acquisition are that children need only be exposed to “sources of individual words” and “positive evidence,” why would these inputs be insufficient? Is the reason children don’t learn language from them primarily that the characters on TV or the voices on the radio are unresponsive, so that the children lack interaction? If so, this study seems to relate to “Symbol Grounding,” in that its results support the necessity of direct sensorimotor induction to learn the categories, or minimal grounding set, necessary for language (Reading 8B).

    ReplyDelete
  34. RE: Given this scientific definition of "grammatical," do we find that parents' speech counts as "positive evidence"? That is, when a parent uses a sentence, can the child assume that it is part of the language to be learned, or do parents use so many ungrammatical sentences -- random fragments, slips of the tongue, hesitations, and false starts -- that the child would have to take much of it with a grain of salt? Fortunately for the child, the vast majority of the speech they hear during the language-learning years is fluent, complete, and grammatically well-formed: 99.93%, according to one estimate (Newport, Gleitman, & Gleitman, 1977). Indeed, this is true of conversation among adults in general (Labov, 1969).

    I wonder what is considered the “language-learning years.” In my experience of being around parents with children aged 1-5, they do not speak in grammatically well-formed sentences. I think they use a lot of repetition and shortening of sentences, or even just key words, to make it easier for the child to comprehend. Does it make a difference in how much the child absorbs whether the parents are speaking directly to the child vs. to other adults? I’m not quite sure I understand why the sentences in the next paragraph (she walking, he be working) are not corrupt or ungrammatical. How can these kinds of sentences be positive reinforcement?

    ReplyDelete
    Replies
    1. I don't know if it's a matter of whether parents are speaking directly to the child or to other adults, so much as a matter of the attention the child gives to what is being said, regardless of to whom it is said. Also, as the passage you quoted says, although we sometimes produce ungrammatical sentences and slips of the tongue, most of the time we do speak in complete, grammatically well-formed sentences, and that is what makes up most of the positive input children receive.

      Delete
  35. I’m not sure if I’m completely understanding the idea of negative examples in the learning of language, but it seems like the statement that children receive no negative examples is referring to universal grammar as one entity, rather than as a collection of rules that might be learned/known separately. Couldn’t the negative examples come, not from hearing incorrect grammar and being told it is wrong, but by hearing different rules applied and not applied in different contexts? For example, if a child is learning that the suffix -s follows a noun to produce a plural, could the negative example be that the suffix -s is (to the child’s knowledge) always absent when the noun is singular? In this way, even though a child might not hear incorrect uses of universal grammar, they will be exposed to the constraints of using each of its rules (such as when to include a certain morpheme and when to exclude it).

    ReplyDelete
    Replies
    1. Emma, I think your explanation sounds correct. Pinker states that in studies parents typically correct the truth value of the stated proposition rather than the syntax of the string. So hypothetically, if a child said, "There is two cars," the parent would be more likely to object to the count being wrong than to the faulty agreement; only the truth value of the statement gets corrected. However, the child would still be able to recognize and associate the suffix -s with plural nouns from the speech around them.

      Delete
    2. Hi Emma, I agree that children will hear correct input that shows how the constraints of rules are applied in language. But these still don't qualify as negative examples: the children are not being corrected when they use an incorrect form; rather, they are hearing positive examples of when to apply certain rules and when not to.

      Delete
    3. I agree with Julia. I don't think this counts as "negative evidence" because the child is not actually being told that one way of saying it is *wrong.* This is positive evidence - Children being exposed to different cases of something being *right.*
      That being said, I agree with you. I think children can notice when one rule is never used in a particular context but is always used in a different one. This is positive evidence, but I think it can teach children a lot about both what's correct AND what's incorrect in language.

      Delete
  36. RE: They looked for a correlation, but failed to find one: parents did not differentially express approval or disapproval to their children contingent on whether the child's prior utterance was well-formed or not (approval depends, instead, on whether the child's utterance was true). 

    So Pinker is essentially saying that parents typically respond only to the truth values of children's propositions, regardless of whether those propositions were strung together with proper syntax? Could children then be learning simply by listening to well-formed strings over and over again? That is, children begin to categorize strings such that some belong to well-formed English and others do not. Surely a child who was never exposed to older speakers using ill-formed English would not develop ill-formed speech.

    Replies
    1. If the children were only listening to well-formed sentences and learning from that, this would be positive evidence. However, they would also need negative evidence to learn what is wrong and what is not. How else would the child learn that a sentence belongs in the not-well-formed category of language, as you put it, if no one corrects him? Apparently, though, this is the case with ordinary grammar. I think the question to ask is: how can children know the rules of universal grammar without negative evidence, given that positive evidence is not enough to explain it?
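      The logical problem raised here can be sketched in a few lines of toy Python (the "languages" below are invented two-string stand-ins, not real grammars): every well-formed sentence the child hears is consistent both with the correct grammar and with any overgeneral superset of it, so positive evidence alone can never reject the overgeneral hypothesis.

```python
# Hypothesis A: the (correct) smaller language -- an invented stand-in.
hypothesis_a = {"dogs bark", "cats meow"}

# Hypothesis B: an overgeneral superset that also allows an ill-formed string.
hypothesis_b = hypothesis_a | {"dogs barks"}

# The child only ever hears well-formed (positive) input.
positive_input = ["dogs bark", "cats meow", "dogs bark"]

# Both hypotheses survive: no positive datum can reject the superset.
consistent = [h for h in (hypothesis_a, hypothesis_b)
              if all(s in h for s in positive_input)]
print(len(consistent))  # -> 2: positive evidence cannot pick between them
```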

  37. RE: “If children don't get, or don't use, negative evidence, they must have some mechanism that either avoids generating too large a language -- the child would be conservative -- or that can recover from such overgeneration.”

    I am wondering how and why negative input would always be necessary to learn a language. I definitely see how it is necessary when there are two identical twins and you have to learn to differentiate them, since the two inputs are very similar. Example: Twin A is the one with glasses, and Twin B is the one without glasses.

    But what about when objects are not as similar to each other? Is negative input still necessary to be able to differentiate them? For example, if someone calls object A an apple, and object B an orange, I know that A is an “apple” and B is an “orange,” and that neither A nor B is an “apple and orange.” I am able to use positive evidence of what each of them IS to learn the category and identify it in the future.

  38. Regarding: "The chapter by Newport and Gleitman shows how sheer age seems to play an important role. Successful acquisition of language typically happens by 4 (as we shall see in the next section), is guaranteed for children up to the age of six, is steadily compromised from then until shortly after puberty, and is rare thereafter."

    I read the next section (Section 3) but did not find the answer to why successful language acquisition is rare after puberty. We know for sure that children acquire a language better, and are more likely to reach a native level, at an early age. However, it is still possible to acquire a language after puberty; it is just more difficult. I have learned that part of the reason is that our mouth and throat become accustomed to producing the sounds of our first language for 20+ years, so sounds that we do not articulate often become hard to produce. But I don't think that after puberty our prefrontal lobe becomes unable to deal with a new set of language rules; especially if UG is an innate capacity/ability that we have in our mind from birth, it should still be there as we grow up.

    If Pinker is discussing only FIRST language acquisition here, then I question whether the plasticity and development of the brain at an early age (0-6) is more influential for language acquisition than the capacity for language (UG). Since he says that successful acquisition is guaranteed until the age of six, is compromised until shortly after puberty, and is rare thereafter, it seems that acquisition depends more on the early development of the neural structure of the brain than on a (hardware-independent) capacity for language (UG).

  39. This paper provides many theories and experiments related to language learning, highlighting how complex the puzzle of language acquisition is, and how much of a miracle it is that we learn a language at all.
    I have no background in linguistics and was overwhelmed by all of the theories and experiments in this article. While there is a lot I could dig into here, I will focus on Section 4, “Explaining language acquisition,” because it seemed most pertinent to the course. Section 4 allowed me to frame the language-acquisition mystery as an intersection of two ideas we’ve highlighted in class: the (weak) Whorf Hypothesis (our language plays some part in determining how we cognize) and Funes (an unlimited capacity for memory stifles our ability to categorize, and therefore to cognize). Interestingly, Pinker highlighted that children’s memory-retrieval limitations (Funes) seem to be reflected in their inability to decipher and express complex temporal relations (Whorf). In child development, children’s memory abilities develop in tandem with their linguistic abilities: their neurological ability to organize the world in memory correlates with their linguistic ability to express complex time relations.
    Another point I found pertinent to the course was the learning strategy outlined in Learnability Theory: “The learner, using information in the environment, tries out "hypotheses" about the target language. The learning strategy is the algorithm that creates the hypotheses and determines whether they are consistent with the input information from the environment. For children, it is the "grammar-forming" mechanism in their brains; their "language acquisition device.”” From what I've understood, Harnad conceptualizes this idea as the innate propensity for universal propositions, and the inborn ability to recognize the power of the kernel and see the kernel of the target language for what it is - the toolbox by which to learn all other words in the target language. Learnability Theory takes this idea in a certain direction, conceptualizing language acquisition as an inborn hypothesis-testing algorithm. How such an algorithm might work in the brain, how we are endowed with these ‘hypotheses’ to begin with, how exactly this testing rubs against the kernel, what our sensorimotor capacities have to do with it... are all questions that, when we answer them, will open the door to how it is that we cognize at all.
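    The hypothesis-testing loop quoted above can be sketched as toy code. Everything here is an invented stand-in (three word-order "grammars" and schematic sentences), not a real linguistic analysis; it only shows the shape of the algorithm: keep every candidate consistent with the input, discard the rest.

```python
# Three candidate "grammars", each a predicate that accepts or rejects a
# sentence. These word-order templates are invented illustrations.
candidate_grammars = {
    "SVO": lambda s: s == ("subject", "verb", "object"),
    "SOV": lambda s: s == ("subject", "object", "verb"),
    "VSO": lambda s: s == ("verb", "subject", "object"),
}

def learn(observations, candidates):
    """Keep only the hypotheses consistent with every observed sentence."""
    surviving = dict(candidates)
    for sentence in observations:
        surviving = {name: g for name, g in surviving.items() if g(sentence)}
    return surviving

# Input from an (invented) SVO-speaking environment.
observed = [("subject", "verb", "object"), ("subject", "verb", "object")]
print(sorted(learn(observed, candidate_grammars)))  # -> ['SVO']
```

Note that this learner only ever uses positive evidence: hypotheses die by failing to generate an observed sentence, never by being told a sentence is wrong, which is exactly where the overgeneration worries discussed elsewhere in this thread come in.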

  40. After reading the article and several of the comments made by other students, I’m wondering what the right level might be for language. If UG is innate, does this mean that T3 is sufficient, so long as UG is pre-programmed? Or is UG reliant on neuronal connections? If UG must rely on the proper physiological connections, this would suggest that T4 might be the right level. I would be tempted to say that T3 would be enough, as I got the sense from some of the readings that UG can be characterized independently of neuronal/physiological details.

    Replies
    1. Syntax is the study of the internal structure of sentences, and its aim is to account for all languages with a minimal set of rules. Syntax is not the study of the neuronal connections that make such rules possible; rather, it is more like logic. Look up a syntactic tree (there are verbal descriptions of the rules too) to get an idea. Of course, syntacticians' conclusions can be one of many possible ones, much like the many possible ways to reverse-engineer T3/T4. Perhaps T3/T4 creators can use data from syntax to program one possible version of UG into robots, but they shouldn't feel obliged to do so, just as roboticists shouldn't be required to use every piece of information gathered from neuroimaging studies. This is due to underdetermination and the many possible ways to do something, whether more efficient or completely different.
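      For kid-sib purposes, the tree structure meant here can be sketched as nested data, with no commitment to how neurons implement it. The grammar fragment (S -> NP VP; NP -> Det N; VP -> V NP) and the example sentence are my own minimal illustration, not taken from the reading.

```python
# A sentence as a nested syntactic tree: each node is (label, *children),
# and a leaf node is (category, word).
tree = ("S",
        ("NP", ("Det", "the"), ("N", "child")),
        ("VP", ("V", "hears"), ("NP", ("Det", "a"), ("N", "sentence"))))

def leaves(node):
    """Collect the words at the leaves of the tree, left to right."""
    label, *children = node
    if len(children) == 1 and isinstance(children[0], str):
        return [children[0]]          # leaf: (category, word)
    return [w for child in children for w in leaves(child)]

print(" ".join(leaves(tree)))  # -> "the child hears a sentence"
```

The point of the sketch is the one made in the reply: the rules are stated over abstract structure (like logic), so a T3/T4 builder could implement them in any substrate.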

  41. Pinker argues for an innate human capacity for language, the nativist view of universal grammar, but also for the importance of context in the learning of language. This is encapsulated in the descriptions of positive evidence, where the environment in which the child learns language plays a central role; negative evidence, by contrast, relates to the innate recognition of ungrammatical utterances that is unaffected by context, indicating a secondary mechanism in the development of language. Something I find particularly interesting is the idea of the neuronal plasticity of a child’s brain and its relationship to language acquisition and then language maturation. Pinker points out that children often speak freely about the objects they encounter in their environment, and this tendency to describe/announce everything they see decreases as their language matures. Whether this is correlated with the neuronal pruning that occurs as children develop, or is related to feedback mechanisms, is fascinating.

  42. RE: Input

    I think Labov's studies on transmission vs. diffusion better illustrate how children acquire languages from their peers and parents. Transmission is the direct transfer of linguistic features (like an accent) from parent to child, while diffusion takes into account other things in the child's environment. While both push and pull on language acquisition, his studies showed that diffusion was the more prominent. I think this deviates from the view in this reading that the parent-to-child language relationship is the most important, especially because after age 5 children spend a very significant portion of their time with other people.

    Replies
    1. That's really interesting, Nick; I'd never heard of Labov's studies, which do raise the good question of how strong the parent-child relationship (transmission) is compared with other environmental factors (diffusion). From personal experience, I retain an accent closer to my mother's, while my sisters have Canadian accents due to the new environment after we immigrated. Of course, age definitely has something to do with this, as does stubbornness.

    2. It also relates to how languages evolve! The family-tree model of languages (Latin -> French + Spanish + Italian, etc.) vs. the wave model (how linguistic features like a sound or a word spread from a single point to the surrounding areas and dialects) correspond to the parent-to-child model vs. the environment-to-child view. While both are correct, the diffusion model seems the better explanation because it generalizes less and has less rigid borders. This is why I think the typical view of language acquisition is lacking.

  43. If I understand correctly from the “agent, action, recipient and object” chart, as children put together chunks of words in that proper categorical order, they are both hearing language in this structural order (missing some words but nonetheless ordering them correctly) and speaking this way. When learning language we fill in the spaces with other sentence strings we already know. This points to the notion of language development as a networking system, or a branching “tree.”
    --
    RE: “Every time we speak we are revealing something about language”
    This means that meanings (as derived from language) come in infinitely many possible combinations. But what then of reducing terms to simpler definitions, as we do in this course? Could that in some way compromise our ability to understand what we are studying, by reducing a set of words that is in fact growing?

  44. Before these articles, I had very limited knowledge of linguistics. Thus, the notion that children could not be learning language entirely from their environment, but instead must have some innate modules, was very counter-intuitive to me. Every other mechanism that we have seems very simple when you compare it to language. I still can’t comprehend how a phenomenon with so many details, and with so many exceptions to the rules we try to build around it, could arise from some innate module and not from learning. I understand how our vocal tracts evolved, and how there are certain areas for production and comprehension, but not how an area could tell us right from wrong, without feedback, on such delicate differences. It almost seems as if there is a person in our head who already knows universal grammar, telling us what to do. The fact that parents don’t provide their children with as much negative evidence as we think they do, and that children can devise their own language if they don’t have much positive evidence, was even more surprising. Is there, then, only one way to form languages, which we can arrive at with enough intelligence, given that we have areas devoted to language? I learned from the article that every language is a different setting of a few existing parameters. So there are not that many different ways to devise languages; there are certain rules one must always follow. The notion that the rules are in someone’s head when they are born does not make sense to me. However, maybe with the right resources from our genes and environment, everyone can come to the right conclusions about the language rules they are trying to learn. It is still mind-boggling to think how a child can learn such a complex phenomenon.

  45. Having never learned much about linguistics before, this was my first time encountering the argument over whether or not language evolved by natural selection (I am definitely a kid-sib of linguistics). I would have assumed that it must have, based solely on my knowledge of evolution and biology. When I started reading, I began to think about the complexities of language and the incredible ability of children to learn so quickly. I nanny three times a week for a bilingual 3-year-old, and his understanding of complex sentence structures in both English and French astonishes me every day I work. From my experience working with kids, I began to believe that there must be an innate mechanism for learning language that could not have emerged from natural selective processes. However, as I read more of this paper, Pinker sold me on the argument that natural selection can account for such complexity. Pinker’s explanation of a child’s ability to go back and correct mistakes in their knowledge of grammar, just by hearing the correct structure enough times, also seemed likely to me. However, I am not sure I understood Pinker’s explanation of parameter-setting as an account of universal grammar. I understand that “natural languages seem to be built on the same basic plan…”, but this doesn’t clarify for me exactly how children acquire language regardless of which language it is.

  46. One of the important takeaways I took from this article was that studying language acquisition is an important means to understand how hereditary and environmental components of language interact. As Pinker argues, language must be in part but not entirely innate. I believe the most clear-cut example demonstrating that language cannot be entirely innate is the fact that a baby with Japanese heritage raised in an English-speaking environment from birth would have no issues learning English. Thus language may be encoded genetically, but we certainly do not have all aspects of language encoded. The aspects of language that we call ordinary grammar (OG) come from our environment.

    Likewise, I believe there is compelling evidence that an aspect of language (i.e., UG) is innate (although I think this point may be more contentious). Pinker discusses the scarcity (and in some cases complete absence) of negative evidence in the environment that toddlers could use to learn the difference between grammatical and ungrammatical sentence structure. Rather, feedback is often given to children based on the truth value or content of their speech, not on whether it is correctly worded. Despite this lack of negative evidence, children rapidly acquire language (which has incredibly complex rules and structures) and quickly learn to speak with correct grammar. And no matter what you teach a non-human animal, it cannot acquire language as we do. Again, this implies there must be something innate in the structure of our brains that facilitates the learning of languages. No amount of environmental stimuli will give a dog or a cat the ability to speak fluent English.

    I was extremely intrigued by the examples Pinker gave of language acquisition in children raised in extreme circumstances (either in isolation or with other children) who were not fully exposed to language. The children who grew up isolated in a forest or were severely neglected by their parents were mute. This is perhaps not surprising, since they were never exposed to language. However, Pinker also gives examples of children raised with other children, without sufficient exposure to language, who created their own languages. This suggests that the primary factor limiting language acquisition in the first examples was the lack of social contact rather than the lack of exposure to an existing human language. I think this underscores the importance of social interaction in language development. Logically, it makes sense that toddlers would be compelled to listen and learn language simply because they wish to communicate better with their caregivers. I wonder if there have been any studies demonstrating that first attempts at language (i.e., first words) are often motivated by expressing a desire or need. For example, “mama” is the infant asking for their mother to come take care of them, or “ball” because they want to play with the ball. This seems like a truism based on our intuitive understanding of child development, but it has important empirical implications for language acquisition and the environmental conditions that facilitate it.

  47. RE: “Most obviously, the shape of the human vocal tract seems to have been modified in evolution for the demands of speech… but it comes at a sacrifice of efficiency for breathing, swallowing, and chewing” (2.1)

    It’s incredibly interesting to think that parts of our bodies evolved to compensate for the demands of language at the body’s own detriment, especially considering how versatile our throats and mouths have become in order to produce the different sounds necessary for different languages. For example, some words in Arabic have completely different meanings depending on if the sound produced was from the front of the mouth or from deeper within the throat, and others include glottal stops (pauses in sound production, like in the word uh-oh) as meaningful symbols. Personally, I always entertained the idea that language adapted to fit our bodies’ affordances, rather than the reverse, though I do admit that this makes much more sense.

  48. This made me wonder about second language acquisition. I know there are many cases where people never achieve native-like status in a second language, but some people do, which fuels the controversy over the critical period hypothesis. What I’m wondering is: many people learning a second language receive negative evidence, and this is often how they improve. Does this mean that those who are fluent in their second language do not think the same way a native speaker would, since they learned it differently? They certainly don’t know the rules of UG explicitly, but they know the rules of a language and learned that language differently. I think I’m just confused about what the presence of negative evidence in second language acquisition says about UG.

    Also, just a side question for fun, but a lot of people speak to their pets in a similar fashion that they would to a child. Does this mean anything, or is there a reason for that?

  49. My understanding from the reading by Pinker is that the negative feedback children occasionally receive in response to incorrect uses of ordinary grammar is neither required nor significantly consequential for learning the specific language dominant in their community. I understand the reason for this to be the presence of universal grammar. UG has a complex set of rules with parameters ready to be set to certain values, and this innate capacity ensures that all the sentences produced by the child obey the fundamental rules common to all languages. The remaining rules, specific to each ordinary language, are learned through children’s hypotheses about the correct way of formulating sentences within the framework of the innate UG rules. The hypotheses are tested against the environment, in relation to the sentences produced by adults. My take was that this form of hypothesis testing happens only at the OG level, and that all the sentences a child produces, whether or not they accord with ordinary grammar, are compliant with UG. Going back to our discussions of categorization, OG can be learned by unsupervised learning, as in the case of mere exposure to the use of “s” at the end of the names of objects when more than one of them is present in the environment, or by supervised learning, where parents correct the child who uses “breaked” instead of “broke” as the past tense of “break.” OG can also be learned by instruction; an example is when children learn to produce more sophisticated compound sentences in academic settings. However, none of these ways of learning would apply to UG rules, so they must be innate. Nevertheless, Harnad shows that it is hard to find an adaptive advantage and an evolutionary mechanism for brains that had the capacity to make only UG-compliant sentences.

    I am curious to know what an example of a UG rule would be. What is the difference in level of difficulty between UG rules and OG rules? Are the rules determining the acceptable order for adding “derivational affixes” and “inflectional affixes” UG rules? I think they are, because they seem to hold for all languages.

  50. Pinker’s article uses different examples to show the dissociation between language and a general intelligence module, but I’m wondering whether this dissociation might apply only to the OG level of language and not to UG. After all, we have never seen anyone using non-UG-compliant sentences, so we cannot check whether they would necessarily have intelligence deficits. Moreover, the fact that there exists a very specific category of UG-compliant sentences, learned without any negative evidence, suggests a very deep connection between having cognizing capacities and having brains that produce only UG-compliant sentences. I think intelligence, which is a capacity of cognizers measured by doing, is not clearly dissociable from our default setting of using UG-compliant sentences exclusively.
