Saturday 2 January 2016

9b. Pullum, G.K. & Scholz, B.C. (2002) Empirical assessment of stimulus poverty arguments

Pullum, G.K. & Scholz, B.C. (2002) Empirical assessment of stimulus poverty arguments. Linguistic Review 19: 9-50



This article examines a type of argument for linguistic nativism that takes the following form: (i) a fact about some natural language is exhibited that allegedly could not be learned from experience without access to a certain kind of (positive) data; (ii) it is claimed that data of the type in question are not found in normal linguistic experience; hence (iii) it is concluded that people cannot be learning the language from mere exposure to language use. We analyze the components of this sort of argument carefully, and examine four exemplars, none of which hold up. We conclude that linguists have some additional work to do if they wish to sustain their claims about having provided support for linguistic nativism, and we offer some reasons for thinking that the relevant kind of future work on this issue is likely to further undermine the linguistic nativist position.

86 comments:

  1. Whilst reading this article, I was constantly wondering whether the "grammar" discussed referred to universal innate grammar or to ordinary learnt grammar. In the previous article (Pinker), there was no mention of the poverty of the stimulus, and as I said in my post, it leaves questions and problems unanswered. I believe that Pullum says the inverse, but I don't understand why. In my understanding, for UG there is no problem of the poverty of the stimulus, because it is innate, so there is no limit to how much information can potentially be hard-wired and encoded before birth. But for OG there is a problem of the poverty of the stimulus, because, despite plenty of positive and negative evidence over the lifespan, children seem to pick up on correct ways to speak before receiving enough stimulus, so in reality the amount of stimulus received by the time the rule/exception is learned is insufficient. In any case, no article has yet explained HOW UG is innate, HOW it works, HOW much it contains, and HOW it evolved. But even if we assume we know all of that, the poverty of the stimulus still applies to both UG and OG in different ways and also needs to be addressed to prove for sure that there are such discriminable things as OG and UG. I'm not very sure of myself here, please don't hesitate to correct me!

    Replies
    1. Julie, you have it a little mixed up. It is only UG that lacks negative evidence. OG is not only much simpler than UG, but it also has negative evidence (errors and corrections). You mixed them up because others mix them up, or don't understand (or remember) the difference. And Pullum (who is quite clever, and wrote the exposé of the fallacy of the Eskimo snow terms) simply does not deal with the problem of negative evidence in this paper. Yet the lack of negative evidence is the problem of the "poverty of the stimulus." And it's a problem for UG only. Mixing in OG just obscures it and begs the question.

    2. I'm sure we'll go through this in class but I'm still confused about the UG/OG distinction. Is UG, by virtue of being universal, more general/less specified than OG? I'm guessing the issue with negative evidence is also one of categorisation, where learning OG is done through categorisation and UG is not...seems like a pretty crucial difference if we're going to make sense of the poverty of stimulus argument!

    3. To reply to Auguste's question of the distinction between UG/OG:

      UG is less specific; however, it leads to many specific types of languages. All languages have more similarities in their internal structure than dissimilarities. UG is proposed to be innate because it is never taught in school. We can describe word orders as SVO vs. OVS etc. (Subject, Verb, Object), but we don't know how children do it without ever having been told the above description. UG is never learnt (nor do we have its causal mechanism; syntax is attempting to solve that), and in children's speech production there is no negative reinforcement of it. In order to study syntax you need to compare grammatical and ungrammatical examples. In my syntax classes, the most difficult thing was coming up with these ungrammatical examples that were minimal pairs, in order to come up with a possible explanation or rule accounting for the results. For example, "is she coming" and "she is coming" are fine while "coming is she" is not. Or, perhaps more illustrative: "she is rapidly swimming", "she is swimming rapidly", and "rapidly, she is swimming", versus "is rapidly she swimming". As discussed, there is only positive evidence of UG. The poverty of stimulus argument comes in because children don't hear, learn or get feedback on a lot of the language they are able to produce (in addition to there being no negative evidence of it), leading us to believe that UG must be innate. OG, or ordinary grammar, such as the stuff learnt in an English class, is learnt through positive and negative feedback. As well, vocabulary is not part of UG; that is learnt from the environment.

  2. This comment has been removed by the author.

  3. In this article, Pullum seems to punch a hole through the wall of certainty that Linguistic Nativists like Chomsky had created regarding examples of Universal Grammar and the Poverty of the Stimulus. My knowledge of linguistics is limited, and the arguments made in the article are complicated, but the logical way that Pullum organizes his counterpoints paints a clear and concise picture of the weakness of the evidence presented in previous papers.

    The obvious question that this article raises is whether there are any concrete examples of UG sentences that are truly not found in everyday language. When reading the Pinker article (9a), I was thoroughly convinced by Gordon’s “Rat-Eater” study as supporting evidence for UG, but that seems to show the tendency of readers not to question appropriately the content presented in scientific papers. At this point, I would be interested in reading a response to this paper by Chomsky, or would at least be curious about other linguists’ reactions. My interpretation is that the poverty of the stimulus argument can still work, because there are so many examples of extremely uncommon grammar that even if one might encounter a handful of them in a lifetime, there are still likely to be many other examples that one will never hear.

    Replies
    1. I think the article’s criticism was that in order for the POS argument to be supported, researchers need to show how children learn with an absence of crucial evidence. Finding uncommon grammar shows that grammatical rules are followed despite lack of evidence, but not how.

    2. Karl, I certainly see your point. I too have limited knowledge of linguistics, so I am sure there are a bunch of nuances I am missing. By the same token, Professor Harnad has advised us to work through the many complicated (and often erroneous) statements about universal grammar and to focus only on the fundamental pillar of the poverty of the stimulus (defined as the complete absence of negative evidence). By the way, I know you know these things, Karl; I just want to show Professor Harnad that I also know all this when he reads my response. Anyways, by this alone, which again is the only relevant aspect of UG, besides positive evidence, which we know is ever present, it is clear that children simply do not get this negative evidence. Period. As with the rest of science, there are often anomalies that arise, especially when experiments feature idiosyncratic rules and specific statistical decisions. I do not think anything Pullum presents is automatically a hole through the wall of linguistic nativist certainty. It is, after all, a mighty mighty wall.
      Also, as confirmed by some other comments listed, it is evident that Pullum blurs the line between UG and ordinary grammar in a way that suits his argument. Points off for that.

  4. I'm having trouble understanding how linguists are able to make inaccessibility claims without actually studying the environment that children grow up in. To be clear, I understand that it would be impractical and very difficult to actually study every word presented to a child during his or her developmental period, but I don't understand how linguists are then able to say that it would be impossible for a child to have been provided with enough information.

    To be honest, I was a bit confused by this part of the article, but from what I could tell, linguists are making a lot of assumptions by arriving at an inaccessibility claim.

    Replies
    1. Re: "I don't understand how linguists are then able to say that it would be impossible for a child to have been provided with enough information."

      Although we will likely never know exactly what sentences and words a child has been exposed to, we can still easily prove that they have not heard the full extent of a language's grammar -- the set of possible sentence structures is infinite in a recursive language, while a child's life is of finite length. Therefore, the child has been exposed to only a finite number of utterances, not the infinite set just mentioned.

      So this is how Chomsky can make the poverty of the stimulus argument, when he discusses how "the attained grammar goes orders of magnitude beyond the information provided by the input data and concludes that much linguistic knowledge must therefore be innate" (as quoted by Wexler).

      This is why we can say that no matter how the child grew up -- even if they had the richest language environment possible -- they could not have heard the full extent of the language that they can learn to produce themselves.
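      The finite-input vs. infinite-language point above can be sketched in a few lines of code. This is a toy illustration with a made-up, hypothetical grammar rule, not a real fragment of English syntax:

```python
# A toy illustration of the point above: a finite, recursive rule set
# licenses an unbounded set of sentences, so no finite amount of
# childhood input can contain them all. (Hypothetical mini-grammar.)

def sentence(depth):
    """S -> 'the dog thinks that' S | 'the cat sleeps' (embed `depth` times)."""
    if depth == 0:
        return "the cat sleeps"
    return "the dog thinks that " + sentence(depth - 1)

# Every depth yields a new, longer grammatical sentence:
for d in range(3):
    print(sentence(d))
```

      Since the embedding depth can grow without bound, two rules already generate infinitely many distinct sentences, while any child's experience samples only finitely many of them.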

    2. I had a similar thought while reading the Pullum piece, especially after reading the Pinker section on “Motherese”. Pullum points out that it would be ideal for the field of linguistics to have documentation of every utterance that an infant heard in their first few years of life to determine what kinds of structures they were actually exposed to (this is what Dominique pointed out above, too). But since this kind of data doesn’t exist yet, the authors of the Pullum piece (like many linguists) use text corpora from sources like the Wall Street Journal. They do qualify their use of these text corpora by noting that “surely it is implausible that one could expect to reach kindergarten without running into any sentences” like “Will those who are coming please raise their hands?”, where the subject contains an auxiliary. But when you look at the vast majority of the input that kids receive before kindergarten, it’s mostly “Motherese”: very simple syntax (if there is any at all). Is it really fair of the authors to say that most kids will see sentences with auxiliary subjects?

    3. Another point to this end: Pullum admits that it is foolish to try to document the full selection of utterances that a child will hear during their language acquisition period. It stands to reason that the number of grammatical structures they will produce in a lifetime will be exponentially bigger, making it very unlikely that they were exposed to structures earlier in life that account for everything they produce. Furthermore, some structures seem to be forbidden even though they seem logical from other constructions. For example, the rules of 'Wh-movement' (by which we move question words to the front of a phrase) seem completely non-intuitive, and it is very unlikely that a child would hear each rule governing Wh-movement in their acquisition period. It seems that Pullum used examples that are easy to disprove, and used very complex language so as to make the APS seem inaccessible.

    4. This comment has been removed by the author.

    5. I agree that Pullum's article does a good job of raising arguments to challenge the poverty of the stimulus. But personally, I am having a hard time agreeing with the arguments that Pullum raises against the explanatory power of the poverty of the stimulus to explain characteristics of language acquisition. The evidence that Dominique mentioned in her comment -- a child's ability to create an infinite number of propositions despite the finite number of propositions that a child hears in their lifetime -- is to me the strongest proof for the poverty of the stimulus. The empirical work that would need to be carried out to fully confirm the influence of language input on language production is tedious, as Pullum himself points out.
      Data-driven learning (i.e. the chance that the child was actually exposed to a specific proposition), which is often used to disclaim poverty-of-stimulus arguments, still does not explain the capability to create an infinite number of sentences despite a finite auditory input of propositions. Valentina's comment below talks about a child who was raised with minimal language input and therefore was unable to learn language. This shows that children's ability to learn language requires environmental interaction with an evolutionarily innate system. Until further evidence is provided, I am convinced of the explanatory power of the poverty of the stimulus.

    6. Pullum focuses more on ordinary grammar than on universal grammar, and therefore ineffectively tries to argue against UG being innate due to the poverty of the stimulus. Pullum also says the poverty of the stimulus argument is not well founded because of its non-empirical nature, i.e. there is no evidence for the poverty of the stimulus. But Pullum cannot get far with this argument, because the poverty of the stimulus is founded on the fact that children hear no violations of UG and are yet able to follow UG regardless of where in the world they are. The poverty of the stimulus relies on the fact that there is no evidence of violations of UG.

  5. It’s interesting that this article raises questions (like innate priming) about Gordon’s experiment as support for the APS regarding the tendency of children to use irregular plurals to make compound words like mice-eater (because of a higher occurrence of these words compared with their singular versions). I’m just wondering what changes Gordon’s experiment needs in order to be more persuasive?

  6. As someone who has taken a couple of linguistics courses and read the Chomsky paper where the argument for the poverty of the stimulus (APS) is introduced, I found this piece by Pullum as refreshing as it was challenging. I was particularly amazed at the part where Pullum breaks down the logic of the APS. I had never really noticed how some papers just point out a bunch of facts about how amazing it is that kids effortlessly pick up their native language, and then piece these facts together to arrive at the conclusion of nativism.

    Replies
    1. I agree. I do not study any linguistics, however, so naturally I just accepted the APS out of plain ignorance of the field. But it's nice to hear there's still a lot of debate. Another point that was very strong in the article was Pullum's attempt to break down the definition of the APS using what researchers have defined it as. It seems as though a lot of people are tackling a problem whose definition is really not that well agreed upon. Similar to how we like to use terms like concepts or representations in cognitive science, it's pretty important to use language that doesn't beat around the bush and truly defines what we are talking about when we introduce a hypothesis like the APS.

  7. If I’m correct, the authors are proposing evidence against the idea that, due to the “poverty of the stimulus”, language must be innate -- otherwise, how would children be able to say things grammatically without having evidence for them? They do this by finding several examples of such evidence in the multiple situations that they looked into, thus exemplifying that children learn these grammatical processes because there is evidence for them in their input. However, I’m still stuck on the lack of negative evidence, which, in my opinion, provides strong evidence that UG must be innate. If every incorrect grammatical formulation isn’t taught to children and they still don’t make grammatical mistakes, then they must have some innate knowledge of grammar.

    Replies
    1. I agree with you Maya. Children can learn on their own without necessarily receiving feedback or instruction, which supports that there must be a component to the acquisition of grammar that is largely innate. At the same time, I also agree with Cassie’s point (above) that it is still important to be critical of linguistic research. We will not truly know the answer to this debate unless we find a way to "conduct an examination of every utterance used in the presence of some specific infant during the first five years or so of the infant's life," in order to find a correlation between children's input vs output in terms of language. This research would also have to be conducted across multiple infants (assuming that although language acquisition is systematic, it’s not always identical across children). For now, I don’t find this enormous (and seemingly impossible) research project necessary because I believe UG is strongly supported.

    2. Hello Annabel,
      I do think that 'feedback' and 'instruction' are interesting topics. There is no advantage to being spoken to in motherese, even though it could be seen as ‘instruction’. Additionally, there are cultures in which children are not spoken to until they can speak perfectly. However, a child must be exposed to language regularly to acquire it. There is the example of a girl who was kept in a basement, rarely ever had human interaction or exposure to language, and was not able to learn it. Therefore, there must be some ‘instruction’, even if it is not overt, as in directly saying to a child ‘that is wrong’. There are mechanisms in the brain that take auditory feedback from when we speak and compare it to targets, resulting in a pattern of activity. This could be seen as a form of instruction, where adult language is the goal and you have a set of ‘rewards’ (potentiation) and ‘punishments’ (weakening of synapses). Additionally, instruction could take the form of being understood or not!

    3. I don't think nativist and constructivist views on language are mutually exclusive. Maybe Pullum and Pinker have more in common than they think. Language acquisition is neither purely input-driven nor purely innate.

      I agree with you Valentina, there is some form of 'instruction' although clearly it is not in the form of overt correction from parents. As discussed in Pinker's reading, instruction may come in the form of prosody, semantics, and the organization of grammar in the input. These aspects of feedback are all necessary but not sufficient for language acquisition. Specific forms of feedback via input are necessary for turning the parameters of UG on or off.

    4. Regarding the example of the girl who was kept in a basement and never learned language, as well as other examples of severe neglect which resulted in people being unable ever to learn language: I think it’s difficult to attribute this solely to a lack of positive language input/instruction during a certain period. These are extreme cases involving trauma, abuse, or severe neglect, which have been shown to have many detrimental effects on cognition in general, so there are too many confounds to attribute the failure to learn language to any one specific thing, such as instruction. Of course, no controlled experiment could be conducted to deprive a child of language input to definitively pinpoint why they are unable to learn language, as this would be unethical. So I think we need to be careful about what we assume from those extreme examples of deprivation.

    5. I have a very limited background in linguistics, so I could not really appreciate all of the nuances of Pullum’s arguments. However, I understood Pullum’s main question to be: is the lack of negative evidence enough to deduce that UG/nativism is valid? Linguists tend to point to evidence wherein children use only UG-compliant sentences, painting a picture of how much of a miracle it is that children learn their native language. From there, they use Chomsky’s reasoning and make a logical leap in concluding that nativism is true. Pullum thinks that leaning on Chomsky, and drawing conclusions about the means by which we acquire language, is too fast, too soon. We simply don’t have enough evidence on the topic yet, and have more work to do.

      Valentina, thanks for translating some of Pinker’s points/language acquisition 101 into kid-sib terms. I’m sure the goal of most linguists is that once we sort out the technicalities of the ‘language hypothesis testing mechanism’ in child brains, and how it is tailored to the development of UG, we will satisfy Pullum. I wonder how we might turn auditory and sensorimotor data into rewards, causing potentiation, and punishments, causing the weakening of synapses. I wonder whether distancing ourselves from the POS arguments will help linguists find answers sooner, or whether Chomsky is on the right track and linguists should continue as they are.

    6. Valentina, I think you have a few really good points. From what you're saying, it sounds like although there may not be direct negative evidence telling a child they are wrong, there is still instruction as well as feedback (like whether you are understood or not) that can serve to shape the child's language. We also know how beneficial interaction with and exposure to language are for it to develop and grow, such that children with more exposure to and use of language develop language at a much faster rate. With that being said, I still believe the fundamental aspects of language are innate, but interaction and exposure still play a huge role in shaping language ability, such that without any instruction or feedback an individual's linguistic abilities would be greatly affected. Thus, our full language ability can't be explained entirely by the fact that it is innate, but at the same time the environment can't be entirely responsible for its development either. The interaction of both aspects allows our linguistic ability to be as extensive as it is.

    7. Regarding this entire thread, I’m not sure if the argument being made is necessarily exclusively nativist or constructivist. I don’t think that either position would argue that language acquisition is exclusively innate, or exclusively input-driven. This is the age-old nature vs. nurture debate that, as far as I know, currently stands at the nebulous “it-depends” position that is that everything is a mixture of both.

  8. According to Chomsky, one’s environment is devoid of negative examples from which to learn the structural relations of a particular language, yet children still manage to learn the rules. He therefore concludes that the capacity for UG must be due to an innate mechanism as opposed to bottom-up learning from the environment. This is known as the poverty of the stimulus argument.

    This article argues that in order for APS to be true, one must find a way to 1) quantify “how much is enough” and 2) compare this number to “what there actually is”. By this, the authors highlight the need for scientific proof that the environment is indeed poor in terms of its number of negative instances that are relevant for language acquisition.

    By showing that the frequency of negative and relevant instances is actually quite high, they call Chomsky’s PoS argument into question. If the environment is not completely devoid of negative instances, then is it truly poor? In order to answer this, the authors argue that there needs to be a way to systematically account for all negative instances that make up a person’s environment, and show that this is quantitatively insufficient.

  9. RE: “People attain knowledge of the structure of their language for which no evidence is available in the data to which they are exposed as children”.

    Chomsky’s PoS argument relates to the “absence” of negative evidence, while the authors of this article argue in favor of the “sufficient amount” of evidence required. However, is negative information really necessary for grammatical capacity?

    In terms of supervised learning, negative information is necessary in order to learn the categories that make up the grammatical rules. Therefore, the child needs to be given examples of right and examples of wrong.

    However, in terms of unsupervised learning, a child may not need to have an environment that is rich with negative data. If their environment only contained what is “right”, and they were to come across ungrammatical (wrong) evidence, there would be a stark contrast/perceptual boundary (i.e. like mountains and valleys) that the child could easily pick up on. Therefore, the child would not need an environment that was rich in/sufficient in negative data to learn the grammatical rules that are so pivotal in language acquisition.
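    The positive-only "stark boundary" idea above can be sketched as a toy novelty detector. Everything in the sketch is hypothetical: real utterances are not two-dimensional points, and the feature mapping is an assumption made purely for illustration:

```python
# A minimal sketch of positive-only learning, assuming (hypothetically)
# that utterances can be mapped to feature vectors. The learner stores
# only grammatical examples; anything far from all of them lies outside
# the learned region -- no negative examples are ever needed.

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def fits_learned_region(positives, candidate, threshold=1.0):
    """Accept a candidate if it lies near some attested positive example."""
    return min(distance(p, candidate) for p in positives) <= threshold

# Toy features for attested (grammatical) input -- positive evidence only:
positives = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2)]

print(fits_learned_region(positives, (0.15, 0.1)))  # near attested input
print(fits_learned_region(positives, (5.0, 5.0)))   # a stark outlier
```

    On this sketch, an "ungrammatical" item is simply one that falls far outside the cloud of attested examples, which is one way to cash out the mountains-and-valleys intuition without any negative data.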

    Replies
    1. I thought your introduction of the unsupervised learning concept here was rather interesting and quite convincing at first. I think, however, that it is a little misdirecting. If we are talking about Universal Grammar, then it is true that there is only positive evidence in our natural environment as children and never any negative evidence at all. So, it might be true that there would be a stark perceptual boundary that might pop out to us. However, this still does not address some of the concerns. The issue is that children are capable of producing infinitely many utterances that they have not been exposed to. One of the issues with unsupervised learning is that it is data-intensive and requires a huge sample size. Also, it seems unfeasible to learn an entire catalogue of grammatical rules unless there are some basic foundational rules and some innate rules to combine them. But even then, the appeal to innateness is the appeal of Universal Grammar and the poverty of the stimulus argument.

    2. Re: Yi Yang Teoh, "The issue is that children are capable of producing infinitely many utterances that they have not been exposed to. One of the issues with unsupervised learning is that it is data intensive and require a huge sample size. Also, it seems unfeasible to learn an entire catalogue of grammatical rules unless there are some basic foundational rules and some innate rules to combine them."

      I'm not sure how the fact that children are capable of producing infinitely many utterances they haven't been exposed to is an issue. It seems that in this Pullum paper, the authors agree that this does not pose a problem. Children could learn grammar and language and then recombine the categories within language to make an infinite number of new utterances. This is true of any type of categorization. For example, human beings are able to learn new categories, such as different types of cell phones, and would then have an infinite number of possible categorizations and specifications of their characteristics. This does not seem very different from language, and yet no one would argue that humans are born with an innate ability to categorize cell phones.

      I'm also not sure why you jumped to the conclusion that without negative evidence, one would have to have basic foundational rules of grammar. It seems possible to me that positive evidence provides enough examples in order to re-apply pieces of the examples in individual categories, as in new instances and combinations.

  10. “What Hornstein and Lightfoot claim is that some of the sentences children never hear are crucial evidence for learning from experience. That is, some aspects of languages are known to speakers despite the fact that the relevant positive evidence, although it does exist, is not accessible to learners during the acquisition process, because of its rarity: linguists can in principle discover it, but children will not.
    This claim, if true, would refute non-nativist theories of language learning. If learners come to have linguistic knowledge without being exposed to evidence that would be crucially necessary for experience-based learning, then learning is not experience-based.”

    Pullum and Scholz explain the refutation of this argument with the thought experiment of Angela, who knows that F is a fact about L by using evidence from expressions of L, which means that the evidence is available. They then explain the weaker version of Hornstein and Lightfoot as follows: “we replace total lack of evidence by lack of evidence that is adequate to the task.”

    Even with this weaker version, I am still puzzled by how a lack of evidence adequate to the task could be provable for all tasks and all linguistic abilities. While they are correct to refute the stronger claim, I am unsure how they can give the weaker version validity given the vastness of said tasks and the diverse linguistic environments in which children grow up. It is certainly possible to find an exception to this -- an example where a child learns by experience. Just because there may be situations in which children do not learn by experience, can we conclude that experiential learning is not critical for some (even if only a few) linguistic capabilities?

  11. Re: "INGRATITUDE: Children are not specifically or directly rewarded for their advances in language learning."

    The article places ingratitude as a property of the child's environment, and it is something I am having trouble believing. Parents often try to get the child to speak/say their first words, and give large amounts of attention to the child in the process -- especially when the child says their first word. The child's speaking acts as positive reinforcement, since they receive more attention as a result. From the moment it's born, the child is able to view conversations between older people, and generally when these people speak, the child isn't receiving the majority of the attention. It makes sense why the child wants to speak, and I find it difficult to believe that ingratitude is a part of the child's environment -- attention is a big reward for a young child.

    Replies
    1. I think that the problem is that this attention does not have the necessary content to explain a child’s grammar acquisition. Reinforcement alone cannot explain it.

    2. I'm not sure that I agree, Austin. I don't think it is necessarily about having the right "content"; it is more about the fact that language acquisition can occur independently of positive reinforcement. I think it is a bit of a stretch to argue that all babies receive encouragement from their parents to speak. There are certainly some neglected children who would not receive such positive reinforcement, and Pinker in 9a discusses how it takes about two generations for humans to develop language on their own. So although I think you are right, Eugenia, that most babies do receive positive reinforcement for speaking, I believe that what is important for the argument is that they can still learn language in the absence of positive reinforcement.

    3. Hello everyone, perhaps the distinction between Ordinary and Universal grammar in this case would be somewhat helpful?

      If I am not mistaken, Eugenia’s points and some of Lucy’s points may both be addressing the learning of ordinary grammar, which indeed is learned with plenty of positive and negative instances and correction. In that case, attention and positive reinforcement are explicit, and correcting a child’s grammatically wrong utterances sentence by sentence offers their brain the negative evidence needed to learn the correct ordinary grammar rules -- so OG is learned, like anything else, through trial and error and instruction from others.

      As for Universal Grammar, which is what Pullum tried his hand at and what Austin has pointed out: the example of attention doesn’t explain or solve anything in UG, for the reason that there are no negative instances of it (i.e., the “Laylek” examples in class).

      Or perhaps your points meant something different, what do you think?

      Delete
  12. Re: “Our point is that the advocates of the APS – defenders of the claim that the child could not possibly be exposed to enough data to learn the generalization in a data-driven manner – must develop some explicit and quantitative answers to such questions if they wish to support their claim.”

    The problem pointed out for the POS argument is that its general formulation has not been supported by evidence and that the formulation itself is not specific enough to be meaningfully tested. My thought is to connect this to the article for 8b. Perhaps what proponents of the POS argument need to do is find their equivalent of the minimal grounding set: what are the bootstrapped rules from which all other rules can be derived? Related to this: “We also need to know much more about what sort of utterances constitute the typical input to children, and what they pay attention to in that input.” It seems that what is needed here is a minimal set of input to which we can (referring to my post for 9a) apply the pragmatist formulation U(Ai) = ∑j P(Sj | Ai)U(Sj ∧ Ai) to account for the attentional weight associated with the different inputs. These two, jointly, can offer a beginning of an answer to how “rules or principles of grammar... are known despite lack of access to crucial evidence.”
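    The formula quoted above can be made concrete with a small numerical sketch. This is only an illustration of the expected-utility computation U(Ai) = ∑j P(Sj | Ai)U(Sj ∧ Ai); the function name and the toy numbers are invented here, and reading P(Sj | Ai) as an "attentional weight" is this comment's proposal, not anything from Pullum & Scholz.

```python
# Toy sketch of U(A_i) = sum_j P(S_j | A_i) * U(S_j and A_i),
# with P(S_j | A_i) read as an attentional weight on each candidate input.
# All names and numbers here are illustrative, not from the article.

def expected_utility(outcomes):
    """outcomes: list of (attentional weight / probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Two candidate interpretations of an utterance, weighted by attention:
parses = [(0.7, 1.0),   # strongly attended, useful interpretation
          (0.3, 0.2)]   # weakly attended, less useful interpretation
value = expected_utility(parses)  # 0.7*1.0 + 0.3*0.2 = approx. 0.76
```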

    ReplyDelete
  13. This comment has been removed by the author.

    ReplyDelete
  14. I read this article after this week's class (-insert a weak excuse about midterms and assignments here-), and it made it a lot clearer why the mainstream arguments for the poverty of the stimulus are not just lacking evidence but arguing for something much different than the existence of UG. I'm still confused about what UG really is, but I think that the reason so many APS experiments went the wrong way is in fact this confusion: the concept of UG being so obscure yet somehow intuitively obvious, scholars presuppose its existence and attempt to demonstrate it with what makes direct sense to them, namely OG manipulations. But since OG is not UG, the arguments necessarily fail.

    ReplyDelete
    Replies
    1. @Mael

      "...the reason so many APS experiments went the wrong way is in fact this confusion: the concept of UG being so obscure yet somehow intuitively obvious, scholars presuppose its existence and attempt to demonstrate it with what makes direct sense to them, namely OG manipulations. But since OG is not UG, the arguments necessarily fail."

      I agree with you that the confusion in defining UG led to the misorientation of the APS experiments. To add to your point, the APS experiments are satisficing the APS by attempting merely to demonstrate it, and it is with this confirmation bias that the circularity begins.

      I wonder if Chomsky will ever attempt to clarify all the confusions that are causing the standstill in linguistics and in cognitive science. Perhaps Professor Harnad should invite Chomsky to give a talk at McGill to disseminate the real APS that Chomsky intended.

      Delete
    2. Mael, I agree that a lot of the APS experiments tend to confuse OG with UG; on the flip side, it seems unethical to try to centre an experiment around UG alone and its manifestation in children. For example, if we were to conduct such an experiment to prove whether or not UG still exists in children without exposure to language, we would need a control: presumably a child who has had no contact with language (so not total sensory deprivation, just language deprivation). And since humans are the only beings capable of propositions, and therefore language, we wouldn’t be able to substitute other animals, or even neural nets, in place of this child. Another ethical concern: how would this child be raised? By animals, or by researchers in a room? Even more concerns would arise from this kind of strict UG experimentation, so for now, OG is the best we can work with.

      Delete
  15. I must admit I found the linguistic details in section 4 a bit dense, but if I understood correctly, Pullum is arguing that the argument from the poverty of the stimulus (APS) is not supported by the level of evidence people assume it is? This would mean that one cannot simply cite the APS and renounce data-driven theories of language?

    The impression I had before last Friday’s class was that the APS refuted the behaviourist claim that language was a purely data-driven, learned process, with nothing in language being innate. Due to the ‘poverty of the stimulus,’ some structures that children succeed at forming correctly are surprising and difficult to generalize from mere experience, and therefore there was reason to doubt purely data-driven explanations.

    In class, Prof. Harnad made a distinction between ‘ordinary grammar’ (OG) and ‘universal grammar’ (UG). OG does not have an APS because it is simple and has negative evidence (i.e., a schoolteacher or parent corrects your grammar), which allows humans to successfully distinguish between linguistic schemas. Universal Grammar, being the underlying structure for language that all humans are able to grasp, does have this problem, because unrelated languages share similar structures? I confess I’m still a bit shaky on this.

    However, if Pullum’s conclusion is simply that the APS is not completely understood or proven, and in fact relies on a grouping of different arguments, doesn’t this simply mean we cannot discount data-driven learning of OG? Put succinctly, it seems we have run into the central lesson of cognitive science again: human cognition is almost always a mix of environment and predisposition, and single-sided theories fail to account for the whole picture. Was it ever the case that we assumed that language was 100% innate? If so, I find this surprising.

    ReplyDelete
    Replies
    1. @Edward

      "...we have run into the central lesson of cognitive science again: human cognition is almost always a mix of environment and predisposition, and single-sided theories fail to account for the whole picture."

      I think it is not only the central lesson of cognitive science; it applies to each and every discipline with which human beings are involved. I've mentioned in a previous skywriting that "nurture grows nature." Whether a capacity was initially innate or not, human beings and animals are dynamic systems nurtured to "learn." So any theory that suggests otherwise is bound to fail in explaining the how/why of language acquisition. In addition, the linguistic theory of language acquisition holds that at least some knowledge about language exists in humans at birth; therefore, even linguistic nativism does not "[assume] that language was 100% innate."

      Delete
    2. I think that most of these readings are saying that learning of OG is in fact data-driven, and influenced by reinforcement from the environment. The main debate centers around the acquisition of UG, which as you explained, concerns the specific mental representations and operations that are common to all languages. I think your point that human cognition is almost always influenced by the environment as well as biology provides a good link back to Baldwinian evolution. I don’t think it was ever the case that language was 100% innate, but that as language evolved, it made more sense for the organism to offload the acquisition of specific grammar rules – aka OG - onto the environment, while leaving the organism with an innate capacity to then acquire these rules. From an evolutionary perspective, this seems like the easiest way to ensure the development of language throughout generations, without wasting energy/capacity on aspects that could be learnt through experience.

      Delete
  16. In section 2, Pullum cites the circularity of the explanation provided by Hornstein and Lightfoot. He says, basically, that if a person A could point to evidence from a language as having led to knowledge of a linguistic fact F, then language acquisition is empirical; but if they attest to knowing it through some inherent biological mechanism, this presupposes a nativist understanding. However, as is the case in much of the decision-making/rationality/economic/value-assignment literature, people often make the "right" choice without knowing why they make it. Sometimes they provide the wrong explanation for why they are making the right choice. So in this case, if A knew F, asking her how she knew it would not necessarily support either stance; she could point to some rule she believed explained her knowledge of F, but her knowledge would not really be due to that exposure.

    I find the watered-down version of Hornstein and Lightfoot's claim compelling: that such positive evidence may exist after extensive research, but not be accessible to a child, and therefore support the idea of inborn structures that support language acquisition. In 2.2 I saw a similarity between "innately primed learning" and "data-driven learning" on the one hand and supervised and unsupervised machine learning on the other.
    I liked how this article broke down very clearly, almost like a logical proof, what would be necessary in order to prove, or disprove, the APS.

    ReplyDelete
    Replies
    1. This comment has been removed by the author.

      Delete
    2. I agree with the first part of what you wrote. Even if we did empirical testing on infants to see what they know or not, we would not be able to tell the source of that information: is it data-driven, or is it in their heads already? It could be either. So, as Pullum says, the most beneficial choice would be to follow them from the moment they are born and see what they learn. We have to look at the other side, the parents' side, to find out whether they provide enough positive and negative evidence. However, following their progress every step of the way would be impossible. Thus, I wonder if we will ever get an answer to the question of how we know universal grammar, since scientific testing seems impossible. Maybe if we were to build a T4 AI we would know, as we would be the ones raising it and could see which aspects of the environment affect what.

      Delete
  17. The article attempts to explore whether children can learn the rules of Universal Grammar from the data available to them when they are learning language. Children can learn languages in two ways: (1) innately primed learning, from inborn linguistic information, and (2) data-driven learning, from positive or negative experiences. Pullum argues that when children learn the rules of UG, everything is "positive data" – members of the UG category – and this is all children hear and say at a young age. Since negative data are lacking, I'm confused as to why UG is learnable and learned if there is no feedback mechanism. A positive/negative mechanism would allow the brain to learn the rules that generate UG, but Pullum seems to argue otherwise?

    ReplyDelete
    Replies
    1. I guess UG is inborn, while language itself is learned by, e.g., instruction, supervised learning, etc. As Stevan replied to me in 8a, our disposition to learn language and UG is evolved.

      Delete
    2. That is probably the question the field of linguistics is most curious about! Why is UG learnable and learned with only positive evidence and no negative feedback… The theory out there is that UG is inborn, as Zhao said. I also have a lot of trouble grasping how such a complex phenomenon cannot be explained by data-driven information. Could it be that certain areas in the brain devoted to language are built in such a way as to come to the same conclusions about language every time? If we were to think about the language centres in our brain as a separate intelligence centre, then, given that those areas are intact, I guess there are certain parameters that those areas force us to stay constrained within. How it manages to do that in detail, I do not know.

      Delete
  18. This paper revisits a few themes present in Steven Pinker's paper on language acquisition, namely positive and negative evidence. Pullum places emphasis on stimulus poverty, which involves learning facts about a language from an environment that lacks sufficient crucial positive evidence for successful learning to occur. This supports the idea of language being innate, and the paper even includes certain properties that go along with this account. Among others: children learn languages fairly quickly, and they always succeed at language learning, arriving at theories highly underdetermined by the data they are given by their speech communities.

    However, there is a gap concerning how children learn to assume that something is ungrammatical. In an environment where there is only information about what is grammatical, how can children learn or theorize about what is not permitted grammatically? Children also somehow learn to overcome overgeneralizations, the classic example being the retreat from the overgeneralized "breaked" to "broke." The information available to the child is fairly limited, as he hears only a random subset of sentences. Yet children learn their first languages without explicit instruction and can produce an infinite number of sentences given a finite set of inputs. Chomsky's theory of Universal Grammar supports the APS, suggesting that language properties are innate and only partially shaped by the environment. Somehow children can tease apart these language restrictions, utilizing grammar in a correct and systematic fashion. What further evidence is needed to support the APS? What other observations can we look for to support these claims?

    ReplyDelete
  19. The article discussed, at length, counter-arguments against supporters of the poverty of the stimulus, and attempted to support these with a plethora of examples. What is a bit muddled is that I don't believe many of the examples Pullum & Scholz employed were relevant to the argument. For example, when they discuss attention and how important it is to know what the child is paying attention to during language acquisition… this seems to me unrelated to arguments against, or even for, the poverty of the stimulus, as attention applies more to explicit learning, such as learning through instruction; it does not seem relevant to implicit learning (of grammar) and doesn’t bring us any closer to resolving any UG-related problems.

    It took me longer than I would like to admit to get through this article, because once I read some of it and realized that Pullum had unfortunately made the same conflation of UG and non-UG grammar as Pinker did in the 9a reading, my motivation to finish the paper went out the window. Whereas Pinker didn’t address the problem of the poverty of the stimulus in the previous reading, Pullum in this article seems to minimize the importance of the lack of negative evidence, and notably mixes up negative evidence with other irrelevant issues in his examples.

    Despite all the language examples and effort, Pullum didn’t really provide UG violations or anything relevant to negative evidence for UG. Ultimately, he didn’t successfully refute the argument from the poverty of the stimulus – bringing us right back to the notion that UG must be innate, because its rules are not learnable from the all-positive data available to children.

    ReplyDelete
  20. I’m reminded of when, at the start of classes, Dr. Harnad broke down all of our definitions of consciousness/information/etc. Having very little experience in linguistics (none actually), I found this piece particularly technical and reading through it was slow going. I very much appreciated how Pullum took a concept so cemented in the field (APS) and tore down what was weakly supporting it all along – questioning and encouraging linguists to address the problems he writes about in this paper. Let’s just hope that there’s better proof of existence for CogSci than the APS, as he states in his conclusion.

    ReplyDelete

  21. In 2.1, the authors list facts about language acquisition that supposedly support linguistic nativism. I personally struggle with the first, Reliability ["Children always succeed at language learning"]. We can simply look at the many children on the autism spectrum who don’t develop language by themselves. They can still walk, eat, reach, play and develop other skills perfectly, while speech will only develop through intensive interventions and therapies.
    Second, I struggle with Convergence ["Children end up with systems that are so similar to those of others in the same speech community"]. Indeed, setting the UG rules aside, the languages spoken by diverse cultures are far from similar most of the time. Some have no count numbers, others have no tense, or no pronouns, and most vary in their syntactic structures.

    ReplyDelete
    Replies
    1. On your first point, I think the author's point was not that "all children who have ever lived have succeeded at language learning," but that *most* children (the vast majority) succeed at language learning. In other words, if you took the typical, average child, that child would have succeeded at language learning. I think when we're talking about something that concerns people broadly, it makes sense to investigate the most 'typical' example – that which concerns the vast majority of people.

      On your second point, I think the key term here is "in the same speech community." I don't think the author is claiming that all children speak the same or that all languages have the same syntactic structure, but that children in a community of people who speak the same language or dialect will end up using a language system extremely similar to that of others in their community.

      Delete
  22. I have a question about the poverty of the stimulus (PoS). I just want to double check: ordinary grammar (OG) has positive + negative evidence, and there’s no PoS. For universal grammar (UG), there’s negative evidence, but is there positive evidence? And there’s no PoS for UG? Thank you!!!

    ReplyDelete
    Replies
    1. Since PoS means no negative evidence, there is no PoS for OG because there is both positive and negative evidence.

      For UG there is only positive evidence, no negative evidence. So there is PoS for UG. (Please make sure you understand this, Peihong, rather than just re-stating it, because the final exam tests what you understand, not whether you can repeat what I said!)

      Delete
  23. APS … b. If human infants acquire their first languages via data-driven learning then they can never learn anything for which they lack crucial evidence.

    One issue I have with this argument – something that was referred to in this article, but not delved into - is people's ability to recognize and repeat patterns. Could it be that some linguists are over-estimating what evidence is “crucial evidence” for a child to learn something about a language? Our ability to learn patterns means that we do not need to be directly exposed to something to know that it is correct in a certain circumstance. To give a very simple example – one could hear “Is the cat brown?” and “Is the dog white?” and “Is that person named Bob?”. If they want to ask if a book is long, they would say “Is the book long?” and wouldn’t be inclined to say “The book long is?” and would probably recognize it as incorrect even if they had never been exposed to the sentence “Is the book long?”.

    Something like this could explain why a child wouldn’t be inclined to say “Is the dog that in the corner is hungry?” Let’s say that they know “The dog in the corner is hungry.” turns into “Is the dog in the corner hungry?” and have been exposed to sentences like “The red dress that is on the dummy is beautiful. Is it for sale?” etc. They might ‘guess’ that the ‘correct’ way to form the question is sentence A, “Is the red dress that is on the dummy for sale?”, not sentence B, “Is the red dress that on the dummy is for sale?” Likewise, they might ‘guess’ that the dog question would be formulated “Is the dog that is in the corner hungry?” and not “Is the dog that in the corner is hungry?” even if they had never been exposed to a question of that type before. It seems that this does not require the child to have some “innate” knowledge of grammar – just an ability to recognize and repeat patterns that they hear in language.
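    The contrast this comment draws can be made explicit with a toy sketch. The point (standard in discussions of auxiliary fronting, though the code and names here are invented for illustration) is that a naive linear rule, "front the first is," agrees with the correct structure-sensitive rule on simple sentences but produces exactly the ungrammatical dog question above on complex ones; simple positive data alone therefore cannot distinguish the two hypotheses.

```python
# Naive, purely linear question-formation rule: move the first "is"
# to the front of the sentence. (Illustrative toy code; assumes a
# lowercase declarative sentence containing "is".)

def front_first_is(sentence):
    words = sentence.rstrip(".").split()
    i = words.index("is")            # first occurrence only
    rest = words[:i] + words[i + 1:]
    return "Is " + " ".join(rest) + "?"

print(front_first_is("the dog is hungry."))
# -> Is the dog hungry?            (correct)
print(front_first_is("the dog that is in the corner is hungry."))
# -> Is the dog that in the corner is hungry?   (ungrammatical)
```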

    ReplyDelete
    Replies
    1. Hi Kara,

      I have an almost nonexistent background in linguistics aside from what I've learned from the course, so my assessment may very well be simplistic and incorrect. In regards to what you brought up about an "ability to recognize and repeat patterns that they [children] hear in language" - wouldn't this ability presuppose a mechanism by which they pick up on these patterns, since the only patterns they hear are correct (positive) ones? In other words, without some innate linguistic aptitude, how would these patterns be picked up on?

      Delete
    2. But don't we naturally have an ability to recognize and repeat patterns, even if they're non-linguistic? For example, visual patterns, numerical patterns, puzzles, etc?

      My point is - Could a child catch on to certain things about language without having been exposed to it directly *and* without having an "innate knowledge" of it?

      In more technical terms, could language learning be more of a "domain general" capacity than a specific module - in other words, something that's the result of our advanced ability to recognize and repeat patterns, rather than our having an innate knowledge of certain parts of grammar?

      Not saying this IS the case. But it might be a possibility.

      Delete
  24. Pullum and Scholz offer some compelling counterarguments to APS, and it's possible that they are correct about every case they discuss. But, as they say, that still does not disprove APS. It may be that the investigation they suggest on data-driven learning (DDL) can only work on a much larger timescale than children actually need to master a language. DDL also plays an important role even according to APS, so some combination of innate mechanism and DDL is compatible with the article.

    They do, however, point out some valid shortcomings of current APS research, and propose clear methodologies to iron those out. APS is just too strong of an argument to let go.

    ReplyDelete
    Replies
    1. Pinker's points about the necessity of negative evidence still hold, and the authors do nothing to show that such negative evidence is available: only positive evidence. However, a more precise formulation of the requirements for learnability will definitely help strengthen APS by providing clearer lines of research.

      Delete
  25. Pullum and Scholz do an excellent job of breaking down the argument for the poverty of the stimulus into its basic premises to deductively conclude that POS lacks empirical evidence to support its claims. Unlike proponents of POS, who tend to make unfounded conjectures, P & S actually go through the necessary process to find evidence to support POS:

    1. Provide description of grammar or grammatical statement that is known (acquirendum);
    2. Provide experiences (or set of sentences = lacuna) to which a child need be exposed in order to learn the acquirendum;
    3. Provide explanation as to why exposure to lacuna is necessary to learn acquirendum (indispensability);
    4. Provide evidence that the child was never exposed to lacuna during language acquisition (inaccessibility);
    5. Provide evidence that the child does get to know the acquirendum.

    If it was concluded that the child was never exposed to instances of an utterance, then we would have evidence of POS; however, if it was shown that the child was exposed to instances of an utterance, then that would serve as evidence for data-driven learning. The article examines studies that have supposedly provided empirical evidence for POS; however, P & S show that children actually do have sufficient exposure to these linguistic rules and therefore, none of the studies suffice as evidence for POS. Nonetheless, P & S do not outright refute POS, since negative evidence is lacking (as compatible with UG). Instead, a higher demand must be placed on providing empirical evidence to support assertions, so the argument over poverty of stimulus does not turn into a dogma.

    ReplyDelete
  26. It is important that we are setting aside arguments based merely on cases in which it appears that children would need negative data (definite information that some word sequence is not grammatical) in order to learn some fact. The argument from the (presumed) fact that children get no (or very little) negative information from experience involves not so much stimulus poverty as stimulus absence.

    I understand that there are studies supporting the claim that children receive no negative feedback from parents during development. To what extent were these studies conducted? How many children were studied, and what was the scope of their developmental backgrounds? The POS argument seems to disregard the fact that other people typically interact with children, namely teachers, and I find it difficult to believe pre-school teachers did not provide corrective feedback on improper grammar usage. The absence of a specific word or phrase in a sentence could perhaps constitute negative feedback, but to what degree could this account for acquisition?

    ReplyDelete
  27. Many of Pullum and Scholz’s arguments are based on the assertion that children are most likely exposed to certain syntactic patterns that advocates of nativism claimed were “vanishingly rare.” This is used to discount the empirical research that has attempted to support the nativist theory of language acquisition. However, I believe that even if children are exposed to certain syntactic patterns, they are unlikely to master them on the first try; it would still take a certain number of exposures, on average, for a child to learn that something is grammatical. Therefore, I would propose a different sort of empirical design, one that doesn’t involve assuming a child has NEVER encountered a given speech pattern. Some other symbols (for instance, different coloured blocks) could be used as different classes of words (e.g., red blocks are verbs, green blocks are nouns, etc.). Additionally, bigger, hollow blocks could stand for more complex combinations of words (e.g., noun phrases). Although it may be impossible to know whether any given child has been exposed to a given phrase, it should be less difficult to come up with an average, based on the pattern’s overall prevalence in a language. If learning certain block patterns (which are just as complex as the syntactic patterns but convey no meaning) took significantly more exposures than learning the equivalent language patterns, it could be determined that children have some kind of prior disposition to using these rules in the context of language acquisition.

    ReplyDelete
    Replies
    1. Hi Emma,
      This actually sounds like a really cool idea for testing children's sequencing and systemizing knowledge (important for learning math, music, and organization), but I have a few issues with its relationship to linguistic research:
      --- How would you isolate a time frame? Children move remarkably fast through these acquisition periods, acquiring much of the complex syntax they need at <3 yrs old. You would have to test this within a very specific time frame, before the child has presumably learned the syntactic structure. The trouble is, there is considerable evidence that children have acquired more than they are capable of producing (they comprehend certain constructions before they use them themselves, like uses of prepositions such as on and under, long coordinated structures, etc.).
      --- Children are not given instruction on how to learn language, so how would you instruct a child to do your task? Presumably they don't know what "verb" or "noun" means, even though they know how to use them. And children are motivated to learn language; how would you motivate them in your task?
      --- Sequencing and systemizing knowledge is used in domain-general ways that may differ from the domain-specific usage in child language acquisition.
      --- You mention finding patterns that are "equivalent language patterns" to block patterns that have no semantics. I don't think this is even possible, given that semantics is a huge part of how syntax gets put into place in a language.

      Part of the reason UG is so hard to test is because it only surfaces via OG.

      Delete
  28. Without a background in linguistics, I found this paper quite heavy and difficult to understand. In section 4, the paper considers four examples of plausible cases with which linguists are thought to have provided support for the APS. In class, we learned the distinction between UG and OG. Here, I am not sure whether these four examples are using OG to argue about the poverty of the stimulus? The four examples in section 4 are not very convincing to me, because OG has both negative and positive evidence. The poverty of the stimulus, on the other hand, is a property of UG: you have the ability to express things verbally, but you won't say things like DSTYJKGD because they just do not make sense. To be honest, I think UG and consciousness are both things we cannot step outside of in order to investigate how they work. Here, as in other linguistics papers, a * in front of an example sentence marks it as ungrammatical (in terms of OG, I guess, not UG). I don't quite see how these examples can prove anything about UG.

    ReplyDelete
  29. It is difficult, if not impossible, to disprove the poverty of the stimulus argument; providing a complete account of all the linguistic data available to a child is too difficult a task. Pullum and Scholz, however, state that the poverty of the stimulus argument would be undermined if one could "identify a set of sentences such that if the learner had access to them, the claim of data driven-learning ... would be supported". They make, in my opinion, a convincing argument that a child’s linguistic environment is richer than previously granted. Thus, maybe the linguistic stimulus isn't so impoverished.

    However, I believe that disproving the APS does not delegitimize the innateness of grammar. Just because the child's environment is linguistically rich does not undermine the incredible feat of language acquisition. What internal mechanism allows a baby to pick up on the patterns present in such a rich linguistic environment?
    I am aware that Chomsky has updated his theory on multiple occasions. From my understanding, the most current theory is a minimalist approach in which innate linguistic competence has only those characteristics or computations that are absolutely necessary for language acquisition. My only critique is that categorizing the internal mechanism for language learning in such vague terms, like “UG,” “OG,” or a “language organ,” undermines the potential to figure out the innate processes that actually underlie language acquisition.

    ReplyDelete
  30. This is a more challenging part of the course to follow, as I have not taken any linguistics courses; however, a couple of observations.

    Chomsky’s APS says that children’s knowledge of UG must be innate because there is not enough information in their environment (i.e., not enough negative evidence) for it to be learnt. The authors do not agree, because, to my understanding, they do not see a logical relationship between “not enough information in the environment” and UG being “innate.” However, proving this by recording everything a child hears and says is, as Pullum agrees, absurd, and perhaps unethical. I understand why Pullum takes issue with the premise that “infants do in fact learn things for which they lack crucial evidence,” as there is no empirical proof that they “lack crucial evidence,” but due to the impossibility of testing this, an assumption has to be made.

    Another thought is about how Baldwinian evolution codes for capacities to learn language rather than language itself. With Baldwinian evolution, it would make sense if evolution selected for the syntactic capability of UG, rather than selecting for language itself. I’m not sure, however, about the selective advantage of UG over other syntactic forms.

    ReplyDelete
    Replies
    1. Hi Nimra,

      Right, I think that doing a scientific investigation of the APS is absurd. In order to get falsifiable evidence, you would need to actively attempt to show that a child would not learn language without environmental stimulation, which is obviously ethically impossible. It seems that we're meant to assume the null hypothesis without a way to demonstrate the contrary.

      With regards to your comment on Baldwinian evolution, I think it does make sense that evolution would select for UG capacities, but again that is still contingent on the assumption that the APS hypothesis is true. We can't use Baldwinian evolution as evidence because that commits the logical flaw of assuming its own validity to prove the hypothesis.

      Delete
  31. RE: innately primed learning vs. data-driven learning
    I understand the concept of innately primed learning, in that it entails having “inborn domain-specific linguistic information.” Drawing on the piece that discussed domain-specific face recognition, it seems as though this kind of learning is localized to a certain part of the brain; it is not domain-general, where that part of the brain would also serve other functions. Data-driven learning comes from the environment. In terms of the poverty of the stimulus argument, it seems as though innately primed learning goes hand in hand with data-driven learning. However, is it necessary to have innately primed learning in order to function, or can you function on data-driven learning alone?

    ReplyDelete
  32. One thing I haven't yet understood about the APS is what Chomsky actually means by 'innate'. What does it really mean to say that our propensity for language acquisition, and our propensity to glean propositions (UG), is innate?
    I've taken it to mean present at birth, but I figure it could also mean that it's impossible to learn, or that it's universal across our species independent of environment, or something that is passed down genetically in humans but not in non-human primates.
    Perhaps 'innate' was defined in class or in Pullum's article and I missed it. Could someone please explain?

    ReplyDelete
  33. Even though it is a popular opinion, claiming that children don’t have access to the stimulus required to know how to speak seems to be a bit of a reach, firstly because we truly do not know what a child experiences, especially in the very early stages.
    Referring back to the previous reading, we can tell when someone is speaking motherese because of prosody, for example. Before even knowing how to form sentences, children can understand prosodic cues: ‘[…] prosodic properties are perceptible in advance of knowing any syntax […]’. This suggests that prosodic mechanisms are at a higher level, yet we seem to overlook prosody. There are so many different ways of understanding the statement ‘I’m fine’ depending on whether you’re reading it, you’re hearing it from someone who stubbed their toe, or you’re hearing it from your significant other. Would prosody be evidence for or against UG, given that prosody is universal but aspects of prosody are inconsistent cross-linguistically? Though, the bigger question is how can you know what is learned if you don’t know what exactly isn’t learned?

    ReplyDelete
    Replies
    1. Hi Marcus, I think your discussion of prosody and how much of it is innate is extremely interesting. I know that phonetically (in speech production) there are sounds that are easier to produce; consequently, these sounds are present in most languages and are learnt earlier in development. Harder-to-produce sounds are present in fewer languages and are often among the last to be produced. The difference accounting for this might be genetic and the result of our vocal tract shapes, etc. For prosody, by comparison, there seem to be fewer theories on how it is acquired and how much of it is innate versus learnt. Additionally, the interesting thing about motherese is that even though the input to a developing child is elementary, the child's output is not in motherese, which offers some evidence for POS.

      Delete
  34. I have not studied linguistics academically and thus cannot say I have a strong background in the study of language, nor do I have a deep backlog of evidence to cite here other than the assigned readings. As such, I struggled a bit with this reading; however, even with my limited knowledge, the evidence for Universal Grammar seems pretty overwhelming.

    I understand the authors' caution against inferring innateness from the poverty of stimulus argument, which is fine, and it is always a useful exercise to be critical of such a one-dimensional rationale; however, it seems to me that linguistic nativism has more going for it.

    As an earlier paper cautioned, it can be dangerous to form hypotheses from a top-down perspective, i.e., "this is the mechanism because it evolutionarily 'makes sense'", so I am not putting all my eggs in this basket, but UG ought to be considered an affordance of sorts.
    It stands to reason that, if the development of language and linguistic ability was indeed evolutionarily selected, then the development of the "tool" of UG is as necessary as the development of the opposable thumb was for, well, tool use.

    ReplyDelete
  35. Similar to some of my classmates, I struggled with this article due to my lack of background in linguistics. From what I understand, P&S attempt to break down the argument for language nativism (that we are endowed with some predisposition to learning language) into its fundamental premises, and they examine what kind of evidence would be required to support the argument and how this work could be conducted. P&S state that advocates of the PoS should show that human children are equipped with mechanisms that have specific content aiding language acquisition, and should provide empirical support for this.

    “Our point is that the advocates of the APS-defenders of the claim that the child could not possibly be exposed to enough data to learn the generalization in a data-driven manner - must develop some explicit and quantitative answers to such questions if they wish to support their claim.”

    The suggestion, however, that in order to prove the PoS we would require a quantitative record of a child isolated from utterances seems problematic, because language is closely tied to many other aspects of cognition. Hypothetically (if it weren’t ethically heinous to do so), wouldn’t a child isolated enough that they never encountered an utterance also be developmentally stifled in other respects? How do you prove innateness in a way that isn't confounded by other developmental processes?

    ReplyDelete
    Replies
    1. There's no way to find this out without maltreatment, which, as you said, has other effects on a child. Also, since many of the theories on the innateness of language refer to a period during which language learning is most effective, looking at a child's language learning later in life wouldn't really compare. This way of investigating it is a complete non-starter, and we would have to find other ways of empirically testing it.

      Delete
  36. I think the idea that humans are innately primed to learn, rather than purely data-driven, helps refute the premise of computationalism. The paper seems to lean towards data-driven learning, which would hold that humans are just computing information they receive. While I know of no formal POS studies that show this is the case, a typical example that might illustrate it is second-language speakers (who make many mistakes or are at a lower "level" of the language) raising a child who speaks perfect English. (Anecdotally) I've never noticed that these children are impeded in any way by having less positive stimulus and more negative stimulus, and I think this could be a direction for studies to show that language learning is innate in children.

    Additionally, a question I have is that why can't an explanation be both forms of language learning interacting? I don't believe they're at odds with or refute each other, and it could help to explain why we learn languages so quickly and easily at a young age.

    ReplyDelete
    Replies
    1. Hi Nick,
      I don't know if I fully understand the link you are making between data-driven theories and computationalism, if only because I think the same comparison can be made about UG providing a whole set of pre-determined algorithms that will either be used or not (depending on the OG).

      Addressing your question about data-driven learning and UG interacting: I don't think the authors ever disputed this. The goal of the paper was to demonstrate that APS supporters haven't yet shown that UG is necessary.

      That being said, you still pose an interesting question. It doesn't seem clear to me that these approaches should be at odds with each other either. The acquisition of OG depends on UG being present according to the APS, but learning OG is obviously data-driven, and rare structures are still learned in either case.

      Delete
    2. What I was trying to say is that the idea that humans learn language solely from the data they receive is parallel to the theory that human cognition is merely computation. If language were like this, it would be much easier to program computers to learn different languages (as we know, we can teach them rule-based learning, but it doesn't end up as natural as human language, RE: Google Translate). On the other hand, the innateness theory of language holds that there's something separating cognition from computation.

      They never disputed it, but I think it's heavily implied that they are separate things and that one theory must be either right or wrong.

      Delete
  37. Before I started reading these articles about how we learn language, I thought that it had to be data-driven, meaning that we learn from our environment. The first article surprised me with some of its data; however, this article again supported my initial view. Although the arguments that Pinker made were compelling, as the second article states, he did not really address much empirical evidence, which is very hard to obtain when you think about it. As Pullum and Scholz state, you would have to follow an infant from the moment it was born until it is four years old. Maybe even before: there are studies indicating that newborn children prefer the vowels of a non-native language, as these are newer to them, suggesting that they have learned in the womb to distinguish between the sounds of their native and non-native languages. There are also studies showing that newborns’ heart rate goes down, indicating closer attention, when they hear the rhyme their mother read aloud three times a day while pregnant. These studies do point to the fact that infants start learning about some aspects of language even before they are born. The examples Pullum and Scholz gave seem quite compelling as well, showing that there might be more data out there for children to extract and learn from than we thought. It also made me question the studies done on children that purported to show there was no negative evidence, or not enough evidence, to support that their learning could be mostly data-driven.

    ReplyDelete
  38. If I have interpreted the thesis of this paper correctly, it is not that data-driven learning of language is correct, or that the poverty of the stimulus argument is incorrect; rather, the authors argue that the present evidence for the poverty of the stimulus argument is insufficient. Pullum and Scholz are attempting to counter the skepticism toward the data-driven hypothesis, because they believe it is an avenue of research that needs to be explored in order to understand how language is learned. They believe that there is more information in the environment than APS enthusiasts allow, and that much can be learned about language acquisition by further research on this hypothesis. I am curious how Pullum and Scholz would suggest designing experiments to further study the data-driven hypothesis of language acquisition. One thing that came to mind for me was previous research on vocabulary in children raised in poverty versus in professional families. There is solid evidence of a huge gap in the number of words heard per day between children living in poverty and those with professional parents. If the environment is less rich, and poorer children therefore learn fewer words and less about the structure of language, this could be an interesting application of the research done on language acquisition and the APS.

    ReplyDelete
    Replies
    1. I believe your second point was partly addressed in the Pinker article. Pinker demonstrated that "rural American English" and other similar "working-class" varieties would be considered different dialects by a linguist. While the vocabulary and manner of speaking may vary, he noted that they all have specific and complex grammatical rules that speakers abide by.

      I really like your summary of the article though! My main takeaways were the need for empirical testing of hypotheses and the introduction of healthy skepticism toward the poverty of stimulus argument.

      Delete
  39. I think this article provides an interesting contrast to Pinker’s paper. Pullum makes some important points arguing for stronger inquiry into and testing of the “poverty of stimulus” argument. I myself, not having previously taken a linguistics course, was aware of this argument and how it has been attributed to Chomsky. However, Pullum and Scholz are correct in arguing that linguists need to be critical and define precisely what they mean when they refer to the “poverty of the stimulus”. I also think they are right to call for empirical testing of this hypothesis (of course, this is difficult to do when there is no accepted definition).

    Re: “RELIABILITY: Children always succeed at language learning.”
    The point about the reliability property of language acquisition being used to support the poverty of stimulus argument was something I noticed in Pinker’s paper. I think it is a particularly weak argument for the innateness of language. How does the fact that children always succeed at language learning prove language is innate? For one, not all children successfully master language. Many people with disabilities are nonverbal, and disability can have both genetic and environmental origins. As well, children who are raised in extreme isolation do not develop language. Thus the claim that children always succeed at language learning is, first, inaccurate: most children master language. Second, we see examples of children who do not master language for reasons that have to do with their environment. You could also claim that this reliability property exists because almost all children are raised in environments that adequately facilitate language acquisition.

    Also, related to the innateness of language, I wonder how we can logically expect a child to learn language through instructive feedback. Instructive feedback is often given in the form of words, which presumably children would not understand unless they had already acquired a certain level of proficiency in language. This is not sufficient proof in support of the poverty of the stimulus argument, however. For example, given sufficient exposure to language, perhaps other mechanisms of learning can take place? Perhaps adults convey feedback to children in other ways, through tone of voice or nonverbal signals such as nodding. This seems plausible to me because it is also how we communicate with animals. For example, when my family first got our puppy, he quickly learned to respond to the tone of our voices and to specific gestures. He might not have understood the meaning of the words we were saying, but he certainly knew when he was being scolded vs. being praised, and learned accordingly. He would respond more to the intonation of our voices than to the actual words being spoken. However, Pinker demonstrates that young children have good comprehension of complex sentences before they are able to produce such complex phrases themselves. How can children learn such complexity so quickly using only environmental cues? Clearly an infant comprehends and processes language differently than a puppy, but why? What is it that allows humans to do this in contrast to all other animals? These questions remain unanswered for me and require further exploration.

    ReplyDelete
  40. This was a fascinating read for me. I knew that children don’t get negative evidence from their parents and that they are still able to figure out how to form grammatical syntactic structures properly, unless they suffer from some linguistic deficit, but I never really considered that one cannot ever learn the rules of UG. Why is it that all languages are UG-compliant? Even though we have rules for a language (grammar, structure, parameter settings), we don’t know if that’s all that can be included in language structure, and we also don’t know how UG is something we can all access as infants. There is no physical locus of UG in our brains. There is no rulebook (because we’ll never know the rules). It’s impossible to know, because we can’t ever explain UG or even say what it is; it just gives us the capacity to learn any language. It’s such a strange fact to realize and to really think about.

    ReplyDelete
  41. A couple of points in the text really stood out to me:

    I. “Underdetermination” was mentioned as one of the properties of child’s accomplishments: “Children arrive at theories (grammars) that are highly underdetermined by the data.”
    How underdetermined are the UG rules that linguists eventually discover as theories?

    II. “What Hornstein and Lightfoot claim is that some of the sentences children never hear are crucial evidence for learning from experience. That is, some aspects of languages are known to speakers despite the fact that the relevant positive evidence, although it does exist, is not accessible to learners during the acquisition process, because of its rarity: linguists can in principle discover it, but children will not.”

    What I take from the “not accessible to learners during acquisition” part of this is that Hornstein and Lightfoot are using the idea that unsupervised learning could not have been the case because the positive evidence is too complicated for the child to make sense of. I think this is a less powerful argument than the view that no categorization mechanism we know of could have produced the category of UG-compliant sentences without negative evidence. Lack of negative evidence not only explains why unsupervised learning would not have worked, but also why supervised learning, instruction, or in fact any kind of learning would not have been possible.
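    The no-negative-evidence point above can be made concrete with a toy sketch (my own illustration, not from the article; the two string sets below are hypothetical stand-ins for grammars). Given only positive examples, a learner can never tell a correct grammar apart from a strictly larger, overgeneral one, because every positive example is consistent with both:

```python
# Toy illustration of the "no negative evidence" problem:
# two hypothetical string sets standing in for grammars.
target = {"ab", "aabb", "aaabbb"}          # the "correct" language
overgeneral = target | {"ba", "bbaa"}      # a superset hypothesis

positive_evidence = ["ab", "aabb", "aaabbb"]  # all the learner ever hears

# Every positive example is consistent with BOTH hypotheses, so
# positive data alone can never favor the target over the superset;
# only a negative example (e.g. "ba" marked ungrammatical) could.
consistent_with_target = all(s in target for s in positive_evidence)
consistent_with_overgeneral = all(s in overgeneral for s in positive_evidence)

print(consistent_with_target, consistent_with_overgeneral)  # True True
```

    This is only a schematic of the subset problem the commenter alludes to, not a model of actual acquisition.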

    ReplyDelete