Saturday, 2 January 2016

11b. Dror, I. & Harnad, S. (2009) Offloading Cognition onto Cognitive Technology

Dror, I. & Harnad, S. (2009) Offloading Cognition onto Cognitive Technology. In Dror & Harnad (Eds.): Cognition Distributed: How Cognitive Technology Extends Our Minds. Amsterdam: John Benjamins.


"Cognizing" (e.g., thinking, understanding, and knowing) is a mental state. Systems without mental states, such as cognitive technology, can sometimes contribute to human cognition, but that does not make them cognizers. Cognizers can offload some of their cognitive functions onto cognitive technology, thereby extending their performance capacity beyond the limits of their own brain power. Language itself is a form of cognitive technology that allows cognizers to offload some of their cognitive functions onto the brains of other cognizers. Language also extends cognizers' individual and joint performance powers, distributing the load through interactive and collaborative cognition. Reading, writing, print, telecommunications and computing further extend cognizers' capacities. And now the web, with its network of cognizers, digital databases and software agents, all accessible anytime, anywhere, has become our “Cognitive Commons,” in which distributed cognizers and cognitive technology can interoperate globally with a speed, scope and degree of interactivity inconceivable through local individual cognition alone. And as with language, the cognitive tool par excellence, such technological changes are not merely instrumental and quantitative: they can have profound effects on how we think and encode information, on how we communicate with one another, on our mental states, and on our very nature. 

(11b. Comment Overflow) (50+)

**X1. Chalmers (2011) "A Computational Foundation for the Study of Cognition"

Chalmers, D.J. (2011) "A Computational Foundation for the Study of Cognition". Journal of Cognitive Science 12: 323-357.

[This is for grad students taking the course.]



Computation is central to the foundations of modern cognitive science, but its role is controversial. Questions about computation abound: What is it for a physical system to implement a computation? Is computation sufficient for thought? What is the role of computation in a theory of cognition? What is the relation between different sorts of computational theory, such as connectionism and symbolic computation? In this paper I develop a systematic framework that addresses all of these questions.

Justifying the role of computation requires analysis of implementation, the nexus between abstract computations and concrete physical systems. I give such an analysis, based on the idea that a system implements a computation if the causal structure of the system mirrors the formal structure of the computation. This account can be used to justify the central commitments of artificial intelligence and computational cognitive science: the thesis of computational sufficiency, which holds that the right kind of computational structure suffices for the possession of a mind, and the thesis of computational explanation, which holds that computation provides a general framework for the explanation of cognitive processes. The theses are consequences of the facts that (a) computation can specify general patterns of causal organization, and (b) mentality is an organizational invariant, rooted in such patterns. Along the way I answer various challenges to the computationalist position, such as those put forward by Searle. I close by advocating a kind of minimal computationalism, compatible with a very wide variety of empirical approaches to the mind. This allows computation to serve as a true foundation for cognitive science.
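Chalmers's mirroring account of implementation can be made concrete for the simplest case, a finite-state automaton (the paper itself develops the account for richer formalisms, combinatorial-state automata). Everything below (the toy dynamics, the state names, the implements check) is an invented illustration, not code from the paper:

```python
# A minimal sketch of the mirroring condition for implementation,
# reduced to a two-state finite-state automaton. All names and the
# toy dynamics are assumptions made up for this illustration.

# Formal computation: an FSA that flips state on every step.
formal_transition = {"A": "B", "B": "A"}

# Toy "physical system": four micro-states with fixed dynamics.
physical_dynamics = {"p1": "p3", "p2": "p4", "p3": "p2", "p4": "p1"}

# Candidate interpretation: group micro-states into formal states.
interpretation = {"p1": "A", "p2": "A", "p3": "B", "p4": "B"}

def implements(dynamics, transition, mapping):
    """True iff, for every physical state s, the mapped physical evolution
    agrees with the formal one: mapping(dynamics(s)) == transition(mapping(s))."""
    return all(mapping[dynamics[s]] == transition[mapping[s]] for s in dynamics)

print(implements(physical_dynamics, formal_transition, interpretation))  # True
```

The point of the causal (rather than merely interpretive) reading is that the dynamics table stands for the system's actual counterfactual-supporting transitions, so not every physical system can be mapped onto every computation.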

(**X1. Comment Overflow) (50+)

**X2. Harnad (2012) "The Causal Topography of Cognition"

Harnad, Stevan (2012) "The Causal Topography of Cognition". Journal of Cognitive Science 13(2): 181-196. [Commentary on: Chalmers, David: "A Computational Foundation for the Study of Cognition"]

[This is for grad students taking the course.]

The causal structure of cognition can be simulated but not implemented computationally, just as the causal structure of a furnace can be simulated but not implemented computationally. Heating is a dynamical property, not a computational one. A computational simulation of a furnace cannot heat a real house (only a simulated house). It lacks the essential causal property of a furnace. This is obvious with computational furnaces. The only thing that allows us even to imagine that it is otherwise in the case of computational cognition is the fact that cognizing, unlike heating, is invisible (to everyone except the cognizer). Chalmers’s “Dancing Qualia” Argument is hence invalid: Even if there could be a computational model of cognition that was behaviorally indistinguishable from a real, feeling cognizer, it would still be true that if, like heat, feeling is a dynamical property of the brain, a flip-flop from the presence to the absence of feeling would be undetectable anywhere along Chalmers’s hypothetical component-swapping continuum from a human cognizer to a computational cognizer -- undetectable to everyone except the cognizer. But that would only be because the cognizer was locked into being incapable of doing anything to settle the matter, simply because of Chalmers’s premise of input/output indistinguishability. That is not a demonstration that cognition is computation; it is just the demonstration that you get out of a premise what you put into it. But even if the causal topography of feeling, hence of cognizing, is dynamic rather than just computational, the problem of explaining the causal role played by feeling itself -- how and why we feel -- in the generation of our behavioral capacity -- how and why we can do what we can do -- will remain a “hard” (and perhaps insoluble) problem.
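Harnad's furnace point can be put in a few lines of toy code (the model, class name, and numbers below are invented for illustration). The program computes what a furnace would do; its only physical effect is flipping bits (plus the incidental waste heat of whatever hardware runs it), and the real room stays cold:

```python
# A toy furnace simulation, with invented numbers: it tracks a simulated
# temperature but heats nothing outside its own variables.

class SimulatedFurnace:
    """Crude numerical model of a furnace warming a room."""
    def __init__(self, room_temp_c=10.0, power_kw=5.0):
        self.room_temp_c = room_temp_c   # simulated temperature only
        self.power_kw = power_kw

    def step(self, minutes=1.0):
        # Assumed toy model: temperature rises in proportion to power.
        self.room_temp_c += 0.02 * self.power_kw * minutes
        return self.room_temp_c

furnace = SimulatedFurnace()
for _ in range(60):
    furnace.step()
print(f"Simulated room after an hour: {furnace.room_temp_c:.1f} C")
# The actual room the program runs in is no warmer for it.
```

Whether cognition is like heating in this respect (a substrate-bound dynamical property that a simulation describes without instantiating) is precisely what is in dispute between Harnad and Chalmers below.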

(**X2. Comment Overflow) (50+)

**X3. Chalmers Response "The Varieties of Computation: A Reply to Commentators"

Chalmers, D.J. (2012) "The Varieties of Computation: A Reply to Commentators". Journal of Cognitive Science 13: 211-248.

Publication of this symposium, almost twenty years after writing the paper, has encouraged me to dig into my records to determine the article’s history. The roots of the article lie in a lengthy e-mail discussion on the topic of “What is Computation”, organized by Stevan Harnad in 1992. I was a graduate student in Doug Hofstadter’s AI laboratory at Indiana University at that point and vigorously advocated what I took to be a computationalist position against skeptics. Harnad suggested that the various participants in that discussion write up “position papers” to be considered for publication in the journal Minds and Machines. I wrote a first draft of the article in December 1992 and revised it after reviewer comments in March 1993. I decided to send a much shorter article on implementation to Minds and Machines and to submit a further revised version of the full article to Behavioral and Brain Sciences in April 1994. I received encouraging reports from BBS later that year, but for some reason (perhaps because I was finishing a book and then moving from St. Louis to Santa Cruz) I never revised or resubmitted the article. It was the early days of the web, and perhaps I had the idea that web publication was almost as good as journal publication. 

5 Computational sufficiency 

We now come to issues that connect computation and cognition. The first key thesis here is the thesis of computational sufficiency, which says that there is a class of computations such that implementing those computations suffices to have a mind; and likewise, that for many specific mental states there is a class of computations such that implementing those computations suffices to have those mental states. Among the commentators, Harnad and Shagrir take issue with this thesis.

Harnad makes the familiar analogy with flying, digestion, and gravitation, noting that computer simulations of these do not fly or digest or exert the relevant gravitational attraction. His diagnosis is that what matters to flying (and so on) is causal structure and that what computation gives is just formal structure (one which can be interpreted however one likes). I think this misses the key point of the paper, though: that although abstract computations have formal structure, implementations of computations are constrained to have genuine causal structure, with components pushing other components around.

The causal constraints involved in computation concern what I call causal organization or causal topology, which is a matter of the pattern of causal interactions between components. In this sense, even flying and digestion have a causal organization. It is just that having that causal organization does not suffice for digestion. Rather, what matters for digestion is the specific biological nature of the components. One might allow that there is a sense of “causal structure” (the one that Harnad uses) where this specific nature is part of the causal structure. But there is also the more neutral notion of causal organization where it is not. The key point is that where flying and digestion are concerned, these are not organizational invariants (shared by any system with the same causal organization), so they will also not be shared by relevant computational implementations.
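The contrast can be dramatized in toy code (the classes, numbers, and the mass property below are all invented; this is not Chalmers's formal apparatus). Two rings of components share one causal topology, so any property defined over that organization comes out identical, while a substrate-bound quantity, standing in for digestion or heating, comes apart:

```python
# Two systems with identical causal organization but different substrates.
# All classes and numbers are invented for this illustration.

class Component:
    def __init__(self, substrate_mass_g):
        self.substrate_mass_g = substrate_mass_g  # substrate-bound property
        self.active = False                       # organization-level state

def run(ring, steps):
    """Propagate a single activation around a ring and record the firing
    pattern: the causal organization in action."""
    ring[0].active = True
    pattern = []
    for _ in range(steps):
        pattern.append([c.active for c in ring])
        snapshot = [c.active for c in ring]
        for i, c in enumerate(ring):
            c.active = snapshot[i - 1]  # each unit is driven by its predecessor
    return pattern

biological = [Component(substrate_mass_g=1.25) for _ in range(3)]
silicon = [Component(substrate_mass_g=0.25) for _ in range(3)]

# Same causal topology, so the organization-level behavior is identical...
assert run(biological, 6) == run(silicon, 6)

# ...while substrate-bound quantities (the analogue of digestion) differ.
print(sum(c.substrate_mass_g for c in biological))  # 3.75
print(sum(c.substrate_mass_g for c in silicon))     # 0.75
```

Chalmers's claim is that cognition and consciousness pattern with the organization-level properties; Harnad's is that feeling, like heating, patterns with the substrate-bound ones.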

In the target article I argue that cognition (and especially consciousness) differs from flying and digestion precisely in that it is an organizational invariant, one shared by any system with the same (fine-grained) causal organization. Harnad appears to think that I only get away with saying this because cognition is an “invisible” property, undetectable to anyone but the cognizer. Because of this, observers cannot see where it is present or absent—so it is less obvious to us that cognition is absent from simulated systems than that flying is absent. But Harnad nevertheless thinks it is absent and for much the same reasons.

Here I think he does not really come to grips with my fading and dancing qualia arguments, treating these as arguments about what is observable from the third-person perspective, when really these are arguments about what is observable by the cognizer from the first-person perspective. The key point is that if consciousness is not an organizational invariant, there will be cases in which the subject switches from one conscious state to another conscious state (one that is radically different in many cases) without noticing at all. That is, the subject will not form any judgment (where judgments can be construed either third-personally or first-personally) that the states have changed. I do not say that this is logically impossible, but I think that it is much less plausible than the alternative.



Harnad does not address the conscious cognizer’s point of view in this case at all. He addresses only the case of switching back and forth between a conscious being and a zombie; but the case of a conscious subject switching back and forth between radically different conscious states without noticing poses a much greater challenge. Perhaps Harnad is willing to bite the bullet that these changes would go unnoticed even by the cognizer in these cases, but making that case requires more than he has said here. In the absence of support for such a claim, I think there remains a prima facie (if not entirely conclusive) case that consciousness is an organizational invariant.