Chalmers (2011) attempts to defend the following two principles:

Computational sufficiency: the right kind of computational structure suffices for the possession of a mind, and for the possession of a wide variety of mental properties.1

Computational explanation: computation provides a general framework for the explanation of cognitive processes and of behavior.
Together these principles are intended to constitute the foundational role of computation in artificial intelligence and cognitive science. I am inclined to agree with both principles, as Chalmers formulates them; my concern is with his argument for computational sufficiency, which I believe rests on two questionable moves.
First, Chalmers claims that mental properties are organizationally invariant with respect to the causal topology of a physical system and that a causal topology can be “described” or “abstracted” as having the kind of grouping of physical states required by his account of implementation. However, Chalmers establishes the existence of such groupings only in the trivial sense that any physical system can be modeled, to some level of approximation, by some abstract computation; he does not establish that such groupings are explanatory of the system’s behavior. Second, his definitions of implementation require that a physical system and the computation it implements match in structural complexity, a requirement I question in the final section.
1 E.g. Chalmers (2011, p.325) states computational sufficiency as follows: “the right kind of computational structure suffices for the possession of a mind”. However, “computational structure” as Chalmers uses it here is ambiguous between different senses of “computation”, as shall become clear below.
2. From Implementation to Computational Sufficiency
As Chalmers points out, the foundational role of computation in cognitive science and artificial intelligence requires clarifying the nature of computation.2 While the mathematical theory of computation is well-characterized (i.e. “abstract computation”), Chalmers claims that the interest in cognitive science and AI is with physical systems. What is required is an account of the bridge between abstract computation and certain physical systems, such that a physical system “realizes” an abstract computation, and an abstract computation “describes” the behavior of the physical computing system (i.e. the “concrete computations” of the system). Stated in this way, an account of implementation is of central importance: it must be in place before the foundational role of computation in cognitive science and AI can be articulated and defended.
Chalmers seeks to clarify the nature of computation in order to articulate and defend the foundational role of computation in cognitive science and AI. However, it is worth noting that there are two different notions of computation that are relevant here: abstract computation, as a mathematical formalism, and concrete computation, as a kind of physical causal process carried out by, for example, digital computers. One view is that cognitive science and AI are primarily interested in concrete computation. Evidence that Chalmers might be sympathetic to this view comes from two sources: first, his claim that these fields are interested in physical systems; and second, his claim that an account of implementation must be provided prior to justifying the foundational role of computation in cognitive science and AI. This seems to contrast with the influential view that it is primarily abstract computation that is central to cognitive science and AI, and that how physical systems implement these computations is less central to explaining cognitive phenomena (e.g. Pylyshyn, 1984). Chalmers’ usage of “computation” seems to be restricted to abstract computation, though I will return to the topic of concrete computation in the concluding section. We now turn to Chalmers’ account of implementation, and his argument for CS.
Informally, a physical system implements a given computation when the causal structure of the system mirrors the formal structure of the computation: that is, when there is a grouping of the system’s physical states into state-types and a one-to-one mapping from the formal states of the computation onto those state-types, such that formal states related by the computation’s state-transition relations are mapped onto physical state-types related by corresponding causal state-transition relations.
Call this Chalmers’ informal definition of implementation.
Chalmers’ preferred formalism for stating implementation conditions is the combinatorial state automaton (CSA). A CSA is an automaton whose states are vectors of elements, each element taking one of a finite set of values, and whose state-transition rules specify how the value of each element evolves as a function of the previous state-vector (and any inputs). A physical system implements a CSA if there is a decomposition of the system into parts, and a mapping from the states of those parts onto the elements of the CSA’s state-vectors, such that the causal state-transition relations among the physical states mirror the formal state-transition rules of the CSA. Call this the CSA definition of implementation.
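The CSA formalism is easy to make concrete. What follows is a minimal sketch in Python, assuming integer-valued elements and a deterministic, input-free transition rule; the CSA class and the rule below are illustrative toys, not Chalmers’ own notation.

```python
# A minimal sketch of a combinatorial state automaton (CSA); the
# representation is an assumption for illustration, not Chalmers' formalism.
from typing import Callable, Tuple

State = Tuple[int, ...]  # a CSA state is a vector of element values

class CSA:
    def __init__(self, num_elements: int,
                 transition: Callable[[State], State]):
        # The transition rule maps whole state-vectors to whole state-vectors;
        # each new element value may depend on the entire previous vector.
        self.num_elements = num_elements
        self.transition = transition

    def step(self, state: State) -> State:
        assert len(state) == self.num_elements
        return self.transition(state)

# A toy 3-element CSA: each element's next value depends on the other two,
# so the "pattern of interaction" among elements is encoded in the rule.
def rule(s: State) -> State:
    a, b, c = s
    return ((b + c) % 2, (a + c) % 2, (a + b) % 2)

csa = CSA(3, rule)
print(csa.step((1, 0, 0)))  # -> (0, 1, 1)
```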
Chalmers holds that the CSA definition can be utilized to account for other formalisms. For example, “to develop an account of the implementation-conditions for a Turing machine, say, we need only” redescribe the Turing machine as a CSA: the implementation conditions for the corresponding CSA then serve as implementation conditions for the Turing machine.
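To illustrate the kind of redescription at issue, here is a sketch, under the assumption that the machine’s tape is truncated to a finite length, of how a Turing machine configuration can be encoded as a CSA state-vector; the transition table delta and the encoding are hypothetical toys, not a general translation scheme.

```python
# A sketch of "vectorizing" a finite-tape Turing machine as a CSA state:
# vec = (internal_state, head_position, cell_0, ..., cell_{TAPE_LEN-1}).
# The machine below is invented for illustration.
TAPE_LEN = 8
delta = {("q0", 0): ("q0", 1, +1),   # (state, symbol) -> (state', symbol', move)
         ("q0", 1): ("halt", 1, +1)}

def tm_as_csa_step(vec):
    state, head, tape = vec[0], vec[1], list(vec[2:])
    if state == "halt" or not (0 <= head < TAPE_LEN):
        return vec  # halted (or ran off the finite tape): a fixed point
    new_state, new_sym, move = delta[(state, tape[head])]
    tape[head] = new_sym
    return (new_state, head + move, *tape)

v = ("q0", 0, 0, 0, 1, 0, 0, 0, 0, 0)
for _ in range(4):
    v = tm_as_csa_step(v)
print(v)  # ('halt', 3, 1, 1, 1, 0, 0, 0, 0, 0)
```

Note that this yields only a finite approximation of the machine, which is precisely the qualification raised in Section 3 about the inference from “describing” a Turing machine as a CSA to giving its implementation conditions.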
Chalmers states that causal organization is the “nexus” between computation and the mind, for computation provides an “abstract specification of the causal organization of a system,” and “if cognitive systems have their mental properties in virtue of their causal organization, and if that causal organization” can be specified computationally, then computational sufficiency follows.
First, he introduces the idea of the “causal topology” of a physical system, which “represents the abstract causal organization of the system: that is, the pattern of interaction among parts of a system, abstracted away from the make-up of individual parts and from the way the causal connections are implemented”.3
Second, Chalmers defines a property as organizationally invariant if it is invariant with respect to causal topology: any change to the system that preserves its causal topology preserves the property. Mental properties, Chalmers argues, are organizationally invariant in this sense.
Given that mental properties are organizationally invariant, the final step for establishing CS simply depends on showing that organizationally invariant properties are “fixed by some computational structure” (2011, p.343). Now organizationally invariant properties depend for their instantiation on a pattern of causal interaction—a causal topology. And we can “straightforwardly abstract” a causal topology into a CSA description since “the parts of the system will correspond to elements of the CSA state-vector, and patterns of interaction will be expressed in the state-transition rules” (2011, p.343). Chalmers goes on to state that the CSA formalism provides a “perfect” formalization of the idea of a causal topology. Support for this claim comes from the fact that “A CSA description specifies a division of a system into parts, a space of states for each part, and a pattern of interaction between these states. This is precisely what is constitutive of causal topology” (2011, p.344, my italics). However, this should be unsurprising given how “causal topology” was defined. As mentioned above, according to Chalmers, causal topology is an abstract “representation” and a computation is an abstract “specification” of the causal organization of a system.
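The abstraction Chalmers describes can be given a toy rendering: parts of a system become elements of the state-vector, and the pattern of interaction among parts becomes the state-transition rule. In the sketch below the parts, the influence relation, and the local dynamics are all invented for illustration.

```python
# A sketch of abstracting a causal topology into a CSA description: each
# part becomes one element of the state-vector; the influence relation and
# local dynamics below are assumptions for illustration.
parts = ["A", "B", "C"]
idx = {p: i for i, p in enumerate(parts)}

# Which parts causally affect which: the "pattern of interaction".
influences = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}

def local_update(inputs):
    # Toy local dynamics: a part switches on iff any of its influencers is on.
    return 1 if any(inputs) else 0

def csa_rule(state):
    # The CSA transition rule falls out of the parts plus their interactions.
    return tuple(
        local_update([state[idx[q]] for q in influences[p]])
        for p in parts
    )

print(csa_rule((1, 0, 0)))  # -> (0, 1, 0): only B is influenced by A
```

The point to notice is how little is required: any division of a system into parts with state spaces and interactions yields some CSA description, which is why the “perfect fit” between CSAs and causal topologies should be unsurprising.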
According to Chalmers, this definition of implementation, when coupled with the notions of a causal topology and organizationally invariant properties, establishes CS, since “The fine-grained causal topology of a brain can be specified as a CSA. [And] Any implementation of that CSA will share that causal topology, and therefore will share organizationally invariant mental properties that arise from the brain” (2011, p.344). If one holds that CSAs provide “perfect” formalizations of causal topologies, then it would seem to follow that even the fine-grained structure of the brain can be specified using the CSA formalism. However, it strikes me as a bold empirical hypothesis that the causal structure of the brain can be specified as a CSA in any interesting way. This leads me to believe that something has gone amiss in Chalmers’ argument.
2 I interpret Chalmers as primarily being concerned with digital (or rather, discrete) computation, as he later claims that such a framework can capture all important aspects of cognition (2011, p.351).
3 For a related concern about how Chalmers characterizes causal topology, see Piccinini 2010, p.275, n.8.
3. Computational Sufficiency and Computational Description
I believe that Chalmers’ argument for CS depends on an undefended assumption regarding the nature of causal topologies. However, before illustrating why I think this is so, I would like to point out that Chalmers seems to run together related, but distinct, claims all under the banner of “computational sufficiency”. Chalmers seems to define CS in at least the following ways:

CS1: The right kind of computational structure suffices for the possession of a mind.

CS2: The implementation of the right kind of abstract computation suffices for the possession of a mind.

CS3: The implementation of the right kind of CSA suffices for the possession of a mind.
CS1, which is Chalmers’ initial statement of CS, can be taken to refer to concrete computation, or is at least ambiguous about whether it refers to abstract or concrete “computational structure.” I find CS1 to be the most plausible of these principles, when interpreted in terms of concrete computation, and the most neutral (I return to this version of CS in the conclusion). However, I believe Chalmers intends the stronger, abstract readings expressed by CS2 and CS3. His argument for these principles can be reconstructed as a series of steps, (1)-(8), which move from claims about the generality of the CSA formalism to claims about the computational specification of causal topologies.
I take (1)-(3) to be intended to establish that the CSA formalism can capture all categories of abstract computation. And if this is correct, then establishing (8) from (4)-(7) will establish not only CS3 but CS2 as well.
I believe the above argument depends on making some problematic inferences, specifically from (2) to (3), and in defending (6). They are problematic because, if the first inference fails, then the CSA formalism cannot ground a general account of implementation, and if the second one fails, then (8) no longer follows. I take Chalmers’ discussion of descriptions and abstraction to be equivalent to the claim that there are groupings that satisfy the conditions for implementation stated above, or rather, that we “can” provide such groupings. The problematic inferences are these: first, the move from (2), the claim that any Turing machine can be described, to some level of approximation, as a CSA, to (3), the claim that the CSA formalism thereby yields implementation conditions for any abstract computation; and second, the defense of (6), which moves from the claim that a causal topology can be abstracted into a CSA description to the claim that the system implements that CSA.
The initial statements of implementation involved a mapping between an abstract computation and a grouping of a physical system’s states and relations. As stated above, I take Chalmers’ talk of “description” and “abstraction” to refer to groupings of physical systems, and that this is done independently of the mapping between a causal topology and the computation it purportedly implements. With this in mind, we can distinguish two kinds of computational description of a physical system.
One kind of computational description is such that an abstract computation provides a model of a physical system: the states of the physical system, and the transitions between them, can be mapped onto the states and state-transitions of the computation, to some level of approximation. Call these modeling descriptions.
The second kind of computational description is of a piece with how Chalmers understands computational explanation: the behavior of the physical system is explained by the computation it implements. Call these explanatory descriptions.
In making the above inferences, Chalmers at best argues for the existence of modeling descriptions. In the case of (2) to (3), there will be some way to vectorize a Turing Machine, but the fact that we can “describe” a Turing Machine, to some level of approximation, as a CSA, establishes (3) only in the trivial sense that if a physical system implements a Turing Machine (or rather, an approximation of a Turing Machine, with a finite tape), and some CSA models the Turing Machine, it therefore also models the physical system. Since part of Chalmers’ motivation for adopting CSAs as his primary formalism was that they seem to provide a more accurate description of physical systems, it would seem that his argument fails by his own standards. After all, a CSA and a Turing Machine might compute the same function in extension, but not in intension; that is, they might have identical mappings from inputs to outputs, but differ in terms of the actual operations they specify. I take it that such a difference matters when we ask whether a physical system implements some computation, for if the operations are distinct then a physical system might only be able to implement one or the other set of operations because of its causal topology. If this is right, then arguing for CS3 fails to suffice for establishing the more general CS2, since we lack a formalism that is appropriate for defining implementation in general. Likewise, regarding the defense of (6), in so far as there is a grouping of a system’s states that can be mapped onto a CSA, Chalmers has established only that the CSA models the system, not that the CSA description explains its behavior.
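The extension/intension distinction invoked here is easy to exhibit. The two procedures below are hypothetical illustrations: they compute the same function in extension, agreeing on every input, while specifying quite different operations.

```python
# Two procedures identical in extension (same input-output mapping) but
# distinct in intension (the operations they specify); both are invented
# examples, not drawn from Chalmers or the formalisms discussed above.

def sum_iterative(n: int) -> int:
    # Repeated addition: roughly n update steps.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_closed_form(n: int) -> int:
    # One multiplication and one halving: a very different operation profile.
    return n * (n + 1) // 2

# Extensionally equivalent on every tested input...
assert all(sum_iterative(n) == sum_closed_form(n) for n in range(100))
# ...yet a physical system whose causal topology supports the steps of one
# procedure need not thereby support the steps of the other.
```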
I do not think that Chalmers holds CS2 to be a trivial principle. Again, given that he endorses the principle of computational explanation, it is fair to conclude that he thinks the mind has a computational structure in a non-trivial sense. However, his argument provides no reason to believe that there is an explanatory grouping of the causal topology relevant to mental properties such that it implements CSAs. Perhaps the relevant causal topology of the brain is such that its fine-grained structure in fact consists of a number of discrete states, the internal structure of which can be explained as consisting of vectors with different discrete elements and values. But this is a strong empirical hypothesis, and given his resistance to burdening the computational foundations of cognitive science with such theses as symbolic computationalism (2011, pp.353-354), I do not see why defending CS2 should depend on such a hypothesis.
It seems that Chalmers argues for the existence of modeling descriptions in making the above inferences, but he assumes that he has established these as explanatory descriptions, which provide the groupings required for his claims about implementation, and hence CS2. A diagnosis for this conflation is that Chalmers fails to clearly distinguish between functionalism, as the thesis that mental states are to be individuated functionally by their inputs, outputs, and relation to other mental states (i.e. their functional role), and computationalism (or “computational functionalism”), the view that the relevant functions and functional roles are computational. Piccinini (2004) has argued that this conflation is a result of endorsing two claims: (i) that all physical systems compute (i.e. pancomputationalism), and (ii) that the functional analysis of mental states is a kind of computational description.
I take Chalmers’ commitment to the organizational invariance of mental states and properties as implying a commitment to some version of functionalism (cf. Chalmers’ [1996a, p.247] definition of functional organization). As noted earlier, his description of causal topology, which ostensibly also describes the functional organization of physical systems (in the case of systems for which a functional analysis is appropriate), is an abstract description of the causal organization of a physical system abstracting away from “implementation” details. Since every physical system has a causal topology, and since on his view any causal topology can be abstracted into a CSA description, Chalmers is clearly committed to pancomputationalism; and his description and discussion of causal topology suggest that he sees little difference between providing a functional analysis/description of causal topology and providing a computational description.
However, I think we should reject both assumptions if we want the foundational role of computation in AI and cognitive science to be non-trivial. In fact, I worry that Chalmers also trivializes computational explanation. Chalmers claims that computation provides an “ideal language” for characterizing the causal organization (i.e. causal topology) of a system, and from this, he claims, computational explanation is supposed to follow (2011, p.345). However, all that he seems to have established is that computation provides a general framework for describing the causal organization of any physical system whatsoever. And if every physical system can be given some computational description, then the claim that cognition, in particular, is computationally explainable threatens to become trivial as well.
4 These definitions have equivalent mapping conditions as mentioned earlier, if one assumes that the informal definition is, tacitly, assuming vectorized states. Hence, I state both of them here (cf. Scheutz, 2001).
5 For a defense and elaboration of this view see Piccinini, 2007b.
4. Structural Complexity and Implementation
Recall that Chalmers ultimately wants to defend the claim that those properties of a physical system responsible for the system implementing some set of computations are exactly the same properties in virtue of which the system has mental properties (2011, p.327). So far I have been critical of whether Chalmers’ argument establishes the claim that physical cognitive systems implement computations in the relevant sense, while remaining uncritical of his informal definition and CSA definition of implementation; that is, I have granted that if mental properties are organizationally invariant with respect to causal topologies, and these topologies indeed implement CSAs, then the above claim follows. In this last section, I want to raise a problem with these definitions, which calls into question the truth of the above claim.
Both informal and CSA definitions have in common the requirement (most explicitly in the informal definition) that there be a bijection (one-to-one and onto) between the formal states of the computation and the physical state-types of the system. A consequence of this requirement is that a physical system implements a computation only if the two match in structural complexity:6 the system must have exactly as many (grouped) state-types, standing in corresponding transition relations, as the computation has formal states. Yet it seems possible for a physical system to implement a computation that is structurally simpler than its own causal topology.
To account for such possible cases of implementation, Scheutz (2001) suggests7 that we reject the requirement of a bijection between states of the computation and state-types of the physical system, allowing instead mappings on which many physical state-types correspond to a single formal state. On such a revision, a system can implement a computation even when its causal topology is structurally more complex than the computation implemented.
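This relaxation can be made vivid with a sketch of the mirroring condition, assuming simple deterministic transition tables for both the grouped physical state-types and the formal automaton; all names and transition tables here are invented for illustration.

```python
# A toy check of the "mirroring" condition on implementation mappings.
# The physical system has four grouped state-types; the automaton has two.

phys_next = {"p1": "p2", "p2": "p3", "p3": "p4", "p4": "p1"}
formal_next = {"s0": "s1", "s1": "s0"}

def mirrors(mapping, phys_next, formal_next):
    # mapping: physical state-type -> formal state. Causal transitions among
    # physical state-types must map onto the formal state-transitions.
    return all(mapping[phys_next[p]] == formal_next[mapping[p]]
               for p in phys_next)

# A many-to-one mapping in the spirit of Scheutz's relaxation: two physical
# state-types per formal state. Mirroring is satisfied...
coarse = {"p1": "s0", "p2": "s1", "p3": "s0", "p4": "s1"}
print(mirrors(coarse, phys_next, formal_next))  # True

# ...but no bijection is possible here: four physical state-types cannot be
# mapped one-to-one and onto two formal states, so a definition requiring a
# bijection rules out implementation in this case by fiat.
print(len(set(coarse.values())) == len(coarse))  # False: not injective
```

The cost of the relaxation, as the next paragraph brings out, is that the implemented computation may now fail to reflect parts of the system’s causal topology.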
For example, if mental properties are invariant with respect to states of the causal topology that the implemented CSA does not mirror, then the CSA will not reflect that part of the causal topology in virtue of which the system possesses mentality. However, it is possible that the only aspects of a causal topology with respect to which mental properties are organizationally invariant are exactly those that the CSA does mirror; whether this is so is an open question, and nothing in Chalmers’ definitions settles it.
6 In this section I use “complexity” to mean “structural complexity,” or the number of state-types and state-transitions, for either the causal or computational structure of a physical system. This usage follows Scheutz (2001), and should be distinguished from the technical usage of “complexity” within computational complexity theory, where the term is used to denote the “difficulty” of different computational problems.
7 Scheutz’s (2001) suggested revision is to Chalmers’ (1994, 1996b) definitions of implementation, which utilize FSAs as the main formalism. However, his suggestion is equally relevant here.
Despite being critical of Chalmers’ argument for CS2, I think it can be mended. For Chalmers’ case to succeed, he must provide a further argument to show that: (i) the causal topology relevant to mentality in fact implements computations (given his definitions) and (ii) computations mirror those structural elements of causal topologies that are relevant to possessing a mind. However, as stated earlier, I find CS1 (when interpreted as a thesis about concrete computation) more plausible than CS2 and CS3 and more relevant to computation’s foundational role in cognitive science and AI. Of course, a defense of CS1 would require some account of concrete computation (i.e. computation by physical systems) and arguments for why it is the most important notion of computation for cognitive science and AI. While there is no space here to defend the latter claim, I will end with a brief sketch of how one might defend CS1.
One way to understand concrete computation is along the lines of the mechanistic account, according to which a physical system performs a computation just in case it is a functional mechanism that manipulates vehicles in accordance with rules that are sensitive to differences between vehicle properties. On this view, what makes a physical process computational is not that some abstract computation can be mapped onto it, but that the mechanism has the function of carrying out manipulations of this kind. A defense of CS1 would then be the claim that possessing the right kind of concrete, mechanistic computational structure suffices for the possession of a mind.
In conclusion, I do not think one needs to endorse the stronger claim that there is some abstract computation (or set of computations), the implementation of which is sufficient for mentality. We should not rule out the possibility that multiple systems might implement the same computation (equivalence of abstract computation) while possessing distinct causal topologies (distinctness of concrete computation), only some of which qualify as instantiating mental properties. Alternatively, we should not rule out the possibility that even concrete computational structure does not exhaustively describe the causal structure relevant to cognition. This is not to say that CS2 is false; rather, I think that it puts a misplaced focus on abstract computation, when it could be claimed that it is concrete computation that is most relevant to justifying the foundational role of computation in cognitive science and AI.