The notion of computational implementation is foundational to modern scientific practice, and in particular, to explanation in cognitive science. However, there is remarkably little in the way of theoretical understanding of what computational implementation involves. In a series of papers, David Chalmers has given one of our most influential and thorough accounts of computational implementation (Chalmers, 1995, 1996, 2012). In this paper, I do three things. First, I outline three important desiderata that an adequate account of computational implementation should meet. Second, I analyse Chalmers’ theory of computational implementation and how it attempts to meet these desiderata. Third, I argue that despite its virtues, Chalmers’ account has three shortcomings: it (i) is not sufficiently general; (ii) leaves certain key relations unclear; and (iii) does not block the triviality arguments.
What does it mean for a physical system (e.g. a brain, a desktop computer) to implement a computation? The first step in answering this question is usually to talk about a special relation—implementation—that holds between an abstract computation and a physical system.
In this paper, I aim to do three things. First, I outline three desiderata that an account of computational implementation should meet. Second, I describe Chalmers’ theory of computational implementation and how it attempts to meet those desiderata. Third, I argue that despite the virtues of Chalmers’ account, there are three challenges that it faces: Chalmers’ account (i) is not sufficiently general; (ii) leaves certain key relations unclear; and (iii) does not block the triviality arguments. These critical remarks should not take away from the spectacular progress that Chalmers has achieved in articulating the notion of computational implementation. My claim is only that, as it stands, Chalmers’ account falls short of a complete account. Some distance has yet to be travelled before we reach an adequate account of implementation, and in particular, before we have a clear view of the metaphysical commitments involved in using computational implementation in explanations in cognitive science.
2. What is at stake in a theory of implementation?
Before starting, it is worth pausing to consider what is at stake when one gives an account of computational implementation. As noted above, the notion of computational implementation is often treated as an unanalysed explanatory primitive. This is particularly evident in the day-to-day practice of cognitive science where the notion rarely receives explicit articulation. Over the past forty years, cognitive science has scored spectacular explanatory and predictive successes without explicitly articulating the notion of computational implementation, and cognitive science appears to have the potential to score plenty more successes in the future. So one might wonder why one should even bother looking for a theory of computational implementation. If many cognitive scientists have not found the nature of computational implementation a particularly pressing problem, why should we?
There are at least three inter-related issues which jointly motivate a theory of computational implementation.
(R1), (R2), and (R3) are three major motivations for a theory of computational implementation, and they also provide three desiderata for such a theory to serve the needs of cognitive science. A notion of computational implementation that is adequate to the needs of cognitive science should at least: (D1) be clear, (D2) avoid the triviality arguments, and (D3) provide a naturalistic foundation. If an account of implementation falls short in any one of these areas, then we have reason to complain.
Chalmers achieves a great deal of progress in all three areas. However, I argue that his account does not fully satisfy all three desiderata. It falls short in three main areas: (i) it is not sufficiently general and leaves certain features of the implementation relation unclear (D1), (ii) it does not block the triviality worry (D2), and (iii) it does not secure naturalistic foundations for cognitive science (D3). Chalmers makes a major step forward in explaining the nature of computational implementation, but the resulting account is not complete.
Before assessing Chalmers’ account of implementation, it is important to have two other pieces in play. In Section 3, I describe the account on which Chalmers builds: what I will call the Standard Position on computational implementation. In Section 4, I describe the triviality arguments that render the Standard Position untenable, and which motivate Chalmers’ position.
1For examples of the pull of anti-realism about computation, see Bringsjord (1995); Hardcastle (1996); Putnam (1988); Searle (1992).
3. The Standard Position on implementation
Despite the widespread use of computational implementation as an explanatory primitive, it would not be right to say that there are no widely-held theoretical beliefs about the nature of computational implementation. On those occasions when computational implementation is called into question, practitioners tend to produce a proto-theory—a theory that is almost certainly correct in many respects. The proto-theory says that computational implementation involves a mirroring of the formal structure of the computation by the physical structure of the implementing system.
I will call this proto-theory the Standard Position (SP) on computational implementation.
How does SP apply to a particular case? Chalmers applies SP to finite state automata (FSAs):
A physical system implements an FSA just in case the formal structure of that FSA is mirrored in the physical structure of that system. The notion of mirroring is identified with that of a structure-preserving mapping between the system’s physical states and the FSA’s formal states.
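The mirroring condition can be sketched in code. In this minimal illustration, the FSA, the physical states, and the candidate mapping are all hypothetical stand-ins; a mapping is structure-preserving over a run just in case every physical transition maps onto a formal transition:

```python
# An illustrative sketch of SP's mirroring condition (the toy FSA and all
# physical-state labels are hypothetical).

fsa_transitions = {"A": "B", "B": "A"}        # a toy two-state FSA

physical_run = ["p1", "p2", "p3", "p4"]       # observed physical states
f = {"p1": "A", "p2": "B", "p3": "A", "p4": "B"}   # candidate mapping

def mirrors(run, mapping, transitions):
    """True iff each physical transition in the run maps onto a formal
    transition of the FSA under the mapping."""
    return all(
        transitions[mapping[s]] == mapping[t]
        for s, t in zip(run, run[1:])
    )

print(mirrors(physical_run, f, fsa_transitions))  # True
```

On the bare SP, the existence of some such mapping is all that implementation requires; the triviality arguments below exploit exactly this permissiveness.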
Straight off one can see that SP meets two of the desiderata on a theory of implementation. SP is clear (D1): its mapping relation is of a familiar kind, akin to those involved in model-theoretic interpretation and measurement.
SP also secures naturalistic foundations for cognitive science (D3): on its face, the mapping relation involves no mental or otherwise non-natural notions.
Unfortunately, SP spectacularly fails to meet the third desideratum: it does not avoid the triviality arguments (D2).
4. The triviality arguments
There are two major triviality arguments: an informal argument from Searle (1992) and a formal argument from Putnam (1988).
4.1 Searle’s informal triviality argument
Searle (1992) asks one to imagine one’s desktop computer running Microsoft Word. What is happening? There are many physical state transitions inside the desktop computer: transitions in electrical activity, transitions in thermal activity, transitions in vibrational activity, transitions in gravitational activity. According to SP, the computer implements Microsoft Word because one of these patterns of activity—the pattern of electrical activity—has the right structure. If one were to build another physical system, perhaps made out of different materials (e.g. brass wheels and cogs), which had physical transitions with the same structure, then it too would implement Microsoft Word. Now consider a brick wall behind the computer. Despite its static appearance, on a microscopic level a brick wall is teeming with physical state transitions. Within the wall there are physical transitions of vibrational activity, electromagnetic activity, atoms changing state, subatomic particles moving around—a typical wall contains more than 10^25 atoms, all of them in microscopic motion. Searle claims there is some pattern of physical state transitions within the wall that has the same structure as the transitions of Microsoft Word, and hence that the wall too implements Microsoft Word.
Similar reasoning appears to show that almost any macro-size physical system implements any computation one likes. Chalmers (2012) identifies two important computational theses in cognitive science: computational sufficiency, the thesis that implementing the appropriate computation suffices for having a mind, and computational explanation, the thesis that computation provides a general framework for explaining cognitive processes. The triviality arguments threaten both theses.
4.2 Putnam’s triviality argument
One might have two immediate concerns about Searle’s argument. First, one might be unmoved by his claim that a brick wall contains some pattern of physical transitions with the same structure as Microsoft Word. One might reasonably insist that the burden of proof is on Searle to demonstrate that such a pattern exists. Second, one might think that his triviality argument only applies to macro-sized physical systems. Perhaps if one restricts attention to smaller or simpler physical systems, one can regain a non-trivial form of computational implementation.
Putnam (1988) presents a triviality argument that neatly scotches both of these worries. Putnam offers an argument that finds the relevant pattern of physical transitions that mirror almost any computation one likes in almost any physical system one likes.
Putnam states his argument in terms of finite state automata (FSAs). Pick an arbitrary FSA. Putnam chooses a simple FSA, one that is required to transit between two states, A and B, in the sequence ABABAB.
Consider the rock’s trajectory through its phase space from
We can now ask about the physical
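The core of Putnam’s construction can be sketched in code. In this toy illustration (the rock’s states and the FSA run are hypothetical stand-ins), each formal state is mapped to the disjunction of physical states occupying the time-steps at which that formal state is required:

```python
# A toy sketch of Putnam's construction. Given any run of pairwise-distinct
# physical states and any desired formal state sequence of the same length,
# build a mapping under which the physical run mirrors the FSA run.

def putnam_mapping(physical_run, fsa_run):
    """Map each formal state to the disjunction (set) of physical states
    occurring at the time-steps where that formal state is required."""
    mapping = {}
    for p, q in zip(physical_run, fsa_run):
        mapping.setdefault(q, set()).add(p)
    return mapping

rock_run = ["r1", "r2", "r3", "r4", "r5", "r6"]   # distinct maximal states
fsa_run = ["A", "B", "A", "B", "A", "B"]           # the FSA's required run

mapping = putnam_mapping(rock_run, fsa_run)
# mapping == {'A': {'r1', 'r3', 'r5'}, 'B': {'r2', 'r4', 'r6'}}

# Check: every actual physical transition mirrors a formal FSA transition.
fsa_table = {"A": "B", "B": "A"}
ok = all(
    rock_run[k + 1] in mapping[fsa_table[fsa_run[k]]]
    for k in range(len(rock_run) - 1)
)
print(ok)  # True
```

The construction succeeds for any run of pairwise-distinct physical states, which is why Putnam’s result generalises to almost any physical system.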
2Computational sufficiency, although once an important principle of AI, has now fallen into the background. However, computational explanation remains utterly central to explanation in cognitive science. 3An anonymous referee helpfully points out that many objections to Putnam’s argument hinge on objecting to taking disjunctions of phase space regions. The point above is only that there is nothing wrong in taking disjunctions per se. The objections are rather that some disjunctions are legitimate bases for computational implementation, while others are not. In Section 6, I consider an objection of this kind based around an element of Chalmers’ account that I call the INDEPENDENT-COMPONENTS condition.
5. Chalmers’ revision of SP
The two triviality arguments above show that SP in its bare form cannot be right. Computational implementation must involve more than just a simple mirroring between formal transitions and physical transitions. Chalmers revises SP to block the triviality result while aiming to keep SP’s virtues of clarity (D1) and naturalistic foundations (D3).
It is helpful to divide Chalmers’ revision of SP into three steps.
5.1 Step 1: Transitions must support counterfactuals
One feature of the triviality arguments is that they assume that mirroring a pattern of actual physical activity is sufficient for computational implementation. Putnam’s argument identifies a structure-preserving mapping between the actual evolution of physical states over the time interval and the formal state transitions of the FSA; it says nothing about how the physical system would have evolved had it started in a different state.
Chalmers argues that we should introduce two counterfactual requirements into SP.
First, in order for a physical system to implement a computation, the relevant physical transitions should be reliable: were the system in a given physical state, it would transit to the appropriate next physical state. Second, the corresponding conditionals should hold not only for the states on the system’s actual run, but for every formal state transition of the computation, including those that are never manifested.
Initially, Chalmers presented these counterfactual requirements as doing ‘all the work’ in blocking triviality arguments (Chalmers, 2012, p. 331; Chalmers, 1995, p. 398). This was a view that enjoyed widespread currency in the literature at the time: it was generally assumed that counterfactual conditionals deal a knock-down blow to the triviality arguments (for example, see Block (1995) and Maudlin (1989)). Interestingly, Chalmers later showed that these kinds of considerations do not, by themselves, block a suitably revised triviality argument.
In the revised argument, Chalmers (1996) defines a ‘clock’ as a physical component that reliably transits through a sequence of physical states over the time interval. He defines a ‘dial’ as a physical component with an arbitrary number of physical states such that when it is put into one of those states it stays in that state during the time interval. The triviality result for the counterfactually-strengthened version of SP is that every physical system with a clock and a dial implements every FSA.
The argument involves a similar construction to Putnam’s, but over possible, not actual, trajectories in phase space. In one respect the construction is simpler, since the only states that need to be considered are the physical system’s clock and dial; the other physical states can be safely ignored. Chalmers’ strategy is to identify a mapping between each formal FSA state and a disjunction of physical states [i, j], where i is a reading of the clock and j is a setting of the dial.
Suppose the system starts in physical state [1, j]: the clock reads 1 and the dial is set to j. Since the clock reliably advances, the system will transit through [2, j], [3, j], and so on. Now map each physical state [i, j] to the formal state that the FSA would occupy at the i-th step of the run beginning from its j-th initial state. Under this mapping, every clock tick mirrors a formal state transition of the FSA; and since the mapping is defined for every dial setting, the counterfactual requirements are satisfied as well.
It is worth noting that almost all physical systems in which we are interested will have a clock and a dial. A clock could simply be any law-like sequence of physical changes inside the system. A dial could be the entire trajectory of phase space through which the system travels on a particular run. As Chalmers notes, a clock and a dial could also be easily added just by placing a wristwatch inside the physical system. Clearly, some extra condition needs to be added to solve the triviality problem.
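The clock-and-dial construction can be sketched as follows. The FSA, its initial states, and the labels are toy stand-ins; a physical state is a pair (clock reading i, dial setting j), and each pair is mapped to the state the FSA would occupy at step i when started from its j-th initial state:

```python
# A sketch of Chalmers' clock-and-dial construction (toy FSA, hypothetical
# labels). Every clock tick then mirrors an FSA transition, whatever the
# dial reads -- so the mapping supports the counterfactual requirements.

fsa_transitions = {"A": "B", "B": "C", "C": "A"}   # an arbitrary toy FSA
initial_states = ["A", "B", "C"]                   # dial setting j picks a start
STEPS = 5

def run_fsa(start, steps):
    """The FSA's state sequence over a run of the given length."""
    states, s = [start], start
    for _ in range(steps):
        s = fsa_transitions[s]
        states.append(s)
    return states

# The trivialising mapping from (clock, dial) pairs to formal states.
mapping = {
    (i, j): run_fsa(start, STEPS)[i]
    for j, start in enumerate(initial_states)
    for i in range(STEPS + 1)
}

# Check: for every dial setting, each tick (i, j) -> (i+1, j) mirrors a
# formal transition of the FSA.
ok = all(
    fsa_transitions[mapping[(i, j)]] == mapping[(i + 1, j)]
    for j in range(len(initial_states))
    for i in range(STEPS)
)
print(ok)  # True
```

Because the mapping is defined for every dial setting, the counterfactual ‘had the dial been different, the system would have transited through the corresponding run’ comes out true by construction.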
5.2 Step 2: Add input and output constraints
Another striking feature of the triviality arguments is that the computations they consider lack inputs and outputs. Chalmers argues that the triviality results can be avoided, or at least attenuated, if inputs or outputs are added. SP should require that a physical system not only mirror the internal states of the formal computation, but also have appropriate inputs and outputs. There is a weak and a strong reading of this input-output requirement.
On the
The
Nevertheless, even on the strong reading, a triviality result still obtains. This triviality result is that any physical system that implements the right input-output behaviour thereby implements any FSA with that input-output behaviour, whatever its internal structure.
5.3 Step 3: Move to CSA architecture
The final revision to SP proposed by Chalmers is to replace the FSA architecture with a more complex computational architecture. Chalmers claims that the triviality arguments can be resisted for a type of formal architecture that he calls a combinatorial state automaton (CSA).
Chalmers concedes that Putnam is right that the implementation conditions of FSAs are trivial in the ways described above.5 Nevertheless, this would be tolerable if non-trivial implementation conditions are secured for more complex architectures, such as CSAs.
Combinatorial state automata are just like finite state automata except that their states have a combinatorial structure rather than a monadic state structure. Instead of having a single internal state, a CSA has a state vector of sub-states [s1, s2, …, sn], and its state-transition rules are defined over such vectors: the next sub-state of each element depends on the previous sub-states of the vector’s elements (and on any inputs).
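The combinatorial structure just described can be illustrated with a toy CSA. The particular update rule below is hypothetical, chosen only to show how each element’s next sub-state depends on the rest of the vector:

```python
# An illustrative toy CSA: the total state is a vector of sub-states, and
# each element's next sub-state is a function of the current vector. Here,
# a 3-element CSA over sub-states {0, 1} in which each element takes the
# XOR of its two neighbours (wrapping around).

def csa_step(state):
    """One CSA transition: each element's next sub-state depends on the
    current sub-states of its neighbours in the vector."""
    n = len(state)
    return tuple(state[(i - 1) % n] ^ state[(i + 1) % n] for i in range(n))

state = (1, 0, 0)
for _ in range(3):
    state = csa_step(state)
print(state)  # (0, 1, 1)
```

Unlike an FSA, whose total state is monadic, every total state here has internal structure over which implementation conditions can, in principle, be stated.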
Chalmers claims that a physical system implements a CSA when the following conditions are met:
This completes Chalmers’ account of computational implementation. Call it SP-C. Chalmers claims that SP-C meets the three desiderata: it is clear (D1), blocks the triviality arguments (D2), and it provides naturalistic foundations for cognitive science (D3).
4Chalmers (1996, pp. 320–323) claims that SP should be supplemented with the additional condition that inputs would reliably cause the right internal states, even if they do not actually cause such states (effectively combining Step 1 + Step 2). He argues that this allows the input-output condition to get a toe-hold on constraining internal structure, and helps defend it from Putnam’s attack. However, Chalmers goes on to show that a similar triviality result still obtains: any physical system with an input memory and a dial implements any FSA with a given input-output behaviour. An input memory is a physical component that goes into a unique physical state for every sequence of inputs. Having an input memory is again not hard to satisfy (Chalmers gives the example of adding a tape recorder to the system). See Godfrey-Smith (2009, Section 2) for a more detailed discussion of how a triviality result for a combined Step 1 + Step 2 condition obtains under even less exacting conditions. A combined Step 1 + Step 2 condition therefore does not by itself solve the triviality problem. 5Chalmers (1995), pp. 394–395; Chalmers (2012), p. 334. 6Ibid.; Chalmers (1996), p. 324.
6. Three challenges to Chalmers
I will argue that there are three problems with Chalmers’ theory of computational implementation, SP-C. These problems are: (i) SP-C does not cover all architectures relevant to cognitive science; (ii) SP-C leaves certain key features of implementation unclear; and (iii) SP-C does not block the triviality arguments. I argue that SP-C cannot simultaneously satisfy (D1), (D2), and (D3).
6.1 SP-C does not cover all architectures relevant to cognitive science
Chalmers observes that CSA architectures are
Chalmers attempts to guard against this worry by claiming that the CSA formalism is capable of accurately describing any other computational formalism.
It is worth emphasising that Chalmers’ claim is not the relatively modest claim that the CSA formalism can reproduce the input-output behaviour of any other computational formalism.
The weak-equivalence claim concerns only which computational tasks a formalism can solve, not the method by which it solves them.
Weak-equivalence is one thing, strong-equivalence is another. The strong-equivalence claim requires that at least one of the CSAs which ‘solves the same computational task’ also accurately describes—without loss or distortion in its algorithmic description—the computational method by which the original formalism solves that task.
Whether SP-C succeeds as a general account of computational implementation hinges on the truth of the strong-equivalence claim: on whether translation of any computational method into a CSA is an accurate description (without loss or distortion) of that computational method. I think that there are good reasons for doubting this claim.
Let us start by examining Chalmers’ poster-case of strong-equivalence: CSAs and Turing machines.10 Chalmers argues that a Turing machine can be re-described, without loss or distortion, as a CSA. In the quotation above, he gives a number of translation rules that take one from a Turing machine to an equivalent CSA. For example, the internal state of the machine’s read/write head is mapped to one element of the CSA’s state vector, and the contents of each tape square are mapped to further elements.
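The flattening involved in this translation can be sketched as follows. The toy machine and all labels are hypothetical; the point is that a full machine configuration becomes one long state vector, and nothing in the vector marks the head (control) off from the tape (data):

```python
# A sketch of the Turing-machine-to-CSA translation (toy machine,
# hypothetical labels). A TM configuration (head state, head position,
# finite tape) is flattened into a single CSA-style state vector; one
# machine step becomes one vector-to-vector transition.

# delta[(head_state, symbol_read)] = (new_head_state, symbol_to_write, move)
delta = {("go", 0): ("go", 1, +1), ("go", 1): ("go", 1, +1)}

def tm_step(head_state, pos, tape):
    """One Turing machine step on a finite tape."""
    new_state, write, move = delta[(head_state, tape[pos])]
    new_tape = tape[:pos] + (write,) + tape[pos + 1:]
    return new_state, pos + move, new_tape

def to_csa_vector(head_state, pos, tape):
    """Flatten the configuration into one undifferentiated state vector.
    The head/tape (control/data) distinction is not marked in the vector."""
    return (head_state, pos) + tape

config = ("go", 0, (0, 0, 0))          # head in state 'go' at square 0
trace = [to_csa_vector(*config)]
for _ in range(3):                      # three steps: the head writes 1s
    config = tm_step(*config)
    trace.append(to_csa_vector(*config))

print(trace[-1])  # ('go', 3, 1, 1, 1)
```

In the resulting vectors, head state, head position, and tape contents are just sub-states alike; this is the feature the objections below exploit.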
There are a number of computationally-significant features of Turing machines that are lost in the CSA translation. One such feature is the distinction between data (the contents of the tape) and control (the head and its instruction table).
Pylyshyn argues that a distinction between data and control is also important to cognitive science:
The distinction between data and control is a computationally-significant feature of the Turing machine formalism.
The fundamental idea expressed by SP is that the formal structure of a computation should be mirrored in the physical structure of the system that implements it.
The problem is that the distinction between data and control is entirely lost in the CSA translation. Head states and tape states alike are just sub-states of a giant undifferentiated state vector. SP-C places no constraint requiring that head states and tape states be implemented in distinct physical ways that are physically similar amongst themselves. Indeed, SP-C does not even have the resources to state such a condition, since the distinction between a Turing machine’s data and control elements disappears in the CSA translation. Strong-equivalence, the claim that the CSA translation captures the computational methods of the Turing machine without loss or distortion, therefore appears to fail.
A natural response would be to augment the CSA architecture so that it encodes the distinction between the data and control elements of the original Turing machine. For example, one might introduce within the CSA formalism a distinction between two types of sub-state of a CSA: data sub-states and control sub-states, with corresponding constraints on how each type is to be physically implemented.
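One way to sketch such an augmentation (the tags and the toy vector are hypothetical) is to mark each element of the state vector with an explicit kind, so that separate physical-implementation constraints can be stated over each kind:

```python
# A sketch of the proposed augmentation: mark each element of the CSA state
# vector as 'control' (head) or 'data' (tape), so that the implementation
# conditions can require the two kinds to be physically realised in
# distinct, internally similar ways.

# Each vector element becomes a (kind, value) pair.
state = (
    ("control", "go"),   # the head's internal state
    ("control", 1),      # the head's position
    ("data", 0),         # tape square 0
    ("data", 1),         # tape square 1
    ("data", 0),         # tape square 2
)

def kinds(state):
    """Partition the vector's indices by their marked kind, so that a
    physical-similarity constraint can be stated over each group."""
    groups = {}
    for i, (kind, _) in enumerate(state):
        groups.setdefault(kind, []).append(i)
    return groups

print(kinds(state))  # {'control': [0, 1], 'data': [2, 3, 4]}
```

The tags restore, within the formalism, a distinction that the untyped translation erases; the text below asks whether this trick can be repeated indefinitely.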
This response is good as far as it goes, but other computationally-significant differences still threaten a strong-equivalence claim. One such feature is that a Turing machine’s data is not random access: to read the contents of a distant tape square, the machine’s head must travel through all of the intermediate squares.
A natural response again would be to require that a CSA translation of a Turing machine step through a sequence of intermediate ‘data reading’ states—each corresponding to the original Turing machine reading an intermediate tape square—before the CSA reaches the state that corresponds to the Turing machine ‘reading’ the desired square. This would not be the most efficient way for a CSA to operate, but it would appear to replicate the nonrandom-access property of the Turing machine inside the CSA formalism.
The problem is that this response locates the formal property—nonrandom access memory—in the wrong place: it locates the formal property in the CSA’s particular transition table, rather than in the memory architecture of the formalism, which is where the Turing machine locates it.
A natural way to respond is to repeat the ‘highlighting’ trick above. One could augment the CSA formalism to encode a distinction between transitions within the giant CSA transition table. For example, one might explicitly distinguish between ‘memory-access’ transitions and the other transitions in the table.
This fixes two translation problems, but plenty more remain. There are no shortage of computationally-significant features of Turing machines that are lost or distorted in the CSA translation: what counts as an atomic operation, what can happen synchronously, and at what stages input or output is permitted. All of these matter to the way in which Turing machines work. All characterise the distinctive methods by which Turing machines achieve their behaviour. All should be preserved and reflected in the implementation conditions of Turing machines. And all are distorted or lost in the CSA translation.
A hard-headed solution would be to keep replaying the ‘highlighting’ trick above, augmenting the CSA formalism to capture each and every computationally-significant feature of Turing machines until all of the relevant formal distinctions and similarities of the Turing machine formalism are captured in an enriched CSA formalism. Conceivably, at the end of this procedure one would represent the computational methods of a Turing machine within an enriched CSA formalism without loss or distortion. SP-C could then be revised in light of this CSA formalism, and appropriate constraints placed on physical implementation. We would then have achieved our goal of stating the implementation conditions of a Turing machine using the CSA formalism—albeit a heavily modified and augmented version of the CSA formalism.
But all this has a serious cost.
First, it loses the simplicity and generality of Chalmers’ original proposal. The only way to capture the computationally-significant features of Turing machines appears to be to revise the CSA formalism to such an extent as to effectively recreate the Turing machine formalism inside it. It appears that there is little or no redundancy lurking in the Turing machine formalism for the original CSA formalism to exploit. But if this is so, then it appears that the strong-equivalence claim as originally proposed is not, in any interesting sense, true.
Second and more seriously, the CSA formalism was claimed to be capable of expressing without loss or distortion the computational methods, not just of Turing machines, but of any computational formalism.
As a first step, consider switching from the original Turing machine architecture to a multi-head Turing machine architecture, or a multi-tape Turing machine architecture. The CSA formalism and SP-C would need to be revised again. Different features of the target formalism will need to be ‘highlighted’ in the CSA translation. For example, the distinction between different heads and tapes in a multi-head/multi-tape Turing machine will need to be preserved in the CSA translation and reflected in the implementation conditions. Each head and each tape should be implemented by a distinct type of physical component, with components of each type physically similar amongst themselves.
Now consider moving to an architecture that departs more dramatically from that of Turing machines: register machines, Church’s λ-calculus, quantum computers, dataflow computers, billiard-ball computers, enzymatic computers. These formalisms have radically different ways of splitting control and data, different clocking paradigms, different parallelisms, different synchronous natures, different atomic operations, and different ways of handling input and output. They differ from CSAs, Turing machines, and each other, in major and often incompatible ways. They employ different computational methods, and introduce new and incompatible computationally-significant properties. Accurately translating each formalism, without loss or distortion, into the CSA formalism would require that the CSA formalism ‘highlight’ the computationally-significant properties of that particular architecture. Given the open-ended variety of such architectures, the required modifications pull the CSA formalism in different and incompatible directions.
Therefore, replaying the ‘highlighting’ trick—hard-wiring the desired formalism inside the CSA formalism—may achieve strong-equivalence between CSAs and Turing machines, but it would be a short-sighted move. Indeed, tailoring the CSA architecture to Turing machines has the cost that it moves us further from, not closer to, a fully general account of computational implementation.
This worry has particular bite in the case of the computational models in cognitive science. Contemporary computational models in cognitive science have little resemblance to Turing machines, let alone to CSAs.
For example, Marr (1982)’s computational architecture for the early visual system is a series of discrete nested computational filters that pass signals to each other in serial or parallel. Marr’s formal architecture differs in numerous ways from both Turing machines and CSAs. It differs in terms of its control/data split, atomic operations, introduction of nesting relations between filters, requirements on what can happen synchronously, and at what stages input and output are permitted. Neither the original CSA formalism nor the modified CSA formalism above accurately capture the computationally-significant properties of Marr’s formal architecture. Anderson (2007)’s ACT-R architecture is different, but just as challenging for a CSA architecture to model. ACT-R is tailored to explain different cognitive capacities from Marr’s architecture and has different computationally-significant properties: a difference between declarative and procedural data, a difference between chunks and buffers, an organisation into modules, a production-driven rather than state-driven control system. Again, CSAs seem a poor model: they would fold all these properties into the workings of a giant transition table and state vector, with no guarantee that the relevant distinctions and similarities in the original architecture would be preserved by distinctions and similarities in kind between the components of the implementation. Wolpert & Kawato (1998)’s MOSAIC architecture is different again, and requires different formal distinctions. MOSAIC has a highly modularised structure, it has continuous dependence of output on input, it has computational relations that are best described by differential equations, it has error comparison and error summation operations as atomic steps, it is fully asynchronous, it uses probabilistic generative models among its basic representations, and it has a radically different way of managing control to traditional computers (Wolpert, Doya & Kawato, 2003).
The CSA formalism does not appear capable of expressing the methods of the MOSAIC architecture without loss or distortion. If strong-equivalence between Turing machines and the original CSA architecture was hard to achieve, strong-equivalence with the computational models in cognitive science appears even harder. And if one tries to secure strong-equivalence by departing from the original CSA formalism by using the highlighting trick, then one faces the problem that different models depart from the CSA formalism in open-ended and incompatible ways.
There is a claim that is closely related to strong-equivalence which is almost certainly true, and which may be the source of possible resistance to the concerns above. It is almost certainly true that a physical system that is described as, say, a MOSAIC system can also be given many other computational descriptions. A single physical system may simultaneously implement an FSA, a CSA, a Turing machine, a register machine, and Microsoft Word. But this in no way shows that all these computational descriptions are strongly-equivalent. Just as a cat’s quantum mechanical description and the molecular description have neither the same content nor the same satisfaction conditions, so a CSA and a MOSAIC computational description have neither the same content nor the same implementation conditions, even if both are (non-accidentally) satisfied by the same physical system.
Finally, it is worth wondering why, if strong-equivalence really held, cognitive scientists would bother to develop and employ such a diverse range of computational architectures rather than theorising directly in terms of CSAs.
6.2 What is SP-C’s mapping relation?
The first problem with SP-C is that it is not sufficiently general as a theory of implementation, and in particular, that SP-C does not secure the implementation conditions of computational models in cognitive science. The second problem concerns the mapping relation between physical states and abstract machine states—the relation that SP-C inherits from SP. So far we have treated this mapping relation as an explanatory primitive. We also said that SP satisfied (D1) on clarity because it unified computational implementation with model-theoretic interpretation and measurement. But what is this mapping relation? What metaphysical commitments does it bring with it?
This may seem like a strange question, but it should be pressed. SP’s mapping relation plays an absolutely central role in computational implementation. One of the desiderata for a theory of computational implementation is that it provide a naturalistic foundation for cognitive science (D3). We saw that, to a first approximation, this means that computational implementation has to be explicable in wholly non-mental terms. But then it looks like SP is hostage to the fortunes of the mapping relation. If the mapping relation turns out to be mind-dependent, or otherwise inexplicable in non-mental terms, then SP fails to provide naturalistic foundations for cognitive science.
One might claim that the mapping relation is an explanatory primitive that neither requires nor admits further analysis.
One might object that this is a general problem, not one that is specific to SP.12 The mapping relation that SP employs is shared by other domains, not just computational implementation. If the nature of the mapping relation introduces worrisome commitments, or is unclear, then that is a problem not just for SP, but for a wide range of other areas. However, the general nature of the worries should not blunt their force. A parallel can be drawn with the treatment of the representation relation. Chalmers (2012) argues that the representation relation should not figure in an account of computational implementation because it is obscure and poorly understood (violating (D1)).13 Searle (1992) argues that the representation relation should not figure because it introduces illicit mind-dependency (violating (D3)). One might take issue with either of these claims, but both employ general concerns about the representation relation to place constraints on computational implementation. If general worries about representation justify keeping it out of an account of implementation, then general worries about the mapping relation should have force too.
6.3 SP-C does not escape the triviality result
A final problem for SP-C is that, even for the specific case of CSA computations, SP-C does not block Putnam-style triviality arguments. We saw in Section 5 that Step 1 and Step 2 of SP-C were neither individually nor jointly sufficient to block the triviality arguments. The work of blocking the triviality arguments therefore falls almost entirely on Step 3. The problem is that it is not clear how Step 3—switching from an FSA to a CSA architecture— helps to solve the triviality problem at all.
One worry is that CSAs immediately fall prey to Putnam’s triviality argument. Formally, it is easy to translate between CSAs and FSAs. If one is persuaded by the line of reasoning that Chalmers gives to justify strong-equivalence above, one might be inclined to think that CSAs and FSAs are not genuinely different formalisms, but just notational variants.
Without loss of generality, denote the sub-states that the elements of the CSA state vector can take, and relabel each total vector of sub-states as a single monadic state. The result is an FSA with exactly the same state-transition structure as the original CSA. Putnam’s triviality argument then applies to this FSA, and hence, it seems, to the CSA itself.
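The collapse can be sketched with a toy example (the CSA and its update rule are hypothetical): relabel every total state vector as a single monadic state, and the CSA’s transition structure carries over to an FSA unchanged:

```python
# A sketch of the CSA-to-FSA collapse. Relabelling each total state vector
# as one monadic state yields an FSA with the same transition structure,
# to which Putnam's construction then applies.
from itertools import product

def csa_step(state):
    # Toy CSA: each binary element takes the XOR of its two neighbours.
    n = len(state)
    return tuple(state[(i - 1) % n] ^ state[(i + 1) % n] for i in range(n))

# Enumerate every total vector state of a 3-element CSA and relabel it.
vectors = list(product((0, 1), repeat=3))
label = {v: f"q{k}" for k, v in enumerate(vectors)}   # monadic relabelling

# The induced FSA: one monadic state per vector, same transitions.
fsa_transitions = {label[v]: label[csa_step(v)] for v in vectors}
print(len(fsa_transitions))  # 8
```

Nothing in the relabelled FSA records that its states ever had vectorial structure, which is why some further condition is needed to keep the CSA case distinct.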
Chalmers is of course aware of this problem. He knows that an extra constraint must be added to avoid collapsing the CSA case into the FSA case. The key move is to flag the vectorial nature of the CSA notation as carrying real content: each element of the CSA’s state vector must be implemented by a distinct and independent component of the physical system. Call this the INDEPENDENT-COMPONENTS condition.
Given the amount of work that the INDEPENDENT-COMPONENTS condition does, it is frustrating that its content is not easier to spell out. What does it mean for something to be an independent component of a physical system?
Chalmers proposes an answer that attempts to strike this balance: each component of the state vector of a CSA should correspond to a distinct spatial region of the implementing physical system. Call this the SPATIAL-REGIONS proposal.
The SPATIAL-REGIONS proposal is not necessary because a system could implement a CSA even if its sub-states occupy the same spatial regions. There are many ways in which this could happen. First, a system could use
Perhaps more worrying is that the SPATIAL-REGIONS condition is not sufficient for implementation. Even with the SPATIAL-REGIONS condition in place, a triviality result for CSAs still obtains. Choose an open physical system
Even if one were to reject SPATIAL-REGIONS, the INDEPENDENT-COMPONENTS condition still seems to be a fundamentally correct thought about the nature of computational implementation. The basic idea of INDEPENDENT-COMPONENTS also generalises beyond the specifics of the CSA formalism (as we saw in Section 6.1). That idea is that distinct elements of a computational formalism should be implemented by distinct and independent physical components.
If INDEPENDENT-COMPONENTS cannot be spelt out as SPATIAL-REGIONS, how should it be understood? My own view is that INDEPENDENT-COMPONENTS should be understood as placing constraints on what various physical features represent. This brings extra resources into play, and I believe allows one to strike the right balance described above. However, this representational approach to implementation differs significantly from that of Chalmers, and it brings further challenges, which I will not discuss here.
7Nevertheless, see Brooks (1991) for an argument that cognition should be modelled via a series of nested FSAs. If Brooks is right, then FSAs, and their implementation conditions, matter a great deal to cognitive science. 8Like Chalmers, I will restrict attention to computational architectures with finite storage (e.g. Turing machines with finite tape). As Chalmers notes, finite storage architectures are the ones most relevant to modelling human cognition. Chalmers sketches how SP-C can be extended to apply to architectures with unbounded storage, but we will not need to consider his extension here. 9Cf. Pylyshyn (1984), Ch. 4. 10Again, restricting attention to Turing machines with finite storage. 11See Backus (1978) for how formal architecture determines the available computational methods. 12Thanks to an anonymous referee for pressing this point. 13Chalmers (1995), pp. 399–400; Chalmers (2012), p. 334. 14Chalmers (1996), p. 325, Chalmers (2012), p. 328.
7. Conclusion
We identified three desiderata on an account of computational implementation. These were that an account should (D1) be clear, (D2) avoid the triviality arguments, and (D3) provide naturalistic foundations for cognitive science. Chalmers’ account of implementation, which I have called SP-C, can be understood as an attempt to meet these three desiderata.
I raised three challenges to SP-C. These were that SP-C (i) is not sufficiently general, (ii) leaves certain key relations unclear, and (iii) does not block the triviality arguments. We saw a trade-off between meeting the desiderata. Individually, each desideratum is easy to meet. Even if one gives up one of the three desiderata, meeting the other two is relatively easy. For example, a common way to block the triviality arguments (D2) and keep clarity (D1) is to allow non-naturalistic factors into the facts that determine computational implementation (e.g. our explanatory interests or interpretive practices), thereby giving up (D3).
Even if one is convinced by the challenges above, Chalmers’ account remains of absolutely central importance. Chalmers’ account presents insightful and plausible necessary conditions on computational implementation. What I have called Chalmers’ INDEPENDENT-COMPONENTS condition expresses an important insight: that different elements of the computational formalism should be implemented by distinct and independent physical components.