We can characterize computationalism very generally as a complex thesis with two main parts: the thesis that the brain (or the nervous system) is a computational system and the thesis that neural computation explains cognition. As Piccinini and Bahar (2012) point out, over the last six decades computationalism has been the mainstream theory of cognition. Nevertheless, there is still substantial debate about which type of computation explains cognition, and computationalism itself remains controversial. My aim in this paper is to make two main contributions to the debate about the first sub-thesis of computationalism, i.e., that the brain is a computational system. First, I want to offer an accurate elucidation of the notion relevant for understanding computationalism (the notion of computation) and to clarify the relation between information and computation, as well as the relation of computation and information processing to the nervous system. Second, I want to argue against a peculiar form of computationalism: the thesis that neural processing is sui generis computation.
Piccinini and Scarantino (2011) take on an important philosophical task in the context of this debate. They elucidate the concepts of computation and information, and they draw two main conclusions about how these concepts are related. First, they show that every notion of computation is different from every notion of information. Second, they affirm that the three main notions of information (Shannon’s non-semantic notion, natural semantic information, and non-natural semantic information) each imply the generic notion of computation. This matters for the debate because, if neural processing is empirically established to be performed by an information processing system, it follows that neural processing is computational in the generic sense. This is the second argument for generic computationalism that Piccinini and Bahar (2012) offer. The first argument is based on the fact that the nervous system has the central feature of a generic computational system: medium independent vehicles. Piccinini and Bahar also argue against digital and analog computationalism. They posit that although the nervous system computes, the kind of computation it performs is neither digital nor analog, i.e., it is a sui generis kind of computation.
This paper has two main parts. In the first part (sections 2.1 and 2.2), I will consider the concepts of information processing and generic computation, the relation they have to each other, and the relation they have to the nervous system. In section 2.1, by comparing the notions of computation and information, I will show that computation is best characterized as the manipulation of vehicles that have what I call “restricted functional relevance.” I will then show that information processing does not imply computation. In section 2.2, I will argue that information processing is related to the nervous system in a stronger way than computation is. I believe that feedback control, the function peculiar to neural activity, implies information processing, but not computation. To argue for this thesis I will consider and criticize the elucidation of the notion of feedback control offered by Wimsatt (1971). I believe that even a very strong notion of feedback (such as Wimsatt’s) does not imply computation and that even the much weaker notion of feedback I defend implies information processing.
With these conceptual tools, in the second part (sections 3.1 and 3.2), I argue against the types of computationalism that Piccinini and Bahar (2012) defend: generic and sui generis computationalism.
2. Computation, Information Processing, and the Nervous System
2.1. Computation and Information Processing
Piccinini and Scarantino (2011) distinguish three main notions of information: a non-semantic, a natural semantic, and a non-natural semantic notion. The notion formalized by Shannon (1948) is non-semantic in that a piece of information in his sense is fully characterized by two features: it is a physical structure distinguishable from a set of alternative physical structures, and it belongs to an exhaustive set of mutually exclusive selectable physical structures with well-defined probabilities. No semantic property is involved in its individuation. However, an information theorist could also be interested in the specific content of a given piece of information. Piccinini and Scarantino (2011) borrow Grice’s (1957) distinction between natural and non-natural meaning and apply it to semantic information. A case of natural meaning is exemplified by the sentence, “those spots mean measles,” which is true only in the case that the patient has measles. Non-natural meaning is exemplified by a sentence like “those three rings on the bus bell mean that the bus is full,” which can be true even if the bus is not full. Similarly, we classify information as natural when there is a physical, reliable correlation between two states. Spots carry information about measles only when the spots are reliably correlated with measles. By contrast, the ringing of the bus bell carries information about the bus being full by virtue of a convention.
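Shannon’s non-semantic notion lends itself to a simple calculation: the information associated with a selectable structure depends only on the well-defined probabilities over the exhaustive set of mutually exclusive alternatives, never on what any structure means. A minimal sketch (the function name and the probability values are illustrative, not drawn from the text):

```python
import math

def shannon_information(probs):
    """Average information, in bits, of an exhaustive set of mutually
    exclusive selectable structures with well-defined probabilities."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must be exhaustive"
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Four equiprobable alternatives (four distinguishable physical
# structures) carry two bits on average, whatever they are about.
print(shannon_information([0.25, 0.25, 0.25, 0.25]))  # → 2.0
```

Nothing in the calculation individuates the structures semantically, which is the sense in which Shannon information is non-semantic.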
So, given this characterization of semantic information, what determines whether an object carries semantic information?
The more general notion of computation, the notion of generic computation, is introduced by Piccinini and Scarantino as a way to capture “all the relevant uses of ‘computation’ in cognitive science” (2011, p. 10). Piccinini and Scarantino affirm that “generic computation is the processing of vehicles according to rules that are sensitive to certain vehicle properties and, specifically, to differences between different portions of the vehicles”
(2011, p. 10). Piccinini and Bahar (2012) characterize the rules that define generic computation more specifically:
Piccinini and Scarantino call this kind of vehicle “medium independent” because this characteristic property makes possible the multiple realizability of computational vehicles, i.e., it allows a given computation to be implemented in different physical media (mechanical, electromechanical, electronic, magnetic, etc.). A computational vehicle type can be realized in a variety of media because very specific or concrete physical properties are not necessary for its realization.
I believe that there is a property that informational and computational vehicles share, and I will follow Piccinini and Bahar in calling it “medium independence.” But this shared property is not identical to the defining property of computational vehicles. I will therefore use the term “restricted functional relevance” for the defining property of computational vehicles and reserve “medium independence” for the property that both kinds of vehicle share. “Restricted functional relevance” better captures the more specific property of computational vehicles, while “medium independence” better captures the more general one. I will maintain that computational and informational vehicles are similar only in that they do not depend on a particular physical medium for their realization, i.e., in that they both have multiple realizability. So, I will use “medium independence” to refer only to multiple realizability (acknowledging that this is a wider sense than the one Piccinini and Bahar give to the same expression). But while computational vehicles exhibit multiple realizability because their intrinsic functionally relevant properties are abstract, informational vehicles exhibit it because their defining property is relational, not intrinsic. By “restricted functional relevance,” then, I mean only that the functionally relevant properties of computational vehicles, according to Piccinini and Bahar’s definition, are not very specific or concrete. Piccinini and Bahar are correct in thinking that both being a vehicle of information processing and having restricted functional relevance conceptually imply multiple realizability. Nevertheless, I will argue that being informational and having restricted functional relevance are not extensionally equivalent concepts.
So, information processing does not imply computation.
We have seen that, since a vehicle of informational type is individuated by a relational property rather than by its intrinsic functional profile, it need not have restricted functional relevance.
I said above that restricted functional relevance is the common feature of computational vehicles. The medium independence of vehicles of both digital and analog computation follows from the fact that only properties of them that are not very specific are functionally relevant. If being informational and having restricted functional relevance are concepts that are not extensionally equivalent but that both imply medium independence, then the concepts of medium independence and restricted functional relevance are not extensionally equivalent either (the notion of medium independence will apply to token vehicles that have no restricted functional relevance, namely, some informational vehicle tokens). We therefore cannot define computational vehicles as medium independent, but only as vehicles of restricted functional relevance. I think that Piccinini and Scarantino, in defining generic computation, identified the correct property, yet failed to characterize it appropriately. They realized that what I call restricted functional relevance is the defining property of generic computation, but they characterized it as medium independence, a more general property.
To summarize, I have argued that medium independence and restricted functional relevance are different concepts. The notions of restricted functional relevance and being informational are not extensionally equivalent, but they both imply medium independence. This means that medium independence (understood as multiple realizability) does not imply restricted functional relevance, and therefore we cannot define computational vehicles as medium independent, but only as vehicles of restricted functional relevance. From this it follows, contrary to the affirmations of Piccinini and Scarantino, that information processing does not imply computation.
2.2. Computational and Informational Feedback Control
In the previous section, I offered an elucidation of computation and argued against the implication from information processing to computation. This makes the line between computation and information processing sharper than it seemed in Piccinini and Scarantino’s explanation. In this section, I will further distinguish computation and information processing in terms of their relation to the nervous system. I will argue that information processing, but not computation, is constitutive of feedback control, the function peculiar to neural activity. In particular, I think that even a very strong notion of neural feedback does not imply computation and that even a very weak notion of neural feedback implies information processing.
Feedback control is the function that is specific to neural activity. This specific function differentiates the nervous system from other functionally organized systems. The nervous system belongs in the class of feedback control systems together with, for instance, autopilot systems. Piccinini and Bahar (2012) note that the thesis that the nervous system is a feedback control system is not trivial and took serious scientific work to establish rigorously (Cannon 1932). Nevertheless, by now it is uncontroversial, though there is still substantial debate on the nature of feedback control itself. Piccinini and Bahar characterize nervous feedback by saying that the nervous system controls the behavior of an organism in response to stimuli coming from both the organism’s body and the environment, stimuli that are themselves effects of the previous activity of the nervous system.
I believe that this characterization of feedback control is basically correct. What is constitutive of feedback is the causal loop just described between the previous effects of the feedback system and its new effects. I think that this is all we need for a system to execute feedback control. Nevertheless, in light of recent discussion on the notion of feedback, I feel I need to offer a defense of this characterization. The notion presented by Wimsatt (1971)1 is much more demanding than Piccinini and Bahar’s. First, I will argue that even this stronger notion does not imply computation (neither digital nor analog).
Before entering into the debate about feedback control, there is a further constraint for
There has been substantial debate about the nature of feedback control. I believe that Wimsatt (1971) has contributed two ideas that are very relevant for thinking about the relation between feedback, computation, and information. Wimsatt argues that feedback control cannot be characterized solely in behavioral terms, that is, in terms of a certain correlation between specific inputs and outputs of a system. This means that to determine whether a system executes feedback control, one has to determine, to some extent, the nature of its internal structure. This is important for our purposes, because whether feedback control implies computation depends exactly on how strong those constraints on the internal structure of feedback control systems are. Wimsatt also argues that two widely accepted conditions on the internal structure of a feedback control system are not sufficient, though he accepts them as necessary. In what follows, I will advance three theses. First, I will argue that even a notion of negative feedback that includes the internal-structure constraints Wimsatt accepts as necessary does not imply computation (neither digital nor analog).
Wimsatt considers three constraints that have been widely accepted for feedback control systems. The first two appear in the elucidation of feedback control given by authors such as Beckner (1959), Ashby (1960), Sommerhoff (1950), and Nagel (1961). They claim that a feedback control system requires: (1) power to compensate for environmental changes that might impede the system’s progress towards the goal; and (2) independent variables that define the system and its environment. It seems clear that the second condition does not imply that feedback control vehicles must be computational. It simply requires that there be two logically independent, but causally related, variables in the system in addition to environmental variables. This second condition puts no restriction on the specific nature of those variables (they could be discrete or continuous, medium-dependent or medium-independent, etc.). Thus, this condition alone leaves it possible for vehicles of feedback control to be instantiated in a non-medium-independent, and hence non-computational, way.
The third condition is that a system be capable of reaching the same set of equilibrium values for its state variables from an arbitrarily large range of different initial values of the state variables (this is the primary feature of Braithwaite’s (1953) analysis). Von Bertalanffy called this phenomenon ‘equifinality,’ and both he and Kacser (1957) have given formal analyses of this phenomenon for open systems of the general type which Wimsatt claims are not feedback systems. So, although this feature could be necessary for feedback control, it is not sufficient.
The first condition, the ‘compensatory’ phenomenon frequently invoked, states that changes in one (or more) state variable(s) will be partially or wholly compensated for by changes in one or more other state variables in such a way that the original perturbed variable(s) will be wholly or partially restored to their ‘normal’ equilibrium values. This phenomenon has been described by Kacser (1957) as ‘buffering,’ a word which is a better clue to its real significance than those employed in the analyses mentioned above. The first and third features are clearly related. Equifinality describes the effect of compensation. The compensation condition gives the mechanism by which this ‘equifinality’ of non-equilibrium states is brought about: some variables ‘take the load’ from the ‘stressed’ variables and allow them to partially or completely approach or regain their ‘normal’ values.
Wimsatt shows that the condition of compensation, like the condition of equifinality, could be necessary for feedback control, but is not sufficient. He describes two mechanisms that satisfy the compensation condition, though the first of them is not a feedback control system. The feature that the second has but the first lacks is a further necessary condition for feedback control that Wimsatt introduces. I will claim that neither compensation nor equifinality is necessary for feedback control, and that the additional condition one should extract from Wimsatt’s model is just the causal loop described above (the causal loop that characterizes a negative feedback control system).
Before arguing for the weaker notion of negative feedback control, I want to argue that even these stronger requirements do not imply computation. The condition of equifinality requires that the internal variables of the system be able to reach an equilibrium value from an arbitrarily large set of values. The condition of compensation requires that this be done by making other variables take the load of the stressed variables. The variables that are above a certain threshold must go back to a value below that threshold (their equilibrium value) by raising the values of other variables. It can easily be seen that these restrictions do not imply that negative feedback vehicles must be non-digital and non-analog, as actual neural vehicles are. The capacity of a variable to return to an equilibrium value and the capacity of a variable to take the load of another (to increase its value when the other’s decreases) impose no restriction on the nature of those values.
Wimsatt presents two hydrodynamic models involving the flow of water into and out of a pair of interconnected tanks. The non-feedback system consists of two geometrically congruent tanks of constant cross-section interconnected by an orifice, with an input of water at the top of one tank at a specific rate r1 and an output of magnitude r0 through an orifice at the bottom of the other tank. Water passes through the connecting orifice at another specific rate rs (positive if flow moves from the first tank to the second). rs is a function of the water levels of the two tanks (l1 and l2), the gravitational force g, and the area of the orifice that connects them. Equilibrium obtains when r1, rs, and r0 are all equal, with l1 and l2 holding the corresponding constant, mutually proportional values. If the value of l2 exceeds the value of l1, rs becomes negative, although r0 and r1 remain positive. The flow of water from one tank to the other reverses until l1 and l2 reach their equilibrium values; when they do, rs returns to its equilibrium value too. This simple open system thus exhibits both of the features supposedly sufficient for a feedback system: (1) the system tends to approach equilibrium values from arbitrary non-equilibrium states, and (2) the system ‘buffers,’ as an unusually high value of the variable of the second tank is partially compensated for by an increase in the variable of the first tank.
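The behavior of this open, non-feedback system can be sketched numerically. In the toy discretization below, rs and r0 are taken to be simply proportional to the level difference and to l2 respectively; the rate constants and the linear form are illustrative assumptions, not Wimsatt’s own equations:

```python
def simulate_tanks(l1, l2, r1=1.0, k=0.5, steps=5000, dt=0.01):
    """Euler-step sketch of the non-feedback two-tank system.
    l1, l2: water levels; r1: constant input rate into tank 1."""
    for _ in range(steps):
        rs = k * (l1 - l2)   # inter-tank flow; reverses sign if l2 > l1
        r0 = k * l2          # outflow at the bottom of tank 2
        l1 += (r1 - rs) * dt
        l2 += (rs - r0) * dt
    return l1, l2

# Equifinality without feedback: very different initial levels converge
# on the same equilibrium, at which r1 = rs = r0.
print(simulate_tanks(10.0, 0.0))
print(simulate_tanks(0.0, 10.0))
```

Starting with the second tank overfull, the inter-tank flow runs backwards (rs < 0), which is the ‘buffering’ the text describes; yet at no point does a past value of r0 cause its present value.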
Nevertheless, the second model shows why the first is not a feedback control system. In the second model, we again have two interconnected tanks, but there is also a pressure sensor at the output r0 of the second tank that detects changes in r0 (and, as the two variables are connected, changes in l2) and is connected to a variable input valve that controls r1 in such a way that deviations from the equilibrium values of r0 and l2 are counteracted by changes in r1.
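The difference the sensor makes can also be sketched. Here the input valve adjusts r1 as a function of the sensed deviation of r0 from a set-point, so the current value of r0 depends causally, via the valve, on its own earlier values. The gain, set-point, and linear rate laws are illustrative assumptions:

```python
def simulate_feedback_tanks(l1, l2, set_point=1.0, k=0.5, gain=2.0,
                            steps=5000, dt=0.01):
    """Euler-step sketch of the second model: a pressure sensor at the
    output of tank 2 drives a variable input valve controlling r1."""
    for _ in range(steps):
        rs = k * (l1 - l2)                                  # inter-tank flow
        r0 = k * l2                                         # sensed outflow
        r1 = max(0.0, set_point + gain * (set_point - r0))  # valve response
        l1 += (r1 - rs) * dt
        l2 += (rs - r0) * dt
    return k * l2  # final value of r0

# Whatever the initial levels, the causal loop holds r0 near set_point.
print(simulate_feedback_tanks(10.0, 0.0))
print(simulate_feedback_tanks(0.0, 8.0))
```

Unlike the first model, a past deviation of r0 here figures among the causes of r0’s later return to equilibrium, which is the causal loop at issue.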
Wimsatt holds that the main difference between these two systems is that, in the first, compensation between the two internal variables depends on the flow from the first tank to the second being reversible; in the second model, compensation can take place without this reversibility. I believe that a different feature determines that only the second is a feedback control system. Recall that a very intuitive required feature of a feedback control system (the one that made us, through an argument Wimsatt (1971, pp. 244-246) offers, reject the behavioristic account of feedback) is that there must be some kind of ‘causal loop’: a causal relation between the actual state of the output variable of the system and previous states of that same variable. That is why Wimsatt showed that we cannot give a behavioristic criterion for feedback control. If any input-output matrix can be reproduced by a net that has no loops, then we may find a system whose inputs and outputs display the correlation characteristic of feedback even though its internal structure contains no causal loop at all.
In the first pair of tanks we simply do not have a causal loop like this. It is not the value of r0 at a past moment that causes the decrease of the value of r0 at the present moment. When r0 has a certain value, this depends on the level of water in the second tank, l2, and the value of the gravitational force; no previous value of r0 figures among its causes.
Furthermore, I believe that this model shows that one of the other conditions supposedly required for feedback control, compensation, is not necessary. It must be noted that the second system is a feedback control system according to Wimsatt, yet no compensation occurs between its internal variables. The decrease of the value of r0 is caused by the sensor’s detection of r0’s own previous deviation, acting through the input valve on r1, not by other variables taking its load.
Additionally, I believe that the form of equifinality required is much weaker than the one described. In particular, the equifinality I believe we should demand does not imply anything about the internal structure of the system. Recall that the equifinality of a system, as described, is its capacity to reach the same set of equilibrium values for its state variables from an arbitrarily large range of different initial values of those variables. Restricted just to an output variable, such as Wimsatt’s r0, this is simply part of the description of the causal loop required for negative feedback. The system is a negative feedback system because r0 can go from an arbitrarily large range of states back to its equilibrium value. However, the condition of equifinality as stated requires this of all the state variables of the system, and the following example shows that this is not necessary.
Suppose there is a system S that consists of two tapes, A and B, a device D1 that writes or erases symbols on A, a device D2 that writes symbols on B, and a sensor Se that detects the number of letters on A (SA) and the number of letters on B (SB) and that is connected to D1 and D2. D1 keeps writing symbols on A unless D2 writes symbols on B. Furthermore, each time D2 writes a symbol on B, this causes D1 to erase a symbol on A. When Se senses that SA has passed a certain threshold T, it makes D2 start writing symbols on B, and thereby makes D1 erase symbols from A until SA reaches a certain value V below T. V is the equilibrium value for the variable SA of the system. SA reaches V through the modification (specifically, the increase) of the value of the other relevant variable (SB). But SB has no equilibrium value: the value of SB will keep rising each time SA passes T. Nevertheless, this system is clearly a negative feedback control system. There is an output variable SA of S that can go beyond a value T, and when this happens, it causes an internal process that culminates in the decrease of the value of SA to V. This is the kind of causal loop required for negative feedback, and it does not require equifinality.
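This system is simple enough to simulate directly. The sketch below uses symbol counts in place of tapes; the threshold and equilibrium value are illustrative choices:

```python
def run_system_S(steps=100, T=10, V=5):
    """Sketch of system S: SA and SB count the symbols on tapes A and B.
    D1 writes on A until SA passes T; then D2 writes on B, each write on
    B erasing one symbol from A, until SA falls back to V."""
    SA, SB = 0, 0
    erasing = False
    peak = 0
    for _ in range(steps):
        if erasing:
            SB += 1              # D2 writes a symbol on B ...
            SA -= 1              # ... which makes D1 erase one from A
            if SA <= V:
                erasing = False  # SA is back at its equilibrium value
        else:
            SA += 1              # D1 keeps writing on A
            if SA > T:
                erasing = True   # the sensor triggers D2
        peak = max(peak, SA)
    return SA, SB, peak

SA, SB, peak = run_system_S()
# SA stays within a band around V, while SB grows without any
# equilibrium value: negative feedback on SA, no equifinality for SB.
print(SA, SB, peak)
```

SA returns to V from above the threshold through the increase of SB, but SB itself never settles, which is the point of the example.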
Therefore, the only requisite necessary for feedback control that remains is the one that appeared in Piccinini and Bahar’s characterization: the outputs of the system must depend causally on previous effects of the system itself. And since neural feedback is negative, we can add the restriction that the output must be an equilibrium value of a variable caused by previous non-equilibrium values of the same variable. As this characterization puts even fewer restrictions on the internal states of the system, it seems clear that it does not imply computation.
We can now consider the relation between feedback control and information processing. I will argue that even the weak notion of feedback that I defend implies information processing. This relation between feedback control and information processing is conspicuously absent in Piccinini and Bahar’s work. They only claim that the complexity of the feedback control performed by the nervous system makes it necessary for the nervous system to process internal variables that carry information about external variables:
They do not elaborate, however, on why the complexity of feedback control makes information processing necessary, nor do they comment on whether the necessity is conceptual, empirical, or of some other nature. In what follows, I will claim that the weaker notion of negative feedback control defended above implies information processing.
According to the weaker notion defended, there must be a causal relation between previous effects of the feedback system and its current outputs, with the output variable returning to its equilibrium value from previous non-equilibrium values.
1 I am indebted to a referee of the Journal of Cognitive Science for encouraging me to examine my theses in the light of Wimsatt’s stronger notion of feedback control.
Piccinini and Bahar (2012) defend two forms of computationalism: generic and sui generis.
3.1. Constitutive sui generis Computation
Piccinini and Bahar (2012) argue for the thesis that neural computation is sui generis: neither digital nor analog.
Digital computation is often contrasted with analog computation. Piccinini and Scarantino (2011) observe that the clearest notion of analog computation is that of Pour-El (Pour-El 1974, Rubel 1993, Mills 2008). According to this notion, abstract analog computers are systems that manipulate continuous variables to solve certain systems of differential equations. Continuous variables are variables that can vary continuously over time and take any real values within certain intervals. Like digital abstract computers, analog computers can be physically implemented, and physically implemented continuous variables are a different kind of vehicle from strings of digits. While a digital computing system can always unambiguously distinguish digits and their types from one another, a concrete analog computing system cannot do the same with the exact values of (physically implemented) continuous variables. This is because the values of continuous variables can only be measured within a margin of error. A major consequence is that analog computations (in the present, strict sense) are a different kind of process from digital computations. While digits are unambiguously distinguishable vehicles, concrete analog computers cannot unambiguously distinguish between any two portions of the continuous variables they manipulate. Since the variables can take any real values, but there is a lower bound to the sensitivity of any system, it is always possible that the difference between two portions of a continuous variable is small enough to go undetected by the system. From this, it follows that the vehicles of analog computation are not strings of digits. Nevertheless, analog computations are only sensitive to the differences between portions of the variables being manipulated to the degree that those differences can be distinguished by the system. Any further physical properties of the media implementing the variables are irrelevant to the computation.
Like digital computers, therefore, analog computers operate on medium-independent vehicles.
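The contrast can be made concrete with a toy ‘analog computer patch’ for solving a differential equation. The sketch below digitally approximates two integrators wired in a loop so that a continuous variable x obeys x'' = -x; the equation, step size, and function name are illustrative, and a genuine analog machine would realize the integration physically rather than by discrete steps:

```python
import math

def analog_integrate(steps=10000, dt=0.001):
    """Approximation of an analog-computer setup for x'' = -x:
    two integrators in a loop, each accumulating a continuous variable."""
    x, v = 1.0, 0.0
    for _ in range(steps):
        x += v * dt     # first integrator: x accumulates v
        v -= x * dt     # second integrator: v accumulates -x
    return x

# After t = steps * dt = 10, x approximates the exact solution cos(10),
# up to the limited precision with which the variables are tracked.
print(analog_integrate(), math.cos(10.0))
```

The computation is sensitive only to distinguishable differences in the variable’s values, not to the physical medium (water, voltage, shaft rotation) that might realize them, which is the sense in which the vehicles are medium independent.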
Piccinini and Scarantino (2011) point out that, in recent decades, the analogy between brains and computers has taken hold in neuroscience. Many neuroscientists have started using the term ‘computation’ for the processing of neuronal spike trains, that is, sequences of spikes produced by neurons in real time. The processing of neuronal spike trains by neural systems is often called ‘neural computation.’ Given the types of computation that Piccinini and Scarantino distinguish, there are two main questions that we can ask regarding neural computation: first, whether neural processing is computation in the generic sense; and second, if it is, what specific kind of computation the nervous system performs.
Piccinini and Bahar (2012) argue that the nervous system is neither an analog nor a digital computer. Since they also argue that the nervous system is computational in the generic sense, it follows that it is a sui generis computational system.
However, they note that there is at least one crucial disanalogy between brains and analog computers:
Typically, if the total graded input signal received by a neuron is above a certain threshold, the neuron fires a spike. If the total signal received by a neuron is below the threshold, no spike is fired. As a consequence of the all-or-none character of the spike, computational neuroscience focuses on firing rates and spike timing as the most significant functional variables at the level of circuit and network. Thus, while firing threshold is subject to modulation by continuous variables (e.g., ion concentrations), firing rates vary in a graded way, and spike timing (in real time) may be functionally significant, none of these aspects of neural signals amounts to the use of continuous variables as vehicles within analog computers. (2011, p. 15)
On the other hand, digital computationalism can be traced back to the work of McCulloch and Pitts, who attempted to explain cognitive phenomena in terms of putative neural processes. McCulloch and Pitts (1943) introduced concepts and techniques from logic and computability theory to model what is functionally relevant to the activity of neurons and neural nets in such a way that (in our terminology) a neural spike can be treated mathematically as a digit, and a set of spikes can be treated as a string of digits. McCulloch and Pitts’s empirical justification for their theory was the similarity between digits and spikes. Spikes, like digits, appear to be all-or-none events.
Piccinini and Bahar (2012) offer exhaustive considerations against the possibility of typing spikes (or even spike rates) as digits and describing spike trains as strings of digits. These considerations cover a breadth of relevant theoretical possibilities and are therefore too extensive to present here. Since the thesis I will defend is independent of the one they support, it is not necessary to assess them. I said earlier in this section that, given Piccinini and Scarantino’s elucidation, there are two questions regarding neural computation. The first is whether neural processing is generic computation and, as we will see in the next section, Piccinini and Bahar answer it positively. The second, given a positive answer to the first, concerns the specific kind of computation performed by the nervous system. Piccinini and Bahar argue against the digital or analog character of neural computation. They affirm that neural computation is sui generis.
In order to argue for or against a thesis about the constitutive character of a feature of neural processing, one has to find a way to distinguish between constitutive and contingent features. Regarding the different properties of action potentials, we have the criterion of functional relevance: spike rates and timing are the variables correlated with different causes and effects inside and outside the nervous system, and they do not need to be realized in neural tissue. I believe that there is also such a criterion regarding the processes performed by the different systems of a biological organism and, in particular, regarding the different features related to the processes performed by the nervous system and discussed by Piccinini and Bahar (2012). I will claim that feedback control is constitutive of neural processing in a way that the different kinds of computation and information processing are not. I showed, in section 2.2, that feedback control entails information processing. So, information processing will also be constitutive of neural processing (although indirectly). Nevertheless, I will show that feedback control does not entail computation.
Piccinini and Bahar (2012) do not seem interested in computationalism as a thesis about the constitutive character of a feature of neural processing. They argue for the thesis that the nervous system computes in some sense without considering the possibility that neural processing could be realized in a system that is not computational in that same sense. I will return to this aspect of Piccinini’s position later. What is important to acknowledge here is that the two forms of computationalism they defend are not advanced as theses about constitutive features of neural processing.
The peculiar function discovered in the nervous system, negative feedback control, has something in common with the main functions of the other systems of human organisms. This similarity, however, is not shared by other features ascribed to the nervous system, such as being computational or being an information processing system. To execute negative feedback control over larger system
The fact that the nervous system is a computational or an information processing system says nothing about the way that system will interact with the rest of the organism or contribute to its behavior. Regarding information, as we have seen, we only need the system’s states to carry information, and states that carry different information to have different causal roles. But those roles do not determine any specific output; there is no specific way an informational system must contribute to the behavior of a larger system of which it forms a part. Regarding computation, the same point applies: as long as a computational system manipulates vehicles of restricted functional relevance, there is no specific output that every computational system that is part of a larger system must have. By contrast, to characterize a system as a feedback control system, one needs to characterize the causal correlation between the values of its output, i.e., the values of the variable that is connected with what lies beyond the system. To characterize something as a feedback system, we have to specify the way it interacts with what is beyond it. It is because of this that feedback control is the only one of these three features (computation, information processing, and feedback control) that we can demand of any realization of a nervous system or a neural process. Feedback control is the only one of these three features that characterizes the nervous system as a
Therefore, if feedback control can be performed by a non-computational system, the nervous system can be realized by a non-computational system.
To determine whether neural processing can be realized by a digital or an analog computer, we first have to decide whether feedback control can be performed by a digital or an analog computer. We have seen that neural feedback control is the function of controlling the behavior of an organism in response to stimuli coming from states of the organism’s body and environment that are themselves effects of the feedback system. So, for a nervous system to be realized by a digital or an analog computer, digits (or strings of digits) or continuous variables must be able to cause states in an organism in response to stimuli coming from states of the organism’s body or environment that were caused by those strings of digits or continuous variables. Of course, not every feedback control system that processes digits or continuous variables will be a nervous system; it is also necessary that the system form part of an organism.
We have seen that negative feedback control, even in its narrower characterization (the one that includes the properties of equifinality and compensation), does not require that its variables be either continuous or discrete. The only structural restriction that these requisites impose on feedback vehicles is that they must be variables capable of returning to an equilibrium value from an arbitrarily large set of non-equilibrium values, and that a variable’s return below a threshold must depend on the increase of the values of other variables. Some possible changes in the values of the variables, and some specific correlations between those changes, are the only conditions required. Given these restrictions, it is perfectly possible for these variables to be discrete or continuous, as digital or analog computation requires. Therefore, feedback control does not imply sui generis computation.
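The point that the feedback organization is indifferent to the discrete or continuous character of its vehicles can be illustrated with a minimal sketch (in Python, purely for illustration; the function and variable names are my own and appear nowhere in the literature discussed). The same compensating loop exhibits equifinality (the controlled variable returns to its equilibrium value from arbitrary starting values) and compensation (its return depends on the increase of another variable), whether its vehicles are integers, as a digital realization requires, or real-valued magnitudes, as an analog realization would approximate:

```python
def feedback(x0, step, setpoint=0.0, max_iters=10_000):
    """Negative feedback loop: while the controlled variable x exceeds
    its setpoint, a compensating variable `effort` increases and drives
    x back down (compensation); x reaches the setpoint from any
    starting value (equifinality)."""
    x, effort = x0, 0
    for _ in range(max_iters):
        if x <= setpoint:      # equilibrium restored
            break
        effort += step         # compensating variable increases...
        x -= step              # ...which lowers the controlled variable
    return x, effort

# Discrete (digital-style) vehicles: integer magnitudes only
print(feedback(5, 1))          # -> (0, 5)
# Continuous-style vehicles: real-valued magnitudes
print(feedback(2.0, 0.25))     # -> (0.0, 2.0)
```

Nothing in the loop’s organization depends on whether the magnitudes passed in are integers or reals; only the structural conditions stated above are doing the work.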
3.2. Constitutive Generic Computation
Piccinini and Bahar (2012) arrived at the conclusion that the nervous system is computational in the generic sense on the basis of two main arguments.
The first argument is called the ‘argument from the functional organization of the nervous system.’ The idea is that since (i) neural processes are defined over medium independent vehicles and (ii) the processing of medium independent vehicles constitutes computation in the generic sense, then it follows that (iii) neural processes are computations in the generic sense. Premise (ii) is the result of Piccinini and Scarantino’s elucidation of generic computation. So, Piccinini and Bahar only need to argue for premise (i). They do so by pointing out that current evidence indicates that the primary vehicles manipulated by neural processes are neuronal spikes (action potentials) and that the functionally relevant aspects of neural processes depend on dynamical aspects of spikes such as spike rates and spike timing. Only these dimensions of variation of their vehicles, not any more specific property, are functionally relevant for neural processes. This means, according to Piccinini and Bahar, that action potentials are medium independent.
I argued above that computational vehicles are not defined by their medium independence, but rather by their restricted functional relevance. So, since premise (ii) is false, the argument is unsound and conclusion (iii) is not established. The fact that the vehicles of neural processing are medium independent does not imply that they are computational. Nevertheless, the evidence that Piccinini and Bahar offer for premise (i) also supports the thesis that the vehicles of neural processing have not only medium independence, but also restricted functional relevance. So, we can argue for generic computationalism by reformulating premise (ii) as the thesis that computation is the manipulation of vehicles of restricted functional relevance, and reformulating premise (i) as the thesis that actual neural processing consists of the manipulation of vehicles of restricted functional relevance. These two premises establish generic computation.
The second argument is called ‘the argument from information processing.’ Piccinini and Bahar affirm that since (i) neural processes process semantic information, and (ii) the processing of semantic information requires generic computation, it follows that (iii) neural processes are computations in the generic sense. Here, both premises require some argumentation. Regarding premise (i), Piccinini and Bahar maintain that cognition involves the processing of information in the three senses I mentioned earlier. First, mutual information, as a measure of the statistical dependency between a source and a receiver in Shannon’s theory (Shannon and Weaver 1949), is often used to quantify the statistical dependence between neural signals and their sources and to estimate the coding efficiency of neural signals (Dayan and Abbott 2001, ch. 4; Baddeley et al. 2000). Second, it is experimentally well established that neural variables carry natural semantic information about other internal and external variables (Adrian 1928; Rieke et al. 1999; cf. Garson 2003); much of contemporary neurophysiology centers on the discovery and manipulation of neural variables that correlate reliably (and often usefully) with variables external to the nervous system. Lastly, Piccinini and Bahar affirm that many theories in cognitive science assume that nervous systems possess and process representations. In their interpretation, this means that the nervous system possesses and processes non-natural semantic information. Specifically, they say it is difficult to explain language processing and other higher cognitive processes without postulating representations.
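For reference, the non-semantic measure mentioned above can be stated explicitly (this is the standard textbook formulation, not the authors’ own notation). For a stimulus variable $X$ and a neural response variable $Y$ with joint distribution $p(x,y)$:

```latex
I(X;Y) \;=\; \sum_{x \in X} \sum_{y \in Y} p(x,y)\,\log_2 \frac{p(x,y)}{p(x)\,p(y)}
```

$I(X;Y)$ is zero when source and signal are statistically independent and grows with their statistical dependence, which is why it can quantify how much a spike train tells an observer about its source without ascribing any semantic content to the signal.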
Regarding premise (ii), Piccinini and Bahar affirm that information, in the three senses mentioned above, is a medium-independent notion.
So, as neuronal spikes carry information in the three main senses, and the vehicles of information are medium independent, Piccinini and Bahar conclude that the nervous system is computational in the generic sense.
Premise (ii), that the processing of semantic information requires computation, is more complex than it may seem. It contains three different claims: (a) that the vehicles of semantic information processing are medium independent, (b) that the vehicles of generic computation are medium independent, and (c) that this common feature makes informational vehicles computational. As we have seen, Piccinini and Bahar are correct in claiming (a) and (b): the vehicles of information processing and of computation are medium independent. Despite this common feature, however, (c) is false. There is no implication from information processing to generic computation, because the medium independence of an informational vehicle type is compatible with some of its tokens lacking restricted functional relevance. The argument, then, does not work. Nevertheless, since I believe that my reformulation of the first argument does work, generic computationalism is true.
What about constitutive generic computationalism?
Piccinini and Bahar (2012) do not seem interested in this form of computationalism. They do not consider the possibility that neural processing be realized in a system that does not compute in the generic sense.
The argument I offer also rests on an empirical premise, that the nervous system is in fact a feedback control system, and on three conceptual premises: that feedback control conceptually entails information processing, that feedback control does not imply computation, and that information processing does not imply computation. But, unlike Piccinini and Bahar’s argument, the empirical premise comes together with a philosophical or conceptual insight about the constitutive character of the empirically discovered feature (negative feedback control).
To conclude this section, it is important to notice a peculiarity of the relation between modal generic computationalism and modal
In the first part of this paper, I developed two main arguments. First, I argued that computation cannot be defined as the manipulation of medium independent vehicles, but rather as the manipulation of vehicles that have what I call ‘restricted functional relevance.’ Second, I defended two ideas that show that the gap between computation and information is actually wider than the current debate on computationalism would suggest. I argued, against Piccinini and Scarantino (2011), that not only are computation and information different concepts, but also that information processing does not imply computation. The main idea was that any given informational vehicle type can be realized by a token that has no restricted functional relevance. I then argued that information processing is more closely related to the nervous system than computation is, because only the former is constitutive of the peculiar function of the system. I showed that even a weak notion of negative feedback control implies information processing, but that even a strong notion of feedback control does not imply computation.
In the second part of the paper, I used these results to consider different forms of computationalism. I challenged a form of each of the kinds of computationalism that Piccinini and Bahar (2012) defended. However, as ‘computationalism’ was understood here in a different sense from the one Piccinini and Bahar accept, my theses are compatible with theirs. First, I argued against a form of sui generis computationalism. I presented reasons to believe that feedback control is constitutive of the nervous system in a way that information processing and computation are not. Then I showed that, since feedback control is compatible with digital and analog computation, neural processing can be realized by an analog or a digital computer.
My two main aims were to clarify several conceptual theses that are important to the debate on computationalism and also to offer answers to new questions regarding computation in the nervous system. First, I intended to go deeper into the characterization of computation, the relation between computation and information, and the relation that they both have with the nervous system. Second, I evaluated computationalism as the thesis that generic and