- Author: Alberto Greco
- Published: Journal of Cognitive Science, Volume 13, Issue 4, pp. 393-399, Dec 2012
Psychologists study a phenomenon that concerns individuals and occurs at least once in life, usually during adolescence: the “identity crisis”. This is the moment when individuals ask themselves who they are, what makes them different from other individuals, what peculiarities they have. Perhaps the same thing sometimes happens with human activities as well, namely with science. Cognitive science (CS) is one of the most flourishing scientific enterprises: there are societies, journals like the one you are now reading, and conferences held here and there around the world.
Often, the perceived rationale for such activities is that scholars judge it somehow useful to have a look at what other practitioners are doing on subjects that seem similar enough to their own… Roughly, as long as someone speaks about processes like memory or language, for example, this clearly seems to involve “cognition”, and so it is judged appropriate and interesting. But if you ask each cognitive scientist for her/his view of CS, and want to go beyond the shallowest answers like “it is a multidisciplinary study of cognition, of intelligence…” (perhaps adding “in humans and machines…”), you will probably get a different personal answer from each interviewee. Perhaps it is not by chance that, at its very start, CS was born with the question “Why Cognitive Science?” (Collins, 1977).
Thus, every now and then, the need for clarification becomes apparent, as happens in the identity crisis of individuals. The matter seems fuzzy and hard to disentangle. Cognitive Science seems to be “at a crossroads” once again. Discussions are then started in journals, and conferences are held about this topic (like the one carried out in the newest journal of the Cognitive Science Society, “TopiCS”, in 2009). This does not happen very frequently, and when it happens it does not lead to final conclusions. The present special issue of the JCS is no exception and, of course, its aim is not at all to say the final word about what CS is or should be. Our modest intention is to add some further elements that, in our view, can highlight some challenges or crucial aspects that still need to be clarified.
The quintessence of every appearance of the crisis of CS is the basic question “what is cognition?” This is by now a familiar, even trite, matter, but perhaps only because it is fundamental. The contributions presented in this issue cannot avoid being concerned with it but, again, they are not intended as a systematic treatment of the matter. They simply intend to join the discussion already underway among scholars in different disciplines, each dealing with a facet of it.
One of the first and most common definitions of cognition was the one that considered it as computation. This was the prevailing conception in the early days of Cognitive Science and is still very popular. Some people, however, do not like this concept because it seems too reminiscent of the cognitivist “computational metaphor”. The term “computation” was not used there in the literal sense of numerical calculation, but indicated that cognitive processes work by formal symbolic manipulations, according to well-defined sequences of steps (algorithms), implementing what is known as a Turing machine. This approach to cognition implies that the cognitive “machine” can read symbols and conditionally perform some operation according to which symbols have been detected.
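The “read a symbol, then conditionally act” scheme can be made concrete with a toy sketch. This is purely illustrative, not taken from any paper in the issue: all state names, symbols, and the task (unary increment) are hypothetical, chosen only to show symbolic rule-following in the literal Turing-machine sense.

```python
# Toy illustration: a minimal Turing-machine-style rule table.
# Rules map (state, read symbol) -> (symbol to write, head move, next state).

def run(tape, rules, state="start", head=0, max_steps=100):
    """Apply rules until the machine reaches the 'halt' state."""
    tape = dict(enumerate(tape))          # sparse tape; blank cell = "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")      # read the symbol under the head
        write, move, state = rules[(state, symbol)]
        tape[head] = write                # conditionally act on what was read
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Hypothetical task: unary increment (scan right over 1s, append one more 1).
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}

print(run("111", rules))  # -> "1111"
```

The point of the sketch is only that every step is fully determined by the current state and the detected symbol, which is exactly the feature the objections below take aim at.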
The main trouble with this view is that it requires that any state of the system, and the rules for going from one state to another, be defined in an unambiguous way. A common objection is that, although mathematical or logical operations may easily satisfy such requirements, this seems less plausible for all cognitive or mental operations. Many are not willing to accept that thinking be reduced to “nothing else” than comparing symbols and jumping to other symbols (the well-known “Chinese room” argument is one example); but when one asks what exactly this “something else” is, the answers are mostly vague.
Igor Farkas revisits this classical definition of cognition as computation, proposing reasons why computation is not such an awkward concept. In his view, the traditional view of the concept of computation has been too narrow, and it should be widened to encompass other kinds of computation, such as analog, quantum or probabilistic computation. He compares different approaches to this key concept and claims that computational models in CS are necessary, at least if we do not want to renounce the benefits of cognitive modeling, whose main advantage – in his view – is that of allowing a formal description of processes. Farkas illustrates the advantages of cognitive modeling and claims that modeling can only be computational, provided it is freed from the interpretation of computation as “symbolic computation”. In his view, the best modeling framework is connectionism. As one of its main advantages, Farkas considers the fact that representations are constrained by the kind of computation peculiar to this framework. In other words, what is learned has the greatest importance, but the way in which it was learned is also, and especially, important. He also considers, as obvious further desirable properties, the facts that connectionist models are strongly focused on learning and are biologically inspired. Thus Farkas’ effort is to show how the connectionist perspective can allow us to gain the precision of formal description without falling into pure formalism.
The contribution by Mihoko Otake, Surya Nurzaman, and Fumiya Iida also implies a wider consideration of cognition. They take the embodied approach to cognition, which urges that body and action be taken into consideration, to show how this view can ground applied research in assistive domains. Their approach shows a concrete way of giving substance to the concept of embodiment, which often limits itself to emphasizing the role of the body in abstract terms, only to criticize the cognitivist perspective. In particular, one aspect that has been poorly treated concerns how memory can be conceived from the embodied perspective. There is now wide consensus that we should go beyond the early “storehouse metaphor” of memory; most contributions on how to do this in concrete contexts, however, only emphasize the role of working memory or point out that we should be more aware of the constructive nature of memory and of its limits in everyday situations (as in eyewitness testimony). The place of the “embodiment” perspective in the treatment of memory has so far been rather limited in standard cognitive science.
It is interesting, therefore, to show how the view of cognition as embodied may be related to situations where sensory and motor experiences act as enhancers of memory performance. Otake and colleagues present an original method called “coimagination”, aimed at preventing and treating cognitive decline in older adults. This method consists in stimulating participants’ reminiscence of past experiences by showing them pictures and encouraging them to comment on and discuss these with others. The subsequent analysis tries to relate success in recalling, and the need for support, to the sensorimotor content of verbal utterances.
This study strives to relate this method to clinical applications, but it is also relevant in suggesting how efforts can be directed towards effective multidisciplinary collaboration, not only in treatment but also in modeling the cognitive processes involved in dementia. This is only touched on in the paper, but current trends in robotic practice may provide examples. New robotic platforms now being developed might exploit scripts based on some method similar to coimagination while physically interacting with older adult users. This could allow such systems to achieve a twofold benefit. Firstly, as cognitive models, they could use memory in a more elaborate way, incorporating facts concerning physical and sensorial interaction into their user model. Secondly, they could acquire the ability to serve as a therapeutic interface, based on their knowledge of situations that are relevant in their users’ lives. Such situations can be stimulated by the use of pictures or objects, as in the coimagination method, employing its technique of analyzing verbal exchanges, but also enacting a range of particular interaction patterns like those that emerge during the application of this method.
The benefits of cognitive modeling are evident so far, and few would be willing to deny that this is a core aspect of the whole CS enterprise. Less clear, however, is why. The wrong question to ask is which aspects are to be considered central to cognition, in order to place them at the focus of cognitive models. The exercise of cognitive modeling has perhaps taught us that a ready-made concept of cognition should not be presupposed. Rather, cognitive modeling itself contributes to the definition of cognition: building cognitive systems provides tools and testbeds where different conceptions of cognition can be examined. In their paper, Antoni Gomila and Vincent Mueller systematically examine some capabilities that are desirable for a cognitive system, and that at the same time constitute challenges for the next generations of cognitive models. The paper treats this matter in a theory-neutral way and tries to give measures of theoretical progress, which the authors see in some core skills: dealing with uncertainty, learning from experience, understanding and flexibly using knowledge, autonomy, and sociality.
In their view, cognition is the capability of learning from experience and of using the acquired knowledge in a flexible manner to achieve goals. This definition applies to both natural and artificial cognitive systems. It puts much stress on learning, so that not just any kind of knowledge counts as cognition, but only knowledge that is acquired by the system itself through “experience”. Given the central role that the latter notion has in this analysis, it is important to understand how it should be regarded. Gomila and Mueller conceive “experience” at a somewhat high level, as active exploration, discovery of meaningful patterns, reasoning by analogy, and so on. From this point of view, this criterion has already been addressed many times in the history of AI, by artificial cognitive systems that tried to reason about problems they had not been able to solve, by identifying the mistakes not to repeat.
Gaining “experience”, however, is not only finding abstract analogies between present and past situations, but also and primarily connecting such situations to the system’s own processing. Gomila and Mueller rightly point out, as a measure of progress in learning from experience, the active discovery of environmental patterns as opposed to simple habituation or accommodation. It may be worth noticing that truly adaptive responses need a lower-level analysis of the mechanisms that implement such a process of active discovery, already at the pattern-recognition and categorization levels. In this sense, this view can be considered cognate to Harnad’s idea of sensorimotor toil (Cangelosi, Greco, & Harnad, 2000) in category learning, i.e. the process of acquiring new categories through real-time trial-and-error behavior, corrected by feedback; this learning is the basis for symbolic theft, where sensorimotor experience is used for acquiring new categories through grounded symbolic language. This basic mechanism of referring all symbolic processes to the peculiar learning that pertains to an individual system (and only to that system) may also connote other challenges highlighted in the paper. For example, autonomy implies the representation of values and goals that have meaning within the whole cognitive system itself, and that are not dependent on external interpretation. This implies that there cannot be autonomy if the interpretation of what a system is doing is not grounded in the system’s own experience. This may be viewed as a further challenge for Cognitive Science.
In the final paper, Alberto Greco is concerned with the most general problem of the multidisciplinary nature of CS. He asks whether a unified science of cognition is needed, and how it can be achieved. After pointing out that there is currently no universally accepted definition of the object and method of CS, the paper takes a pragmatic perspective, considering that cognitive scientists, while continuing to regard cognition as defined within their own discipline and to use their own methods, de facto seem to have an intuitive notion of the importance of multidisciplinary collaboration. In his view, this collaboration may be considered a true unified “cognitive science” enterprise only if the contribution of each discipline is essential for the explanation of a cognitive phenomenon.
In this vein, the paper describes a method for integrating different disciplinary contributions into a consistent explanatory framework. The idea is to find a system for representing descriptions of “facts” that require explanation, made from several points of view and using different disciplinary concepts, for example “the brain area X is active” or “the person reports that Y”. Such descriptions are placed along a timescale so that they can be considered as “flows” of events. Explanation requires that various events be related (causally or correlationally), and this can be done by building links between points in flows. In some cases, this can only be done inside one discipline (as when a certain mental state is supposed to be causally connected with a following mental state). In other cases, however, a better explanation can be achieved by considering correspondences between events, happening at the same critical time, that are described inside different disciplinary “flows” (like “the fault X happens in the brain area Y” and “the person does Z”). It is argued that this would show the crucial contribution of the different disciplines belonging to Cognitive Science to a particular explanation.
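The idea of time-stamped disciplinary “flows” connected by explanatory links can be sketched as a small data structure. This is a toy sketch, not Greco’s actual formalism: the class names, the link kinds, and the example events are all hypothetical, chosen only to make the within-discipline versus cross-discipline distinction concrete.

```python
# Toy sketch of "flows" of events and explanatory links between them.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Event:
    flow: str          # the discipline whose vocabulary describes the fact
    time: float        # position on the shared timescale
    description: str

@dataclass
class Explanation:
    links: list = field(default_factory=list)   # (cause, effect, kind) triples

    def link(self, a, b, kind="causal"):
        self.links.append((a, b, kind))

    def cross_disciplinary(self):
        """Links whose endpoints lie in different disciplinary flows."""
        return [(a, b, k) for a, b, k in self.links if a.flow != b.flow]

# Hypothetical example mirroring the text: a neural fault and a behavior
# described in different flows but linked at the same critical time.
fault = Event("neuroscience", 1.0, "fault X happens in brain area Y")
act   = Event("psychology",   1.1, "the person does Z")

exp = Explanation()
exp.link(fault, act, kind="correlational")
print(len(exp.cross_disciplinary()))  # -> 1
```

A link between two mental states would have both endpoints in the same flow and so would not appear in `cross_disciplinary()`; links that do appear there are exactly the ones exhibiting an essential multidisciplinary contribution.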
One of the main reasons for this special issue is to testify that there is still much room for discussion of unsolved matters concerning Cognitive Science. The papers in this issue barely touch the surface and raise only a few limited questions: Is computation still an essential factor in the definition of cognition? Or, as many scholars now believe, is it compulsory to introduce the “body” into this definition? And, in that case, is there a way of giving the embodied approach substance in some applied domain? If an artificial cognitive system is to exhibit cognition, what do we expect it to be able to do? And, finally, is the collaboration between disciplines still essential for explaining cognition? Our purpose is to stimulate further contributions on other aspects, as well as discussion, comments or criticism of the present topics. The identity crisis is overcome by individuals when they reach maturity, perhaps when they decide that the most important thing is not who they are, but what they want to become.