SEHR, volume 4, issue 2: Constructions of the Mind
Updated July 23, 1995
minds and machines
behaviorism, dualism, and beyond
James H. Fetzer
dualism and behaviorism
There are two great traditions in the philosophy of mind that continue to exercise their influence in the age of computing machines. These are the dualistic tradition associated especially with the name of René Descartes and the behavioristic tradition associated with B.F. Skinner. Dualism is strongly antireductionistic, insofar as mentality is taken to be different from and not reducible to behavior. Behaviorism, by contrast, is strongly reductionistic, taking mentalistic language to be at best no more than an abbreviated mode for the description of behavior and at worst just scientifically insignificant gibberish. The most important elements distinguishing dualism from behaviorism concern ontic (ontological) theses about the nature of minds and epistemic (epistemological) theses about how minds can be known. Descartes, for example, maintained that minds are thinking things (the kinds of things that can think), where access to knowledge about minds is available by means of introspection (envisioned as a kind of inner observation). Skinner, however, maintained that the concept of mind was dispensable in the scientific study of behavior, which relies only upon observation and experimentation. In one form or another, these traditions continue to exert their influence within the study of the mind, where the work of Alan Turing, for example, seems to fall within the behavioristic tradition, while that of Stevan Harnad appears to fall within the dualistic tradition instead. The position that I am elaborating here, however, suggests that neither behaviorism nor dualism--in their classic or contemporary guises--can solve the problem of the nature of the mind, which requires an alternative approach that regards behavior as evidence for the existence of nonreductive, semiotic modes of mentality.
the turing test
Contemporary discussions of the nature of the mind are usually dominated by what is known as the computational conception, which identifies mentality with the execution of programs: humans and machines are supposed to operate in similar ways. Perhaps the most important representative of this position is Alan Turing, who introduced the Turing Test (TT) as a means for determining whether the abilities of machines were comparable to those of human beings.1 Turing's position has been enormously influential within cognitive science, which is dominated by the computer model of the mind. Although the Turing Test has acquired the status of common knowledge among students of artificial intelligence and cognitive science, its character is not so widely known within the intellectual community at large. Turing adapted a party game, known as the imitation game, for the purpose of establishing evidence of the existence of intelligence or mentality in the case of inanimate machines. In the imitation game, a man and a woman might compete to induce a contestant to guess which is female and which male, based solely upon answers given to questions (permitting the male, but not the female, to lie). The game would have to be arranged in such a way that the physical properties of the participants--their shapes, sizes, and voices, for example--would not give them away. If the contestant correctly identified the genders of the other players, he or she would win. Alternatively, if the contestant incorrectly identified their genders, then the man would win. Turing's alternative conception was to adapt the test to pit an inanimate machine against a human being, where the property under consideration is no longer the participants' sex but their intelligence or mentality.
TT behaviorism
The TT could be carried out in several different ways. For example, it can be conducted by covertly substituting a machine for the male participant in the course of an ongoing play of the imitation game. Here the success of a machine in inducing the contestant to treat it as human would be taken as evidence of its intelligence or ingenuity. Alternatively, the game might be overtly advertised as pitting a human being against an inanimate machine, where the contestant knew the nature of the challenge. In the first instance, but not in the second, the contestant in the test would be "blind." Turing's approach appears to fall within the tradition of behaviorism, moreover, not simply because he proposed a behavioral test for the existence of machine mentality but also because passing the TT is supposed to be sufficient to justify ascriptions of mentality. Answering the questions in such a way as to induce the contestant to mistakenly guess that it was human, for example, is viewed as a sufficient condition--as strong enough evidence--to justify the conclusion that the machine possesses mentality. If it passes the TT, then it possesses a mind, according to this conception. In this sense, the ascription of mentality functions as an abbreviated mode of language for the description of behavior, where attributing this property is not merely scientifically insignificant gibberish. The behavioristic tradition within which the TT properly falls, therefore, is not the radical form that Skinner advocates but a milder version. There are such things as minds, whose existence can be discovered by the TT. Precisely what the TT actually tests, however, may not be obvious, since the idea of answering questions can be interpreted in various ways.
the chinese room
In order to conceal the identity of the TT participants, you may recall, arrangements have to be made to hide their shapes, sizes, voices, and so forth by relying upon a medium of communication that does not give the game away. The precise form in which "answers" are provided to "questions," therefore, depends upon the specific arrangements that have been made but typically would be conveyed by means of written (typed) messages. These "answers" must conform to the rules of grammar of the same language used to pose those "questions," or else they could give away the game. Merely providing "answers" that conform to the grammatical rules of the language in which those "questions" are posed, however, may or may not provide evidence of mentality. As John Searle has observed, if a non-Chinese speaker were to rely upon a set of instructions, written in English, that directed him to send certain sets of Chinese characters out in response to certain other sets of Chinese characters being sent in (when situated in an otherwise isolated location, for example), that person might seem to his correspondents to understand Chinese, even though by hypothesis he does not.2 Searle's example--known as "the Chinese Room"--implies that conforming to the rules of grammar of a language does not automatically infuse the sentences that are thereby generated with meaning. A fundamental distinction needs to be drawn between syntax and semantics, where syntax in relation to a language concerns how its words may be combined to create new words, phrases, or sentences, for example, while semantics relative to that language concerns the meanings that those words, phrases, or sentences may be used to convey. Syntax processing alone does not appear sufficient for mentality.
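The bite of the example is easy to exhibit in code. What follows is a minimal sketch (in Python, with an invented rule book and invented exchanges, none of which is Searle's own apparatus) of "answering" as pure symbol manipulation:

    # A toy rendering of the Chinese Room rule book: a lookup table from
    # incoming strings of Chinese characters to outgoing strings. The
    # entries are invented for illustration.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I am fine, thanks."
        "你懂中文吗？": "当然懂。",      # "Do you understand Chinese?" -> "Of course."
    }

    def chinese_room(incoming):
        # Match symbol shapes and emit other shapes; nothing in this
        # function has any access to what the characters mean.
        return RULE_BOOK.get(incoming, "请再说一遍。")  # "Please repeat that."

    print(chinese_room("你好吗？"))  # fluent-looking output, zero comprehension

However large the table grew, the procedure would remain a matter of matching shapes to shapes.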
the symbol grounding problem
The Chinese Room has provoked an enormous variety of responses, the most sensitive of which attempt to cope with the problem Searle has identified rather than dispute it. Stevan Harnad, for example, has explored its character and ramifications in a series of articles, which emphasize the importance of infusing otherwise purely syntactical strings with semantic content, if they are to be meaningful symbols rather than meaningless marks, as a suitable theory of the mind requires.3 The need to locate a suitable mechanism for imparting meaning to symbols is what he calls the symbol grounding problem. Harnad has elaborated upon the significance of the Chinese Room for the theory of mind by observing that a Chinese dictionary may succeed in relating some Chinese symbols to other Chinese symbols, but that such a resource would be woefully inadequate for anyone who actually wanted to learn Chinese. The symbols that appear in that dictionary, after all, would be nothing more than entries that relate some meaningless marks to other meaningless marks for those who know no Chinese. The symbol grounding problem must be successfully resolved for those symbols to be meaningful. The difficulty can be formulated more generally for any language whatever by observing that the words that occur in that language are either defined or undefined ("primitive"). The defined words, in principle, could be replaced by the words by means of which they are defined, and those, in turn, by others by means of which they are defined, until, at some point in this process, nothing but strings of primitives specifying the meaning of nonprimitive symbols remains. If we cannot understand primitive symbols, surely we cannot understand symbols defined by means of them.
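The regress of definitions lends itself to the same treatment. In this minimal sketch (the toy lexicon is invented for illustration), expanding every defined word into its definition bottoms out in primitives:

    # Toy lexicon: words that do not appear as keys are primitives.
    LEXICON = {
        "bachelor": ["unmarried", "man"],
        "unmarried": ["not", "married"],
    }

    def expand(word):
        # Recursively replace defined words by their definitions until
        # only primitive words remain.
        if word not in LEXICON:
            return [word]
        parts = []
        for component in LEXICON[word]:
            parts.extend(expand(component))
        return parts

    print(expand("bachelor"))  # ['not', 'married', 'man']
    # If the primitives are meaningless marks to us, every expansion is too.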
the linguistic approach
At least two avenues are available for approaching the symbol grounding problem, one of which is linguistic, the other not. The linguistic solution depends upon the existence of another language--a base language, we might call it--relative to which a second language might be learned. Anyone who already knows English, for example, might overcome the symbol grounding problem by learning to relate specific Chinese characters to corresponding English words, phrases, and expressions. The meaning of Chinese symbols might then be discovered by the process of translating them into English. The problem with this approach, however, is that it leaves in its wake the residual problem of accounting for the meaning of the symbols which occur in English! If the words that occur in English are likewise meaningless marks, then the process of translation that was intended to solve the symbol grounding problem may merely succeed in relating one set of meaningless marks to another. The appeal to a base language to explain the meaning of another language through a process of translation holds no promise if we cannot account for the meaning of symbols in the base language itself! If there were no nonlinguistic means for understanding the meaning of symbols, of course, the symbol grounding problem might represent an ultimate dilemma: we cannot understand some language without already understanding some other language, which implies that we cannot understand any language unless we already understand a language! There could be an escape from this dilemma, however, if there were a base language that did not have to be learned because it is innate, inborn, and unlearned, a language that might be described as "the language of thought."
the language of thought
The hypothesis that all neurologically normal human beings are born with an innate, species-specific language of thought has been advanced by one of the leading figures in cognitive science today, Jerry Fodor.4 Fodor holds that every member of the species Homo sapiens who is not brain damaged, mentally retarded, or otherwise neurologically impaired benefits from the possession of a set of semantic primitives as part of their genetic psychological endowment. It is this innate language that thereby provides an unlearned base language relative to which other languages might be learned. In order for this unlearned base language to provide a foundation for the introduction of any word that might emerge during the course of human existence, the innate stock of primitives must be adequate to the description of any future development in art, history, science, or technology. Otherwise, because this approach is committed to the principle that a language can be learned only in relation to another language, it would turn out to be impossible to introduce or to understand the meaning of words for those thoughts that the human mind was not disposed to understand. There is a striking parallel to Plato's theory of knowledge as recollection, according to which everyone knows everything there is to know before they are born, but the trauma of birth causes them to forget it. When life experiences trigger our recollections, we remember things we knew already but had forgotten.5 Moreover, since everyone has the same stock of semantic primitives, our prospects for successful translation from any language to another are unlimited. No one can ever mean anything that anyone else cannot understand relative to the base language they share.
the nonlinguistic approach
It has been said that there is no position so absurd that some philosopher has not held it. In the present instance, that generalization extends to at least one cognitive scientist. Surely there are more adequate theories than Plato's to account for the acquisition of knowledge, including the notion that we can learn from experience even without the benefit of knowing everything there is to know before birth. Surely linguists have ample evidence that successful translations between different languages are by no means always possible. Fodor's theory seems to be merely an intellectual fantasy. Far more plausible accounts can be generated by abandoning the presumption that one language can only be understood by means of another. After all, if words can ever be understood other than linguistically, then the dilemma might be resolved without resorting to such an innate stock of semantic primitives, which is so complete that it includes concepts sufficient to accommodate any development--no matter how surprising or unexpected it might be--in art, history, science, or technology, including impressionism, totalitarianism, electromagnetism, and color television. Consider, for example, the alternative proposed by Ludwig Wittgenstein, who suggested that, rather than asking for the meaning of words, we should consider how they are used.6 From the perspective of what has gone before, the immense appeal of Wittgenstein's position should be evident. Surely the words that are used in one language might have no counterparts in another language, due to differences in customs, traditions, and practices. And surely there is no need to fantasize that the meaning words can have is determined by our species' genetic heritage.
the total turing test
When we consider the Turing Test from this point of view, it appears obvious that similar uses of language under similar conditions may be important evidence of similarity of meaning. Yet the Chinese Room has already displayed the possibility that syntactic behavior may be an inadequate foundation for drawing inferences about semantic meanings. What Harnad suggests to overcome the yawning chasm between mere syntax processing and cognitively meaningful language is to incorporate the TT within a strengthened conception of the Total Turing Test (TTT). Harnad's position reflects the realization that nonverbal behavior is at least as important as verbal behavior in determining what we mean by the words we use. He therefore appeals to the capacity to identify, classify, sort, and label objects and properties of things, especially though not solely on the basis of their sensory projections. Thus, our capacity to categorize, for example, depends upon our ability to isolate the ways in which things appear that are invariant across various contexts and which provide a usually reliable, but not therefore infallible, basis for their identification.7 In order to provide a more adequate foundation for determining what a human being or an inanimate machine may mean by the syntactical marks that it manipulates, therefore, Harnad introduces the TTT as a test of nonsymbolic as well as of symbolic behavior, where symbols can be grounded by means of the verbal and nonverbal behavior that the system displays. Two systems that exhibit similar behavior in identifying, classifying, sorting, and labeling objects and properties of things provide powerful evidence that those systems mean the same thing by the marks they use.
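Categorization so conceived can be caricatured as the detection of invariant features across varying sensory projections. The following is a minimal sketch, with invented features and categories:

    # Classify "sensory projections" (modeled as feature sets) by the
    # invariant features of each category: a usually reliable, but not
    # therefore infallible, basis for identification.
    INVARIANTS = {
        "apple": {"round", "stem"},
        "banana": {"elongated", "curved"},
    }

    def label(projection):
        for category, invariant in INVARIANTS.items():
            if invariant <= projection:   # all invariant features present
                return category
        return "unidentified"

    print(label({"round", "stem", "red"}))    # apple
    print(label({"round", "stem", "green"}))  # apple: color varies across contexts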
TTT dualism
Upon initial consideration, Harnad's position appears to be an extension and refinement of Turing's position, which strongly suggests that Harnad, like Turing, falls within the behavioristic tradition. After all, Harnad uses both verbal and nonverbal behavior as evidential criteria for determining semantic meaning. If he defined meaning by means of the verbal and nonverbal behavior that systems actually display, his position would fall within the behavioristic tradition. The way in which behavior functions as evidence on his account, however, places it in the dualistic tradition instead. Harnad, like Descartes before him, identifies mentality not simply with the ability to think (cognition equals thinking) but also with the notion that thinking is always conscious (cognition equals conscious thinking). Because an inanimate machine--a "robot," let us say--might display verbal and nonverbal behavior that is arbitrarily similar to that of a human being--a real "thinking thing"--and still not possess mentality, there exists what Harnad, like Descartes before him, envisions as being an unbridgeable gulf between our easily accessible public behavior and our permanently private minds:
Just as immunity to Searle's [Chinese Room] argument cannot guarantee mentality, so groundedness cannot do so either. It only immunizes against the objection that the connection between the symbol and what the symbol is about is only in the mind of the [external] interpreter. An indistinguishable system could still fail to have a mind; there may still be no meaning in there. Unfortunately, that is an ontic state of affairs that is forever epistemically inaccessible to us: We cannot be any the wiser.8
There is yet another respect in which Harnad's position parallels that of Descartes, because they both accept introspection as providing privileged access to mental states, as Larry Hauser has remarked.9 Since introspection offers access to our mental states but not to those of anyone else, we can only be certain that we ourselves have minds. Indeed, when certainty is taken to be necessary for knowledge, as Descartes understood it, no one can ever possess knowledge of the mental states of anyone else--which is why "the problem of other minds" cannot possibly be resolved.
the TT vs. the TTT
At least two properties distinguish the TT and the TTT in relation to the problem of other minds. The first is that, in the tradition of behaviorism, the TT functions as a sufficient condition for ascriptions of mentality, because passing the TT is viewed as enough evidence to justify the inference that anything that actually passes the test has a mind. The TTT, in the tradition of dualism, functions as a necessary condition for ascriptions of mentality instead, since failing to pass the TTT is viewed as enough evidence to justify the inference that anything that fails the test does not have a mind. The second is that, in consonance with the computational conception, the TT complements the approach within cognitive science that identifies minds with the capacity to manipulate symbols, where "symbols" are given a purely syntactical characterization. Systems of this kind are variously referred to as "symbol systems" and as "automated formal systems."10 It is its syntactical character, of course, that makes the account vulnerable to the Chinese Room. The TTT, by comparison, complements the tradition within Cartesian philosophy that instead identifies minds with conscious thinking things. The difference between the TT as a sufficient condition and the TTT as a necessary condition is an epistemic difference, while the difference between the conception of minds as symbol systems and as thinking things is an ontic difference. Given these differences, the manipulation of syntax should be viewed as direct evidence of mentality for the computational conception but only as indirect evidence of mentality for the Cartesian. In fact, Harnad goes even further and maintains, "[t]here is in fact no evidence for me that anyone else but me has a mind," which is a rather surprising claim.11
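The epistemic asymmetry between a sufficient and a necessary condition can be stated almost mechanically. In this sketch (the function names are mine, not Turing's or Harnad's), each test delivers a verdict in only one direction and withholds judgment in the other:

    def tt_verdict(passes_tt):
        # Behaviorist reading: passing suffices for a mind;
        # failing settles nothing.
        return True if passes_tt else None

    def ttt_verdict(passes_ttt):
        # Dualist reading: failing suffices for the absence of a mind;
        # passing settles nothing ("we cannot be any the wiser").
        return False if not passes_ttt else None

    print(tt_verdict(True), tt_verdict(False))    # True None
    print(ttt_verdict(True), ttt_verdict(False))  # None False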
hauser's objection
If there really were no evidence for me that anything else besides myself has a mind, however, the point of the TTT would be obscure. Surely if the TTT is to perform its intended function, it must supply some evidence of the presence or absence of mentality, since it would otherwise be meaningless in relation to the central question. Moreover, as Hauser observes, Harnad maintains that our linguistic capacities must be grounded in our robotic capacities, which would seem to imply that robotic capacities are a necessary condition for linguistic capacities;12 but that cannot be correct. Hauser's argument is that, if linguistic capacities presupposed robotic capacities, then any test of linguistic capacities, such as the TT, would also automatically test for robotic capacities, in which case the TTT would not improve upon the TT. What Harnad presumably intends to claim is that, for a system in which the manipulation of symbols is a cognitively meaningful activity, linguistic capacities presuppose robotic capacities. On this account, the TTT tests for behavioral indicators of the meanings that should be assigned to the symbols the manipulation of which the TT tests. Harnad's position, as I understand it, is that there are infinitely many assignments of meaning (or "interpretations") that might be imposed upon any piece of syntax, no matter how complex. The function of the TTT is to rule out interpretations that are not compatible with the verbal and nonverbal behavior of that syntax processing system. While the TTT cannot definitively rule in any single interpretation of the meaning of that syntax, it can definitively rule out those interpretations that are incompatible with the verbal and nonverbal behavior that such a system happens to display.
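The eliminative role assigned to the TTT can likewise be pictured as a filter over candidate interpretations. A minimal sketch, with invented candidates and observations; behavior never certifies a single interpretation, but it discards the incompatible ones:

    # Candidate meanings for some mark a system manipulates, paired with
    # the behavior each meaning would predict. All entries are invented.
    CANDIDATES = {
        "stop": "halts at the sign",
        "go": "drives on through",
        "yield": "slows and checks traffic",
    }

    def survivors(observed_behavior):
        # Rule out every interpretation whose predicted behavior
        # conflicts with what the system is observed to do.
        return [meaning for meaning, predicted in CANDIDATES.items()
                if predicted == observed_behavior]

    print(survivors("halts at the sign"))  # ['stop'] -- the rest ruled out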
an epistemic problem
This understanding of Harnad's position makes sense of the TTT as a necessary condition for possessing mentality, but it undermines his contention that the possession of mentality or not is an ontic state of affairs that is forever epistemically inaccessible to us, where "we cannot be any the wiser." Surely passing more and more elaborate versions of the TTT would provide more and more evidence about the meaning that should be assigned to the syntax processed by a system, even if it cannot do so conclusively. That presumably is the purpose and importance of the TTT. It may be worth noting, however, that a system could possess a mind, even though it never passed the TTT, namely, when it is never subjected to the TTT. A system that is never placed under the kind of scrutiny that the TTT imposes (by observing and comparing its behavior with that of others)--such as a lone survivor of a shipwreck stranded on a deserted island--would not for that reason alone be shorn of his mentality. Instead, the TTT must be understood hypothetically as concerned with the kind of behavior that would be displayed, if that system were subjected to the TTT. Even more importantly, the Cartesian conception of minds as thinking things (even as conscious thinking things) really does not help us understand the nature of mentality, unless we already possess an appropriate conception of the nature of thinking things, on the one hand, and of the nature of consciousness, on the other. The most glaring inadequacy of Harnad's position is not the epistemic standing of the TTT (indeed, even the TT should be viewed as providing merely inconclusive inductive evidence),13 but his failure to explain the ontic nature of conscious thought.
the ontic problem
Should anyone doubt that the failure to explain the ontic nature of conscious thought almost completely undermines the theoretical significance of Harnad's position, consider the following questions. What does Harnad, like Descartes before him, claim to possess certain knowledge of by means of introspection? What are we claiming for ourselves (even if we can only claim it for ourselves) when we maintain that we have minds? That we are thinking things? That we are capable of conscious thought? Exactly what are hypotheses like these supposed to mean? There are, after all, three great problems that define the subject that is known as the philosophy of mind, namely, those of the nature of mind, the mind/body problem, and the problem of other minds. Surely neither the mind/body problem (of how the mind is related to the body) nor the problem of other minds (whether anyone besides ourselves has a mind) can be resolved in the absence of a solution to the problem of the nature of mind. Otherwise, we do not know precisely what kind of property it is that is supposed to be related to our bodies or possessed by someone else. From this point of view, the TT might be said to fare somewhat better than the TTT, since the TT, as an element of the computational conception, complements an answer to the problem of the nature of mind that is clear and intelligible, whether or not it is also adequate. The TTT, as a feature of the Cartesian conception, surely bears the burden of explaining the nature of thought and the nature of consciousness. Otherwise, that account itself might turn out to be no more than another instance of one meaningless string of marks being related to another meaningless string of marks.
the no theory theory
The only response that Harnad seems prepared to provide to questions of this kind is "a full description of the internal structures and processes that succeeded in making the robot pass the TTT."14 When a system succeeds in passing the TTT and we want to know why that occurred--that is, what it was about that system by virtue of which it passed that test--Harnad's reply is to propose that we take it apart and study its causal components. This may come as bitter medicine for all of those who think that there is or has to be a solution to this problem, but that is the answer he supplies. There are at least two reasons why a response of this kind really won't do. The first is specific to any Cartesian account with the ingredients that Harnad combines. If we can never know whether or not anything else has a mind, how can we ever know whether we ourselves have a mind? If we were completely dissected and our internal organs were wholly displayed, would we be any closer to understanding the nature of the mind than we were before? What we want is an account of the distinctive differences between systems with minds and those without them, which is another thing. Indeed, the very idea that a complete description of its "internal structures and processes" could possibly explain why something passed the TTT implies that minds and bodies interact without explaining how, which is an underlying problem that any Cartesian account invariably encounters. Without knowing more about the mind and its mode of operation, how can we possibly discover how minds can interact with bodies to produce behavior? Harnad's emphasis upon the problem of other minds thus conceals the absence of a theory concerning the relationship between bodies and minds.
kinds of similarity
The second is general and confronts any theory of mind, Cartesian or not. A distinction has to be drawn between a) systems that display the same outputs when given the same inputs, b) systems that display the same outputs when given the same inputs by processing them in similar ways, and c) systems that not only display the same outputs when given the same inputs by processing them in similar ways but are made of similar "stuff." Systems standing in these relations reflect relations of simulation, replication, and emulation, respectively, which are stronger and stronger in kind.15 If one system were made of "flesh and blood" while the other were made of electronic components, for example, no matter how similar they might be in their input/output behavior and in their modes of operation, they could not stand in a relationship of emulation. If one system arrived at answers to questions by multiplication and another by means of repeated addition, no matter how similar they might be in the answers they provided to questions, they could not stand in a relationship of replication. If they provided different answers to the same questions, they could not stand even in a relationship of simulation. From this point of view, it should be evident that the reason the Chinese Room succeeds is that simulations do not establish sufficient conditions for replications. It should also be evident, relative to Harnad's conception, that descriptions of the internal structures and processes that succeeded in making a robot pass the TTT depend on the kind of stuff of which such things are made. That would be relevant if the issues at stake were matters of emulation, but what we want to know is whether we and they are performing the same mental functions as a matter of replication.
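The arithmetical example can be written out directly. In this sketch the two functions agree on every output (simulation) while computing by different processes (hence no replication); and no inspection of the code could settle questions of "stuff" (emulation):

    def product_by_multiplication(a, b):
        return a * b                      # a single primitive operation

    def product_by_repeated_addition(a, b):
        total = 0
        for _ in range(b):                # a different process, same answers
            total += a
        return total

    # Identical input/output behavior across these cases: each system
    # simulates the other, yet neither replicates the other's process.
    for a, b in [(3, 4), (7, 9), (12, 0)]:
        assert product_by_multiplication(a, b) == product_by_repeated_addition(a, b)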
the TTTT
What Harnad has done is to suggest an answer at the level of emulation in response to a question raised about the level of replication. This is a very difficult position for him to defend, however, especially given his attitude toward what he calls the Total Total Turing Test (TTTT). This test goes beyond linguistic and robotic performance to the level of molecular neurology. If the TT depends upon linguistic indistinguishability and the TTT depends upon linguistic and behavioral indistinguishability as well, the TTTT further depends upon neurophysiological (or "bodily") indistinguishability. Harnad suspects that the TTTT may be unnecessary to determine whether or not a system possesses mentality and considers passing the TTTT to be helpful "only inasmuch as it [gives] hints to accelerate our progress toward passing the TTT."16 The problem that Harnad confronts, however, is that he really has no theory of mentality beyond the full description of the internal structures and processes that enable a robot to pass the TTT. What we need to have, however, is not a full description of the structures that enabled a robot to pass the TTT but a full explanation of the functions they perform. That the absence of any other account of the nature of the mind hobbles Harnad's approach becomes painfully apparent when consideration is given to the comparative character of the TT, the TTT, and even the TTTT. All of these tests succeed (to whatever extent they do succeed) only by assuming that the systems employed for comparison possess the property for which those tests are supposed to test. If two machines were TTTT indistinguishable, that would not show that they had minds. And even if a robot and a human were TTTT distinguishable, that would not show that they did not.
signs and minds
What we need is a theory of the mind that preserves the virtues of accounts of both kinds without retaining their vices. Behaviorism appears appealing to the extent that it renders hypotheses about mentality accessible to observational and experimental tests without denying its existence altogether. Dualism appears appealing to the extent that it denies that mentality can be reduced to behavior, so long as hypotheses about mentality can still be subjected to observational and experimental tests. What we want, in other words, is a nonreductionistic conception that relates minds to behavior. An account of this kind can be elaborated on the basis of a dispositional conception that characterizes minds as sign-using (or "semiotic") systems.17 The foundation for this account is the theory of signs advanced by Charles S. Peirce, one of the greatest of all philosophers, who introduced the notion of a sign as something that stands for something else (in some respect or other) for somebody. By inverting and generalizing this conception, minds can be taken to be the kinds of things--whether humans, other animals, or machines--for which something can stand for something else (in some respect or other). In Peirce's view, signs not only stand for other things--typically, things in the world around us--but create in the mind of a sign user another sign, equally or more developed than the original. Its meaning can be identified with the tendencies of that system to behave one way or another when conscious of that sign, where consciousness involves both the ability to use signs of that kind and the capacity to exercise that ability. Cognition then occurs as the effect of a causal interaction between a sign within a suitable proximity and the "context" of other properties of that system that affect its behavior.
three kinds of minds
Peirce distinguished three kinds of signs on the basis of the ways in which those signs are able to stand for other things, which he called their "ground." Icons (such as photographs, sculptures, and paintings) are signs that stand for other things because they resemble those other things. Indices (such as smoke in relation to fire, elevated temperature and red spots in relation to measles) are signs that are causes or effects of those other things. Symbols (such as the words which occur in English or Chinese), by comparison, are signs that are merely habitually associated with that for which they stand. A stop sign at a traffic intersection affords an illustrative example. For a qualified driver who is familiar with signs of that kind, the appearance of a thing of that kind within a suitable proximity (when the driver's vision is not impaired, the sign itself is not obscured by trees and bushes, and so on) activates a mental sign whose meaning is the corresponding concept, namely, the disposition to slow down, come to a complete halt, and proceed through the intersection when it is safe to do so (provided that the driver is not a felon fleeing the police, etc.). The distinction between three kinds of signs invites a distinction between three kinds of minds, where minds of Type I can utilize icons, minds of Type II can utilize indices, and minds of Type III can utilize symbols; each type is successively stronger, presupposing the ability to use signs of every lower kind. This is an attractive conception from the evolutionary point of view, since it suggests the possibility that lower species may possess lower types of mentality and higher species higher, in contrast to the language of thought hypothesis, for example, which does not harmonize with evolution.18
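The successive strength of the three types can be mirrored in a small class hierarchy. This is an illustrative sketch (the class and method names are inventions of convenience, not Fetzer's or Peirce's notation) in which each stronger type inherits the semiotic abilities of the weaker:

    class TypeIMind:                  # can use icons (signs that resemble)
        def use_icon(self, sign):
            return "resembles " + sign

    class TypeIIMind(TypeIMind):      # adds indices (causes or effects)
        def use_index(self, sign):
            return "caused by " + sign

    class TypeIIIMind(TypeIIMind):    # adds symbols (habitual association)
        def use_symbol(self, sign):
            return "habitually associated with " + sign

    # Symbol users can also use indices and icons; icon users need not
    # be able to use anything stronger.
    assert issubclass(TypeIIIMind, TypeIMind)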
the problem of primitives
If thinking can occur through the use of icons and indices as well as symbols, however, then the distinction between syntax and semantics ought to apply to them as well as to symbols. Indeed, iconic signs can be combined in ways that make them resemble different things, as when different parts of a face are changed to create replicas of different faces, a technique well known to law enforcement communities. And indexical signs can be combined in ways that make them resemble different causes and effects, as when directors and actors rely upon "special effects" for the purpose of creating motion pictures. This, in turn, implies that primitive icons and primitive indices as well as primitive symbols require "grounding" in our (mental and other) dispositions. Harnad's "symbol grounding problem" is better envisioned as the problem of primitives, at least to the extent to which the meaning of new (or molecular) signs created by combining old (atomic or molecular) signs depends upon or presupposes that those old (atomic or molecular) signs already possess meaning. An adequate theory of meaning thus requires understanding interactions between signs, what they stand for, and sign users. From this point of view, both the TT and the TTT provide relevant kinds of evidence for the meanings that systems happen to attach to signs, where similar (verbal and robotic) behavior under similar (internal and external) conditions serves to confirm some hypotheses and to dispute others. As a sufficient condition for the presence of mentality, however, the semiotic approach supports the capacity to make a mistake, which involves taking something to stand for something other than that for which it stands, which in turn implies the capacity to take something to stand for something else.19
bodies and minds
The conception of minds as semiotic systems complements the connectionist conception, which views the brain as a network of numerous neurons, each capable of activation. Each node is connected to other nodes where, depending upon its level of activation, it can bring about increases or decreases in the levels of activation of those other nodes. What is remarkable about this approach is that specific patterns of neural activation can function as causal antecedents of behavior for the systems of which they are component parts, where some of these causal roles may be genetic and others are learned. This suggests that the "signs" that are created in the mind of a sign user in the presence of a sign of a suitable kind might be identified with specific patterns of activation. Harnad has objected that a semiotic theory of the kind I have in mind implies the existence of a homuncular module that interprets the meaning of those signs--as a mind within a mind--thereby generating an infinite regress of minds within minds without explaining the mode of operation of any "mind."20 Harnad is mistaken in two different ways, however, each of which is interesting and important enough to deserve elaboration. Harnad's first mistake is to overlook the possibility that dispositions for specific behaviors within specific contexts might accompany the presence of specific patterns of neural activation in systems of specific kinds, not in the sense that they are identical with ("indistinguishable from") those patterns of activation but rather that they are functions ("causal roles") that accompany those structures ("bodily states") for systems of specific kinds. These connections should not be understood as logical connections by way of definitions but rather as nomological relations by way of natural laws.21
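On this reading, the connection might be pictured as a lawlike mapping from an activation pattern, together with a context, to a behavioral tendency. A minimal sketch, with invented patterns, contexts, and behaviors (compare the stop-sign example above):

    # A nomological (lawlike) association, not a definition: the same
    # activation pattern is accompanied by different behavioral
    # tendencies in different contexts.
    DISPOSITIONS = {
        ("stop-sign-pattern", "ordinary driving"): "slow down and halt",
        ("stop-sign-pattern", "fleeing the police"): "drive on",
    }

    def tendency(activation_pattern, context):
        return DISPOSITIONS.get((activation_pattern, context))

    print(tendency("stop-sign-pattern", "ordinary driving"))    # slow down and halt
    print(tendency("stop-sign-pattern", "fleeing the police"))  # drive on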
consciousness and cognition
If specific patterns of neural activation are connected to specific dispositions toward behavior (within specific contexts) for various kinds of systems, then the semiotic conception would appear to have the potential to solve the mind/body problem. The advantage that it enjoys in relation to dualistic conceptions is that it provides the foundation for understanding the nature of thought as a semiotic activity in which something stands for something else in some respect or other for a system. This account thereby supplies a framework for investigating various kinds of thoughts, etc. Harnad's second mistake is to ignore the conceptions of consciousness and of cognition that accompany the conception of minds as semiotic systems. Consciousness on this account qualifies as a completely causal conception: if a system has the ability to use signs (of a certain kind) and is not incapacitated from exercising that ability (because it is brain-damaged, intoxicated, or otherwise impaired), then it is conscious (with respect to signs of that kind). Consciousness in this sense does not presuppose any capacity for articulation nor presume the presence of any concept of self.22 Cognition on this account likewise qualifies as a completely causal conception: when a system is conscious (relative to signs of a certain kind), then the occurrence of signs (of that kind) within an appropriate causal proximity would lead--invariably or probabilistically--to the occurrence of cognition. Depending upon the complete set of causally relevant properties of that system, the occurrence of cognition would bring about some internal or external changes in the system, which, when they involve the production of sounds or movement by that system, influence its behavior.
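Both causal conditions can be stated nearly verbatim in code. A minimal sketch, with invented attribute names: consciousness as unimpaired ability, cognition as the exercise of that ability upon a sign within suitable proximity:

    class SemioticSystem:
        def __init__(self, abilities, impaired=False):
            self.abilities = set(abilities)  # kinds of signs it can use
            self.impaired = impaired         # brain-damaged, intoxicated, etc.

        def conscious(self, sign_kind):
            # Able to use signs of this kind and not incapacitated
            # from exercising that ability.
            return sign_kind in self.abilities and not self.impaired

        def cognize(self, sign_kind, in_proximity):
            # Cognition occurs when a conscious system encounters a sign
            # of a suitable kind within an appropriate causal proximity.
            return self.conscious(sign_kind) and in_proximity

    driver = SemioticSystem({"icon", "index", "symbol"})
    print(driver.cognize("symbol", in_proximity=True))   # True
    print(driver.cognize("symbol", in_proximity=False))  # False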
other minds
If "consciousness" or "cognition" were left as primitives within an account of this kind, then that would pose a serious objection. The accounts of the nature of consciousness and cognition presented here, however, strongly support the conception of minds as semiotic systems. Indeed, perhaps the fundamental advantage of this dispositional approach over its dualistic alternative is that it supplies an account of what it means to be "a thinking thing" (namely, a semiotic system) and of what it means to be a "conscious thinking thing" (namely, a semiotic system that can exercise its abilities). It also supplies a framework for pursuing the problem of other minds. The evidence that distinguishes systems that have minds from those that do not arises within the context of what is known as inference to the best explanation.23 This principle requires selecting that hypothesis among the available alternatives which, if true, would provide the best explanation of the available evidence. When the behavior of a system is sufficiently noncomplex as to be explainable without attributing mentality to that system, then it should not be attributed, nor should stronger forms if lesser will do. The E. coli bacterium, for example, swims toward at least twelve specific chemotactic substances and away from at least eight others.24 This counts as (surprising) evidence that E. coli might possess iconic mentality, but it could not support inferences to any stronger types. Vervet monkeys, however, make three distinctive alarm calls that cause them to run up trees in response to leopard alarms, to hide under the brush in respose to eagle alarms, and to look down and approach in response to snake calls.25 Behavior of this complexity tends to support inferences to higher modes of mentality.
beyond behaviorism and dualism
The arguments presented here have had several objectives. One has been to suggest that, when properly understood, Turing and Harnad are seen to fall into very different traditions. The behavioristic tradition (of Turing) embraces a reductionistic conception of mentalistic language as no more than an abbreviated mode for the description of behavior, the presence of which might be tested by the TT. The dualistic tradition (of Harnad), by comparison, embraces the nonreductive conception of minds as conscious thinking things, the presence of which might be tested by the TTT. Although the TT is intended to function as sufficient evidence and the TTT as necessary evidence for the ascription of mentality, neither affords a suitable conception of mentality. The TT provides a test of the capacity of one system to simulate another, which is too weak to capture the difference between minded and mindless systems. The TTT, as Harnad admits, also provides no guarantee of mentality, where the best that he can do is a full description of the internal structures and processes that enable a robot to pass it, treating the problem of replication at the level of emulation. Neither approach promises to resolve the problem, which appears to require an alternative conception that draws upon them. The theory of minds as semiotic systems identifies mentality with semiotic ability and meaning with dispositional tendencies rather than with behavior. It thus preserves the nonreductive character of dualism while going beyond it by accounting for the nature of consciousness and cognition. It retains the testability of behaviorism while rising above it by appealing to inference to the best explanation. The outcome is a completely causal theory of signs and minds.
Notes
1 Alan M. Turing, "Computing Machinery and Intelligence," Computers and Thought, ed. Edward Feigenbaum and Julian Feldman (New York: McGraw-Hill, 1963) 11-35.
2 John Searle, Minds, Brains and Science (Cambridge, MA: Harvard UP, 1984).
3 See, for example, Stevan Harnad, "The Symbol Grounding Problem," Physica D 42 (1990) 335-346; Stevan Harnad, "Other Bodies, Other Minds: A Machine Reincarnation of an Old Philosophical Problem," Minds and Machines 1 (1991) 43-54; Stevan Harnad, "Connecting Object to Symbol in Modeling Cognition," Connectionism in Context, ed. Andy Clark and R. Lutz (Heidelberg, Germany: Springer, 1992) 75-90; Stevan Harnad, "Grounding Symbols in the Analog World with Neural Nets: A Hybrid Model," THINK 2 (1993) 12-20.
4 Jerry Fodor, The Language of Thought (Cambridge, MA: MIT Press, 1975) 80.
5 James Fetzer, "Language and Mentality: Computational, Representational, and Dispositional Conceptions," Behaviorism 17 (1989) 21-39.
6 Ludwig Wittgenstein, Philosophical Investigations, trans. G.E.M. Anscombe (Oxford: Basil Blackwell, 1953).
7 Harnad, "Connecting Object to Symbol," 81.
8 Stevan Harnad, "Harnad's Response" (to Eric Dietrich), THINK 2 (1993) 30.
9 L. Hauser, "Reaping the Whirlwind: Reply to Harnad's 'Other Bodies, Other Minds,'" Minds and Machines 3 (1993) 220.
10 Allen Newell and Herbert Simon, "Computer Science as Empirical Enquiry: Symbols and Search," rpt. in Mind Design, ed. John Haugeland (Cambridge, MA: MIT Press, 1981) 35-66; John Haugeland, "Semantic Engines," Mind Design 1-34; and John Haugeland, Artificial Intelligence: The Very Idea (Cambridge, MA: MIT Press, 1985).
11 Harnad "Other Bodies," 45.
12 Hauser, 227.
13 As Hauser, indeed, has also observed (225).
14 Harnad, "Harnad's Response," 36.
15 James Fetzer, Artificial Intelligence: Its Scope and Limits (Dordrecht, Netherlands: Kluwer Academic Publishers, 1990) 17-18.
16 Harnad, "Harnad's Response," 36; see also "Other Bodies," 53.
17 For example, see James Fetzer, "Signs and Minds: An Introduction to the Theory of Semiotic Systems," Aspects of Artificial Intelligence, ed. J. H. Fetzer (Dordrecht, The Netherlands: Kluwer Academic Publishers, 1988) 133-161; James Fetzer, "Language and Mentality: Computational, Representational, and Dispositional Conceptions," Behaviorism 17 (1989) 21-39; Fetzer, Artificial Intelligence; and James Fetzer, Philosophy and Cognitive Science (New York: Paragon, 1991).
18 I am indebted to William Bechtel for this observation.
19 For example, see Fetzer, "Signs and Minds"; Fetzer, Artificial Intelligence; Fetzer, Philosophy; and James Fetzer, "Connectionism and Cognition: Why Fodor and Pylyshyn are Wrong," Connectionism in Context, ed. A. Clark and R. Lutz (Heidelberg, Germany: Springer, 1992) 37-56.
20 Harnad, "Harnad's Response," 36.
21 Fetzer, Philosophy, ch. 5.
22 Fetzer, Philosophy, 78.
23 Fetzer, Philosophy, 32; and James Fetzer, Philosophy of Science (New York: Paragon, 1993).
24 John T. Bonner, The Evolution of Culture in Animals (Princeton, NJ: Princeton UP, 1980) 63.
25 Peter Slater, An Introduction to Ethology (Cambridge, UK: Cambridge UP) 155.