"Computationalism" is the view that the mind is (in some sense) a computer, that the mind can be described programmatically, and that computers can have mental states. This view has been critiqued for a variety of reasons, and here we have several essays examining some of the issues. Some of the problems appear to arise when it is assumed that the real world concept of "computer" is synonymous with one particular and influential abstraction, that of the "Turing machine":
Now, I think I understand what those properties mean, except for "intentional". This is a word that crops up all the time in philosophy of cognition. According to the Stanford Encyclopedia of Philosophy (an excellent on-line resource), "Intentionality is the power of minds to be about, to represent, or to stand for, things, properties and states of affairs", which doesn't really help me a lot. Firstly, it is assumed to be a property of minds, something we are trying to understand in the first place. Secondly, once we start getting pernickety, then, well, what does it mean "to be about" or "to stand for", or for that matter, what is a "thing" or a "state of affairs"? And so on, unto circularity (let's not even get into the definition of "mechanism" or "machine"!). Anyhow, some of these essays address this question (although they seem to conflate "semantics" and "intention", so maybe I don't understand the (philosophical) meaning of "semantics", either).
Nevertheless, there's a lot of interesting food for thought here.
Scheutz provides an overview chapter, linking the following contributions. (He also provides a helpful summary section at the beginning of each contributed chapter, which aids understanding of the more obscure formulations.)
Like Smith's On the Origin of Objects (300+ pages on "what is a thing", which I am part way through, and have been for a while), reading this is like wading through treacle: it's hard going, but the substance is delicious.
Rather than assuming we know what we mean by "computation", and applying that to cognition, Smith turns the argument around. He doesn't assume that we know what computation is, and asks, if computationalism is true (that is, if cognition is a form of computation), what would that mean for (a theory of) computation?
Because of this uncertainty about what computation is, some of the claims about and criticisms of computationalism potentially disappear:
Additionally, Smith identifies intentionality as the key property of computation that interests cognitive science: semantics is a problem for understanding minds, computation seems to achieve it too, so maybe minds are computers. (Here we have one of the statements that, in the philosophy of mind/cognition at least, "semantical" and "intentional" are synonymous.)
Picking through the implications of this leads Smith to state one of the main differences between the abstract notion of computation as an isolated "brain in a vat", and the actual, very different, participatory way computers work in the real world:
Following through even further leads Smith to the startling conclusion that the "Theory of Computation" is not what we thought it was:
This further leads to the overall conclusion that "Computation is not subject matter", it is not a "distinct ontological category". He takes this as a positive conclusion, however:
Smith's deeply thoughtful picking apart of the foundations of computation is fascinating stuff. In this chapter he refers several times to his forthcoming 7-volume The Age of Significance, based on his 30+ year investigation into these foundations. As of the time of writing this review, 8 years after the publication of the reference, the work is still "forthcoming"; only the 45-page Introduction has so far appeared. I do hope it all eventually gets published (written?) -- Smith's view on computation is fascinating and deeply important.
Copeland points out that the Turing machine is an abstraction and formulation of what human "computers" (clerks) do, and not (necessarily) a formulation of what any mechanism can do. The limitations of TMs are consequences of the limitations of humans (when acting as computer-clerks), not the other way round.
He then goes on to pick apart how writers (mis)interpret the Church-Turing thesis, which is actually about what humans (when acting as computer-clerks) can do, not what any mechanism can do. Maybe there are mechanisms that can do more. He illustrates this by means of "thesis M", that: "All functions that can be generated by machines (working on finite input in accordance with a finite program of instructions) are Turing-machine-computable". This admits two interpretations, neither of which is provably true:
Personally, I don't care for thinking about notional machines that cannot exist in the real world -- these are "magic", and of no real interest (provided the magic is logical impossibility, not mere physical impossibility: after all, the laws of physics are not completely known, and something impossible with today's physics might be possible with tomorrow's). A notional machine where we don't know whether or not it can exist, well, that's part of the first interpretation, and to be decided empirically. Part of the problem (from a pragmatic if not a philosophical viewpoint) is that no-one has implemented such a machine -- they are all still "notional" (even the non-magical ones). But that is just an engineering problem.
Copeland points out that Turing himself invented (notional) machines that could out-compute a TM -- the so called O-machines (O stands for "oracle", or, just possibly, "magic").
The crucial term here is "well-defined", which is not synonymous with "effective". A process can be well-defined in terms of its operation and outcome without giving an effective procedure for how to achieve that outcome. (Again, "effective" is a technical term.) Copeland gives the classic example of such a well-defined operation: the Halting function.
This is clear, although often misunderstood. There are many perfectly well-defined mathematical functions that are (Turing) non-computable: there is a whole sub-discipline on determining which functions are non-computable! The crucial question is, are there any mechanisms that can evaluate (by a necessarily non-Turing-computation) any of these functions?
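(To make the example concrete, here is a minimal sketch, in Python, of the standard diagonal argument for why the Halting function is well-defined but not effectively computable. This is my illustration, not anything from the book; the halts oracle is hypothetical.)

```python
# The Halting function halts(program, data) is perfectly well-defined: it is
# True exactly when program(data) eventually halts. The diagonal argument
# shows no program can compute it. 'halts' below is a hypothetical oracle.

def halts(program, data) -> bool:
    """Hypothetical oracle: True iff program(data) eventually halts."""
    raise NotImplementedError("no Turing machine can implement this")

def diagonal(program):
    """If halts() were computable, diagonal() would be an ordinary program;
    but then diagonal(diagonal) would halt if and only if it does not halt."""
    if halts(program, program):
        while True:       # loop forever when the oracle says "halts"
            pass
    return "halted"       # halt immediately when the oracle says "loops"
```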
So Copeland's argument might be summarised as: stop thinking that TMs are at the top of this hierarchy, and that therefore the mind, if it is a computer, must be (at most) a TM. (I would add: but don't use "magic" when arguing about the existence of higher level machines.)
Sloman argues that all the fuss about minds as Turing machines is irrelevant -- the kinds of computation that minds do should be related to the kinds of computation that real computers in the world do, not to some artificial abstraction. Indeed, he argues that if the concept of TMs had never been articulated, it would make no difference to the field of practical Artificial Intelligence.
Real computing machines, computation-in-the-wild machines, have two kinds of properties, informational and physical.
The TM abstracts away from the physical, and is ballistic (once set going, it runs to completion with no further interaction with its environment); but physical and online aspects are crucial for embodied, interactive AI.
Sloman then carefully picks apart several features that are needed in real computers. These include things like state (and access to it), laws of behaviour (that can self-monitor, self-modify, and self-control), conditional behaviour, coupling to the environment, and multiprocessing. Crucially, none of these make any mention of TMs or universality, and several are outside the Turing model.
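(A toy illustration of my own, not Sloman's: a minimal sketch of an agent loop exhibiting some of these features -- accessible state, conditional behaviour, rules of behaviour that monitor and modify themselves, and continuous coupling to an environment -- with no appeal to TMs or universality. The sensor and actuator callables are assumed placeholders.)

```python
import time

class ToyAgent:
    """A toy 'computation-in-the-wild' loop (my illustration, not Sloman's)."""

    def __init__(self, sensor, actuator):
        self.sensor = sensor          # coupling: read from the environment
        self.actuator = actuator      # coupling: act back on the environment
        self.threshold = 0.8          # a 'law of behaviour' the agent can modify
        self.alerts = 0               # internal state, accessible to the agent

    def step(self):
        reading = self.sensor()       # online input, not a fixed input tape
        if reading > self.threshold:  # conditional behaviour
            self.alerts += 1
            self.actuator("slow down")
        if self.alerts > 10:          # self-monitoring and self-modification:
            self.threshold = 0.9      # the agent adjusts its own rule

    def run(self, hz=10):
        while True:                   # open-ended interaction, not 'ballistic'
            self.step()
            time.sleep(1.0 / hz)

# Usage sketch (several such agents running concurrently would give Sloman's
# 'multiprocessing'):
#   import random, threading
#   threading.Thread(target=ToyAgent(random.random, print).run).start()
```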
As an aside, he makes an interesting point about the suitability of continuous dynamical systems as a computational approach:
So, in summary: the TM is an abstraction that is useful for various mathematical analyses. But what it has abstracted from, particularly the real-time embodiment in a participatory physical environment, is the very stuff that is crucial to AI. So AI shouldn't restrict itself to considering only the irrelevant abstraction of the TM.
Huh? Well, I suppose one incomprehensible chapter in a philosophy book isn't too bad going, really. This has something to do with various problems of duality in AI -- mind/body, plan/behaviour, concrete/abstract -- but it assumes a lot of knowledge of the area that I don't have, mixed with very technical philosophical jargon. On to the next chapter.
Harnad addresses the symbol grounding problem: how can an internal mental symbol be grounded, given meaning, when it is dislocated from its external referent? Here the discussion has two parts. Firstly, basic symbols can be grounded not through a linkage to the distant referent, but through localised "sensorimotor toil", by interactions localised to a sort of bodily "surface" on which the external referent is actively projected by the body's sensory and motor experiences. Secondly, symbols can then be grounded by "theft": "Darwinian theft", where our ancestors grounded symbols through their own sensorimotor toil, and we inherit that grounding; and "symbolic theft", where others use language to tell us about their own grounded symbols. Robots will need language in order to gain this enormous advantage of grounding by "symbolic theft" over "honest toil".
Haugeland addresses the semantic/intentionality issue from a different perspective. Here there is an interesting study on the kinds of self-critique needed for scientific knowledge, mixed up with an enormous assumption that all knowledge is gained in a suitably analogous manner to scientific knowledge ("suitably analogous" to allow the scientific knowledge argument to carry over). I don't think the second part has been demonstrated at all.
Haugeland identifies numerous issues with computationalism, and focusses on the issue of semantics:
(Here we have an explicit statement that "semantics" and "intentionality" are synonymous.) Haugeland goes on to distinguish two kinds of intentionality: derivative (conferred by something else) and original (not derivative). He then makes an orthogonal classification: authentic, ordinary, and ersatz (that which only looks like intentionality, as in things described from Dennett's "intentional stance", and here also subhuman animals and robots). I haven't defined the difference between authentic and ordinary intentionality, because they are defined in terms of Haugeland's concepts of responsibility. But the difference is not important until the end, and Haugeland refers to them collectively as genuine intentionality (as distinct from ersatz).
So all we need for intentionality (ignoring the ersatz stuff for now) is non-accidentally true beliefs or assertions. I'll focus on the "non-accidentally true", in some representation or other, because I'm not sure what the definition of "belief" or "assertion" is here. So, for some "cognitive state" to have meaning, what it represents must be "non-accidentally true" -- "non-accidentally" to get round the possibility that it is true by chance, but with no way of knowing that. How do we gain "non-accidental truths", or knowledge? Haugeland focusses on how we gain scientific knowledge.
I'm not in the least bit convinced of that: the whole reason we have this structure and framework for gathering scientific knowledge is that it is not typical, or representative, of the way we gain knowledge. Neither am I convinced that scientific knowledge and common-sense knowledge exhaust the possible forms of knowledge: there is also innate and embodied knowledge. More on this later.
Haugeland then has an interesting discussion of the responsibility of human scientists to critique their work, on three levels. Summarising brutally: Firstly, there is self-critique: have I done the experiment correctly? Secondly, there is critique of the scientific norms: is this the right experiment to do to test this hypothesis? do these techniques actually constitute a demonstration? Thirdly, and only when all else fails (because it is such a drastic step), there is critique of the paradigm: is this theory right? All these are essential for objective scientific knowledge.
In summary, Haugeland's argument (as I understand it, at least) runs as follows:
1. genuine intentionality requires knowledge: non-accidental truths
2. science gains such truths through a process of experiment and critique
3. scientists are responsible for performing this critique
4. therefore, responsibility is a prerequisite for genuine intentionality
5. no subhuman animal or robot can accept responsibility, so cannot have genuine intentionality, or cognition
Wait ... what? I was with you up to point 4. But there seem to be two problems here. The minor one: just because human scientists accept responsibility to perform this essential critique does not necessarily mean that other forms of scientist could not be programmed to perform this critique. But the major one: just because science gains knowledge this way does not necessarily mean that all knowledge (non-accidental truth) is gained this way. Consider an extreme example: the adaptive immune system. Cohen claims this is a cognitive system, in part because it "contains internal images of its environment". Whether or not you agree with Cohen's cognitive claim, these internal images (antibodies, memory cells, whatever) are indeed "non-accidental truths", but were certainly not acquired by any process of scientific investigation: they were acquired by biological processes including amplification and selection. Okay, they are not absolutely guaranteed to be true (but then neither are scientific theories), but they are very much more than "accidental". Similar arguments could be applied to other innate and embodied knowledge (such as that possessed by subhuman animals, too). So we can gain objective knowledge via evolution and other biological processes. Indeed, this includes the process of what Harnad above calls "Darwinian theft".
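(A toy illustration of my own, not Cohen's model: a minimal amplification-and-selection loop in Python, showing how a population can come to carry non-accidental information about its environment with no step of deliberate critique. The antigen string and the "ACGT" alphabet are arbitrary stand-ins.)

```python
import random

def affinity(receptor, antigen):
    """Score a receptor string against the antigen: count matching positions."""
    return sum(r == a for r, a in zip(receptor, antigen))

def mutate(receptor, rate=0.05):
    """Copy a receptor, occasionally changing a position (amplification with error)."""
    return "".join(c if random.random() > rate else random.choice("ACGT")
                   for c in receptor)

antigen = "GATTACA"     # stands in for some feature of the environment
population = ["".join(random.choice("ACGT") for _ in antigen) for _ in range(200)]

for generation in range(50):
    # selection: keep the half of the population that happens to match best
    population.sort(key=lambda r: affinity(r, antigen), reverse=True)
    survivors = population[:100]
    # amplification: survivors are copied, with occasional mutation
    population = survivors + [mutate(r) for r in survivors]

best = max(population, key=lambda r: affinity(r, antigen))
print(best, affinity(best, antigen))   # the population now "represents" the antigen
```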
So, I like the three-level critique model of science, but I am in no way convinced that the ability to accept responsibility to critique knowledge is a prerequisite for intentionality (whatever that actually means).