Philosophers make a big deal out of the intentionality aspect of consciousness, how it is that:
Here, Edelman summarises his own theory of mind, consciousness and self-consciousness, as laid out in more technical detail in his earlier trilogy. He starts from a biological perspective, from the staggering complexity of the brain, and notes that:
When I first read that passage, I felt it was not a natural assumption at all. After all, until relatively recently, we could just as easily have said "Biological organisms (specifically birds) are the beings that seem to have wings. So it is natural to make the assumption that a particular kind of biological organization gives rise to flight."
However, by "biology" Edelman means: the sheer staggering complexity of biological systems such as the brain; the consideration of the historical component of individuals' developmental (both in evolution of the species, and the idiosyncratic growth of each individual); and embodiment, or how the mind evolves and grows in interaction with an equally staggeringly complex, open-ended environment containing other minds. He does not seem to mean "wetware is intrinsically different".
He goes into quite a bit of detail about the biology of the brain as currently understood, and then describes his model of mind and consciousness. In this model, the self-conscious mind has various evolutionarily determined value systems, with feedback loops between those systems, the categorization of inputs, and the language centres. He claims that language is necessary for self-consciousness. (I don't understand why he feels it originally had to be a spoken one.)
All that is needed for consciousness in this model is known physics and biology. By known physics he means no strange 'conscious particles', no Penrosian quantum gravity, no 'spooks'. He states he is not a "carbon chauvinist" either: he admits we might one day be able to build self-conscious artifacts, although not in the near future, because of the difficulty of the task. (And so he argues that we don't need to worry yet about whether such a thing would be ethical. Personally, I'd rather argue the ethics before the deed...)
So far, so eminently reasonable. But Edelman seems to have a real hang-up about computers, constantly insisting that minds are intrinsically different from computers, in several passages such as:
And yet, only two paragraphs after this dismissal, we get:
So, minds aren't computers, but we can use computers to simulate minds? Why is this 'simulation' not a mind? (Its external behaviour certainly gives ample appearance of mind-like characteristics.) The reason for this self-contradiction seems to lie in his understanding of simulation:
Possibly true, but certainly irrelevant. So what if there is no "effective procedure simpler than the simulation itself"? No-one is saying you have to be able to predict the behaviour of a program without running it, or by running a simpler one.
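Indeed, having no shortcut is an everyday fact about programs, not an objection to their being programs. Here is a minimal sketch (my own illustration, not anything from the book) using Wolfram's Rule 30 cellular automaton: a deterministic program a few lines long whose long-term behaviour has, as far as anyone knows, no "effective procedure simpler than the simulation itself".

```python
# Rule 30: each cell's next state is  left XOR (centre OR right).
# Utterly deterministic and trivially simple, yet the only known way
# to discover row n of the pattern is to compute rows 1..n-1 first.

def rule30_step(cells):
    """Compute the next generation (edges wrap around)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

cells = [0] * 31
cells[15] = 1                    # start with a single live cell

for _ in range(16):
    print(''.join('#' if c else '.' for c in cells))
    cells = rule30_step(cells)
```

If minds are like this, that is grist for the computational view, not against it: unpredictability in practice is entirely compatible with being a program.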
Edelman provides a Postscript with a more detailed 'rebuttal' of the mind-as-computer view. It contains arguments such as:
This passage is replete with confusions, such as: choosing symbols from a finite set doesn't mean that there are only finitely many combinations of those symbols (binary notation uses only two digits, but can represent rather more numbers); the output on my computer screen is controlled digitally, but high-resolution pictures displayed on it can be made to appear sufficiently analogue to me; finite doesn't mean limited or bounded; deterministic doesn't mean predictable. And what about those weasel words: "no apparent limits", "ample appearance of indeterminacy"?
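The first and last of those confusions take only a few lines of Python to dispel (again my own illustrations, not the book's): a two-symbol alphabet generates an unbounded set of strings, and a one-line deterministic rule can still defeat prediction.

```python
# Finite alphabet, unbounded expressive power: every natural number
# has a name built from just the two symbols 0 and 1.
for n in [2, 10, 1000, 10**30]:
    print(n, '->', bin(n))

# Deterministic doesn't mean predictable: iterate the logistic map
# from two starting points that differ by one part in a trillion.
x, y = 0.4, 0.4 + 1e-12
for _ in range(60):
    x, y = 3.9 * x * (1 - x), 3.9 * y * (1 - y)
print(abs(x - y))   # typically of order 1: the tiny difference has exploded
```

Neither fact requires anything beyond a finite, deterministic, digital machine.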
So I think that, although Edelman understands the biology superbly, and would never dream of confusing levels when it comes to, say, chemistry ["those carbon atoms, so inflexible, each the same as the other, always wanting precisely four bonds: there's no way you could ever build a conscious mind out of them"], he needs to extend the same care to software. But Dennett and Hofstadter are much better at rebutting this sort of position than I am.