Back Here on Zombie Earth
Jonathan CW Edwards
Abstract: Both the conceivability and the usefulness of philosophers’ zombies have been called into question. However, it is not clear that it is any easier to conceive how we can be sentient non-zombies (if we are). It will be argued that various zombies, behavioural, physiological, and functional, are useful in illustrating the simulation=replication fallacy, in clarifying the relationship between ‘being informed’ and ‘experiencing’ and in highlighting the potential importance of fine-grained function to this relationship. It is suggested that functional zombies in whom post-synaptic integration is achieved in non-physiological ways may be of particular value in indicating where to look for the rules of correspondence between a state of being informed and an experience.
A memorable early achievement of JCS was a symposium on philosophers’ zombies: beings that behave just like us but lack sentience (Moody, 1995; Dennett, 1995; Flanagan and Polger, 1995; Güzeldere, 1995; Midgley, 1995). Amongst other cogent points from many contributors, Daniel Dennett (1995) re-stated his reasons for considering the zombie idea futile. He claimed that all accounts of zombies fall into logical errors and discouraged further reference to them until ‘some philosopher write[s] an essay in defence of zombies that doesn’t commit any such misdirections’. Perhaps it is time to call Dennett’s bluff. Dennett has good arguments, but something is missing. The purpose of this essay is to show that the zombie idea is indeed useful because it provides null hypotheses for a branch of science yet to be explored. As a slightly tongue-in-cheek aside, it will also propose that, at least by the criteria of the ‘Chinese nation’ thought experiment (Block, 1980), not only Dan, but all human beings, are zombies (despite, in my experience, his being able to recount, amongst other things, his experience of ballooning in Cappadocia).
Which zombies are useful?
In the original symposium, Güzeldere (1995) makes a useful distinction between behavioural, functional and physiological zombies.
The behavioural zombie behaves like us but can have any sort of machinery inside. This zombie is perhaps the least interesting because the question is merely whether someone or some god could build a machine so cunningly that it passed all Turing tests. That would be easy if the machine were remotely operated by the Chinese nation. The only issue is whether the Chinese nation could design an inanimate machine with built-in responses to all the cunning questions that it might be asked. This is largely a matter of logistics.
The behavioural zombie does, however, illustrate a key point raised by Midgley (1995): that simulation is not replication. The idea that because an experimental or theoretical model has the same input-output relations as an object of study it tells us about the internal workings of that object is one of the most widespread falsehoods of scientific practice. And I think this is where Dennett ultimately goes wrong, if in a subtle way.
At the other extreme, the physiological zombie, a facsimile of a human down to the last virtual photon but without sentience, also has limited utility, but equally illustrates an important principle. It provides a null hypothesis for a science of rules of correspondence to cover the identity (or otherwise?) of ‘the dynamic physical state of being informed’ and ‘experiencing’. Dennett (1995) appears to believe in this identity but not in a need to know the rules of correspondence.
Most people can conceive of a physiological zombie. In fact, most people cannot conceive of how either a facsimile or the real thing could be sentient. Yet physics-based science claims that, since a facsimile covers everything, this zombie conception must be internally inconsistent, as our concepts so often are. The situation may be a little more complicated, as indicated below, but the prima facie case is indeed that we should not be able to produce a consistent conception of a physiological zombie. This null hypothesis will not do: experience and being informed should be two ways of describing the same thing, because experience cannot be an optional addendum to the causal process we call informing. The idea that conscious experience can have a causal role beyond the causal processes it is associated with is unworkable. A task is set: to find an alternative to the null hypothesis that not only makes experience and being informed two descriptions of the same thing but is workable in the context of brain biology and our experiences.
This is a reasonable start, but the physiological zombie may not do much more work alone. The functional zombie is more helpful because it potentially addresses Searle’s (1997) view that consciousness is a property of biological material because of some level of function not normally shared with other materials (although non-living materials might one day be made with the right properties). In theory a functional zombie is ‘functionally’ totally equivalent to a human, even if, for instance, cells are replaced by silicon chips. However, that begs a question about what we choose to call function. The true value of functional zombies may be that they may reveal an unexpected fine-grained level of function that corresponds to experience. The interesting sets of (potential) functional zombies are those in which processes representing certain specific levels of brain function are replaced, with all others left intact. And if such would-be zombies fail we are unlikely to need to look for subtle differences in philosophical discourse. At the interesting functional level the would-be zombie is likely either to pass muster or to fail so spectacularly that it may be kindest to switch off the life support machine.
Many within the artificial intelligence and philosophy communities may feel that fine-grained function is irrelevant. We know that messages are sent around the brain between computational units called cells and there might be no good reason to think that their detailed workings matter. However, fine-grained function is function at its most immediate and there is something immediate about consciousness. By side-stepping fine-grained function I suspect that we may miss a crucial area of natural philosophy with knock-on implications for all physics and biology.
To pursue this line of argument, I would like to return to types of behaviour that insentient pseudo-humans have been claimed to lack. (A technical problem in the original debate is that if they lack this behaviour they are not zombies, but this is a distraction.) I shall home in on a deficiency that Moody (1995) was keen to uphold. Although I agree with Flanagan and Polger (1995) that Moody was wrong in this regard, I must agree that it is not easy to see how an insentient pseudo-human can come to report being puzzled by their experience off their own bat and, in particular, how they could report being puzzled by the inverted spectrum paradox (is her red my red and her blue my blue, or could they be reversed?). To answer these riddles posed by the physiological zombie we need to establish why we are puzzled by our experience, which, with the help of some potential functional zombies, may lead us to an appreciation of the insufficiency of current functional accounts of the brain.
What is puzzling about consciousness?
Consciousness is not puzzling to children, nor to many adults. Even many neurobiologists cannot see the problem. This suggests that some of us may be puzzled not because there is a contradiction in the evidence but because we have been taught to build faulty ways of conceiving ourselves. Our worries about how it fits with physics may overlook the fact that physics only solves questions of a certain type. We understand things by making analogies and comparisons. Maybe these are flawed. Some of this may sound like Dennett talking but I see him as reaching rather similar conclusions by a short cut that misses the meat of the problem.
What might we be puzzled about?
1. Is experiencing an input from an environment puzzling? The simplest level of puzzlement might be that we cannot see why we should experience anything at all in a ‘physical’ world. This is not difficult to answer. Physical entities, whatever they might be, are influenced by other physical entities. In an operational sense we can say that a physical entity must be acquainted with, or informed of, these influences, because they alter its behaviour. A snowflake must be acquainted with gravity for it to fall. To have an experience would fit perfectly with this sort of acquaintance – what one might call ‘overt’ acquaintance with physical influence. If we take our ‘selves’ as the directly available examples of physical entities then we find that physical influence does lead to overt acquaintance. We have experience. Most other entities are just hard to ask. They do, nevertheless, display acquaintance with their environment by physical response. Postulating overt acquaintance does not require postulating anything other than physical influence; it is just what the physical influence ‘is like’ to the entity being influenced. All is quite acceptable in physics, particularly modern physics, which explicitly recognises overt acquaintance in the form of observation. Puzzlement at this level may be not so much puzzlement about experience as puzzlement generated by philosophers who get muddled trying to get words to do jobs they are not made for.
2. Is the character of the input puzzling? Might people be puzzled that they experience greenness when we know that grass on its own has no greenness, or that fire is painful when there is no painfulness in fire? How can signals inside my head look like a row of teacups or the Grand Canyon rather than the inside of a head? The error here is of course to think that the world has any appearance in its own right other than that which our brain creates for us. It is not that grass is ‘really’ black and white, or that the Grand Canyon is ‘really’ much bigger than it could look inside a head. We have no reason to be surprised about the characteristics of qualia because they are not discordant with any ‘real out there’ alternative. Nor should we expect signals in our brain to feel ‘electrical’ or ‘chemical’ because these are just other (vicarious) qualia flavours. So if we accept that physical acquaintance may be experientially overt we may legitimately ask what the rules are for which experience goes with which influence on what, but we have no reason to be puzzled by the way things do appear. It is likely to reflect in us simply what natural selection ‘found useful at the time’.
3. Is the complexity and multimodal nature of experience puzzling? This is where we hit a real problem: how we explain the richness of our experience. A lot of bits of information of many ‘categories’ seem to be available to something in a head at the same time. It is very unclear that our physical description of the world can provide events that would allow the causal interaction implied if experience is acquaintance with physical influences. If the influences are neural signals how do we get enough to influence the same entity and why should some feel green and others painful? This is the genuine scientific challenge that Dennett appears to sidestep. If we are to say that being informed is to experience we need to show that something in a brain can be acquainted with enough information and we need some way of explaining its variety. I will return to how we might deal with these after considering two further levels of puzzlement.
4. Is our ability to report qualia puzzling? It is easy to puzzle over how speech, external or internal, can be influenced not just by the fact that we see blue sky, but also by the gloriousness of the blue. We seem to be able to report not just information but its ‘experiential feel’. Physics might not appear to allow this, yet it appears to be a matter of fact. We are so sure of it that the most difficult aspect of a zombie to grasp is, as Flanagan and Polger indicate, that an insentient being could report that it is as puzzled as we are by whether the redness of the rose it sees is the same redness its zombie lover sees, or confess that it is weeping at the beauty of the voice in a rendering of ‘On with the motley’ rather than just at the notes or words. Dennett would argue that at this point the plot has been lost, and he would be right. However, if we can get a handle on the issue of complexity and variety it may be easier to see why.
5. Finally, is it puzzling that we are puzzled by consciousness - even after Dennett has explained that our puzzlement is misplaced? I think it is. Puzzlement is a property of something receiving conflicting information (and maybe informed that a conflict has been identified), but what is ‘conflicting information’? Conflict, like error, is an interpretation, which is a property of something being informed, not of the signals that inform it. It is not just a difference between data elements; it is an incongruity. To interpret information as incongruous, an informed unit must have the irreducible complexity and sense of diverse category required to make this interpretation. This is not simply a matter of a system having the ‘function’ of identifying incongruity, as judged by us as third parties. It requires that the informed unit itself has the capacity to identify its input as incongruous. If we try to break down the informed unit into a net of sub-units we are likely to be left with nothing with the complexity to interpret something as incongruous. I suspect that a unit that can interpret signals as incongruous must be acquainted with the relationships between at least four inputs. To have a human sense of knowing an incongruity when we see one I suspect the minimum is much greater. What becomes clear is that the complexity and variety problem is, in fact, a problem of the nature of the informed unit, rather than of the signals it receives. What informed unit can this be? If all experiences are functions of the interpreting capacity of this unit, what sort of informed unit could interpret in a way that includes our sense of incongruity?
A biological informed unit
If, as suggested, experience is just ‘overt acquaintance’ with physical influences, there is no need to worry about why organisms evolved with, rather than without, sentience. I agree with Dennett (1995) that, in this regard, Flanagan and Polger (1995) pose a non-problem. Sentience should be everywhere. What has evolved is a complex form of sentience, called consciousness, which presumably reflects both the complexity of patterns of inputs into the informed units in our brains and the complex interpretative power of those units. This complexity has evolved because of the advantage of the associated complex behavioural regulation: intelligence.
Where would the evolution of sophisticated biological informed units start? Organic molecules and their complexes, like enzymes and chromosomes, would be informed in a basic sense, but a more relevant starting point is a single motile eukaryotic cell, since we are really interested in the rapid supramolecular interactions that regulate ‘animate’ behaviour rather than just metabolic reactions.
Consider a protozoan cell, with rows of flagella for ‘touching’, a light-sensitive spot for ‘seeing’, chemoreceptors for ‘smelling’, an inturnable membrane area for ‘tasting’ and a cytoskeleton sensitive to pressure waves for ‘hearing’. (Note that the original television camera had a single light sensor and that, at least with appropriate memory storage facilities, our animal corkscrewing through water could build up a full picture of everything around it.) This little creature, given equipment to integrate all this incoming information, could experience a world almost as rich as ours. We do not expect it to have the power we have to integrate over decades and kilometres but the useful way that such animals respond to a combination of all these inputs, and probably learn, indicates that they have integrating machinery.
So what would be the (experiencing) informed unit for this cell that controls its animacy? We can say that it must support, or be, one or more events causally influenced by all the inputs that contribute to the animal’s motility. This makes the informed unit causally downstream of the individual sensing units, none of which is informed of all relevant inputs. We do not know what this is for a protozoan but it seems likely to be a dynamic property of the cytoskeleton, cell membrane, or both. This property is likely to be influenced by current and past inputs through changes in shape, or distribution of molecules or ions, which in turn regulate motility.
An important potential difference between being informed and experiencing is raised here. I can be informed of grandmother’s presence by the tap of her stick on the path but to experience this instance of her presence I need to look up and see a facial pattern. This does not require dissociation of the concepts of being informed and experiencing, but it does imply that the informed unit must have direct access to information of sufficient specificity to match the richness of the experience. The experiencing unit in our creature might have direct access to signals from all sensory modalities or it might receive a limited number of signals from intermediary integrating processes. We should expect the richness of the experience to vary accordingly. (The richness of an experience need not strictly depend on the number of signals to which the receiver has direct access so much as on the number of possible combinations of signals to which it can have access, but the need for a complex mode of access remains.)
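The point about combinations can be made concrete with a little arithmetic. The sketch below (the signal counts are purely illustrative) shows how quickly the number of distinct joint patterns available to a receiving unit grows with the number of binary signals it can directly access:

```python
# With n binary input signals, an informed unit with direct access to
# all of them could in principle distinguish 2**n joint patterns.
# Richness tracks this combinatorial quantity, not the raw signal count.
# The values of n below are illustrative, not claims about real cells.

for n in [4, 10, 20]:
    combinations = 2 ** n
    print(f"{n} signals -> {combinations} possible joint patterns")
```

The growth is exponential, which is why a unit receiving a handful of pre-digested summary signals has access to far less potential richness than one with direct access to the underlying modalities.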
Human informed units
Passing on through evolution to complex animals with nervous systems we have to decide what will be the relevant experiencing informed units. The simplest answer is that the corresponding units are individual neurons, similar in basic machinery to the protozoan but adapted to more sophisticated message receiving, leaving motility to cells adapted in other ways (Edwards, 2005; Edwards, 2006; Sevush, 2006). This is not the usual view for the human. It is more usual to assume the informed units with ‘our experiences’ to be groups of cells. However, before going further, it is worth considering whether the awkward evolutionary jump to a multicellular informed unit that this would imply could actually explain consciousness, reports of consciousness or even puzzlement about consciousness, at all.
A purely behavioural analysis will be made, to avoid difficulties with first person accounts. People report ‘experiences’ of which a ‘percept’ of grandmother’s face is a paradigm. To be reportable, it is necessary for events immediately associated with the percept to influence a causal chain. If the events immediately associated with the percept are (part of) the electrical activations of a group of cells by relevant incoming data signals then presumably each cell will receive a different subset of the data (otherwise we should logically consider each cell individually as an informed unit for a copy of the percept). But if being informed is to experience, then to experience the percept something has to be informed of the inter-relationships between these data subsets, and at this point nothing is.
The events in the cells of such a group are in spatial and temporal relationships but these relationships cannot encode or determine anything reportable. It does not matter whether or not events in these cells are synchronised or in the same or different parts of the brain. Output would not be altered simply by moving such cells to another place or placing their somatic activation earlier or later in time. Conversely, switching the output connections of these cells would change what was reported, despite not changing the spatial and temporal relations between the events to which the percept was attributed. The temporal and spatial relations of events on separate causal chains are in themselves of no causal efficacy. In other words, downstream signals in a brain cannot encode the purely spatiotemporal relations of prior cellular activation events from which they originate.
All that matters causally is what happens when signals from the group of cells converge on one or more further cells, within which the data signal subsets can all have causal inter-relationships as part of post-synaptic integration. As indicated in the protozoal case there can be no causally effective (reportable) percept until some informed unit has been influenced by all elements of the information to be experienced. If being informed is experience we do not want any post-dated cheques. The temporality/locality laws of usable information are, at this classical level, sacrosanct. Moreover, the ‘perceiving’ unit must receive a sufficiently rich input to specify the detail of the percept since it will not be informed of any details pruned in upstream computations. Reportable percepts cannot occur in groups of cells, they can only occur in individual cells.
There is a caveat to this argument; groups of cells might be interconnected by gap junctions such that their activities did have causal interrelations. The problem with this is that the time frame that might allow such causal relations to be relevant to output - the time between postsynaptic potentials and axonal firing - would only allow limited interaction within small groups of cells. Moreover, the explanatory advantage in such a model over a cellular unit is unclear. Individual cells with up to 40,000 inputs are unwieldy enough as informed units and units with larger numbers of inputs seem to have no conceivable computational, and thus survival, value.
It may seem harsh to throw out all theories of consciousness that site it in neural networks but since such theories posit something with no causal efficacy they are untestable. This might explain why progress has been slow in pinning consciousness down! The alternative is to return to the simpler and more logical proposal that the cell remains the informed unit in complex organisms and we should look for our experiences in cells, not in networks (Edwards, 2005; Sevush, 2006). At least Dennett should not object, since neurons are clearly informed, as indicated by their responses.
The Chinese nation
The implausibility of an experience belonging to a group of cells, even on a purely behavioural account, implies that the functional equivalent of such an experience in informational terms is equally implausible. Even zombies would not have reportable non-experiences, or experiencesz, to use the nomenclature of the original symposium, in groups of cells.
The implication is that the ‘Chinese nation’ thought experiment (Block, 1980) may be near the truth. Each neuron, or at least each of a number of specialised neurons fed by all sensory modalities (Sevush, 2006), is a separate informed and experiencing unit, like a member of the Chinese nation linked to others by telephone. As in the thought experiment, no one informed unit knows what is going on in the brain (nation) as a whole, and we do not know what goes on in our brains as a whole. But if the Chinese have in their fields of attention the Olympic Games then each will be informed of the same pattern of medals won or lost. The analogy has limits, but the principle is fine.
Thus there is good reason to think that human beings are zombies populated by sentient neurons. So Güzeldere’s (1995) physiological zombies are entirely feasible and alive and well on our earth. Flanagan and Polger (1995) were right. They can report being puzzled about inverted spectra. This analysis is, of course, not entirely fair, because the implication is that informed units in our brains really do have experiences; it is just that they are cells, not people. They could be the units informed of Dennett’s ‘multiple drafts’ except that at least some of these would not be drafts so much as full percepts.
Back to qualia games and inverted spectra
Having identified what ought to be the informed units with reportable experiences in brains I would like to return to the issue of puzzlement over inverted spectra and other qualia-related issues and how they might affect the behaviour of candidate zombies. I do so because I think the concept of the cellular informed unit may allow us to make further headway with models that might explain how this puzzlement relates to language and how that relates to cell structure.
There is an intuition that zombies would be able to handle information, as a computer does, but not qualia. The zombie’s brain could report dealing with concepts like chair or six but not that it was marvelling at the loveliness of the Aurora Borealis. Dennett (1995) says this is a confusion, and he is right, but why is it so hard to feel comfortable with his position?
The answer to this may lie in false assumptions about how meaning is ‘conveyed’ and how this relates to computational faculties programmed into human brains, and in particular the faculty of language as described by Chomsky (2000) and others. Human understanding depends on analogies and abstractions and comparisons between these. When we consider the red rose we experience the redness of the rose, we consider the concepts of the rose and of the colour red and we are likely to hear the related words in our head. All of these must be experienced in response to interneuronal signals and all can influence further events through interneuronal signals. It is not that the concept of red is ‘information’ and the redness is ‘experience’. The two have the same status. Both are interpretations of signals. Neither interpretation can be ‘conveyed’, in the sense of something being transported, by an anonymous interneural electrochemical signal. Both the concept and the redness of the red only arise if a signal arrives at some entity that is programmed to be informed of, i.e. experience, either a concept or a quale.
Thus, the (bona fide) zombie’s brain does not just send messages around that would in us lead to concepts. It sends messages around that would lead to qualia. That neither concepts nor qualia arise in the zombie, but only in us when we listen to the zombie, is irrelevant. When ‘verbally reporting’ experiencesz, the zombie’s brain will compare signal patterns from pathways that would evoke verbal concepts in us with signal patterns that we would use to evoke qualia. The zombie’s brain trained in physics can be construed as asking if relations between the qualia-related signals that go with certain concept-related signals match those of another zombie, as in the inverted spectrum paradox. This is exactly the same situation for us since qualia have no causal impact beyond the signals with which they are associated.
All this emphasises the point made by Flanagan and Polger (1995) that ‘mentalistic concepts’ are provided by signalling routines built into human brain modules such as the faculty of language that programme layers of analogy and abstraction. Positing their existence requires no specific reference to the mental in the sense of experiential. That at least some of these concepts, including verbal ones, are determined by DNA is shown by the inability of other primates to acquire forms of human behaviour when raised in human company. Other concepts, like thing, shape, time, sameness and multiplicity, colour and sound are presumably more widely programmed.
In these programmed routines, signals going with complexity and variety of qualia must be distributed and collated in a way appropriate to these properties. Thus an appropriate functional zombie could again report being puzzled by complexity and variety, despite the fact that these are not experienced. But this assumes that computation is functionally equivalent at the right level. This is where fine grain of function may really matter. Does the complexity of input to an informed unit have to be in a particular form? How are inputs related to the structure or dynamics of the informed unit such that it interprets them in varied ways?
The structure of biological informed units
Potential functional zombies come into their own if used to consider the effects of retaining all interneuronal connections but replacing the fine grain of intraneuronal signal integration with alternative processes expected to give the same overall input-output relations. For instance, a neuron might retain 40,000 inputs and one branching output, but physiological intraneuronal post-synaptic integration might be replaced by a ramifying tree of binary gates, each with two inputs and one output. What in biology can be considered as a single indivisible computation based on electrical potential becomes at least 16 discrete levels of computation (since 2^15 = 32,768 < 40,000 ≤ 2^16).
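The arithmetic of this replacement can be checked directly. The sketch below, assuming a balanced tree of two-input gates (the balanced layout is an illustrative assumption; an unbalanced tree would be at least as deep), computes the number of gates and of discrete computational levels needed:

```python
import math

# Replace one neuron's post-synaptic integration over N inputs with a
# balanced tree of two-input binary gates, as described in the text.
N_INPUTS = 40_000

# A binary tree with N leaves has exactly N - 1 internal nodes, so the
# replacement needs 39,999 separate gates (points of integration).
n_gates = N_INPUTS - 1

# Every leaf-to-root path passes through ceil(log2(N)) gate levels,
# so one indivisible computation becomes that many discrete steps.
n_levels = math.ceil(math.log2(N_INPUTS))

print(f"gates:  {n_gates}")   # 39999
print(f"levels: {n_levels}")  # 16, since 2**15 = 32768 < 40000 <= 2**16
```

The gate count of 39,999 is the figure used later in the discussion of simulating physiological integration; the depth of 16 levels is the minimum number of discrete stages into which the single biological computation is split.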
This thought experiment has a different impact on two functional features of a cell. The first is the lack of branching output until all inputs have been integrated, whether or not we consider physiological integration as truly indivisible or partly divisible into discrete levels as in the zombie. Outputs from one such intraneuronal level to the next will not be branching. In this respect the zombie is functionally equivalent to the real cell. Assuming that the ‘programme’ that the brain as a whole runs is designed to ‘chunk’ signals integrated between branching outputs into what are reported as concepts or ideas, the zombie is at no disadvantage.
The second feature is more fundamental. If physiological integration is truly a single indivisible process then the cell can be considered the informed unit. However, in the zombie, each binary gate must be considered a separate informed unit. Thus the zombie would have no informed units equivalent to those in the human and would not be expected to have our sort of experience.
The concept of physiological integration as a single indivisible process is open to challenge. It raises ontological issues at the boundary of physics. There are, however, two good arguments for retaining it. Firstly, without it, it is unclear how we can find any informed unit in a brain that could fit our reportable experience. Secondly, at the most fundamental level, modern physical causality is not a one-on-one billiard-ball affair; rather, each physical event is determined by a large number of facts and counterfactuals. This applies to everyday events as much as in a physics laboratory. When a key turns in a lock the turning depends on all the teeth and all the gaps between the teeth. The interactions are totally interdependent. In simple terms, integration in a cell can be considered a single event if every input has time to alter the effect of every other input, which is probably the case. For the zombie with a hierarchy of gates this is not the case because each gate is insensitive to effects from parallel gates.
A further issue is that although it might seem possible to simulate physiological integration with a tree of 39,999 separate points of integration (and produce the perfect conceivable zombie for the benefit of Dan Dennett) this may not in fact be so. The mathematics of physiological integration might be such that a reliable simulation would require vastly greater numbers of gates, perhaps a prohibitive number. This leads on to the point that physiological integration is not algorithmic, because there is no defined time sequence. It may only be describable in a mathematics not based on a purely formal logic but grounded in a specific dynamic multidimensional context. This might explain why the brain appears to do things computers cannot (Penrose, 1994). Changes in one dimension may constrain changes in others. Thus, a tree of binary gates may not resemble a cell at this level of function at all.
Another aspect of parallel computation in the human is that it probably takes advantage of variable specificity and sensitivity of response. When someone comes into view there might be one cell which only fires for grandmother but many cells that fire for a person, or an old person or a loved person or whatever. The final experience may carry a grandmother label because of a ‘best fit’. This sort of computation by best fit can explain the familiar fact that we can make mistakes, and at various levels of generality. Again, the strict algorithm does not apply. It may also mean that all our mental procedures involve a range of levels of specificity with ‘concepts’ and ‘instances’ contributing to the same computation. Any explanation for this must ultimately be found in the rules of connection and integration in neurons and, as indicated in the next section, treating the neuron as the informed unit for percepts may make it much easier to marry structure with function than if we try to find percepts in networks that cannot support them.
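A minimal sketch of this 'best fit' idea, with invented feature labels purely for illustration: several 'cells' of different specificity all respond to an input, and the reported label is the one whose template fits best. Degraded input then yields a more general label, and mis-labelling at various levels of generality falls out naturally.

```python
# Toy 'best fit' labelling across cells of different specificity.
# Feature names and templates are invented for the example.

CELLS = {
    "grandmother": {"person", "old", "female", "familiar"},  # very specific
    "old person":  {"person", "old"},
    "person":      {"person"},                               # very general
}

def best_fit(features):
    """Return the label whose template best fits the input: matched
    template features count for, unmatched template features against."""
    def score(item):
        label, template = item
        return len(template & features) - len(template - features)
    return max(CELLS.items(), key=score)[0]

print(best_fit({"person", "old", "female", "familiar"}))  # 'grandmother'
print(best_fit({"person", "old"}))  # degraded input: 'old person' wins
```

Nothing here follows a strict algorithm in the sense criticised above; the outcome is a competition among responses at several levels of generality, which is all the example is meant to show.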
So here we begin to have a concept of a (candidate functional) zombie that cuts the mustard, because it leads to some practical scientific questions. Is our experience related to a fine-grained level of function in neuronal informed units such that other artificial means of post-synaptic integration would not support experience as we know it? The equivalence of being informed and experiencing may be valid but it may be essential to define the exact level at which these properties operate, i.e. to define the informed unit and its mode of being informed.
Hierarchies and abstractions
In 1969 David Marr produced a theoretical analysis of the innervation of cerebellar Purkinje cells that may provide a model for the way our percepts and the language we use to report them could be based on neuronal informed units (Marr, 1969). Marr’s paper deals with motor cells and it would be unwise to suggest a detailed structural correspondence to cells at the apex of the sensory input tree. (Purkinje cells are a very different shape from other neurons.) Nevertheless, they have a feature that might have general relevance.
The dendrites of Purkinje cells have two quite different inputs. One is the mossy fibre input, consisting of masses of richly branching axons from many small nearby cells feeding synapses on the Purkinje dendrites. The second input consists of branches of a single axon derived from an olivary cell that ‘climbs’ up into the Purkinje cell dendritic tree and connects to it at multiple sites. This climbing fibre input shows how absurd it is to think of neurons as just responding to how many cells are sending them an input at any one time. The cell with the climbing axon gives the Purkinje cell many inputs in exactly the same places every time it fires.
Marr interpreted these connections in terms of learning. During learning of a pattern of movement one sort of input would excite the Purkinje cell. However, once a routine was learnt then the other input could excite the cell ‘automatically’. The movement induced would be the same but it could now be executed without a need for relevant sensory inputs being in the field of attention. The next step of a walk could be made without attending to the ground. Exactly what role the two inputs have remains speculative, as is the way connections elsewhere in the brain facilitate switching from one input to the other. Nevertheless, a general principle might be extracted from dual input – it allows a cell to generate the same output from two very different patterns of input such that these two patterns can have an ‘equivalent function’ at one level (qua output generators) but not at another (qua input).
In sensory and experiential terms, such an opportunity for two sorts of input being functionally equivalent raises an obvious possibility. It provides a potential basis for ‘abstraction’, and hence for language. If an input into a cell of a visual percept of a dog can be arranged to give rise to the same output as an input signifying the concept of a dog, or the word for a dog then the associations in memory triggered by the visual percept can also be triggered by the concept or word. With a slightly different arrangement it should be possible for ‘instances’ and ‘concepts’ to be ‘compared’ within a cell by having one input neutralise the other by inhibitory signals or discordant phase relations. There are many potential variants.
A further attraction of this idea is that one can envisage relatively minor genetic changes allowing major increases in sophistication of abstraction. Dual innervation operating in a single subpopulation of neurons in a circuit relating memory to current input might allow for basic generalisation from percept to concept. A change in connectivity that created a new dually innervated subset might, for instance, allow concepts to be related to words. Cells able to relate temporal to spatial patterns might allow strings of words to be ‘stacked’ for interpretation as sentences. Each of these steps could be achieved by something as simple as reduplication of a single gene, which can often occur within a species, let alone between related species.
Another possibility raised by the dual innervation concept is in mathematics. We may, as suggested earlier, be able to arrive at proofs not computable with a Turing machine (Penrose, 1994) because our computation is grounded in a complex pattern of cellular dynamics. A further possibility is that computations based on rich inputs interacting under such complex conditions might then be replaceable by computations based on simpler ‘surrogate’ inputs – a sort of cellular algebra.
These examples are speculative, but they indicate at least that considering the structure of the individual cell, and variations on it in potential functional zombies, may provide avenues for tackling how we produce language and arithmetic. More detailed clues to our faculties of language and arithmetic might be found in the fine anatomy of specific subsets of neurons in areas such as prefrontal cortex, such as the ‘Nimchinsky cells’ said to be unique to humans and very closely related apes (Nimchinsky et al., 1999).
Reporting of experience
Although the reporting of experiences in individual cells may, unlike putative experiences in groups of cells, be feasible, there is still a caveat: the reports would be of many congruent experiences. As Dennett has pointed out, there is no one place in the brain where things come together. There are many informed units. Reporting of an experience must be something like a consensus. Except that it is not what we normally call a consensus, because all the experiences contributing to the report will, by definition, be contributing to one reported meaning. This does not, however, exclude the possibility that the experiences differ in their mode of abstraction: in their primarily verbal, pictorial or other character. This may have an important implication for the philosophy of mind. If experiences of different modes of abstraction dominate to different extents in different brains, as implied by comments like ‘I am a right brain visual person’, then the cross-purpose arguments of philosophers may have a physiological basis. Thus those who claim that all thoughts are verbal may be genuinely unable to report non-verbal experiences. Fodor, Nagel, Dennett and Searle may have been condemned forever to talk about different things.
The equivalence of information and experience
Dennett (1995) is correct to equate experience with being informed. However, as indicated above, both should depend on the complexity of the input to an informed entity and the complexity of the mode of reception, integration and ‘interpretation’: probably just two views of a single dynamic process. The puzzling complexities of our experiences should relate to this complexity of mode of being informed. And there must be as yet undefined laws that govern the correspondence between the two elements of the information/experience identity.
To an extent these laws may be inaccessible. We may never be able to know whether a particular event in a brain gives rise to the red experience I am familiar with, or yours or both or neither. Serious logistic problems stand in our way. What we ought to be able to do is work out in general terms what events in brains give rise to, for instance, senses of colours, or sounds or shades of brightness or concepts.
A key point is that the variety of experience, the ‘flavours’ of qualia, must relate not to individual inputs but to relations in input – which at a point in time must be spatial. Thus, to suggest a trivial example, colour might be one input above another in a reference frame and sound one input to the side of the other. This sounds far too simple, and is, but biological codes often turn out to be surprisingly mundane. The arrangement of inputs then has to relate to some aspect of the integration of the signals and its relation to output and this has to be complemented by what happens to the output signals elsewhere in the brain. Connection patterns must complement the rules of integration at either end.
The complexity of the mode of reception of information by cellular informed units is something that has no parallel in the computer world. Rich experiences and abstractions like concepts and words can mean nothing to any informed unit in a computer, even if their effects may be simulated by a sufficiently complex machine. John Searle was right to suggest that biological information-sensitive units have properties that are unlikely to be found in man-made materials unless we take great care to devise materials with analogous properties.
It may still be fair to agree that the paradigmatic zombie is silly. An idea of a physiological zombie built from the teachings of science must be inconsistent, even if the reasons for inconsistency may be subtle. The perfect fine-grained functional zombie is a physiological zombie by another name. However, some level-selective functional zombies may truly be conceivable, and even buildable. Moreover, the people on this earth may be examples of unexpectedly indirect and complex ways in which copies of experiences at various levels of abstraction may contribute to behaviour, at least fulfilling the criteria for a ‘Chinese nation’ zombie. Infants, autistics, stroke victims, eminent neuroscientists who deny they have experiences and philosophers who claim to understand consciousness may all be telling us something about how and where to look for the fine-grained function that underlies the correspondence between information and experience. Even if sentience is there, what it belongs to and whether or not it can be reported may depend on many quite fragile layers of brain organisation. This is a major untouched branch of biomedical science that we might be able to address if we can pin down where correspondence occurs.
References
Block, N. (1980), ‘Troubles with functionalism’, in N. Block (ed.), Readings in the Philosophy of Psychology, Volumes 1 and 2 (Cambridge, MA: Harvard University Press).
Chomsky, N. (2000), New Horizons in the Study of Language and Mind (Cambridge: Cambridge University Press).
Dennett, D.C. (1995), ‘The unimagined preposterousness of zombies’, Journal of Consciousness Studies, 2 (4), pp. 322-325.
Edwards, J.C.W. (2005), ‘Is consciousness only a property of individual cells?’, Journal of Consciousness Studies, 12 (4-5), pp. 60-76.
Edwards, J.C.W. (2006), How Many People Are There In My Head? And In Hers? (Exeter: Imprint Academic).
Flanagan, O. and Polger, T. (1995), ‘Zombies and the function of consciousness’, Journal of Consciousness Studies, 2 (4), pp. 313-321.
Güzeldere, G. (1995), ‘Varieties of zombiehood’, Journal of Consciousness Studies, 2 (4), pp. 326-332.
Marr, D. (1969), ‘A theory of cerebellar cortex’, Journal of Physiology, 202, pp. 437-470.
Midgley, M. (1995), ‘Zombies and the Turing test’, Journal of Consciousness Studies, 2 (4), pp. 351-352.
Moody, T.C. (1995), ‘Why zombies won’t stay dead’, Journal of Consciousness Studies, 2 (4), pp. 385-372.
Nimchinsky, E.A., Gilissen, E., Allman, J.M., Perl, D.P., Erwin, J.M. and Hof, P.R. (1999), ‘A neuronal morphologic type unique to humans and great apes’, Proceedings of the National Academy of Sciences, 96 (9), pp. 5268-5273.
Penrose, R. (1994), Shadows of the Mind (Oxford: Oxford University Press).
Searle, J. (1997), The Mystery of Consciousness (London: Granta Books).
Sevush, S. (2006), ‘Single-neuron theory of consciousness’, Journal of Theoretical Biology, 238, pp. 704-725.