Are Our Spaces Made of Words?

Jonathan C.W. Edwards

Journal of Consciousness Studies 2008

Summary

It is argued that both neuroscience and physics point towards a similar re-assessment of our concepts of space, time and 'reality', which, by removing some apparent paradoxes, may lead to a view which can provide a natural place for consciousness and language within biophysics. There are reasons to believe that relationships between entities in experiential space and time and in modern physicists' space and time are quite different, neither corresponding to our geometric schooling. The elements of the universe may be better described not as 'particles' but as dynamic processes giving rise, where they interface with each other, to the transfer, and at least in some cases experience, of 'pure' or 'active' information, the mental and physical just reflecting different standpoints. Although this analysis draws on general features of quantum dynamics, it is argued that purely quantum level events (and their 'interpretations') are unlikely to be relevant to the understanding of consciousness. The processes that might be able to give rise, within brain cells, to an experience like ours are briefly reviewed. It is suggested that the elementary signals that are integrated to generate a spatial experience may have features more in common with words than pixels. It is further suggested that the laws of integration of words in language may provide useful clues to the way biophysical integration of signals in neurons relates to integration of elements in experiential space.

Experiential and Explanatory Spaces

It is still widely perceived within consciousness studies that neither consciousness nor meaning in language (semantics/syntax) can be explained in 'physical' or 'material' terms (Chalmers, 1995; Preston and Bishop, 2002). However, both neuroscience and physics provide reasons for thinking that many of the difficulties involved may arise from a failure to grasp the radically different nature of experiential space and time and modern physicists' space and time. We may also underestimate the link between these spaces and language.

By space, we can mean either the space we experience or a concept that physicists use to help explain how things come to be the way we experience them. This dichotomy extends to all aspects of the 'reality of experience', as distinct from what we think is 'really going on' (including when we are not watching). The existence of a disparity is widely acknowledged, as in phrases like 'the moon looks too big tonight'. However, as discussed later, this may involve reference to yet another sort of space, geometric or Cartesian space (of res extensa), which most people use as their intuitive gold standard and which once sufficed as physicists' space, but no longer does.

The Case From Experience

We do not experience the world 'as it is', but rather as our brains portray it to be. There is clear evidence that our spatial experiences are based on models, or maps, concocted in the brain, that reflect selected features of what is going on in the world (Smythies, 2003). It should not be necessary to point to such evidence because it confronts us every day. However, we intuitively compensate for inconsistencies and mismatches in these maps, which is why it is useful to have formal psychological studies. The case is summarised clearly in Ramachandran and Hirstein's paper 'Three Laws of Qualia' (1997), which gives a wealth of evidence from filled in blind spots to disembodied heads. Rather than take up space reviewing the detail here I would refer the reader to the original, since this is well-established ground. I appreciate that some readers may still hold to a different view but would nevertheless encourage them to read on.

The implications of this 'space concocting' for our understanding of reality need to be taken seriously. It is generally accepted that nothing is intrinsically green. It can only tend to give rise to an experience of green for a human with typical colour vision, by dint of a certain reflectance spectrum. People may hold more firmly to the idea that two things are 'far apart', but far apart in experiential terms differs from far apart in physicists' terms in much the same way that the sense of green differs from a green reflectance spectrum. Space may be a more pervasive aspect of the world than greenness, but neuropsychological experiments indicate that experiential space is just as much dependent on the machinery of experiencing as green is. And this is what we should expect. We have no basis for suggesting that brain substance can re-concoct something with the 'real appearance of space' rather than some analogy-bearing construct the appearance of which is primarily dependent on the biophysical media the brain uses for its construction. We cannot expect the real appearance of space to be floating about in the æther ready for a functionalist brain to latch on to at will.

My astigmatic right eye provides me with a concocted spatial experience that illustrates two further points, one of which I will expand on later. The lens in that eye makes light from a circle converge in the way that a perfect lens does for an ellipse. With my corrective glasses on, a round plate on a wall is round. Surprisingly, without glasses, the plate seen through my right eye is still round. Yet, if I hold my glasses at a distance and look at the plate, but not the room, through the glasses with my right eye, the plate is elliptical, and I sense it as 'seen through a lens'. But if I convince myself that I see the plate directly, it snaps to the circle, as a Necker Cube flips. With glasses 'off' my left hemisphere works as if it has a lens inside it. With them 'on' it does not.

Firstly, this example demonstrates that the issue is not which appearances are real and which illusions. Which was the illusion? All appearances depend on the way inputs are sorted at all stages up to the point of experience.

Secondly, the fact that I have never noticed a re-learning process since starting to wear glasses suggests something remarkable about how the brain deals with shapes. It is very unlikely that the relevant part of my brain maps a circle as a set of points each of which is shifted just enough to make an ellipse each time I get the flip. (Optic nerve input is constant.) It is much more likely that the map carries the messages 'see circle' or 'see ellipse' and that these feed experience directly. This should be no surprise since we know that messages like 'see edge' arise in the retina.

It may be assumed that experiential space is 3-dimensional, but it has been known since the nineteenth century that it is not (James, 1890). We can see movement using parts of our retina that cannot assess the distance involved in the movement. Just as things can be red now, they can be moving now. We do not experience one space at one time; we insert time into space. Experiential maps cross dimensions. As discussed below, our experience has many dimensions, in so far as it can be said to have dimensions at all.

There are further layers to this discussion. We are entitled to ask specific questions about how brains produce such useful maps of the outside world, as long as these are considered maps based on correspondence of rules rather than replicas. It is also important to note that a lot of experience is non-spatial, as emphasised by Chris Clarke (1995), who has addressed several of the issues raised here about space, if from a slightly different angle. Nevertheless, the salient conclusion up to this point is that neuroscience supports the view that searching for an answer to the question 'what is physicists' space like' is misplaced.

The Case From Physics

It may be hard to accept that physicists' space is not 'spacious', but modern physics involves radical changes to our concept of space. Space is 'curved', in a way we cannot experience. It is inseparable from time, whatever that might 'be like' if it were. The space between an atom's nucleus and its electrons was thought to be empty for a period around 1900 but is now seen as neither full nor empty. More importantly, as discussed by Douglas Bilodeau (1996), space no longer has the property that things are in one part of it at one time. Space, time and probability are linked in a way that defies the idea that we can envisage how 'this must have happened there then'. We have to analyse the world in an abstract dynamic way in which causality is more than just a sequence of geometric frames.

So physics supports neurobiology. Logic suggests that it is highly unlikely that physicists' space should be thought of as spacious in an experiential sense, even if the thought were legitimate. Ironically, these issues may have been easier to handle in 1700. Our problem now is that schools hammer into us a debased form of Newtonian physics, frozen in Cartesian geometry.

The implication of the above is that appearances are connected by an aspect of reality that, in itself, has no knowable, or even meaningful, appearance. This aspect of reality is only known to us by inference, as sets of rules. This is not idealism because these are not abstract Platonic rules. Rather, modern physicists' reality comes in the form of units (quanta) of operation of these rules, the number and type of which can be determined under suitable conditions. There is apparent reality, which is undeniable to the individual subject but a partial and sometimes inaccurate guide to what might be called connecting or process reality, which has no appearance, but explains why appearances are compatible for all observers. These are not internal and external realities. It is just that our only access to apparent reality is internal and we are better at linking external, rather than internal, events to process reality.

Two complementary realities may seem odd, but they have been around a long time, even in the ordinary person's view. William James (1890) notes: "Strange mutual dependence this, in which the appearance needs the reality in order to exist, but the reality needs the appearance in order to be known." The dichotomy was familiar to Newton. He realised that his rules for light and gravity required processes with no appearance generating appearances. It is fairly easy, however, to pretend that these processes have appearances. Quantum theory, by contrast, has rules that defy appearances. Moreover, the dichotomy is central to the theory. As such it is perceived as posing paradoxes, like the 'measurement problem'. However, it may do so only because the case from experience has not been absorbed. What is perhaps historically most strange is that Whitehead (1929) was so dedicated to such a process/experience view yet seemed not to capitalise on the new physics of his time.

Geometric Space: a Third, Fictitious but Necessary Space

I have indicated that neither experiential nor modern physicists' space is geometric space, which may come as a surprise and requires clarification. Bilodeau (1996) contrasts the dynamic view of space, as the metric of processes, with a geometric view of space with contents. The geometric view is inadequate because it cannot explain the causality that modern physics requires, if indeed it can explain process at all. In a sense it reduces the world to a series of publicly accessible static three dimensional slices of spacetime, which raise problems in both Bohr's and Einstein's theories. This point goes back to Leibniz's time and is reflected in his Monadology (Woolhouse and Franks, 1998). Leibniz rejected the geometric view for a 'modern mechanical' view, although he also realised that 'mechanism' could not ultimately be of an intuitive billiard ball type and must be replaced by a 'progress in harmony' with significant similarities to contemporary physics.

Bilodeau also contrasts a dynamic account with a 'historic/empiric' account of the world. The historic account is what is provided by a 'measurement'. A physicist needs both the dynamic account to provide predictions and the historic account to test them. Indeed, Bilodeau makes the claim that this is what Bohr meant by the distinction between 'quantum' (dynamic) and 'classical' (historic/empiric). It is not that some things obey quantum dynamics and some obey classical dynamics. All processes must be quantum-dynamic. Unfortunately, confusion has arisen because the word 'classical' is also used to describe the approximate Newtonian/Maxwellian dynamics to which quantum dynamics tends on a large scale through the correspondence principle.

The historic account might seem to be an experiential account, fitting with the process/appearance dichotomy described above, but Bilodeau emphasises that it is not. His historic account is an objective account in public language - like metres and seconds, with troublesome dimension-crossing removed. It sounds suspiciously like a geometric account and I think it is. Bilodeau admits that 'the metaphysical status of this mode may be up for debate'. This would seem right, in that appearances and processes seem undeniable in their different ways, but the geometric account may be no more than a tool for interpersonal agreement, or the internal dialogue of considered thought. It is essential for any scientific exploration (and so in a sense is also a physicists' space), including that of consciousness, but maybe we have confused the tool with reality.

Thus, the working model is that appearances exist in experiential space and are linked by processes in modern physicists' dynamic space and that geometric spaces are attempts to marry a sanitised experiential space with an explanatory space. These attempts have a bridging role, but no claim to reality.

Sticking to the Dynamic View (and Geometric Recidivism)

If reality has the two aspects of process and appearance then the relationship between the two ought to be central to understanding consciousness. Modern physics requires that the dynamics of quantum theory underlie all processes, yet there are serious objections to explaining consciousness at a purely quantum level. I am strongly in agreement with Bilodeau's view that models that invoke quantum level phenomena such as wave function collapse or coherence are distractions. As I will argue further below, it is the general framework of modern dynamics, involving the interaction of distributed indivisible processes, which may allow a solution that a 'mechanical' model cannot. Moreover, this can readily 'show through' at a traditional Maxwellian level via correspondence.

Perhaps the key problem with quantum theory is not that it gives us an unfamiliar dynamic view of the world, but that people want to keep a geometric view of process as well. They want to know 'what things are like' between two situations in which interactions lead to appearances. What is predicted mathematically is not disputed, but because the process linking two such situations defies being given an appearance, people try to break it down into sub-processes that almost have appearances. Although most physicists agree that it is valueless to attribute appearances to the fundamental processes that link observations, it is not always clear how completely this is taken to heart. Bilodeau suggests that Bohr believed that the dynamic rules that link appearances should not have any appearance. He suggests that several physicists, most notably von Neumann, then back-pedalled, creating the measurement problem.

Bilodeau's account of Bohr's views might be challenged, but this is unimportant. Neuropsychology tells us that the refusal of quantum theory to give appearances to the rules that connect appearances is not a matter of pragmatics; it is good metaphysics and good biology. In Bilodeau's words 'The convoluted paradoxes of QM are really a road map out of our ontological impasse.' If it has no meaning to ask what things are really like in between 'being like something to something', i.e. generating an appearance at the interface with another process, then the differences between the 'interpretations' of quantum theory have limited meaning. The two sub-processes of von Neumann, the ontological interpretation of Bohm and Hiley and the many worlds view of Everett (Barrett, 2003) are just attempts to impose appearances on something which has none, or at least to hook the dynamics up to appearances that cannot exist. The predictions are the same; the mathematics is equivalent. There may, nevertheless, be differences in heuristic power, as I shall come to later.

Restoring Homunculi

Before going further in trying to see how a dynamic view might help to explain consciousness I believe we have to take one thing at face value, even if problematic: the appearances of experience belong to something inside a brain which has access to a pattern of internally pre-interpreted information, including maps of external events together with elements from memory, emotional responses, etc. We need a homunculus, or homunculi. And there is no need for apology. Homunculi are often dismissed, but usually without good reason. As Dennett (1978) pointed out, 'homunculi are only bogeymen if they duplicate entirely the talents they are rung in to explain'.

By homunculus I mean something in the brain that receives converging signals not directly from the outside world but from some internal 'array' that maps the outside world. This involves repeated transduction of signals from a state of a sensory receptor to a message-sending form, back to another form in the array and again into a message-sending form before reaching the homunculus, i.e. there is a regress or repetition. However, this regress is only infinite, as Dennett implies, if every time the message is passed on it gives rise to an array of exactly the same physical form as the last. Nobody has ever proposed a homunculus in such terms. Moreover, repetitive transduction is what brains are about. There are arrays in the retina, in the brain stem, in the visual cortex and in the parietal cortex but in each place the signals are in a different format. We should be relaxed about there being many places in the brain with maps of the world, with no infinite regress.

It may be useful to indicate what I mean by information at this point. The existence of several definitions may reflect a tendency to hit trouble when considering what a homunculus might be. I shall later introduce a concept of 'pure' or 'active' information to try to resolve these problems, but first I need to discuss information in traditional terms. I will use the term at this point to mean a feature or features of a physical state of affairs to which a subject has access (is informed), which is often, but not necessarily, a signal or indicator of a feature of another state of affairs with which it correlates in a way that the subject can appreciate through interpretative rules. This definition (and perhaps most definitions) immediately raises two questions. What is it for a subject (homunculus) to have access to information (apperceive)? How does it have a set of rules of interpretation? The challenge of these questions is again central to Leibniz's metaphysics and I suspect Leibniz's analysis is a good place to start, whether or not we call the rule-provider God.

What Properties We Want From Homunculi

Rather than ridiculing homunculi it is useful to review their putative defects. The first is that repetitive transduction is a way of putting off facing up to the impossibility of there being a final place where 'things come together'. Information is handled in a brain in two stages. The first is segregative. The eye and the ear segregate elements of information, which arrive in a superimposed form, into maps or patterns of separate data points, as at the retina. Further steps may involve re-segregation into other forms of maps but at some stage there must be a process that does the reverse; where signals are integrated, or brought back together.

It sometimes seems to be suggested that the information is not 'all brought together'. However, if this were so we would not be able to respond to the relationships between bits of information - like smiling in response to the pattern of a familiar face. For actions to be based on patterns of information the elements of information need to be brought together, or integrated, in one or more places, at least one being where we experience things together. Although it may be unclear why integration is associated with an experience it would be perverse to suggest that information is integrated in one place for computation and experience of integrated information arises elsewhere, unconnected to computation. Apart from anything else, this would deny the possibility of talking about experiences.

What is likely to be true is that there is no single place where things come together, because the brain uses parallel processing with complex integration occurring in many places at once. Computers provide a misleading analogy because everything comes together in one central processor, but does so in a serial piecemeal fashion that would not be expected to give rise to a complex appearance. The presence of many sites of integration in a brain does complicate the relationship between computation and experience (Edwards, 2006) but this is beyond the scope of the present discussion.

The second putative defect is the misconceived need for an internal map to physically match the arrangement of the outside world in the way that a screen, or a 3-D equivalent would: i.e. to be homotopic. Very crudely homotopic maps do occur in the cortex, but probably just as the simplest way of packing connections, irrelevant to computation. Homotopy has no computational value because homotopic arrays of signals present the same sorting problems as the source from which they derive, with the added problem that in opaque brain they cannot be accessed optically. They are Dennett's bogeymen.

We are so used to the idea that maps are homotopic, because their function is to present segregated data, that it is hard to envisage what 'maps' involved in integration might be like. This is where the question of how a subject has access to information looms large. A further shift into counterintuitive territory is needed. Whatever the rules or dynamics of 'experiential integration' are they are unlikely to bear any relation to the rules used to generate segregative maps, whether by means of lenses, or a video camera. These maps used for integration are not going to be 'analogue' maps in any familiar sense since no mode of 'access' based on a traditional classical geometric view of the world can address the 'complexity within unity' that is the binding problem of integrated phenomenal experience. This should not be a surprise, since, if the information accessed by the subject is in forms like 'see circle', there is no way that we should expect homotopy or 'analogy' in the laying out of signals for experiencing. This suggests that the 'access' that a homunculus has to information may need to be understood in a new sort of framework with the distributed features of the fundamental dynamics of modern physics.

Difficulties with Receiving

Because of the difficulty of understanding how information is accessed or received there appears to be a fashion for denying that information needs to be received, despite the clear assumption of conventional neuroscience that it does (and the cell activation that sends an action potential down an axon only 'functions' as a signal if the potential arrives somewhere). In their essay on qualia, Ramachandran and Hirstein (1997) suggest that if person A experiences red, then careful connection of their brain by a 'neuronal bridge' to the right spot in the brain of person B, who is colour blind but has the brain tissue needed to experience colour, will ensure that B experiences red; i.e. a bit of B receives red.

Receiving is needed to make sense of the story. Yet the problem of meaning in physical systems arises here. The suggestion seems to be that experiential red is carried by brain cells in a language that is accessible to other brain cells, and that experiential red is only 'private' because B does not know how to translate the word red into cell message language. This may be true in one sense, but in another sense brain cells do not have a language for their messages. All messages consist of the same action potentials.

The meaning of red must lie entirely in the relationship between the physical arrival of the message and the properties of the brain cell at which it arrives (highlighting the importance of brain substrate, and supporting the 'tissue' (versus functionalist) view espoused by Jeffrey Gray (2002)). We can only assume the message will be A's 'red' if the receiving cell in brain B has exactly the same physical properties, including the position and excitatory properties of the point where the neuronal bridge is connected, as its counterpart in brain A. (For the difficulties encountered when considering more than one cell, see Edwards (2005, 2006)). These things may one day be knowable but privacy of experience is more robust than Ramachandran and Hirstein imply. This brings us to the problem of how a subject can have rules of interpretation that allow it to receive information in a useful way. Something in the nature of a receiving unit must carry with it such rules.

So how might a dynamic analysis resolve these issues of integrated richness of access and intrinsic rules of interpretation?

The Indivisibility of Fundamental Processes

The main attraction of a dynamic view for explaining consciousness is that fundamental processes are distributed in time and space and can have enormous indivisible richness. This is not particular to oddities like Bose-Einstein condensates, which are, if anything, rather isolated and uniform. It also has nothing to do with the concept of wave function collapse, which appears to have been introduced to salvage a quasi-geometric account of the otherwise 'unvisualisable' [note 1, see end of file]. The indivisibility of a fundamental process is a much more general concept, perhaps most elegantly illustrated by Richard Feynman's lay description of electromagnetism in 'QED' (1985). Crucially, it is an indivisibility of process, not of 'state'.

As an example, consider a radio wave passing from a transmitter to my television aerial through air encumbered by tall buildings, leading to 'shadows' in my TV picture due to diffraction by the buildings. This process is indivisible, is as large as one might like, cannot be described purely geometrically, but can nonetheless be analysed without recourse to quantum formalism. All we need is for consciousness to involve the same everyday features.

What is interestingly indivisible about fundamental processes is their interaction with other processes; diffraction of a radio wave by buildings is an indivisible process distributed in space and time. These fundamental causal interactions have no 'mechanism'. There is no mediator, the processes simply progress in harmony, as Leibniz proposed. Nothing pushes or pulls; pushes and pulls are just the most intimate harmonious processes (e.g. radio waves). All that is exchanged in these interactions is a pattern of probabilities that so-and-so will appear to be so: in other words, pure information, or, as Bohm and Hiley (1995) described it, 'active information'. Passage of information is 'determinacy': not a state of a thing, but a 'knowing' about one process by another process. The knowing is always partial and known to some particular process; processes are not either 'determinate' or 'non-determinate'. Moreover, modern physics requires that once an aspect of a process is 'known' through such an interaction it is irrevocable. There is nothing special about brains in this respect; once a machine has acquired information about a process that aspect of the process can never be knowable otherwise.

Modern physics tells us that the fundamental elements of the universe are instances of the operation of rules which determine the likelihood that one appearance will follow another: rules that operate on information from their environment that modulates, or informs, probabilities. Moreover, the amount of information available to these elements is large. As Feynman (1985) pointed out, a photon of sunlight reaching my eye, having been reflected off a lake, is informed about the entire lake, not to mention the surface of the moon on some days. We are used to information coming in discrete 'bits' in a classical geometric framework, with each bit relating to one interaction in one place at one time. However, physics tells us that this is just the correspondence principle operating when countless processes are considered together. Bohm and Hiley's pure, active information works in quite a different way.

The Subject as Fundamental Dynamic Process

To capitalise on this availability of a rich, indivisible pattern of information, we need to propose that a human experiencing unit, or subject, is itself a fundamental process. This should not be too hard to swallow, since there is nothing else for it to be in the dynamic view. A subject in a brain should access information the way an x-ray passing through a crystal does, and respond accordingly. By definition, fundamental processes carry with them sets of interpretative rules, at least in the sense that they 'know' how to respond; perhaps the most remarkable aspect of all physics. They are instances of operation of such rules. So an appearance should not be a description of such a process, but a pattern of informational elements passed to a process (subject) by other processes at their interface. Bohr's complementarity, which separates appearance and process, is to be expected, not mysterious.

Casting the subject as a fundamental process may be unfamiliar. It is usually cast as a 'classical' object, but in Bilodeau's interpretation of Bohr's usage it is not. Classical 'things' are just marks in the sand ('measuring devices') used for confirming dynamic realities.

Nevertheless, we need to find a fundamental process that could be a subject in a brain that is unitary and demarcated, with parameters that allow it to access the right sort of information, and coupled to a biologically relevant outcome. Modern physics seems to suggest that a fundamental dynamic process ought to be a mode of oscillation that can be occupied by one or more quanta of energy. Detailed discussion of the options for a 'human subject mode' is beyond the present scope but elsewhere (Edwards, 2006) I have argued that the candidate that fits the above requirements best is an elastic (phononic) mode occupying the dendritic tree of a neuron with a wavelength close to the distance between synapses, coupled to trans-membrane electrical potentials. Neuronal dendrites are where we know information is integrated and such a mode would have access to many thousands of informational elements simultaneously. A recent reassessment of neuronal membrane excitation suggests that it is critically dependent on coupling to an elastic wave (Heimburg and Jackson, 2005) at least for action potential propagation. Anaesthetics may remove experience because they decouple the elastic wave by altering the melting point parameters of the membrane. This suggests that there would be nothing epiphenomenal about such a basis for consciousness; it would be essential to neuronal function.

The idea that human subjects might be features of the membranes of individual cells seems to worry people, although this may be more an emotional than a scientific problem since it appears to raise no conflict with what we know, just with what we tend to assume (Edwards, 2005; 2006). It is possible that a fundamental physical process can occupy the dendrites of several cells but current dogma is that the neuron is the functional integrating unit in the brain. Further discussion below leaves this as an open question, but a single cell model does appear to be more tractable and therefore potentially more productive.

Is Experience Made of Words?

The puzzle that this model leaves us with is that although current physics can give a third person account of how fundamental processes might interact at sites of integration of information in the brain, it tells us nothing about how these rules might translate into experiential integration. How is 'circle' integrated with 'blue' to produce a blue circle? For the concept of interplay between dynamic processes and appearances to be of practical value, we need some clues as to how to approach the problem from the experiential side, to provide some predictions about what might match with what. We need clues to how experiential space might 'be constructed', to use a potentially dangerous metaphor in the absence of anything better.

In 1972, Horace Barlow made some suggestions about the number of elements of information required in various parts of the brain for a human percept (Barlow, 1972). The suggestion was that an experience would be composed of about a thousand elements of information, which Barlow portrayed as, 'like words, having the special property that they lead to an economical representation'. Thus, although 'a picture is worth a thousand words', a thousand words (~5 Kbytes) would carry information more economically than the necessary array of pixels (perhaps ~1 Mbyte). In Barlow's concept each element of information was associated with the firing of a cell; 'an active neuron says something of the order of complexity of a word'.
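
A back-of-envelope comparison makes the economy Barlow had in mind concrete. The figures below (an average word length and a modest image raster) are illustrative assumptions rather than values taken from Barlow; only the order-of-magnitude contrast matters.

```python
# Back-of-envelope comparison of Barlow's "a thousand words" against a pixel
# array. The specific figures (word length, image size) are illustrative
# assumptions, not values from the paper.

N_WORDS = 1000              # Barlow's suggested number of elements per percept
BYTES_PER_WORD = 5          # rough average word length in characters/bytes

WIDTH, HEIGHT = 640, 480    # a modest image raster (assumed for illustration)
BYTES_PER_PIXEL = 3         # 8-bit RGB

word_bytes = N_WORDS * BYTES_PER_WORD
pixel_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL

print(f"word-like code : {word_bytes / 1024:.1f} KB")      # ~4.9 KB
print(f"pixel array    : {pixel_bytes / 1024**2:.2f} MB")  # ~0.88 MB
print(f"economy factor : ~{pixel_bytes // word_bytes}x")
```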

I suspect this suggestion contains a profound insight. However, Barlow says nothing about how these word-like elements 'lead to an economical representation'. What 'hears' this 'saying'; i.e. how do the words combine to form the picture? How does blue get with circle? There would seem to be only one neurologically tenable answer; action potentials from these 1,000 cells must be sent down axonal branches to converge on the dendrites of a receiving neuron - or indeed through many branches to many such neurons. A thousand signals is well within the input capacity of some cortical neurons with ~40,000 dendritic synapses. (A variant is that signals behave more like letters than words, which is still economical, but implies different sorts of rules of interpretation.) It is perhaps ironic that although it is frequently assumed that experiential integration occurs over a large 'net' of cells, orthodox neurophysiology, based on the 'neuron doctrine', requires integration to occur only within individual cells. The frequently assumed picture is a violation of a doctrine based on a century of experiment. My own view is that the neuron doctrine is sound (Edwards, 2005, 2006).

What seems of key interest here is that without this assumption of signals that combine with the unusual economy of words, it is hard to see how a brain could integrate the elements of an (at least visual) experience. This bears on the circle-ellipse flip mentioned earlier. It suggests that experience may be synthesised from elements more like words than pixels. As with 'five red roses' this allows a single colour signal somehow to be bound to five separate shapes and not the gaps between.

Dimensions and Degrees of Freedom

Understanding how appearances could be built out of words runs into the problem that the physical basis of word combination, or syntax, is itself a major mystery. Chomsky (2000) has implied that unlocking it is probably beyond our current capabilities. However, one or two people have considered taking up the gauntlet. The way words combine involves peculiar properties in which elements 'belong' to each other in various different ways. Hinzen and Uriagereka (2006) have suggested that this implies that individual words have values in more than one dimension and may interact in a way best described by a matrix based algebra, which allows for complex and asymmetrical relations between elements.

The idea of words existing in, say, five dimensions, may seem fanciful, but only if we remain tied to a geometric view of the world. In a purely mathematical dynamic view dimensions are simply independent degrees of freedom. Modern physics ascribes considerably more degrees of freedom to processes than just position and time. Mathematical treatments of fields using vector or matrix based algebras may invoke very large (potentially infinite) numbers of degrees of freedom.

There is no shortage of degrees of freedom available in the brain. A neuronal membrane with 40,000 input synapses can be considered as a field with 40,000 degrees of freedom. Creating a 17-dimensional experience out of this is not a problem in terms of capacity. In a sense what needs explaining is how a system with 40,000 degrees of freedom gets interpreted in terms of what seem to be a relatively small number of dimensions. This should depend on the way the input is 'read'. A hologram may provide an analogy. A hologram can be considered as a field of countless degrees of freedom or as an array at a surface with two (spatial) degrees of freedom. However, if probed by the right sort of optical beam a third dimension is retrieved, because the probing beam applies a registrational rule to the elements in the hologram that detects a phase relation. Another example is an 'information string' on a Turing machine tape or optical disk, which can be interpreted in whatever set of dimensions the registrational rules of a piece of software are designed to 'extract' it in.
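
A toy sketch may help pin down the sense in which dimensionality here belongs to the rule of access rather than to the data. In the sketch below (an illustration only, with arbitrary sizes, using numpy purely for convenience), one and the same flat string of elements is read as one-, two- or three-dimensional simply by changing the rule applied to it.

```python
import numpy as np

# A single flat "information string": its effective dimensionality depends
# entirely on the registrational rule the reading process applies to it.
# The sizes here are arbitrary illustrative assumptions.

flat = np.arange(24)                  # 24 undifferentiated elements

as_plane = flat.reshape(4, 6)         # read as a 2-D array (like a surface)
as_volume = flat.reshape(2, 3, 4)     # read as a 3-D array (a 'retrieved' depth)

print(flat.shape, as_plane.shape, as_volume.shape)   # (24,) (4, 6) (2, 3, 4)

# Nothing in the underlying data changes; only the rule of access does.
assert as_plane.ravel().tolist() == as_volume.ravel().tolist() == flat.tolist()
```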

Thus pure information accessible to any fundamental process would be expected to have an effective number of degrees of freedom determined both by the number of degrees of freedom of the field that forms that process's environment, made up of many other 'apperceived' processes, and by whatever registrational rules are inherent to the apperceiving process. In terms of the causal chain involved in such an information-passing interaction, and the externally observable results, we have a good idea what the registrational rules must be. They should be the rules of modern field theory. We simply need to know what the parameters of the relevant fields and modes are.

What is a much harder issue is how these rules of process might relate to rules of interpretation that determine the qualities of appearances where processes interact, and 'apparent dimensionality'.

Analysis from Within

This brings us to a completely new sort of scientific question. There may be another half to physics, an experiential half, to keep us busy for a few centuries. And trying to transpose the rules of third person 'process' physics to first person 'appearance' physics is likely to create a minefield of false analogies. Almost certainly, correspondence between the two needs to be considered in abstract mathematical terms without being bound by a geometric framework. We need to beware of questions like 'why should a phononic mode interacting through a pattern of potentials across a membrane actually have a spatial experience?'. This would be putting things back to front, missing the point that physicists' space is not spacious in the experiential sense; that is just a feel you get inside a brain. It would be just as inappropriate to think that a photon passing through Young's slits would experience our spatial view of slits. Everything has to be thought of 'from the inside out', which is far from easy.

There must be a pattern of electrical perturbations in the dendrites of one or more cells that carries a code that is interpreted as the experience of an imagined green silk scarf. Neuroscience requires that. Perhaps we need to recognise just how clever fundamental processes are at interpreting incoming information. They progress on the basis of effortlessly following the interference pattern of an almost infinite number of path possibilities. Why should they not be able to convert an array of about 200 points of 'blackness' (path impossibility) into the thought 'green silk scarf'? After all, even if in a more indirect way, this is exactly what has happened between your retina and somewhere in your brain just now. And it is the final receiver that must do the interpreting; nothing else makes sense.

One foothold we have is that it seems reasonable to assume that the way a field of informational elements is handled in terms of degrees of freedom in process terms matches up at least in some way with the way it is interpreted experientially. This seems to be required if experience is to bear any relation to behaviour, which it does. In fact the brain might be thought of as a machine that harnesses experiential integration at the fundamental level to guide behaviour in a macroscopic composite world.

Yet, two uncertainties emerge. Firstly, it is not clear how well the hologram analogy holds when considering the internal picture. In a hologram the probing beam converts a field from a two-dimensional array to a three dimensional shape, in terms of the information passed on to another observing process, but we have no insight into what the probing beam might experience. Secondly, it is not clear that experiential space really comes in dimensions. The Penrose/Escher staircases show that we can think we are experiencing in three dimensions when in fact there is no overall dimensional framework. (Perhaps germane to Clarke's (1995) argument that experience is not inherently spatial.) We have a set of relations, each of which makes sense, but which, taken as a whole, are impossible. This is again akin to language where it is possible to have a narrative, in which the words and sentences make sense, but which, taken as a whole, is self-contradictory. Experience is probably consistent most of the time because it is a best guess narrative constructed by the consensus of millions of parallel operations in input pathways sifting through for the most consistent relations ('circle seems the best bet'). These operations have usually got things sorted before experience but sometimes a false association is made.
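
The 'locally consistent, globally impossible' character of the staircase can be made precise with a small toy check: each pairwise 'above' relation is fine on its own, but the set as a whole admits no global height assignment exactly when the relations close on themselves. The sketch below illustrates that point only (the step names and the cycle are invented for the example), using a standard acyclicity test.

```python
# Toy illustration of the Penrose-staircase point: each pairwise relation
# ("X is above Y") is individually coherent, but a closed set of them admits
# no consistent global ordering.

def has_consistent_heights(relations):
    """True if some global height assignment satisfies every 'higher above lower'
    relation, i.e. the directed graph of relations has no cycle (Kahn's algorithm)."""
    nodes = {x for pair in relations for x in pair}
    indegree = {n: 0 for n in nodes}
    outgoing = {n: [] for n in nodes}
    for higher, lower in relations:
        outgoing[higher].append(lower)
        indegree[lower] += 1
    frontier = [n for n in nodes if indegree[n] == 0]
    removed = 0
    while frontier:
        n = frontier.pop()
        removed += 1
        for m in outgoing[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                frontier.append(m)
    return removed == len(nodes)   # every node removable => no cycle

print(has_consistent_heights([("A", "B"), ("B", "C")]))              # True
print(has_consistent_heights([("A", "B"), ("B", "C"), ("C", "A")]))  # False: each step fine, whole impossible
```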

Moreover, these relations often seem to have more to do with the 'belonging' relations of language than geometric relations. A car is perceived as having all sorts of non-spatial features, like solidity, potential motion, kudos, age, environmental unfriendliness. These relations certainly involve many degrees of freedom, but perhaps they are not sorted into 'dimensions'.

Nevertheless, it does seem necessary to propose that information is stratified in some sort of way within an experience, so that elements are interpreted as 'this or that sort' whether the sorts be shape, colour, movement or whatever. The integration of these sorted elements is then a matter of the subject's 'experiential syntax'. One or more processes inside a head must have an internal language, a sort of inbuilt Chomskyan I-language (Chomsky, 2000) of thought, with a vocabulary that includes line, circle, ellipse, red, blue, etc. Incoming signals are read in this vocabulary. The signals may have meaning individually or their relations may have meaning. Thus a signal at a synapse may mean red, or may be more analogous to 'r'.

What does seem to be a basic requirement of a set of registrational rules generating an experience, if we genuinely believe our experiences are unified, is that it must be able to sort sensory elements from a hierarchy of relations between elements and then build an interpretation that uses both appropriately. This is essentially what a language faculty needs to do, binding the percepts of 'phonetic form' (the sound or shape of the word cow) and referent (an image of the animal) through a relation involving the semantic content or 'logical form' of the word and then binding each word to others through syntactic relations.

To discover the rules of experiential syntax may be an impossible task. However, the message sending system that is a brain must, presumably, to be useful, complement such an experiential syntax according to rules that may be at least broadly derivable. Thus the function of the brain, whether it be language, mathematics or any form of thinking, should involve the interaction of two complementary rule systems, one at the fundamental level and one at the composite level of neural networks. In the past interest has focused on connections and network systems, but the rules of integration must be at least as important.

The parts of the brain dealing with language may be unusual, but the evolution of human language occurred too fast to require major genetic changes. It is not unreasonable to suggest that the way the elements of language are integrated reflects some fairly basic aspects of how information is integrated in neurons. In the Chomskyan programme (Chomsky 2000) most of the rules of language have been whittled down to one basic process called 'merge'; in other words, integration. Very simply (at least on the face of it) 'loves' merges with 'Mary' and then with 'John' to form 'John loves Mary'. Although this has been described in terms of 'inclusion in a set' there are features that suggest that it may be much more like a matrix operation (Hinzen and Uriagereka, 2006). If the defining parameters of such an operation could be expressed mathematically then we might have an idea just what we should expect neurobiophysical processes to explain. Hinzen and Uriagereka (2006) suggest that words need four hierarchical degrees of freedom (for nouns: abstract < mass < count < animate). That is an important start.
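
For concreteness, the set-inclusion reading of 'merge' mentioned above can be sketched as nothing more than recursive pairing, as below. This is a deliberately minimal illustration: it makes no attempt at the matrix-style treatment Hinzen and Uriagereka propose, and the example sentence is simply the one used in the text.

```python
# A minimal sketch of 'merge' on the set-inclusion reading: each application
# pairs two syntactic objects into a new, nested one. Purely illustrative,
# not the formalism of Chomsky or of Hinzen and Uriagereka.

def merge(a, b):
    """Combine two syntactic objects into a single nested object."""
    return (a, b)

vp = merge("loves", "Mary")   # ('loves', 'Mary')
s = merge("John", vp)         # ('John', ('loves', 'Mary'))

print(s)  # 'John loves Mary' as a nested structure built by repeated integration
```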

Thus, there may be legitimacy in seeing the problem of syntactical merge as the same sort of problem as the binding problem of experience. The caveat must be that any integrative processing going on outside of experience (at least the sort we are used to discussing) may well be a composite process involving piecemeal integration at lots of different sites in series or in parallel, as much dependent on the pattern of connections as on individual fundamental integrative events. Nevertheless, the semantic/syntactic rules of these fundamental integrative events ought to play an important part in determining the overall rules of the system.

There might also be clues derivable from neuronal microanatomy. Fundamental processes can have parameters that translate directly into classical geometry - like the phononic modes that occupy ice crystals, with sixfold symmetry and multiple nodes in each arm. Phononic modes in neuronal dendrites could similarly have parameters relating to the branching manifold nature of dendritic trees. However great the gap between biophysics and experience may currently seem to be there are reasons for thinking that the rules governing the relationship between relevant parameters should be rather simple, just as the genetic code turned out to be remarkably like an alphabet.

Finally, it needs to be noted that the processes of signal sending and integration are going on in many places in the brain at once, each place dealing with a different level or stage of the total process of perception. The access to information that in the proposed model constitutes our 'verbally reportable' experience will presumably occur at only one of these many levels or stages. That need not imply privileged metaphysical (experiential) status at this level; it simply implies reportability. This illustrates how cautiously one has to proceed.

Synthesis

In summary, it is proposed that there are two complementary aspects to reality: processes, and, at their interfaces, appearances. Processes have no appearance, but are instances of sets of rules operating on other processes. For fundamental processes, operation of these rules is indivisible, whereas composite processes are divisible into component processes and, as such, tend to fit into a 'classical geometric' view. Pure, or active, information, which forms the basis of appearances, is exchanged between fundamental processes in a non-mechanical way, although in composite processes information may seem to be carried mechanically. The rules that are fundamental processes determine both the way these processes evolve, as observed from 'outside' and described by current physics, and the way each process interprets its interactions with other processes as an appearance. Fundamental processes interact with fields with very many degrees of freedom. The way these degrees of freedom are assigned to 'dimensions' from within is something we do not yet understand, and may prove intractable, as in the 'mysterian' view. However, it may prove accessible through inference from mathematics and language.

The suggestion is made that the way these rules handle degrees of freedom in constructing visual or other experiences may be much closer to the rules of language syntax than might first appear. Experiential space is not built of little Lego 'spacebricks' (as if we had any reason to think it was) but more the way a sentence is built. We know that fundamental processes have stupendous powers of reading and integrating possibilities. There seems no reason not to think they might handle things like sentences. Even the invisible mathematical rules of the space of processes might be said to be more like language than schoolroom geometry.

It has been suggested (Hinzen and Uriagereka, 2006) that our mathematical capacity, which, as indicated by Alfred Wallace, emerged without apparent evolutionary drive, may be a by-product of language. Linguistic operations can be reduced to mathematical operations in abstract terms. A slightly different proposal, based on Barlow's suggestion, might be that brains use word-like operations, with matrix-like aspects, in constructing and using ordinary experiences, as of a round or elliptical plate. Words and numbers may have, in a sense, already been there. A relatively minor change in neuronal dynamics may then have led to the option of using these same operations in a more abstract way, giving rise to both linguistic and mathematical faculties.

The practical implication of this synthesis is that it may be valuable to focus efforts on identifying exactly what the domains of integration of pure information relevant to experience in the brain are. The way they behave may then be open to study both in process terms, through biophysical experiment, perhaps involving the actions of anaesthetics, and experiential terms using inference from language itself. Just as neurology might one day tell us how language works, language might also tell us something about how brain cells work.

Acknowledgements

I would like to thank Paul Marshall, Basil Hiley, Wolfram Hinzen, Juan Uriagereka, Chris Clarke, Horace Barlow, John Smythies and many JCS online subscribers for stimulating input in various forms, and Douglas Bilodeau for the clarity of his 1996 article.

References

Barlow, H. (1972), 'Single units and sensation: a neuron doctrine for perceptual psychology?' Perception 1, 371-394.

Barrett, J.A. (2003), The Quantum Mechanics of Minds and Worlds (Oxford University Press).

Bilodeau, D. (1996), 'Physics, machines and the hard problem', Journal of Consciousness Studies 3, (5-6), 386-401.

Bohm, D. and Hiley, B. (1995) The Undivided Universe (London, Routledge).

Chalmers, D. (1995), 'Facing up to the problem of consciousness', Journal of Consciousness Studies, 2 (3), 200-219.

Chomsky, N. (2000) New Horizons in the Study of Language and Mind (Cambridge University Press).

Clarke, C.J.S. (1995) 'The nonlocality of mind.' Journal of Consciousness Studies 2, (3), 231-240.

Dennett D.C. (1978) Brainstorms: Philosophical Essays on Mind and Psychology. (MIT Press).

Edwards, J.C.W. (2005) 'Is consciousness only a property of individual cells?' Journal of Consciousness Studies 12, (4-5), 60-76.

Edwards, J.C.W. (2006) How Many People Are There In My Head? And In Hers? (Exeter, Imprint Academic).

Feynman, R. P. (1985) Q.E.D. (Princeton University Press).

Heimburg, T. and Jackson, A.D. (2005) 'On soliton propagation in biomembranes and nerves', Proceedings of the National Academy of Sciences USA 102, 9790-9795.

Hinzen, W. and Uriagereka, J. (2006), 'On the metaphysics of linguistics', in Hinzen, W. (Guest Editor), special issue of Erkenntnis on "Prospects for dualism: Interdisciplinary Perspectives".

James, W. (1890, reprinted 1983) The Principles of Psychology, p930. (Cambridge MA, Harvard University Press).

Preston, J. and Bishop, M. (2002) Views into the Chinese Room (Oxford, Clarendon Press).

Ramachandran, V.S. and Hirstein, W. (1997), 'Three laws of qualia', Journal of Consciousness Studies, 4 (5-6), 429-457.

Smythies, J. (2003), 'Space, time and consciousness', Journal of Consciousness Studies, 10 (3), 47-56.

Whitehead, A.N. (1929, 1978), Process and Reality (New York, The Free Press)

Woolhouse, R.S. and Franks, R. (1998) G.W. Leibniz, Philosophical Texts (Oxford University Press).

Note 1. Despite the popularity of a link between wave function collapse and consciousness, such a link raises a number of difficulties, including:

1. In Bohr's original theory, observations of the initial and final conditions of a quantum system were linked by a single process, contingent on both observations. Von Neumann divided the process into two, the second being a collapse from many possible 'states' to one. This may help 'visualisation' but introduces an arbitrary 'event' at the second observation, for which we have no evidence.

2. Wave function collapse looks to be the least interesting part of quantum theory, its 'contribution' being random and expected to be lost in a biological system.

3. The original idea that 'consciousness collapses wave functions' conceals a logical error. Bohr gave rules for processes that link experiences. Since we do not have experiences in other people's heads we cannot ascribe a special ontological interruption to what goes on there even if we might to our own experience.

4. Wave function collapse is said to make things 'determinate'. This begs the question of what is determinate to what. Being determinate is about one thing being known to some other thing. There is nothing in quantum theory that says it is an intrinsic change of state of something. This is the geometric illusion again.

5. If part of my brain 'became determinate to itself', what would that mean? Would the determinate state be a static snapshot or include dynamic features? Quantum theory does not say 'particles know themselves when the music stops'. It just says that when we observe something it has a 'particulate' quality.

6. Von Neumann's approach allows the 'superposition of states' of, say, an electron, to continue as part of a greater 'superposed system' right through to the point of the 'collapse' of conscious perception. But what links the collapse of this electron's wave function to those of all the other things we are seeing at the time?