The Less Than Jolly Heretic, 4th Edition [preview]
- Benjamin Power

- 12 hours ago
- 52 min read
Chapter Four: New Paradigms of Consciousness
Note: I could not transcribe all of the mathematical sections of the original document into my blog-space, so some areas of this preview are left incomplete; such symbols are marked [unavailable].
It is perhaps better to forgo the bio-reductionist paradigm in cognitive science altogether, given the advances provided by Mari Jibu and Kunio Yasue in their quantum brain dynamics research, presented in Quantum Brain Dynamics and Consciousness: An Introduction. There they show that physical substrates in the brain potentially manifest quantum field phenomena, with consciousness explained by fundamental quantum mechanical principles. This is an alternative to taking the fundamental constituents of a brain to be single neurons in networked organization, with our focus, up until now, merely on the chemical gaps in between them; it presents a new understanding, with the orthodox computational theory in reality better suited to those dealing solely with machines.
The two authors explain that in the 1960s and 1970s, most brain scientists supported the neuron doctrine and looked for the fundamental process of brain activity in the neuron, which, from the point of view of brain physiology, was considered a materialistic unit of the brain. Then, it was customary in brain science, physical chemistry, and molecular biology to concentrate on material constituents that manifest continual existence or appearance.
In contrast, they highlight the cutting-edge technical research of Hiroomi Umezawa, who, since his youth, had been attracted to philosophy, physics, mathematics, and psychology and went on to apply an interdisciplinary approach to the study of the human brain.
It is carefully explained that in quantum field theory, one must consider the fundamental constituents of matter and the physical processes that arise from the sophisticated interactions among these constituents. The physical processes came to be regarded as quasi-fundamental constituents of matter due to the basic principles of quantum theory; that is, a large composite system, consisting of many fundamental constituents of matter, exhibits a behaviour at the macroscopic level that is different from properties of the individual constituents (Umezawa 1993).
Jibu and Yasue go on to develop this understanding, explaining that, to Umezawa, the neurons and the various biomolecules that constitute neurons were no more than redundant degrees of freedom playing only subsidiary roles, just as nucleons do in the case of mesons. Umezawa regarded the brain as an extensive composite system where it is not the dynamics of the individual constituents but the ordered dynamics of collective modes of the extensive composite system of the brain, governed by the basic principles of quantum field theory, that play an essential role in realizing the highly advanced brain functions.
Frustratingly to me, due to the space constraints of this chapter, I will not devote time to explaining quantum field theory itself as an idea. However, for preliminary investigation, I would recommend the pleasantly detailed (pre-)undergraduate-level textbook Quantum Field Theory for the Gifted Amateur by Tom Lancaster and Stephen J. Blundell, which, due to its short chapters, makes for easier regular study, and helpfully provides some – quite difficult – exercises, the solutions to which are also available to download online (although, as a total layman, I am certainly not claiming that I match that title’s mental characteristic myself, beyond my general curiosity levels! [I was so embarrassed at the thought of having such a title in my home library, feeling myself an evident fraud, that it took me months to pluck up the courage to purchase this book]).
In response to another popular permutation of reductionism, it is clear that AI advancements themselves, a very popular topic indeed in contemporary technological research, cannot provide us with machine consciousness: computers lack all self-awareness, emotion, and conception of beauty; they are bereft of joy, awe, or delight, loveless and wholly insensate. And this is even before we consider the proposed full limits of Artificial Intelligence as presented by Hubert L. Dreyfus in his classic What Computers Can’t Do, where he states:
In discussing CS [cognitive simulation] we found that in playing games such as Chess, in solving complex problems, in recognising similarities and family resemblances, and in using language metaphorically and in ways we feel to be odd or ungrammatical, human beings do not seem to themselves or to observers to be following strict rules. On the contrary, they seem to be using global perceptual organization, making paradigmatic distinctions between essential and inessential operations, appealing to paradigm cases, and using a shared sense of the situation to get their meanings across.
It is noticeable that, from his very first publications up until now, the AI community has been very conspicuous in burying its head in the sand over the writings of Dreyfus, offering the occasional haughty dismissal (somewhat expectable, if unsubstantiated) but never a solid academic rebuttal that can counteract his compelling blend of Heideggerian philosophy, phenomenology, and hard science. The most one could do, in one instance, was claim that humans do indeed follow hard-set rules of which we are so far unaware, echoing the words of Alan Turing in his 1950 ‘Argument from the Informality of Behaviour’; admittedly, from a practical standpoint, this does seem to have parallels to the unfalsifiable ‘just over the horizon’ gene-hunting manias of bio-psychiatric genetics researchers. When is enough considered enough, and a scientific moratorium imposed on what, so far, has proven a considerable waste of time and money?
It is accurate, as of 2026, to state that computers – despite wild publicity and hype (and, again, the atmosphere of unquenchable, quasi-religious enthusiasm in favour of AI research’s purported ‘success’, a belief-level common also to devoted psychiatrists, which may not entirely be a coincidence given the contemporary model of – human – cognition which is adhered to by both disciplines and the bio-reductionistic overlaps between these fields of consciousness research) – are still unable to perform tasks that would require deep context and meaning, just as they lack the nuance of emotional intelligence, and thus cannot respond effectively to human emotions, context-heavy cultural references, or subtle human interactions, or indeed interpret them at all.
There is no empathy, and no amount of personalised feedback or interactive gamification by humans can instil a legitimate phenomenological drive to compassion or interpersonal understanding. They cannot perceive us; there is no theory of mind.
Neither is there an ability present in the computing machines of today to make the intuitive leaps humans rely on in their decision-making.
Besides this, they have no genuine creativity or sensible artistic impulse, and no agency, relying instead, rather obviously, on human input.
Even artificial general intelligence (‘AGI’) – a theoretical form of AI that matches or surpasses human cognitive ability in all areas – cannot handle causal problems dependent on a model of reality, as proved by Ragnar Fjelland in his 2020 paper “Why general artificial intelligence will not be realized”, Humanities and Social Sciences Communications 7 (1): 1–9.
This paper states that proponents of AGI and the strong AI model, reliant as they are on the work of Yuval Noah Harari and Francis Crick, have made the glaring error that the mathematician and philosopher Edmund Husserl famously recognised in Galileo’s Platonic thought (an idea also present in the deterministic mathematical ideas of Pierre Simon de Laplace): namely that, in objectivist fashion, they presume the world to be ‘nothing but’ one of bio-chemical algorithms in vast assemblies of nerve cells. In truth – as elaborated on by Theodore Roszak with his thought-experiment example of a Buchenwald psychiatrist’s ignorant incomprehension over why his patients displayed to him as very upset (!) – this does nothing to help us understand other people by putting ourselves in their shoes, as context is forever missing; it provides a serious oversimplification of humanity and social phenomena, abstracting reality into something idealised and metaphysical, governed by mathematical functions rather than the causal relationships evidenced by empirical science.
Computers not being in our world (i.e. there being no genuine connection with us outside of what we ourselves contrive; a gap forever in place), the claims of Big Data advocates that the data ‘speak for themselves’ are thus hollow. Despite not all neural networks requiring a programmer – the deep reinforcement learning of the company DeepMind/Alphabet’s artificial neural network game player, AlphaGo, could train on earlier versions of itself rather than competent players, and indeed can handle tacit knowledge, albeit of an unrealistic kind – the data models utilised must still be selected by humans, and consist of numbers.
As it stands, despite the expectable stiff expenses of research and development, DeepMind continues to run at a loss on account of deep reinforcement learning’s disconnect with real-world problems in a changing world, having lost over one billion dollars over the course of three years, from 2016 to 2018 – a victim of the ‘fallacy of initial success’, as with the rest of the AI industry.
It may be a shame for some to burst this bubble, if long overdue, but these vital human abilities will not – cannot – ever be achieved by computing technology, fundamentally by the very nature of these machines; and this pronouncement takes into account the immense, irresponsible time-sink and energy-heavy resource drain of quantum computing research, whose high-tech machines function far beyond the capabilities of the best classical supercomputers.
Even quantum entanglement and superposition cannot determine quantum phases of matter, susceptible as these computers are to decoherence. As mathematician Gil Kalai observed in 2025, the phenomenon of noise (i.e. random fluctuations and errors) seriously affects the outcome of the process, with the potential to corrupt many qubits all at once, and the machines lack quantum error correction. Since this correction effort increases exponentially with the number of qubits, it becomes impossible to create a low enough error level to implement quantum circuits. Solving some difficult problems (such as detecting the mass of the black hole binary GW231123) would take a – so far theoretical – 20 million qubit quantum computer an estimated many billions of trillions of years, and current machines are nowhere near that number of qubits, operating barely past the 1000 qubit mark, in fact.
Quantum computers remain less complex than the human brain, lacking the intricacy of the brain’s neural networks, which comprise around 86 billion neurons interconnected by trillions of synapses – a brain that excels at parallel processing, pattern recognition, and learning, even before emotional and social intelligence is considered.
So even if they could – which they can’t – reach that level of qubits, would it be worth it?
Also, reliant as quantum computers are on the generation of random numbers, is it even correct to claim they are modelled on the reality of human thought?
Indeed, in Shadows of the Mind, in the conclusions of his chapter 3, The Case for Non-Computability in Mathematical Thought, the physicist Roger Penrose also acknowledges a clear non-axiomatic quality to the process of thinking, saying, “we appear to be driven to the firm conclusion that there is something essential in human understanding that is not possible to simulate by any computational means”, having speculated further in the previous lines, asking of us “is it conceivable that there is an essentially non-random nature to the detailed behaviour of some chaotic systems, and that this ‘edge of chaos’ contains the key to the effectively non-computable behaviour of the mind?”
Furthermore, edge-of-chaos dynamics are discussed at length in the first chapter of a fascinating Advances in Consciousness Research book titled Fractals of Brain, Fractals of Mind, edited by Earl Mac Cormac and Maxim I. Stamenov – and which may render my other writings to some degree obsolete. It reminds us, on the very first page, that when various scales of complexity in the (nonlinear dynamical) brain are considered, the brain can be observed to take on a fractal-like structure in which neural structures at many different spatial scales are embedded recursively, making reference to the many scales of supra-neural structure in the ‘Neural Darwinism’ neural model of Gerald Edelman, 1987 (among discussions of many other researchers and theorists). It goes on to suggest, following Chris G. Langton (“Computation at the edge of chaos: Phase transitions and emergent computation”, Physica D: Nonlinear Phenomena, Volume 42, Issues 1–3, June 1990, pages 12–37), that complex systems may be positioned on a continuum between highly ordered and highly chaotic.
In the specific example of a brain system, the movement to a more ordered state makes up a recognition-based, engaged, unreceptive mode of interaction whereas movement to a more chaotic state requires an alert, ready, receptive mode of interaction (according to the article “How brains make chaos in order to make sense of the world”, by Christine Skarda and Walter J. Freeman, Behavioral and Brain Sciences (1987)10:161–195).
One way to explore this is to examine cellular automata, i.e. simple computational devices that switch from one discrete state to another depending on the states of their neighbours at the previous discrete time step. As Stephanie Forrest describes in “Emergent computation: Self-organizing, collective, and cooperative phenomena in natural and artificial computing networks” (Physica D: Nonlinear Phenomena, Volume 42, Issues 1–3, June 1990, pages 1–11), large systems of identical automata display emergent collective properties not present in the individual automata. An analogy of these extremes would be the behaviours of solids and gases, respectively.
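As a concrete illustration of that neighbour-rule mechanism, here is a minimal sketch of a one-dimensional cellular automaton in Python. It is my own toy example, not code from Forrest’s or Langton’s papers; rule 110 is a standard choice often cited as showing complex, ‘class four’-style behaviour:

```python
# Minimal 1-D cellular automaton: each cell's next state depends only on
# its own state and its two neighbours' states at the previous time step.
def step(cells, rule=110):
    """Advance one time step with periodic (wrap-around) boundaries."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right  # value 0..7
        out.append((rule >> neighbourhood) & 1)              # look up that bit of the rule
    return out

def run(width=31, steps=15, rule=110):
    """Evolve from a single live cell and return the full history."""
    cells = [0] * width
    cells[width // 2] = 1
    history = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        history.append(cells)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

Printing the history as rows of `#` and `.` shows the characteristic irregular-but-patterned triangles; swapping in a very low or very high rule number gives the ordered and chaotic extremes the text contrasts.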
In a further linked analogy, to the process of sublimation, we can consider ‘class four automata’, which display properties not seen in either highly ordered or highly chaotic cellular automata, their complex behaviours described as extended transients: metastable dynamics produced by the tension between order and chaos, propagating unpredictably, albeit with clearly observable coherent patterns in their evolution (hence effervescent). Extended transients enable the possibility of long-range interactions at the global scale. At the edge of chaos, cellular automata can influence each other according to a power-law distribution (Stuart Kauffman, 1991): nearby sites communicate frequently in small ‘avalanches’ of changes, whereas distant sites communicate rarely, albeit with large change avalanches. Extended transients reveal the most effective trajectories, optimally positioned between total order and total chaos, the resulting behaviour resembling the dynamics of real-world complex systems capable of producing solitary waves, i.e. a ‘soliton’: a nonlinear, self-reinforcing, localised wave packet, providing stable solutions to a range of – weakly – nonlinear dispersive partial differential equations describing physical systems, and ensuring a nearly lossless energy transfer of wave-like propagations (again, with initial reference to Chris G. Langton, 1990).
To return, however, to the basic nature of thought: according to the overview given in Chapter 7 of Stairway to the Mind, by Alwyn Scott, Roger Penrose clarifies matters by outlining “four philosophical positions that one may assume”:
A. All thinking is computational; in particular, feelings of conscious awareness are evoked merely by the carrying out of appropriate computations.
B. Awareness is a feature of the brain’s physical action; and whereas any physical action can be simulated computationally, computational simulation by itself cannot simulate awareness.
C. Appropriate physical action of the brain evokes awareness, but this physical action cannot even be properly simulated computationally.
D. Awareness cannot be explained by physical, computational, or any other scientific terms.
Scott goes on to explain:
A is the position of strong artificial intelligence, or functionalism and D is the position of the mystic. Both are rejected by Penrose so the choice is between B and C. B, he suggests, is the view that would generally be regarded as “scientific common sense” because the simulation of a physical process is not the same as the actual process. (“A computer simulation of a hurricane, for example, is certainly no hurricane!”) Nonetheless, C is the position that Penrose believes to be closest to the truth. View C holds that not all physical actions can be simulated on a computer, and Penrose argues – as did [Eugene] Wigner – that such noncomputable physical laws may lie outside the present purview of physics.
In his short, informative presentation on the debate between Roger Penrose and Emanuele Severino in Artificial Intelligence Versus Natural Intelligence, Fabio Scardigli summarises this argument, explaining to us that the authors consider, as Roger Penrose does, that ‘true’ intelligence requires consciousness, something that our digital machines do not have, and never will. These authors are also opposed, like Penrose, to the standard AI view of human beings as a kind of ‘wetware’. They contest both the strong AI belief that consciousness emerges from brains alone, as a product of something similar to the software of our computers, and the physicalist view that consciousness ‘emerges from functioning’, like some biological property of life.
He goes on to say that these researchers hold that the essential property of consciousness is the ability, the capacity, to feel. Of course, the ability to feel implies the existence of a subject who feels – a self. Therefore consciousness is inextricably entangled with a self which (or who) feels inner experiences. Central to the discussion is thus the construction of a theory of ‘qualia’ (i.e. specific instances of subjective experience, for example, the taste of a tomato, or the pain sensation of a broken rib, as opposed to propositional attitudes, which are merely neutral, content-bearing beliefs about an experience).
In a different context, relating to the problem of free will, another, more ‘orthodox’ quantum understanding of the brain has been explored in detail by Henry P. Stapp in Mind, Matter and Quantum Mechanics. Inspired by the metaphysical writings of Alfred North Whitehead on process philosophy, it considers a more global, ‘mind-like’ wave function collapse than is present in the competing position of Roger Penrose and Stuart Hameroff (Stapp’s posited phenomenon occurring in the synapses of brain neurons rather than their microtubular cytoskeletons). In the Newtonian Process II component of brain dynamics, the lateral velocity of calcium ions entering the narrow ion channels of the trillions of nerve terminals between nerve cells increases, on account of the quantum uncertainty principle causing the cloud structure associated with each ion to spread out; the spreading of this ion wave packet means that the ion may or may not move from the channel exits to trigger the vesicles that release neurotransmitters. The Process I component of brain dynamics, which involves the agentive mind, analyses the operators that correspond to intentional actions as the quasi-stable harmonic oscillation states of macroscopic subsystems of the brain, each action extending nonlocally and representing a single conscious thought. Free choice over the occurrence of a Process I event is influenced by the quantum Zeno effect, by which increasing the rapidity of similar Process I events holds the brain state in place in an intentional-choice “Yes” feedback, despite the Process II mechanics that would swiftly produce a “No” feedback. This research is based on the Copenhagen quantum theory of Niels Bohr and Werner Heisenberg, which John von Neumann and Eugene Wigner further developed.
This can be expressed symbolically. We can represent the quantum state S of the system being acted upon by Process I mental action as changing to a new state S’, the sum of PSP, the “Yes” feedback, and (1 – P)S(1 – P), the alternative “No” (“Not-Yes”) possibility:
S → S’ = PSP + (1 – P)S(1 – P)
The projection operator P is required to satisfy P = PP. This implies that P(1 – P) = (1 – P)P = 0.
Given the formula for the probability that the agent will experience the feedback “Yes”, with tr representing the trace operation, which is cyclic over products of operators (the quantum analog of the classical process of giving a priori equal weighting to equal volumes of phase space):
tr PSP / tr S
Let {P} be the set of actions P that correspond to possible mental intentions. Then let P(t) be “the most probable P in {P}” i.e. that which maximises tr PS(t)P/tr S(t), where the probability is defined by brain state S(t).
The formula for the transition from the state PSP at time t = 0 to the state (1 – P) S(t) (1 – P) at time t is:
(1 – P) e^(-iHt) PSP e^(iHt) (1 – P) = order t².
For small t the expression e^(iHt) becomes 1 + iHt + order t² (and e^(-iHt) becomes 1 – iHt + order t²); since P(1 – P) = (1 – P)P = 0, the zeroth- and first-order terms all vanish, leaving only contributions of order t².
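To make the algebra above concrete, here is a small numerical sketch. The two-level matrices H and S are my own illustrative choices, not drawn from Stapp’s text; the sketch checks the idempotence of P, the “Yes”/“No” decomposition, and the order-t² suppression of the transition:

```python
import numpy as np

# Illustrative two-level system (H and S chosen for demonstration only).
P = np.array([[1.0, 0.0], [0.0, 0.0]])   # projection operator: P = PP
I = np.eye(2)
S = np.array([[0.6, 0.2], [0.2, 0.4]])   # a density matrix with tr S = 1
H = np.array([[0.0, 1.0], [1.0, 0.0]])   # Hamiltonian (Pauli x), so H @ H = I

assert np.allclose(P @ P, P)             # P = PP
assert np.allclose(P @ (I - P), 0)       # hence P(1 - P) = (1 - P)P = 0

# Process I action: S -> S' = PSP + (1 - P) S (1 - P)
S_prime = P @ S @ P + (I - P) @ S @ (I - P)

# Probability of the "Yes" feedback: tr PSP / tr S (and its complement)
p_yes = np.trace(P @ S @ P) / np.trace(S)
p_no = np.trace((I - P) @ S @ (I - P)) / np.trace(S)

def leak_norm(t):
    """Norm of (1 - P) e^(-iHt) PSP e^(iHt) (1 - P); should scale as t**2."""
    # Since H @ H = I, the matrix exponential has the closed form
    # e^(-iHt) = cos(t) I - i sin(t) H.
    U = np.cos(t) * I - 1j * np.sin(t) * H
    leak = (I - P) @ U @ (P @ S @ P) @ U.conj().T @ (I - P)
    return np.linalg.norm(leak)
```

Shrinking t by a factor of ten should shrink `leak_norm(t)` by roughly a hundred, which is the order-t² behaviour the expansion above predicts.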
We have already gone one step further than this, even before the invention of Stapp’s research hypothesis. If only more of the public could drop their preoccupation with reductionism to the microscopic elementary dynamics of neurotransmitters. Cosmological physicists, theoretical biologists, and philosophers are generally more astute at handling the theory of mind vital to discussing consciousness than are cognitive scientists, psychologists, and neurologists; the primary model of the latter groups remains, even with its quantum consciousness upgrades, closer to the original synthesis of chemistry and computer science than to a branch of biology or physical anthropology. Yes, we do have brains of billions of neurons, which one could still, for some reason, break down into pieces; but we might instead consider the macrostate of all this assembled biological matter as what is, in mathematical science at least, a statistical whole, somewhat like what can be understood in the questions posed by thermodynamics problems (for one easy example, the state of a gas in a container is dictated by the macroscopic variables of volume, pressure, and temperature; measurements taken irrespective of the behaviour of individual particles).
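As a tiny worked version of that gas-in-a-container example (the numbers here are my own illustrative values), the macrostate follows from a handful of bulk variables via the ideal gas law, with no reference to any individual particle:

```python
# Ideal gas law: p V = n R T. Three macroscopic variables fix the state;
# no individual particle is consulted. Values below are illustrative.
R = 8.314  # molar gas constant, J/(mol*K)

def pressure(n_mol, T_kelvin, V_m3):
    """Macrostate pressure in pascals from amount, temperature, and volume."""
    return n_mol * R * T_kelvin / V_m3

# One mole at room temperature (298.15 K) in 24.8 litres:
p = pressure(1.0, 298.15, 0.0248)  # roughly one atmosphere (~100 kPa)
```

The same spirit animates the macrostate view of the brain the paragraph above argues for: the description lives at the level of the whole, not the parts.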
Curiously, John Locke (in his Essay Concerning Human Understanding IV.x.10) also notes our prejudice in favour of the microscopic, and our reluctance to ascribe to matter in the large the power to generate mentality:
Divide matter into as minute parts as you will, which we are apt to imagine a sort of spiritualizing or making a thinking thing of it; vary the figure and motion of it as much as you please; a globe, cube, prism, cylinder, &c., whose diameters are but 1,000,000th part of a gry [one gry = 100th of an inch], will operate no otherwise upon other bodies of proportionate bulk than those of an inch or foot diameter; and you may as rationally expect to produce sense, thought, and knowledge, by putting together in a certain figure and motion gross particles of matter, as by those that are the very minutest that do anywhere exist. They knock, impel, and resist one another just as the greater do, and that is all they can do.
With more sense perhaps than the clockwork ‘compute or else’ chemists above, Michael Lockwood has contributed his own research on neurophysiology and a permutation of quantum mechanics that considers biophysical macrostates, in a solid introductory book titled Mind, Brain and the Quantum: The Compound ‘I’, which includes a thorough philosophical introduction and a critical evaluation of current strong-reductionist materialist biases for the individually microscopic, and of the unconvincing and erroneous adoption of behaviourist methodological functionalism in cognitive science and neuroscience.
Lockwood writes, in Chapter 2:
What is really at stake here is the question whether there is anything that in principle is going to be left out of any description, however comprehensive in its own terms, which is couched purely in the language of physics. ... In the first place, the concepts and modes of description employed by physicists are constantly being amended and augmented ... What we are really talking about here is reducibility (or reductionism)... Consider, for example, the question whether physiology is in principle reducible to physics. I would argue that in one sense it probably is and in another sense pretty clearly is not. ... Given a statement couched in the language of physiology, one could not, even in principle, translate it into a logically equivalent (though obviously vastly more complicated) statement couched purely in the language of physics. We might say, then, that physiology is not strongly reducible to physics.
Later he elaborates:
Compare two possible philosophical positions. First, there is that held by Descartes, who thought that the (conscious) mind was a distinct immaterial substance (in the philosopher’s sense of a persisting entity), lacking a spatial location but nevertheless causally interacting with matter. This position is neither physicalist nor materialist. Now consider a different sort of position, according to which the mind does not exist over and above the brain: states of awareness are brain states, and are located in space (to the extent, anyway, that the states of any material object may be said to be). But those brain states that are identical with states of awareness have, besides their physical or physiological properties, certain additional mental properties. Thus the patch of phenomenal red or (better perhaps) the state of one’s being aware of such a patch, is just a state of one’s brain; but its having the subjective character it does (as an experience of red, for example, as opposed to yellow) is a property of that state additional to its physical or physiological properties.
Lockwood is clear in his criticism of functionalism in Chapter 3:
The great white hope of physicalists and materialists, in recent years, has been a doctrine called functionalism (Putnam, 1971; Dennett, 1978; Loar, 1982; Smith and Jones, 1986). It is a theory born of the computer age and taken up with enthusiasm by many researchers within the field of artificial intelligence.
He explains that it has its roots in behaviourism, which he describes as a methodological approach in psychology, which involves treating human and other organisms as essentially black boxes (systems viewed in terms of inputs and outputs without knowledge of internal workings).
What behaviourist psychologists say, in effect, is: ‘Ignore what goes on inside an animal, human or otherwise... Regard an animal simply as something which receives certain inputs from the environment, that is to say stimuli, and emits certain outputs, that is to say behavioural responses. Our job, as psychologists, is to work out a set of laws that relate responses and stimuli, thus enabling us to predict, at least statistically, what responses will emerge, over a period of time, if the animal is subjected to this, that, or the other stimulus regime.’
To Lockwood, “Philosophical behaviourism is an absurd theory, a straw man. Practically the only philosophers who ever held it ... were certain logical positivists: philosophers, that is, who were wedded to a theory of meaning according to which knowing the meaning of a proposition is a matter of knowing its method of verification”. He suggests further that “Something it might be profitable to ask a philosophical or a methodological behaviourist (if one could still be found) is: what’s so special about the skin? That is to say, why all this emphasis on overt behaviour and external stimuli? Why isn’t what goes on inside the organism of equal significance?”
It is clear behaviourists are looking for a way of construing psychology that would render it strongly reducible to physics; whereas this is simply not necessary.
In Chapter 9 of Stairway to the Mind, Alwyn Scott further elaborates on the problem with (strong) functionalism, explaining that weak functionalism is the position that two dynamic systems are identical if their external behaviours are the same over a limited period of time. One system could have consciousness while the other does not, since two “black boxes” with identical terminal properties over a limited time can be quite different inside. Alternatively, strong functionalism takes the position that the dynamic behaviour of the two systems can be put into an exact one-to-one correspondence.
In a ‘silicon chip’ representation of a mind, this means that appropriate computer circuits would have to be arranged to mimic the dynamics of a conscious human from the individual molecules up, modelling every protein, molecule of ATP, patch of membrane, branch of the cytoskeleton, ramification of dendrite and axon, synapse, neuron, neural assembly or assemblies, and so on up to the entire cerebrum, spinal cord, musculature, and on to the relevant aspects of the cultural configuration. He acknowledges that, assuming this could be done – which is unlikely in the extreme – strong functionalism would be a tautology.
Michael Lockwood’s detailed philosophical work bolsters the research of Stuart Hameroff, who suggests, as with Roger Penrose’s abductive reasoning, that the phenomena of consciousness could arise from an adjusted quantum gravity model, utilizing Einstein’s general relativity, that describes the physical wave function collapse of isolated linear eigenstates in quantum superposition, following the Schrödinger equation. The extremely weak gravitational force supplies a small energy uncertainty, not in direct competition with other biological process energies, that allows choice between separated spacetime geometries. Quantum channels are formed from the electric (and magnetic/electron-spin, and possibly a synergistic nuclear-spin) Fröhlich-type coherent oscillation of coupled dipoles at Planck scale, the dipole states mediated by London force attractions among the phenyl and indole rings of amino acids, the latter indole rings separated by about 2 nm.
We can recall that quantum states (of which eigenstates describe the quantum state after a measurement, the eigenstate corresponding to the measurement and the value measured) are a mathematical tool utilized by quantum mechanics to represent a physical system. A full set of compatible eigenstate measurements produces a pure state, whereas any other state is described as a mixed state. The eigenstate solutions to the Schrödinger equation can be formed into pure states.
Pure solution states are labelled with quantized numbers (for example, to fully specify the energy spectrum state of an electron in a hydrogen atom, four quantum numbers are needed: the principal quantum number n, the angular momentum quantum number l, the magnetic quantum number m, and the spin-z component s_z). At a given time t, the pure state corresponds to a vector in a separable complex Hilbert space, where each measurable physical quantity is associated with a mathematical operator.
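The counting implicit in those four quantum numbers can be sketched directly; this is a standard textbook exercise of my own, not something from the source text. For each n, l runs from 0 to n − 1, m runs from −l to l, and s_z takes two values, giving a shell degeneracy of 2n²:

```python
# Count hydrogen-atom states for a given principal quantum number n:
# l = 0..n-1, m = -l..l (2l + 1 values), s_z = +1/2 or -1/2 (2 values).
def shell_states(n):
    return sum(2 * (2 * l + 1) for l in range(n))

# The closed form is 2 * n**2: 2 states for n = 1, 8 for n = 2, 18 for n = 3,
# matching the familiar electron-shell capacities of the periodic table.
```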
As stated above, pure states can be superposed. If |α⟩ and |β⟩ are two kets corresponding to quantum states, the ket c_α|α⟩ + c_β|β⟩ is also a quantum state of the same system. Both c_α and c_β can be complex numbers; their relative amplitude and relative phase will influence the resulting quantum state.
Writing the superposed state, defining the norm of the state as: [unavailable] and extracting the common factors gives:
[unavailable]
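The original symbols could not be transcribed, but the standard textbook form of such a superposition (my own notation, not necessarily the author’s, and assuming the two kets are orthonormal) is:

```latex
\lvert\psi\rangle = \alpha\,\lvert\psi_1\rangle + \beta\,\lvert\psi_2\rangle,
\qquad
\langle\psi\vert\psi\rangle = |\alpha|^2 + |\beta|^2 ,
```

and, writing $\alpha = |\alpha|e^{i\theta_\alpha}$ and $\beta = |\beta|e^{i\theta_\beta}$ and extracting the common (global) phase factor:

```latex
\lvert\psi\rangle = e^{i\theta_\alpha}\left(
  |\alpha|\,\lvert\psi_1\rangle
  + |\beta|\,e^{i(\theta_\beta - \theta_\alpha)}\,\lvert\psi_2\rangle
\right),
```

which makes explicit the point made above: only the relative amplitude $|\beta|/|\alpha|$ and the relative phase $\theta_\beta - \theta_\alpha$ influence the resulting physical state.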
The Schrödinger equation itself is a partial differential equation that governs the wave-function (the superposition of several eigenstates) of a non-relativistic quantum mechanical system, and it acts as the quantum counterpart to Newton’s second law of motion in classical mechanics (‘The change of motion of an object is proportional to the force impressed; and is made in the direction of the straight line in which the force is impressed’). In classical terms, motion, i.e. momentum, is the mass multiplied by the velocity, and force is the mass multiplied by the time derivative of the velocity, i.e. the acceleration; the time derivative of the momentum is therefore the force. The Schrödinger equation gives the evolution over time of the wave-function of an isolated physical system. It is based on a postulate by Louis de Broglie that all matter has an associated matter wave, such that matter exhibits wave-like behaviour, which led Schrödinger to seek a proper three-dimensional wave equation for the electron. The wave-function, a function that assigns a complex number to each point in space, is one of the most important concepts in quantum mechanics. The equation is nonrelativistic because it contains a first derivative in time but a second derivative in space, and therefore space and time are not on an equal footing.
Every particle is represented by a wave function, typically denoted by the Greek letter psi (Ψ), which depends on position and time. The expression for a particle’s wave function contains everything that can be known about the physical system, and values for the different observable quantities are obtained by applying the corresponding operator to it.
The square of the modulus of the wave function gives the probability density for finding the particle at position x at time t, but only if the function is ‘normalised’, i.e. the integral of the square modulus over all possible locations must equal 1: the particle must, with certainty, be located somewhere.
The wave function information is probabilistic, so there is no predictive power to one observation alone. However, one is able to determine the average over multiple measurements. One can also use the wave function to calculate the ‘expectation value’ for the position of the particle at time t, which is the average value of x you would obtain if you repeated the measurement many times. Again, it does not tell you anything about any single measurement, and in truth the wave function is closer to a probability distribution for a single particle than anything concrete. By using the appropriate operator, expectation values for momentum, energy, and other observable quantities can also be obtained.
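As an illustration (my own sketch, not from the text), both the normalisation and the expectation value ⟨x⟩ can be computed numerically for a hypothetical Gaussian wave packet, whose squared modulus is simply a Gaussian probability density centred at x0:

```python
import math

# Hypothetical normalised Gaussian wave packet centred at X0 with width SIGMA:
# psi(x) = (2*pi*sigma^2)^(-1/4) * exp(-(x - x0)^2 / (4*sigma^2)).
# Its squared modulus is a normal probability density with mean X0 and std SIGMA.
X0, SIGMA = 1.5, 0.5

def psi(x: float) -> float:
    norm = (2 * math.pi * SIGMA**2) ** -0.25
    return norm * math.exp(-((x - X0) ** 2) / (4 * SIGMA**2))

# Discretise space and integrate |psi|^2 (the probability density) on a grid.
dx = 0.001
grid = [-10 + i * dx for i in range(int(20 / dx))]

total = sum(psi(x) ** 2 * dx for x in grid)          # normalisation: should be ~1
x_expect = sum(x * psi(x) ** 2 * dx for x in grid)   # expectation value <x>: ~X0

print(round(total, 3), round(x_expect, 3))  # ~1.0 and ~1.5
```

The expectation value recovered here is exactly the ‘average over many repeated measurements’ described above; no single measurement is predicted.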
Just to refresh further, the Schrödinger Equation gives a description of a quantum state evolving with time (describing the energy and momentum of a wave function – a wave equation for the wave function of the particle in question, hence why its use to predict the future state of a system is called ‘wave mechanics’).
To apply the equation, given that it derives from the conservation of energy, one must write down an operator called the Hamiltonian, then insert it into the Schrödinger equation. The resulting partial differential equation is solved for the wave function, which contains the information about the system. In practice, the square of the absolute value of the wave function at each point is taken to define the probability density function.
In other words, from the beginning, write down the simplest form of the equation:
HΨ = iℏ ∂Ψ/∂t
ℏ is the reduced Planck constant (i.e. h/2π, with the value 1.054571817...×10⁻³⁴ J⋅s).
i is an imaginary unit (the square root of -1), used to form a complex number.
H is the Hamiltonian operator (the total energy of the quantum system i.e. the full kinetic and potential energy of the particles constituting it).
With the Hamiltonian written out in full (in one spatial dimension), the equation is generally given as:
−(ℏ²/2m) ∂²Ψ/∂x² + V(x)Ψ = iℏ ∂Ψ/∂t
Sometimes, in three-dimensional problems, the second partial derivative in space is written as a Laplacian operator, a del symbol squared (∇²).
Essentially, the Hamiltonian works on the function to describe its evolution in space and time, although in the time-independent equation, which does not depend on t, the Hamiltonian gives the energy of the system. However, we are working here with the time-dependent equation.
A good simple case to consider is a free particle because the potential energy V = 0 in that case. The solution takes the form of a plane wave, with the mathematical form:
Ψ = Ae^(i(kx − ωt))
Where A is an arbitrary constant for normalisation, e is the base of the natural logarithm used in the exponential function, i is the imaginary unit, k = 2π/λ (λ being the wavelength), ω = E/ℏ (where E is the energy), and x and t are, of course, position and time respectively.
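One can check symbolically (a sketch using the sympy library; the symbols mirror those defined above) that this plane wave satisfies the free-particle equation, provided the free-particle dispersion relation ω = ℏk²/2m (i.e. E = ℏω = p²/2m) holds:

```python
import sympy as sp

x, t, k, m, hbar, A = sp.symbols('x t k m hbar A', positive=True)
omega = hbar * k**2 / (2 * m)   # free-particle dispersion: E = hbar*omega = (hbar*k)^2 / 2m
psi = A * sp.exp(sp.I * (k * x - omega * t))  # the plane wave A*e^(i(kx - wt))

lhs = sp.I * hbar * sp.diff(psi, t)             # i*hbar * dPsi/dt
rhs = -hbar**2 / (2 * m) * sp.diff(psi, x, 2)   # -(hbar^2/2m) * d^2Psi/dx^2, with V = 0

print(sp.simplify(lhs - rhs))  # 0: the plane wave solves the free Schrödinger equation
```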
Briefly, ‘wave function collapse’ (also known as reduction of the state vector) occurs when a wave function reduces to a single random eigenstate of that observable (i.e. that linear operator) from a classical vantage point due to interaction with the outside world – an observation.
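A toy sketch of that idea (my own illustration, with made-up amplitudes): repeated ‘measurements’ each select a single eigenstate at random, with probabilities given by the squared moduli of the amplitudes (the Born rule):

```python
import random

random.seed(1)

# Hypothetical two-state superposition; |0.6|^2 + |0.8i|^2 = 0.36 + 0.64 = 1.
amplitudes = {1: complex(0.6, 0.0), 2: complex(0.0, 0.8)}

def measure(amps):
    """'Collapse': pick one eigenstate at random with probability |amplitude|^2."""
    states = list(amps)
    probs = [abs(a) ** 2 for a in amps.values()]
    return random.choices(states, weights=probs, k=1)[0]

counts = {1: 0, 2: 0}
for _ in range(10_000):
    counts[measure(amplitudes)] += 1

print(counts[1] / 10_000)  # ~0.36, matching |0.6|^2
```

A single run of `measure` is unpredictable; only the statistics over many runs recover the amplitudes, which is the sense in which the wave function behaves like a probability distribution.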
At the crux of his theory, from a structural brain organisation perspective, Penrose proposes a resonance of 10 MHz with coherence times of 10⁻⁷ seconds, each coherent state having its own discrete instance of accompanying spacetime curvature (which becomes unstable and collapses when separated by more than one Planck length). He proposes 32 resonance rings in helical qubit pathways running through non-polar hydrophobic pockets of delocalized π-bond electrons, each pocket 8 nm apart, inside the tubulin protein dimer subunits of polymer microtubules: hollow cylinders in a polarized array that make up the cytoskeleton of neuronal dendrites and function as protein transporters for neurotrophins, organelles, etc.
The Fibonacci geometry configuration of the 3-dimensional lattices provides quantum error correction, thus resisting quantum decoherence. This ‘wireless communication’ of axons, via a fractal arrangement of resonant vibrations with a diameter of around 100 micrometers, occurs globally across all neurons of the entire brain. It develops on the former hypothesis of entangled electron condensates potentially jumping to other neurons and gliocytes by quantum tunneling across synaptic gap junctions (now rejected by Penrose, although the idea of tunnelling is still present in another competing quantum theory, although not strictly a model of consciousness, that of Catecholaminergic Neuron Electron Transport, or ‘CNET’, which I will not cover here). The theory further postulates that this quantum activity is the source of the 40 Hz gamma rhythms in humans. These neural oscillations correlate with large-scale brain activity and cognitive phenomena such as working memory, attention, and perceptual grouping.
Penrose refers to his theory as Orchestrated Objective Reduction (‘Orch OR’). In other words, he posits that, rather than arising as a product of neural connections, consciousness appears instead at a fine-scale quantum level inside neurons, based on non-computable quantum processing performed by qubits formed collectively on cellular microtubules, yet amplified in the neurons. The theory was inspired by the ‘Penrose-Lucas argument’, developed by Penrose and John Lucas in response to Kurt Gödel’s incompleteness theorem. The theorem states, basically, that a finite, deterministic, mechanical procedure capable of proving basic arithmetic in its theory cannot be both deductively consistent (i.e. never lead to a logical contradiction) and complete (in meta-logic, a formal system is complete with regard to a property if every formula having that property can be derived from within that system itself).
The Penrose-Lucas argument holds that this incompleteness does not apply to humans (hence their conclusion that Turing machines cannot be mathematically insightful): Penrose argues that, though a formal proof system cannot prove its own consistency, human mathematicians can see the truth of statements that such a system cannot prove.
He also thinks that wave function collapse is indeed the foremost example of a non-computable process. If the collapse into that single random eigenstate is truly random, then it cannot be predicted deterministically by any process or algorithm.
However, rather than relying on a random environment to induce collapse, Penrose suggests that, in isolated systems, we augment our understanding of wave function collapse, hence what he calls objective reduction, where the separated spacetime pieces become unstable at separations above the Planck scale of 10⁻³⁵ m and collapse to just one of the possible states. He constructs the equation for his indeterminacy principle as follows:
t ≈ ℏ/E_G
Where t is the time until objective reduction occurs, and E_G is the gravitational self-energy (or the degree of spacetime separation given by the superpositioned mass).
Thus, the greater the mass-energy of the object, the faster it will undergo objective reduction, and vice versa. Atomic-level superpositions would take approximately 10 million years to reach this threshold, while an isolated 1 kg object would take 10⁻³⁷ s. He posits that objects somewhere in the middle between these two scales would collapse on a timescale relevant to neural processing. He claims that such information is Platonic, i.e. it just is, existing as pure mathematical truth, and corresponds to the geometry of fundamental spacetime.
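Penrose’s relation t ≈ ℏ/E_G is simple enough to sketch numerically (the function and sample values below are my own illustration; E_G itself depends heavily on how the superposed mass distribution is modelled, which is why the quoted timescales span so many orders of magnitude):

```python
HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def reduction_time(e_g: float) -> float:
    """Time (s) until objective reduction, for gravitational self-energy e_g (J)."""
    return HBAR / e_g

# The inverse relationship described above: a larger gravitational self-energy
# (a more massive, more widely separated superposition) collapses sooner.
small_eg, large_eg = 1e-30, 1e-20  # hypothetical E_G values, in joules
print(reduction_time(small_eg) > reduction_time(large_eg))  # True
```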
Unfortunately, this makes it hard for me to get away from the tentative conclusion that Penrose’s argument has fallen into the same basic trap I outlined briefly for the case of Galileo above. Much as Orch OR has much going for it compared with my initial critical evaluation of strong AI, these shared Formal positions do not fully satisfy as a tangible explanation, even though this latter theory of reduction is by far the more innocuous of the two in terms of the ramifications for any analysis of human nature: what it is to be us as a distinct life-form, and what it is to be conscious life at all. ‘Innocuous’, of course, has no bearing on truth.
However, there is at least some external evidence that supports the structural mechanisms of the theory, even if there remain a great many loose ends, and one effectively closed proposition, regarding the intuitive invocation of Platonic reality (a potential loose end which I would, personally, be loath to tie up prematurely, favouring further exploration, just in case any more of a scientific explanation exists to glean from this idea). As a side note, I’m not entirely sure that it is obvious that the human mind is a closed system either, but I’ll return to that later.
The physicist and nanoscientist Anirban Bandyopadhyay, working with the National Institute for Materials Science in Japan, discovered quantum vibrations in microtubules in 2013, which backs up the Penrose/Hameroff position, as do the April 2022 results of two related experiments at the University of Alberta and Princeton University, announced at The Science of Consciousness conference, providing further evidence to support quantum processes operating within microtubules.
In the first study, featuring Stuart Hameroff, Jack Tuszyński of the University of Alberta demonstrated that anesthetics shorten the duration of a process known as delayed luminescence, in which microtubules and tubulins re-emit trapped light. Tuszyński suspects that the phenomenon has a quantum origin, with superradiance being investigated as one possibility.
In the second experiment, Gregory D. Scholes and Aarat Kalra of Princeton University used lasers to excite molecules within tubulins, causing a prolonged excitation to diffuse through microtubules further than expected, which did not occur when repeated under anaesthesia.
A 2024 study, titled “Ultraviolet Superradiance from Mega-Networks of Tryptophan in Biological Architectures” and published in The Journal of Physical Chemistry, confirmed superradiance in networks of tryptophans. The study states that “by analyzing the coupling with the electromagnetic field of mega-networks of Trp present in these biologically relevant architectures, we find the emergence of collective quantum optical effects, namely, superradiant and subradiant eigenmodes. ... our work demonstrates that collective and cooperative UV excitations in mega-networks of Trp support robust quantum states in protein aggregates, with observed consequences even under thermal equilibrium conditions.”
A prior study by T.J. Craddock et al., also featuring Stuart Hameroff, published in 2015 in Current Topics in Medicinal Chemistry, 15 (6): 523–533, and titled “Anesthetics act in quantum channels in brain microtubules to prevent consciousness”, suggests that “anesthetic molecules can impair π-resonance energy transfer and exciton hopping in 'quantum channels' of tryptophan rings in tubulin, and thus account for selective action of anesthetics on consciousness and memory”.
Another study, on tadpoles, performed prior to the studies above, titled “Direct Modulation of Microtubule Stability Contributes to Anthracene General Anesthesia”, was published by Daniel J. Emerson, et al., in J Am Chem Soc, March 29th 2013, 135(14):5389–5398, and is available on the National Library of Medicine’s website. The study states that the researchers have obtained “strong evidence that destabilization of neuronal microtubules provides a path to achieving general anesthesia”.
One more early study, confirming in its report that 0.5 mM [14C]halothane has been found to bind to tubulin monomers alongside three dozen other proteins, was performed in 2007 by John W. Tobias, et al., for the Journal of Proteome Research, 6 (2): 582–592, titled “Halothane binding proteome in human brain cortex”, and is available on the ACS Publications website.
Jack A. Tuszyński provides additional introductory resources to explore in his Molecular and Cellular Biophysics and The Emerging Physics of Consciousness. As he reminds us in the former, on page 53 of his first chapter, What Is Life? (although he is less convinced by wholly non-computational arguments):
...there is a very intricate structure of protein filaments filling both the nerve cells body and its axons. Inside the axons one finds a parallel architecture of microtubular bundles interconnected with other proteins, a structure that resembles parallel computer wiring (Hameroff, S. The Ultimate Computing, Elsevier, Amsterdam, 1987) leading to the hypothesis that this microtubular structure may be involved in subcellular (nano-scale, possibly quantum) computation. Brown and Tuszyński (Brown, J. A. and Tuszyński, J. A. Phys. Rev. E 56 5834-5840, 1997) demonstrated theoretically their feasibility as information storing and processing devices. This then suggests that living cells perform significant computational tasks. ... Probably only a small fraction of the cell can be seen as pure information content. Is there something else in living systems that neither machines nor computers possess? Probably yes. At least two properties distinguish animate matter from inanimate objects: procreation and autonomy expressed by free will (to move against the whims of thermal noise, in the very least).
More on microtubules can be found in Pierre Dustin’s classic textbook Microtubules, including a helpful student’s distinction on page 39 that explains the nature of the heterodimers from which microtubules are assembled (itself useful in full for reviewing some studies critical of certain aspects of Penrose’s theory: for instance, the proposed predominance of α-lattice microtubules, more suitable for information processing, seems potentially falsified by M. Kikkawa, et al., writing in 1994 in the Journal of Cell Biology. 127 (6): 1965–1971 in a paper titled “Direct visualization of the microtubule lattice seam both in vitro and in vivo” which showed that all in vivo microtubules have a β-lattice and a seam):
The polyacrylamide gel electrophoresis of tubulin preparations shows two closely located bands, namely α and β tubulins, the β subunit having the greater electrophoretic mobility. ... This separation results from a difference of charge. ... [I]n α tubulin, no differences were found in the 25 first N-terminal amino acids. ... It is likely that α and β tubulins derive from a common ancestor protein. They do differ by the location of their specific binding sites for guanine nucleotides. ... Two definite biochemical properties have been observed: β tubulin is specifically phosphorylated ... and α tubulin is the substrate of a curious enzyme, tubulin tyrosine ligase, which, in the presence of ATP, without any specific tRNA, attaches a single tyrosine molecule to the N-terminal group of the molecule. The substrate is the tubulin dimer, and tyrosilation does not appear to interfere with MT [microtubule] assembly.
Also, on pages 23-33, the size and structure of the microtubules (and their subunits) is confirmed, information that is also useful if one is considering applying the theory of Orch OR to non-human animal consciousness:
MT appear, when seen in light microscopy, more flexuous than was first imagined. ... Their diameter is constant and the clear central zone ... indicates that they are truly tubular. The diameter measures ... 24± 2 nm ... With a wall about 5 nm thick, and the hollow central core about 15 nm in diameter. These dimensions are directly related to the size and number of the tubulin subunits. ... [I]n immunofluorescence, the whole cytoplasm appears to be supported by a cytoplasmic complex of MT ... as the MT become themselves the light sources. ... Negative staining of isolated MT demonstrates parallel longitudinal protofilaments. ... These protofilaments, which have a beaded structure, are made of subunits with a diameter close to 5 nm. ... MT from most cells have the same size and show a helical arrangement of 13 subunits with a pitch of about 10-25 degrees. ... From many observations on vertebrates and invertebrates, it is apparent that the basic structure of MT is closely similar in the whole animal kingdom. ... The fact that MT splay out into protofilaments shows that the lateral bonds between tubulin microtubules are weaker than the longitudinal ones. ... The tubulin helix is left-handed. ... MT are rarely seen to come into contact with each other. ... In transmission electron microscopy, they are separated by a clear zone, about 10 nm wide.
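As a rough consistency check (my own arithmetic, not Dustin’s), the quoted dimensions fit together: thirteen subunits of roughly 5 nm per helical turn should approximately match the circumference of a circle drawn through the middle of the ~5 nm wall of a 24 nm tube:

```python
import math

# Figures as quoted from Dustin's Microtubules (pages 23-33):
outer_diameter = 24.0   # nm, outer diameter of the microtubule
wall_thickness = 5.0    # nm, thickness of the wall
subunits_per_turn = 13  # helical arrangement of 13 subunits
subunit_diameter = 5.0  # nm, diameter of one beaded subunit

# Circumference through the centre of the wall vs. 13 beads laid end to end.
mid_wall_circumference = math.pi * (outer_diameter - wall_thickness)  # ~59.7 nm
ring_of_subunits = subunits_per_turn * subunit_diameter               # 65.0 nm

print(round(mid_wall_circumference, 1), ring_of_subunits)  # agree to within ~10%
```

The ~10% discrepancy is unsurprising given the helical pitch and the approximate subunit size; the point is only that the quoted numbers are mutually plausible.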
A comprehensive evaluation of long-range cellular effects and the resonant interactions of biological tissues with low-intensity electromagnetic radiation is presented in Biological Coherence and Response to External Stimuli by Herbert Fröhlich. In relation to Penrose, however, Jeffrey R. Reimers, et al., published a paper on 17th March 2009 in PNAS, 106 (11): 4219–4224, titled “Weak, strong, and coherent regimes of Fröhlich condensation and their applications to terahertz medicine and quantum consciousness”, which seems to contradict the presence of a Fröhlich condensate in the coherent oscillation of the dipole molecules.
Further presentations on coherent vibrations and an explanation of self-assembly are detailed at great length in Biophysical Aspects of Coherence and Biological Order by Jiří Pokorný and Tsu-Ming Wu.
Finally, a thorough examination of patterns of synchronized and chaotic oscillations in biological nervous systems and dynamical system theory is given in Oscillations in Neural Systems, edited by Daniel S. Levine, V. Timothy Shirey, and Vincent R. Brown.
However, rather than delve into any more biophysical or neuroscientific detail, it must be acknowledged at this juncture that, among many other critics, the Australian philosopher David Chalmers has argued against quantum consciousness (whilst not yet dismissing a quantum effect, discussing instead how quantum mechanics may instead relate to dualistic consciousness, specifically, property dualism – as opposed to the Cartesian substance dualism).
He has expressed scepticism that any new physics can resolve the ‘hard problem of consciousness’, as he refers to it (i.e. that physicalism leads to an explanatory gap, where one cannot adequately unite both biological form/structure and subjective mental states into a cohesive explanation of reality), and argued that quantum theories of consciousness suffer from the same weakness as more conventional theories. Just as he has argued that there is no particular reason why specific macroscopic physical features in the brain should give rise to consciousness, he also holds that there is no particular reason why a specific quantum feature, such as the EM field in the brain, should give rise to it either.
Again, as stated by Alwyn Scott in Stairway to the Mind, the difficult aspect of consciousness has to do with the experience of consciousness. Thus, he reiterates Chalmers (1995) in suggesting that we divide questions of consciousness into those that are ‘easy’ and ‘the hard problem’: that of understanding the nature of conscious experience, to which Chalmers concludes, “for any physical process we specify there will be a question that the physical theory leaves unanswered: Why should this process give rise to experience? For any such process, it remains conceptually coherent that it could be instantiated in the absence of experience. It follows that no mere account of the physical process will tell us why experience arises.”
This is a vital point that I will return to. However, as Scott himself acknowledges:
My position is close to the naturalistic dualism of David Chalmers ... (I don’t like labels, but if forced to choose I would prefer to call it hierarchical or emergent dualism.) From both perspectives, it is assumed that consciousness arises from physical systems but in a nonreductive manner, so it is not necessary to explain in purely physical terms how conscious experience enters the picture. This nonreductiveness is the “extra ingredient” that Chalmers sees as being needed to escape the trap of mechanistic theories that purport to explain consciousness.
Scott closes his argument with the idea that three main suggestions have been put forward to explain the nature of the mind:
1. That consciousness may be embodied in a quantum mechanical wave function. [Penrose]
2. That it may be a new primitive, a fundamental property like the mass or electrical charge of an elementary particle. [Stapp]
3. That it may emerge from several levels of the mental hyperstructure in a nonreductive manner. [Chalmers]
He picks the third option, in accordance with the views of Charles Goodwin (1994), who observes:
It is another of those curious paradoxes that a large number of scientists who work in the area of artificial intelligence, and the cognitive sciences generally, deny that consciousness has a fundamental reality and say that it is basically an epiphenomenon of brain activity – the electrical and molecular processes that go on in brain cells. This is just like the denial on the part of many biologists that organisms have any fundamental reality that cannot be explained by genes and molecular activities.
However, Scott does not elaborate further on this emergent dualism, beyond, of course, the position intended to arise from an analysis of the progressing chapters of his book taken in totality. That still does not seem to really work as a means of answering the question, or of closing some outstanding gaps between levels, although his efforts remain a very enjoyable read. I will deal a little more with such ideas, and indeed with property dualism, which I think can, on one understanding, be said to be similar enough, in chapter 6 of this book (titled “The Godly Universe”).
In a different theory altogether, closer to the work of Earl Mac Cormac and his colleague, a nonlinear dynamical model for brain functioning has been developed in The Postmodern Brain by Gordon Globus, where particular interest can be found in chapter 4, Towards a Noncomputational Cognitive Science: The Self-Tuning Brain, explaining non-computation in biologically realistic neural nets where:
“The outside is not represented inside but participates on the inside as but one constraint on a self-organizing process”, which changes as a function of chemical modulation, where “the difference between computation in simplified nets and noncomputation in realistic nets can be presented in terms of ‘state space’” and where “the state of the net is given by the activation value for each of its N nodes. The network state, then, is represented by a point in an N-dimensional space (‘hyperspace’) whose N dimensions represent the activation values of the N nodes. State change over time is represented by a trajectory (path) in state space. Hyperspace additionally has a topography … in the case of living nets where there is continual tuning going on, not even theoretical probabilities are fixed; they change moment to moment, which gives an autodynamic property to network functioning.”
As he explains in more detail:
...the topography is decidedly abnormal: it continually fluctuates ... Results, then, are not fixed but arise spontaneously out of the ever-changing interaction between the flow of input to the net, the relatively static memory traces, and the net’s flowing attunement … In such participatory holistic cooperation and autodynamics movement, the character of computation as driven, mechanical information processing is lost.
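Globus’s contrast can be caricatured in code (a toy sketch of my own, not his model): a conventional net iterates a fixed update rule over its state, a point in N-dimensional space, whereas in a ‘self-tuning’ net the rule itself (the topography) drifts at every step, so the trajectory is never generated by a fixed map:

```python
import math
import random

random.seed(0)
N = 4  # nodes; the network state is a point in N-dimensional state space

state = [random.uniform(-1, 1) for _ in range(N)]
weights = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]

def step(state, weights):
    # Conventional update: each node takes a squashed weighted sum of all nodes.
    return [math.tanh(sum(w * s for w, s in zip(row, state))) for row in weights]

trajectory = [state]
for _ in range(50):
    state = step(state, weights)
    # 'Self-tuning': the weights (the topography) drift at every step, so the
    # map generating the trajectory fluctuates moment to moment.
    weights = [[w + random.gauss(0, 0.05) for w in row] for row in weights]
    trajectory.append(state)

print(len(trajectory), len(trajectory[0]))  # 51 points in 4-dimensional state space
```

Deleting the weight-drift line recovers an ordinary fixed-rule net; with it, identical inputs at different times meet a different topography, which is the ‘autodynamic’ property described above.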
Referring again to consciousness research in Artificial Intelligence Versus Natural Intelligence, Giuseppe Vitiello has formulated a dissipative quantum model of the brain in which the acquisition of a specific new memory is identified with a fundamental state, known as the “vacuum”, selected from among many unitarily inequivalent states to which the brain-environment system has access, and which is accorded the status of “memory.”
According to Vitiello, thoughts are conceived as aperiodic chaotic trajectories that diverge exponentially (if they have different initial conditions) in this set of vacua ‘attractors.’ Each act of recognition, of association with a specific memory, can be depicted as the approach towards a nearby attractor and the consequent capture by it: intuitive knowledge, non-computational in nature and not translatable into a logical language. Chaotic trajectories originate the brain's ability to respond flexibly to the outside world and to generate novel activity patterns, including those experienced as fresh ideas. This wandering is a characteristic trait of brain activity and thinking.
As Vitiello explains, knowledge of the biomolecular and cellular details is fundamental but insufficient for describing brain activity. Alone, it cannot account for the property of the system of being an open system. The interplay between linearity and nonlinearity plays an interesting role in the dissipative model. Phase transitions between different phases occur in a nonlinear dynamical spontaneous symmetry breakdown, through which boson condensation and coherent states are formed, but linearity holds within each phase. This interplay between linearity and nonlinearity is consistent with observations showing the existence of wave modes and pulse modes. Pulse activity may be observed in experiments based on linear response. On the other hand, their synchronized AM patterns exhibit scale-free (self-similar fractal) dynamics requiring coherence consequent to nonlinear dynamics. Self-similar fractal properties are indeed isomorphic to coherent states, consistent with the dissipative model's underlying coherent many-body dynamics.
In addition to the above, Federico Faggin, in his 2024 book release, Irreducible: Consciousness, Life, Computers, and Human Nature, writes:
I imagine consciousness as a holistic property that is supported by a near-infinity of invisible connections that are neglected when simple systems are studied in a reductive way.
Faggin suggests that to fully understand living organisms, we must imagine them as dynamic energy simultaneously transformed into its three objective dimensions of matter, information, and energy. This transformation is supervised by the deeper inner subjective properties of consciousness and free will, which exist only in quantum reality (the latter statement perhaps echoing the controversial ‘holomovement’ concept proposed by David Bohm in his 1980 book Wholeness and the Implicate Order).
However, again in these examples (and Faggin’s was perhaps the vaguest of all), we are faced with the same challenge: the hard problem of consciousness, and indeed the underlying mind-body problem (‘M/BP’).
Thankfully, Nicholas Humphrey, writing in the Journal of Consciousness Studies, Issue 7, No. 4, in 2000, in an opening paper titled ‘How to Solve the Mind-Body Problem’, thinks he has a solution. He attempts to form an argument couched in Darwinian evolution, conducting an imaginative scenario, on pages 15-16 of the document, explaining that we can:
...return, then, in imagination to the earliest times and imagine a primitive amoeba-like animal floating in the ancient seas. This animal has a defining edge to it, a structural boundary. This boundary is crucial: the animal exists within this boundary ... The boundary holds the animal’s own substance in and the rest of the world out. The boundary is the vital frontier across which exchanges of material and energy and information can take place. Now light falls on the animal, objects bump into it, pressure waves press against it, chemicals stick to it. ... If it is to survive it must evolve the ability to sort out the good from the bad and to respond differently to them. ... Thus when, say, salt arrives at its skin it detects it and makes a characteristic wriggle of activity – it wriggles saltily. When red light falls on it, it makes a different kind of wriggle – it wriggles redly. ... Wriggling redly has been selected as the best response to red light, while wriggling bluely would be the best response to blue light. ... To begin with these wriggles are entirely local responses, organised immediately around the site of stimulation. But later there develops something more like a reflex arc passing via a central ganglion or proto-brain: information arrives from the skin, it gets assessed, and appropriate adaptive action is taken. Still, as yet, these sensory responses are nothing other than responses, and there is no reason to suppose that the animal is in any way mentally aware of what is happening.
Humphrey then asks, what if that animal’s life becomes more complex, to the point that it would be a natural selection advantage to have some kind of inner awareness, most likely to aid what would now be decision-making? It would require the capacity to form mental representations of this sensory stimuli, so as to reflect on how it felt about them.
He argues, on pages 16-17:
...all the requisite details about the stimulation – where the stimulation is occurring, what kind of stimulus it is, and how it should be dealt with – are already encoded in the command signals the animal is issuing when it makes the appropriate sensory response. ... To sense the presence of salt at a certain location on its skin, it need only monitor its own signals for wriggling saltily at that location. ... By monitoring its own responses, it forms a representation of ‘WHAT IS HAPPENING TO ME’. But, at this stage, the animal neither knows nor cares where the stimulation comes from, let alone what the stimulation may imply about the world beyond its body. ... Yet wouldn’t it be better off if it were to care about the world beyond? ... The answer is of course, yes.
Humphrey’s solution to this thought experiment scenario goes as follows, as he posits on page 17 that early animals developed a means of using the information arriving at their body surface for both sensation and perception – after all, the question ‘what is happening to me?’ requires an answer that is qualitative, present-tense, transient, and subjective, whereas the question ‘what is happening out there?’ needs an answer that is quantitative, analytical, permanent and objective:
...there developed in consequence two parallel channels to subserve ... sensation and perception: one providing an affect-laden modality-specific body-centred representation of what the stimulation is doing to me and how I feel about it, the other providing a more neutral, abstract, body-independent representation of the outside world, [and] its experience or proto-experience of sensation ... arises from it monitoring its own command signals for these sensory responses. [A]s this animal continues to evolve ... [it] becomes more independent of its immediate environment. [T]here comes a time when, for example, wriggling saltily or redly at the point of stimulation no longer has any adaptive value at all. ...[E]ven though the animal may no longer have any use for the sensory responses in themselves, it has by this time become quite clearly dependent on the secondary representational functions that these responses have acquired. ... [I]t clearly cannot afford to stop issuing these command signals entirely.
He explains that in order to properly represent the ‘what is happening to me’ phenomenon, the animal must indeed continue to issue commands that would hypothetically produce an appropriate body-area-specific response if they were to transmit into bodily behaviour. However, since this behaviour is no longer wanted, these commands remain virtual ‘as-if’ commands, i.e. commands retaining their original intentional properties but with no real effect. He suggests that over evolutionary time, the sensory activity becomes privatised, with the command signals ‘short-circuited’ before they reach the body surface, instead only reaching so far as the incoming sensory nerve, until the whole process closes off from the outside in an internal loop within the brain. Even once selection becomes irrelevant, and despite drift, these forms are likely always to reflect their evolutionary history. Thus, the responses that started their evolutionary existence as wriggles of acceptance/rejection of a stimulus will still be recognisably of their kind, even in the present day.
He ends his paper by speculating over the existence of an early feedback effects circuit that modified the stimulations that affected it, but that may initially have been too slow to have interesting consequences. Again though, as the process became internalised and the circuit shortened, the conditions could have arisen for a significant degree of recursive interaction, with the command signals for sensory responses looping back on themselves, becoming self-creating eventually, and self-sustaining. Thus, these signals are still initiated by input from the body surface, and are still influenced by it, but they are now also signals about themselves.
It is by this means that Nicholas Humphrey approaches a solution to the mind-body problem that biologically avoids the dualism of two distinct substances (‘soul’ and ‘body’).
Writing later in the journal, Naomi Eilan adds her own academic perspective, in agreement with two of Humphrey’s general a priori intuitions of the connection between our concepts of consciousness, knowledge-yielding representation, action, and affect:
1. The exact nature of this basic connection must be understood correctly in order to engage with the question of the place of consciousness in the natural world.
2. That there is more to this matter than claims that we lack the wherewithal to answer the question at all.
However, she also raises some problems, including with the distinction Humphrey makes between perception and sensation, using the example of blindsight to critique the ‘two-concept’ claims made about consciousness in the paper (and also by David Chalmers, 1996, and Ned Block, 1995), where a distinction between representational and phenomenal properties is fraught with contradictions. Whether or not Humphrey adopts this position, he either cannot logically get right the notion of being ‘conscious of’ something, as distinct from merely having information about the environment, or he loses the ability to make his two-part claim, given that the representation of the body as spatial cannot be independent of and prior to the representation of the space in which it is located. The body plays no primary role spatially; just as, since the point of the body is to introduce affect (including perceptions of the environment), there is no need for a representation of the body to have priority.
In other words, to manifest consciousness of the environment, as opposed to merely information about it, the notion of perception need not be conceptually independent of the properties assigned to sensation: thus we lose the motivation for suggesting that the representation of the body is responsible primarily for the emergence of consciousness.
Fascinatingly, her best discussion in my eyes, on the ‘hard problem’ itself, and how Humphrey may have erred, is contained in her writings on pages 34-35, beginning with another thought experiment:
Suppose you have a visual experience, as of a rabbit in front of you, and you are aware of this experience and think of it as an experience, with such and such phenomenal properties. Let us call this way of thinking made available to you in virtue of undergoing the experience ‘introspection’; the predicates you employ when introspecting these properties ‘phenomenal’ (P) predicates; and the modes of presentation they express ‘P’ modes of presentation. Suppose at the same time someone is probing your brain and watching various neurones firing away in response to the external stimulation. And let us suppose the person watching your neurones is armed with predicates used in something we may call the ‘brain sciences’ which covers everything from physiology to computational information process theories. I will call these ‘scientific’ (S) predicates, and the modes they express ‘S’ modes of presentation. ... [T]he central question we should be asking is: could P and S predicates ever be referring to the same properties?
With prior reference to Thomas Nagel’s What Is It Like to Be a Bat? (pages 176-7, Nagel 1979), she adds, faintly echoing the property dualist position of Alwyn Scott earlier:
...if any pair of S and P predicates do refer to the same property, they will be doing so in different ways, expressing different modes of presentation of the property, for any such identity claim is informative. ... [N]ormally, when we say that two modes of presentation have the same referent, we have some story about ‘how the referential paths converge’. Thus we have a theory about why 2+2 and 4 refer to the same number. ... [I]n its most minimal form ... We do not have any such story in the case of P and S predicates.
Eilan admits that the basic challenge encountered is that we do not know how to start answering her question (the ‘Identity Question’) because indeed we do not have a theory of how the referential paths converge (the ‘Radical Disparateness’ thesis). She suggests that Humphrey, although he does ask the Identity Question, still attempts to render the Radical Disparateness illusory. The ‘one-concept’ view, as Eilan calls it, that the phenomenological and spatially/causally explanatory ingredients are interdependent in our mental concepts – in our everyday concepts of perception – can be referred to as the ‘primitive theory of perception’, where we are obliged to understand what the conditions of perceptibility in the various modalities come to. Without an appeal to phenomenology, we cannot get the explanatory framework right; without an appeal to the spatial/causal, we cannot individuate the phenomenological ingredient.
This view exists, to her, as the most radical denial that our mental concepts are disparate. The implication of it is that there exist certain a priori restrictions on the level and kind of discoveries that can be made about the brain.
She acknowledges that Humphrey’s position is partially suggestive of a ‘one-concept’ view, which in her eyes is commendable, closing the gap between phenomenological and scientific ends simultaneously. After all, in a ‘two-concept’ theory, any reference to causal mechanisms is over-ridden by the ‘psychological’, functional concept, rendering the phenomenal concepts stripped of causal implications (i.e. How does a disembodied ‘soul-mind’ will a distinct body into motion, if there is no biological connection with the physical material? Where does consciousness slot into Nature, if so?).
However, she acknowledges that, taxingly, to abandon the ‘two-concept’ view would require a radical overhaul of a great number of human philosophical intuitions, by now well entrenched – a paradigm shift more radical than that delivered by Humphrey in his paper!
Much as he seems to attribute physical properties to sensation, as a modified way of directing actions towards one’s body, he has not included perception in this claim, although he still appears to be making a two-concept claim on both. Neither does he argue the other side of the one-concept: that the individuation of causally relevant psychological properties of mind must be specific to the subject, from their perspective (the “hardest part” of the hard problem).
In the words of Chalmers himself (writing on perception in The Conscious Mind):
It can be taken wholly psychologically, denoting the process whereby cognitive systems are sensitive to environmental stimuli in such a way that the resulting states play a certain role in directing cognitive processes ... But it can also be taken phenomenally, involving the conscious experience of what is perceived.
Humphrey, by avoiding the claim that we can get to a complete description of the causal workings of our minds without any (non-reductive) appeal to what things are like for the subject from their perspective (i.e. by neglecting to address Chalmers’ phenomenologically-free individuation of the causal component of mental concepts), cannot adequately address dualism, eliminativism, strong reductionism, and mysterianism (the view that this problem cannot be solved; or cannot be solved by this means of thinking), leaving phenomenal consciousness appearing causally epiphenomenal.
Eilan ends her response paper by concluding that the Identity Question Humphrey says he is answering is lost if we do not adopt a one-concept view. If neither the phenomenal nor the spatio-causal can be adequately explained, then the two concepts that form his question are lost. In fact, in her estimation it seems that Humphrey has, in practice, not treated the Identity Question as being synonymous with the hard question at all.
In further response to Humphrey, researcher Ralph Ellis provides a more modest analysis. Suggesting that it is important that we should distinguish between the mind-body problem and Chalmers’ hard problem of consciousness rather than conflating them, he suggests on pages 47-48 that:
The mind-body problem is not just a question that is difficult to answer, but a paradox. ... [C]onsciousness does not seem to be equivalent with its physiological substratum. ... [but] if consciousness were not equivalent with something physical, then we would be confronted with the well-known problems of dualism. ... [It] feels like a logical contradiction, unless we assume that the physical and mental events are equivalent. ... [I]f two events are equivalent, then they can be both necessary and sufficient for the same outcome. ... [T]hey eliminate the problem of mental causation. But by avoiding the Charybdis of mental causation in this way, strict identity theories are consumed by the Scylla of the knowledge argument [the thought experiment by analytic philosopher Frank Jackson, introduced in his 1982 article ‘Epiphenomenal Qualia’ and defended in his 1986 ‘What Mary Didn’t Know’, asking whether knowledge can be gained by physical description without experience, using the example of Mary seeing colour for the first time in a formerly black-and-white world, having previously only read comprehensive descriptions of it, and intended as an argument against physicalism]. ... The fact that mind and body seem indescribable in commensurable languages – which leads to Chalmers’ ‘hard problem’ – is really a much easier problem than the mind-body problem. The hard problem merely shows that we have a phenomenon, consciousness, that in principle is difficult to explain.
Indeed, in his opening paragraphs, Ellis explains that Humphrey’s arguments contribute to the mounting evidence that consciousness is inseparable from the motivated action-planning of life-forms that are, at least in some sense, organismic and agent-like, rather than passive, mechanical, and reactive, in the way digital computers are. He reminds us that [Francisco] Varela, et al., 1993, (the biologist, cybernetician, philosopher and neurophenomenologist famous for introducing the concept of autopoiesis – cellular self-organisation i.e. systems producing more of their own complexity than their environment – to biology, and eventually to cognition, under his mentor Humberto Maturana) called this new approach the ‘enactive’ view of consciousness.
This view argues that conscious information processing can arise only as the self-regulated action of a self-organising process that interprets the world as a system of action affordances. Although information processing can be passively absorbed as afferent input, only efferent nervous activity to maintain a living organism’s homeostatic balance (yet also suitably extropic) can create consciousness of any information, whether that be perceptual, imagistic, emotional, or intellectual.
Later, on page 48, Ellis writes:
[S]uppose that, as Humphrey has argued, consciousness ... is equivalent with efferent rather than with afferent nervous activity. The question remains as to what kind of causal power this consciousness-equated-with-efferent-activity has. ... Suppose a biologist knows all about self-organizing thermodynamic systems, and all about efferent nervous activity. Will this enable the biologist to know what the subject’s consciousness is like? ... It seems entirely conceivable that the biologist could observe Jackson’s nervous system, in full knowledge of the fact that it is only the efferent activity that is conscious, and still not know what Jackson’s consciousness – for example, his being in love with Mary – feels like.
Ellis reminds us that the self-organisation theory that evolved into dynamical systems theory, worked on by Stuart Kauffman, 1993, and Jacques Monod, 1971 (and indeed, classically, Maurice Merleau-Ponty, 1942, and Ludwig von Bertalanffy, 1933/1962), does not entail dualism, or indeed causal interactionism, in that the process does not cause the behaviour of its substratum elements. Instead, the behaviour of each substratum element is caused by other substratum components that are both necessary and sufficient, under the given background conditions, to bring about that behaviour.
That said, he explains further that the self-organisation of the organism in which the behaviour occurs is partly responsible for having set up, at any moment, the given background conditions under which those antecedents are necessary and sufficient for those consequences. Even though a specific behaviour in any substratum element is caused by a necessary and sufficient substratum-level component, given the background conditions, the structure of the self-organising process as a whole ensures that those given circumstances will tend to be changed when that is needed to maintain the overall process.
Ellis gives his final, optimistic explanation, on page 49, that:
The complex self-organising processes constitutive of the emotionally motivated efferent processes needed for the subject’s phenomenal consciousness are experientially accessible only from the standpoint of the organism that executes them, because conscious experiencing per se entails executing rather than merely observing emotional processes. That is why self-organization is the key to solving the mind-body problem. ... The enactive approach points us in the direction we must go ... But it only provides a necessary, not a sufficient set of conditions needed for this solution. What must be explained is how a self-organizing or dynamical system can have a causal power over its own substrata, which is not reducible to the sum of the actions of that substrata, while at the same time causal closure is not violated at the substratum level. This is a difficult problem ... [but] only a dynamical system promises to offer the possibility of a system’s having the power to appropriate substratum elements into its own basins of attraction, rather than letting those basins be merely a higher-level description of the independent actions of the substrata.
Curiously, in his concluding response, Humphrey acknowledges, on page 103, that there may indeed be a phenomenology of action per se (as opposed to conventional sensation), as argued for by Ellis and to some degree Eilan, but extrapolates instead on yet another problem he faces, unmentioned so far: that raised by the philosopher John Searle. The problem is that of explaining a person’s experience of changes in consciousness if one is presuming that the entire content of consciousness is merely bodily sensations, with nothing being contributed by perceptions or thoughts. After all, that would entail that a change in what is perceived or thought alone could not even be experienced unless there was also a change in sensation.
To give an example of this, if someone is looking out of his window at the world, he should experience no change in consciousness unless and until the visual stimulus of his fixation also changed (so as to create a change in the ‘what is happening to me’ state). A crucial test to employ in this case would be an ambiguous Necker cube, or equivalent, where, even though the visual image remains unchanged, the perception of what is represented can radically alter. Whereas Searle suggests it obvious that a Necker cube seen one way is a consciously different phenomenon from the same cube seen the other way, thus proving a contradiction in the position of Humphrey, the latter responds that, on evidence of introspective observation, perhaps we can in fact be conscious of what is perceived as well as what is sensed.
Returning again to a hint provided in the commentaries of Eilan and Ellis, and suggesting that action is indeed crucial, Humphrey writes, on page 104:
Suppose that whenever we perceive anything (and sometimes when we only think of things) we always implicitly form a plan of action – for example a plan to reach out and take hold of it. And suppose that such action, even when implicit, always has a small but noticeable qualitative feel to it – either on its own account via somatic sensation or through modelling of the sensory feedback that would be expected. Let’s call this additional dimension the dimension of ‘agentic qualia’ (Humphrey, 2000b). Then, whenever we experience a sight or a sound or a taste, etc., the conscious experience can be expected to consist not only of the sensory qualia appropriate to the particular sensation but also of whatever agentic qualia are being called into being by the perceived ‘affordance of action’ (in J. J. Gibson’s terms). This solves the problem of the Necker cube. For now, we can postulate that, even while the visual sensation remains constant, there may be a covert change in action plan when the perception of the cube reverses, and so a slight change in the overall sensory qualia. ... The admission of a realm of agentic qualia makes the story I have been telling considerably more complicated. But that’s a good price to pay for making it more likely to be right.
The debate continues, and the issue is certainly nowhere near settled, although I am glad for the optimism and explanatory power of Humphrey’s work, augmented by these other theorists. Worryingly, however, if we are to take seriously the conclusions in Sisyphus’s Boulder by Eric Dietrich and Valerie Hardcastle, we may never reach a full theory of consciousness (what consciousness is) due to the limitations on what humans can understand and what is knowable. This position is echoed by the philosopher Colin McGinn. I like to hope still that this is not the case. Either way, it is clear we need a radical paradigm shift in our science and philosophy of mind.
It is worth ending this chapter by briefly touching on the most scathing (if not the most challenging) critic of Humphrey in this anthology, Stevan Harnad, who, backing away from the M/BP altogether, admits openly, on page 56, with a strange ‘optimistic’ confidence of his own (and in a manner that does not strike me as ostensibly unfair):
I too happen to think the M/BP’s insoluble, but not because of any limitation of the human mind. Indeed, I don’t know what is even meant by saying that there may indeed exist a ‘solution’ to the M/BP, but not one that the mind can ever know! There is nothing here that is analogous to the (epistemic?) constraints underlying Gödel unprovability, quantum indeterminacy, statistical-mechanical indeterminacy, unproven mathematical conjectures, halting problems, the many-body problem, the limits of measurement, the limits of memory, the limits of technology, the limits of computation, NP-completeness, the limits of time, the limits of ‘language’ (no idea what the last might even mean [to answer his question, as best as I can intuit myself, what we can conceive of in our imaginations shapes what we can think, and thus how we can think it. See: the conceptualisation of time-keeping in African languages in psycholinguistics, a further ramification being the effect on perceptions of rhythm patterns and prosodic organisation – it all boils down to whose minds he is referring to.]), etc. Those are all red herrings and false analogies. ... Undoubtedly the feelings are in some way caused by and identical to (‘supervene on’) brain processes/structures, but it is not at all clear how, and even less why. ... Inputs and outputs can be connected, functionally, computationally. ... [But] [i]f we ‘characterize’ feelings computationally or functionally, we have simply begged the question, and changed the subject – to a discussion of the relation between brain function and computational (or other) function. ... [The states of interest to psychologists] ... are qualitative, feeling states. They will be amenable to informational analysis, and to behavioural and neural analysis, but their feelingness will remain a dangler – and that’s the point! ... The functional stuff would all go through fine ... if we were all just feelingless zombies. But we’re not. And that’s the problem (Harnad, 1995; 2001).
It seems to me that although the rest of his argument makes some sense, the problem for Harnad is contained in his first line, which requires more evaluation. Besides that, though, as Humphrey himself reflects on page 111, having previously noted that for Harnad, by his definition, all functionalism is ‘zombie functionalism’ (thus seeming to forget weak functionalism in the process):
...if I am glad to have Harnad state the enemy’s case so boisterously ... It’s only because he thereby reveals the ultimate vacuity of his position. It goes nowhere. It makes no predictions. It generates no tests. Indeed, for Harnad it would actually be an argument against the legitimacy of any theory of consciousness that someone should even imagine that his theory could be tested by implementing the consciousness-producing architecture in a machine. Because if he were to interpret anything the machine actually does with its new architecture (anything at all) as evidence that the implementation has been successful, that would only show that his theory begged the question.
I am therefore forced, given the logic of what I have written above in this chapter, to agree with Stevan Harnad.
However, at the same time, I also agree with Humphrey on page 112 when he observes, “It’s as though Harnad has managed to turn Tertullian’s grand claim ‘I believe because it is impossible’ into its corollary ‘I do not believe because it is possible’”, wishing only that Harnad had not washed his hands of the matter so soon, much as Humphrey’s unfortunate observation on machines does not suggest any problem to me at all.
It feels to me that this matter requires much more investigation at the point where science ends – the current body of multidisciplinary research is not suitably balanced. By that, I do not mean that we should make a naive capitulation to the patent ridiculousness of New Age mysticism, only an acknowledgement, as with the naturalists of the German Romantic movement, that positivist scientism cannot address some questions. As Stefan Zweig noted in his analysis of the European psyche in his 1939 book The Struggle with the Daimon: Hölderlin, Kleist, Nietzsche, we have by now unfortunately arrived at a fiercely analytic tradition in the West, at odds with our natural disposition, if very hard to shake at times (and with disastrous consequences for all three men he considered, particularly Friedrich Nietzsche). As stated, I will return to this later on.
For the time being it is sobering to reflect on the assured fact that I'd be in a padded cell in some long-term institution by now if I'd tried to put any of this to my NHS psychiatrist.
By Benjamin Power
