
Fictions and simulations: The case for idealism


Reading | Philosophy | 2022-05-22


A new, creative and compelling argument—even a new type of argument—for idealism is elaborated upon in this long-form essay, which is fluid and easy to read.

The matter with matter

I was brought up in a universe that, according to the metaphysical paradigm I was unconsciously spoon-fed, is nothing more than a collection of things: “matter” for short, although it also includes space, time, physical fields, etc.

The assumption is that matter is devoid of qualities such as value, beauty, meaning, etc. The qualitative aspects of reality are dismissed as being either an illusion, or an emergent placeholder to refer to very complicated physical patterns, just as a planet’s center of gravity is a placeholder for the entire planet when describing its orbit.

The gist of it is, as Steven Weinberg put it: “The more the universe seems comprehensible, the more it also seems pointless.” [1]

However, careful observation of any human culture that has ever existed reveals that our deepest intuitions and our behaviors clash with such a paradigm, even in our current materialist environment. [2] As a matter of fact, even if materialists preach the gospel of a pointless world and may even adhere intellectually to such an idea, they still act as if their lives have meaning. [3]

Yuval Noah Harari [4] would contend that human culture is just a bunch of fictions, many of which are extremely ugly, and that we would be better off if we left all such stories behind, especially those that aim to give their believers a sense of meaning. Yet one could argue that the portrayal of fictions as pointless, truthless brain viruses obeying meme mechanics is, in itself, a stupendous piece of fiction. Perhaps the cure for ugly fiction is not no-fiction, but better fiction.

This is, I must admit, a hard sell these days, when attempting to navigate outside the soupy waters of materialism seems either impossible or deluded. After all, we have a fridge, therefore materialism is true. Nevertheless, in this brief essay I suggest that this is not the case, and that we have good reasons to think that it is not consciousness that comes out of matter, but the other way around: consciousness comes first, in a paradigm known as idealism. [5]

I shall first present the case for materialism as made by some of its more sophisticated defenders, drawing inspiration from books such as Sean Carroll’s [6] and Carlo Rovelli’s. [7] Two arguments for consciousness, one gnoseological and the other ontological, will follow in abbreviated form, together with the consequences that derive from them and a few concluding reflections that have personally enriched my own experience.


The case for matter

The regularities we perceive around us prod us into formulating explanatory models. Most useful among these is the imagining of a thing called ‘matter’ that obeys certain physical laws. Even if this matter cannot explain consciousness (see the ‘hard problem of consciousness’) it can still ostensibly explain all physical phenomena, something that intuitively leads us to believe that matter really exists outside and independently of consciousness. Perhaps even more importantly, postulating the existence of consciousness and subjective experience does not improve the explanatory power of the concept of ‘matter.’ Therefore, Occam’s razor suggests that matter exists outside and independently of consciousness. In this regard, most opinions fall into two main camps:

  1. Consciousness does not exist at all, it is an illusion;
  2. Consciousness exists but is secondary: it emerges from material patterns.

Most materialists adhere to the second option, which has its intellectual appeal. The implicit metaphysical paradigm behind it is called ‘causal structuralism,’ according to which things exist only insofar as they participate in defining other elements in the big web of things. This web is parsimonious: nodes that have no links to other nodes are not admitted, and so disconnection is synonymous with non-existence. Things have no consistency of their own: each is defined as the set of links it has with other things.


The case for consciousness: gnoseological argument

By ‘consciousness’ here we refer to phenomenal consciousness, the ‘space’ that, in a screen-like fashion, may be filled with many different experiential contents. We intuitively divide these into two main groups:

  1. Those we identify with: emotions, sensations, feelings, thoughts;
  2. Those we do not identify with: perceptions.

This criterion is based on whether we feel we can control/create such contents. However, this feeling is fluid. Control often escapes us when it comes to negative emotions, earworm songs or unpredictable thoughts that seem to obey their own independent flow.

At the same time, and as mystics of all times can confirm, the opposite may happen, and then we identify with all the contents of consciousness. In this case, the point is not about control (or lack thereof) over such contents, but rather about identifying with the ‘screen’ within which they unfold. [8]

Nevertheless, we can also classify the contents of consciousness in other ways, such as:

  1. Conceptual knowledge, which is indirect or non-immediate, populated by conceptual entities.
  2. Non-conceptual knowledge, which is direct or immediate, populated by qualia.

Conceptual entities are always defined in relation to something else: in a dictionary every word/concept is defined in terms of other words/concepts, weaving and knotting a network where each node is connected to a set of other nodes that define it. Each concept is explainable in terms of other concepts (or qualia): thus, the conceptual realm is the realm of reduction, i.e. of causal structuralism. It is also the realm where ‘matter’ lives and, by extension, all physical entities as well.

Let’s take light as an example: whatever physical light is, it is not what we perceive as light. When a stream of photons hits the retina, it does not pierce our skull and illuminate its interior. Instead, it becomes a train of electrical impulses that is correlated with, yet different from, physical light. Photons are concepts we have imagined in order to model certain kinds of perceptual regularities, and they work beautifully, but no one has actually perceived one in a direct way. Strangely enough (from a materialistic perspective, that is), we perceive light in our nightly dreams, although our eyes are closed and thus correlation with any physical ‘reality out there’ is impossible.

Unfortunately, causal structuralism forgets that many things exist outside the conceptual realm, such as the qualities of experience, or ‘qualia.’ Qualia are irreducible: they exist by themselves, and thus cannot be reduced or defined in relation to others. Someone who has never seen red cannot know it by seeing green or blue (i.e. other qualia) or by knowing the frequency of its correlated, physical, electromagnetic wave (a concept). Whereas concepts may be fully grasped based on other concepts, this is not the case with qualia: the only way of truly knowing the taste of a mango is by eating one.

The existence of qualia differs from that of concepts. ‘Existence’ comes from Latin ex– (out) + –sistere (to stand): an entity exists when it stands forth against the background of other entities. In order to exist, concepts lean on other concepts and/or qualia, whereas qualia stand forth (i.e. exist) by themselves, a property that scholastic philosophy used to call perseitas or ‘perseity.’

Now a materialist might say: well, alright, we might conceive physical light as a concept, not standing forth by itself, but maybe this is just our cognitive fault and in reality physical light is a non-conceptual entity. To this, an idealist can quickly respond: sure, why not; however, if physical light is non-conceptual, then it can only be in the camp of qualia. There is no escaping that either something stands forth by itself (and we refer to it as quale) or it stands forth through others different from itself (and we refer to it as concept): tertium non datur.

Qualia stand directly on the ground of being, while concepts are built above this surface. Yet conceptual networks cannot be entirely suspended in midair: at least some concepts must act as anchors that have a one-to-one association with direct, immediate experiences, i.e. qualia. If a language must be intelligible, then a minimum percentage of its vocabulary must refer to qualia. Many of the logical, ontological and ethical paradoxes we humans have stumbled upon are rooted in our willingness to imagine a groundless language (i.e. conceptual system), afloat and unmoored, disconnected from the ground of qualities.

In this light, the hard problem of consciousness can be seen as the unsolvable consequence of our misguided attempts to reverse the ontological order, trying to ground qualities in concepts. Needless to say, this does not work. As qualia are the ontological basis of concepts, consciousness has ontological priority over matter. The world is exactly what it seems to be: a qualitative phenomenon unfolding in consciousness. [9]


The case for consciousness: ontological argument

We can formulate a second argument in favor of consciousness that stems from the world of fiction-making, where entire conceptual universes (with their own internal structures, natural laws and inhabitants) may spring into being in the form of novels, video games, films or role-playing games. In some cases we even create nested universes, such as in Michael Ende’s The Neverending Story or in the film The Thirteenth Floor, where virtual realities are simulated within virtual realities and so on and so forth.

Let us, for the sake of brevity, refer to such processes as simulations; in every simulation there are three realities at play:

Reality A: the simulated reality;
Reality B: the reality where the simulation runs, and
Reality C: the reality underlying (grounding) the first two.

It so happens that reality C always coincides with reality B, and can never coincide with reality A. This means that agents inside B and C can interact directly with each other, but agents living in A cannot interact directly with those of B. We say that reality A does not have being-in-itself; that is, it does not exist in itself; rather, it exists in(side) another (reality B-C). Reality A is an extrinsic appearance of reality B: entities in reality A are symbols of entities in reality B. The property of existing-in-itself is called inseitas in Latin.

If we apply this reasoning to a landscape painting, we have:

Reality A: the mountains, valleys, rivers and sheep depicted;
Reality B: the canvas and the oils with which the painting is made;
Reality C: the universe in which the painter lives.

Clearly the painter (reality C) belongs in the same universe as the canvas and oil paints (reality B). The painter is not a character in the painting (reality A does not coincide with reality B) and cannot interact with the sheep painted on the canvas.

The nature of the simulation is always heavily dependent on the means available in the reality where the simulation takes place, and these may change over time: canvas and paint have been around for centuries, whereas computers are a recent development. If we take a video game as another example, we have:

Reality A: the race cars simulated in the video game;
Reality B: the hardware in which the video game software runs;
Reality C: the universe in which the software engineer lives, as well as those who produced the hardware.

The engineer (reality C) belongs in the same universe as the hardware (reality B), yet does not coincide with an avatar inside the video game (if the avatar dies, the engineer does not die).

In a book or movie, such as Lord of the Rings, we have:

Reality A: Frodo ‘lives’ inside the movie and is subject to (apparent) causal relationships with other entities in the movie (Sauron might kill him);
Reality B: the TV screen where you are watching the film;
Reality C: the universe in which the viewer watching the film lives.

The viewer (reality C) shares the same universe with the TV screen (reality B), but neither of them can be destroyed by Sauron, whereas Frodo can be, since both he and Sauron are in reality A.

Now let’s apply the same kind of reasoning to conscious perception:

Reality A: the world I seem to perceive and be immersed in, a world I interpret to be made up of matter, energy, space, time, fields, and which is completely describable in quantitative terms.
Reality B: my consciousness, which contains my perceptions, sensations, emotions; ultimately a world entirely populated by qualitative entities.
Reality C: the ground of being.

It follows that the ground of being (reality C) and my consciousness (reality B) belong to the same kind of reality, made up of qualitative stuff. If we could peek outside of our own consciousness, we would find no matter: only consciousness itself, the reality within which the ‘matter simulation’ ultimately exists. Matter lacks inseitas; rather, it has its being (its existence) in consciousness. Matter is an extrinsic appearance of consciousness.


How matter and consciousness interact

Having argued in favor of the primacy of consciousness, we can now re-examine the argument in favor of matter.

The gnoseological argument tells us that matter, as a concept, needs to be grounded on a qualitative substratum, otherwise it remains suspended in midair. The ontological argument tells us that, within physical reality, matter can perfectly well explain various phenomena without resorting to consciousness. Matter is a coherent conceptual model that lives in a type A reality, and within such a reality it does not need to postulate consciousness. Matter is like the narrative of a well-built video-game universe, where the rules of the game form a closed referential system: they need not refer to anything outside their universe. This is why it is problematic to study consciousness (a first-person perspective, a B reality) from the scientific, third-person perspective of an A reality.

However, matter (an A reality) cannot be grounded in itself. As the reality that underlies matter has a qualitative nature, consciousness must come before matter. Following the ontological argument, one may wonder: could the reality of consciousness itself be a simulation (that is, a type A reality within an even more fundamental reality)? Does consciousness have inseitas?

The gnoseological argument suggests it is not the case, because grafted realities can only be populated with ‘concepts’ ultimately grounded in qualia, whereas qualia are, by definition, grounded in themselves. Metaphysically, we can say that, since the contents of consciousness (qualia) have perseitas, then consciousness has inseitas. Therefore, consciousness is the ground of being.

Another intriguing line of reasoning focuses on what happens at the borders of a type A reality. In a well-designed reality A, no strange behaviors should be noticed; nothing that violates its internal rules; not a single glimpse into reality B. A suggestive hypothesis to explain the spooky behaviors of our physical reality, such as quantum mechanics or paranormal phenomena, is that these happen in proximity to the limits of the ‘simulation.’


The one and the many

If consciousness is not an afterthought sprung by accident from a material universe—if it truly is the ground of being—our curiosity impels us to ask questions such as: how is it that there are so many ‘individual consciousnesses’ and so many qualia? What are the dynamics of consciousness?

If we focus on how entities interact among themselves, we may notice the following:

  1. A sum of conceptual entities is extensive, i.e. one comes next to the other (let’s call them ‘things,’ put side by side).
  2. A sum of qualia, on the other hand, is intensive, i.e. one is inside the other (we shall call them ‘processes’).

Conceptual entities as things are static, whereas qualia are processes because they are alive. Once subdivided, things cannot be reunited into wholeness: no matter how carefully you glue them together, a pile of eggshells cannot go back to being an egg. On the other hand, processes have no boundaries and may interpenetrate each other, sometimes as far as reaching a coincidentia oppositorum: they can be one and many at the same time. Consciousness is one process and at the same time many processes.

Let me illustrate this by going back to direct experience: a music concert I attended last week. If time had stopped while I was listening to the symphony, the musical experience itself would have disappeared. All qualia are experienced in the process of one transforming into another, and the transformation itself is a quale. In fact, our perception of time does not correspond to a sequence of discrete points with no depth, but always as a continuum that has duration: experienced time has a certain thickness to it. The idea of physical time as a necklace of microscopic pearls where every instant is an individual bead is a useful conceptual abstraction built upon, yet profoundly different from, our qualitative experience of time, in which being and becoming are one and the same. [10]

Qualitative, metaphysical time is a fundamental aspect of consciousness, without which certain perceptions, such as that of movement, make no sense. The sum of qualia, in which they interpenetrate each other like fluids, can only happen in time that has thickness. When we experience a moment of ecstasy, of beauty or union, the perception of time slows down—or, conversely, its thickness swells, thus intensifying our qualitative perception.

Interestingly enough, in extensive sums the addends (things) disappear and are no longer distinguishable: 10 can be both the result of 8+2 and of 5+5. However, in intensive sums the addends (processes) do not disappear, but live more intensely within the bigger process that includes them.

Going back to the example of music, a bunch of notes played separately are distinct qualia. Yet, if we play them together, we will have an intensive addition, in which the ‘individual’ notes continue to exist within a new process: a symphony. The same notes, once enfolded into the symphony, seem transfigured because we perceive them interpenetrating.

As amateurs who know little about music, we have a limited ability to appreciate it: everything appears indistinct to us. However, if we devote ourselves to studying and understanding how music works, how to compose or play musical pieces, the indistinct becomes distinct: we learn to perceive the individual notes. Then, the next time we listen to a symphony, we will perceive the interpenetration, the union-in-distinction of all the notes, and the experience will be more intense. Unlike things, qualia can increase in intensity if you break them down and put them back together again. This is the dynamic of qualia that is implemented in each single conscious being: each single eye of the universal consciousness.

Thus considered, life may be seen as a creative, never-ending process—an infinite game—of learning how to perceive the distinctness in the indistinct, only to let qualia flow back and merge together, intensifying their combined tastes like so many ingredients in a well-made recipe.

Indeed, such a soup might be true food for the soul.



  1. Steven Weinberg (1993). Dreams of a Final Theory.
  2. Joseph Campbell (2008). The Hero with a Thousand Faces.
  3. Viktor E. Frankl (2008). Man’s Search For Meaning.
  4. Yuval Noah Harari (2018). 21 Lessons for the 21st Century.
  5. Bernardo Kastrup (2019). The Idea of the World.
  6. Sean Carroll (2016). The Big Picture.
  7. Carlo Rovelli (2021).
  8. Rupert Spira (2017). The Nature of Consciousness.
  9. Bernardo Kastrup (2015). Brief Peeks Beyond.
  10. Iain McGilchrist (2021). The Matter with Things.

Falling for naive common-sense: Russell and physical realism (The Return of Metaphysics)


Reading | Philosophy


This essay recounts the story of our falling for naive physical realism—the notion that we can become directly acquainted with non-mental entities, which are supposed to have standalone existence—in the early 20th century, and how modern thought is now bringing us back to the more mature German Idealism that prevailed in the West during the early 19th century. This is the fourth instalment of our series, The Return of Metaphysics, produced in collaboration with the Institute of Art and Ideas (IAI). It was first published by the IAI on May 9, 2022.

In the late 1890s the Cambridge philosophers G.E. Moore and Bertrand Russell made a remarkable and creative leap forward: their ‘discovery’, they declared, was of the principles underlying what they called their ‘New Philosophy.’ According to this philosophy, reality consists of a mind-independent plurality of separate, independently existing entities. They are entities that, when we perceive them, are given to us immediately or directly, so without relying upon our having any mediating ideas or internal representations of them, hence given to us without any conceptual trappings of our mental making.

Moore and Russell called their philosophy ‘new’ because they believed its discovery marked a decisive break in history; they envisaged their philosophy would sweep away all of its predecessors. Even though other philosophical traditions endured and indeed flourished later, their youthful confidence was far from being entirely misplaced. Their New Philosophy was destined to become one of the contributing streams—one of the most significant—that fed into what was to become that great intellectual river system, analytic philosophy. Nonetheless, a key idea from the Hegelian philosophy they were revolting against would continue to pose a challenge to their realist shift.


The resurgence and death of Hegelian philosophy?

To most bystanders watching at the end of the nineteenth century, it would hardly have seemed likely that the New Philosophy would turn into analytic philosophy, and analytic philosophy then become the dominant tradition in the United Kingdom. During the late nineteenth century, Hegelian idealists had become the dominating force in British philosophy, although it would still be an exaggeration to say that theirs was the only voice to be heard. But the British Hegelians had the ascendency and they were inspired by some of the most general features of Hegel’s worldview—even if they didn’t always embrace the specific details of Hegel’s philosophy or his dialectical method, whereby intellectual advance is to be achieved by overcoming the contradiction of thesis and antithesis to achieve a higher synthesis.

What especially captivated the British idealists was Hegel’s belief that separateness is ultimately an illusion. The apparent separateness of things—their plurality—was, for Hegel, an illusion, because he held that what is ultimately real and intelligible is only the whole of reality; apparently separate things only have reality to some degree, depending upon the degree to which they contribute to the intelligibility of the whole. Hegel called the whole of reality ‘the Absolute’ and he conceived of the Absolute as spiritual [Editor’s note: it can be argued that ‘spiritual’ is a mistranslation of Hegel’s original ‘Geist,’ which also means ‘mind,’ in which case Hegel’s Absolute is mental, not spiritual.]

Moore and Russell held just the opposite of this. According to the New Philosophy, separate things are perfectly intelligible independently of one another, or anything else, whilst the whole of reality isn’t spiritual.

What many of the British Hegelians found inspiring about Hegel’s worldview, at least since the publication in 1865 of J.H. Stirling’s The Secret of Hegel, was the promise it held out of a metaphysical backing for religion, religion having hitherto been threatened by the advance of materialism and the reception of Darwin’s theory of evolution. The Scottish philosopher Edward Caird (1835-1908), who held a Chair of Philosophy in Glasgow before becoming the Master of Balliol College in Oxford, was a leading and influential advocate of this Hegel-inspired apologia for religion. In his Hegel, published in 1883, Caird maintained that religion and materialistic science aren’t really in conflict at all because neither make sense except when understood as a partial fragment of a higher, integrated unity.

The resurgence of interest in Hegel to be found in Britain—and, as it happens, around the same time, in the United States too—also ranks as a twist of philosophical fate that could hardly have been expected by many bystanders. That’s because Hegel’s philosophy had been largely buried and defunct in Germany by mid-century. The peculiarity of the historical situation wasn’t lost upon the American pragmatist William James (1842-1910). He wrote,

We are just now witnessing a singular phenomenon in British and American philosophy. Hegelism, so entirely defunct on its native soil that I believe but a single young disciple of the school is to be counted among the privat-docents and younger professors of Germany, and whose older champions are all passing off the stage, has found among us so zealous and able a set of propagandists that to-day it may really be reckoned one of the most potent influences of the time in the higher walks of thought.

What explained the decline of Hegel’s influence in Germany was a ‘back to Kant’ movement, a ‘neo-Kantianism’ that eschewed speculative metaphysics, such as Hegel had inspired, in favour of a respect for the natural sciences. The Marburg School of Neo-Kantians, in particular, had an especial interest in understanding, methodologically speaking, how the natural sciences functioned. It was a movement destined to be one amongst other sources of another of the most significant streams feeding into the river system of analytic philosophy: namely, the logical empiricism of the Vienna Circle.


The realist challenge of science and common sense

The fact is that Moore and Russell, in the late 1890s, were more aligned with the prevailing currents of European thought than any of the British Hegelians. Nevertheless, they still considered the system of one British Hegelian, the Oxford philosopher F.H. Bradley (1846-1924), an important foil for their own philosophy. But why did they consider it worth engaging with the views of any British Hegelian? The answer was that Bradley stuck out from the rest. Like many British Hegelians, Bradley had been an admirer of Hegel without adhering to the details of Hegel’s philosophy. But Bradley was led by his arguments to a conclusion that departed further from Hegel than other Hegelians were prepared to envisage. For this reason, Bradley came under fire just as much from them as from his other adversaries. Caird had argued in more or less general and speculative terms for a higher synthesis of science and religion to resolve the widely acknowledged clash between them. By contrast, Bradley argued with a forthrightness and dialectical acumen that emulated Parmenides and Zeno, albeit expressed with Victorian curlicues. Bradley aimed for the destructive conclusion that discursive thought per se is ultimately unintelligible, inevitably driven to its own ‘suicide’—and that included common-sense, scientific and religious thought. Since Moore and Russell held discursive thought to be the very vehicle of intelligibility, but found Bradley’s arguments demanding and difficult to dismiss, the philosophical stakes could not have been higher for them. They had no choice, intellectually speaking, but to engage with Bradley.

To think discursively is to reflect upon the connections between separate things, their interrelatedness. That means thinking, for example, about the resemblance of one thing to another, or reflecting upon the distance between them, or registering the fact that what happens to one is before what happens to the other. Bradley’s point was that the idea of one thing or event connected to another, whether in space or time or by relations of resemblance, makes no sense.

One of the arguments upon which he placed the greatest weight is now called ‘Bradley’s Regress.’ It takes the form of a dilemma. Suppose we take the connection between two things to be ‘something itself,’ so distinct from both of them. This means the connection is a third thing. But we cannot understand their connection this way. By construing their connection as a third thing that, so to speak, sits alongside them, we have only added to our labours because now we have to explain how these three things are connected. It won’t help to say that the connection with them is a fourth thing because their connection will be a fifth thing, and so on, ad nauseam. Alternatively, if the connection between two things isn’t ‘something itself,’ it is mysterious how the two are connected at all. Bradley summarized, “If you take the connection as a solid thing, you have got to show, and you cannot show, how the other solids are joined to it. And, if you take it as a kind of medium or unsubstantial atmosphere, it is a connection no longer.” Since discursive thought presupposes the intelligibility of connections and there’s no making sense of connections, Bradley concluded that discursive thought cannot be ultimately intelligible. This wasn’t the only argument Bradley gave for this conclusion, but it was the argument he prized the most.

The Archimedean point from which Russell chose to mount his defence of discursive thought against Bradley’s onslaught was the outlook of contemporary scientific culture. Russell’s strategic judgment was that “there is more likelihood of error” in Bradley’s argument “than in so patent a fact as the interrelatedness of the things in the world.” Russell felt entitled to this judgement of the relative likelihood of error in Bradley’s argument because, as a matter of fact, science presupposes that there are interrelated things. This presupposition has survived the test of time, paying dividends in terms of the scientific developments that depend upon it, but also the technological applications of science. Consider, for example, the kinetic theory of gases, which presupposes that a gas consists of a large number of particles in rapid motion, which are constantly colliding: what that means is a plurality of separate but interrelated things. Russell conceded that, if we were ancient Greeks, ignorant of subsequent scientific achievements, then we might follow Bradley’s argument where it leads. But we cannot wish away what we know now, as members of a scientific culture that has seen extravagant philosophical systems and philosophers’ iconoclastic arguments continually fall by the wayside whilst scientific knowledge, which presupposes the interrelatedness of separate things, has inexorably accumulated. Knowing what we know now, we cannot follow Bradley’s argument where it leads.

Moore shared Russell’s strategic judgement of the relative likelihood of error having crept into Bradley’s argument, but Moore’s Archimedean point was a different one. He had a common-sense outlook, a worldview whose successful track record outstrips even that of science—a track record, running back millennia rather than centuries, of enabling Homo sapiens to successfully navigate their environment. For Moore, the common-sense view is that there are many material objects, both animate and inanimate, which occupy space, and there are many events to which material objects contribute, which occur in time, and that besides having bodies, we have minds, and we know all this to be true because of our appreciation of concrete cases. Bradley had argued that neither space nor time can be real because space and time presuppose that there are spatial and temporal relations holding between separate things and separate events, the kind of interrelatedness that Bradley held to be unintelligible. Moore replied that his pen was sitting right next to his inkwell and he had definitely gone for a stroll after lunch. Moore put it to his audiences that we are each of us far more certain of such concrete truths than we are certain of the cogency of Bradley’s reasoning. So common sense, never mind the scientific outlook, tells us we’re not in a position to repudiate the reality of interrelatedness.


Russell’s ‘knowledge by acquaintance,’ the Myth of the Given, and the return of Hegel

Did this mean that the New Philosophy had won? Bradley didn’t think so, because he was prepared to deny the intelligibility of science and our common-sense outlook. But whilst few British Hegelians were prepared to follow Bradley in this regard, they had other criticisms to make of the New Philosophy. Russell argued that our having knowledge of the external world relies upon our having ‘acquaintance’ with objects that are immediately given to us, where to be acquainted with an object means being primitively aware of it without knowing anything else about it—so without the distorting filters of our conceptual scheme. This was akin to the kind of cognitive set-up that Hegel had called ‘sense-certainty’ and subjected to searching criticism. Hegel’s basic point was that we cannot claim to have cognitively targeted some particular thing, and kept track of it, unless we are able to say what distinguishing features it has. But this requires us to have more than knowledge of the pure particular.

G.F. Stout (1860-1944) was one British philosopher who was influenced by Hegel, if not a card-carrying Hegelian. Stout had supervised Moore and Russell as undergraduates in Cambridge during the 1890s, but spent most of his career at the University of St. Andrews. It was integral to Stout’s philosophy that we cannot have immediate acquaintance with an object without knowing any truths about it. So Stout’s criticism of Russell, that “mere existential presence is not knowledge at all,” echoed Hegel’s critique of sense-certainty. Mere existential presence cannot provide the basis for cognitively detaching an object from its environment, because, Stout wrote, “If we inquire what in mere acquaintance we are acquainted with, mere acquaintance itself, being blind and dumb, can supply no answer.” In this respect Stout anticipated later developments within analytic philosophy, specifically Wilfrid Sellars’s (1912-89) famous critique of ‘the Myth of the Given.’ So even though Moore and Russell’s common-sense and scientific outlooks carried the day, whilst Hegelianism became as defunct in the United Kingdom as it already was in Germany, recognizably Hegelian ideas continued to pose a challenge to Russell’s and Moore’s realism.

Is Western thought marching towards Eastern Idealism?


Reading | Ontology


Prof. Richard Grego argues that, if we extrapolate the evolutionary trajectory of Western scientific and philosophical thought since the European Enlightenment, it becomes possible to discern that it is progressing towards a consciousness-only ontology convergent with Eastern thought. This is a very scholarly but accessible essay.

Both Richard Rorty [1] and Raymond Martin [2] have made the not altogether inaccurate, if somewhat simplistic, claim that ‘the mind’ (or consciousness), as we currently understand it in the West, is a contrivance of 17th-18th century philosophy. Certainly, from the Enlightenment era onward, the predominant theories of mind and consciousness informing Western philosophy, theology, psychology, cognitive science, neuroscience and popular culture have emerged from this intellectual legacy. As a consequence of the scientific revolution’s influence on what is often referred to as “the Western paradigm,” these theories have revolved around the “hard problem”: what is the mind-body relation and how can the existence of an immaterial mind be explained with respect to the material body? Since our minds and our conscious awareness, which seem to be non-material, also seem to involve the operations of our material body-brain, how does our nonmaterial mental experience relate to, or involve, the material world to which it is connected? Descartes’ “cogito ergo sum” first established the parameters of this problem by defining the respective ontological categories of ‘mind’ and ‘body,’ based on the “clear and distinct” datum of conscious experience that is plainly non-material, self-aware, subjective, purposeful and free, over against a physical body that is material, unaware, objective, purposeless and determined (except when it is animated by the mind that ‘inhabits’ it) [3]. This distinction also engendered further dichotomies like material/immaterial, subject/object, private/public, free/determined and natural/supernatural.
As a founding father of the scientific revolution himself, Descartes understood that this division was an inevitable by-product of its naturalistic assumptions and methodology, which banished spirit, mind, meaning, purpose and value from the purview of physical science—and when adopted as a formal naturalistic metaphysics, would eventually banish them from reality entirely.

The mind-body / mental-physical “hard problem” [4] thus became, and continues to pose, a problematic dichotomy in the Western paradigm, and no field of knowledge or professional practice is unaffected by it. One consequence is that contemporary philosophy of mind has been configured by three general ‘umbrella’ theories: “dualism” (that mind and body are two distinct entities or elements of some sort [5]); “materialism” (that the physical world described by contemporary science is the only reality, and what we call mind-consciousness is merely the neurochemical activity of the brain, or some epiphenomenon of this activity)—probably still the most popular view in our science-dominated age [6]; and “idealism” (that what we call the physical world is actually an aspect of consciousness, which is the sole and fundamental reality [7]). A fourth option, sometimes referred to as “neutral monism” (that there is some indeterminate ultimate basis for all dimensions of reality—mental, physical and anything else—that encompasses all these without being reducible to any of them [8]), has also emerged at various times through the history of Western thought, but has until very recently received relatively little popular attention.

Again, given the paradigm-shaping prestige of science in Western intellectual culture, various materialist philosophies of mind continue to remain popular, perhaps dominant, in contemporary discourse. As science has become increasingly influential not only as a narrow method for pursuing certain limited kinds of problems and projects (methodological naturalism [9]), but also as a kind of grand theory describing the nature of all existence exhaustively (metaphysical naturalism or scientism [10]), mind-consciousness has consequently come to be regarded as a physical phenomenon or substance entirely describable via scientific categories. The once popular dualist view advocated by philosophers like Descartes gradually, through the 18th and 19th centuries, gave way to more materialist theories of mind, among which are theories like “identity theory” (that mental states are simply brain states [11]), “behaviorism” (that what we call mental states are, in fact, forms of physical behavior [12]) and “epiphenomenalism” (that mental states are an inconsequential residual by-product or ‘shadow’ of physical states [13]). This has culminated recently in a group of theories falling under the umbrella term “eliminativism,” which suggests that the very concept of consciousness should either be understood in terms of some behavioral or physical processes amenable to scientific quantification, or dismissed as a kind of brain-generated illusion [14].

Over the past few decades, however, numerous logical and empirical critiques of materialism have gained increasing influence in philosophy of mind and consciousness studies, despite the persistent cultural prominence of scientism and materialist metaphysics. Cognitive and neuroscientists have noted, for instance, that despite years of research recording accurate correlations between mental states and physical-brain states, the physical sciences still lack any empirically viable theory regarding how these might be causally connected, and what mechanisms may be involved. Philosophers have pointed out how, contrary to claims by materialists that conscious experience is reducible to some physical entity or force describable by the physical sciences, consciousness remains nonetheless beyond the ability of the sciences to define, measure or describe in any coherent physical way. Thoughts, feelings, imagination, etc., have no discernible volume, mass, charge or any physically measurable property to qualify as physical entities verifiable by the scientific method. Nor is there any explanation for the more epiphenomenal materialist claim that consciousness is a byproduct of material processes, as there is no scientifically discernible or logically sensible way that something non-material (like mind) can magically pop out of the material world. The mind, it seems, is an undeniable aspect of reality that can’t be explained away via any quantifiable or empirical material explanation.

As a result of these problems, philosophy of mind in the West, since the late 20th century, has begun to produce an increasing number of theories that trend in the direction of idealism—even if most are unwilling to embrace it completely. Panpsychism, for instance, is another umbrella term for a group of popular recent theories that attempt to reconcile scientific materialism with consciousness as a fundamental reality. Panpsychism is the general thesis that mind-consciousness, while still ontologically distinct from the rest of the physical universe, is nonetheless integral to it, and a number of prominent formerly materialist neuroscientists and philosophers have expanded their metaphysical purviews to accommodate it. David Chalmers (who coined the term “hard problem”) [15], brain scientist Christof Koch [16] and eminent philosopher Galen Strawson [17] are former materialists-turned-panpsychists. Philip Goff [18] and Itay Shani [19] have advocated a form of panpsychism known as cosmopsychism—in which consciousness is not only a fundamental element of material reality, but also foundational to it.

In addition to panpsychist theories that portray consciousness as coextensive with the material world, more specific physics-based theories portray mind as emergent from increasingly abstract conceptions of the material world. For example, Roger Penrose and Stuart Hameroff’s “orchestrated objective reduction” theory locates the origin of conscious awareness in state vector collapse of the Schrödinger wave function at the subatomic level, which takes place in the microtubules of the brain [20]. Giulio Tononi’s “integrated information theory” explains consciousness as the product of information integrated at high levels of complexity [21]. Bernard Carr traces consciousness to dimensions of hyperspace from contemporary string theory [22].

Beyond these, even more recent theories describe the status of consciousness in terms of straight-forward idealism. Bernardo Kastrup, for instance, conceives of consciousness as the single primordial substrate of all reality—encompassing completely the physical world described by science. Advocating a form of absolute idealism in the tradition of Schopenhauer (in a refined form that he calls “analytic idealism”), Kastrup conceives of material phenomena as kinds of mental qualities—resolving the “hard problem” by turning it on its head. Instead of attempting to explain how mind is possible in a material world, he explains how materiality, and the supposed separation between the mental and material, is all actually a form of conscious experience. Further, the fundamental consciousness that creates the material world is a single substrate that only experiences material reality via individual minds, which in turn are dissociated aspects of this conscious substrate itself, like the individual identities experienced by a person with dissociative identity disorder. Material reality is a construct of the ultimate mind, and individuated minds experience this reality separately because they are estranged from their conscious source [23].

Interestingly, this trend in contemporary philosophy of mind suggests that the entire way in which Western metaphysics and mind are conceived may be evolving eventually toward some sort of self-transcendence, perhaps via a rapprochement with corresponding perennial ideas in Asian philosophical traditions. Several recent thinkers have drawn significant connections between Western cosmopsychism and idealism on one hand, and Hindu Advaita Vedanta philosophy (especially in its more recent neo-Vedanta formulations) on the other. Miri Albahari, for instance, has examined important similarities between Western cosmopsychism/idealism and Advaita Vedanta, while also noting substantial problems that the former sometimes face and that the latter resolves. Western cosmopsychists (and even idealists like Kastrup to some extent), she claims, conceive of pure cosmic consciousness as a kind of ultimate or basic subject that posits the material world and other individual minds as its objects. However, in subtle contrast, Advaita Vedanta contends that the subjective and objective aspects of this reality are one and the same—both unified in the cosmic consciousness of which they are a part—just as the character’s perspective in a dream, and the seemingly external dream-world that this character perceives, are both ultimately aspects of a single unified consciousness that encompasses them both [24]. In Advaita Vedanta, rather than cosmic consciousness being a subject that posits each human mind—along with the apprehensions of each mind—as objects of its own apprehension, “nirvikalpa samadhi” (the experience of Brahman or absolute Being, in its primordial state of unmitigated purity), like dreaming consciousness, is instead conscious experience prior to any subject/object duality, which also provides the basis for all the conscious subjects and their material objects of apprehension, generated as aspects of itself.
Rather than a subject positing the world as its object, Brahman is the cosmic unity in which subject and object are unified. Advaita Vedanta’s cosmic consciousness is “one without a second” and beyond the subject/object relation that characterizes traditional Western conceptions of consciousness.

This kind of nuanced but significant difference that Albahari highlights between Advaita Vedanta and Western cosmopsychism/idealism can be illustrated further by contrasting Advaita Vedanta’s metaphysical categories with those of cosmopsychism and idealism. Via its various interlocutors from Gaudapada and Shankara to modern neo-Vedanta philosophy, Advaita Vedanta views reality on the respective levels of ‘maya’ (the illusion of material and cognitive reality as the entirety of reality itself), “savikalpa samadhi” (the knowledge that one’s perception of cognitive-material reality is an illusory or truncated representation of true reality—Brahman), and “nirvikalpa samadhi” (the experience of Brahman via pure experience itself, which transcends all knowing, even while encompassing it), which is nothing less than the vital experience of oneness with cosmic consciousness as it continuously creates all existence. Similar Western schools of thought all retain in some way a conception of consciousness (via various forms of subjective, absolute and analytic idealism, or panpsychism and cosmopsychism) shared by Advaita Vedanta, in recognizing the ultimately mental nature of both the cognitive/physical world and the cosmic consciousness generating it. However, the persistent understanding of consciousness in these Western conceptualizations always retains some sense of consciousness inhering in a substrate—whether this be the physical universe, as in many forms of panpsychism and cosmopsychism, or even perhaps Kastrup’s idealism, which posits a subjective substrate (“that which is conscious”) underwriting the contents of consciousness as its objects—which fails to resolve the dilemma of subject-object dualism as completely as the all-encompassing Advaita Vedanta cosmic mind does.1 From the Advaita Vedanta perspective, Western panpsychism and idealism remain at the ontological level of savikalpa samadhi, rather than nirvikalpa samadhi.

Thus, the trajectory of Western philosophy of mind appears to be culminating in a Vedanta-inspired universal conception of consciousness that transcends dualism, materialism and even idealism as heretofore conceived. Philosopher of science Michael Silberstein, for instance, subscribes to a “neutral monist” cosmology (based on current developments in theoretical physics and a ‘block universe’ interpretation of quantum cosmology, to which he has also drawn parallels with Advaita Vedanta metaphysics), which posits a more primordial source of all reality that precedes and grounds what we call material and mental—a source that is best described as what philosopher William James called ‘pure experience,’ or what Silberstein thinks may be best described as a kind of “presence” in and through which mental and material, subject and object, operate in contingent relation to one another [25]. My subjective mind and the objective material world it encounters co-arise with one another, and create one another, whenever there is an asymmetry or dichotomy in the primordial ‘presence’ that engenders them (also understood as “dependent co-arising” or “dependent origination” in Buddhist philosophy). “As I awaken in the morning, the world appears to me, and this asymmetric dichotomy between my mind and the material world arises,” physicists Adam Frank and Marcelo Gleiser and philosopher Evan Thompson write,

At a deeper level, we might ask how experience comes to have a subject-object structure in the first place. Scientists and philosophers often work with the image of an ‘inside’ mind or subject grasping an outside world or object. But philosophers from different cultural traditions have challenged this image. For example, the philosopher William James (whose notion of ‘pure experience’ influenced Husserl and Whitehead) wrote in 1905 about the ‘active sense of living which we all enjoy, before reflection that shatters our instinctive world for us.’ That active sense of living doesn’t have an inside-outside/subject-object structure; it’s subsequent reflection that imposes this structure on experience. More than a millennium ago, Vasubandhu, an Indian Buddhist philosopher of the 4th to 5th century CE, criticised the reification of phenomena into independent subjects versus independent objects. For Vasubandhu, the subject-object structure is a deep-seated, cognitive distortion of a causal network of phenomenal moments that are empty of an inner subject grasping an outer object. [26]

Ultimately though, perhaps this ‘neutral’ kind of ‘presence’ might, as Advaita Vedanta suggests, actually be a deeper kind of consciousness—“pure experience” in James’ terms or ‘pure awareness’ in Advaita Vedanta terms. Since cosmic consciousness or Brahman (like ‘presence’ for Silberstein), as the primordial groundless ground of all existence, remains beyond the subject-object distinction and is the source of all possibility, while remaining both immanent in, and irreducible to, any comprehension of it, it certainly would seem to exhibit the qualities that Silberstein’s neutral monism prescribes. Silberstein’s work suggests that the problematic nature of the hard problem perhaps involves the realization, foundational to so many Eastern philosophies and religions, that the living experience of consciousness transcends any theory—physical or philosophical—about it. As the ground of possibility for all theories, cosmic consciousness is not reducible to any theory itself.

Preeminent neo-Vedanta, idealist and comparative philosopher of world religions, Sarvepalli Radhakrishnan, maintained that Advaita Vedanta’s concept of “nirguna Brahman” (Brahman as primordial consciousness encountered beyond all conceptual representations) provides the world’s oldest original, perennial and universal mode of encountering existence that simultaneously transcends and includes all world civilizations’ religious, philosophical, scientific and other conceptual frameworks [27]. In this way, universal consciousness lies beyond our ability to comprehend it via any rational, discursive or abstract ideation that we may use to represent it conceptually—always exceeding any representation of it, although it engenders and encompasses these representations. This explains the inability of dualist, materialist, and even most Western idealist theories of mind to ever fully countenance consciousness. So long as we try to reduce consciousness—which is cosmic presence or pure awareness—to any abstract theory that fits neatly into a conceptual scheme, we separate our understanding from the very phenomenon we are attempting to understand, and so the ‘hard problem’ of consciousness in Western philosophy and science will never go away. As the Vedas famously proclaim:

Who knows for certain, who shall here declare it?
Whence was it born, and whence came this creation?
The gods were born after this world’s creation:
Who can know from whence it has arisen?

None knoweth whence creation has arisen.
And whether he has or has not produced it.
He who surveys it in the highest heaven,
He only knows, or haply he may know not. [28]



Editor’s notes
1 Bernardo Kastrup does not endorse this interpretation or characterization of analytic idealism. Under the latter, all experiences—and, therefore, all seeming ‘objects’—are merely excitations of a universal field of subjectivity, just as ripples are excitations of water. As such, for the same reason that there is ultimately nothing to ripples but water, there is ultimately nothing to physical objects—and even seemingly individual subjects, such as you and me—but the field of subjectivity itself. The subject-object dualism is thus completely resolved under analytic idealism. Allusions to ‘substrates,’ under analytic idealism, are merely metaphorical, meant to aid understanding, but do not entail or imply the ultimate existence of anything but pure subjectivity.

How can you be me? The answer is time


Reading | Philosophy


That you believe you were your five-year-old self is grounds to believe that you can be another person, right now, while still being you, argues our executive director in this stimulating theoretical essay.

How can one universal subject be you, and me, and everybody else, at once? This is perhaps the most difficult aspect of analytic idealism to wrap one’s head around, for it implies that you are me, at the same time that you are yourself. How can this possibly be? After all, you can see the world through your eyes right now, but not through mine.

Although reference to dissociative disorders, empirically validated as they are, forces us to accept that this somehow can indeed be the case—for it is the case in severely dissociated human minds—the question of how to visualize the dissociation remains difficult. How can you visualize a process by virtue of which you are me while being yourself concurrently? How are we to get an intuitive handle on this?

Notice that what makes it so difficult is the simultaneity of being implied in the hypothesis: you can easily visualize yourself being your five-year-old self—an entity different from your present self in just about every way—because being your five-year-old self is not concurrent with being your present self: one is in the past, the other is in the present. Visualizing oneself taking two different points of view into the world does not offer any challenge to our intuition, provided that these points of view aren’t taken concurrently.

Here is an example. When I was a child, I used to observe a very curious behavior of my father’s: he would play chess against himself, a common and effective training technique in a time before computerized chess engines. Doing so helps a chess player learn how to contemplate the position on the board from the opponent’s point of view, in order to anticipate the opponent’s moves. My father would perform this exercise quite literally: he would play a move with the white pieces, turn the entire board around by 180 degrees, and play a move with the black pieces. Then turn the board back to white again, and so on.

My father—a single subject—was taking two different points of view into the world, experiencing the battle drama of the game from each of the two opposing perspectives; one subject, two points of view. We have no difficulty understanding this because the two perspectives weren’t simultaneous, but instead occupied distinct points in time.

Yet, we’ve known for over a century now that time and space are aspects of one and the same thing: the fabric of spacetime. Both are dimensions of extension in nature, which allow for different things and events to be distinct from one another by virtue of occupying different points in that extended fabric. For if two ostensibly distinct things occupy the same point in both space and time, then they can’t actually be distinct. But a difference in location in either space or time suffices to create distinction and, thereby, diversity. By occupying the same point in space, but at different times, two objects or events can be distinguished from each other; but so can they be distinguished if they exist simultaneously at different points in space.

The way to gain intuition about how one subject can seem to be many is to understand that differences in spatial location are essentially the same thing as differences in temporal location. This way, for the same reason that we have no difficulty in intuitively understanding how my father—a single subject—could seem to be two distinct chess players, we should have no intuitive difficulty in understanding how one universal subject can be you and me: just as my father could do so by occupying different perspectives at different points in time—that is, by alternating between black and white perspectives—the universal subject can do so by occupying different perspectives at different points in space; for, again, space is essentially the same thing as time.

Yet, the demand for this transposition from time to space still seems to be too abstract, not concrete or intuitively satisfying enough; at least to me. We need to make our metaphor a little more sophisticated.

A few years ago, I had to undergo a simple, short, but very painful medical procedure. So the doctors decided to give me a fairly small dose of a general anesthetic, which would knock me out for about 15 minutes. I figured that that would be a fantastic opportunity for an experiment: I would try to focus my metacognition and fight the effects of the drug for as long as I could, so as to observe the subjective effects of the anesthetic on myself. I had undergone general anesthesia before, in my childhood, but had no recollection of it, so this was a unique chance to study my own consciousness with the maturity and deliberateness of an adult.

And so there I was, lying on an operating table, rather excited about my little experiment. The drug went in via the IV and I focused my observation of the contents of my own consciousness, like a laser. Yet, as the seconds ticked by, I couldn’t notice anything. “Strange,” I thought, “nothing seems to be happening.” After several seconds I decided to ask the doctors if it was normal for the drug to take so long to start causing an effect. Their answer: “We’re basically done, just hang on in there for a few more moments so we can wrap it up.”

“WHAT?” I thought. “They are basically done? How can that be? It hasn’t been a minute yet!” In fact, more than 15 minutes had already elapsed; they had already performed the whole procedure. I experienced absolutely no gap or interruption in my stream of consciousness; none whatsoever. Yet, obviously there had been one. How could that be? What had happened to my consciousness during the procedure?

The drug altered my perception of time in a very specific and surprising way. If we visualize subjective time as a string from which particular experiences—or, rather, the memories thereof—hang in sequence, the drug had not only distorted or eliminated access to some of those memories, but also cut off a segment of the string and tied the two resulting ends together, so as to produce the impression that the string was still continuous and uninterrupted. I shall call this peculiar dissociative phenomenon ‘cognitive cut and tie.’ The memories of certain experiences in a cognitively associated line are removed from the line, and the two resulting ends seamlessly re-associated together, so the subject notices nothing missing.

Now let us bring this to bear on my father’s chess game. Imagine that we could manipulate my father’s perception of time in the following way: we would cut every segment of time when my father was playing white and tie—that is, cognitively associate—these segments together in a string, in the proper order; we would also do the same for the black segments. As a result, my father would have a coherent, continuous memory of having played a game of chess only as white, and another memory of having played another—albeit bizarrely identical—game of chess only as black. In both cases, his opponent would appear to him as somebody else. If you were to tell my father that it was him, himself, on the other side of the board all along, he would think you mad. For how could the other player be him, at the same time that he was himself, playing against his opponent?

The answer to how one universal subject can be many—to how you can be me, as you read these words—resides in a more sophisticated understanding of the nature of time and space, including the realization that, cognitively speaking, what applies to one ultimately applies to the other. As such, if you believe that you were your five-year-old self, then there is an important sense in which, by the same token, you must believe that you can be me. There is only the universal subject, and it is you. When you talk to another person, that other person is just you in a ‘parallel timeline’—which we call a different point in space—talking back to you across timelines. The problem is simply that ‘both of you’ have forgotten that each is the other, due to dissociative ‘cut and tie.’

A different subjective position in space is just a different point in a multidimensional form of time, and vice-versa. Indeed, such interchangeability between space and time is a field of rich speculation in physics. Physicist Lee Smolin, for instance, has proposed that space can be reduced to time. Physicist Julian Barbour, in turn, has proposed the opposite: that there is no time, just space. There may be a coherent theoretical sense in which both are right.

The most promising theoretical investigation in this area is perhaps that of Prof. Bernard Carr, from Queen Mary University of London, a member of Essentia Foundation’s Academic Advisory Board. If his project is given a chance to be pursued to its final conclusions, it is possible that physics will offer us a conceptually coherent, mathematically formalized way to visualize how one consciousness can seem to be many.

Looking upon personal identity through the lens suggested above may convince you that, when an old wise man turns to a brash young lad and says, “I am you tomorrow,” such a statement may have more layers of meaning than meets the eye at first.

The futile search for the non-mental: Derrida’s critique of metaphysics (The Return of Metaphysics)


Reading | Metaphysics

Peter Salmon | 2022-04-04

Peter Salmon discusses Jacques Derrida’s critique of metaphysics: the argument that finding some objective, ‘uncontaminated,’ pure presence of being or reality in the world is impossible, for all of our experiences of the world are determined by our own mental contexts, our conceptual dictionaries, memories and expectations. However, the attentive reader will notice that, in criticizing metaphysics this way, far from refuting it, Derrida may actually make a case for idealism: the recognition that our reality isn’t just contaminated by the mental, but is mental in essence and being; for “the distinction between essence and existence, and between the ideal and the real (‘whatness’ and ‘thatness’) are illusions.” This essay is part of our The Return of Metaphysics series, produced in collaboration with the Institute of Art and Ideas (IAI). It was first published by the IAI on the 30th of March, 2022.

In January 1954, the philosopher Jacques Derrida, then 24 and just back from a summer in his Algerian home, visited the Husserl Archives in Louvain, Belgium. The archive had been founded in 1938, shortly after Husserl’s death, in order to protect his corpus from the Nazi authorities. Smuggled out by the Franciscan Father Herman Leo van Breda, the archive contains more than 45,000 shorthand pages, Husserl’s complete research library and 10,000 pages of typescripts.

But it was a small paper of no more than 30 pages, working title The Origin of Geometry, which was to spur a revolution in Derrida’s thinking. It would inform, with astonishing consistency, his work for the rest of his life, across a vast range of subjects – from traditional philosophical subjects such as meaning, language, ethics and religion, to issues such as gender, colonialism, film and hospitality. His first book was a translation of Husserl’s paper, its 30 pages ‘supplemented’ – to use a Derridean term – with an introduction of over 100 pages. In this introduction lay the seeds of all his later philosophy, and the terms forever associated with his name – deconstruction, différance, iteration and, crucially ‘the metaphysics of presence’ – Derrida’s vital contribution to the calling into question of the whole basis of Western metaphysics.


Husserl, phenomenology and the metaphysics of presence

How do we know stuff about the world? Husserl wrote in a letter to the mathematician Gottlob Frege that he was ‘tormented by those incredibly strange realms: the world of the purely logical and the world of actual consciousness… I had no idea how to unite them, and yet they had to interrelate and form an intrinsic unity.’ His first attempts had been via mathematics. By analyzing what a number is – something that ‘exists’ or something humans ‘create’ – he thought he would be able to establish a relationship between consciousness and the world. It was Frege’s criticism of this attempt due to its ‘psychologism’ – that is, its dependence on the internal mental states of the subject, rather than the logical relations at hand – which spurred Husserl to his subsequent investigations.

What if, Husserl argued, we put aside the question of ‘the world’ entirely, and look simply at consciousness? Whether something exists or not is both moot and distracting. Husserl introduced the concept of the ‘epoché’ – from the ancient Greek, meaning ‘suspension of judgement’. We ‘bracket’ the world; what is important is not whether this tree exists, but how we encounter it, how it affects us. The job of philosophy is to describe these affects and to build concepts from them, which we can later extend outwards.

Crucial here is the idea of ‘intentionality’: as Franz Brentano had pointed out, we do not, pace Descartes, merely ‘think’; we ‘think about.’ All consciousness has a content, and in analyzing this content, Husserl wanted to unite the strange realms of thought and world. He called this method ‘phenomenology’ – the study of phenomena – and by the time Derrida arrived at Louvain it was one of the dominant strands of twentieth-century philosophy, spurred on by students of Husserl such as Emmanuel Levinas and, crucially, Martin Heidegger.

The Origin of Geometry is a late unpublished work, but it grapples with the same problems as his early work. Geometrical objects are, for Husserl, the perfect example of ‘ideal’ objects: they are defined precisely by their non-spatiotemporal nature (there are no perfect circles in the world) and are thus purely ‘transcendental.’ How do we – humans – think them and use them? How do we – finite beings – create transcendental things? What is their origin? This is not a historical question – Husserl is not looking for the person to whom the first geometrical object occurred. It is a question of meaning.

While Derrida would always acknowledge his debt to Husserl – ‘Even in moments where I had to question certain presuppositions of Husserl, I tried to do so while keeping to phenomenological discipline’ – his critique of The Origin is wide-ranging and multi-stranded. One strand catches Husserl out for asserting that ideal objects require writing down in order to establish their existence – contrary to Husserl’s usual assertion, shared with most philosophers, that writing is a secondary activity compared to speech, indeed a parasitic derivation of it. This bias, which Derrida would later term ‘phonocentrism,’ would expand into his great work Of Grammatology.

Derrida also critiques the idea of the ahistorical, a strange state which contravenes, Derrida argues, all human experience. Derrida, in a method that would become familiar in his later works of deconstruction, seeks out moments in the text where history, as it were, sneaks back into Husserl’s analysis – slips of the pen which, like the example of writing, reveal aporias (irresolvable contradictions) in Husserl’s thinking, as surely as Freudian slips indicate the same in our thinking.

But his main focus is on the idea of origin, which – incorporating the two previous critiques – he uses as a lever to prise apart fundamental aspects of Husserl’s philosophy across his entire corpus, and from which he develops his critique of ‘the metaphysics of presence.’

Phenomenology, argues Derrida, posits a position from which we are able to study the affects of the world upon us, and from which we can investigate phenomena, including concepts. This position – the ‘now’ – is, somehow, pure, uncontaminated by anything that is not the now. And yet here, as in works such as The Phenomenology of Internal Time Consciousness, Husserl had very deliberately assessed that whatever the ‘now’ is, it isn’t pure.

We exist, as Husserl memorably puts it, in a ‘flowing thisness,’ from which we posit ‘now’s.’ But these ‘now’s’ are not independent entities, which can be extracted and analyzed. Rather, we are to think of them like notes in a piece of music. A particular note gets its meaning from its position in the overall piece – our memory of what has come before, our anticipation of what follows. Otherwise, we would have the same experience of hearing a C note whether it was part of a Beethoven symphony or a piece of death metal (not Husserl’s example). It is, in temporal terms, contextual.

Husserl calls what has come before ‘retention’ and what follows ‘protention’ and each ‘contaminates’ the now as surely as the notes before and after that C note. What Derrida highlights in his critique of phenomenology here is that, despite retention and protention always being already part of the now, Husserl retains an unexamined faith that there is still – sort of – a now, which retention and protention contaminate. A pure ‘now’ is still, in some sense, posited, even as its impossibility is asserted. ‘Contamination’ supposes something to be contaminated.

This is not, as can be seen, a case where, with greater knowledge, with greater effort dedicated to the question, we could get to the pure now. The pure now is impossible. This ‘fixed point’ on which phenomenology bases its claims is always impossible, can never not be ‘contaminated.’ The concept of the pure now is a hope.

Derrida’s crucial insight is that this ‘hope’ is not an idiosyncrasy of phenomenology, nor only of its analysis of time. Rather, it is endemic to philosophy itself. We exist in a ‘flowing thisness’ and philosophy, again and again, posits ideal, timeless, pure forms, which life somehow contaminates – as though there were a something ‘before’ or ‘outside’ of life. This is the structure of most religious philosophies – the ideal being God, the contamination being humanity. Platonic forms are ‘ideal’ examples of things like circles, to which no actual circle could aspire. The critique of temporal purity is as valid when applied to the spatial dimension.

The history of metaphysics, then, is a history of our hopes for presence – for a pure, central, present object of enquiry, from which we can derive our knowledge – the self included. Derrida’s critique of speech and writing captures this – unlike writing, speech is seen as ‘pure’ language, and thus an expression of our ‘true’ being – the religious might call it the soul, the non-religious some other term that really means soul. In fact, Husserl at one point goes further, arguing that even speaking words is a form of contamination, as we may be misunderstood. It is only the speech in our own head that is the pure self – an argument Derrida fully critiques in his Speech and Phenomena, perhaps his most thoroughgoing analysis of the metaphysics of presence.


There is no going “beyond” metaphysics

As Derrida recognized, Heidegger had, both directly and indirectly, made a similar critique of Husserl and of Western metaphysics. Husserl had attempted to arrive at pure phenomena and describe beings independent of any presuppositions – ‘to the things themselves,’ as Husserl famously put it. But, as we have seen, pure phenomena do not exist. This, for Heidegger, was one of the ways in which the ‘question of the meaning of Being’ has been lost. In its search for a fundamentum absolutum, an indubitable grounding for metaphysics, the openness of Being, as the Greeks understood it, has been occluded. In addition, the distinction between essence and existence, and between the ideal and the real (‘whatness’ and ‘thatness’) are illusions; Being precedes both. The mistake lies as far back as Plato – the birth of Western philosophy, with its categories, its hierarchies and taxonomies – the very moment at which Being is forgotten.

For Derrida – whose ‘deconstruction’ is deliberately based on Heidegger’s ‘destruktion,’ a method of taking apart while leaving intact – Heidegger, despite himself, is unable to go beyond metaphysics as he explicitly attempts to do. But then, as Derrida himself is aware, neither does Derrida. Firstly, we have no language to do so that is not already informed by metaphysical propositions:

There is no sense in doing without the concepts of metaphysics in order to shake metaphysics. We have no language – no syntax or lexicon – that is foreign to this history; we can pronounce not a single destructive proposition that has not already had to slip into the form, the logic and the implicit postulations of precisely what it seeks to contest.

Secondly, there is no ‘going beyond’ metaphysics, as this would be to repeat the gesture about which he warns – to posit an ‘entity’ outside of (before, beyond) the mess of life. To take the example of the ‘now’ again: any analysis of the ‘now’ can only deal with the ‘now’ we have to deal with, impure as it is.

What Derrida does do, in recognizing this urge to posit the pure based on the impure, is to open up the possibility of a metaphysics that recognizes absence as fundamental to its structure. Derrida has some big gestures for this, such as his idea of hauntology, a near homonym of ontology, which studies ‘what there isn’t’ instead of ‘what there is’ (while recognizing the distinction is ultimately as contested, and revealing, as all dichotomies); thus histories that did not occur, beings that do not exist, futures and existents that never come to be – including pure democracy, the pure gift, pure hospitality. These limit cases, always beyond what can actually be, disclose knowledge about what there actually is, including concepts.

But his critique is also more intrinsic than that. Where there is ‘essence’ and ‘identity,’ Derrida posits ‘alterity’ and ‘difference.’ More, he posits ‘différance,’ a word he first uses in Speech and Phenomena. Pronounced exactly the same way as ‘difference’ (this is Derrida forcing the written word to be more decisive than the spoken) it is a complicated term, which incorporates the idea of differing and deferring. Western metaphysics has, in Derrida’s reading, always been a history of trying, as it were, to secure the meaning of words – ‘truth is…’, ‘beauty is…’.

However, as anyone who has picked up a dictionary knows, every word is defined by another word, which is defined by another word – the meaning of word x is both deferred as we move along the chain, and is an effect of difference – we get its meaning in contrast to other words. There is no ur-word at the end of the dictionary, both sufficient to itself (it needs no other word to define it) and generative of everything else (thus producing meaning).

This is not accidental – ‘différance’ is built into language, as it is built into all concepts. It precedes meaning – for Derrida, fixing a meaning is a form of violence, and we should look not only at the act of doing so, but what it means that we attempt to. Deconstruction is a form of suspicion – Derrida sometimes described it as a parasitical method; anything is open to being deconstructed. But, as he pointed out, it is not imposed from without. Any text deconstructs itself the moment it attempts to fix meaning.

One could call Derrida’s work a metaphysics of absence as opposed to a metaphysics of presence, but it is the ways in which the two intertwine that is of interest – as is the effort metaphysics has expended on suppressing the absent: the gaps between ideas, the ghosts and specters that are called up within its thinking, the things that stand outside its purview in one era and why they are excluded. We are used to the Freudian concept that our words are not to be taken at face value – the unconscious, that exemplary sort of absence, is playing its part. Like a psychoanalyst of metaphysics, Derrida wants to know what is really being said.

If Western metaphysics is a search for fixed meanings, Derrida is not against this search. The search for the pure end term of religion – God – creates religion, the search for such things as Truth, consciousness and the self, generates philosophy. For Derrida, these searches are ‘tasks’ in the sense that we always already find ourselves – to use a Heideggerian term – ‘thrown’ into them. Part of our impulse is and will always be to seek an origin, or a culmination, or at least solid ground. At the moment we do so – given we can actually experience none of those things – we are performing a gesture, attempting to renounce the equivocal, expressing a hope, be it finding an origin of geometry or overcoming metaphysics.

Where Heidegger argued that we are reaching the end of metaphysics, Derrida argued that metaphysics – philosophy – always already works in the shadow of this death. It is a structural component of metaphysics to imagine its own completion, present and correct. Or, as Hegel put it in 1820:

Only in the maturity of reality does the ideal appear as counterpart to the real, apprehends the real world in its substance, and shapes it into an intellectual kingdom. When philosophy paints its grey in grey, one form of life has become old, and by means of grey it cannot be rejuvenated, but only known. The owl of Minerva takes its flight only when the shades of night are gathering.

Western metaphysics will always search for the ideal, and believe itself to be edging forward towards it. Perhaps one day presence will triumph. But as Derrida noted, “The end approaches, but the apocalypse is long lived.”

Mind may be older than we think

Reading | Abiogenesis | 2022-03-20

We may have reasons to believe that life is an intentional work of art; and not a very original one, for it may be based on a form of planetary imitation! This article is a continuation of the authors’ popular previous essay.

The purpose of this essay is to show that the mind may be older than is currently thought. More specifically, we wish to demonstrate that there is sufficient reason to seriously consider ascribing an earlier date to the appearance of individuated consciousness within the timeline of our universe.

Because of the privacy of conscious experience, no one can be sure that consciousness is exclusively the preserve of humans, higher primates or mammals; nevertheless, its existence is typically considered a possibility solely within the animal kingdom. This is because animals have developed nervous systems to a greater or lesser degree, and because our own conscious experience is closely related to the neural activity of our brains.

The argument that the mind appeared earlier than 600 million years ago, that is, prior to the appearance of the first animals endowed with primordial neural networks, is considered entirely unfounded. In our universe, which is about 13.7 billion years old, the appearance of the mind is thus thought to be quite recent, and an event that pertains strictly to life as we know it here on Earth.

It could certainly be argued that forms of animal life may have appeared far earlier on other planets. However, investigating this possibility is beyond the scope of the current paper. Rather, we will continue to pursue the line of research that we began to illustrate in the essay entitled ‘Conscious storms and the origin of life,’ revisiting it here from a different perspective. In the latter essay, we showed that the mind-matter problem should be given more serious consideration, since it can potentially question the certainties we have about the nature of living organisms. In fact, we argued that, although governed by blind, purposeless laws of nature, the Earth’s atmosphere—just like the biological brain—may be associated with a subjective first-person perspective and may have, therefore, purposely created life on Earth.

The aforementioned essay implicitly opens up the possibility that the mind is older than currently thought, and that it is not necessarily unique to the animal kingdom, or exclusive to living organisms as a whole. We intend to develop this theme here by asking the following question:  how do we render more plausible the possibility that there actually was—and perhaps still is—conscious experience related to the activity of the atmospheric system? In our earlier essay, we highlighted the similarities between this system and our brain; here we will follow a different approach.

If we were to consider the possibility that life on Earth is the work of an author—possibly the Earth itself—there would still be something about life that we would need to further comprehend. In other words, our current understanding would be limited to a mere reduction of life to a physico-chemical process, yet its meaning as the creation of an author would remain absolutely obscure, simply because we never entertained the possibility that it could have such a meaning. Resorting to a metaphor, this would be akin to studying a painting, such as the “Narciso” by Caravaggio, by simply attempting to ascertain the chemical composition of the colors and the physical structure of the canvas. Our understanding of the physico-chemical aspects of the object might well be advanced, and we might also be able to identify some recurring patterns in the image, yet the meaning of the work—a depiction of Narcissus looking at his own reflection in a body of water—would escape us entirely.

We therefore propose the following approach: we will attempt to gain an understanding of what life is by acknowledging that it actually could be the product of a conscious author, for whom it is endowed with meaning. In attempting to shed light on what this meaning could be, we will try to understand what life represents for its supposed author. If we were to find a satisfactory answer to this question, we would have made the initial hypothesis more plausible; namely, that life was intentionally created by planet Earth, and that the latter should no longer be conceived of solely as a physical object, but also as a conscious subject whose mind, observed from a third person perspective, appears as the electromagnetic activity of the Earth’s atmospheric system.

Paraphrasing Wilczek’s beautiful question (Wilczek, 2016) “Is the universe a work of art?”, here we are narrowing the question down to “Is life a work of art?” In order to understand what life can represent for its author, we must go in search of what they perceived. Is there something that comes from the outer environment, which has a causal power on the activity of the Earth’s ‘brain’? Are there any physical entities coming from outer space, which impact the Earth and trigger atmospheric discharges?

Despite being one of the most familiar and widely recognized natural phenomena, lightning discharges remain relatively poorly understood (J. R. Dwyer, 2014); the main problem with lightning physics is that the electric fields that occur between clouds (10–100 kV/m) are typically an order of magnitude lower than those required for the dielectric breakdown of the atmosphere (2 MV/m). Thus, the physical mechanism that initiates many lightning strikes is not yet completely clear.

Furthermore, during thunderstorms X-ray flashes (~50 keV) and γ-ray flashes (0.05–10 MeV) are detected. Gurevich et al. (A.V. Gurevich, 1999) argue that the existence of high-energy emissions indicates that relativistic electrons must play a significant role in thundercloud discharge. The initiation mechanism they propose is a relativistic runaway electron avalanche (RREA). Without delving into the study in any great detail, the proposed mechanism essentially entails the presence of highly energetic particles, which can lower the electric field threshold necessary for an atmospheric discharge to occur to 200 kV/m.
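The gap between the measured fields and the thresholds quoted above can be made concrete with a quick back-of-the-envelope calculation (the figures are the rough order-of-magnitude values cited in the text, not precise measurements):

```python
# Rough comparison of the electric fields involved in lightning initiation.
# All values are the order-of-magnitude figures quoted in the essay.

observed_field = 100e3    # V/m, upper end of fields measured between clouds (10-100 kV/m)
breakdown_field = 2e6     # V/m, conventional dielectric breakdown threshold of air
rrea_threshold = 200e3    # V/m, reduced threshold with relativistic runaway electron avalanches

# Conventional breakdown needs ~20x the strongest observed fields...
print(breakdown_field / observed_field)
# ...whereas the RREA mechanism brings the threshold within a factor of ~2.
print(rrea_threshold / observed_field)
```

This is why cosmic-ray-seeded avalanches are an attractive candidate trigger: they close most of the gap between what thunderclouds actually produce and what a discharge requires.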

Thus, an atmospheric discharge can be stimulated by high energy particles provided by an extensive atmospheric cosmic ray shower. According to this approach, cosmic rays play a decisive role in initiating atmospheric discharges, in which clouds (under certain conditions) essentially constitute natural versions of an old type of particle detector: the spark chamber. If we acknowledge that the discharges directly induced by these events may induce others in turn, the whole process of discharging is analogous to what happens in our brain when visible photons impact our retinas, and we know that corresponding to such brain activity we experience a visual perception of an object or an event.

Therefore, if we acknowledge the possibility that there is subjective experience related to the atmospheric discharges with which we are dealing, we can infer that the content of this experience will be some form of perception of the cosmic ray impact events. A perception of this type is obviously very difficult for us to imagine, at least as difficult as trying to imagine what it is like to be a bat, whose perception mechanism based on echolocation is very different from ours, as Nagel points out in his well-known paper. Humans do not have dedicated receptors to perceive these physical events and must reconstruct them using specific detectors and accurate data processing, which happens daily in several cosmic ray physics experiments.

We can attempt to find elements in common between the physics of these events and the salient characteristics of life. Essentially, we wish to understand whether life can be considered the product of a creative process in which what is perceived by the author is reworked and represented. If in the previous essay we ventured into a comparative study between the Earth’s atmospheric system and the brain of an author, it is now time to try our hand at a comparative study between the physics of cosmic rays and living organisms.

In an attempt to arrive at a deeper understanding of what life is, we offer the following analogy: Earth takes on the role of van Gogh as he perceives the actual sunflowers and produces their representation, while cosmic ray showers take on the role of physical sunflowers and living organisms are their representation. That is to say, living organisms are a product of the Earth inspired by cosmic ray shower physics. In fig. 1 these three elements are illustrated: the perceived phenomenon (showers), the author’s brain (the atmospheric system), and his creation (the first living organisms).

Figure 1.  The proposed scenario for the origin of life.

Such cosmic rays (mainly protons coming from outer space) interact with atomic nuclei in the atmosphere (mainly nitrogen nuclei), producing other particles, which in turn interact with atmospheric nuclei, leading to the production of further particles. As long as these products have enough energy, the process repeats, and the result is a ‘shower’ of particles, typically referred to as a cosmic ray shower. During the first phase of a shower, the number of particles grows exponentially. This growth ceases when the energy of the produced particles becomes too low. A decay phase then follows, since most of the particles produced in the shower have a finite lifetime. This latter phase, in which the population of particles is decimated, is likewise exponential.

Similarly, in a bacterial culture, after a first phase of exponential population growth, a phase of exponential decrease follows. More generally, in a population of organisms exponential growth and decrease compete with one another according to the environmental conditions in which the organisms are found. As in the case of cosmic ray showers, exponential growth dominates when there is enough energy available in the environment in which the organisms live, otherwise exponential decrease dominates.
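This shared growth-then-decay pattern can be sketched with a toy, Heitler-style cascade model. The parameter values, the energy-halving rule and the decay probability below are illustrative assumptions chosen for simplicity, not fitted physics; the point is only the qualitative shape of the population curve:

```python
def shower_population(e0, e_crit, decay_prob, generations):
    """Toy Heitler-style cascade: the particle count doubles each generation
    while the energy per particle exceeds a critical value; after that, each
    particle survives a generation only with probability (1 - decay_prob)."""
    n, e_per_particle = 1.0, e0
    history = []
    for _ in range(generations):
        history.append(n)
        if e_per_particle > e_crit:
            n *= 2                 # exponential growth phase
            e_per_particle /= 2    # energy shared among the offspring
        else:
            n *= (1 - decay_prob)  # exponential decay phase
    return history

# Illustrative run: growth up to a maximum, then exponential die-off,
# qualitatively mirroring a bacterial population in a closed culture.
pop = shower_population(e0=1024.0, e_crit=1.0, decay_prob=0.5, generations=15)
print(pop)
```

The same rise-and-fall curve would emerge from a bacterial growth model in which the available energy in the medium plays the role of the shower’s primary energy.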

We now want to delve deeper into those characteristics that are common to cosmic ray atmospheric phenomena and phenomena pertaining to living organisms. Since even the simplest living organisms are extraordinarily complex systems, it is quite natural to focus our attention on the more complex physical objects produced in a cosmic ray shower. As we know, the constituent particles of matter are grouped into two large classes: quarks and leptons. While leptons are isolated particles in cosmic air showers, quarks form so-called ‘bound states.’ Physical systems made up of quarks constitute the class of hadrons: composite subatomic particles made up of two or more quarks held together. The most famous hadrons are the proton p and the neutron n, the constituents of the atomic nucleus. We want to demonstrate that, among all the particles produced in a shower, there is at least one that can be considered a living organism: the neutral pion (π⁰), which is the lightest and thus most easily produced hadron. While the proton and the neutron are made up of three quarks, the neutral pion consists of a quark and its respective antiquark, as shown in the top drawing in fig. 2a. Such particle pairs are thought of as confined in a bag, which separates the pion from the outer environment.

So let us consider what happens if we supply energy to a physical system such as the neutral pion (fig. 2a). When the energy available within the environment of this system is absorbed in an interaction, it is used to draw apart the constituent quark and antiquark. When this process reaches a certain threshold, a portion of the absorbed energy is converted into the mass of a new quark-antiquark pair. The newly created quark binds to the pre-existing antiquark, while the newly created antiquark binds to the pre-existing quark, giving rise to two ‘children’ hadrons starting from a single ‘parent’ hadron. Since the pion is the lightest hadron, it is possible to ensure that the two children hadrons are still two neutral pions, identical to the initial one, provided that the absorbed energy is below a certain threshold. In this case, each of the children hadrons possesses the same quark structure as the parent hadron, which is replicated during the latter’s reproduction process. Once the new quark-antiquark pair is produced, the splitting of the bag completes the fission process. If, on the other hand, this energy is not supplied in time, the initial pion simply decays: its existence comes to an end as it decays into a photon pair. The latter is what typically occurs in a cosmic air shower, given the short lifetime of the neutral pions. These are some of the interesting peculiarities of these objects located above the clouds. Let us now observe what is happening below the clouds, within living organisms.

Fundamentally, a living organism is a thermodynamic system that dissipates energy available in its environment in order to reproduce itself. This energy dissipation is due to metabolism, a process in which the organism autonomously synthesizes the molecules necessary for its reproduction. In practice, in order to reproduce itself, an organism must necessarily replicate its DNA, and to do so it must be able to autonomously synthesize the four distinct nucleotides of which the DNA is made up: A, T, G, C. In this synthesis process a certain amount of available energy is transformed into the mass of nucleotides. The term ‘metabolism’ indicates precisely this transformation. The transformation of energy into mass is much more marked in nuclear and particle physics reactions than in chemical reactions; yet, from the point of view of fundamental physics, it is the same transformative process. Finally, the splitting of the membrane completes the binary fission process of a simple living organism (fig. 2b).

Figure 2. (a) Fission of a pion. (b) Binary fission of a living organism.

The process of fission of a neutral pion can similarly be described in terms of metabolism, replication and, finally, reproduction through the splitting of the ‘bag’ that separates it from its outer environment. For these reasons we believe that, to all intents and purposes, it can be considered a living organism.

Undoubtedly, another striking difference between a neutral pion and a living organism is that, in the latter, its defining replicated internal structure is a double antiparallel strand (DNA), whereas in the case of a neutral pion it is simply a quark-antiquark pair. In this regard, it is useful to investigate the neutral pion’s quark structure.

As a quantum system, the state of the neutral pion is a superposition of states of a system made up of a quark-antiquark pair. The quantum state of each quark is defined by its so-called ‘flavor’ and ‘spin’ properties. In practice, each neutral pion is therefore a superposition of 4 spin-flavor states. It is possible to place these four states in one-to-one correspondence with the four possible pairs of nucleotides that make up DNA (fig. 3). Furthermore, quarks have another, non-observable quantum number: ‘color,’ represented by R, G or B. Each of the four spin-flavor states can be in one of three color states, just as each pair of nucleotides can be in one of the three possible positions within a DNA codon, which consists precisely of three pairs of nucleotides.
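The counting argument behind this correspondence can be made explicit in a few lines. The state labels and the particular pairing below are our own illustrative choices; all the essay asserts is that both sides enumerate 4 × 3 = 12 combinations, which is exactly what makes a one-to-one mapping possible:

```python
# Bookkeeping for the proposed pion/DNA correspondence.
# Labels and pairing order are illustrative, not physically derived.

spin_flavor_states = ["u-ubar spin-up", "u-ubar spin-down",
                      "d-dbar spin-up", "d-dbar spin-down"]
colors = ["R", "G", "B"]              # non-observable quark 'color' states

base_pairs = ["A-T", "T-A", "G-C", "C-G"]
codon_positions = [1, 2, 3]           # position of a pair within a codon

pion_states = [(s, c) for s in spin_flavor_states for c in colors]
dna_states = [(b, p) for b in base_pairs for p in codon_positions]

# Both enumerations have 4 x 3 = 12 elements, so a bijection exists.
mapping = dict(zip(pion_states, dna_states))
print(len(mapping))
```

Any of the 12! possible bijections would do for the structural point being made; the figure in the text simply depicts one such choice.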

Figure 3. Correspondence between the quantum states of the neutral pion and the DNA structure.

Given this peculiar correspondence, a question naturally arises as to whether pions can form filaments, i.e., linear structures that are also able to replicate. Although these structures have not been observed experimentally to date, there are numerous theoretical predictions about their existence. For those who are not experts on the subject, it must be specified that what has been said so far with respect to hadrons is well-known and consolidated physics, whereas this last part regarding more exotic linear structures is the subject of ongoing scientific investigation and their physical existence is still very much disputed.

Owing to the very nature of these pion strings, they can appear in cosmic ray impact events in which an exotic state of matter is produced: the quark-gluon plasma (QGP), a very dense and hot state of matter discovered in 2005 (Arsene et al., 2005). This was also the state of matter during the very first moments of the universe’s timeline; a state that can be produced in high-energy impacts between particles, both in dedicated collision experiments and in cosmic ray impacts. In this exotic state of matter, quarks do not bind to form hadrons but float as free particles. As in the first moments of the universe’s life, the QGP fireballs produced in cosmic ray impacts cool by expanding. During this cooling process hadrons, and possibly pion strings, are formed, much as ice crystals of various kinds form as water cools. In this state, free-floating quarks can participate in the replication process of parts of the string (fig. 4a), just as free-floating nucleotides inside the cell participate in the replication process of cellular DNA (fig. 4b).


Figure 4. (a) Replication of a pion string in a bath of free-floating quarks. (b) Replication of DNA surrounded by free-floating nucleotides.

Note also that, in the interaction between two strings, these can join in a process called ‘intercommutation’ (fig. 5a), much in the same way that DNA molecules come together in horizontal gene transfer processes, such as DNA transposition (fig. 5b).


Figure 5. (a) String intercommutation. (b) DNA transposition.

This type of enquiry certainly cannot be exhausted in a few lines. However, numerous characteristics and patterns that unite the physics of cosmic rays and living organisms support the overall picture that we have outlined so far. Living organisms could actually be the product of a mind, in particular of a mind that knows cosmic ray atmospheric phenomena; a mind whose corresponding brain is the Earth’s atmospheric system, stimulated by cosmic ray events.

Here we have attempted to present a possible semantic level of understanding of what life could be as the product of the mind of an author: the representation of a process typical of the strong interaction between quarks, rendered through the electromagnetic interaction among molecules by means of numerous ingenious tricks, such as encoding in a replicating double filament of DNA all the information necessary for the replication of the strand itself, and for the reproduction of the whole organism.

Undoubtedly, the meaning of each work of art can be grasped at different levels. Here we simply wish to observe that the nuclear objects that we have just reviewed constitute the first ordered structures to appear in the first moments of the universe’s timeline. It would not be surprising, therefore, if life had a deeper meaning than that which has been proposed so far. For example, it could be a great work of celebration of the origins of the universe itself, or an attempt to restore an initial state that was rapidly lost. We simply do not know. Nonetheless, the fact that it is possible to distinguish between reality and representation from the time when the first living organism appeared on Earth is certainly a meaningful clue that the mind can be dated to at least 4 billion years ago.



Gurevich, A.V., et al. (1999). Lightning initiation by simultaneous effect of runaway breakdown and cosmic ray showers. Physics Letters A, 79–87.

Arsene, I., et al. (2005). Quark-gluon plasma and color glass condensate at RHIC? The perspective from the BRAHMS experiment. Nuclear Physics A, 757, 1–27.

Dwyer, J.R., & Uman, M.A. (2014). The physics of lightning. Physics Reports, 147–241.

Nagel, T. (1974). What is it like to be a bat? Philosophical Review, 83, 435–450.

Vachaspati, T. (1992). Vortex solutions in the Weinberg-Salam model. Phys. Rev. Lett., 68, 1977.

Wilczek, F. (2016). A beautiful question: finding nature’s deep design. Penguin Random House.

Zhang, X., et al. (1998). Pion and eta′ strings. Phys. Rev. D, 58.

Idealism rediscovered (The Return of Metaphysics)


Reading | Philosophy


The Return of Metaphysics is a series of heavy-weight essays, produced in collaboration with the Institute of Art and Ideas (IAI), to mark the present-day resurgence of metaphysics as a serious intellectual endeavor in the Western cultural dialogue. After many decades of a kind of stupor, the fundamental questions of what we are, what nature is in its essence, have re-awoken in academia and demand to retake their role in orienting our lives. In today’s essay, Prof. Paul Redding highlights the recently rediscovered importance of German Idealism, particularly Hegel’s idealism, in articulating solutions to present-day problems. This essay was first published by the IAI on the 4th of March, 2022.

In 2019, the American philosopher Robert Brandom published his long-awaited interpretation of Hegel’s Phenomenology of Spirit, entitled A Spirit of Trust. While many others have offered interpretations of this massively ambitious and daunting work from the early nineteenth century, none have elaborated its messages on the basis of a philosophical theory they themselves had forged in the context of highly technical debates at the heart of contemporary analytic philosophy. Brandom’s attraction to Hegel seems to have been from the start inextricably bound up with what had drawn him to heroes of analytic philosophy such as Ludwig Wittgenstein, W. V. O. Quine, Wilfrid Sellars, and Donald Davidson—typical of the figures engaged with in his 1994 game-changing treatise in the philosophy of language, Making It Explicit: Reasoning, Representing and Discursive Commitment. Many would have scratched their heads: How could such apparently antithetical approaches as Hegel’s “absolute idealism” and contemporary analytic philosophy ever come to occupy the same intellectual space?

It is an oft-repeated story that the analytic approach to philosophy, which would come to dominate academic philosophy departments in the English-speaking world, had emerged around the turn of the twentieth century with the rebellion of Bertrand Russell and G. E. Moore against the British variant of Hegelianism then dominant at Cambridge. A crucial motivation for Russell in this attempt to drive Hegelianism from philosophy had been his attraction to the program of “logicism” developed by the German mathematician Gottlob Frege. But when Russell attempted to use Frege’s logic to secure the foundations not only of mathematics, but also of the natural sciences, he ran into trouble.


The new logic and its limits

The rapid growth of mathematics in the nineteenth century had led many to question the ultimate grounds of its increasingly abstract and counter-intuitive truths. Logicism was meant to address this need by demonstrating the “foundations” of mathematical knowledge in the a priori science of logic. However, the most recent logic available in England, an algebraic form of traditional Aristotelian logic introduced in the mid-nineteenth century by George Boole, was regarded as inadequate to the task. In contrast, Frege had developed a revolutionary new “logical language”—in his original formulation, a “concept-script” (Begriffsschrift)—which radically broke with the Aristotelian form of logic that had endured for over two thousand years. Frege’s logic would be the starting point of the multi-volume Principia Mathematica, on which Russell would work together with Alfred North Whitehead over the first decade of the new century.

Russell’s attitude towards Aristotelian logic in any of its forms was that it had provided a Trojan horse for importing non-scientific assumptions into philosophy. Here he often portrayed Hegel’s philosophy, which he understood as based on Aristotle’s logic, as exemplifying the way a metaphysical house of cards could be constructed upon a faulty logical basis. As the new Frege-Russell approach to logic would soon be accepted as the way forward, it can be difficult to imagine there being a place for Hegel in the new intellectual environment, given that Hegel had placed his logic at the centre of his philosophical system. Nevertheless, it would not take long for Russell’s vision for a new philosophy to start breaking down, and by mid-century there had emerged paths along which a return of Hegel’s approach might be imagined.

Russell’s vision may have been born in the context of the felt need to supply a logical foundation for mathematics, but any philosophy, of course, must be capable of addressing a much wider range of issues. The obvious development for Russell was to extend the role of logic to the foundations of the natural sciences that were at that time undergoing revolutionary changes, but here the strains of the new approach would soon start to show. In contrast to pure mathematics, the natural sciences are crucially based on empirical experience, and Russell would soon attempt to marry the logicist project to a form of empiricism, with ideas of a “logical empiricism” or “logical positivism” developing especially among various influential groups in places such as Vienna and Berlin in the 1920s and 30s.

But such approaches faced the problem of how to conceive of the sensory “givens” of perception in relation to the entirely abstract conception of thoughts as envisioned by Frege. Traditional empiricism had pictured thoughts as made up of “ideas” or concepts that were derived from sensation, but Frege criticized any tendency to identify logical processes with psychological ones. Frege, in a rather Platonic fashion, conceived of laws of logic as dictating how rational beings ought to think; they were not generalizations of how humans actually think. This may not have been a problem when pondering the mind’s grasp of mathematical truths (so-called necessary truths, constrained by the very nature of mathematics), but could “thought” in this sense accommodate empirical, sense-derived content? In other words, could Frege’s logic account for contingent truths of experience upon which empiricists based the sciences of nature? It didn’t seem to.

In the algebraic version of Aristotelian logic introduced in the 1850s by Boole, there had been a place for simple object-centered judgments in which a predicate could be ascribed to a perceived object, as when one says of the sun that it shines. But Boole’s logic, seemingly restricted to such simple “one-place” predicates, and unable to capture more complex judgments involving relations, was deemed useless by Frege and Russell for the project of grounding mathematical truths.


Sellars, the myth of the given and Hegel’s return

By the mid-50s, appeals to Hegel in relation to these sorts of problems were starting to appear. Two examples of these—both given as lectures in London—illustrate possible paths for Hegel’s return. In 1956, in a series of lectures delivered at the University of London under the title “The Myth of the Given,” the American philosopher Wilfrid Sellars would refer to his response to the problems of logical empiricism as his “Méditations Hégéliennes,” and these lectures would be the taking-off point of Brandom’s later work. Three years later, in a now largely forgotten but forward-looking lecture on “The Contemporary Relevance of Hegel,” presented to a colloquium on “Contemporary British Philosophy,” the South African-born peripatetic philosopher John N. Findlay would discuss Hegel’s logic in relation to post-Principia developments in logic itself. Juxtaposed, the lectures might be regarded as directing a pincer-like attack on the empiricist and logicist flanks of Russell’s conception of philosophy.

At the core of Sellars’ criticism of the “myth” of the empiricist was the claim that empiricism conflated two distinct roles played by the mind’s alleged sensory states, conceived traditionally as “sensory ideas” or, more recently, as “sense-data.” On the one hand, they were meant to signal a causal role for the perceived object in the formation of perceptual knowledge. On the other, they were meant to play a justificatory role in relation to such knowledge claims—that is, to provide reasons, and not merely causes, for holding certain beliefs. If challenged, a person will typically cite other beliefs as reasons for holding a particular one—a situation that conforms with the idea that logical relations, as in the Fregean new paradigm, are conceived as holding primarily between the complete propositions expressed by asserted sentences, rather than between subjects and predicates of those sentences. But so-called “sense-data” were not supposed to have any logical form. Sense-data could only be seen to play the role of causes—the world’s brute impact on our senses—not reasons one could appeal to. How, then, could they play a role in justifying our knowledge claims?

Developing Sellars’ original ideas along a number of trajectories, Brandom would later suggest a way around the problem. One should forget about relations between individual statements and the world—statements and “facts”—and concentrate on relations among the statements used in talking about the world. One can exploit the logical links between asserted statements so as to develop a concept of their meanings. Thus, the meaning of a statement S could be thought of in terms of the range of other statements to which it was inferentially related. These include statements that could be inferred from it (what other statements a speaker was committed to in asserting S) and the statements from which the original statement could itself be inferred (the evidence the speaker might cite in support of S). It was this alternative approach that allowed earlier forms of idealism, like those of Kant and Hegel, back in.

In the first place, a “concept” was now no longer to be thought of as some sort of “faded” sensory idea, as the empiricists had thought. For Kant, a concept was a rule—a rule that, according to Brandom, governs the inferences that can be made among sentences expressing that concept. For example, my claim that this thing in front of me is a horse commits me to the further claim that it is an animal. The claim that it’s an animal commits me to the further claim that it is mortal, and so on. Next, while Kant, somewhat like the logical empiricists, conceived of thoughts as still in need of something “given” in experience (what he called “intuitions”), Hegel criticized this distinction between thought and experience. No aspect of experience remained unconceptualized.
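This reading of a concept as an inference-licensing rule can be illustrated with a toy sketch, which is in no way Brandom’s formal apparatus: asserting a claim commits the speaker to everything transitively derivable from it, as in the horse example above.

```python
# Toy sketch of concepts as inference rules (not Brandom's formal apparatus):
# each claim licenses further claims the speaker is thereby committed to.
commitments = {
    "x is a horse": ["x is an animal"],
    "x is an animal": ["x is mortal"],
}

def all_commitments(claim):
    """Transitively collect every claim one is committed to by asserting `claim`."""
    seen, stack = set(), [claim]
    while stack:
        current = stack.pop()
        for consequence in commitments.get(current, []):
            if consequence not in seen:
                seen.add(consequence)
                stack.append(consequence)
    return seen

# Asserting "x is a horse" commits one to animalhood and, through it, mortality.
assert all_commitments("x is a horse") == {"x is an animal", "x is mortal"}
```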

Brandom also followed other suggestions in Sellars, such as the need for logic to capture modal claims about necessary and possible truths, a theme that brings us to Findlay’s lecture.


Findlay, Prior and the holism of logic

Findlay had just published a book on Hegel when he delivered his 1959 lecture, but important background to his comments there is to be found in yet another lecture series given in 1956—this time in Oxford rather than London. These were the “Locke Lectures” on “Time and Modality” that helped to open up not simply modal logic itself but a variety of logics now thought of under the general heading of “modal.” The series was given by the New Zealand logician Arthur Prior, a former pupil of Findlay. Prior’s idea of the logics of time and modality, distinct from the type of logic introduced by Frege and Russell, drew upon earlier work of Findlay’s.

While not a logician himself, Findlay had long been interested in the subject and had introduced Prior to logic and its history when teaching in New Zealand in the 1930s. In the 1959 lecture, Findlay alluded to developments introduced by logicians in the decades after the appearance of Principia Mathematica, relating this history to the type of dialectical processes that had characterized Hegel’s logic. Principia, he pointed out, was not entirely made up of “clear-cut notions, fixed axioms and rigorous deductive chains.” In its “interstices” were ordinary English sentences in which Russell explained, interpreted and justified the way the symbols of the formal system functioned. After Principia, logicians had made the formal distinction between an “object language” that was talked about and the “meta-language” used to talk about it—a distinction, they claimed, that Russell had conflated. Hegel’s logic, Findlay contended, corresponded more to the logic of those “informal, non-formalizable passages of comment and discussion” found in the interstices of Russell’s text. This hierarchy of languages, Findlay thought, challenged the very idea of some ultimate, unitary logical language and thereby the whole logicist project.

Findlay’s approach was more like that of later developers of the Boolean tradition, such as the American pragmatist Charles Sanders Peirce and the Cambridge logician W. E. Johnson, who had developed the Boolean form of logic so as to allow it to deal with the types of complex assertions treated by Frege and Russell. But for them, logic did not purport to be a complete language within which mathematical truths could be ultimately grounded. It was mathematical in virtue of providing formal models meant to illuminate the logical processes of reasoning in different types of contexts, including those specifically addressed by logic in the style of Frege and Russell.

By distancing ourselves from our own reasoning—by objectivizing our thought patterns in a foreign medium—such logical calculi can shed light on the nature of our rational capacities. But without interpretation, mathematical symbols are just squiggles on a page. Logical processes represented formally need to be interpreted within, and assessed by, a linguistically expressed rational capacity that is presupposed. Any attempt to fully explain the latter in terms of the formalism is condemned to circularity. This, thought Findlay, was at the heart of the Hegelian dialectic. Hegel’s dialectic shows us how natural creatures like ourselves are able to reflect upon and improve those finite reasoning processes with which nature originally supplied us.

These lectures by both Sellars and Findlay would prove prophetic in many complementary ways. In the second half of the twentieth century, more holistic and historically sensitive approaches to knowledge, especially in the philosophy of science, would gain traction. Along with this, a plurality of modal logics as introduced by Prior’s “tense logic” would flourish, and especially find important places in emerging fields such as computer science, in which a basically Boolean approach to logic had been retained.

Habits and the myths that feed them, however, die hard. Many contemporary analytic philosophers still think of Hegel in terms of Russell’s early denunciations made on the basis of assumptions few would consciously hold today. In the longer run, history might well tell a different story to theirs about the fate of Hegel at the hands of Russell, and who ended up surviving that encounter.

The miraculous epicycles of materialism


Reading | Ontology


Faced with a growing mountain of refutations in the form of empirical evidence and clear reasoning, materialism tries to survive through a bizarre display of absurd imaginary entities, hypotheses and hollow rhetoric, writes our executive director in this week’s mid-week nugget. This is a long, in-depth, but worthwhile read.

From Ptolemaic astronomy in antiquity all the way to Copernican astronomy during the Renaissance, the celestial bodies were thought to move in perfectly circular orbits. The motivation for this widespread assumption was a particular metaphysical commitment: the heavens were perfect, and only circles are perfect shapes; ergo, the celestial bodies had to move in circular orbits.

At the time, scholars didn’t think of this notion as an arbitrary assumption, but as an obvious reality instead; one that everybody had known to be true for almost two millennia. It was preposterous to think that all those people had been wrong all that time. Culture—not reason, not evidence—had made circular orbits not only extremely plausible, but even self-evident.

As empirical observations began to show that orbits aren’t circles, scholars started postulating so-called ‘epicycles’: the hypothesis that the celestial bodies move along circles, which in turn move along other circles, and these along yet other circles, and so forth. Despite the precariousness of the resulting models, the entire house of cards was still built on circles alone, so a cherished metaphysical commitment could be preserved.
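As a purely illustrative aside, the deferent-and-epicycle scheme is easy to state in modern terms: the planet’s position is a sum of uniformly rotating circles. The radii and angular speeds below are made up for illustration, not historical values.

```python
import cmath

# Deferent-plus-epicycle sketch: the planet rides a small circle whose center
# rides a larger circle. Radii and angular speeds are illustrative only.
def position(t, circles):
    """Sum of uniformly rotating circles, each given as (radius, angular_speed)."""
    return sum(r * cmath.exp(1j * w * t) for r, w in circles)

deferent_only = [(10.0, 1.0)]               # a single 'perfect' circle
with_epicycle = [(10.0, 1.0), (2.0, 7.0)]   # one epicycle bolted on

# A lone circle keeps the planet at constant distance from the center;
# adding an epicycle makes the distance vary, mimicking a non-circular orbit.
distances = [abs(position(t * 0.01, with_epicycle)) for t in range(628)]
assert abs(abs(position(0.0, deferent_only)) - 10.0) < 1e-9
assert max(distances) - min(distances) > 3.0   # distance now varies
```

Stacking ever more circles in this way can approximate almost any observed path, which is exactly why the scheme could absorb anomalies indefinitely.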

Eventually, of course, the continuous need to add more and more epicycles reached a tipping point. The sheer accumulation of ‘anomalies’ eventually forces one to abandon one’s metaphysical commitments. Thomas Kuhn has famously called this tipping point a ‘paradigm shift’: when reason and evidence force us to look upon reality with different eyes [1].

But make no mistake: before scholars accept a paradigm shift, they will always conjure up epicycle on top of epicycle in naked defiance of reason and evidence, so as to preserve the metaphysical commitment they identify with. The motivation for this is that, at any single step in the process, parting with that commitment feels less plausible than adding just one more epicycle. And so more and more epicycles are added, driven by the sense of plausibility that culture manufactures. The history of philosophy and science shows that this has happened repeatedly.

The present is no different. Our metaphysical commitment today is that physical stuff—abstractly defined as being purely quantitative and independent of mind—has standalone existence and somehow generates mind. Technically, this is called ‘mainstream physicalism’ but, colloquially, it’s often referred to as ‘materialism.’ As evidence accumulates in analytic philosophy, foundations of physics and the neuroscience of consciousness against materialism, scholars are busy fantasizing about the epicycles needed to safeguard it from the relentless clutches of reason and evidence.

Objectively speaking, these epicycles of materialism are now reaching the level of patent absurdity. But because they are couched in the sense of plausibility manufactured by our culture, they are still put forward not only with a straight face, but also with the triumphant pride that accompanies a great scientific advancement. My goal for the remainder of this essay is to present to you, as neutrally as I can bring myself to do it, what the latest epicycle proposals are.


‘Anomalies’ in foundations of physics

For over forty years now, we’ve known from repeatedly refined and confirmed laboratory experiments that the physical properties of the basic building blocks of the material world—think of the mass, charge, spin, speed and direction of movement of elementary particles—do not exist prior to being measured [2-19].

In general terms, these experiments go as follows: two entangled particles—say, A and B—are shot in opposite directions. After they’ve traveled for a little while, a first scientist—say, Alice—measures particle A. Simultaneously but far away, scientist Bob measures particle B. As it turns out, the physical property Alice chooses to measure about particle A determines what Bob sees when he measures particle B.

What this shows is that measuring particles A and B doesn’t simply reveal what their physical properties already were immediately prior to the measurement, but in some sense creates those physical properties. For as long as no measurement is performed, we cannot say that particles A and B even exist, for they are defined in terms of their observable properties.
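A worked example may help make precise the sense in which pre-existing properties are ruled out. The sketch below computes the standard CHSH quantity from the textbook quantum prediction for polarization-entangled photons, with illustrative analyzer angles rather than the exact setup of the cited experiments [2-19]: any local model in which the measured properties pre-exist satisfies S ≤ 2, whereas the quantum prediction reaches 2√2 ≈ 2.83.

```python
import math

# Textbook quantum prediction for polarization-entangled photon pairs:
# the correlation depends only on the relative analyzer angle.
def correlation(angle_a, angle_b):
    return math.cos(2 * math.radians(angle_a - angle_b))

# Standard CHSH angle choices in degrees (illustrative, not the cited setups).
a, a_alt = 0.0, 45.0
b, b_alt = 22.5, 67.5

S = (correlation(a, b) - correlation(a, b_alt)
     + correlation(a_alt, b) + correlation(a_alt, b_alt))

# Any local model with pre-existing measurement outcomes obeys S <= 2;
# quantum theory predicts S = 2 * sqrt(2), which experiments confirm.
assert S > 2
print(round(S, 3))  # 2.828
```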

As strange as this may sound at first, there is a simple and intuitive explanation for it: physical properties are simply the appearance or representation, upon measurement, of a deeper and more fundamental layer of reality. Analogously, the dials on an airplane’s dashboard are a representation of the actual world outside the airplane, insofar as they convey information about that world. And the dials show nothing if the airplane’s sensors aren’t measuring the real world outside, so the appearance or representation can’t be said to exist unless and until a measurement is performed. Only then do the dials produce an appearance.

What we call the physical world is thus analogous to the airplane’s dashboard: it shows nothing unless and until we measure the real world underlying the physical. Physicality or materiality is the result of a measurement displayed on the internal dashboard we call perception. We do not have a transparent windscreen to see the world as it is in itself; all we have is the dashboard and the sensors that feed it with data. In other words, we only have perception and our sense organs. Therefore, we confuse what is perceived with the actual world and proclaim that reality is physical, or material. Experiments have now shown that this is as nonsensical as a pilot, flying by instruments alone, insisting that his dashboard is the world outside, as opposed to an appearance or representation thereof.

The point of scientific experimentation, of course, is to unravel this understandable confusion and make clear to us that the real world out there is not what we call ‘material.’ But since this contradicts materialism, many scientists and philosophers feel that it is more plausible to add an epicycle to their theories than to accept what experiments tell us.


The ‘fantastic hidden stuff’ epicycle

Take popular YouTuber and physicist Sabine Hossenfelder, for instance. She proposes that there are mysterious ‘hidden variables’ that account, under a materialist metaphysical framework, for the experimental results discussed above. These hidden variables are not explicitly defined—beyond imaginary toy models with little to no bearing on reality [20]—even in principle. Like circular orbits and their epicycles, they are purely imaginary entities for which there is precisely zero empirical evidence.

Even the underlying motivation for postulating hidden variables is conspicuously questionable. Indeed, if we were to exclude the non-hidden properties of nature—mass, charge, spin, etc.—from our picture of reality, all kinds of obvious things would go immediately unaccounted for: without mass we couldn’t account for inertia; without charge we couldn’t account for electricity; without spin we couldn’t account for magnetism; etc. As such, there’s good reason to infer that these properties exist in some sense. However, and very tellingly, if we deem ‘hidden variables’ to be just what they appear to be—that is, fantasies—nothing in our world goes unaccounted for; nothing at all. We don’t need hidden variables for anything other than to safeguard a metaphysical commitment; one so internalized that many have grown to conflate it with fact.

To make her epicycles at all tenable, Hossenfelder asks us to part with a notion integral to our understanding of experimentation and reality itself. She considers this notion a mere “assumption” and refers to it in technical terms: “statistical independence” [21]. If you don’t know what this means, you might believe Hossenfelder’s claim that it’s merely some kind of arcane mathematical postulate we might as well do without. But what if you grasp what “statistical independence” actually means?

Suppose that you want to photograph the moon. You set the aperture and exposure of your camera so as to capture a clear image. But you don’t ever imagine that what the moon is—or does—up there in the sky will change in response to your particular camera settings, do you? The moon doesn’t care about the settings of your camera, or even the fact that you are photographing it; it is what it is and does what it does irrespective of how it is being measured. There’s no causal chain, starting at your camera and somehow finding its way to the moon, that turns the moon into something it isn’t—say, green—or forces it to do something it otherwise wouldn’t—say, rotate the other way around—just because of how you set your camera. Is this a mere “assumption,” or a basic, empirically-established understanding of how reality, experimentation and measurement work? Do cameras have the power to change reality by merely photographing it?

What I’ve just described is what “statistical independence” means. It states that the thing measured (the moon) doesn’t change in response to the settings of the detector (the camera) used to measure it; how could it? Nonetheless, Hossenfelder calls this fundamental understanding a mere “assumption” and asks us to abandon it: according to her, what Alice and Bob see depends on their measurement choices because particles A and B, which have standalone existence, change in response to the detectors’ settings; just as if what the moon is or does depended on the aperture and exposure settings of your camera. Mind you, Hossenfelder provides no coherent account of how this magic is supposed to happen; she just knows that it does, because her metaphysical commitment implies that physical properties must have standalone existence.

Notice that, under analytic idealism, the moon we see is not the thing in itself, but a representation thereof on the dashboard we call perception. There is indeed something real out there, which itself cannot be magically changed by measurement or detector settings, but which, when measured, presents itself to us in the form of the appearance we call the moon. Under analytic idealism, measurement doesn’t change reality; it simply produces an appearance or representation thereof, which in turn is relative to the measurement context. The physical world is the representation produced by measurement, not the reality measured in the first place.

But under Hossenfelder’s ‘hidden variables’ model, that great spheroid in the night sky, with a certain mass, speed and direction of movement, is the thing in itself, not a mere appearance or representation. In stating that camera settings—in the context of this metaphor—change the thing observed, she is attributing to these settings the magical power to change reality itself, not mere representations thereof.


The ‘compensate hollow argument with assertive rhetoric’ epicycle

Independently of the above, a series of even more recent and compelling experiments refutes Hossenfelder’s epicycles in a different way: these experiments show that, just as predicted by quantum theory, physical properties—which define what physical entities are—aren’t absolute, but relative, or “relational,” or “contextual” [22, 23]. In other words, they do not have standalone existence but arise, instead, as a function of observation in a way that depends on the particular vantage point of the observer.

If we go back to our dashboard metaphor, this isn’t surprising at all: what the dials display is a function of what the airplane’s sensors measure, which in turn is relative to the particular position and orientation of the airplane in space and time. So two different pilots in two different airplanes may get different dashboard readings of the same sky, because of their particular position and orientation in it. This doesn’t mean that they don’t share the same world; of course they do. It only means that their dashboards aren’t the world, but merely representations or appearances thereof. Dashboard indications can be different, while the real world measured is the same.

But these experimental results violate Hossenfelder’s metaphysical commitment, just as the observed orbits of the celestial bodies violated the notion of perfectly circular motion. So how does she deal with them? She uses her particular brand of rhetorical assertiveness and casual dismissal of results that don’t agree with her views, so as to brand the experiments invalid or “debunked.” In a recent video, she dismisses the results with a single sentence: photons—used as observers in the experiments—are not measurement devices because they don’t cause decoherence, therefore the experiment means nothing. Add a big red ‘X’ on top of the respective papers and voilà; job done. With one simple statement and a silly visual aid, Hossenfelder wants you to believe that she has dismantled the careful and judicious work of many theoreticians and experimentalists over years of effort.

What is ironic is that, because she makes the statement in such an assertive manner, alongside purely rhetorical but effective visual aids—see illustrations below, taken from her videos—many non-specialist viewers are bound to buy into it despite its obvious hollowness. But I digress.

It is true enough that decoherence is often operationally associated with measurement. But we know how to probe and collect information about a quantum system without causing decoherence; that is, without disturbing the system’s superposition state. We call these “interference experiments,” an example of which is the famous double-slit experiment showing wave interference patterns corresponding to a superposition. Something of this nature—though a bit more involved—is precisely what the researchers in question have judiciously done. As such, it is simply false to maintain that, because photons don’t cause decoherence, no conclusions can be drawn from the data gathered in these experiments.

You see, epicycles are not only about adding fantastic stuff—such as ‘hidden variables’—but also about arbitrarily dismissing inconvenient stuff—such as interference experiments. They represent attempts to protect a metaphysical commitment based not only on hand-waving forms of argument—tortuous as those may be—but also on pure rhetorical strength.

The ‘turtles all the way down’ epicycle

But what if one is intellectually honest to a fault, and incapable of proposing fantastic imaginary entities that have no empirical grounding? How does an honest and brilliant mind, culturally conditioned into a commitment to materialism, find its way out of the dilemma posed by evidence and reason?

Carlo Rovelli is both a physicist and a person I have sincere respect and admiration for, and one of the few truly open and honest top thinkers in the world today, I suspect. He acknowledged, almost 30 years ago, that physical entities cannot have standalone or absolute existence; instead, they are “relational,” or relative to observation. As such, they arise as a result of observation. Yet Rovelli is also a man of his time and cultural context, committed to the materialist notion that physical stuff is not reducible to—that is, explainable in terms of—anything else.

Rovelli’s way out of this dilemma is to say that reality is purely relational, or relative, which immediately raises the question: relative to what? Movement is relative, alright: two cars on a highway may or may not be moving relative to one another, even though they are certainly moving relative to the buildings along the highway. But for there to be any sense in the very concept of movement, there has to be something that moves in relation to something else; movement is not a thing unto itself, but a relational property that operates between things; and, of course, the things that move can’t themselves be movement.

But according to Rovelli, the whole of reality is made of relations. “Relations between what?” you might ask. Rovelli’s answer: relations between meta-relations, which are themselves relations between meta-meta-relations, and so on. It’s turtles all the way down. The world is made of relations but there is nothing that relates [24]. It’s like saying that the world is made of movement but there’s nothing that moves. Or, more accurately: the world is made of movement but the things that move are themselves movement. Huh?

No, really, this is Rovelli’s position, which I have confirmed directly with him. He isn’t bothered by the fact that he is clearly committing the fallacy of infinite regress. His epicycle is not just cumbersome and arbitrary; it’s illogical. Yet defying logic clearly feels more plausible to him than abandoning his metaphysical commitment. Such is the psychological power of metaphysics.

Epicycles in the neuroscience of consciousness

It was only a little over ten years ago that nearly every neuroscientist—and many ordinary people—thought that psychedelics caused the ‘trip’ by lighting the brain up like a Christmas tree. Then research started coming in showing precisely the opposite: psychedelics only reduce brain activity, in many different areas of the brain. They don’t increase activity anywhere [25-29].

Predictably, neuroscientists started looking for something physical that did increase in the brain following the administration of a psychedelic drug. After all, the immensely rich, structured, intense psychedelic experience must be caused by something in the physical brain; right?

Many materialist hypotheses were put forward and eventually abandoned: functional coupling, activity variability, etc. One emerged as the most promising candidate to save cherished materialist assumptions from the clutches of empirical results: the grandiosely named ‘entropic brain hypothesis’ [30].

Indeed, there is something about grandiose technical names when it comes to epicycles. What the researchers variously call ‘entropy,’ ‘complexity’ (wow!), or ‘randomness’ is… well, noise; brain noise; brain activity that follows no recognizable pattern; the brain equivalent of TV static. And, as it turns out, researchers could show that, on average, brain noise levels increase a little bit—the understatement of the century—under psychedelics [31].

This result has now been published in several respected neuroscience journals. The notion that there is any real effect here at all is based on an analysis of ‘statistical significance.’ It means that, in the experimental data accumulated by the researchers, a certain statistical quantity—the so-called ‘p-value’—has crossed a certain threshold. And that threshold was chosen in an entirely arbitrary fashion by someone in the 1930s [32]. Indeed, the pitfalls and arbitrariness of p-value analyses have been much discussed in recent times [33-36]. There are even calls to abandon p-values and statistical significance altogether, so unreliable are they in showing that there is any real effect at all [37]. But in this essay, for the sake of argument, I shall overlook all this and consider the effect real.

The question now is, is a small increase in brain noise levels a plausible account of the psychedelic experience under materialist premises? Let us first consider that, in some of the drug-placebo pairs studied, brain noise levels actually decreased [38]. Yet, those subjects, too, experienced a psychedelic ‘trip.’ If their brain noise levels didn’t increase, what accounts for their ‘trips’? The researchers do not offer an explanation.

Secondly, anyone who has ever experimented with psychedelics knows that real ‘trips’ are anything but random noise. Psychedelic experiences are extremely structured, beyond even ordinary perception. Psychonauts often speak of hyper-dimensional geometry, internally-consistent alternative realities, alien beings, intricate but coherent messages and insights, and so on [39]. If a small increase in brain noise levels—which by definition have no structure—generates these experiences, where does the structure of the experience come from, under materialist premises?

Thirdly, almost the entire neuroscience literature reports correlations between patterns of brain activation and experience, not brain noise and experience. Researchers can tell that a sleeping subject is dreaming of something as dull as staring at a statue [40] or clenching a hand [41] based on their patterns of brain activation. Artificial neural networks can even reconstruct one’s conscious inner imagery just by looking at patterns of brain activation [42]. Is it plausible that, for psychedelic trances alone, the salient correlate of experience is noise, while for all other states it is something else entirely? Can there be two entirely different biological bases for consciousness? You see, if one proposes a different materialist account of experience for each type of data, materialism becomes unfalsifiable.

Fourthly: I’ve been speaking of ‘small increases’ in brain noise levels. But I haven’t told you how small they actually are. On average, the observed increase in ‘complexity’ is 0.005 on a complexity scale of zero to one hundred [38]! This isn’t small; it’s minuscule. Even if one ignores the problems surrounding the notion of statistical significance and considers the effect real—as opposed to an irrelevant statistical fluke, which I bet is what it actually is—it is still minuscule. And it matters that it is minuscule, for the attempt here is to account for the formidable richness, intensity and structure of the psychedelic experience—one of the top five most significant experiences of one’s life, according to Johns Hopkins research [43]—with a minuscule increase in, of all things, brain noise.
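To see why sample size, rather than effect size, is what drives ‘statistical significance,’ consider the following minimal sketch. It uses hypothetical numbers—the 0.005-point shift from the text, plus an assumed spread—purely to illustrate the statistics; it is not a reconstruction of the study’s actual analysis:

```python
import math

# Illustrative sketch (assumed numbers, not the study's data): with enough
# samples, even a minuscule difference in means crosses the conventional
# p < 0.05 threshold, because the standard error shrinks as 1/sqrt(n).
def z_statistic(mean_diff, sd, n):
    # Two-sample z statistic for a difference of means, equal group sizes n.
    return mean_diff / (sd * math.sqrt(2.0 / n))

tiny_effect = 0.005  # the 0.005-point shift on a 0-100 scale
spread = 0.05        # assumed standard deviation (hypothetical)

for n in (10, 1000, 10000):
    z = z_statistic(tiny_effect, spread, n)
    # |z| > 1.96 corresponds to p < 0.05 (two-sided): 'statistically significant'
    print(n, round(z, 2), z > 1.96)
```

The same minuscule effect that is nowhere near ‘significant’ with 10 subjects becomes comfortably ‘significant’ with 1,000, without the effect itself getting any larger; the significance label tracks the amount of data, not the magnitude of the effect.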

If we didn’t live in a culture that has manufactured plausibility for the metaphysical commitments of materialism, this result would, in all likelihood, have been dismissed not only as an inconsequential statistical fluke, but also as lacking any explanatory force whatsoever. But as it is, the result is presented as a major neuroscientific breakthrough. Of all the epicycles listed in this essay, this one probably takes the crown of most daring hypothesis, for its sheer implausibility in view of the data presented to substantiate it.


Beyond the epicycles: a paradigm shift

This is the world and culture we live in today: one where fantastic imaginary stuff for which there’s zero empirical evidence, flat-out logical fallacies, arbitrary rhetorical dismissals of solid experimental results, and extremely implausible hypotheses are used to rescue one’s metaphysical commitments from the clutches of reason and evidence. Yet that cannot last long, for we know from history that, eventually, even the most ingrained metaphysical commitments give way to clear thinking.

We may be witnessing this today already, in subtle but clear and growing ways. Indeed, it was only a few days ago that I was debating Prof. Patricia Churchland in an event organized by the Institute of Art and Ideas, and hosted by Robert Kuhn, known for his PBS series ‘Closer to Truth.’ Churchland and I had been chosen as the most recognizable and unambiguous defenders of our respective views: I as an analytic idealist and Churchland as an eliminative materialist—that is, a materialist who not only maintains that the mind is a product of the brain, but even that (certain) experiences don’t actually exist.

Lo and behold, after some brief introductions and commentary, Churchland opened her participation in the debate by stressing that she… well, isn’t a materialist. She claimed that she doesn’t subscribe to any ‘ism’ but prefers, instead, to peruse the data; which she surely did for the rest of the debate, avoiding argument in favor of telling what can only be described as ‘storified’ accounts of research she considered interesting. It was all indeed very interesting but terribly anticlimactic.

This is not the first time this has happened. Some well-known materialists are suddenly turning into metaphysical agnostics. They are still willing to criticize non-materialist views and portray themselves as people who ‘simply follow the science,’ but not to unambiguously defend the materialist metaphysics that has characterized their entire public careers. Not only do they suddenly become agnostics; they try to rewrite history and portray themselves as having always been agnostics. This, in my view, is how people will slowly part with their metaphysical commitments while trying to save face. Materialism is now so ludicrously indefensible that no alternative is left to them. Expect to see much more of this in the years to come.

There is something else that I predict will happen. This prediction is based on private, personal conversations with intellectually honest materialists, so I won’t name sources. Once we acknowledge that the physical world is indeed relational, that it indeed doesn’t have standalone existence, and that it is indeed just a superficial appearance of a deeper layer of reality, there will be a concerted attempt to very matter-of-factly extend the meaning of the word ‘physical’ so as to encompass that underlying layer as well, whatever it turns out to be. To put it bluntly: whatever reality turns out to be, we will simply call it ‘physical’ and thereby render materialism unfalsifiable by mere linguistic definition. It’s like acknowledging that there is a real world outside, behind and beyond the dashboard, but referring to it as just more dashboard. Silly and extraordinarily misleading as this surely is, it will be an important means for many to feel comfortable with embracing a broader view of what’s going on, and for some others to save face and public careers. Expect to watch this pernicious but sincere—perhaps even well-meaning—charade unfold in the next couple of decades.

Ultimately, of course, it is our understanding of what is really going on that matters; our understanding of who and what we are, what reality is, and how we relate to the rest of nature. It’s not about labels or personal vindication. The meaning of our lives is what is at stake here. As such, it is irrelevant whether some will get away with face-saving charades.

Not only will our view of reality change dramatically; it is already changing as you read these lines. Thomas Kuhn’s paradigm shift is unfolding before our very eyes. We will only recognize it unambiguously in hindsight, but the writing is on the wall. Nonsense can last long and cause much harm, but reason and evidence are like the proverbial wave that slowly dissolves the rock: inexorable, irresistible and patient beyond our ability to conceive.




Can consciousness understand itself?

Reading | Philosophy

John Hogan, PhD | 2022-02-20

Has human consciousness evolved enough to understand what it is and isn’t? Dr. Hogan warns against a rush to judgment when it comes to answering the big questions of life, self, and reality at large.

The mind/body problem is intrinsically related to the concept of consciousness. The two most common philosophical positions that bound this concept are:

  1. Consciousness is dependent on the immutable laws of chemistry and physics, which have been subject to the forces of evolution. That is, consciousness has evolved in a similar fashion to the evolution of any of our senses. Any proponent of the New Atheism would exemplify this position, say Richard Dawkins discussing his book, The Blind Watchmaker.
  2. Consciousness is dependent on a reality separate from the known laws of chemistry and physics. For the purposes of this discussion, we designate this separate reality as C. Consciousness is dependent on C, but C is independent of the laws of chemistry and physics, which have been deduced by our species through logic and measurement. Any leader of a monotheistic religion would exemplify this position, say Pope Francis reciting the Nicaean Creed.

Anyone who addresses consciousness within this spectrum, while accepting the validity of the theory of evolution, is “hoist with their own petard,” to quote Shakespeare. A “petard” is a clump of gunpowder with a fuse. In medieval times, it was stuck on the wooden door of a castle and lit to allow entry by the invaders. If the military engineer who had this job didn’t run away fast enough, he was “hoist” into the air. The expression now refers to a self-inflicted, unexpected outcome, say the politician who passes a tough campaign finance law only to be convicted under it later.

What is our “petard” under the theory of evolution? According to it, we share a nearest common ancestor with any other living plant or animal. Let’s consider a person and their family dog, as well as the paternal genealogy of both. Roughly 20 million years ago, a specific mammal had multiple offspring. The progeny upon progeny of one of this mammal’s offspring became the person. The progeny upon progeny of another offspring of this mammal from 20 million years ago became the family dog.

There is a sense in which this is our “petard.” To understand why, let’s return to the family dog. It might stare or howl at a full moon, but it does not have the evolved mental capacity to understand Newton’s theory of gravity, much less understand that the moon is a sphere rather than a disc. There are truths that are unknowable to the dog. Much of what humans know to be true—from algebra to the political history of the Western world—is simply unknowable to the family dog. (Let’s be humble: the dog also has knowledge unavailable to the human; for example, the relationship of smell to danger or pleasure.) The dog’s cousin, the person, may believe they have the mental capacity to navigate the pathways to account for consciousness in a manner bounded by positions 1 and 2. But humility is in order.

The person’s conclusion as to what is true will be based on the neuro-chemical reactions that drive logic and emotion. These drives have been forged in their brain by evolutionary selection over the past 20 million years. There is no logical basis for holding that the dog’s mental capacity—formed by the same evolutionary forces—is limited, while the person’s is not. Evolutionary theory allows that descendants of some modern-day humans may be, in another 20 million years, as different from us as we are from the dog. They may know things about evolution and our place in the universe that are as unknowable to us as algebra is to the family dog. This is not the limitation of logic discovered by Kurt Gödel; it is the biological limit imposed by realizing that evolution is not yet done with our finite brain.

The person need not give up hope while lighting this petard. Evolution is consistent—albeit not inherent or exclusive—with either of the extremes captured by position 1—the atheist position—and position 2—the theist position. Let’s explore this with the understanding of time.

In position 1, consciousness follows biology on Earth, while in position 2, C precedes biology. Given this formulation, the implications of position 2 for theists are obvious. In the Christian tradition, a loving God starts the whole human experience. However, an atheistic interpretation of position 2 is not only rational under this formulation, but also has an analogy with one of the most important evolutionary events in history: the biological exploitation of water.

Water became essential for life on earth 100 million years ago, via an adaptation made by our sponge ancestors, which exploited water’s chemistry and physics to extract nutrients. The circulation systems of the family dog and the person can be traced to this adaptation and have certainly improved upon it.

Now, the consciousness experienced and amplified by our ancestors over the last 500,000 years may depend on a series of neural adaptations that have exploited the existence of C, much like our circulatory system exploited water. It may take another 10 million years of evolution for substantially evolved descendants of Homo sapiens to understand the mechanisms of this adaptation.

While the assertion that our consciousness is solely dependent on evolution (position 1 above) may be true, it is not provable given the evolutionary status of our brain, and therefore may in fact be false. Likewise, the assertion that our consciousness is derived from a reality separate from the known laws of physics, C, may be true, but it is not provable given the evolutionary status of our brain, and therefore may in fact be false. The only thing provable is that evolution is consistent with this uncertainty and this uncertainty is a hallmark of the human condition.

We should not give up personal responsibility as we lead our conscious lives, make decisions, and ponder the meaning of life. We should recognize the scientific foundation of evolution and willingly express evolutionary thoughts and emotions that drive what we consider to be good outcomes (say, kinship within the family), and constrain evolutionary thoughts and emotions that drive what we consider to be poor outcomes (say, fear of the outsider). However, we should also be open to the possibility of a reality that is greater than what we can observe with our senses. While pondering this greater reality, we should be wary of human-concocted spiritual dogma.

The fantasy behind Sabine Hossenfelder’s superdeterminism

Debating | Physics

In today’s mid-week nugget, our Executive Director critiques physicist Sabine Hossenfelder’s proposed ‘superdeterminism,’ which aims to account for the theoretical difficulties of quantum measurement without departing from physicalist metaphysical assumptions.

In an introductory short video by Essentia Foundation, the cumulative evidence from foundations of physics that contradicts metaphysical physicalism—namely, the notion that physical entities have absolute, standalone existence—was reviewed. See below. The relevant technical literature is linked in the description of the video.

The argument goes as follows: if physical entities did have standalone existence, then their properties should be simply revealed by measurement. They should have whatever properties they have regardless of whether they are measured or what is measured about them. Measuring a table shouldn’t create its weight or length, but simply reveal what its weight or length already were immediately prior to measurement—or so physicalism presupposes.

As it turns out, however, when measurements are made on entangled quantum particles—the building blocks of nature out of which everything else, including tables, is constructed, according to physicalism—the measurement outcome from one particle depends on what is measured about the other. The choice of what to measure on the first particle determines what the second is, so their physical properties couldn’t have had existence prior to the measurement. Instead, what we see is that physical properties are the result of the measurement itself, not pre-existing realities merely revealed by measurement.
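The quantitative teeth behind this claim come from Bell-type tests, which the cited literature discusses. As a minimal sketch (an illustration I am adding, not taken from the essay or its references): if each particle carried pre-existing, locally defined properties, a certain combination of measured correlations—the CHSH quantity—could never exceed 2; quantum theory, and experiment, give 2√2 instead.

```python
import math

# Quantum prediction for the correlation between outcomes when two
# spin-entangled particles (singlet state) are measured at detector
# angles a and b: E(a, b) = -cos(a - b).
def correlation(a, b):
    return -math.cos(a - b)

def chsh(a1, a2, b1, b2):
    # The CHSH combination of four correlations. If each particle had
    # pre-existing, locally defined properties, |S| could not exceed 2.
    return (correlation(a1, b1) - correlation(a1, b2)
            + correlation(a2, b1) + correlation(a2, b2))

# Standard detector settings that maximize the quantum prediction.
S = abs(chsh(0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4))
print(S)  # 2*sqrt(2) ≈ 2.83: exceeds the bound for pre-existing local properties
```

The point of the sketch is merely that the violation is quantitative, not interpretive: the measured correlations are arithmetically incompatible with outcomes that existed before the choice of measurement.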

What this tells us is that physical entities aren’t fundamental in nature. Instead, they are merely the appearance or representation, upon observation, of a deeper layer of nature that is, by definition, non-physical.

Here is a metaphor to clarify this point. The physical world is akin to what is displayed on the dials of an airplane’s dashboard: the dials only display something when a measurement on the world outside is performed by the airplane’s sensors. If nothing is measured, then nothing is displayed on the dials. This, of course, doesn’t mean that there is no world outside! Surely there is one; it’s just not what the dials display. Instead, the world is what is measured in order for the dials to display something. The short video below illustrates this metaphor.

Similarly, the experimental results from foundations of physics don’t mean that there is no world outside; surely there is one! It’s just that the world, as it is in itself, prior to measurement, is not physical; for physicality is what is displayed on our own internal dashboard of dials, which we call perception. The ‘sensors’ are our five sense organs: eyes, nose, ears, tongue and skin. Physics is, essentially, a science of perception—that is, a science of the dials—as renowned physicist Andrei Linde, who developed cosmic inflation theory, once observed [1, p. 12]:

Let us remember that our knowledge of the world begins not with matter but with perceptions. I know for sure that my pain exists, my ‘green’ exists, and my ‘sweet’ exists … everything else is a theory. Later we find out that our perceptions obey some laws, which can be most conveniently formulated if we assume that there is some underlying reality beyond our perceptions. This model of material world obeying laws of physics is so successful that soon we forget about our starting point and say that matter is the only reality, and perceptions are only helpful for its description.

Physicist and science communicator Sabine Hossenfelder, however, takes a different view regarding the meaning of the experimental results in question, as discussed in two draft papers [2, 3] and a recent short video:

Let me start my commentary on Hossenfelder’s material by confessing that I sympathize with her work and tend to like her incisive, provocative style. Her voice opposes and slows down the descent of physics into fantasy informed by notions of beauty, as opposed to predictive models informed by hard-nosed empirical observations. As someone disappointed by the failure of supersymmetry, which inspired my younger years at CERN, I realize how important it is to keep our eyes on the proverbial empirical ball. However, despite Hossenfelder’s zeal to stay true to empiricism, she may have now failed it for the sake of safeguarding a physicalist metaphysical commitment.

Hossenfelder’s view is that the experimental results in question can be accounted for by ‘superdeterminism.’ The idea is that the particles must have some hidden properties that we know nothing about. These hidden properties presumably take part in a complex causal chain that encompasses the settings of the detectors used to make measurements. In other words, the detectors’ settings somehow influence something hidden about the particles measured. And since the choice of what to measure is necessarily reflected on those settings, the measurement results depend on that choice. She summarizes superdeterminism thus: “What a quantum particle does depends on what measurement will take place.”

If one posits that quantum particles are nature’s fundamental building blocks, then Hossenfelder’s summary above is strictly incorrect: we define quantum particles in terms of their properties, and their properties depend on what measurement will take place. Therefore, the accurate formulation would be that “what a quantum particle [is] depends on what measurement will take place,” not what it does. Formulated this way—i.e. correctly—the statement starts to look a little less intuitive than Hossenfelder makes it out to be. After all, how can what a particle is depend on what is measured about it? Shouldn’t measurement simply reveal what the particle already was, just as it does in the case of a table? How can the particle change what it is merely because the detector is set up in a different way?

But let us be charitable towards Hossenfelder. According to Quantum Field Theory, there are no such things as particles; the latter are merely metaphors for particular patterns of excitation of an underlying quantum field. And one can reasonably state that the quantum field does those excitations, for excitations are behaviors of the field. Therefore, “what the quantum [field] does depends on what measurement will take place.” That’s accurate enough and seems, at first, to restore plausibility to Hossenfelder’s argument.

The problem is that she is asking us to imagine the existence of things for which we have no direct empirical evidence; after all, they are “hidden.” She is also asking us to grant these invisible things some very specific and non-trivial capabilities: they must somehow—neither Hossenfelder nor anyone else has ever specified how—change, in some very particular ways, in response to the settings of the detector. Mind you, detectors are designed precisely to minimize disturbances to the state of what is measured. Hossenfelder’s imagined hidden variables would have to somehow overcome this barrier as well.

Let us use a metaphor to illustrate what we are being asked to believe. When I take a picture of some celestial body in the night sky—say, the moon—I can set my camera in a variety of different ways. I can, for instance, set aperture and exposure time to a variety of different values. What Hossenfelder is saying, in the context of this metaphor, is that there is some hidden and mysterious something about the moon that changes in response to what aperture or exposure I set on my camera. What the moon does up there in the sky somehow—we’re not told how—depends on how I set my camera here on the ground. This is superdeterminism in a nutshell and you be the judge of its plausibility.

Be that as it may, appealing to hidden variables is inevitably an appeal to a vague, undefined unknown; and not just any vague unknown, but one capable of non-trivial interactions with its environment. The same criticism Hossenfelder levels against, for instance, superstring theory can be leveled, verbatim, against hidden variables: we’re appealing to imaginary entities for which there is no direct empirical evidence. To put it in plain English, we have no reason to believe in this fantastic invisible stuff, except to try to save physicalism.

Hossenfelder could argue that the peculiarities of quantum mechanical measurements—the very problem at hand—are themselves evidence of hidden variables. But this, obviously, would beg the question entirely: quantum measurements can only be construed as evidence of hidden variables if one presupposes that hidden variables are responsible for them to begin with. In precisely the same way, only if I presuppose the existence of the—equally hidden—Flying Spaghetti Monster, who moves the celestial bodies around their orbits using His invisible noodly appendages, can the movements of the celestial bodies be construed as evidence of the existence of the Flying Spaghetti Monster.

In one of her papers [2], Hossenfelder speculates about a possible type of experiment that could one day substantiate superdeterminism. The proposal is to make multiple series of measurements on a quantum system, each based on the same initial conditions. If the series are determined by the system’s initial conditions, as superdeterminism postulates, we should see time-correlations across the different series that deviate from quantum mechanical predictions. The obvious problem, however, is that to reproduce the system’s initial state one needs to reproduce the initial values of the postulated hidden variables as well. But Hossenfelder has no idea what the hidden variables are, so she can’t control for their initial states and the whole exercise is pointless. To her credit, she admits as much in her paper. She then proceeds to speculate about some scenarios under which we could, perhaps, still derive some kind of indication from the experiment, even without being able to control its conditions. But the idea is so loose, vague and imprecise as to be useless.

Indeed, Hossenfelder’s proposed experiment has a critical and fairly obvious flaw: it cannot falsify superdeterminism. Therefore, it’s not a valid experiment, for we’ve known at least since Popper that valid experiments motivated by a hypothesis are supposed to be able to falsify that hypothesis. More specifically, if Hossenfelder’s experiment shows little time-correlation between the distinct series of measurements, she can always (a) say that the series were not carried out in sufficiently rapid succession, so the initial state drifted; or (b) say that there aren’t enough samples in each measurement series to find the correlations. The problem is that (a) and (b) pull in opposite directions: longer series imply that subsequent series start later, while series in rapid succession must contain fewer samples each. So the experiment is, by construction, incapable of falsifying hidden variables.

In conclusion, no, hidden variables have no empirical substantiation, neither in practice nor in principle; neither directly nor indirectly.

You see, I would like to say that hidden variables are just imaginary theoretical entities meant to rescue physicalist assumptions from the relentless clutches of experimental results. But even that would be saying too much; for proper imaginary entities entailed by proper scientific theories are explicitly and coherently defined. For instance, we knew what the Higgs boson should look like before we succeeded in measuring its footprints; we knew what to look for, and thus we found it. But hidden variables aren’t defined in terms of what they are supposed to be; instead, they are defined merely in terms of what they need to do in order for physical properties to have standalone existence. If I were tasked with looking for hidden variables—just as I once was tasked with looking for the Higgs boson—I wouldn’t know even how to begin, because we are not told by Hossenfelder what they are supposed to be. She is just furiously waving her hands and saying, “there has to be something (I have no clue what) that somehow (I have no clue how) does what I need it to do so I can continue to believe in a physicalist metaphysics.”

This is akin to the early modern notion of ‘effluvium,’ an imaginary elastic material that supposedly connected—invisibly—chaff to amber rods. Effluvium was meant to account for what we today understand to be electrostatic attraction, a field phenomenon. Pre-modern scholars observed that chaff somehow clung to amber rods when the latter were rubbed. Therefore—since they had no notion of fields—they figured that there had to be some material connecting chaff to rod through direct contact, right? After all, everything that happens in nature happens through direct material contact, right? Never mind that such material was invisible (hidden!), couldn’t be felt with the fingers, couldn’t be cut or measured directly, and that no one had the faintest idea what it was supposed to be, beyond defining it in terms of what it allegedly did to chaff; it just had to be there.

Hidden variables are Hossenfelder’s effluvium: there must be some mysterious invisible something that somehow does what needs to be done for us to think of physical entities as having standalone existence, right? Because measurable physical entities are all that exists and, as such, must have standalone existence… right?

On a more technical note, Hossenfelder bases her entire discussion on Bell’s inequalities but conspicuously fails to mention Leggett’s inequalities [4], a 21st-century extension of Bell’s work, more relevant to the points in contention, which separates the hypotheses of physical realism and locality so they can be tested independently. Neither does she address the experimental work done to verify Leggett’s inequalities, which later refuted physical realism rather specifically [5, 6]. Even more recent experiments have also demonstrated that physical quantities aren’t absolute, but contextual (i.e. relative, or ‘relational’) [7, 8], thereby contradicting superdeterminism. By now, a broad class of hidden variables has been refuted by experiments [9], which Hossenfelder doesn’t comment on at all.
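For readers who want the quantitative core of Bell’s result, here is its standard CHSH form, a textbook statement added for illustration and not specific to Hossenfelder’s discussion. It bounds the correlations $E(a,b)$ that any local hidden-variable theory can produce between two measurement settings $a, a'$ on one particle and $b, b'$ on its entangled partner:

```latex
% CHSH form of Bell's inequality: for any local hidden-variable model,
% the combination S of measured correlations is bounded in magnitude by 2.
S = E(a, b) - E(a, b') + E(a', b) + E(a', b'), \qquad |S| \le 2.
% Quantum mechanics predicts, for suitably chosen settings on an entangled
% pair, |S| = 2\sqrt{2} \approx 2.83 (the Tsirelson bound), and experiments
% consistently observe this violation of the bound of 2.
```

The experimentally observed violation of the bound $|S| \le 2$ is what rules out local hidden variables, leaving superdeterminism as one of the few remaining escape routes; Leggett’s inequalities then go further by constraining a class of non-local hidden-variable models as well.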

Instead, she bases her case on the notion that the opponents of hidden variables dislike the latter simply because the hypothesis supposedly contradicts free will. While I am sure that there are physicists emotionally committed to free will, refuting their emotional commitments does not validate superdeterminism. Suggesting that it does is a straw man. I, for one, am on record stating that free will is a red herring [10] and don’t base my case against superdeterminism on it at all. I don’t need to.

We have to guard against turning the particular metaphysical assumptions of our time into an unfalsifiable system; one that accommodates anomalies through arbitrary, fantastical appeals to unknowns and vague, promissory hand waving. When the anomalies that contradicted Ptolemaic and Copernican astronomy—according to which the celestial bodies move in perfectly circular orbits—began to be observed, adherents came up with fantastical ‘epicycles’—circles moving on other circles—to accommodate the anomalies. The tortuous cumbersomeness of the resulting models should have been enough to force them to take a step back and contemplate their dilemma in a more intellectually honest manner. But subjective attachment to a particular set of metaphysical assumptions didn’t allow them to do so.

Today, as anomalies accumulate in various branches of science against the metaphysical assumptions of physicalism, we would do well to avoid a humiliating repetition of the epicycles affair. Instead, what we are now witnessing are hypotheses being put forward, with a straight face, that fly in the face of any honest notion of reason and plausibility. The epicycles look benign and reasonable in comparison to the willingness of some 21st-century theoreticians to believe in fantasy. I shall comment more broadly on this peculiar phenomenon—a harbinger of paradigm changes—in my next essay for this magazine.