
In defense of Integrated Information Theory (IIT)

Reading | Neuroscience

In what may come as a surprise to many, our Executive Director highlights the careful, scientifically laudable metaphysical agnosticism of IIT—the leading neuroscientific theory of the structure of consciousness—as well as its superiority to alternative theories and synergy with Analytic Idealism.

In recent days, the world of neuroscience has been shaken by an open letter—signed by several neuroscientists and philosophers of mind—claiming that the Integrated Information Theory (IIT) of consciousness is pseudo-science. The shock effect of this letter is due to the fact that IIT has been considered the leading theory of consciousness in neuroscience for many years now, being supported by big names such as Christof Koch and Giulio Tononi. The ensuing brouhaha has been covered by my friend John Horgan here.

 

Rush to judgment?

I, too, criticized IIT as a physicalist account of consciousness in a couple of paragraphs of my book, Why Materialism Is Baloney, written well over a decade ago. My point was that no amount of information integration could explain the magical step of a fundamentally unconscious system suddenly becoming conscious. My thoughts on this specific claim did not change: IIT cannot account for consciousness under physicalist premises; it does not explain how an otherwise unconscious substrate can light up with consciousness just because re-entrant loops of information integration form in neuronal activity.

Since 2017, however, I have been discreetly looking more closely into IIT. I have realized that, while there are enough physicalists out there claiming that IIT may offer an emergentist account of consciousness, the theory itself can be very straightforwardly interpreted as being metaphysically neutral, agnostic. As a matter of fact, the language used by the neuroscientists developing the theory is clearly and deliberately chosen so as to precisely avoid metaphysical commitments. Giulio Tononi, for instance, avoids equating experience with the neuronal substrate. Instead, he speaks of an “explanatory identity” between experience and the information structure IIT can derive from brain activity.

Note the carefulness and rigor here: the identity in question is explanatory, not ontic or metaphysical. It means that the information structure derived by IIT from patterns of brain activity can inform us about the qualities and dynamics of experience; it does not commit IIT to any position regarding the nature or origin of experience. When it comes to free will, Tononi also talks about “operational determination”: it is the information structure reflected by brain activity that determines our conscious choices, not necessarily brain activity itself. These word choices may seem obscurantist in their nuance, but they embody scientific clarity, honesty and rigor: good science is science done without metaphysical commitments, no matter how popular those commitments may be. IIT is good science, unshackled by unexamined physicalist prejudices.

Therefore, while I still stand by the letter of my original criticism of attempts to use IIT as an emergentist, physicalist account of consciousness, I realize now that I was unfair in spirit. And for that I offer my sincere and heartfelt apology to Giulio, Christof, and all the folks laboring to develop IIT as a scientific theory—as opposed to a metaphysical claim. Again, in my view, IIT is good science; even the best science, as far as attempts to study consciousness via objective methods are concerned.

 

The power of IIT

It is extraordinary—even appalling—how neuroscience regularly ignores experience in its attempts to explain, well, experience. For instance, a team at Imperial College London has been attempting to account for the psychedelic experience with a minuscule increase in brain noise (i.e., desynchronized patterns of brain activity) observed during the psychedelic state. Such a hypothesis would be a little less absurd if the psychedelic experience felt like watching faint TV static. Alas, anyone who has ever had a psychedelic experience knows that this isn’t quite the case.

Science is an objective method of knowledge pursuit, and this is its key strength. Therefore, scientists should by and large avoid subjective approaches. But when it comes to consciousness, the very thing we are trying to account for is subjectivity itself. Therefore, while attempting to account for consciousness we cannot ignore, well, what consciousness is. This is so obvious it’s embarrassing to have to argue for it. In attempting to account for anything we cannot ignore the properties of that which we are trying to account for; experience is no exception.

And here IIT shines, for its key tenet is to stay true to what experience is. Its five axioms are derived from direct introspective access to experience and its properties. The theory is then built upon these axioms, as any theory of consciousness worthy of being taken seriously must do. A theory of consciousness cannot ignore consciousness in its attempt to account for it.

Beyond this, IIT provides a rigorous conceptual and mathematical framework, based on information theory, which allows us to model experience and derive experiential implications from observed patterns of brain activity. It can do so because there is an obvious correlation between felt experience and brain function; no serious scientist or philosopher would deny that. And such a correlation is all that IIT needs: it creates a layer of abstraction—the ‘phi structure,’ or sometimes the ‘causal structure’—derived from brain activity, which can then be translated into experiential implications. IIT creates a two-way map between the characteristics of felt experience and patterns of brain activity, in a metaphysically neutral manner—the hallmark of good science.
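To give a purely illustrative flavor of what “a whole carrying information beyond its parts” means, the toy sketch below computes total correlation over a joint distribution of two bits. This is emphatically not IIT’s actual phi computation—which involves causal perturbations and a search over system partitions—just a minimal information-theoretic analogy; all names and distributions here are invented for the example.

```python
# Toy analogy only (NOT IIT's phi): measure how much information a
# whole system carries beyond its parts taken separately, as total
# correlation I = H(X) + H(Y) - H(X, Y) over a joint distribution.

import math
from collections import defaultdict

def entropy(dist):
    """Shannon entropy (in bits) of a {outcome: probability} dict."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def total_correlation(joint):
    """joint: dict mapping (x, y) -> probability."""
    px, py = defaultdict(float), defaultdict(float)
    for (x, y), p in joint.items():
        px[x] += p
        py[y] += p
    return entropy(px) + entropy(py) - entropy(joint)

# Two perfectly correlated bits: each part alone looks like a fair
# coin, yet the whole is tighter than the parts suggest -> 1 bit.
correlated = {(0, 0): 0.5, (1, 1): 0.5}
# Two independent bits: the whole is exactly the sum of its parts -> 0.
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

print(total_correlation(correlated))   # 1.0
print(total_correlation(independent))  # 0.0
```

The point of the analogy: integration is a property of the joint structure, invisible to any inventory of the parts in isolation.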

Part of the merry crew in San Servolo, Venice, during the Advanced School of Neuroscience, IIT meeting in September 2023.

IIT and Analytic Idealism

A couple of weeks ago, I spent a full week—literally all mornings, afternoons and evenings—on the small island of San Servolo, in the Venice lagoon, with Giulio, Christof, and several other IIT researchers and students, discussing IIT. I was even given the opportunity to present my views on Analytic Idealism and its relation to IIT. I learned a lot during that week, at both technical and personal levels, and I am very grateful to Christof and Giulio for that opportunity.

San Servolo congealed in my mind something that had been growing there since 2017: that IIT complements Analytic Idealism extremely well. Through the process of “exclusion,” mathematically formalized in IIT, we could develop the first rigorous conceptual account of dissociation in psychiatry. Just such an account is the missing element of Analytic Idealism, for the latter postulates dissociation as the solution to the so-called ‘decomposition problem’ of idealism.

Moreover, the abstracted “phi” or “causal structure” IIT constructs is, under Analytic Idealism, our first model of the structure and dynamics of experiential states as they are in themselves, as opposed to how they appear to external observation. The latter is what we call ‘matter.’ While matter is the “phenomena” of Kantian terminology, experiential states are the Kantian “noumena.” IIT provides the first scientific model of the noumena, as opposed to the phenomena. The significance of this cannot possibly be overstated.

Notice that physics—the basis of all sciences—is purely a science of phenomena: a science of the contents of perception. In Kantian terminology, physics does not—and fundamentally cannot—describe nature as it is in itself, but just as it appears to observation. Physics is not—and cannot be—a science of the noumena. But IIT can. The abstracted phi structures of information integration are, under Analytic Idealism, the structure of nature as it is in itself—i.e., the structure of irreducible experiential states.

I thus regard IIT not as an enemy of Idealism, but precisely as one of its most important scientific complements and most promising partner for future theoretical and practical developments. This doesn’t mean that IIT is an idealist theory; it isn’t, and shouldn’t be, for science must remain metaphysically neutral. Philosophers, on the other hand, must precisely not be metaphysically neutral, for doing metaphysics is our job. And as it happens, metaphysics must be informed by science, since any metaphysics that contradicts established science is just wrong. It is in this sense that I see great promise in work that brings together IIT and Idealism: IIT can inform Idealism in a mathematically precise manner that Idealism has so far lacked (except for the work of Donald Hoffman and his team, which offers a different take on the formalisms).

 

The open letter brouhaha

Which brings me back to the beginning of this essay: the open letter claiming that IIT is pseudo-science, due to its alleged failure to make testable predictions. The people developing IIT are in a much better position to answer this charge—if they choose to do so, which I’m not sure is the best course of action, for reasons I will soon discuss—than I am. But I will very briefly share my view in this regard.

IIT did make predictions; and they were successfully tested. For instance, a key prediction of IIT is that, because of their neuronal anatomy, more interconnected areas of the brain should be more tightly correlated with conscious experience than the pre-frontal cortex. This is counterintuitive, since most neuroscientists thought that precisely the pre-frontal cortex was the seat of consciousness. Tests revealed IIT’s prediction to be correct.

There are other studies that I could allude to, but my point here is just to not concede the charge that IIT hasn’t made testable predictions; it has. However, even if it hadn’t, I think the pseudo-science charge made in the open letter betrays alarming ignorance of what IIT is and how it is progressing.

Indeed, IIT is trying to construct a map that allows us to infer the qualities of experience from the information structure of neuronal activity. We can create this map because we have access to both sides of the equation: we can measure brain activity and access experience through direct introspection. This double access allows us to calibrate or tune our models over time. We measure brain activity, feed those measurements into a mathematical model of the associated qualities of experience, and then check the output of the model against introspection. If the check fails, we refine the model and try again.

This calibration process requires foreknowledge of both sides of the equation: the measured patterns of neuronal activity and the corresponding introspective reports. Without this foreknowledge, no tuning can be performed and the model cannot be improved. No predictions can be made at this stage, for predictions entail precisely guessing an unknown side of the equation based on the other, known side. This, of course, can only be done once the calibration is complete and the model can be extrapolated to yet-unseen data.
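The calibrate-then-predict workflow described above can be sketched schematically. The code below is a deliberately simplistic illustration—a toy linear model with invented numbers, bearing no relation to actual IIT tooling or real neural data—of the two phases: tuning a model while both sides of the equation are known, then extrapolating to an unseen measurement.

```python
# Illustrative sketch only (not actual IIT tooling): the two-phase
# calibrate-then-predict loop. A toy model mapping a measured
# "activity" value to a reported "experience" value is refined
# against known reports, then extrapolated to unseen measurements.

def calibrate(measurements, reports, steps=2000, lr=0.05):
    """Tune a toy linear model y = a*x + b against known report data."""
    a, b = 0.0, 0.0
    n = len(measurements)
    for _ in range(steps):
        # Check the model's output against the introspective reports...
        grad_a = sum((a * x + b - y) * x
                     for x, y in zip(measurements, reports)) / n
        grad_b = sum((a * x + b - y)
                     for x, y in zip(measurements, reports)) / n
        # ...and refine the model where the check fails.
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b

# Calibration phase: both sides of the equation are known in advance.
activity = [0.0, 1.0, 2.0, 3.0]       # hypothetical measurements
reports  = [1.0, 3.0, 5.0, 7.0]       # hypothetical introspective ratings
a, b = calibrate(activity, reports)

# Prediction phase: infer the unknown side from a new measurement.
predicted = a * 4.0 + b
print(round(predicted, 2))  # 9.0 for this toy data
```

Note that the prediction step is only meaningful after calibration has converged—which is precisely the point made above about why mature predictions come late in a theory’s development.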

Now, refining a model of experience—the richest, most nuanced, complex, and literally mind-boggling element of nature—in this manner is a titanic enterprise; there are countless variables and parameters at play, many of which are extremely hard to pin down through introspection. On the other side of the equation, measuring patterns of brain activity with the level of detail—resolution, granularity—required for the calibration is also exquisitely challenging. Let us keep in mind that functional neuroimaging is very recent: it’s a 21st-century thing, still in its infancy.

Because of this, to expect IIT to make a wide variety of concrete and testable predictions—i.e., to infer one side of the equation from the other with consistency and accuracy—at this point in the game is just naïve and unreasonable. That IIT has made testable predictions speaks to the resourcefulness of the researchers in question and should be considered a bonus. The timeline for having a consistent, reliable, extrapolatable model of the qualities of experience should be measured in decades, even in the best-case scenario of a theory that set out in the right direction.

All this aside, though, it strikes me as odd that academics should choose to criticize their peers’ work through an open letter. Academics typically use open letters to talk to the government or the public, not to other academics. For the latter, the tool of communication is peer-reviewed academic articles, not open letters. The use of an open letter is political, not scientific, and raises uncomfortable questions in my mind regarding the motivations behind it.

Indeed, it is intriguing that, precisely as IIT makes an effort to be more explicit in its metaphysical agnosticism—through careful choices of words in the claims it makes—lifetime activists of metaphysical physicalism, such as Daniel Dennett and Patricia Churchland, should choose to turn against it in a highly politicized move alien to academic custom.

Be that as it may, I considered it appropriate to now break my silence on IIT—which I have carefully maintained since 2017—and publicly voice my support for it. IIT is good science precisely because it doesn’t unthinkingly commit itself to physicalism. That this may have ruffled the feathers of physicalist activists is predictable, but their reaction is censurable, in my view.

 

Conclusions

IIT, far from contradicting Idealism, can in fact dovetail with it extremely well, in a synergistic combination that can finally allow us to: (1) develop a rigorous conceptual account of dissociation in psychiatry, possibly leading to new and desperately needed treatment protocols; (2) tackle the missing piece of Analytic Idealism by addressing the decomposition problem in a more explicit and rigorous manner, through the formalized notion of “exclusion” in IIT; (3) develop more effective treatment or palliative protocols for locked-in syndrome; (4) develop a no-report paradigm for the study of the neural correlates of consciousness, allowing us to finally overcome the severe limits imposed by metacognitive introspection; (5) possibly develop the very first theory of the noumena—as opposed to the phenomena—in the history of humanity; a theory of everything as it is in itself.

These are lofty prospects, with an unfathomable payoff. And in my view, no other theory of the structure and dynamics of experience comes anywhere near IIT in its rigor, clarity, formalisms, theoretical infrastructure, and insight. I see the open letter against IIT as a political move whose dubious motivations should have no place in academia. Finally, I very much look forward to cooperating with the IIT teams around the world in further exploring its synergies with Idealism.

Is the human brain a model of the universe? Read and commented on by Nadia Hassan

Listening | Cosmology | 2022-05-12

The article Nadia Hassan reads today (written original here) presents documented similarities between the human brain and the cosmos, and poses the question: can either be modeled after the other? This very exciting possibility might reveal as-yet-undiscovered truths about both realms, and bring us closer to the Holy Grail of modern physics, the Theory of Everything. Check out other episodes of the Essentia Readings podcast.

Check out Melvin Felton’s recent book, Universe Within

How can you be me? The answer is time

Reading | Philosophy

That you believe you were your five-year-old self is grounds to believe that you can be another person, right now, while still being you, argues our executive director in this stimulating theoretical essay.

How can one universal subject be you, and me, and everybody else, at once? This is perhaps the most difficult aspect of analytic idealism to wrap one’s head around, for it implies that you are me, at the same time that you are yourself. How can this possibly be? After all, you can see the world through your eyes right now, but not through mine.

Although reference to dissociative disorders, empirically validated as they are, forces us to accept that this can somehow indeed be the case—for it is the case in severely dissociated human minds—the question of how to visualize the dissociation remains difficult. How can you visualize a process by virtue of which you are me while being yourself concurrently? How are we to get an intuitive handle on this?

Notice that what makes it so difficult is the simultaneity of being implied in the hypothesis: you can easily visualize yourself being your five-year-old self—an entity different from your present self in just about every way—because being your five-year-old self is not concurrent with being your present self: one is in the past, the other is in the present. Visualizing oneself taking two different points of view into the world does not offer any challenge to our intuition, provided that these points of view aren’t taken concurrently.

Here is an example. When I was a child, I used to observe a very curious behavior of my father’s: he would play chess against himself, a common and effective training technique in a time before computerized chess engines. Doing so helps a chess player learn how to contemplate the position on the board from the opponent’s point of view, in order to anticipate the opponent’s moves. My father would perform this exercise quite literally: he would play a move with the white pieces, turn the entire board around by 180 degrees, and play a move with the black pieces. Then turn the board back to white again, and so on.

My father—a single subject—was taking two different points of view into the world, experiencing the battle drama of the game from each of the two opposing perspectives; one subject, two points of view. We have no difficulty understanding this because the two perspectives weren’t simultaneous, but instead occupied distinct points in time.

Yet, we’ve known for over a century now that time and space are aspects of one and the same thing: the fabric of spacetime. Both are dimensions of extension in nature, which allow for different things and events to be distinct from one another by virtue of occupying different points in that extended fabric. For if two ostensibly distinct things occupy the same point in both space and time, then they can’t actually be distinct. But a difference in location in either space or time suffices to create distinction and, thereby, diversity. By occupying the same point in space, but at different times, two objects or events can be distinguished from each other; but so can they be distinguished if they exist simultaneously at different points in space.

The way to gain intuition about how one subject can seem to be many is to understand that differences in spatial location are essentially the same thing as differences in temporal location. This way, for the same reason that we have no difficulty in intuitively understanding how my father—a single subject—could seem to be two distinct chess players, we should have no intuitive difficulty in understanding how one universal subject can be you and me: just as my father could do so by occupying different perspectives at different points in time—that is, by alternating between black and white perspectives—the universal subject can do so by occupying different perspectives at different points in space; for, again, space is essentially the same thing as time.

Yet, the demand for this transposition from time to space still seems to be too abstract, not concrete or intuitively satisfying enough; at least to me. We need to make our metaphor a little more sophisticated.

A few years ago, I had to undergo a simple, short, but very painful medical procedure. So the doctors decided to give me a fairly small dose of a general anesthetic, which would knock me out for about 15 minutes or so. I figured that this would be a fantastic opportunity for an experiment: I would try to focus my metacognition and fight the effects of the drug for as long as I could, so as to observe the subjective effects of the anesthetic on myself. I had undergone general anesthesia before, in my childhood, but had no recollection of it, so this was a fantastic chance to study my own consciousness with the maturity and deliberateness of an adult.

And so there I was, lying on an operating table, rather excited about my little experiment. The drug went in via the IV and I focused my observation on the contents of my own consciousness, like a laser. Yet, as the seconds ticked by, I couldn’t notice anything. “Strange,” I thought, “nothing seems to be happening.” After several seconds I decided to ask the doctors if it was normal for the drug to take so long to start causing an effect. Their answer: “We’re basically done, just hang in there for a few more moments so we can wrap it up.”

“WHAT?” I thought. “They are basically done? How can that be? It hasn’t been a minute yet!” In fact, more than 15 minutes had already elapsed; they had already performed the whole procedure. I experienced absolutely no gap or interruption in my stream of consciousness; none whatsoever. Yet, obviously there had been one. How could that be? What had happened to my consciousness during the procedure?

The drug altered my perception of time in a very specific and surprising way. If we visualize subjective time as a string from which particular experiences—or, rather, the memories thereof—hang in sequence, the drug had not only distorted or eliminated access to some of those memories, but also cut off a segment of the string and tied the two resulting ends together, so as to produce the impression that the string was still continuous and uninterrupted. I shall call this peculiar dissociative phenomenon ‘cognitive cut and tie.’ The memories of certain experiences in a cognitively associated line are removed from the line, and the two resulting ends are seamlessly re-associated, so the subject notices nothing missing.

Now let us bring this to bear on my father’s chess game. Imagine that we could manipulate my father’s perception of time in the following way: we would cut every segment of time when my father was playing white and tie—that is, cognitively associate—these segments together in a string, in the proper order; we would also do the same for the black segments. As a result, my father would have a coherent, continuous memory of having played a game of chess only as white, and another memory of having played another—albeit bizarrely identical—game of chess only as black. In both cases, his opponent would appear to him as somebody else. If you were to tell my father that it was him, himself, on the other side of the board all along, he would think you mad. For how could the other player be him, at the same time that he was himself, playing against his opponent?

The answer to how one universal subject can be many—to how you can be me, as you read these words—resides in a more sophisticated understanding of the nature of time and space, including the realization that, cognitively speaking, what applies to one ultimately applies to the other. As such, if you believe that you were your five-year-old self, then there is an important sense in which, by the same token, you must believe that you can be me. There is only the universal subject, and it is you. When you talk to another person, that other person is just you in a ‘parallel timeline’—which we call a different point in space—talking back to you across timelines. The problem is simply that ‘both of you’ have forgotten that each is the other, due to dissociative ‘cut and tie.’

A different subjective position in space is just a different point in a multidimensional form of time, and vice-versa. Indeed, such interchangeability between space and time is a field of rich speculation in physics. Physicist Lee Smolin, for instance, has proposed that space can be reduced to time. Physicist Julian Barbour, in turn, has proposed the opposite: that there is no time, just space. There may be a coherent theoretical sense in which both are right.

The most promising theoretical investigation in this area is perhaps that of Prof. Bernard Carr, from Queen Mary University of London, a member of Essentia Foundation’s Academic Advisory Board. If his project is given a chance to be pursued to its final conclusions, it is possible that physics will offer us a conceptually coherent, mathematically formalized way to visualize how one consciousness can seem to be many.

Looking upon personal identity through the lens suggested above may convince you that, when an old wise man turns to a brash young lad and says, “I am you tomorrow,” such a statement may have more layers of meaning than at first meets the eye.

The miraculous epicycles of materialism

Reading | Ontology

Faced with a growing mountain of refutations in the form of empirical evidence and clear reasoning, materialism tries to survive through a bizarre display of absurd imaginary entities, hypotheses and hollow rhetoric, writes our executive director in this week’s mid-week nugget. This is a long, in-depth, but worthwhile read.

From Ptolemaic astronomy in antiquity all the way to Copernican astronomy during the Renaissance, the celestial bodies were thought to move in perfectly circular orbits. The motivation for this widespread assumption was a particular metaphysical commitment: the heavens were perfect, and only circles are perfect shapes; ergo, the celestial bodies had to move in circular orbits.

At the time, scholars didn’t think of this notion as an arbitrary assumption, but as an obvious reality instead; one that everybody had known to be true for almost two millennia. It was preposterous to think that all those people had been wrong all that time. Culture—not reason, not evidence—had made circular orbits not only extremely plausible, but even self-evident.

As empirical observations began to show that orbits aren’t circles, scholars started postulating so-called ‘epicycles’: the hypothesis that the celestial bodies move along circles, which in turn move along other circles, and these along yet other circles, and so forth. Despite the precariousness of the resulting models, the entire house of cards was still built on circles alone, so a cherished metaphysical commitment could be preserved.

Eventually, of course, the continuous need to add more and more epicycles reached a tipping point. The sheer accumulation of ‘anomalies’ eventually forces one to abandon one’s metaphysical commitments. Thomas Kuhn has famously called this tipping point a ‘paradigm shift’: when reason and evidence force us to look upon reality with different eyes [1].

But make no mistake: before scholars accept a paradigm shift, they will always conjure up epicycle on top of epicycle in naked defiance of reason and evidence, so as to preserve the metaphysical commitment they identify with. The motivation for this is that, at any single step in the process, parting with that commitment feels less plausible than adding just one more epicycle. And so, more and more epicycles are added, driven by the sense of plausibility that culture manufactures. The history of philosophy and science shows that this has happened repeatedly.

The present is no different. Our metaphysical commitment today is that physical stuff—abstractly defined as being purely quantitative and independent of mind—has standalone existence and somehow generates mind. Technically, this is called ‘mainstream physicalism’ but, colloquially, it’s often referred to as ‘materialism.’ As evidence accumulates in analytic philosophy, foundations of physics and the neuroscience of consciousness against materialism, scholars are busy fantasizing about the epicycles needed to safeguard it from the relentless clutches of reason and evidence.

Objectively speaking, these epicycles of materialism are now reaching the level of patent absurdity. But because they are couched in the sense of plausibility manufactured by our culture, they are still put forward not only with a straight face, but also with the triumphant pride that accompanies a great scientific advancement. My goal for the remainder of this essay is to present to you, as neutrally as I can bring myself to do it, what the latest epicycle proposals are.

 

‘Anomalies’ in foundations of physics

For over forty years now, we’ve known from repeatedly refined and confirmed laboratory experiments that the physical properties of the basic building blocks of the material world—think of the mass, charge, spin, speed and direction of movement of elementary particles—do not exist prior to being measured [2-19].

In general lines, these experiments go as follows: two entangled particles—say, A and B—are shot in opposite directions. After they’ve traveled for a little while, a first scientist—say, Alice—measures particle A. Simultaneously but far away, scientist Bob measures particle B. As it turns out, the physical property Alice chooses to measure about particle A determines what Bob sees when he measures particle B.

What this shows is that measuring particles A and B doesn’t simply reveal what their physical properties already were immediately prior to the measurement, but in some sense creates those physical properties. For as long as no measurement is performed, we cannot say that particles A and B even exist, for they are defined in terms of their observable properties.
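The strength of these entanglement correlations can be put in numbers. The short sketch below uses the textbook quantum-mechanical prediction for a spin-entangled (singlet) pair, E(a, b) = −cos(a − b) for detector angles a and b, and evaluates the standard CHSH combination of four measurement settings; the result exceeds 2, the ceiling that no local hidden-variable account can surpass. The specific angles are the usual optimal choice, included here only as an illustration of the published experimental logic.

```python
# Illustrative numbers only: the quantum prediction for the correlation
# between spin measurements on an entangled (singlet) pair is
# E(a, b) = -cos(a - b). The CHSH combination of four settings exceeds
# 2, the bound respected by every local hidden-variable model.

import math

def E(a, b):
    """Singlet-state correlation at detector angles a and b (radians)."""
    return -math.cos(a - b)

# Standard optimal measurement settings.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(round(S, 3))  # 2.828, i.e. 2*sqrt(2) > 2
```

The measured violation of the classical bound of 2 is what licenses the claim that the particles’ properties are not simply sitting there, pre-defined, waiting to be read off.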

As strange as this may sound at first, there is a simple and intuitive explanation for it: physical properties are simply the appearance or representation, upon measurement, of a deeper and more fundamental layer of reality. Analogously, the dials on an airplane’s dashboard are a representation of the actual world outside the airplane, insofar as they convey information about that world. And the dials show nothing if the airplane’s sensors aren’t measuring the real world outside, so the appearance or representation can’t be said to exist unless and until a measurement is performed. Only then do the dials produce an appearance.

What we call the physical world is thus analogous to the airplane’s dashboard: it shows nothing unless and until we measure the real world underlying the physical. Physicality or materiality is the result of a measurement displayed on the internal dashboard we call perception. We do not have a transparent windscreen to see the world as it is in itself; all we have is the dashboard and the sensors that feed it with data. In other words, we only have perception and our sense organs. Therefore, we confuse what is perceived with the actual world and proclaim that reality is physical, or material. Experiments have now shown that this is as nonsensical as a pilot, flying by instruments alone, insisting that his dashboard is the world outside, as opposed to an appearance or representation thereof.

The point of scientific experimentation, of course, is to unravel this understandable confusion and make clear to us that the real world out there is not what we call ‘material.’ But since this contradicts materialism, many scientists and philosophers feel that it is more plausible to add an epicycle to their theories than to accept what experiments tell us.

 

The ‘fantastic hidden stuff’ epicycle

Take popular YouTuber and physicist Sabine Hossenfelder, for instance. She proposes that there are mysterious ‘hidden variables’ that account, under a materialist metaphysical framework, for the experimental results discussed above. These hidden variables are not explicitly defined—beyond imaginary toy models with little to no bearing on reality [20]—even in principle. Like circular orbits and their epicycles, they are purely imaginary entities for which there is precisely zero empirical evidence.

Even the underlying motivation for postulating hidden variables is conspicuously questionable. Indeed, if we were to exclude the non-hidden properties of nature—mass, charge, spin, etc.—from our picture of reality, all kinds of obvious things would go immediately unaccounted for: without mass we couldn’t account for inertia; without charge we couldn’t account for electricity; without spin we couldn’t account for magnetism; etc. As such, there’s good reason to infer that these properties exist in some sense. However, and very tellingly, if we deem ‘hidden variables’ to be just what they appear to be—that is, fantasies—nothing in our world goes unaccounted for; nothing at all. We don’t need hidden variables for anything other than to safeguard a metaphysical commitment; one so internalized that many have grown to conflate it with fact.

To make her epicycles at all tenable, Hossenfelder asks us to part with a notion integral to our understanding of experimentation and reality itself. She considers this notion a mere “assumption” and refers to it in technical terms: “statistical independence” [21]. If you don’t know what this means, you might believe Hossenfelder’s claim that it’s merely some kind of arcane mathematical postulate we might as well do without. But what if you grasp what “statistical independence” actually means?

Suppose that you want to photograph the moon. You set the aperture and exposure of your camera so as to capture a clear image. But you don’t ever imagine that what the moon is—or does—up there in the sky will change in response to your particular camera settings, do you? The moon doesn’t care about the settings of your camera, or even about the fact that you are photographing it; it is what it is and does what it does irrespective of how it is being measured. There’s no causal chain, starting at your camera and somehow finding its way to the moon, which turns the moon into something it isn’t—say, green—or forces it to do something it otherwise wouldn’t—say, rotate the other way around—just because of how you set your camera. Is this a mere “assumption,” or a basic, empirically-established understanding of how reality, experimentation and measurement work? Do cameras have the power to change reality merely by photographing it?

What I’ve just described is what “statistical independence” means. It states that the thing measured (the moon) doesn’t change in response to the settings of the detector (the camera) used to measure it; how could it? Nonetheless, Hossenfelder calls this fundamental understanding a mere “assumption” and asks us to abandon it: according to her, what Alice and Bob see depends on their measurement choices because particles A and B, which have standalone existence, change in response to the detectors’ settings; just as if what the moon is or does depended on the aperture and exposure settings of your camera. Mind you, Hossenfelder provides no coherent account of how this magic is supposed to happen; she just knows that it does, because her metaphysical commitment implies that physical properties must have standalone existence.
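What is at stake in “statistical independence” can be made concrete with a toy simulation (the function names and the particular hidden-variable model below are mine, purely illustrative): in any local model where each outcome depends only on the local detector setting and a shared hidden variable—drawn independently of the settings, which is exactly the statistical-independence assumption—the CHSH statistic S is mathematically bounded by |S| ≤ 2, whereas quantum mechanics predicts, and experiments confirm, values up to 2√2 ≈ 2.83.

```python
import random

def chsh_S(outcome, trials=20000):
    """Estimate the CHSH statistic S for a local hidden-variable model.

    `outcome(setting, lam)` returns +1 or -1 from a detector setting and a
    hidden variable `lam` alone. `lam` is drawn independently of the
    settings: this independence is the statistical-independence assumption.
    """
    E = {}
    for a in (0, 1):            # Alice's two settings
        for b in (0, 1):        # Bob's two settings
            total = 0
            for _ in range(trials):
                lam = random.random()                  # shared hidden variable
                total += outcome(a, lam) * outcome(2 + b, lam)
            E[(a, b)] = total / trials                 # correlation E(a, b)
    return E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)]

def example_model(setting, lam):
    """An arbitrary deterministic local response function (illustrative only)."""
    return 1 if (lam * (setting + 1)) % 1.0 < 0.5 else -1

S = chsh_S(example_model)
print(abs(S))  # stays within the classical bound of 2, up to sampling noise
```

However one picks the local response function, |S| never exceeds 2; the experimentally observed violations of that bound are what force the choice between abandoning standalone physical properties and abandoning statistical independence, as superdeterminism does.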

Notice that, under analytic idealism, the moon we see is not the thing in itself, but a representation thereof on the dashboard we call perception. There is indeed something real out there, which itself cannot be magically changed by measurement or detector settings, but which, when measured, presents itself to us in the form of the appearance we call the moon. Under analytic idealism, measurement doesn’t change reality; it simply produces an appearance or representation thereof, which in turn is relative to the measurement context. The physical world is the representation produced by measurement, not the reality measured in the first place.

But under Hossenfelder’s ‘hidden variables’ model, that great spheroid in the night sky, with a certain mass, speed and direction of movement, is the thing in itself, not a mere appearance or representation. In stating that camera settings—in the context of this metaphor—change the thing observed, she is attributing to these settings the magical power to change reality itself, not mere representations thereof.

 

The ‘compensate hollow argument with assertive rhetoric’ epicycle

Independently of the above, a series of even more recent and compelling experiments refutes Hossenfelder’s epicycles in a different way: these experiments show that, just as predicted by quantum theory, physical properties—which define what physical entities are—aren’t absolute, but relative, or “relational,” or “contextual” [22, 23]. In other words, they do not have standalone existence but arise, instead, as a function of observation in a way that depends on the particular vantage point of the observer.

If we go back to our dashboard metaphor, this isn’t surprising at all: what the dials display is a function of what the airplane’s sensors measure, which in turn is relative to the particular position and orientation of the airplane in space and time. So two different pilots in two different airplanes may get different dashboard readings of the same sky, because of their particular position and orientation in it. This doesn’t mean that they don’t share the same world; of course they do. It only means that their dashboards aren’t the world, but merely representations or appearances thereof. Dashboard indications can be different, while the real world measured is the same.

But these experimental results violate Hossenfelder’s metaphysical commitment, just as the observed orbits of the celestial bodies violated the notion of perfectly circular motion. So how does she deal with them? She uses her particular brand of rhetorical assertiveness and casual dismissal of results that don’t agree with her views, so as to brand the experiments invalid or “debunked.” In a recent video, she dismisses the results with a single sentence: photons—used as observers in the experiments—are not measurement devices because they don’t cause decoherence, therefore the experiment means nothing. Add a big red ‘X’ on top of the respective papers and voilà; job done. With one simple statement and a silly visual aid, Hossenfelder wants you to believe that she has dismantled the careful and judicious work of many theoreticians and experimentalists over years of effort.

What is ironic is that, because she makes the statement in such an assertive manner, alongside purely rhetorical but effective visual aids, many non-specialist viewers are bound to buy into it despite its obvious hollowness. But I digress.

It is true enough that decoherence is often operationally associated with measurement. But we know how to probe and collect information about a quantum system without causing decoherence; that is, without disturbing the system’s superposition state. We call these “interference experiments,” an example of which is the famous double-slit experiment showing wave interference patterns corresponding to a superposition. Something of this nature—though a bit more involved—is precisely what the researchers in question have judiciously done. As such, it is simply false to maintain that, because photons don’t cause decoherence, no conclusions can be drawn from the data gathered in these experiments.

You see, epicycles are not only about adding fantastic stuff—such as ‘hidden variables’—but also about arbitrarily dismissing inconvenient stuff—such as interference experiments. They represent attempts to protect a metaphysical commitment based not only on hand-waving forms of argument—tortuous as those may be—but also on pure rhetorical strength.

The ‘turtles all the way down’ epicycle

But what if one is intellectually honest to a fault, and incapable of proposing fantastic imaginary entities that have no empirical grounding? How does an honest and brilliant mind, culturally conditioned to commitment towards materialism, find its way out of the dilemma posed by evidence and reason?

Carlo Rovelli is both a physicist and a person I sincerely respect and admire, one of the few truly open and honest top thinkers in the world today, I suspect. He acknowledged, almost 30 years ago, that physical entities cannot have standalone or absolute existence; instead, they are “relational,” or relative to observation. As such, they arise as a result of observation. Yet Rovelli is also a man of his time and cultural context, committed to the materialist notion that physical stuff is not reducible to—that is, explainable in terms of—anything else.

Rovelli’s way out of this dilemma is to say that reality is purely relational, or relative, which immediately raises the question: relative to what? Movement is relative, alright: two cars on a highway may or may not be moving relative to one another, even though they are certainly moving relative to the buildings along the highway. But for there to be any sense in the very concept of movement, there has to be something that moves in relation to something else; movement is not a thing unto itself, but a relational property that operates between things; and, of course, the things that move can’t themselves be movement.

But according to Rovelli, the whole of reality is made of relations. “Relations between what?” you might ask. Rovelli’s answer: relations between meta-relations, which are themselves relations between meta-meta-relations, and so on. It’s turtles all the way down. The world is made of relations but there is nothing that relates [24]. It’s like saying that the world is made of movement but there’s nothing that moves. Or, more accurately: the world is made of movement but the things that move are themselves movement. Huh?

No, really, this is Rovelli’s position, which I have confirmed directly with him. He isn’t bothered by the fact that he is clearly committing the fallacy of infinite regress. His epicycle is not just cumbersome and arbitrary; it’s illogical. Yet defying logic clearly feels more plausible to him than abandoning his metaphysical commitment. Such is the psychological power of metaphysics.

Epicycles in the neuroscience of consciousness

It was only a little over ten years ago that nearly every neuroscientist—and many ordinary people—thought that psychedelics caused the ‘trip’ by lighting the brain up like a Christmas tree. Then research started coming in showing precisely the opposite: psychedelics only reduce brain activity, in many different areas of the brain. They don’t increase activity anywhere [25-29].

Predictably, neuroscientists started looking for something physical that did increase in the brain following the administration of a psychedelic drug. After all, the immensely rich, structured, intense psychedelic experience must be caused by something in the physical brain; right?

Many materialist hypotheses were put forward and eventually abandoned: functional coupling, activity variability, etc. One emerged as the most promising candidate to save cherished materialist assumptions from the clutches of empirical results: the grandiosely named ‘entropic brain hypothesis’ [30].

Indeed, there is something about grandiose technical names when it comes to epicycles. What the researchers variously call ‘entropy,’ ‘complexity’ (wow!), or ‘randomness’ is… well, noise; brain noise; brain activity that follows no recognizable pattern; the brain equivalent of TV static. And, as it turns out, researchers could show that, on average, brain noise levels increase a little bit—the understatement of the century—under psychedelics [31].
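The ‘complexity’ measures used in this literature are typically of the Lempel-Ziv family, applied to binarized brain signals: they count how many distinct phrases a greedy parse of the signal needs, so a repetitive, structured signal parses into few phrases while patternless noise parses into many. Here is a minimal sketch of such a measure (the function name and example strings are mine, not taken from the cited studies):

```python
def lz76(s):
    """Lempel-Ziv (1976-style) complexity: the number of phrases in a greedy
    parsing of `s`, where each phrase keeps extending for as long as it has
    already occurred earlier in the string."""
    n, i, c = len(s), 0, 0
    while i < n:
        l = 1
        # grow the current phrase while it already appears in the prior history
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1   # close the phrase and start a new one
        i += l
    return c

print(lz76("01" * 20))                                   # periodic signal: very low
print(lz76("0110100110010111010010110011101000101101"))  # patternless: higher
```

Highly structured strings of the same length score far lower than irregular ones, which is why this family of measures is, in effect, a noise meter: it quantifies absence of pattern, not richness of content.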

This result has now been published in several respected neuroscience journals. The notion that there is any real effect here at all is based on an analysis called ‘statistical significance.’ It means that, in the experimental data accumulated by the researchers, a certain statistic—called the ‘p-value’—has crossed a certain threshold. And that threshold was chosen in an entirely arbitrary fashion by someone in the 1930s [32]. Indeed, the pitfalls and arbitrariness of p-value analyses have been much discussed in recent times [33-36]. There are even calls to abandon p-values and statistical significance altogether, so unreliable are they at showing that there is any real effect at all [37]. But in this essay, for the sake of argument, I shall overlook all this and consider the effect real.
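To make the point concrete, here is a minimal sketch (a plain two-sided z-test, with made-up numbers) of how a vanishingly small effect becomes ‘statistically significant’ merely by collecting enough samples: the p-value measures detectability of an effect, not its size or importance.

```python
import math

def two_sided_p(mean_diff, sd, n):
    """Two-sided p-value for a z-test of a sample mean against zero."""
    z = abs(mean_diff) / (sd / math.sqrt(n))
    return math.erfc(z / math.sqrt(2))   # P(|Z| >= z) under the null hypothesis

# A made-up, minuscule effect: 0.005 on a 0-100 scale, standard deviation 0.05.
effect, sd = 0.005, 0.05
for n in (10, 1000, 100000):
    print(n, two_sided_p(effect, sd, n))
# n = 10 -> p well above the 0.05 threshold ('not significant');
# n = 100000 -> p far below it ('overwhelmingly significant').
# Same minuscule effect either way: only the sample size changed.
```

Crossing the conventional 0.05 threshold thus tells us nothing about whether an effect is large enough to explain anything; it only tells us the sample was large enough to detect it.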

The question now is, is a small increase in brain noise levels a plausible account of the psychedelic experience under materialist premises? Let us first consider that, in some of the drug-placebo pairs studied, brain noise levels actually decreased [38]. Yet, those subjects, too, experienced a psychedelic ‘trip.’ If their brain noise levels didn’t increase, what accounts for their ‘trips’? The researchers do not offer an explanation.

Secondly, anyone who has ever experimented with psychedelics knows that real ‘trips’ are anything but random noise. Psychedelic experiences are extremely structured, beyond even ordinary perception. Psychonauts often speak of hyper-dimensional geometry, internally-consistent alternative realities, alien beings, intricate but coherent messages and insights, and so on [39]. If a small increase in brain noise levels—which by definition have no structure—generates these experiences, where does the structure of the experience come from, under materialist premises?

Thirdly, almost the entire neuroscience literature reports correlations between patterns of brain activation and experience, not brain noise and experience. Researchers know that a sleeping subject is dreaming of something as dull as staring at a statue [40] or clenching a hand [41] based on their patterns of brain activation. Artificial neural networks can even reconstruct one’s conscious inner imagery just by looking at patterns of brain activation [42]. Is it plausible that, for psychedelic trances alone, the salient correlate of experience is noise, while for all other states it is something else entirely? Can there be two entirely different biological bases for consciousness? You see, if one proposes a different materialist account of experience for each type of data, materialism becomes unfalsifiable.

Fourthly: I’ve been speaking of ‘small increases’ in brain noise levels. But I haven’t told you how small they actually are. On average, the observed increase in ‘complexity’ is 0.005 on a complexity scale of zero to one hundred [38]! This isn’t small; it’s minuscule. Even if one ignores the problems surrounding the notion of statistical significance and considers the effect real—as opposed to an irrelevant statistical fluke, which I bet is what it actually is—it is still minuscule. And it matters that it is minuscule, for the attempt here is to account for the formidable richness, intensity and structure of the psychedelic experience—one of the top five most significant experiences of one’s life, according to Johns Hopkins research [43]—with a minuscule increase in, of all things, brain noise.

If we didn’t live in a culture that has manufactured plausibility for the metaphysical commitments of materialism, this result would, in all likelihood, have been dismissed not only as an inconsequential statistical fluke, but also as lacking any explanatory force whatsoever. But as it is, the result is presented as a major neuroscientific breakthrough. Of all the epicycles listed in this essay, this one probably takes the crown of most daring hypothesis, for its sheer implausibility in view of the data presented to substantiate it.

 

Beyond the epicycles: a paradigm shift

This is the world and culture we live in today: one where fantastic imaginary stuff for which there’s zero empirical evidence, flat-out logical fallacies, arbitrary rhetorical dismissals of solid experimental results and extremely implausible hypotheses are used to rescue one’s metaphysical commitments from the clutches of reason and evidence. Yet, that cannot last too long, for we know from history that, eventually, even the most ingrained metaphysical commitments give way to clear thinking.

We may be witnessing this today already, in subtle but clear and growing ways. Indeed, it was only a few days ago that I was debating Prof. Patricia Churchland in an event organized by the Institute of Art and Ideas, and hosted by Robert Kuhn, known for his PBS series ‘Closer to Truth.’ Churchland and I had been chosen as the most recognizable and unambiguous defenders of our respective views: I as an analytic idealist and Churchland as an eliminative materialist. The latter means a materialist that not only maintains that the mind is a product of the brain, but even that (certain) experiences don’t actually exist.

Lo and behold, after some brief introductions and commentary, Churchland opened her participation in the debate by stressing that she… well, isn’t a materialist. She claimed that she doesn’t subscribe to any ‘ism’ but prefers, instead, to peruse the data; which she surely did for the rest of the debate, avoiding argument in favor of telling what can only be described as ‘storified’ accounts of research she considered interesting. It was all indeed very interesting but terribly anticlimactic.

This is not the first or even the second time this has happened. Some well-known materialists are suddenly turning into metaphysical agnostics. They are still willing to criticize non-materialist views and portray themselves as people who ‘simply follow the science,’ but not to unambiguously defend the materialist metaphysics that has characterized their entire public careers. Not only do they suddenly become agnostics, they try to rewrite history and portray themselves as having always been agnostics. This, in my view, is how people will slowly part with their metaphysical commitments while trying to save face. Materialism is now so ludicrously indefensible that no alternative is left to them. Expect to see much more of this in the years to come.

There is something else that I predict will happen. This prediction is based on private, personal conversations with intellectually-honest materialists, so I won’t name sources. But once we acknowledge that the physical world is indeed relational, that it indeed doesn’t have standalone existence, and that it is indeed just a superficial appearance of a deeper layer of reality, there will be a concerted attempt to very matter-of-factly extend the meaning of the word ‘physical’ so as to encompass that underlying layer as well; whatever it turns out to be. To put it bluntly, whatever reality turns out to be, we will simply call it ‘physical’ and thereby render materialism unfalsifiable by mere linguistic definition. It’s like acknowledging that there is a real world outside, behind and beyond the dashboard, but referring to it as just more dashboard. Silly and extraordinarily misleading as this surely is, it will be an important means for many to feel comfortable with embracing a broader view of what’s going on, and for some others to save face and public careers. Expect to watch this pernicious but sincere—perhaps even well-meaning—charade unfold in the next couple of decades.

Ultimately, of course, it is our understanding of what is really going on that matters; our understanding of who and what we are, what reality is, and how we relate to the rest of nature. It’s not about labels or personal vindication. The meaning of our lives is what is at stake here. As such, it is irrelevant whether some will get away with face-saving charades.

Our view of reality not only will change dramatically; it is already changing as you read these lines. Thomas Kuhn’s paradigm shift is unfolding before our very eyes. We will only recognize it unambiguously in hindsight, but the writing is on the wall. Nonsense can last long and cause much harm, but reason and evidence are like the proverbial wave that slowly dissolves the rock: inexorable, irresistible and patient beyond our ability to conceive.

 

References

  1. https://books.google.nl/books?id=3eP5Y_OOuzwC
  2. https://arxiv.org/abs/1712.01826
  3. https://www.nature.com/articles/nature05677
  4. https://iopscience.iop.org/article/10.1088/1367-2630/12/12/123007/meta
  5. https://arxiv.org/abs/1902.05080
  6. https://www.nature.com/articles/s41586-018-0085-3
  7. https://link.springer.com/article/10.1023%2FA%3A1012682413597
  8. https://books.google.nl/books/about/Mindful_Universe.html?id=pArDC3K3O2UC
  9. http://ispcjournal.org/journals/2017-19/Kastrup_19.pdf
  10. https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.49.1804
  11. https://arxiv.org/abs/quant-ph/9806043
  12. https://arxiv.org/abs/quant-ph/9810080
  13. https://www.nature.com/articles/nature10119
  14. https://www.nature.com/articles/nphys3343
  15. https://link.springer.com/article/10.1023/A:1026096313729
  16. https://arxiv.org/abs/quant-ph/9609002
  17. https://www.technologyreview.com/2019/03/12/136684/a-quantum-experiment-suggests-theres-no-such-thing-as-objective-reality/
  18. https://physicsworld.com/a/quantum-physics-says-goodbye-to-reality/
  19. https://www.newscientist.com/article/dn20600-quantum-magic-trick-shows-reality-is-what-you-make-it/
  20. https://arxiv.org/abs/2010.01327
  21. https://arxiv.org/pdf/1912.06462.pdf
  22. https://arxiv.org/abs/1902.05080
  23. https://www.nature.com/articles/s41567-020-0990-x
  24. https://books.google.nl/books?id=0iohEAAAQBAJ
  25. https://www.pnas.org/content/109/6/2138
  26. https://journals.plos.org/plosone/article?id=10.1371%2Fjournal.pone.0118143
  27. https://www.jneurosci.org/content/33/38/15171.short
  28. https://www.pnas.org/content/113/17/4853
  29. https://www.sciencedirect.com/science/article/abs/pii/S1053811917305888
  30. https://www.sciencedirect.com/science/article/abs/pii/S0028390818301175
  31. https://www.nature.com/articles/srep46421
  32. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4111019/
  33. https://www.tandfonline.com/doi/full/10.1080/00031305.2016.1154108
  34. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4448847/
  35. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5502092/
  36. https://www.nature.com/articles/s41562-017-0224-0
  37. https://www.tandfonline.com/doi/full/10.1080/00031305.2018.1527253
  38. https://www.nature.com/articles/srep46421
  39. https://books.google.nl/books?id=DNhTvwEACAAJ
  40. https://www.theguardian.com/science/neurophilosophy/2013/apr/05/brain-scans-decode-dream-content
  41. https://www.newscientist.com/article/dn20934-dreams-read-by-brain-scanner-for-the-first-time/
  42. https://techxplore.com/news/2019-10-neural-network-reconstructs-human-thoughts.html
  43. https://www.newswise.com/articles/study-probes-sacred-mushroom-chemical

The fantasy behind Sabine Hossenfelder’s superdeterminism


Debating | Physics


In today’s mid-week nugget, our Executive Director critiques physicist Sabine Hossenfelder’s proposed ‘superdeterminism,’ which aims to account for the theoretical difficulties of quantum measurement without departing from physicalist metaphysical assumptions.

An introductory short video by Essentia Foundation reviews the cumulative evidence from the foundations of physics that contradicts metaphysical physicalism—namely, the notion that physical entities have absolute, standalone existence. The relevant technical literature is linked in the video’s description.

The argument goes as follows: if physical entities did have standalone existence, then their properties should be simply revealed by measurement. They should have whatever properties they have regardless of whether they are measured or what is measured about them. Measuring a table shouldn’t create its weight or length, but simply reveal what its weight or length already were immediately prior to measurement—or so physicalism presupposes.

As it turns out, however, when measurements are made on entangled quantum particles—the building blocks of nature out of which everything else, including tables, is constructed, according to physicalism—the measurement outcome from one particle depends on what is measured about the other. The choice of what to measure on the first particle determines what the second is, so their physical properties couldn’t have had existence prior to the measurement. Instead, what we see is that physical properties are the result of the measurement itself, not pre-existing realities merely revealed by measurement.

What this tells us is that physical entities aren’t fundamental in nature. Instead, they are merely the appearance or representation, upon observation, of a deeper layer of nature that is, by definition, non-physical.

Here is a metaphor to clarify this point. The physical world is akin to what is displayed on the dials of an airplane’s dashboard: the dials only display something when a measurement on the world outside is performed by the airplane’s sensors. If nothing is measured, then nothing is displayed on the dials. This, of course, doesn’t mean that there is no world outside! Surely there is one; it’s just not what the dials display. Instead, the world is what is measured in order for the dials to display something.

Similarly, the experimental results from foundations of physics don’t mean that there is no world outside; surely there is one! It’s just that the world, as it is in itself, prior to measurement, is not physical; for physicality is what is displayed on our own internal dashboard of dials, which we call perception. The ‘sensors’ are our five sense organs: eyes, nose, ears, tongue and skin. Physics is, essentially, a science of perception—that is, a science of the dials—as renowned physicist Andrei Linde, who developed cosmic inflation theory, once observed [1, p. 12]:

Let us remember that our knowledge of the world begins not with matter but with perceptions. I know for sure that my pain exists, my ‘green’ exists, and my ‘sweet’ exists … everything else is a theory. Later we find out that our perceptions obey some laws, which can be most conveniently formulated if we assume that there is some underlying reality beyond our perceptions. This model of material world obeying laws of physics is so successful that soon we forget about our starting point and say that matter is the only reality, and perceptions are only helpful for its description.

Physicist and science communicator Sabine Hossenfelder, however, takes a different view regarding the meaning of the experimental results in question, as discussed in two draft papers [2, 3] and a recent short video.

Let me start my commentary on Hossenfelder’s material by confessing that I sympathize with her work and tend to like her incisive, provocative style. Her voice opposes and slows down the descent of physics into fantasy informed by notions of beauty, as opposed to predictive models informed by hard-nosed empirical observations. As someone disappointed by the failure of supersymmetry, which inspired my younger years at CERN, I realize how important it is to keep our eyes on the proverbial empirical ball. However, despite Hossenfelder’s zeal to stay true to empiricism, she may now have failed it for the sake of safeguarding a physicalist metaphysical commitment.

Hossenfelder’s view is that the experimental results in question can be accounted for by ‘superdeterminism.’ The idea is that the particles must have some hidden properties that we know nothing about. These hidden properties presumably take part in a complex causal chain that encompasses the settings of the detectors used to make measurements. In other words, the detectors’ settings somehow influence something hidden about the particles measured. And since the choice of what to measure is necessarily reflected in those settings, the measurement results depend on that choice. She summarizes superdeterminism thus: “What a quantum particle does depends on what measurement will take place.”

If one posits that quantum particles are nature’s fundamental building blocks, then Hossenfelder’s summary above is strictly incorrect: we define quantum particles in terms of their properties, and their properties depend on what measurement will take place. Therefore, the accurate formulation would be that “what a quantum particle [is] depends on what measurement will take place,” not what it does. Formulated this way—i.e. correctly—the statement starts to look a little less intuitive than Hossenfelder makes it out to be. After all, how can what a particle is depend on what is measured about it? Shouldn’t measurement simply reveal what the particle already was, just as it does in the case of a table? How can the particle change what it is merely because the detector is set up in a different way?

But let us be charitable towards Hossenfelder. According to Quantum Field Theory, there are no such things as particles; the latter are merely metaphors for particular patterns of excitation of an underlying quantum field. And one can reasonably state that the quantum field does those excitations, for excitations are behaviors of the field. Therefore, “what the quantum [field] does depends on what measurement will take place.” That’s accurate enough and seems, at first, to restore plausibility to Hossenfelder’s argument.

The problem is that she is asking us to imagine the existence of things for which we have no direct empirical evidence; after all, they are “hidden.” She is also asking us to grant these invisible things some very specific and non-trivial capabilities: they must somehow—neither Hossenfelder nor anyone else has ever specified how—change, in some very particular ways, in response to the settings of the detector. Mind you, detectors are designed precisely to minimize disturbances to the state of what is measured. Hossenfelder’s imagined hidden variables would have to somehow overcome this barrier as well.

Let us use a metaphor to illustrate what we are being asked to believe. When I take a picture of some celestial body in the night sky—say, the moon—I can set my camera in a variety of different ways. I can, for instance, set aperture and exposure time to a variety of different values. What Hossenfelder is saying, in the context of this metaphor, is that there is some hidden and mysterious something about the moon that changes in response to what aperture or exposure I set on my camera. What the moon does up there in the sky somehow—we’re not told how—depends on how I set my camera here on the ground. This is superdeterminism in a nutshell and you be the judge of its plausibility.

Be that as it may, appealing to hidden variables is inevitably an appeal to a vague, undefined unknown; and not just any vague unknown, but one capable of non-trivial interactions with its environment. The same criticism Hossenfelder levels against, for instance, superstring theory can be leveled, verbatim, against hidden variables: we’re appealing to imaginary entities for which there is no direct empirical evidence. To put it in plain English, we have no reason to believe in this fantastic invisible stuff, except to try to save physicalism.

Hossenfelder could argue that the peculiarities of quantum mechanical measurements—the very problem at hand—are themselves evidence of hidden variables. But this, obviously, would beg the question entirely: quantum measurements can only be construed as evidence of hidden variables if one presupposes that hidden variables are responsible for them to begin with. In precisely the same way, only if I presuppose the existence of the—equally hidden—Flying Spaghetti Monster, who moves the celestial bodies around their orbits using His invisible noodly appendages, can the movements of the celestial bodies be construed as evidence of the existence of the Flying Spaghetti Monster.

In one of her papers [2], Hossenfelder speculates about a possible type of experiment that could one day substantiate superdeterminism. The proposal is to make multiple series of measurements on a quantum system, each based on the same initial conditions. If the series are determined by the system’s initial conditions, as superdeterminism postulates, we should see time-correlations across the different series that deviate from quantum mechanical predictions. The obvious problem, however, is that to reproduce the system’s initial state one needs to reproduce the initial values of the postulated hidden variables as well. But Hossenfelder has no idea what the hidden variables are, so she can’t control for their initial states and the whole exercise is pointless. To her credit, she admits as much in her paper. She then proceeds to speculate about some scenarios under which we could, perhaps, still derive some kind of indication from the experiment, even without being able to control its conditions. But the idea is so loose, vague and imprecise as to be useless.

Indeed, Hossenfelder’s proposed experiment has a critical and fairly obvious flaw: it cannot falsify superdeterminism. It is therefore not a valid experiment, for we have known since Popper’s work in the 1930s that an experiment motivated by a hypothesis must be capable of falsifying that hypothesis. More specifically, if Hossenfelder’s experiment shows little time-correlation between the distinct series of measurements, she can always (a) claim that the series were not carried out in sufficiently rapid succession, so the initial state drifted; or (b) claim that there weren’t enough samples in each measurement series to reveal the correlations. The problem is that (a) and (b) pull in opposite directions: closing off (b) requires long series, which makes rapid succession impossible, while closing off (a) requires rapid succession, which forces short series. The two escape routes can thus never be eliminated simultaneously, and the experiment is, by construction, incapable of falsifying hidden variables.

In conclusion, no, hidden variables have no empirical substantiation, neither in practice nor in principle; neither directly nor indirectly.

You see, I would like to say that hidden variables are just imaginary theoretical entities meant to rescue physicalist assumptions from the relentless clutches of experimental results. But even that would be saying too much; for proper imaginary entities entailed by proper scientific theories are explicitly and coherently defined. For instance, we knew what the Higgs boson should look like before we succeeded in measuring its footprints; we knew what to look for, and thus we found it. But hidden variables aren’t defined in terms of what they are supposed to be; instead, they are defined merely in terms of what they need to do in order for physical properties to have standalone existence. If I were tasked with looking for hidden variables—just as I was once tasked with looking for the Higgs boson—I wouldn’t even know how to begin, because Hossenfelder does not tell us what they are supposed to be. She is just furiously waving her hands and saying, “there has to be something (I have no clue what) that somehow (I have no clue how) does what I need it to do, so I can continue to believe in a physicalist metaphysics.”

This is akin to the medieval notion of ‘effluvium,’ an imaginary elastic material that supposedly connected—invisibly—chaff to amber rods. Effluvium was meant to account for what we today understand to be electrostatic attraction, a field phenomenon. Medieval scholars observed that chaff somehow clung to amber rods when the latter were rubbed. Therefore—since they had no notion of fields—they figured that there had to be some material connecting chaff to rod through direct contact, right? After all, everything that happens in nature happens through direct material contact, right? Never mind that such material was invisible (hidden!), couldn’t be felt with the fingers, couldn’t be cut or measured directly, and that no one had the faintest idea what it was supposed to be, beyond defining it in terms of what it allegedly did to chaff; it just had to be there.

Hidden variables are Hossenfelder’s effluvium: there must be some mysterious invisible something that somehow does what needs to be done for us to think of physical entities as having standalone existence, right? Because measurable physical entities are all that exists and, as such, must have standalone existence… right?

On a more technical note, Hossenfelder bases her entire discussion on Bell’s inequalities but conspicuously fails to mention Leggett’s inequalities [4], a 21st-century extension of Bell’s work that is more relevant to the points in contention, as it separates the hypotheses of physical realism and locality so they can be tested independently. Neither does she address the experimental tests of Leggett’s inequalities, which refuted physical realism rather specifically [5, 6]. More recent experiments have demonstrated that physical quantities aren’t absolute, but contextual (i.e. relative, or ‘relational’) [7, 8], thereby contradicting superdeterminism. By now, a broad class of hidden-variable theories has been refuted by experiment [9], on which Hossenfelder doesn’t comment at all.
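For readers unfamiliar with the formal backdrop, the inequality at issue can be stated concretely. What follows is the standard CHSH form of Bell’s inequality, added here purely for context (it is not taken from Hossenfelder’s papers, which work with the general Bell framework):

```latex
% CHSH form of Bell's inequality: E(a,b) denotes the correlation of
% measurement outcomes for detector settings a and b. Any local
% hidden-variable theory satisfies:
\[
  S \;=\; E(a,b) + E(a,b') + E(a',b) - E(a',b'),
  \qquad |S| \,\le\, 2.
\]
% Quantum mechanics predicts violations up to the Tsirelson bound:
\[
  |S|_{\mathrm{QM}} \;\le\; 2\sqrt{2},
\]
% which experiments confirm. Superdeterminism evades the theorem not by
% changing the bound, but by denying statistical independence between
% the detector settings and the hidden state lambda:
\[
  \rho(\lambda \mid a, b) \;\neq\; \rho(\lambda).
\]
```

In other words, superdeterminism saves hidden variables only by postulating that the hidden state is correlated with whatever settings the experimenter chooses—the very move criticized in the camera-and-moon metaphor above.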

Instead, she bases her case on the notion that the opponents of hidden variables dislike the latter simply because the hypothesis supposedly contradicts free will. While I am sure that there are physicists emotionally committed to free will, refuting their emotional commitments does not validate superdeterminism. Suggesting that it does is a straw man. I, for one, am on record stating that free will is a red herring [10] and don’t base my case against superdeterminism on it at all. I don’t need to.

We have to guard against turning the particular metaphysical assumptions of our time into an unfalsifiable system; one that accommodates anomalies through arbitrary, fantastical appeals to unknowns and vague, promissory hand-waving. When anomalies contradicting Ptolemaic and Copernican astronomy—according to which the celestial bodies move in perfectly circular orbits—began to be observed, adherents came up with fantastical ‘epicycles’—circles moving on other circles—to accommodate them. The cumbersome complexity of the resulting models should have been enough to force those adherents to take a step back and contemplate their dilemma in a more intellectually honest manner. But subjective attachment to a particular set of metaphysical assumptions didn’t allow them to do so.

Today, as anomalies accumulate in various branches of science against the metaphysical assumptions of physicalism, we would do well to avoid a humiliating repetition of the epicycles affair. Instead, what we are now witnessing is hypotheses being put forward, with a straight face, that fly in the face of any honest notion of reason and plausibility. The epicycles look benign and reasonable in comparison to the willingness of some 21st-century theoreticians to believe in fantasy. I shall comment more broadly on this peculiar phenomenon—a harbinger of paradigm changes—in my next essay for this magazine.

 

References

[1] http://web.stanford.edu/~alinde/SpirQuest.doc

[2] https://arxiv.org/pdf/1912.06462.pdf

[3] https://arxiv.org/abs/2010.01324

[4] https://link.springer.com/article/10.1023/A:1026096313729

[5] https://www.nature.com/articles/nature05677

[6] https://iopscience.iop.org/article/10.1088/1367-2630/12/12/123007/meta

[7] https://arxiv.org/abs/1902.05080

[8] https://www.nature.com/articles/s41567-020-0990-x

[9] https://physicsworld.com/a/quantum-physics-says-goodbye-to-reality/

[10] https://blogs.scientificamerican.com/observations/yes-free-will-exists/

The ethics of Idealism, read and discussed

Listening | Ethics | 2022-02-02

In today’s Mid-Week Nugget, we have episode 6 of Essentia Foundation’s ever more popular podcast, Essentia Readings, hosted by Nadia Hassan. She reads and discusses Dr. Asher Walden’s essay, The Ethics of Idealism. Enjoy!

Welcome back, everyone! Today’s article is a fascinating deep dive into the subject of morality and the neurological mechanics of our joint human condition. It is an eye-opening presentation of how we actually relate to each other, and what this could mean for the future of our collective consciousness. Join me for a fittingly wholesome start to the year.

What lurks behind spacetime?

Reading | Physics

The cosmic riddle of structure without extension—of how complexity can exist outside space and time—is tackled by our Executive Director in this first edition of our Mid-Week Nugget.

Long gone are the days in which spacetime was regarded as an immutable, absolute, irreducible scaffolding of nature. Although our ordinary intuitions still insist on this outdated notion, since the late 18th century a series of developments in philosophy and science—such as Kant’s and Schopenhauer’s proposal that spacetime is a mere category of perception, Einstein and his block universe, Julian Barbour and his universe without time, Lee Smolin and his universe without space, loop quantum gravity, etc.—have relegated it to the status of persistent illusion. Spacetime is but a relatively superficial layer of nature contingent upon more fundamental underlying processes.

The problem is that spacetime seems to be a prerequisite for differentiation and, by implication, structure. Things and events can only be distinguished from one another insofar as they occupy different volumes of space or different moments in time. Without spatiotemporal extension, all of nature would seem to collapse into a singularity without internal differentiation and, therefore, without structure. Schopenhauer had already seen this in the early 19th century, when he argued that spacetime is nature’s principium individuationis, or ‘principle of individuation.’

Yet, it is empirically obvious that nature does have structure: its very regularities of behavior betray just that. Under certain circumstances nature does one thing and, under others, something else; repeatedly and reliably. Such distinguishable and consistent behaviors can only occur with some form of underlying, immanent structure.

So how are we to reconcile the empirical fact that nature has structure with the understanding that spacetime is not fundamental? How are we to think of the irreducible foundations of nature as both lacking extension and having structure? I submit that this is the least recognized and discussed dilemma of modern science.

To solve it, we must start with an admission: objects and events do indeed inherently require spatiotemporal extension to be differentiated; Schopenhauer was right about the principium individuationis. But we know of one other type of natural entity whose intrinsic structure does not require extension.

Consider, for instance, a hypothetical database of student records. Each record contains the respective student’s intellectual aptitudes and dispositions, so the school can develop an effective educational workplan. The records are linked to one another so as to facilitate the formation of classes: students with similar aptitudes and dispositions are associated together in the database. Starting from a given aptitude, a teacher can thus browse the database for compatible students.

Now, notice that these associations between records are fundamentally semantic: they represent links of meaning. Associated records mean similar or compatible aptitudes, which in turn mean something about how students naturally cluster together. Therein lies the usefulness of the database. Even though the records may have a spatiotemporal embodiment—say, paper files stored in the same box in an archive—there is a sense in which their structure fundamentally resides in their meaning. Spatiotemporal embodiments merely copy or reflect that meaning. After all, the semantic relationships between my intellectual aptitudes and those of others wouldn’t disappear if our respective paper files went up in flames.
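The point can be made concrete with a toy sketch. The student names, aptitude numbers, and similarity threshold below are all invented for illustration; what matters is that the links are computed from the records’ contents alone, regardless of where or how those records are physically stored:

```python
# Toy sketch: a 'database' of student records whose associations are
# purely semantic -- derived from the content of the records, not from
# their physical embodiment (paper files, disk sectors, etc.).

students = {
    "Anna":  {"math": 0.9, "music": 0.2, "writing": 0.4},
    "Bram":  {"math": 0.8, "music": 0.3, "writing": 0.5},
    "Carla": {"math": 0.1, "music": 0.9, "writing": 0.7},
}

def similarity(a, b):
    """Crude measure of how compatible two aptitude profiles are (1 = identical)."""
    return 1 - sum(abs(a[k] - b[k]) for k in a) / len(a)

def associations(records, threshold=0.8):
    """Link every pair of records whose profiles are semantically close."""
    names = sorted(records)
    return {
        (x, y)
        for i, x in enumerate(names)
        for y in names[i + 1:]
        if similarity(records[x], records[y]) >= threshold
    }

links = associations(students)
# Anna and Bram cluster together; Carla joins neither.
```

Burning the paper files—or deleting the dictionary—destroys an embodiment of these links, but not the semantic fact that Anna’s and Bram’s aptitudes are compatible; any faithful re-encoding of the records recovers exactly the same associations.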

I submit that this is how we must think of the most foundational level of nature, the universe behind extension: as a database of natural semantic associations, spontaneous links of meaning. This is similar to how a mathematical equation associates variables based on their meaning, whether such associations happen to have spatiotemporal embodiments or not. The associations can indeed be projected onto spacetime—just as databases can have physical embodiments—but, in and of themselves, they do not require spacetime to be said to exist. This is how nature can have structure without extension.

But what about causality? Its central tenet is that effect follows cause in time, so what are we to make of it without extension? Philosopher Alan Watts once proposed a metaphor to illustrate the answer: imagine that you are looking through a vertical slit in a wooden fence. On the other side of the fence, a cat walks by. From your perspective, you first see the cat’s head and then, a moment later, the cat’s tail. This repeats itself consistently every time the cat walks by. If you didn’t know what was actually going on—that is, that there exists a complete pattern called a ‘cat’—you would understandably say that the head causes the tail.

Behind extension, the universe is the complete pattern of semantic associations—that is, the cat. Our ordinary traversing of spacetime is our looking through the fence, experiencing partial segments of that pattern. All we see is that the cat’s tail consistently follows the cat’s head every time we look. And we call it causality.

The notion that, at its most fundamental level, nature is a complete pattern of associations has been hinted at by physicists before. Max Tegmark, for instance, has proposed that matter is mere “baggage,” the universe consisting purely of abstract mathematical relationships.

We must, however, avoid vague abstract handwaving: every mathematical structure ever devised has existed in a mind, not in an ontic vacuum. The only coherent and explicit conception of mathematical objects is that of mental objects. To speak of mathematical structure without a mind is like talking of the Cheshire Cat’s grin without the cat. Unless you are Lewis Carroll, you won’t get away with that.

Meaning—such as that of the variables in a mathematical equation—is an intrinsically mental phenomenon. In the absence of spacetime, this betrays the only possible ontic ground of a cosmic semantic database: the universe is a web of semantic associations in a field of spontaneous, natural mentation; for mind is the only ontic substrate we know of that isn’t indisputably extended.

Indeed, dispositions and aptitudes are palpably real—in the sense of being known through direct acquaintance—yet transcend extension. What is the size of my aptitude for math? What is the length of my disposition to philosophize, or even of my next thought? Whatever theory of mind you subscribe to, the pre-theoretical fact remains: you can’t take a tape measure to my next thought; mentation is not indisputably extended.

As such, within the bounds of coherent and explicit reasoning, a structured universe without irreducible extension is perforce a mental universe—not in the sense of residing in our individual minds, but of consisting of a field of natural, spontaneous mental activity, whose intrinsic ‘dispositions’ and ‘aptitudes’ are known to us as the ‘laws of nature.’