World's First Proof that Consciousness is Nonlocal


Monday, May 25, 2020

Speaking the Wrong Language

In my last post, I pointed out a fundamental problem in a particular paper – although the same problem appears in lots of papers: specifically, that there is no way to test whether an object is in a quantum superposition.  I feel like this is a point that many physicists and philosophers of physics overlook, so to be sure, I went ahead and posted the question on a few online physics forums, such as this one.  Here’s basically the response I got:
Every state that is an eigenstate of a first observable is obviously in a superposition of eigenstates of some second observable that does not commute with the first.  Therefore: of course you can test whether an object is in a quantum superposition.  Also, you are an idiot.
OK, so they didn’t actually say that last part, but it was clearly implied.  If you don’t speak the language of quantum mechanics, let me rephrase.  Quantum mechanics tells us that there are certain features (“observables”) of a system that cannot be measured/known/observed at the same time, thus the order of measurement matters.  For example, position and momentum are two such observables, so measuring the position and then the momentum will inevitably give different results from measuring the momentum and then the position – that is, the position and momentum operators do not commute.  And because they don’t commute, an object in a particular position (that is, “in an eigenstate of the position operator”) does not have a particular momentum, which is to say that it is in a superposition of all possible momenta.  In other words, the above response basically boils down to this: quantum mechanically, every state is a superposition.
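To make their point concrete, here is a minimal sketch (Python, purely for illustration): a spin prepared in the Z-up eigenstate is, when expanded in the X basis, an equal-weight superposition.

```python
import numpy as np

# Z-up eigenstate |0>, and the two X-basis eigenstates |+> and |->
z_up    = np.array([1.0, 0.0])
x_plus  = np.array([1.0,  1.0]) / np.sqrt(2)
x_minus = np.array([1.0, -1.0]) / np.sqrt(2)

# Expansion coefficients of |0> in the X basis
c_plus  = np.vdot(x_plus, z_up)    # <+|0> = 1/sqrt(2)
c_minus = np.vdot(x_minus, z_up)   # <-|0> = 1/sqrt(2)

print(c_plus**2, c_minus**2)       # 0.5 0.5: an eigenstate of Z is an equal
                                   # superposition of eigenstates of X
```

That is the (true but, as I explain below, irrelevant) sense in which "every state is a superposition."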

Fine.  The problem is that this response has nothing to do with the question I was asking.  I ended up having to edit my question to ask whether any single test could distinguish a “pure” quantum superposition from a mixed state (a probabilistic mixture), and even then the responses weren't all that useful.
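For the record, here is the distinction I was actually asking about, as a minimal numerical sketch.  The pure superposition |+> = (|0> + |1>)/√2 and the 50/50 mixture of |0> and |1> have different density matrices, but a single measurement in the Z basis gives 50/50 outcomes either way; and even in the X basis, where the statistics differ, any single outcome is still possible under both states, so no single test settles the question.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)

rho_pure  = np.outer(plus, plus)                                  # |+><+|: coherent superposition
rho_mixed = 0.5 * (np.outer(ket0, ket0) + np.outer(ket1, ket1))   # 50/50 probabilistic mixture

def probs(rho, basis):
    """Outcome probabilities for a projective measurement in the given basis."""
    return [float(np.vdot(b, rho @ b).real) for b in basis]

z_basis = [ket0, ket1]
x_basis = [(ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)]

print(probs(rho_pure, z_basis), probs(rho_mixed, z_basis))  # [0.5, 0.5] for both states
print(probs(rho_pure, x_basis), probs(rho_mixed, x_basis))  # [1.0, 0.0] vs. [0.5, 0.5]
```

The X-basis statistics do differ, but only over many identically prepared copies; that is exactly the point I elaborate in the post below.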

This is why I think the big fundamental problems in physics will probably not be solved by insiders.  They speak a very limited language that, by its nature, limits a speaker’s ability to discover and understand the flaws in the system it describes.  My original question, I thought, was relatively clear: is it actually possible, as Mari et al. suggest, to receive information by measuring (in a single test) whether an object is in a macroscopic quantum superposition?  But when the knee-jerk response of several intelligent quantum physicists is to discuss the noncommutability of quantum observables and come to the irrelevant (and, frankly, condescending) point that all states are superpositions and therefore of course we can test whether an object is in superposition – well, it makes me wonder whether they actually understand, at a fundamental level, what a quantum superposition is.  I feel like there’s a huge disconnect between the language and mathematics of physics, and the actual observable world that physics tries to describe. 

Tuesday, May 19, 2020

It is Impossible to Measure a Quantum Superposition

In a previous post, I discussed how and to what extent gravity might prevent the existence of macroscopic quantum superpositions.  There has been surprisingly little discussion of this possibility and there is still debate on whether gravity is quantized and whether gravitational fields are, themselves, capable of existing in quantum superpositions.

Today I came across a paper, "Experiments testing macroscopic quantum superpositions must be slow," by Mari et al., which proposes and analyzes a thought experiment involving a first mass mA placed in a position superposition in Alice’s lab, the mass mA producing a gravitational field that potentially affects a test mass mB in Bob’s lab (separated from Alice’s lab by a distance R), depending on whether or not Bob turns on a detector.  The article concludes that special relativity puts lower limits on the amount of time necessary to determine whether an object is in a superposition of two macroscopically distinct locations.

The paper seems to have several important problems, none of which have been pointed out in papers that cite it, notably this paper.  For example, its calculation of the entanglement time TB assumes that correlation of the location of test mass mB with the gravitational field of mass mA occurs when the change in position δx of the test mass mB exceeds its quantum uncertainty Δx, which seems like a reasonable argument – except that they failed to include the increase in quantum uncertainty due to dispersion.  (This is particularly problematic where they let Δx be the Planck length!)  Another problem is their proposed experiment in Section IV: Alice is supposed to apply a spin-dependent force on the mass mA which results in different quantum states, depending on whether or not Bob turned on the detector, but both quantum states correlate to mass mA located at L (instead of R).  The problem is that by the time she has applied the force, Bob’s test mass mB has presumably already correlated to the gravitational field produced by Alice’s mass mA located at L or R, but how could that happen before Alice applied the force that caused the mass mA to be located at L?
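On the dispersion point: for a free particle that starts as a Gaussian wave packet of width Δx0, the standard result is Δx(t) = Δx0·sqrt(1 + (ħt/(2mΔx0^2))^2).  Here is a rough numerical sketch (my own illustration, with an assumed 1 kg test mass, not numbers taken from the paper) of why letting Δx0 be the Planck length is problematic: the spreading term dominates almost immediately.

```python
import numpy as np

hbar = 1.054571817e-34    # J*s
m = 1.0                   # kg -- an assumed, nominally macroscopic test mass
planck_length = 1.616e-35 # m

def delta_x(t, dx0, m=m):
    """Width of a free Gaussian wave packet after time t (standard dispersion formula)."""
    return dx0 * np.sqrt(1.0 + (hbar * t / (2.0 * m * dx0**2))**2)

for t in [1e-9, 1e-3, 1.0]:                      # seconds
    print(f"t = {t:g} s: delta_x = {delta_x(t, planck_length):.3e} m")
# Even after a nanosecond the width has grown from ~1.6e-35 m to ~3e-9 m,
# so a quantum uncertainty pinned at the Planck length is not self-consistent
# for a free mass.
```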

But the biggest problem with the paper is not in their determination of the time necessary to determine whether an object is in a superposition of two macroscopically distinct locations.  No – the bigger problem is that, as far as I understand, there is no way to determine whether an object is in a superposition at all! 

Wait, what?  Obviously quantum superpositions exist.

Yes, but a superposition is determined by doing an interference experiment on a bunch of “identically prepared” objects (or particles or masses or whatever).  The idea is that if we see an interference pattern emerge (e.g., the existence of light and dark fringes), then we can infer that the individual objects were in coherent superpositions.  However, detection of a single object never produces a pattern, so we can’t infer whether or not it was in a superposition.  Further, the outcome of every interference experiment on a superposition state, if analyzed one detection at a time, will be consistent with that object not having been in superposition.  A single trial can confirm that an object was not in a superposition (such as if we detect a blip in a dark fringe area), but no single trial can confirm that the object was in a superposition.  Moreover, even if a pattern does slowly emerge after many trials, every pattern produced by a finite number of trials – and remember that infinity does not exist in the physical world – is always a possible random outcome of measuring objects that are not in a superposition.  We can never confirm the existence of a superposition, but lots and lots of trials can certainly increase our confidence.
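Here is a crude way to see this numerically (a sketch with made-up toy distributions, not a model of any real experiment): draw a finite run of detections from a two-slit interference pattern and note that the very same run has nonzero likelihood under the no-superposition (incoherent mixture) distribution, so the data never strictly rule the mixture out.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-4, 4, 2001)                # positions on the detection screen

# Toy detection-probability distributions (assumed shapes, for illustration only)
envelope = np.exp(-x**2 / 4.0)
p_super = envelope * np.cos(3.0 * x)**2     # coherent superposition: bright/dark fringes
p_mixed = envelope * 0.5                    # incoherent mixture: smooth, no fringes
p_super /= p_super.sum()
p_mixed /= p_mixed.sum()

# A finite run of detections drawn from the superposition distribution
idx = rng.choice(len(x), size=1000, p=p_super)

log_like_super = np.log(p_super[idx]).sum()
log_like_mixed = np.log(p_mixed[idx]).sum()
print(log_like_super - log_like_mixed)      # the fringes make the mixture far less likely...
print(np.isfinite(log_like_mixed))          # ...but its likelihood is still nonzero (True)
```

More trials make the no-superposition hypothesis less and less plausible, but no finite run makes it impossible -- which is all I am claiming.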

In other words, if I’m right, then every measurement that Alice makes (in the Mari paper) will be consistent with Bob's having turned the detector on (and decohered the field) -- thus, no information is sent!  No violation of special relativity!  No problem!

Look, I could be wrong.  I’ve been studying the foundations of quantum mechanics independently for a couple of years now, and very, very few references point out that there’s no way to determine if any particular object is in a quantum superposition, which is also why it’s taken me so long to figure it out.  So either I’m wrong about this, or there’s some major industry-wide crazy-making going on in the physics community that leads to all kinds of wacky conclusions and paradoxes... no wonder quantum mechanics is so confusing!

Is there a way to test whether a particular object is in a coherent superposition?  If so, how?  If not, then why do so few discussions of quantum superpositions mention this?

Update to this post here

Why Special Relativity Prevents Copying Conscious States

I was honored to be asked by Kenneth Augustyn to present to the 3rd Workshop on Biological Mentality on Jan. 6, 2020.  The talk was entitled, “Why Mind Uploading, Brain Copying, and Conscious Computers Are Impossible.”  While the talk addressed work in my earlier papers, it offers a clearer argument explaining why Special Relativity prevents the existence of physical copies of conscious states -- specifically, why two instances of physical copies of conscious states located at different points in spacetime, whether spacelike or timelike separated, would require either superluminal or backward causation.  I also show that because conscious states cannot be copied or repeated, consciousness cannot be algorithmic and cannot be created by a digital computer.  I mention some possible explanatory hypotheses, several of which are related to quantum mechanics, such as Quantum No-Cloning.  Finally, I touch on my related work on whether conscious states are history-dependent.

This 36-minute video is probably my clearest video explanation so far as to why mind uploading and conscious computers are inconsistent with Special Relativity.




Friday, May 8, 2020

Into the Lion's Den

Two years ago, I sold my businesses and “retired” so that I could focus full-time on learning about, addressing, and attempting to solve some of the fundamental questions in physics and philosophy of mind... things like the physical nature of consciousness, whether we have free will, the measurement problem in quantum mechanics, etc.  What gave me the audacity to think I might be able to tackle these problems where so many have failed before?  Well, first, tackling a problem only requires desire.  I find these big-picture questions fascinating and looked forward to learning, analyzing, and at least trying.  But I did think I had a reasonable shot at actually solving some of these mysteries.  Why?

While I don’t (yet) have a degree in physics or philosophy, I do have an undergraduate and master’s degree in nuclear engineering as well as a law degree (which is certainly applicable to philosophical reasoning), and have taken lots of physics and philosophy classes along the way.  As an example, I’ve taken graduate-level quantum mechanics, or a course closely related or heavily dependent on QM, at UF, MIT, Princeton, and ECU, and even a fascinating course called Philosophy of Quantum Mechanics.  In other words, I’m no expert – and I plan to continue graduate studies in physics – but I certainly have more than a superficial understanding of physics.

It takes more than education to solve problems; it also takes creativity and a willingness to say or try things that others won’t.  As the sole inventor of 17 U.S. patents on a wide variety of inventions, from rocket engines to software to pumps to consumer products, I’ve always felt confident in my ability to solve problems creatively.  As for independence – let’s just say I’ve always been a maverick.  As an example, while in law school I realized that a loophole in American patent law allowed for the patenting of fictional storylines, so I published an article to that effect.  Over the next couple years, at least six law review articles were published specifically to argue that I was full of shit: great evidence that I was actually on to something!  (Since then the courts closed the loophole.)  I’m not trying to list my CV – just to explain my state of mind when I started this process.  I had plenty of free time, an independent spirit, a history of creativity in solving problems, and a strong and relevant educational foundation.  This gave me confidence that I was in a better position than most to actually solve an important riddle.  I also figured, perhaps naively, that the field of physics was one place where novel approaches, critical thinking, and objective analysis would be rewarded.

I jumped right in.  After extraordinary amounts of research and independent thought, I soon realized that special relativity would cause problems for copying or repeating conscious states.  I wrote my first paper on the topic; the most recent iteration is here.  Not long after that, I realized that QM would also, independently of relativity, cause problems for copying or repeating conscious states, and wrote my second paper; the most recent iteration is here.  In July, 2018, I sent my first paper to the British Journal for the Philosophy of Science; it was summarily rejected without comments or review.  Fuck them.  Over the next year and a half, I submitted it to four more journals, and despite getting close to publication with one, the paper was ultimately rejected by all.  Over the same period, I submitted my second paper to three journals and, again, despite getting close to publication with one, the paper was ultimately rejected.  What had gone wrong?  Was I in over my head? 

Regarding the first paper, the same objection kept coming up over and over: that copying the physical state of a person does not necessarily copy that person’s identity.  Without getting too technical, my argument was that whether or not a person’s identity depends on their underlying physical state, special relativity implied the same conclusion.  But no matter how I replied, the conversation always felt like this:
Them: “How do you know that copying a person’s physical state would copy their identity?” 
Me: “I don’t.  But if it does, then copying that state violates special relativity.  If it doesn’t, then there is nothing to copy.  Either way, we can’t copy a person’s identity.”
Them: “But wait.  First you need to show that copying a person’s physical state would copy their identity.”
Me: “No, I don’t.  Consider statements A and B.  If A→B, and also ¬A→B, then B is true, and we don’t need to figure out if A is true.”
Them: “Hold on.  How can you be so sure that statement A is true?...”
Me: [Banging head against wall]

It’s literally crazymaking.  No one seemed to have a problem with the physics or the implications of special relativity.  Instead, their problem almost always boiled down to the concept of identity and its relationship to physical reality.  I suspect that what’s happening is that people find a conclusion they’re uncomfortable with – such as “mind uploading is impossible” or “consciousness is not algorithmic” – and then work backward to find something they can argue with... and that something always happens to be some variation on “How do you know that statement A is true?”  I don’t know if it’s a case of intentional gaslighting or unintentional cognitive dissonance, but either way it took me a long time to finally rebuild my confidence, realize I’m not crazy, completely rewrite the paper to address the identity issue head-on, and submit it to a new journal.

Regarding the second paper, the referee of the third journal brought up what I believed, at the time, was a correct and fatal objection.  But by then, I had experienced 18 continuous months of essentially nothing but rejection, criticism, or being ignored (which is sometimes worse).  Prior to that, I’d spent so much of my life feeling confident about my ability to think clearly and rationally, to solve problems creatively, to analyze arguments skeptically, and to eventually arrive at correct conclusions.  So by the time I received that final rejection, I threw the paper aside and basically forgot about it – until about two weeks ago.  Somehow the human spirit can reawaken.  I took a look at the paper with fresh eyes, fully expecting to confirm the fatal error, but found exactly the opposite.  I (and the journal referee) had been wrong about my being wrong.  In other words, the error that had been pointed out, as it turns out, was not an error.  That isn’t to say that my reasoning and conclusions in the paper are ultimately correct – there could still be other errors – but the referee had been wrong.  What I argued in my second paper is original and it just may be right.  If so, its implications are important and potentially groundbreaking.  The paper needs to be rewritten, the physics tightened, and the arguments cleaned up: a project for another day.

As for now, here’s the problem I face.  On one hand, answers to some of the deepest and most important questions plaguing humanity for millennia are finally starting to become accessible via science, particularly physics.  On the other hand, it has become, for whatever reason, out of vogue in the physics community to research or even discuss these issues, which is odd for many reasons.  First, many of the giants of physics, even in modern history, routinely debated them, including Einstein, Bohr, Wigner, and Feynman.  Second, physics has itself produced several of these hard questions (like the QM measurement problem and the inconsistency between QM and general relativity).  But because physicists rarely talk about these big-picture and foundational questions, and because there’s essentially no funding to research them, the conversations are typically left to: a) self-made or retired mavericks who don’t need funding (e.g., Roger Penrose); b) writers who profit on popular viewpoints (e.g., Sean Carroll and Deepak Chopra); c) academic philosophers who may or may not (but typically don’t) have any formal training in physics; and d) crackpots, nutjobs, and wackadoodles.  And there are a LOT of wackadoodles; category d) might dwarf the others by a factor of 100, and occasionally even includes members of the other categories.  The Internet is teeming with “amateur physicists” with their own solutions to quantum gravity, theories about “quantum consciousness” (whatever the hell that is), yada yada.

I am in category a), but I understand, if on statistics alone, why I’d be assumed to be in category d).  The thing is, maybe I am a little crazy.  But the solutions to the big problems in physics, cosmology, and philosophy of mind are not going to come from tweaking the same old shit we’ve been tweaking for the past century.  They are going to require truly revolutionary ideas, and those ideas, when first proposed, WILL seem crazy.  I want to be openminded, diligent, and creative enough to explore the crazy, revolutionary ideas that ultimately lead to the correct solutions.  Still, the hardest challenge of all will be maintaining my confidence throughout the process.  Not only will I be continually discouraged by incorrect solutions, but I suspect that my journey will be somewhat lonely.

Blogger and theoretical physicist Sabine Hossenfelder points out that stagnation in physics is in large part due to a feedback mechanism in which those who pull the strings – journal editors, those who award grant funding, members of academic tenure committees, etc. – tend to reward what is most familiar to them and popular with their peers.  This has the effect of stifling innovation.  Her solution: “Stop rewarding scientists for working on what is popular with their colleagues.”  Lee Smolin made a similar point in his article, “Why No ‘New Einstein’?”  He says that the current system of academic promotion and publication has “the unintended side effect of putting people of unusual creativity and independence at a disadvantage.”   Despite the current publish-or-perish system that incentivizes scientists to do “superficial work that ignores hard problems,” the field of physics is actually “most often advanced by those who ignore established research programs to invent their own ideas and forge their own directions.”

In other words, even though I didn’t know it when I began this process two years ago, it was a foregone conclusion that my intention to independently and creatively attack some of the hard foundational problems in science would be met with contempt, condescension, and unresponsiveness.

I am planning to begin a master’s program in physics at NYU in the fall.  NYU has some of the world’s best (or at least most academically well regarded) faculty in the fields of cosmology, the foundations of physics, the philosophy of physics, and the philosophy of mind.  But I will be entering with eyes wide open: into the lion’s den.  I certainly hope some of the faculty will be legitimately interested in answering some of the big questions – and will be responsive to and encouraging of original approaches – but I won’t expect it.  Instead, I will enter with low expectations, understanding clearly that any progress I make in answering the big questions may be despite, not because of, the physics academy.  I will hope to remain guided by a burning curiosity, a passion to learn and understand, and a confidence in my abilities to think, analyze, and create.  Please wish me luck.

Wednesday, May 6, 2020

The Effect of Gravity in Preventing Macroscopic Quantum Superpositions

In a recent post, I posited that a quantum superposition just exists if and only if the facts of the universe are consistent with the superposition; i.e., a system described at time t by state |A> + |B> just means that there is no fact about whether the system is in state |A> or |B> at time t.  In other words, had the system been measured at time t in a basis that includes elements |A> and |B>, then either outcome A or outcome B would have been measured (with probabilities according to the Born rule), but since it was not measured, then information regarding whether the system was in state |A> or |B> did not exist at time t and no future measurement/observation/fact can contradict that fact.  The production of facts (or the happening of events) over time creates new information that reduces future possibilities.

Thinking about quantum mechanics in this way has helped me immensely in understanding and solving many of the various philosophical problems in QM.  To get feedback on it, I submitted a version of the explanation to an essay contest of the Foundational Questions Institute, entitled “Interpreting Quantum Mechanics and Predictability in Terms of Facts About the Universe,” and a preprint is also available here.

However, apparently this point of view is more revolutionary than I had originally thought.  The typical way to think about or describe a quantum superposition described by state |A> + |B> is that it “is kind of in state |A> and kind of in state |B>” or that it “is in both state |A> and state |B> simultaneously” or something like that.  But these descriptions are inaccurate, sloppy, and just plain wrong. 

For example, it is typical in QM to work with expectation values, such as the expectation of position <X>, which is found by taking a weighted average of an object’s position distribution (i.e., weighted by probability, which is the square of amplitude).  The problem arises when this is treated as something real as opposed to something simply mathematically useful for making predictions.  For instance, if a particle whose position expectation is <X> is actually measured/detected at a location X0 that is somewhere far away from <X>, then do we say there’s been a violation in conservation of energy if X0 and <X> are at different potentials?  Likewise if an object having momentum expectation <P> is measured having momentum P0, but <P>^2/2m ≠ P0^2/2m.

The problem is that there was nothing real about the particle’s location when we calculated <X>.  If we were right that the particle was in a location superposition at time t, then there is no fact, nor will there ever be, about the particle’s location at time t, so there can’t be a violation of conservation of energy by detecting the particle at X0 at a later time if there is no fact about where the particle came from.
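A quick numerical illustration of that point (a toy example of my own): for a particle in an equal superposition of two well-separated wave packets, <X> sits exactly midway between the packets, at a place where the particle is essentially never found.

```python
import numpy as np

x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]

# Equal superposition of two narrow Gaussian packets centered at x = -10 and x = +10
psi = np.exp(-(x + 10)**2) + np.exp(-(x - 10)**2)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)          # normalize

prob = np.abs(psi)**2
x_expect = np.sum(x * prob) * dx                     # expectation value <X>
print(x_expect)                                      # ~0, midway between the packets
print(prob[np.argmin(np.abs(x))])                    # probability density at x = 0: ~0
```

There is nothing at <X>; it is a statistic over possibilities, not a place where anything is.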

For instance, when Roger Penrose, whom I greatly admire, tried to analyze the effect of gravity on quantum state reduction, he postulated that the difference in gravitational self-energy (EΔ) between the spacetime geometries of a quantum superposition “in which one lump [of mass] is in two spatially displaced locations” produces an instability that results in a decay into one or the other of the spacetime eigenstates.  He even goes so far as to give a decay time T ≈ ℏ/EΔ, reminiscent of the quantum uncertainty principle.  The problem, as I see it, is that he treated the “two” lumps (in the superposition) as real, so real in fact that he requires taking into account “the gravitational interaction effects between the pair of lumps.”  What pair of lumps?!  There is only one lump!

But this (mis)understanding of QM seems to permeate the field.  So far, I have been unable to find my characterization of QM in the academic literature.  It certainly may be out there, but I feel comfortable in saying that nearly all characterizations of a quantum superposition treat it as if the terms represent something real.  For instance, in the classic Schrodinger’s Cat thought experiment (which is essentially the same as the Wigner’s Friend thought experiment), we are given a quantum state of the cat |Ψ> = |alive> + |dead>, which is a linear superposition of a state in which the cat is alive and one in which it is dead.  QM tells us that the likelihood of finding the cat in one state or another depends on the square of the amplitudes, which I’ve left out for simplicity.

So here’s the classic conundrum: before we look, is the cat dead or alive?  The answer: there is no fact about it being dead or alive until evidence exists (in the form of a correlation somewhere in the universe) that it is one or the other.  Until that information exists, there simply is no fact.  The real difficulty in this thought experiment, which almost no one points out, is the extreme difficulty (and likely impossibility) of creating state |Ψ> = |alive> + |dead> in the first place.  To do so requires that there is no evidence anywhere (beyond the cat itself, of course, which we assume is thermally isolated) of the cat’s being dead or alive.  Even a single photon bouncing off the cat – and keep in mind that the universe is inundated with radiation, such as CMB – would almost certainly provide evidence correlated to its being either dead or alive.

Getting back to Penrose’s paper, in making his argument about a superposition of spacetimes, he points out that “these two space-time geometries differ significantly from each other.”  But my question is this: how could such a superposition arise in the first place?  If I am right that a superposition exists if and only if the facts of the universe are consistent with the superposition, then what would it mean if there was a “significant difference” between two (or more) eigenstates?  If we say, “There would have been a significant difference had that difference been measured but it wasn’t actually measured,” then that does not justify Penrose’s treatment of the spacetime geometries as being actually significantly different.  But to say “There is a significant difference” is wrong because: by whose standards?  By what measure?  After all, if there is a measure (in the form of evidence anywhere in the universe) by which the spacetime geometries are different, then there could not have been a superposition!

The thing is – gravity may be weak (e.g., the electromagnetic attraction between a proton and electron in a hydrogen atom is something like 10^40 times greater than their gravitational attraction), but it is ubiquitous in the universe and always attractive.  So my question is this: wouldn’t gravity effectively prevent any macroscopic superposition?  To use Penrose’s example, imagine a macroscopic lump of matter near Earth that we are somehow able to perfectly isolate from the universe (already a ridiculous assumption) to allow it to enter a superposition of macroscopically distinct positions.  A lump creates a gravitational field that is tiny but – as far as we know – potentially affects everything in the universe.  If the gravitational field of the lump located at position A affects even a single particle differently than the field of the lump located at B, then the lump at one of these two positions will be correlated with the rest of the universe and a quantum superposition of the lump at position A and position B cannot exist.  Note that the speed of light is irrelevant here; if the lump’s gravity takes 20 years to affect the trajectory of a particle 20 light-years away, that correlation is enough to ensure that there could not have been a superposition at the time.  (This argument may be related to the production of gravitational waves, which I know little about.)
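(For the record, that factor is easy to check with standard constants; the separation cancels out of the ratio because both forces fall off as 1/r^2.)

```python
# Ratio of electrostatic to gravitational attraction between a proton and an electron,
# using standard SI-unit constants.
e   = 1.602176634e-19       # elementary charge, C
k   = 8.9875517873681764e9  # Coulomb constant, N*m^2/C^2
G   = 6.67430e-11           # gravitational constant, N*m^2/kg^2
m_p = 1.67262192369e-27     # proton mass, kg
m_e = 9.1093837015e-31      # electron mass, kg

ratio = (k * e**2) / (G * m_p * m_e)   # the 1/r^2 factors cancel in the ratio
print(f"{ratio:.2e}")                  # ~2.3e39, i.e. roughly 10^40
```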

Anyway, my point is that when Penrose discusses a superposition of spacetime geometries that “differ significantly from each other,” then wouldn’t significant differences correlate to measurable differences in effects, events, and/or interactions elsewhere (i.e., outside the isolated system)?  If so, such a superposition could never exist.  Which is to say, as soon as there is a fact in the universe that differentiates the two possibilities, they are no longer both possibilities and there is no superposition. 

I haven’t done the calculation yet, but I suspect that gravity would destroy a macroscopic superposition very quickly.  Interestingly, a group of researchers showed that relativistic time dilation at different heights on the Earth’s surface was enough to decohere a macroscopic quantum superposition pretty quickly.  They showed that an isolated gram-scale object in a superposition of locations vertically separated near Earth’s surface by 1 mm would decohere in around a microsecond.  This implies that even a “perfectly isolated” Schrodinger’s Cat experiment could never even get off the ground if located anywhere near a planet; however, it says little about performing such an experiment in deep space, where spacetime is essentially flat.  But even though the word “gravitational” appeared in its title, the article was really about time dilation.  So far, I haven’t found an article that deals with how the gravitational effects of a macroscopic object in different locations would correlate to measurable differences elsewhere in the universe, and how this would prevent macroscopic quantum superpositions.  If it were the case that an isolated system described by |dead> caused some correlated event different than an isolated system described by |alive>, then the superposition |Ψ> = |alive> + |dead> could not exist.

Of course, the question is not really whether gravitational effects are relevant to the existence of quantum superpositions.  Of course they are.  The sun could not exist in a superposition of a state in which it is located at the center of our solar system and a state in which it is located a light-year away, as the gravitational differences between such states would be heavily correlated to measurable differences in other places in the universe.  (Obviously, other differences besides gravitational differences would decohere any potential superposition long before this point.)  The question is at what scale are gravitational effects relevant to the existence of quantum superpositions.  That may place an upper limit to the size of quantum superpositions and the applicability of QM.  (This whole notion that there is no limit, in principle, to the size of objects in interference experiments is driving me crazy, but I’ll save that rant for another time.)  If the answer happens to be such as to prevent any kind of Schrodinger’s Cat or Wigner’s Friend experiment anywhere in the universe, no matter how isolated, then we can finally stop being confused by (and hearing about) these thought experiments.

Before I spend time doing these calculations or trying to reinvent the wheel, it would be great to know if it’s already been done.  Do you know of any such calculation, article, or research? 

Tuesday, February 25, 2020

Interpreting Quantum Mechanics in Terms of Facts About the Universe

Like so many, I am trying to understand quantum mechanics – or, at least, to explain it in a way that makes sense to me.

I’ve taken graduate-level quantum mechanics, or a course that intimately depends on quantum mechanics, at four universities, including MIT and Princeton.  I’ve read countless books and journal articles on quantum mechanics and its various interpretations.  But I’ve never seen quantum mechanics characterized or explained the way I am about to explain it, so I sincerely hope that: a) if it is incorrect, someone can (kindly) point out the flaw; b) if it is correct but is equivalent to another interpretation (e.g., Consistent/Decoherent Histories), someone can expound on the equivalence; or c) if it is correct and novel, that it helps other people to understand quantum mechanics.  If c), then I’d like to submit this to a journal on physics education.

My Interpretation/Understanding

I am attempting to characterize, interpret, and understand quantum mechanics using the following set of propositions, and then more deeply explain this interpretation using a specific example.

The state of the universe is a particular chronological set of facts/events, and the relationships between objects in the universe are the information storing/instantiating those facts.  Those facts must be consistent throughout the entire universe.

A fact occurs exactly when the number (or density) of future possibilities decreases.  Every fact limits future facts and is limited by prior facts.  A fact does not necessarily require an “impact” or “interaction” as colloquially understood.[1]

A (quantum) superposition exists if and only if the facts of the universe are consistent with the superposition.  For example, in the case of the classic two-slit interference experiment with the particle passing the double slit at time T0, the particle is in a superposition of passing through both slits if and only if there is no fact about the particle’s location in one slit or another at time T0.  If even a single photon, correlated to the location of the particle in one slit or the other at time T0, scurries away at light speed, there is a fact about the location of the particle and it cannot be in a superposition at time T0.[2]  In the unlikely event that the experiment is set up so that that photon later gets uncorrelated such that no “which-path” information is ever available, then the particle is, amazingly, in superposition at time T0.  Such a “delayed-choice quantum eraser experiment” (See, e.g., Aspect et al., 1982) demonstrates that whether an event occurs seems to depend on the future permanence of a correlating fact.  In reality, the “window of opportunity” to prevent the decoherence of a superposition is extremely short, so we don’t generally need to wait long before we can officially declare the happening of an event.

Quantum uncertainty (e.g., in the form of the Heisenberg Uncertainty Principle) is simply one type of superposition, in which a spread of possible positions and a spread of possible momenta are related.  For instance, if a particle is tightly localized at time T0, then the facts of the universe at that time are consistent with a wide spread of possible momenta – i.e., a superposition of many momenta exists at T0. 
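A small sketch of that last claim (toy numbers, natural units with ħ = 1, nothing specific to my interpretation): Fourier-transforming a tightly localized Gaussian position wave packet gives a broad momentum distribution, and vice versa, with the product Δx·Δp pinned near the Gaussian minimum ħ/2.

```python
import numpy as np

hbar = 1.0                                   # natural units for this toy example
N = 4096
x = np.linspace(-50, 50, N, endpoint=False)
dx = x[1] - x[0]
p = 2 * np.pi * hbar * np.fft.fftshift(np.fft.fftfreq(N, d=dx))
dp = p[1] - p[0]

def spreads(sigma_x):
    """Return (delta_x, delta_p) for a Gaussian position wave packet of width sigma_x."""
    psi_x = np.exp(-x**2 / (4 * sigma_x**2))
    psi_x /= np.sqrt(np.sum(np.abs(psi_x)**2) * dx)
    prob_p = np.abs(np.fft.fftshift(np.fft.fft(psi_x)))**2   # momentum-space distribution
    prob_p /= np.sum(prob_p) * dp
    delta_x = np.sqrt(np.sum(x**2 * np.abs(psi_x)**2) * dx)
    delta_p = np.sqrt(np.sum(p**2 * prob_p) * dp)
    return delta_x, delta_p

for s in [0.5, 1.0, 2.0]:                    # tighter localization -> wider momentum spread
    delta_x, delta_p = spreads(s)
    print(f"sigma_x = {s}: delta_x = {delta_x:.3f}, delta_p = {delta_p:.3f}, "
          f"product = {delta_x * delta_p:.3f}")
# Each product comes out near hbar/2 = 0.5, the minimum allowed by the uncertainty relation.
```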

Explanation of this Interpretation

I’ll try to explain this interpretation with a specific example.  Imagine N objects ({O1, ..., ON}), which need not be microscopic “particles,” distributed in three-dimensional space discretized into M possibilities per dimension.  Assume also that velocity is discretized into M possibilities per dimension.  Each possible combination of location (X) and momentum (P) vectors for each and every object might be considered a single point in classical phase space, yielding a total of M^(6N) such points/possibilities.  A fact (or event) is anything that reduces the number of such possibilities, so one example of a fact is an impact between two objects.  Assume for simplicity that an impact between two objects is always repulsive and their masses are equal, so an impact just has the effect of swapping the objects’ velocities.  Assume also that an impact occurs only when two objects are at the same location at the same time; we will neglect fields.

Let us choose one set of possibilities at time T0, specifically the set in which O1 has a particular position X1 and three possible momenta P11, P12, P13, and O2 has a particular position X2 and three possible momenta P21, P22, P23, as shown in Fig. 1 below.  For the sake of demonstration, these values are chosen such that O1 with P11 will, at time T1, reach the same location in space as O2 with P21; also, O1 with P12 will, at time T2 (which may or may not be different from T1), reach the same location in space as O2 with P23; but every other combination always results in non-coinciding future locations.


Fig. 1.  Nine possibilities for two objects.


There are no restrictions on the possible locations and momenta of other objects, so for each of the nine combinations of O1 and O2, there are M^[6(N-2)] possibilities involving the remaining (N-2) objects. For simplicity, let’s ignore those other combinations and simply write the nine points in phase space as {X1, P11, X2, P21}, {X1, P11, X2, P22}, {X1, P11, X2, P23}, {X1, P12, X2, P21}, etc. 

We now add the following fact about the universe: by time T3 (which is after T1 and T2), O1 and O2 have interacted with each other but not with any other objects.  (That is, they reach the same location in space and then repel, thus swapping their momenta.)  Notice that this fact has the effect of reducing the number of possible combinations that can exist at T3.  Specifically, only the two possibilities, {X1, P11, X2, P21} and {X1, P12, X2, P23} as they existed at time T0, can now exist at T3.  Note that at time T3, the objects O1 and O2 in each of the two combinations have swapped momenta and are in different locations.  For clarity, let’s assume that possibilities {X1, P11, X2, P21} and {X1, P12, X2, P23} at time T0 evolve, respectively, to {X1’, P21, X2’, P11} and {X1’’, P23, X2’’, P12} at time T3.

This reduction in the number of combinations has two features.  First, there are broad categories of individual momenta that simply cannot exist: specifically, at time T3, O1 cannot have a position/momentum combination that traces it back to (or is correlated to) the combination {X1, P13} at time T0, just as O2 cannot be traced back or correlated to the combination {X2, P22} at T0, and no future measurement can contradict this.  (Note that I’m not asserting that an event after T0 retroactively eliminates possibilities at T0.  Rather, while at T0 there were nine possibilities, there are only two at T3.)  Second, while other broad categories of individual momenta may not be ruled out, there are now correlations between the possible momenta of the objects.  For example, if an evolution of O1 from state {X1’, P21} exists at some later time, then a corresponding evolution of O2 from state {X2’, P11} must also exist.  If a future fact rules out one, then it rules out both.  Similarly, if an evolution of O1 from state {X1’’, P23} exists at some later time, then a corresponding evolution of O2 from state {X2’’, P12} must also exist.  These two objects are now entangled, no matter the distance between them.

Let me further clarify.  For the moment, let’s only consider the nine original possible configurations of objects O1 and O2.  By time T3 the only remaining possibilities are: O1 having P21 AND O2 having P11; or O1 having P23 AND O2 having P12.  If at some later time (but before the objects have had a chance to interact with other objects), Alice measures the momentum of object O1 to be P21, it will necessarily be the case that the momentum of object O2, if measured by Bob, would be found to be P11.  Even if Alice and Bob are far apart, their measurements will be perfectly correlated.  Even if the measurement events are spacelike separated – i.e., there is no fact about which measurement happens first – object O1 having momentum P21 will correspond to object O2 having momentum P11 and not P12.  In other words, among the nine possibilities at time T0, the first fact (O1 interacts with O2) eliminates all but two, and the second fact (O1 has momentum P21) eliminates one.  Thus, these facts make future facts incompatible with all but one of those original nine possibilities, specifically {X1, P11, X2, P21} at T0.[3]
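The nine-combination example is small enough to just run.  Here is a minimal sketch (the specific grid positions and momenta are made-up numbers chosen so that exactly the two pairs described above collide by T3): enumerate the nine initial combinations, impose the fact that O1 and O2 have interacted by T3, and read the resulting momentum correlations off the survivors.

```python
from itertools import product

# Toy 1-D universe: two equal-mass objects on a line, integer time steps,
# an "impact" (velocity swap) whenever they occupy the same cell.
# Initial positions/momenta are arbitrary values chosen so that only the
# {P11, P21} and {P12, P23} combinations lead to a collision by T3.
X1, X2, T3 = 0, 12, 3
P1_options = {"P11": 4, "P12": 5, "P13": 1}
P2_options = {"P21": 0, "P22": 2, "P23": -1}

def evolve(p1, p2, steps=T3):
    """Evolve both objects for `steps` time steps; return (collided?, final momenta)."""
    x1, x2, collided = X1, X2, False
    for _ in range(steps):
        x1, x2 = x1 + p1, x2 + p2
        if x1 == x2:                       # impact: equal masses swap momenta
            p1, p2, collided = p2, p1, True
    return collided, (p1, p2)

surviving = []
for (n1, p1), (n2, p2) in product(P1_options.items(), P2_options.items()):
    collided, final = evolve(p1, p2)
    if collided:                           # impose the fact: O1 and O2 interacted by T3
        surviving.append(((n1, n2), final))

print(surviving)
# Only the {P11, P21} and {P12, P23} combinations survive the fact, and in each
# survivor the momenta are swapped and perfectly correlated: finding O1 with one
# momentum fixes what a measurement of O2 must give.
```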

Notice that the reduction in possibilities – and the resulting correlations – have nothing to do with whether Alice or Bob knows about the correlations.  I think there’s been a lot of experimental research and discussion regarding how measurements on systems with known entanglements correlate to each other, as if entanglement were some rare, almost magical quantum configuration created only in expensive labs.  Instead, I think entanglement is ubiquitous.  If every (or almost every) impact between objects results in a new correlation between them, then isn’t every object entangled with every other?  The universe goes on creating new facts, reducing future possibilities, correlating the possibilities of one system with those of another, so that the possibilities for any one object depend, in some sense, on the possibilities of every other.  The notion of universal entanglement is far more important and useful, I think, than has been discussed in the scientific literature.

Of course, this example is insanely oversimplified.  My goal is simply to show how the quantity/density of possible combinations in phase space gets reduced by facts.  For instance, as discussed above, the fact that O1 interacts with O2 implies that O1 cannot have a state after T3 that traces it back or correlates it to the state {X1, P13} at time T0.  However, this does NOT imply that O1 can’t have momentum P13 after T3.  The analysis considered only a tiny (TINY!) subset of possibilities at time T0 in which O1 was located at X1 and O2 was located at X2.  To determine whether O1 might have momentum P13 after T3, we have to consider every other possible combination in which O1 is not at X1 at T0.  Looking back at Fig. 1, we can obviously move O1 to some other location so that, with momentum P13, it does impact O2.

Now that I’ve explained the example, the primary questions I want to consider are the effect of facts on the universe in reducing the entire phase space of possibilities, and whether any interesting or large-scale pattern or structure emerges.  For example, if it turned out, after several events, that O1 having momentum P13 does not appear in any of the possible combinations at T3, then we can state with certainty that O1 does not have momentum P13 at T3.  And if in every possible combination after T3 in which O1 has momentum P21 we find that O2 has momentum P11, then we can say with certainty that if Alice measures the momentum of O1 as P21 and Bob, who is several light-years away from Alice, measures the momentum of O2, he will measure P11.[4]

I think the most interesting question is: as the phase space of possibilities gets reduced in time by facts, does any structure or pattern emerge in the distributions of object locations and/or momenta?  For example, after lots of events involving objects O4 and O7, do we find, among the remaining possibilities in phase space, that the locations of O7 relative to O4 start to converge?  If so, does the spread of the distribution (e.g., standard deviation) get tighter with the addition of subsequent facts?

Computer Simulation and Questions

I tried programming a simulation to answer the above questions with Mathematica, but quickly realized that even the simplest possible analysis (three objects in one dimension discretized into 10 possibilities, repeating universe, no gravity) took about 10 seconds to analyze the one million points of phase space.  Imagine trying to do a more reasonable analysis of, say, 100 objects in two-dimensional space discretized to 1000 places per dimension; we’re now at 1000^400 possibilities, which significantly exceeds the computational power of the entire universe, estimated at 10^122.  (See, e.g., Davies, 2007.)

There are a variety of mathematical tools and shortcuts that could help with the analysis.  For example, I suspect that an interesting analysis could be done with a Monte Carlo simulation, essentially by just randomly selecting initial states.  I could start with a set of chronological facts/events (e.g., O1 impacts O5, then O3 impacts O9, then O5 impacts O6, etc.) and then run a Monte Carlo simulation to find a statistically useful set of initial states that satisfy the facts.  Then, I’d like to see what kind of patterns and/or localizations, if any, emerge.  I suspect that after enough events, some objects would start to appear fixed relative to some other objects, and once all objects are entangled/correlated, they would all begin to show a (potentially fuzzy) localization relative to each other.  Further, I suspect that if we were to look at the fuzziness of, say, object #74, we would find a particular spread in its location and momentum, but if we were to look only at the distribution of momenta of object #74 in particular locations, we would find a larger spread.  If so, then such an analysis might numerically demonstrate quantum uncertainty.  Of course, I could be wrong about all this, but won’t know until I can do some sort of simulation or analysis.
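Here is a sketch of the Monte Carlo idea at a toy scale (1-D periodic grid, three objects, one stipulated fact; every number in it is arbitrary): randomly sample initial states, keep only those whose evolution reproduces the fact, and then look at what the surviving samples have in common.

```python
import numpy as np

rng = np.random.default_rng(1)
GRID, VMAX, STEPS, SAMPLES = 50, 3, 10, 50_000     # arbitrary toy parameters

def consistent_with_fact(x, v, steps=STEPS):
    """Evolve 3 objects on a 1-D periodic grid with velocity-swap impacts.
    The stipulated fact: objects 0 and 1 impact each other at least once,
    and object 2 impacts no one."""
    x, v = x.copy(), v.copy()
    hit_01, hit_2 = False, False
    for _ in range(steps):
        x = (x + v) % GRID
        for i in range(3):
            for j in range(i + 1, 3):
                if x[i] == x[j]:
                    v[i], v[j] = v[j], v[i]        # equal-mass elastic impact
                    if {i, j} == {0, 1}:
                        hit_01 = True
                    else:
                        hit_2 = True
    return hit_01 and not hit_2

kept = []
for _ in range(SAMPLES):
    x0 = rng.integers(0, GRID, size=3)
    v0 = rng.integers(-VMAX, VMAX + 1, size=3)
    if consistent_with_fact(x0, v0):
        kept.append((x0[1] - x0[0]) % GRID)        # initial O0-O1 separation of a survivor

hist, _ = np.histogram(kept, bins=np.arange(GRID + 1))
print(len(kept), "surviving samples")
print(hist)   # distribution of initial separations among the survivors; compare it with
              # the flat distribution you would get before imposing the fact
```

Whether anything like quantum uncertainty falls out of a bigger version of this is exactly what I don't yet know.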

Another question that might be answered by such an analysis is whether the times of events must be inputted (e.g., O1 impacts O5 at T=35 units) or whether time itself is emergent.  I suspect the latter.  In the previous example, O1 having P21 at T3 is correlated with O2 having P11, but it is also correlated with an impact at T1, while O1 having P23 at T3 is correlated with an impact with O2 at T2.  Thus, the later fact about the universe causes the time of the earlier impact to emerge.  I suspect that when the phase space specifies velocity, event times are emergent; likewise, if the set of possibilities includes only locations but event times are specified, velocities would emerge. 

Another issue that might be addressed by such an analysis is the relationship of objects to the underlying grid.  Objects shouldn’t leave the grid, so will objects wrap around or should we include a gravitational force sufficient to prevent their reaching the edge?  And suddenly an analysis of quantum mechanics necessitates general relativity and the curvature of space!

Finally, I don’t have the math background to figure out how to do the analysis with continuous initial states (versus discrete states).  I suspect that there is no fundamental discretization of spacetime, but rather the “resolution” of the universe increases with more facts/events.  That is, there is no fundamental limit to the precision of a measurement, except to the extent that facts just don’t (yet) exist to answer questions that probe beyond a certain scale.  One scale, quantum uncertainty, involves a tradeoff between an object’s location precision and momentum precision, while another, the Planck length, implies an energy sufficient to create a black hole if a distance smaller than the Planck length is probed.  Both scales are related to Planck’s constant. 

But if every interaction between objects creates a new fact that slightly increases the universe’s resolution, then Planck’s constant is actually decreasing with time.  As Planck’s constant continues to decrease, the energy of a photon at a given wavelength decreases, so shorter lengths can be probed before reaching a black-hole-inducing energy.  Also as uncertainty decreases, the momentum-changing kick given by that photon to probe the location of an object would have less of an effect on the measured object. 

Objections

I’ll try to address a few potential objections to this interpretation.

Implies Planck’s constant is not a constant.  In this interpretation, the rate at which new facts increase the resolution of the universe (and decrease Planck’s constant) is sufficiently slow that there is no reason to think that any change could have been detected in the last century, although improving measurement precision may allow this prediction to be tested in the future.  If Planck’s constant is decreasing with time, one way to test this hypothesis without doing further measurements might be to retrodict the number of facts/events and/or correlations/entanglements that would be necessary to bring quantum uncertainty to within the scale of Planck’s constant, and then determine whether the actual number of such events and/or entanglements in the universe is consistent with this retrodiction.  In other words, it may be the case that Planck’s constant is actually decreasing if it emerges from variations among possibilities, the number (or density) of which decreases with the happening of events.

In any event, despite some debate as to its implications, there is already strong evidence that correlation/entanglement within a system reduces its quantum uncertainty.  (See, e.g., Rigolin, 2002.)  If indeed universal entanglement correlates every object in the universe directly or indirectly to every other, it should not be surprising that increasing correlations further reduce quantum uncertainties, an hypothesis that would be verified by observing a change in Planck’s constant.

Implies that the wave state Ψ is not the full description of a system.  An underlying assumption of our current understanding of quantum mechanics is that a system’s wave state is its complete description, and that “the momentum wave packet for a particular quantum state [is] equal to the Fourier transform of the position wave packet for the same state.”  (Griffiths, Ch. 2.)  These are assumptions that, so far, have provided excellent agreement with observation, but have also given rise to confusion and a variety of seeming paradoxes.  It may be that the current computational power of quantum mechanics is an approximation that results from the convergence of remaining possibilities after facts of the universe eliminate the vast majority.  As an analogy, one may use a very high precision thermometer to obtain the temperature of a system to many significant figures, but its temperature is not its complete description.

Treating objects classically.  My example in Fig. 1 treats objects macroscopically as they bounce off each other classically.  But that was just an example to show how facts reduce possibilities and that the remaining possibilities inherently embed evidence of those facts.  That is essentially tautological: it must be true that impacts between systems produce facts that reduce possibilities, because otherwise what would it mean that an impact occurred?  Any event must distinguish possibilities in which the event happens and those in which it doesn’t.  Rather, my point (I think!) is that the history of facts in the universe is instantiated in the form of correlations/entanglements between objects, that it localizes the positions and momenta of objects relative to each other, and that it gives rise to (or eliminates the possibility of) superpositions.

Identity.  My interpretation requires that objects have identity.  For example, if two of the facts of the universe are that object O9 impacts object O4 at time T0 and then O4 impacts O12 at time T1, then the possible locations and momenta of object O4 after time T1 (along with, of course, its correlations with O9 and O12) effectively embed the history of these facts.  This can only be true if object O4 at T0 is the same as object O4 at T1 – i.e., objects must maintain their identity.  However, as currently understood, many quantum mechanical objects don’t have identities; they are indistinguishable in principle.  For instance, if two helium nuclei (which are bosons) are exchanged in a superfluid represented by wave state Ψ, then the state (and any predictive power we possess) will remain unchanged.  How can a particular helium nucleus (and its entanglements with other objects) embed a history of facts if there’s no such thing as a “particular” helium nucleus? 

I’ll provide several responses.  First, the examples I gave were generically about objects; I did not specify that they were particles or microscopic.  They’re true of baseballs, which clearly can be treated classically.  If it turns out that protons cannot be treated classically (e.g., if protons do not maintain identity), then there may not be a fact about one particular proton impacting another particular proton.  But there may be a fact about a group of protons (for example) creating some lasting correlation in the universe, a fact that would be reflected in reducing possibilities.  Second, the objection is based on the assumption that Ψ contains all information about a system; as discussed above, this assumption may be merely a convenient approximation.  Finally, we already know that entanglement is possible between such particles; what would this mean if they didn’t have identity?  For instance, imagine two entangled photons (A and B) such that their polarizations are perfectly correlated.  If photon A is mixed up with lots of other “identical” photons, doesn’t photon A still perfectly correlate to photon B?  Don’t photon A and B (or, perhaps the universe as a whole) still “know” they are entangled, whether or not we can distinguish photon A from others?


References

Aspect, A., Dalibard, J. and Roger, G., 1982. Experimental test of Bell's inequalities using time-varying analyzers. Physical Review Letters, 49(25), p. 1804.

Davies, P.C.W., 2007. The implications of a cosmological information bound for complexity, quantum information and the nature of physical law. In Cristian S. Calude (ed.), p. 69.

Elitzur, A.C. and Vaidman, L., 1993. Quantum mechanical interaction-free measurements. Foundations of Physics, 23(7), pp. 987-997.

Griffiths, R.B., 2003. Consistent quantum theory. Cambridge University Press.

Haroche, S., 1998. Entanglement, decoherence and the quantum/classical boundary. Physics Today, 51(7), pp. 36-42.

Rigolin, G., 2002. Uncertainty relations for entangled states. Foundations of Physics Letters, 15(3), pp. 293-298.



[1] Elitzur et al. (1993) unintentionally gives a great argument as to how quantum mechanical events can occur without an “interaction.”  Whether or not the suggested method disturbs a measured system’s internal quantum state, it undoubtedly produces facts that reduce the number of future possibilities.
[2] “The coherence vanishes as soon as a single quantum is lost to the environment.”  (Haroche, 1998.)
[3] I don’t think it matters, scientifically, whether we say that all nine combinations truly were possibilities at time T0 and future facts narrow down possibilities when the facts occur, or that eight of the nine combinations were not actually possible at T0 and future facts simply clarify past possibilities.  The predictive power of both ideas is the same.
[4] So long as Alice measures after T3 in her frame of reference but before O1 has impacted another object and Bob measures after T3 in his frame of reference but before O2 has impacted another object.