Like so many, I am trying to understand quantum mechanics
– or, at least, to explain it in a way that makes sense to me.
I’ve taken graduate-level quantum mechanics, or courses that depend intimately on it, at four universities, including MIT and Princeton. I’ve read countless
books and journal articles on quantum mechanics and its various
interpretations. But I’ve never seen quantum
mechanics characterized or explained the way I am about to explain it, so I
sincerely hope that: a) if it is incorrect, someone can (kindly) point out the
flaw; b) if it is correct but is equivalent to another interpretation (e.g.,
Consistent/Decoherent Histories), someone can expound on the equivalence; or c)
if it is correct and novel, that it helps other people to understand quantum
mechanics. If c), then I’d like to
submit this to a journal on physics education.
My Interpretation/Understanding
I am attempting to characterize, interpret, and
understand quantum mechanics using the following set of propositions, and then
more deeply explain this interpretation using a specific example.
The state of the universe is a particular chronological
set of facts/events, and the relationships between objects in the universe are
the information storing/instantiating those facts. Those facts must be consistent throughout the
entire universe.
A fact occurs exactly when the number (or density) of
future possibilities decreases. Every
fact limits future facts and is limited by prior facts. A fact does not necessarily require an
“impact” or “interaction” as colloquially understood.[1]
A (quantum) superposition exists if and only if the facts
of the universe are consistent with the superposition. For example, in the case of the classic two-slit
interference experiment with the particle passing the double slit at time T0,
the particle is in a superposition of passing through both slits if and only if
there is no fact about the particle’s location in one slit or another at time T0. If even a single photon, correlated to the
location of the particle in one slit or the other at time T0,
scurries away at light speed, there is a fact about the location of the
particle and it cannot be in a superposition at time T0.[2] In the unlikely event that the experiment is set up so that the photon later becomes uncorrelated, such that no “which-path” information is ever available, then the particle is, amazingly, in a superposition at time T0.
Such “delayed-choice” experiments (see, e.g., Aspect et al., 1982, in which analyzer settings were switched while the photons were in flight) demonstrate that whether an event occurs seems to depend on the future permanence of a correlating fact.
In reality, the “window of opportunity” to prevent the decoherence of a
superposition is extremely short, so we don’t generally need to wait long
before we can officially declare the happening of an event.
Quantum uncertainty (e.g., in the form of the Heisenberg Uncertainty
Principle) is simply one type of superposition, in which a spread of possible
positions and a spread of possible momenta are related. For instance, if a particle is tightly
localized at time T0, then the facts of the universe at that time
are consistent with a wide spread of possible momenta – i.e., a superposition
of many momenta exists at T0.
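For reference, the standard quantitative statement of this tradeoff is the Heisenberg relation Δx · Δp ≥ ħ/2; in the language of this interpretation, Δx is the spread of position possibilities and Δp the spread of momentum possibilities consistent with the existing facts. As a worked example: if the facts localize a particle to within Δx ≈ 1 nm, they remain consistent with a momentum spread of at least Δp ≈ ħ/(2Δx) ≈ 5 × 10^-26 kg·m/s.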
Explanation of this Interpretation
I’ll try to explain this interpretation with a specific
example. Imagine N objects ({O1,
..., ON}), which need not be microscopic “particles,” distributed in
three-dimensional space discretized into M possibilities per dimension. Assume also that velocity (equivalently, momentum, since we will take all masses to be equal) is discretized into M possibilities per dimension. Each
possible combination of location (X) and momentum (P) vectors for each and
every object might be considered a single point in classical phase space, yielding
a total of M^(6N) such points/possibilities.
A fact (or event) is anything that reduces the number of such
possibilities, so one example of a fact is an impact between two objects. Assume for simplicity that an impact between
two objects is always repulsive and their masses are equal, so an impact just
has the effect of swapping the objects’ velocities. Assume also that an impact occurs only when two
objects are at the same location at the same time; we will neglect fields.
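To make these toy rules concrete, here is a minimal sketch of the dynamics just described (in Python, and in one dimension for brevity; the grid size, wrap-around boundary, and all names are my own illustrative choices, not part of the interpretation):

```python
from dataclasses import dataclass

GRID = 10  # M possibilities per dimension (one dimension for simplicity)

@dataclass
class Obj:
    x: int  # discrete position
    v: int  # discrete velocity (equal masses, so momentum ~ velocity)

def step(objs):
    """Advance every object one time unit on a wrap-around grid."""
    for o in objs:
        o.x = (o.x + o.v) % GRID

def resolve_impacts(objs):
    """An impact occurs only when two objects occupy the same cell;
    with equal masses and a repulsive impact, they swap velocities."""
    for i in range(len(objs)):
        for j in range(i + 1, len(objs)):
            if objs[i].x == objs[j].x:
                objs[i].v, objs[j].v = objs[j].v, objs[i].v

# Example: two objects that coincide at t = 2 and swap velocities there.
universe = [Obj(x=0, v=2), Obj(x=4, v=0)]
for t in range(1, 5):
    step(universe)
    resolve_impacts(universe)
    print(t, [(o.x, o.v) for o in universe])
```

With equal masses, swapping velocities is exactly what a one-dimensional elastic, repulsive impact does, so the toy rule conserves momentum and energy at every impact.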
Let us choose one set of possibilities at time T0,
specifically the set in which O1 has a particular position X1
and three possible momenta P11, P12, P13, and
O2 has a particular position X2 and three possible
momenta P21, P22, P23, as shown in Fig. 1 below. For the sake of demonstration, these values
are chosen such that O1 with P11 will, at time T1,
reach the same location in space as O2 with P21; also, O1
with P12 will, at time T2 (which may or may not be
different from T1), reach the same location in space as O2
with P23; but every other combination always results in non-coinciding
future locations.
Fig. 1. Nine possibilities for two objects.
There are no restrictions on the possible locations and
momenta of other objects, so for each of the nine combinations of O1
and O2, there are M^[6(N-2)] possibilities involving the remaining
(N-2) objects. For simplicity, let’s ignore those other combinations and simply
write the nine points in phase space as {X1, P11, X2,
P21}, {X1, P11, X2, P22},
{X1, P11, X2, P23}, {X1,
P12, X2, P21}, etc.
We now add the following fact about the universe: by time
T3 (which is after T1 and T2), O1
and O2 have interacted with each other but not with any other
objects. (That is, they reach the same location
in space and then repel, thus swapping their momenta.) Notice that this fact has the effect of
reducing the number of possible combinations that can exist at T3. Specifically, only the two possibilities, {X1,
P11, X2, P21} and {X1, P12,
X2, P23} as they existed at time T0, can now
exist at T3. Note that at
time T3, the objects O1 and O2 in each of the
two combinations have swapped momenta and are in different locations. For clarity, let’s assume that possibilities
{X1, P11, X2, P21} and {X1,
P12, X2, P23} at time T0 evolve,
respectively, to {X1’, P21, X2’, P11}
and {X1’’, P23, X2’’, P12} at time
T3.
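The bookkeeping in this example is easy to mechanize. Here is a sketch (again Python, one dimension; the specific positions, momenta, and T3 are hypothetical values of mine, chosen to reproduce the structure of Fig. 1 with exactly two of the nine combinations coinciding by T3):

```python
# Hypothetical values chosen so that exactly two of the nine combinations
# bring O1 and O2 to the same place at the same time by T3.
X1, X2 = 0, 6                          # positions of O1 and O2 at time T0
P1 = {"P11": 3, "P12": 1, "P13": 0}    # candidate momenta for O1
P2 = {"P21": 0, "P22": 3, "P23": -1}   # candidate momenta for O2
T3 = 4                                 # the fact: they have met by this time

def meet_time(x1, v1, x2, v2, t_max):
    """First integer time <= t_max at which both objects occupy the same
    location, or None (discrete time, no other objects, no fields)."""
    for t in range(1, t_max + 1):
        if x1 + v1 * t == x2 + v2 * t:
            return t
    return None

surviving = []
for n1, v1 in P1.items():
    for n2, v2 in P2.items():
        t = meet_time(X1, v1, X2, v2, T3)
        if t is not None:
            surviving.append((n1, n2, t))

print(surviving)  # -> [('P11', 'P21', 2), ('P12', 'P23', 3)]
```

Conditioning the surviving list on a later measured momentum for O1 – the “second fact” discussed below – removes one of the two rows, which is precisely the correlation between O1 and O2 described in the following paragraphs.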
This reduction in the number of combinations has two
features. First, there are broad
categories of individual momenta that simply cannot exist: specifically, at
time T3, O1 cannot have a position/momentum combination
that traces it back to (or is correlated to) the combination {X1, P13}
at time T0, just as O2 cannot be traced back or
correlated to the combination {X2, P22} at T0,
and no future measurement can contradict this.
(Note that I’m not asserting that an event after T0
retroactively eliminates possibilities at T0. Rather, while at T0 there were
nine possibilities, there are only two at T3.) Second, while other broad categories of
individual momenta may not be ruled out, there are now correlations
between the possible momenta of the objects.
For example, if an evolution of O1 from state {X1’,
P21} exists at some later time, then a corresponding evolution of O2
from state {X2’, P11} must also exist. If a future fact rules out one, then it rules
out both. Similarly, if an evolution of
O1 from state {X1’’, P23} exists at some later
time, then a corresponding evolution of O2 from state {X2’’,
P12} must also exist. These two
objects are now entangled, no matter the distance between them.
Let me further clarify.
For the moment, let’s only consider the nine original possible
configurations of objects O1 and O2. By time T3 the only remaining possibilities
are: O1 having P21 AND O2 having P11;
or O1 having P23 AND O2 having P12. If at some later time (but before the objects
have had a chance to interact with other objects), Alice measures the momentum
of object O1 to be P21, it will necessarily be the case
that the momentum of object O2, if measured by Bob, would be found
to be P11. Even if Alice and Bob are far apart, their measurements will be perfectly correlated. Even if the measurement events are spacelike
separated – i.e., there is no fact about which measurement happens first –
object O1 having momentum P21 will correspond to object O2
having momentum P11 and not P12. In other words, among the nine possibilities
at time T0, the first fact (O1 interacts with O2)
eliminates all but two, and the second fact (O1 has momentum P21)
eliminates one. Thus, these facts make
future facts incompatible with all but one of those original nine
possibilities, specifically {X1, P11, X2, P21}
at T0.[3]
Notice that the reduction in possibilities – and the
resulting correlations – have nothing to do with whether Alice or Bob knows
about the correlations. I think there’s
been a lot of experimental research and discussion regarding how measurements
on systems with known entanglements correlate to each other, as if entanglement
were some rare, almost magical quantum configuration created only in expensive labs.
Instead, I think entanglement is
ubiquitous. If every (or almost every)
impact between objects results in a new correlation between them, then isn’t
every object entangled with every other?
The universe goes on creating new facts, reducing future possibilities,
correlating the possibilities of one system with those of another, so that the
possibilities for any one object depend, in some sense, on the possibilities of
every other. The notion of universal entanglement
is far more important and useful, I think, than has been discussed in the
scientific literature.
Of course, this example is insanely oversimplified. My goal is simply to show how the
quantity/density of possible combinations in phase space gets reduced by
facts. For instance, as discussed above,
the fact that O1 interacts with O2 implies that O1
cannot have a state after T3 that traces it back or correlates it to
the state {X1, P13} at time T0. However, this does NOT imply that O1
can’t have momentum P13 after T3. The analysis considered only a tiny (TINY!)
subset of possibilities at time T0 in which O1 was
located at X1 and O2 was located at X2. To determine whether O1 might have
momentum P13 after T3, we have to consider every other
possible combination in which O1 is not at X1 at T0. Looking back at Fig. 1, we can obviously move
O1 to some other location so that, with momentum P13, it does
impact O2.
Now that I’ve explained the example, the primary questions I want to consider are how facts reduce the universe’s entire phase space of possibilities, and whether any interesting or large-scale pattern or structure emerges. For
example, if it turned out, after several events, that O1 having momentum
P13 does not appear in any of the possible combinations at T3,
then we can state with certainty that O1 does not have momentum P13
at T3. And if in every
possible combination after T3 in which O1 has momentum P21
we find that O2 has momentum P11, then we can say with
certainty that if Alice measures the momentum of O1 as P21
and Bob, who is several light-years away from Alice, measures the momentum of O2,
he will measure P11.[4]
I think the most interesting question is: as the phase
space of possibilities gets reduced in time by facts, does any structure or pattern
emerge in the distributions of object locations and/or momenta? For example, after lots of events
involving objects O4 and O7, do we find, among the
remaining possibilities in phase space, that the locations of O7
relative to O4 start to converge?
If so, does the spread of the distribution (e.g., standard deviation)
get tighter with the addition of subsequent facts?
Computer Simulation and Questions
I tried programming a simulation to answer the above questions in Mathematica, but quickly realized that even the simplest possible analysis (three objects in one dimension discretized into 10 possibilities, repeating universe, no gravity) took about 10 seconds to analyze the one million points of phase space. Imagine trying to do a more reasonable analysis of, say, 100 objects in two-dimensional space discretized to 1000 places per dimension; we’re now at 1000^400 (about 10^1200) possibilities, which vastly exceeds the computational capacity of the entire universe, estimated at 10^122 operations. (See, e.g., Davies, 2007.)
There are a variety of mathematical tools and shortcuts
that could help with the analysis. For
example, I suspect that an interesting analysis could be done with a Monte
Carlo simulation, essentially by just randomly selecting initial states. I could start with a set of chronological facts/events
(e.g., O1 impacts O5, then O3 impacts O9,
then O5 impacts O6, etc.) and then run a Monte Carlo
simulation to find a statistically useful set of initial states that satisfy
the facts. Then, I’d like to see what
kind of patterns and/or localizations, if any, emerge. I suspect that after enough events, some
objects would start to appear fixed relative to some other objects, and once
all objects are entangled/correlated, they would all begin to show a
(potentially fuzzy) localization relative to each other. Further, I suspect that if we were to look at
the fuzziness of, say, object #74, we would find a particular spread in its
location and momentum, but if we were to look only at the distribution of
momenta of object #74 in particular locations, we would find a larger
spread. If so, then such an analysis
might numerically demonstrate quantum uncertainty. Of course, I could be wrong about all this,
but won’t know until I can do some sort of simulation or analysis.
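The rejection-sampling version of this idea is easy to sketch, reusing the toy rules from earlier (the fact-list encoding, parameters, and function names are mine, and a real analysis would need far more care):

```python
import random
from collections import Counter

GRID, N_OBJ, T_MAX = 10, 3, 6  # toy parameters, chosen arbitrarily

def simulate(xs, vs):
    """Run the toy dynamics; return the chronological impact list as
    (time, i, j) tuples, applying velocity swaps on each coincidence."""
    xs, vs, facts = list(xs), list(vs), []
    for t in range(1, T_MAX + 1):
        xs = [(x + v) % GRID for x, v in zip(xs, vs)]
        for i in range(N_OBJ):
            for j in range(i + 1, N_OBJ):
                if xs[i] == xs[j]:
                    vs[i], vs[j] = vs[j], vs[i]
                    facts.append((t, i, j))
    return facts, xs

required = [(0, 1)]  # prescribed facts: O0 impacts O1, and nothing else

kept = []
for _ in range(100_000):  # Monte Carlo over random initial states
    xs = [random.randrange(GRID) for _ in range(N_OBJ)]
    vs = [random.randrange(-2, 3) for _ in range(N_OBJ)]
    facts, final_xs = simulate(xs, vs)
    if [(i, j) for _, i, j in facts] == required:  # keep consistent states
        kept.append(final_xs)

# Distribution of O1's position relative to O0 among the survivors:
rel = Counter((x1 - x0) % GRID for x0, x1, _ in kept)
print(rel.most_common())
```

If the localization intuition above is right, appending more facts to `required` should visibly tighten distributions like `rel`.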
Another question that might be answered by such an
analysis is whether the times of events must be specified as inputs (e.g., O1
impacts O5 at T=35 units) or whether time itself is emergent. I suspect the latter. In the previous example, O1 having
P21 at T3 is correlated with O2 having P11,
but it is also correlated with an impact at T1, while O1
having P23 at T3 is correlated with an impact with O2
at T2. Thus, the later fact
about the universe causes the time of the earlier impact to emerge. I suspect that when the phase space specifies
velocity, event times are emergent; likewise, if the set of possibilities
includes only locations but event times are specified, velocities would
emerge.
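The toy example makes this concrete: once the phase space specifies positions and velocities, the impact time is not an input but falls out of each surviving possibility (same hypothetical values as in the earlier sketch):

```python
# With positions and velocities given, the impact time is derived, not input.
X1, X2 = 0, 6
for v1, v2 in [(3, 0), (1, -1)]:       # the two surviving momentum pairs
    t_impact = (X2 - X1) // (v1 - v2)  # solve x1 + v1*t = x2 + v2*t for t
    print(v1, v2, "->", t_impact)      # prints 2 and 3: T1 and T2 emerge
```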
Another issue that might be addressed by such an analysis
is the relationship of objects to the underlying grid. Objects shouldn’t leave the grid, so will
objects wrap around, or should we include a gravitational force sufficient to
prevent their reaching the edge? And
suddenly an analysis of quantum mechanics necessitates general relativity and
the curvature of space!
Finally, I don’t have the math background to figure out
how to do the analysis with continuous initial states (versus discrete states). I suspect that there is no fundamental discretization
of spacetime, but rather the “resolution” of the universe increases with more
facts/events. That is, there is no
fundamental limit to the precision of a measurement, except to the extent that
facts just don’t (yet) exist to answer questions that probe beyond a certain
scale. One scale, quantum uncertainty,
involves a tradeoff between an object’s location precision and momentum
precision, while another, the Planck length, implies an energy sufficient to
create a black hole if a distance smaller than the Planck length is
probed. Both scales are related to
Planck’s constant.
But if every interaction between objects creates a new
fact that slightly increases the universe’s resolution, then Planck’s constant
is actually decreasing with time. As
Planck’s constant continues to decrease, the energy of a photon at a given
wavelength decreases, so shorter lengths can be probed before reaching a
black-hole-inducing energy. Also, as uncertainty decreases, the momentum kick delivered by a photon used to probe an object’s location would have less of an effect on the measured object.
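For concreteness, the dependence this argument leans on is linear in Planck’s constant: a photon of wavelength λ carries energy E = hc/λ and momentum p = h/λ. If h were smaller, a photon of the same wavelength would carry less energy (so shorter wavelengths could be probed before reaching black-hole-inducing energies) and would deliver a smaller momentum kick to the measured object.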
Objections
I’ll try to address a few potential objections to this
interpretation.
Implies Planck’s constant is not a constant. The time scale on which new facts increase the resolution of the universe (and decrease Planck’s constant) under this interpretation is slow enough that there is no reason to think any change could have been detected in the last century, although improving measurement precision may allow this prediction to be tested in the future. If Planck's constant is decreasing with time, one way to test this hypothesis without doing
further measurements might be to retrodict the number of facts/events and/or correlations/entanglements
that would be necessary to bring quantum uncertainty to within the scale of Planck’s
constant, and then determine whether the actual number of such events and/or
entanglements in the universe is consistent with this retrodiction. In other words, it may be the case that
Planck’s constant is actually decreasing if it emerges from variations among
possibilities, the number (or density) of which decreases with the happening of
events.
In any event, despite some debate as to its implications,
there is already strong evidence that correlation/entanglement within a system
reduces its quantum uncertainty. (See,
e.g., Rigolin, 2002.) If indeed
universal entanglement correlates every object in the universe directly or
indirectly to every other, it should not be surprising that increasing
correlations further reduce quantum uncertainties, a hypothesis that could be tested by looking for a change in Planck’s constant.
Implies that the wave state Ψ is not the full
description of a system. An
underlying assumption of our current understanding of quantum mechanics is that
a system’s wave state is its complete description, and that “the momentum wave
packet for a particular quantum state [is] equal to the Fourier transform of
the position wave packet for the same state.”
(Griffiths, Ch. 2.) These are
assumptions that, so far, have provided excellent agreement with observation,
but have also given rise to confusion and a variety of seeming paradoxes. It may be that the current computational
power of quantum mechanics is an approximation that results from the
convergence of remaining possibilities after facts of the universe eliminate
the vast majority. As an analogy, one
may use a very high precision thermometer to obtain the temperature of a system
to many significant figures, but its temperature is not its complete
description.
Treating objects classically. My example in Fig. 1 treats objects
macroscopically as they bounce off each other classically. But that was just an example to show how
facts reduce possibilities and that the remaining possibilities inherently
embed evidence of those facts. That is
essentially tautological: it must be true that impacts between systems produce
facts that reduce possibilities, because otherwise what would it mean that an impact
occurred? Any event must distinguish
possibilities in which the event happens and those in which it doesn’t. Rather, my point (I think!) is that the history of facts in the universe, instantiated in the form of correlations/entanglements between objects, localizes the positions and momenta of objects relative to each other and gives rise to (or eliminates the possibility of) superpositions.
Identity. My
interpretation requires that objects have identity. For example, if two of the facts of the
universe are that object O9 impacts object O4 at time T0
and then O4 impacts O12 at time T1, then the
possible locations and momenta of object O4 after time T1
(along with, of course, its correlations with O9 and O12)
effectively embed the history of these facts.
This can only be true if object O4 at T0 is the same
as object O4 at T1 – i.e., objects must maintain their
identity. However, as currently
understood, many quantum mechanical objects don’t have identities; they are indistinguishable
in principle. For instance, if two helium
nuclei (which are bosons) are exchanged in a superfluid represented by wave
state Ψ, then the state (and any predictive power we possess) will remain
unchanged. How can a particular helium
nucleus (and its entanglements with other objects) embed a history of facts if there’s
no such thing as a “particular” helium nucleus?
I’ll provide several responses. First, the examples I gave were generically
about objects; I did not specify that they were particles or microscopic. The claims hold for baseballs, which clearly can be treated classically. If it turns out
that protons cannot be treated classically (e.g., if protons do not maintain identity),
then there may not be a fact about one particular proton impacting another
particular proton. But there may be a
fact about a group of protons (for example) creating some lasting
correlation in the universe, a fact that would be reflected in reducing
possibilities. Second, the objection is
based on the assumption that Ψ contains all information about a system; as
discussed above, this assumption may be merely a convenient approximation. Finally, we already know that entanglement is
possible between such particles; what would this mean if they didn’t have
identity? For instance, imagine two
entangled photons (A and B) such that their polarizations are perfectly
correlated. If photon A is mixed up with
lots of other “identical” photons, doesn’t photon A still perfectly correlate
to photon B? Don’t photons A and B (or, perhaps, the universe as a whole) still “know” they are entangled, whether or
not we can distinguish photon A from others?
References
Aspect, A., Dalibard, J. and Roger, G., 1982. Experimental test of Bell’s inequalities using time-varying analyzers. Physical Review Letters, 49(25), p. 1804.
Davies, P.C.W., 2007. The implications of a cosmological information bound for complexity, quantum information and the nature of physical law. In C.S. Calude (ed.), Randomness and Complexity: From Leibniz to Chaitin. World Scientific, p. 69.
Elitzur, A.C. and Vaidman, L., 1993. Quantum mechanical interaction-free measurements. Foundations of Physics, 23(7), pp. 987-997.
Griffiths, R.B., 2003. Consistent Quantum Theory. Cambridge University Press.
Haroche, S., 1998. Entanglement, decoherence and the quantum/classical boundary. Physics Today, 51(7), pp. 36-42.
Rigolin, G., 2002. Uncertainty relations for entangled states. Foundations of Physics Letters, 15(3), pp. 293-298.
[1]
Elitzur and Vaidman (1993) unintentionally give a great argument as to how quantum mechanical events can occur without an “interaction.” Whether or not the suggested method disturbs
a measured system’s internal quantum state, it undoubtedly produces facts that
reduce the number of future possibilities.
[2]
“The coherence vanishes as soon as a single quantum is lost to the
environment.” (Haroche, 1998.)
[3]
I don’t think it matters, scientifically, whether we say that all nine
combinations truly were possibilities at time T0 and future facts
narrow down possibilities when the facts occur, or that eight of the nine
combinations were not actually possible at T0 and future facts
simply clarify past possibilities. The
predictive power of both ideas is the same.
[4] So long as Alice measures after T3 in her
frame of reference but before O1 has impacted another object and Bob
measures after T3 in his frame of reference but before O2
has impacted another object.