World's First Proof that Consciousness is Nonlocal


Tuesday, February 25, 2020

Interpreting Quantum Mechanics in Terms of Facts About the Universe

Like so many, I am trying to understand quantum mechanics – or, at least, to explain it in a way that makes sense to me.

I’ve taken graduate-level quantum mechanics, or courses that depend intimately on quantum mechanics, at four universities, including MIT and Princeton.  I’ve read countless books and journal articles on quantum mechanics and its various interpretations.  But I’ve never seen quantum mechanics characterized or explained the way I am about to explain it, so I sincerely hope that: a) if it is incorrect, someone can (kindly) point out the flaw; b) if it is correct but equivalent to another interpretation (e.g., Consistent/Decoherent Histories), someone can expound on the equivalence; or c) if it is correct and novel, it helps other people to understand quantum mechanics.  If c), then I’d like to submit this to a journal on physics education.

My Interpretation/Understanding

I am attempting to characterize, interpret, and understand quantum mechanics using the following set of propositions, and then more deeply explain this interpretation using a specific example.

The state of the universe is a particular chronological set of facts/events, and the relationships between objects in the universe are the information storing/instantiating those facts.  Those facts must be consistent throughout the entire universe.

A fact occurs exactly when the number (or density) of future possibilities decreases.  Every fact limits future facts and is limited by prior facts.  A fact does not necessarily require an “impact” or “interaction” as colloquially understood.[1]

A (quantum) superposition exists if and only if the facts of the universe are consistent with the superposition.  For example, in the case of the classic two-slit interference experiment with the particle passing the double slit at time T0, the particle is in a superposition of passing through both slits if and only if there is no fact about the particle’s location in one slit or another at time T0.  If even a single photon, correlated to the location of the particle in one slit or the other at time T0, scurries away at light speed, there is a fact about the location of the particle and it cannot be in a superposition at time T0.[2]  In the unlikely event that the experiment is set up so that the photon later becomes uncorrelated, leaving no “which-path” information ever available, the particle is, amazingly, in a superposition at time T0.  Such a “delayed-choice quantum eraser” experiment (see, e.g., Aspect et al., 1982) demonstrates that whether an event occurs seems to depend on the future permanence of a correlating fact.  In reality, the “window of opportunity” to prevent the decoherence of a superposition is extremely short, so we don’t generally need to wait long before we can officially declare the happening of an event.

Quantum uncertainty (e.g., in the form of the Heisenberg Uncertainty Principle) is simply one type of superposition, in which a spread of possible positions and a spread of possible momenta are related.  For instance, if a particle is tightly localized at time T0, then the facts of the universe at that time are consistent with a wide spread of possible momenta – i.e., a superposition of many momenta exists at T0. 

Explanation of this Interpretation

I’ll try to explain this interpretation with a specific example.  Imagine N objects ({O1, ..., ON}), which need not be microscopic “particles,” distributed in three-dimensional space discretized into M possibilities per dimension.  Assume also that velocity is discretized into M possibilities per dimension.  Each possible combination of location (X) and momentum (P) vectors for each and every object might be considered a single point in classical phase space, yielding a total of M^(6N) such points/possibilities.  A fact (or event) is anything that reduces the number of such possibilities, so one example of a fact is an impact between two objects.  Assume for simplicity that an impact between two objects is always repulsive and their masses are equal, so an impact just has the effect of swapping the objects’ velocities.  Assume also that an impact occurs only when two objects are at the same location at the same time; we will neglect fields.
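
To make the bookkeeping concrete, here is a minimal sketch of this toy universe in Python (my own illustration, not a canonical implementation: one spatial dimension instead of three, a wrap-around grid, and the equal-mass impact rule that swaps velocities):

```python
from itertools import product

M = 10  # discrete possibilities per dimension, for both position and velocity

def all_states(n_objects):
    """Every classical phase-space point: one (position, velocity) pair per
    object, giving M^2 states per object and M^(2N) points in one dimension
    (the analogue of M^(6N) in three dimensions)."""
    one_object = list(product(range(M), range(M)))
    return product(one_object, repeat=n_objects)

def step(state):
    """Advance one time unit on a wrap-around grid, then apply the impact
    rule: equal-mass objects at the same location swap velocities."""
    moved = [((x + v) % M, v) for (x, v) in state]
    for i in range(len(moved)):
        for j in range(i + 1, len(moved)):
            if moved[i][0] == moved[j][0]:  # same place, same time: an impact
                (xi, vi), (xj, vj) = moved[i], moved[j]
                moved[i], moved[j] = (xi, vj), (xj, vi)
    return tuple(moved)
```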

Let us choose one set of possibilities at time T0, specifically the set in which O1 has a particular position X1 and three possible momenta P11, P12, P13, and O2 has a particular position X2 and three possible momenta P21, P22, P23, as shown in Fig. 1 below.  For the sake of demonstration, these values are chosen such that O1 with P11 will, at time T1, reach the same location in space as O2 with P21; also, O1 with P12 will, at time T2 (which may or may not be different from T1), reach the same location in space as O2 with P23; but every other combination always results in non-coinciding future locations.


Fig. 1.  Nine possibilities for two objects.


There are no restrictions on the possible locations and momenta of other objects, so for each of the nine combinations of O1 and O2, there are M^[6(N-2)] possibilities involving the remaining (N-2) objects. For simplicity, let’s ignore those other combinations and simply write the nine points in phase space as {X1, P11, X2, P21}, {X1, P11, X2, P22}, {X1, P11, X2, P23}, {X1, P12, X2, P21}, etc. 

We now add the following fact about the universe: by time T3 (which is after T1 and T2), O1 and O2 have interacted with each other but not with any other objects.  (That is, they reach the same location in space and then repel, thus swapping their momenta.)  Notice that this fact has the effect of reducing the number of possible combinations that can exist at T3.  Specifically, only the two possibilities, {X1, P11, X2, P21} and {X1, P12, X2, P23} as they existed at time T0, can now exist at T3.  Note that at time T3, the objects O1 and O2 in each of the two combinations have swapped momenta and are in different locations.  For clarity, let’s assume that possibilities {X1, P11, X2, P21} and {X1, P12, X2, P23} at time T0 evolve, respectively, to {X1’, P21, X2’, P11} and {X1’’, P23, X2’’, P12} at time T3.

This reduction in the number of combinations has two features.  First, there are broad categories of individual momenta that simply cannot exist: specifically, at time T3, O1 cannot have a position/momentum combination that traces it back to (or is correlated to) the combination {X1, P13} at time T0, just as O2 cannot be traced back or correlated to the combination {X2, P22} at T0, and no future measurement can contradict this.  (Note that I’m not asserting that an event after T0 retroactively eliminates possibilities at T0.  Rather, while at T0 there were nine possibilities, there are only two at T3.)  Second, while other broad categories of individual momenta may not be ruled out, there are now correlations between the possible momenta of the objects.  For example, if an evolution of O1 from state {X1’, P21} exists at some later time, then a corresponding evolution of O2 from state {X2’, P11} must also exist.  If a future fact rules out one, then it rules out both.  Similarly, if an evolution of O1 from state {X1’’, P23} exists at some later time, then a corresponding evolution of O2 from state {X2’’, P12} must also exist.  These two objects are now entangled, no matter the distance between them.

Let me further clarify.  For the moment, let’s only consider the nine original possible configurations of objects O1 and O2.  By time T3 the only remaining possibilities are: O1 having P21 AND O2 having P11; or O1 having P23 AND O2 having P12.  If at some later time (but before the objects have had a chance to interact with other objects), Alice measures the momentum of object O1 to be P21, it will necessarily be the case that the momentum of object O2, if measured by Bob, would be found to be P11.  Even if Alice and Bob are far apart, their measurements will be perfectly correlated.  Even if the measurement events are spacelike separated – i.e., there is no fact about which measurement happens first – object O1 having momentum P21 will correspond to object O2 having momentum P11 and not P12.  In other words, among the nine possibilities at time T0, the first fact (O1 interacts with O2) eliminates all but two, and the second fact (O1 has momentum P21) eliminates one of those two.  Thus, these facts make future facts incompatible with all but one of the original nine possibilities, specifically {X1, P11, X2, P21} at T0.[3]
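
Here is that bookkeeping in code, a sketch under the stipulations above (which momentum pairs lead to an impact is simply hard-coded, since Fig. 1 stipulates it):

```python
from itertools import product

P1 = ["P11", "P12", "P13"]          # candidate momenta for O1 at X1
P2 = ["P21", "P22", "P23"]          # candidate momenta for O2 at X2
impacting = {("P11", "P21"), ("P12", "P23")}   # the two coinciding cases

possibilities = set(product(P1, P2))           # the nine points of Fig. 1

# Fact 1: by T3, O1 and O2 have impacted (and therefore swapped momenta).
possibilities &= impacting
# -> {("P11","P21"), ("P12","P23")}; at T3, O1 carries P21 or P23.

# Fact 2: Alice measures O1's momentum at T3 and finds P21.  After the
# swap, O1 carries the momentum O2 started with, so we keep p2 == "P21".
possibilities = {(p1, p2) for (p1, p2) in possibilities if p2 == "P21"}
# -> {("P11","P21")}: Bob must find O2 carrying P11, however far away.
```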

Notice that the reduction in possibilities – and the resulting correlations – have nothing to do with whether Alice or Bob knows about the correlations.  I think there’s been a lot of experimental research and discussion regarding how measurements on systems with known entanglements correlate to each other, as if entanglement were some rare, almost magical quantum configuration created only in expensive labs.  Instead, I think entanglement is ubiquitous.  If every (or almost every) impact between objects results in a new correlation between them, then isn’t every object entangled with every other?  The universe goes on creating new facts, reducing future possibilities, correlating the possibilities of one system with those of another, so that the possibilities for any one object depend, in some sense, on the possibilities of every other.  The notion of universal entanglement is far more important and useful, I think, than has been discussed in the scientific literature.

Of course, this example is insanely oversimplified.  My goal is simply to show how the quantity/density of possible combinations in phase space gets reduced by facts.  For instance, as discussed above, the fact that O1 interacts with O2 implies that O1 cannot have a state after T3 that traces it back or correlates it to the state {X1, P13} at time T0.  However, this does NOT imply that O1 can’t have momentum P13 after T3.  The analysis considered only a tiny (TINY!) subset of possibilities at time T0 in which O1 was located at X1 and O2 was located at X2.  To determine whether O1 might have momentum P13 after T3, we have to consider every other possible combination in which O1 is not at X1 at T0.  Looking back at Fig. 1, we can obviously move O1 to some other location so that, with momentum P13, it does impact O2.

Now that I’ve explained the example, the primary questions I want to consider are how facts reduce the universe’s entire phase space of possibilities and whether any interesting or large-scale pattern or structure emerges.  For example, if it turned out, after several events, that O1 having momentum P13 does not appear in any of the possible combinations at T3, then we can state with certainty that O1 does not have momentum P13 at T3.  And if in every possible combination after T3 in which O1 has momentum P21 we find that O2 has momentum P11, then we can say with certainty that if Alice measures the momentum of O1 as P21 and Bob, who is several light-years away from Alice, measures the momentum of O2, he will measure P11.[4]

I think the most interesting question is: as the phase space of possibilities gets reduced in time by facts, does any structure or pattern emerge in the distributions of object locations and/or momenta?  For example, after lots of events involving objects O4 and O7, do we find, among the remaining possibilities in phase space, that the locations of O7 relative to O4 start to converge?  If so, does the spread of the distribution (e.g., its standard deviation) get tighter with the addition of subsequent facts?

Computer Simulation and Questions

I tried programming a simulation and answering the above questions with Mathematica, but quickly realized that even the simplest possible analysis (three objects in one dimension discretized into 10 possibilities, a wrap-around universe, no gravity) took about 10 seconds to analyze the one million points of phase space.  Imagine trying to do a more reasonable analysis of, say, 100 objects in two-dimensional space discretized to 1000 places per dimension; we’re now at 1000^400 possibilities, which vastly exceeds the computational power of the entire universe, estimated at 10^122.  (See, e.g., Davies, 2007.)
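
(A quick sanity check on those counts, assuming position and velocity are each discretized per dimension:)

```python
import math

# 3 objects in 1-D, 10 possibilities each for position and velocity:
print(10 ** (2 * 3))             # 1,000,000 phase-space points

# 100 objects in 2-D, 1000 places per dimension for position and velocity:
# 4 numbers per object x 100 objects = 400 dimensions of size 1000.
print(400 * math.log10(1000))    # 1200.0, i.e. 1000^400 = 10^1200 >> 10^122
```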

There are a variety of mathematical tools and shortcuts that could help with the analysis.  For example, I suspect that an interesting analysis could be done with a Monte Carlo simulation, essentially by just randomly selecting initial states.  I could start with a set of chronological facts/events (e.g., O1 impacts O5, then O3 impacts O9, then O5 impacts O6, etc.) and then run a Monte Carlo simulation to find a statistically useful set of initial states that satisfy the facts.  Then, I’d like to see what kind of patterns and/or localizations, if any, emerge.  I suspect that after enough events, some objects would start to appear fixed relative to some other objects, and once all objects are entangled/correlated, they would all begin to show a (potentially fuzzy) localization relative to each other.  Further, I suspect that if we were to look at the fuzziness of, say, object #74, we would find a particular spread in its location and momentum, but if we were to look only at the distribution of momenta of object #74 in particular locations, we would find a larger spread.  If so, then such an analysis might numerically demonstrate quantum uncertainty.  Of course, I could be wrong about all this, but I won’t know until I can do some sort of simulation or analysis.
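
A minimal version of that Monte Carlo idea might look like the following (a sketch only: toy numbers, my own encoding of “facts” as an ordered list of impacting pairs, and brute-force rejection sampling that would not scale):

```python
import random

M, N, STEPS = 10, 5, 20      # grid size, number of objects, time steps (toy numbers)

def random_state():
    return [(random.randrange(M), random.randrange(M)) for _ in range(N)]

def evolve(state, steps):
    """Evolve on a wrap-around grid; return impact events in chronological order."""
    events, s = [], list(state)
    for _ in range(steps):
        s = [((x + v) % M, v) for (x, v) in s]
        for i in range(N):
            for j in range(i + 1, N):
                if s[i][0] == s[j][0]:                  # impact: swap velocities
                    s[i], s[j] = (s[i][0], s[j][1]), (s[j][0], s[i][1])
                    events.append((i, j))
    return events, s

# The chronological facts to condition on, e.g. "O0 impacts O2, then O1
# impacts O3" (no times given; the times, if any, should emerge).
facts = [(0, 2), (1, 3)]

kept = []
while len(kept) < 200:                                  # rejection sampling
    candidate = random_state()
    events, final = evolve(candidate, STEPS)
    if events[:len(facts)] == facts:                    # consistent with the facts
        kept.append(final)

# 'kept' is now a sample of final states consistent with the facts.
# Histogramming, e.g., the position of O2 relative to O0 across 'kept'
# would show whether any localization emerges as facts accumulate.
```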

Another question that might be answered by such an analysis is whether the times of events must be inputted (e.g., O1 impacts O5 at T=35 units) or whether time itself is emergent.  I suspect the latter.  In the previous example, O1 having P21 at T3 is correlated with O2 having P11, but it is also correlated with an impact at T1, while O1 having P23 at T3 is correlated with an impact with O2 at T2.  Thus, the later fact about the universe causes the time of the earlier impact to emerge.  I suspect that when the phase space specifies velocity, event times are emergent; likewise, if the set of possibilities includes only locations but event times are specified, velocities would emerge. 

Another issue that might be addressed by such an analysis is the relationship of objects to the underlying grid.  Objects shouldn’t leave the grid, so should objects wrap around, or should we include a gravitational force sufficient to prevent their reaching the edge?  And suddenly an analysis of quantum mechanics necessitates general relativity and the curvature of space!

Finally, I don’t have the math background to figure out how to do the analysis with continuous initial states (versus discrete states).  I suspect that there is no fundamental discretization of spacetime, but rather the “resolution” of the universe increases with more facts/events.  That is, there is no fundamental limit to the precision of a measurement, except to the extent that facts just don’t (yet) exist to answer questions that probe beyond a certain scale.  One scale, quantum uncertainty, involves a tradeoff between an object’s location precision and momentum precision, while another, the Planck length, implies an energy sufficient to create a black hole if a distance smaller than the Planck length is probed.  Both scales are related to Planck’s constant. 
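
For reference, the standard textbook forms of the two scales just mentioned, plus the photon-energy relation the next paragraph relies on; all three involve Planck’s constant:

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
\qquad
\ell_P \;=\; \sqrt{\frac{\hbar G}{c^{3}}} \;\approx\; 1.6\times 10^{-35}\ \mathrm{m}
\qquad
E_{\gamma} \;=\; \frac{hc}{\lambda}
```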

But if every interaction between objects creates a new fact that slightly increases the universe’s resolution, then Planck’s constant is actually decreasing with time.  As Planck’s constant continues to decrease, the energy of a photon at a given wavelength decreases, so shorter lengths can be probed before reaching a black-hole-inducing energy.  Also, as uncertainty decreases, the momentum kick delivered by a photon used to probe an object’s location would have less of an effect on the measured object.

Objections

I’ll try to address a few potential objections to this interpretation.

Implies Planck’s constant is not a constant.  On this interpretation, new facts increase the resolution of the universe (and decrease Planck’s constant) so slowly that there is no reason to think any change could have been detected in the last century, although improving measurement precision may allow this prediction to be tested in the future.  If Planck’s constant is decreasing with time, one way to test the hypothesis without further measurements might be to retrodict the number of facts/events and/or correlations/entanglements that would be necessary to bring quantum uncertainty to within the scale of Planck’s constant, and then determine whether the actual number of such events and/or entanglements in the universe is consistent with that retrodiction.  In other words, Planck’s constant may actually be decreasing if it emerges from variation among possibilities, the number (or density) of which decreases as events happen.

In any event, despite some debate as to its implications, there is already strong evidence that correlation/entanglement within a system reduces its quantum uncertainty.  (See, e.g., Rigolin, 2002.)  If universal entanglement indeed correlates every object in the universe, directly or indirectly, to every other, it should not be surprising that increasing correlations further reduce quantum uncertainties, a hypothesis that could be tested by looking for a change in Planck’s constant.

Implies that the wave state Ψ is not the full description of a system.  An underlying assumption of our current understanding of quantum mechanics is that a system’s wave state is its complete description, and that “the momentum wave packet for a particular quantum state [is] equal to the Fourier transform of the position wave packet for the same state.”  (Griffiths, Ch. 2.)  These are assumptions that, so far, have provided excellent agreement with observation, but have also given rise to confusion and a variety of seeming paradoxes.  It may be that the current computational power of quantum mechanics is an approximation that results from the convergence of remaining possibilities after facts of the universe eliminate the vast majority.  As an analogy, one may use a very high precision thermometer to obtain the temperature of a system to many significant figures, but its temperature is not its complete description.

Treating objects classically.  My example in Fig. 1 treats objects macroscopically as they bounce off each other classically.  But that was just an example to show how facts reduce possibilities and that the remaining possibilities inherently embed evidence of those facts.  That is essentially tautological: it must be true that impacts between systems produce facts that reduce possibilities, because otherwise what would it mean that an impact occurred?  Any event must distinguish possibilities in which the event happens from those in which it doesn’t.  Rather, my point (I think!) is that the history of facts in the universe is instantiated in the form of correlations/entanglements between objects, localizes the positions and momenta of objects relative to each other, and gives rise to (or eliminates the possibility of) superpositions.

Identity.  My interpretation requires that objects have identity.  For example, if two of the facts of the universe are that object O9 impacts object O4 at time T0 and then O4 impacts O12 at time T1, then the possible locations and momenta of object O4 after time T1 (along with, of course, its correlations with O9 and O12) effectively embed the history of these facts.  This can only be true if object O4 at T0 is the same as object O4 at T1 – i.e., objects must maintain their identity.  However, as currently understood, many quantum mechanical objects don’t have identities; they are indistinguishable in principle.  For instance, if two helium nuclei (which are bosons) are exchanged in a superfluid represented by wave state Ψ, then the state (and any predictive power we possess) will remain unchanged.  How can a particular helium nucleus (and its entanglements with other objects) embed a history of facts if there’s no such thing as a “particular” helium nucleus? 

I’ll provide several responses.  First, the examples I gave were generically about objects; I did not specify that they were particles or microscopic.  They’re true of baseballs, which clearly can be treated classically.  If it turns out that protons cannot be treated classically (e.g., if protons do not maintain identity), then there may not be a fact about one particular proton impacting another particular proton.  But there may be a fact about a group of protons (for example) creating some lasting correlation in the universe, a fact that would be reflected in reducing possibilities.  Second, the objection is based on the assumption that Ψ contains all information about a system; as discussed above, this assumption may be merely a convenient approximation.  Finally, we already know that entanglement is possible between such particles; what would this mean if they didn’t have identity?  For instance, imagine two entangled photons (A and B) such that their polarizations are perfectly correlated.  If photon A is mixed up with lots of other “identical” photons, doesn’t photon A still perfectly correlate to photon B?  Don’t photon A and B (or, perhaps the universe as a whole) still “know” they are entangled, whether or not we can distinguish photon A from others?


References

Aspect, A., Dalibard, J. and Roger, G., 1982. Experimental test of Bell's inequalities using time-varying analyzers. Physical Review Letters, 49(25), p. 1804.

Davies, P.C.W., 2007. The implications of a cosmological information bound for complexity, quantum information and the nature of physical law. In: Calude, C.S. (ed.), Randomness and Complexity: From Leibniz to Chaitin. World Scientific, p. 69.

Elitzur, A.C. and Vaidman, L., 1993. Quantum mechanical interaction-free measurements. Foundations of Physics, 23(7), pp. 987-997.

Griffiths, R.B., 2003. Consistent Quantum Theory. Cambridge University Press.

Haroche, S., 1998. Entanglement, decoherence and the quantum/classical boundary. Physics Today, 51(7), pp. 36-42.

Rigolin, G., 2002. Uncertainty relations for entangled states. Foundations of Physics Letters, 15(3), pp. 293-298.



[1] Elitzur and Vaidman (1993) unintentionally give a great argument as to how quantum mechanical events can occur without an “interaction.”  Whether or not the suggested method disturbs a measured system’s internal quantum state, it undoubtedly produces facts that reduce the number of future possibilities.
[2] “The coherence vanishes as soon as a single quantum is lost to the environment.”  (Haroche, 1998.)
[3] I don’t think it matters, scientifically, whether we say that all nine combinations truly were possibilities at time T0 and future facts narrow down possibilities when the facts occur, or that eight of the nine combinations were not actually possible at T0 and future facts simply clarify past possibilities.  The predictive power of both ideas is the same.
[4] So long as Alice measures after T3 in her frame of reference but before O1 has impacted another object and Bob measures after T3 in his frame of reference but before O2 has impacted another object.


Tuesday, October 8, 2019

I don’t understand double-slit interference. Do you?

Update:
So, yes, the “Adding Probabilities” method is wrong.  As it turns out, the reason I was getting such bad distributions when using Mathematica to produce graphs for the (correct) “Adding Fields” method is that I had not properly adjusted the phase for each field point source within each of the slits.  When I do that, it produces distributions in the near- and far-fields that, I think, are consistent with what would be observed in actual experiments.
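
For anyone who wants to reproduce this, here is a minimal “Adding Fields” sketch (my own reconstruction in Python/NumPy, not the original Mathematica code; the geometry numbers are placeholders): each slit is a row of coherent point sources, and the key step, the one I had gotten wrong, is keeping the phase k·r of every individual source when summing.

```python
import numpy as np

wavelength = 500e-9                    # illustrative: 500 nm light
k = 2 * np.pi / wavelength             # wavenumber
slit_width, slit_sep, L = 20e-6, 100e-6, 1.0   # slit geometry, screen distance

def slit_sources(center, n=50):
    """n point sources spanning one slit centered at 'center' (meters)."""
    return center + np.linspace(-slit_width / 2, slit_width / 2, n)

ys = np.concatenate([slit_sources(-slit_sep / 2), slit_sources(+slit_sep / 2)])
screen = np.linspace(-0.05, 0.05, 2000)          # points on the screen

# Distance from every source to every screen point; the phase k*r of each
# source is what must be kept when the complex amplitudes are superposed.
r = np.sqrt(L**2 + (screen[:, None] - ys[None, :])**2)
field = np.sum(np.exp(1j * k * r) / r, axis=1)   # sum of spherical-wave terms
intensity = np.abs(field)**2                     # what the screen records

# 'intensity' shows double-slit fringes under a single-slit (Fraunhofer)
# envelope, i.e. the relationship between the two distributions noted above.
```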

But this essay is important to me for several reasons.  First, it underscores one of the problematic assumptions I’d been making, namely the assumption that there is some reality about where, in each slit, a particle is located.  Identifying that as incorrect helped me come to what I believe is a better understanding/interpretation of QM, which I describe here, in which a superposition is indicative of a lack of a fact.  Second, writing it helped me to understand the relationship between single-slit Fraunhofer distributions and double-slit interference distributions.  Third, it makes some good points about problems in QM, and is mostly correct if you’ll ignore any nonphysical wackiness in the “Adding Fields” graphs.

I have made huge progress in understanding physics over the past couple years and wouldn’t be where I am today without the experiences of yesterday.

Friday, June 21, 2019

Is It Possible to Copy the Brain?

The science fiction plot involving copying brains or uploading minds onto computers or fighting conscious AI or teleportation yada yada yada is everywhere.  Black Mirror wouldn't even exist without these fascinating ideas.

Every one of these plots depends on the assumed ability to copy brains or consciousness, or on the assumption that consciousness is algorithmic, like software running on a computer.  These assumptions are closely related: all algorithms can be copied and executed on any general-purpose computer, so if consciousness is algorithmic, then it should be possible to copy conscious states and/or duplicate brains.

Let me be blunt: every science nerd on the planet (including me) has, at some point, wondered about and been intrigued by the possibility and implications of "brain copying."  (Although really I mean the more general notion that one's consciousness can be copied, whether by digitizing consciousness, physically copying the brain, whatever.) 

But here's something weird.  VERY few scientists have actually questioned the assumptions that conscious states can be copied or that consciousness is fundamentally computational.  For instance, if you Google the exact phrase "impossible to copy the brain," a total of ZERO results are found; but if you don't question the possibility of brain copying, the exact phrase "copy the brain" yields over a MILLION results.  Does it seem strange that, despite our fascination with AI, teleportation, mind uploading, and so forth, this particular post might be the very first in the entire history of the Internet to state, in these words, that it might be impossible to copy the brain?  Really?!  No one has ever said that phrase on the Internet before?  (BTW there are lots of other such phrases, printed at the bottom of this post.)

These assumptions are so ingrained within the scientific community that most young physicists, neurobiologists, engineers, etc., don't even realize that they are making such assumptions, and those that do are unlikely to question them.  Famed philosopher John Searle once pointed out that "to deny that the brain is computational is to risk losing your membership in the scientific community."  Entire industries are even being launched (mind uploading, digital immortality, etc.) on the underlying supposition that it's just a matter of time before we'll be able to digitize the brain, or create a conscious computer, or create a perfect duplicate of the brain.  Are these assumptions valid?

Sir Roger Penrose (Oxford) argues that consciousness cannot be simulated on a computer because, he claims, humans are able to discover truths that cannot be discovered by any algorithm running on a Turing machine.  However, despite his eminence in the fields of mathematics and physics, he is still criticized by the "mainstream" scientific community for this suggestion.

Scott Aaronson (U. Texas @ Austin) asks in his paper, "Does quantum mechanics ... put interesting limits on an external agent's ability to scan, copy, and predict human brains ... ?"  He says he regards this "as an unsolved scientific question, and a big one," and then gives one possible explanation of how physics might explain that conscious brains can't be copied (if in fact they can't).  In a blog post, he points to an empirical fact "about the brain that currently separates it from any existing computer program.  Namely, we know how to copy a computer program ... how to rerun it ... how to transfer it from one substrate to another.  With the brain, we don't know how to do any of those things."  In both works, he is careful not to offend the majority, with self-deprecating comments about expecting to be "roasted alive" for his dissension from "the consensus of most of my friends and colleagues."

There are a few other scientists who cautiously suggest that brains can't be copied or that brains aren't computers (one example here).  I myself have written a paper (preprint here, or related YouTube videos here, here, and here) that argues that consciousness is not algorithmic and can't be copied, in part because consciousness correlates to quantum measurement events that occur outside the body.  But, let's face it: for the most part, very few scientists question these assumptions.

I assert that the following assumptions pervade academia and popular science, and that they are unfounded and unsupported by empirical evidence:
a) That consciousness is computational/algorithmic;
b) That consciousness can be duplicated; and
c) That brains can be copied.

Here is my question: What empirical evidence do we currently have for making any of these assumptions?  I think the answer is "none," but I could be wrong. 

If you are going to answer this question, please consider these guidelines:
*  Please provide actual empirical evidence to support your point.  For example, if you think that brains can be copied, then linking to a bunch of papers in which neurobiologists have sliced rat brains (or whatever) is inadequate, because that says nothing to support the assumption that brains can be copied over the assumption that brains cannot be copied.   And extrapolating into the future ("If we can slice rat brains today, then in 50-100 years we'll be able to digitize them and copy them...") is not evidence for your point.  On that note...
*  Please do not talk about what is expected, or what "should" happen, or what you think is possible in principle.  (The phrase "in principle" should be banned from the physicist's lexicon.)  Please focus on what is actually known today based on scientific inquiry and discovery. 
*  Please do not bully with hazy notions of "consensus."  Scientific truth does not equal consensus.  I don't care (and neither should you) what a "majority" of scientists believe if those beliefs are not founded on scientific data and evidence.  Further, considering that anyone who openly questions these assumptions has to apologetically tiptoe on eggshells for fear of offending the majority, it's difficult or impossible to know whether there really is any consensus on this issue.
*  Please be aware of your own assumptions.  For example, if you reply that "consciousness must be capable of being simulated because it is part of the universe, which is itself capable of being simulated," note that the latter statement is itself an unproven assumption.



Additional comments:
The following search terms in Google yield either zero or just a few results, which underscores how pervasive the assumptions about brains and consciousness are:

“impossible to copy conscious”
“impossible to copy consciousness”
“not possible to copy consciousness”
“possible to copy consciousness”
“possible to copy conscious”
"cannot copy conscious"
"cannot copy consciousness"
"impossible to duplicate conscious"
"possible to duplicate conscious"
"possible to duplicate consciousness"
"impossible to duplicate consciousness" 
"cannot duplicate consciousness" 
"cannot duplicate conscious"
 “not possible to copy brain”
“impossible to copy the brain”
“not possible to copy the brain”
“cannot copy the brain”
"cannot duplicate the brain"
"impossible to duplicate the brain"
"possible to duplicate the brain" 
 “consciousness cannot be algorithmic” 

Thursday, June 20, 2019

The Physics of Free Will

I know the topic of free will has been debated endlessly for millennia, and everyone has their own opinion.  However, I’ve read and searched endlessly, and I can’t find anyone who addresses or answers the following problem.

Let’s say that I perceive that I have the choice to press button A or B.  There are only three possibilities:
a) There is no actual branching event.  The perception is an illusion.  The button I press is entirely predetermined.  (That doesn’t imply that the universe as a whole is deterministic, but that indeterminacy is irrelevant to my perception of a free choice.)
b) There is a branching event, but it is quantum mechanical in nature.  In other words, the button I press actually depends on some QM event (whether you call it measurement, reduction, or collapse), so while the outcome is not predetermined, it is random.  The perception that a branching event was about to happen was correct, but the perception that I can control it is an illusion.
c) There is a branching event, and my free will caused the outcome.

In case a), my “choice” is simply a prediction about the future.  But there are several problems with this:
1) Why would I ever perceive as possible an event that is actually impossible?  (If pressing button A was predetermined, then pressing B is an impossible event.)
2) What is the advantage of making a prediction if awareness of the predicted outcome will not affect anything that will happen in the future?  In other words, if I can’t DO anything to change anything (because I don’t have free will), what’s the point in predicting? 
3) What is the advantage of perceiving free will when I am actually making a prediction?  When I drop a ball, I predict it will accelerate downward toward the Earth.  But imagine if I (falsely) believed I had free will over that ball... “OK, am I going to drop the ball UP or DOWN?  Hmmm... today I’ll decide to drop it DOWN.”  What would be the point of that false perception? 

The case of b) isn’t much better, because my “choice” is, again, just a prediction about the future (possibly coupled with measurement of a random QM event).  The same problems arise.

Note that my perception of free will is limited to my body, and not even my entire body (for example, I don’t think I can consciously control my digestion process).  In fact, I only perceive “free will” with regard to a few aspects of my body, such as motions of my hands and fingers.  But what is true is that I have never EVER once observed the experience of NOT having free will over those parts.  For example, I have never decided to raise my right hand, but then my left hand rises instead.  I never raise my hand and then say, “I didn’t do that!” 

But that COULD have been the case.  I could have been born into a world in which I just observed things happening... where my body was no different from a dropping ball or a planet orbiting a star... where it’s just an object that moves on its own and I experience it.  In other words, why am I not just experiencing the world through a body that moves on its own as if I were just watching an immersive (five-sense) movie?  It’s not like we need to believe in free will.  For example, we are perfectly fine watching movies or riding roller coasters, full well knowing that we can’t control them.  Why couldn’t we just be passing through the world moment-to-moment, just experiencing the ride, without any perception that we have free choices?  In other words, if a) or b) above is true, we need to explain WHY I perceive the freedom to press button A or B, but also why my choices are always 100% consistent with the outcome.

That’s a real problem.  Because now we have to explain why the universe would conspire to:
* Fool me into believing that I have a choice when I don’t; AND
* Fool me into believing that the outcome is always consistent with what I (mistakenly) thought I chose!

Why would the universe fool us like that? 

As an aside, please don’t answer with “compatibilism,” which is the philosopher’s way of avoiding the question of free will.  You can look it up, but I regard it as a non-answer.  Even famed philosopher John Searle agrees that philosophers haven’t made any progress on the free will question in the past hundred years.

Tuesday, June 11, 2019

Why Mind Uploading and Brain Copying Violate the Physics of Consciousness

I just finished creating a video, now posted on YouTube, that attempts to prove why the laws of physics, particularly Special Relativity and Quantum Mechanics, prohibit the copying or repeating of conscious states.  This time, I introduce the Unique History Theorem, which essentially states that every conscious state uniquely determines its history from a previous conscious state.  If true, then the potential implications are significant: consciousness is not algorithmic; computers (including any artificial intelligence) will never become conscious; mind uploading, as well as digital immortality, will never be possible; and teleportation and any form of brain copying or digitization will remain science fiction.

The video, which lasts about an hour and a half, is here:




However, if you want a brief SUMMARY of the two main videos, a 17-minute video is posted on YouTube here:




Please keep in mind that the above summary video is a great introduction to the proofs and arguments in the main videos, but that the arguments themselves are truncated.  

Tuesday, June 4, 2019

Can Physics Answer the Hardest Questions of the Universe?

At some point during an intro to philosophy class in college, I was first exposed to the classic "Brain in a vat" thought experiment: how do I know I'm not just a brain in a vat of goo with a bunch of wires and probes poking out, being measured and controlled by some mad scientist?  That was a few years before The Matrix came out, which asked essentially the same question.

So -- are you a brain in a vat?  And how could you know?

This is just the tip of the iceberg; once we start down this path, we come face-to-face with more difficult questions.  "What creates consciousness?"  "Can consciousness be simulated?"  "If I copy my brain, will it create another me, and what would that feel like?"  And once we've fallen down the rabbit hole, we see that there are a thousand other seemingly unanswerable questions... questions about free will, the arrow of time, the nature of reality, and so forth.

I think physics can help answer these questions, and in fact I think I have answered a couple of them to some degree.  For example, I don't know (yet) whether I'm in a simulation, but I think I do know whether or not I am a simulation.  Here is my first YouTube talk in which I explain why consciousness cannot be algorithmic, conscious states cannot be copied or repeated, and computers will never be conscious:




If you prefer a written explanation, here is a preprint of my article, "Refuting Strong AI: Why Consciousness Cannot Be Algorithmic."

I am also working on another proof of the same conclusions from a different angle.  Here is a preprint of my article, "Killing Science Fiction: Why Conscious States Cannot Be Copied or Repeated."  I'll post a link to a YouTube talk on this paper as soon as it's available.