World's First Proof that Consciousness is Nonlocal


Thursday, May 20, 2021

Quantum Computing is 99% Bullshit

 

In this post, just before beginning a class on quantum computing at NYU, I predicted that scalable quantum computing ("SQC") is in fact impossible in the physical world.

I was right.

And I can finally articulate why.  The full explanation (“Scalable Quantum Computing is Impossible”) is posted here and in the following YouTube video.

Here is the general idea.  Let me make a few assumptions:

·       A system is not “scalable” in T (where T might represent, for example, total time, number of computation steps, number of qubits, number of gates, etc.) if the probability of success decays exponentially with T.  In fact, the whole point of the Threshold Theorem (and fault-tolerant quantum error correction (“FTQEC”) in general) is to show that the probability of success of a quantum circuit can be made arbitrarily close to 100% with “only” a polynomial increase in resources.  (A toy numerical contrast of these two scalings appears just after this list.)

·       Quantum computing is useless without at least a million or a billion controllably entangled physical qubits, which is among the more optimistic estimates for useful fault-tolerant quantum circuits.  (Even "useful" QC isn’t all that useful, limited to a tiny set of very specific problems.  Shor’s Algorithm, perhaps the most famous of all algorithms that are provably faster on a quantum computer than a classical computer, won’t even be useful if and when it can be implemented because information encryption technology will simply stop making use of prime factorization!)

o   There are lots of counterarguments, but they’re all desperate attempts to save QC.  “Quantum annealing” is certainly useful, but it’s not QC.  Noisy Intermediate-Scale Quantum (“NISQ”) is merely the hope that we can do something useful with the 50-100 shitty, noisy qubits that we already have.  For example, Google’s “quantum supremacy” demonstration did absolutely nothing useful, whether or not it would take a classical computer exponential time to do a similarly useless computation.  (See the “Teapot Problem.”)
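
As promised above, here is a minimal numerical sketch of the two competing scalings.  The per-step error rate, the threshold value, and the concatenation formula below are illustrative placeholders (the textbook concatenated-code estimate), not anyone's measured numbers; the sketch only shows what "exponential decay in T" versus "polynomial overhead for arbitrary accuracy" mean quantitatively.

import math

# Toy numbers only: p is a hypothetical per-step failure probability and p_th a
# hypothetical fault-tolerance threshold.  Nothing here is a hardware estimate.
p = 1e-3
p_th = 1e-2

def unprotected_success(T):
    # If any one of T steps failing kills the run, success decays exponentially in T.
    return (1.0 - p) ** T

def concatenated_logical_error(levels):
    # Textbook concatenated-code scaling behind the Threshold Theorem:
    # logical error ~ p_th * (p/p_th)**(2**levels), claimed to be bought with
    # only polynomially growing resources per logical qubit.
    return p_th * (p / p_th) ** (2 ** levels)

for T in (10, 100, 1_000, 10_000, 100_000):
    print(f"T = {T:>6}: unprotected success probability = {unprotected_success(T):.3e}")

for levels in (1, 2, 3, 4):
    print(f"{levels} levels of encoding: claimed logical error rate = {concatenated_logical_error(levels):.3e}")

The entire dispute is over whether the second scaling is ever physically realized; as argued below, I think it is not.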

Given these assumptions, what do I actually think about the possibility of SQC?

First of all, what reasons do we have to believe that SQC is possible at all?  Certainly the thousands of peer-reviewed publications, spanning the fields of theoretical physics, experimental physics, computer science, and mathematics, that endorse SQC, right?  Wrong.  As I pointed out in my last post, there is an unholy marriage between SQC and the Cult of U, and the heavily one-sided financial interest propping them up is an inherent intellectual conflict of interest.  Neither SQC nor FTQEC has ever been experimentally confirmed, and even some of their most vocal advocates are scaling back their enthusiasm.  The academic literature is literally full of falsehoods, my favorite one being that Shor’s Algorithm has been implemented on a quantum computer to factor the numbers 15 and 21.  (See, e.g., p. 175 of Bernhardt’s book.)  It hasn’t. 

Second, SQC depends heavily on whether U (the assumption that quantum wave states always evolve linearly or unitarily… i.e., that wave states do not “collapse”) is true.  It is not true, a point that I have made many, many times (here, here, here, here, here, here, etc.).  Technically, useful QC might still be possible even if U is false, as long as we can controllably and reversibly entangle, say, a billion qubits before irreversible collapse happens.  But here’s the problem.  The largest double-slit interference experiment (“DSIE”) ever done was on an 810-atom molecule.  I’ll discuss this more in a moment, but this provides very good reason to think that collapse would happen long before we reached a billion controllably entangled qubits.

Third, the Threshold Theorem and theories of QEC, FTQEC, etc., all depend on a set of assumptions, many of which have been heavily criticized (e.g., Dyakonov).  But not only are some of these assumptions problematic, they may actually be logically inconsistent… i.e., they can’t all be true.  Alicki shows that the noise models underlying the Threshold Theorem assume infinitely fast quantum gates, which of course are physically impossible.  And Hagar shows that three of the assumptions inherent in TT/FTQEC result in a logical contradiction.  Given that FTQEC has never been empirically demonstrated, and that its success depends on theoretical assumptions whose logical consistency is assumed by people who are generally bad at logic (which I’ve discussed in various papers (e.g., here and here) and in various blog entries (e.g., here and here)), I’d say their conclusions are likely false.

But here’s the main problem – and why I think that SQC is in fact impossible in the real world:

Noise sometimes measures, but QC theory assumes it doesn't.

In QC/QEC theory, noise is modeled as reversible, which means that it is assumed not to make permanent measurements.  (Fundamentally, a QC needs to be a reversible system.  The whole point of QEC is to “move” the entropy of the noise to a heat bath so that the evolution of the original superposition can be reversed.  I pointed out here and here that scientifically demonstrating the reversibility of large systems is impossible because it entails a logical contradiction.)  This assumption is problematic for two huge reasons.

First, measurements are intentionally treated with a double standard in QC/QEC theory.  The theory assumes (and needs) measurement at the end of computation but ignores it during the computation.  The theory's noise models literally assume that interactions with the environment that occur during the computation are reversible (i.e., not measurements), while interactions with the environment that occur at the end of the computation are irreversible measurements, with no logical, mathematical, or scientific justification for the distinction.  This is not an oversight: QEC cannot correct irreversible measurements, so proponents of QEC are forced to assume that unintended interactions are reversible but intended interactions are irreversible.  Can Mother Nature really distinguish our intentions?  
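
To see the distinction in the simplest possible toy model, here is a sketch using the 3-qubit bit-flip repetition code (my own illustrative example, not the full FTQEC machinery).  A reversible bit-flip error is corrected perfectly, but an environmental projective measurement of just one physical qubit destroys the encoded superposition while leaving a perfectly clean syndrome, so there is nothing for the code to correct.

import numpy as np

# Toy model: the 3-qubit repetition code, |0_L> = |000>, |1_L> = |111>.
# Illustrative only; it contrasts reversible noise with an irreversible
# (projective) measurement.  It does not model real hardware.

def ket(bits):
    v = np.zeros(8, dtype=complex)
    v[int(bits, 2)] = 1.0
    return v

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

def on_qubit(op, k):
    # Apply a single-qubit operator to qubit k (0, 1 or 2) of the register.
    ops = [I2, I2, I2]
    ops[k] = op
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)            # arbitrary logical amplitudes
logical = alpha * ket("000") + beta * ket("111")         # encoded superposition

# Case 1: reversible noise.  A stray X on qubit 1 is undone exactly.  (In the
# real protocol the syndrome locates the flipped qubit; here the correction is
# just applied directly.)
noisy = on_qubit(X, 1) @ logical
recovered = on_qubit(X, 1) @ noisy
print("reversible error corrected:", np.allclose(recovered, logical))        # True

# Case 2: irreversible noise.  The environment projectively measures qubit 1
# in the Z basis.  The state collapses to |000> or |111>: the superposition
# (the relative weight and phase of alpha and beta) is gone for good.
P0 = on_qubit(np.array([[1, 0], [0, 0]], dtype=complex), 1)
P1 = on_qubit(np.array([[0, 0], [0, 1]], dtype=complex), 1)
prob0 = float(np.real(np.vdot(logical, P0 @ logical)))
projector = P0 if np.random.rand() < prob0 else P1
collapsed = projector @ logical
collapsed /= np.linalg.norm(collapsed)
print("overlap with |000>:", round(abs(np.vdot(ket("000"), collapsed)), 3))  # one of these is 1.0,
print("overlap with |111>:", round(abs(np.vdot(ket("111"), collapsed)), 3))  # the other 0.0

# The code's syndrome (Z0Z1 and Z1Z2) reads "no error" on the collapsed state,
# so error correction is never even triggered.
s1 = np.real(np.vdot(collapsed, on_qubit(Z, 0) @ on_qubit(Z, 1) @ collapsed))
s2 = np.real(np.vdot(collapsed, on_qubit(Z, 1) @ on_qubit(Z, 2) @ collapsed))
print("syndrome after collapse:", round(float(s1)), round(float(s2)))        # +1 +1

Nothing here is controversial for the toy code; the question the rest of this post raises is whether real environmental interactions during a computation always behave like Case 1, as the noise models assume, or at least sometimes like Case 2.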

Second, and more importantly, the history and theory of DSIEs indicate that noise sometimes measures!  All DSIEs have in fact depended on dispersion of an object’s wave packet both to produce a superposition (e.g., “cat” state) and to demonstrate interference effects.  However, the larger the object, the more time it takes to produce that superposition and the larger the cross section for a decohering interaction with particles and fields permeating the universe.  As a result, the probability of success of a DSIE decays exponentially with the square of the object’s mass (p ~ e^(−m²)), which helps to explain why, despite exponential technological progress, we can't yet do a DSIE on an object having 1000 atoms, let alone a million or a billion.  What this means is that DSIEs are not scalable, and the fundamental reason for this unscalability – a reason which seems equally applicable to SQC – is that noise at least sometimes causes irreversible projective measurements.
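
To get a feel for how punishing that scaling is, here is a back-of-the-envelope sketch.  The normalization is purely illustrative: I simply set the 810-atom molecule to m = 1 (treating mass as roughly proportional to atom count), assume, hypothetically, that its interference experiment succeeds with probability 0.1, and read off what p ~ e^(−m²) implies for heavier objects.

import math

# Purely illustrative: m is measured in units of the 810-atom molecule's mass,
# and p1 is a hypothetical success probability for that benchmark experiment.
# Taking p(m) ~ exp(-m**2) at face value then gives p(m) = p1 ** (m**2).

p1 = 0.1

def log10_success(m):
    return (m ** 2) * math.log10(p1)

for atoms in (810, 8_100, 81_000, 1_000_000):
    m = atoms / 810.0
    print(f"{atoms:>9} atoms: p ~ 10^{log10_success(m):.0f}")

Whatever the true normalization, exponential decay in m² means the gap between a thousand atoms and a million is not one that better engineering closes.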

This is fatal to the prospect of scalable quantum computing.  If a single irreversible measurement (even if such an event is rare) irreparably kills a quantum calculation, then the probability of success decays exponentially with T, which by itself implies that quantum computing is not scalable.  But DSIEs demonstrate that not only does noise sometimes cause irreversible measurement, those irreversible measurements happen frequently enough that, despite the very best technology developed over the past century, it is practically impossible to create controllably highly entangled reversible systems larger than a few thousand particles.  
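
Here is the same point as arithmetic.  Assume, hypothetically, a per-step probability q that noise performs an irreversible measurement, and that a single such event ruins the run; the table shows how fast the overall success probability dies as the computation grows, and therefore how absurdly small q would have to be for the circuit sizes QC needs.

import math

# Toy arithmetic, not a hardware model.  q is a hypothetical probability that a
# single step suffers an irreversible (projective) measurement; one such event
# is assumed to ruin the computation.

def success(total_steps, q):
    # Probability that none of the steps triggers an irreversible measurement.
    return math.exp(total_steps * math.log1p(-q))

for total_steps in (1e6, 1e9, 1e12):          # illustrative circuit sizes (qubits x depth)
    for q in (1e-6, 1e-9, 1e-12, 1e-15):
        print(f"steps = {total_steps:.0e}, q = {q:.0e}: success ~ {success(total_steps, q):.3e}")

Under the definition at the top of this post, that is exactly what "not scalable" means: the tolerable rate of irreversible events has to shrink at least as fast as the computation grows.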

Quantum computing is neither useful nor scalable.  

Monday, May 17, 2021

The (Quantum Computing) House of Cards

In the physics community, there is a house of cards built upon a nearly fanatical belief in the universality of quantum mechanics – i.e., that quantum wave states always evolve in a linear or unitary fashion.  Let’s call this fanaticism the Cult of U.

When I began this process a couple years ago, I didn’t realize that questioning U was such a sin, that I could literally be ostracized from an “intellectual” community by merely doubting U.  Having said that, there are a few doubters, not all of whom have been ostracized.  For instance, Roger Penrose, one of the people I most admire in this world, recently won the Nobel Prize in Physics, despite his blatant rejection of U.  However, he rejected U in the only way deemed acceptable by the physics community: he described in mathematical detail the exact means by which unitarity may be broken, and conditioned the rejection of U on the empirical confirmation of his theory.  As I describe in this post, Penrose proposes gravitational collapse of the wave function, a potentially empirically testable hypothesis that is being explored at the Penrose Institute.  In other words, he implicitly accepts that: a) U should be assumed true; and b) it is his burden to falsify U with a physical experiment.

I disagree.  In the past year, I’ve attempted (and, I believe, succeeded) to logically falsify U – i.e., by showing that it is logically inconsistent and therefore cannot be true – in this paper and this paper.  I also showed in this paper why U is an invalid inference and should never have been assumed true.  Setting aside that they have been ignored or quickly (and condescendingly) dismissed by nearly every physicist who glanced at them, all three were rejected by arXiv.org.  This is both weird and significant.

The arXiv is a preprint server, specializing in physics (although it has expanded to other sciences), supported by Cornell University, that is not peer reviewed.  The idea is simply to allow researchers to quickly and publicly post their work as they begin the process of formal publication, which can often take years.  Although not peer-reviewed, arXiv does have a team of moderators who reject “unrefereeable” work: papers that are so obviously incorrect (or just generally shitty) that no reputable publisher would even consider them or send them for peer review by referees.  Think perpetual motion machines and proofs that we can travel faster than light.

What’s even weirder is that I submitted the above papers under the “history and philosophy of physics” category.  Even if a moderator thought the papers did not contain enough equations for classification in, say, quantum physics, on what basis could anyone say that they weren’t worthy of being refereed by a reputable journal that specializes in the philosophy of physics?  For the record, a minor variation of the second paper was in fact refereed by Foundations of Physics, and the third paper was not only refereed, but was well regarded and nearly offered publication by Philosophy of Science.  Both papers are now under review by other journals.  No, they haven’t been accepted for publication anywhere yet, but arXiv’s standard is supposed to be whether the paper is at least refereeable, not whether a moderator agrees with the paper’s arguments or conclusions! 

It was arXiv’s rejection of my third paper (“The Invalid Inference of Universality in Quantum Mechanics”) that made it obvious to me that the papers were flagged because of their rejection of U.  This paper offers an argument about the nature of logical inferences in science and whether the assumption of U is a valid inference, an argument that was praised by two reviewers at a highly rated journal that specializes in the philosophy of physics.  No reasonable moderator could have concluded that the paper was unrefereeable. As a practical matter, it makes no difference, as there are other preprint servers where I can and do host my papers.  (I also have several papers on the arXiv, such as this – not surprisingly, none of them questions U.)

But the question is: if my papers (and potentially others’ papers) were flagged for their rejection of U… why?!

You might think this is a purely academic question.  Who cares whether or not quantum wave states always evolve linearly?  For example, the possibilities of Schrodinger’s Cat and Wigner’s Friend follow from the assumption of U.  But no one actually thinks that we’ll ever produce a real Schrodinger’s Cat in a superposition state |dead> + |alive>, right?  This is just a thought experiment that college freshmen like to talk about while getting high in their dorms, right? 

Is it possible that there is a vested interest… perhaps a financial interest… in U?

Think about some of the problems and implications that follow from the assumption of U.  Schrodinger’s Cat and Wigner’s Friend, of course, but there’s also the Measurement Problem, the Many Worlds Interpretation of quantum mechanics, the black hole information paradox, physical reversibility, and – oh yeah – scalable quantum computing. 

Since 1994, with the publication of Shor’s famous algorithm, untold billions of dollars have flowed into the field of quantum computing.  Google, Microsoft, IBM, and dozens of other companies, as well as the governments of many countries, have poured ridiculous quantities of money into the promise of quantum computing. 

And what is that promise?  Well, I have an answer, which I’ll detail in a future post.  But here’s the summary: if there is any promise at all, it depends entirely on the truth of U.  If U is in fact false, then a logical or empirical demonstration that convincingly falsifies U (or brings it seriously into question) would almost certainly be catastrophic to the entire QC industry. 

I’m not suggesting a conspiracy theory.  I’m simply pointing out that if there are two sides to a seemingly esoteric academic debate, but one side has thousands of researchers whose salaries and grants and reputations and stock options depend on their being right (or, at least, not being proven wrong), then it wouldn’t be surprising to find their view dominating the literature and the media.  The prophets of scalable quantum computing have a hell of a lot more to lose than the skeptics.

That would help to explain why the very few publications that openly question U usually do so in a non-threatening way: accepting that U is true until empirically falsified.  For example, it will be many, many years before anyone will be able to experimentally test Penrose’s proposal for gravitational collapse.  Thus it would be all the more surprising to find articles in well-ranked, peer-reviewed journals that question U on logical or a priori grounds, as I have attempted to do.

Quoting from this post:

As more evidence that my independent crackpot musings are both correct and at the cutting edge of foundational physics, Foundations of Physics published this article at the end of October that argues that “both unitary and state-purity ontologies are not falsifiable.”  The author correctly concludes then that the so-called “black hole information paradox” and SC disappear as logical paradoxes and that the interpretations of QM that assume U (including MWI) cannot be falsified and “should not be taken too seriously.”  I’ll be blunt: I’m absolutely amazed that this article was published, and I’m also delighted. 

Today, I’m even more amazed and delighted.  In the past couple of posts, I have referenced an article (“Physics and Metaphysics of Wigner’s Friends: Even Performed Premeasurements Have No Results”), which was published in perhaps the most prestigious and widely read physics journal, Physical Review Letters, but only in the past few days have I really understood its significance.  (The authors also give a good explanation in this video.)

What the authors concluded about a WF experiment is that either there is “an absolutely irreversible quantum measurement [caused by an objective decoherence process] or … a reversible premeasurement to which one cannot ascribe any notion of outcome in logically consistent way.”

What this implies is that if WF is indeed reversible, then he does not make a measurement, which is very, very close to the logical contradiction I pointed out here and in Section F of this post.  While the authors don’t explicitly state it, their article implies that U is not scientific because it cannot (as a purely logical matter) be empirically tested at the size/complexity scale of WF.  This is among the first articles published in the last couple decades in prestigious physics journals that make a logical argument against U.

What’s even more amazing about the article is that it explicitly suggests that decoherence might result in objective collapse, which is essentially what I realized in my original explanation of why SC/WF are impossible in principle, even though lots of physicists have told me I’m wrong.  Further, the article openly suggests a relationship between (conscious) awareness, the Heisenberg cut between the microscopic and macroscopic worlds, and the objectivity of wave function collapse below that cut.  All in an article published in Physical Review Letters!

Now, back to QC.  After over two decades of hype that the Threshold Theorem would allow for scalable quantum computing (by providing for fault-tolerant quantum error correction (“FTQEC”)), John Preskill, one of the most vocal proponents of QC and original architects of FTQEC, finally admitted in this 2018 paper that “the era of fault-tolerant quantum computing may still be rather distant.”  As a consolation prize, he offered up NISQ, an acronym for Noisy Intermediate-Scale Quantum, which I would describe as: “We’ll just have to try our best to make something useful out of the 50-100 shitty, noisy, non-error-corrected qubits that we’ve got.”

Despite what should have been perceived as a huge red flag, more and more money keeps flowing into the QC industry, leading Scott Aaronson to openly muse just two months ago about the ethics of unjustified hype: “It’s genuinely gotten harder to draw the line between defensible optimism and exaggerations verging on fraud.”

Fraud??!!

The quantum computing community and the academic members of the Cult of U are joined at the hip, standing at the top of an unstable house of cards.  When one falls, they all do.  Here are some signs that their foundation is eroding:

·       Publication in reputable journals of articles that question or reject U on logical bases (without providing any mathematical description of collapse or means for empirically confirming it).

·       Hints and warnings among leaders in the QC industry that promises of scalable quantum computing (which inherently depends on U) are highly exaggerated.

I am looking forward to the day when the house of cards collapses and the Cult of U is finally called out for what it is.

Friday, May 14, 2021

Another Comment on “Physical Reversibility is a Contradiction”

Scott Aaronson, whose argument on reversibility of quantum systems I mentioned in this post, responded to it (and vehemently disagreed with it).  Here is his reply:

Your argument is set out with sufficient clarity that I can unequivocally say that I disagree.

Reversibility is just a formal property of unitary evolution.  As such, it has the same status as countless other symmetries of the equations of physics that seem to be broken by phenomena (charge, parity, even just Galilean invariance).  I.e., once you know that the equations have some symmetry, you then reframe your whole problem as how it comes about that observed phenomena break the symmetry anyway.

And in the case of reversibility, I find the usual answer -- that it all comes down to the Second Law, or equivalently, the "specialness" of the universe's past state -- to be really compelling.  I don't see anything wrong with that answer.  I don't think there's something obvious here that the physics community has overlooked.

And yes, you can confirm by experiments that dynamics are reversible. To do so, you (for example) apply a unitary transformation U to an initial state |Ψ>.  You then CHOOSE whether to
(1) apply U⁻¹, the inverse transformation, and check that the state returned to |Ψ>, or

(2) measure immediately (in various bases that you can choose on the fly), in order to check if the system is in the state U|Ψ>.

Provided we agree that Nature had no way to know in advance whether you were going to apply (1) or (2), the only way to explain all the results -- assuming they're the usual ones predicted by QM -- is that |Ψ> really did get mapped to U|Ψ>, and that that map was indeed reversible.  In your post, you briefly entertain this obvious answer (when you talk about lots of identically prepared systems), but reject it on the grounds that making identical systems is physically impossible.

And yet, things equivalent to what I said above -- by my lights, a "direct demonstration of reversibility" -- are now ROUTINELY done, with quantum states of thousands of atoms or even billions of electrons (as with superconducting qubits).  Of course, maybe something changes between the scale of a superconducting qubit and the scale of a cat (besides the massive increase in technological difficulty), but I'd say the burden is firmly on the person proposing that to explain where the change happens, how, and why.


I sincerely appreciated his response... and of course disagree with it!  I’m going to break this down into several points:

You then CHOOSE whether to
(1) apply U⁻¹, the inverse transformation, and check that the state returned to |Ψ>,

First, I think he is treating U⁻¹ as a sort of deus ex machina.  If you don’t know whether a system is reversible, or how it can be reversed, just reduce it all down to a mathematical symbol corresponding to an operator (such as H, for Hamiltonian) and its inverse, despite the fact that this single operator might correspond to complicated and correlated interactions between trillions of trillions of degrees of freedom.  Relying on oversimplified symbol manipulation makes it harder to pinpoint potentially erroneous assumptions about the physical world.

Second, and more importantly, if you apply U⁻¹, you cannot check that the state returned to |Ψ>.  Maybe (MAYBE!) you can check to see that the state is |Ψ>, but you cannot check to see that it “returned” to that state.  And while you may think I’m splitting hairs here, this point is fundamental to my argument, and his choice of this language indicates that he really doesn’t understand the argument, despite his compliment that I had set it out “with sufficient clarity.”

The reason you cannot check to see if the state “returned” to |Ψ> is that it requires knowing that the state was in U|Ψ> at some point.  But you can’t know that, nor can any evidence exist anywhere in the universe that such an evolution occurred, because then the state would no longer be reversible.  (You also can’t say that the state was in U|Ψ> by asserting that, “If I had measured it, prior to applying U⁻¹, then I would have found it in state U|Ψ>,” because measurements that are not performed have no results.  This is the “counterfactuals” problem in QM that confuses a lot of physicists, as I pointed out in this paper on the Afshar experiment.)  So if you actually apply U and then U⁻¹ to an isolated system, this is scientifically indistinguishable from having done nothing at all to the system. 

or
(2) measure immediately (in various bases that you can choose on the fly), in order to check if the system is in the state U|Ψ>.  …In your post, you briefly entertain this obvious answer (when you talk about lots of identically prepared systems), but reject it on the grounds that making identical systems is physically impossible.  And yet, things equivalent to what I said above -- by my lights, a "direct demonstration of reversibility" -- are now ROUTINELY done, with quantum states of thousands of atoms or even billions of electrons (as with superconducting qubits). 

In this blog post, I pointed out that identity is about distinguishability.  I didn’t say that it’s impossible to make physically identical systems.  It’s easy to make two electrons indistinguishable.  By cooling them to near absolute zero, you can even make lots of electrons indistinguishable.  But the only way to create Schrodinger’s Cat is to create two cats that even the universe can’t distinguish – i.e., not a single bit of information in the entire universe can distinguish them.  In other words, for Aaronson's argument (about superpositions of billions of electrons in superconducting qubits) to have any relevance to the question of SC, we would have to be able to create, out of fermions, two cats that even the universe can’t distinguish. 

Tell me how!  Don't just tell me that this is a technological problem that the engineers need to figure out.  And do it without resorting to mathematical symbol manipulation.  I'll make it "easy."  Let's just start with a single hair on the cat's tail.  Simply explain to me how the wave function of that single hair could spread sufficiently (say, 1mm) to distinguish a dead cat from a live cat.  Or, equivalently, explain to me how the wave functions of two otherwise identical hairs, separated by 1mm, could overlap.  Tell me how to do this in the actual universe in which even the most remote part of space is still constantly bombarded with CMB, neutrinos, etc.  So far, no one has ever explained how to do anything like this.

Of course, maybe something changes between the scale of a superconducting qubit and the scale of a cat (besides the massive increase in technological difficulty), but I'd say the burden is firmly on the person proposing that to explain where the change happens, how, and why.

I strongly disagree!  As I point out in “The Invalid Inference of Universality in Quantum Mechanics,” the assumption that QM always evolves in a unitary/reversible manner is an unjustified and irrational belief.  Anyway, my fundamental argument about reversibility, which apparently wasn’t clear, is perhaps better summarized as follows:

1)     You cannot confirm the reversibility of a QM system by actually reversing it, as it will yield no scientifically relevant information.

2)     The only way to learn whether a system has evolved to U|Ψ> is to infer that conclusion by doing a statistically significant number of measurements on physically identical systems.  That’s fine for doing interference experiments on photons and Buckyballs, but not cats. 
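
Here is a minimal sketch of what point 2 means in practice, using a single qubit, |Ψ> = |0>, and an arbitrary example unitary (a Hadamard).  The ensemble statistics in two measurement bases are consistent with the state having been U|Ψ>; by contrast, applying U and then U⁻¹ to one system and finding it in |Ψ> is indistinguishable from never having touched it.

import numpy as np

rng = np.random.default_rng(0)

# Minimal sketch: U is an arbitrary example unitary (a Hadamard) and |psi> = |0>.
# Many identically prepared copies, measured in different bases, let you INFER
# that the state was U|psi>.  A single copy that is merely reversed tells you
# nothing.

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
psi = np.array([1, 0], dtype=complex)
state = H @ psi                                       # the state to be inferred

def sample(state, basis_rows, n):
    # Frequencies of projective-measurement outcomes in the given basis.
    probs = np.abs(basis_rows @ state) ** 2
    outcomes = rng.choice(len(probs), size=n, p=probs / probs.sum())
    return np.bincount(outcomes, minlength=len(probs)) / n

Z_basis = np.eye(2, dtype=complex)
X_basis = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

n = 100_000
print("Z-basis frequencies:", sample(state, Z_basis, n))   # ~[0.5, 0.5], as for H|0>
print("X-basis frequencies:", sample(state, X_basis, n))   # ~[1.0, 0.0], as for H|0> = |+>

# One system, U applied and then undone: the record is exactly the initial state,
# i.e. no evidence that anything ever happened.
print("single reversed run returns |0>:", np.allclose(H @ (H @ psi), psi))

That inference works precisely because identically prepared copies are cheap for photons and Buckyballs; the argument above is that it is unavailable for cats.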


Wednesday, April 28, 2021

Comment on "Physical Reversibility is a Contradiction"

Someone famous in the field of philosophy of mind (although I’m not at liberty to say) asked me the following question regarding my most recent blog post on the logical contradiction of quantum mechanical reversibility:

If one can't prove that Schrodinger’s Cat was in a superposition, I presume the same goes for “Schrodinger’s Particle.”  But we seem to get that evidence all the time in interference experiments.  Are particles different in principle from cats, or what else is going on?

 Here’s my reply:

That's kind of a technical question about how superpositions are "seen."  Of course, we never see a superposition... that's the heart of the measurement problem.  

What we do in a typical double-slit interference experiment is start with a bunch of "identically prepared" particles and then measure them on the other side of the slits.  The distribution we get is consistent with the particles having been in a linear superposition at the slits, where the amplitudes are complex numbers.  The fact that the amplitudes are complex numbers allows them to cancel, producing the "negative" probabilities (interference) that are at the heart of (the mathematics of) QM.

The key is that no particular particle is (or can be) observed in superposition... rather, it's from the measurement of lots of identically prepared particles that we infer an earlier superposition state.
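
Here is a small numerical sketch of that inference (all parameters, the wavenumber, slit separation, screen distance and envelope, are arbitrary toy values in a 1-D far-field approximation): each simulated particle contributes one detection point, and only the accumulated statistics distinguish "was in a superposition at the slits" from "went through one slit or the other."

import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D far-field two-slit model.  k, d, L and w are arbitrary illustrative
# numbers (wavenumber, slit separation, screen distance, envelope width).
k, d, L, w = 6.0, 2.0, 5.0, 4.0
x = np.linspace(-10.0, 10.0, 2001)                  # position on the screen

def slit_amplitude(sign):
    # Common Gaussian envelope times a slit-dependent phase +/- k*d*x/(2L).
    return np.exp(-x**2 / (2 * w**2)) * np.exp(1j * sign * k * d * x / (2 * L))

p_superposition = np.abs(slit_amplitude(+1) + slit_amplitude(-1)) ** 2
p_superposition /= p_superposition.sum()

p_one_slit_or_other = np.abs(slit_amplitude(+1)) ** 2 + np.abs(slit_amplitude(-1)) ** 2
p_one_slit_or_other /= p_one_slit_or_other.sum()    # no cross term: which-slit known

# Each particle yields a single detection point; sample many of them.
hits = rng.choice(x, size=50_000, p=p_superposition)

x_dark = np.pi * L / (k * d)                        # first dark fringe of the superposition
window = np.abs(x - x_dark) < 0.2
print("detections observed near the dark fringe:", float(np.mean(np.abs(hits - x_dark) < 0.2)))
print("predicted there if superposed:           ", float(p_superposition[window].sum()))
print("predicted there if one slit or the other:", float(p_one_slit_or_other[window].sum()))

The individual hits carry no interference information on their own; only the accumulated frequencies do, and they are only available because identically prepared particles are cheap.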

The problem is that it's technologically (and I would argue, in-principle) impossible to create multiple "identically prepared" cats.  If you could, you would just do lots of trials of an interference experiment until you could statistically infer a SC state.  But since you can't, you have to rely on doing a single experiment on a cat, by controlling all its degrees of freedom, so as to reverse any correlations between the cat and the quantum event.  But by doing so (assuming it was even possible), there remains no evidence that the cat was ever in a SC superposition at all.  So, since science depends on evidence, it's not logically possible to scientifically show that a SC ever existed... and no one seems to have addressed this in the literature.

Amazingly, this paper just came out in Physical Review Letters, so it's something that people in the physics community are just now starting to wrap their heads around.  The paper doesn’t go far enough, but it at least points out that if WF makes a “measurement” but then is manipulated to show that WF was in a superposition, then even that “measurement” has no results.

Friday, April 23, 2021

Physical Reversibility is a Contradiction

I’m working on a project/presentation on whether scalable quantum computers are possible.  A quantum circuit can be idealized as the application of a unitary matrix to an initial state of qubits.  Unitary matrices represent reversible transformations (changes of basis), which means that the computation must be shielded from irreversible decoherence events (or subject to quantum error correction to the extent possible) until the purposeful measurement of qubits at the end of the computation.

The word “reversibility” has come up a lot in my reading.  Essentially, the idea is that physical laws seem, for the most part, to be the same whether time is run forward or backward.  For example, if you were shown a video of a planet orbiting some distant star, you would not be able to tell whether the video was being played forward or in reverse.  Yet we experience time as moving in a particular direction (namely, toward the future).  This has led to a centuries-long debate about the “arrow of time”: whether physical laws are reversible or whether there is actually some direction built into the fabric of the physical world.

It’s time to nip it in the bud: the physical world is not time-reversible.

As an example of a typical argument for classical reversibility, imagine dropping a porcelain teapot on a wooden floor.  Of course, it will irreparably break into probably hundreds of pieces.  “In principle,” they say, “if you know the positions and trajectories of all those pieces, you can then apply forces that will completely reverse the process, causing the pieces to recombine to the original teapot.” 

But that’s crap.  We already know, thanks to the Heisenberg Uncertainty Principle (“HUP”), that the pieces don’t have positions and momenta to infinite precision.  That alone is enough to guarantee that any attempt to apply time-reversed forces to the pieces will, thanks to chaos, fail to result in a perfect recombination of the pieces.  (One of my favorite papers discusses how even “gargantuan” black holes become chaotic over time, thanks to HUP.)  This problem is only compounded by the fact that any measurement of the positions and/or momenta of the pieces will inevitably change their trajectories very slightly also. 

So quantum mechanics guarantees that the classical world is not and cannot be time-reversible.  But I’ve recently realized that the notion of time-reversibility in quantum mechanics is not only false… it’s actually a contradiction.  In Section F of this post, I had already realized and pointed out that there is something logically contradictory about the notion of Schrodinger’s Cat (“SC”) or Wigner’s Friend (“WF”).  (I copied the most relevant section of that post below.)

The idea is simple.  To actually create SC, which is a macroscopic superposition state, the cat (and its health) has to correlate to a vial of poison (and whether it is broken), which has to correlate to some quantum event.  These correlations are colloquially called “measurements.”  But to prove (or experimentally show) that the cat is in a macroscopic superposition state, you have to do an interference experiment that undoes the correlations.  In other words, to show that the measurements are reversible (as assumed by the universality of QM), you have to reverse the measurements to the extent that there is no evidence anywhere in the universe (including the cat’s own clock) that the measurements happened. 

Remember, scientific inquiry depends on evidence.  We start by assuming that SC is created in some experiment.  But then the only way to show that SC is created is… to show that it was not created.  The very evidence we scientifically rely upon to assert that SC exists must not exist.  Proving SC exists requires proving that it does not exist.  This is gibberish.  (David Deutsch tried to explain away this problem in this paper but failed.  Igor Salom correctly pointed out in this paper that any attempt to correlate the happening of a measurement inside the otherwise “isolated” SC container will inevitably correlate to the result of that measurement, in which case the measurement event will be irreversible.)

Whether discussing WF, SC, quantum computers, etc., if the evolution of a quantum mechanical system from time t1 to t2 is actually reversible at t2, then that must mean there is no evidence at t2 of its evolution.  And if you actually reverse the system to how it was at t1, then there can be no evidence of (and thus no scientific fact or meaning about) its having evolved or done anything from t1 to t2.  There can be no evidence anywhere, including as “experienced” by the system itself, because even by its own internal clock, there was no evolution to t2.  For a reversible system that is actually reversed, there just is no scientific fact about its having had any evolution.  And for a reversible system that is actually measured, so that information exists in the universe about its state (correlations, etc.), then that system is no longer reversible.

Finally, I want to mention that even for a quantum mechanically reversible system, in order to reverse it, you must have already set up the system to be reversible.  For example, if you want an exploding bomb to be reversible, you can’t let the explosion happen and then go hunting for all the fragments to measure their trajectories, etc.  Setting aside the classical problems I mentioned earlier (e.g., by measuring the particles you change their positions/momenta), the problem quantum mechanically is that once the happening of the event correlates to some particle that you don’t already have full control over, it’s too late… evidence now exists.  If a quantum superposition did exist at an earlier time, it no longer does because it has now, thanks to the decoherence event, irreversibly reduced to a definite state.

This is an error that Scott Aaronson seems to make.  Aaronson, one of the most brilliant people ever to discuss the relationship between physics and consciousness (such as in this paper), makes a compelling argument here (also here) that consciousness might be related to irreversible decoherence.  However, he seems to think of quantum mechanical reversibility as something that depends on a future event, like whether we take the time to search for all the records of an event and then reverse them.  For example, he posits that the irreversible decoherence related to one’s consciousness means that “the records of what you did are now heading toward our de Sitter horizon at the speed of light, and for that reason alone – even if for no others – you can’t put Humpty Dumpty back together again.”

But that’s wrong.  The reason you can’t put Humpty Dumpty back together again is not because evidence-carrying photons are streaming away… it’s because the fall of Humpty Dumpty was not set up before his fall to be reversible.  So a system described by wave function Ψ(t) can only be reversible at t2 if it is set up at earlier time t1 to be reversible (which means, at least in part, isolating it from decoherence sources).  But if you actually do succeed in reversing it at time t2 to its earlier state Ψ(t1), then there can never be scientific evidence that it was ever in state Ψ(t2).  Therefore, as a scientific matter, reversibility is a contradiction because the only way to show that a system is reversible is to show that it did not do something that it did.

Of course, assuming you could prepare lots of systems in identical states Ψ(t1), you could presumably let them evolve to state Ψ(t2), and then measure all of them except one, which you would then reverse to state Ψ(t1).  If the measured systems yield statistics that are consistent with the Born rule applied to state Ψ(t2), then you might logically infer that the system you reversed actually “was” in state Ψ(t2) at some point.  However, there’s a real problem, especially with macroscopic objects, with producing “identical” states, as I discuss here.  It is simply not physically possible, “in principle” or not, to make an identical copy of a cat.  Therefore, any attempt to scientifically show that SC exists requires showing that it does not exist. 

Physical reversibility is a contradiction.

_________________________________________________

From Section F of this post:

Consider this statement:

Statement Cat: “The measurement at time t1 of a radioactive sample correlates to the integrity of a glass vial of poison gas, and the vial’s integrity correlates at time t2 to the survival of the cat.” 

Let’s assume this statement is true; it is a fact; it has meaning.  A collapse theory of QM has no problem with it – at time t1, the radioactive sample either does or does not decay, ultimately causing the cat to either live or die.  According to U [the "universality" assumption that quantum states always evolve linearly and reversibly], however, this evolution leads to a superposition in which cat state |dead> is correlated to one term and |alive> is correlated to another.  Such an interpretation is philosophically baffling, leaving countless students and scholars wondering how it might feel to be the cat or, more appropriately, Wigner’s Friend.  Yet no matter how baffling it seems, proponents of U simply assert that a SC superposition state is possible because, while technologically difficult, it can be demonstrated with an appropriate interference experiment.  However, as I pointed out above, such an experiment will, via the choice of an appropriate measurement basis that can demonstrate interference effects, necessarily reverse the evolution of correlations in the system so that there is no fact at t1 (to the cat, the external observer, or anyone else) about the first correlation event nor a fact at t2 about the second correlation event.  In other words, to show that U is true (or, rather, that the QM wave state evolves linearly in systems at least as large as a cat), all that needs to be done is to make the original statement false:

1)         Statement Cat is true;

2)         U is true;

3)         To show U true, Statement Cat must be shown false.

4)         Therefore, U cannot be shown true.

Friday, March 19, 2021

The Folly of Brain Copying: Conscious Identity vs. Physical Identity

The notion of “identity” is a recurring problem both in physics and in the nature of consciousness.  Philosophers love to discuss consciousness with brain-in-a-vat type thought experiments involving brain copying.  The typical argument goes something like this:

i)          The brain creates consciousness.

ii)         It is physically possible to copy the brain and thereby create two people having the same conscious states.

iii)        Two people having the same conscious states each identifies as the “actual” one, but at least one is incorrect.

iv)        Therefore, conscious identity (aka personal identity) is an illusion.

I spent a long time in Section II of this paper explaining why questioning the existence of conscious identity is futile and why the above logic is either invalid or inapplicable.  Yes, we have a persistent (or “transtemporal”) conscious identity; doubting that notion would unravel the very nature of scientific inquiry.  Of course, you might ask why anyone would actually doubt if conscious identity exists.  Suffice it to say that this wacky viewpoint tends to be held by those who subscribe to the equally wacky Many Worlds Interpretation (“MWI”) of quantum mechanics, which is logically inconsistent with a transtemporal conscious identity.

I showed in Section III of the above paper why special relativity prevents the existence of more than one instantiation of a physical state creating a particular conscious state.  In other words, at least one of assumptions i) and ii) above is false.  For whatever reason, the universe prohibits the duplication or repeating of consciousness-producing physical states.  In Section IV(A) of the same paper, I suggested some possible explanatory hypotheses for the mechanism(s) by which such duplications may be physically prevented, such as quantum no-cloning. 

Nevertheless, the philosopher’s argument seems irresistible... after all, why can’t we make a “perfect” copy of a brain?  If multiple instances of the same conscious state are physically impossible then what is the physical explanation for why two consciousness-producing physical states cannot be identical?  I finally realized that conscious identity implies physical identity.  In other words, if conscious identity is preserved over time, then physical identity must also be preserved over time, and this may help explain why the philosopher’s brain-copying scheme is a nonstarter.

I’d been struggling for some time with the notion of physical identity, such as in this blog post and this preprint.  The problem can be presented a couple ways:

·         According to the Standard Model of physics, the universe seems to be made up of only a handful of fundamental particles, and each of these particles is “identical” to another.  For example, any two electrons are identical, as are any two protons, or any two muons, etc.  The word “identical” is a derivative of “identity,” so it’s easy to think of two “identical” electrons as indistinguishable and thus as having the same (or indistinct) identities.  So if all matter is made up of atoms comprising electrons, protons, and neutrons, then how can any particular clump of atoms have a different identity than another clump made of the same type of atoms?

·         Let’s assume that consciousness is created by physical matter and that physical matter is nothing but a collection of otherwise identical electrons, protons, and neutrons.  In the above paper I showed that if conscious identity exists, then conscious states cannot be copied or repeated.  And that means there is something fundamentally un-copiable about the physical state that creates a particular conscious state, which would seem odd if all matter is fundamentally identical. 

·         Consciousness includes transtemporal identity.  Assuming physicalism is true, then conscious states are created by underlying physical states, which means those physical states must have identity.  But physics tells us that physical matter comprises otherwise identical particles.

I finally realized that this problem can be solved if particles, atoms, etc., can themselves have identity.  (I do not mean conscious identity... simply that it makes sense to discuss Electron “Alice” and Electron “Bob” and keep track of them separately... that they are physically distinguishable.)  An object’s identity can be determined by several factors (e.g., position, entanglements and history of interactions, etc.) and therefore can be distinguished from another object that happens to comprise the same kind of particles.  Two physically “identical” objects can still maintain separate “identities” to the extent that they are distinguishable.  And we can distinguish (or separately identify) two objects, no matter how physically similar they may otherwise be, by their respective histories and entanglements and how those histories and entanglements affect their future states. 

Where does physical identity come from?  It is a necessary consequence of the laws of physics.  For instance, imagine we have an electron source in the center of a sphere, where the sphere’s entire surface is a detector (assume 100% efficiency) that is separated into hemispheres A and B.  The detector is designed so that if an electron is detected in hemisphere A, an alarm immediately sounds, but if it is detected in hemisphere B, a delayed alarm sounds one minute later.  The source then emits an electron, but we do not immediately hear the alarm.  What do we now know?  We know that an electron has been detected in hemisphere B and that we will hear an alarm in one minute.  Because we know this for certain, we conclude that the detected electron is the same as the emitted electron.  It has the same identity.  The following logical statement is true:

(electron emitted) ∩ (no detection in hemisphere A) → (detection in hemisphere B)

But more importantly, the fact that the above statement is true itself implies that the electron has identity.  In other words:

[(electron emitted) ∩ (no detection in hemisphere A) → (detection in hemisphere B)]

→ (the electron emitted is the electron detected in hemisphere B)

(In retrospect, I feel like this is obvious.  Of course physical identity is inherent in the laws of physics.  How could Newton measure the acceleration of a falling apple if it’s not the same apple at different moments in time?)

So if electrons can have identity, then in what sense are they identical?  Can they lose their identity?  Yes.  Imagine Electron Alice and Electron Bob, each newly created by an electron source and having different positions (i.e., their distinct wave packets providing their separate identities).  The fact that they are distinguishable maintains their identity.  For example, if we measure an electron where Electron Bob cannot be found, then we know it was Electron Alice.  However, electrons, like all matter, disperse via quantum uncertainty.  So what happens if their wave functions overlap so that an electron detection can no longer distinguish them?  That’s when Bob and Alice lose their identity.  That’s when there is no fact about which electron is which.  (As a side note, Electron Bob could not have a conscious identity given that when he becomes indistinguishable from Electron Alice, even he cannot distinguish Bob from Alice.  This suggests that conscious identity cannot even arise until physical identity is transtemporally secured.)

This realization clarified my understanding of conscious identity.  My body clearly has an identity right now in at least the same sense that Electron Bob does.  What would it take to lose that physical identity?  Well, it wouldn’t be enough to make an atom-by-atom copy of the atoms in my body (call it “Andrew-copy”), because Andrew-copy would still be distinguishable from me by nature, for example, of its different location.  Rather, the wave functions of every single particle making up my body and the body of Andrew-copy would have to overlap so that we are actually indistinguishable.  But, as I showed in this paper, that kind of thing simply can’t happen with macroscopic objects in the physical universe because of the combination of slow quantum dispersion with fast decoherence.

What would it take for me to lose my conscious identity (or copy it, or get it confused with another identity, etc.)?  Given that conscious states cannot be physically copied or repeated, if conscious identity depends only on the particular arrangement of otherwise identical particles that make up matter, then we need a physical explanation for what prevents the copying of that particular arrangement.  But if conscious identity depends on not just the arrangement of those (otherwise identical) particles but also on their physical distinguishability, then the problem is solved.  Here’s why.  Two macroscopic objects, like bowling balls, will always be physically distinguishable in this universe.  Bowling Ball A will always be identifiably distinct from Bowling Ball B, whether or not any particular person can distinguish them.  So if my conscious identity depends at least in part on the physical distinguishability of the particles/atoms/objects that create my consciousness, then that fact alone would explain why conscious states (and their corresponding transtemporal identity) cannot be copied.

Let me put this another way.  Identity is about distinguishability.  It is possible for two electrons to be physically indistinguishable, such as when the wave states of two previously distinguishable electrons overlap.  However, it is not possible, in the actual universe, for a cat (or any macroscopic object) and another clump of matter to be physically indistinguishable because it is not possible for the wave states of these two macroscopic objects to overlap, no matter how physically similar they may otherwise be.  A cat’s physical identity cannot be lost by trying to make a physical copy of it.  It is not enough to somehow assemble a set of ≈10^23 atoms that are physically identical to, and in a physically identical arrangement as, the ≈10^23 atoms comprising the cat.  Each of those constituent atoms also has a history of interactions and entanglements that narrowly localize their wave functions to such an extent that overlap of those wave functions between corresponding atoms of the original cat and the copy cat is physically impossible.  (See note below on the Myth of the Gaussian.)

Imagine that someone has claimed to have made a “perfect copy” of me in order to prove that conscious identity is just an illusion.  He claims that Andrew-copy is indistinguishable from me, that no one else can tell the difference, that the copy looks and acts just like me.  Of course, I will know that he’s wrong: even if no one else can distinguish the copy from me, I can.  And that alone is enough to establish that Andrew-copy is not a perfect copy.  But now I understand that my conscious identity implies physical identity – that my ability to distinguish Andrew-copy from me also implies physical distinguishability.  There is no such thing as a perfect physical copy of me.  Even if the atoms in Andrew-copy are in some sense the same and in the same configuration as those in my body, and even if some arbitrary person cannot distinguish me from Andrew-copy, the universe can.  The atoms in Andrew-copy have a history and entanglements that are distinguishable from the atoms in my body, the net result being that the two bodies are physically distinguishable; their separate physical identities are embedded as facts in the history of the universe.

So if the universe can distinguish me from Andrew-copy, then why should it be surprising that I can distinguish myself from Andrew-copy and that I have an enduring conscious identity?  The question is not whether some evil genius can make a physical copy of my body that is indistinguishable to others.  The question is whether he can make a copy that is indistinguishable to me or the universe.  And the answer is that he can’t because making that copy violates special relativity. 

 

Note on the Myth of the Gaussian:

Physicists often approximate wave functions in the position basis as Gaussian distributions, in large part because Gaussians have useful mathematical properties, notably that the Fourier transform of a Gaussian is another Gaussian.  Because the standard deviation of a Gaussian is inversely related to the standard deviation of its Fourier transform, it neatly illustrates the quantum uncertainty principle, which bounds the product of the spreads of two noncommuting observables from below by their (nonzero) commutator.  An important feature of a Gaussian is that it is never zero for arbitrarily large distances from the mean.  This treatment of wave functions often misleads students into believing that wave functions are or must be Gaussian and that: a) an object can be found anywhere; and b) the wave states of any two arbitrary identical objects always overlap.  Neither is true. 
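
Since the inverse relationship is easy to check numerically, here is a short sketch (units with ħ = 1; the grid parameters are arbitrary) confirming that the Fourier transform of a Gaussian is again a Gaussian, with a spread inversely proportional to the original and the product Δx·Δp sitting at 1/2.

import numpy as np

# Quick numerical check (hbar = 1 units): for a Gaussian wave packet the
# position spread and momentum spread are inversely related, and their product
# sits at the uncertainty bound dx*dp = 1/2.

def uncertainty_product(sigma, n=2**15, xmax=400.0):
    x = np.linspace(-xmax, xmax, n, endpoint=False)
    dx = x[1] - x[0]
    psi = np.exp(-x**2 / (4.0 * sigma**2))          # Gaussian wave function

    prob_x = np.abs(psi)**2
    prob_x /= prob_x.sum()
    delta_x = np.sqrt(np.sum(x**2 * prob_x))        # position spread (mean ~ 0)

    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)       # momentum grid (p = hbar*k, hbar = 1)
    prob_k = np.abs(np.fft.fft(psi))**2             # |FT|^2, again a Gaussian, now in k
    prob_k /= prob_k.sum()
    delta_p = np.sqrt(np.sum(k**2 * prob_k))        # momentum spread

    return delta_x, delta_p, delta_x * delta_p

for sigma in (0.5, 1.0, 5.0):
    dx_, dp_, prod = uncertainty_product(sigma)
    print(f"sigma={sigma}: dx={dx_:.3f}, dp={dp_:.3f}, dx*dp={prod:.3f}")   # dx*dp -> ~0.5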

Regarding a), physics students are often given the problem of calculating the probability that their body will quantum mechanically tunnel through a wall, or even tunnel to Mars; the calculation (which is based on the simple notion of a particle of mass M tunneling through a potential barrier V) always yields an extremely tiny but nonzero probability.  But that’s wrong.  Setting aside the problem with special relativity – i.e., if I am on Earth now, I can’t be measured a moment later on Mars without exceeding the speed of light – the main problem is physical distinguishability.  The future possibilities for my body (and its physical constituents) are limited by their histories and entanglements. 
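
For concreteness, here is what that homework calculation actually produces, using the standard rectangular-barrier estimate T ≈ e^(−2κL) with κ = √(2m(V−E))/ħ.  The macroscopic inputs below are hypothetical, and the whole point of the next paragraphs is that the caricature of treating a body as one free particle is wrong anyway.

import math

# The textbook homework caricature: treat an object as ONE particle of mass m
# with energy deficit (V - E) trying to cross a rectangular barrier of width L.
# The transmission estimate (for E well below V) is T ~ exp(-2*kappa*L) with
# kappa = sqrt(2*m*(V - E))/hbar.  The macroscopic numbers below are made up,
# chosen only to show the scale of the answer the homework gives.

hbar = 1.054_571_8e-34  # J*s

def log10_transmission(m_kg, deficit_J, L_m):
    kappa = math.sqrt(2.0 * m_kg * deficit_J) / hbar
    return -2.0 * kappa * L_m / math.log(10.0)

# Electron, ~1 eV barrier, 1 nm wide: small but very much nonzero.
print("electron:", f"T ~ 10^({log10_transmission(9.11e-31, 1.6e-19, 1e-9):.1f})")

# 70 kg "single particle", 1 J energy deficit, 10 cm wall: the homework answer.
print("70 kg body:", f"T ~ 10^({log10_transmission(70.0, 1.0, 0.1):.2e})")

Even granting the caricature, the "nonzero" answer for a macroscopic body under these made-up inputs is unimaginably small; the argument below is that the real number is not merely small but zero, because the body's constituents never develop wave packets wide enough for the question even to arise.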

While some electron may, due to quantum dispersion or being trapped in a potential well, develop a relatively wide quantum wave packet over time whose width “leaks” to the other side of the wall/potential barrier, this requires that the electron remain unmeasured (i.e., with no new correlations) during that time period.  But the particles and atoms in a human body are constantly “measuring each other” through decoherence so that their individual wave packets remain extremely tightly localized.  In other words, my body doesn’t get quantum mechanically “fuzzy” or “blurry” over time.  Thus none of the wave packets of the objects comprising my body get big enough to leak through (or even to) the wall.  More to the point, the QM “blurriness” of my body is significantly less than anything that can be seen... I haven’t done the calculation, but the maximum width of any wave packet (not the spread of a Gaussian, whose tails extend to infinity, but the actual maximum extent) is much, much, much smaller than the wavelength of light. 

As I showed above, physical distinguishability is an inherent feature of the physical world.  An object that appeared on the other side of the wall that happened to look like my body would be physically distinguishable from my body and so cannot be the same object.  That is, there is no sense in which the body that I identify as mine could quantum mechanically tunnel to Mars or through a wall; there is ZERO probability of me tunneling to Mars or through a wall.  If I have just been measured in location A (which is constantly happening thanks to constant decohering interactions among the universe and the objects comprising my body), then tunneling to location B requires an expansion of the wave packets of those objects to include location B – i.e., my tunneling to B requires a location superposition in which B is a possibility.  But past facts, including the fact that I am on Earth (or this side of the wall) right now, have eliminated all configurations in which my body is on Mars (or on the other side of the wall) a moment later.