Wednesday, January 27, 2021

Is Scalable Quantum Computing Possible? And Why Does It Matter?

Tomorrow I begin a class on quantum computing at NYU, taught by Javad Shabani.  In preparation, I am reading Scott Aaronson’s fascinating Quantum Computing Since Democritus.

The notion of quantum computing is simple.  Computers rely on bits – transistors that serve as little on-off switches.  By starting with an initial string of bits and then manipulating them in a particular way according to software (e.g., flipping switches from “on” (1) to “off” (0), etc.), a computer can perform essentially any calculation.  Computers don’t need to be made of transistors, of course, but transistors tend to be much more efficient than, say, Tinker Toys.  A quantum computer is simply a computer whose bits are replaced with “qubits” (quantum bits).  Unlike a classical bit, which can only take the state 0 or 1, a qubit can be in a superposition of 0 and 1 (or, more precisely, the state α|0> + β|1>, where α and β are complex amplitudes satisfying |α|² + |β|² = 1, and |α|² is the probability of finding the qubit in state |0> and |β|² the probability of finding it in state |1> if measured in the {|0>,|1>} basis). 
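To make the notation concrete, here is a minimal sketch in Python (using NumPy; the particular amplitudes and the simulated measurements are purely illustrative, and this is of course a classical simulation, not how real hardware works) of a qubit as a pair of complex amplitudes:

```python
import numpy as np

# A minimal illustration: a single qubit is a pair of complex amplitudes
# (alpha, beta) with |alpha|^2 + |beta|^2 = 1.
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)   # an equal superposition of |0> and |1>
state = np.array([alpha, beta])

# Measurement probabilities in the {|0>, |1>} basis are the squared magnitudes.
p0, p1 = np.abs(state) ** 2
print(f"P(0) = {p0:.3f}, P(1) = {p1:.3f}")      # both 0.500 for this state

# Simulate a few measurements: each yields 0 or 1 with those probabilities.
outcomes = np.random.choice([0, 1], size=10, p=[p0, p1])
print(outcomes)
```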

The reason this matters is that a single qubit has “infinitely many” possible states (well, not really, but certainly lots), because α and β can vary continuously, while a classical bit has only two states (0 or 1).  In some sense, then, the “information content” of a qubit (and ultimately of a quantum computer) is vastly greater than the information content of a classical bit (and corresponding classical computer).  If you think your iPhone is fast now, imagine one with a quantum computer processor!
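One concrete way to see the gap: merely writing down the state of n entangled qubits on a classical machine takes 2^n complex amplitudes.  Here is a quick back-of-the-envelope sketch (my own illustrative assumption of 16 bytes per double-precision complex amplitude):

```python
# Rough illustration: a classical description of n entangled qubits needs
# 2**n complex amplitudes, so the memory required grows exponentially with n.
BYTES_PER_AMPLITUDE = 16  # assumed: one double-precision complex number

for n in [10, 20, 30, 40, 50]:
    amplitudes = 2 ** n
    memory_gib = amplitudes * BYTES_PER_AMPLITUDE / 2 ** 30
    print(f"{n:>2} qubits -> {amplitudes:>20,} amplitudes (~{memory_gib:,.3f} GiB)")
```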

At least... that’s the advertisement for quantum computing.  In reality, there are several problems with actual quantum computing.  I won’t dig too deeply into them, as they’re well described by articles such as this, but here are a few:

·         Nobody knows what to do with them.  There are a few specific algorithms, such as Shor’s algorithm for factoring large composite numbers, that would have useful implications for cryptography and information security.  Beyond that, there don’t seem to be many real-world applications for which quantum computers would be significantly faster than classical computers.

·         Qubits must remain isolated from the rest of the world (except for their entanglements with other qubits) during the computation, and this is a massively difficult problem because of decoherence.  You can have a microSD card with literally billions of classical bits... you can stick it in your pocket, use it to pick a piece of chicken out of your teeth, drop it in the toilet, probably zap it in the microwave for a few seconds... and it will probably still work fine.  (Full disclosure: I’ve never actually tried.)  But qubits are so ridiculously sensitive to influences from the world that it takes a huge multi-million-dollar system just to adequately cool and isolate even a dozen qubits.  (A toy numerical sketch of this fragility appears after this list.)

·         Even if there were a way to adequately isolate lots of qubits, as well as entangle and manipulate them in the way necessary to execute a useful algorithm, and even if you could do this for a reasonable price on a reasonably sized device, error correction seems to be a major problem.  Errors are caused (at least in part) by decoherence, and quantum error-correction schemes are supposedly possible in principle, but those schemes (which may require on the order of a thousand additional error-correcting qubits for each computational qubit) may prove seriously problematic for the future of quantum computing.
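As promised above, here is a toy sketch of that fragility.  It assumes a simple exponential-dephasing model with a coherence time and gate time I picked purely for illustration (real devices vary enormously); the point is just that the “coherence” of a superposed qubit decays steadily while a computation runs, which is why isolation and error correction loom so large:

```python
import numpy as np

# Toy model of decoherence (pure dephasing), with purely illustrative numbers:
# the qubit starts in (|0> + |1>)/sqrt(2); coupling to the environment makes
# the off-diagonal ("coherence") term of its density matrix decay as exp(-t/T2).
T2 = 100e-6            # assumed coherence time: 100 microseconds
gate_time = 50e-9      # assumed time per gate: 50 nanoseconds

rho = 0.5 * np.array([[1, 1],
                      [1, 1]], dtype=complex)   # density matrix of the superposition

for n_gates in [1, 100, 1_000, 10_000]:
    t = n_gates * gate_time
    coherence = abs(rho[0, 1]) * np.exp(-t / T2)
    print(f"after {n_gates:>6} gates: remaining coherence = {coherence:.4f}")
```

On these made-up numbers, essentially nothing is left of the superposition after ten thousand gates – roughly the regime in which error correction is supposed to come to the rescue.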

At the end of the day, the real question is not whether a “quantum computer” consisting of a handful of entangled qubits is possible – of course it is, and such computers have already been built.  Rather, it is whether the problems of isolation, decoherence, and error-correction will prevent the possibility of "scaling up" a quantum computer to some useful size.  Aaronson famously offered $100,000 for “a demonstration, convincing to me, that scalable quantum computing is impossible in the physical world.”  I want to know the answer to this question not just because it’s such a massively important question pervading modern science and technology, but also because of its relationship to my own work on consciousness, with implications going both ways.  Specifically, what might the physical nature of consciousness tell us about the possibility of scalable quantum computing, and what might the possibility of scalable quantum computing tell us about the physical nature of consciousness?

Here’s an example.  I have been arguing for some time (e.g., in this paper and this post) that macroscopic quantum superpositions, like Schrodinger’s Cat (“SC”) and Wigner’s Friend (“WF”), can never be demonstrated, even in principle, because any “macroscopic” object (e.g., a dust particle, a cat, a planet, etc.) is already so well correlated to other objects through a history of interactions (including “indirect” interactions because of transitivity of correlation) that it can never exist in a superposition of macroscopically distinct position eigenstates relative to those other objects.  Of course, the majority opinion – practically the default position – among physicists and philosophers of physics is that WF is possible.  Nevertheless, even those who claim that WF is possible will admit that it’s really difficult (and perhaps impossible) in practice and will often resort to the plausibility of conscious AI (i.e., “Strong AI”) to save their arguments.  David Deutsch in this article, for example, spends dozens of pages with lots of quantum mechanics equations “proving” that WF is possible, but then spends a half page saying, essentially, that OK, this probably isn’t possible for an actual flesh-and-blood human but we might be able to do it on a computer and since it’s obvious that consciousness can be created on a computer... blah blah...

The problem, of course, is that not only is it not obvious, but I showed in these papers (here and here) that consciousness actually cannot be created on a computer because it is not algorithmic.  So if the possibility of WF depends on AI being conscious, and if computer consciousness is in fact physically impossible, then there must be some explanation for why WF is also physically impossible – and that explanation may equally apply to the impossibility of large quantum computers.  Further, many proponents of the possibility of computer consciousness, such as Aaronson, suspect that we’ll need a quantum computer to do the job, in which case the possibility of WF and conscious AI may hinge on the possibility of scalable quantum computing.   

Anyway, this is all to say that much of what I have discovered, innovated, and now believe about consciousness, quantum mechanics, information, Wigner’s Friend, etc., is closely related to the question of whether scalable quantum computing is possible.  Before actually beginning the class on quantum computing, here is my prediction: I think that scalable quantum computing is, in fact, impossible in the physical world.  Here’s why.

First, the possibility of scalable quantum computing, like the possibility of macroscopic quantum superpositions, follows from the assumption of “U” (i.e., the “universality” or “unitary-only” assumption that a quantum wave state always evolves linearly).  But U is an invalid logical inference, as I argue in this paper; I actually think it is irrational to believe U.  In other words, the primary argument in support of scalable quantum computing rests on a logically invalid inference.  Further, I think that most of those who believe U (probably the vast majority of physicists) don’t even know why they believe it.  As a bettor, I would say that the smart money goes on those who actually understand (and, better yet, can justify) the assumptions they make.  The fact that so many of those who believe in scalable quantum computing also assume U leads me to doubt their claims.

Second, the possibility of scalable quantum computing depends on foundational questions about quantum mechanics, and very few scientists (including those who assert that scalable quantum computing is possible) actually understand quantum mechanics.  I know this may sound arrogant... how can I possibly claim to understand QM well enough to conclude that so few people do?  Well, that isn’t what I said – although, incidentally, I do believe I now understand QM far more deeply than most.  You don’t have to understand a topic to be able to identify logical contradictions.  Unlike my brilliant physician wife, I know next to nothing about medicine or the human body, but if I heard a doctor say, “The brain does X” and then later say, “The brain does not do X,” I would know that the doctor does not understand the brain.  So it is with QM.  Here are a few papers in which I’ve addressed contradictions by physicists discussing QM (here, here, and here), and the cognitive dissonance required for a physicist to say something like “Schrodinger’s Cat is both dead and alive” drives me absolutely bonkers.

Third, and most importantly, I think that scalable quantum computing will run into the same problem as macroscopic quantum superpositions, which (as discussed above and in the cited papers) I think are impossible to create and empirically demonstrate.  I’m not sure it’s exactly the same problem, but it’s really similar.  For example, I argued here that when a tiny object measures the position of a measuring device, it inherently measures the position of other objects in the rest of the universe, whose positions are already well correlated to that of the measuring device.  Will that argument apply to, say, a million qubits that are entangled with each other but isolated and uncorrelated to other objects in the universe?  I don’t know, but it certainly suggests a similar problem. 

On a related note, I have argued that the superposition state of a single particle can grow via quantum dispersion, but as an object grows in size, a potential superposition suffers two problems: a reduction in the rate of dispersion (because, thanks to the Uncertainty Principle, a heavier object’s wave packet spreads more slowly) and an increase in the rate of decoherence.  We can do double-slit interference experiments on objects as large as maybe a thousand atoms, but anything much beyond that seems to be impossible for all practical purposes.  (A rough numerical sketch of the dispersion point follows below.)  I suspect the same problem, or something comparable, will arise with groups of entangled qubits.  In other words, I am reasonably confident that there is a set of quantum superpositions that are physically impossible to empirically demonstrate, even in principle – and I would bet that whatever physical mechanism prevents such superpositions would also prevent scalable quantum computing.
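Here is the promised rough sketch of the dispersion point.  It uses the standard free-particle Gaussian wave-packet spreading formula, σ(t) = σ₀·sqrt(1 + (ħt / 2mσ₀²)²), with masses and an initial width I chose arbitrarily for illustration; decoherence is not modeled at all:

```python
import numpy as np

# Rough numerical sketch (free-particle Gaussian wave packets, illustrative numbers):
# the width sigma(t) = sigma0 * sqrt(1 + (hbar*t / (2*m*sigma0**2))**2),
# so heavier objects disperse far more slowly.
hbar = 1.0546e-34          # J*s
sigma0 = 1e-9              # assumed initial width: 1 nm
t = 1.0                    # elapsed time: 1 second

objects = {
    "electron (~9.1e-31 kg)": 9.109e-31,
    "dust grain (~1e-15 kg)": 1e-15,
}

for name, m in objects.items():
    sigma_t = sigma0 * np.sqrt(1 + (hbar * t / (2 * m * sigma0 ** 2)) ** 2)
    print(f"{name}: width after {t} s ~ {sigma_t:.3e} m")
```

Even on this idealized account (no environment, no decoherence), the heavier object’s wave packet barely spreads at all, which is the dispersion half of the problem described above.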

But I don’t know for certain.  For example, I don’t know how an individual qubit is placed in an initial (superposition) state, nor do I know how groups of qubits are entangled and manipulated in the way necessary to perform a desired algorithm.  It may turn out that the only real limitation is decoherence, and perhaps error correction will indeed be adequate to overcome it.  I sincerely doubt it, but these are the sorts of questions I am looking forward to answering this semester!
