20071127

Back by popular demand "Donald Duck" technical thread Biercuk (DARPA MTO) As per request, attached are the original posts from the "Donald Duck" technical thread, which were controversially removed from the D-Wave weblog shortly after they were posted in advance of the widely-reported press demo early this year. Geordie has stated that the comments were not taken down – though his comment (#35) apologizing for their initial removal appears in the original thread and in closing below. For those who were following the discussion in the days leading up to the demo, the abrupt disappearance of a technical thread left a memorable impression as to the status of further critical discussion on the weblog. Still notably absent from the debate is any substantive discussion of standard industry benchmarks: quantitative characterization of fidelity, persistence of entanglement in the presence of decoherence, susceptibility to 1/f noise, Rabi oscillations, Ramsey fringes, Larmor frequency, T1, T2 – as well as any third-party referee or peer-reviewed technical publication outlining these hardware requirements.


Donald Duck January 22, 2007 – Look, I am not aware of any theory that says that NP-complete problems are amenable to any significant speedup on a quantum computer. (Factoring integers, i.e. Shor’s algorithm, I remind you is somewhat special—it is not NP-complete). In this case, you will not be able to compete with conventional computers. Another thing to keep in mind: the press-conference method of announcing scientific results doesn’t have a very good track record. In 1989, chemists Stanley Pons and Martin Fleischmann held a press conference to report they had successfully achieved cold fusion with a simple device. In 2002, a group called Clonaid held a press conference to announce they had successfully achieved human cloning. In both cases, the stories were widely reported in the press but were later debunked. How about some good old-fashioned peer review? And so what if you can find the ground state of a 16-spin Ising model? I’m willing to bet that in this particular physical device quantum coherence has very little, if anything, to do with it.
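Donald's closing point about the 16-spin Ising model rests on scale: with only 16 spins there are just 2^16 = 65,536 configurations, so the ground state can be found classically by exhaustive search in a matter of seconds even in pure Python. A minimal sketch of that brute-force check follows; the values of h_i and J_ij below are arbitrary placeholders, not parameters from any actual device.

# Brute-force ground-state search for a 16-spin Ising model.
# Placeholder couplings only; any instance of this size is searched exhaustively.
import itertools
import random

N = 16
random.seed(0)
h = [random.uniform(-1, 1) for _ in range(N)]
J = {(i, j): random.uniform(-1, 1) for i in range(N) for j in range(i + 1, N)}

def energy(s):
    """Classical Ising energy E = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j."""
    e = sum(h[i] * s[i] for i in range(N))
    e += sum(J[i, j] * s[i] * s[j] for (i, j) in J)
    return e

# Enumerate all 2^16 = 65,536 spin configurations with s_i in {-1, +1}.
best = min(itertools.product((-1, +1), repeat=N), key=energy)
print("ground state:", best, "energy:", energy(best))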

Geordie January 22, 2007 – Donald: (1) One of the most fundamental results of QC theory is that QCs can quadratically speed up unstructured search. I suggest you visit Eddy Farhi’s website at MIT and download and read some papers on AQC, or visit arxiv.org and search for adiabatic quantum computing. Most of the papers on AQCs are about solving NP-complete problems. (2) We’re not announcing scientific results. We’re announcing a technical capability. When we do announce scientific results they will be via the peer review process. (3) I would take that bet in a second, but unless you really are Donald Duck I would have difficulty collecting.
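For reference, the quadratic speedup in question is the standard Grover bound for unstructured search (a textbook statement, not a claim about any particular device). For a search over N unstructured candidates:

Q_classical(N) = \Theta(N), Q_Grover(N) = O(\sqrt{N})

and for a naive search over the N = 2^n assignments of an n-variable problem, O(\sqrt{2^n}) = O(2^{n/2}): still exponential, just with the exponent halved. That halved exponent is what the two sides argue over below.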

Donald Duck January 22, 2007 – (1) That is precisely my point. Quadratic speedup is not good enough to be competitive with current computing technology. (2) + (3) Well, it’s not completely clear, but it sounds like you are claiming the technical capability to perform adiabatic quantum computation. If this is true you need to prove experimentally that what you have is AQC and not some sophisticated form of thermal annealing. This is what I would really like to see.

Geordie January 22, 2007 – Donald: I suppose if a quadratic speed-up isn’t good enough, then a constant pre-factor speed-up must be even less useful… damn, thanks for pointing that out… now I can go back to using my trusty ole abacus. You should probably email Intel and AMD and let them know. Damn “computers” and their useless pre-factor speed-ups. I understand that presentation of scientific results in Science or Nature is appealing to the expert community, and we do have plans to do this. But our primary objective isn’t publishing science papers, it’s building quantum computers.

Donald Duck January 23, 2007 – Geordie: True, quadratic speedup for general-purpose computing would be nice—if the cost is not too outrageous. But that’s not what we are talking about here. AQC may give quadratic speedup for a few select algorithms, e.g. Grover’s search algorithm. There are also problems known to be exponentially hard using AQC. I think it’s very much still an open question how useful AQC is w.r.t. computing in general. Yet I also think that studying this will perhaps tell us something very fundamental about the nature of computing and possibly physical reality. However, I’m not convinced that there is now, or ever will be, a market for AQC. Back to your device. I read somewhere else that your technology works at -269C, i.e. 4K, so I take that to mean liquid-helium temperatures. Now from what I hear, individual s.c. flux qubits, including yours, have an energy gap E0 of about 10GHz or 0.5K. My guess is that a modest collection of coupled flux qubits as in your ‘processor’ has a minimum energy gap ~2 orders of magnitude smaller than E0. So the temperature is something like 3 orders of magnitude greater than the minimum energy gap. How is AQC possible here? How can you even initialize the system?
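The arithmetic behind this estimate, using standard constants (the 10 GHz and two-orders-of-magnitude figures are Donald's guesses above, not published device values):

E_0 = h f = (6.63 \times 10^{-34} J s)(10^{10} Hz) \approx 6.6 \times 10^{-24} J, so T_0 = E_0 / k_B \approx 0.48 K;
\Delta_min \sim E_0 / 100 corresponds to roughly 5 mK, and T_bath / \Delta_min \approx 4 K / 5 mK \approx 800,

i.e. about three orders of magnitude, which is the ratio his question turns on.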

Geordie January 23, 2007 – Donald: There are only two reasons why QCs will ever be built: quantum simulation and solving NP-complete problems. Both of these represent enormous markets. We’ve checked. Re. your questions about temperature: these are excellent questions. As a generalization of your question, think about ANY AQC operating on a “hard” (i.e. exponentially small gap) problem. Is there any physical system whose temperature is smaller than the gap at an anti-crossing of a hard problem? Of course not. All AQCs have the feature you’re describing, not just our approach. At an anticrossing, the temperature is ALWAYS going to be orders of magnitude larger than the gap. That’s why inclusion of a thermal environment is REQUIRED in order to analyze how to operate an “AQC” (although note that at the anticrossings it’s not really adiabatic anymore). In order to see what happens when T >> \Delta take a look at the TAQC (Thermally assisted adiabatic quantum computation) paper in the sidebar. Qualitatively, the effect of the large temperature is to thermalize the two energy levels involved in the anticrossing, reducing the probability of success by 1/2, which is of course completely acceptable.
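The factor-of-two statement can be made concrete with a Boltzmann estimate for just the two levels at the anticrossing (a generic two-level estimate, not a measurement on the device). With splitting \Delta, the equilibrium occupation of the lower level is

p_0 = 1 / (1 + e^{-\Delta / k_B T}) \to 1/2 as T >> \Delta,

so complete thermalization of those two levels costs at most a factor of two in single-run success probability, which can be recovered by repeating the computation.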

Uncle Scrooge January 23, 2007 – The unfortunate reality is that this is really just classical SFQ being used for what is effectively analog computation (i.e. system simulation). The fact that only Z coupling is achievable attests to this. Further, given that nowhere in any of your discussions does DWave ever mention quantum coherence, T2, phase evolution, or superpositions, one is forced to believe, as I said, that this is effectively a classical machine. Frankly, you really shouldn’t call your SQUIDs qubits, as they are no more qubits than are the SQUIDs in SFQ pulse generators. They are two-level systems (clockwise and counterclockwise propagating persistent currents), but the quantum nature of said system is never exploited! Indeed, given that all experimental results to date have shown coherence times of order ~10-100ns for Nb trilayer devices, I’d be shocked to learn that DWave had somehow overcome this technological hurdle ahead of the entire research community. If I’m incorrect, please publish something demonstrating quantum coherence using your “qubits” and prove me wrong. I’d be thrilled with such a response.

Geordie January 23, 2007 – Scrooge: ::sigh:: OK, I understand that for some reason you’re desperate to find some reason why what we’re doing can’t possibly be correct, which is fine. I’m familiar with this approach. It goes something like this: I can’t figure out how to do it, therefore you can’t figure out how to do it. Do you want me to point out the basic flaw in this reasoning, or can you figure it out all by yourself? As to your specific comments:
There is NO SFQ in this design. Zero. The qubits are compound-junction RF SQUIDs. The tunneling matrix elements for each qubit can be controlled by varying the flux applied through the CJJs for each qubit. This approach is well-known and is centrally featured in the superconducting AQC papers I’ve linked to here. As I mentioned earlier the Hamiltonian is of the X+Z+ZZ type. Notice the X? As to your comment that I haven’t talked about T2, etc.: as you yourself pointed out, scientific results belong in peer-reviewed scientific articles, not in a blog whose objective is to reach a broad audience with a message that isn’t buried under jargon. As I said before, our objective is to build quantum computers, not to publish science papers. If the latter supports the former, we’ll publish. If it doesn’t then it’s just a distraction for us.
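For readers who want the "X+Z+ZZ" Hamiltonian in explicit form, here is a small numerical sketch that builds H = \sum_i \Delta_i X_i + \sum_i h_i Z_i + \sum_{i<j} J_{ij} Z_i Z_j for a few qubits with numpy. It is illustrative only, with placeholder parameter values; it is not D-Wave code.

# Explicit construction of the X + Z + ZZ Hamiltonian for a small register.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)
I = np.eye(2)

def op(single, site, n):
    """Embed a single-qubit operator at `site` in an n-qubit register."""
    mats = [single if k == site else I for k in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def hamiltonian(Delta, h, J):
    """H = sum_i Delta_i X_i + sum_i h_i Z_i + sum_{i<j} J_ij Z_i Z_j."""
    n = len(h)
    H = sum(Delta[i] * op(X, i, n) + h[i] * op(Z, i, n) for i in range(n))
    for (i, j), Jij in J.items():
        H = H + Jij * op(Z, i, n) @ op(Z, j, n)
    return H

# Placeholder parameters for 3 qubits (illustrative only).
Delta = [1.0, 1.0, 1.0]
h = [0.2, -0.5, 0.1]
J = {(0, 1): 0.7, (1, 2): -0.4}
H = hamiltonian(Delta, h, J)
print(H.shape)  # (8, 8)

In this picture, controlling the \Delta_i via the flux through the compound junctions is what turns the X (tunneling) terms on and off.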

Uncle Scrooge January 24, 2007 – Geordie, I did not claim that you are using SFQ; I claim that the behavior of your system is akin to classical SFQ. My apologies if the word choice was confusing. My criticism of your approach has nothing to do with me figuring anything out, or an apparent claim that I have been unsuccessful in doing so. I don’t work in superconducting qubits. However, I know the field, and the MANY MANY players, as well as the challenges they face. You are claiming to have surpassed them all by more than an order of magnitude in the number of qubits you can control and manipulate. Such a claim warrants a publication, or a detailed press release, or something to suggest that you have actually just ushered in the computing revolution you are claiming. You may not be in the business of publishing science papers, but you are in applied science. The validity of technical claims in ANY applied science discipline is upheld by scientific scrutiny, generally facilitated by publishing scientific results. Would you prefer a webinar? Fine, but demonstrate the behavior you are claiming transparently for all to see. Further, you shouldn’t fall back on the fact that this is a blog. I have read DWave’s papers on the arxiv and find the same lack of anything quantum coherent in your published results (e.g. cond-mat/0509557, cond-mat/0501085). DWave and collaborators certainly know how to make quasi-classical superconducting electronics and SQUIDs, but where are the superposition states? The Rabi or Larmor oscillations? Anything suggesting that you are operating and controlling a coherent quantum system? I understand the premise of AQC, but again ask this: can DWave demonstrate that their simulator/processor can take an input superposition state and output the appropriate answers in superposition? If so, please provide the data and I will be most impressed and GLADLY give you the credit you are due. In stark contrast to your claim, I am not desperate to find some reason why what you’re doing is incorrect. Nothing could be further from the truth, but I do expect reasonable experimental evidence to support your very significant claims.

Geordie January 24, 2007 – Scrooge: Fair enough! While we obviously can’t release everything we’ve learned from the hardware, what we’re planning to submit for publication should clarify (at least) the issue of the role of QM in the operation of the system.

Uncle Scrooge January 24, 2007 – I’m looking forward to those publications, but have a follow-up question. Your statement that said publications will “clarify the role of QM [quantum mechanics] in the operation of the system” gives me pause. We understand the role of quantum mechanics in quantum computing; does the DWave system exploit QM in the same way? Or are the effects what one might term semi-classical? For example, QM plays a significant role in the operation of the laser, the FET, and classical SFQ logic, but none of these are coherent quantum devices. By this statement I mean they do not preserve and exploit quantum mechanical phase information. Accordingly, they cannot provide the parallelism which leads to exponential speedup in a quantum computer. How would one describe DWave’s system?

Geordie January 24, 2007 – Scrooge: I am not so sure you’re correct when you say that the role of QM in QC is understood. There are of course lots of things that are known, but there is still a lot of unexplored territory. The example you brought up about temperature & the role it plays in AQC is a great example. From the theory perspective, adding environments qualitatively changes the behavior of the system. I don’t believe that even this simple point is widely understood. There are lots of things like this where computation and physics are related in non-trivial ways, and where cross-overs between classical and quantum behavior may affect computational scaling in a way that isn’t just either/or. Also, just to be clear, I don’t believe that the system we’re building is going to exponentially speed up anything. The objective is the quadratic speed-up for unstructured search. Chris (and also Scrooge): The way we operate our AQCs is like this (X_i and Z_i are the Pauli X and Z matrices for qubit i):

(1) Turn up the tunneling term in the Hamiltonian to its maximum value (H=\sum_i \Delta_i X_i)
(2) Slowly turn the qubit biases and coupler strengths up to their target values (these define the particular problem instance); after this process the Hamiltonian is H = \sum_i (\Delta_i X_i + h_i Z_i) + \sum_{ij} J_{ij} Z_i Z_j
(3) Slowly turn the tunneling terms off; after this the Hamiltonian is H = \sum_i h_i Z_i + \sum_{ij} J_{ij} Z_i Z_j
(4) Read out the (binary digital) values of the qubits

OK so the point of this is that the qubits are only read out when they are in classical bit states by design. The readout devices are sensitive magnetometers called DC SQUIDs, which sense the direction of the magnetic field threading the qubit and hence its bit state. The computational model is explicitly set up so that superposition states are used only during the “annealing” stage; the readouts never fire during this step. Answers are encoded in bit strings. Each bit string corresponds to a particular solution. If the computation succeeds, the bit string returned ({s_i}) will minimize the energy E = \sum_i h_i s_i + \sum_{ij} J_{ij} s_i s_j. Hope this helps! Also re. the demo: there will be almost zero technical stuff in the demo. The focus is on describing how one would use the system as an application developer – what it does and how you interact with it. All of the science-type stuff, including details of operation, won’t be part of the demo.
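A minimal numerical sketch of the schedule just described, for two qubits with placeholder parameters (and with the two ramps collapsed into a single interpolation parameter s for brevity; this is an illustration of the model above, not of the actual hardware sequence). It follows the instantaneous spectrum as the tunneling term is turned off, then compares the end point against the classical minimum of E = \sum_i h_i s_i + \sum_{ij} J_{ij} s_i s_j.

# Two-qubit illustration of the annealing schedule and readout check.
import numpy as np

X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
I = np.eye(2)

Delta = [1.0, 1.0]      # tunneling amplitudes (placeholder values)
h = [0.3, -0.2]         # qubit biases (placeholder values)
J12 = 0.8               # coupler strength (placeholder value)

H_X = Delta[0] * np.kron(X, I) + Delta[1] * np.kron(I, X)                # step (1)
H_Z = h[0] * np.kron(Z, I) + h[1] * np.kron(I, Z) + J12 * np.kron(Z, Z)  # target of steps (2)-(3)

for s in np.linspace(0.0, 1.0, 11):
    H = (1 - s) * H_X + s * H_Z     # single-parameter stand-in for the ramps
    evals = np.linalg.eigvalsh(H)
    print(f"s = {s:.1f}   gap to first excited state = {evals[1] - evals[0]:.3f}")

# At s = 1, H is diagonal in the computational basis, so the ground state is a
# bit string. The readout (step 4) should return the string minimizing the
# classical energy E = h_1 s_1 + h_2 s_2 + J12 s_1 s_2 with s_i in {-1, +1}.
strings = [(a, b) for a in (+1, -1) for b in (+1, -1)]
best = min(strings, key=lambda t: h[0] * t[0] + h[1] * t[1] + J12 * t[0] * t[1])
print("classical minimum:", best)

At s = 1 the Hamiltonian contains only Z terms, so the readout step sees a classical bit string, as described above.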

Geordie January 24, 2007 – Hi everybody: As a favor to our non-technical audience, if you have any technical questions about the system, please email me directly at rose@dwavesys.com and I’ll try to help.

Also Donald and Scrooge: Sorry about cutting your posts; please email me directly and we can continue the discussion. I love the feedback, keep it coming!

20071121


Disruptive Technologies SC07 "The disruptive technologies panel serves as a forum for examining those technologies that may significantly reshape the world of high-performance computing (HPC) in the next five to fifteen years, but which are not common in today's systems. Generally speaking, a disruptive technology is a technological innovation or product that eventually overturns the existing dominant technology or product in the marketplace. Disruptive Technologies showcases these technologies in two panel sessions and in a competitively-selected exhibit showcase." This year's showcase featured quantum computing, optical interconnects, CMOS photonics, carbon nanotube memory, and software for massively-parallel multicore processors. The two panel sessions explored potential for disruptions in each major component of HPC architecture: processors, memory, interconnects, and storage.

Progress in Quantum Computing SC07 Panel discussion and HPCWire summary by DiVincenzo. "Hardware to perform quantum information processing is being developed on many fronts. Representing points of view from academia, government, and industry, this panel will give an indication of how work is progressing on quantum computing devices and systems, and what the theoretical possibilities and limitations are in this quantum arena." Panel members included David DiVincenzo (IBM), Wim Van Dam (UCSB), Mark Heiligman (ODNI), Geordie Rose (∂-wave), and Will Oliver (Lincoln Lab).

Rabi, Ramsey, fidelity, 1/f noise, T1, T2 MIT EECS Biercuk (MTO) brings back the "Donald Duck" technical thread calling for further clarification on fidelity, 1/f noise, T1, T2, Rabi, and Ramsey at the new Vatican. Farhi, Chuang, Shor, and Viola follow up with the same fundamental questions at Amin and Berkley's MIT talk, covered in further detail by Scott Aaronson at Shtetl-Optimized.