Common Logic Challenges in Quantum Computing


Summary

Common logic challenges in quantum computing refer to the difficulties of building, managing, and scaling quantum circuits and the special quantum bits, called qubits, that power them. These challenges make it hard to create reliable, large-scale quantum computers due to issues like high error rates, fabrication inconsistencies, and complex error-correction needs.

  • Improve fabrication precision: Focus on refining the manufacturing process for quantum components to reduce inconsistencies and make qubit behavior more predictable.
  • Combine error detection tools: Use a mix of error detection and correction strategies to identify and fix mistakes in quantum computations as you scale up the number of qubits.
  • Balance scaling and reliability: Carefully manage the trade-offs between adding more qubits and keeping error rates under control by exploring different error correction codes and qubit designs.

Summarized by AI based on LinkedIn member posts
  • Michaela Eichinger, PhD

    Product Solutions Physicist @ Quantum Machines. I break down quantum computing.

    Why can’t we scale superconducting qubits like transistors? Qubits, even those on the same chip or wafer, often show big frequency variations. Here’s the thing: qubit frequency is directly tied to the Josephson junction (JJ), the core circuit component in superconducting qubits. And while we’ve mastered transistor fabrication at nanometer precision, JJs remain a challenge.

    Why? Turns out, the issue isn’t what you’d expect. It’s something rarely discussed: grain boundary grooving. A Josephson junction is a trilayer (Al-AlOx-Al), typically made by oxidizing the bottom aluminum layer before depositing the top one. The problem is that aluminum grains form grooves at their boundaries. The oxide layer inherits this roughness, leading to an uneven thickness across the barrier. And that’s where the chaos begins: the barrier thickness sets the critical current, which in turn dictates the qubit frequency. Even tiny variations in the AlOx barrier have a big impact on hitting target frequencies.

    So, how do we fix it? We have quite a few levers to pull. For instance:

    • Flux tunability: We design the qubit as a SQUID loop so its frequency can be tuned with magnetic flux. This has become the state-of-the-art architecture; however, it adds to the wiring overhead (one flux line per qubit).

    • Post-fabrication trimming: We can use techniques like laser annealing to permanently trim the junction resistance after fabrication. This allows us to “edit” qubits to hit their target frequency.

    • Better materials: The field is relentlessly trying to improve the hardware stack. One example is growing epitaxial aluminum films. It’s the superior physical solution, but currently expensive and difficult to integrate into standard fabrication workflows.

    What are you doing to improve qubit reproducibility? Applied Materials, imec, Quantum Foundry Copenhagen, IQM Quantum Computers, Infineon Technologies, Intel Foundry, TSMC
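
    As a back-of-the-envelope illustration of the thickness-to-frequency chain above, here is a minimal Python sketch. It combines the standard transmon approximation f01 ≈ √(8·EJ·EC) − EC with an assumed exponential dependence of EJ on barrier thickness; the energies, decay length, and roughness figure are illustrative assumptions, not measured values for any real device.

    ```python
    import numpy as np

    # Minimal sketch: how AlOx barrier-thickness roughness scatters transmon
    # frequencies. All numbers below are illustrative assumptions.

    def transmon_f01(EJ_GHz, EC_GHz):
        """Approximate transmon 0->1 frequency in GHz: f01 ~ sqrt(8*EJ*EC) - EC,
        valid in the transmon limit EJ >> EC."""
        return np.sqrt(8.0 * EJ_GHz * EC_GHz) - EC_GHz

    EJ0, EC0 = 15.0, 0.25  # assumed nominal Josephson / charging energies (GHz)
    lam = 0.1              # assumed tunneling decay length of the barrier (nm)

    # Tunneling makes the critical current, and hence EJ, exponentially
    # sensitive to barrier thickness: EJ = EJ0 * exp(-dt/lam) for deviation dt.
    rng = np.random.default_rng(0)
    dt = rng.normal(0.0, 0.01, size=100_000)  # assumed 0.01 nm rms roughness
    f01 = transmon_f01(EJ0 * np.exp(-dt / lam), EC0)

    print(f"nominal f01: {transmon_f01(EJ0, EC0):.3f} GHz")
    print(f"f01 spread : {f01.std() * 1e3:.0f} MHz rms")
    # Since f01 ~ sqrt(EJ), df/f ~ -dt/(2*lam): sub-angstrom roughness alone
    # can scatter frequencies by hundreds of MHz.
    ```

    Under these assumptions, atomic-scale roughness alone produces a frequency spread of hundreds of MHz, which is why the flux-tuning, trimming, and materials levers above matter so much.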

  • Jay Gambetta

    Director of IBM Research and IBM Fellow

    The preparation of GHZ states is a common benchmark for quantum processors. These states are not only a test of device-wide entanglement; they are also useful resources in numerous quantum algorithms. Our team recently demonstrated a 120-qubit logical GHZ state on our Heron r2 processors, the largest reported on any hardware. This includes a 60-logical-qubit GHZ state on a single-shot basis (i.e., with no readout error mitigation).

    These experiments were enabled by error detection at both the device and circuit level. At the device level, we can use our knowledge of the device architecture to detect if some couplers fail during a particular shot. At the circuit level, we can use symmetries inherent in the GHZ state to detect if certain violations occur.

    The state preparation proceeds as follows: we first eliminate edges with bad CZ gates or bad readout (above a given threshold). Then, starting from a qubit at the center of the remaining graph, we perform a breadth-first search (BFS) to prepare a GHZ state in shallow depth. During the BFS, some nodes are randomly blocked in order to increase the chance of check qubits being found. Afterwards, any node that does not belong to the GHZ state but is adjacent to two of its qubits may act as a check in a ZZ parity measurement.

    We aim to maximize the "coverage" of the checks we can find through this randomization, while not increasing the depth beyond a given threshold above the best possible depth. The coverage is the number of locations in the circuit whose failure is detected by one of the checks, which we can compute efficiently using Pauli propagation. Therefore, we can predict exactly how many failures will be detected by our checks, and can optimize the layout for them.

    These experiments were performed by Ali Javadi and Simon Martiel. They also leverage many of the recent advances made by our team, including improved readout on Herons, characterization of coupler errors, and M3 readout error mitigation. For comparison, the recent demonstrations by Microsoft/Atom with a 24-qubit GHZ state, Quantinuum with a 50-qubit GHZ state, and Q-ctrl with a 75-qubit GHZ state (also on Heron) also relied on error detection.

    As we chart the path towards quantum advantage, what really matters is how large a quantum circuit we can run and whether we can trust that the method used gives accurate results. While GHZ states are simple to simulate, this work shows that error detection with post-selection is a potentially viable tool, alongside error mitigation and sample-based quantum diagonalization, for running experiments at the utility scale (100+ qubits) and building the set of trusted tools to search for quantum advantage on near-term devices. This is why we are pushing near-term methods such as error mitigation and error detection on utility-scale quantum computers.
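
    The BFS scheduling and check-qubit selection described above translate naturally into a small graph exercise. The sketch below uses networkx with a grid graph as a stand-in for a device coupling map; the lattice size, the blocked set, and all names are illustrative assumptions, not IBM's actual code.

    ```python
    import networkx as nx

    # Toy sketch: BFS from a central qubit gives a shallow CNOT tree for GHZ
    # preparation; blocked nodes adjacent to two GHZ qubits become candidates
    # for ZZ-parity checks used in post-selection.

    full = nx.grid_2d_graph(6, 6)          # stand-in for a device coupling map
    blocked = {(2, 3), (4, 1)}             # nodes randomly held out as checks
    G = full.subgraph(n for n in full if n not in blocked)

    root = nx.center(G)[0]                 # start the BFS near the graph center
    tree = nx.bfs_tree(G, root)            # each tree edge is one CNOT
    layer = nx.shortest_path_length(tree, root)
    print(f"GHZ qubits: {tree.number_of_nodes()}, CNOT layers: {max(layer.values())}")

    # In an ideal GHZ state, any pair of qubits has ZZ parity +1, so a blocked
    # node adjacent to >= 2 GHZ qubits can measure a ZZ check; shots where any
    # check fires are discarded (post-selection).
    checks = {v: [u for u in full[v] if u in tree] for v in blocked}
    checks = {v: nbrs[:2] for v, nbrs in checks.items() if len(nbrs) >= 2}
    print("ZZ checks:", checks)
    ```

    Randomly blocking nodes during the BFS, as the post describes, trades a small amount of extra depth for more check coverage.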

  • Laurent Prost

    Product Manager at Alice & Bob

    Google's Willow chip shows that quantum error correction is starting to work. Just "starting", because while the ~1e-3 error rate reached by Willow is good, it has been achieved by others without error correction. So, how do we get error rates we couldn't reach with physical qubits alone? Easy: you "just" add more qubits to your logical qubit.

    But because quantum errors come in two kinds (bit flips and phase flips), a 2D structure (the surface code) is usually required to correct both. This means that increasing protection against errors causes the number of qubits to grow quickly. With a surface code, protecting against 1 error at a time during an error correction cycle requires 17 qubits. 2 errors at a time? 49 qubits. 3 errors at a time? 97 qubits. This is the maximum Willow could achieve. This quadratic scaling leads Google to expect that reaching a 1e-6 error rate on a Willow-like chip will require some 1457 physical qubits (protecting against 13 errors at a time).

    And this is the reason why Alice & Bob is going for cat qubits instead. By reducing error correction from a 2D to a 1D problem, cat qubits make the scaling of error rates much more favorable. Even with the simplest error correction code (a repetition code), correcting one error at a time only requires 5 qubits. 2 errors? 9 qubits. 3 errors? 13 qubits. 13 errors? That is just 53 qubits instead of 1457!

    This situation is summarized in the graph below, taken from our white paper (link in the first comment), to which I added a point corresponding to the biggest Willow experiment. Now, to be fair, Alice & Bob still needs to release the results of even a 5-qubit experiment. But when that is done, there is a fair chance the error rates will quickly catch up with those achieved by Google and others, because so few additional qubits are required to improve error rates.

    There are big challenges on both sides. Mastering cat qubits is hard. Scaling chips is hard. But consistent progress is being made on both sides too. Anyway, I can't wait for the moment when I can add the Alice & Bob equivalent of the Willow experiment to the chart below. And for once, I hope it will be up and to the left!
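
    The qubit counts quoted above are consistent with the standard layouts: a distance-d surface code uses d² data plus d² − 1 measurement qubits, a distance-d repetition code uses d data plus d − 1 measurement qubits, and correcting t simultaneous errors requires distance d = 2t + 1. A quick sketch to reproduce the numbers, assuming those standard formulas:

    ```python
    # Reproducing the qubit counts quoted above from standard code-size
    # formulas: the surface code uses d^2 data + (d^2 - 1) ancilla qubits,
    # the repetition code uses d data + (d - 1) ancilla qubits, and
    # distance d = 2t + 1 corrects t simultaneous errors.

    def surface_code_qubits(t: int) -> int:
        d = 2 * t + 1
        return 2 * d * d - 1

    def repetition_code_qubits(t: int) -> int:
        d = 2 * t + 1
        return 2 * d - 1

    for t in (1, 2, 3, 13):
        print(f"t={t:>2}: surface={surface_code_qubits(t):>4}  "
              f"repetition={repetition_code_qubits(t):>3}")
    # t= 1: surface=  17  repetition=  5
    # t= 2: surface=  49  repetition=  9
    # t= 3: surface=  97  repetition= 13
    # t=13: surface=1457  repetition= 53
    ```

    The quadratic-versus-linear growth in these two functions is exactly the 1457-versus-53 comparison the post draws.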
