Researchers at the University of California, Riverside have published findings on how quantum computers can be scaled up by linking smaller chips together to create more powerful and reliable systems. The study, appearing in Physical Review A, explored the potential of “scalable” quantum architectures that use multiple small chips working as one unit.
Mohamed A. Shalby, a doctoral candidate in UCR’s Department of Physics and Astronomy and first author of the paper, explained: “Our work isn’t about inventing a new chip. It’s about showing that the chips we already have can be connected to create something much larger and still work. That’s a foundational shift in how we build quantum systems.”
The research focused on overcoming challenges associated with connecting separate quantum chips, especially when these are housed in different cryogenic refrigerators. Shalby noted: “In practice, connecting multiple smaller chips has been difficult. Connections between separate chips — especially those housed in separate cryogenic refrigerators — are much noisier than operations within a single chip. This increased noise can overwhelm the system and prevent error correction from working properly.”
Despite this challenge, simulations conducted by the team demonstrated that even when links between chips were up to ten times noisier than the chips themselves, the overall system could still detect and correct errors.
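The paper's actual analysis concerns surface-code teleportation interfaces, but the underlying intuition can be illustrated with a much simpler toy model: a bit-flip repetition code whose physical qubits are split across two "chips", with the qubit sitting on the inter-chip link suffering ten times the on-chip error rate. The sketch below uses hypothetical parameters chosen for illustration, not values from the study, and estimates the logical error rate by Monte Carlo with simple majority-vote decoding.

```python
import random

def logical_error_rate(distance, p_chip, link_ratio=10.0, shots=100_000):
    """Toy model: a distance-`distance` bit-flip repetition code whose middle
    qubit sits on a noisy inter-chip link with error rate link_ratio * p_chip.
    Decoding is majority vote; returns the estimated logical error rate."""
    link_index = distance // 2          # the qubit placed on the chip-to-chip link
    p_link = min(1.0, link_ratio * p_chip)
    failures = 0
    for _ in range(shots):
        flips = sum(
            random.random() < (p_link if i == link_index else p_chip)
            for i in range(distance)
        )
        if flips > distance // 2:       # majority vote fails -> logical error
            failures += 1
    return failures / shots

if __name__ == "__main__":
    # Hypothetical numbers for illustration only; not the paper's parameters.
    for d in (3, 5, 7, 9):
        print(f"distance {d}: logical error rate ~ {logical_error_rate(d, p_chip=0.01):.5f}")
```

Even with the ten-times-noisier link, the logical error rate in this toy keeps shrinking as the code grows, which is the qualitative behavior the UCR simulations report for the far more demanding surface-code case.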
“This means we don’t have to wait for perfect hardware to scale quantum computers,” said Shalby. “We now know that as long as each chip is operating with high fidelity, the links between them can be ‘good enough’ — not perfect — and we can still build a fault-tolerant system.”
The research emphasizes that building reliable quantum computers requires not just increasing qubit numbers but also ensuring fault tolerance through error-correction techniques such as the surface code. Shalby described how logical qubits are built from clusters of physical qubits so that the errors common in fragile quantum systems can be detected and corrected.
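The release does not spell out the simulation stack, but Google Quantum AI's open-source Stim simulator together with the PyMatching decoder is a common way to run this kind of surface-code memory experiment, in which a distance-by-distance patch of physical qubits encodes one logical qubit. The sketch below is a generic example of that workflow, not the paper's optimized teleportation-interface circuits; the distance, rounds, and depolarizing rates are placeholder values.

```python
import stim
import pymatching

def surface_code_logical_error_rate(distance: int, noise: float, shots: int = 50_000) -> float:
    """Simulate a rotated surface-code memory experiment: one logical qubit
    built from a distance x distance patch of physical qubits, with uniform
    depolarizing noise, decoded by minimum-weight perfect matching."""
    circuit = stim.Circuit.generated(
        "surface_code:rotated_memory_z",
        distance=distance,
        rounds=distance,
        after_clifford_depolarization=noise,
        after_reset_flip_probability=noise,
        before_measure_flip_probability=noise,
        before_round_data_depolarization=noise,
    )
    matcher = pymatching.Matching.from_detector_error_model(
        circuit.detector_error_model(decompose_errors=True)
    )
    detections, observables = circuit.compile_detector_sampler().sample(
        shots, separate_observables=True
    )
    predictions = matcher.decode_batch(detections)
    failures = (predictions[:, 0] != observables[:, 0]).sum()
    return failures / shots

if __name__ == "__main__":
    # Placeholder noise level; below the surface-code threshold, larger
    # patches should yield smaller logical error rates.
    for d in (3, 5, 7):
        print(d, surface_code_logical_error_rate(d, noise=0.002))
```

Repeating runs like this across many noise levels and code distances is, in spirit, what the UCR team did at much larger scale for their modular layouts.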
The study involved thousands of simulations of six different modular designs under varying levels of noise and error rates, with parameters based on Google’s current quantum infrastructure.
“Until now, most quantum milestones focused on increasing the sheer number of qubits,” said Shalby. “But without fault tolerance, those qubits aren’t useful. Our work shows we can build systems that are both scalable and reliable — now, not years from now.”
The project was supported by the National Science Foundation and used simulation tools developed by the Google Quantum AI team. The research team included Leonid P. Pryadko and Renyu Wang at UCR, as well as Denis Sedov at the University of Stuttgart.
The paper is titled “Optimized noise-resilient surface code teleportation interfaces.”