An exciting panel at Q2B 2024 discussed "Systems Integration and Support for On-Premises Deployments". Below are the panelists and the key insights.
1. Bob Sorensen – Chief Analyst for High-Performance Computing (HPC) at Hyperion Research.
2. Mandy Birch – CEO and Founder of TreQ, a manufacturing and systems engineering company focused on flexible, open-architecture quantum computer deployments.
3. Michael Brett (standing in for Sebastian Hassinger) – Leads Quantum Business Development at Amazon Web Services (AWS).
4. Dominic Ulmer – From ParTec, an HPC company now also active in quantum-HPC system integration.
5. Yuval Boger – Chief Commercial Officer at QuEra, a neutral-atom quantum computer manufacturer (also the panel moderator).
• Technical Familiarity and Control:
Bob Sorensen argued that having an on-premises quantum system allows organizations to “get down and dirty” with the hardware. This deep access is critical for research, stress testing, algorithm development, and ensuring data confidentiality.
• Hybrid Approach is Likely:
While on-prem offers advantages for performance and integration (especially around reducing latency and controlling data), cloud-based quantum access still holds a place for scaling, experimenting with different vendors’ hardware, and providing burst capacity. Michael Brett noted that the cloud remains essential even for supercomputing centers that buy on-prem quantum hardware, because they may need failover options or want to keep workforce development going prior to system delivery.
• Not an Either/Or Choice:
Sorensen emphasized that modern HPC sites no longer choose “on-prem OR cloud”; they use both, each suited for different workloads and stages of development.
• Different Physical Requirements:
Quantum computers today typically require specialized infrastructure (e.g., cryogenics, ultra-high-vacuum systems, optical setups) that does not neatly fit into traditional HPC data centers. Mandy Birch suggested that in the near term, quantum “clusters” might sit adjacent to data centers because their operational complexities (coolant types, specialized lasers, etc.) do not align with standard server-rack environments.
• Maintenance and SLAs:
HPC centers expect high uptime, standard warranties, and straightforward support contracts, similar to how classical systems function. Quantum providers may have to embed technical staff directly on-site—akin to how supercomputing centers in the early days had vendor engineers living next to the machine. Dominic noted some recent procurements in Europe require formal SLAs for quantum systems.
• Evolution Toward Specialized Quantum Data Centers:
AWS envisions “special-purpose quantum data centers” that meet the unique demands of quantum hardware and eventually connect into broader cloud infrastructure. Over time, the industry might integrate quantum hardware more seamlessly into standard HPC environments, but it is a long path.
• Still in the Research Phase:
Michael Brett observed that “the number one use case for quantum computing is researching quantum computers.” Real-world, production-level applications remain on the horizon, and HPC teams are actively seeking ways to integrate quantum as a specialized accelerator.
• Hybrid Workflows:
Many HPC users want to treat a quantum processor like they do a GPU or other accelerator—identifying “hot spots” in codes that might benefit from quantum speedup. Because quantum hardware remains limited, this integration must be carefully architected so that HPC scheduling (e.g., SLURM) and resource allocation systems can handle quantum tasks.
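To make the accelerator analogy concrete, below is a minimal Python sketch (an illustration, not something presented by the panel) of a hybrid workflow in which a classical pipeline offloads one small “hot spot” to a quantum backend. The submit_to_qpu function and its behavior are assumptions standing in for whatever scheduler integration and vendor SDK a real site would use.

```python
# Hypothetical sketch: treating a QPU like an accelerator inside a classical workflow.
# The quantum step is stubbed out; a real deployment would route it through the
# site's scheduler (e.g., a Slurm-managed resource) and a vendor SDK.

import random
import time

def classical_preprocess(data):
    """Classical 'cold' code: cheap on CPUs, no reason to offload."""
    return [x * 2.0 for x in data]

def submit_to_qpu(subproblem, shots=1000):
    """Placeholder for the quantum 'hot spot'.

    In practice this would build a circuit, submit it through the on-prem
    control stack or a cloud API, and wait for results. Here we fake the
    latency and return a dummy measurement histogram.
    """
    time.sleep(0.01)  # stand-in for queueing plus execution latency
    return {"0": shots // 2, "1": shots - shots // 2, "input_size": len(subproblem)}

def hybrid_pipeline(data):
    prepared = classical_preprocess(data)

    # Only the small, structured kernel goes to the QPU; everything else stays classical.
    hot_spot = prepared[:4]
    result = submit_to_qpu(hot_spot)

    # Classical post-processing of the measurement statistics.
    return result["1"] / (result["0"] + result["1"])

if __name__ == "__main__":
    print("estimated quantity:", hybrid_pipeline([random.random() for _ in range(1024)]))
```

In an HPC setting, the same pattern could be wrapped in a batch job so that the scheduler allocates the quantum resource alongside CPU nodes, much as it does for GPUs today.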
• Hardware-Awareness Is Still Necessary:
Quantum software remains hardware-specific, in contrast to classical systems where compilers and libraries handle much of the optimization. Mandy noted that to get performance from quantum hardware today, developers must understand the underlying qubit technology deeply.
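As a rough illustration of why that hardware awareness matters, the sketch below (an invented example, not taken from any real device specification) counts how many native two-qubit operations the same abstract circuit costs on two hypothetical backends with different native gate sets; the backend names and decomposition costs are assumptions for illustration only.

```python
# Hypothetical illustration: the "same" circuit has very different costs once it is
# expressed in a device's native gates. The gate sets and decomposition costs below
# are invented for illustration, not taken from any real hardware specification.

# Cost of one abstract two-qubit gate, measured in native two-qubit operations,
# on two made-up backends with different native gate sets.
DECOMPOSITION_COST = {
    "backend_A": {"CX": 1, "SWAP": 3, "CZ": 1},   # CX and CZ native; SWAP built from 3 CX
    "backend_B": {"CX": 2, "SWAP": 6, "CZ": 1},   # only CZ native; CX and SWAP cost more
}

def native_two_qubit_count(abstract_circuit, backend):
    """Sum the native two-qubit gate count for a list of abstract gate names."""
    costs = DECOMPOSITION_COST[backend]
    return sum(costs[gate] for gate in abstract_circuit)

if __name__ == "__main__":
    circuit = ["CX", "CX", "SWAP", "CZ"]  # abstract, hardware-agnostic description
    for backend in DECOMPOSITION_COST:
        print(backend, "->", native_two_qubit_count(circuit, backend), "native 2-qubit gates")
```

Until quantum compilers hide these differences the way classical toolchains do, developers still need to know which operations a given machine performs natively and map their algorithms accordingly.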
• Rapid Progress vs. Traditional Procurement:
Classical HPC systems often have 4- to 7-year life cycles, but quantum technology evolves faster. A quantum system purchased today could be effectively obsolete in fewer than four years if the qubit count or quality leaps ahead.
• Incremental Upgrades:
Data centers have historically done “midlife kickers,” swapping in new hardware components after 12–18 months. A similar model might emerge in quantum, where quantum computers are designed from the start to be partially upgradeable (e.g., adding qubit capacity or new control electronics).
• Open Architecture:
Mandy highlighted TreQ's open-architecture approach, which reuses components (e.g., dilution refrigerators, control systems) as hardware technology changes. This can lower capital costs by allowing incremental component upgrades rather than wholesale system replacements.
• Supply Chain Maturity:
Mandy underscored the industry’s shift from fully “vertical” quantum developers (where each vendor builds almost every component in-house) toward a more robust supply chain, similar to how classical HPC eventually standardized components. Hundreds of firms now provide subsystems for quantum computers, but coordination and standards remain works in progress.
• Geographic Deployment and Workforce:
If a vendor sells quantum systems worldwide, do they need a local team in every region to maintain them? In the early days, vendors will likely need either dedicated local staff or strong regional partnerships. Over time, improved automation and simpler maintenance could reduce this burden.
• On-Prem Demand Growing:
According to Dominic, more on-prem quantum installations exist than one might assume. The pace of announcements and interest is rising, but the industry must figure out how to scale services and support profitably, without placing a small army of engineers in every single locale.
• Hardware and Software Co-Evolution:
Bob Sorensen drew parallels with the slow uptake of GPUs in HPC when codebases had to be adapted to GPU architectures. That learning curve was steep, especially for legacy code with decades of validation. Quantum faces similar challenges but arguably on an even more profound level, given how different quantum hardware is.
• Supply-Constrained Future:
Michael Brett pointed out that once quantum computers become truly commercially useful, the industry will face a supply-constrained environment akin to the GPU shortage. Organizations will need the agility to run workloads wherever quantum resources are available, be it on-prem or in the cloud.
• Managing Expectations:
Bob cautioned vendors against overselling near-term capabilities with “we’re the world’s leader in [narrow metric].” End users not deeply versed in quantum can be confused or disappointed when marketing statements fail to align with real-world performance and timelines.
• Promising but Long-Term:
Most panelists agreed that progress has been impressive but that quantum is still a multi-year journey toward widespread commercial utility. In the interim, HPC centers and large cloud providers will keep experimenting, searching for early use cases and forging the ecosystem (supply chain, skill sets, standards) needed to scale up.
• Excitement and Talent:
Michael noted that despite attention-grabbing fields like generative AI, quantum computing continues to attract new talent. Scientists and engineers see quantum as a place to make a significant impact—important for building the broader ecosystem.
• The “Quantum Great” Dinner:
On a lighter note, each panelist named a historical physics figure (e.g., Feynman, Einstein, Planck) they’d most like to have dinner with, highlighting ongoing inspiration from the field’s foundational thinkers.
The panel painted a picture of a dynamic but still-maturing quantum computing market. While cloud-based access to quantum systems lowers the barrier to experimentation, on-premises installations have become increasingly desirable for organizations needing performance, data control, and integration with HPC environments. Nevertheless, the unique physical and operational demands of quantum machines pose nontrivial challenges for data centers and vendors alike.
Supply-chain evolution, flexible upgrade paths, careful software integration, and pragmatic business models will define the next phase of quantum’s integration into HPC. Although genuine “production quantum computing” remains a future milestone, the panel was broadly optimistic about the steady advances in hardware, software, workforce readiness, and investment that continue to push the technology forward.
Watch the full video below