
Insights from the Quantum Era - July 2024

July 18, 2024
Opinion

Algorithmic Fault Tolerance for Fast Quantum Computing

A major result published by QuEra and Harvard collaborators describes a new fault-tolerance method that achieves:

  • A factor-of-d speedup (where d is the code distance) relative to conventional fault-tolerance methods.
  • A 10-100x speedup when implemented in reconfigurable quantum computing architectures such as neutral atoms.

Such improvements will dramatically shorten the time to execute highly complex quantum algorithms.
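
To get a feel for what a factor-of-d speedup means in practice, here is a minimal back-of-the-envelope sketch. It is not the paper's model: the code distance, circuit size, and round time below are illustrative assumptions, and conventional fault tolerance is idealized as d syndrome-extraction rounds per logical operation versus a constant number for the new scheme.

```python
# Toy estimate of the runtime saving from a factor-of-d speedup.
# All numbers are illustrative placeholders, not figures from the paper.

def logical_runtime(n_ops: int, rounds_per_op: int, round_time_us: float) -> float:
    """Wall-clock time in seconds for n_ops logical operations."""
    return n_ops * rounds_per_op * round_time_us * 1e-6

code_distance = 25       # assumed code distance d
n_logical_ops = 10_000   # assumed number of logical operations
round_time_us = 100.0    # assumed syndrome-extraction round time

conventional = logical_runtime(n_logical_ops, code_distance, round_time_us)
algorithmic = logical_runtime(n_logical_ops, 1, round_time_us)  # O(1) rounds per op

print(f"conventional: {conventional:.1f} s")
print(f"algorithmic:  {algorithmic:.1f} s (speedup ~{conventional / algorithmic:.0f}x)")
```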

Read on arXiv

Large-scale quantum reservoir learning with an analog quantum computer

The paper presents a general-purpose, gradient-free, and scalable quantum reservoir learning algorithm that harnesses the quantum dynamics of QuEra's Aquila to process data.

The QuEra team carried out a successful quantum machine learning demonstration on up to 108 qubits, the largest quantum machine learning experiment to date.

The team further observed comparative quantum kernel advantage in learning tasks by constructing synthetic datasets based on the geometric differences between generated quantum and classical data kernels. This demonstrates the potential of utilizing classically intractable quantum correlations for effective machine learning.
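
The pipeline's key feature, fixed untrained dynamics feeding a linear readout, can be sketched classically. In the sketch below, a random nonlinear map stands in for Aquila's analog quantum dynamics; the map and the toy dataset are illustrative assumptions, and only the gradient-free structure mirrors the paper.

```python
# Minimal classical sketch of reservoir learning: a fixed, untrained
# nonlinear map ("reservoir") produces features; only a linear readout
# is fit, with no gradients. A random map stands in for the quantum
# dynamics used in the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (illustrative).
X = rng.uniform(-1, 1, size=(200, 4))
y = np.sin(X.sum(axis=1))

# Fixed random "reservoir": random projection + nonlinearity, never trained.
W_res = rng.normal(size=(4, 64))
features = np.tanh(X @ W_res)

# Gradient-free training: a single least-squares fit of the linear readout.
w_out, *_ = np.linalg.lstsq(features, y, rcond=None)

pred = features @ w_out
print("train MSE:", np.mean((pred - y) ** 2))
```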

Read on arXiv

RydbergGPT

While much attention has been paid recently to quantum computing techniques in machine learning, classical machine learning also has much to offer quantum computers. In this work, a team based in Canada and Sweden explores the use of GPT-style learning to predict measurement probabilities for calculations done on analog neutral-atom quantum computers. Their focus is the generation of measurement bitstrings for systems trained close to a quantum phase transition on the square lattice. A promising capacity to extrapolate results is observed, and the authors suggest that a similar methodology, trained on vastly more data and larger system sizes, could increase prediction capacity and foster co-design of AI and quantum technologies.
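
The core mechanism in GPT-style measurement models is autoregressive factorization: the probability of a bitstring is decomposed as p(s) = p(s_1) p(s_2 | s_1) ... and sampled one qubit at a time. The sketch below illustrates only this factorization, with a toy logistic conditional and random placeholder weights; the actual RydbergGPT uses a transformer conditioned on Hamiltonian parameters.

```python
# Toy autoregressive bitstring sampler: p(s) = prod_i p(s_i | s_<i).
# A logistic function of the preceding bits stands in for the trained
# transformer; the weights are random placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_qubits = 8
w = rng.normal(scale=0.5, size=(n_qubits, n_qubits))  # placeholder "model"

def sample_bitstring() -> list[int]:
    bits: list[int] = []
    for i in range(n_qubits):
        logit = sum(w[i, j] * bits[j] for j in range(i))
        p_one = 1.0 / (1.0 + np.exp(-logit))  # p(s_i = 1 | s_<i)
        bits.append(int(rng.random() < p_one))
    return bits

for _ in range(5):
    print("".join(map(str, sample_bitstring())))
```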

Read on arXiv


Single-shot quantum machine learning

A main challenge in quantum machine learning is the overhead incurred by the need to sample measurements, which are inherently probabilistic due to quantum mechanics, particularly in the prediction stage of the machine learning pipeline. The authors of this work consider situations where quantum learning can produce predictions with a single shot (or close to it) of quantum measurements. The work is purely analytical, but it establishes several relevant results, including that a minimal gate depth is necessary to achieve “single-shotness” and that this is not possible for generic learning models, i.e., for all possible input labels.
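
The sampling overhead in question is easy to see numerically: estimating an expectation value from N measurement shots carries statistical error of order 1/sqrt(N), so a prediction that must be read out to fine precision needs many repetitions unless the model's output distribution is concentrated. The toy simulation below, with an assumed outcome probability, illustrates this scaling; it is not taken from the paper.

```python
# Shot-noise scaling in QML readout: the error of a <Z> estimate from
# N shots shrinks like 1/sqrt(N). A "single-shot" model must instead
# concentrate its output so one measurement already reveals the label.
import numpy as np

rng = np.random.default_rng(2)
p_one = 0.7                        # assumed probability of measuring 1
true_expectation = 1 - 2 * p_one   # <Z> = p(0) - p(1)

for n_shots in (1, 10, 100, 10_000):
    shots = rng.random(n_shots) < p_one   # simulated binary measurements
    estimate = 1 - 2 * shots.mean()
    error = abs(estimate - true_expectation)
    print(f"N={n_shots:>6}: estimate={estimate:+.3f}, error={error:.3f}")
```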

Read on arXiv

Ambiguity Clustering: an accurate and efficient decoder for qLDPC codes

Encoding physical qubits into logical qubits is but one step in the operation of a fault-tolerant quantum computer. Strategies for decoding the error state of the qubits by extracting mid-circuit syndrome measurements, as well as for carrying out operations between logical qubits in a fault-tolerant way, are also crucial for efficient error correction. This work explores new ways of decoding the information in qLDPC codes, which are known for their efficiency in generating logical qubits out of physical ones. The new methodologies are benchmarked against standard generic decoders, demonstrating considerable improvements.
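
For readers new to decoding, the underlying problem is: given a parity-check matrix H and an observed syndrome s = He (mod 2), find a low-weight error e consistent with it. The brute-force toy below, on a 3-bit repetition code, only illustrates that problem statement; it is not the Ambiguity Clustering algorithm, whose point is to solve the same task efficiently on large qLDPC codes.

```python
# Toy syndrome decoding: recover a minimum-weight error e such that
# H @ e = syndrome (mod 2). Brute force is viable only at toy scale.
from itertools import combinations

import numpy as np

# Parity checks of the 3-bit repetition code: q0^q1 and q1^q2.
H = np.array([[1, 1, 0],
              [0, 1, 1]])

def decode(syndrome: np.ndarray) -> np.ndarray:
    """Return a minimum-weight error pattern matching the syndrome."""
    n = H.shape[1]
    for weight in range(n + 1):
        for flips in combinations(range(n), weight):
            e = np.zeros(n, dtype=int)
            e[list(flips)] = 1
            if np.array_equal(H @ e % 2, syndrome):
                return e
    raise ValueError("no matching error")

hidden_error = np.array([0, 1, 0])       # single flip on the middle qubit
syndrome = H @ hidden_error % 2
print("syndrome:", syndrome, "-> decoded:", decode(syndrome))
```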

Read on arXiv

