Noise Characterization
Measure, identify, and quantify noise sources on quantum hardware using pyqpanda3 simulators and the NoiseModel framework.
Problem
Real quantum processors deviate from ideal behavior in predictable ways. Before you can simulate realistic hardware or apply error mitigation, you need to characterize the noise on the device -- that is, measure its type, magnitude, and dependence on qubits, gates, and circuit depth.
Why Characterize Noise?
Understanding device performance. Raw calibration data (T1, T2, gate fidelities) are abstractions. Running targeted characterization circuits tells you how noise actually behaves on the device in practice.
Calibrating noise models. A noise model is only useful if its parameters match the hardware. Characterization experiments provide the data needed to populate NoiseModel objects with accurate error rates.
Enabling error mitigation. Techniques such as readout error correction, zero-noise extrapolation, and probabilistic error cancellation all require calibrated noise parameters.
Types of Noise on NISQ Devices
| Noise Source | Physical Origin | Effect on Computation |
|---|---|---|
| Gate errors | Imperfect control pulses, calibration drift | Wrong unitary applied after each gate |
| Readout errors | Amplifier noise, state discrimination errors | Measured bitstring differs from true state |
| Crosstalk | Unwanted coupling between qubits | Errors that depend on simultaneous gate activity |
| Decoherence (T1) | Energy relaxation to the environment | Excited-state population decays toward the ground state |
| Dephasing (T2) | Random phase kicks from the environment | Loss of superposition coherence |
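The two decoherence rows correspond to standard Kraus-operator channels. As a reference point independent of pyqpanda3, here is a minimal numpy sketch of how amplitude damping (T1) and phase damping (T2) act on a superposition state; the damping probabilities are illustrative, not calibrated values:

```python
import numpy as np

def apply_channel(rho, kraus_ops):
    """Apply a Kraus channel: rho -> sum_k K rho K^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

# Superposition state |+><+|, sensitive to both T1 and T2 noise
plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)

gamma = 0.1  # illustrative amplitude-damping probability (T1)
amp_damp = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]]),
            np.array([[0, np.sqrt(gamma)], [0, 0]])]

lam = 0.1    # illustrative phase-damping probability (T2)
phase_damp = [np.array([[1, 0], [0, np.sqrt(1 - lam)]]),
              np.array([[0, 0], [0, np.sqrt(lam)]])]

rho_t1 = apply_channel(plus, amp_damp)
rho_t2 = apply_channel(plus, phase_damp)

# Amplitude damping shifts population toward |0>; phase damping leaves
# populations untouched but shrinks the off-diagonal coherence.
print("T1 populations:", np.real(np.diag(rho_t1)), "coherence:", abs(rho_t1[0, 1]))
print("T2 populations:", np.real(np.diag(rho_t2)), "coherence:", abs(rho_t2[0, 1]))
```

Both channels shrink the coherence term, but only amplitude damping biases the populations, which is why T1 errors skew measured bitstrings toward the all-zeros state.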
Solution
Characterization proceeds in four stages. Each stage isolates a different noise mechanism and produces a parameter that feeds into the NoiseModel.
Step 1: Measure Readout Error Probabilities
Prepare each computational basis state (|0> by doing nothing, |1> by applying an X gate), measure it many times, and record the conditional probabilities P(measure j | prepared i). These form the readout confusion matrix.
Step 2: Characterize Single-Qubit Gate Errors
Apply a long sequence of random single-qubit Clifford gates followed by an inverse, and measure the survival probability. The exponential decay rate gives the average gate error. This is randomized benchmarking (RB).
Step 3: Characterize Two-Qubit Gate Errors
Run the same RB protocol with interleaved two-qubit gates (CNOT or CZ). The extra decay rate compared to single-qubit RB isolates the two-qubit gate error.
Step 4: Build a NoiseModel
Feed the measured readout matrix, depolarizing error rates, and decoherence parameters into a NoiseModel object and validate it against independent test circuits.
Code
1. Measuring Readout Error Probabilities
The following code prepares |0> and |1> in turn, measures each with many shots, and estimates the 2x2 readout confusion matrix for a single qubit.
from pyqpanda3 import core
def measure_readout_confusion(qubit, shots=10000):
"""Estimate the 2x2 readout confusion matrix for one qubit.
Returns a list of lists: probs[i][j] = P(measure j | prepared i).
"""
confusion = [[0.0, 0.0], [0.0, 0.0]]
for prepared in [0, 1]:
prog = core.QProg()
if prepared == 1:
prog << core.X(qubit)
prog << core.measure([qubit], [0])
machine = core.CPUQVM()
machine.run(prog, shots=shots)
counts = machine.result().get_counts()
p0 = counts.get("0", 0) / shots
p1 = counts.get("1", 0) / shots
confusion[prepared][0] = p0
confusion[prepared][1] = p1
return confusion
# Calibrate on qubit 0
matrix = measure_readout_confusion(qubit=0, shots=20000)
print("Readout confusion matrix (rows=prepared, cols=measured):")
print(f" |0> -> |0>: {matrix[0][0]:.4f} |0> -> |1>: {matrix[0][1]:.4f}")
print(f" |1> -> |0>: {matrix[1][0]:.4f} |1> -> |1>: {matrix[1][1]:.4f}")
2. Readout Calibration Under a Known Noise Model
To validate the calibration procedure, inject a known readout error and verify that the measured confusion matrix matches expectations.
from pyqpanda3 import core
# Define the ground-truth readout error matrix
true_probs = [[0.96, 0.04], # P(measure 0|0)=0.96, P(measure 1|0)=0.04
[0.03, 0.97]] # P(measure 0|1)=0.03, P(measure 1|1)=0.97
# Build a noise model that contains ONLY readout error
noise = core.NoiseModel()
noise.add_read_out_error(true_probs, 0)
# Run the calibration experiment under noise
shots = 50000
confusion_measured = [[0.0, 0.0], [0.0, 0.0]]
for prepared in [0, 1]:
prog = core.QProg()
if prepared == 1:
prog << core.X(0)
prog << core.measure([0], [0])
machine = core.CPUQVM()
machine.run(prog, shots=shots, model=noise)
counts = machine.result().get_counts()
confusion_measured[prepared][0] = counts.get("0", 0) / shots
confusion_measured[prepared][1] = counts.get("1", 0) / shots
print("True matrix: ", true_probs)
print("Measured matrix: ", confusion_measured)
# The measured values should closely match the true values
3. Estimating Gate Error Rates by Comparing Ideal vs. Noisy Circuits
A straightforward way to estimate the depolarizing error rate is to run the same circuit with and without noise, then compute the total variation distance (TVD) between the two output distributions.
from pyqpanda3 import core
def total_variation_distance(counts_a, counts_b, shots):
"""Compute the total variation distance between two count distributions."""
all_keys = set(list(counts_a.keys()) + list(counts_b.keys()))
tvd = 0.0
for key in all_keys:
p_a = counts_a.get(key, 0) / shots
p_b = counts_b.get(key, 0) / shots
tvd += abs(p_a - p_b)
return tvd / 2.0
def estimate_error_rate(num_layers, error_prob, shots=5000):
    """Build a layered H/CNOT circuit, run it with and without depolarizing
    noise, and return the total variation distance between the outputs."""
prog = core.QProg()
for layer in range(num_layers):
prog << core.H(0)
prog << core.CNOT(0, 1)
prog << core.measure([0, 1], [0, 1])
# Ideal run
machine = core.CPUQVM()
machine.run(prog, shots=shots)
ideal_counts = machine.result().get_counts()
# Noisy run
noise = core.NoiseModel()
noise.add_all_qubit_quantum_error(
core.depolarizing_error(error_prob), core.GateType.H
)
noise.add_all_qubit_quantum_error(
core.depolarizing_error(error_prob), core.GateType.CNOT
)
machine.run(prog, shots=shots, model=noise)
noisy_counts = machine.result().get_counts()
tvd = total_variation_distance(ideal_counts, noisy_counts, shots)
return tvd
# Sweep error probabilities and observe the resulting TVD
for p in [0.001, 0.005, 0.01, 0.02, 0.05]:
tvd = estimate_error_rate(num_layers=4, error_prob=p, shots=10000)
    print(f"Error prob {p:.3f} -> TVD = {tvd:.4f}")
4. Running RB-Like Experiments with Random Clifford Circuits
Randomized benchmarking measures how quickly gate errors accumulate by applying random gate sequences of increasing length and observing the exponential decay of the survival probability. The decay constant is directly related to the average gate fidelity.
import random
from pyqpanda3 import core
# Clifford gate set: gates whose compositions are also Clifford
CLIFFORD_GATES = [
(core.GateType.H, lambda q: core.H(q)),
(core.GateType.X, lambda q: core.X(q)),
(core.GateType.Y, lambda q: core.Y(q)),
(core.GateType.Z, lambda q: core.Z(q)),
(core.GateType.S, lambda q: core.S(q)),
]
def build_rb_circuit(qubit, num_cliffords):
    """Build a simplified RB sequence: random gates followed by the same
    gates applied in reverse order as a heuristic inverse. H, X, Y, and Z
    are self-inverse, but S is not (S*S = Z), so sequences containing S do
    not return exactly to |0>. Quantitative RB requires exact Clifford
    inversion; see the Explanation section.
    """
prog = core.QProg()
gates_applied = []
for _ in range(num_cliffords):
_, gate_fn = random.choice(CLIFFORD_GATES)
prog << gate_fn(qubit)
gates_applied.append(gate_fn)
# Apply gates in reverse to approximately invert
for gate_fn in reversed(gates_applied):
prog << gate_fn(qubit)
prog << core.measure([qubit], [0])
return prog
def run_rb_experiment(error_prob, sequence_lengths, shots=5000):
"""Run RB-like experiments at a given depolarizing error rate and
return the survival probability for each sequence length."""
noise = core.NoiseModel()
noise.add_all_qubit_quantum_error(
core.depolarizing_error(error_prob), core.GateType.H
)
noise.add_all_qubit_quantum_error(
core.depolarizing_error(error_prob), core.GateType.X
)
noise.add_all_qubit_quantum_error(
core.depolarizing_error(error_prob), core.GateType.Y
)
noise.add_all_qubit_quantum_error(
core.depolarizing_error(error_prob), core.GateType.Z
)
noise.add_all_qubit_quantum_error(
core.depolarizing_error(error_prob), core.GateType.S
)
results = {}
for length in sequence_lengths:
survival_count = 0
for trial in range(10):
random.seed(trial)
prog = build_rb_circuit(qubit=0, num_cliffords=length)
machine = core.CPUQVM()
machine.run(prog, shots=shots // 10, model=noise)
counts = machine.result().get_counts()
survival_count += counts.get("0", 0)
survival_prob = survival_count / shots
results[length] = survival_prob
return results
# Run RB at two different error rates
for err in [0.01, 0.05]:
results = run_rb_experiment(
error_prob=err,
sequence_lengths=[1, 3, 5, 10, 20, 40],
shots=10000
)
print(f"\nError rate = {err}")
for length, prob in sorted(results.items()):
        print(f" Length {length:3d}: survival = {prob:.4f}")
5. Sweeping Circuit Depth to Measure Error Accumulation
This experiment quantifies how errors accumulate as circuit depth increases. It is a simpler alternative to full RB and gives a direct view of the noise impact on real circuits.
from pyqpanda3 import core
def build_depth_sweep_circuit(num_qubits, depth):
"""Build a layered circuit of given depth using H and CNOT gates."""
prog = core.QProg()
for layer in range(depth):
for q in range(num_qubits):
prog << core.H(q)
for q in range(0, num_qubits - 1, 2):
prog << core.CNOT(q, q + 1)
prog << core.measure(
list(range(num_qubits)), list(range(num_qubits))
)
return prog
# Define a realistic noise model
noise = core.NoiseModel()
noise.add_all_qubit_quantum_error(
core.depolarizing_error(0.002), core.GateType.H
)
noise.add_all_qubit_quantum_error(
core.depolarizing_error(0.015), core.GateType.CNOT
)
noise.add_all_qubit_read_out_error([[0.97, 0.03], [0.04, 0.96]])
# Sweep depth and record fidelity
num_qubits = 3
shots = 8000
print(f"{'Depth':>6s} {'Ideal-Key Fraction':>20s} {'TVD':>8s} {'Outcome Count':>14s}")
for depth in [1, 2, 4, 8, 16, 32, 64]:
prog = build_depth_sweep_circuit(num_qubits, depth)
# Ideal reference
machine = core.CPUQVM()
machine.run(prog, shots=shots)
ideal = machine.result().get_counts()
# Noisy
machine.run(prog, shots=shots, model=noise)
noisy = machine.result().get_counts()
# Total variation distance
all_keys = set(list(ideal.keys()) + list(noisy.keys()))
tvd = sum(
abs(ideal.get(k, 0) / shots - noisy.get(k, 0) / shots)
for k in all_keys
) / 2.0
# Fraction of outcomes matching ideal distribution keys
ideal_fraction = sum(
noisy.get(k, 0) for k in ideal.keys()
) / shots
    print(f"{depth:6d} {ideal_fraction:20.4f} {tvd:8.4f} {len(noisy):14d}")
6. Building a NoiseModel from Characterization Data
Once you have gathered calibration data, assemble it into a single NoiseModel object. This example shows a complete end-to-end pipeline from raw metrics to a validated model.
from pyqpanda3 import core
import math
def build_noise_model_from_calibration(
single_qubit_gate_error,
two_qubit_gate_error,
readout_error_matrix,
t1_us=None,
t2_us=None,
gate_time_1q_ns=20,
gate_time_2q_ns=300,
):
"""Construct a NoiseModel from hardware characterization data.
Parameters
----------
single_qubit_gate_error : float
Average depolarizing error probability for single-qubit gates,
typically obtained from single-qubit RB.
two_qubit_gate_error : float
Average depolarizing error probability for two-qubit gates,
typically obtained from two-qubit interleaved RB.
readout_error_matrix : list[list[float]]
2x2 confusion matrix where entry [i][j] is
P(measure j | prepared i).
t1_us : float, optional
T1 relaxation time in microseconds. If provided, amplitude
damping errors are added.
t2_us : float, optional
T2 dephasing time in microseconds. If provided, phase
damping errors are added.
gate_time_1q_ns : float
Single-qubit gate duration in nanoseconds.
gate_time_2q_ns : float
Two-qubit gate duration in nanoseconds.
"""
noise = core.NoiseModel()
# --- Gate errors from RB ---
noise.add_all_qubit_quantum_error(
core.depolarizing_error(single_qubit_gate_error), [
core.GateType.H,
core.GateType.X,
core.GateType.Y,
core.GateType.Z,
core.GateType.S,
core.GateType.T,
]
)
noise.add_all_qubit_quantum_error(
core.depolarizing_error(two_qubit_gate_error),
core.GateType.CNOT
)
# --- Decoherence from T1 (amplitude damping) ---
if t1_us is not None:
gamma_1q = 1.0 - math.exp(-(gate_time_1q_ns / 1000.0) / t1_us)
gamma_2q = 1.0 - math.exp(-(gate_time_2q_ns / 1000.0) / t1_us)
ad_1q = core.amplitude_damping_error(gamma_1q)
ad_2q = core.amplitude_damping_error(gamma_2q)
noise.add_all_qubit_quantum_error(ad_1q, [
core.GateType.H, core.GateType.X
])
noise.add_all_qubit_quantum_error(
ad_2q, core.GateType.CNOT
)
# --- Dephasing from T2 (phase damping) ---
if t2_us is not None:
lam_1q = 1.0 - math.exp(-(gate_time_1q_ns / 1000.0) / t2_us)
lam_2q = 1.0 - math.exp(-(gate_time_2q_ns / 1000.0) / t2_us)
pd_1q = core.phase_damping_error(lam_1q)
pd_2q = core.phase_damping_error(lam_2q)
noise.add_all_qubit_quantum_error(pd_1q, [
core.GateType.H, core.GateType.T
])
noise.add_all_qubit_quantum_error(
pd_2q, core.GateType.CNOT
)
# --- Readout errors ---
noise.add_all_qubit_read_out_error(readout_error_matrix)
return noise
# Build a model using typical superconducting device parameters
noise_model = build_noise_model_from_calibration(
single_qubit_gate_error=0.001,
two_qubit_gate_error=0.015,
readout_error_matrix=[[0.97, 0.03], [0.04, 0.96]],
t1_us=80.0,
t2_us=60.0,
gate_time_1q_ns=20,
gate_time_2q_ns=300,
)
# Validate: run a Bell state and check the output distribution
prog = core.QProg()
prog << core.H(0) << core.CNOT(0, 1)
prog << core.measure([0, 1], [0, 1])
machine = core.CPUQVM()
machine.run(prog, shots=20000, model=noise_model)
counts = machine.result().get_counts()
total = sum(counts.values())
bell_fraction = (counts.get("00", 0) + counts.get("11", 0)) / total
print(f"Bell state fidelity under noise: {bell_fraction:.4f}")
print(f"Full distribution: {counts}")
7. Density Matrix Analysis of Noise Impact
For small circuits, the DensityMatrixSimulator gives exact probabilities under noise, avoiding shot noise. This is useful for validating characterization results at high precision.
from pyqpanda3 import core
import numpy as np
# Prepare a Bell state circuit (no measurement)
prog = core.QProg()
prog << core.H(0) << core.CNOT(0, 1)
# Ideal density matrix
dm_sim = core.DensityMatrixSimulator()
dm_sim.run(prog)
ideal_probs = dm_sim.state_probs()
ideal_dm = dm_sim.density_matrix()
print("=== Ideal Bell State ===")
print(f"Probabilities: {ideal_probs}")
purity_ideal = np.trace(np.dot(ideal_dm, ideal_dm)).real
print(f"Purity: {purity_ideal:.6f}")
# Analyze each noise channel independently
channels = [
("Depolarizing 1%", core.depolarizing_error(0.01)),
("Amplitude Damping 2%", core.amplitude_damping_error(0.02)),
("Phase Damping 2%", core.phase_damping_error(0.02)),
("Pauli X 3%", core.pauli_x_error(0.03)),
("Pauli Z 3%", core.pauli_z_error(0.03)),
]
print("\n=== Noise Channel Comparison ===")
for name, err in channels:
noise = core.NoiseModel()
noise.add_all_qubit_quantum_error(err, core.GateType.H)
noise.add_all_qubit_quantum_error(err, core.GateType.CNOT)
dm_sim.run(prog, noise)
noisy_dm = dm_sim.density_matrix()
noisy_probs = dm_sim.state_probs()
purity = np.trace(np.dot(noisy_dm, noisy_dm)).real
print(f"\n{name}:")
print(f" Probabilities: {noisy_probs}")
print(f" Purity: {purity:.6f}")
    print(f" Purity loss: {purity_ideal - purity:.6f}")
Explanation
Readout Error Matrix Calibration
Readout errors are among the most straightforward to characterize because they require no entangling gates. The procedure is:
- Prepare |0> (do nothing) and measure N times.
- Prepare |1> (apply X) and measure N times.
- Build the 2x2 assignment matrix A, where A[i][j] = P(measure j | prepared i).

For multiple qubits, the simplest approach assumes uncorrelated readout errors, so the full assignment matrix is the tensor product of the single-qubit matrices. This factorization is a simplification -- correlated readout errors do occur on some hardware -- but it is a standard starting point.

Once A is known, it can be passed to NoiseModel.add_read_out_error to simulate readout noise, or inverted to mitigate readout errors in measured count data.
Randomized Benchmarking Theory
Randomized benchmarking (RB) is the standard protocol for estimating average gate fidelity without requiring full state tomography. The key insight is:
- Apply a sequence of m random Clifford gates C_1, ..., C_m.
- Compute the inverse C_inv = (C_m ... C_1)^(-1) and append it.
- In a noiseless setting the qubit returns to its initial state.
- With noise, the survival probability decays exponentially:
  P(m) = A * p^m + B
  where:
  - P(m) is the probability of measuring the initial state after m Cliffords.
  - p is the depolarizing parameter, related to the average gate fidelity by F_avg = 1 - (d - 1)(1 - p) / d, where d = 2^n.
  - A and B are constants determined by state preparation and measurement (SPAM) errors.
- The effective error rate per Clifford is r = (d - 1)(1 - p) / d.

The exponential form holds because each Clifford gate is drawn uniformly at random from the Clifford group, so the average noise channel after twirling over the group is depolarizing, regardless of the detailed structure of the underlying noise.
Extracting Error Rates from Exponential Decay
Given RB data {(m, P(m))}, fit the model P(m) = A * p^m + B to extract the decay parameter p.

For a single qubit (d = 2), the error per Clifford is r = (1 - p) / 2 and the average gate fidelity is F_avg = 1 - r.

The depolarizing error probability used in core.depolarizing_error(p_dep) is related to the average gate fidelity by:
  p_dep = 2 * (1 - F_avg)
for a single qubit. So given an RB-measured fidelity, you can compute the parameter to feed directly into the noise model.
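The extraction can be sketched end to end with synthetic data. With three equally spaced sequence lengths the SPAM constants A and B cancel out of a simple ratio, so no curve-fitting library is needed; the A, B, and p values below are hypothetical:

```python
# Synthetic RB data generated from the model P(m) = A * p**m + B
# (A, B, p_true are hypothetical; A and B absorb SPAM errors)
A, B, p_true = 0.48, 0.50, 0.985
lengths = [1, 11, 21]                 # equally spaced sequence lengths
P = [A * p_true ** m + B for m in lengths]

# With three equally spaced points the SPAM constants cancel:
#   (P2 - P3) / (P1 - P2) = p ** step
step = lengths[1] - lengths[0]
p_est = ((P[1] - P[2]) / (P[0] - P[1])) ** (1.0 / step)

# Convert to per-Clifford error and depolarizing parameter (d = 2)
d = 2
r = (d - 1) / d * (1 - p_est)         # error per Clifford
F_avg = 1 - r                         # average gate fidelity
p_dep = 2 * (1 - F_avg)               # parameter for core.depolarizing_error

print(f"p = {p_est:.4f}, F_avg = {F_avg:.5f}, p_dep = {p_dep:.5f}")
```

With real, shot-noisy data you would instead fit all sequence lengths at once (e.g. with a nonlinear least-squares routine), but the ratio trick shows why the decay rate is immune to SPAM offsets.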
Building Device-Specific Noise Models
A practical noise model combines three error sources that are each characterized independently: depolarizing gate errors, decoherence (amplitude and phase damping), and readout errors.
The parameters that feed into each error channel come from:
| Parameter | Source | Formula |
|---|---|---|
| Single-qubit depolarizing rate p_1q | Single-qubit RB fidelity F_1q | p_1q = 2(1 - F_1q) |
| Two-qubit depolarizing rate p_2q | Two-qubit interleaved RB fidelity F_2q | p_2q = (4/3)(1 - F_2q) |
| Amplitude damping gamma | T1 + gate time | gamma = 1 - exp(-t_gate / T1) |
| Phase damping lambda | T2 + gate time | lambda = 1 - exp(-t_gate / T2) |
| Readout confusion matrix | Readout calibration | Direct measurement |
Limitations of Simple Characterization Approaches
The methods described above make several simplifying assumptions that are worth understanding:
Depolarizing assumption. Real gate noise is not perfectly depolarizing. It may be coherent (systematic over-rotation), correlated (multi-qubit crosstalk), or non-Markovian. The depolarizing model averages all these into a single parameter, which is useful for performance prediction but may miss structured errors.
Independent qubit errors. The standard NoiseModel applies errors independently to each qubit. On real hardware, simultaneous gate operations can produce correlated errors that this model does not capture.

Gate-independent noise. Applying the same depolarizing error rate to all single-qubit gates ignores the fact that different gates have different durations and sensitivities. Use per-gate error rates when available.

State-dependent readout. The 2x2 readout matrix assumes measurement error depends only on the prepared state, not on neighboring qubit states. Some devices exhibit state-dependent readout crosstalk.

SPAM-robustness of RB. Standard RB is designed to be insensitive to state preparation and measurement (SPAM) errors because they only affect the constants A and B, not the decay rate p. However, very large SPAM errors can bias the fit, especially at short sequence lengths.

Simplified RB in simulation. The code examples in this guide use approximate inverse circuits rather than computing the exact Clifford inverse. For quantitative error rate extraction, a proper Clifford group implementation with exact inversions is required.
For production-grade characterization of real hardware, consider using interleaved RB, cross-entropy benchmarking (XEB), or gate set tomography (GST), which provide more detailed noise models at the cost of additional measurement overhead.
8. Cross-Entropy Benchmarking
Cross-entropy benchmarking (XEB) is a technique for estimating the fidelity of a quantum circuit by comparing the output distribution from a noisy execution against the ideal distribution. Unlike randomized benchmarking, XEB works well with random non-Clifford circuits and can be scaled to many qubits. The linear cross-entropy fidelity is defined as:
  F_xeb = 2^n * sum_x P_ideal(x) * P_noisy(x) - 1
where n is the number of qubits, P_ideal(x) is the ideal probability of bitstring x, and P_noisy(x) is its probability under the noisy execution. A noiseless execution of a typical random circuit yields F_xeb near 1, while a fully depolarized (uniform) output yields F_xeb near 0.
Problem
You want to quantify how much a given noise model degrades the output of a random quantum circuit, and you need a metric that captures the full distribution -- not just the probability of a single basis state.
Solution
Run the same random circuit on both an ideal simulator and a noisy simulator, compute the linear cross-entropy between the two output distributions, and interpret the result as a circuit-level fidelity.
Code
import random
import math
from pyqpanda3 import core
def build_random_circuit(num_qubits, depth, seed=42):
    """Build a random circuit with H gates, Rz rotations, and CNOT layers.
    The random Rz angles spread the ideal output over many basis states
    rather than concentrating it on a few.
    """
    random.seed(seed)
    prog = core.QProg()
    for layer in range(depth):
        for q in range(num_qubits):
            angle_z = random.uniform(0, 2 * math.pi)
            prog << core.H(q)
            prog << core.RZ(q, angle_z)
for q in range(0, num_qubits - 1):
prog << core.CNOT(q, q + 1)
prog << core.measure(
list(range(num_qubits)), list(range(num_qubits))
)
return prog
def counts_to_probs(counts, shots):
"""Convert a counts dictionary to a probability dict."""
return {k: v / shots for k, v in counts.items()}
def linear_cross_entropy(ideal_probs, noisy_probs, num_qubits):
"""Compute the linear cross-entropy fidelity.
F_xeb = 2^n * sum_x P_ideal(x) * P_noisy(x) - 1
    Returns a value clipped to be non-negative; the raw estimate can be
    slightly negative due to statistical fluctuations.
"""
dim = 2 ** num_qubits
weighted_sum = 0.0
for bitstring, p_noisy in noisy_probs.items():
p_ideal = ideal_probs.get(bitstring, 0.0)
weighted_sum += p_ideal * p_noisy
fidelity = dim * weighted_sum - 1.0
return max(fidelity, 0.0)
# Define a noise model with realistic parameters
noise = core.NoiseModel()
noise.add_all_qubit_quantum_error(
core.depolarizing_error(0.003), core.GateType.H
)
noise.add_all_qubit_quantum_error(
core.depolarizing_error(0.02), core.GateType.CNOT
)
num_qubits = 3
shots = 20000
print(f"{'Depth':>6s} {'XEB Fidelity':>14s} {'TVD':>8s}")
print("-" * 34)
for depth in [1, 2, 4, 8, 16, 32]:
prog = build_random_circuit(num_qubits, depth, seed=depth)
# Ideal execution
machine = core.CPUQVM()
machine.run(prog, shots=shots)
ideal_counts = machine.result().get_counts()
ideal_probs = counts_to_probs(ideal_counts, shots)
# Noisy execution
machine.run(prog, shots=shots, model=noise)
noisy_counts = machine.result().get_counts()
noisy_probs = counts_to_probs(noisy_counts, shots)
# XEB fidelity
xeb_f = linear_cross_entropy(ideal_probs, noisy_probs, num_qubits)
# Total variation distance for comparison
all_keys = set(list(ideal_counts.keys()) + list(noisy_counts.keys()))
tvd = sum(
abs(ideal_counts.get(k, 0) / shots - noisy_counts.get(k, 0) / shots)
for k in all_keys
) / 2.0
    print(f"{depth:6d} {xeb_f:14.4f} {tvd:8.4f}")
Explanation
The linear cross-entropy fidelity measures how much information about the ideal distribution is preserved under noise. Its advantages over simpler metrics like the total variation distance are:
It captures correlation structure. Two distributions can have the same TVD but very different XEB fidelities depending on whether the noisy outcomes concentrate on the same high-probability states as the ideal distribution.
It is sensitive to circuit quality. XEB was used by Google in their quantum supremacy experiment to verify that their random circuits were producing outputs far from uniform, which is exactly the signature of a high-fidelity quantum computation.
It scales efficiently. Unlike fidelity computed from density matrices (which requires 4^n matrix elements), XEB only requires comparing sampled bitstrings against the ideal distribution, making it practical for circuits with many qubits.
The key limitation is that computing P_ideal(x) still requires classically simulating the ideal circuit, whose cost grows exponentially with the number of qubits. XEB therefore verifies fidelity only up to circuit sizes that a classical simulator can handle.
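Two limiting cases make the normalization concrete. In this toy sketch (pure Python, hypothetical distributions), a noiseless run of a Bell-like circuit gives F_xeb = 1 and a fully depolarized (uniform) output gives F_xeb = 0; note that for genuinely random circuits the "= 1" normalization instead relies on the Porter-Thomas distribution of output probabilities:

```python
# Sanity checks on the linear cross-entropy formula with toy distributions
n = 2
dim = 2 ** n
ideal = {"00": 0.5, "11": 0.5}        # Bell-like ideal distribution

def f_xeb(ideal_probs, noisy_probs, dim):
    return dim * sum(p * ideal_probs.get(x, 0.0)
                     for x, p in noisy_probs.items()) - 1.0

# A noiseless run that exactly reproduces the ideal distribution
print(f_xeb(ideal, ideal, dim))       # -> 1.0 for this distribution

# A fully depolarized run: uniform over all 2^n bitstrings
uniform = {format(i, "02b"): 1.0 / dim for i in range(dim)}
print(f_xeb(ideal, uniform, dim))     # -> 0.0
```

Uniform noise always gives exactly 0 because sum_x P_ideal(x) / 2^n = 1 / 2^n, so the 2^n prefactor and the -1 cancel.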
9. Noise-Aware Circuit Design
Different circuit implementations of the same logical operation can have vastly different noise sensitivity. The number of two-qubit gates (especially CNOTs) is usually the dominant factor, since two-qubit gate errors are typically 10--50x larger than single-qubit gate errors on superconducting hardware. By choosing implementations that minimize the CNOT count, you can significantly improve the output quality under realistic noise.
Problem
You have two equivalent circuit implementations of the same unitary transformation. You need to determine which one degrades less under hardware noise and quantify the difference.
Solution
Build both circuit implementations, run them under an identical noise model, and compare their output distributions against the ideal result using fidelity metrics such as the total variation distance or cross-entropy fidelity.
Code
from pyqpanda3 import core
def build_circuit_a(qubits):
"""Implementation A: SWAP via three CNOTs (standard decomposition).
This is the textbook SWAP decomposition:
CNOT(q0, q1) -> CNOT(q1, q0) -> CNOT(q0, q1)
Total: 3 CNOT gates
"""
prog = core.QProg()
q0, q1 = qubits[0], qubits[1]
prog << core.CNOT(q0, q1)
prog << core.CNOT(q1, q0)
prog << core.CNOT(q0, q1)
prog << core.measure(list(qubits), list(range(len(qubits))))
return prog
def build_circuit_b(qubits):
"""Implementation B: SWAP preceded by single-qubit state preparation.
Uses the same SWAP decomposition but adds extra single-qubit gates
that cancel out, testing whether single-qubit gate overhead matters.
    Total: 3 CNOT gates + 6 single-qubit H gates (inserted in canceling pairs)
"""
prog = core.QProg()
q0, q1 = qubits[0], qubits[1]
# Extra single-qubit gates (these cancel in the ideal case)
prog << core.H(q0)
prog << core.H(q0)
prog << core.CNOT(q0, q1)
prog << core.H(q1)
prog << core.H(q1)
prog << core.CNOT(q1, q0)
prog << core.H(q0)
prog << core.H(q0)
prog << core.CNOT(q0, q1)
prog << core.measure(list(qubits), list(range(len(qubits))))
return prog
def build_circuit_c(qubits):
    """Implementation C: a cheaper circuit with a single CNOT.
    This is NOT equivalent to a full SWAP -- it serves as a low-CNOT
    reference point for how CNOT count drives noise impact.
    Total: 1 CNOT gate + 2 H gates
"""
prog = core.QProg()
q0, q1 = qubits[0], qubits[1]
prog << core.H(q0)
prog << core.CNOT(q0, q1)
prog << core.H(q0)
prog << core.measure(list(qubits), list(range(len(qubits))))
return prog
def compute_tvd(counts_a, counts_b, shots):
"""Total variation distance between two count distributions."""
all_keys = set(list(counts_a.keys()) + list(counts_b.keys()))
return sum(
abs(counts_a.get(k, 0) / shots - counts_b.get(k, 0) / shots)
for k in all_keys
) / 2.0
# Noise model with asymmetry: CNOT errors are much larger
noise = core.NoiseModel()
noise.add_all_qubit_quantum_error(
core.depolarizing_error(0.002), core.GateType.H
)
noise.add_all_qubit_quantum_error(
core.depolarizing_error(0.025), core.GateType.CNOT
)
noise.add_all_qubit_read_out_error([[0.96, 0.04], [0.05, 0.95]])
qubits = [0, 1]
shots = 30000
# Each circuit is built inline and starts with H on qubit 0 so that the
# SWAP acts on a non-trivial superposition state.
circuit_a = core.QProg()
circuit_a << core.H(0) << core.CNOT(0, 1) << core.CNOT(1, 0) << core.CNOT(0, 1)
circuit_a << core.measure([0, 1], [0, 1])
circuit_b = core.QProg()
circuit_b << core.H(0)
circuit_b << core.H(0) << core.H(0)
circuit_b << core.CNOT(0, 1) << core.H(1) << core.H(1)
circuit_b << core.CNOT(1, 0) << core.H(0) << core.H(0)
circuit_b << core.CNOT(0, 1)
circuit_b << core.measure([0, 1], [0, 1])
circuit_c = core.QProg()
circuit_c << core.H(0) << core.H(0) << core.CNOT(0, 1) << core.H(0)
circuit_c << core.measure([0, 1], [0, 1])
# Run all three circuits
machine = core.CPUQVM()
results = {}
for label, circuit in [("A (3 CNOT + 1 H)", circuit_a),
                       ("B (3 CNOT + 7 H)", circuit_b),
                       ("C (1 CNOT + 3 H)", circuit_c)]:
# Ideal
machine.run(circuit, shots=shots)
ideal = machine.result().get_counts()
# Noisy
machine.run(circuit, shots=shots, model=noise)
noisy = machine.result().get_counts()
tvd = compute_tvd(ideal, noisy, shots)
# Count the Bell-state fraction (00 + 11) for reference
bell_noisy = (noisy.get("00", 0) + noisy.get("11", 0)) / shots
bell_ideal = (ideal.get("00", 0) + ideal.get("11", 0)) / shots
results[label] = {
"tvd": tvd,
"bell_ideal": bell_ideal,
"bell_noisy": bell_noisy,
"noisy_counts": noisy,
}
print(f"{'Implementation':<22s} {'TVD':>8s} {'Bell(ideal)':>12s} {'Bell(noisy)':>12s}")
print("-" * 60)
for label, data in results.items():
    print(f"{label:<22s} {data['tvd']:8.4f} {data['bell_ideal']:12.4f} {data['bell_noisy']:12.4f}")
Explanation
The results will show a clear hierarchy: the circuit with the fewest CNOT gates (Implementation C) will have the lowest TVD under noise, while the three-CNOT implementations degrade more. Implementation B, which adds unnecessary single-qubit gates, will typically show slightly worse results than Implementation A because the extra gate layers allow more error to accumulate. Note that Implementation C is not logically equivalent to the SWAP circuits -- it is a deliberately cheap reference circuit -- so compare each implementation's noisy output against its own ideal output, as the TVD column does.
The practical takeaways for noise-aware circuit design are:
Minimize two-qubit gates. CNOT and CZ errors are typically 1--5% on current hardware, compared to 0.01--0.1% for single-qubit gates. Every CNOT you can eliminate yields a disproportionate improvement in output quality.
Prefer shorter circuit depth. Even when the CNOT count is fixed, rearranging gates to reduce the overall circuit depth limits the time available for decoherence (T1/T2) errors.
Validate with simulation. Before running on hardware, compare your candidate circuit implementations under a noise model calibrated to the target device. The difference in TVD or XEB fidelity between implementations is a reliable predictor of which will perform better.
Account for qubit topology. On devices with limited connectivity, adding SWAP gates to route qubits can dramatically increase the CNOT count. Circuit optimizations that exploit the native connectivity can significantly reduce the effective error rate.
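A quick back-of-the-envelope check of the first takeaway: if each gate succeeds independently, the circuit's success probability is roughly the product of the per-gate success probabilities. Using the gate counts of the three comparison circuits above (the error rates are illustrative, not measured values):

```python
# Independent-error model: P_success ~ (1 - p1)^n1 * (1 - p2)^n2
# Error rates below are illustrative, not measured values.
p1, p2 = 0.002, 0.025   # single- and two-qubit gate error rates

def success_estimate(n_1q, n_2q):
    return (1 - p1) ** n_1q * (1 - p2) ** n_2q

# Gate counts of the three comparison circuits above
for label, n_1q, n_2q in [("A: 1 H + 3 CNOT", 1, 3),
                          ("B: 7 H + 3 CNOT", 7, 3),
                          ("C: 3 H + 1 CNOT", 3, 1)]:
    print(f"{label:<18s} ~{success_estimate(n_1q, n_2q):.4f}")
```

Even with seven single-qubit gates, implementation B loses less fidelity to them than either A or B loses to two extra CNOTs, which is exactly the hierarchy the simulation exhibits.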
Next Steps
- Noise Simulation -- Full noise channel reference and simulation examples
- Noise Model Theory -- Mathematical foundations of CPTP maps and Kraus operators
- Simulation -- Overview of all simulator backends
- Quantum Information -- Distance metrics for comparing ideal vs. noisy states