Quantum State Preparation
Learn how to encode classical data into quantum states using the Encode class in pyqpanda3. This tutorial covers 13 encoding methods -- from simple basis encoding to advanced sparse and approximate MPS techniques -- with mathematical foundations, code examples, and practical guidance for choosing the right method.
Table of Contents
- Why Quantum State Preparation Matters
- The Encode Class
- Method Selection Guide
- 1. Basic Encoding
- 2. Angle Encoding
- 3. Dense Angle Encoding
- 4. Amplitude Encoding
- 5. Recursive Amplitude Encoding
- 6. IQP Encoding
- 7. Schmidt Encoding
- 8. Divide-and-Conquer Amplitude Encoding
- 9. BID Amplitude Encoding
- 10. Double Sparse State Preparation
- 11. Sparse Isometry Encoding
- 12. Efficient Sparse Encoding
- 13. Approximate MPS Encoding
- Comparing Encoding Methods
- Complete Example
- API Quick Reference
- Summary
Why Quantum State Preparation Matters
Quantum state preparation -- also called data encoding or data embedding -- is the process of mapping classical data into quantum states. It is a fundamental step in virtually every quantum algorithm:
- Quantum machine learning -- kernel methods, variational classifiers, and quantum neural networks all require classical data loaded into quantum registers.
- Quantum simulation -- requires preparing the initial state of the physical system under study before simulating its evolution.
- Quantum algorithms -- Grover's search, quantum phase estimation, and Hamiltonian simulation assume specific input states.
The choice of encoding method directly impacts circuit depth, qubit count, expressiveness, and ultimately algorithm performance. pyqpanda3 provides 13 encoding methods through the Encode class, each optimized for different data characteristics.
The Encode Class
The Encode class in pyqpanda3.core is the central interface for all quantum state preparation methods. You create an instance, call an encoding method, and then extract results.
from pyqpanda3 import core
enc = core.Encode()
Core Methods
get_circuit() -- Returns the QCircuit that implements the encoding:
enc = core.Encode()
enc.amplitude_encode([0, 1], [0.5, 0.5, 0.5, 0.5])
circuit = enc.get_circuit()
prog = core.QProg()
prog << circuit
get_out_qubits() -- Returns the output qubit indices. Some methods use auxiliary qubits internally, so output qubits may differ from input qubits:
enc.dc_amplitude_encode([0, 1, 2], [0.5, 0.3, 0.2, 0.4, 0.1, 0.6, 0.3, 0.5])
out_qubits = enc.get_out_qubits()
get_fidelity(data) -- Computes fidelity between the encoded state and the target state. Accepts List[float] or List[complex]. Fidelity ranges from 0 (orthogonal) to 1 (identical):
fidelity = enc.get_fidelity([0.5, 0.5, 0.5, 0.5]) # 1.0 for exact methods
Workflow
The typical workflow: construct an Encode instance, call one encoding method, retrieve the circuit with get_circuit(), check get_out_qubits() for the qubits that hold the result, and compose the circuit into a QProg.
Method Selection Guide
Each method section below lists qubit cost, depth, exactness, and best-fit use cases; the Comparing Encoding Methods section at the end collects them side by side.
1. Basic Encoding
Method: basic_encode(qubits, data: str)
Maps a binary string directly onto the computational basis state. Each character in the string corresponds to one qubit: '1' applies an X gate (bit flip), '0' leaves the qubit in the ground state |0⟩.
Given a binary string b = b₁b₂…bₙ, the circuit prepares the computational basis state |b₁b₂…bₙ⟩.
| Property | Value |
|---|---|
| Qubits | n (one per bit) |
| Depth | 1 |
| Exact | Yes |
from pyqpanda3 import core
enc = core.Encode()
enc.basic_encode([0, 1, 2], "101")
prog = core.QProg()
prog << enc.get_circuit() << core.measure([0, 1, 2], [0, 1, 2])
machine = core.CPUQVM()
machine.run(prog, 1000)
print(machine.result().get_counts()) # {'101': 1000}
Use for: encoding classical bit strings, preparing computational basis states, testing.
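The gate placement is easy to check without a simulator. A minimal plain-Python sketch of the same mapping (not part of the pyqpanda3 API):

```python
def basic_encode_classical(bits: str):
    """Return the qubit indices that receive an X gate and the basis-state index."""
    x_targets = [i for i, b in enumerate(bits) if b == "1"]
    basis_index = int(bits, 2)  # |bits> read as an integer label
    return x_targets, basis_index

targets, index = basic_encode_classical("101")
print(targets, index)  # qubits 0 and 2 are flipped; |101> is basis index 5
```

Note that how the measured bit string maps to qubit order depends on the simulator's conventions; here the leftmost character is taken as qubit 0, matching the example above.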
2. Angle Encoding
Method: angle_encode(qubits, data, gate_type=GateType.RY)
Maps each data value to a rotation angle on the corresponding qubit. This is one of the most commonly used encodings in quantum machine learning because it produces shallow circuits (depth 1) with tunable expressiveness through the choice of rotation axis.
For a data vector x = (x₁, …, xₙ), each value becomes the rotation angle on its own qubit: |ψ⟩ = RY(x₁)|0⟩ ⊗ … ⊗ RY(xₙ)|0⟩.
The default rotation gate is GateType.RY; pass GateType.RX or GateType.RZ to rotate about a different axis.
| Property | Value |
|---|---|
| Qubits | n (one per value) |
| Depth | 1 |
| Exact | No (information compression) |
from pyqpanda3 import core
# Default RY gates
enc = core.Encode()
enc.angle_encode([0, 1, 2], [0.5, 1.0, 1.5])
# Using RX gates
enc2 = core.Encode()
enc2.angle_encode([0, 1, 2], [0.5, 1.0, 1.5], core.GateType.RX)
Use for: QML feature encoding, variational circuit first layers, shallow circuits.
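The per-qubit states can be reproduced classically. A NumPy sketch assuming the standard convention RY(θ)|0⟩ = cos(θ/2)|0⟩ + sin(θ/2)|1⟩ (pyqpanda3's gate convention is assumed to match):

```python
import numpy as np

def ry_state(theta):
    # RY(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

data = [0.5, 1.0, 1.5]
per_qubit = [ry_state(x) for x in data]
full = per_qubit[0]
for s in per_qubit[1:]:
    full = np.kron(full, s)  # product state over all three qubits
print(full.shape)  # (8,): a depth-1 circuit, but only a product state
```

The product structure makes the depth-1 cost explicit: angle encoding alone never produces entanglement.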
3. Dense Angle Encoding
Method: dense_angle_encode(qubits, data)
Packs two data values into each qubit using both the rotation angle and the phase, doubling information density relative to plain angle encoding.
For a data vector of even length 2n, qubit i encodes the pair (x_2i, x_2i+1).
The resulting single-qubit state for qubit i is cos(x_2i)|0⟩ + e^(i·x_2i+1) sin(x_2i)|1⟩, up to the library's gate conventions.
| Property | Value |
|---|---|
| Qubits | n/2 (two values per qubit) |
| Depth | 2 |
| Exact | No |
enc = core.Encode()
# 6 values encoded into 3 qubits
enc.dense_angle_encode([0, 1, 2], [3.14, 3.14, 0.5, 1.0, 0.3, 0.7])
Use for: limited qubit count, higher-dimensional feature vectors.
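The two-values-per-qubit idea can be sketched classically. This NumPy snippet assumes the convention cos(θ)|0⟩ + e^(iφ) sin(θ)|1⟩ per qubit (the actual pyqpanda3 convention may differ by factors of two in the angles):

```python
import numpy as np

def dense_angle_state(theta, phi):
    # Assumed convention: angle theta on the amplitude, phi on the phase.
    return np.array([np.cos(theta), np.exp(1j * phi) * np.sin(theta)])

pairs = [(3.14, 3.14), (0.5, 1.0), (0.3, 0.7)]  # 6 values -> 3 qubits
state = np.array([1.0 + 0j])
for theta, phi in pairs:
    state = np.kron(state, dense_angle_state(theta, phi))
print(np.linalg.norm(state))  # 1.0: a product of unit single-qubit states
```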
4. Amplitude Encoding
Method: amplitude_encode(qubits, data)
Maps a normalized classical vector directly onto the amplitudes of a multi-qubit quantum state. This is the most information-dense exact encoding available:
Given a normalized vector x of length N = 2^n, the prepared state is |ψ⟩ = Σᵢ xᵢ|i⟩ on n qubits.
Accepts List[float] (real) or List[complex] (complex amplitudes). Data is automatically normalized. For real input, the statevector matches the input exactly; for complex input, the relative phases of the amplitudes are encoded as well.
| Property | Value |
|---|---|
| Qubits | ⌈log₂ N⌉ |
| Gate count | O(N) |
| Exact | Yes |
from pyqpanda3 import core
import numpy as np
# Real-valued: equal superposition
enc = core.Encode()
enc.amplitude_encode([0, 1], [0.5, 0.5, 0.5, 0.5])
print(f"Fidelity: {enc.get_fidelity([0.5, 0.5, 0.5, 0.5])}")
# Complex-valued
enc2 = core.Encode()
data = [0.5+0j, 0.5j, -0.5, 0.5-0.5j]
norm = np.linalg.norm(data)
enc2.amplitude_encode([0, 1], [x/norm for x in data])
# Run
prog = core.QProg()
prog << enc.get_circuit() << core.measure([0, 1], [0, 1])
machine = core.CPUQVM()
machine.run(prog, 1000)
print(machine.result().get_counts())
Use for: exact data representation, moderate data sizes (up to ~16 qubits), loading classical data for quantum algorithms.
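The normalization and fidelity math can be cross-checked classically with NumPy (this reimplements the standard fidelity formula, not the library internals):

```python
import numpy as np

def fidelity(target, prepared):
    # |<target|prepared>|^2 for normalized state vectors
    return abs(np.vdot(target, prepared)) ** 2

raw = np.array([1.0, 2.0, 3.0, 4.0])
psi = raw / np.linalg.norm(raw)   # the state amplitude_encode would prepare
print(fidelity(psi, psi))         # 1.0 for an exact encoding
probs = np.abs(psi) ** 2          # measurement probabilities
print(probs)                      # [1/30, 4/30, 9/30, 16/30]
```

This also shows why measurement counts reflect |xᵢ|², not xᵢ: signs and phases are invisible in a single computational-basis measurement.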
5. Recursive Amplitude Encoding
Method: amplitude_encode_recursive(qubits, data)
Same result as standard amplitude encoding but uses a top-down recursive decomposition strategy. The circuit structure differs from the standard approach, which can lead to better parallelizability on hardware with specific connectivity patterns. Accepts both List[float] and List[complex].
Given a normalized vector x, the amplitudes are split into an upper and a lower half at each level of the recursion.
The first rotation sets the relative weight of the two halves; each half is then encoded recursively on the remaining qubits.
| Property | Value |
|---|---|
| Qubits | ⌈log₂ N⌉ |
| Depth | O(N) |
| Exact | Yes |
enc = core.Encode()
data = [0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25]
enc.amplitude_encode_recursive([0, 1, 2], data)
print(f"Fidelity: {enc.get_fidelity(data)}")
Use for: same as amplitude encoding, but with hardware-favorable circuit structure.
6. IQP Encoding
Method: iqp_encode(qubits, data, control_list=[], bool_inverse=False, repeats=1)
Applies a structured pattern of Hadamard gates, diagonal rotations, and entangling phase gates. IQP circuits are believed hard to simulate classically.
The circuit applies a Hadamard on every qubit, an RZ(xᵢ) rotation on each qubit, and RZZ entangling phase gates on the qubit pairs listed in control_list; the whole pattern is repeated repeats times.
Parameters: control_list (qubit pairs for RZZ gates), bool_inverse (apply the inverse circuit), repeats (number of circuit repetitions).
| Property | Value |
|---|---|
| Qubits | n (one per value) |
| Depth | O(repeats) |
| Exact | No (probabilistic) |
# Basic IQP
enc = core.Encode()
enc.iqp_encode([0, 1, 2], [0.5, 1.0, 1.5])
# With entanglement and repetition
enc2 = core.Encode()
enc2.iqp_encode([0, 1, 2], [0.5, 1.0, 1.5],
control_list=[(0,1), (1,2)], repeats=2)
prog = core.QProg()
prog << enc.get_circuit() << core.measure([0,1,2], [0,1,2])
machine = core.CPUQVM()
machine.run(prog, 1000)
print(machine.result().get_counts())
Use for: QML kernel methods, entanglement-based feature maps, classically hard circuits.
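The diagonal structure of IQP circuits explains the "No (probabilistic)" entry above: all data enters as phases, so computational-basis probabilities stay uniform. A schematic NumPy sketch (the phase φ(z) = Σ xᵢzᵢ + Σ xᵢxⱼzᵢzⱼ and sign/half-angle conventions are an assumption, not pyqpanda3's exact gates):

```python
import numpy as np

def iqp_state(x, pairs):
    # H on every qubit, then a diagonal phase phi(z) over basis states z.
    n = len(x)
    amps = np.zeros(2 ** n, dtype=complex)
    for z in range(2 ** n):
        bits = [(z >> (n - 1 - k)) & 1 for k in range(n)]
        phase = sum(x[i] * bits[i] for i in range(n))
        phase += sum(x[i] * x[j] * bits[i] * bits[j] for i, j in pairs)
        amps[z] = np.exp(1j * phase) / np.sqrt(2 ** n)
    return amps

psi = iqp_state([0.5, 1.0, 1.5], pairs=[(0, 1), (1, 2)])
print(np.abs(psi) ** 2)  # uniform 1/8: phases never change probabilities
```

The data is only recoverable through interference (e.g., inside a kernel circuit), which is exactly how IQP feature maps are used.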
7. Schmidt Encoding
Method: schmidt_encode(qubits, data, cutoff=0)
Uses the Schmidt decomposition to break the target state into bipartite components recursively. The cutoff parameter truncates small singular values for approximate encoding with fewer gates.
For an n-qubit target state, the amplitude vector is reshaped into a matrix and factorized by singular value decomposition, giving |ψ⟩ = Σₖ λₖ|αₖ⟩|βₖ⟩ with Schmidt coefficients λₖ; the procedure recurses on the subsystem states.
Terms with λₖ below the cutoff are discarded, shortening the circuit at the cost of fidelity.
| Property | Value |
|---|---|
| Qubits | ⌈log₂ N⌉ |
| Depth | Depends on Schmidt rank |
| Exact | Yes when cutoff=0 |
enc = core.Encode()
data = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
norm = np.linalg.norm(data)
data = [x/norm for x in data]
# Exact
enc.schmidt_encode([0, 1, 2], data, cutoff=0)
print(f"Fidelity (exact): {enc.get_fidelity(data):.6f}")
# Approximate
enc2 = core.Encode()
enc2.schmidt_encode([0, 1, 2], data, cutoff=1e-6)
print(f"Fidelity (cutoff): {enc2.get_fidelity(data):.6f}")
Use for: low Schmidt-rank states, trading accuracy for depth, hierarchical data.
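Before choosing a cutoff, the Schmidt coefficients can be inspected classically with an SVD. A NumPy sketch for an 8-amplitude state under a 1|2 qubit split:

```python
import numpy as np

# Reshape the 8-amplitude vector to 2x4 and take singular values:
# these are the Schmidt coefficients across the first-qubit cut.
data = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
data = data / np.linalg.norm(data)
svals = np.linalg.svd(data.reshape(2, 4), compute_uv=False)
print(svals)          # descending; their squares sum to 1
print(svals[0] ** 2)  # fidelity if only the largest term were kept
```

If the smaller coefficients are tiny, a non-zero cutoff discards almost nothing, which is exactly the "low Schmidt-rank" regime where this method shines.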
8. Divide-and-Conquer Amplitude Encoding
Method: dc_amplitude_encode(qubits, data)
Splits the data vector into halves and recursively encodes each half. Uses auxiliary qubits for the decomposition, producing shallower circuits than standard amplitude encoding at the cost of more qubits. Always use get_out_qubits() to find which qubits hold the final encoded state.
The algorithm computes the norms of the two halves of the vector, loads the split as a rotation on one qubit, and recurses on each half in parallel across the auxiliary qubits.
| Property | Value |
|---|---|
| Qubits | N - 1 (including auxiliaries) |
| Depth | O(log² N) |
| Exact | Yes |
enc = core.Encode()
data = [0.5, 0.3, 0.2, 0.4, 0.1, 0.6, 0.3, 0.5]
norm = np.linalg.norm(data)
enc.dc_amplitude_encode(list(range(7)), [x/norm for x in data])
out_qubits = enc.get_out_qubits()
prog = core.QProg()
prog << enc.get_circuit()
prog << core.measure(out_qubits, list(range(len(out_qubits))))
machine = core.CPUQVM()
machine.run(prog, 1000)
print(machine.result().get_counts())
Use for: when circuit depth matters more than qubit count, NISQ hardware.
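The first norm split can be checked classically. A sketch of the rotation-angle computation (the convention cos(θ/2) = ‖left half‖ is an assumption; the library's sign conventions may differ):

```python
import numpy as np

def split_angle(vec):
    # Norms of the two halves determine the first RY angle.
    half = len(vec) // 2
    left = np.linalg.norm(vec[:half])
    right = np.linalg.norm(vec[half:])
    return 2 * np.arctan2(right, left)

data = np.array([0.5, 0.3, 0.2, 0.4, 0.1, 0.6, 0.3, 0.5])
data = data / np.linalg.norm(data)
theta = split_angle(data)
# cos(theta/2)^2 equals the total probability of the lower half
print(np.cos(theta / 2) ** 2, np.linalg.norm(data[:4]) ** 2)
```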
9. BID Amplitude Encoding
Method: bid_amplitude_encode(qubits, data, split=-1)
Block-Inverse Decomposition: partitions the data vector into blocks and encodes each block separately. The split parameter controls the block size; -1 selects it automatically. Like dc_amplitude_encode, the output qubits may differ from the input qubits, so always check with get_out_qubits().
For a normalized vector x partitioned into blocks, the state is assembled as |ψ⟩ = Σⱼ cⱼ|j⟩|φⱼ⟩,
where cⱼ is the norm of block j and |φⱼ⟩ is the normalized contents of that block.
| Property | Value |
|---|---|
| Qubits | ⌈log₂ N⌉ plus auxiliaries (method-dependent) |
| Depth | Depends on split |
| Exact | Yes |
enc = core.Encode()
data = [0.1, 0.3, 0.2, 0.4, 0.15, 0.25, 0.35, 0.45]
norm = np.linalg.norm(data)
enc.bid_amplitude_encode([0, 1, 2], [x/norm for x in data], split=2)
Use for: structured data with natural block decomposition, tunable encoding complexity.
10. Double Sparse State Preparation
Method: ds_quantum_state_preparation(qubits, data)
Handles sparse data where most entries are zero. Complexity depends on the number of non-zero entries s rather than on the full dimension 2^n.
Given a sparse state |ψ⟩ = Σₖ cₖ|bₖ⟩ with s non-zero amplitudes cₖ on basis strings bₖ, the circuit is built from those entries alone.
Accepts four data types:
| Type | Description |
|---|---|
Dict[str, float] | Sparse real-valued state |
Dict[str, complex] | Sparse complex-valued state |
List[float] | Dense vector (sparsity auto-detected) |
List[complex] | Dense vector (sparsity auto-detected) |
| Property | Value |
|---|---|
| Qubits | n |
| Depth | O(s · n) |
| Exact | Yes |
# Sparse dictionary format
enc = core.Encode()
enc.ds_quantum_state_preparation([0,1,2,3,4,5], {
"000": 0.40, "001": 0.91, "111": 0.08
})
out_qubits = enc.get_out_qubits()
# Dense vector format (auto-detect sparsity)
enc2 = core.Encode()
enc2.ds_quantum_state_preparation([0,1,2],
[0.0, 0.5, 0.0, 0.0, 0.5, 0.0, 0.5, 0.5])
Use for: data with many zeros, states specified in ket notation.
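Converting between the dict and dense-vector forms is useful when checking results with get_fidelity, which takes a dense vector. A plain-Python sketch (the bit ordering of the ket labels, leftmost bit most significant, is an assumption; verify against your simulator):

```python
import numpy as np

def sparse_to_dense(state: dict, n: int):
    # Expand {"bitstring": amplitude} into a dense statevector;
    # "111" indexes entry int("111", 2) = 7 under this convention.
    vec = np.zeros(2 ** n, dtype=complex)
    for bits, amp in state.items():
        vec[int(bits, 2)] = amp
    return vec

vec = sparse_to_dense({"000": 0.40, "001": 0.91, "111": 0.08}, n=3)
print(np.count_nonzero(vec))  # 3 of 8 entries are non-zero
print(np.linalg.norm(vec))    # close to (but not exactly) 1
```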
11. Sparse Isometry Encoding
Method: sparse_isometry(qubits, data)
Constructs an isometry that maps |0…0⟩ to the target sparse state. Unlike ds_quantum_state_preparation, this method uses a direct isometry construction rather than a double-sparse decomposition.
The isometry is decomposed into a sequence of Givens rotations and controlled operations. The complexity scales with the number of non-zero amplitudes s, not with the full dimension 2^n.
Accepts same four data types as ds_quantum_state_preparation.
| Property | Value |
|---|---|
| Qubits | n |
| Depth | O(s · n) |
| Exact | Yes |
enc = core.Encode()
state = {
"000": complex(0.37, 0.44),
"001": complex(0.20, 0.34),
"010": complex(0.53, 0.25),
"100": complex(0.20, 0.35)
}
norm = sum(abs(v)**2 for v in state.values()) ** 0.5
state = {k: v/norm for k, v in state.items()}
enc.sparse_isometry([0, 1, 2], state)
prog = core.QProg()
prog << enc.get_circuit() << core.measure([0,1,2], [0,1,2])
machine = core.CPUQVM()
machine.run(prog, 1000)
print(machine.result().get_counts())
Use for: exact encoding of sparse states, states with known basis labels.
12. Efficient Sparse Encoding
Method: efficient_sparse(qubits, data)
Optimizes sparse preparation by finding the minimal set of distinguishing bits among the non-zero basis strings; it can outperform sparse_isometry for clustered sparsity patterns.
The circuit first encodes the amplitude distribution on the distinguishing qubits, then uses conditional operations to set the remaining qubits. This avoids encoding into the full 2^n-dimensional space.
Accepts same data types as sparse_isometry.
| Property | Value |
|---|---|
| Qubits | |
| Depth | |
| Exact | Yes |
enc = core.Encode()
enc.efficient_sparse([0, 1, 2], {
"000": 0.40, "001": 0.91, "111": 0.08
})
# Or with a dense vector
enc2 = core.Encode()
data = [0.5, 0.3, 0.2, 0.4, 0.1, 0.6, 0.3, 0.5]
norm = sum(x**2 for x in data) ** 0.5
enc2.efficient_sparse([0,1,2], [x/norm for x in data])
Use for: sparse states where gate count is critical, patterns with shared bit prefixes.
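The "distinguishing bits" idea can be illustrated classically. A simplified sketch of the concept, not the library's actual algorithm:

```python
def distinguishing_bits(kets):
    # Bit positions where the non-zero basis strings are not all equal.
    # Positions where every string agrees can be set with a single fixed
    # gate, so only the differing positions need amplitude logic.
    n = len(next(iter(kets)))
    return [i for i in range(n) if len({k[i] for k in kets}) > 1]

print(distinguishing_bits({"000", "001"}))         # only bit 2 differs
print(distinguishing_bits({"000", "001", "111"}))  # all three bits differ
```

States whose non-zero kets share long common prefixes have few distinguishing bits, which is the "clustered sparsity" regime mentioned above.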
13. Approximate MPS Encoding
Method: approx_mps_encode(qubits, data, layers=3, sweeps=100, double2float=False)
Represents the target state as a matrix product state and iteratively optimizes the circuit to approximate it. Sacrifices exact accuracy for dramatically reduced circuit depth.
Each MPS tensor is mapped to single-qubit and two-qubit gates, optimized over sweeps iterations. More layers yield higher fidelity but deeper circuits. Accepts List[float] and List[complex].
| Parameter | Default | Description |
|---|---|---|
layers | 3 | MPS layers (expressiveness) |
sweeps | 100 | Optimization iterations |
double2float | False | Convert to float32 for speed |
| Property | Value |
|---|---|
| Qubits | ⌈log₂ N⌉ |
| Depth | O(layers · n) |
| Exact | No (approximate) |
from pyqpanda3 import core
import numpy as np
np.random.seed(42)
data = np.random.randn(8)
data = (data / np.linalg.norm(data)).tolist()
# Default parameters
enc = core.Encode()
enc.approx_mps_encode([0,1,2], data, layers=3, sweeps=100)
print(f"Fidelity (3 layers): {enc.get_fidelity(data):.6f}")
# Higher fidelity
enc2 = core.Encode()
enc2.approx_mps_encode([0,1,2], data, layers=5, sweeps=200)
print(f"Fidelity (5 layers): {enc2.get_fidelity(data):.6f}")
# Complex data
enc3 = core.Encode()
cdata = [0.3+0.1j, 0.2-0.1j, 0.4+0.2j, 0.1-0.3j,
0.25+0.15j, 0.35-0.05j, 0.1+0.3j, 0.15-0.2j]
norm = sum(abs(x)**2 for x in cdata) ** 0.5
enc3.approx_mps_encode([0,1,2], [x/norm for x in cdata], layers=3, sweeps=100)
Use for: large data vectors, low-entanglement states, NISQ devices, when approximate encoding is acceptable.
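Whether a target state is MPS-friendly can be estimated ahead of time from its bipartite entanglement entropy: low entropy across every cut suggests a few-layer circuit will reach high fidelity. A NumPy diagnostic sketch (a heuristic, not part of the pyqpanda3 API):

```python
import numpy as np

def bipartite_entropy(vec, left_qubits, n):
    # Von Neumann entropy (in bits) across a left|right qubit cut.
    m = np.asarray(vec).reshape(2 ** left_qubits, 2 ** (n - left_qubits))
    s = np.linalg.svd(m, compute_uv=False)
    p = s ** 2
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

# A product state has zero entanglement across every cut:
prod = np.kron(np.array([0.6, 0.8]), np.kron([0.6, 0.8], [0.6, 0.8]))
print(bipartite_entropy(prod, 1, 3))  # ~0.0: trivially MPS-friendly
# A GHZ-like state carries one bit of entanglement across the first cut:
ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
print(bipartite_entropy(ghz, 1, 3))   # ~1.0
```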
Comparing Encoding Methods
| Method | Qubits | Depth | Exact | Data Types | Best For |
|---|---|---|---|---|---|
basic_encode | n | 1 | Yes | str | Basis states |
angle_encode | n | 1 | No | float[] | QML features |
dense_angle_encode | n/2 | 2 | No | float[] | Compact features |
amplitude_encode | ⌈log₂ N⌉ | O(N) | Yes | float[], complex[] | Exact loading |
amplitude_encode_recursive | ⌈log₂ N⌉ | O(N) | Yes | float[], complex[] | Structured states |
iqp_encode | n | O(repeats) | No | float[] | QML kernels |
schmidt_encode | ⌈log₂ N⌉ | Varies | Yes/Approx | float[] | Low-rank states |
dc_amplitude_encode | N - 1 | O(log² N) | Yes | float[] | Shallow circuits |
bid_amplitude_encode | Varies | Varies | Yes | float[] | Block data |
ds_quantum_state_preparation | n | O(s · n) | Yes | Dict, float[], complex[] | Sparse states |
sparse_isometry | n | O(s · n) | Yes | Dict, float[], complex[] | Sparse exact |
efficient_sparse | n | O(s) | Yes | Dict, float[], complex[] | Efficient sparse |
approx_mps_encode | ⌈log₂ N⌉ | O(layers · n) | No | float[], complex[] | Large states |
Decision Flowchart
Start from your data: a classical bit string calls for basic_encode; dense real-valued features for QML suit angle_encode or dense_angle_encode (or iqp_encode when entanglement is wanted); exact amplitude loading uses amplitude_encode, with schmidt_encode or dc_amplitude_encode when depth matters; mostly-zero data goes to one of the sparse methods; very large vectors where approximation is acceptable go to approx_mps_encode.
Qubit and Gate Cost Summary
For a data vector of length N (with n = ⌈log₂ N⌉ qubits where applicable) and s non-zero entries:
| Method | Qubits Required | Gate Count | Circuit Depth |
|---|---|---|---|
basic_encode | n | ≤ n | 1 |
angle_encode | n | n | 1 |
dense_angle_encode | n/2 | n | 2 |
amplitude_encode | ⌈log₂ N⌉ | O(N) | O(N) |
amplitude_encode_recursive | ⌈log₂ N⌉ | O(N) | O(N) |
iqp_encode | n | O(repeats · (n + |control_list|)) | O(repeats) |
schmidt_encode | ⌈log₂ N⌉ | Depends on Schmidt rank | Depends on Schmidt rank |
dc_amplitude_encode | N - 1 | O(N) | O(log² N) |
bid_amplitude_encode | Varies with split | Varies | Varies |
ds_quantum_state_preparation | n | O(s · n) | O(s · n) |
sparse_isometry | n | O(s · n) | O(s · n) |
efficient_sparse | n | O(s) | O(s) |
approx_mps_encode | ⌈log₂ N⌉ | O(layers · n) | O(layers) |
where N is the data length, n is the number of qubits used by the method, and s is the number of non-zero entries.
Complete Example
This example encodes real-world classification data using multiple methods and compares circuit statistics and fidelity:
from pyqpanda3 import core
import numpy as np
# Sample feature vectors (normalized)
samples = [
[0.50, 0.35, 0.60, 0.50],
[0.80, 0.20, 0.30, 0.45],
[0.15, 0.70, 0.55, 0.40],
]
for i, raw in enumerate(samples):
norm = np.linalg.norm(raw)
data = [x / norm for x in raw]
print(f"\n--- Sample {i+1} ---")
# Angle encoding (4 qubits)
enc1 = core.Encode()
enc1.angle_encode([0,1,2,3], raw)
c1 = enc1.get_circuit()
# Amplitude encoding (2 qubits)
enc2 = core.Encode()
enc2.amplitude_encode([0,1], data)
c2 = enc2.get_circuit()
# IQP encoding (4 qubits, with entanglement)
enc3 = core.Encode()
enc3.iqp_encode([0,1,2,3], raw, control_list=[(0,1),(1,2),(2,3)])
c3 = enc3.get_circuit()
# MPS approximation (2 qubits)
enc4 = core.Encode()
enc4.approx_mps_encode([0,1], data, layers=3, sweeps=100)
c4 = enc4.get_circuit()
# Schmidt encoding (2 qubits)
enc5 = core.Encode()
enc5.schmidt_encode([0,1], data, cutoff=0)
c5 = enc5.get_circuit()
print(f" Angle: {c1.size()} gates, depth {c1.depth()}")
print(f" Amplitude: {c2.size()} gates, depth {c2.depth()}, "
f"fidelity {enc2.get_fidelity(data):.4f}")
print(f" IQP: {c3.size()} gates, depth {c3.depth()}")
print(f" MPS: {c4.size()} gates, depth {c4.depth()}, "
f"fidelity {enc4.get_fidelity(data):.4f}")
print(f" Schmidt: {c5.size()} gates, depth {c5.depth()}, "
f"fidelity {enc5.get_fidelity(data):.4f}")
# Run amplitude-encoded circuit
prog = core.QProg()
prog << c2 << core.measure([0,1], [0,1])
machine = core.CPUQVM()
machine.run(prog, 1000)
print(f" Counts: {machine.result().get_counts()}")
Fidelity Benchmarking
For approximate methods, you can sweep parameters to find the best accuracy-depth tradeoff:
np.random.seed(42)
data = np.random.randn(8)
data = (data / np.linalg.norm(data)).tolist()
qubits = [0, 1, 2]
# Exact methods (fidelity = 1.0)
for name, fn in [
("amplitude_encode", lambda e: e.amplitude_encode(qubits, data)),
("amplitude_encode_recursive", lambda e: e.amplitude_encode_recursive(qubits, data)),
("schmidt_encode", lambda e: e.schmidt_encode(qubits, data)),
]:
enc = core.Encode()
fn(enc)
print(f"{name}: fidelity = {enc.get_fidelity(data):.6f}, "
f"gates = {enc.get_circuit().size()}")
# Approximate MPS with varying parameters
for layers in [1, 2, 3, 5, 10]:
enc = core.Encode()
enc.approx_mps_encode(qubits, data, layers=layers, sweeps=200)
print(f"approx_mps_encode (layers={layers}): "
f"fidelity = {enc.get_fidelity(data):.6f}, "
f"gates = {enc.get_circuit().size()}")
API Quick Reference
Encode Class Methods
| Method | Signature | Description |
|---|---|---|
basic_encode | (qubits, data: str) | Binary string to basis state |
angle_encode | (qubits, data, gate_type=RY) | Rotation-based per-qubit |
dense_angle_encode | (qubits, data) | Two angles per qubit |
amplitude_encode | (qubits, data) | Exact amplitude (real/complex) |
amplitude_encode_recursive | (qubits, data) | Recursive decomposition |
iqp_encode | (qubits, data, control_list=[], bool_inverse=False, repeats=1) | IQP circuit |
schmidt_encode | (qubits, data, cutoff=0) | Schmidt decomposition |
dc_amplitude_encode | (qubits, data) | Divide-and-conquer |
bid_amplitude_encode | (qubits, data, split=-1) | Block-based |
ds_quantum_state_preparation | (qubits, data) | Double sparse (dict/vector) |
sparse_isometry | (qubits, data) | Sparse isometry (dict/vector) |
efficient_sparse | (qubits, data) | Efficient sparse (dict/vector) |
approx_mps_encode | (qubits, data, layers=3, sweeps=100, double2float=False) | Approximate MPS |
get_circuit | () | Returns QCircuit |
get_out_qubits | () | Returns output qubit indices |
get_fidelity | (data) | Encoding fidelity |
Data Type Support
| Method | str | float[] | complex[] | Dict[str,float] | Dict[str,complex] |
|---|---|---|---|---|---|
basic_encode | Yes | -- | -- | -- | -- |
angle_encode | -- | Yes | -- | -- | -- |
dense_angle_encode | -- | Yes | -- | -- | -- |
amplitude_encode | -- | Yes | Yes | -- | -- |
amplitude_encode_recursive | -- | Yes | Yes | -- | -- |
iqp_encode | -- | Yes | -- | -- | -- |
schmidt_encode | -- | Yes | -- | -- | -- |
dc_amplitude_encode | -- | Yes | -- | -- | -- |
bid_amplitude_encode | -- | Yes | -- | -- | -- |
ds_quantum_state_preparation | -- | Yes | Yes | Yes | Yes |
sparse_isometry | -- | Yes | Yes | Yes | Yes |
efficient_sparse | -- | Yes | Yes | Yes | Yes |
approx_mps_encode | -- | Yes | Yes | -- | -- |
Summary
In this tutorial you learned:
Quantum state preparation maps classical data into quantum states. The encoding method you choose directly affects circuit resources and algorithm performance.
The Encode class (core.Encode()) is the unified interface. Call an encoding method, then use get_circuit(), get_out_qubits(), and get_fidelity(data) to extract results.
13 encoding methods span a wide design space:
- Basis: basic_encode for binary strings
- Angle-based: angle_encode, dense_angle_encode for QML feature maps
- Amplitude: amplitude_encode, amplitude_encode_recursive for exact state loading
- Entangling: iqp_encode for classically hard circuits
- Decomposition: schmidt_encode, dc_amplitude_encode, bid_amplitude_encode for structured states
- Sparse: ds_quantum_state_preparation, sparse_isometry, efficient_sparse for sparse data
- Approximate: approx_mps_encode for large-scale problems
Fidelity assessment with get_fidelity() quantifies encoding accuracy, essential for approximate methods.
Method selection depends on data type, accuracy requirements, and hardware constraints.
The next tutorial covers Hamiltonian and Pauli Operators, where you will learn to construct and manipulate Hamiltonians for variational quantum algorithms.
Knowledge Check
Test your understanding of quantum state preparation in pyqpanda3.
Q1: What is the difference between amplitude encoding and angle encoding? When would you prefer one over the other?
A1: Amplitude encoding maps data values directly to the probability amplitudes of a quantum state: N values fit into ⌈log₂ N⌉ qubits, but the preparation circuit needs O(N) gates. Angle encoding maps each value to a rotation angle on its own qubit: n values need n qubits but only a depth-1 circuit, at the cost of not representing the data exactly. Prefer amplitude encoding when qubit count is the constraint and exact loading matters; prefer angle encoding for shallow, NISQ-friendly feature maps.
Q2: What normalization condition must the input data satisfy for amplitude encoding? What happens if it is not satisfied?
A2: The data must satisfy the unit-norm condition Σᵢ |xᵢ|² = 1, so the values form valid probability amplitudes. In practice, the amplitude_encode method normalizes the input internally, dividing by the total norm. This means you can pass unnormalized data and the method will handle it, but you should be aware of the implicit normalization.
Q3: Explain what the cutoff parameter does in schmidt_encode. Why might you want a non-zero cutoff?
A3: The cutoff parameter in schmidt_encode truncates singular values below the threshold in the Schmidt decomposition. With cutoff=0 (default), no truncation occurs and the encoding is exact. A non-zero cutoff removes small singular values, reducing circuit depth at the cost of encoding fidelity. This trade-off is useful for large-scale problems where exact encoding would require too many gates.
Q4: What is the difference between ds_quantum_state_preparation and sparse_isometry? Both handle sparse data.
A4: Both methods target sparse quantum states, but they use different algorithms. ds_quantum_state_preparation uses a double-sparse approach optimized for states with few non-zero amplitudes in the computational basis. sparse_isometry uses an isometry-based method that preserves inner product relations. The choice depends on the sparsity pattern: ds_quantum_state_preparation is generally more efficient for uniformly sparse states, while sparse_isometry handles structured sparsity better.
Q5: What does the layers parameter control in approx_mps_encode? What happens as you increase it?
A5: The layers parameter controls the number of variational layers in the Matrix Product State (MPS) ansatz used for encoding. More layers increase the expressiveness of the ansatz, allowing higher-fidelity encoding of complex data distributions. However, more layers also increase circuit depth and the number of parameters to optimize, requiring more sweeps and potentially longer computation time. The default of 3 layers provides a good balance for most use cases.
Q6: After calling an encoding method on an Encode object, how do you use the result in a quantum program?
A6: After calling an encoding method (e.g., enc.amplitude_encode(qubits, data)), you extract the circuit with enc.get_circuit() and the output qubits with enc.get_out_qubits(). You then append the circuit to a QProg using the << operator: prog << enc.get_circuit(). The output qubits can be used for subsequent operations or measurement.
Q7: Why does approx_mps_encode have a double2float parameter? What is the trade-off?
A7: The double2float parameter converts double-precision (64-bit) input data to single-precision (32-bit float) before encoding. This reduces the numerical precision of the optimization, which can lead to slightly lower fidelity but faster computation. The trade-off is accuracy vs. speed. For most quantum computing applications where quantum hardware has limited precision anyway, single-precision may be sufficient.
Exercise 1: Encoding Comparison
Encode the same 4-element data vector using three different methods (angle, amplitude, and Schmidt), then compare their fidelity and circuit depth.
Solution:
import numpy as np
from pyqpanda3.core import Encode, CPUQVM, QProg, measure
data = [0.5, 0.5, 0.5, 0.5]
qubits = [0, 1]
# --- Angle Encoding ---
enc_angle = Encode()
enc_angle.angle_encode(qubits, data[:2]) # Only 2 values for 2 qubits
angle_circuit = enc_angle.get_circuit()
print("Angle encoding circuit:", angle_circuit)
# --- Amplitude Encoding ---
enc_amp = Encode()
enc_amp.amplitude_encode(qubits, data)
amp_circuit = enc_amp.get_circuit()
amp_fidelity = enc_amp.get_fidelity(data)
print(f"Amplitude encoding fidelity: {amp_fidelity:.6f}")
# --- Schmidt Encoding ---
enc_schmidt = Encode()
enc_schmidt.schmidt_encode(qubits, data, cutoff=0)
schmidt_circuit = enc_schmidt.get_circuit()
schmidt_fidelity = enc_schmidt.get_fidelity(data)
print(f"Schmidt encoding fidelity: {schmidt_fidelity:.6f}")
# --- Run and compare ---
qvm = CPUQVM()
for name, circuit in [("Amplitude", amp_circuit), ("Schmidt", schmidt_circuit)]:
prog = QProg()
prog << circuit << measure(qubits, [0, 1])
qvm.run(prog, 1000)
counts = qvm.result().get_counts()
print(f"{name} encoding results: {counts}")
Exercise 2: Sparse State Preparation
Prepare a 3-qubit state with only 2 non-zero amplitudes using both ds_quantum_state_preparation and sparse_isometry. Compare the results.
Solution:
from pyqpanda3.core import Encode, CPUQVM, QProg, measure
import numpy as np
# Sparse state: |000> and |111> only
sparse_data = {"000": 0.6, "111": 0.8}
qubits = [0, 1, 2]
# Normalize
norm = np.sqrt(sum(v**2 for v in sparse_data.values()))
sparse_data_norm = {k: v/norm for k, v in sparse_data.items()}
# Dense target vector for fidelity checks: get_fidelity expects a full
# statevector, not just the non-zero values (ket-label bit ordering may
# need adjusting to match the simulator's convention)
dense_target = [0.0] * 8
for k, v in sparse_data_norm.items():
    dense_target[int(k, 2)] = v
# Method 1: ds_quantum_state_preparation
enc1 = Encode()
enc1.ds_quantum_state_preparation(qubits, sparse_data_norm)
fidelity1 = enc1.get_fidelity(dense_target)
print(f"ds_quantum_state_preparation fidelity: {fidelity1:.6f}")
# Method 2: sparse_isometry
enc2 = Encode()
enc2.sparse_isometry(qubits, sparse_data_norm)
fidelity2 = enc2.get_fidelity(dense_target)
print(f"sparse_isometry fidelity: {fidelity2:.6f}")
# Run both on a simulator
qvm = CPUQVM()
for name, enc in [("DS Preparation", enc1), ("Sparse Isometry", enc2)]:
prog = QProg()
prog << enc.get_circuit() << measure(qubits, [0, 1, 2])
qvm.run(prog, 1000)
counts = qvm.result().get_counts()
print(f"{name}: {counts}")
Exercise 3: Approximate MPS with Parameter Sweep
Encode an 8-element data vector using approx_mps_encode with varying layers (1, 3, 5) and sweeps (50, 100, 200). Report the fidelity for each combination.
Solution:
import numpy as np
from pyqpanda3.core import Encode
# 8-element data (requires 3 qubits)
data = [0.1, 0.3, 0.5, 0.2, 0.4, 0.6, 0.15, 0.25]
data = [x / np.linalg.norm(data) for x in data]  # normalize for a fair fidelity comparison
qubits = [0, 1, 2]
print(f"{'Layers':<8} {'Sweeps':<8} {'Fidelity':<12}")
print("-" * 28)
for layers in [1, 3, 5]:
for sweeps in [50, 100, 200]:
enc = Encode()
try:
enc.approx_mps_encode(qubits, data, layers=layers, sweeps=sweeps)
fidelity = enc.get_fidelity(data)
print(f"{layers:<8} {sweeps:<8} {fidelity:.6f}")
except Exception as e:
print(f"{layers:<8} {sweeps:<8} Error: {e}")