QAOA -- Quantum Approximate Optimization Algorithm

Solve combinatorial optimization problems with the Quantum Approximate Optimization Algorithm using pyqpanda3.


Problem

Combinatorial Optimization on Quantum Hardware

Many real-world problems reduce to finding the best assignment among a discrete set of choices. These combinatorial optimization problems appear across science and engineering:

  • MaxCut -- partition graph vertices into two sets to maximize crossing edges (network design, clustering)
  • MAX-SAT -- find a variable assignment that satisfies the most clauses in a Boolean formula
  • Portfolio optimization -- select assets that maximize return for a given risk budget
  • Scheduling -- assign tasks to time slots while respecting constraints

All of these are NP-hard in the worst case. Classical exact solvers scale exponentially, while heuristic approximations offer no general performance guarantees.
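For intuition on why exact solving is expensive, a brute-force MaxCut solver makes the $2^n$ scaling explicit. This is a plain-Python sketch (no quantum libraries; `brute_force_maxcut` is an illustrative helper, not part of pyqpanda3):

```python
from itertools import product

def brute_force_maxcut(n_vertices, edges):
    """Enumerate all 2^n bipartitions; exponential, so only viable for small n."""
    best_cut, best_bits = -1, None
    for bits in product((0, 1), repeat=n_vertices):
        cut = sum(1 for i, j in edges if bits[i] != bits[j])
        if cut > best_cut:
            best_cut, best_bits = cut, bits
    return best_cut, best_bits

# 4-vertex ring: the alternating partition cuts all 4 edges
print(brute_force_maxcut(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # (4, (0, 1, 0, 1))
```

Doubling the vertex count squares the number of assignments to check, which is why heuristics and QAOA are interesting at scale.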

Why QAOA?

The Quantum Approximate Optimization Algorithm (QAOA), introduced by Farhi, Goldstone, and Gutmann in 2014, provides a framework for producing approximate solutions with provable guarantees. QAOA prepares a parameterized quantum state and measures it to produce candidate solutions. A classical optimizer tunes the parameters to improve solution quality.

QAOA has two properties that make it attractive:

  1. Performance guarantee -- at depth $p=1$, QAOA achieves an approximation ratio of at least 0.6924 for MaxCut on 3-regular graphs, a provable worst-case bound (for comparison, the classical Goemans-Williamson algorithm reaches $\approx 0.8785$; QAOA's guarantee improves with depth).
  2. Near-term hardware feasibility -- each QAOA layer requires only a shallow circuit whose two-qubit gates match the problem graph structure.

The QAOA Framework

QAOA encodes a combinatorial problem into a cost Hamiltonian $H_C$ whose ground state corresponds to the optimal solution. It prepares the state

$$|\gamma, \beta\rangle = \prod_{l=1}^{p} e^{-i\beta_l H_M}\, e^{-i\gamma_l H_C}\, |+\rangle^{\otimes n}$$

where:

  • $H_C$ is the cost Hamiltonian encoding the objective function
  • $H_M = \sum_{i=0}^{n-1} X_i$ is the mixer Hamiltonian
  • $\gamma = (\gamma_1, \ldots, \gamma_p)$ and $\beta = (\beta_1, \ldots, \beta_p)$ are classical parameters
  • $|+\rangle^{\otimes n} = H^{\otimes n}|0\rangle^{\otimes n}$ is the uniform superposition
  • $p$ is the QAOA depth (number of alternating layers)

The algorithm then:

  1. Measures $|\gamma, \beta\rangle$ in the computational basis to obtain a candidate bitstring
  2. Evaluates the classical objective on that bitstring
  3. Uses a classical optimizer to adjust $(\gamma, \beta)$ to minimize $\langle H_C \rangle$

As $p \to \infty$, QAOA converges to the adiabatic path and finds the exact optimum.


Solution

Step-by-Step QAOA Approach

Step 1: Formulate the Problem as an Ising Hamiltonian

Express the combinatorial objective as a weighted sum of Pauli $Z$ operators. For MaxCut on a graph $G = (V, E)$:

$$H_C = \sum_{(i,j) \in E} \tfrac{1}{2}\left(I - Z_i Z_j\right)$$

Each edge $(i,j)$ contributes $+1$ to the cut when qubits $i$ and $j$ have different values ($Z_i Z_j = -1$), and $0$ when they agree ($Z_i Z_j = +1$). Dropping the constant term, maximizing the cut is equivalent to minimizing $\sum_{(i,j) \in E} Z_i Z_j$, which is the form used in the code below.
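The edge-term identity can be checked exhaustively with a few lines of plain Python (helper names here are illustrative, not from pyqpanda3): for spins $z = 1 - 2x \in \{+1, -1\}$, the term $\tfrac{1}{2}(1 - z_i z_j)$ equals the 0/1 cut indicator:

```python
def edge_term(z_i, z_j):
    """Ising edge contribution: (1/2) * (1 - z_i * z_j) for spins z in {+1, -1}."""
    return 0.5 * (1 - z_i * z_j)

def cut_indicator(x_i, x_j):
    """1 if the edge crosses the cut (bits differ), else 0."""
    return 1 if x_i != x_j else 0

# Check the identity on all four bit combinations, using z = 1 - 2x
for x_i in (0, 1):
    for x_j in (0, 1):
        z_i, z_j = 1 - 2 * x_i, 1 - 2 * x_j
        assert edge_term(z_i, z_j) == cut_indicator(x_i, x_j)
print("identity holds for all assignments")
```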

Step 2: Construct the QAOA Ansatz with p Layers

Build a quantum circuit that alternates between two unitaries:

  • Cost unitary $e^{-i\gamma_l H_C}$: applies RZZ gates between qubits connected by graph edges
  • Mixer unitary $e^{-i\beta_l H_M}$: applies RX rotations on all qubits

Step 3: Define the Cost Function

The cost function is the expectation value of $H_C$ with respect to the QAOA state:

$$C(\gamma, \beta) = \langle \gamma, \beta | H_C | \gamma, \beta \rangle$$

Step 4: Optimize Parameters Classically

Use a classical optimizer (COBYLA, Nelder-Mead, L-BFGS-B, etc.) to find $(\gamma, \beta)$ minimizing $C(\gamma, \beta)$. Multiple random restarts are recommended because the landscape is non-convex.

Step 5: Sample and Interpret Results

Run the optimized circuit many times and evaluate each measured bitstring against the classical objective to find the best solution.


Code

MaxCut on a 4-Node Ring Graph with QAOA p=1

We demonstrate the complete QAOA workflow on a 4-node ring graph with edges (0,1), (1,2), (2,3), (3,0). The optimal MaxCut partitions the vertices into {0,2} and {1,3}, cutting all 4 edges.

Define the Problem Graph

python
from pyqpanda3 import core
import numpy as np

# Define a 4-vertex ring graph
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n_qubits = 4
n_edges = len(edges)

print(f"Graph: {n_qubits} vertices, {n_edges} edges")
print(f"Edges: {edges}")
print(f"Optimal MaxCut value: {n_edges}")

Build the Cost Hamiltonian

Each edge contributes a $Z_i Z_j$ term. When the cost Hamiltonian is minimized, the expectation value approaches $-|E|$ (all edges cut).

python
from pyqpanda3 import hamiltonian

# Cost Hamiltonian: H_C = sum_{(i,j) in E} Z_i Z_j
# When all edges are cut, Z_i Z_j = -1 for every edge,
# so H_C = -|E| (minimum value).
cost_terms = {}
for i, j in edges:
    cost_terms[f"Z{i} Z{j}"] = 1.0

cost_hamiltonian = hamiltonian.PauliOperator(cost_terms)
print(f"Cost Hamiltonian terms: {cost_terms}")

Build the QAOA Circuit

The circuit structure for one QAOA layer on our ring graph:

python
def build_qaoa_circuit(n_qubits: int, edges: list, gamma: list, beta: list) -> core.QProg:
    """
    Build a QAOA circuit for MaxCut.

    Args:
        n_qubits: Number of qubits (one per graph vertex).
        edges: List of (i, j) tuples defining graph edges.
        gamma: Cost parameters, one per QAOA layer.
        beta: Mixer parameters, one per QAOA layer.

    Returns:
        QProg containing the QAOA ansatz.
    """
    p = len(gamma)
    assert len(beta) == p, "gamma and beta must have the same length"

    prog = core.QProg()

    # Initialize in uniform superposition |+>^n
    for q in range(n_qubits):
        prog << core.H(q)

    for layer in range(p):
        # Cost unitary: e^{-i * gamma * H_C}
        # For each edge, apply RZZ(i, j, 2 * gamma)
        for i, j in edges:
            prog << core.RZZ(i, j, 2 * gamma[layer])

        # Mixer unitary: e^{-i * beta * sum X_i}
        # Apply RX on every qubit
        for q in range(n_qubits):
            prog << core.RX(q, 2 * beta[layer])

    return prog


# Test with p=1
p = 1
gamma_init = [0.5]
beta_init = [0.25]
test_prog = build_qaoa_circuit(n_qubits, edges, gamma_init, beta_init)
print("QAOA p=1 circuit built successfully")

Define the Cost Function

python
def qaoa_cost(params: np.ndarray, n_qubits: int, edges: list,
              p: int, cost_H: hamiltonian.PauliOperator) -> float:
    """
    Compute the QAOA cost function: expectation of H_C.

    Args:
        params: Flat array [gamma_0, ..., gamma_{p-1}, beta_0, ..., beta_{p-1}].
        n_qubits: Number of qubits.
        edges: Graph edges.
        p: Number of QAOA layers.
        cost_H: Cost Hamiltonian as a PauliOperator.

    Returns:
        Expectation value of the cost Hamiltonian.
    """
    gamma = params[:p].tolist()
    beta = params[p:2 * p].tolist()

    prog = build_qaoa_circuit(n_qubits, edges, gamma, beta)

    # Compute <H_C> using statevector expectation
    expectation = core.expval_pauli_operator(prog, cost_H)
    return expectation


# Evaluate at initial parameters
params_init = np.array(gamma_init + beta_init)
cost_0 = qaoa_cost(params_init, n_qubits, edges, p, cost_hamiltonian)
print(f"Initial cost at gamma={gamma_init}, beta={beta_init}: {cost_0:.6f}")

Optimize with scipy

python
from scipy.optimize import minimize

# Multiple random restarts to avoid local minima
best_cost = float('inf')
best_params = None
n_restarts = 10

for restart in range(n_restarts):
    np.random.seed(restart)
    theta0 = np.random.uniform(0, np.pi, 2 * p)

    result = minimize(
        qaoa_cost,
        theta0,
        args=(n_qubits, edges, p, cost_hamiltonian),
        method='COBYLA',
        options={'maxiter': 300, 'rhobeg': 0.5}
    )

    if result.fun < best_cost:
        best_cost = result.fun
        best_params = result.x.copy()

    print(f"  Restart {restart}: cost = {result.fun:.6f}")

print(f"\nBest cost found (p=1): {best_cost:.6f}")
print(f"Best gamma: {best_params[:p]}")
print(f"Best beta:  {best_params[p:]}")

Sample and Interpret Results

After finding the optimal parameters, sample the circuit to obtain candidate solutions:

python
# Build circuit at optimal parameters
optimal_gamma = best_params[:p].tolist()
optimal_beta = best_params[p:].tolist()
optimal_prog = build_qaoa_circuit(n_qubits, edges, optimal_gamma, optimal_beta)

# Add measurement and sample
optimal_prog << core.measure(list(range(n_qubits)), list(range(n_qubits)))

machine = core.CPUQVM()
n_shots = 10000
machine.run(optimal_prog, n_shots)
counts = machine.result().get_counts()

# Evaluate each bitstring's cut value
def compute_cut(bitstring: str, edges: list) -> int:
    """Count the number of edges crossing the cut defined by the bitstring."""
    return sum(1 for i, j in edges if bitstring[i] != bitstring[j])

print("\nTop solutions by frequency:")
sorted_counts = sorted(counts.items(), key=lambda x: x[1], reverse=True)

for bitstring, count in sorted_counts[:5]:
    cut = compute_cut(bitstring, edges)
    prob = count / n_shots
    print(f"  {bitstring}: cut={cut}, count={count}, probability={prob:.3f}")

# Find the overall best solution
best_cut = 0
best_bitstring = None
for bitstring, count in counts.items():
    cut = compute_cut(bitstring, edges)
    if cut > best_cut:
        best_cut = cut
        best_bitstring = bitstring

print(f"\nBest solution: {best_bitstring} with cut={best_cut}/{n_edges}")
print(f"Approximation ratio: {best_cut / n_edges:.4f}")

Expected output for p=1 on the 4-vertex ring:

Top solutions by frequency:
  0101: cut=4, count=2450, probability=0.245
  1010: cut=4, count=2430, probability=0.243
  0110: cut=2, count=800, probability=0.080
  ...
Best solution: 0101 with cut=4/4
Approximation ratio: 1.0000

Both optimal solutions (0101 and 1010) correspond to the partition {0,2} vs {1,3}, which cuts all 4 edges.
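As a library-independent cross-check (numpy only, no pyqpanda3), the exact $p=1$ landscape on the 4-cycle can be scanned by dense statevector simulation. The analytic $p=1$ optimum of the expectation is $\langle H_C \rangle = -2$, i.e. an expected cut of 3, even though sampling still finds the exact cut of 4 with high probability:

```python
import numpy as np

n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
dim = 2 ** n

# Diagonal of H_C = sum_{(i,j) in E} Z_i Z_j in the computational basis
hc = np.zeros(dim)
for b in range(dim):
    z = [1 - 2 * ((b >> q) & 1) for q in range(n)]  # bit -> Z eigenvalue
    hc[b] = sum(z[i] * z[j] for i, j in edges)

def qaoa_p1_expectation(gamma, beta):
    """Exact <H_C> after one QAOA layer, via a dense 16-dim statevector."""
    psi = np.full(dim, 1 / np.sqrt(dim), dtype=complex)  # |+>^n
    psi = np.exp(-1j * gamma * hc) * psi                 # cost unitary (diagonal)
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                   [-1j * np.sin(beta), np.cos(beta)]])  # RX(2*beta)
    mixer = rx
    for _ in range(n - 1):
        mixer = np.kron(mixer, rx)                       # same RX on every qubit
    psi = mixer @ psi
    return float(np.real(psi.conj() @ (hc * psi)))

# Coarse grid scan; the analytic p=1 optimum on the 4-cycle is <H_C> = -2
angles = np.linspace(0, np.pi, 17)
best = min(qaoa_p1_expectation(g, b) for g in angles for b in angles)
print(f"best <H_C> on grid: {best:.4f}")  # reaches -2 (expected cut 3)
```

This also illustrates the distinction between the expectation-based ratio (0.75 at $p=1$) and the best sampled bitstring, which can still attain the optimum.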


QAOA with p=2 and Comparing to p=1

Increasing the QAOA depth p adds more variational parameters, allowing the circuit to explore a larger portion of the Hilbert space. For $p$ layers, there are $2p$ parameters: $\gamma_1, \ldots, \gamma_p, \beta_1, \ldots, \beta_p$.

python
# Run QAOA with p=2 layers
p2 = 2
best_cost_p2 = float('inf')
best_params_p2 = None

for restart in range(10):
    np.random.seed(restart + 100)
    theta0 = np.random.uniform(0, np.pi, 2 * p2)

    result = minimize(
        qaoa_cost,
        theta0,
        args=(n_qubits, edges, p2, cost_hamiltonian),
        method='COBYLA',
        options={'maxiter': 500, 'rhobeg': 0.5}
    )

    if result.fun < best_cost_p2:
        best_cost_p2 = result.fun
        best_params_p2 = result.x.copy()

print(f"p=1 best cost: {best_cost:.6f}")
print(f"p=2 best cost: {best_cost_p2:.6f}")
print(f"Expected optimum (all edges cut): {-n_edges:.1f}")

The approximation ratio $\alpha$ measures solution quality:

$$\alpha = \frac{|\langle H_C \rangle_{\mathrm{QAOA}}|}{\mathrm{MaxCut}(G)}$$

python
# Approximation ratios
alpha_p1 = abs(best_cost) / n_edges
alpha_p2 = abs(best_cost_p2) / n_edges

print(f"Approximation ratio (p=1): {alpha_p1:.4f}")
print(f"Approximation ratio (p=2): {alpha_p2:.4f}")

For the 4-vertex ring graph, $p=1$ typically achieves an approximation ratio between 0.75 and 1.0, while $p=2$ approaches 1.0 more consistently.


General MaxCut Solver Function

The following function encapsulates the complete QAOA workflow for any undirected graph:

python
def solve_maxcut_qaoa(
    edges: list,
    n_qubits: int,
    p: int = 1,
    n_restarts: int = 20,
    maxiter: int = 300,
) -> dict:
    """
    Solve MaxCut using QAOA for an arbitrary undirected graph.

    Args:
        edges: List of (i, j) tuples defining the graph edges.
        n_qubits: Number of vertices.
        p: QAOA depth parameter.
        n_restarts: Number of random restarts for optimization.
        maxiter: Maximum optimizer iterations per restart.

    Returns:
        Dictionary with keys:
        - best_cost: minimum expectation value of cost Hamiltonian
        - best_params: optimal parameter array
        - best_cut: best classical cut value found by sampling
        - optimal_cut: maximum possible cut value (= number of edges for ring)
        - approximation_ratio: best_cut / optimal_cut
        - counts: measurement counts from the optimal circuit
    """
    # Build cost Hamiltonian
    cost_terms = {f"Z{i} Z{j}": 1.0 for i, j in edges}
    cost_H = hamiltonian.PauliOperator(cost_terms)

    best_cost = float('inf')
    best_params = None

    for restart in range(n_restarts):
        np.random.seed(restart)
        theta0 = np.random.uniform(0, np.pi, 2 * p)

        def cost_fn(params):
            gamma = params[:p].tolist()
            beta = params[p:2 * p].tolist()
            prog = build_qaoa_circuit(n_qubits, edges, gamma, beta)
            return core.expval_pauli_operator(prog, cost_H)

        result = minimize(cost_fn, theta0, method='COBYLA',
                          options={'maxiter': maxiter, 'rhobeg': 0.5})

        if result.fun < best_cost:
            best_cost = result.fun
            best_params = result.x.copy()

    # Sample the optimal circuit
    gamma = best_params[:p].tolist()
    beta = best_params[p:].tolist()
    optimal_prog = build_qaoa_circuit(n_qubits, edges, gamma, beta)
    optimal_prog << core.measure(list(range(n_qubits)), list(range(n_qubits)))

    machine = core.CPUQVM()
    machine.run(optimal_prog, 10000)
    counts = machine.result().get_counts()

    # Find best bitstring by classical evaluation
    best_cut = 0
    for bitstring in counts:
        cut = sum(1 for i, j in edges if bitstring[i] != bitstring[j])
        best_cut = max(best_cut, cut)

    return {
        'best_cost': best_cost,
        'best_params': best_params,
        'best_cut': best_cut,
        'optimal_cut': len(edges),
        'approximation_ratio': best_cut / len(edges),
        'counts': counts,
    }


# Example 1: 5-vertex path graph
path_edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
result_path = solve_maxcut_qaoa(path_edges, n_qubits=5, p=2)
print(f"Path graph: cut={result_path['best_cut']}/{result_path['optimal_cut']}, "
      f"ratio={result_path['approximation_ratio']:.4f}")

# Example 2: 6-vertex complete graph K_3,3
k33_edges = [(i, j) for i in range(3) for j in range(3, 6)]
result_k33 = solve_maxcut_qaoa(k33_edges, n_qubits=6, p=2)
print(f"K_3,3 graph: cut={result_k33['best_cut']}/{result_k33['optimal_cut']}, "
      f"ratio={result_k33['approximation_ratio']:.4f}")

Evaluating Cut Value from Measurement Samples

A helper function for computing the classical cut value of a measured bitstring:

python
def evaluate_cut_distribution(counts: dict, edges: list) -> dict:
    """
    Analyze the cut value distribution from QAOA measurement counts.

    Args:
        counts: Dictionary mapping bitstrings to measurement counts.
        edges: Graph edges as list of (i, j) tuples.

    Returns:
        Dictionary with cut value statistics.
    """
    total_shots = sum(counts.values())
    cut_counts = {}
    total_cut = 0

    for bitstring, count in counts.items():
        cut = compute_cut(bitstring, edges)
        cut_counts[cut] = cut_counts.get(cut, 0) + count
        total_cut += cut * count

    avg_cut = total_cut / total_shots
    max_cut = max(cut_counts.keys())
    optimal_cut = len(edges)

    print("Cut value distribution:")
    for cut_val in sorted(cut_counts.keys(), reverse=True):
        cnt = cut_counts[cut_val]
        bar = '#' * int(50 * cnt / total_shots)
        print(f"  cut={cut_val}: {cnt:5d} ({cnt/total_shots:.3f}) {bar}")

    print(f"\nAverage cut value: {avg_cut:.4f}")
    print(f"Best cut found:    {max_cut}")
    print(f"Optimal cut:       {optimal_cut}")
    print(f"Approximation ratio: {max_cut / optimal_cut:.4f}")

    return {
        'avg_cut': avg_cut,
        'max_cut': max_cut,
        'cut_counts': cut_counts,
    }


# Example usage with p=1 results
optimal_prog_eval = build_qaoa_circuit(n_qubits, edges, best_params[:1].tolist(), best_params[1:].tolist())
optimal_prog_eval << core.measure(list(range(n_qubits)), list(range(n_qubits)))

machine = core.CPUQVM()
machine.run(optimal_prog_eval, 10000)
counts_p1 = machine.result().get_counts()
evaluate_cut_distribution(counts_p1, edges)

Explanation

QAOA Circuit Structure: Alternating Cost and Mixer Layers

The QAOA circuit consists of p identical "blocks," each containing a cost unitary followed by a mixer unitary. This structure mirrors the Trotterization of quantum annealing.

Cost Unitary

The cost unitary implements $e^{-i\gamma H_C}$. For the MaxCut cost Hamiltonian $H_C = \sum_{(i,j) \in E} Z_i Z_j$, the cost unitary factors into a product of two-qubit gates:

$$e^{-i\gamma \sum_{(i,j) \in E} Z_i Z_j} = \prod_{(i,j) \in E} e^{-i\gamma Z_i Z_j} = \prod_{(i,j) \in E} RZZ(i, j, 2\gamma)$$

Each RZZ gate is a rotation about the ZZ axis:

$$RZZ(\theta) = e^{-i\frac{\theta}{2} Z \otimes Z} = \begin{pmatrix} e^{-i\theta/2} & 0 & 0 & 0 \\ 0 & e^{i\theta/2} & 0 & 0 \\ 0 & 0 & e^{i\theta/2} & 0 \\ 0 & 0 & 0 & e^{-i\theta/2} \end{pmatrix}$$
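Because every term in $H_C$ is diagonal in the computational basis, the factorization can be verified numerically with a short numpy sketch that represents all operators by their diagonals (no quantum library needed):

```python
import numpy as np

n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
dim = 2 ** n
gamma = 0.7  # arbitrary test angle

def spins(b):
    """Basis-state index -> list of Z eigenvalues (+1 for bit 0, -1 for bit 1)."""
    return [1 - 2 * ((b >> q) & 1) for q in range(n)]

# Left side: exp(-i*gamma*H_C) as a diagonal, with H_C = sum Z_i Z_j
hc = np.array([sum(s[i] * s[j] for i, j in edges) for s in map(spins, range(dim))])
lhs = np.exp(-1j * gamma * hc)

# Right side: product of RZZ(i, j, 2*gamma) diagonals; RZZ(theta) multiplies
# basis state b by exp(-i * theta/2 * z_i * z_j)
rhs = np.ones(dim, dtype=complex)
for i, j in edges:
    zz = np.array([spins(b)[i] * spins(b)[j] for b in range(dim)])
    rhs *= np.exp(-1j * (2 * gamma) / 2 * zz)

print(np.allclose(lhs, rhs))  # True: the cost unitary factors into RZZ gates
```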

Mixer Unitary

The mixer unitary implements $e^{-i\beta H_M}$ where $H_M = \sum_i X_i$. Since the $X_i$ operators commute (they act on different qubits), this factors into single-qubit gates:

$$e^{-i\beta \sum_i X_i} = \prod_{i=0}^{n-1} e^{-i\beta X_i} = \prod_{i=0}^{n-1} RX(i, 2\beta)$$

Each RX rotation explores the solution space by creating superpositions:

$$RX(\theta) = e^{-i\frac{\theta}{2} X} = \begin{pmatrix} \cos(\theta/2) & -i \sin(\theta/2) \\ -i \sin(\theta/2) & \cos(\theta/2) \end{pmatrix}$$
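Since $X^2 = I$, the closed form $RX(\theta) = \cos(\theta/2)\, I - i \sin(\theta/2)\, X$ follows from the exponential series; a quick numpy check against a truncated Taylor series confirms it (the `expm_series` helper is an illustrative stand-in for a matrix exponential):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def rx(theta):
    """Closed form: RX(theta) = cos(theta/2) I - i sin(theta/2) X."""
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * X

def expm_series(A, terms=30):
    """Matrix exponential by truncated Taylor series (fine for small 2x2 inputs)."""
    result = np.eye(A.shape[0], dtype=complex)
    term = np.eye(A.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ A / k        # accumulate A^k / k!
        result = result + term
    return result

theta = 1.234
print(np.allclose(rx(theta), expm_series(-1j * theta / 2 * X)))  # True
```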

Complete Layer Structure

One QAOA layer (p=1) applies:

  1. Hadamard on all $n$ qubits: $|0\rangle^{\otimes n} \to |+\rangle^{\otimes n}$
  2. Cost unitary: RZZ gates on all edges with angle $2\gamma_1$
  3. Mixer unitary: RX on all qubits with angle $2\beta_1$

For $p$ layers, steps 2 and 3 repeat with different parameters $(\gamma_l, \beta_l)$ for each layer $l$.


Relationship to Quantum Annealing

QAOA can be understood as a discretized version of quantum annealing (also known as the quantum adiabatic algorithm). In quantum annealing, the system evolves slowly under the time-dependent Hamiltonian:

$$H(t) = \left(1 - \frac{t}{T}\right) H_M + \frac{t}{T}\, H_C$$

starting from the ground state of $H_M$ (the uniform superposition) and ending at the ground state of $H_C$ (the optimal solution).

By Trotterizing this continuous evolution into p discrete steps, we obtain exactly the QAOA circuit. Each QAOA layer corresponds to one Trotter step:

$$e^{-i H(t) \Delta t} \approx e^{-i\gamma_l H_C}\, e^{-i\beta_l H_M}$$

where $\gamma_l$ and $\beta_l$ are related to the annealing schedule and the time step size.

Key insight: As $p \to \infty$, QAOA reproduces the adiabatic path exactly, guaranteeing convergence to the optimal solution (assuming the adiabatic condition is satisfied). For finite $p$, the parameters $(\gamma, \beta)$ are free to be optimized, which can actually yield better results than a naive Trotterization of the annealing path.
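One simple way to turn the linear schedule into QAOA angles is a midpoint discretization. The helper below is an illustrative sketch (the exact correspondence depends on the schedule and step size chosen):

```python
import numpy as np

def linear_annealing_angles(p, total_time):
    """Midpoint-discretize H(t) = (1 - t/T) H_M + (t/T) H_C into p QAOA layers.

    Layer l sits at t_l = (l - 1/2) * dt, so gamma_l = (t_l / T) * dt grows
    while beta_l = (1 - t_l / T) * dt shrinks, mimicking the annealing ramp.
    """
    dt = total_time / p
    t = (np.arange(p) + 0.5) * dt
    gamma = (t / total_time) * dt
    beta = (1 - t / total_time) * dt
    return gamma, beta

gamma, beta = linear_annealing_angles(p=4, total_time=2.0)
print("gamma:", np.round(gamma, 4))  # increasing ramp
print("beta: ", np.round(beta, 4))   # decreasing ramp
```

Angles generated this way also make good starting points for the classical optimizer (see the annealing-inspired initialization strategy below).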


Approximation Ratio Analysis

The approximation ratio $\alpha$ quantifies how close the QAOA solution is to the true optimum:

$$\alpha = \frac{\mathbb{E}[\text{cut value from QAOA}]}{\mathrm{MaxCut}(G)}$$

Theoretical Guarantees

| Graph Type | QAOA Depth | Approximation Ratio | Reference |
| --- | --- | --- | --- |
| 3-regular | $p=1$ | $\alpha \geq 0.6924$ | Farhi et al. (2014) |
| 3-regular | $p=2$ | $\alpha \geq 0.7574$ | Wurtz and Love (2021) |
| 3-regular | $p \to \infty$ | $\alpha \to 1.0$ | Convergence guarantee |
| General | $p=1$ | $\alpha > 0.5$ (beats random) | Generic bound |

For comparison, the classical Goemans-Williamson algorithm achieves $\alpha \approx 0.8785$ for general graphs, but QAOA is expected to be competitive for specific graph classes and on quantum hardware.

Practical Observations

On small graphs (4~8 vertices), QAOA with p=1~2 typically achieves:

  • Ring graphs: $\alpha \approx 0.75\sim 1.0$ at $p=1$, near $1.0$ at $p=2$
  • Path graphs: $\alpha \approx 0.8\sim 1.0$ at $p=1$
  • Complete graphs: $\alpha \approx 0.7\sim 0.9$ at $p=2$
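The random-assignment baseline these ratios are measured against is easy to reproduce: each edge is cut with probability 1/2, so a uniform random assignment cuts $|E|/2$ edges on average (ratio 0.5 on the ring). A quick Monte Carlo check in plain Python:

```python
import random

def random_cut_baseline(n_vertices, edges, trials=20000, seed=0):
    """Monte Carlo estimate of the expected cut of a uniformly random assignment."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        bits = [rng.randint(0, 1) for _ in range(n_vertices)]
        total += sum(1 for i, j in edges if bits[i] != bits[j])
    return total / trials

avg = random_cut_baseline(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
print(f"average random cut: {avg:.3f}")  # close to |E|/2 = 2
```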

Parameter Initialization Strategies

The choice of initial parameters significantly affects convergence speed and solution quality. Here are several strategies:

Strategy 1: Random Uniform

python
theta0 = np.random.uniform(0, np.pi, 2 * p)

Simple and unbiased. Works well with many restarts but can be slow to converge for some graphs.

Strategy 2: Annealing-Inspired

Initialize parameters to approximate the linear annealing schedule:

python
# For annealing, the cost term ramps up while the mixer ramps down:
# gamma increases from 0, beta decreases to 0
gamma_init = np.linspace(0, np.pi / (2 * p), p)
beta_init = np.linspace(np.pi / (2 * p), 0, p)
theta0 = np.concatenate([gamma_init, beta_init])

This mimics the adiabatic path and often converges faster than random initialization.

Strategy 3: Interpolation from Lower p

Use optimal parameters at depth p1 to initialize depth p:

python
def interp_from_lower_p(optimal_params_p1, p_new):
    """
    Initialize p_new parameters by interpolating from p=1 optimal.
    Insert a new layer at the midpoint of the annealing schedule.
    """
    gamma_old, beta_old = optimal_params_p1[:1], optimal_params_p1[1:]
    # For p=2, duplicate the p=1 parameters as starting point
    gamma_new = np.concatenate([gamma_old, gamma_old])
    beta_new = np.concatenate([beta_old, beta_old])
    return np.concatenate([gamma_new, beta_new])

theta0_p2 = interp_from_lower_p(best_params, p_new=2)

Strategy 4: Grid Search Warm Start

For small $p$, scan a coarse grid of $(\gamma, \beta)$ values and start the optimizer from the best grid point:

python
def grid_search_init(n_qubits, edges, p, cost_H, grid_size=20):
    """Find the best starting point from a coarse grid."""
    best_energy = float('inf')
    best_theta = None

    gamma_vals = np.linspace(0, np.pi, grid_size)
    beta_vals = np.linspace(0, np.pi / 2, grid_size)

    for g in gamma_vals:
        for b in beta_vals:
            theta = np.array([g] * p + [b] * p)
            energy = qaoa_cost(theta, n_qubits, edges, p, cost_H)
            if energy < best_energy:
                best_energy = energy
                best_theta = theta.copy()

    return best_theta, best_energy

grid_theta, grid_energy = grid_search_init(n_qubits, edges, p, cost_hamiltonian)
print(f"Grid search best: cost={grid_energy:.6f} at {grid_theta}")

Scaling with Problem Size and QAOA Depth

Circuit Size

The number of gates in a QAOA circuit scales as:

| Component | Gate Count | Description |
| --- | --- | --- |
| Initialization | $n$ | $n$ Hadamard gates |
| Cost unitary | $\lvert E\rvert$ per layer | one RZZ gate per edge |
| Mixer unitary | $n$ per layer | one RX gate per qubit |
| Total per layer | $n + \lvert E\rvert$ | one- plus two-qubit gates |
| Total for $p$ layers | $n + p(n + \lvert E\rvert)$ | including initialization |

For a dense graph with $|E| = O(n^2)$ edges, the circuit contains $O(pn^2)$ gates. For sparse graphs with $|E| = O(n)$, the gate count is $O(pn)$.
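The closing formula of the table can be wrapped in a small helper (illustrative, not part of pyqpanda3) and checked against the ring-graph numbers in the scaling table below:

```python
def qaoa_gate_count(n_qubits, n_edges, p):
    """n Hadamards for initialization, then per layer: |E| RZZ + n RX gates."""
    return n_qubits + p * (n_edges + n_qubits)

# Ring graphs have |E| = n; reproduce a few rows of the scaling table
for n in (4, 8, 16, 20, 25):
    print(n, qaoa_gate_count(n, n, 1), qaoa_gate_count(n, n, 2))
```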

Classical Optimization Cost

The classical optimization cost depends on:

  • Parameter count: 2p parameters to optimize
  • Cost per evaluation: one circuit execution plus expectation computation
  • Optimizer iterations: typically O(100)~O(1000) for COBYLA
  • Number of restarts: 10~50 recommended

Total cost: $O(\text{restarts} \times \text{iterations} \times \text{cost per evaluation})$

Scaling Table for MaxCut

| Qubits | Edges (ring) | p=1 params | p=1 gates | p=2 params | p=2 gates | Sim time estimate |
| --- | --- | --- | --- | --- | --- | --- |
| 4 | 4 | 2 | 12 | 4 | 20 | < 1 sec |
| 8 | 8 | 2 | 24 | 4 | 40 | ~1 sec |
| 16 | 16 | 2 | 48 | 4 | 80 | ~10 sec |
| 20 | 20 | 2 | 60 | 4 | 100 | ~1 min |
| 25 | 25 | 2 | 75 | 4 | 125 | ~5 min |

Simulation times assume 20 restarts, 300 COBYLA iterations each.


Connection to QAOA Theory Guarantees

The Farhi-Goldstone-Gutmann Theorem

The original QAOA paper establishes that for any fixed $p$, QAOA produces a state $|\gamma, \beta\rangle$ such that, for certain problem classes, the probability of measuring the optimal solution is at least $1/\mathrm{poly}(n)$. This means QAOA can sample the optimal solution with polynomially many shots.

Concentration Property

For many graph families (including random regular graphs), the cost distribution of QAOA output concentrates around its mean as the problem size grows:

$$\Pr\left[\left|\frac{C(x)}{\mathbb{E}[C]} - 1\right| > \epsilon\right] \to 0 \quad \text{as } n \to \infty$$

This means that the expectation value $\langle H_C \rangle$ accurately predicts the typical cut value of sampled solutions.

Quantum Advantage Prospects

QAOA is a leading candidate for demonstrating quantum advantage on optimization problems. Theoretical evidence suggests:

  1. Shallow circuits (p=1,2) can already match or exceed simple classical heuristics on specific graph families.
  2. At moderate depth ($p \approx 10\sim 20$), QAOA may outperform the best classical approximation algorithms for certain graph classes.
  3. Warm-starting QAOA with classically computed solutions (e.g., Goemans-Williamson) can improve performance.

Limitations

  • Barren plateaus: For very deep circuits ($p \gg 1$), gradients can vanish exponentially, making optimization intractable.
  • Noise sensitivity: On real hardware, gate errors accumulate with depth, limiting practical p values.
  • Classical competition: Advanced classical algorithms (semidefinite programming, tensor networks) remain competitive for many problem sizes.

Adapting QAOA for Other Problems

The QAOA framework is not limited to MaxCut. Any combinatorial problem that can be expressed as a sum of local terms can be encoded.

MAX-2-SAT

A 2-SAT clause $(x_i \vee x_j)$, with a true literal encoded as the $Z$ eigenvalue $-1$, maps to the satisfied-clause Hamiltonian:

$$H_{ij} = I - \tfrac{1}{4}(I + Z_i)(I + Z_j) = \tfrac{3}{4} I - \tfrac{1}{4} Z_i - \tfrac{1}{4} Z_j - \tfrac{1}{4} Z_i Z_j$$

which is a sum of $Z_i$, $Z_j$, and $Z_i Z_j$ terms (plus a constant). The QAOA circuit uses the same RZZ and RX gates, with additional single-qubit RZ rotations for the linear $Z$ terms.
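The clause mapping can be verified by enumeration. The sketch below assumes the convention that a true literal corresponds to the $Z$ eigenvalue $-1$ (so $z = 1 - 2x$) and uses the satisfied-clause operator $I - \tfrac{1}{4}(I + Z_i)(I + Z_j)$, whose eigenvalue should equal the classical clause value on every assignment:

```python
def clause_value(x_i, x_j):
    """Classical value of the clause (x_i OR x_j): 1 if satisfied, else 0."""
    return 1 if (x_i or x_j) else 0

def clause_hamiltonian_eigenvalue(x_i, x_j):
    """Eigenvalue of H_ij = I - (1/4)(I + Z_i)(I + Z_j) on |x_i x_j>,
    using the convention bit 1 (true) <-> Z eigenvalue -1, i.e. z = 1 - 2x."""
    z_i, z_j = 1 - 2 * x_i, 1 - 2 * x_j
    return 1 - 0.25 * (1 + z_i) * (1 + z_j)

# The operator fails only on the all-false assignment, matching the clause
for x_i in (0, 1):
    for x_j in (0, 1):
        assert clause_hamiltonian_eigenvalue(x_i, x_j) == clause_value(x_i, x_j)
print("H_ij reproduces the clause value on all four assignments")
```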

Weighted MaxCut

Assign weight $w_{ij}$ to each edge:

$$H_C = \sum_{(i,j) \in E} w_{ij} Z_i Z_j$$

The only change is in the PauliOperator coefficients; the circuit structure is identical.

Portfolio Optimization

Binary encoding of asset selection maps to:

$$H_C = -\mu^{T} x + \lambda\, x^{T} \Sigma x$$

where $\mu$ is the vector of expected returns, $\Sigma$ is the covariance matrix, and $\lambda$ trades off risk against return. Substituting $x_i = (1 - Z_i)/2$ produces both $Z_i$ (linear) and $Z_i Z_j$ (quadratic) terms.
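The substitution $x_i = (1 - z_i)/2$ can be carried out mechanically. The numpy sketch below (illustrative helper names; it assumes the cost takes the form $-\mu^T x + \lambda\, x^T \Sigma x$) collects the constant, linear, and quadratic Ising coefficients, then checks the expansion against direct evaluation:

```python
import numpy as np

def portfolio_to_ising(mu, Sigma, lam):
    """Collect H_C = -mu^T x + lam * x^T Sigma x, with x_i = (1 - z_i)/2,
    into a constant, linear coefficients h (for z_i), and couplings J (for z_i z_j)."""
    mu = np.asarray(mu, dtype=float)
    Sigma = np.asarray(Sigma, dtype=float)
    const = -mu.sum() / 2 + lam * Sigma.sum() / 4
    # Linear terms: +mu_i/2 from the return, -lam/4 * (row + column sums) from risk
    h = mu / 2 - lam * (Sigma.sum(axis=0) + Sigma.sum(axis=1)) / 4
    J = lam * Sigma / 4  # quadratic terms (diagonal is constant since z_i^2 = 1)
    return const, h, J

def ising_energy(const, h, J, z):
    z = np.asarray(z, dtype=float)
    return const + h @ z + z @ J @ z

def direct_cost(mu, Sigma, lam, x):
    x = np.asarray(x, dtype=float)
    return -np.asarray(mu, dtype=float) @ x + lam * x @ np.asarray(Sigma, dtype=float) @ x

# Toy 3-asset instance (numbers are illustrative only)
mu = [0.10, 0.20, 0.15]
Sigma = [[0.05, 0.01, 0.00],
         [0.01, 0.06, 0.02],
         [0.00, 0.02, 0.04]]
lam = 0.5
const, h, J = portfolio_to_ising(mu, Sigma, lam)

# The two forms agree on every selection x (spins z = 1 - 2x)
x = [1, 0, 1]
z = [1 - 2 * xi for xi in x]
print(np.isclose(ising_energy(const, h, J, z), direct_cost(mu, Sigma, lam, x)))  # True
```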


Summary

| Concept | Implementation in pyqpanda3 |
| --- | --- |
| Cost Hamiltonian | hamiltonian.PauliOperator({"Z0 Z1": 1.0, ...}) |
| Cost unitary | core.RZZ(i, j, 2 * gamma) for each edge |
| Mixer unitary | core.RX(q, 2 * beta) for each qubit |
| Expectation value | core.expval_pauli_operator(prog, cost_H) |
| Sampling | core.CPUQVM().run(prog, shots); result.get_counts() |
| Classical optimization | scipy.optimize.minimize with COBYLA |


Released under the MIT License.