QPanda3
0.1.0
Supported by OriginQ
Prev Tutorial: Hellinger Fidelity
Next Tutorial: Unitary
KL divergence in the quantum domain is an important concept used to measure the difference, or information loss, between two quantum probability distributions. The following is a detailed introduction to KL divergence in the quantum setting:
KL divergence, also known as information gain or relative entropy, is widely used in information theory and statistics to evaluate the information loss incurred when a theoretical or fitted distribution Q approximates a true distribution P. In the quantum domain, the KL divergence can also be used to measure the difference between two quantum states or quantum probability distributions.
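Before the quantum case, the classical definition is worth making concrete. The sketch below (plain NumPy, not a QPanda3 call) computes \(D_{KL}(P\|Q) = \sum_i p_i \log(p_i/q_i)\) for two discrete distributions, using the natural logarithm and the convention that terms with \(p_i = 0\) contribute zero:

```python
import numpy as np

def kl_divergence(p, q):
    """Classical KL divergence D(P||Q) = sum_i p_i * log(p_i / q_i).

    Uses the natural logarithm; terms with p_i == 0 are taken as 0.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0  # 0 * log(0/q) -> 0 by convention
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Information lost when the uniform distribution Q approximates P
p = [0.75, 0.25]
q = [0.5, 0.5]
print(kl_divergence(p, q))  # ≈ 0.1308
print(kl_divergence(p, p))  # 0.0: identical distributions
```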
For quantum states ρ and σ, their KL divergence is usually defined as
\[ D_{KL}(\rho\|\sigma) = \mathrm{tr}\big(\rho\,(\log\rho - \log\sigma)\big) \]
where \(\mathrm{tr}\) denotes the matrix trace, the sum of the elements on the diagonal of the matrix. This formula is the standard form of the quantum KL divergence (also called the quantum relative entropy) and is used to quantify the difference between two quantum states. Note that for matrices the logarithm is taken of each operator separately, so the expression \(\log\rho - \log\sigma\) replaces the classical ratio \(\log(p/q)\).
KL divergence is asymmetric, that is, \(D_{KL}(\rho\|\sigma) \neq D_{KL}(\sigma\|\rho)\) in general. This means that when using KL divergence to measure the difference between two quantum states, the order of the arguments matters.
KL divergence is always non-negative: \(D_{KL}(\rho\|\sigma) \geq 0\), with equality if and only if \(\rho = \sigma\). In other words, the KL divergence between two quantum states is 0 exactly when the states are identical.
Information loss: KL divergence can be seen as the amount of extra information needed when using the distribution \(\sigma\) to encode samples drawn from the distribution \(\rho\). It measures the loss incurred when one distribution is used to approximate another.
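The definition and the two properties above can be checked numerically. The sketch below is plain NumPy rather than the QPanda3 `KL_divergence` API: it evaluates \(\mathrm{tr}(\rho(\log\rho - \log\sigma))\) via eigendecomposition, assuming both density matrices are full rank so the matrix logarithms exist:

```python
import numpy as np

def quantum_kl_divergence(rho, sigma):
    """Quantum relative entropy D(rho||sigma) = tr(rho (log rho - log sigma)).

    Illustration only (not the QPanda3 API). Assumes rho and sigma are
    Hermitian, full-rank density matrices, so log is well defined.
    """
    p, U = np.linalg.eigh(rho)                    # rho = U diag(p) U^dagger
    q, V = np.linalg.eigh(sigma)                  # sigma = V diag(q) V^dagger
    log_rho = U @ np.diag(np.log(p)) @ U.conj().T
    log_sigma = V @ np.diag(np.log(q)) @ V.conj().T
    return float(np.real(np.trace(rho @ (log_rho - log_sigma))))

rho = np.array([[0.75, 0.0], [0.0, 0.25]])
sigma = np.array([[0.5, 0.0], [0.0, 0.5]])

print(quantum_kl_divergence(rho, rho))    # 0.0: identical states
print(quantum_kl_divergence(rho, sigma))  # ≈ 0.1308
print(quantum_kl_divergence(sigma, rho))  # ≈ 0.1438, confirming asymmetry
```

For these diagonal states the result coincides with the classical KL divergence of their eigenvalue distributions, which is a useful sanity check.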
Here is the API doc for KL_divergence: