povm_gram[source]

povm_gram(E, normalized=True)

The Gram matrix is the matrix of pairwise inner products between the POVM elements. By default, we return the normalized Gram matrix: given POVM elements $\{\hat{E}\}$:

$$ G_{i,j} = tr(\frac{\hat{E}_{i}}{tr \hat{E}_{i}} \frac{\hat{E}_{j}}{tr \hat{E}_{j}})$$

Otherwise:

$$ G_{i,j} = tr(\hat{E}_{i}\hat{E}_{j}) $$

We can check that a SIC-POVM indeed has the correct Gram matrix:

d = 4
assert np.allclose(povm_gram(sic_povm(d)), sic_gram(d), rtol=0.001)

Why not check out the Hoggar POVM too?

assert np.allclose(povm_gram(hoggar_povm()), sic_gram(8), atol=0.001)

povm_phi[source]

povm_phi(E)

Given POVM elements $\{\hat{E}\}$, we first form the matrix:

$$ \hat{\Phi}^{-1}_{i,j} = tr(\hat{E}_{i} \frac{\hat{E}_{j}}{tr \hat{E}_{j}}) $$

And then take its inverse to construct $\hat{\Phi}$, the magical quantum transition matrix.
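
For example, here's a quick sketch (reusing sic_povm, dm_probs, and qt.rand_dm as in the examples below) checking that applying $\hat{\Phi}$ to a probability vector gives the quasiprobabilities that reconstruct $\rho$, which is exactly what probs_dm below does for us:

d = 2
povm = sic_povm(d)
phi = povm_phi(povm)
rho = qt.rand_dm(d)
p = dm_probs(rho, povm) # probabilities with respect to the POVM
q = phi @ p # quasiprobabilities
assert np.allclose(rho, sum([q[i]*povm[i]/povm[i].tr() for i in range(len(povm))]))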

quantumness[source]

quantumness(povm=None, phi=None)

A measure of the "quantumness" of a POVM:

$$ \left\lVert \hat{I} - \hat{\Phi} \right\rVert = \sqrt{tr(\hat{I}-\hat{\Phi})(\hat{I}-\hat{\Phi})} $$

In other words, the Frobenius distance (the 2-norm of the vector of singular values) between the magical quantum coherence matrix $\hat{\Phi}$ (aka the "Born matrix") and the identity.

The idea is that the difference between "classical" and "quantum" probabilities amounts to whether or not you stick $\hat{\Phi}$ in between your conditional probability matrix and your vector of probabilities. In the case of complex vector spaces, it's been proven that SIC-POVM's minimize this distance under any unitarily invariant norm, such as the Frobenius norm. In other words, SIC-POVM's minimize the quantum deformation of the law of total probability.
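
As a quick sanity check (a sketch, assuming quantumness constructs $\hat{\Phi}$ itself when only the POVM is given), a SIC-POVM should score no higher than a generic rank-1 Weyl-Heisenberg POVM:

d = 2
sic_q = quantumness(sic_povm(d)) # SIC-POVM's minimize the quantumness
generic_q = quantumness(weyl_heisenberg_povm(qt.rand_ket(d)))
assert sic_q <= generic_q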

dm_probs[source]

dm_probs(rho, E)

Given a density matrix $\rho$, expands it in the basis provided by POVM elements $\{\hat{E}\}$, giving a probability vector $\vec{p}$.

$$ p_{i} = tr(\hat{E}_{i}\rho) $$
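
Since $tr(\hat{E}_{i} \mid \psi \rangle \langle \psi \mid) = \langle \psi \mid \hat{E}_{i} \mid \psi \rangle$, for a pure state these probabilities are just the expectation values of the POVM elements; a minimal check using QuTiP's expect:

d = 2
psi = qt.rand_ket(d)
povm = sic_povm(d)
assert np.allclose(dm_probs(psi*psi.dag(), povm), [qt.expect(E, psi) for E in povm])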

probs_dm[source]

probs_dm(p, E, phi=None)

Given a probability vector $\vec{p}$ and a POVM $\{\hat{E}\}$, recovers the density matrix $\rho$. If $\hat{\Phi}$, the magical quantum coherence matrix, isn't provided, we first construct it, and then form the vector of quasiprobabilities $\vec{q} = \hat{\Phi}\vec{p}$. Then:

$$ \rho = \sum_{i} q_{i}\frac{\hat{E}_{i}}{tr \hat{E}_{i}}$$

Let's try it with a SIC-POVM:

d = 4
povm = sic_povm(d)
rho = qt.rand_dm(d)
p = dm_probs(rho, povm)
assert np.isclose(sum(p), 1)
assert np.allclose(rho, probs_dm(p, povm))

A rank-1 Weyl-Heisenberg POVM:

d = 4
povm = weyl_heisenberg_povm(qt.rand_ket(d))
rho = qt.rand_dm(d)
p = dm_probs(rho, povm)
assert np.isclose(sum(p), 1)
assert np.allclose(rho, probs_dm(p, povm))

And a general Weyl-Heisenberg POVM:

d = 4
povm = weyl_heisenberg_povm(qt.rand_dm(d))
rho = qt.rand_dm(d)
p = dm_probs(rho, povm)
assert np.isclose(sum(p), 1)
assert np.allclose(rho, probs_dm(p, povm))

conditional_probs[source]

conditional_probs(A, B)

Given two POVM's $\{\hat{A}\}$ and $\{\hat{B}\}$ (or PVM's), constructs the matrix of conditional probabilities $r(j|i)$ for outcome $A_{j}$ given outcome $B_{i}$:

$$\hat{R}_{j,i} = tr(\hat{A}_{j}\frac{\hat{B}_{i}}{tr \hat{B}_{i}})$$

We can use the conditional probability matrix to calculate the probabilities of a PVM (Von Neumann) measurement after a SIC-POVM measurement whose outcome we're ignorant of.

After the initial POVM measurement:

$$ \rho^{\prime} = \sum_{i} p_{i}\frac{\hat{E}_{i}}{tr \hat{E}_{i}} $$

Where $p_{i}$ are the probabilities $\vec{p}$ with respect to the POVM, and $\frac{\hat{E}_{i}}{tr \hat{E}_{i}}$ are the outcome states. We can then get the probabilities for the later PVM via $tr (\Pi_{j}\rho^{\prime})$. This should be the same as $\hat{R}\vec{p}$.

d = 3
povm = sic_povm(d)
rho = qt.rand_dm(d)
p = dm_probs(rho, povm)

post_povm_rho = sum([prob*povm[i]/povm[i].tr() for i, prob in enumerate(p)])

H = qt.rand_herm(d)
pvm = [v*v.dag() for v in H.eigenstates()[1]]
R = conditional_probs(pvm, povm)

assert np.allclose(dm_probs(post_povm_rho, pvm), R @ p)

Let's try it with a rank-1 Weyl-Heisenberg POVM:

d = 3
povm = weyl_heisenberg_povm(qt.rand_ket(d))
rho = qt.rand_dm(d)
p = dm_probs(rho, povm)

post_povm_rho = sum([prob*povm[i]/povm[i].tr() for i, prob in enumerate(p)])

H = qt.rand_herm(d)
pvm = [v*v.dag() for v in H.eigenstates()[1]]
R = conditional_probs(pvm, povm)

assert np.allclose(dm_probs(post_povm_rho, pvm), R @ p)

And a general Weyl-Heisenberg POVM:

d = 3
povm = weyl_heisenberg_povm(qt.rand_dm(d))
rho = qt.rand_dm(d)
p = dm_probs(rho, povm)

post_povm_rho = sum([prob*povm[i]/povm[i].tr() for i, prob in enumerate(p)])

H = qt.rand_herm(d)
pvm = [v*v.dag() for v in H.eigenstates()[1]]
R = conditional_probs(pvm, povm)

assert np.allclose(dm_probs(post_povm_rho, pvm), R @ p)

And if the second measurement is a POVM:

d = 3
povm = sic_povm(d)
rho = qt.rand_dm(d)
p = dm_probs(rho, povm)

post_povm_rho = sum([prob*povm[i]/povm[i].tr() for i, prob in enumerate(p)])

povm2 = weyl_heisenberg_povm(qt.rand_dm(d))
R = conditional_probs(povm2, povm)

assert np.allclose(dm_probs(post_povm_rho, povm2), R @ p)

On the other hand, we can get the quantum probabilities in the case that we go directly to the second measurement by sticking the magical quantum coherence matrix $\hat{\Phi}$ in the middle:

$$ \vec{q} = \hat{R} \hat{\Phi} \vec{p} $$

This should be the same as $tr(\hat{F}_{i} \rho)$, where $\{\hat{F}\}$ is the second POVM or PVM.

In the case of a PVM:

d = 3
povm = sic_povm(d)
phi = povm_phi(povm)
rho = qt.rand_dm(d)
p = dm_probs(rho, povm)

H = qt.rand_herm(d)
pvm = [v*v.dag() for v in H.eigenstates()[1]]
R = conditional_probs(pvm, povm)

assert np.allclose(dm_probs(rho, pvm), R @ phi @ p)

In the case of a POVM:

d = 3
povm = sic_povm(d)
phi = povm_phi(povm)
rho = qt.rand_dm(d)
p = dm_probs(rho, povm)

povm2 = weyl_heisenberg_povm(qt.rand_dm(d))
R = conditional_probs(povm2, povm)

assert np.allclose(dm_probs(rho, povm2), R @ phi @ p)

This also gives us a way of representing quantum time evolution.

The usual way of obtaining the evolved probabilities is simply to evolve $\rho$ via some unitary, $\hat{U}\rho\hat{U}^{\dagger}$, and then find the probabilities with respect to the original POVM. Alternatively, we could leave $\rho$ alone and get the same answer by finding the probabilities with respect to the time-reverse-evolved POVM, in other words, one whose elements have been evolved as $\hat{U}^{\dagger}\hat{E}\hat{U}$.

In terms of the POVM formalism, we form the conditional probability matrix $\hat{R}_{j,i}$ for outcome $j$ of the evolved POVM given outcome $i$ of the original POVM, and take its transpose. The evolved probabilities are then $\hat{R}^{T} \hat{\Phi} \vec{p}$. The same effect could be obtained by working with the conditional probabilities for the reverse-evolved POVM given the original POVM, or the conditional probabilities for the original POVM given the evolved POVM.

d = 3
povm = weyl_heisenberg_povm(qt.rand_dm(d))
phi = povm_phi(povm)
rho = qt.rand_dm(d)
p = dm_probs(rho, povm)

U = qt.rand_unitary(d)
evolved_rho = U*rho*U.dag()
evolved_povm = [U*e*U.dag() for e in povm]
reverse_evolved_povm = [U.dag()*e*U for e in povm]
evolved_povm_given_povm = conditional_probs(evolved_povm, povm)

assert np.allclose(dm_probs(evolved_rho, povm), dm_probs(rho, reverse_evolved_povm))
assert np.allclose(dm_probs(evolved_rho, povm), evolved_povm_given_povm.T @ phi @ p)
assert np.allclose(dm_probs(evolved_rho, povm), conditional_probs(reverse_evolved_povm, povm) @ phi @ p)
assert np.allclose(dm_probs(evolved_rho, povm), conditional_probs(povm, evolved_povm) @ phi @ p)

Indeed, it's worth observing in this connection that:

$$ \hat{R}^{T} \hat{\Phi} \hat{R} \hat{\Phi} = \hat{I}_{d^2}$$

And $\hat{\Phi}$ will be the same for both the original POVM and the evolved POVM since unitary evolution preserves the inner product.

assert np.allclose(evolved_povm_given_povm.T @ phi @ evolved_povm_given_povm @ phi, qt.identity(d**2))

Interestingly, in the case of a SIC-POVM, $\hat{R}^{T} \hat{\Phi}$ and $\hat{R} \hat{\Phi}$, which are inverses, will be each other's transpose, implying that they are orthogonal matrices.

d = 2
povm = sic_povm(d)
phi = povm_phi(povm)

U = qt.rand_unitary(d)
evolved_povm = [U*e*U.dag() for e in povm]
evolved_povm_given_povm = conditional_probs(evolved_povm, povm)

A = evolved_povm_given_povm.T @ phi
Ainv = evolved_povm_given_povm @ phi
assert np.allclose(A, Ainv.T)

Finally, note how operators compose:

d = 2
rho = qt.rand_dm(2)

povm = sic_povm(d)
phi = povm_phi(povm)

U = qt.rand_unitary(d)

rho_evolved = U*U*rho*U.dag()*U.dag()

evolved_povm = [U*e*U.dag() for e in povm]
evolved_povm_given_povm = conditional_probs(evolved_povm, povm)

assert np.allclose(dm_probs(rho_evolved, povm), evolved_povm_given_povm.T @ phi @ evolved_povm_given_povm.T @ phi @ dm_probs(rho, povm))

quantum_inner_product[source]

quantum_inner_product(r, s, povm)

The quantum inner product expressed in terms of probability vectors.

$$ tr(\sigma\rho) = \vec{r} \hat{G}^{-1} \vec{s}$$

Where $\vec{r}$ is the probability vector for $\rho$ and $\vec{s}$ is the probability vector for $\sigma$ with respect to the same POVM, and $\hat{G}^{-1}$ is the inverse of the unnormalized Gram matrix.

d = 4
povm = weyl_heisenberg_povm(qt.rand_ket(d))
rho = qt.rand_dm(d)
sigma = qt.rand_dm(d)

r = dm_probs(rho, povm)
s = dm_probs(sigma, povm)

assert np.isclose((rho*sigma).tr(), quantum_inner_product(r, s, povm))

tensor_povm[source]

tensor_povm(*povms)

Forms the tensor product of a list of POVM's, which is itself a POVM.
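
A quick sanity check (a sketch, reusing sic_povm, weyl_heisenberg_povm, and dm_probs from above): the tensored elements should still sum to the identity on the composite space, and the resulting probabilities should still sum to $1$:

d = 2
A = sic_povm(d)
B = weyl_heisenberg_povm(qt.rand_ket(d))
AB = tensor_povm(A, B)
assert np.allclose(sum(AB), qt.tensor(qt.identity(d), qt.identity(d)))

rho = qt.rand_dm(d**2)
rho.dims = [[d, d], [d, d]]
assert np.isclose(sum(dm_probs(rho, AB)), 1)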

upgrade_povm[source]

upgrade_povm(E, i, dims)

Upgrades a POVM to act on the $i^{th}$ subspace of a tensor product space whose subspace dimensions are given by the list dims. If dims is an integer, we take it to be the number of subspaces, each with the same dimensionality as the POVM.

For example, we can get the probabilities for a partial state by upgrading a POVM of the right dimensionality on that subsystem.

entangled = qt.rand_dm(4)
entangled.dims = [[2,2],[2,2]]
povm2 = sic_povm(2)

assert np.allclose(dm_probs(entangled.ptrace(0), povm2), dm_probs(entangled, upgrade_povm(povm2, 0, 2)))
assert np.allclose(dm_probs(entangled.ptrace(1), povm2), dm_probs(entangled, upgrade_povm(povm2, 1, 2)))

apply_dims[source]

apply_dims(E, dims)

Helper function which sets the tensor dimensions of each POVM element to dims.

implement_povm[source]

implement_povm(E)

Returns a unitary operator $\hat{U}$ implementing a given POVM on $H_{d} \otimes H_{n}$, where $d$ is the dimensionality of the original system and $n$ is the dimensionality of the auxiliary system, which equals the number of POVM elements.

d = 2
rho = qt.rand_dm(d)
povm = sic_povm(d)
U = implement_povm(povm)
state = U*qt.tensor(rho, qt.basis(d**2,0)*qt.basis(d**2,0).dag())*U.dag()
projectors = [qt.tensor(qt.identity(d), qt.basis(d**2, i)*qt.basis(d**2,i).dag()) for i in range(d**2)]

assert np.allclose(dm_probs(rho, povm), np.array([(proj*state).tr() for proj in projectors]))

discriminator_povm[source]

discriminator_povm(a, b)

Returns a non-informationally-complete POVM which has the special property of distinguishing between two arbitrary states $\mid a \rangle$ and $\mid b \rangle$, which are not necessarily orthogonal (something impossible with a standard PVM).

It has three elements:

$$ \hat{F}_{a} = \frac{1}{1+\mid\langle a \mid b \rangle\mid}(\hat{I} - \mid b \rangle \langle b \mid) $$

$$ \hat{F}_{b} = \frac{1}{1+\mid\langle a \mid b \rangle\mid}(\hat{I} - \mid a \rangle \langle a \mid) $$

$$ \hat{F}_{?} = \hat{I} - \hat{F}_{a} - \hat{F}_{b} $$

The first tests for "not $b$", the second tests for "not $a$", and the third outcome represents an inconclusive result.

We can see that the "discriminator POVM" is indeed a POVM, and that if the initial state is $\mid a \rangle \langle a \mid$, then the probability of $\hat{F}_{b}$ is $0$; and if the initial state is $\mid b \rangle \langle b \mid$, then the probability of $\hat{F}_{a}$ is $0$.

d = 2
a, b = qt.rand_ket(d), qt.rand_ket(d)
dpovm = discriminator_povm(a, b)
assert np.allclose(sum(dpovm), qt.identity(d))

arho = a*a.dag()
brho = b*b.dag()
print(dm_probs(arho, dpovm))
print(dm_probs(brho, dpovm))
print(dm_probs(dpovm[2]/dpovm[2].tr(), dpovm))
[0.08453251 0.         0.91546749]
[0.         0.08453251 0.91546749]
[0.02206577 0.02206577 0.95586847]