We can check that a SIC-POVM indeed has the correct Gram matrix:
d = 4
assert np.allclose(povm_gram(sic_povm(d)), sic_gram(d), rtol=0.001)
Why not check out the Hoggar POVM too?
assert np.allclose(povm_gram(hoggar_povm()), sic_gram(8), atol=0.001)
$$ \left\lVert \hat{I} - \hat{\Phi} \right\rVert = \sqrt{\operatorname{tr}\left[(\hat{I}-\hat{\Phi})^{\dagger}(\hat{I}-\hat{\Phi})\right]}$$
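To make the norm concrete, here is a self-contained numpy sketch for the qubit case. It builds the tetrahedron SIC directly (rather than calling this notebook's sic_povm), and it assumes $\hat{\Phi}$ is the inverse of the matrix $M_{ij} = \operatorname{tr}(\hat{E}_{i}\hat{E}_{j})/\operatorname{tr}\hat{E}_{j}$, which is the convention the asserts below imply for povm_phi:

```python
import numpy as np

# Qubit SIC-POVM from the Bloch vectors of a regular tetrahedron:
# E_i = (I + n_i . sigma)/4, so tr(E_i E_j) = (1 + n_i . n_j)/8.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
ns = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
povm = [(np.eye(2) + n[0]*X + n[1]*Y + n[2]*Z) / 4 for n in ns]

# Phi as the inverse of M_ij = tr(E_i E_j)/tr(E_j)
# (an assumption about povm_phi's convention)
M = np.array([[(np.trace(Ei @ Ej) / np.trace(Ej)).real for Ej in povm]
              for Ei in povm])
phi = np.linalg.inv(M)

# Frobenius norm of I - Phi, by the trace formula and by numpy's builtin
diff = np.eye(4) - phi
norm = np.sqrt(np.trace(diff.conj().T @ diff).real)
assert np.isclose(norm, np.linalg.norm(diff))
print(norm)  # 2*sqrt(3) ≈ 3.4641 for the qubit SIC
```

For the qubit SIC this recovers the known closed form $\hat{\Phi} = 3\hat{I} - \frac{1}{2}\hat{J}$ (with $\hat{J}$ the all-ones matrix), whence the norm $2\sqrt{3}$.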
Let's try it with a SIC-POVM:
d = 4
povm = sic_povm(d)
rho = qt.rand_dm(d)
p = dm_probs(rho, povm)
assert np.isclose(sum(p), 1)
assert np.allclose(rho, probs_dm(p, povm))
A rank-1 Weyl-Heisenberg POVM:
d = 4
povm = weyl_heisenberg_povm(qt.rand_ket(d))
rho = qt.rand_dm(d)
p = dm_probs(rho, povm)
assert np.isclose(sum(p), 1)
assert np.allclose(rho, probs_dm(p, povm))
And a general Weyl-Heisenberg POVM:
d = 4
povm = weyl_heisenberg_povm(qt.rand_dm(d))
rho = qt.rand_dm(d)
p = dm_probs(rho, povm)
assert np.isclose(sum(p), 1)
assert np.allclose(rho, probs_dm(p, povm))
We can use the conditional probability matrix to calculate the probabilities of a PVM (Von Neumann) measurement after a SIC-POVM measurement whose outcome we're ignorant of.
After the initial POVM measurement:
$$ \rho^{\prime} = \sum_{i} p_{i}\frac{\hat{E}_{i}}{\operatorname{tr} \hat{E}_{i}} $$
where $p_{i}$ are the probabilities $\vec{p}$ with respect to the POVM, and $\frac{\hat{E}_{i}}{\operatorname{tr} \hat{E}_{i}}$ are the outcome states. We can then get the probabilities for the later PVM via $\operatorname{tr}(\Pi_{j}\rho^{\prime})$. This should be the same as $\hat{R}\vec{p}$.
d = 3
povm = sic_povm(d)
rho = qt.rand_dm(d)
p = dm_probs(rho, povm)
post_povm_rho = sum([prob*povm[i]/povm[i].tr() for i, prob in enumerate(p)])
H = qt.rand_herm(d)
pvm = [v*v.dag() for v in H.eigenstates()[1]]
R = conditional_probs(pvm, povm)
assert np.allclose(dm_probs(post_povm_rho, pvm), R @ p)
Let's try it with a rank-1 Weyl-Heisenberg POVM:
d = 3
povm = weyl_heisenberg_povm(qt.rand_ket(d))
rho = qt.rand_dm(d)
p = dm_probs(rho, povm)
post_povm_rho = sum([prob*povm[i]/povm[i].tr() for i, prob in enumerate(p)])
H = qt.rand_herm(d)
pvm = [v*v.dag() for v in H.eigenstates()[1]]
R = conditional_probs(pvm, povm)
assert np.allclose(dm_probs(post_povm_rho, pvm), R @ p)
And a general Weyl-Heisenberg POVM:
d = 3
povm = weyl_heisenberg_povm(qt.rand_dm(d))
rho = qt.rand_dm(d)
p = dm_probs(rho, povm)
post_povm_rho = sum([prob*povm[i]/povm[i].tr() for i, prob in enumerate(p)])
H = qt.rand_herm(d)
pvm = [v*v.dag() for v in H.eigenstates()[1]]
R = conditional_probs(pvm, povm)
assert np.allclose(dm_probs(post_povm_rho, pvm), R @ p)
And if the second measurement is a POVM:
d = 3
povm = sic_povm(d)
rho = qt.rand_dm(d)
p = dm_probs(rho, povm)
post_povm_rho = sum([prob*povm[i]/povm[i].tr() for i, prob in enumerate(p)])
povm2 = weyl_heisenberg_povm(qt.rand_dm(d))
R = conditional_probs(povm2, povm)
assert np.allclose(dm_probs(post_povm_rho, povm2), R @ p)
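The cascaded-measurement check above can also be written self-containedly in numpy, using the qubit tetrahedron SIC in place of sic_povm and building a random PVM by hand (all names here are illustrative, not the notebook's own helpers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Qubit SIC-POVM: E_i = (I + n_i . sigma)/4 for tetrahedron Bloch vectors n_i
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
ns = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
povm = [(np.eye(2) + n[0]*X + n[1]*Y + n[2]*Z) / 4 for n in ns]

# Random density matrix rho = A A† / tr(A A†)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = A @ A.conj().T
rho /= np.trace(rho)

# Probabilities under the SIC, and the post-measurement state
p = np.array([np.trace(rho @ E).real for E in povm])
post = sum(pi * E / np.trace(E).real for pi, E in zip(p, povm))

# A random PVM from the eigenvectors of a random Hermitian matrix
H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = H + H.conj().T
_, vecs = np.linalg.eigh(H)
pvm = [np.outer(vecs[:, k], vecs[:, k].conj()) for k in range(2)]

# Conditional probabilities R[j, i] = tr(Pi_j E_i)/tr(E_i)
R = np.array([[(np.trace(Pj @ Ei) / np.trace(Ei)).real for Ei in povm]
              for Pj in pvm])

# PVM probabilities on the post-measurement state agree with R p
assert np.allclose([np.trace(post @ Pj).real for Pj in pvm], R @ p)
```

The agreement here is exact, since $\operatorname{tr}(\Pi_{j}\rho^{\prime}) = \sum_{i} p_{i}\operatorname{tr}(\Pi_{j}\hat{E}_{i})/\operatorname{tr}\hat{E}_{i}$ is just $(\hat{R}\vec{p})_{j}$ term by term.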
On the other hand, we can get the quantum probabilities in the case that we go directly to the second measurement by sticking the magical quantum coherence matrix $\hat{\Phi}$ in the middle:
$$ \vec{q} = \hat{R} \hat{\Phi} \vec{p} $$
This should be the same as $\operatorname{tr}(\hat{F}_{i} \rho)$, where $\{\hat{F}_{i}\}$ is the second POVM or PVM.
In the case of a PVM:
d = 3
povm = sic_povm(d)
phi = povm_phi(povm)
rho = qt.rand_dm(d)
p = dm_probs(rho, povm)
H = qt.rand_herm(d)
pvm = [v*v.dag() for v in H.eigenstates()[1]]
R = conditional_probs(pvm, povm)
assert np.allclose(dm_probs(rho, pvm), R @ phi @ p)
In the case of a POVM:
d = 3
povm = sic_povm(d)
phi = povm_phi(povm)
rho = qt.rand_dm(d)
p = dm_probs(rho, povm)
povm2 = weyl_heisenberg_povm(qt.rand_dm(d))
R = conditional_probs(povm2, povm)
assert np.allclose(dm_probs(rho, povm2), R @ phi @ p)
This also gives us a way of representing quantum time evolution.
The usual way of obtaining the evolved probabilities is simply to evolve $\rho$ via some unitary, $\hat{U}\rho\hat{U}^{\dagger}$, and then find the probabilities with respect to the original POVM. Alternatively, we could leave $\rho$ alone and get the same answer by finding the probabilities with respect to the time-reverse-evolved POVM, in other words, one whose elements have been evolved as $\hat{U}^{\dagger}\hat{E}\hat{U}$.
In terms of the POVM formalism, we form the conditional probability matrix $R_{j,i}$ for outcome $j$ of the evolved POVM given outcome $i$ of the original POVM, and take its transpose. The evolved probabilities are then $\hat{R}^{T} \hat{\Phi} \vec{p}$. The same effect could be obtained by working with the conditional probabilities for the reverse-evolved POVM given the original POVM, or with the conditional probabilities for the original POVM given the evolved POVM.
d = 3
povm = weyl_heisenberg_povm(qt.rand_dm(d))
phi = povm_phi(povm)
rho = qt.rand_dm(d)
p = dm_probs(rho, povm)
U = qt.rand_unitary(d)
evolved_rho = U*rho*U.dag()
evolved_povm = [U*e*U.dag() for e in povm]
reverse_evolved_povm = [U.dag()*e*U for e in povm]
evolved_povm_given_povm = conditional_probs(evolved_povm, povm)
assert np.allclose(dm_probs(evolved_rho, povm), dm_probs(rho, reverse_evolved_povm))
assert np.allclose(dm_probs(evolved_rho, povm), evolved_povm_given_povm.T @ phi @ p)
assert np.allclose(dm_probs(evolved_rho, povm), conditional_probs(reverse_evolved_povm, povm) @ phi @ p)
assert np.allclose(dm_probs(evolved_rho, povm), conditional_probs(povm, evolved_povm) @ phi @ p)
Indeed, it's worth observing in this connection that:
$$ \hat{R}^{T} \hat{\Phi} \hat{R} \hat{\Phi} = \hat{I}_{d^2}$$
And $\hat{\Phi}$ will be the same for both the original POVM and the evolved POVM since unitary evolution preserves the inner product.
assert np.allclose(evolved_povm_given_povm.T @ phi @ evolved_povm_given_povm @ phi, qt.identity(d**2))
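The same identity can be verified in a self-contained numpy sketch: build the qubit tetrahedron SIC, draw a random unitary from the QR decomposition of a random complex matrix, and check that $\hat{R}^{T}\hat{\Phi}$ inverts $\hat{R}\hat{\Phi}$ (again, the $\hat{\Phi}$ convention is an assumption matching the asserts above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Qubit SIC-POVM from tetrahedron Bloch vectors
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
ns = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
povm = [(np.eye(2) + n[0]*X + n[1]*Y + n[2]*Z) / 4 for n in ns]

# Phi as the inverse of M_ij = tr(E_i E_j)/tr(E_j)
M = np.array([[(np.trace(Ei @ Ej) / np.trace(Ej)).real for Ej in povm]
              for Ei in povm])
phi = np.linalg.inv(M)

# Random unitary via QR of a random complex matrix
Q, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
evolved = [Q @ E @ Q.conj().T for E in povm]

# R[j, i] = tr(E'_j E_i)/tr(E_i): evolved-POVM outcomes given original outcomes
R = np.array([[(np.trace(Fj @ Ei) / np.trace(Ei)).real for Ei in povm]
              for Fj in evolved])

# R Phi implements evolution by U†, R^T Phi implements evolution by U:
# composed, they give the identity on probability vectors
assert np.allclose(R.T @ phi @ R @ phi, np.eye(4))
```

Note that trading $\hat{R}^{T}$ for the reverse-evolved conditional matrix relies on all the POVM elements having equal trace, which holds for any Weyl-Heisenberg covariant POVM.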
Interestingly, in the case of a SIC-POVM, $\hat{R}^{T} \hat{\Phi}$ and $\hat{R} \hat{\Phi}$, which are inverses, will be each other's transpose, implying that they are orthogonal matrices.
d = 2
povm = sic_povm(d)
phi = povm_phi(povm)
U = qt.rand_unitary(d)
evolved_povm = [U*e*U.dag() for e in povm]
evolved_povm_given_povm = conditional_probs(evolved_povm, povm)
A = evolved_povm_given_povm.T @ phi
Ainv = evolved_povm_given_povm @ phi
assert np.allclose(A, Ainv.T)
Finally, note how operators compose:
d = 2
rho = qt.rand_dm(d)
povm = sic_povm(d)
phi = povm_phi(povm)
U = qt.rand_unitary(d)
rho_evolved = U*U*rho*U.dag()*U.dag()
evolved_povm = [U*e*U.dag() for e in povm]
evolved_povm_given_povm = conditional_probs(evolved_povm, povm)
assert np.allclose(dm_probs(rho_evolved, povm), evolved_povm_given_povm.T @ phi @ evolved_povm_given_povm.T @ phi @ dm_probs(rho, povm))
d = 4
povm = weyl_heisenberg_povm(qt.rand_ket(d))
rho = qt.rand_dm(d)
sigma = qt.rand_dm(d)
r = dm_probs(rho, povm)
s = dm_probs(sigma, povm)
assert np.isclose((rho*sigma).tr(), quantum_inner_product(r, s, povm))
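quantum_inner_product is this notebook's helper, and the check above works for any informationally complete POVM. For a SIC specifically, the inner product has the well-known closed form $\operatorname{tr}(\rho\sigma) = d(d+1)\,\vec{r}\cdot\vec{s} - 1$, which we can verify self-containedly for a qubit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Qubit SIC-POVM from tetrahedron Bloch vectors
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
ns = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
povm = [(np.eye(2) + n[0]*X + n[1]*Y + n[2]*Z) / 4 for n in ns]

def rand_dm(rng):
    # Random density matrix rho = A A† / tr(A A†)
    A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    rho = A @ A.conj().T
    return rho / np.trace(rho)

rho, sigma = rand_dm(rng), rand_dm(rng)
r = np.array([np.trace(rho @ E).real for E in povm])
s = np.array([np.trace(sigma @ E).real for E in povm])

# For a SIC in dimension d: tr(rho sigma) = d(d+1) r.s - 1
d = 2
assert np.isclose(np.trace(rho @ sigma).real, d*(d + 1)*np.dot(r, s) - 1)
```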
For example, we can get the probabilities for a subsystem's reduced state by upgrading a POVM of the appropriate dimensionality to act on that subsystem.
entangled = qt.rand_dm(4)
entangled.dims = [[2,2],[2,2]]
povm2 = sic_povm(2)
assert np.allclose(dm_probs(entangled.ptrace(0), povm2), dm_probs(entangled, upgrade_povm(povm2, 0, 2)))
assert np.allclose(dm_probs(entangled.ptrace(1), povm2), dm_probs(entangled, upgrade_povm(povm2, 1, 2)))
d = 2
rho = qt.rand_dm(d)
povm = sic_povm(d)
U = implement_povm(povm)
state = U*qt.tensor(rho, qt.basis(d**2,0)*qt.basis(d**2,0).dag())*U.dag()
projectors = [qt.tensor(qt.identity(d), qt.basis(d**2, i)*qt.basis(d**2,i).dag()) for i in range(d**2)]
assert np.allclose(dm_probs(rho, povm), np.array([(proj*state).tr() for proj in projectors]))
We can see that the "discriminator POVM" is indeed a POVM, and that if the initial state is $\mid a \rangle \langle a \mid$, then the probability of $\hat{F}_{b}$ is $0$; and if the initial state is $\mid b \rangle \langle b \mid$, then the probability of $\hat{F}_{a}$ is $0$.
d = 2
a, b = qt.rand_ket(d), qt.rand_ket(d)
dpovm = discriminator_povm(a, b)
assert np.allclose(sum(dpovm), qt.identity(d))
arho = a*a.dag()
brho = b*b.dag()
print(dm_probs(arho, dpovm))
print(dm_probs(brho, dpovm))
print(dm_probs(dpovm[2]/dpovm[2].tr(), dpovm))
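For reference, the standard unambiguous-discrimination construction behaves like discriminator_povm (this is an assumption about its internals; the notebook's own definition may differ): take $\hat{F}_{a} \propto \hat{I} - \mid b \rangle \langle b \mid$ and $\hat{F}_{b} \propto \hat{I} - \mid a \rangle \langle a \mid$, scaled so a third "inconclusive" element soaks up the rest. A self-contained numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_ket(d, rng):
    # Random normalized complex vector
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

a, b = rand_ket(2, rng), rand_ket(2, rng)
overlap = abs(np.vdot(a, b))

# Scale so the inconclusive element F_? = I - F_a - F_b stays positive
c = 1 / (1 + overlap)
Fa = c * (np.eye(2) - np.outer(b, b.conj()))   # never fires on |b>
Fb = c * (np.eye(2) - np.outer(a, a.conj()))   # never fires on |a>
Fq = np.eye(2) - Fa - Fb                       # inconclusive outcome

# The three elements form a POVM...
assert np.allclose(Fa + Fb + Fq, np.eye(2))
assert all(np.all(np.linalg.eigvalsh(F) > -1e-12) for F in (Fa, Fb, Fq))

# ...and discriminate unambiguously: Fb never fires on |a>, Fa never on |b>
assert np.isclose(np.vdot(a, Fb @ a).real, 0)
assert np.isclose(np.vdot(b, Fa @ b).real, 0)
```

The scale $c = 1/(1 + \lvert\langle a \mid b \rangle\rvert)$ is the largest that keeps $\hat{F}_{?}$ positive semidefinite, minimizing the inconclusive probability.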