On Gaussian Quantum Mechanics (and Negative Probabilities)

Negative Probabilities

Negative probabilities?! Impossible, you say. Well, as usual, there's a quantum mechanical twist to the story.

In normal probability theory, we can form the "joint probability distribution" over, say, two different random variables. But suppose, in a quantum mechanical context, we wanted to construct a joint probability distribution for both position and momentum. Position and momentum don't commute, and we can't know them both simultaneously with complete accuracy, so you might imagine this is going to throw a wrench into the machinery. And indeed, the quantum generalization of a joint probability distribution is called a "quasi-probability distribution." On the one hand, it is much like a normal joint probability distribution: in the case of position/momentum, it's a function of possible position and momentum values; if you integrate over momenta, you get the usual probability distribution for position; and if you integrate over positions, you get the usual probability distribution for momentum. The twist is that the quasi-probability distribution itself can take negative values. One can try to interpret these negative joint probabilities in various ways, but like the use of imaginary numbers in the intermediate steps of a calculation, which eventually cancel out, the negative probabilities can be viewed purely instrumentally, if you like. The upshot is that quasi-probability distributions form yet another way of formulating quantum mechanics: instead of working with wavefunctions, you can work with quasi-probability distributions with no loss of generality.


As a warm-up, let's consider the simple case of a single qubit. As we well know, the Pauli matrices don't commute, and so there is an intrinsic uncertainty associated with the qubit's rotation axis. To wit, if we measure the spin along the X direction, it ends up in an X eigenstate, either $\frac{1}{\sqrt{2}}(\mid \uparrow \rangle + \mid \downarrow \rangle)$ or $\frac{1}{\sqrt{2}}(\mid \uparrow \rangle - \mid \downarrow \rangle)$. But notice that these are both superpositions of Z eigenstates! So if we measure the spin along the X direction, and then measure along the Z direction, we'll get up or down with 50/50 probability. A definite value of spin along the X direction precludes a definite value of spin along the Z direction, and vice versa.

But what if we wanted to define a joint probability distribution for the spin to be up or down along the X axis and up or down along the Z axis? Here's one way to proceed. This construction, I believe, is originally due to Feynman.

Let's consider our X and Z eigenstates. We have:

$\mid z_{\uparrow} \rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $\mid z_{\downarrow}\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$

$\mid x_{\uparrow}\rangle = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ and $\mid x_{\downarrow}\rangle = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -1 \end{pmatrix}$

We can consider the projectors onto these eigenstates:

$P_{z_{\uparrow}} = \mid z_{\uparrow} \rangle \langle z_{\uparrow} \mid = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$ and $P_{z_{\downarrow}} = \mid z_{\downarrow} \rangle \langle z_{\downarrow} \mid = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$

$P_{x_{\uparrow}} = \mid x_{\uparrow} \rangle \langle x_{\uparrow} \mid = \frac{1}{2} \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$ and $P_{x_{\downarrow}} = \mid x_{\downarrow} \rangle \langle x_{\downarrow} \mid = \frac{1}{2} \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}$

We can use these, of course, to obtain the probabilities for a given state to be in one of the eigenstates. Given some qubit state $\mid \psi \rangle$, the expectation value of the projector in $\mid \psi \rangle$ gives the probability, e.g., $Pr(z_{\uparrow}) = \langle \psi \mid z_{\uparrow} \rangle \langle z_{\uparrow} \mid \psi \rangle$, which is clearly the amplitude squared, as usual.

Another way of writing these projectors is as:

$ \mid z_{\uparrow} \rangle \langle z_{\uparrow} \mid = \frac{1}{2}(I + Z) = \frac{1}{2}(\begin{pmatrix}1 & 0 \\ 0 & 1 \end{pmatrix} + \begin{pmatrix}1 & 0 \\ 0 & -1 \end{pmatrix}) = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$

$ \mid z_{\downarrow} \rangle \langle z_{\downarrow} \mid = \frac{1}{2}(I - Z) = \frac{1}{2}(\begin{pmatrix}1 & 0 \\ 0 & 1 \end{pmatrix} - \begin{pmatrix}1 & 0 \\ 0 & -1 \end{pmatrix}) = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$

$ \mid x_{\uparrow} \rangle \langle x_{\uparrow} \mid = \frac{1}{2}(I + X) = \frac{1}{2}(\begin{pmatrix}1 & 0 \\ 0 & 1 \end{pmatrix} + \begin{pmatrix}0 & 1 \\ 1 & 0 \end{pmatrix}) = \frac{1}{2} \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$

$ \mid x_{\downarrow} \rangle \langle x_{\downarrow} \mid = \frac{1}{2}(I - X) = \frac{1}{2}(\begin{pmatrix}1 & 0 \\ 0 & 1 \end{pmatrix} - \begin{pmatrix}0 & 1 \\ 1 & 0 \end{pmatrix}) = \frac{1}{2} \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}$

Now let's engage in a little chicanery. Suppose we define two operators:

$P_{z_{\uparrow}x_{\uparrow}} = \frac{1}{4}(I + Z + X) = \begin{pmatrix}\frac{1}{2} & \frac{1}{4} \\ \frac{1}{4} & 0 \end{pmatrix}$

and

$P_{z_{\uparrow}x_{\downarrow}} = \frac{1}{4}(I + Z - X) = \begin{pmatrix}\frac{1}{2} & -\frac{1}{4} \\ -\frac{1}{4} & 0 \end{pmatrix}$

Obviously, $P_{z_{\uparrow}} = P_{z_{\uparrow}x_{\uparrow}} + P_{z_{\uparrow}x_{\downarrow}}$, since the $+X$ and $-X$ cancel out, and the two $\frac{1}{4}(I + Z)$ pieces sum to $\frac{1}{2}(I + Z)$.

Similarly, let's define:

$P_{z_{\downarrow}x_{\uparrow}} = \frac{1}{4}(I - Z + X) = \begin{pmatrix}0 & \frac{1}{4} \\ \frac{1}{4} & \frac{1}{2} \end{pmatrix}$

and

$P_{z_{\downarrow}x_{\downarrow}} = \frac{1}{4}(I - Z - X) = \begin{pmatrix}0 & -\frac{1}{4} \\ -\frac{1}{4} & \frac{1}{2} \end{pmatrix}$

Naturally, $P_{z_{\downarrow}} = P_{z_{\downarrow}x_{\uparrow}} + P_{z_{\downarrow}x_{\downarrow}}$.

But also:

$P_{x_{\uparrow}} = P_{z_{\uparrow}x_{\uparrow}} + P_{z_{\downarrow}x_{\uparrow}} = \frac{1}{4}(I + Z + X) + \frac{1}{4}(I - Z + X) = \frac{1}{2}(I + X)$

and

$P_{x_{\downarrow}} = P_{z_{\uparrow}x_{\downarrow}} + P_{z_{\downarrow}x_{\downarrow}} = \frac{1}{4}(I + Z - X) + \frac{1}{4}(I - Z - X) = \frac{1}{2}(I - X)$.

Since the expectation value of a sum of operators is the sum of the expectation values of those operators, we can thus write our probabilities like:

$Pr(z_{\uparrow}) = Pr(z_{\uparrow}x_{\uparrow}) + Pr(z_{\uparrow}x_{\downarrow})$

$Pr(z_{\downarrow}) = Pr(z_{\downarrow}x_{\uparrow}) + Pr(z_{\downarrow}x_{\downarrow})$

$Pr(x_{\uparrow}) = Pr(z_{\uparrow}x_{\uparrow}) + Pr(z_{\downarrow}x_{\uparrow})$

$Pr(x_{\downarrow}) = Pr(z_{\uparrow}x_{\downarrow}) + Pr(z_{\downarrow}x_{\downarrow})$

This looks just like we're dealing with a joint probability distribution! Specifically, a joint probability distribution over two variables, Z and X, each of which can only take two possible values $\uparrow$ and $\downarrow$. The probability for a given value of Z can be found by summing over possible values of X, and the probability of a given value of X can be found by summing over possible values of Z.

And indeed, just like with a joint probability distribution, summing over both variables gives you 1, which follows from:

$P_{z_{\uparrow}x_{\uparrow}} + P_{z_{\uparrow}x_{\downarrow}} + P_{z_{\downarrow}x_{\uparrow}} + P_{z_{\downarrow}x_{\downarrow}}$

$=\begin{pmatrix}\frac{1}{2} & \frac{1}{4} \\ \frac{1}{4} & 0 \end{pmatrix} + \begin{pmatrix}\frac{1}{2} & -\frac{1}{4} \\ -\frac{1}{4} & 0 \end{pmatrix} + \begin{pmatrix}0 & \frac{1}{4} \\ \frac{1}{4} & \frac{1}{2} \end{pmatrix} + \begin{pmatrix}0 & -\frac{1}{4} \\ -\frac{1}{4} & \frac{1}{2} \end{pmatrix}$

$= \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$.

In other words, these four matrices form a resolution of the identity.

So it seems like we've managed to define a kind of joint probability distribution for both X and Z values of a qubit. What's the catch though?

Well:

In [16]:
import qutip as qt

# Projectors onto X and Z eigenstates
Xp = (1/2)*(qt.identity(2) + qt.sigmax())
Xm = (1/2)*(qt.identity(2) - qt.sigmax())
Zp = (1/2)*(qt.identity(2) + qt.sigmaz())
Zm = (1/2)*(qt.identity(2) - qt.sigmaz())

# Our "quasiprobability" operators
PP = (1/4)*(qt.identity(2) + qt.sigmaz() + qt.sigmax())
PM = (1/4)*(qt.identity(2) + qt.sigmaz() - qt.sigmax())
MP = (1/4)*(qt.identity(2) - qt.sigmaz() + qt.sigmax())
MM = (1/4)*(qt.identity(2) - qt.sigmaz() - qt.sigmax())

def qubit_quasiprobabilities(qubit):
    print("P(Z+, X+): %.4f" % qt.expect(PP, qubit))
    print("P(Z+, X-): %.4f" % qt.expect(PM, qubit))
    print("P(Z-, X+): %.4f" % qt.expect(MP, qubit))
    print("P(Z-, X-): %.4f" % qt.expect(MM, qubit))
    
    print("P(Z+): %.4f == P(Z+, X+) + P(Z+, X-): %.4f" 
          % (qt.expect(Zp, qubit),\
             qt.expect(PP, qubit)+qt.expect(PM, qubit)))
    print("P(Z-): %.4f == P(Z-, X+) + P(Z-, X-): %.4f" 
          % (qt.expect(Zm, qubit),\
             qt.expect(MP, qubit)+qt.expect(MM, qubit)))
    print("P(X+): %.4f == P(Z+, X+) + P(Z-, X+): %.4f" 
          % (qt.expect(Xp, qubit),\
             qt.expect(PP, qubit)+qt.expect(MP, qubit)))
    print("P(X-): %.4f == P(Z+, X-) + P(Z-, X-): %.4f" 
          % (qt.expect(Xm, qubit),\
             qt.expect(PM, qubit)+qt.expect(MM, qubit)))
    print("Total probability: %4f" % \
          (qt.expect(PP, qubit)+\
           qt.expect(PM, qubit)+\
           qt.expect(MP, qubit)+\
           qt.expect(MM, qubit)))

qubit_quasiprobabilities(qt.rand_ket(2))
P(Z+, X+): 0.3898
P(Z+, X-): -0.0122
P(Z-, X+): 0.5122
P(Z-, X-): 0.1102
P(Z+): 0.3776 == P(Z+, X+) + P(Z+, X-): 0.3776
P(Z-): 0.6224 == P(Z-, X+) + P(Z-, X-): 0.6224
P(X+): 0.9019 == P(Z+, X+) + P(Z-, X+): 0.9019
P(X-): 0.0981 == P(Z+, X-) + P(Z-, X-): 0.0981
Total probability: 1.0000

Run it a bunch of times, and you'll see the catch is that the "joint probabilities" can be negative!
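
In fact, we don't need randomness to see it. Since $\langle P_{z_{\uparrow}x_{\uparrow}} \rangle = \frac{1}{4}(1 + \langle Z \rangle + \langle X \rangle)$, the most negative the quasiprobability can get is $\frac{1}{4}(1-\sqrt{2}) \approx -0.104$, attained when the spin points opposite the axis halfway between Z and X, so that $\langle Z \rangle = \langle X \rangle = -\frac{1}{\sqrt{2}}$. A minimal sketch of the check, reusing the operators above:

import numpy as np
import qutip as qt

PP = (1/4)*(qt.identity(2) + qt.sigmaz() + qt.sigmax())

# eigenstate of (X+Z)/sqrt(2) with eigenvalue -1, so <X> = <Z> = -1/sqrt(2)
vals, vecs = ((qt.sigmax() + qt.sigmaz())/np.sqrt(2)).eigenstates()
psi = vecs[0]  # eigenvalues come back sorted ascending

print("P(Z+, X+) = %.4f" % qt.expect(PP, psi))  # (1 - sqrt(2))/4 = -0.1036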


Now let's take it to the next level: the Wigner quasiprobability distribution over position and momentum.

Here, we'll only consider the case of pure states, but the extension to mixed states is natural. See the Wikipedia article. Suppose we have some position wavefunction $\psi(x)$. We can define its Wigner quasiprobability distribution:

$W(x,p) = \frac{1}{\pi\hbar} \int_{-\infty}^{\infty} \psi^{*}(x+f)\psi(x-f)e^{\frac{2ipf}{\hbar}}df$

Or if we had some momentum wavefunction $\phi(p)$:

$W(x,p) = \frac{1}{\pi\hbar} \int_{-\infty}^{\infty} \phi^{*}(p+g)\phi(p-g)e^{\frac{-2ixg}{\hbar}}dg$

Then:

$\int_{-\infty}^{\infty}W(x,p)dp = \mid \psi(x) \mid^2$

and

$\int_{-\infty}^{\infty}W(x,p)dx = \mid \phi(p) \mid^2$

In other words, integrating the Wigner quasiprobability distribution over possible momenta gives the probability distribution over position; and integrating the quasiprobability distribution over possible positions gives the probability distribution over momenta.

You can calculate the overlap between states:

$2\pi\hbar\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} W_{\alpha}(x,p)W_{\beta}(x,p) dx dp = \mid\langle\alpha\mid\beta\rangle\mid^2$

And if you take the Wigner transform of an operator:

$\hat{G} \rightarrow G(x,p) = \int_{-\infty}^{\infty}\langle x- \frac{f}{2}\mid\hat{G}\mid x + \frac{f}{2}\rangle e^{\frac{ipf}{\hbar}}df$

You can take expectation values:

$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} W_{\psi}(x,p)G(x,p)dxdp = \langle \psi \mid \hat{G} \mid \psi \rangle$.

And in general, anything you can do with wavefunctions, you can do entirely in terms of quasiprobability distributions. It's a complete alternative formulation of quantum mechanics.
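
Before turning to a library, here's a minimal numpy sketch of the definition itself (assuming $\hbar = 1$ and using the harmonic oscillator ground state $\psi(x) = \pi^{-1/4}e^{-x^{2}/2}$ as the test wavefunction): we discretize the first integral above and check that integrating $W(x,p)$ over $p$ recovers $\mid \psi(x) \mid^2$.

import numpy as np

hbar = 1.0
psi = lambda x: np.pi**(-0.25)*np.exp(-x**2/2)  # oscillator ground state

def W(x, p, f=np.linspace(-10, 10, 2001)):
    # W(x,p) = (1/pi*hbar) * integral of psi*(x+f) psi(x-f) e^{2ipf/hbar} df
    integrand = np.conj(psi(x + f))*psi(x - f)*np.exp(2j*p*f/hbar)
    return np.trapz(integrand, f).real/(np.pi*hbar)

ps = np.linspace(-10, 10, 801)
for x in np.linspace(-2, 2, 5):
    marginal = np.trapz([W(x, p) for p in ps], ps)  # integrate W over momentum
    print("x=%+.1f: int W dp = %.6f vs |psi(x)|^2 = %.6f" % (x, marginal, abs(psi(x))**2))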


Let's visualize some quasiprobability distributions! We'll be using Xanadu's StrawberryFields framework to do our calculations. Below, we create a Fock state (a state with definite number), and visualize its Wigner function. Change Fock(n=1) to some other n to check out the different number states. Indeed, one can see that for $n \geq 1$ the quasiprobability distribution takes negative values.

In [22]:
%matplotlib widget

import numpy as np
import strawberryfields as sf
from strawberryfields.ops import *
import matplotlib.pyplot as plt

prog = sf.Program(1)
with prog.context as q:
    Fock(n=1) | q[0]

eng = sf.LocalEngine(backend="fock", backend_options={"cutoff_dim": 8})
state = eng.run(prog).state

N = 50
q, p = np.linspace(-5, 5, N+1), np.linspace(-5, 5, N+1)
W = state.wigner(0, q, p)

Q, P = np.meshgrid(q, p)
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111, projection="3d")
ax.set_xlabel("q")
ax.set_ylabel("p")
plot = ax.plot_surface(Q, P, W, \
        cmap="RdYlGn", lw=0.5, rstride=1, cstride=1)
plt.show()

Gaussian QM

Okay, now let's talk about Gaussian states and Gaussian operations. Don't worry, we'll have more to do with quasiprobability distributions. Indeed, one interesting fact about (multimode) Gaussian states is that even though they might be quite entangled, their quasiprobability distributions are never negative--and what's more, they can be efficiently classically simulated! (And it's clear from the above that definite number states are not Gaussian--except the vacuum state, that is!)

The essential fact about Gaussian states is that they are fully characterized by their first and second "moments."

Suppose we have $n$ oscillators. Each has a position and momentum operator. Let's form a $2n$-dimensional vector $\vec{V}$ out of these operators:

$\vec{V} = \begin{pmatrix} \hat{Q_{0}} \\ \hat{Q_{1}} \\ \hat{Q_{2}} \\ \vdots \\ \hat{P_{0}} \\ \hat{P_{1}} \\ \hat{P_{2}} \\ \vdots \end{pmatrix}$

We'll index this vector of operators like $\hat{V}_{i}$.

The "first moment" is a vector of the expectation values of these operators, in other words, the expectation values of all the position and momentum operators.

$\vec{m_{1}} = \begin{pmatrix} \langle\hat{V_{0}}\rangle \\ \langle\hat{V_{1}}\rangle \\ \langle\hat{V_{2}}\rangle \\ \langle\hat{V_{3}}\rangle \\ \langle\hat{V_{4}}\rangle \\ \langle\hat{V_{5}}\rangle \\ \vdots \end{pmatrix}$

The "second moment" corresponds to the "covariance matrix."

We iterate over $\hat{V}$ twice, and form a matrix whose elements are given by expectation values: $\langle \hat{V}_{i}\hat{V}_{j} + \hat{V}_{j}\hat{V}_{i}\rangle - 2\langle \hat{V}_{i}\rangle\langle \hat{V}_{j} \rangle$.

$M_{2} = \begin{pmatrix} \langle \hat{V}_{0}\hat{V}_{0} + \hat{V}_{0}\hat{V}_{0}\rangle - 2\langle \hat{V}_{0}\rangle\langle \hat{V}_{0} \rangle & \langle \hat{V}_{0}\hat{V}_{1} + \hat{V}_{1}\hat{V}_{0}\rangle - 2\langle \hat{V}_{0}\rangle\langle \hat{V}_{1} \rangle & \langle \hat{V}_{0}\hat{V}_{2} + \hat{V}_{2}\hat{V}_{0}\rangle - 2\langle \hat{V}_{0}\rangle\langle \hat{V}_{2} \rangle & \dots \\ \langle \hat{V}_{1}\hat{V}_{0} + \hat{V}_{0}\hat{V}_{1}\rangle - 2\langle \hat{V}_{1}\rangle\langle \hat{V}_{0} \rangle & \langle \hat{V}_{1}\hat{V}_{1} + \hat{V}_{1}\hat{V}_{1}\rangle - 2\langle \hat{V}_{1}\rangle\langle \hat{V}_{1} \rangle & \langle \hat{V}_{1}\hat{V}_{2} + \hat{V}_{2}\hat{V}_{1}\rangle - 2\langle \hat{V}_{1}\rangle\langle \hat{V}_{2} \rangle & \dots \\ \langle \hat{V}_{2}\hat{V}_{0} + \hat{V}_{0}\hat{V}_{2}\rangle - 2\langle \hat{V}_{2}\rangle\langle \hat{V}_{0} \rangle & \langle \hat{V}_{2}\hat{V}_{1} + \hat{V}_{1}\hat{V}_{2}\rangle - 2\langle \hat{V}_{2}\rangle\langle \hat{V}_{1} \rangle & \langle \hat{V}_{2}\hat{V}_{2} + \hat{V}_{2}\hat{V}_{2}\rangle - 2\langle \hat{V}_{2}\rangle\langle \hat{V}_{2} \rangle & \dots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$

A Gaussian state is fully characterized by these first two moments. In contrast, general states would require paying attention to the higher moments.
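
For example, for a single mode in the vacuum state, with the conventions $\hat{Q} = \frac{1}{\sqrt{2}}(\hat{a} + \hat{a}^{\dagger})$ and $\hat{P} = -\frac{i}{\sqrt{2}}(\hat{a} - \hat{a}^{\dagger})$ (which, I believe, is what qutip's position and momentum operators use), the first moments vanish and the covariance matrix is just the identity. A minimal qutip sketch, with an arbitrary Fock cutoff:

import numpy as np
import qutip as qt

cutoff = 10  # Fock space truncation: arbitrary, and harmless for the vacuum
V = [qt.position(cutoff), qt.momentum(cutoff)]
vac = qt.basis(cutoff, 0)

m1 = np.array([qt.expect(v, vac) for v in V])
M2 = np.array([[qt.expect(V[i]*V[j] + V[j]*V[i], vac) -\
                    2*qt.expect(V[i], vac)*qt.expect(V[j], vac)
                        for j in range(2)] for i in range(2)])
print(m1)  # [0. 0.]
print(M2)  # [[1. 0.]
           #  [0. 1.]]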

Moreover, the partial state of a given oscillator can be extracted from the vector of means and the covariance matrix by just picking out the elements that involve that oscillator's position and momentum:

So for the first oscillator, we'd have $\begin{pmatrix} \hat{Q_{0}}\\ \hat{P_{0}} \end{pmatrix}$ and $\begin{pmatrix} \langle \hat{Q}_{0}\hat{Q}_{0} + \hat{Q}_{0}\hat{Q}_{0}\rangle - 2\langle \hat{Q}_{0}\rangle\langle \hat{Q}_{0} \rangle & \langle \hat{Q}_{0}\hat{P}_{0} + \hat{P}_{0}\hat{Q}_{0}\rangle - 2\langle \hat{Q}_{0}\rangle\langle \hat{P}_{0} \rangle \\ \langle \hat{P}_{0}\hat{Q}_{0} + \hat{Q}_{0}\hat{P}_{0}\rangle - 2\langle \hat{P}_{0}\rangle\langle \hat{Q}_{0} \rangle & \langle \hat{P}_{0}\hat{P}_{0} + \hat{P}_{0}\hat{P}_{0}\rangle - 2\langle \hat{P}_{0}\rangle\langle \hat{P}_{0} \rangle \end{pmatrix}$, and so on.

Given the first and second moments, we can easily construct the Wigner quasiprobability distribution for $n$ oscillators.

Define a vector of variables $\vec{v}=\begin{pmatrix}q_{0} \\ q_{1} \\ \vdots \\ p_{0} \\ p_{1} \\ \vdots \end{pmatrix}$, then:

$W(q_{0}, q_{1}, \dots, p_{0}, p_{1}, \dots) = \frac{1}{(2\pi)^{n}\sqrt{\det{M_{2}}}} e^{-\frac{1}{2}(\vec{v}-\vec{m_{1}})^{T} M_{2}^{-1} (\vec{v}-\vec{m_{1}})}$, which is just ripe for some Gaussian integration.


Gaussian operations are operations that map Gaussian states to Gaussian states. Gaussian states include: the vacuum state, position and momentum eigenstates (as infinitely squeezed limiting cases), coherent states (aka displaced vacuum states), squeezed states, and displaced squeezed states.

Corresponding to these, for a single oscillator, we have displacement operators, squeezing operators, and evolution by the harmonic oscillator Hamiltonian itself. For multimode states, we can also add in our favorite two-mode squeezing operator, and the "beamsplitter gate" too for that matter. In general, if a unitary is generated by a Hamiltonian quadratic in the creation and annihilation operators, then it's Gaussian.

As operators acting on wavefunctions, these can be written:

Displacement: $D(z) = e^{z\hat{a}^{\dagger} - z^{*}\hat{a}}$

Squeezing: $S(z) = e^{\frac{1}{2}(z^{*}\hat{a}^{2} - z\hat{a}^{\dagger 2})}$

Rotation/Oscillator Hamiltonian: $R(\theta) = e^{i\theta \hat{a}^{\dagger}\hat{a}}$

Two-Mode Squeezing: $S_{i,j}(z) = e^{\frac{1}{2}(z^{*}\hat{a}\hat{b} - z\hat{a}^{\dagger}\hat{b}^{\dagger})}$

Beamsplitter: $B_{i,j}(\theta, \phi) = e^{\theta(e^{i\phi}\hat{a_{i}}^{\dagger}\hat{a_{j}} - e^{-i\phi}\hat{a_{i}}\hat{a_{j}}^{\dagger})}$
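
As a sanity check, the single-mode definitions can be verified directly in a truncated Fock space. The sketch below (with an arbitrary cutoff and parameter) builds $D(z)$ and $S(z)$ by exponentiating the generators above and compares them against qutip's built-in operators, which, as far as I know, use exactly these conventions:

import numpy as np
import qutip as qt

N, z = 20, 0.3 + 0.2j  # truncation and test parameter: arbitrary choices
a = qt.destroy(N)

D = (z*a.dag() - np.conj(z)*a).expm()                  # D(z) as defined above
S = (0.5*(np.conj(z)*a*a - z*a.dag()*a.dag())).expm()  # S(z) as defined above

print(np.allclose(D.full(), qt.displace(N, z).full()))  # True
print(np.allclose(S.full(), qt.squeeze(N, z).full()))   # True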

Before we go on, however, let's take a quick tour of single mode Gaussian states.

Below, you'll see a plot of the probability distributions for position and momentum for a single oscillator mode, which starts off in the vacuum state. Using the sliders below, you can parameterize the position and momentum displacements, the magnitude and phase of the squeezing parameter, and the angle of rotation (aka how long to apply the oscillator Hamiltonian for).

As you can see, we're using StrawberryFields' Gaussian backend, which provides useful functions for evaluating the position and momentum distributions.

In [25]:
%matplotlib widget

import numpy as np
import strawberryfields as sf
from strawberryfields.ops import *

import matplotlib.pyplot as plt
from matplotlib.widgets import Slider, Button, RadioButtons

prog = sf.Program(1)
with prog.context as q:
    Vac | q[0]
eng = sf.Engine('gaussian')
state = eng.run(prog).state

Q, P = np.linspace(-5, 5, 100), np.linspace(-5, 5, 100)
Qvals = state.x_quad_values(0, Q, P)
Pvals = state.p_quad_values(0, Q, P)

fig = plt.figure(figsize=(8,6))

q_axis = fig.add_subplot(121)
q_axis.set_xlabel('q', fontsize=14)
q_axis.set_ylabel('Pr(q)', fontsize=14)
q_plot, = q_axis.plot(Q, Qvals)

p_axis = fig.add_subplot(122)
p_axis.set_xlabel('p', fontsize=14)
p_axis.set_ylabel('Pr(p)', fontsize=14)
p_plot, = p_axis.plot(P, Pvals)

fig.tight_layout()
fig.subplots_adjust(bottom=0.33) 

qdisp_axis = plt.axes([0.25, 0.0, 0.65, 0.03], facecolor="lightgoldenrodyellow")
qdisp_slider = Slider(qdisp_axis, 'Q Displacement (real)', -10, 10, valinit=0)

pdisp_axis = plt.axes([0.25, 0.05, 0.65, 0.03], facecolor="lightgoldenrodyellow")
pdisp_slider = Slider(pdisp_axis, 'P Displacement (imag)', -10, 10, valinit=0)

rsqueeze_axis = plt.axes([0.25, 0.1, 0.65, 0.03], facecolor="lightgoldenrodyellow")
rsqueeze_slider = Slider(rsqueeze_axis, 'Squeeze (r)', -10, 10, valinit=0)

thsqueeze_axis = plt.axes([0.25, 0.15, 0.65, 0.03], facecolor="lightgoldenrodyellow")
thsqueeze_slider = Slider(thsqueeze_axis, 'Squeeze (theta)', 0, 2*np.pi, valinit=0)

rot_axis = plt.axes([0.25, 0.2, 0.65, 0.03], facecolor="lightgoldenrodyellow")
rot_slider = Slider(rot_axis, 'Rotation (theta)', 0, 2*np.pi, valinit=0)

def update(val):
    global eng, Q, P
    global fig, q_plot, p_plot
    global qdisp_slider, pdisp_slider, rsqueeze_slider, thsqueeze_slider, rot_slider
    eng.reset()
    prog = sf.Program(1)
    with prog.context as q:
        Sgate(rsqueeze_slider.val, thsqueeze_slider.val) | q[0]
        dz = qdisp_slider.val + 1j*pdisp_slider.val
        Dgate(np.abs(dz), np.angle(dz)) | q[0]
        Rgate(rot_slider.val) | q[0]
    state = eng.run(prog).state

    Qvals = state.x_quad_values(0, Q, P)
    Pvals = state.p_quad_values(0, Q, P)

    q_plot.set_ydata(Qvals)
    p_plot.set_ydata(Pvals)
    fig.canvas.draw_idle()

qdisp_slider.on_changed(update)
pdisp_slider.on_changed(update)
rsqueeze_slider.on_changed(update)
thsqueeze_slider.on_changed(update)
rot_slider.on_changed(update)

plt.show()

So one profound fact is that Gaussian operations can be represented as symplectic transformations on the first and second moments of the state.

Recall that a symplectic transformation acts on an even-dimensional space and preserves the special commutation relationship between positions and momenta. To wit, it preserves the symplectic form $\Omega$, which in this case is a $2n \times 2n$ matrix:

$\Omega = \begin{pmatrix} 0_{n} & I_{n} \\ -I_{n} & 0_{n} \end{pmatrix}$,

where $0_{n}$ refers to an $n \times n$ matrix of 0's, and $I_{n}$ refers to the $n \times n$ identity.

When we say a symplectic matrix preserves the symplectic form, we mean that a symplectic matrix $S$ satisfies: $S\Omega S^{T} = \Omega$.

A general (finite, linear) symplectic transformation consists of a symplectic matrix $S$ and a displacement vector $\vec{d}$, and it transforms the first and second moments of a state like:

$\vec{m}_{1} \rightarrow S\vec{m}_{1} + \vec{d} $

$M_{2} \rightarrow SM_{2}S^{T}$

For a single mode, the symplectic transformation corresponding to the displacement operator $D(z)$ is just:

$\vec{m}_{1} \rightarrow \vec{m}_{1} + \begin{pmatrix}\Re{(z)} \\ \Im{(z)} \end{pmatrix} $

The symplectic matrix corresponding to the single mode squeezing operator $S(re^{i\theta})$ is:

$SQ = \begin{pmatrix}\cosh(r) - \cos(\theta)\sinh(r) & -\sin(\theta)\sinh(r) \\ -\sin(\theta)\sinh(r) & \cosh(r) + \cos(\theta)\sinh(r) \end{pmatrix}$

When $\theta = 0$, this is just:

$SQ = \begin{pmatrix} e^{-r} & 0 \\ 0 & e^{r} \end{pmatrix}$

So $M_{2}$ transforms like: $M_{2} \rightarrow (SQ)M_{2}(SQ)^{T}$

The oscillator Hamiltonian (phase rotation) $R(\theta)$ can be represented with the symplectic matrix:

$PR = \begin{pmatrix} \cos{\theta} & \sin{\theta} \\ -\sin{\theta} & \cos{\theta}\end{pmatrix} $

And so $M_{2}$ transforms like:

$M_{2} \rightarrow (PR)M_{2}(PR)^{T}$

Finally, for the two-mode squeezing operation, we have the symplectic matrix:

$SQ_{i,j} = \begin{pmatrix} \cosh(r) & 0 & \sinh(r) & 0 \\ 0 & \cosh(r) & 0 & -\sinh(r) \\ \sinh(r) & 0 & \cosh(r) & 0 \\ 0 & -\sinh(r) & 0 & \cosh(r) \end{pmatrix}$

Indeed, if we want to act on one or more modes out of many, we simply take a $2n \times 2n$ identity matrix and overwrite the relevant entries, corresponding to the given modes' q's and p's, with the entries of our operator.
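
As a quick sanity check, here's a minimal numpy sketch, with arbitrary parameters, verifying that the squeezing, rotation, and two-mode squeezing matrices above all satisfy $S\Omega S^{T} = \Omega$:

import numpy as np

# the symplectic form for n modes, in (q0, q1, ..., p0, p1, ...) ordering
def omega(n):
    return np.block([[np.zeros((n,n)), np.eye(n)],
                     [-np.eye(n), np.zeros((n,n))]])

def is_symplectic(S):
    n = S.shape[0]//2
    return np.allclose(S @ omega(n) @ S.T, omega(n))

r, theta = 0.7, 1.2  # arbitrary test parameters
SQ = np.array([[np.cosh(r) - np.cos(theta)*np.sinh(r), -np.sin(theta)*np.sinh(r)],
               [-np.sin(theta)*np.sinh(r), np.cosh(r) + np.cos(theta)*np.sinh(r)]])
PR = np.array([[np.cos(theta), np.sin(theta)],
               [-np.sin(theta), np.cos(theta)]])
SQ2 = np.array([[np.cosh(r), 0, np.sinh(r), 0],
                [0, np.cosh(r), 0, -np.sinh(r)],
                [np.sinh(r), 0, np.cosh(r), 0],
                [0, -np.sinh(r), 0, np.cosh(r)]])
print(is_symplectic(SQ), is_symplectic(PR), is_symplectic(SQ2))  # True True True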


Now let's go a little deeper. Just for fun, we'll be doing our calculations by hand instead of with StrawberryFields. Below, you'll be able to explore a multimode Gaussian state, visualizing a given mode's Wigner quasiprobability distribution, as well as its position and momentum distributions.

In [31]:
%matplotlib widget

import numpy as np
import scipy as sc
import qutip as qt
import sympy as sy
import matplotlib.pyplot as plt
from matplotlib import transforms

#####################################################################################

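# upgrades a single-mode (Hilbert space) operator O to act on the i'th of n tensor factors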
def tensor_upgrade(O, i, n):
    return qt.tensor(*[O if i==j else qt.identity(O.shape[0]) for j in range(n)])

# embeds a single-mode 2x2 symplectic matrix O into the 2n x 2n identity, acting on the i'th of n modes
def direct_upgrade(O, i, n):
    I = np.eye(2*n)
    cols = np.zeros((2,2), dtype=np.intp)
    cols[0,:], cols[1,:] = i, i+n
    rows = cols.T
    I[rows,cols] = O[:]
    return I

# extracts the partial state of the i'th mode
def partial(i, m, M):
    n = int(len(m)/2)
    return np.array([m[i], m[i+n]]), np.array([[M[i,i], M[i, i+n]],\
                                               [M[i+n, i], M[i+n,i+n]]])

#####################################################################################

def first_moments(state, V):
    return np.array([qt.expect(v, state) for v in V])

def second_moments(state, V):
    return np.array([[qt.expect(V[i]*V[j] + V[j]*V[i], state) -\
                          2*qt.expect(V[i], state)*qt.expect(V[j], state)
                                for j in range(len(V))]
                                    for i in range(len(V))])

#####################################################################################

def displacement(z, i=0, n=1):
    d = np.zeros(2*n)
    d[i], d[i+n] = z.real, z.imag
    return d, np.eye(2*n)

def squeeze(r, i=0, n=1):
    d = np.zeros(2*n)
    D = np.array([[np.exp(-r), 0],\
                  [0, np.exp(r)]])
    return d, direct_upgrade(D, i, n)

def rotation(theta, i=0, n=1):
    d = np.zeros(2*n)
    R = np.array([[np.cos(theta), np.sin(theta)],\
                  [-np.sin(theta), np.cos(theta)]])
    return d, direct_upgrade(R, i, n)

# two mode squeezing operator on modes i and j
def tm_squeeze(r, i, j, n):
    S = np.array([[np.cosh(r), 0, np.sinh(r), 0],\
                  [0, np.cosh(r), 0, -np.sinh(r)],\
                  [np.sinh(r), 0, np.cosh(r), 0],\
                  [0, -np.sinh(r), 0, np.cosh(r)]])
    I = np.eye(2*n)
    cols = np.zeros((4,4), dtype=np.intp)
    cols[0,:], cols[1,:], cols[2,:], cols[3,:]  = i, j, i+n, j+n
    rows = cols.T
    I[rows,cols] = S[:]
    return np.zeros(2*n), I

def apply_symplectic(d, S, m, M):
    return S @ m + d, S @ M @ S.T

#####################################################################################

# returns a dictionary of variables and a sympy expression for the wigner function
def wigner(m, M):
    n = int(len(m)/2)
    symbols = sy.symbols(" ".join(["q%d" % i for i in range(n)]+\
                                  ["p%d" % i for i in range(n)]))
    v, m, M = sy.Matrix(symbols), sy.Matrix(m), sy.Matrix(M)
    return {"q": symbols[0:n], "p": symbols[n:]},\
           ((-1/2)*(v-m).T*M.inv()*(v-m)).exp()/(sy.sqrt(M.det())*(2*sy.pi)**(n))

#####################################################################################

n_modes, cutoff = 2, 3
 
# technically unnecessary, but why not start with an actual state and get its moments
V = [tensor_upgrade(qt.position(cutoff), i, n_modes) for i in range(n_modes)] +\
    [tensor_upgrade(qt.momentum(cutoff), i, n_modes) for i in range(n_modes)]
                         
vac = qt.basis(cutoff**n_modes)
vac.dims = [[cutoff]*n_modes, [1]*n_modes]
                         
#####################################################################################

state = vac.copy()
m, M = first_moments(state, V), second_moments(state, V)

# apply some operators!
m, M = apply_symplectic(*displacement(np.random.random()+1j*np.random.random(), i=0, n=n_modes), m, M)
m, M = apply_symplectic(*squeeze(np.random.random(), i=0, n=n_modes), m, M)
m, M = apply_symplectic(*rotation(np.random.random(), i=0, n=n_modes), m, M)
m, M = apply_symplectic(*tm_squeeze(np.random.random(), 0, 1, n_modes), m, M)

print("first moments:\n%s" % m)
print("second moments:\n%s" % M)

#####################################################################################

def plot_mode(m, M, i=0, n=1, N=50):
    m_partial, M_partial = partial(i, m, M) # get the partial state of the i'th mode
    v, W = wigner(m_partial, M_partial) # and its wigner function
                         
    xdist = W.integrate((v["p"][0], -sy.oo, sy.oo)) # integrate over momentum to get position
    pdist = W.integrate((v["q"][0], -sy.oo, sy.oo)) # integrate over position to get momentum
    
    print("norm check: %s" % W.integrate((v["p"][0], -sy.oo, sy.oo), (v["q"][0], -sy.oo, sy.oo)))
    
    # turn our sympy expressions into functions
    Wf, xdistf, pdistf  = sy.lambdify([v["q"][0], v["p"][0]], W),\
                          sy.lambdify(v["q"][0], xdist),\
                          sy.lambdify(v["p"][0], pdist)
    
    q, p = np.linspace(-5, 5, N+1), np.linspace(-5, 5, N+1)
    Q, P = np.meshgrid(q, p)
    
    fig = plt.figure(figsize=(8,8))
    fig.suptitle('mode %d' % i, fontsize=16)

    wigner_axis = fig.add_subplot(221, projection="3d")
    wigner_plot = wigner_axis.plot_surface(Q, P, np.array([[Wf(x_, p_)[0][0] for p_ in p] for x_ in q]),\
                                                    cmap="RdYlGn", lw=0.5, rstride=1, cstride=1)
    
    q_axis = fig.add_subplot(223)
    q_axis.set_xlabel('q')
    q_axis.yaxis.set_label_position("right")
    q_axis.yaxis.set_ticks_position("right")
    q_axis.set_ylabel('Pr(q)')
    q_axis.set_box_aspect(1/2)
    q_plot = q_axis.plot(q, np.array([xdistf(x_)[0][0] for x_ in q]))
    
    p_axis = fig.add_subplot(222)
    p_axis.set_xlabel('Pr(p)')
    p_axis.set_ylabel('p')
    p_axis.yaxis.set_label_position("right")
    p_axis.set_box_aspect(2)
    p_plot = p_axis.plot(np.array([pdistf(p_)[0][0] for p_ in p]), p)
    
    plt.subplots_adjust(left=0, bottom=0, right=1, top=1, wspace=0, hspace=-0.2)

for i in range(n_modes):
    plot_mode(m, M, i=i, n=n_modes)
    
plt.show()
first moments:
[0.6462062  0.         0.85829548 0.        ]
second moments:
[[ 0.91964926  0.          0.87715245  0.        ]
 [ 0.          1.48348921  0.         -1.09578293]
 [ 0.87715245  0.          1.92399047  0.        ]
 [ 0.         -1.09578293  0.          1.48348921]]
norm check: Matrix([[1.00000000000000]])
norm check: Matrix([[1.00000000000000]])

So you might wonder, in general, how do we find these symplectic transformations?

A Gaussian unitary can be written $U = e^{iH}$, where $H$ is quadratic in creation and annihilation operators.

If we define $\xi = \begin{pmatrix}\hat{a}_{0} \\ \hat{a}_{1} \\ \vdots \\ \hat{a}_{0}^{\dagger} \\ \hat{a}_{1}^{\dagger} \\ \vdots \end{pmatrix}$ to be a vector of $n$ annihilation operators and $n$ creation operators (analogously to our vector of position and momentum operators), then $H$ can be written:

$$H = \frac{1}{2}\xi^{\dagger}\textbf{H}\xi + \xi^{\dagger}\textbf{h},$$

where $\textbf{H} = \begin{pmatrix} A & B \\ B^{*} & A^{*} \end{pmatrix}$ is a $2n \times 2n$ Hermitian matrix and $\textbf{h} = \begin{pmatrix} h \\ h^{*} \end{pmatrix}$ is a $2n$-dimensional complex column vector. Here, $A$ is an $n \times n$ Hermitian matrix, and $B$ is symmetric.

In [32]:
import numpy as np
import qutip as qt
import scipy as sc

import strawberryfields as sf
from strawberryfields.ops import *

def make_Hh(A, B=None, h=None):
    B = np.zeros(A.shape) if B is None else B
    h = np.zeros(A.shape[0]) if h is None else h
    return np.block([[A, B],\
                     [B.conj(), A.conj()]]),\
           np.concatenate([h, h.conj()])

def rand_Hh(n):
    A = qt.rand_herm(n).full()
    B = np.random.randn(n, n) + 1j*np.random.randn(n, n)
    B = B @ B.T + B.T @ B
    h = np.random.randn(n) + 1j*np.random.randn(n)
    return make_Hh(A, B, h)

n = 2
H, h = rand_Hh(n)

It turns out that evolving all the creation and annihilation operators by a Gaussian unitary is equivalent to an affine transformation:

$$e^{iH} \xi e^{-iH} = \textbf{S}\xi + \textbf{s}$$

where $\textbf{S}$ is a complex symplectic matrix: $\textbf{S} = e^{-i\Omega_{c}\textbf{H}}$ and $\textbf{s} = (\textbf{S}-I_{2n})\textbf{H}^{-1}\textbf{h}$.

Here the complex symplectic form is $\Omega_{c} = \begin{pmatrix}I_{n} & 0 \\ 0 & -I_{n} \end{pmatrix}$, and if $\textbf{H}$ has no inverse, one should use the pseudoinverse instead to calculate $\textbf{s}$.

In [34]:
def omega_c(n):
    return sc.linalg.block_diag(np.eye(n), -np.eye(n))

def make_Ss(H, h, expm=True, theta=2): # the theta is for later 
    n = int(len(h)/2)
    omega = omega_c(n)
    S = sc.linalg.expm(-1j*(theta/2)*omega@H) if expm else H
    try:
        s = ((S - np.eye(2*n)) @ np.linalg.inv(H)) @ h
    except np.linalg.LinAlgError:
        # fall back to the pseudoinverse when H is singular
        s = ((S - np.eye(2*n)) @ np.linalg.pinv(H)) @ h
    return S, s

def test_c(S):
    n = int(len(S)/2)
    WC = omega_c(n)
    return np.allclose(S @ WC @ S.conj().T, WC)

n = 2
H, h = rand_Hh(n)
S, s = make_Ss(H, h)
print("complex symplectic? %s" % test_c(S))
complex symplectic? True

So we can use a complex symplectic matrix to perform a Gaussian unitary transformation on a vector of creation and annihilation operators.

At the same time, there is an equivalent real symplectic matrix $\textbf{R}$ and real displacement vector $\textbf{r}$ that implements the same transformation on the vector of position and momentum operators $\vec{V}$.

$$\vec{V} \rightarrow \textbf{R}\vec{V} + \textbf{r}$$

We can easily convert between $\xi$, the vector of annihilation and creation operators, and $\vec{V}$, the vector of positions and momenta, via:

$$\vec{V} = L\xi$$

Where $L = \frac{1}{\sqrt{2}}\begin{pmatrix} I_{n} & I_{n} \\ -iI_{n} & iI_{n} \end{pmatrix}$.

This comes from the definition of position and momentum operators in terms of creation and annihilation operators, e.g.:

$$\hat{Q} = \frac{1}{\sqrt{2}}(\hat{a} + \hat{a}^{\dagger})$$

$$\hat{P} = -\frac{i}{\sqrt{2}}(\hat{a} - \hat{a}^{\dagger})$$

Therefore, we can turn our complex symplectic transformation into a real symplectic transformation via:

$$\textbf{R} = L\textbf{S}L^{\dagger}$$

$$\textbf{r} = L\textbf{s}$$

Incidentally, if we write $\textbf{S}$ as $\begin{pmatrix} E & F \\ F^{*} & E^{*} \end{pmatrix}$, and $\textbf{s}$ as $\begin{pmatrix} s \\ s^{*} \end{pmatrix}$ then equivalently:

$$\textbf{R} = \begin{pmatrix}\Re(E+F) & -\Im(E-F) \\ \Im(E+F) & \Re(E-F) \end{pmatrix}$$

and $$\textbf{r} = \sqrt{2}\begin{pmatrix} \Re(s) \\ \Im(s) \end{pmatrix}$$

And if it wasn't obvious: if we've represented our Gaussian state in terms of its first and second moments, then the real symplectic transformations act on those guys!

In [35]:
def omega_r(n):
    return np.block([[np.zeros((n,n)), np.eye(n)],\
                     [-np.eye(n), np.zeros((n,n))]])
    
def make_Rr(S, s):
    n = int(len(s)/2)
    L = (1/np.sqrt(2))*np.block([[np.eye(n), np.eye(n)],\
                             [-1j*np.eye(n), 1j*np.eye(n)]])
    return (L @ S @ L.conj().T).real, (L @ s).real

def make_Rr2(S, s):
    n = int(len(s)/2)
    E, F = S[0:n, 0:n], S[0:n, n:]
    return np.block([[(E+F).real, -(E-F).imag],\
                     [(E+F).imag, (E-F).real]]),\
           np.sqrt(2)*np.concatenate([s[0:n].real,\
                                      s[0:n].imag])
def test_r(R):
    n = int(len(R)/2)
    WR = omega_r(n)
    return np.allclose(R @ WR @ R.T, WR)

n = 2
H, h = rand_Hh(n)
S, s = make_Ss(H, h)
R, r = make_Rr(S, s)
R2, r2 = make_Rr2(S, s) # let's calculate R and r both ways, and check we get the same!
print("equivalent? %s" % (np.allclose(R, R2) and np.allclose(r, r2)))
print("real symplectic? %s" % test_r(R))
equivalent? True
real symplectic? True

Now let's suppose we're in a second quantization context. Our first quantized system has $n$ basis states, and so we introduce $n$ bosonic oscillators to keep track of the number of particles in those states. Recall that we can upgrade a first quantized operator $O$ to a second quantized operator $\textbf{O}$ via:

$$\textbf{O} = \begin{pmatrix} \hat{a_{0}}^{\dagger} & \hat{a_{1}}^{\dagger} & \dots \end{pmatrix} O \begin{pmatrix} \hat{a_{0}} \\ \hat{a_{1}} \\ \vdots \end{pmatrix}$$

Clearly, $\textbf{O}$ is going to be no more than quadratic in the creation and annihilation operators! Therefore we can represent it as a real symplectic transformation. In fact, $O$ just becomes $A$ in $\textbf{H} = \begin{pmatrix} A & B \\ B^{*} & A^{*} \end{pmatrix}$, while $B$ is 0, and we proceed as above, to complex symplectic and finally real symplectic.

Indeed, thinking about our second quantization picture of spin, we could represent the Pauli operators as real symplectic matrices!

In [36]:
def second_quantize(O, expm=True, theta=1):
    n = O.shape[0]
    Op = O.full() if isinstance(O, qt.Qobj) else O
    H, h = make_Hh(Op, np.zeros((n,n)), np.zeros(n))
    S, s = make_Ss(H, h, expm=expm, theta=theta)
    R, r = make_Rr(S, s)
    return R, r

def make_XYZ():
    return {"X": second_quantize(qt.sigmax(), expm=False)[0],\
            "Y": second_quantize(qt.sigmay(), expm=False)[0],\
            "Z": second_quantize(qt.sigmaz(), expm=False)[0]} # note we don't exponentiate them

XYZ = make_XYZ()
for o, O in XYZ.items():
    print("%s:\n%s" % (o, O))
X:
[[0. 1. 0. 0.]
 [1. 0. 0. 0.]
 [0. 0. 0. 1.]
 [0. 0. 1. 0.]]
Y:
[[ 0.  0.  0.  1.]
 [ 0.  0. -1.  0.]
 [ 0. -1.  0.  0.]
 [ 1.  0.  0.  0.]]
Z:
[[ 1.  0.  0.  0.]
 [ 0. -1.  0.  0.]
 [ 0.  0.  1.  0.]
 [ 0.  0.  0. -1.]]

I think you can see where this is going. We're going to represent a spin as a Gaussian state of two oscillators.

Let's define:

In [37]:
def state_xyz(state, XYZ=None):
    if XYZ is None:
        XYZ = make_XYZ()
    a = np.array([state.poly_quad_expectation(XYZ["X"])[0],\
                  state.poly_quad_expectation(XYZ["Y"])[0],\
                  state.poly_quad_expectation(XYZ["Z"])[0]]).real
    return a/np.linalg.norm(a) if np.linalg.norm(a) != 0 else a

Here state is a StrawberryFields state. StrawberryFields provides a nice function, poly_quad_expectation, that calculates expectation values for us in this context, where our symplectic Paulis are interpreted as second-order polynomials in the position and momentum operators.

In [38]:
import math

def xyz_sph(x,y,z):
    r = np.sqrt(x**2 + y**2 + z**2)
    phi = math.atan2(y, x)
    theta = math.acos(z/r)
    return r, phi, theta

def xyz_gaussianTransforms(xyz):
    r, phi, theta = xyz_sph(*xyz)
    IR1 = GaussianTransform(second_quantize(qt.sigmax(), expm=True, theta=-theta)[0])
    IR2 = GaussianTransform(second_quantize(qt.sigmaz(), expm=True, theta=phi-np.pi/2)[0])
    return IR1, IR2

The first function takes us from cartesian to spherical coordinates, and the second returns two Gaussian transformations which rotate a qubit from the $\uparrow$ state to the desired state specified in cartesian coordinates.

Now we have everything in place. We're going to start off in the vacuum state, and then squeeze the first oscillator to put the spin in the $\uparrow$ state. Then we'll perform a rotation on the qubit to prepare the desired initial state (specified in cartesian coordinates), and then visualize the Wigner functions of the two oscillator modes along with the Bloch sphere of the qubit as it's rotating, frame by frame. We can even save it all as a movie!

In [40]:
%matplotlib widget

import matplotlib.pyplot as plt
import matplotlib.animation as animation

n = 2 # number of modes
N = 50 # mesh size
fps = 24 # frames per sec
frn = 60 # frame number

E = qt.sigmay() # which rotation
initial_state = None
#initial_state = np.random.randn(3) #np.array([1,0,0])
#initial_state = initial_state/np.linalg.norm(initial_state)

eng = sf.Engine('gaussian')

thetas = np.linspace(0, 2*np.pi, frn)
Rs = [GaussianTransform(\
        second_quantize(E, expm=True, theta=theta)[0])\
            for theta in thetas] # Generate a Gaussian transformation for each step in the movie

Q = np.linspace(-5, 5, N+1)
P = np.linspace(-5, 5, N+1)
Q_, P_ = np.meshgrid(Q, P)

# These will hold the Wigner data for each step in the movie, for each oscillator
zarray0 = np.zeros((N+1, N+1, frn))
zarray1 = np.zeros((N+1, N+1, frn))

# This will hold the XYZ data for the qubit, for each step in the movie
xyzs = np.zeros((3, frn))
XYZ = make_XYZ()

if initial_state is not None:
    IR1, IR2 = xyz_gaussianTransforms(initial_state)

for i, theta in enumerate(thetas): # for each frame
    prog = sf.Program(2)
    with prog.context as q:
        Vac | q[0] # start off in the vacuum
        Vac | q[1]
        Sgate(1) | q[0] # squeeze the first mode to get Z+
        if type(initial_state) != type(None):
            IR1 | (q[0], q[1]) # apply the GaussianTransforms
            IR2 | (q[0], q[1]) # to prepare the initial state
        Rs[i] | (q[0], q[1]) # perform the rotation for this frame
    state = eng.run(prog).state # get the state

    Z0 = state.wigner(0, Q, P)
    Z1 = state.wigner(1, Q, P)
    
    zarray0[:,:,i] = Z0 # save the wigner data
    zarray1[:,:,i] = Z1
    xyzs[:,i] = state_xyz(state, XYZ=XYZ) # save the xyz data
    
    eng.reset() # start over

def sphere(): # makes a mesh of a sphere
    u, v = np.mgrid[0:2*np.pi:20j, 0:np.pi:10j]
    x = np.cos(u)*np.sin(v)
    y = np.sin(u)*np.sin(v)
    z = np.cos(v)
    return x, y, z

def update_plot(frame_number, zarray0, zarray1, xyzs, plot):
    plot[0].remove()
    plot[0] = ax0.plot_surface(Q_, P_, zarray0[:,:,frame_number],\
                            cmap="RdYlGn", lw=0.5, rstride=1, cstride=1)
    plot[1].remove()
    plot[1] = ax1.plot_surface(Q_, P_, zarray1[:,:,frame_number],\
                            cmap="RdYlGn", lw=0.5, rstride=1, cstride=1)
    plot[2].remove()
    plot[2] = ax2.plot_surface(*sphere(), color="g", alpha=0.1)
    plot[3].remove()
    plot[3] = ax2.quiver(0,0,0,*xyzs[:,frame_number], arrow_length_ratio=0.3)

fig = plt.figure(figsize=(8,4))
ax0 = fig.add_subplot(131, projection="3d")
ax0.set_zlim(0, 0.18)
ax0.set_xlabel("q")
ax0.set_ylabel("p")
ax1 = fig.add_subplot(132, projection="3d")
ax1.set_zlim(0, 0.18)
ax1.set_xlabel("q")
ax1.set_ylabel("p")
ax2 = fig.add_subplot(133, projection="3d")
ax2.set_xlabel("x")
ax2.set_ylabel("y")

plot = [ax0.plot_surface(Q_, P_, zarray0[:,:,0],\
                            cmap="RdYlGn", lw=0.5, rstride=1, cstride=1),\
        ax1.plot_surface(Q_, P_, zarray1[:,:,0],\
                            cmap="RdYlGn", lw=0.5, rstride=1, cstride=1),\
        ax2.plot_surface(*sphere(), color="g", alpha=0.1),\
        ax2.quiver(0,0,0,*xyzs[:,0], arrow_length_ratio=0.3)]

plt.tight_layout()
ani = animation.FuncAnimation(fig, update_plot, frn,\
            fargs=(zarray0, zarray1, xyzs, plot), interval=1000/fps)

#ani.save('movie.mp4',writer='ffmpeg',fps=fps)
#ani.save('movie.gif',writer='imagemagick',fps=fps)
plt.show()

Check out sf_gaussian_spin.py, the standalone version, especially if you want to generate movies.

You can see pregenerated versions here:

  1. X rotation
  2. Y rotation
  3. Z rotation
  4. X rotation from random starting place
  5. Y rotation from random starting place
  6. Z rotation from random starting place

Finally, just for fun, check out sf_atiyah.py, which implements the Atiyah construction from a few weeks ago using StrawberryFields' Gaussian backend. We have $n$ spins arranged in 3D, and so we have $2n$ oscillators total. The exchange algebra of the spins consists of rotation operators acting across the spins, and so everything is Gaussian! Thus, given a set of constellations, we can convert the resulting unitary operator into a symplectic transformation, and indeed everything works out as desired!