Minimizing the amount of correlations

Author: Nicolas Quesada

In this paper by Jiang, Lang, and Caves [1], the authors show that if one has two qumode states \(\left|\psi \right\rangle\) and \(\left|\phi \right\rangle\) and a beamsplitter \(\text{BS}(\theta)\), then the only way that no entanglement is generated when the beamsplitter acts on the product of the two states

\[\left|\Psi \right\rangle = \text{BS}(\theta) \ \left|\psi \right\rangle \otimes \left|\phi \right\rangle,\]

is if the states \(\left|\psi \right\rangle\) and \(\left|\phi \right\rangle\) are squeezed states along the same quadrature and by the same amount.
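
Here, \(\text{BS}(\theta)\) denotes a beamsplitter with transmission angle \(\theta\). For concreteness, one common convention (sign and phase conventions vary between references) is

\[\text{BS}(\theta) = \exp\left[\theta\left(\hat{a}^\dagger \hat{b} - \hat{a}\hat{b}^\dagger\right)\right],\]

where \(\hat{a}\) and \(\hat{b}\) are the annihilation operators of the two modes, so that a fraction \(\cos^2\theta\) of the light is transmitted.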

Now imagine the following task:

> Given an input state \(\left|\psi \right\rangle\), which is not necessarily a squeezed state, what is the optimal state \(\left|\phi \right\rangle\) incident on a beamsplitter \(\text{BS}(\theta)\) together with \(\left|\psi \right\rangle\), such that the resulting entanglement is minimized?

In our paper [2] we showed that if \(\theta \ll 1\), the optimal state \(\left|\phi \right\rangle\) for any input state \(\left|\psi \right\rangle\) is always a squeezed state. We furthermore conjectured that this holds for any value of \(\theta\).

Here, we explore this question numerically, minimizing over \(\left|\phi \right\rangle\) to find the state that minimizes the entanglement between the two modes.

First, we import the libraries required for this analysis: NumPy, SciPy, TensorFlow, and Strawberry Fields.

import numpy as np
from scipy.linalg import expm
import tensorflow as tf

import strawberryfields as sf
from strawberryfields.ops import *


# set the random seed
tf.random.set_seed(137)
np.random.seed(42)

Now, we set the Fock basis truncation; in this case, we choose a cutoff of 30:

cutoff = 30

Creating the initial states

We define our input state \(\newcommand{\ket}[1]{\left|#1\right\rangle}\ket{\psi}\), an equal superposition of \(\ket{0}\) and \(\ket{1}\):

\[\ket{\psi}=\frac{1}{\sqrt{2}}\left(\ket{0}+\ket{1}\right)\]
psi = np.zeros([cutoff], dtype=np.complex128)
psi[0] = 1.0
psi[1] = 1.0
psi /= np.linalg.norm(psi)
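
As a quick sanity check, we can print the first few amplitudes and the norm; both nonzero amplitudes should equal \(1/\sqrt{2} \approx 0.7071\), and the norm should be 1 up to floating-point precision:

print(np.round(psi[:3], 4))
print(np.linalg.norm(psi))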

We can now define our initial random guess for the second state \(\ket{\phi}\), restricting its support to the first 10 Fock states:

phi = np.random.random(size=[cutoff]) + 1j*np.random.random(size=[cutoff])
phi[10:] = 0.
phi /= np.linalg.norm(phi)

Next, we define the annihilation operator \(\hat{a}\),

a = np.diag(np.sqrt(np.arange(1, cutoff)), k=1)

the number operator \(\hat{n}=\hat{a}^\dagger \hat{a}\),

n_opt = a.T @ a

and the quadrature operators \(\hat{x}=\hat{a}+\hat{a}^\dagger\), \(\hat{p}=-i(\hat{a}-\hat{a}^\dagger)\).

# Define quadrature operators
x = a + a.T
p = -1j*(a-a.T)
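
Because we work in a truncated Fock space, these matrices satisfy the canonical commutation relations only approximately: the identities below hold everywhere except in the last Fock level. A minimal sanity check (note that \(\hat{x}=\hat{a}+\hat{a}^\dagger\) corresponds to the \(\hbar=2\) convention, for which \([\hat{x},\hat{p}]=2i\)):

# [a, a^dagger] = 1, except in the last Fock level,
# where the truncation introduces an artifact
comm = a @ a.T - a.T @ a
print(np.allclose(comm[:-1, :-1], np.eye(cutoff - 1)))

# [x, p] = 2i in the hbar = 2 convention
comm_xp = x @ p - p @ x
print(np.allclose(comm_xp[:-1, :-1], 2j * np.eye(cutoff - 1)))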

We can now calculate the displacement of a state in phase space, \(\alpha=\langle \psi \mid\hat{a}\mid\psi\rangle\). The following function calculates this displacement, and then displaces the state by \(-\alpha\) to ensure it has zero displacement.

def recenter(state):
    alpha = state.conj() @ a @ state
    disp_alpha = expm(alpha.conj()*a - alpha*a.T)
    out_state = disp_alpha @ state
    return out_state
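
To see recenter in action, we can apply it to a state with a known displacement. The following is a short illustration (not used in the rest of the analysis), building a truncated coherent state \(\ket{\alpha}\) with amplitudes \(e^{-|\alpha|^2/2}\alpha^n/\sqrt{n!}\) for \(\alpha=0.5\):

from scipy.special import factorial

alpha_test = 0.5
ns = np.arange(cutoff)
coh = np.exp(-np.abs(alpha_test)**2/2) * alpha_test**ns / np.sqrt(factorial(ns))

print(coh.conj() @ a @ coh)  # approximately 0.5
coh_centered = recenter(coh)
print(coh_centered.conj() @ a @ coh_centered)  # approximately 0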

First, let’s have a look at the displacement of state \(\ket{\psi}\) and state \(\ket{\phi}\):

print(psi.conj().T @ a @ psi)
print(phi.conj().T @ a @ phi)

Out:

(0.4999999999999999+0j)
(1.4916658938055996+0.20766920902420394j)

Now, let’s center them in the phase space:

psi = recenter(psi)
phi = recenter(phi)

Checking that they now have zero displacement:

print(np.round(psi.conj().T @ a @ psi, 9))
print(np.round(phi.conj().T @ a @ phi, 9))

Out:

0j
(8e-08+1.1e-08j)

Performing the optimization

We can now construct the variational quantum circuit cost function using Strawberry Fields:

penalty_strength = 10

prog = sf.Program(2)
eng = sf.Engine("tf", backend_options={"cutoff_dim": cutoff})

psi = tf.cast(psi, tf.complex64)
phi_var_re = tf.Variable(phi.real)
phi_var_im = tf.Variable(phi.imag)


with prog.context as q:
    Ket(prog.params("input_state")) | q
    BSgate(np.pi/4, 0) | q

Here, we initialize two real TensorFlow variables, phi_var_re and phi_var_im, holding the real and imaginary parts of the initial state of mode q[1], which we will optimize over (the optimizer works with real parameters, so the complex amplitudes are stored as two real vectors). In the cost function below, we take the tensor product \(\ket{in}=\ket{\psi}\otimes\ket{\phi}\), computed as the outer product of the two amplitude vectors, and use the Ket operator to initialize the circuit in this two-mode pure state; BSgate(np.pi/4, 0) then acts as a 50-50 beamsplitter.

def cost(phi_var_re, phi_var_im):
    phi_var = tf.cast(phi_var_re, tf.complex64) + 1j*tf.cast(phi_var_im, tf.complex64)

    in_state = tf.einsum('i,j->ij', psi, phi_var)

    result = eng.run(prog, args={"input_state": in_state}, modes=[1])
    state = result.state

    rhoB = state.dm()
    cost = tf.cast(tf.math.real((tf.linalg.trace(rhoB @ rhoB)
                                -penalty_strength*tf.linalg.trace(rhoB @ x)**2
                                -penalty_strength*tf.linalg.trace(rhoB @ p)**2)
                           /(tf.linalg.trace(rhoB))**2), tf.float64)

    return cost

The cost function runs the engine, and we use the argument modes=[1] to return the reduced density matrix of mode q[1].

The returned cost contains the purity of the reduced density matrix,

\[\text{Tr}(\rho_B^2),\]

minus a penalty that forces the optimized state to have zero displacement; that is, we want both quadrature expectation values

\[\langle \hat{x}\rangle=\text{Tr}(\rho_B\hat{x}), \qquad \langle \hat{p}\rangle=\text{Tr}(\rho_B\hat{p})\]

to vanish. Finally, we divide by \(\text{Tr}(\rho_B)^2\) so that the state is always normalized. Putting everything together, the quantity being maximized is

\[C = \frac{\text{Tr}(\rho_B^2) - \lambda\,\text{Tr}(\rho_B\hat{x})^2 - \lambda\,\text{Tr}(\rho_B\hat{p})^2}{\text{Tr}(\rho_B)^2},\]

where \(\lambda\) is the penalty strength (penalty_strength = 10 above).
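
Before optimizing, it can be useful to evaluate the cost once for the initial random \(\ket{\phi}\) as a smoke test (the exact value depends on the random seed; the engine reset at the top of the optimization loop below takes care of this extra run):

print(cost(phi_var_re, phi_var_im).numpy())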

We can now set up the optimization. Note that we want to maximize the cost (the purity minus the penalty), so in the loop below we minimize its negative. Running the optimization process for 200 reps:

opt = tf.keras.optimizers.Adam(learning_rate=0.007)

reps = 200

cost_progress = []

for i in range(reps):
    # reset the engine if it has already been executed
    if eng.run_progs:
        eng.reset()

    with tf.GradientTape() as tape:
        loss = -cost(phi_var_re, phi_var_im)

    # store the loss (negative cost) at each step
    cost_progress.append(loss.numpy())

    # one repetition of the optimization
    gradients = tape.gradient(loss, [phi_var_re, phi_var_im])
    opt.apply_gradients(zip(gradients, [phi_var_re, phi_var_im]))

    # print progress at every rep
    print("Rep: {} Cost: {}".format(i, loss))

Out:

Rep: 0 Cost: -0.4766758978366852
Rep: 1 Cost: -0.42233195900917053
Rep: 2 Cost: -0.4963146448135376
Rep: 3 Cost: -0.45958417654037476
Rep: 4 Cost: -0.48182791471481323
Rep: 5 Cost: -0.5147174000740051
Rep: 6 Cost: -0.5162040591239929
Rep: 7 Cost: -0.5110472440719604
Rep: 8 Cost: -0.5209270119667053
Rep: 9 Cost: -0.5389845967292786
Rep: 10 Cost: -0.5502252578735352
Rep: 11 Cost: -0.5520216822624207
Rep: 12 Cost: -0.5553423166275024
Rep: 13 Cost: -0.5651861429214478
Rep: 14 Cost: -0.5767861604690552
Rep: 15 Cost: -0.5858913660049438
Rep: 16 Cost: -0.5925742983818054
Rep: 17 Cost: -0.5986006855964661
Rep: 18 Cost: -0.6053515076637268
Rep: 19 Cost: -0.613476574420929
Rep: 20 Cost: -0.6225422024726868
Rep: 21 Cost: -0.6312934756278992
Rep: 22 Cost: -0.6387171745300293
Rep: 23 Cost: -0.6450194716453552
Rep: 24 Cost: -0.6514302492141724
Rep: 25 Cost: -0.6588231921195984
Rep: 26 Cost: -0.6668862700462341
Rep: 27 Cost: -0.6747293472290039
Rep: 28 Cost: -0.6818798780441284
Rep: 29 Cost: -0.6885321736335754
Rep: 30 Cost: -0.6950533390045166
Rep: 31 Cost: -0.7016119956970215
Rep: 32 Cost: -0.7082611322402954
Rep: 33 Cost: -0.7150138020515442
Rep: 34 Cost: -0.7217400074005127
Rep: 35 Cost: -0.7281913161277771
Rep: 36 Cost: -0.734244704246521
Rep: 37 Cost: -0.7400315999984741
Rep: 38 Cost: -0.7457355260848999
Rep: 39 Cost: -0.7513757348060608
Rep: 40 Cost: -0.7568838596343994
Rep: 41 Cost: -0.7622534036636353
Rep: 42 Cost: -0.7675090432167053
Rep: 43 Cost: -0.772609531879425
Rep: 44 Cost: -0.7774856686592102
Rep: 45 Cost: -0.7821431756019592
Rep: 46 Cost: -0.7866565585136414
Rep: 47 Cost: -0.7910699844360352
Rep: 48 Cost: -0.7953727841377258
Rep: 49 Cost: -0.799550473690033
Rep: 50 Cost: -0.803610622882843
Rep: 51 Cost: -0.8075515031814575
Rep: 52 Cost: -0.8113498091697693
Rep: 53 Cost: -0.8149938583374023
Rep: 54 Cost: -0.8185037970542908
Rep: 55 Cost: -0.8219096660614014
Rep: 56 Cost: -0.8252221941947937
Rep: 57 Cost: -0.8284385800361633
Rep: 58 Cost: -0.8315609097480774
Rep: 59 Cost: -0.8345983028411865
Rep: 60 Cost: -0.8375518321990967
Rep: 61 Cost: -0.8404151797294617
Rep: 62 Cost: -0.8431856036186218
Rep: 63 Cost: -0.8458704948425293
Rep: 64 Cost: -0.848480224609375
Rep: 65 Cost: -0.851020336151123
Rep: 66 Cost: -0.8534917831420898
Rep: 67 Cost: -0.8558964133262634
Rep: 68 Cost: -0.8582367897033691
Rep: 69 Cost: -0.8605161309242249
Rep: 70 Cost: -0.8627355694770813
Rep: 71 Cost: -0.8648965954780579
Rep: 72 Cost: -0.8670016527175903
Rep: 73 Cost: -0.8690568208694458
Rep: 74 Cost: -0.8710663318634033
Rep: 75 Cost: -0.8730331063270569
Rep: 76 Cost: -0.8749580979347229
Rep: 77 Cost: -0.8768416047096252
Rep: 78 Cost: -0.8786839246749878
Rep: 79 Cost: -0.8804850578308105
Rep: 80 Cost: -0.8822448253631592
Rep: 81 Cost: -0.8839641213417053
Rep: 82 Cost: -0.8856425285339355
Rep: 83 Cost: -0.8872805237770081
Rep: 84 Cost: -0.8888777494430542
Rep: 85 Cost: -0.890434741973877
Rep: 86 Cost: -0.8919485807418823
Rep: 87 Cost: -0.8934179544448853
Rep: 88 Cost: -0.8948408961296082
Rep: 89 Cost: -0.8962129354476929
Rep: 90 Cost: -0.8975301384925842
Rep: 91 Cost: -0.8987878561019897
Rep: 92 Cost: -0.8999796509742737
Rep: 93 Cost: -0.9010999202728271
Rep: 94 Cost: -0.9021427035331726
Rep: 95 Cost: -0.9031026363372803
Rep: 96 Cost: -0.9039767384529114
Rep: 97 Cost: -0.9047644138336182
Rep: 98 Cost: -0.9054698348045349
Rep: 99 Cost: -0.9061003923416138
Rep: 100 Cost: -0.906666100025177
Rep: 101 Cost: -0.9071784615516663
Rep: 102 Cost: -0.9076468348503113
Rep: 103 Cost: -0.9080801010131836
Rep: 104 Cost: -0.9084807634353638
Rep: 105 Cost: -0.9088495969772339
Rep: 106 Cost: -0.9091846942901611
Rep: 107 Cost: -0.9094841480255127
Rep: 108 Cost: -0.909747302532196
Rep: 109 Cost: -0.9099751114845276
Rep: 110 Cost: -0.910169780254364
Rep: 111 Cost: -0.9103357791900635
Rep: 112 Cost: -0.9104783535003662
Rep: 113 Cost: -0.910601794719696
Rep: 114 Cost: -0.9107115268707275
Rep: 115 Cost: -0.9108108878135681
Rep: 116 Cost: -0.9109017252922058
Rep: 117 Cost: -0.9109862446784973
Rep: 118 Cost: -0.9110653400421143
Rep: 119 Cost: -0.9111391305923462
Rep: 120 Cost: -0.9112086296081543
Rep: 121 Cost: -0.9112733602523804
Rep: 122 Cost: -0.911334216594696
Rep: 123 Cost: -0.9113908410072327
Rep: 124 Cost: -0.9114441871643066
Rep: 125 Cost: -0.9114949107170105
Rep: 126 Cost: -0.9115433096885681
Rep: 127 Cost: -0.9115898609161377
Rep: 128 Cost: -0.9116342663764954
Rep: 129 Cost: -0.9116771817207336
Rep: 130 Cost: -0.9117188453674316
Rep: 131 Cost: -0.9117588996887207
Rep: 132 Cost: -0.9117968082427979
Rep: 133 Cost: -0.9118331670761108
Rep: 134 Cost: -0.9118679761886597
Rep: 135 Cost: -0.911899983882904
Rep: 136 Cost: -0.91193026304245
Rep: 137 Cost: -0.9119579195976257
Rep: 138 Cost: -0.9119837880134583
Rep: 139 Cost: -0.9120073318481445
Rep: 140 Cost: -0.9120290875434875
Rep: 141 Cost: -0.9120483994483948
Rep: 142 Cost: -0.9120661616325378
Rep: 143 Cost: -0.912082850933075
Rep: 144 Cost: -0.9120977520942688
Rep: 145 Cost: -0.9121111035346985
Rep: 146 Cost: -0.9121230840682983
Rep: 147 Cost: -0.9121342301368713
Rep: 148 Cost: -0.9121442437171936
Rep: 149 Cost: -0.9121530055999756
Rep: 150 Cost: -0.9121609330177307
Rep: 151 Cost: -0.9121682047843933
Rep: 152 Cost: -0.91217440366745
Rep: 153 Cost: -0.9121802449226379
Rep: 154 Cost: -0.9121854901313782
Rep: 155 Cost: -0.9121902585029602
Rep: 156 Cost: -0.9121941924095154
Rep: 157 Cost: -0.9121984243392944
Rep: 158 Cost: -0.9122018218040466
Rep: 159 Cost: -0.9122052788734436
Rep: 160 Cost: -0.9122081398963928
Rep: 161 Cost: -0.9122109413146973
Rep: 162 Cost: -0.9122133255004883
Rep: 163 Cost: -0.9122160077095032
Rep: 164 Cost: -0.9122180938720703
Rep: 165 Cost: -0.912219762802124
Rep: 166 Cost: -0.9122214913368225
Rep: 167 Cost: -0.9122228622436523
Rep: 168 Cost: -0.9122244119644165
Rep: 169 Cost: -0.9122257232666016
Rep: 170 Cost: -0.9122268557548523
Rep: 171 Cost: -0.9122277498245239
Rep: 172 Cost: -0.9122284650802612
Rep: 173 Cost: -0.9122296571731567
Rep: 174 Cost: -0.912230372428894
Rep: 175 Cost: -0.9122307896614075
Rep: 176 Cost: -0.912231981754303
Rep: 177 Cost: -0.9122324585914612
Rep: 178 Cost: -0.9122327566146851
Rep: 179 Cost: -0.9122333526611328
Rep: 180 Cost: -0.912233829498291
Rep: 181 Cost: -0.9122340679168701
Rep: 182 Cost: -0.9122345447540283
Rep: 183 Cost: -0.9122347831726074
Rep: 184 Cost: -0.9122349619865417
Rep: 185 Cost: -0.9122352004051208
Rep: 186 Cost: -0.9122353196144104
Rep: 187 Cost: -0.9122355580329895
Rep: 188 Cost: -0.912235677242279
Rep: 189 Cost: -0.9122357368469238
Rep: 190 Cost: -0.9122357964515686
Rep: 191 Cost: -0.9122358560562134
Rep: 192 Cost: -0.9122360944747925
Rep: 193 Cost: -0.9122359156608582
Rep: 194 Cost: -0.9122361540794373
Rep: 195 Cost: -0.9122363328933716
Rep: 196 Cost: -0.9122359156608582
Rep: 197 Cost: -0.9122360348701477
Rep: 198 Cost: -0.9122360944747925
Rep: 199 Cost: -0.9122360944747925

We can see that the optimization converges, reaching a purity of approximately 0.91224.
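
Since we stored the loss at every step in cost_progress, we can optionally plot the convergence as well (matplotlib is also used for the Wigner plots below):

from matplotlib import pyplot as plt

plt.plot(cost_progress)
plt.xlabel("Rep")
plt.ylabel("Loss (negative cost)")
plt.show()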

Visualizing the optimum state

We can now calculate the density matrix of the input state \(\ket{\phi}\) which minimizes the entanglement:

\[\rho_{\phi} = \ket{\phi}\left\langle \phi\right|\]
ket_val = phi_var_re.numpy() + phi_var_im.numpy()*1j
out_rhoB = np.outer(ket_val, ket_val.conj())
out_rhoB /= np.trace(out_rhoB)

Note that the optimization is not guaranteed to keep the state normalized, so we renormalize the minimized state.

Next, we can use the following function to plot the Wigner function of this density matrix:

import copy
def wigner(rho, xvec, pvec):
    # Modified from qutip.org
    Q, P = np.meshgrid(xvec, pvec)
    A = (Q + P * 1.0j) / 2  # alpha = (x + i p)/2 in the hbar = 2 convention

    Wlist = np.array([np.zeros(np.shape(A), dtype=complex) for k in range(cutoff)])

    # Wigner function for |0><0|
    Wlist[0] = np.exp(-2.0 * np.abs(A) ** 2) / np.pi

    # W = rho(0,0)W(|0><0|)
    W = np.real(rho[0, 0]) * np.real(Wlist[0])

    for n in range(1, cutoff):
        Wlist[n] = (2.0 * A * Wlist[n - 1]) / np.sqrt(n)
        W += 2 * np.real(rho[0, n] * Wlist[n])

    for m in range(1, cutoff):
        temp = copy.copy(Wlist[m])
        # Wlist[m] = Wigner function for |m><m|
        Wlist[m] = (2 * np.conj(A) * temp - np.sqrt(m)
                    * Wlist[m - 1]) / np.sqrt(m)

        # W += rho(m,m)W(|m><m|)
        W += np.real(rho[m, m] * Wlist[m])

        for n in range(m + 1, cutoff):
            temp2 = (2 * A * Wlist[n - 1] - np.sqrt(m) * temp) / np.sqrt(n)
            temp = copy.copy(Wlist[n])
            # Wlist[n] = Wigner function for |m><n|
            Wlist[n] = temp2

            # W += rho(m,n)W(|m><n|) + rho(n,m)W(|n><m|)
            W += 2 * np.real(rho[m, n] * Wlist[n])

    return Q, P, W / 2

# Import plotting
from matplotlib import pyplot as plt

# use names that don't clobber the quadrature operators x and p
xvec = np.arange(-5, 5, 0.1)
pvec = np.arange(-5, 5, 0.1)
X, P, W = wigner(out_rhoB, xvec, pvec)
plt.contourf(X, P, np.round(W,3), cmap="PiYG")
plt.show()
[Figure: contour plot of the Wigner function of the optimal state \(\ket{\phi}\)]

We see that the optimal state is indeed a (mildly) squeezed state. This can be confirmed by visualizing the Fock state probabilities of state \(\ket{\phi}\):

plt.bar(np.arange(cutoff), height=np.diag(out_rhoB).real)
plt.show()
[Figure: bar chart of the Fock state probabilities of the optimal state \(\ket{\phi}\)]
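
A squeezed vacuum state has support only on the even Fock states, so a simple consistency check is to sum the odd-photon-number probabilities; a value close to zero is consistent with the optimal state being (approximately) a squeezed vacuum:

probs = np.diag(out_rhoB).real
print("total odd-Fock probability =", probs[1::2].sum())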

Finally, we print the mean photon number \(\bar{n} = \left\langle \phi \mid \hat{n} \mid \phi\right\rangle = \text{Tr}(\hat{n}\rho)\), as well as the squeezing magnitude \(r=\sinh^{-1}\left(\sqrt{\bar{n}}\right)\) of this state, using the fact that a squeezed vacuum with squeezing parameter \(r\) has mean photon number \(\bar{n}=\sinh^2 r\):

nbar = np.trace(n_opt @ out_rhoB).real
print("mean number of photons =", nbar)
print("squeezing parameter =", np.arcsinh(np.sqrt(nbar)))

Out:

mean number of photons = 0.08352928718091822
squeezing parameter = 0.28513493072974405
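
As a final check, we can compute the overlap of the optimal state with a squeezed vacuum of this magnitude, maximizing over the unknown squeezing angle \(\varphi\). This is a rough numerical sketch, assuming the squeezing convention \(S(z)=\exp\left[\left(\bar{z}\hat{a}^2 - z\hat{a}^{\dagger 2}\right)/2\right]\) with \(z=re^{i\varphi}\); if the optimal state is indeed a squeezed vacuum, the maximum fidelity should be close to 1:

r = np.arcsinh(np.sqrt(nbar))
vac = np.zeros([cutoff], dtype=np.complex128)
vac[0] = 1.0
phi_norm = ket_val / np.linalg.norm(ket_val)

fids = []
for varphi in np.linspace(0, 2*np.pi, 100, endpoint=False):
    z = r * np.exp(1j * varphi)
    S = expm((np.conj(z) * (a @ a) - z * (a.T @ a.T)) / 2)
    sq_vac = S @ vac
    fids.append(np.abs(sq_vac.conj() @ phi_norm)**2)

print("max fidelity with a squeezed vacuum =", max(fids))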

References

[1] Jiang, Z., Lang, M. D., & Caves, C. M. (2013). Mixing nonclassical pure states in a linear-optical network almost always generates modal entanglement. Physical Review A, 88(4), 044301.

[2] Quesada, N., & Brańczyk, A. M. (2018). Gaussian functions are optimal for waveguided nonlinear-quantum-optical processes. Physical Review A, 98, 043813.
