{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# This cell is added by sphinx-gallery\n# It can be customized to whatever you like\n%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n.. role:: html(raw)\n :format: html\n\n\nQuantum neural network\n======================\n\n \"Neural networks are not black boxes. They are a big pile of linear algebra.\" - Randall Munroe,\n xkcd _\n\nMachine learning has a wide range of models for tasks such as classification, regression, and\nclustering. Neural networks are one of the most successful models, having experienced a resurgence\nin use over the past decade due to improvements in computational power and advanced software\nlibraries. The typical structure of a neural network consists of a series of interacting layers that\nperform transformations on data passing through the network. An archetypal neural network structure\nis the feedforward neural network, visualized by the following example:\n\n:html:
\n\n![](/tutorials/images/neural_network.svg)\n\n :align: center\n :width: 85%\n :target: javascript:void(0);\n\n:html:
\n\nHere, the neural network depth is determined by the number of layers, while the maximum width is\ngiven by the layer with the greatest number of neurons. The network begins with an input layer of\nreal-valued neurons, which feed forward onto a series of one or more hidden layers. Following the\nnotation of [_], if the $n$ neurons at one layer are given by the\nvector $\mathbf{x} \in \mathbb{R}^{n}$, the $m$ neurons of the next layer take the\nvalues\n\n\begin{align}\mathcal{L}(\mathbf{x}) = \varphi (W \mathbf{x} + \mathbf{b}),\end{align}\n\nwhere\n\n* $W \in \mathbb{R}^{m \times n}$ is a matrix,\n\n* $\mathbf{b} \in \mathbb{R}^{m}$ is a vector, and\n\n* $\varphi$ is a nonlinear function (also known as the activation function).\n\nThe matrix multiplication $W \mathbf{x}$ is a linear transformation on $\mathbf{x}$,\nwhile $W \mathbf{x} + \mathbf{b}$ represents an **affine transformation**. In principle, any\nnonlinear function can be chosen for $\varphi$, but often the choice is fixed from a standard\nset of activations _ that includes the rectified\nlinear unit (ReLU) and the sigmoid function acting on each neuron. Finally, the output layer enacts\nan affine transformation on the last hidden layer, but the activation function may be linear\n(including the identity), or a different nonlinear function such as softmax\n_ (for classification).\n\nLayers in the feedforward neural network above are called **fully connected** as every neuron in a\ngiven hidden layer or output layer can be connected to all neurons in the previous layer through the\nmatrix $W$. Over time, specialized versions of layers have been developed to focus on\ndifferent problems. For example, convolutional layers have a restricted form of connectivity and are\nsuited to machine learning with images. 
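For concreteness, the layer transformation $\mathcal{L}(\mathbf{x}) = \varphi(W\mathbf{x} + \mathbf{b})$ can be sketched in a few lines of NumPy. The helper names `dense_layer` and `relu` below are illustrative only, not taken from any particular library:

```python
import numpy as np

def relu(x):
    # rectified linear unit activation, applied elementwise
    return np.maximum(0, x)

def dense_layer(x, W, b, activation=relu):
    # affine transformation W @ x + b, followed by the nonlinearity
    return activation(W @ x + b)

# a layer mapping 3 input neurons to 2 output neurons
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3))
b = rng.normal(size=2)
x = np.array([1.0, -0.5, 2.0])
y = dense_layer(x, W, b)
print(y.shape)  # (2,)
```

Stacking several such layers, with an affine output layer at the end, yields the feedforward network pictured above.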
We focus here on fully connected layers as the most general\ntype.\n\nTraining of neural networks uses variations of the gradient descent\n_ algorithm on a cost function characterizing the\nsimilarity between outputs of the neural network and training data. The gradient of the cost\nfunction can be calculated using automatic differentiation\n_, with knowledge of the feedforward\nnetwork structure.\n\nQuantum neural networks aim to encode neural networks into a quantum system, with the intention of\nbenefiting from quantum information processing. There have been numerous attempts to define a\nquantum neural network, each with varying advantages and disadvantages. The quantum neural network\ndetailed below, following the work of [_], has a CV architecture and is\nrealized using standard CV gates from Strawberry Fields. One advantage of this CV architecture is\nthat it naturally accommodates the continuous nature of neural networks. Additionally, the CV\nmodel can easily apply nonlinear transformations using the phase space picture, a task\nwhich qubit-based models struggle with, often relying on measurement postselection which has a\nprobability of failure.\n\nImplementation\n--------------\n\nA CV quantum neural network layer can be defined as\n\n\begin{align}\mathcal{L} := \Phi \circ \mathcal{D} \circ \mathcal{U}_{2} \circ \mathcal{S} \circ \mathcal{U}_{1},\end{align}\n\nwhere\n\n* $\mathcal{U}_{k}=U_{k}(\boldsymbol{\theta}_{k},\boldsymbol{\phi}_{k})$ is an $N$ mode\n interferometer,\n\n* $\mathcal{D}=\otimes_{i=1}^{N}D(\alpha_{i})$ is a single mode displacement gate\n (:class:~strawberryfields.ops.Dgate) with complex displacement $\alpha_{i} \in \mathbb{C}$,\n\n* $\mathcal{S}=\otimes_{i=1}^{N}S(r_{i})$ is a single mode squeezing gate\n (:class:~strawberryfields.ops.Sgate)\n acting on each mode with squeezing parameter $r_{i} \in \mathbb{R}$, and\n\n* $\Phi=\otimes_{i=1}^{N}\Phi(\lambda_{i})$ is a non-Gaussian gate on 
each mode with parameter\n $\\lambda_{i} \\in \\mathbb{R}$.\n\n

#### Note

Any non-Gaussian gate such as the cubic phase gate (:class:~strawberryfields.ops.Vgate)\n represents a valid choice, but we recommend the Kerr gate (:class:~strawberryfields.ops.Kgate)\n for simulations in Strawberry Fields. The Kerr gate is more accurate numerically because it is\n diagonal in the Fock basis.

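To see why the Kerr gate is numerically well behaved, note that $K(\kappa) = \exp(i\kappa \hat{n}^2)$ acts diagonally on Fock states, so its matrix in a truncated Fock basis can be written down exactly. A minimal NumPy sketch (the function name `kerr_matrix` is ours, not a Strawberry Fields API):

```python
import numpy as np

def kerr_matrix(kappa, cutoff):
    # Kerr gate exp(i*kappa*n^2) in the truncated Fock basis: purely diagonal
    n = np.arange(cutoff)
    return np.diag(np.exp(1j * kappa * n ** 2))

K = kerr_matrix(0.1, cutoff=5)

# the off-diagonal part is exactly zero, so truncation introduces no error
# in the matrix elements themselves, and the truncated matrix stays unitary
off_diag = K - np.diag(np.diag(K))
print(np.allclose(off_diag, 0))  # True
```

By contrast, a gate that is non-diagonal in the Fock basis (such as the cubic phase gate) mixes Fock levels above and below the cutoff, so its truncated matrix is only approximate.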
\n\nThe layer is shown below as a circuit:\n\n:html:
\n\n![](/tutorials/images/layer.svg)\n\n :align: center\n :width: 70%\n :target: javascript:void(0);\n\n:html:
\n\nThese layers can then be composed to form a quantum neural network. The width of the network can\nalso be varied between layers [_].\n\nReproducing classical neural networks\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nLet's see how the quantum layer can embed the transformation $\\mathcal{L}(\\mathbf{x}) =\n\\varphi (W \\mathbf{x} + \\mathbf{b})$ of a classical neural network layer. Suppose\n$N$-dimensional data is encoded in position eigenstates so that\n\n\\begin{align}\\mathbf{x} \\Leftrightarrow \\ket{\\mathbf{x}} := \\ket{x_{1}} \\otimes \\ldots \\otimes \\ket{x_{N}}.\\end{align}\n\nWe want to perform the transformation\n\n\\begin{align}\\ket{\\mathbf{x}} \\Rightarrow \\ket{\\varphi (W \\mathbf{x} + \\mathbf{b})}.\\end{align}\n\nIt turns out that the quantum circuit above can do precisely this! Consider first the affine\ntransformation $W \\mathbf{x} + \\mathbf{b}$. Leveraging the singular value decomposition, we\ncan always write $W = O_{2} \\Sigma O_{1}$ with $O_{k}$ orthogonal matrices and\n$\\Sigma$ a positive diagonal matrix. These orthogonal transformations can be carried out using\ninterferometers without access to phase, i.e., with $\\boldsymbol{\\phi}_{k} = 0$:\n\n\\begin{align}U_{k}(\\boldsymbol{\\theta}_{k},\\mathbf{0})\\ket{\\mathbf{x}} = \\ket{O_{k} \\mathbf{x}}.\\end{align}\n\nOn the other hand, the diagonal matrix $\\Sigma = {\\rm diag}\\left(\\{c_{i}\\}_{i=1}^{N}\\right)$\ncan be achieved through squeezing:\n\n\\begin{align}\\otimes_{i=1}^{N}S(r_{i})\\ket{\\mathbf{x}} \\propto \\ket{\\Sigma \\mathbf{x}},\\end{align}\n\nwith $r_{i} = \\log (c_{i})$. Finally, the addition of a bias vector $\\mathbf{b}$ is done\nusing position displacement gates:\n\n\\begin{align}\\otimes_{i=1}^{N}D(\\alpha_{i})\\ket{\\mathbf{x}} = \\ket{\\mathbf{x} + \\mathbf{b}},\\end{align}\n\nwith $\\mathbf{b} = \\{\\alpha_{i}\\}_{i=1}^{N}$ and $\\alpha_{i} \\in \\mathbb{R}$. 
Putting\nthis all together, we see that the operation $\\mathcal{D} \\circ \\mathcal{U}_{2} \\circ\n\\mathcal{S} \\circ \\mathcal{U}_{1}$ with phaseless interferometers and position displacement performs\nthe transformation $\\ket{\\mathbf{x}} \\Rightarrow \\ket{W \\mathbf{x} + \\mathbf{b}}$ on position\neigenstates.\n\n
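The decomposition underlying this argument is easy to verify numerically. Below is a small NumPy check that a generic real $W$ factors as $W = O_{2} \Sigma O_{1}$ with orthogonal $O_{k}$ and positive diagonal $\Sigma$, and that the squeezing parameters $r_{i} = \log(c_{i})$ recover $\Sigma$:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 3
W = rng.normal(size=(N, N))

# singular value decomposition: W = O2 @ Sigma @ O1
O2, c, O1 = np.linalg.svd(W)
Sigma = np.diag(c)

# O1 and O2 are orthogonal (implementable as phaseless interferometers),
# Sigma is positive diagonal (implementable via squeezing)
assert np.allclose(O2 @ O2.T, np.eye(N))
assert np.allclose(O1 @ O1.T, np.eye(N))
assert np.allclose(O2 @ Sigma @ O1, W)

# squeezing parameters that realize Sigma: r_i = log(c_i)
r = np.log(c)
print(r)
```

For a generic real matrix the singular values $c_{i}$ are strictly positive, so the logarithm is well defined.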

#### Warning

The TensorFlow backend is the natural simulator for quantum neural networks in Strawberry\n Fields, but this backend cannot naturally accommodate position eigenstates, which require\n infinite squeezing. For simulation of position eigenstates in this backend, the best approach is\n to use a displaced squeezed state (:class:prepare_displaced_squeezed_state\n ) with high\n squeezing value r. However, to avoid significant numerical error, it is important to make sure\n that all initial states have negligible amplitude for Fock states $\\ket{n}$ with\n $n\\geq \\texttt{cutoff_dim}$, where $\\texttt{cutoff_dim}$ is the cutoff dimension.

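To make the cutoff warning concrete: the photon-number distribution of a squeezed vacuum state $S(r)\ket{0}$ has the standard closed form $P(2n) = \binom{2n}{n} \tanh^{2n}(r) / (4^{n} \cosh r)$, with all odd terms vanishing. The sketch below (pure NumPy, independent of Strawberry Fields) shows that the probability mass beyond a cutoff of 10 is negligible for moderate squeezing but substantial for large $r$:

```python
import numpy as np
from math import factorial, cosh, tanh

def squeezed_vacuum_probs(r, cutoff):
    # Fock-basis photon-number probabilities of squeezed vacuum S(r)|0>:
    # P(n) = n! / (4^(n/2) ((n/2)!)^2) * tanh(r)^n / cosh(r) for even n,
    # and zero for odd n
    probs = np.zeros(cutoff)
    for n in range(0, cutoff, 2):
        m = n // 2
        probs[n] = (factorial(n) / (4 ** m * factorial(m) ** 2)
                    * tanh(r) ** n / cosh(r))
    return probs

cutoff = 10
for r in (0.5, 2.0):
    tail = 1 - squeezed_vacuum_probs(r, cutoff).sum()
    # small for r = 0.5, large for r = 2.0
    print(f"r = {r}: probability beyond cutoff ~ {tail:.4f}")
```

With high squeezing the truncated state loses a large fraction of its norm, which is exactly the numerical error the warning above cautions against.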
\n\nFinally, the nonlinear function $\varphi$ can be achieved through a restricted type of\nnon-Gaussian gates $\otimes_{i=1}^{N}\Phi(\lambda_{i})$ acting on each mode (see\n[_] for more details), resulting in the transformation\n\n\begin{align}\otimes_{i=1}^{N}\Phi(\lambda_{i})\ket{\mathbf{x}} = \ket{\varphi(\mathbf{x})}.\end{align}\n\nThe operation $\mathcal{L} = \Phi \circ \mathcal{D} \circ \mathcal{U}_{2} \circ \mathcal{S}\n\circ \mathcal{U}_{1}$ with phaseless interferometers, position displacements, and restricted\nnon-Gaussian gates can hence be seen as enacting a classical neural network layer\n$\ket{\mathbf{x}} \Rightarrow \ket{\varphi(W \mathbf{x} + \mathbf{b})}$ on position eigenstates.\n\nExtending to quantum neural networks\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nIn fact, CV quantum neural network layers can be made more expressive than their classical\ncounterparts. We can do this by lifting the above restrictions on $\mathcal{L}$, i.e.:\n\n- Using arbitrary interferometers $U_{k}(\boldsymbol{\theta}_{k},\boldsymbol{\phi}_{k})$ with\n access to phase and general displacement gates (i.e., not necessarily position displacement). This\n allows $\mathcal{D} \circ \mathcal{U}_{2} \circ \mathcal{S} \circ \mathcal{U}_{1}$ to\n represent a general Gaussian operation.\n- Using arbitrary non-Gaussian gates $\Phi(\lambda_{i})$, such as the Kerr gate.\n- Encoding data outside of the position eigenbasis, for example using the Fock basis instead.\n\nMoreover, gates in a single layer form a universal gate set, making the CV quantum neural network a\nmodel for universal quantum computing, i.e., a sufficient number of layers can carry out any quantum\nalgorithm implementable on a CV quantum computer.\n\nCV quantum neural networks can be trained both through classical simulation and directly on quantum\nhardware. 
Strawberry Fields relies on classical simulation to evaluate cost functions of the CV\nquantum neural network and the resultant gradients with respect to parameters of each layer.\nHowever, this becomes an intractable task with increasing network depth and width. Ultimately,\ndirect evaluation on hardware will likely be necessary for large-scale networks; an approach for\nhardware-based training is mapped out in [_]. The PennyLane\n_ library provides tools for training hybrid\nquantum-classical machine learning models, using both simulators and real-world quantum hardware.\n\nExample CV quantum neural network layers are shown below for one to four modes:\n\n:html:
\n\n.. figure:: /tutorials/images/layer_1mode.svg\n :align: center\n :width: 31%\n :target: javascript:void(0);\n\n One mode layer\n\n:html:
\n\n\n.. figure:: /tutorials/images/layer_2mode.svg\n :align: center\n :width: 46%\n :target: javascript:void(0);\n\n Two mode layer\n\n:html:
\n\n\n\n.. figure:: /tutorials/images/layer_3mode.svg\n :align: center\n :width: 75%\n :target: javascript:void(0);\n\n Three mode layer\n\n:html:
\n\n.. figure:: /tutorials/images/layer_4mode.svg\n :align: center\n :width: 90%\n :target: javascript:void(0);\n\n Four mode layer\n\n:html:
\n\nHere, the multimode linear interferometers $U_{1}$ and $U_{2}$ have been decomposed into\ntwo-mode phaseless beamsplitters (:class:~strawberryfields.ops.BSgate) and single-mode phase shifters\n(:class:~strawberryfields.ops.Rgate) using the Clements decomposition [_]. The Kerr gate is used as\nthe non-Gaussian gate.\n\nCode\n----\n\nFirst, we import Strawberry Fields, TensorFlow, and NumPy:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\nimport tensorflow as tf\nimport strawberryfields as sf\nfrom strawberryfields import ops" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before we begin defining our optimization problem, let's first create\nsome convenient utility functions.\n\nUtility functions\n~~~~~~~~~~~~~~~~~\n\nThe first step to writing a CV quantum neural network layer in Strawberry Fields is to define a\nfunction for the two interferometers:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def interferometer(params, q):\n \"\"\"Parameterised interferometer acting on N modes.\n\n Args:\n params (list[float]): list of length max(1, N-1) + (N-1)*N parameters.\n\n * The first N(N-1)/2 parameters correspond to the beamsplitter angles\n * The second N(N-1)/2 parameters correspond to the beamsplitter phases\n * The final N-1 parameters correspond to local rotation on the first N-1 modes\n\n q (list[RegRef]): list of Strawberry Fields quantum registers the interferometer\n is to be applied to\n \"\"\"\n N = len(q)\n theta = params[:N*(N-1)//2]\n phi = params[N*(N-1)//2:N*(N-1)]\n rphi = params[-N+1:]\n\n if N == 1:\n # the interferometer is a single rotation\n ops.Rgate(rphi) | q\n return\n\n n = 0 # keep track of free parameters\n\n # Apply the rectangular beamsplitter array\n # The array depth is N\n for l in range(N):\n for k, (q1, q2) in enumerate(zip(q[:-1], q[1:])):\n # skip even 
or odd pairs depending on layer\n if (l + k) % 2 != 1:\n ops.BSgate(theta[n], phi[n]) | (q1, q2)\n n += 1\n\n # apply the final local phase shifts to all modes except the last one\n for i in range(max(1, N - 1)):\n ops.Rgate(rphi[i]) | q[i]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

#### Warning

The :class:~strawberryfields.ops.Interferometer class in Strawberry Fields does not reproduce\n the functionality above. Instead, :class:~strawberryfields.ops.Interferometer applies a given\n input unitary matrix according to the Clements decomposition.

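As a quick sanity check of the parameter bookkeeping: the interferometer takes $N(N-1)/2$ beamsplitter angles, $N(N-1)/2$ beamsplitter phases, and $\max(1, N-1)$ final local rotations, and a full layer consumes two interferometers plus $4N$ single-mode parameters (squeezing, displacement magnitude, displacement phase, and Kerr). This can be reproduced in plain Python; the helper names below are ours:

```python
def interferometer_param_count(N):
    # N(N-1)/2 beamsplitter angles + N(N-1)/2 beamsplitter phases
    # + max(1, N-1) final local rotations
    return N * (N - 1) + max(1, N - 1)

def layer_param_count(N):
    # two interferometers, plus squeezing, displacement magnitude,
    # displacement phase, and Kerr parameters (N of each)
    return 2 * interferometer_param_count(N) + 4 * N

for N in range(1, 5):
    print(N, interferometer_param_count(N), layer_param_count(N))
```

The per-layer total simplifies to 2*(max(1, N-1) + N**2 + N), matching the shape used when initializing the weights below.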
\n\nUsing the above interferometer function, an $N$ mode CV quantum neural network layer is\ngiven by the function:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def layer(params, q):\n \"\"\"CV quantum neural network layer acting on N modes.\n\n Args:\n params (list[float]): list of length 2*(max(1, N-1) + N**2 + N) containing\n the parameters for the layer\n q (list[RegRef]): list of Strawberry Fields quantum registers the layer\n is to be applied to\n \"\"\"\n N = len(q)\n M = int(N * (N - 1)) + max(1, N - 1)\n\n int1 = params[:M]\n s = params[M:M+N]\n int2 = params[M+N:2*M+N]\n dr = params[2*M+N:2*M+2*N]\n dp = params[2*M+2*N:2*M+3*N]\n k = params[2*M+3*N:2*M+4*N]\n\n # begin layer\n interferometer(int1, q)\n\n for i in range(N):\n ops.Sgate(s[i]) | q[i]\n\n interferometer(int2, q)\n\n for i in range(N):\n ops.Dgate(dr[i], dp[i]) | q[i]\n ops.Kgate(k[i]) | q[i]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we define one more utility function to help us initialize\nthe TensorFlow weights for our quantum neural network layers:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def init_weights(modes, layers, active_sd=0.0001, passive_sd=0.1):\n \"\"\"Initialize a 2D TensorFlow Variable containing normally-distributed\n random weights for an N mode quantum neural network with L layers.\n\n Args:\n modes (int): the number of modes in the quantum neural network\n layers (int): the number of layers in the quantum neural network\n active_sd (float): the standard deviation used when initializing\n the normally-distributed weights for the active parameters\n (displacement, squeezing, and Kerr magnitude)\n passive_sd (float): the standard deviation used when initializing\n the normally-distributed weights for the passive parameters\n (beamsplitter angles and all gate phases)\n\n Returns:\n 
tf.Variable[tf.float32]: A TensorFlow Variable of shape\n [layers, 2*(max(1, modes-1) + modes**2 + modes)], where the Lth\n row represents the layer parameters for the Lth layer.\n \"\"\"\n # Number of interferometer parameters:\n M = int(modes * (modes - 1)) + max(1, modes - 1)\n\n # Create the TensorFlow variables\n int1_weights = tf.random.normal(shape=[layers, M], stddev=passive_sd)\n s_weights = tf.random.normal(shape=[layers, modes], stddev=active_sd)\n int2_weights = tf.random.normal(shape=[layers, M], stddev=passive_sd)\n dr_weights = tf.random.normal(shape=[layers, modes], stddev=active_sd)\n dp_weights = tf.random.normal(shape=[layers, modes], stddev=passive_sd)\n k_weights = tf.random.normal(shape=[layers, modes], stddev=active_sd)\n\n weights = tf.concat(\n [int1_weights, s_weights, int2_weights, dr_weights, dp_weights, k_weights], axis=1\n )\n\n weights = tf.Variable(weights)\n\n return weights" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Optimization\n~~~~~~~~~~~~\n\nNow that we have our utility functions, let's begin defining our optimization problem.\nIn this particular example, let's create a 1 mode CVQNN with 8 layers and a Fock-basis\ncutoff dimension of 6. 
We will train this QNN to output a desired target state:\na single-photon state.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# set the random seed\ntf.random.set_seed(137)\nnp.random.seed(137)\n\n\n# define width and depth of CV quantum neural network\nmodes = 1\nlayers = 8\ncutoff_dim = 6\n\n\n# define the desired target state (single photon state)\ntarget_state = np.zeros(cutoff_dim)\ntarget_state[1] = 1\ntarget_state = tf.constant(target_state, dtype=tf.complex64)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, let's initialize an engine with the TensorFlow \"tf\" backend,\nand begin constructing our QNN program.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# initialize engine and program\neng = sf.Engine(backend=\"tf\", backend_options={\"cutoff_dim\": cutoff_dim})\nqnn = sf.Program(modes)\n\n# initialize QNN weights\nweights = init_weights(modes, layers) # our TensorFlow weights\nnum_params = np.prod(weights.shape) # total number of parameters in our model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To construct the program, we must create and use Strawberry Fields symbolic\ngate arguments. 
These will be mapped to the TensorFlow variables on engine\nexecution.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Create array of Strawberry Fields symbolic gate arguments, matching\n# the size of the weights Variable.\nsf_params = np.arange(num_params).reshape(weights.shape).astype(str)\nsf_params = np.array([qnn.params(*i) for i in sf_params])\n\n\n# Construct the symbolic Strawberry Fields program by\n# looping and applying layers to the program.\nwith qnn.context as q:\n for k in range(layers):\n layer(sf_params[k], q)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "where sf_params is a real array of size [layers, 2*(max(1, modes-1) + modes**2 + modes)]\ncontaining the symbolic gate arguments for the quantum neural network.\n\nNow that our QNN program is defined, we can create our **cost function**.\nOur cost function simply executes the QNN on our engine using the values of the\ninput weights.\n\nSince we want to maximize the fidelity $f(w) = |\\langle \\psi(w) | \\psi_t\\rangle|^2$\nbetween our QNN output state $|\\psi(w)\\rangle$ and our target state\n$|\\psi_t\\rangle$, we compute the inner product between the two statevectors,\nas well as the norm $\\left\\lVert \\psi(w) - \\psi_t\\right\\rVert$.\n\nFinally, we also return the trace of the output QNN state. This should always\nhave a value close to 1. 
If it deviates significantly from 1, this is an\nindication that we need to increase our Fock-basis cutoff.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def cost(weights):\n # Create a dictionary mapping from the names of the Strawberry Fields\n # symbolic gate parameters to the TensorFlow weight values.\n mapping = {p.name: w for p, w in zip(sf_params.flatten(), tf.reshape(weights, [-1]))}\n\n # run the engine\n state = eng.run(qnn, args=mapping).state\n ket = state.ket()\n\n difference = tf.reduce_sum(tf.abs(ket - target_state))\n fidelity = tf.abs(tf.reduce_sum(tf.math.conj(ket) * target_state)) ** 2\n return difference, fidelity, ket, tf.math.real(state.trace())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We are now ready to minimize our cost function using TensorFlow:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# set up the optimizer\nopt = tf.keras.optimizers.Adam()\ncost_before, fidelity_before, _, _ = cost(weights)\n\n# Perform the optimization\nfor i in range(1000):\n # reset the engine if it has already been executed\n if eng.run_progs:\n eng.reset()\n\n with tf.GradientTape() as tape:\n loss, fid, ket, trace = cost(weights)\n\n # one repetition of the optimization\n gradients = tape.gradient(loss, weights)\n opt.apply_gradients(zip([gradients], [weights]))\n\n # Prints progress at every rep\n if i % 1 == 0:\n print(\"Rep: {} Cost: {:.4f} Fidelity: {:.4f} Trace: {:.4f}\".format(i, loss, fid, trace))\n\n\nprint(\"\\nFidelity before optimization: \", fidelity_before.numpy())\nprint(\"Fidelity after optimization: \", fid.numpy())\nprint(\"\\nTarget state: \", target_state.numpy())\nprint(\"Output state: \", np.round(ket.numpy(), decimals=3))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For more applications of CV quantum neural networks, see the :doc:state learning \nand 
:doc:gate synthesis  demonstrations.\n\nReferences\n----------\n\n..  Nathan Killoran, Thomas R Bromley, Juan Miguel Arrazola, Maria Schuld, Nicol\u00e1s Quesada, and\n Seth Lloyd. Continuous-variable quantum neural networks. arXiv preprint arXiv:1806.06871,\n 2018.\n\n..  Maria Schuld, Ville Bergholm, Christian Gogolin, Josh Izaac, and Nathan Killoran. Evaluating\n analytic gradients on quantum hardware. Physical Review A, 99(3):032331, 2019.\n\n..  William R Clements, Peter C Humphreys, Benjamin J Metcalf, W Steven Kolthammer, and Ian A\n Walmsley. Optimal design for universal multiport interferometers. Optica, 3(12):1460\u20131465,\n 2016. doi:10.1364/OPTICA.3.001460.\n\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.15" } }, "nbformat": 4, "nbformat_minor": 0 }