
# Graph similarity

*Technical details are available in the API documentation:* `sf.apps.similarity`

This page looks at how to use GBS to construct a similarity measure between graphs, known as a graph kernel [1]. Kernels can be applied to graph-based data for machine learning tasks such as classification using a support vector machine.

## Graph data

We begin by fixing a dataset of graphs to consider and loading GBS samples from these graphs, which will be needed in the following.

Let’s use the MUTAG dataset of graphs [2][3]. This is a dataset of 188 different graphs that each correspond to the structure of a chemical compound. Our goal is to use GBS samples from these graphs to measure their similarity.

The `data` module provides pre-calculated GBS samples for selected graphs in the MUTAG dataset. Each set of samples is generated by encoding the graph into a GBS device and collecting photon click events. We'll start by loading four sets of samples and visualizing the corresponding graphs.

```
from strawberryfields.apps import data, plot, similarity
m0 = data.Mutag0()
m1 = data.Mutag1()
m2 = data.Mutag2()
m3 = data.Mutag3()
```

These datasets contain both the adjacency matrix of the graph and the samples generated through GBS. The adjacency matrix of each graph is available through the dataset's `adj` attribute.

Samples from these graphs can be accessed by indexing:

```
print(m0[0])
```

Out:

```
[0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
```

We can now plot the four graphs using the `plot` module. To use this module, we need to convert the adjacency matrices into NetworkX graphs:

```
import networkx as nx
import plotly
# Adjacency matrices of the four graphs, via the datasets' adj attribute
m0_a, m1_a, m2_a, m3_a = m0.adj, m1.adj, m2.adj, m3.adj
```

Plotting `m0_a`:

```
plot.graph(nx.Graph(m0_a))
```

Plotting `m1_a`:

```
plot.graph(nx.Graph(m1_a))
```

Plotting `m2_a`:

```
plot.graph(nx.Graph(m2_a))
```

Plotting `m3_a`:

```
plot.graph(nx.Graph(m3_a))
```

The graphs of `m1_a` and `m2_a` look very similar. In fact, it turns out that they are *isomorphic* to each other, which means that the graphs can be made identical by permuting their node labels.
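To make "permuting node labels" concrete, here is a small pure-Python sketch using a toy 3-node path graph (illustrative only; not one of the MUTAG graphs):

```python
# Adjacency matrix of a 3-node path graph: 0 - 1 - 2
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]

perm = [2, 0, 1]  # a relabelling: new node a corresponds to old node perm[a]

# Adjacency matrix after relabelling: B[a][b] = A[perm[a]][perm[b]]
B = [[A[perm[a]][perm[b]] for b in range(3)] for a in range(3)]

# B describes the same graph with permuted labels, so invariants such as
# the degree sequence are unchanged
print(sorted(sum(row) for row in A))  # [1, 1, 2]
print(sorted(sum(row) for row in B))  # [1, 1, 2]
```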

## Creating a feature vector

Following [1], we can create a *feature vector* to describe each graph.
These feature vectors contain information about the graphs and can be viewed as a mapping to a
high-dimensional feature space, a technique often used in machine learning that allows us to
employ properties of the feature space to separate and classify the vectors.

The feature vector of a graph can be composed in a variety of ways. One approach is to associate features with the relative frequencies of certain types of measurements being recorded from a GBS device configured to sample from the graph, as we now discuss.

We begin by defining the concept of an *orbit*, which is the set of all GBS samples that are equivalent under permutation of the modes. A sample can be converted to its corresponding orbit using the `sample_to_orbit()` function. For example, the first sample of `m0` is `[0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]` and has orbit:

```
print(similarity.sample_to_orbit(m0[0]))
```

Out:

```
[1, 1]
```
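Conceptually, mapping a sample to its orbit just means sorting the nonzero photon counts in descending order. A minimal stand-in for `sample_to_orbit()` (a sketch, not the library implementation):

```python
def to_orbit(sample):
    # Keep the nonzero counts and sort them in descending order
    return sorted((count for count in sample if count > 0), reverse=True)

print(to_orbit([0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]))  # [1, 1]
print(to_orbit([0, 2, 0, 1, 0]))  # [2, 1]
```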

Here, `[1, 1]` means that two photons were detected, each in a separate mode. Other samples can be randomly generated from the `[1, 1]` orbit using:

```
print(similarity.orbit_to_sample([1, 1], modes=m0.modes))
```

Out:

```
[0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0]
```
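One way to picture this step: place the orbit's counts into randomly chosen distinct modes. The sketch below mirrors that idea (an assumption about the behaviour, not the library's actual sampling code):

```python
import random

def random_sample_from_orbit(orbit, modes):
    # Assign each photon count in the orbit to a distinct, randomly chosen mode
    sample = [0] * modes
    for mode, count in zip(random.sample(range(modes), len(orbit)), orbit):
        sample[mode] = count
    return sample

s = random_sample_from_orbit([1, 1], 17)
print(s)  # two 1s placed somewhere among 17 modes
```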

Orbits provide a useful way to coarse-grain the samples from GBS into outcomes that are statistically more likely to be observed. However, we are interested in coarse-graining further into *events*, which correspond to a combination of orbits with the same photon number such that the number of photons counted in each mode does not exceed a fixed value `max_count_per_mode`. To understand this, let's look at all of the orbits with a photon number of 5:

```
print(list(similarity.orbits(5)))
```

Out:

```
[[1, 1, 1, 1, 1], [2, 1, 1, 1], [3, 1, 1], [2, 2, 1], [4, 1], [3, 2], [5]]
```
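The orbits for a given photon number are exactly the integer partitions of that number, which can be generated recursively. A sketch (the library may produce a different ordering):

```python
def partitions(n, max_part=None):
    # Yield all partitions of n as lists of parts in non-increasing order
    if max_part is None:
        max_part = n
    if n == 0:
        yield []
        return
    for part in range(min(n, max_part), 0, -1):
        for rest in partitions(n - part, part):
            yield [part] + rest

print(list(partitions(5)))  # 7 orbits in total, from [5] to [1, 1, 1, 1, 1]
```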

All 5-photon samples belong to one of the orbits above. A 5-photon event with `max_count_per_mode = 3` means that we include the orbits `[[1, 1, 1, 1, 1], [2, 1, 1, 1], [3, 1, 1], [2, 2, 1], [3, 2]]` and ignore the orbits `[[4, 1], [5]]`. For example, the sample `[0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 3, 0, 0, 0]` is a 5-photon event:

```
print(similarity.sample_to_event([0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 3, 0, 0, 0], 3))
```

Out:

```
5
```

Samples with more than `max_count_per_mode` photons in any mode are not counted as part of the event:

```
print(similarity.sample_to_event([0, 4, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0], 3))
```

Out:

```
None
```
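The mapping from sample to event can be sketched in two lines (a simplified stand-in for `sample_to_event()`):

```python
def to_event(sample, max_count_per_mode):
    # Total photon number if no mode exceeds the cap, otherwise None
    return sum(sample) if max(sample) <= max_count_per_mode else None

print(to_event([0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 3, 0, 0, 0], 3))  # 5
print(to_event([0, 4, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0], 3))  # None
```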

Now that we have mastered orbits and events, how can we make a feature vector? It was shown in [1] that one way of making a feature vector of a graph is through the frequencies of events. Specifically, for a \(k\)-photon event \(E_{k, n_{\max}}\) with maximum count per mode \(n_{\max}\) and corresponding probability \(p_{k, n_{\max}}:=p_{E_{k, n_{\max}}}(G)\) with respect to a graph \(G\), a feature vector can be written as

\[f = (p_{k_{1}, n_{\max}}, p_{k_{2}, n_{\max}}, \ldots , p_{k_{K}, n_{\max}}),\]

where \(\mathbf{k} := (k_{1}, k_{2}, \ldots , k_{K})\) is a list of different total photon numbers.

For example, if \(\mathbf{k} := (2, 4, 6)\) and \(n_{\max} = 2\), we have

\[f = (p_{2, 2}, p_{4, 2}, p_{6, 2}).\]
In this case, we are interested in the probabilities of events \(E_{2, 2}\), \(E_{4, 2}\), and \(E_{6, 2}\). Suppose we are sampling from a four-mode device and have the samples `[0, 3, 0, 1]` and `[1, 2, 0, 1]`. These samples are part of the orbits `[3, 1]` and `[2, 1, 1]`, respectively. However, `[3, 1]` is not part of the \(E_{4, 2}\) event while `[2, 1, 1]` is.

## Calculating a feature vector

We provide two methods for calculating a feature vector of GBS event probabilities in Strawberry Fields:

1. Through sampling.
2. Using a Monte Carlo estimate of the probability.

In the first method, all one needs to do is generate some GBS samples from the graph of interest and fix the composition of the feature vector. For example, for a feature vector \(f_{\mathbf{k} = (2, 4, 6), n_{\max}=2}\) we use:

```
print(similarity.feature_vector_sampling(m0, event_photon_numbers=[2, 4, 6], max_count_per_mode=2))
```

Out:

```
[0.19035, 0.2047, 0.1539]
```
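Under the hood, this amounts to counting the relative frequency of each event among the samples. A toy sketch of the idea (assuming, as above, that a sample's event is its total photon number, or None when any mode exceeds the cap):

```python
from collections import Counter

def feature_vector(samples, event_photon_numbers, max_count_per_mode):
    # Relative frequency of each requested event among the samples
    def event(sample):
        return sum(sample) if max(sample) <= max_count_per_mode else None
    counts = Counter(event(s) for s in samples)
    return [counts[k] / len(samples) for k in event_photon_numbers]

toy_samples = [[1, 1, 0, 0], [0, 2, 0, 0], [1, 1, 1, 1], [0, 3, 1, 0]]
print(feature_vector(toy_samples, [2, 4], max_count_per_mode=2))  # [0.5, 0.25]
```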

For the second method, suppose we want to calculate the event probabilities exactly rather than through sampling. To do this, we consider the event probability \(p_{k, n_{\max}}\) as the sum over all sample probabilities in the event. In GBS, each sample probability is determined by the hafnian of a relevant sub-adjacency matrix. While this is tough to calculate, what makes calculating \(p_{k, n_{\max}}\) really challenging is the number of samples the corresponding event contains! For example, the 6-photon event over 17 modes \(E_{k=6, n_{\max}=2}\) contains the following number of samples:

```
print(similarity.event_cardinality(6, 2, 17))
```

Out:

```
58276
```
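This count has a closed form: it is the number of ways to distribute \(k\) photons over \(m\) modes with at most \(n_{\max}\) per mode, i.e. the coefficient of \(z^k\) in \((1 + z + \cdots + z^{n_{\max}})^m\), obtainable by inclusion-exclusion. A sketch reproducing the value above (not the library's implementation):

```python
from math import comb

def cardinality(photon_number, max_count_per_mode, modes):
    # Coefficient of z^k in (1 + z + ... + z^n)^m by inclusion-exclusion
    k, n, m = photon_number, max_count_per_mode, modes
    return sum(
        (-1) ** j * comb(m, j) * comb(k - j * (n + 1) + m - 1, m - 1)
        for j in range(k // (n + 1) + 1)
    )

print(cardinality(6, 2, 17))  # 58276
```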

To avoid calculating a large number of sample probabilities, an alternative is to perform a Monte Carlo approximation. Here, samples within an event are selected uniformly at random and their resultant probabilities are calculated. If \(N\) samples \(\{S_{1}, S_{2}, \ldots , S_{N}\}\) are generated, then the event probability can be approximated as

\[p_{k, n_{\max}} \approx \frac{|E_{k, n_{\max}}|}{N} \sum_{i=1}^{N} p(S_{i}),\]

with \(|E_{k, n_{\max}}|\) denoting the cardinality of the event.
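The estimator itself is only a few lines: draw event members uniformly at random and rescale the mean probability by the event's cardinality. A toy sketch with a hypothetical constant probability function standing in for the GBS sample probability:

```python
import random

def mc_estimate(event_samples, prob, shots, seed=1):
    # Uniformly draw members of the event and rescale by its cardinality
    rng = random.Random(seed)
    draws = [rng.choice(event_samples) for _ in range(shots)]
    return len(event_samples) / shots * sum(prob(s) for s in draws)

# Toy event: all two-photon patterns over 3 modes with at most 1 photon per mode
event = [[1, 1, 0], [1, 0, 1], [0, 1, 1]]
prob = lambda s: 0.01  # hypothetical constant sample probability
print(mc_estimate(event, prob, shots=100))  # approximately 0.03
```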

This method can be accessed using the `prob_event_mc()` function. The 4-photon event is approximated as:

```
print(similarity.prob_event_mc(nx.Graph(m0_a), 4, max_count_per_mode=2, n_mean=6))
```

Out:

```
0.2428883811210141
```

The feature vector can then be calculated through Monte Carlo sampling using `feature_vector_mc()`.

Note

The results of `prob_event_mc()` and `feature_vector_mc()` are probabilistic and may vary between runs. Increasing the optional `samples` parameter will increase accuracy but slow down calculation.

The second method of Monte Carlo approximation is intended for use in scenarios where it is computationally intensive to pre-calculate a statistically significant dataset of samples from GBS.

## Machine learning with GBS graph kernels

The power of feature vectors that embed graphs in a vector space of real numbers is that we can now measure similarities between graphs. This is very useful in machine learning, where similar labels are assigned to graphs that are close to each other. GBS feature vectors therefore give rise to a similarity measure between graphs!

Let’s build this up a bit more. The MUTAG dataset we are considering contains not only graphs corresponding to the structure of chemical compounds, but also a *label* of each compound based upon its mutagenic effect. The four graphs we consider here have labels:

- MUTAG0: Class 1
- MUTAG1: Class 0
- MUTAG2: Class 0
- MUTAG3: Class 1

```
classes = [1, 0, 0, 1]
```

We can use GBS feature vectors in a support vector machine (SVM) that finds a separating hyperplane between classes in the feature space. We start by defining two-dimensional feature vectors:

```
events = [8, 10]
max_count = 2
f1 = similarity.feature_vector_sampling(m0, events, max_count)
f2 = similarity.feature_vector_sampling(m1, events, max_count)
f3 = similarity.feature_vector_sampling(m2, events, max_count)
f4 = similarity.feature_vector_sampling(m3, events, max_count)
import numpy as np
R = np.array([f1, f2, f3, f4])
print(R)
```

Out:

```
[[0.0884 0.042 ]
[0.0704 0.02855]
[0.06995 0.02935]
[0.0962 0.04585]]
```

There is freedom in the choice of `events` composing the feature vectors and we encourage the reader to explore different combinations. Note, however, that odd photon-numbered events have zero probability because ideal GBS only generates and outputs pairs of photons.

Given our points in the feature space and their target labels, we can use scikit-learn’s support vector machine `LinearSVC` as our model to train:

```
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
R_scaled = StandardScaler().fit_transform(R) # Transform data to zero mean and unit variance
classifier = LinearSVC()
classifier.fit(R_scaled, classes)
```

Out:

```
LinearSVC(C=1.0, class_weight=None, dual=True, fit_intercept=True,
intercept_scaling=1, loss='squared_hinge', max_iter=1000,
multi_class='ovr', penalty='l2', random_state=None, tol=0.0001,
verbose=0)
```

Here, the term “linear” refers to the *kernel* function used to calculate inner products between vectors in the space. We can use a linear SVM because we have already embedded the graphs in a feature space based on GBS. We have also rescaled the feature vectors so that they have zero mean and unit variance using scikit-learn’s `StandardScaler`, a technique often used in machine learning.

We can then visualize the trained SVM by plotting the decision boundary with respect to the points:

```
w = classifier.coef_[0]
i = classifier.intercept_[0]
m = -w[0] / w[1] # finding the values for y = mx + b
b = -i / w[1]
xx = [-1, 1]
yy = [m * x + b for x in xx]
fig = plot.points(R_scaled, classes)
fig.add_trace(plotly.graph_objects.Scatter(x=xx, y=yy, mode="lines"))
fig
```

This plot shows the two classes (grey points for class 0 and red points for class 1) successfully separated by the linear hyperplane using the GBS feature space. Moreover, recall that the MUTAG1 and MUTAG2 graphs of class 0 are actually isomorphic. Reassuringly, their corresponding feature vectors are very similar. In fact, the feature vectors of isomorphic graphs should always be identical [4]; the small discrepancy in this plot is due to the statistical approximation from sampling.

## References

- [1] Maria Schuld, Kamil Brádler, Robert Israel, Daiqin Su, and Brajesh Gupt. A quantum hardware-induced graph kernel based on Gaussian boson sampling. arXiv preprint arXiv:1905.12646, 2019.

- [2] Asim Kumar Debnath, Rosa L. Lopez de Compadre, Gargi Debnath, Alan J. Shusterman, and Corwin Hansch. Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity. Journal of Medicinal Chemistry, 34(2):786–797, 1991.

- [3] Nils Kriege and Petra Mutzel. Subgraph matching kernels for attributed graphs. In Proceedings of the 29th International Conference on Machine Learning, 2012.

- [4] Kamil Bradler, Shmuel Friedland, Josh Izaac, Nathan Killoran, and Daiqin Su. Graph isomorphism and Gaussian boson sampling. arXiv preprint arXiv:1810.10644, 2018.

**Total running time of the script:** ( 3 minutes 42.535 seconds)
