
Page 1: Similarities and Representations of Graphs

Anton Tsitsulin

Hasso Plattner Institute → University of Bonn

Page 2: Graph world is diverse

Different domains:
• Information
• Social
• Biological
• Transportation
• …

Page 3: Graph world is diverse

Different graph types:
• (Un)directed
• (Un)weighted
• Temporal
• Heterogeneous
• …

Page 4: Graph world is diverse

Different modalities:
• Nodes
• Edges
• Motifs
• Subgraphs
• Whole graphs
• …

Page 5: Graph world is diverse

Different tasks:
• Classification
• Clustering
• Anomaly detection
• …

Page 6–7: Graph world is diverse

Domains: Information, Social, Biological, Transportation
Graph Types: (Un)directed, (Un)weighted, Temporal, Heterogeneous
Modalities: Nodes, Edges, Subgraphs, Whole graphs
Tasks: Classification, Clustering, Anomaly detection

→ Embeddings

Page 8–11: Why representations?

We have fast & good algorithms for mining vector data…

[figure: graph → low-dimensional representation → k-means → clustering]

Page 12–16: Why representations?

We have fast & good algorithms for mining vector data…

[figure: graph → low-dimensional representation → logistic regression → classification]

Page 17–19: What makes a Representation?

A good representation preserves the geometry of the original space.

Similarities ↔ Representations: L₂ → geodesics
Obtained via factorization, NNs, …

Page 20: Part I: node representations

VERSE: Versatile Graph Embeddings from Similarity Measures

Anton Tsitsulin¹ Davide Mottin¹ Panagiotis Karras² Emmanuel Müller¹

Page 21: Neural node representations

Nodes in random walks ≈ words in sentences → word2vec

Random walks [2, 3] → self-supervised neural network [1] → factorization W·Wᵀ → representation

[1] Efficient Estimation of Word Representations in Vector Space, Mikolov et al., ICLR 2013
[2] DeepWalk: Online Learning of Social Representations, Perozzi et al., KDD 2014
[3] node2vec: Scalable Feature Learning for Networks, Grover & Leskovec, KDD 2016
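To make the recipe on this slide concrete, here is a minimal sketch of the DeepWalk-style pipeline it references: uniform random walks treated as "sentences" and fed to word2vec. It assumes networkx and gensim (≥ 4) as stand-in libraries; the walk length, number of walks, and embedding dimension are illustrative choices, not the settings from the cited papers.

```python
# Minimal DeepWalk-style sketch: random walks over a graph, fed to word2vec.
import random
import networkx as nx
from gensim.models import Word2Vec  # assumes gensim >= 4

def random_walks(G, num_walks=10, walk_length=40, seed=0):
    rng = random.Random(seed)
    walks = []
    nodes = list(G.nodes())
    for _ in range(num_walks):
        rng.shuffle(nodes)
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                nbrs = list(G.neighbors(walk[-1]))
                if not nbrs:
                    break
                walk.append(rng.choice(nbrs))
            walks.append([str(v) for v in walk])  # word2vec expects token "sentences"
    return walks

G = nx.karate_club_graph()
model = Word2Vec(random_walks(G), vector_size=32, window=5, min_count=0, sg=1)
emb = {v: model.wv[str(v)] for v in G.nodes()}   # node -> 32-dimensional representation
```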

Page 22–23: Wait, what?

Do we know what these walks mean?
• What do the parameters change?
• What does the model optimize?

word2vec can be understood as matrix factorization!
Yes, but the assumptions are too strict! (dimensionality = number of nodes)

[1] Metric recovery from directed unweighted graphs, Hashimoto et al., AISTATS 2015
[2] Neural Word Embedding as Implicit Matrix Factorization, Levy & Goldberg, NIPS 2014

Page 24–25: Key observation

Random walks define node similarity distributions!

[figure: similarity distributions sim(u,·) and sim(v,·) for two nodes u and v]

Random walks converge to Personalized PageRank.

Q: Can we inject similarities fully into the model?
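The convergence claim can be made concrete with a small sketch: iterating the random-walk-with-restart recursion yields the Personalized PageRank distribution sim(u,·) for a source node u. The damping factor α = 0.85 and the dense transition matrix are illustrative simplifications, not the paper's implementation.

```python
# Sketch: Personalized PageRank distribution for a source node u, computed by
# iterating   p <- (1 - alpha) * e_u + alpha * p @ P
# where P is the row-stochastic transition matrix of the random walk.
import numpy as np
import networkx as nx

def personalized_pagerank(G, u, alpha=0.85, iters=100):
    nodes = list(G.nodes())
    idx = {v: i for i, v in enumerate(nodes)}
    A = nx.to_numpy_array(G, nodelist=nodes)
    P = A / A.sum(axis=1, keepdims=True)        # transition matrix of the walk
    p = np.zeros(len(nodes)); p[idx[u]] = 1.0   # start at u
    e_u = p.copy()
    for _ in range(iters):
        p = (1 - alpha) * e_u + alpha * p @ P   # restart at u with probability 1 - alpha
    return p                                    # sim(u, .) as a probability distribution

G = nx.karate_club_graph()
ppr_u = personalized_pagerank(G, u=0)
```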

Page 26: Yes, we can!

VERSE can learn similarity distributions.

[figure: node similarities sim(u,·) → self-supervised neural network → factorization W·Wᵀ → representation of u]

Q1: Which similarities can we possibly represent?
Q2: What do other methods have to do with similarities?

Page 27–29: Why similarities?

• We can explicitly measure the quality
• We can adapt the similarity to the data/task
  • Examples in the paper: PageRank, SimRank, adjacency
• Thinking about similarities provides insight:
  • We show how DeepWalk & node2vec ≈ PPR
  • VERSE uses 1 parameter instead of 5

Page 30: VERSE graph embedding

Algorithm for a given sim(u,·):
• Initialize W ~ 𝒩(0, 1)
• For u ∈ V, optimize W so that the softmax matches sim(u,·), by gradient descent

[figure: sim(u,·) → factorization W·Wᵀ → representation]

Full updates are too expensive: O(n²). We make it faster via sampling!
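A minimal sketch of the objective just described, assuming the target similarities are given as a dense row-stochastic matrix (for example, the PPR vectors from the earlier sketch): for each node u, softmax(W[u]·Wᵀ) is pushed towards sim(u,·) by gradient descent. The learning rate, dimension, and epoch count are illustrative, and one pass already touches all n² pairs, which is the cost the slide flags.

```python
# Sketch of the full-softmax objective: for every node u, minimize the
# cross-entropy between sim(u, .) and softmax(W[u] @ W.T).
import numpy as np

def verse_full_softmax(sim_matrix, dim=32, epochs=50, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = sim_matrix.shape[0]
    W = rng.normal(0.0, 1.0, size=(n, dim))        # W ~ N(0, 1)
    for _ in range(epochs):
        for u in range(n):
            logits = W @ W[u]                      # scores of u against all nodes
            q = np.exp(logits - logits.max())
            q /= q.sum()                           # predicted distribution over nodes
            err = q - sim_matrix[u]                # cross-entropy gradient wrt logits
            grad_u = err @ W                       # gradient wrt the source row W[u]
            W -= lr * np.outer(err, W[u])          # gradient wrt all context rows
            W[u] -= lr * grad_u
    return W
```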

Page 31: Sampling in VERSE

We use Noise Contrastive Estimation.

Negative Sampling does not preserve similarities!
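A simplified NCE-flavored update, under the assumption of a uniform noise distribution Q and a dense sim matrix: draw one "positive" node from sim(u,·), a few "negative" nodes from Q, and train a logistic classifier to tell them apart. This is a sketch of the idea only, not the paper's exact estimator or its C implementation.

```python
# Simplified NCE-style sampled update for the VERSE-like objective above.
import numpy as np

def verse_nce(sim_matrix, dim=32, steps=100_000, neg=3, lr=0.025, seed=0):
    rng = np.random.default_rng(seed)
    n = sim_matrix.shape[0]
    W = rng.normal(0.0, 1.0, size=(n, dim))
    log_kq = np.log(neg / n)                        # log(k * Q(v)) for uniform noise Q
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    for _ in range(steps):
        u = rng.integers(n)
        pos = rng.choice(n, p=sim_matrix[u])        # positive sample ~ sim(u, .)
        samples = [(pos, 1.0)] + [(rng.integers(n), 0.0) for _ in range(neg)]
        for v, label in samples:
            g = (sigmoid(W[u] @ W[v] - log_kq) - label) * lr   # logistic-loss gradient
            W[u], W[v] = W[u] - g * W[v], W[v] - g * W[u]
    return W                                        # row W[u] is the embedding of node u
```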

Page 32: Experimental setup

• Goal: diverse tasks & datasets
• PPR as the default similarity
• Max. one day on a 10-core server (try that at home!)
• Code @ GitHub, C for performance (don't try that at home!)

Page 33: Social graph × link prediction

[chart: link prediction accuracy]

Page 34: Different graphs × node clustering

[chart: node clustering modularity]

Page 35: Web graph × node classification

[chart: node classification accuracy]

Page 36: Take-home messages

• We provide a new, useful abstraction: node similarities
• We create VERSE to work explicitly with similarities
• We develop a scalable approximation technique with NCE
• There is room for improvement!

Page 37: Part II: graph representations

NetLSD: Hearing the Shape of a Graph

Anton Tsitsulin¹ Davide Mottin¹ Panagiotis Karras² Alex Bronstein³ Emmanuel Müller¹

¹ HPI, Germany   ² Aarhus University, Denmark   ³ Technion, Israel

Page 38: Defining graph similarity

With it, we can do:
• Classification
• Clustering
• Anomaly detection
• …

Page 39: Scalability is key!

Two problem sources:
• Big graphs
• Many graphs

Solution: graph descriptors

Page 40: Isomorphism ⇒ d(G₁, G₂) = 0

3 key properties:
• Permutation invariance
• Scale-adaptivity
• Size invariance

Page 41–43: 3 key properties (permutation invariance, scale-adaptivity, size invariance)

• Local structures are important
• Global structure is important
• We may need to disregard the size

Page 44: Network Laplacian Spectral Descriptors

3 key properties:
• Permutation invariance
• Scale-adaptivity
• Size invariance
+ Scalability
= NetLSD

Page 45–50: Optimal Transport

Geometry for probability measures supported on a space.
(G. Monge, 1781; L. Kantorovich, 1939; C. Villani, 2003)

[figure: transporting a source measure μ onto a target measure ν]

Discrete case → linear programming
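As a concrete illustration of "discrete case → linear programming", here is a hedged sketch that solves a toy transport problem with scipy.optimize.linprog (assumes SciPy with the HiGHS solver); the histograms a, b and the cost matrix C are made up for the example.

```python
# Sketch: discrete optimal transport as a linear program.
#   minimize <C, P>   subject to   P @ 1 = a,  P.T @ 1 = b,  P >= 0
import numpy as np
from scipy.optimize import linprog

def optimal_transport(a, b, C):
    n, m = C.shape
    # Row-sum constraints (P @ 1 = a) and column-sum constraints (P.T @ 1 = b),
    # written over the flattened n*m transport plan.
    A_rows = np.kron(np.eye(n), np.ones((1, m)))
    A_cols = np.kron(np.ones((1, n)), np.eye(m))
    res = linprog(c=C.ravel(),
                  A_eq=np.vstack([A_rows, A_cols]),
                  b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    return res.x.reshape(n, m), res.fun            # transport plan and its cost

a = np.array([0.5, 0.5])
b = np.array([0.25, 0.25, 0.5])
C = np.array([[0.0, 1.0, 2.0],
              [2.0, 1.0, 0.0]])
plan, cost = optimal_transport(a, b, C)
```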

Page 51–54: Gromov–Wasserstein distance

[figure: two metric-measure spaces X and Y, with pairwise distances d within X and d̄ within Y]

Page 55–58: The heat kernel has an explicit notion of scale

Scale corresponds to locality.

Page 59–61: Spectral Gromov–Wasserstein = Gromov–Wasserstein + heat kernel

Using the heat kernel at all t as a distance doesn't, by itself, make the task any easier.
However, spectral Gromov–Wasserstein has a useful lower bound: we can just compare heat traces!

Page 62: Network Laplacian Spectral Descriptors

We sample t logarithmically and compare hₜ with the L₂ distance.
However, hₜ is size-dependent!
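A minimal sketch of the heat-trace descriptor described here, assuming the normalized Laplacian from networkx and a dense eigendecomposition (the O(n³) step the next slides address). It also applies one possible size normalization, dividing by n (the empty-graph heat trace), which the next slide mentions; the scale grid is an illustrative choice.

```python
# Minimal sketch of a NetLSD-style heat-trace descriptor:
#   h_t = trace(exp(-t L)) = sum_j exp(-t * lambda_j),   t on a logarithmic grid.
import numpy as np
import networkx as nx

def heat_trace_descriptor(G, n_scales=256, t_min=1e-2, t_max=1e2):
    L = nx.normalized_laplacian_matrix(G).toarray()
    lam = np.linalg.eigvalsh(L)                        # all Laplacian eigenvalues
    ts = np.logspace(np.log10(t_min), np.log10(t_max), n_scales)
    h = np.exp(-np.outer(ts, lam)).sum(axis=1)         # heat trace at every scale t
    return h / G.number_of_nodes()                     # "empty graph" size normalization

def netlsd_distance(G1, G2):
    return np.linalg.norm(heat_trace_descriptor(G1) - heat_trace_descriptor(G2))

d = netlsd_distance(nx.karate_club_graph(), nx.erdos_renyi_graph(50, 0.1, seed=0))
```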

Page 63: Size invariance = normalization

We can normalize by the hₜ of the complete graph K or of the empty graph K̄.
Computing all eigenvalues λ is still expensive: O(n³).

Page 64–66: Scalability

We propose two options:
1. Use a local Taylor expansion: the second term is the degree distribution; the third is a weighted triangle count.
2. Compute the top + bottom eigenvalues and approximate the rest. Linear extrapolation = an explicit assumption on the manifold (Weyl's law).

Other spectrum approximators can be even more efficient!
[Cohen-Steiner et al., KDD 2018]
[Adams et al., arXiv:1802.03451]
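A sketch of option 1, under the assumption that only small scales t matter: truncate the Taylor series of tr(e^(-tL)) after a few terms, using traces of sparse Laplacian powers (which encode degree and weighted-triangle statistics, as the slide notes). The truncation order K = 3 is an illustrative choice, and the approximation degrades quickly for larger t.

```python
# Sketch of option 1: approximate the heat trace with a truncated Taylor series,
#   trace(exp(-t L)) ~= sum_{k=0..K} (-t)^k * trace(L^k) / k!
import numpy as np
import networkx as nx
from math import factorial

def heat_trace_taylor(G, ts, K=3):
    L = nx.normalized_laplacian_matrix(G).tocsr()
    traces, Lk = [], None
    for k in range(K + 1):
        if k == 0:
            traces.append(G.number_of_nodes())          # trace(I) = n
        else:
            Lk = L if Lk is None else Lk @ L             # sparse power L^k
            traces.append(Lk.diagonal().sum())           # trace(L^k)
    ts = np.asarray(ts)
    return sum(((-ts) ** k) * tr / factorial(k) for k, tr in enumerate(traces))

ts = np.logspace(-2, 0, 50)                              # small scales only
approx = heat_trace_taylor(nx.karate_club_graph(), ts)
```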

Page 67: Experimental design

3 key properties:
• Permutation invariance
• Scale-adaptivity
• Size invariance

Page 68: Detecting graphs with communities

[chart: accuracy of classifying SBM vs. Erdős–Rényi graphs; NetLSD vs. NIPS'17 and ASONAM'13 baselines]

Page 69: Detecting rewired graphs

[chart: accuracy of classifying real vs. rewired graphs; NetLSD vs. NIPS'17 and ASONAM'13 baselines]

Page 70: Classifying real graphs

[chart: graph classification accuracy; NetLSD vs. NIPS'17 and ASONAM'13 baselines]

Page 71: Expressive graph comparison

3 key properties:
• Permutation invariance
• Scale-adaptivity
• Size invariance
+ Scalability
= NetLSD

Page 72: Questions?

code + data: github.com/xgfs
website: tsitsul.in ← the presentation will be there
write me: [email protected]

Page 73: Network Laplacian Spectral Descriptors: wave kernel trace

We sample t logarithmically and compare Re(wₜ) with the L₂ distance.
wₜ detects symmetries! (≈ quantum random walks)
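For completeness, a matching sketch of the wave-trace variant: wₜ = Σⱼ exp(-i t λⱼ), with Re(wₜ) compared across graphs. The scale grid and the use of the normalized Laplacian are illustrative assumptions, mirroring the heat-trace sketch above.

```python
# Sketch of the wave-trace variant: w_t = sum_j exp(-i * t * lambda_j),
# compared across graphs via Re(w_t) = sum_j cos(t * lambda_j).
import numpy as np
import networkx as nx

def wave_trace_descriptor(G, n_scales=256, t_min=1e-2, t_max=1e2):
    lam = np.linalg.eigvalsh(nx.normalized_laplacian_matrix(G).toarray())
    ts = np.logspace(np.log10(t_min), np.log10(t_max), n_scales)
    return np.cos(np.outer(ts, lam)).sum(axis=1)        # Re(w_t) at every scale t

r = wave_trace_descriptor(nx.karate_club_graph())
```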

Page 74: Hearing the Shape of a Graph

"Can One Hear the Shape of a Drum?" – Kac, 1966

No, as there are co-spectral drums (and graphs).

Conjecture: the fraction of graphs with a co-spectral mate → 0 as the number of nodes → ∞
[Durfee, Martin 2015]