
Interstellar: Searching Recurrent Architecture for Knowledge Graph Embedding

Yongqi Zhang 1,3    Quanming Yao 1,2    Lei Chen 3

1 4Paradigm Inc.   2 Department of Electronic Engineering, Tsinghua University   3 Department of Computer Science and Engineering, HKUST

{zhangyongqi,yaoquanming}@4paradigm.com, [email protected]

Abstract

Knowledge graph (KG) embedding is a well-established approach to learning representations of KGs. Many models have been proposed to learn the interactions between entities and relations of the triplets. However, long-term information among multiple triplets is also important to KGs. In this work, based on the relational paths, which are composed of a sequence of triplets, we define the Interstellar as a recurrent neural architecture search problem for the short-term and long-term information along the paths. First, we analyze the difficulty of using a unified model to work as the Interstellar. Then, we propose to search for a recurrent architecture as the Interstellar for different KG tasks. A case study on synthetic data illustrates the importance of the defined search problem. Experiments on real datasets demonstrate the effectiveness of the searched models and the efficiency of the proposed hybrid-search algorithm. 1

1 Introduction

Knowledge Graph (KG) [3, 43, 52] is a special kind of graph with many relational facts. It has inspired many knowledge-driven applications, such as question answering [30, 37], medical diagnosis [60], and recommendation [28]. An example of a KG is in Figure 1(a). Each relational fact in a KG is represented as a triplet in the form of (subject entity, relation, object entity), abbreviated as (s, r, o). To learn from KGs and benefit the downstream tasks, embedding based methods, which learn low-dimensional vector representations of the entities and relations, have recently been developed as a promising direction to serve this purpose [7, 18, 45, 52].

Many efforts have been made on modeling the plausibility of triplets (s, r, o) through learning embeddings. Representative works are triplet-based models, such as TransE [7], ComplEx [49], ConvE [13], RotatE [44], AutoSF [61], which define different embedding spaces and learn from single triplets (s, r, o). Even though these models perform well in capturing the short-term semantic information inside the triplets in a KG, they still cannot capture the information among multiple triplets.

In order to better capture the complex information in KGs, the relational path is introduced as a promising format to learn compositions of relations [19, 27, 38] and long-term dependencies of triplets [26, 11, 18, 51]. As in Figure 1(b), a relational path is defined as a sequence of L triplets (s_1, r_1, o_1), (s_2, r_2, o_2), ..., (s_L, r_L, o_L), which are connected head-to-tail, i.e. o_i = s_{i+1}, ∀ i = 1 ... L-1. The paths not only preserve every single triplet but also capture the dependency among a sequence of triplets. Based on the relational paths, the triplet-based models can be made compatible by working on each triplet (s_i, r_i, o_i) separately. TransE-Comp [19] and PTransE [27] learn the composition of relations on the relational paths. To capture the long-term information

1 Code is available at https://github.com/AutoML-4Paradigm/Interstellar, and correspondence is to Q. Yao.

34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.


Figure 1: Short-term information is represented by a single triplet. Long-term information passes across multiple triplets. The two kinds of information in KGs can be preserved in the relational path. (a) An example of Knowledge Graph. (b) Examples of relational paths.

in KGs, Chains [11] and RSN [18] design customized RNNs to leverage all the entities and relations along the path. However, these RNN models still overlook the semantics inside each triplet [18]. Another type of model leverages the Graph Convolution Network (GCN) [24] to extract structural information in KGs, e.g. R-GCN, GCN-Align [53], CompGCN [50]. However, GCN-based methods do not scale well since the entire KG needs to be processed, and they have large sample complexity [15].

In this paper, we observe that the relational path is an important and effective data structure that can preserve both short-term and long-term information in KGs. Since the semantic patterns and the graph structures in KGs are diverse [52], how to leverage the short-term and long-term information for a specific KG task is non-trivial. Inspired by the success of neural architecture search (NAS) [14], we propose to search recurrent architectures as the Interstellar to learn from the relational path. The contributions of our work are summarized as follows:

1. We analyze the difficulty and importance of using the relational path to learn the short-term and long-term information in KGs. Based on the analysis, we define the Interstellar as a recurrent network to process the information along the relational path.

2. We formulate the above problem as a NAS problem and propose a domain-specific search space. Different from searching RNN cells, the recurrent network in our space is specifically designed for KG tasks and covers many human-designed embedding models.

3. We identify the problems of adopting stand-alone and one-shot search algorithms for our search space. This motivates us to design a hybrid-search algorithm to search efficiently.

4. We use a case study on a synthetic data set to show the reasonableness of our search space. Empirical experiments on entity alignment and link prediction tasks demonstrate the effectiveness of the searched models and the efficiency of the search algorithm.

Notations. We denote vectors by lowercase boldface and matrices by uppercase boldface. A KG G = (E, R, S) is defined by the set of entities E, relations R and triplets S. A triplet (s, r, o) ∈ S represents a relation r that links the subject entity s to the object entity o. The embeddings in this paper are denoted as boldface letters of the corresponding indexes, e.g. s, r, o are the embeddings of s, r, o. "∘" is the element-wise multiplication and "⊗" is the Hermitian product [49] in complex space.

2 Related Works

2.1 Representation Learning in Knowledge Graph (KG)

Given a single triplet (s, r, o), TransE [7] models the relation r as a translation vector from the subject entity s to the object entity o, i.e., the embeddings satisfy s + r ≈ o. The following works, DistMult [55], ComplEx [49], ConvE [13], RotatE [44], etc., interpret the interactions among the embeddings s, r and o in different ways. All of them learn embeddings based on single triplets.
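For concreteness, here is a minimal PyTorch sketch (not the authors' implementation) of three of these triplet-based scores; the function names, batch shapes and dimensions are illustrative assumptions.

```python
import torch

# A minimal sketch of triplet-based scoring functions from Section 2.1,
# for batched embeddings s, r, o of shape (batch, d).
def transe_score(s, r, o):
    # TransE: plausible triplets satisfy s + r ≈ o, so score by negative distance.
    return -torch.norm(s + r - o, p=1, dim=-1)

def distmult_score(s, r, o):
    # DistMult: bilinear score using the element-wise product "∘".
    return torch.sum(s * r * o, dim=-1)

def complex_score(s, r, o):
    # ComplEx: real part of the Hermitian product "⊗" in complex space.
    return torch.sum(s * r * o.conj(), dim=-1).real

s, r, o = (torch.randn(4, 32) for _ in range(3))                          # real embeddings
sc, rc, oc = (torch.randn(4, 16, dtype=torch.cfloat) for _ in range(3))   # complex embeddings
print(transe_score(s, r, o).shape, distmult_score(s, r, o).shape,
      complex_score(sc, rc, oc).shape)
```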

In KGs, a relational path is a sequence of triplets. PTransE [27] and TransE-Comp [19] propose to learn the composition of relations (r_1, r_2, ..., r_n). In order to combine more pieces of information in a KG, Chains [11] and RSN [18] are proposed to jointly learn the entities and relations along the relational path. With different connections and combinators, these models process short-term and long-term information in different ways.


Table 1: The recurrent function of existing KG embedding models. We represent the triplet/path-based models by Definition 1. N(·) denotes the neighbors of an entity. The W ∈ R^{d×d} are different weight matrices. σ is a non-linear activation function. "cell" means an RNN cell [12], like GRU [9], LSTM [46], etc. For the mini-batch complexity, m is the batch size and d is the embedding dimension.

type          | model                    | unit function                                                  | complexity
triplet-based | TransE [7]               | v_t = s_t + r_t,  h_t = 0                                      | O(md)
triplet-based | ComplEx [49]             | v_t = s_t ⊗ r_t,  h_t = 0                                      | O(md)
GCN-based     | R-GCN [40]               | s_t = σ(s_{t-1} + Σ_{s'∈N(s)} W_t^{(r)} s'_{t-1})              | O(|E||R|d)
GCN-based     | GCN-Align [53]           | s_t = σ(s_{t-1} + Σ_{s'∈N(s)} W_t s'_{t-1})                    | O(|E|d)
path-based    | PTransE [27] (add)       | v_t = h_t,  h_t = h_{t-1} + r_t                                | O(mLd)
path-based    | PTransE [27] (multiply)  | v_t = h_t,  h_t = h_{t-1} ∘ r_t                                | O(mLd)
path-based    | PTransE [27] (RNN)       | v_t = h_t,  h_t = cell(r_t, h_{t-1})                           | O(mLd^2)
path-based    | Chains [11]              | v_t = h_t,  h_t = cell(s_t, r_t, h_{t-1})                      | O(mLd^2)
path-based    | RSN [18]                 | v_t = W_1 s_t + W_2 h_t,  h_t = cell(r_t, cell(s_t, h_{t-1}))  | O(mLd^2)
path-based    | Interstellar             | a searched recurrent network                                   | O(mLd^2)
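As a concrete reading of the path-based rows in Table 1, the sketch below is an assumption-laden illustration rather than the authors' code: nn.GRUCell stands in for the generic "cell", and the helper names and shapes are ours.

```python
import torch
import torch.nn as nn

# A minimal sketch of two path-based recurrences from Table 1. Inputs are
# lists of per-step embeddings of shape (batch, d) along one relational path.
def ptranse_add(r_seq):
    # PTransE (add): h_t = h_{t-1} + r_t, v_t = h_t (only relations are composed).
    h = torch.zeros_like(r_seq[0])
    outs = []
    for r_t in r_seq:
        h = h + r_t
        outs.append(h)
    return outs

class RSNLike(nn.Module):
    # RSN-style: h_t = cell(r_t, cell(s_t, h_{t-1})), v_t = W1 s_t + W2 h_t.
    def __init__(self, d):
        super().__init__()
        self.cell = nn.GRUCell(d, d)
        self.W1 = nn.Linear(d, d, bias=False)
        self.W2 = nn.Linear(d, d, bias=False)

    def forward(self, s_seq, r_seq):
        h = torch.zeros_like(s_seq[0])
        outs = []
        for s_t, r_t in zip(s_seq, r_seq):
            h = self.cell(r_t, self.cell(s_t, h))
            outs.append(self.W1(s_t) + self.W2(h))
        return outs

d, L, m = 32, 3, 4
s_seq = [torch.randn(m, d) for _ in range(L)]
r_seq = [torch.randn(m, d) for _ in range(L)]
print(len(ptranse_add(r_seq)), len(RSNLike(d)(s_seq, r_seq)))
```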

Graph convolutional networks (GCNs) [24] have recently been developed as a promising method to learn from graph data. Since a KG is a special instance of a graph, GCNs have also been introduced in KG learning, e.g., R-GCN [40], GCN-Align [53], VR-GCN [58] and CompGCN [50]. However, these models are hard to scale since the whole KG must be loaded for each training iteration. Besides, GCNs have been theoretically shown to have a worse generalization guarantee than RNNs in sequence learning tasks (Section 5.2 in [15]). Table 1 summarizes the above works and compares them with the proposed Interstellar, a NAS method customized to path-based KG representation learning.

2.2 Neural Architecture Search (NAS)

Searching for better neural networks with NAS techniques has broken through the bottleneck of manual architecture design [14, 20, 41, 63, 56]. To guarantee effectiveness and efficiency, the first thing to consider is the search space. It defines what architectures can be represented in principle, e.g. CNNs or RNNs. In general, the search space should be expressive but also tractable. The space for CNNs has developed from searching the macro architecture [63], to micro cells [29], and further to larger and more sophisticated cells [47]. Many promising architectures have been found that outperform human-designed CNNs in the literature [1, 54]. However, designing the search space for recurrent neural networks has attracted little attention. The searched architectures mainly focus on cells rather than connections among cells [36, 63].

To search efficiently, the evaluation method, which provides feedback signals, and the search algorithm, which guides the optimization direction, should be considered simultaneously. There are two important approaches to NAS. 1) Stand-alone methods, i.e. separately training and evaluating each model from scratch, are the most reliable way to compare different architectures, but are very slow. 2) One-shot search methods, e.g., DARTS [29], ASNG [1] and NASP [57], have recently become the most popular approach and can efficiently find good architectures. Different candidates are approximately evaluated in a supernet with parameter sharing (PS). However, PS is not always reliable, especially in complex search spaces like the macro space of CNNs and RNNs [5, 36, 41].

3 The Proposed Method

As discussed in Section 2.1, the relational path is informative in representing knowledge in KGs. Since a path is a sequence of triplets with varied length, it is intuitive to use a Recurrent Neural Network (RNN), which is known to have universal approximation ability [39], to model the path as in language models [46]. While RNNs can capture long-term information across steps [32, 9], they will overlook domain-specific properties, such as the semantics inside each triplet, without a customized architecture [18]. Besides, what kind of information should be leveraged varies across tasks [52]. Finally, as proved in [15], RNNs have a better generalization guarantee than GCNs. Thus, to design a proper KG model, we define path modeling as a NAS problem for RNNs here.

3.1 Designing a Recurrent Search Space

To start with, we first define a general recurrent function (the Interstellar) on the relational path.


Definition 1 (Interstellar). An Interstellar processes the embeddings of s_1, r_1 to s_L, r_L recurrently. In each recurrent step t, the Interstellar combines the embeddings of s_t, r_t and the preceding information h_{t-1} to get an output v_t. The Interstellar is formulated as a recurrent function

[v_t, h_t] = f(s_t, r_t, h_{t-1}),   ∀ t = 1 ... L,     (1)

where h_t is the recurrent hidden state and h_0 = s_1. The output v_t is used to predict the object entity o_t.

In each step t, we focus on one triplet (s_t, r_t, o_t). To design the search space A, we need to figure out what the important properties in (1) are. Since s_t and r_t are indexed from different embedding sets, we introduce two operators to distinguish them, i.e. O_s for s_t and O_r for r_t, as in Figure 2. Another operator O_v is used to model the output v_t in (1). Then, the hidden state h_t is defined to propagate information across triplets. Taking Chains [11] as an example, O_s is an adding combinator for h_{t-1} and s_t, O_r is an RNN cell that combines O_s and r_t, and O_v directly outputs O_r as v_t.
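To make the roles of O_s, O_r and O_v concrete, the following minimal PyTorch sketch (our illustration, not the paper's implementation) hard-wires them to the Chains choices described above; the class name and the use of a GRU cell are assumptions.

```python
import torch
import torch.nn as nn

# One recurrent step of Eq. (1): [v_t, h_t] = f(s_t, r_t, h_{t-1}),
# with the searchable operators fixed to the Chains [11] example:
# O_s adds h_{t-1} and s_t, O_r is an RNN cell over (r_t, O_s),
# and O_v outputs O_r directly.
class ChainsStep(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.cell = nn.GRUCell(d, d)   # a concrete choice for "cell"

    def forward(self, s_t, r_t, h_prev):
        o_s = h_prev + s_t             # O_s: combinator "+"
        h_t = self.cell(r_t, o_s)      # O_r: RNN cell over (r_t, O_s)
        v_t = h_t                      # O_v: identity output
        return v_t, h_t

# Unroll over a path of length L with h_0 = s_1, as in Definition 1.
d, L, m = 32, 3, 4
step = ChainsStep(d)
s = [torch.randn(m, d) for _ in range(L)]
r = [torch.randn(m, d) for _ in range(L)]
h = s[0]
for t in range(L):
    v, h = step(s[t], r[t], h)         # v is used to predict the object o_t
```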


Figure 2: Search space A of f for (1).

Table 2: The split of the search space of f in Figure 2 into macro-level α1 and micro-level α2.

macro-level (α1 ∈ A1):  connections: h_{t-1}, O_s, 0, s_t;  combinators: +, ∘, ⊗, gated
micro-level (α2 ∈ A2):  activations: identity, tanh, sigmoid;  weight matrices: {W_i}_{i=1}^{6}, I

To control the information flow, we search the connections from the input vectors to the outputs, i.e. the dashed lines in Figure 2. Then, the combinators, which combine two vectors into one, are important since they determine how the embeddings are transformed, e.g. "+" in TransE [7] and "∘" in DistMult [55]. As in the search space of RNNs [63], we introduce the activation functions tanh and sigmoid to give non-linear squashing. Each link is either a trainable weight matrix W or an identity matrix I to adjust the vectors. Detailed information is listed in Table 2.
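For illustration only, one candidate architecture α = [α1, α2] in this space can be written down as a pair of choices over Table 2. The dictionary layout below is a hypothetical encoding; the field names and the exact wiring are our assumptions, not the paper's format.

```python
# Hypothetical encoding of one architecture α = [α1, α2] from Table 2.
# α1 fixes the macro structure (which inputs connect to each operator and
# which combinator it uses); α2 fixes the micro details (activation and
# whether each link is a trainable weight matrix W_i or the identity I).
alpha_macro = {                      # α1 ∈ A1
    "O_s": {"inputs": ("h_prev", "s_t"), "combinator": "+"},
    "O_r": {"inputs": ("O_s", "r_t"),    "combinator": "gated"},
    "O_v": {"inputs": ("O_r", "0"),      "combinator": "+"},
}
alpha_micro = {                      # α2 ∈ A2
    "activation": "tanh",            # identity / tanh / sigmoid
    "links": {"s_t": "W1", "r_t": "W2", "h_prev": "I",
              "O_s": "W3", "O_r": "W4", "O_v": "I"},
}
alpha = (alpha_macro, alpha_micro)
```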

Let the training and validation sets be G_tra and G_val, M be the performance measurement on G_val and L be the loss on G_tra. To meet different requirements on f, we propose to search the architecture α of f as a special RNN. The network here is not a general RNN but one specific to the KG embedding tasks. The problem is defined as finding an architecture α such that the validation performance is maximized, i.e.,

α* = argmax_{α ∈ A} M(f(F*; α), G_val),   s.t.   F* = argmin_F L(f(F; α), G_tra),     (2)

which is a bi-level optimization problem and is non-trivial to solve. First, the computation cost to get F* is generally high. Second, searching for α ∈ A is a discrete optimization problem [29] and the space is large (see Appendix A.1). Thus, how to efficiently search the architectures is a big challenge.

Compared with standard RNNs, which recurrently model each input vector, the Interstellar models the relational path with triplets as the basic unit. In this way, we can determine how to model the short-term information inside each triplet and what long-term information should be passed along the triplets. This makes our search space distinctive for KG embedding problems.

3.2 Proposed Search Algorithm

In Section 3.1, we have introduced the search space A, which contains a considerable number of different architectures. Therefore, how to search efficiently in A is an important problem. Designing an appropriate optimization algorithm for the discrete architecture parameters is a big challenge.

3.2.1 Problems with Existing Algorithms

As introduced in Section 2.2, we can either choose the stand-alone approach or the one-shot approach to search the network architecture. In order to search efficiently, search algorithms should be designed for specific scenarios [63, 29, 1]. When the search space is complex, e.g. the macro space of CNNs and the tree-structured RNN cells [63, 36], the stand-alone approach is preferred since it can provide accurate evaluation feedback while the PS scheme is not reliable [5, 41]. However, the computation cost of evaluating an architecture under the stand-alone approach is high, preventing us from efficiently searching in large spaces.

One-shot search algorithms are widely used in searching micro spaces, e.g. cell structures in CNNs and simplified DAGs in RNN cells [1, 36, 29]. Searching architectures on a supernet with PS in such simplified spaces is feasible. However, since the search space in our problem is more complex and the embedding status influences the evaluation, PS is not reliable (see Appendix B.2). Therefore, one-shot algorithms are not appropriate for our problem.

3.2.2 Hybrid-search Algorithm

Even though the two types of algorithms have their limitations, is it possible to take advantage of both of them? Looking back at the development from stand-alone search in NASNet [63] to one-shot search in ENAS [36], the search space was simplified from a macro space to micro cells. We are thus motivated to split the space A into a macro part A1 and a micro part A2 as in Table 2. α1 ∈ A1 controls the connections and combinators, which strongly influence the information flow; and α2 ∈ A2 fine-tunes the architecture through activations and weight matrices. Besides, we empirically observe that PS for α2 is reliable (see Appendix B.2). We therefore propose a hybrid-search method, Algorithm 1, that can be both fast and accurate. Specifically, α1 and α2 are sampled from a controller c (a distribution [1, 54] or a neural network [63]). The evaluation feedback for α1 and α2 is obtained in the stand-alone manner and the one-shot manner respectively. After the search procedure, the best architecture is sampled from c and we fine-tune the hyper-parameters to achieve better performance.

Algorithm 1 Proposed search recurrent architecture as the Interstellar algorithm.
Require: search space A ≡ A1 ∪ A2 in Figure 2, controller c for sampling α = [α2, α1].
1: repeat
2:   sample the micro-level architecture α2 ∈ A2 by c;
3:   update the controller c for k1 steps using Algorithm 2 (in stand-alone manner);
4:   sample the macro-level architecture α1 ∈ A1 by c;
5:   update the controller c for k2 steps using Algorithm 3 (in one-shot manner);
6: until termination
7: Fine-tune the hyper-parameters for the best architecture α* = [α1*, α2*] sampled from c.
8: return α* and the fine-tuned hyper-parameters.

In the macro-level stage (Algorithm 2), once we get the architecture α, the parameters are obtained by full model training. This ensures that the evaluation feedback for macro architectures α1 is reliable. In the one-shot stage (Algorithm 3), the main difference is that the parameters F are not re-initialized and different architectures are evaluated with the same set of parameters F, i.e. by PS. This improves the efficiency of evaluating micro architectures α2 without full model training.

Algorithm 2 Macro-level (α1) update
Require: controller c, α2 ∈ A2, parameters F.
1: sample an individual α1 by c to get the architecture α = [α1, α2];
2: initialize F and train to obtain F* until convergence by minimizing L(f(F, α), G_tra);
3: evaluate M(f(F*, α), G_val) to update c.
4: return the updated controller c.

Algorithm 3 Micro-level (α2) update
Require: controller c, α1 ∈ A1, parameters F.
1: sample an individual α2 by c to get the architecture α = [α1, α2];
2: sample a mini-batch B_tra from G_tra and update F with the gradient ∇_F L(f(F, α), B_tra);
3: sample a mini-batch B_val from G_val and evaluate M(f(F, α), B_val) to update c.
4: return the updated controller c.
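The sketch below illustrates the control flow of Algorithm 1 only; it is not the released implementation. To keep it runnable and short, the policy-gradient controller is replaced by a naive random-sampling/hill-climbing controller, and evaluate_standalone / evaluate_oneshot are placeholders for Algorithm 2 (full training) and Algorithm 3 (parameter-sharing evaluation) respectively.

```python
import random

MACRO_CHOICES = ["+", "gated", "⊗", "∘"]         # α1: combinators (Table 2)
MICRO_CHOICES = ["identity", "tanh", "sigmoid"]   # α2: activations (Table 2)

def evaluate_standalone(a1, a2):
    # Placeholder for "train from scratch, then measure on G_val" (Algorithm 2).
    return random.random()

def evaluate_oneshot(a1, a2, shared_params):
    # Placeholder for "one gradient step on shared parameters, then measure" (Algorithm 3).
    return random.random()

def hybrid_search(iters=10, k1=3, k2=5):
    best = {"a1": random.choice(MACRO_CHOICES),
            "a2": random.choice(MICRO_CHOICES), "score": -1.0}
    shared_params = {}                            # stands in for the supernet parameters F
    for _ in range(iters):
        a2 = best["a2"]                           # step 2: fix a micro-level choice
        for _ in range(k1):                       # step 3: macro-level, stand-alone feedback
            a1 = random.choice(MACRO_CHOICES)
            score = evaluate_standalone(a1, a2)
            if score > best["score"]:
                best = {"a1": a1, "a2": a2, "score": score}
        a1 = best["a1"]                           # step 4: fix a macro-level choice
        for _ in range(k2):                       # step 5: micro-level, one-shot feedback
            a2 = random.choice(MICRO_CHOICES)
            score = evaluate_oneshot(a1, a2, shared_params)
            if score > best["score"]:
                best = {"a1": a1, "a2": a2, "score": score}
    return best                                   # step 7 (fine-tuning) is omitted here

print(hybrid_search())
```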

The remaining problem is how to model and update the controller, i.e. step 3 in Algorithm 2 and step 3 in Algorithm 3. The evaluation metrics in KG tasks are usually ranking-based, e.g. Hit@k, and are non-differentiable. Instead of using a direct gradient, we turn to derivative-free optimization methods [10], such as reinforcement learning (policy gradient in [63] and Q-learning in [4]) or Bayes optimization [6]. Inspired by the success of policy gradient in searching CNNs and RNNs [63, 36], we use policy gradient to optimize the controller c.

Following [1, 16, 54], we use stochastic relaxation for the architectures α. Specifically, a parametric probability distribution α ∼ p_θ(α) is introduced on the search space α ∈ A and then the distribution parameter θ is optimized to maximize the expectation of the validation performance

max_θ J(θ) = max_θ E_{α ∼ p_θ(α)} [M(f(F*; α), G_val)].     (3)

Then, the optimization target is transformed from (2) with the discrete α into (3) with the continuous θ. In this way, θ is updated by θ_{t+1} = θ_t + ρ ∇_θ J(θ_t), where ρ is the step-size and ∇_θ J(θ) = E[M(f(F; α), G_val) ∇_θ ln(p_{θ_t}(α))] is the pseudo gradient. The key observation is that we do not need to take a direct gradient w.r.t. M, which is not available. Instead, we only need the validation performance measured by M.

To further improve the efficiency, we use the natural policy gradient (NPG) [34] in place of ∇_θ J(θ_t), i.e., the update direction becomes [H(θ_t)]^{-1} ∇_θ J(θ_t), which is computed by multiplying the gradient with the inverse of the Fisher information matrix H(θ_t) [2]. NPG has been shown to have better convergence speed [1, 2, 34] (see Appendix A.2).
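As a minimal illustration of the stochastic-relaxation update around Eq. (3), the sketch below runs a plain REINFORCE-style update on the logits of a toy one-decision categorical p_θ(α); the natural-gradient (Fisher) correction used in the paper is omitted, and the reward function is only a stand-in for M(f(F; α), G_val).

```python
import numpy as np

rng = np.random.default_rng(0)
choices = ["+", "gated", "⊗", "∘"]           # a toy one-decision search space
theta = np.zeros(len(choices))                # distribution parameters θ

def p_theta(theta):
    e = np.exp(theta - theta.max())
    return e / e.sum()

def reward(idx):
    # Placeholder for the validation metric M of the sampled architecture.
    return rng.random() + (0.5 if choices[idx] == "gated" else 0.0)

rho = 0.1                                     # step size ρ
for step in range(200):
    probs = p_theta(theta)
    idx = rng.choice(len(choices), p=probs)   # α ~ p_θ(α)
    M = reward(idx)
    # ∇_θ ln p_θ(α) for a softmax categorical: one-hot(α) - probs
    grad_log_p = -probs
    grad_log_p[idx] += 1.0
    theta += rho * M * grad_log_p             # θ_{t+1} = θ_t + ρ M ∇_θ ln p_θ(α)

print("final p_θ:", dict(zip(choices, p_theta(theta).round(3))))
```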

4 Experiments

4.1 Experiment Setup

Following [17, 18, 19], we sample the relational paths from biased random walks (details in Appendix A.3). We use two basic KG tasks, i.e. entity alignment and link prediction. As in the literature [7, 18, 52, 62], we use the "filtered" ranking metrics: mean reciprocal rank (MRR) and Hit@k (k = 1, 10). Experiments are written in Python with the PyTorch framework [35] and run on a single 2080Ti GPU. Statistics of the datasets used in this paper are in Appendix A.4. Training details for each task are given in Appendix A.5. Besides, all of the searched models are shown in Appendix C due to space limitations.
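For intuition, here is a hedged sketch of path sampling by random walks over triplets; the paper's sampler is biased following [17, 18] with details in its Appendix A.3, whereas this simplified stand-in walks uniformly.

```python
import random
from collections import defaultdict

def sample_paths(triplets, length=3, num_paths=5, seed=0):
    rnd = random.Random(seed)
    out_edges = defaultdict(list)            # subject -> list of (relation, object)
    for s, r, o in triplets:
        out_edges[s].append((r, o))
    starts = list(out_edges)                 # entities with at least one out-edge
    paths = []
    for _ in range(num_paths):
        s = rnd.choice(starts)
        path = []
        for _ in range(length):
            edges = out_edges.get(s)
            if not edges:                    # dead end: stop the walk early
                break
            r, o = rnd.choice(edges)
            path.append((s, r, o))           # head-to-tail: next subject is o
            s = o
        paths.append(path)
    return paths

# Toy KG built from the entities of Figure 1.
kg = [("Murphy", "Daughter", "Cooper"), ("Cooper", "Friend", "Dr. Brand"),
      ("Dr. Brand", "Parent", "Amelia"), ("Amelia", "Profession", "Spaceman")]
print(sample_paths(kg, length=3, num_paths=2))
```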

4.2 Understanding the Search Space

Here, we illustrate the designed search space A in Section 3.1 using the Countries [8] dataset, which contains 271 countries and regions, and 2 relations, neighbor and locatedin. This dataset contains three tasks: S1 infers neighbor ∧ locatedin → locatedin or locatedin ∧ locatedin → locatedin; S2 requires inferring the 2-hop relation neighbor ∧ locatedin → locatedin; S3 is harder and requires modeling the 3-hop relation neighbor ∧ locatedin ∧ locatedin → locatedin.

To understand how the various tasks depend on path length, we extract four subspaces from A (in Figure 2) with different connections. Specifically, (P1) represents the single-hop embedding models; (P2) processes 2 successive steps; (P3) models long relational paths without intermediate entities; and (P4) includes both entities and relations along the path. Then we search f in P1-P4 respectively to see how well these subspaces can tackle the tasks S1-S3.

Figure 3: Four subspaces with different connection components in the recurrent search space. (a) P1. (b) P2. (c) P3. (d) P4.

For each task, we randomly generate 100 models from each subspace and record the model with the best area under the precision-recall curve (AUC-PR) on the validation set. This procedure is repeated 5 times to evaluate the testing performance in Table 3. We can see that no single subspace performs well on all tasks. For the easy tasks S1 and S2, short-term information is more important, and incorporating entities along the path as in P4 hurts the learning of 2-hop relationships. For the harder task S3, P3 and P4 outperform the others since they can model long-term information over more steps.

Table 3: Performance on the Countries dataset.

              S1             S2             S3
P1            0.998±0.001    0.997±0.002    0.933±0.031
P2            1.000±0.000    0.999±0.001    0.952±0.023
P3            0.992±0.001    1.000±0.000    0.961±0.016
P4            0.977±0.028    0.984±0.010    0.964±0.015
Interstellar  1.000±0.000    1.000±0.000    0.968±0.007

Besides, we evaluate the best model searched in the whole space A for S1-S3. As shown in the last line of Table 3, the model searched by Interstellar achieves good performance on the hard task S3. Interstellar prefers different candidates (see Appendix C.2) for S1-S3 over the same search space. This verifies our analysis that it is difficult for a unified model to adapt to the short-term and long-term information required by different KG tasks.

4.3 Comparison with State-of-the-art KG Embedding Methods

Entity Alignment. The entity alignment task aims to align entities in different KGs that refer to the same instance. In this task, long-term information is important since we need to propagate the alignment across triplets [18, 45, 62]. We use four cross-lingual and cross-database subsets from DBpedia and Wikidata generated by [18], i.e. DBP-WD, DBP-YG, EN-FR, EN-DE. For a fair comparison, we follow the same path sampling scheme and dataset splits as in [18].

Table 4: Performance comparison on the entity alignment task. H@k is short for Hit@k. The results of TransD [21], BootEA [45], IPTransE [62], GCN-Align [53] and RSN [18] are copied from [18].

                             DBP-WD               DBP-YG               EN-FR                EN-DE
type     models         H@1   H@10  MRR      H@1   H@10  MRR      H@1   H@10  MRR      H@1   H@10  MRR
triplet  TransE         18.5  42.1  0.27     9.2   24.8  0.15     16.2  39.0  0.24     20.7  44.7  0.29
         TransD*        27.7  57.2  0.37     17.3  41.6  0.26     21.1  47.9  0.30     24.4  50.0  0.33
         BootEA*        32.3  63.1  0.42     31.3  62.5  0.42     31.3  62.9  0.42     44.2  70.1  0.53
GCN      GCN-Align      17.7  37.8  0.25     19.3  41.5  0.27     15.5  34.5  0.22     25.3  46.4  0.22
         VR-GCN         19.4  55.5  0.32     20.9  55.7  0.32     16.0  50.8  0.27     24.4  61.2  0.36
         R-GCN          8.6   31.4  0.16     13.3  42.4  0.23     7.3   31.2  0.15     18.4  44.8  0.27
path     PTransE        16.7  40.2  0.25     7.4   14.7  0.10     7.3   19.7  0.12     27.0  51.8  0.35
         IPTransE*      23.1  51.7  0.33     22.7  50.0  0.32     25.5  55.7  0.36     31.3  59.2  0.41
         Chains         32.2  60.0  0.42     35.3  64.0  0.45     31.4  60.1  0.41     41.3  68.9  0.51
         RSN*           38.8  65.7  0.49     40.0  67.5  0.50     34.7  63.1  0.44     48.7  72.0  0.57
         Interstellar   40.7  71.2  0.51     40.2  72.0  0.51     35.5  67.9  0.46     50.1  75.6  0.59

Table 4 compares the testing performance of the models searched by Interstellar and the human-designed ones on the Normal version of the datasets [18] (the Dense version [18] is in Appendix B.1). In general, the path-based models are better than the GCN-based and triplet-based models through modeling long-term dependencies. BootEA [45] and IPTransE [62] win over TransE [7] and PTransE [27] respectively by iteratively aligning discovered entity pairs. Chains [11] and RSN [18] outperform the graph-based models and the other path-based models by explicitly processing both entities and relations along the path. In comparison, Interstellar is able to search and balance the short-term and long-term information adaptively, and thus gains the best performance. We plot the learning curves on DBP-WD of some triplet-, graph- and path-based models in Figure 4(a) to verify the effectiveness of the relational paths.

Link Prediction. In this task, an incomplete KG is given and the target is to predict the missing entities in unknown links [52]. We use three famous benchmark datasets: WN18-RR [13] and FB15k-237 [48], which are more realistic than their supersets WN18 and FB15k [7], and YAGO3-10 [31], a much larger dataset.

Table 5: Link prediction results.

                    WN18-RR              FB15k-237            YAGO3-10
models         H@1   H@10  MRR      H@1   H@10  MRR      H@1   H@10  MRR
TransE         12.5  44.5  0.18     17.3  37.9  0.24     10.3  27.9  0.16
ComplEx        41.4  49.0  0.44     22.7  49.5  0.31     40.5  62.8  0.48
RotatE*        43.6  54.2  0.47     23.3  50.4  0.32     40.2  63.1  0.48
R-GCN          -     -     -        15.1  41.7  0.24     -     -     -
PTransE        27.2  46.4  0.34     20.3  45.1  0.29     12.2  32.3  0.19
RSN            38.0  44.8  0.40     19.2  41.8  0.27     16.4  37.3  0.24
Interstellar   43.8  54.6  0.48     23.3  50.8  0.32     42.4  66.4  0.51

We search architectures with dimension 64 to save time and compare the models with dimension 256. Results of R-GCN on WN18-RR and YAGO3-10 are not available due to out-of-memory issues. As shown in Table 5, PTransE outperforms TransE by modeling compositional relations, but is worse than ComplEx and RotatE since the adding operation is inferior to ⊗ when modeling the interaction between entities and relations [49]. RSN is worse than ComplEx/RotatE since it pays more attention to long-term information than to the semantics inside triplets. Interstellar outperforms the path-based methods PTransE and RSN by searching architectures that model fewer steps (see Appendix C.4), and it works comparably with the triplet-based models, i.e. ComplEx and RotatE, which are specially designed for this task. We show the learning curves of TransE, ComplEx, RSN and Interstellar on WN18-RR in Figure 4(b).


4.4 Comparison with Existing NAS Algorithms

In this part, we compare the proposed hybrid-search algorithm in Interstellar with other NAS methods. First, we compare the proposed algorithm with stand-alone NAS methods. Once an architecture is sampled, the parameters are initialized and trained until convergence to give reliable feedback. Random search (Random), reinforcement learning (Reinforce) [63] and Bayes optimization (Bayes) [6] are chosen as the baseline algorithms. As shown in Figure 5 (entity alignment on DBP-WD and link prediction on WN18-RR), the hybrid algorithm in Interstellar is more efficient since it takes advantage of the one-shot approach to search the micro architectures in A2.

Figure 4: Single model learning curves. (a) Entity alignment. (b) Link prediction.

Figure 5: Comparison with NAS methods. (a) Entity alignment. (b) Link prediction.

Then, we compare with one-shot NAS methods that use PS on the entire space. DARTS [29] and ASNG [1] are chosen as the baseline models. For DARTS, the gradient is obtained from the loss on the training set since the validation metric is not differentiable. We show the performance of the best architecture found by the two one-shot algorithms. As shown by the dashed lines, the architectures found by the one-shot approach are much worse than those found by the stand-alone approaches. The reason is that PS is not reliable in our complex recurrent space (more experiments in Appendix B.2). In comparison, Interstellar is able to search reliably and efficiently by taking advantage of both the stand-alone approach and the one-shot approach.

4.5 Searching Time Analysis

We show the clock time of Interstellar on the entity alignment and link prediction tasks in Table 6. Since the datasets used in the entity alignment task have similar scales (see Appendix A.4), we show them in the same column. For each task/dataset, we show the computation cost of the macro-level and micro-level updates in Interstellar for 50 iterations of steps 2-5 in Algorithm 1 (20 for the YAGO3-10 dataset), and of the fine-tuning procedure after searching over 50 groups of hyper-parameters, i.e. learning rate, decay rate, dropout rate, L2 penalty and batch size (details in Appendix A.5). As shown, the entity alignment tasks take about 15-25 hours, while the link prediction tasks need about one or more days due to the larger data sizes. The cost of the search process is on the same scale as the fine-tuning time, which shows that the search process is not expensive.

Table 6: Comparison of searching and fine-tuning time (in hours) in Algorithm 1.

                                      entity alignment            link prediction
procedure                             Normal      Dense           WN18-RR     FB15k-237    YAGO3-10
search    macro-level (line 2-3)      9.9±1.5     14.9±0.3        11.7±1.9    23.2±3.4     91.6±8.7
          micro-level (line 4-5)      4.2±0.2     7.5±0.6         6.3±0.9     5.6±0.4      10.4±1.3
fine-tune (line 7)                    11.6±1.6    16.2±2.1        44.3±2.3    67.6±4.5     > 200

5 Conclusion

In this paper, we propose a new NAS method, Interstellar, to search RNNs for learning from relational paths, which contain short-term and long-term information in KGs. By designing a specific search space based on the important properties of relational paths, Interstellar can adaptively search promising architectures for different KG tasks. Furthermore, we propose a hybrid-search algorithm that is more efficient compared with other state-of-the-art NAS algorithms. The experimental results verify the effectiveness and efficiency of Interstellar on various KG embedding benchmarks. In future work, we can combine Interstellar with AutoSF [61] to further improve embedding learning. Taking advantage of data similarity to improve search efficiency on new datasets is another direction for extension.


Broader impact

Most of the attention on KG embedding learning has been focused on triplet-based models. In this work, we emphasize the benefits and importance of using relational paths to learn from KGs, and we propose the Interstellar as a recurrent neural architecture search problem over the paths. This is the first work applying neural architecture search (NAS) methods to KG tasks.

In order to search efficiently, we propose a novel hybrid-search algorithm. This algorithm addresses the limitations of both stand-alone and one-shot search methods. More importantly, the hybrid-search algorithm is not specific to the problem here; it can also be applied to other domains with more complex search spaces [63, 47, 42].

One limitation of this work is that Interstellar is currently limited to KG embedding tasks. Extending it to reasoning tasks like DRUM [38] is an interesting direction.

Acknowledgment

This work is partially supported by National Key Research and Development Program of China Grant No. 2018AAA0101100, the Hong Kong RGC GRF Project 16202218, CRF Project C6030-18G, C1031-18G, C5026-18G, AOE Project AoE/E-603/18, China NSFC No. 61729201, Guangdong Basic and Applied Basic Research Foundation 2019B151530001, Hong Kong ITC ITF grants ITS/044/18FX and ITS/470/18FX. Lei Chen is partially supported by Microsoft Research Asia Collaborative Research Grant, Didi-HKUST joint research lab project, and Wechat and Webank Research Grants.

References

[1] Y. Akimoto, S. Shirakawa, N. Yoshinari, K. Uchida, S. Saito, and K. Nishida. Adaptive stochastic natural gradient method for one-shot neural architecture search. In ICML, pages 171–180, 2019.

[2] S. Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998.

[3] S. Auer, C. Bizer, G. Kobilarov, J. Lehmann, R. Cyganiak, and Z. Ives. DBpedia: A nucleus for a web of open data. In Semantic Web, pages 722–735, 2007.

[4] B. Baker, O. Gupta, N. Naik, and R. Raskar. Designing neural network architectures using reinforcement learning. In ICLR, 2017.

[5] G. Bender, P. Kindermans, B. Zoph, V. Vasudevan, and Q. Le. Understanding and simplifying one-shot architecture search. In ICML, pages 549–558, 2018.

[6] J. S. Bergstra, R. Bardenet, Y. Bengio, and B. Kégl. Algorithms for hyper-parameter optimization. In NeurIPS, pages 2546–2554, 2011.

[7] A. Bordes, N. Usunier, A. Garcia-Duran, J. Weston, and O. Yakhnenko. Translating embeddings for modeling multi-relational data. In NeurIPS, pages 2787–2795, 2013.

[8] G. Bouchard, S. Singh, and T. Trouillon. On approximate reasoning capabilities of low-rank vector spaces. In AAAI Spring Symposium Series, 2015.

[9] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio. Gated feedback recurrent neural networks. In ICML, pages 2067–2075, 2015.

[10] A. Conn, K. Scheinberg, and L. Vicente. Introduction to Derivative-Free Optimization, volume 8. SIAM, 2009.

[11] R. Das, A. Neelakantan, D. Belanger, and A. McCallum. Chains of reasoning over entities, relations, and text using recurrent neural networks. In ACL, pages 132–141, 2017.

[12] B. Dasgupta and E. Sontag. Sample complexity for learning recurrent perceptron mappings. In NIPS, pages 204–210, 1996.

[13] T. Dettmers, P. Minervini, P. Stenetorp, and S. Riedel. Convolutional 2D knowledge graph embeddings. In AAAI, 2017.


[14] T. Elsken, J. H. Metzen, and F. Hutter. Neural architecture search: A survey. JMLR, 20(55):1–21, 2019.

[15] V. Garg, S. Jegelka, and T. Jaakkola. Generalization and representational limits of graph neural networks. In ICML, 2020.

[16] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. TPAMI, (6):721–741, 1984.

[17] A. Grover and J. Leskovec. Node2vec: Scalable feature learning for networks. In SIGKDD, pages 855–864. ACM, 2016.

[18] L. Guo, Z. Sun, and W. Hu. Learning to exploit long-term relational dependencies in knowledge graphs. In ICML, 2019.

[19] K. Guu, J. Miller, and P. Liang. Traversing knowledge graphs in vector space. In EMNLP, pages 318–327, 2015.

[20] F. Hutter, L. Kotthoff, and J. Vanschoren. Automated Machine Learning: Methods, Systems, Challenges. Springer, 2018.

[21] G. Ji, S. He, L. Xu, K. Liu, and J. Zhao. Knowledge graph embedding via dynamic mapping matrix. In ACL, volume 1, pages 687–696, 2015.

[22] K. Kandasamy, W. Neiswanger, J. Schneider, B. Poczos, and E. Xing. Neural architecture search with Bayesian optimisation and optimal transport. In NeurIPS, pages 2016–2025, 2018.

[23] D. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2014.

[24] T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2016.

[25] T. Lacroix, N. Usunier, and G. Obozinski. Canonical tensor decomposition for knowledge base completion. In ICML, 2018.

[26] N. Lao, T. Mitchell, and W. Cohen. Random walk inference and learning in a large scale knowledge base. In EMNLP, pages 529–539, 2011.

[27] Y. Lin, Z. Liu, H. Luan, M. Sun, S. Rao, and S. Liu. Modeling relation paths for representation learning of knowledge bases. Technical report, arXiv:1506.00379, 2015.

[28] C. Liu, L. Li, X. Yao, and L. Tang. A survey of recommendation algorithms based on knowledge graph embedding. In CSEI, pages 168–171, 2019.

[29] H. Liu, K. Simonyan, and Y. Yang. DARTS: Differentiable architecture search. In ICLR, 2019.

[30] D. Lukovnikov, A. Fischer, J. Lehmann, and S. Auer. Neural network-based question answering over knowledge graphs on word and character level. In WWW, pages 1211–1220, 2017.

[31] F. Mahdisoltani, J. Biega, and F. M. Suchanek. YAGO3: A knowledge base from multilingual Wikipedias. In CIDR, 2013.

[32] T. Mikolov, M. Karafiát, L. Burget, J. Cernocky, and S. Khudanpur. Recurrent neural network based language model. In InterSpeech, 2010.

[33] Y. Ollivier, L. Arnold, A. Auger, and N. Hansen. Information-geometric optimization algorithms: A unifying picture via invariance principles. JMLR, 18(1):564–628, 2017.

[34] R. Pascanu and Y. Bengio. Revisiting natural gradient for deep networks. Technical report, arXiv:1301.3584, 2013.

[35] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in PyTorch. In ICLR, 2017.

[36] H. Pham, M. Guan, B. Zoph, Q. Le, and J. Dean. Efficient neural architecture search via parameters sharing. In ICML, pages 4095–4104, 2018.

[37] H. Ren, W. Hu, and J. Leskovec. Query2box: Reasoning over knowledge graphs in vector space using box embeddings. In ICLR, 2020.

[38] A. Sadeghian, M. Armandpour, P. Ding, and D. Wang. DRUM: End-to-end differentiable rule mining on knowledge graphs. In NeurIPS, pages 15321–15331, 2019.

[39] A. Schäfer and H. Zimmermann. Recurrent neural networks are universal approximators. In ICANN, pages 632–640, 2006.


[40] M. Schlichtkrull, T. Kipf, P. Bloem, R. Van Den Berg, I. Titov, and M. Welling. Modeling relational data with graph convolutional networks. In ESWC, pages 593–607, 2018.

[41] C. Sciuto, K. Yu, M. Jaggi, C. Musat, and M. Salzmann. Evaluating the search phase of neural architecture search. In ICLR, 2020.

[42] D. So, Q. Le, and C. Liang. The evolved transformer. In ICML, pages 5877–5886, 2019.

[43] R. Socher, D. Chen, C. Manning, and A. Ng. Reasoning with neural tensor networks for knowledge base completion. In NeurIPS, pages 926–934, 2013.

[44] Z. Sun, Z. Deng, J. Nie, and J. Tang. RotatE: Knowledge graph embedding by relational rotation in complex space. In ICLR, 2019.

[45] Z. Sun, W. Hu, Q. Zhang, and Y. Qu. Bootstrapping entity alignment with knowledge graph embedding. In IJCAI, pages 4396–4402, 2018.

[46] M. Sundermeyer, R. Schlüter, and H. Ney. LSTM neural networks for language modeling. In CISCA, 2012.

[47] M. Tan and Q. Le. EfficientNet: Rethinking model scaling for convolutional neural networks. In ICML, pages 6105–6114, 2019.

[48] K. Toutanova and D. Chen. Observed versus latent features for knowledge base and text inference. In Workshop on CVSMC, pages 57–66, 2015.

[49] T. Trouillon, C. Dance, E. Gaussier, J. Welbl, S. Riedel, and G. Bouchard. Knowledge graph completion via complex tensor factorization. JMLR, 18(1):4735–4772, 2017.

[50] S. Vashishth, S. Sanyal, V. Nitin, and P. Talukdar. Composition-based multi-relational graph convolutional networks. In ICLR, 2020.

[51] H. Wang, H. Ren, and J. Leskovec. Entity context and relational paths for knowledge graph completion. Technical report, arXiv:1810.13306, 2020.

[52] Q. Wang, Z. Mao, B. Wang, and L. Guo. Knowledge graph embedding: A survey of approaches and applications. TKDE, 29(12):2724–2743, 2017.

[53] Z. Wang, Q. Lv, X. Lan, and Y. Zhang. Cross-lingual knowledge graph alignment via graph convolutional networks. In EMNLP, pages 349–357, 2018.

[54] S. Xie, H. Zheng, C. Liu, and L. Lin. SNAS: Stochastic neural architecture search. In ICLR, 2019.

[55] B. Yang, W. Yih, X. He, J. Gao, and L. Deng. Embedding entities and relations for learning and inference in knowledge bases. In ICLR, 2015.

[56] Q. Yao and M. Wang. Taking human out of learning applications: A survey on automated machine learning. Technical report, arXiv:1810.13306, 2018.

[57] Q. Yao, J. Xu, W. Tu, and Z. Zhu. Efficient neural architecture search via proximal iterations. In AAAI, 2020.

[58] R. Ye, X. Li, Y. Fang, H. Zang, and M. Wang. A vectorized relational graph convolutional network for multi-relational network alignment. In IJCAI, pages 4135–4141, 2019.

[59] A. Zela, T. Elsken, T. Saikia, Y. Marrakchi, T. Brox, and F. Hutter. Understanding and robustifying differentiable architecture search. In ICLR, 2020.

[60] C. Zhang, Y. Li, N. Du, W. Fan, and P. Yu. On the generative discovery of structured medical knowledge. In KDD, pages 2720–2728, 2018.

[61] Y. Zhang, Q. Yao, W. Dai, and L. Chen. AutoSF: Searching scoring functions for knowledge graph embedding. In ICDE, pages 433–444, 2020.

[62] H. Zhu, R. Xie, Z. Liu, and M. Sun. Iterative entity alignment via joint knowledge embeddings. In IJCAI, pages 4258–4264, 2017.

[63] B. Zoph and Q. Le. Neural architecture search with reinforcement learning. In ICLR, 2017.
