Transcript
Page 2: Visual-Semantic Embeddings: some thoughts on Language

Language Understanding

2

Page 3: Visual-Semantic Embeddings: some thoughts on Language

Can we understand Language ?

1. Language is ambiguous: every sentence has many possible interpretations.

2. Language is productive: we will always encounter new words or new constructions.

3. Language is culturally specific

Some of the challenges in Language Understanding:

3

Page 4: Visual-Semantic Embeddings: some thoughts on Language

Can we understand Language ?

1. Language is ambiguous: every sentence has many possible interpretations.

2. Language is productive: we will always encounter new words or new constructions.

• plays well with others
  VB ADV P NN
  NN NN P DT

• fruit flies like a banana
  NN NN VB DT NN
  NN VB P DT NN
  NN NN P DT NN
  NN VB VB DT NN

• the students went to class
  DT NN VB P NN

4

Some of the challenges in Language Understanding:
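As an aside, a statistical tagger resolves this ambiguity by committing to a single tag sequence. A minimal sketch with NLTK (not part of the original slides; it assumes the standard nltk package and its pretrained Penn Treebank tagger are installed and downloaded):

```python
# Minimal sketch (assumption: nltk is installed and the tagger resource is downloaded).
# A statistical POS tagger commits to ONE of the many possible tag sequences.
import nltk

nltk.download("averaged_perceptron_tagger", quiet=True)

for sentence in ["plays well with others",
                 "fruit flies like a banana",
                 "the students went to class"]:
    tokens = sentence.split()
    print(sentence, "->", nltk.pos_tag(tokens))
# e.g. "fruit flies like a banana" may come back tagged as NN NNS VBP/IN DT NN,
# i.e. only one of the readings listed above.
```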

Page 5: Visual-Semantic Embeddings: some thoughts on Language

Can we understand Language ?

1. Language is ambiguous: every sentence has many possible interpretations.

2. Language is productive: we will always encounter new words or new constructions.

5

Some of the challenges in Language Understanding:

Page 6: Visual-Semantic Embeddings: some thoughts on Language

[Karlgren 2014, NLP Sthlm Meetup]

6

Page 7: Visual-Semantic Embeddings: some thoughts on Language

Can we understand Language ?

1. Language is ambiguous: every sentence has many possible interpretations.

2. Language is productive: we will always encounter new words or new constructions.

3. Language is culturally specific

Some of the challenges in Language Understanding:

7

Page 8: Visual-Semantic Embeddings: some thoughts on Language

ML: Traditional Approach

1. Gather as much LABELED data as you can get

2. Throw some algorithms at it (often just an SVM, and leave it at that)

3. If you actually tried more algorithms: pick the best

4. Spend hours hand-engineering features / feature selection / dimensionality reduction (PCA, SVD, etc.)

5. Repeat…

For each new problem/question:

8

Page 9: Visual-Semantic Embeddings: some thoughts on Language

Machine Learning for NLP

Classic Approach: Data is fed into a learning algorithm:

[Diagram: Data → Learning Algorithm]

9

Page 10: Visual-Semantic Embeddings: some thoughts on Language

Machine Learning for NLP

some of the (many) treebank datasets

source: http://www-nlp.stanford.edu/links/statnlp.html#Treebanks


10

Page 11: Visual-Semantic Embeddings: some thoughts on Language

Penn Treebank
That’s a lot of “manual” work:

11

Page 12: Visual-Semantic Embeddings: some thoughts on Language

• the students went to class

DT NN VB P NN

• plays well with others

VB ADV P NN

NN NN P DT

• fruit flies like a banana

NN NN VB DT NN

NN VB P DT NN

NN NN P DT NN

NN VB VB DT NN

With a lot of issues:

Penn Treebank

12

Page 13: Visual-Semantic Embeddings: some thoughts on Language

Machine Learning for NLP

[Diagram: Data → “Features” → Learning Algorithm → Prediction/Classifier, with train set and test set]

13

Page 14: Visual-Semantic Embeddings: some thoughts on Language

Machine Learning for NLP

[Diagram: “Features” → Learning Algorithm → Prediction/Classifier, with train set and test set]

14

Page 15: Visual-Semantic Embeddings: some thoughts on Language

Deep Learning: Why for NLP ?

Beat state of the art at:
• Language Modeling (Mikolov et al. 2011) [WSJ AR task]
• Speech Recognition (Dahl et al. 2012; Seide et al. 2011; following Mohamed et al. 2011)
• Sentiment Classification (Socher et al. 2011)

and it has long been known to work well for Computer Vision, of course:
• MNIST hand-written digit recognition (Ciresan et al. 2010)
• Image Recognition (Krizhevsky et al. 2012) [ImageNet]

15

Page 16: Visual-Semantic Embeddings: some thoughts on Language

One Model rules them all ?
DL approaches have been successfully applied to:

Deep Learning: Why for NLP ?

Automatic summarization
Coreference resolution
Discourse analysis

Machine translation
Morphological segmentation
Named entity recognition (NER)

Natural language generation

Natural language understanding

Optical character recognition (OCR)

Part-of-speech tagging

Parsing

Question answering

Relationship extraction

sentence boundary disambiguation

Sentiment analysis

Speech recognition

Speech segmentation

Topic segmentation and recognition

Word segmentation

Word sense disambiguation

Information retrieval (IR)

Information extraction (IE)

Speech processing

16

Page 17: Visual-Semantic Embeddings: some thoughts on Language

• What is the meaning of a word? (Lexical semantics)

• What is the meaning of a sentence? ([Compositional] semantics)

• What is the meaning of a longer piece of text? (Discourse semantics)

Semantics: Meaning

17

Page 18: Visual-Semantic Embeddings: some thoughts on Language

• NLP (at least in rule-based/statistical approaches) mainly treats words as atomic symbols:

• or in vector space:

• also known as “one hot” representation.

• Its problem ?

Word Representation

Love Candy Store

[0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 …]

Candy [0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 …] AND Store [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 …] = 0 !

18
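A quick sketch of that problem (my own illustration, not from the slides): every pair of distinct one-hot vectors is orthogonal, so their dot product carries no similarity information at all.

```python
import numpy as np

vocab = {"love": 0, "candy": 5, "store": 15}   # toy vocabulary indices (assumed)
V = 20                                         # toy vocabulary size (assumed)

def one_hot(word):
    v = np.zeros(V)
    v[vocab[word]] = 1.0
    return v

candy, store = one_hot("candy"), one_hot("store")
print(candy @ store)             # 0.0 -> "candy" AND "store" share nothing
print(candy @ one_hot("candy"))  # 1.0 -> a word is only similar to itself
```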

Page 19: Visual-Semantic Embeddings: some thoughts on Language

• Structure corresponds to meaning:

Structure and Meaning

19

Page 20: Visual-Semantic Embeddings: some thoughts on Language

• Semantics

• Syntax

20

NLP: what can we work with?

Page 21: Visual-Semantic Embeddings: some thoughts on Language

• Language models define probability distributions over (natural language) strings or sentences

• Joint and Conditional Probability

Language Model

21
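To make the joint/conditional-probability point concrete, here is a toy bigram language model sketch (my own assumption-laden illustration, not a model from the slides): the joint probability of a sentence is factored with the chain rule and approximated with bigram conditionals estimated by counting.

```python
from collections import defaultdict

corpus = [["the", "students", "went", "to", "class"],
          ["the", "students", "like", "a", "banana"]]   # toy corpus (assumed)

unigram = defaultdict(int)
bigram = defaultdict(int)
for sent in corpus:
    toks = ["<s>"] + sent + ["</s>"]
    for w in toks:
        unigram[w] += 1
    for w1, w2 in zip(toks, toks[1:]):
        bigram[(w1, w2)] += 1

def p_cond(w2, w1):
    """P(w2 | w1) by maximum likelihood (no smoothing in this sketch)."""
    return bigram[(w1, w2)] / unigram[w1] if unigram[w1] else 0.0

def p_sentence(sent):
    """P(w1..wn) = prod_i P(w_i | w_{i-1})  (chain rule + bigram assumption)."""
    toks = ["<s>"] + sent + ["</s>"]
    p = 1.0
    for w1, w2 in zip(toks, toks[1:]):
        p *= p_cond(w2, w1)
    return p

print(p_sentence(["the", "students", "went", "to", "class"]))
```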

Page 22: Visual-Semantic Embeddings: some thoughts on Language

• Language models define probability distributions over (natural language) strings or sentences

Language Model

22

Page 23: Visual-Semantic Embeddings: some thoughts on Language

• Language models define probability distributions over (natural language) strings or sentences

Language Model

23

Page 24: Visual-Semantic Embeddings: some thoughts on Language

Word senses

What is the meaning of words?

• Most words have many different senses: dog = animal or sausage?

How are the meanings of different words related?

• Specific relations between senses: Animal is more general than dog.

• Semantic fields: money is related to bank

24

Page 25: Visual-Semantic Embeddings: some thoughts on Language

Word senses

Polysemy:

• A lexeme is polysemous if it has different related senses

• bank = financial institution or building

Homonyms:

• Two lexemes are homonyms if their senses are unrelated, but they happen to have the same spelling and pronunciation

• bank = (financial) bank or (river) bank

25

Page 26: Visual-Semantic Embeddings: some thoughts on Language

Word senses: relations

Symmetric relations:

• Synonyms: couch/sofa
Two lemmas with the same sense

• Antonyms: cold/hot, rise/fall, in/out
Two lemmas with the opposite sense

Hierarchical relations:

• Hypernyms and hyponyms: pet/dog The hyponym (dog) is more specific than the hypernym (pet)

• Holonyms and meronyms: car/wheel
The meronym (wheel) is a part of the holonym (car)

26

Page 27: Visual-Semantic Embeddings: some thoughts on Language

Distributional representations

“You shall know a word by the company it keeps” (J. R. Firth 1957)

One of the most successful ideas of modern statistical NLP!

these words represent banking

• Hard (class based) clustering models

• Soft clustering models

27

Page 28: Visual-Semantic Embeddings: some thoughts on Language

Distributional hypothesis

He filled the wampimuk, passed it around and we all drunk some

We found a little, hairy wampimuk sleeping behind the tree

(McDonald & Ramscar 2001)

28

Page 29: Visual-Semantic Embeddings: some thoughts on Language

Distributional semantics

Landauer and Dumais (1997), Turney and Pantel (2010), …

29

Page 30: Visual-Semantic Embeddings: some thoughts on Language

Distributional semantics
Distributional meaning as co-occurrence vector:

30
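A sketch of distributional meaning as a co-occurrence vector (my illustration; the window size and toy corpus are arbitrary assumptions): each word is represented by how often other words appear within a small context window around it.

```python
from collections import Counter, defaultdict

# Toy corpus (assumed), echoing the wampimuk example from the previous slides.
corpus = ("he filled the wampimuk passed it around and we all drunk some "
          "we found a little hairy wampimuk sleeping behind the tree").split()
window = 2
cooc = defaultdict(Counter)

for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            cooc[w][corpus[j]] += 1

# The "company" wampimuk keeps = its distributional representation.
print(cooc["wampimuk"])
```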

Page 31: Visual-Semantic Embeddings: some thoughts on Language

Distributional representations

• Taking it further:

• Continuous word embeddings

• Combine vector space semantics with the prediction of probabilistic models

• Words are represented as a dense vector:

Candy =

31

Page 32: Visual-Semantic Embeddings: some thoughts on Language

Word Embeddings: Socher
Vector Space Model

adapted from Bengio, “Representation Learning and Deep Learning”, July 2012, UCLA

In a perfect world:

32

Page 33: Visual-Semantic Embeddings: some thoughts on Language

Word Embeddings: Socher
Vector Space Model

adapted from Bengio, “Representation Learning and Deep Learning”, July 2012, UCLA

In a perfect world:

the country of my birth
the place where I was born

33

Page 34: Visual-Semantic Embeddings: some thoughts on Language

• Can theoretically (given enough units) approximate “any” function

• and fit to “any” kind of data

• Efficient for NLP: hidden layers can be used as word lookup tables

• Dense distributed word vectors + efficient NN training algorithms:

• Can scale to billions of words !

Why Neural Networks for NLP?

34
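The "hidden layers can be used as word lookup tables" point amounts to indexing rows of an embedding matrix. A minimal sketch (the dimensions and vocabulary indices are arbitrary assumptions):

```python
import numpy as np

V, d = 10000, 100                    # vocabulary size and embedding dimension (assumed)
E = np.random.randn(V, d) * 0.01     # the "lookup table": one dense row per word

word_to_id = {"candy": 42, "store": 7}   # toy vocabulary indices (assumed)

def embed(words):
    """A hidden layer that is just a table lookup: word ids -> dense vectors."""
    ids = [word_to_id[w] for w in words]
    return E[ids]                    # shape (len(words), d)

print(embed(["candy", "store"]).shape)   # (2, 100)
```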

Page 35: Visual-Semantic Embeddings: some thoughts on Language

• Representation of words as continuous vectors has a long history (Hinton et al. 1986; Rumelhart et al. 1986; Elman 1990)

• First neural network language model: NNLM (Bengio et al. 2001; Bengio et al. 2003) based on earlier ideas of distributed representations for symbols (Hinton 1986)

How?

35

Page 36: Visual-Semantic Embeddings: some thoughts on Language

Word Embeddings: Socher
Vector Space Model

Figure (edited) from Bengio, “Representation Learning and Deep Learning”, July, 2012, UCLA

In a perfect world:

the country of my birth
the place where I was born ?

36

Page 37: Visual-Semantic Embeddings: some thoughts on Language

Compositionality

Principle of compositionality:

the “meaning (vector) of a complex expression (sentence) is determined by:

- the meanings of its constituent expressions (words) and

- the rules (grammar) used to combine them”

— Gottlob Frege (1848 - 1925)

37

Page 38: Visual-Semantic Embeddings: some thoughts on Language

• How do we handle the compositionality of language in our models?

38

Compositionality

Page 39: Visual-Semantic Embeddings: some thoughts on Language

• How do we handle the compositionality of language in our models?

• Recursion: the same operator (same parameters) is applied repeatedly on different components (a minimal sketch follows below)

39

Compositionality
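A minimal sketch of that idea (assumed dimensions and random parameters; this is the generic recursive composition p = tanh(W[c1; c2] + b), not Socher's exact model): one W and b are reused at every node of the tree.

```python
import numpy as np

d = 50                                       # embedding dimension (assumed)
rng = np.random.default_rng(0)
W = rng.standard_normal((d, 2 * d)) * 0.01   # one operator, shared at every node
b = np.zeros(d)

def compose(c1, c2):
    """Parent vector from two children: p = tanh(W [c1; c2] + b)."""
    return np.tanh(W @ np.concatenate([c1, c2]) + b)

# toy word vectors (assumed) for "the students went to class"
the, students, went, to, cls = (rng.standard_normal(d) for _ in range(5))

np_ = compose(the, students)        # NP  = (the students)
pp  = compose(to, cls)              # PP  = (to class)
vp  = compose(went, pp)             # VP  = (went (to class))
s   = compose(np_, vp)              # S   = ((the students) (went (to class)))
print(s.shape)                      # (50,) - same space as the word vectors
```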

Page 40: Visual-Semantic Embeddings: some thoughts on Language

• How do we handle the compositionality of language in our models?

• Option 1: Recurrent Neural Networks (RNN)

40

RNN 1: Recurrent Neural Networks

Page 41: Visual-Semantic Embeddings: some thoughts on Language

• How do we handle the compositionality of language in our models?

• Option 2: Recursive Neural Networks (also sometimes called RNN)

41

RNN 2: Recursive Neural Networks

Page 42: Visual-Semantic Embeddings: some thoughts on Language

• achieved SOTA in 2011 on Language Modeling (WSJ AR task) (Mikolov et al., INTERSPEECH 2011):

• and again at ASRU 2011:

42

Recurrent Neural Networks

“Comparison to other LMs shows that RNN LMs are state of the art by a large margin. Improvements increase with more training data.”

“[ RNN LM trained on a] single core on 400M words in a few days, with 1% absolute improvement in WER on state of the art setup”

Mikolov, T., Karafiat, M., Burget, L., Cernocky, J.H., Khudanpur, S. (2011) Recurrent neural network based language model

Page 43: Visual-Semantic Embeddings: some thoughts on Language

43

Recurrent Neural Networks

[Diagram: simple recurrent neural network for LM: input → hidden layer(s) → output layer, with sigmoid activations and a softmax output]

Mikolov, T., Karafiat, M., Burget, L., Cernocky, J.H., Khudanpur, S. (2011) Recurrent neural network based language model
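A sketch of the forward pass of such a simple (Elman-style) recurrent LM, with sigmoid hidden units and a softmax output (toy sizes and random weights are my assumptions; training by backpropagation through time, shown on the next slides, is omitted):

```python
import numpy as np

V, H = 1000, 64                          # vocabulary and hidden sizes (assumed)
rng = np.random.default_rng(0)
U  = rng.standard_normal((H, V)) * 0.01  # input (one-hot word) -> hidden
Wh = rng.standard_normal((H, H)) * 0.01  # previous hidden -> hidden
Wo = rng.standard_normal((V, H)) * 0.01  # hidden -> output scores

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def step(word_id, h_prev):
    """One time step: h_t = sigmoid(U x_t + Wh h_{t-1}), y_t = softmax(Wo h_t)."""
    h = sigmoid(U[:, word_id] + Wh @ h_prev)
    y = softmax(Wo @ h)                  # distribution over the next word
    return h, y

h = np.zeros(H)
for w in [3, 17, 42]:                    # toy word ids (assumed)
    h, y = step(w, h)
print(y.shape, y.sum())                  # (1000,) 1.0
```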

Page 44: Visual-Semantic Embeddings: some thoughts on Language

44

Recurrent Neural Networks

backpropagation through time

Page 45: Visual-Semantic Embeddings: some thoughts on Language

45

Recurrent Neural Networks

backpropagation through time

class based recurrent NN

[code (Mikolov’s RNNLM Toolkit) and more info: http://rnnlm.org/ ]

Page 46: Visual-Semantic Embeddings: some thoughts on Language

• Recursive Neural Network for LM (Socher et al. 2011; Socher 2014)

• achieved SOTA on the new Stanford Sentiment Treebank dataset (compared against many other models):

Recursive Neural Network

46

Socher, R., Perelygin, A., Wu, J.Y., Chuang, J., Manning, C.D., Ng, A.Y., Potts, C. (2013) Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank

info & code: http://nlp.stanford.edu/sentiment/

Page 47: Visual-Semantic Embeddings: some thoughts on Language

Recursive Neural Tensor Network

47

code & info: http://www.socher.org/index.php/Main/ParsingNaturalScenesAndNaturalLanguageWithRecursiveNeuralNetworks

Socher, R., Liu, C.C., Ng, A.Y., Manning, C.D. (2011) Parsing Natural Scenes and Natural Language with Recursive Neural Networks

Page 48: Visual-Semantic Embeddings: some thoughts on Language

Recursive Neural Tensor Network

48

Page 49: Visual-Semantic Embeddings: some thoughts on Language

• RNN (Socher et al. 2011a)

Recursive Neural Network

49

Socher, R., Perelygin, A., Wu, J.Y., Chuang, J., Manning, C.D., Ng, A.Y., Potts, C. (2013) Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank

info & code: http://nlp.stanford.edu/sentiment/

Page 50: Visual-Semantic Embeddings: some thoughts on Language

• RNN (Socher et al. 2011a)

• Matrix-Vector RNN (MV-RNN) (Socher et al., 2012)

Recursive Neural Network

50

Socher, R., Perelygin, A., Wu, J.Y., Chuang, J., Manning, C.D., Ng, A.Y., Potts, C. (2013) Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank

info & code: http://nlp.stanford.edu/sentiment/

Page 51: Visual-Semantic Embeddings: some thoughts on Language

• RNN (Socher et al. 2011a)

• Matrix-Vector RNN (MV-RNN) (Socher et al., 2012)

• Recursive Neural Tensor Network (RNTN) (Socher et al. 2013)

Recursive Neural Network

51

Page 52: Visual-Semantic Embeddings: some thoughts on Language

• negation detection:

Recursive Neural Network

52

Socher, R., Perelygin, A., Wu, J.Y., Chuang, J., Manning, C.D., Ng, A.Y., Potts, C. (2013) Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank

info & code: http://nlp.stanford.edu/sentiment/

Page 53: Visual-Semantic Embeddings: some thoughts on Language

[Figure: parse tree with phrase nodes NP, PP/IN, NP over the tag sequence DT NN PRP$ NN]

Parse Tree

Recurrent NN for Vector Space

53

Page 54: Visual-Semantic Embeddings: some thoughts on Language

[Figure: parse tree with phrase nodes NP, PP/IN, NP over the tag sequences DT NN PRP$ NN / IN DT NN PRP NN]

Parse Tree

Compositionality

54

Recurrent NN: Compositionality
Recurrent NN for Vector Space

Page 55: Visual-Semantic Embeddings: some thoughts on Language

[Figure: parse tree with nodes NP, IN, NP over DT NN and PRP NN]

Parse Tree

Compositionality

55

Recurrent NN: Compositionality
Recurrent NN for Vector Space

Page 56: Visual-Semantic Embeddings: some thoughts on Language

[Figure: parse tree combining DT NN, IN, PRP NN into NP, PP, up to NP (S / ROOT); the tree supplies the “rules”, the node vectors the “meanings”]

Compositionality

56

Recurrent NN: Compositionality
Recurrent NN for Vector Space

Page 57: Visual-Semantic Embeddings: some thoughts on Language

Vector Space + Word Embeddings: Socher

57

Recurrent NN: Compositionality
Recurrent NN for Vector Space

Page 58: Visual-Semantic Embeddings: some thoughts on Language

Vector Space + Word Embeddings: Socher

58

Recurrent NN for Vector Space

Page 59: Visual-Semantic Embeddings: some thoughts on Language

Word Embeddings: Turian (2010)

Turian, J., Ratinov, L., Bengio, Y. (2010). Word representations: A simple and general method for semi-supervised learning

code & info: http://metaoptimize.com/projects/wordreprs/

59

Page 60: Visual-Semantic Embeddings: some thoughts on Language

Word Embeddings: Turian (2010)

Turian, J., Ratinov, L., Bengio, Y. (2010). Word representations: A simple and general method for semi-supervised learning

code & info: http://metaoptimize.com/projects/wordreprs/

60

Page 61: Visual-Semantic Embeddings: some thoughts on Language

Word Embeddings: Collobert & Weston (2011)

Collobert, R., Weston, J., Bottou, L., Karlen, M., Kavukcuoglu, K., Kuksa, P. (2011) . Natural Language Processing (almost) from Scratch

61

Page 62: Visual-Semantic Embeddings: some thoughts on Language

Multi-embeddings: Stanford (2012)

Eric H. Huang, Richard Socher, Christopher D. Manning, Andrew Y. Ng (2012) Improving Word Representations via Global Context and Multiple Word Prototypes

62

Page 63: Visual-Semantic Embeddings: some thoughts on Language

Linguistic Regularities: Mikolov (2013)

Mikolov, T., Yih, W., Zweig, G. (2013). Linguistic Regularities in Continuous Space Word Representations

code & info: https://code.google.com/p/word2vec/

63
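To try these regularities yourself, here is a minimal sketch using the gensim Word2Vec implementation listed later in the deck (gensim 4.x API; the tiny corpus is a placeholder, and real analogies need a large corpus or pretrained vectors):

```python
from gensim.models import Word2Vec

# Placeholder corpus (assumed): in practice, train on a large corpus
# or load pretrained vectors.
sentences = [["king", "queen", "man", "woman", "royal", "throne"],
             ["paris", "france", "rome", "italy", "capital"]] * 100

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1, epochs=20)

# The famous "king - man + woman ~= queen" style query:
print(model.wv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```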

Page 64: Visual-Semantic Embeddings: some thoughts on Language

Word Embeddings for MT: Mikolov (2013)

Mikolov, T., Le, Q.V., Sutskever, I. (2013). Exploiting Similarities among Languages for Machine Translation

64

Page 65: Visual-Semantic Embeddings: some thoughts on Language

Recursive Deep Models & Sentiment: Socher (2013)

Socher, R., Perelygin, A., Wu, J., Chuang, J., Manning, C., Ng, A., Potts, C. (2013) Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank.

code & demo: http://nlp.stanford.edu/sentiment/index.html

65

Page 66: Visual-Semantic Embeddings: some thoughts on Language

Paragraph Vectors: Le & Mikolov (2014)

Le, Q., Mikolov, T. (2014) Distributed Representations of Sentences and Documents

66

• add context (sentence, paragraph, document) to word vectors during training
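A minimal sketch of this idea with gensim's Doc2Vec (paragraph vectors) implementation (gensim 4.x API; the documents and parameters are placeholders, not the paper's setup):

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

docs = ["the movie was wonderful and moving",
        "a dull and lifeless film",
        "great acting and a clever plot"]          # placeholder documents (assumed)

tagged = [TaggedDocument(words=d.split(), tags=[i]) for i, d in enumerate(docs)]

# Each document gets its own trainable vector, learned jointly with the word vectors.
model = Doc2Vec(tagged, vector_size=50, window=3, min_count=1, epochs=40)

vec = model.infer_vector("a wonderful clever film".split())  # vector for unseen text
print(model.dv.most_similar([vec], topn=2))
```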


Results on Stanford Sentiment Treebank dataset:

Page 67: Visual-Semantic Embeddings: some thoughts on Language

Global Vectors, GloVe: Stanford (2014)

Pennington, J., Socher, R., Manning, C.D. (2014). GloVe: Global Vectors for Word Representation

code & demo: http://nlp.stanford.edu/projects/glove/

[Chart: comparative results on the word analogy task: “similar accuracy”]

67

Page 68: Visual-Semantic Embeddings: some thoughts on Language

Dependency-based Embeddings: Levy & Goldberg (2014)

Levy, O., Goldberg, Y. (2014). Dependency-Based Word Embeddings

code & demo: https://levyomer.wordpress.com/2014/04/25/dependency-based-word-embeddings/

- Syntactic Dependency Context

Australian scientist discovers star with telescope

- Bag of Words (BoW) Context

[Plot: precision vs. recall, comparing dependency-based and bag-of-words contexts]

“Dependency-based embeddings have more functional similarities”

68

Page 69: Visual-Semantic Embeddings: some thoughts on Language

Joint Image-Word Embeddings

69

Page 70: Visual-Semantic Embeddings: some thoughts on Language

1. Multimodal representation learning

2. Generating descriptions of images

3. Ranking images and captions (“image-sentence ranking”)

4. Evaluation Methods?

Some Current Approaches

70

Page 71: Visual-Semantic Embeddings: some thoughts on Language

• Learning joint image-word embeddings in a low-dimensional embedding space (Weston et al. 2010; Frome et al. 2013)

• Embedding images and sentences into a common space (Socher et al. TACL 2014; Karpathy et al. NIPS 2014)

• also: deep Boltzmann machines (Srivastava & Salakhutdinov NIPS 2012), log-bilinear neural language models (Kiros et al. ICML 2014), autoencoders (Ngiam et al. ICML 2011), recurrent neural networks (m-RNN: Mao et al. 2014) and topic models (Jia et al. ICCV 2011)

1. Multimodal representation learning

71
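A sketch of the core idea behind these joint embeddings (my own simplified illustration, not any specific paper's code): project image features and sentence vectors into a shared space and train with a pairwise ranking (hinge) loss so that matching pairs score higher than mismatched ones.

```python
import numpy as np

d_img, d_txt, d_joint = 4096, 300, 256      # assumed feature sizes
rng = np.random.default_rng(0)
W_img = rng.standard_normal((d_joint, d_img)) * 0.01
W_txt = rng.standard_normal((d_joint, d_txt)) * 0.01

def norm(v):
    return v / (np.linalg.norm(v) + 1e-8)

def score(img_feat, txt_feat):
    """Cosine similarity of the two projections in the joint space."""
    return norm(W_img @ img_feat) @ norm(W_txt @ txt_feat)

def ranking_loss(img, pos_txt, neg_txt, margin=0.2):
    """Hinge loss: the matching caption should beat a random one by a margin."""
    return max(0.0, margin - score(img, pos_txt) + score(img, neg_txt))

img = rng.standard_normal(d_img)            # e.g. CNN features (assumed)
cap, other = rng.standard_normal(d_txt), rng.standard_normal(d_txt)
print(ranking_loss(img, cap, other))
```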

Page 72: Visual-Semantic Embeddings: some thoughts on Language

• Neural network methods

• Composition-based methods.

• Template-based methods.

2. Generating descriptions of images

72

Page 73: Visual-Semantic Embeddings: some thoughts on Language

• Bi-directional approaches (Hockenmaier group): kernel CCA (Hodosh et al. JAIR 2013), normalized CCA (Gong et al. ECCV 2014)

• Dependency tree recursive networks (Stanford)(Socher et al. TACL 2014)

3. Ranking images and captions

73

Page 74: Visual-Semantic Embeddings: some thoughts on Language

• Current evaluation methods are unreliable and don't match human judgements

• BLEU

• ROUGE

• Perplexity??

• Ranking as proxy for generation / scoring function

4. Evaluation?

74
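For reference, this is roughly how a BLEU score is computed with NLTK (a minimal sketch only; the deck's point is precisely that such scores correlate poorly with human judgements):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Toy reference caption(s) and candidate caption (assumed).
reference = [["a", "man", "rides", "a", "horse", "on", "the", "beach"]]
candidate = ["a", "person", "is", "riding", "a", "horse"]

smooth = SmoothingFunction().method1   # avoid zero scores on short sentences
print(sentence_bleu(reference, candidate, smoothing_function=smooth))
```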

Page 75: Visual-Semantic Embeddings: some thoughts on Language

• unsupervised pre-training on many images

• in parallel train a neural network Language Model

• train a linear mapping between image representations and word embeddings, representing the different “classes” (see the sketch below)

75

Zero-shot Learning
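A sketch of that third step under my own simplified assumptions: fit a least-squares linear map from image representations to the word embeddings of their class labels, then classify a new image, possibly of an unseen class, by the nearest class word vector.

```python
import numpy as np

rng = np.random.default_rng(0)
d_img, d_word = 512, 100                         # assumed dimensions

# Toy data: image features plus the word embedding of each training image's class.
X = rng.standard_normal((200, d_img))            # image representations
label_vecs = rng.standard_normal((5, d_word))    # embeddings of 5 seen classes
y = rng.integers(0, 5, size=200)
Y = label_vecs[y]                                # target word vectors

# Linear map image -> word space (least squares).
M, *_ = np.linalg.lstsq(X, Y, rcond=None)

def predict(img_feat, class_vecs):
    """Nearest class word-vector in embedding space (can include unseen classes)."""
    proj = img_feat @ M
    sims = class_vecs @ proj / (np.linalg.norm(class_vecs, axis=1) * np.linalg.norm(proj))
    return int(np.argmax(sims))

all_classes = np.vstack([label_vecs, rng.standard_normal((3, d_word))])  # 3 unseen classes
print(predict(rng.standard_normal(d_img), all_classes))
```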

Page 76: Visual-Semantic Embeddings: some thoughts on Language

DeViSE model (Frome et al. 2013)

• skip-gram text model trained on a Wikipedia corpus of 5.7 million documents (5.4 billion words): approach from Mikolov et al. (ICLR 2013)

76

Frome, A., Corrado, G.S., Shlens, J., Bengio, S., Dean, J., Mikolov, T., Ranzato, M.A. (2013) DeViSE: A Deep Visual-Semantic Embedding Model

Page 77: Visual-Semantic Embeddings: some thoughts on Language

Encoder: a deep convolutional network (CNN) and a long short-term memory recurrent network (LSTM) for learning a joint image-sentence embedding.

Decoder: a new neural language model that combines structure and content vectors for generating words one at a time in sequence.

Encoder-Decoder pipeline (Kiros et al 2014)

77

Kiros, R., Salakhutdinov, R., Zemel, R.S. (2014) Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models

Page 78: Visual-Semantic Embeddings: some thoughts on Language

Kiros, R., Salakhutdinov, R., Zemel, R.S. (2014) Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models

• matches state-of-the-art performance on Flickr8K and Flickr30K without using object detections

• new best results when using the 19-layer Oxford convolutional network.

• linear encoders: learned embedding space captures multimodal regularities (e.g. *image of a blue car* - "blue" + "red" is near images of red cars)

Encoder-Decoder pipeline (Kiros et al 2014)

78

Page 79: Visual-Semantic Embeddings: some thoughts on Language

demo time (code adapted from: https://github.com/karpathy/neuraltalk)

79

Page 80: Visual-Semantic Embeddings: some thoughts on Language

• Theano - CPU/GPU symbolic expression compiler in python (from LISA lab at University of Montreal). http://deeplearning.net/software/theano/

• Pylearn2 - library designed to make machine learning research easy. http://deeplearning.net/software/pylearn2/

• Torch - Matlab-like environment for state-of-the-art machine learning algorithms in Lua (from Ronan Collobert, Clement Farabet and Koray Kavukcuoglu) http://torch.ch/

• more info: http://deeplearning.net/software_links/


Wanna Play ? General Deep Learning

80
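A minimal Theano example of the "symbolic expression compiler" idea mentioned above (assumes Theano is installed; this is just a hello-world sketch, not deep learning):

```python
import numpy as np
import theano
import theano.tensor as T

# Declare symbolic variables, build an expression graph, then compile it.
x = T.dvector("x")                   # symbolic input vector
W = theano.shared(np.eye(2), name="W")   # shared (trainable) parameter
y = T.tanh(T.dot(W, x))              # symbolic expression

f = theano.function([x], y)          # compiled function (CPU or GPU)
print(f([0.5, -0.5]))
```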

Page 81: Visual-Semantic Embeddings: some thoughts on Language

• RNNLM (Mikolov) http://rnnlm.org

• NB-SVM https://github.com/mesnilgr/nbsvm

• Word2Vec (skip-gram/CBOW) https://code.google.com/p/word2vec/ (original), http://radimrehurek.com/gensim/models/word2vec.html (Python)

• GloVe http://nlp.stanford.edu/projects/glove/ (original), https://github.com/maciejkula/glove-python (Python)

• Socher et al / Stanford RNN Sentiment code: http://nlp.stanford.edu/sentiment/code.html

• Deep Learning without Magic Tutorial: http://nlp.stanford.edu/courses/NAACL2013/

Wanna Play ? NLP

81

Page 82: Visual-Semantic Embeddings: some thoughts on Language

• cuda-convnet2 (Alex Krizhevsky, Toronto) (C++/CUDA, optimized for GTX 580) https://code.google.com/p/cuda-convnet2/

• Caffe (Berkeley) (CUDA/OpenCL, Theano, Python) http://caffe.berkeleyvision.org/

• OverFeat (NYU) http://cilvr.nyu.edu/doku.php?id=code:start

Wanna Play ? Computer Vision

82

Page 83: Visual-Semantic Embeddings: some thoughts on Language

as PhD candidate KTH/CSC: “Always interested in discussing Machine Learning, Deep Architectures, Graphs, and Language Technology”

In touch!

[email protected]/~roelof/

Internship / Entrepreneurship
Academic/Research
as CIO/CTO Feeda:

“Always looking for additions to our brand new R&D team”

[Internships upcoming on KTH exjobb website…]

[email protected]

Feeda

83

Page 84: Visual-Semantic Embeddings: some thoughts on Language

We’re Hiring!

[email protected]

Feeda

• Dev Ops • Software Developers • Data Scientists

84

