
Some Observations on Hindi

Dependency Parsing

Samar Husain

Language Technologies Research Centre,

IIIT-Hyderabad.


Introduction

• Parsing a free word order language with (relatively) rich morphology is a challenging task
– Methods, problems, causes.

• Experiments with Hindi


Hindi: Brief overview

• malaya ne sameer ko kitaba dii.
Malay ERG Sameer DAT book gave
"Malay gave the book to Sameer" (S-IO-DO-V)

Other grammatical orders:
S-DO-IO-V
IO-S-DO-V
IO-DO-S-V
DO-S-IO-V
DO-IO-S-V

Hindi: Brief overview

• Inflections

– Gender, number, person

– Tense, aspect and modality

• Agreement

– Noun-adjective

– Noun-verb


Dependency Grammar

• A formalism for linguistic analysis

– Dependencies between words central to analysis

– Different from phrase structure analysis


• Abhay ate a mango


• Dependency Tree

– Root property

– Spanning property

– Connectedness property

– Single head property

– Acyclicity property

– Arc size property

Kübler et al. (2009)
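The properties above can be turned into a quick validity check over a head-indexed parse. This is a minimal sketch of my own (not from Kübler et al.); the list representation `heads`, with `heads[i]` giving the head of word i+1 and 0 standing for the artificial root, is an illustrative convention.

```python
def is_well_formed(heads):
    """heads[i] = head of word i+1 (1-based word indices, 0 = root)."""
    n = len(heads)
    # Single head property: guaranteed by the list representation itself,
    # since each word stores exactly one head.
    # Root property: exactly one word attaches to the artificial root.
    if sum(1 for h in heads if h == 0) != 1:
        return False
    # Acyclicity + connectedness: following head links from every word
    # must reach the root without revisiting a node.
    for i in range(1, n + 1):
        seen = set()
        node = i
        while node != 0:
            if node in seen:            # cycle detected
                return False
            seen.add(node)
            node = heads[node - 1]
    return True

# "Abhay ate a mango": ate(2) is the root; Abhay(1) and mango(4) depend
# on ate; a(3) depends on mango.
print(is_well_formed([2, 0, 4, 2]))     # True
print(is_well_formed([2, 0, 4, 3]))     # False: a cycle between 3 and 4
```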

Dependency Parsing

• M = (Γ, λ, h)
– A dependency parsing model M comprises a set of constraints Γ that define the space of permissible dependency structures, a set of parameters λ, and a parsing algorithm h
– Γ maps an arbitrary sentence S and dependency type set R to a set of well-formed dependency trees Gs

• Γ = (Σ, R, C)
– where Σ is the set of terminal symbols (here, words), R is the label set, and C is the set of constraints. Such constraints restrict dependencies between words and the possible heads of a word in well-defined ways.

• G = h(Γ, λ, S)
– given a set of constraints Γ, parameters λ, and a new sentence S, how does the system find the most appropriate dependency tree G for that sentence?

Kübler et al. (2009)


• Constraint based
– based on the notion of eliminative parsing, where sentences are analyzed by successively eliminating representations that violate constraints until only valid representations remain

• Data-driven
– the learning problem: learning a parsing model from a representative sample of sentence structures (training data)
– the parsing problem (or inference/decoding problem): applying the learned model to the analysis of a new sentence.
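Eliminative parsing can be illustrated with a toy of my own devising (not the GH-CBP implementation): enumerate candidate head assignments for a short sentence and successively discard those that violate hand-written constraints. The words, POS tags, and constraints here are all illustrative.

```python
from itertools import product

words = ["Abhay", "ate", "a", "mango"]   # words 1..4; head 0 = root
pos   = ["NOUN", "VERB", "DET", "NOUN"]

def violates(heads):
    """heads[i] = head of word i+1. Return True if any constraint fails."""
    # Constraint 1: exactly one word attaches to the root, and it is a verb.
    roots = [i for i, h in enumerate(heads) if h == 0]
    if len(roots) != 1 or pos[roots[0]] != "VERB":
        return True
    # Constraint 2: a determiner's head must be a noun to its right.
    for i, h in enumerate(heads):
        if pos[i] == "DET" and (h <= i + 1 or pos[h - 1] != "NOUN"):
            return True
    # Constraint 3: no word may be its own head.
    return any(h == i + 1 for i, h in enumerate(heads))

# Start from all head assignments, eliminate the violating ones.
candidates = [hs for hs in product(range(5), repeat=4) if not violates(hs)]
print(len(candidates))               # size of the surviving space
print((2, 0, 4, 2) in candidates)    # True: the intended parse survives
```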


Constraint based method

• A Two-Stage Generalized Hybrid Constraint Based Parser (GH-CBP)
– Incorporates some of the notions of CPG
– Uses integer linear programming for constraint satisfaction
– Also incorporates ideas from graph-based parsing and labeling for prioritization

Bharati et al. (2009a, 2009b); Husain (2011)

Quick Illustration

Data driven approaches

• Transition based systems

– MaltParser

• Graph based systems

– MSTParser

MaltParser

• Malt is a classifier-based shift/reduce parser.
• It uses the arc-eager, arc-standard, Covington projective and Covington non-projective algorithms for parsing
• History-based feature models are used for predicting the next parser action
• Support vector machines are used for mapping histories to parser actions

Nivre et al. (2006)
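The arc-eager transition system Malt uses can be sketched as follows (my own illustration, not MaltParser code). In the real parser a classifier picks each action from history-based features; here the action sequence is simply given.

```python
def arc_eager(n_words, actions):
    """Apply arc-eager transitions; words are 1..n_words, 0 = root."""
    stack, buffer, arcs = [0], list(range(1, n_words + 1)), []
    for act in actions:
        if act == "SHIFT":                   # move buffer front onto stack
            stack.append(buffer.pop(0))
        elif act == "LEFT-ARC":              # buffer front heads stack top
            arcs.append((buffer[0], stack.pop()))
        elif act == "RIGHT-ARC":             # stack top heads buffer front,
            arcs.append((stack[-1], buffer[0]))
            stack.append(buffer.pop(0))      # which then moves to the stack
        elif act == "REDUCE":                # pop an already-headed word
            stack.pop()
    return sorted(arcs)                      # (head, dependent) pairs

# "Abhay ate a mango": ate heads Abhay and mango; mango heads a.
print(arc_eager(4, ["SHIFT", "LEFT-ARC", "RIGHT-ARC",
                    "SHIFT", "LEFT-ARC", "RIGHT-ARC"]))
# [(0, 2), (2, 1), (2, 4), (4, 3)]
```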

Quick Illustration

MSTParser

• MST uses the Chu-Liu-Edmonds maximum spanning tree algorithm for non-projective parsing and Eisner's algorithm for projective parsing.
• It uses online large-margin learning as the learning algorithm

McDonald et al. (2005a, 2005b)
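The core idea of graph-based parsing can be sketched as below (my own toy, not MSTParser code): score every possible head→dependent arc, then let each word pick its highest-scoring head. This is only the greedy first phase of Chu-Liu-Edmonds; the cycle-contraction step that repairs any resulting cycles is omitted for brevity, and the score matrix is made up.

```python
def greedy_heads(score):
    """score[h][d] = score of arc h -> d (indices 0..n, 0 = root)."""
    n = len(score) - 1
    heads = {}
    for d in range(1, n + 1):
        # Each dependent greedily takes its best-scoring head.
        heads[d] = max((h for h in range(n + 1) if h != d),
                       key=lambda h: score[h][d])
    return heads

# Toy scores for "Abhay ate a mango" (word 2 = ate).
score = [
    [0, 1, 9, 0, 2],   # arcs out of the root
    [0, 0, 3, 0, 0],   # arcs out of Abhay
    [0, 8, 0, 1, 7],   # arcs out of ate
    [0, 0, 0, 0, 2],   # arcs out of a
    [0, 0, 0, 6, 0],   # arcs out of mango
]
print(greedy_heads(score))   # {1: 2, 2: 0, 3: 4, 4: 2}
```

Here the greedy choice already happens to form a tree; when it does not, Chu-Liu-Edmonds contracts each cycle into a single node and recurses.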

Quick Illustration

Hybrid

• Constraint parser + MSTParser

Husain et al. (2011b)

Quick Illustration

Use of modularity in parsing


Modularity

• Chunk

– Local word groups

– Local dependencies


Modularity

• Clause

– Intra-clausal

– Inter-clausal


Chunk based parsing (I)

• Chunk as hard constraint
• Intra-chunk and inter-chunk dependencies identified separately
– But use intra-chunk features
• Identifying intra-chunk relations is easy

Ambati et al. (2010b)


Chunk based parsing (II)

• Chunk as soft constraint
• Intra-chunk and inter-chunk dependencies identified together
• Use local morphosyntactic features


Clause based parsing (I)

Husain et al., (2009)


Clause based parsing (II)

Husain et al. (2011a)


MaltParser Configuration


Clause based parsing (III)

• Similar to parser stacking
– 'guide' Malt with a 1st-stage parse by Malt.
– The additional features added to the 2nd-stage parser during 2-Soft parsing encode the decisions by the 1st-stage parser concerning potential arcs and labels considered by the 2nd-stage parser,
• in particular, arcs involving the word currently on top of the stack and the word currently at the head of the input buffer.


Experimental setup

• Parsers
– GH-CBP (version 1.6)
– MaltParser (version 1.3.1)
– MSTParser (version 0.4b)

• Data
– ICON10 tools contest
– The training set had 3000 sentences, the development set had 500 sentences, and the test set had 300 sentences


Evaluation metric and accuracies

• CoNLL dependency parsing shared task 2008 (Nivre et al., 2008)
– UAS: Unlabeled attachment accuracy
– LAS: Labeled attachment accuracy
– LA: Label accuracy

• Performance
– Constraint based (coarse-grained tagset; oracle)
• UAS = 88.50
• LAS = 79.12
– Statistical (fine-grained)
• UAS = ~91
• LAS = ~76
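The relationship between the three scores can be sketched in a few lines (mine, not the official CoNLL evaluation script): for each word, UAS compares heads, LA compares labels, and LAS requires both to match. The `k1`/`k2` labels below are only illustrative.

```python
def evaluate(gold, pred):
    """gold/pred: one (head, label) pair per word, in sentence order."""
    n = len(gold)
    uas = sum(g[0] == p[0] for g, p in zip(gold, pred)) / n  # heads match
    las = sum(g == p for g, p in zip(gold, pred)) / n        # both match
    la  = sum(g[1] == p[1] for g, p in zip(gold, pred)) / n  # labels match
    return uas, las, la

gold = [(2, "k1"), (0, "root"), (4, "det"), (2, "k2")]
pred = [(2, "k1"), (0, "root"), (4, "nmod"), (3, "k2")]
print(evaluate(gold, pred))   # (0.75, 0.5, 0.75)
```

By construction LAS can never exceed either UAS or LA, which is why the statistical parsers' high UAS (~91) and much lower LAS (~76) point specifically at labeling errors.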

Remarks: Malt

• Crucial features
– Deprel of the partially built tree
– Conjoined features
• Good for short distance dependencies
• Non-projective algorithm doesn't help
• Arc-eager, LIBSVM

Bharati et al. (2008); Ambati et al. (2010a)

Remarks: MSTParser

• Crucial features
– Conjoined features
• Modified MST
• Difficult to incorporate complex features for labeled parsing
– We use MaxEnt as a labeler
• Good for long distance dependencies and for identifying the root
• Non-projective performs better
• Training k=5, order=2

Bharati et al. (2008); Ambati et al. (2010a)

What helps

• Morphological features
– Root: root form of the word
– Case: oblique/direct case
– Vibhakti: postposition/suffix of the word

• Local morphosyntactic
– Type of the chunk
– Head/non-head of the chunk
– Chunk boundary information
– Distance to the end of the chunk
– Vibhakti computation for the head of the chunk

• Clausal
– Clause boundary information
– Clausal head information

• Minimal semantics
– Human, non-human, inanimate, time, place, abstract, rest

Bharati et al. (2008); Ambati et al. (2009); Ambati et al. (2010a); Ambati et al. (2010b); Gadde et al. (2010)

Relative comparison

• The relative importance of these features over the baseline LAS of MSTParser.

Method → Benefits

• Morphological features
a) Captures the inflectional cues necessary for identifying different relations

• Clausal features
a) Helps in identifying the root of the tree
b) Helps in better handling of long distance relations

• Minimal semantics
a) Captures semantic selectional restrictions (needed precisely when surface cues fail)

• Chunk parsing and local morphosyntax
a) Captures the notion of local word groups
b) Helps in capturing the postposition-TAM mapping
c) Helps in reducing attachment mistakes

• Clausal parsing
a) Better learning of intra-clausal deviant relations
b) Better handling of participles
c) Better handling of long distance relations

What doesn't

• Gender, number, person

Parsing MOR-FWO languages

• Problems in parsing MOR-FWO languages
– Non-configurational nature of these languages
– Inherent limitations in the parsing/learning algorithms
– Small amount of annotated data

Common errors

• Simple sentences

– the correct identification of the argument structure (labels)


Common errors

• Reasons for errors in labels
– word order is not strict
– absence of postpositions
– ambiguous postpositions
– ambiguous TAMs
– inability of the parser to exploit agreement features
– inability to always make simple linguistic generalizations

• Embedded clauses

– Relative clauses

– Participles


[ jo ladakaa vahaan baithaa hai ] vaha meraa bhaaii hai
'which' 'boy' 'there' 'sitting' 'is' 'that' 'my' 'brother' 'is'
'The boy who is sitting there is my brother'

• Missing relative pronoun
• Non-projectivity

raama ne kala khaana khaakara cricket dekhaa
'Ram' Erg. yesterday food having-eaten cricket saw
'Having eaten the food yesterday, Ram watched cricket'

• Argument sharing
• Ambiguous attachment site
• Non-projectivity (intra-clausal)

• Coordination

• Paired connectives


raama aura sitaa ne khaanaa khaayaa
'Ram' 'and' 'Sita' ERG 'food' 'ate'
'Ram and Sita ate food.'

• Can have different incoming arcs depending on what is being coordinated
• Valency not fixed
• Leads to long distance dependencies
• Non-projectivity (intra-clausal)

Complex predicates

• Noun/Adjective + Verb (Verbalizer)
– raam ne shyaam kii madad kii
Ram Erg. Shyam Gen. help do-Past
'Ram helped Shyam.'

• Difficult to identify
• Behavioral diagnostics do not always work across the board
– Only some can be automated

Begum et al. (2011)

Non-verbal heads

• Nouns

– Appositions

• Predicative adjectives


Non-projectivity

• ~14% non-projective arcs
• Many are inter-clausal relations
– Relative co-relative construction
– Extraposed relative clause
– Paired connectives
– …

Mannem et al. (2009)
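Projectivity of an arc can be checked with a short routine (a sketch of my own, not from Mannem et al.): an arc h → d is projective iff every word strictly between h and d is dominated by h. Crossing arcs, as in the relative co-relative constructions above, fail this test. The toy tree below is made up.

```python
def dominated_by(heads, node, ancestor):
    """True if `ancestor` lies on the head path from `node` to the root."""
    while node != 0:
        if node == ancestor:
            return True
        node = heads[node - 1]
    return ancestor == 0

def non_projective_arcs(heads):
    """heads[i] = head of word i+1 (0 = root); return non-projective arcs."""
    bad = []
    for d, h in enumerate(heads, start=1):
        lo, hi = min(h, d), max(h, d)
        # Every word inside the arc's span must be dominated by the head h.
        if any(not dominated_by(heads, k, h) for k in range(lo + 1, hi)):
            bad.append((h, d))
    return bad

# Toy tree for words 1..4 with one crossing arc.
print(non_projective_arcs([3, 4, 0, 3]))   # [(4, 2)]: 4 -> 2 crosses 3 -> 1
```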


Some intra-clausal non-projective structures

[NP: noun chunk, CCP: conjunction chunk, VGF: finite verb chunk, NN[GEN]: noun chunk with a genitive marker, VGNN: verbal noun chunk]

Common Errors: Relations


Causes

• Complex feature patterns
– agreement
• Difficulty in making linguistic generalizations
– Single subject
• Long distance dependencies
• Non-projectivity
– Both inter-clausal and intra-clausal
• Genuine ambiguities
– Participle argument attachment
• Small corpus size
– Training on a ~70k word corpus
– But, for many languages small data size is not a crucial factor in determining parsing performance (Hall et al., 2007)

Possible experiments with tree transformations

• Possible alternative analyses
– Complex predicate structure
• pof, r6-k*
– Dependents of nouns
• Finite clause
– Clausal complements (k2)
– Relative clauses
– Others (k2s, rs)
– Scope of coordination
– Nested k7
– Adjunct attachment
– Non-verbal/conjunction heads

Final remarks

• MaltParser with clausal modularity outperforms the parser that uses only the local morphosyntactic features
• The performance of the linguistically rich MSTParser is better than the one with the clausal feature.
• In UAS, MSTParser with the clausal feature is the highest
• Both MSTParser and MaltParser outperform GH-CBP
– But there is scope for improvement, as the oracle score is high
– More knowledge

• In spite of the positive effects of important features, linguistic modularity, etc., the overall performance of all the parsers is still low.
– particularly true for LAS
• ambiguous post-positions, and
• lack of post-positions
• But also because of the larger tagset.
• Minimal semantics helps
– automatic identification of such semantic tags is not a trivial task
• Knowledge of verbal heads
– (linguistically rich) MSTParser

• Overall, the UAS for all the parsers is high
– shows that most of the language structures are being identified successfully
– Non-projectivity is still a problem for data-driven parsing
– Ambiguous constructions
• More data?
• Data-driven parsers are currently unable to learn many linguistic generalizations
– agreement, single subject constraint, etc.
– Such generalizations are frequent in some complex patterns that exist between verbal heads and their children.
– Some recent work has been able to incorporate this successfully

Thanks!

References

• B. Ambati, S. Husain, J. Nivre and R. Sangal. 2010a. On the Role of Morphosyntactic Features in Hindi Dependency Parsing. In Proceedings of the NAACL-HLT 2010 workshop on Statistical Parsing of Morphologically Rich Languages (SPMRL 2010), Los Angeles, CA.

• B. Ambati, S. Husain, S. Jain, D. M. Sharma and R. Sangal. 2010b. Two methods to incorporate 'local morphosyntactic' features in Hindi dependency parsing. In Proceedings of the NAACL-HLT 2010 workshop on Statistical Parsing of Morphologically Rich Languages (SPMRL 2010), Los Angeles, CA.

• B. Ambati, P. Gade, G.S.K. Chaitanya and S. Husain. 2009. Effect of Minimal Semantics on Dependency Parsing. In RANLP 2009 student paper workshop.

• R. Begum, K. Jindal, A. Jain, S. Husain and D. M. Sharma. 2011. Identification of Conjunct Verbs in Hindi and Its Effect on Parsing Accuracy. In Proceedings of the 12th CICLing, Tokyo, Japan.

• A. Bharati, S. Husain, D. M. Sharma and R. Sangal. 2009a. Two stage constraint based hybrid approach to free word order language dependency parsing. In Proceedings of the 11th International Conference on Parsing Technologies (IWPT), Paris.

• A. Bharati, S. Husain, M. Vijay, K. Deepak, D. M. Sharma and R. Sangal. 2009b. Constraint Based Hybrid Approach to Parsing Indian Languages. In Proceedings of the 23rd Pacific Asia Conference on Language, Information and Computation (PACLIC 23), Hong Kong.

• A. Bharati, S. Husain, B. Ambati, S. Jain, D. M. Sharma and R. Sangal. 2008. Two semantic features make all the difference in Parsing accuracy. In Proceedings of the 6th International Conference on Natural Language Processing (ICON-08), CDAC Pune, India.

• P. Gadde, K. Jindal, S. Husain, D. M. Sharma, and R. Sangal. 2010. Improving Data Driven Dependency Parsing using Clausal Information. In Proceedings of NAACL-HLT 2010, Los Angeles, CA.

• J. Hall, J. Nilsson, J. Nivre, G. Eryigit, B. Megyesi, M. Nilsson and M. Saers. 2007. Single Malt or Blended? A Study in Multilingual Parser Optimization. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007, pp. 933-939.

• S. Husain. 2011. A Generalized Parsing Framework based on Computational Paninian Grammar. PhD Thesis, IIIT-Hyderabad, India.

• S. Husain, P. Gadde, J. Nivre and R. Sangal. 2011a. Clausal parsing helps dependency parsing. In Proceedings of IJCNLP 2011.

• S. Husain, P. Gade, and R. Sangal. 2011b. Linguistically Rich Graph Based Data Driven Parsing. In submission.

• S. Husain, P. Gadde, B. Ambati, D. M. Sharma and R. Sangal. 2009. A modular cascaded approach to complete parsing. In Proceedings of the COLIPS International Conference on Asian Language Processing 2009 (IALP), Singapore.

• S. Kübler, R. McDonald and J. Nivre. 2009. Dependency Parsing. Morgan and Claypool.

• P. Mannem, H. Chaudhry and A. Bharati. 2009. Insights into Non-projectivity in Hindi. In ACL-IJCNLP 2009 student paper workshop.

• R. McDonald, K. Crammer, and F. Pereira. 2005a. Online large-margin training of dependency parsers. In Proceedings of ACL 2005, pp. 91-98.

• R. McDonald, F. Pereira, K. Ribarov, and J. Hajic. 2005b. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of HLT/EMNLP, pp. 523-530.

• J. Nivre. 2006. Inductive Dependency Parsing. Springer.
