
Source: jkalita/work/StudentResearch/HabibPhDThesis2008.pdf

Improving Scalability of Support Vector Machines for

Biomedical Named Entity Recognition

by

Mona Soliman Habib

B.Sc., Ain Shams University, 1981

M.Sc., Colorado Technical University, 2001

A dissertation submitted to the Graduate Faculty of the

University of Colorado at Colorado Springs

in partial fulfillment of the

requirements for the degree of

Doctor of Philosophy

Department of Computer Science

2008


© Copyright by Mona Soliman Habib 2008

All Rights Reserved


This dissertation for Doctor of Philosophy Degree by

Mona Soliman Habib

has been approved for the

Department of Computer Science

by

__________________________________________________ Dr. Jugal K. Kalita, Chair

__________________________________________________ Dr. Albert T. Chamillard

__________________________________________________ Dr. Edward C. Chow

__________________________________________________ Dr. Lisa Hines

__________________________________________________ Dr. Aleksander Kolcz

__________________________________________________ Dr. Xiaobo Zhou

______________________

Date


Soliman Habib, Mona (Ph.D. in Engineering, Focus in Computer Science)

Improving Scalability of Support Vector Machines for Named Entity Recognition

Thesis directed by Professor Jugal K. Kalita

This research work aims to explore the scalability problems associated with solving the named entity recognition problem using high-dimensional input space and Support Vector Machines (SVM), and to provide solutions that lower the computational time and memory requirements. The named entity recognition approach used fosters language and domain independence by eliminating the use of prior language and/or domain knowledge. The biomedical domain is chosen as the focus of this work due to its importance and inherent challenges.

To investigate the scalability of the SVM machine learning approach, we conduct a series of baseline experiments using biomedical abstracts. Single-class and multi-class classification performance and results are examined. The initial performance results, measured in terms of precision, recall, and F-score, are comparable to those obtained using more complex approaches. These results demonstrate that the simplified architecture used is capable of providing a reasonable solution to the language- and domain-independent named entity recognition problem.

To improve the scalability of named entity recognition using SVM, two main factors are tackled: computation time and memory requirements. We develop a new cutting plane algorithm for multi-class classification, SVM-PerfMulti, based on a new structural formulation for support vector optimization. The solution is several orders of magnitude faster than the traditional SVM-Multiclass machine and provides good out-of-the-box performance, yet it requires a considerable amount of memory that grows linearly with the training data size.

We introduce a new database-supported solution embedded in a relational database server. The database framework supports both the binary and multi-class classification problems. We explore the trade-off between computation time and online memory requirements when input feature vectors and trained support vectors are stored in and fetched from the database. Database caching of input vectors has minimal to no impact on computation time. On the other hand, constantly fetching support vectors from permanent storage negatively impacts computation time; this can be improved by caching kernel computations in active memory. The database framework promotes the usability of the machine learning solution and provides a way to build a growing dictionary of previously identified named entities.


To the Memory of my Beloved Father

To my Children, Marise and Pierre, the Joy of my Life


Acknowledgments

First, I would like to acknowledge my late father in a special way as this dissertation work nears its end. Dad, this is for you … I miss you very much, and I hope that I was able to fulfill your last will despite the many years that have gone by since you left our world. Without the unconditional love and strong belief in me that you and my mother have constantly given me, I would not have achieved anything in life.

I would like to express my gratitude to my advisor, Dr. Jugal Kalita, for his valuable advice and all of the help and support he has provided throughout this entire research process. His continuous encouragement and trust, even when I doubted myself, kept me going and motivated me to complete this dissertation.

My thanks also go to my advisory committee members, Dr. Tim Chamillard, Dr. Edward Chow, Dr. Xiaobo Zhou, Dr. Lisa Hines, and Dr. Alek Kolcz, for their valuable feedback and direction in making this dissertation possible. Special thanks to Dr. Tim Chamillard for dedicating one of his research machines for my use to run the many experiments needed to complete this work. I am also grateful to Mr. Jim Martin for his support with the research machines, and to Ms. Patricia Rea for her prompt response to many inquiries and her constant willingness to help.

I would also like to thank the following individuals for providing software and tools that proved invaluable in carrying out this work: Dr. Thorsten Joachims at Cornell University, for giving me permission to use SVM-Struct as a basis for my work, as well as his SVM-Light, SVM-Perf, and SVM-Multiclass implementations for experimentation; Mr. Claudio Giuliano at the Dot.Kom Project in Italy, for allowing me to use the jFex feature extraction software and providing the initial extraction scripts; and Mr. Stephen Senator at the US Air Force Academy, for giving me access to the USAFA research clusters during the early stages of this research endeavor.

Last, but not least, I extend my warmest expression of gratefulness and love to my family, my husband, Reda, and my children, Marise and Pierre, for believing in me and for standing by me in the midst of all the ups and downs I experienced throughout the research process. I would also like to thank my brother, Aziz, and his family for their love and encouragement. My special gratitude goes to my extended family and friends for their thoughts and prayers. Finally, thanks be to the Lord for getting me to this day and for giving me the strength to accomplish this work.


CONTENTS

Chapter 1  Introduction
    1.1  Motivation
    1.2  Research Methodology
    1.3  Dissertation Outline
Chapter 2  Named Entity Recognition
    2.1  Named Entity Recognition Approaches
    2.2  Common Machine Learning Architecture
    2.3  Performance Measures
    2.4  Biomedical Named Entity Recognition
    2.5  Language-Independent NER
    2.6  Named Entity Recognition Challenges
Chapter 3  Support Vector Machines
    3.1  Support Vector Machines
    3.2  Binary Support Vector Classification
    3.3  Multi-class Support Vector Classification
    3.4  Named Entity Recognition Using SVM
    3.5  SVM Scalability Challenges
    3.6  Emerging SVM Techniques
Chapter 4  Identifying Challenges in Large Scale NER
    4.1  Baseline Experiment Design
    4.2  Feature Selection
    4.3  Single Class Results
    4.4  Multi-class Results
    4.5  Challenges and Problems
    4.6  NER/SVM Scalability Challenges
    4.7  SVM Usability Issues
Chapter 5  Improving Multi-Class SVM Training
    5.1  Basic All-Together Multi-Class SVM
    5.2  Improving SVM Training Time
    5.3  SVM-PerfMulti: New Multi-Class Instantiation
    5.4  Reducing Memory Requirements
Chapter 6  Database Framework for SVM Learning
    6.1  SVM Database Architecture
    6.2  SVM-MultiDB Database Schema
    6.3  Embedded Database Functions
    6.4  SVM-MultiDB Usage Example
    6.5  Improved SVM Solution Usability
    6.6  Tradeoff of Training Time vs. Online Memory Needs
Chapter 7  Single-Class Experiments
    7.1  Single-Class Scalability Results
    7.2  Effect of Regularization Factor C
    7.3  Comparison of SVM-Perf and SVM-PerfDB
Chapter 8  Multi-Class Experiments
    8.1  Multi-Class Scalability Results
    8.2  Multi-Class Performance Measures
    8.3  Effect of Acceleration Factor a
    8.4  Experiments Using CoNLL-02 Data
    8.5  Other Experimental Datasets
Chapter 9  Subsequent Development: New SVM-Multiclass
    9.1  Difference in Cutting Plane Algorithms
    9.2  Experimental Results Comparison
    9.3  Effect of Regularization Parameter C
Chapter 10  Conclusion and Future Research
    10.1  Known Concerns and Solutions
    10.2  Evaluation of Success Criteria
    10.3  Contributions
    10.4  Future Work
Bibliography
Appendix A  SVM Data Dictionary
Appendix B  NER Experimentation Datasets
    B.1  BioNLP JNLPBA-04 Dataset
    B.2  CoNLL-02 Dataset
    B.3  CoNLL-03 Dataset
Appendix C  Biomedical Entities in the GENIA Corpus
    C.1  The Current GENIA Ontology
    C.2  Entity Definitions from the GENIA Ontology
    C.3  Sample Annotated Biomedical Text


TABLES

Table 2.1  Overview of BioNLP Methods, Features, and External Resources
Table 2.2  Performance Comparison of Systems Using BioNLP Datasets
Table 2.3  Comparison of Systems Using the CoNLL-02 Data
Table 2.4  Comparison of Systems Using the CoNLL-03 Data
Table 4.1  Effect of Positive Example Boosting
Table 4.2  Performance of BioNLP Systems Using SVM vs. Baseline Results
Table 4.3  Effect of Positive Examples Boosting on Single-Class SVM Results
Table 4.4  Summary of Baseline Multi-Class Experiment Results
Table 4.5  Baseline Multi-Class Experiment Results, 1978-1989 Set
Table 4.6  Baseline Multi-Class Experiment Results, 1990-1999 Set
Table 4.7  Baseline Multi-Class Experiment Results, 2000-2001 Set
Table 4.8  Baseline Multi-Class Experiment Results, 1998-2001 Set
Table 4.9  Single and Multi-class Training Times
Table 7.1  Consistent Performance Change with Varying C-Factor
Table 8.1  Summary of Dataset Contents
Table 8.2  Comparison of SVM-PerfMulti, SVM-MultiDB, and SVM-Multiclass Training Time vs. Training Data Size
Table 8.3  Performance of BioNLP Systems Using SVM vs. SVM-PerfMulti Results
Table 8.4  Summary of SVM-PerfMulti Experiment Results
Table 8.5  SVM-PerfMulti Results, 1978-1989 Set
Table 8.6  SVM-PerfMulti Results, 1990-1999 Set
Table 8.7  SVM-PerfMulti Results, 2000-2001 Set
Table 8.8  SVM-PerfMulti Results, 1998-2001 Set
Table 8.9  SVM-PerfMulti Overall Results
Table 8.10  CoNLL-02 SVM-Multiclass Spanish Performance Results
Table 8.11  CoNLL-02 SVM-PerfMulti Spanish Performance Results
Table 8.12  CoNLL-02 SVM-Multiclass Dutch Performance Results
Table 8.13  CoNLL-02 SVM-PerfMulti Dutch Performance Results
Table 8.14  CoNLL-02 SVM-PerfMulti Dutch Performance Results
Table 8.15  Other Experimental Datasets
Table 9.1  Biomedical JNLPBA-04 Performance Measures Comparison
Table 9.2  Spanish CoNLL-02 Performance Measures Comparison
Table 9.3  Dutch CoNLL-02 Performance Measures Comparison
Table B.1  Basic Statistics for the JNLPBA-04 Data Sets
Table B.2  Absolute and Relative Frequencies for Named Entities Within Each Set
Table B.3  Number of Named Entities in the CoNLL-02 Dataset
Table B.4  Basic Statistics for the CoNLL-03 Dataset
Table B.5  Number of Named Entities in the CoNLL-03 Dataset
Table C.1  The Current GENIA Ontology
Table C.2  Examples of Protein Named Entities
Table C.3  Examples of DNA Named Entities
Table C.4  Examples of RNA Named Entities
Table C.5  Examples of Cell Type Named Entities
Table C.6  Examples of Cell Line Named Entities


FIGURES

Figure 2.1  Commonly Used Architecture
Figure 3.1  SVM Linearly Separable Case
Figure 3.2  SVM Non-Linearly Separable Case
Figure 3.3  SVM Mapping to Higher Dimension Feature Space
Figure 3.4  Comparison of Multi-Class Boundaries
Figure 3.5  Half-Against-Half Multi-Class SVM
Figure 4.1  Baseline Experiments Architecture
Figure 5.1  Multi-Class SVM Error Representation
Figure 6.1  Embedded Database Architecture
Figure 6.2  SVM Learning Data Model
Figure 6.3  SVM-Perf and SVM-PerfMulti Memory Usage vs. Data Size
Figure 6.4  SVM-PerfDB Training Time vs. Examples Cache Size
Figure 6.5  SVM-MultiDB Training Size vs. Examples Cache Size
Figure 7.1  SVM-Light, SVM-Perf, and SVM-PerfDB Training Time vs. Training Data Size
Figure 7.2  SVM-Light Number of Support Vectors vs. Training Data Size
Figure 7.3  SVM-Perf Number of Support Vectors vs. Training Data Size
Figure 7.4  SVM-Perf Training Time vs. Variation of C-Factor
Figure 7.5  SVM-Perf Performance vs. Regularization Factor C
Figure 7.6  Single Class Protein Performance (Recall/Precision/F-Score), C=1.0
Figure 7.7  Modified SVM-Perf Memory vs. Varying C
Figure 7.8  Modified SVM-Perf vs. SVM-PerfDB Training Time: C=0.01
Figure 7.9  Modified SVM-Perf vs. SVM-PerfDB Training Time: C=1.0
Figure 7.10  Original SVM-Perf vs. SVM-PerfDB Training Time: C=0.01
Figure 7.11  Original SVM-Perf vs. SVM-PerfDB Training Time: C=1.0
Figure 8.1  SVM-Multiclass Number of Support Vectors vs. Training Data Size
Figure 8.2  SVM-PerfMulti Number of Support Vectors vs. Training Data Size
Figure 8.3  SVM-Multiclass, SVM-PerfMulti, and SVM-MultiDB Training Time vs. Training Data Size
Figure 8.4  SVM-MultiDB and SVM-PerfMulti Training Time vs. Training Data Size
Figure 8.5  Comparison of Single-Class and Multi-Class Memory Usage vs. Training Data Size
Figure 8.6  SVM-PerfMulti Overall Performance vs. Training Data Size
Figure 8.7  SVM-PerfMulti Protein Performance vs. Training Data Size
Figure 8.8  SVM-PerfMulti Training Time vs. Acceleration Factor
Figure 8.9  SVM-PerfMulti Overall Performance vs. Acceleration Factor
Figure 8.10  SVM-PerfMulti Protein Performance vs. Acceleration Factor
Figure 8.11  CoNLL-02 Spanish SVM-PerfMulti vs. SVM-Multiclass Training Time
Figure 8.12  CoNLL-02 Spanish SVM-PerfMulti Overall Performance
Figure 8.13  CoNLL-02 Dutch SVM-PerfMulti vs. SVM-Multiclass Training Time
Figure 8.14  CoNLL-02 Dutch SVM-PerfMulti Overall Performance
Figure 9.1  Biomedical JNLPBA-04 Training Time Comparison
Figure 9.2  Biomedical JNLPBA-04 Memory Comparison
Figure 9.3  Spanish CoNLL-02 Training Time Comparison
Figure 9.4  Spanish CoNLL-02 Memory Comparison
Figure 9.5  Dutch CoNLL-02 Training Time Comparison
Figure 9.6  Dutch CoNLL-02 Memory Comparison
Figure 9.7  Effect of Regularization Parameter C on Training Time
Figure 9.8  Effect of Regularization Parameter C on Performance
Figure 10.1  Recommended Service-Oriented Architecture


Chapter 1

Introduction

This research work aims to explore the scalability problems associated with solving the Named Entity Recognition (NER) problem using high-dimensional input space and Support Vector Machines (SVM), and to present two implementations addressing these issues. The NER domain chosen as the focus of this work is biomedical publications, selected for its importance and inherent challenges. In the following sections, we express our motivation for this work and briefly describe the research methodology, followed by an outline of the dissertation contents.

1.1  Motivation

Named entity recognition (NER) is an important task in information extraction. It involves the identification and classification of words or sequences of words denoting a concept or entity. Examples of such information units are names of persons, organizations, or locations in the general context of newswires. Domain-specific named entities are terms or phrases that denote concepts relevant to one particular domain. For example, protein and gene names are named entities of interest to the domains of molecular biology and medicine. The massive growth of textual information available in the literature and on the Web necessitates the automation of the identification and management of named entities in text.
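In machine learning formulations, this task is typically cast as token-level classification: each token receives a B-<class> tag (beginning of an entity), an I-<class> tag (inside an entity), or O (outside any entity). A minimal sketch of this encoding follows; the sentence, entity spans, and class names are illustrative only, not drawn from the datasets used in this work.

```python
# Illustrative B/I/O encoding for NER: convert entity spans to per-token tags.
def bio_tags(tokens, entities):
    """entities: list of (start_index, end_index_exclusive, entity_class)."""
    tags = ["O"] * len(tokens)
    for start, end, cls in entities:
        tags[start] = "B-" + cls          # first token of the entity
        for i in range(start + 1, end):
            tags[i] = "I-" + cls          # remaining tokens of the entity
    return tags

tokens = ["IL-2", "gene", "expression", "requires", "NF-kappa", "B", "."]
entities = [(0, 2, "DNA"), (4, 6, "protein")]
print(list(zip(tokens, bio_tags(tokens, entities))))
```

Under this encoding, recognizing named entities reduces to predicting one tag per token, which is where a multi-class classifier such as SVM enters the picture.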

The task of identifying named entities in a particular language is often accomplished by incorporating knowledge about the language's taxonomy into the method used. In English, such knowledge may include capitalization of proper names, known titles, common prefixes or suffixes, part-of-speech tagging, and/or identification of noun phrases in text. Techniques that rely on language-specific knowledge may not be suitable for porting to other languages. With the extension of named entity recognition to new information areas, the task of identifying meaningful entities has become more complex, as categories are more specific to a given domain. NER solutions that achieve a high level of accuracy in one language or domain may perform much more poorly in a different context.

Different approaches are used to carry out the identification and classification of entities. Statistical, probabilistic, rule-based, memory-based, and machine learning methods have been developed. The extension of NER to specialized domains raises the importance of devising solutions that require less human intervention in the annotation of examples or the development of specific rules. Machine learning techniques are therefore experiencing increased adoption, and much research activity is taking place to make such solutions more feasible. The Support Vector Machine (SVM) is rapidly emerging as a promising pattern recognition methodology due to its generalization capability and its ability to handle high-dimensional input. However, SVM is known to suffer from slow training, especially with large input data sizes.
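One source of this cost is that every optimization pass revisits the training examples, so training time grows with the data size. The following is a minimal sketch of a linear binary SVM trained by stochastic subgradient descent on the hinge loss; it is a simplified stand-in for the cutting-plane solvers studied in this dissertation, and the regularization constant, learning-rate schedule, and toy data are illustrative choices only.

```python
# Simplified stand-in for SVM training: Pegasos-style stochastic subgradient
# descent on the regularized hinge loss. Each epoch touches every training
# example, which is why cost grows with the size of the training set.
def train_linear_svm(data, dim, lam=0.01, epochs=50):
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        for x, y in data:                 # one pass over all examples
            t += 1
            eta = 1.0 / (lam * t)         # decaying step size
            margin = y * sum(w[i] * v for i, v in x.items())
            for i in range(dim):
                w[i] *= (1 - eta * lam)   # shrink w (regularization)
            if margin < 1:                # hinge loss active: push w toward y*x
                for i, v in x.items():
                    w[i] += eta * y * v
    return w

# x vectors are sparse dicts {feature_index: value}; y is +1 or -1 (toy data)
data = [({0: 1.0, 1: 1.0}, 1), ({0: -1.0, 1: -1.0}, -1),
        ({0: 1.0, 1: 0.5}, 1), ({0: -0.5, 1: -1.0}, -1)]
w = train_linear_svm(data, dim=2)
print(all(y * sum(w[i] * v for i, v in x.items()) > 0 for x, y in data))
```

Real solvers are far more sophisticated, but the sketch makes the scalability concern concrete: with millions of tokens and hundreds of thousands of features, even a single pass over the data is expensive.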

In this research work, we explore a solution that eliminates language- and domain-specific knowledge from the named entity recognition process when applied to the English biomedical entity recognition task, as a baseline for other languages and domains. Biomedical NER remains a challenging task due to a growing nomenclature, ambiguity in the left boundary of entities caused by descriptive naming, the difficulty of manually annotating large sets of training data, and strong overlap among different entities, to cite a few of the NER challenges in this domain. Many examples of biomedical named entities and annotated MEDLINE abstracts demonstrating the various shapes and forms of such entities are included in Appendix C. We discuss some of these examples in more detail in the following chapter. The proposed solution capitalizes on SVM's ability to handle high-dimensional input space. In this dissertation, we explore the scalability issues for named entity recognition using high-dimensional features and support vector machines and investigate ways to address them.
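Such high-dimensional input arises naturally when each token is mapped to sparse binary features drawn only from surface forms, affixes, and orthographic shape, with no language- or domain-specific resources. The sketch below illustrates the idea; the feature templates and window size are hypothetical, not the exact feature set used in this work.

```python
# Hypothetical language-independent feature extraction: each token is mapped
# to sparse binary features over a small context window, using only surface
# evidence (word form, suffix, character shape). Across a corpus, these
# templates generate a very high-dimensional but sparse feature space.
def token_features(tokens, i, window=1):
    feats = set()
    for off in range(-window, window + 1):
        j = i + off
        if 0 <= j < len(tokens):
            w = tokens[j]
            feats.add(f"w[{off}]={w.lower()}")          # word form
            feats.add(f"suf3[{off}]={w[-3:].lower()}")  # 3-char suffix
            shape = "".join("X" if c.isupper() else
                            "x" if c.islower() else
                            "0" if c.isdigit() else c for c in w)
            feats.add(f"shape[{off}]={shape}")          # orthographic shape
    return feats

print(sorted(token_features(["IL-2", "expression"], 0)))
```

Note that nothing in these templates depends on English or on biology: shape features like "XX-0" capture patterns such as "IL-2" in any language or domain.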

1.2  Research Methodology

We propose an integrated machine learning NER solution using support vector machines that attempts to improve the scalability of NER/SVM with large datasets. This research focuses on All-Together multi-class training algorithms for their superior performance and challenging scalability. This multi-class approach builds one SVM model that maximizes all separating hyperplanes at the same time, which is a challenging optimization problem.

We envision an SVM solution assisted by a special database schema and embedded

database modules. The database schema design incorporates input data, evolving training

model(s), pre-computed kernel outputs and dot products, and classification output data. In

order to reduce the communication overhead with the database backend, we propose

extending the database server with embedded modules. This should also provide a better

integration of all components. Database triggers may be used for frequently updated

fields to improve the potential parallelization of the learning processes.
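A minimal sketch of how such a schema might be laid out, using SQLite as a stand-in backend; all table and column names here are illustrative assumptions, not the actual schema designed later in this work:

```python
import sqlite3

# Illustrative sketch of the proposed database schema. Table and column
# names are hypothetical; SQLite stands in for the real database backend.
SCHEMA = """
CREATE TABLE examples (          -- input data vectors
    id       INTEGER PRIMARY KEY,
    label    INTEGER,            -- numeric class
    features TEXT                -- sparse "index:value" pairs
);
CREATE TABLE kernel_cache (      -- pre-computed kernel outputs / dot products
    i INTEGER, j INTEGER, value REAL,
    PRIMARY KEY (i, j)
);
CREATE TABLE models (            -- evolving training model(s)
    id           INTEGER PRIMARY KEY,
    machine_type TEXT,           -- 'binary' or 'multiclass'
    weights      BLOB
);
CREATE TABLE classifications (   -- classification output data
    example_id INTEGER REFERENCES examples(id),
    predicted  INTEGER
);
"""

def create_schema(conn: sqlite3.Connection) -> None:
    conn.executescript(SCHEMA)

conn = sqlite3.connect(":memory:")
create_schema(conn)
```

Keeping examples, the kernel cache, models, and outputs in one schema is what allows embedded modules to share intermediate results instead of exchanging them over a client connection.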


The database repository makes it possible to build a growing list of previously

identified and annotated named entities, providing a valuable resource for

continually improving classification performance. The evolving gazetteer list can be

used during preprocessing or post-processing to annotate and/or correct the classification

of newly discovered named entities thereby boosting the overall performance. While this

extension will not be implemented as part of this work, it can be easily incorporated at a

future time.
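Gazetteer-based post-processing of this kind can be sketched as follows; the function, the IOB2-style labels, and the example entries are illustrative, not part of the implemented system:

```python
# Minimal sketch of gazetteer-based post-processing: previously identified
# entities stored in a repository are used to (re)label the classifier output.

def apply_gazetteer(tokens, labels, gazetteer):
    """Relabel any token sequence found in the gazetteer.

    gazetteer maps an entity string to its type, e.g. {"IL-2": "protein"}.
    """
    out = list(labels)
    for start in range(len(tokens)):
        # try the longest match first
        for end in range(len(tokens), start, -1):
            phrase = " ".join(tokens[start:end])
            etype = gazetteer.get(phrase)
            if etype:
                out[start] = "B-" + etype
                for k in range(start + 1, end):
                    out[k] = "I-" + etype
                break
    return out

tokens = ["IL-2", "gene", "expression", "in", "T", "cells"]
labels = ["O", "O", "O", "O", "O", "O"]
gaz = {"IL-2": "protein", "T cells": "cell_type"}
print(apply_gazetteer(tokens, labels, gaz))
# → ['B-protein', 'O', 'O', 'O', 'B-cell_type', 'I-cell_type']
```

A production version would resolve conflicts between classifier and gazetteer labels rather than always overwriting, but the sketch shows where such a module plugs into the pipeline.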

Following an investigation and literature review stage, this work proceeds in three

phases with different objectives:

# The first phase addresses the named entity recognition problem using a language

and domain-independent approach. It focuses on the feature selection criteria and

the basic architecture design that eliminates language or domain-specific pre- and

post-processing. This phase also assesses the baseline performance using multi-

class support vector machines, and the potential challenges and problems of using

this machine learning solution.

# The second phase aims to tackle the slow training time of multi-class support

vector machines and to develop algorithm(s) to accelerate learning as this issue

constitutes the main obstacle when dealing with large real-world datasets. A

secondary objective is to identify ways to lower the memory required by the

potential solution especially with large input sizes.

# The objective of the final phase is to design and implement a database framework

for SVM learning that promotes the usability of machine learning and integrates

training and testing example datasets, trained models, and classification outputs in


a common environment. The database framework is to incorporate both the binary

and multi-class support vector machines and provide complementary solutions to

lower online memory needs.

During phase one of this research work, we realized the need for a novel machine

learning framework that promotes reusability, expandability, and maintainability of the

solution and provides an architecture that encourages future work. Initial thoughts about a

service-oriented architecture for machine learning are presented in Chapter 10. Given the

scope and challenge level of the proposed research plan, we decided to focus on tackling

NER/SVM scalability issues and not pursue the development of the recommended

service-oriented architecture for the time being.

In this research work, we focus on the main SVM learning and classification

modules, where both the binary and multi-class cases coexist and are accessible

through a user-selected machine type. Feature extraction and classification performance

evaluation remain external to the database embedded solution, with provisions made to

import the input data vectors and to export the classified output for evaluation. We use

the jFex software (2005) for feature extraction. The database modules include an input

data loading module in order to streamline the inclusion of an embedded feature

extraction module in the future.

The evaluation of this research will concentrate on the biomedical literature domain

using the JNLPBA-04 (BioNLP) datasets and evaluation tools. We include evaluation

data in the database schema design and provide a module to export the classified data to

be evaluated by the external tools. This again should make the inclusion of embedded

evaluation modules easier in the future.


In order to validate the applicability of the NER/SVM solution to other cases, we

conduct experiments using the CoNLL-02 Spanish and Dutch datasets and evaluate these

experiments using the corresponding evaluation scripts for the CoNLL challenge task.

1.3 Dissertation Outline

This thesis is organized as follows. In Chapter 2, we present the named entity

recognition problem and the current state of the art solutions for it. We then explore the

language-independent NER research activities as well as those specific to recognizing

named entities in biomedical abstracts as an example of a specialized domain. The

methods and features used by many systems using the same sets of training and testing

data from three NER challenge tasks are summarized, followed by a discussion of the

named entity recognition scalability challenges.

The mathematical foundation of the Support Vector Machine (SVM) is briefly introduced

in Chapter 3. The binary classification for linearly separable and non-linearly separable

data is presented, followed by the different approaches used to classify data with several

classes. We conclude the chapter with a discussion of SVM scalability issues and briefly

introduce how they are addressed in the literature.

In order to investigate the potential problems associated with named entity

recognition using support vector machines, we performed a series of baseline single class

and multi-class experiments using datasets from the biomedical domain. The approach

used is one that eliminates the use of prior language or domain-specific knowledge. The

detailed architecture and methods used as well as the experiments’ results are presented

in Chapter 4. We compare the results to other systems using the same datasets and


demonstrate that the simplified NER process is capable of achieving performance

measures that are comparable to published results. We discuss the challenges and

problems encountered and summarize the NER/SVM scalability issues. The usability

aspects of SVM machine learning are also examined.

To address the scalability issues of the NER/SVM solution using high-dimensional

input space, we begin by tackling the training computational time component. In Chapter

5, we explore the mathematical foundation of multi-class support vector machines and

the evolution of the problem formulation aiming to reduce the training time by improving

the optimization algorithms. We then introduce a new cutting plane algorithm that

approximates the optimization problem while achieving good out-of-the-box performance

in linear time. The cutting plane algorithm is first implemented as a standalone C

executable in order to study its effectiveness in reducing the computational time.

Having improved the training time for the multi-class classification problem, we then

present a database-supported SVM framework that seeks to promote the usability of the

SVM solution and the reusability of its classification results, as well as improve

scalability by lowering the online memory requirements. The database framework

addresses both the single class and multi-class classification problems. The database

architecture, schema design, and a description of the embedded SVM modules are

described in Chapter 6.

In Chapters 7 and 8, we report the results of a series of scalability experiments using

existing SVM implementations as well as our newly developed standalone and database-

supported solution. Chapter 7 examines the results of single class experiments classifying

protein named entities in the JNLPBA-04 biomedical data. In Chapter 8, we compare the


outcomes of multi-class experiments using the same biomedical datasets where the

following five named entity types are discovered: protein, DNA, RNA, cell line, and cell

type. In addition, we apply the existing and new multi-class solutions to two additional

datasets, namely the CoNLL-02 Spanish and Dutch challenge data aiming to classify

general named entities such as persons, locations, or organizations.

During the final literature review for this work, we came across a new, yet-to-be-published

paper that analyzes cutting plane algorithms using the latest 1-slack SVM formulation, the

same formulation we use in conjunction with our new multi-class cutting plane algorithm.

The paper briefly mentions a multi-class implementation, and upon examining its source

code, we found it to be newer than the implementation used for our baseline experiments.

In Chapter 9, we compare the results of our cutting plane algorithm with those achieved

using this newly available multi-class version, and report the differences in training

time, memory, and performance.

Finally, we discuss some of the known concerns about our research work and how we

addressed them as part of the solution, as well as a self-evaluation of the contributions

and success criteria in Chapter 10. We also present some research directions that we

contemplate for future work. These directions include a recommendation for a

dynamic service-oriented machine learning architecture, incremental learning, feature

selection, and unsupervised learning. We briefly brainstorm on some ideas in each

research direction.


Chapter 2

Named Entity Recognition

Named entity recognition (NER) is an important task in information

extraction that involves the identification and classification of words or sequences of

words denoting a concept or entity. Examples of named entities in general text are names

of persons, locations, or organizations. Domain-specific named entities are those terms or

phrases that denote concepts relevant to one particular domain. For example, protein and

gene names are named entities which are of interest to the domain of molecular biology

and medicine. The massive growth of textual information available in the literature and

on the Web necessitates the automation of identification and management of named

entities in text.

The task of identifying named entities in a particular language is often accomplished

by incorporating knowledge about the language taxonomy in the method used. In the

English language, such knowledge may include capitalization of proper names, known

titles, common prefixes or suffixes, part of speech tagging, and/or identification of noun

phrases in text. Techniques that rely on language-specific knowledge may not be suitable

for porting to other languages. For example, the Arabic language does not use

capitalization to identify proper names, and word variations are based on the use of

infixes in addition to prefixes and suffixes. Moreover, the composition of named entities

in the literature of each specialized domain follows its own rules, which may or

may not benefit from techniques relevant to general NER.


This chapter presents an overview of the research activities in the area of named

entity recognition in several directions related to the focus of this work:

# Named entity recognition approaches,

# Common machine learning architecture for NER,

# Named entity recognition in the biomedical literature context,

# Language-independent named entity recognition, and

# Named entity recognition challenges.

2.1 Named Entity Recognition Approaches

Named entity recognition activities began in the late 1990s with a limited number of

general categories such as names of persons, organizations, and locations (Sekine 2004;

Nadeau and Sekine 2007). Early systems were based on the use of dictionaries and rules

built by hand, and few used supervised machine learning techniques. The CoNLL-02

(Tjong Kim Sang 2002a) and CoNLL-03 (Tjong Kim Sang and De Meulder 2003) shared tasks,

discussed later in this chapter, provided valuable NER evaluation benchmarks for four languages:

English, German, Spanish, and Dutch. With the extension of named entity recognition

activities to new languages and domains, more entity categories are introduced which

made the methods relying on manually built dictionaries and rules much more difficult to

adopt, if at all feasible.

The extended applicability of NER in new domains led to more adoption of

supervised machine learning techniques which include:

# Decision Trees (Paliouras et al. 2000; Black and Vasilakopoulos 2002)

# AdaBoost (Carreras et al. 2002, 2003b; Wu et al. 2003; Tsukamoto et al. 2002)


# Hidden Markov Model (HMM) (Scheffer et al. 2001; Kazama et al. 2001; Zhang

et al. 2004; Zhao 2004; Zhou 2004)

# Maximum Entropy Model (ME) (Kazama et al. 2001; Bender et al. 2003; Chieu

and Ng 2003; Curran and Clark 2003; Lin et al. 2004; Nissim et al. 2004)

# Boosting and voted perceptrons (Carreras et al. 2003a; Dong and Han 2005; Wu

et al. 2002; Wu et al. 2003)

# Recurrent Neural Networks (RNN) (Hammerton 2003)

# Conditional Random Fields (CRF) (McCallum and Li 2003; Settles 2004; Song et

al. 2004; Talukdar et al. 2006)

# Support Vector Machine (SVM) (Lee, Hou et al. 2004; Mayfield et al. 2003;

McNamee and Mayfield 2002; Song et al. 2004; Rössler 2004; Zhou 2004;

Giuliano et al. 2005).

Memory-based (De Meulder and Daelemans 2003; Hendrickx and Bosch 2003;

Tjong Kim Sang 2002b) and transformation-based (Black and Vasilakopoulos 2002;

Florian 2002; Florian et al. 2003) techniques have been successfully used for recognizing

general named entities where the number of categories is limited, but have seen less adoption in

more complex NER tasks such as the biomedical domain (Finkel et al. 2004). Recent

NER systems also combined several classifiers using different machine learning

techniques in order to achieve better performance results (Florian et al. 2003; Klein et al.

2003; Mayfield et al. 2003; Song et al. 2004; Rössler 2004; Zhou 2004).

With the growing adoption of machine learning techniques for NER, especially for

specialized domains, the need for developing semi-supervised or unsupervised solutions

increases. Supervised learning methods rely on the existence of manually annotated


training data, which is very expensive in terms of labor and time and a hindering factor

for many complex domains with growing nomenclature. However, using unannotated

training data or a mixture of labeled and unlabeled data requires the development of new

NER machine learning solutions based on clustering and inference techniques. Named

entity recognition systems that attempted to use unannotated training data include

(Cucerzan and Yarowsky 1999; Riloff and Jones 1999; De Meulder and Daelemans 2003;

Yangarber and Grishman 2001; Hendrickx and Bosch 2003; Goutte et al. 2004;

Yangarber et al. 2002; Zeng et al. 2003; Bodenreider et al. 2002; Collins and Singer

1999).

Comparing the relative performance of the various NER approaches is a nontrivial

task. The performance of earlier systems that relied on manually built dictionaries and

rules depends in the first place on the quality of the rules and dictionaries used. Systems

based on statistical approaches and machine learning techniques, whether they use just

one method or a combination of several techniques, require the use of annotated training

data and extraction of many orthographic, contextual, linguistic, and domain-specific

features in addition to possibly using external resources such as dictionaries, gazetteers,

or even the World Wide Web. Therefore, judging the performance of a given system

cannot be made solely based on the choice of a machine learning approach but rather on

the overall solution design and final performance results. This observation makes the use

of a machine learning technique for NER an art more than a science.

2.2 Common Machine Learning Architecture

Constructing a named entity recognition solution using a machine learning approach

requires many computational steps including preprocessing, learning, classification, and


post-processing. The specific components included in a given solution vary, but they may

be viewed as belonging to the following groups, summarized in Figure 2.1.

2.2.1 Preprocessing Modules

Using a supervised machine learning technique relies on the existence of annotated

training data. Such data is usually created manually by human annotators or experts in the relevant

field. The training data needs to be put in a format that is suitable to the solution of

choice. New data to be classified also require the same formatting. Depending on the

needs of the solution, the textual data may need to be tokenized, normalized, scaled, and

mapped to numeric classes prior to being fed to a feature extraction module. To reduce

the training time with large training data, some techniques such as chunking or instance

pruning (filtering) may need to be applied.
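Two of the preprocessing steps above, mapping string class labels to numeric classes and chunking a large training set into smaller pieces, can be sketched as follows; all names are illustrative:

```python
# Sketch of two preprocessing steps: numeric class mapping and chunking.

def map_labels(labels):
    """Assign a stable numeric id to each distinct string label."""
    ids = {}
    numeric = []
    for lab in labels:
        if lab not in ids:
            ids[lab] = len(ids)
        numeric.append(ids[lab])
    return numeric, ids

def chunk(examples, size):
    """Split a dataset into chunks of at most `size` examples."""
    return [examples[i:i + size] for i in range(0, len(examples), size)]

numeric, mapping = map_labels(["O", "B-protein", "I-protein", "O"])
# numeric == [0, 1, 2, 0]
```

The same label mapping must be kept and reapplied when classifying new data, so that the classifier's numeric output can be mapped back to the original string classes during post-processing.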

2.2.2 Feature Extraction

In the feature extraction phase, training and new data is processed by one or more

pieces of software in order to extract the descriptive information about it. The choice of

feature extraction modules depends on the solution design and may include the extraction

of orthographic and morphological features, contextual information about how tokens

appear in the documents, linguistic information such as part-of-speech or syntactic

indicators, and domain-specific knowledge such as the inclusion of specialized

dictionaries or gazetteers (reference lists). Some types of information may require the use

of other machine learning steps to generate it, for example, part-of-speech tagging is

usually performed by a separate machine learning and classification software which may

or may not exist for a particular language.
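A minimal sketch of the orthographic and contextual feature extraction described above; the feature names and window size are illustrative choices, not those of jFex or any particular system:

```python
# Sketch of orthographic features plus a small context window for one token.

def token_features(tokens, i, window=1):
    w = tokens[i]
    feats = {
        "word=" + w.lower(): 1,
        "init_cap": int(w[:1].isupper()),       # orthographic features
        "all_caps": int(w.isupper()),
        "has_digit": int(any(c.isdigit() for c in w)),
        "has_hyphen": int("-" in w),
        "suffix3=" + w[-3:].lower(): 1,         # affix information
    }
    for d in range(-window, window + 1):        # contextual window
        if d != 0 and 0 <= i + d < len(tokens):
            feats["ctx%+d=%s" % (d, tokens[i + d].lower())] = 1
    return feats

f = token_features(["transcription", "factor", "NF-kappa", "B"], 2)
```

Because every distinct word and affix becomes its own feature, vectors of this kind are extremely sparse and high-dimensional, which is precisely the input-space regime this dissertation targets with SVMs.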


Preparing the data for use by the feature extractor may require special formatting to

suit the input format of the software. Also, depending on the choice of machine learning

software, one may need to reformat the output of the feature extraction to be compatible

with what is expected by the machine learning module(s). Due to the lack of

standardization in this area and because no integrated solutions exist for named entity

recognition, several incompatibilities exist between the many tools one may use to build

the overall architecture. In addition, one may also need to build customized interfacing

modules to fit all the pieces of the solution together.

2.2.3 Machine Learning and Classification

Most of the publicly available machine learning software uses a two-phased approach

where learning is first performed to generate a trained machine followed by a

classification step. The trained model for a given problem can be reused for many

classifications as long as there is no need to change the learning parameters or the

training data.

2.2.4 Post-Processing Modules

The post-processing phase prepares the classified output for use by other applications

and/or for evaluation. The classified output may need to be reformatted, regrouped into

one large chunk if the input data was broken down into smaller pieces prior to being

processed, remapped to reflect the string class names, and tested for accuracy by

evaluation tools. Other post-processing modules may include the use of external

knowledge resources such as dictionaries or gazetteer lists to refine the classification

results and boost the performance measures. The final collection of annotated documents

may be reviewed by human experts prior to being used for other needs.


Figure 2.1 - Commonly Used Architecture

2.3 Performance Measures

The performance measures used to evaluate the named entity

participating in the CoNLL-02, CoNLL-03 and JNLPBA-04 challenge tasks are

precision, recall, and the weighted mean F_{β=1}-score. Precision is the percentage of named

entities found by the learning system that are correct. Recall is the percentage of named

entities present in the corpus that are found by the system. A named entity is correct only

if it is an exact match of the corresponding entity in the data file, i.e., the complete named

entity is correctly identified. Definitions of the performance measures used are


summarized below. The same performance measures are used to evaluate the results of

the experiments performed during the course of this work. It is worth noting that since

named entities are frequently composed of several words, the definition of the

performance measures usually refers to complete matches of multi-word named entities,

and not to individual word classifications.

\[ \text{Recall} = \frac{\#\text{ of correctly classified entities}}{\#\text{ of entities in the corpus}} = \frac{tp}{tp + fn} \]

\[ \text{Precision} = \frac{\#\text{ of correctly classified entities}}{\#\text{ of entities found by the algorithm}} = \frac{tp}{tp + fp} \]

\[ \text{Weighted Mean } F_{\beta} = \frac{(\beta^{2} + 1) \cdot (\text{precision} \cdot \text{recall})}{\beta^{2} \cdot \text{precision} + \text{recall}} \]

\[ \text{With } \beta = 1, \quad F_{\beta=1} = \frac{2 \cdot (\text{precision} \cdot \text{recall})}{\text{precision} + \text{recall}} \]
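These definitions can be computed directly from entity-level matches. A minimal sketch, where the (start, end, type) span representation and all names are illustrative:

```python
# Entity-level precision, recall, and F-score. An entity counts as correct
# only on an exact match of its full span and type, matching the challenge
# task definitions.

def evaluate(gold, predicted, beta=1.0):
    """gold and predicted are sets of (start, end, type) entity spans."""
    tp = len(gold & predicted)
    fp = len(predicted - gold)
    fn = len(gold - predicted)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = ((beta ** 2 + 1) * precision * recall /
         (beta ** 2 * precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

gold = {(0, 2, "protein"), (5, 6, "cell_type")}
pred = {(0, 2, "protein"), (3, 4, "DNA")}
p, r, f = evaluate(gold, pred)
# p == 0.5, r == 0.5, f == 0.5
```

Note that scoring sets of whole spans, rather than per-token labels, is what enforces the exact-match requirement: a partially recognized multi-word entity counts as both a false positive and a false negative.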

2.4 Biomedical Named Entity Recognition

Biomedical entity recognition aims to identify and classify technical terms in the

domain of molecular biology that are of interest to biologists and scientists. Examples of

such entities are protein and gene names, cell types, virus names, DNA sequences, and

others. The U.S. National Library of Medicine maintains a large controlled

vocabulary, MeSH (NLM 2007b), used for indexing articles for MEDLINE/PubMed

(NLM 2007a) and to provide a consistent way to retrieve information that may use

different terminology for the same concepts. PubMed’s constantly growing collection of

articles raises the need for automated tools to extract new entities appearing in the

literature.


Biomedical named entity recognition remains a challenging task compared to

general NER. Systems that achieve high accuracy in recognizing general names in

newswire text (Tjong Kim Sang and De Meulder 2003) have not performed as well on

biomedical NER, with F-scores 20 to 30 points lower.

Biomedical NER presents many challenges: growing nomenclature,

ambiguity in the left boundary of entities caused by descriptive naming, shortened forms

due to abbreviation and aliasing, and the difficulty of creating consistently annotated

training data with a large number of classes (Kim et al. 2004). The following are some examples of

the five named entities: protein, DNA, RNA, cell type, and cell line, and the challenges

they pose. Multi-word entities are to be identified as a whole. Definitions of these named

entity categories are provided in Appendix C.

# An entity may appear in any shape or form. E.g., each of the following is a protein

named entity.

o CD28

o lipoxygenase metabolites

o transcription factor NF-kappa B

o Ras/protein kinase C

o major coat protein

o erythroid cell-specific histone

o constitutively expressed transcription factor Sp1

o PuB1

o T cell receptor alpha (TCR alpha) enhancer

o anti-CD4 mAg


# It is difficult to distinguish the boundaries of a multi-word entity, especially if it is

composed of a sequence of descriptive words. E.g.:

o DNA → human and mouse gene

o Protein → major histocompatibility complex class II molecules

o Protein → rat intercellular adhesion molecule-1 monoclonal antibody

o Protein → seven transmembrane receptor

o DNA → artificial promoter

o RNA → chromosome 16 CBF beta-MYH11 fusion transcript

o RNA → beta 1 triodothyronine receptor mRNA

o Cell type → quiescent and stimulated cells

o Cell line → highly purified CD34+ cells

# Named entities may be as simple as one word or may be a long sequence of words

and/or acronyms. E.g.:

o DNA → enhancer

o DNA → uracil-DNA glycosylase (UDG) gene

o Cell line → YT

o Cell line → unstimulated and 12-Otetradecanoylphorbol 13-acetate/phytohemagglutinin-stimulated human Jurkat T cells

o RNA → mRNA

o RNA → monocytic I kappa B alpha/MAD-3 mRNA

# A named entity of one kind may include another named entity of the same or

another kind. E.g.:

o Protein → Ras


o Protein → Ras/protein kinase C

o Protein → IL-2

o DNA → IL-2-inducible enhancer

o Cell type → T cells

o Protein → CD4

o Cell line → CD4 positive T cell lines

# Inconsistent naming of entities: an entity may appear in its long form, its

abbreviated form, or both. E.g.:

o Protein → interleukin-2

o Protein → IL-2

o DNA → IL-2 receptor alpha gene

o DNA → IL-2R alpha gene

o DNA → IL-2 receptor alpha (IL-2R alpha) gene

# Conjunctions of entities may appear as one named entity. E.g., the following are

all RNA named entities. Each line represents one entity.

o c-fos mRNA

o c-jun mRNA

o c-fos, c-jun, EGR2, and PDGF (B) mRNA

# Several named entities may appear in a contiguous manner, with or without

separators such as parentheses. E.g.:

o protein tyrosine kinases p56lck and p59fyn is a sentence fragment

containing three individual protein names: (1) protein tyrosine kinases,

(2) p56lck, and (3) p59fyn.


o erythroid transcription factor GATA-1 is a fragment containing two

consecutive protein names erythroid transcription factor and GATA-1.

Biomedical named entity recognition systems make use of publicly or privately

available corpora to train and test their systems. The quality of the corpus used impacts

the output performance, as one would expect. Cohen et al. (2005) compare the quality of

six publicly available corpora and evaluate them in terms of age, size, design, format,

structural and linguistic annotation, semantic annotation, curation, and maintenance. The

six public corpora are the Protein Design Group (PDG) corpus (Blaschke et al. 1999), the

University of Wisconsin corpus (Craven and Kumlein 1999), Medstract (Pustejovsky et

al. 2002), the Yapex corpus (Franzen et al. 2002; Eriksson et al. 2002), the GENIA

corpus (Ohta et al. 2002; Kim et al. 2003), and the BioCreative task dataset (GENETAG)

(Tanabe et al. 2005). These corpora are widely used in the biomedical named entity

recognition research community and serve as basis of performance comparison. Cohen et

al. (2005) conclude that the GENIA corpus's quality has improved over the years, most

probably due to its continued development and maintenance.

The GENIA corpus is the source of the JNLPBA-04 challenge task datasets described

in Appendix B, often referred to as the BioNLP data. We used the JNLPBA-04 (BioNLP)

datasets for the biomedical NER experiments performed during the course of this work.

We evaluate the classification performance using the JNLPBA-04 evaluation scripts in

order to provide a common basis of comparison of the results to published results using

the same datasets. Definitions of the biomedical named entities used in JNLPBA-04 NER

task, sample annotated GENIA biomedical abstracts, and examples of named entities

demonstrating the various shapes and forms in which they may appear are provided in


Appendix C. The following is a sample annotated MEDLINE abstract from the GENIA [1]

corpus. The objective of biomedical named entity recognition is to automatically discover

and highlight the word(s) denoting concepts in a similar manner.

Legend of Entity Types: protein, DNA, RNA, cell_line, cell_type

MEDLINE Abstract #93389400: Goldfeld et al. (1993) [2]

Identification of a novel cyclosporin-sensitive element in the human tumor necrosis

factor alpha gene promoter.

Tumor necrosis factor alpha (TNF-alpha), a cytokine with pleiotropic biological

effects, is produced by a variety of cell types in response to induction by diverse stimuli.

In this paper, TNF-alpha mRNA is shown to be highly induced in a murine T cell clone

by stimulation with T cell receptor (TCR) ligands or by calcium ionophores alone.

Induction is rapid, does not require de novo protein synthesis, and is completely blocked

by the immunosuppressant cyclosporin A (CsA). We have identified a human TNF-alpha

promoter element, kappa 3, which plays a key role in the calcium-mediated inducibility

and CsA sensitivity of the gene. In electrophoretic mobility shift assays, an

oligonucleotide containing kappa 3 forms two DNA protein complexes with proteins that

are present in extracts from unstimulated T cells. These complexes appear in nuclear

extracts only after T cell stimulation. Induction of the inducible nuclear complexes is

rapid, independent of protein synthesis, and blocked by CsA, and thus, exactly parallels

the induction of TNF-alpha mRNA by TCR ligands or by calcium ionophore. Our studies

indicate that the kappa 3 binding factor resembles the preexisting component of nuclear

factor of activated T cells. Thus, the TNF-alpha gene is an immediate early gene in

activated T cells and provides a new model system in which to study CsA-sensitive gene

induction in activated T cells.

1 The GENIA Project (5.1). Team Genia, 2002-2006. Available from http://www-tsujii.is.s.u-tokyo.ac.jp/GENIA/home/wiki.cgi.
2 Goldfeld, et al. 1993. Identification of a novel cyclosporin-sensitive element in the human tumor necrosis factor alpha gene promoter. J Exp Med 178 (4):1365-79.


The availability of common experimentation data and evaluation tools provides a

great opportunity for researchers to compare their performance results against other

systems. The BioNLP tools and data are used by many systems with published results.

Table 2.1 summarizes the methods used by several systems performing biomedical

named entity recognition using the JNLPBA-04 (BioNLP) datasets as well as the features

and any external resources they used. Systems performing biomedical NER used Support

Vector Machine (SVM), Hidden Markov Model (HMM), Maximum Entropy Markov

Model (MEMM), or Conditional Random Fields (CRF) as their classification methods

either combined or in isolation. The features used by the different systems are listed in

abbreviated form in Table 2.1 and include some or all of the following:

# lex: lexical features beyond simple orthographic features

# ort: orthographic information

# aff: affix information (character n-grams)

# ws: word shapes

# gen: gene sequences (ATCG sequences)

# wv: word variations

# len: word length

# gaz: gazetteers (reference lists of known named entities)

# pos: part-of-speech tags

# np: noun phrase tags

# syn: syntactic tags

# tri: word triggers

# ab: abbreviations


# cas: cascaded entities

# doc: global document information

# par: parentheses handling information

# pre: previously predicted entity tags

# External resources: B: British National Corpus (Oxford 2005); M: MEDLINE

corpus (NLM 2007a); P: Penn Treebank II corpus (The Penn Treebank Project

2002); W: World Wide Web; V: virtually generated corpus; Y: Yapex (Eriksson

et al. 2002); G: GAPSCORE (Chang et al. 2004).
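To make the abbreviations above concrete, a minimal sketch of the orthographic (ort), affix (aff), word-shape (ws), and length (len) features for one token might look as follows. The feature names and the 3-character affix width are illustrative assumptions, not any surveyed system's exact feature set:

```python
import re

def token_features(token):
    """Illustrative ort/aff/ws/len features for one token."""
    return {
        # ort: simple orthographic predicates
        "init_cap": token[:1].isupper(),
        "all_caps": token.isupper(),
        "has_digit": any(c.isdigit() for c in token),
        "has_hyphen": "-" in token,
        # aff: character n-gram affixes (here, 3-character prefix/suffix)
        "prefix3": token[:3].lower(),
        "suffix3": token[-3:].lower(),
        # ws: word shape -- collapse digit/uppercase/lowercase runs into classes
        "shape": re.sub(r"[a-z]+", "a",
                 re.sub(r"[A-Z]+", "A",
                 re.sub(r"[0-9]+", "0", token))),
        # len: word length
        "length": len(token),
    }

print(token_features("TNF-alpha"))  # e.g. shape "A-a", prefix3 "tnf"
```

Each token in the corpus would be mapped to such a feature vector, usually together with the features of a window of surrounding tokens.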

In the following paragraphs we will highlight the top performing systems (Zhou and

Su 2004; Finkel et al. 2004; Settles 2004; Giuliano et al. 2005). Two of the top

performing systems used SVM, where one combined it with HMM (Zhou and Su 2004)

and the other used it as the only classification model (Giuliano et al. 2005). The second

best system used a maximum entropy approach (Finkel et al. 2004) and the system that

ranked third used conditional random fields (Settles 2004).

Zhou and Su (2004) combined a Support Vector Machine (SVM) with a sigmoid

kernel and a Hidden Markov Model (HMM). The system explored deep knowledge in

external resources and specialized dictionaries in order to derive alias information,

cascaded entities, and abbreviation resolution. The authors made use of existing gene and

protein name databases such as SwissProt in addition to a large number of orthographic

and language-specific features such as part-of-speech tagging and head noun triggers.

The system achieved the highest overall performance, an F-score of 72.6%, with

post-processing using the external resources. However, the F-score achieved using machine

learning alone, i.e., prior to post-processing, is 60.3%. Given the complex system design


and the large number of preprocessing and post-processing steps undertaken in order to

correctly identify the named entities, it is difficult to judge the impact of the machine

learning approach alone when comparing this system to others. The performance gain is

mostly due to the heavy use of specialized dictionaries, gazetteer lists, and previously

identified entities in order to flag known named entities.
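For reference, the F-scores quoted throughout this chapter are the harmonic mean of precision P and recall R computed over predicted entities, where TP, FP, and FN denote true-positive, false-positive, and false-negative entity counts:

```latex
P = \frac{TP}{TP + FP}, \qquad
R = \frac{TP}{TP + FN}, \qquad
F_{\beta=1} = \frac{2\,P\,R}{P + R}.
```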

Finkel et al. (2004) achieved F-score of 70.1% on BioNLP data using a maximum

entropy-based system previously used for the language-independent task CoNLL-03. The

system used a rich set of local features and several external sources of information such

as parsing, searching the Web, domain-specific gazetteers, and comparing an NE's

appearance in different parts of a document. Post-processing attempted to correct multi-

word entity boundaries and combined results from two classifiers trained forward and

backward. Dingare et al. (2005) confirmed the known challenge in identifying named

entities in biomedical documents, which causes the performance results to lag behind

those achieved in general NER tasks such as CoNLL-03 by 18% or more.

Settles (2004) used Conditional Random Fields (CRF) and a rich set of orthographic

and semantic features to extract the named entities. The system also made use of external

resources, gazetteer lists, and previously identified entities in order to flag known named

entities.

Giuliano et al. (2005) used a Support Vector Machine (SVM) with a large number of

orthographic and contextual features extracted using the jFex software3. The system

incorporated part-of-speech tagging and word features of tokens surrounding each

analyzed token in addition to features similar to those used in this experiment. In

3 Giuliano, et al. Simple Information Extraction (SIE). ITC-irst, Istituto per la Ricerca Scientifica e Tecnologica, 2005. Available from http://tcc.itc.it/research/textec/tools-resources/sie/giuliano-sie.pdf.


addition, Giuliano et al. (2005) pruned the data instances in order to reduce the dataset

size by filtering out frequent words from the corpora because they are less likely to be

relevant than rare words.
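A minimal sketch of this kind of frequency-based instance pruning follows; the relative-frequency threshold and counting scheme are illustrative assumptions, not Giuliano et al.'s published settings:

```python
from collections import Counter

def prune_frequent(tokens, max_rel_freq=0.01):
    """Drop training instances centered on very frequent words,
    which are less likely to be (or to signal) named entities."""
    counts = Counter(t.lower() for t in tokens)
    total = len(tokens)
    return [t for t in tokens if counts[t.lower()] / total <= max_rel_freq]

tokens = ["the"] * 50 + ["TNF-alpha", "promoter", "kappa", "3"] * 2
pruned = prune_frequent(tokens, max_rel_freq=0.10)
print(len(tokens), "->", len(pruned))  # the 50 copies of "the" are removed
```

The payoff is a smaller training set, and hence faster SVM training, at the cost of possibly discarding a few informative instances.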

The comparison of the top performing systems does not single out a particular

machine learning methodology in being more efficient than others. From observation of

the rich feature sets used by these systems, which include language and domain-specific

knowledge, we may conclude that the combination of machine learning with prior

domain knowledge and a large set of linguistic features is what led to the superior

performance of these systems as compared to others that used the same machine learning

choice with a different set of features.

2.5 Language-Independent NER

A common named entity recognition task is to identify general names such as names

of persons, locations, organizations, and other miscellaneous names in newswire text.

This task is equally useful in any language as newspapers and magazines appear in many

languages around the world. The complexity of developing systems to identify such

entities in a specific language raises the need for language-independent solutions. These

were the focus of two challenge tasks: the CoNLL-02 and CoNLL-03 NER tasks. To

demonstrate the applicability of our solution – presented in subsequent chapters – to other

languages and/or domains, we include experiments using the CoNLL-02 Spanish and

Dutch datasets.

Table 2.3 and Table 2.4 summarize the methods and features used by several systems

performing language-independent named entity recognition using CoNLL-02 datasets

and CoNLL-03 datasets respectively. The composition of these datasets is described in


detail in Appendix B. CoNLL-02 concerned general NER in Spanish and Dutch

languages, while CoNLL-03 focused on English and German languages. Systems

performing language-independent NER used Support Vector Machine (SVM), Hidden

Markov Model (HMM), Maximum Entropy Markov Model (MEMM), Conditional

Random Fields (CRF), Conditional Markov Model (CMM), Robust Risk Minimization

(RRM), Voted Perceptrons (PER), Recurrent Neural Networks (RNN), AdaBoost,

memory-based techniques (MEM), or transformation-based techniques (TRAN) as their

classification methods either combined or in isolation. The features used by the different

systems are listed in abbreviated form in Table 2.3 and Table 2.4 and include some or all

of the following:

# lex: lexical features beyond simple orthographic features

# ort: orthographic information

# aff: affix information (character n-grams)

# ws: word shapes

# wv: word variations

# gaz: gazetteers

# pos: part-of-speech tags

# tri: word triggers

# cs: global case information

# doc: global document information

# par: parentheses handling

# pre: previously predicted entity tags

# quo: word appears between quotes


# bag: bag of words

# chu: chunk tags.

CoNLL-02 top performing systems used Adaboost (Carreras et al. 2002; Wu et al.

2002), character-based tries (Cucerzan and Yarowsky 2002), and stacked transformation-

based classifiers (Florian 2002) learning techniques. Carreras et al. (2002) combined

boosting and fixed depth decision trees to identify the beginning and end of an entity and

whether a token is within a named entity. Florian (2002) stacked several transformation-

based classifiers and Snow (Sparse Network of Winnows) – “an architecture for error-

driven machine learning, consisting of a sparse network of linear separator units over a

common predefined or incrementally learned feature space” – in order to boost the

system’s performance. The output of one classifier serves as input to the next one.

Cucerzan and Yarowsky (2002) used a semi-supervised approach using a small number

of seeds in a boosting manner based on character-based tries. Wu et al. (2002) combined

boosting and one-level decision trees as classifiers.

The F-score values achieved by the top performing systems in CoNLL-02 were in the

high 70’s to low 80’s for Spanish, and slightly less for Dutch. All systems used a rich set

of various features. It is interesting to note that the systems’ performance ranking,

summarized in Table 2.3, differed slightly between the two languages. Later, Giuliano

et al. (2005) classified the Dutch data using SVM and a rich set of orthographic and

contextual features in addition to part-of-speech tagging. They achieved an F-score of

75.60, which ranks second compared to the CoNLL-02 top performing systems.

The CoNLL-03 task required the inclusion of a machine learning technique and the

incorporation of additional resources such as gazetteers and/or unannotated data in


addition to the training data (Tjong Kim Sang and De Meulder 2003). All participating

systems used more complex combinations of learning tools and features than the CoNLL-

02 systems. A summary of the methods and features as well as the performance results of

CoNLL-03 systems is presented in Table 2.4. The three top performing systems used

Maximum Entropy Models (MEM). Florian et al. (2003) combined MEM with a Hidden

Markov Model (HMM), Robust Risk Minimization, a transformation-based technique,

and a large set of orthographic and linguistic features. Klein et al. (2003) used an HMM

and a Conditional Markov Model in addition to Maximum Entropy. Chieu and Ng (2003)

did not combine MEM with other techniques.

An interesting observation is that the systems that combined Maximum Entropy with

other techniques performed consistently in the two languages, while Chieu and Ng (2003)

ranked second on the English data and #12 on the German data. Wong and Ng (2007)

used a Maximum Entropy approach with the CoNLL-03 data and a collection of

unannotated data and achieved an F-score of 87.13 on the English data. The system made

use of “a novel yet simple method of exploiting this empirical property of one class per

named entity to further improve the NER accuracy” (Wong and Ng 2007). Compared to

the CoNLL-03 systems, the Wong and Ng (2007) system ranks third on the English data

experiments. No results were reported using the German data.

Another important note is that the CoNLL-03 systems achieved much higher

accuracy levels on the English data, with F-scores reaching the high 80’s, while F-score

levels using the German data remained comparable to CoNLL-02 results going as high as

72.41 for the best score. This observation leads us to confirm our earlier conclusion that

judging the performance of a machine learning technique based on NER performance


would not be necessarily accurate, as the same system achieves inconsistent performance

levels when used with different languages. Our conclusion remains that the performance

of a given NER system is a result of the combination of classification techniques,

features, and external resources used, and no one component may be deemed responsible

for the outcome separately. The quality and complexity of the training and test data is

also a major contributing factor in reaching a certain performance.

Poibeau et al. (2003) discuss the issues of multilingualism and propose an open

framework to support multiple languages, including Arabic, Chinese, English,

French, German, Japanese, Finnish, Malagasy, Persian, Polish, Russian, Spanish and

Swedish. The project maintains resources for all supported languages using the Unicode

character set and applies almost the same architecture to all languages. Interestingly, the

multilingual system capitalizes on commonality across Indo-European languages and

shares external resources such as dictionaries by different languages.

In conclusion, the review of the NER approaches and how they are applied in

different languages and domains demonstrates that the performance of an NER system in

a given context depends on the overall solution including the classification technique(s),

features, and external resources used. The quality and complexity of the training and test

data also impacts the final accuracy level. Many researchers who worked on biomedical

NER noted that the same systems that achieved high performance measures on general or

multilingual NER failed to achieve similar results when used in the biomedical domain.

This highlights the complex nature of biomedical NER and the need for special

approaches to deal with its inherent challenges.


Table 2.1 - Overview of BioNLP Methods, Features, And External Resources

System | Methods/Features/External Resources | Notes
Zhou and Su (2004) | SVM + HMM + aff + ort + gen + gaz + pos + tri + ab + cas + pre | Combined models + cascaded entities + previous predictions + language & domain resources
Finkel et al. (2004) | MEMM + lex + aff + ws + gaz + pos + syn + ab + doc + par + pre + B + W | Language & domain resources + lexical features + previous predictions + boundary adjustment + global document information + Web
Settles (2004) | CRF + lex + aff + ort + ws + gaz + tri + pre + W | Domain resources + Web + previous predictions
Song et al. (2004) | SVM + CRF + lex + aff + ort + pos + np + pre + V | Combined models + language & domain resources + previous predictions
Zhao (2004) | HMM + lex + pre + M | Lexical features + domain resources + previous predictions + Medline
Rössler (2004) | SVM + HMM + aff + ort + gen + len + pre + M | Combined models + domain resources + previous predictions + Medline
Park et al. (2004) | SVM + aff + ort + ws + gen + wv + pos + np + tri + M + P | Language & domain resources + Medline + Penn Treebank corpus
Lee, Hwang et al. (2004) | SVM + lex + aff + pos + Y + G | Lexical features + language resources + Yapex corpus + Gapscore corpus
Giuliano et al. (2005) | SVM + lex + ort + pos + ws + wv | Lexical features + language resources + collocation + instance pruning

SVM: Support Vector Machine; HMM: Hidden Markov Model; MEMM: Maximum Entropy Markov Model; CRF: Conditional Random Fields; lex: lexical features; aff: affix information (character n-grams); ort: orthographic information; ws: word shapes; gen: gene sequences (ATCG sequences); wv: word variations; len: word length; gaz: gazetteers; pos: part-of-speech tags; np: noun phrase tags; syn: syntactic tags; tri: word triggers; ab: abbreviations; cas: cascaded entities; doc: global document information; par: parentheses handling; pre: previously predicted entity tags; External resources (ext): B: British National Corpus; M: MEDLINE corpus; P: Penn Treebank II corpus; W: world wide web; V: virtually generated corpus; Y: Yapex; G: GAPSCORE. Legend Source: (Kim et al. 2004)


Table 2.2 – Performance Comparison of Systems Using BioNLP Datasets4 (Recall / Precision / Fβ=1)

System | 1978-1989 Set | 1990-1999 Set | 2000-2001 Set | S/1998-2001 Set | Total
Zhou and Su (2004) with post-processing | 75.3 / 69.5 / 72.3 | 77.1 / 69.2 / 72.9 | 75.6 / 71.3 / 73.8 | 75.8 / 69.5 / 72.5 | 76.0 / 69.4 / 72.6 (see note 5)
Zhou and Su (2004) without post-processing | -- | -- | -- | -- | -- / -- / 60.3
Finkel et al. (2004) | 66.9 / 70.4 / 68.6 | 73.8 / 69.4 / 71.5 | 72.6 / 69.3 / 70.9 | 71.8 / 67.5 / 69.6 | 71.6 / 68.6 / 70.1
Settles (2004) | 63.6 / 71.4 / 67.3 | 72.2 / 68.7 / 70.4 | 71.3 / 69.6 / 70.5 | 71.3 / 68.8 / 70.1 | 70.3 / 69.3 / 69.8
Giuliano et al. (2005) | -- | -- | -- | -- | 64.4 / 69.8 / 67.0
Song et al. (2004) | 60.3 / 66.2 / 63.1 | 71.2 / 65.6 / 68.2 | 69.5 / 65.8 / 67.6 | 68.3 / 64.0 / 66.1 | 67.8 / 64.8 / 66.3
Zhao (2004) | 63.2 / 60.4 / 61.8 | 72.5 / 62.6 / 67.2 | 69.1 / 60.2 / 64.7 | 69.2 / 60.3 / 64.4 | 69.1 / 61.0 / 64.8
Rössler (2004) | 59.2 / 60.3 / 59.8 | 70.3 / 61.8 / 65.8 | 68.4 / 61.5 / 64.8 | 68.3 / 60.4 / 64.1 | 67.4 / 61.0 / 64.0
Park et al. (2004) | 62.8 / 55.9 / 59.2 | 70.3 / 61.4 / 65.6 | 65.1 / 60.4 / 62.7 | 65.9 / 59.7 / 62.7 | 66.5 / 59.8 / 63.0
Lee, Hwang et al. (2004) | 42.5 / 42.0 / 42.2 | 52.5 / 49.1 / 50.8 | 53.8 / 50.9 / 52.3 | 52.3 / 48.1 / 50.1 | 50.8 / 47.6 / 49.1
Baseline (Kim et al. 2004) | 47.1 / 33.9 / 39.4 | 56.8 / 45.5 / 50.5 | 51.7 / 46.3 / 48.8 | 52.6 / 46.0 / 49.1 | 52.6 / 43.6 / 47.7

4 The JNLPBA-04 testing dataset is roughly divided into four subsets based on publication years: the 1978-1989 set, 1990-1999 set, 2000-2001 set, and S/1998-2001 set. See Appendix B for more details.
5 The overall F-score achieved in (Zhou and Su 2004) is 72.6% after applying additional post-processing using dictionaries, named entity lists, etc. Without post-processing, the F-score achieved is 60.3%.


Table 2.3 – Comparison of Systems Using the CoNLL-02 Data

System | Methods/Features | Spanish Results (Rec / Prec / Fβ=1) | Rank | Dutch Results (Rec / Prec / Fβ=1) | Rank
Carreras et al. (2002) | ADA + decision trees + lex + pre + pos + gaz + ort + ws | 81.40 / 81.38 / 81.39 | 1 | 76.29 / 77.83 / 77.05 | 1
Florian (2002) | Stacked TRAN + Snow + forward-backward | 79.40 / 78.70 / 79.05 | 2 | 74.89 / 75.10 / 74.99 | 3
Cucerzan and Yarowsky (2002) | Character-based tries + pos + aff + lex + gaz | 76.14 / 78.19 / 77.15 | 3 | 71.62 / 73.03 / 72.31 | 5
Wu et al. (2002) | ADA + decision tree + lex + pos + gaz + cs + pre | 77.38 / 75.85 / 76.61 | 4 | 73.83 / 76.95 / 75.36 | 2
Burger et al. (2002) | HMM + gaz + pre | 77.44 / 74.19 / 75.78 | 5 | 72.45 / 72.69 / 72.57 | 4
Tjong Kim Sang (2002b) | MEM + stacking + combination | 75.55 / 76.00 / 75.78 | 6 | 68.88 / 72.56 / 70.67 | 7
Patrick et al. (2002) | Six stages using compiled lists and n-grams + context | 73.52 / 74.32 / 73.92 | 7 | 68.90 / 74.01 / 71.36 | 6
Jansche (2002) | CMM + ws + cs + collocation | 73.76 / 74.03 / 73.89 | 8 | 69.26 / 70.11 / 69.68 | 8
Malouf (2002) | MEMM (also tried HMM) + pre + boundary detection | 73.39 / 73.93 / 73.66 | 9 | 65.50 / 70.88 / 68.08 | 9
Tsukamoto et al. (2002) | ADA (five cascaded classifiers) | 74.12 / 69.04 / 71.49 | 10 | 65.02 / 57.33 / 60.93 | 10
Black and Vasilakopoulos (2002) | TRAN (also tried decision trees) | 66.24 / 68.78 / 67.49 | 11 | 51.69 / 62.12 / 56.43 | 12
McNamee and Mayfield (2002) | SVM (two cascaded) + 9000 binary features | 66.51 / 56.28 / 60.97 | 12 | 63.24 / 56.22 / 59.52 | 11
Baseline (Tjong Kim Sang 2002a) | — | 56.48 / 26.27 / 35.86 | 13 | 45.19 / 64.38 / 53.10 | 13
Giuliano et al. (2005) | SVM + lex + ort + pos + ws + wv | not used | – | 69.60 / 82.80 / 75.60 * | 2

SVM: Support Vector Machine; HMM: Hidden Markov Model; MEMM: Maximum Entropy Markov Model; CRF: Conditional Random Fields; CMM: Conditional Markov Model; RRM: Robust Risk Minimization; PER: Voted Perceptrons; RNN: Neural Networks; ADA: AdaBoost; MEM: Memory-Based; TRAN: Transformation-Based; lex: lexical features; aff: affix information (character n-grams); ort: orthographic information; ws: word shapes; wv: word variations; gaz: gazetteers; pos: part-of-speech tags; tri: word triggers; cs: global case information; doc: global document information; par: parentheses handling; pre: previously predicted entity tags; quo: between quotes; bag: bag of words; chu: chunk tags


Table 2.4 – Comparison of Systems Using the CoNLL-03 Data

System | Methods/Features | English Results (Rec / Prec / Fβ=1) | Rank | German Results (Rec / Prec / Fβ=1) | Rank
Florian et al. (2003) | MEMM + HMM + RRM + TRAN + lex + pos + aff + pre + ort + gaz + chu + cs | 88.54 / 88.99 / 88.76 | 1 | 83.87 / 63.71 / 72.41 | 1
Chieu and Ng (2003) | MEMM + lex + pos + aff + pre + ort + gaz + tri + quo + doc | 88.51 / 88.12 / 88.31 | 2 | 76.83 / 57.34 / 65.67 | 12
Klein et al. (2003) | MEMM + HMM + CMM + lex + pos + aff + pre | 86.21 / 85.93 / 86.07 | 3 | 80.38 / 65.04 / 71.90 | 2
Zhang and Johnson (2003) | RRM + lex + pos + aff + pre + ort + gaz + chu + tri | 84.88 / 86.13 / 85.50 | 4 | 82.00 / 63.03 / 71.27 | 3
Carreras et al. (2003b) | ADA + lex + pos + aff + pre + ort + ws | 85.96 / 84.05 / 85.00 | 5 | 75.47 / 63.82 / 69.15 | 5
Curran and Clark (2003) | MEMM + lex + pos + aff + pre + ort + gaz + ws + cs | 85.50 / 84.29 / 84.89 | 6 | 75.61 / 62.46 / 68.41 | 7
Mayfield et al. (2003) | SVM + HMM + lex + pos + aff + pre + ort + chu + ws + quo | 84.90 / 84.45 / 84.67 | 7 | 75.97 / 64.82 / 69.96 | 4
Carreras et al. (2003a) | PER + lex + pos + aff + pre + ort + gaz + chu + ws + tri + bag | 82.84 / 85.81 / 84.30 | 8 | 77.83 / 58.02 / 66.48 | 10
McCallum and Li (2003) | CRF + lex + ort + gaz + ws | 83.55 / 84.52 / 84.04 | 9 | 75.97 / 61.72 / 68.11 | 8
Bender et al. (2003) | MEMM + lex + pos + pre + ort + gaz + chu | 83.18 / 84.68 / 83.92 | 10 | 74.82 / 63.82 / 68.88 | 6
Munro et al. (2003) | Voting + Bagging + lex + pos + aff + chu + cs + tri + bag | 84.21 / 80.87 / 82.50 | 11 | 69.37 / 66.21 / 67.75 | 9
Wu et al. (2003) | ADA (stacked 3 learners) + lex + pos + aff + pre + ort + gaz | 81.39 / 82.02 / 81.70 | 12 | 75.20 / 59.35 / 66.34 | 11
Whitelaw and Patrick (2003) | HMM + aff + pre + cs | 78.05 / 81.60 / 79.78 | 13 | 71.05 / 44.11 / 54.43 | 15
Hendrickx and Bosch (2003) | MEM + lex + pos + aff + pre + ort + gaz + chu | 80.17 / 76.33 / 78.20 | 14 | 71.15 / 56.55 / 63.02 | 13
De Meulder and Daelemans (2003) | MEM + lex + pos + aff + ort + gaz + chu + cs | 78.13 / 75.84 / 76.97 | 15 | 63.93 / 51.86 / 57.27 | 14
Hammerton (2003) | RNN + lex + pos + gaz + chu | 53.26 / 69.09 / 60.15 | 16 | 63.49 / 38.25 / 47.74 | 16
Baseline | — | 50.90 / 71.91 / 59.61 | 17 | 31.86 / 28.89 / 30.30 | 17
Giuliano et al. (2005) | SVM + lex + ort + pos + ws + wv | 76.70 / 90.50 / 83.10 * | 11 | not used | –
Talukdar et al. (2006) | CRF + Three NE lists + Context pattern induction + tri + pruning | F-score = 84.52 * | 8 | not used | –
Wong and Ng (2007) | MEMM + 300 million unlabeled tokens + lex + aff + pre + ort + cs | F-score = 87.13 * | 3 | not used | –

SVM: Support Vector Machine; HMM: Hidden Markov Model; MEMM: Maximum Entropy Markov Model; CRF: Conditional Random Fields; CMM: Conditional Markov Model; RRM: Robust Risk Minimization; PER: Voted Perceptrons; RNN: Neural Networks; ADA: AdaBoost; MEM: Memory-Based; TRAN: Transformation-Based; lex: lexical features; aff: affix information (character n-grams); ort: orthographic information; ws: word shapes; wv: word variations; gaz: gazetteers; pos: part-of-speech tags; tri: word triggers; cs: global case information; doc: global document information; par: parentheses handling; pre: previously predicted entity tags; quo: between quotes; bag: bag of words; chu: chunk tags


2.6 Named Entity Recognition Challenges

In this section we summarize the named entity recognition challenges in different

domains:

# The explosion of information raises the need for automated tools to extract

meaningful entities and concepts from unstructured text in various domains.

# An entity that is relevant in one domain may be irrelevant in another.

# Named entities may take any shape, often composed of multiple words. This

raises more challenges in correctly identifying the beginning and the end of a

multi-word NE.

# NER solutions are not easily portable across languages and domains, and the

same system performs inconsistently in different contexts.

# General NER systems that have an F-score in the high 80’s and higher do not

perform as well in the biomedical context, with F-score values lagging behind by

15 to 30 points.

# Manually annotating training data to be used with machine learning techniques is

a labor expensive, time consuming, and error prone task.

# The extension of NER to challenging domains with multiple NE classes makes

manual annotation very difficult to accomplish, especially with growing

nomenclature.

# There is a growing need for systems that use semi-supervised or unsupervised

machine learning techniques in order to use mostly unannotated training data.

# Due to the large size of potential NER datasets in real-world applications,

classification techniques need to be highly scalable.


# The quality of the annotated training data, the features and external resources used

impact the overall recognition performance and accuracy.

# Extraction of language- and domain-specific features requires additional processes.

# The effect of part-of-speech (POS) tagging on performance may be questionable.

Collier and Takeuchi (2004) note that simple orthographic features have

consistently been proven to be more valuable than POS. This observation has

been confirmed during Phase One of this work (presented in Chapter 4).

# It is difficult to judge the efficacy of a given technique because of the different

components used to construct the total solution. There is no consistent way to

conclude whether a particular machine learning or other approach is best suited

for NER. The quality of the recognition can only be seen as a whole.

In the following chapter, we briefly introduce the theory of support vector machines

as our choice of machine learning method for biomedical named entity recognition.

Given the unique challenges in recognizing biomedical entities discussed earlier in this

chapter, we decided to select a classification model that is capable of handling a high

number of features and of discovering patterns in a large input space with irregular

representation of classes. Support vector machines promise to handle both questions but

not without challenges of their own.


Chapter 3

Support Vector Machines

In this chapter we present a brief summary of the Support Vector Machine (SVM)

theory and its application in the area of named entity recognition. An introduction to the

mathematical foundation of support vector machines for binary classification is

presented, followed by an overview of the different approaches used for multi-class

problems. We then discuss the scalability issues of support vector machines and how they

have been addressed in the literature.

3.1 Support Vector Machines

The Support Vector Machine (SVM) is a powerful machine learning tool based on

firm statistical and mathematical foundations concerning generalization and optimization

theory. It offers a robust technique for many aspects of data mining including

classification, regression, and outlier detection. SVM was first suggested by Vapnik in

the early 1970’s but it began to gain popularity in the mid-1990’s. SVM is based on

Vapnik’s statistical learning theory (Vapnik 1998) and falls at the intersection of kernel

methods and maximum margin classifiers. Support vector machines have been

successfully applied to many real-world problems such as face detection, intrusion

detection, handwriting recognition, information extraction, and others.

Support Vector Machine is an attractive method due to its high generalization

capability and its ability to handle high-dimensional input data. Compared to neural

networks or decision trees, SVM does not suffer from the local minima problem, it has

fewer learning parameters to select, and it produces stable and reproducible results. If two


SVMs are trained on the same data with the same learning parameters, they produce the

same results independent of the optimization algorithm they use. However, SVMs suffer

from slow training especially with non-linear kernels and with large input data size.

Support vector machines are primarily binary classifiers. Extensions to multi-class

problems are most often done by combining several binary machines in order to produce

the final multi-classification results. The more difficult problem of training one SVM to

classify all classes uses much more complex optimization algorithms and is much

slower to train than binary classifiers.
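The most common way of combining several binary machines is one-vs-rest: one binary machine is trained per class (that class against all others), and the class whose machine produces the highest decision score wins. The sketch below illustrates only the combination scheme; the `CentroidBinary` class is a toy stand-in for a real binary SVM, not an SVM itself:

```python
class CentroidBinary:
    """Toy binary classifier: decision score is the difference of squared
    distances to the negative and positive class centroids."""
    def fit(self, X, y):
        pos = [x for x, label in zip(X, y) if label == 1]
        neg = [x for x, label in zip(X, y) if label == -1]
        self.cp = [sum(col) / len(pos) for col in zip(*pos)]
        self.cn = [sum(col) / len(neg) for col in zip(*neg)]
        return self

    def score(self, x):
        dp = sum((a - b) ** 2 for a, b in zip(x, self.cp))
        dn = sum((a - b) ** 2 for a, b in zip(x, self.cn))
        return dn - dp  # positive => closer to the positive centroid

def one_vs_rest_fit(X, y, classes):
    """Train one binary machine per class (that class vs. all others)."""
    return {c: CentroidBinary().fit(X, [1 if label == c else -1 for label in y])
            for c in classes}

def one_vs_rest_predict(machines, x):
    """Final multi-class label = class whose machine scores highest."""
    return max(machines, key=lambda c: machines[c].score(x))

X = [(0, 0), (0, 1), (5, 5), (5, 6), (10, 0), (10, 1)]
y = ["A", "A", "B", "B", "C", "C"]
machines = one_vs_rest_fit(X, y, ["A", "B", "C"])
print(one_vs_rest_predict(machines, (0.2, 0.5)))
```

With real SVMs, the same scheme trains k binary machines for k classes; the alternative one-vs-one scheme instead trains k(k-1)/2 pairwise machines and combines them by voting.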

In the following sections, we present the SVM mathematical foundation for the

binary classification case, then discuss the different approaches applied for

multi-class classification.

3.2 Binary Support Vector Classification

Binary classification is the task of classifying the members of a given set of objects

into two groups on the basis of whether they have some property or not. Many

applications take advantage of binary classification tasks, where the answer to some

question is either yes or no. Examples include product quality control, automated medical

diagnosis, face detection, intrusion detection, or finding matches to a specific class of

objects.

The mathematical foundation of Support Vector Machines and the underlying

Vapnik-Chervonenkis dimension (VC dimension) is described in detail in the literature

covering statistical learning theory (Vapnik 1998; Abe 2005; Müller et al. 2001;

Kecman 2001; Joachims 2002; Alpaydin 2004) and many other sources. In this section

we briefly introduce the mathematical background of SVMs in the linearly separable and


non-linearly separable cases. One of the attractive properties of support vector machines

is the geometric intuition of its principles, which allows one to relate the mathematical

interpretation to simpler geometric analogies.

3.2.1 Linearly Separable Case

In the linearly separable case, there exists one or more hyperplanes that may separate

the two classes represented by the training data with 100% accuracy. Figure 3.1(a) shows

many separating hyperplanes (in the case of a two-dimensional input the hyperplane is

simply a line). The main question is how to find the optimal hyperplane that would

maximize the accuracy on the test data. The intuitive solution is to maximize the gap or

margin separating the positive and negative examples in the training data. The optimal

hyperplane is then the one that evenly splits the margin between the two classes, as

shown in Figure 3.1(b).

Figure 3.1 – SVM Linearly Separable Case. (a) Left: Which line best separates the two classes?

(b) Right: The line that maximizes the margin between the two classes. (c) Support vectors are those points in (b) surrounded by black squares.


In Figure 3.1(b), the data points that are closest to the separating hyperplane are

called support vectors. In mathematical terms, the problem is to find f(x) = w^T x + b

with maximal margin, such that:

w^T x_i + b = ±1  for data points that are support vectors

|w^T x_i + b| > 1  for other data points.

Assuming a linearly separable dataset, the task of learning the coefficients w and b of the

support vector machine f(x) = w^T x + b reduces to solving the following constrained

optimization problem:

find w and b that minimize:  (1/2) ||w||^2 … ( 3.1)

subject to:  y_i (w^T x_i + b) ≥ 1, ∀i. … ( 3.2)

Note that minimizing ||w|| is equivalent to maximizing the margin 2/||w|| between the two classes.

This optimization problem can be solved by using the unconstrained Lagrange

function defined as:

Q(w, b, α) = (1/2) w^T w − Σ_{i=1}^{n} α_i [ y_i (w^T x_i + b) − 1 ],  s.t. α_i ≥ 0, ∀i … ( 3.3)

where α_1, α_2, …, α_n are Lagrange multipliers and α = [α_1, α_2, …, α_n]^T.

The support vectors are those data points x_i with α_i > 0, i.e., the data points within

each class that are the closest to the separation margin.

Solving for the necessary optimization conditions results in

w = Σ_{i=1}^{n} α_i y_i x_i,  where  Σ_{i=1}^{n} α_i y_i = 0. … ( 3.4)


By substituting w = Σ_{i=1}^{n} α_i y_i x_i into the Lagrange function and by using

Σ_{i=1}^{n} α_i y_i = 0 as a new constraint, the original optimization problem can be rewritten as its equivalent

unconstrained dual problem as follows:

Find α that maximizes  Σ_i α_i − (1/2) Σ_i Σ_j α_i α_j y_i y_j x_i^T x_j … ( 3.5)

subject to  Σ_{i=1}^{n} α_i y_i = 0,  α_i ≥ 0, ∀i. … ( 3.6)

The optimization problem is therefore a convex quadratic programming problem

which has a global minimum. This characteristic is a major advantage of support vector

machines as compared to neural networks or decision trees. The optimization problem

can be solved in O(n^3) time, where n is the number of input data points.
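To make the dual problem concrete, the sketch below maximizes (3.5) on a toy two-dimensional dataset with plain NumPy projected gradient ascent. As a deliberate simplification the bias b is fixed at zero, which drops the equality constraint in (3.6); the function name, toy data, and step size are illustrative only, not the algorithm used by production SVM solvers.

```python
import numpy as np

def fit_hard_margin_dual(X, y, lr=0.01, epochs=2000):
    """Projected gradient ascent on the dual (3.5), with the bias b fixed at 0
    so the equality constraint sum_i alpha_i y_i = 0 drops out
    (an illustrative simplification, not a production solver)."""
    n = len(y)
    K = X @ X.T                                    # Gram matrix of x_i^T x_j
    alpha = np.zeros(n)
    for _ in range(epochs):
        grad = 1.0 - y * ((alpha * y) @ K)         # gradient of the dual objective
        alpha = np.maximum(0.0, alpha + lr * grad) # project back onto alpha_i >= 0
    w = (alpha * y) @ X                            # w = sum_i alpha_i y_i x_i, eq. (3.4)
    return w, alpha

# Toy linearly separable data with labels +1 / -1
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, alpha = fit_hard_margin_dual(X, y)
```

At the optimum, only the points closest to the separating hyperplane keep α_i > 0, and those points satisfy w^T x_i = ±1, matching the support vector definition above.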

3.2.2 Non-Linearly Separable Case

In the non-linearly separable case, it is not possible to find a linear hyperplane that

separates all positive and negative examples. To solve this case, the margin maximization

technique may be relaxed by allowing some data points to fall on the wrong side of the

margin, i.e., to allow a degree of error in the separation. Slack variables ξ_i are introduced

to represent the error degree for each input data point. Figure 3.2 demonstrates the non-

linearly separable case, where data points may fall into one of three categories:

1. Points falling outside the margin that are correctly classified, with ξ_i = 0

2. Points falling inside the margin that are still correctly classified, with 0 < ξ_i ≤ 1

3. Points falling on the wrong side of the separating hyperplane, i.e., incorrectly classified, with ξ_i > 1.


Figure 3.2 – SVM Non-Linearly Separable Case

If all slack variables have a value of zero, the data is linearly separable. For the non-

linearly separable case, some slack variables have nonzero values. The optimization goal

in this case is to maximize the margin while minimizing the number of points with ξ_i ≠ 0, i.e., to

minimize the margin error.

In mathematical terms, the optimization goal becomes:

find w and b that minimize:  (1/2) ||w||^2 + C Σ_i ξ_i … ( 3.7)

s.t.  y_i (w^T x_i + b) ≥ 1 − ξ_i,  ξ_i ≥ 0, ∀i … ( 3.8)

where C is a user-defined parameter that penalizes nonzero slack variables, enforcing

that all slack variables are as close to zero as possible. Finding the most appropriate choice for C depends on the input data

set in use and requires tuning.
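The three categories of data points follow directly from constraint (3.8), since at the optimum ξ_i = max(0, 1 − y_i(w^T x_i + b)). A small sketch, using a hypothetical separator w, b and sample points chosen to land in each regime:

```python
import numpy as np

def slack_variables(w, b, X, y):
    """xi_i = max(0, 1 - y_i (w^T x_i + b)): zero outside the margin,
    in (0, 1] inside the margin but still correct, > 1 when misclassified."""
    return np.maximum(0.0, 1.0 - y * (X @ w + b))

w, b = np.array([1.0, 0.0]), 0.0    # hypothetical separator: the line x1 = 0
X = np.array([[2.0, 0.0],    # outside the margin, correct      -> xi = 0
              [0.5, 0.0],    # inside the margin, still correct -> xi = 0.5
              [-1.0, 0.0]])  # wrong side of the hyperplane     -> xi = 2
y = np.array([1.0, 1.0, 1.0])
xi = slack_variables(w, b, X, y)    # array([0. , 0.5, 2. ])
```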

As in the linearly separable problem, this optimization problem can be converted to

its dual problem:

find α that maximizes  Σ_i α_i − (1/2) Σ_i Σ_j α_i α_j y_i y_j x_i^T x_j … ( 3.9)


s.t.  Σ_{i=1}^{n} α_i y_i = 0,  0 ≤ α_i ≤ C, ∀i. … ( 3.10)

In order to solve the non-linearly separable case, SVM introduces the use of a

mapping function Φ: R^M → F to translate the non-linear input space into a higher-

dimension feature space where the data is linearly separable. Figure 3.3 presents an

example of the effect of mapping the nonlinear input space into a higher-dimension linear

feature space.

Figure 3.3 – SVM Mapping to Higher Dimension Feature Space

The dual problem is solved in feature space, where its aim becomes to:

find α that maximizes  Σ_i α_i − (1/2) Σ_i Σ_j α_i α_j y_i y_j Φ(x_i)^T Φ(x_j) … ( 3.11)

s.t.  Σ_{i=1}^{n} α_i y_i = 0,  0 ≤ α_i ≤ C, ∀i … ( 3.12)

and the resulting SVM is of the form: f(x) = w^T Φ(x) + b = Σ_{i=1}^{n} α_i y_i Φ(x_i)^T Φ(x) + b.


3.2.3 The Kernel “Trick”

Mapping the input space into a higher-dimension feature space transforms the

nonlinear classification problem into a linear one that is easier to solve.

However, the problem is then more likely to face the curse of dimensionality. The kernel

“trick” allows the computation of the vector product Φ(x_i)^T Φ(x_j) in the lower-

dimension input space.

From Mercer’s theorem, there is a class of mappings Φ such that

Φ(x)^T Φ(y) = K(x, y), where K is a corresponding kernel function. Being able to compute

the vector products in the lower-dimension input space while solving the classification

problem in the linearly separable feature space is a major advantage of SVMs using a

kernel function. The dual problem then becomes to:

find α that maximizes  Σ_i α_i − (1/2) Σ_i Σ_j α_i α_j y_i y_j K(x_i, x_j) … ( 3.13)

subject to  Σ_{i=1}^{n} α_i y_i = 0,  0 ≤ α_i ≤ C, ∀i … ( 3.14)

and the resulting SVM takes the form:

f(x) = w^T Φ(x) + b = Σ_{i=1}^{n} α_i y_i K(x_i, x) + b. … ( 3.15)

Examples of kernel functions:

# Linear kernel (identity kernel): K(x, y) = x^T y

# Polynomial kernel with degree d: K(x, y) = (x^T y + 1)^d

# Radial basis kernel with width σ: K(x, y) = exp(−||x − y||^2 / (2σ^2))


# Sigmoid kernel with parameters κ and θ: K(x, y) = tanh(κ x^T y + θ)

# It’s also possible to use other kernel functions to solve specific problems.
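Mercer's condition can be checked numerically for the degree-2 polynomial kernel, whose explicit feature map is known in closed form; the helper names below are ours, for illustration only:

```python
import numpy as np

def poly2_kernel(x, z):
    """Polynomial kernel of degree d = 2: K(x, z) = (x^T z + 1)^2."""
    return (x @ z + 1.0) ** 2

def phi2(x):
    """Explicit degree-2 feature map for 2-D input, chosen so that
    phi2(x) @ phi2(z) == poly2_kernel(x, z), as Mercer's theorem requires."""
    x1, x2 = x
    return np.array([1.0,
                     np.sqrt(2) * x1, np.sqrt(2) * x2,
                     x1 ** 2, x2 ** 2,
                     np.sqrt(2) * x1 * x2])

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])
# Both sides evaluate to the same number, but the kernel never leaves 2-D space.
assert np.isclose(poly2_kernel(x, z), phi2(x) @ phi2(z))
```

The kernel evaluates a 6-dimensional inner product at the cost of a 2-dimensional one, which is the computational point of the trick.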

3.3 Multi-class Support Vector Classification

For classification problems with multiple classes, different approaches have been

developed in order to decide whether a given data point belongs to one of the classes or

not. The most common approaches are those that combine several binary classifiers and

use a voting technique to make the final classification decision. These include: One-

Against-All (Vapnik 1998), One-Against-One (Kreßel 1999), Directed Acyclic Graph

(DAG) (Platt et al. 2000), and the Half-Against-Half method (Lei and Govindaraju 2005). A

more complex approach is one that attempts to build one Support Vector Machine that

separates all classes at the same time. In this section we will briefly introduce these multi-

class SVM approaches. Figure 3.4 compares the decision boundaries for three classes

using a One-Against-All SVM, a One-Against-One SVM, and an All-Together SVM.

The interpretation of these decision boundaries will be discussed as we define the training

and classification techniques using each approach.

Figure 3.4 – Comparison of Multi-Class Boundaries


In the following discussion of the different multi-class approaches, we are simply

providing a high-level description of the possible arrangements of one or more SVMs in

order to recognize multiple classes. The decision boundary comparison is based solely on

the geometrical interpretation of these arrangements and do not imply that addressing the

unclassifiable regions is not possible in any of these schemes, or that one approach

consistently outperforms another. For instance, although the simple geometrical

interpretation of the One-Against-One approach suggests that it leads to better

performance than the One-Against-All approach due to the elimination of the regions

with overlapping classes, Hao et al. (2006) demonstrate experimentally that the One-

Against-One scheme is not always better than the One-Against-All scheme.

3.3.1 One-Against-All Multi-Class SVM

One-Against-All (Vapnik 1998) is the earliest and simplest multi-class SVM. For a k-

class problem, it constructs k binary SVMs. The ith SVM is trained with all the samples

from the ith class against all the samples from the other classes. To classify a sample x, x

is evaluated by all of the k SVMs and the label of the class that has the largest value of

the decision function is selected.

For a k-class problem, One-Against-All maximizes k hyperplanes separating each

class from all the rest. Since all other classes are considered negative examples during

training of each binary classifier, the hyperplane is optimized for one class only. As

illustrated in Figure 3.4, unclassifiable regions exist when more than one classifier returns

a positive classification for an example x or when all classifiers evaluate x as negative

(Abe 2005).
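The One-Against-All decision rule amounts to an argmax over the k decision values; a minimal sketch, where the decision values for one sample are hypothetical stand-ins for trained binary SVM outputs:

```python
import numpy as np

def one_against_all_predict(decision_values):
    """decision_values[i] holds f_i(x) from the i-th 'class i vs. rest' SVM;
    the class with the largest decision value wins (k evaluations per sample)."""
    return int(np.argmax(decision_values))

# Hypothetical outputs of k = 3 binary machines for one sample x:
f = np.array([-0.4, 1.3, 0.2])
label = one_against_all_predict(f)   # class 1 has the largest decision value
```

When several machines return positive values, or all return negative ones, the argmax still produces a single label, which is how implementations typically resolve the unclassifiable regions.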


3.3.2 One-Against-One or Pairwise SVM

One-Against-One (Kreßel 1999) constructs one binary machine for each pair of

classes. For a k-class problem, it constructs k(k − 1)/2 binary classifiers. To classify a

sample x, each of the k(k − 1)/2 machines evaluates x and casts a vote. Finally, the class with

the most votes is chosen.

Since One-Against-One separates two classes at a time, the separating hyperplanes

identified by this approach are tuned better than those found with One-Against-All

(Figure 3.4). Unclassifiable regions exist only when all classifiers evaluate a sample x as

negative.
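The pairwise voting rule can be sketched as follows; here `pairwise_winner` is a hypothetical stand-in for the k(k − 1)/2 trained binary machines, and the preference table is invented for illustration:

```python
from itertools import combinations

def one_against_one_predict(pairwise_winner, k):
    """pairwise_winner(i, j) returns whichever of classes i and j the (i, j)
    binary machine prefers for the sample; all k(k-1)/2 machines vote."""
    votes = [0] * k
    for i, j in combinations(range(k), 2):
        votes[pairwise_winner(i, j)] += 1
    return max(range(k), key=lambda c: votes[c])

# Hypothetical pairwise preferences for one sample with k = 3 (3 machines):
table = {(0, 1): 1, (0, 2): 2, (1, 2): 1}
label = one_against_one_predict(lambda i, j: table[(i, j)], k=3)  # class 1: 2 votes
```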

3.3.3 Directed Acyclic Graph SVM

Similar to One-Against-One SVM, Directed Acyclic Graph (DAG) SVM (Platt et al. 2000)

trains k(k − 1)/2 binary classifiers for pairwise classes. To evaluate a sample x, this

technique builds a DAG ordering the classes 1 through k to make a decision. The sample

x is first evaluated by the classifier trained on the first and the last classes in the ordering,

and the losing class is eliminated from the DAG. The process is repeated until only one class remains and its label is

chosen. Therefore, a decision is reached after (k – 1) binary SVM evaluations.

Unclassifiable regions are eliminated by excluding one class at a time.
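The elimination procedure can be sketched as a list-based loop; the pairwise decision function here is a hypothetical stand-in for the trained machines:

```python
def dag_predict(pairwise_winner, k):
    """DAGSVM evaluation: keep a list of candidate classes, repeatedly test the
    first against the last and discard the loser; exactly k - 1 of the
    k(k-1)/2 trained binary machines are evaluated per sample."""
    classes = list(range(k))
    evaluations = 0
    while len(classes) > 1:
        i, j = classes[0], classes[-1]
        if pairwise_winner(i, j) == i:
            classes.pop()      # j loses: remove it from the tail
        else:
            classes.pop(0)     # i loses: remove it from the head
        evaluations += 1
    return classes[0], evaluations

# Hypothetical pairwise decisions: the higher class index always wins.
label, n_eval = dag_predict(lambda i, j: max(i, j), k=5)
```

With k = 5 the loop returns class 4 after exactly 4 evaluations, i.e. k − 1.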

3.3.4 Half-Against-Half SVM

Half-Against-Half multi-class SVM (Lei and Govindaraju 2005) is useful for

problems where there is a close similarity between groups of classes. Figure 3.5

illustrates an example with six classes where a linear separation exists between a group of

three classes and another group of three classes. Using Half-Against-Half SVM, a binary


classifier is built that evaluates one group of classes against another group. The trained

model consists of at most ⌈log₂ k⌉ levels of binary SVMs.

To classify a sample x, this technique identifies the group of classes to which the sample

x belongs, then continues to evaluate x against a subgroup, and so on, until the final class

label is found. The classification process is similar to a decision tree and requires at most

⌈log₂ k⌉ evaluations. Figure 3.5 illustrates a possible decision tree for the six

classes.
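The evaluation count follows from halving the candidate set at each level of the tree; a minimal sketch (the function name is ours):

```python
import math

def hah_evaluations(k):
    """Half-Against-Half classification walks a decision tree over groups of
    classes, halving the remaining candidate set at each level, so a sample
    needs at most ceil(log2 k) binary evaluations."""
    count = 0
    while k > 1:
        k = math.ceil(k / 2)   # descend into one half of the remaining classes
        count += 1
    return count
```

For the six-class example above, `hah_evaluations(6)` is 3, matching ⌈log₂ 6⌉.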

Figure 3.5 – Half-Against-Half Multi-Class SVM 6

3.3.5 All-Together or All-At-Once SVM

An All-Together multi-classification approach is computationally more expensive yet

usually more accurate than all other multi-classification methods. Hsu and Lin (2002)

note that “as it is computationally more expensive to solve multi-class problems,

6 Lei and Govindaraju. 2005. Half-Against-Half Multi-class Support Vector Machines. In Proc. of the 6th International Workshop on Multiple Classifier Systems, Jun 13-15, at Seaside, CA, USA:156-164.


comparisons of these methods using large-scale problems have not been seriously

conducted.”

This approach builds one SVM that maximizes all separating hyperplanes at the same

time. Training data representing all classes is used to generate the trained model. With

this approach, there are no unclassifiable regions as each data point belongs to some class

represented in the training dataset. Figure 3.4 illustrates the elimination of unclassifiable

regions in this case.

The All-Together multi-class SVM poses a complex optimization problem as it

maximizes all decision functions at the same time (Crammer and Singer 2001).

Algorithms to decompose the problem (Hsu and Lin 2002) and to solve the optimization

problem (Tsochantaridis et al. 2004) have been developed; however, the All-Together

multi-class SVM approach remains a daunting task. Training is very slow, which

so far makes the approach unusable for real-world problems with a large data set and/or a

high number of classes.

3.3.6 Unbalanced Data Representation

Support vector machine learning is mostly a discriminative learning approach where

a hyperplane separation is learned from positive and negative examples. The unbalanced

data distribution problem arises when the number of training data examples within each

class is disproportionate, resulting in overfitting or an inadequate final classification

performance.

Several techniques have been applied in order to address the unbalanced data

distribution problem. To alleviate the problem, one may reduce the number of negative

examples by eliminating those points that are judged to be un-informative (Giuliano et al.


2005), partition the negative class into smaller classes using different identification

criteria (Lee, Hwang et al. 2004), or cascade many SVMs and/or other techniques (Song

et al. 2004; Schebesch and Stecking 2008; Park et al. 2004).

Other solutions include using different weights for each class – an option supported

by LIBSVM (Chang and Lin 2001) – weighing the regular SVM margin by posterior

probabilities (Tao et al. 2005), modifying the support vector machine with an algorithm

“that assigns different penalty coefficients to the positive and negative samples

respectively by adding a new diagonal matrix in the primal optimization problem” (Tao

et al. 2007), and modifying the decision boundary with a new offset parameter computed

as a weighted harmonic mean (WHM) of support vector decision values (Li et al. 2008b).

The new offset parameter and a shape parameter are “estimated automatically from the

distribution of training samples around the boundary via a distribution of belief degree”

(Li et al. 2008a) thereby introducing a soft decision-making boundary that reduces the

effects of data unbalance and noise in the real world data.

3.4 Named Entity Recognition Using SVM

The extension of named entity recognition to new domains with high ambiguity and

growing nomenclature raises concerns about the machine learning methodology that is

most suitable for such domains. Support Vector Machine (SVM) is a promising technique

due to its ability to handle a high number of features and to discover patterns in a large

input space with irregular representation of classes. SVM is gaining popularity as the

approach of choice for challenging data such as the biomedical NER data. Many systems

that participated in the JNLPBA-04 challenge task to identify named entities in the


GENIA biomedical corpus were able to achieve reasonable performance results using

SVM either separately or in conjunction with other machine learning approaches.

Evidence exists that using high-dimensional orthographic and contextual features

leads to good NER classification. In order to classify entities with varying shapes and

ambiguous boundaries, input data vectors are usually sparse and high-dimensional. Due

to its high generalization ability, SVM is capable of discovering patterns under these

circumstances.

Multi-class problems remain a daunting task for SVM, especially with large data size

and a high number of classes. For multi-word named entities, one needs to design a

learning solution that identifies the beginning, middle, and end of a named entity with

accuracy. There is no known best approach to handle this situation. One possibility is to

label all parts of a named entity with the same label in order to minimize the number of

classes. Another approach is to train machines to identify either the beginning or the end

of an entity, then combine all classified outputs in order to label the whole sequence of words in the

entity. The number of classes in the problem doubles using this method. If an All-

Together multi-class SVM is used, more classes lead to a much more complex

optimization problem and much longer training time.

Integration of prior domain knowledge into SVM training is currently achieved

outside of the learning and classification process. SVM handles input vectors and is not yet

capable of handling general information types.


3.5 SVM Scalability Challenges

Bennett and Campbell (2000) discuss the common usability and scalability issues of

support vector machines. In this section we summarize the SVM scalability challenges

noted in the literature, which include:

# Optimization requires O(n^3) time and O(n^2) memory for single-class training,

where n is the input size (depending on the algorithm used). To address this issue, new

optimization algorithms continue to be introduced (Joachims 2006; Serafini and

Zanni 2005; Collobert et al. 2006b, 2006a).

# Multi-class training time is much higher, especially for All-Together

optimization. Performance depends on approach used and deteriorates with more

classes.

# Slow training, especially with non-linear kernels, may be addressed by:

o Reducing the input data size: pruning, chunking, clustering.

o Reducing the number of support vectors: model decomposition and

shrinking (Joachims 1998a).

o Reducing feature dimensionality: using a priori clustering (Barros de

Almeida et al. 2000) or adaptive clustering (Boley and Cao 2004).

# SVM is impractical for large input datasets, especially with non-linear kernel

functions.

In addition to the scalability issues, SVM shares usability challenges similar to those of neural

networks and other machine learning techniques, related to the selection of model

parameters. Model parameters are often selected using a grid search, cross-validation, or

heuristic-based methods. Selection of a suitable kernel function for the problem at hand is


another designer-determined factor, which is also tackled using cross-validation, decision

trees, or heuristics. While these issues concern SVM usability, as discussed in

Chapter 4, using grid search and cross-validation to select the best model may be

infeasible with large datasets.

3.6 Emerging SVM Techniques

Recent developments in the area of support vector machines attempt to use SVM for

unsupervised learning in order to answer the need for new techniques to deal with

unannotated data. Support Vector Clustering (SVC) (Ben-Hur et al. 2001) maps the input

data into a higher-dimension feature space, where it tries to find clusters of data by

minimizing the radius of an enclosing sphere in the higher-dimension space. SVC introduced a novel idea that

takes advantage of SVM’s mapping; however, this technique remains applicable only to

toy data or very small datasets and cannot be used with real-world problems. Attempts to

improve and/or provide alternative methods for clustering using support vectors followed

(Lee and Daniels 2006; Lee 2006). This area of research would certainly benefit from

improved SVM scalability options.

Other research activities aim to accelerate SVM training for binary classification (Tsang et

al. 2005) and to introduce new kernels for classification (Sullivan and Luke 2007).

In the next chapter, we describe the NER/SVM experimentation architecture and

report the results and observations we reached by constructing a set of biomedical named

entity recognition experiments using support vector machines. These experiments were

instrumental in providing good insight into the scalability and usability challenges when


using SVM with large-scale data from the unique biomedical domain. Our research work is

influenced by reviewing the literature and the problems identified during the exploration

phase presented in the following chapter.


Chapter 4

Identifying Challenges in Large Scale NER

In this chapter we report on the investigative phase of this work, which consists of

building the infrastructure necessary to conduct the research and of designing and

implementing a series of experiments to investigate the potential challenges of large scale

named entity recognition using support vector machines. The main motivation for

conducting these baseline experiments is to identify the challenges in large scale named

entity recognition problems, to assess the feasibility of the language and domain

independent NER approach, and to obtain a set of baseline performance measures.

The baseline experiments attempt to eliminate language and domain-specific

knowledge from the named entity recognition process when applied to the English

biomedical entity recognition task, as a baseline for other languages and domains. The

biomedical field NER remains a challenging task due to growing nomenclature,

ambiguity in the left boundary of entities caused by descriptive naming, the difficulty of

manually annotating large sets of training data, and strong overlap among different entities, to

cite a few of the NER challenges in this domain.

In the following sections, we present the architecture and methods used to conduct the

experiments. We then report the experimental results both for single-class

(protein) classification and for multi-class classification, and compare the

results to those reported by other systems using the same challenge data. We conclude the

chapter with a discussion of the challenges and problems faced while conducting these

experiments which motivate our research work on scalability (and usability) issues.


4.1 Baseline Experiment Design

The baseline experiment aims to identify biomedical named entities using a

supervised learning approach. The training and testing data use the JNLPBA-04 (Kim et

al. 2004) challenge task data, where the names of proteins, cell lines, cell types, DNA and

RNA entities are previously labeled. The approach employed in this experiment is the

supervised machine learning using Support Vector Machines (SVM) (Vapnik 1998), due

to their ability to handle high-dimensional feature and input spaces. In our experiment

setup, we limit the pre- and post-processing to feature extraction and class labeling only

and do not employ additional information and/or methods with the SVM machine

learning phase, in order to assess the feasibility of the direct application of the multi-class

SVM approach and how well it performs as an (almost) stand-alone approach. This

decision may be viewed as a worst case scenario evaluation of the simplified architecture

that aims to assess the contribution of the All-Together multi-class SVM to the overall

classification results. Other approaches used in the literature to address unbalanced data

representation and to boost classification performance can still be applied in conjunction

with the machine learning technique.

The JNLPBA-04 shared task (Kim et al. 2004) is an open challenge task proposed at

the “International Joint Workshop on Natural Language Processing in Biomedicine and

its Application”. The task uses the GENIA corpus (Kim et al. 2003) described above.

The systems participating in this task employ a variety of machine learning techniques

such as Support Vector Machines (SVM), Hidden Markov Models (HMM), Maximum

Entropy Markov Models (MEMMs) and Conditional Random Fields (CRFs). Five

systems adopted SVMs either in isolation (Park et al. 2004; Lee, Hwang et al. 2004), or


in conjunction with other models (Zhou and Su 2004; Song et al. 2004; Rössler 2004).

The results of the experiment presented in this chapter are compared to the five JNLPBA-

04 task participating systems listed above, in addition to the results reported in (Giuliano

et al. 2005). Table 4.2 summarizes the performance comparison results.

This experiment is composed of two main parts – the first identifies protein named

entities only, while the second locates and classifies all five named entities (protein,

DNA, RNA, cell type, and cell line). The SVM-Light software by T. Joachims (Joachims

2002) is used for the single-class part of the experiment, and the SVM-Multiclass

software – also by T. Joachims – is used for the multi-class experiments. Table 4.3 and

Table 4.4 summarize the experiment results of both parts.

Figure 4.1 – Baseline Experiments Architecture


The training and test data pre-processing involves morphological and contextual

feature extraction only. In order to estimate a worst-case scenario of the approach used,

no instance pruning or filtering is performed prior to learning and classification, thereby

leaving the sparse nature of the data intact. No language-specific pre-processing such as

part-of-speech or noun phrase tagging is used. No dictionaries, gazetteers (seed words),

or other domain-specific knowledge are used. Figure 4.1 presents the architecture used to

conduct the experiments. The common machine learning architecture used for NER is

simplified by limiting the pre-processing stage to feature extraction and reducing post-

processing to class labeling and performance evaluation.

The initial results are promising and show that the approach used is capable of

recognizing and identifying the named entities successfully without making use of any

language-specific or domain-specific prior knowledge. Performance measures of the

experiments are reported in terms of recall, precision, and the F(β=1) score.
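These measures are computed from entity-level counts of true positives, false positives, and false negatives; a short sketch with hypothetical counts (the function name is ours):

```python
def f_score(tp, fp, fn, beta=1.0):
    """Recall, precision, and F_beta from entity-level counts; beta = 1
    weighs recall and precision equally, as in the JNLPBA-04 evaluation."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f = (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)
    return recall, precision, f

# Hypothetical counts: 70 entities found correctly, 30 spurious, 30 missed.
r, p, f1 = f_score(tp=70, fp=30, fn=30)   # recall 0.7, precision 0.7, F1 0.7
```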

4.2 Feature Selection

The training and testing data is preprocessed using the jFex software (Giuliano et al.

2005) in order to extract morphological and contextual features that do not use language-

specific knowledge such as part-of-speech or noun phrase tagging. The generated feature

space is very large, including about a million different features. The features extracted are

described below. Since words appearing separately or within windows of other words

each constitutes a feature in the lexicon, the potential number of possibilities is very high.

Including character n-grams describing prefixes, infixes, and suffixes would further

increase the number of features in the lexicon. The feature extraction process is

intentionally designed that way in order to test the scalability of the approach used and to


allow the experiments to proceed in a language-independent and domain-independent

fashion. All features are binary, i.e., each feature denotes whether the current token

possesses this feature (one) or not (zero). Character n-grams were not included in the

baseline experiment data due to memory limitations encountered during the feature

extraction process. The morphological features extracted are:

# Capitalization: token begins with a capital letter.

# Numeric: token is a numeric value.

# Punctuation: token is a punctuation mark.

# Uppercase: token is all in uppercase.

# Lowercase: token is all in lowercase.

# Single character: token length is equal to one.

# Symbol: token is a special character.

# Includes hyphen: one of the characters is a hyphen.

# Includes slash: one of the characters is a slash.

# Letters and Digits: token is alphanumeric.

# Capitals and digits: token contains caps and digits.

# Includes caps: some characters are in uppercase.

# General regular expression summarizing the word shape, e.g., Xx+-X-x+

describes a word starting with one capital letter followed by a number of

lowercase letters, then a hyphen, one capital letter, another hyphen, and ending

with a number of lowercase letters.
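The morphological checks above can be sketched as simple binary predicates. The following is an illustrative reimplementation of a few of them, including the word-shape regular expression; the function and feature names are our own, not jFex's:

```python
import re

def word_shape(token):
    """Summarize a token's shape: X (uppercase), x (lowercase), 9 (digit),
    other characters kept as-is; runs of the same class are collapsed with '+'.
    E.g. 'Abc-D-ef' -> 'Xx+-X-x+', as in the example above."""
    shape = ""
    for ch in token:
        if ch.isupper():
            shape += "X"
        elif ch.islower():
            shape += "x"
        elif ch.isdigit():
            shape += "9"
        else:
            shape += ch
    return re.sub(r"(.)\1+", r"\1+", shape)

def morphological_features(token):
    """A subset of the binary morphological features described above."""
    return {
        "init_cap": token[:1].isupper(),
        "numeric": token.isdigit(),
        "all_caps": token.isupper() and token.isalpha(),
        "all_lower": token.islower() and token.isalpha(),
        "single_char": len(token) == 1,
        "has_hyphen": "-" in token,
        "has_slash": "/" in token,
        "alphanumeric": token.isalnum()
            and any(c.isdigit() for c in token)
            and any(c.isalpha() for c in token),
        "has_caps": any(c.isupper() for c in token),
        "shape=" + word_shape(token): True,
    }
```

For a biomedical token such as "IL-2", this sketch would activate the capitalization, hyphen, and shape features (`shape=X+-9`), each becoming one binary dimension in the lexicon.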

The morphological features extracted examine a token as a whole and do not include

character n-grams features that detect in-word characteristics such as prefixes, suffixes,

or infixes. Morphological feature extraction is applied to the three tokens preceding and

following the token being examined in addition to the token itself.

Each word appearing in the training text is considered its own feature. In addition, a

consecutive collocation of tokens active over three positions around the token itself is

used in order to provide a moving window of consecutive tokens which describes the

context of the token relative to its surroundings.

The number of features extracted from a given textual training dataset grows as the

number of words in the text increases, mostly due to the windows of words used to capture

the context in which a word appears and the word shape description. Other orthographic

features such as alphanumeric, includes hyphen, includes slash, uppercase, lowercase,

etc., are represented by one feature each, which leads to a small number of features.

Assuming a total of 20 orthographic features, and assuming that these features are

observed within a window of seven tokens, the total number of potential orthographic

features may reach (20 * 7) = 140 orthographic features.

To illustrate how large the number of features may grow with the number of words in

the training text, we examine the JNLPBA-04 biomedical training dataset contents. The

dataset contains 492,551 tokens in total, with 22,056 unique tokens or words. Using a

window of seven words to describe the context, three preceding the word being examined

and three following it, the possible representations of context are:

# the token itself (we will denote it by ABC in the rest of the list)

# one preceding token + ABC

# ABC + one following token

# one preceding token + ABC + one following token

# token preceding by two places + ABC

# ABC + token following by two places

# token preceding by two places + ABC + token following by two places

# token preceding by three places + ABC + token following by three places.
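The window combinations above can be made concrete with a small sketch. The function below generates every consecutive collocation of tokens containing the token itself within a [-width, +width] window, a superset of the combinations listed; it is a simplified illustration, not the actual jFex implementation, and the feature-name format is our own:

```python
def context_features(tokens, i, width=3):
    """Generate contextual features for tokens[i]: the token itself plus
    every consecutive collocation of neighbors inside [-width, +width]."""
    feats = []
    n = len(tokens)
    for left in range(width + 1):       # tokens taken before position i
        for right in range(width + 1):  # tokens taken after position i
            lo, hi = i - left, i + right
            if lo < 0 or hi >= n:       # skip windows falling off the text
                continue
            window = tokens[lo:hi + 1]
            feats.append("CTX[%+d..%+d]=%s" % (-left, right, "_".join(window)))
    return feats
```

For the sentence fragment ["the", "IL-2", "gene", "expression"] and the token "IL-2", a width of 1 yields four features: the token itself, the token with its left neighbor, with its right neighbor, and with both. Each distinct collocation becomes one binary feature, which is why the lexicon grows so rapidly with the training text.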

With 22,056 unique tokens, the number of combinations composed of three

consecutive tokens is 22,056 * 22,055 * 22,054 = 10,728,059,794,320 combinations. This

number is much higher if we consider all combinations with a window of seven tokens.

In practice, since feature extraction is based on the actual tokens and sequences of tokens

appearing in the training text, the number of features corresponding to the different

combinations of tokens describing the context is much less than the theoretical estimate,

yet it remains a very high number.

In addition to the contextual features described above, a word shape feature (the

general regular expression described earlier) is generated for every word/token in the training text. Without

considering word shapes within a moving window, there are as many word shape features

as there are words in the text. If a window of word shapes is used, similar to the

contextual window of tokens, this number can get as high as the contextual features

number. Other complex features such as descriptions of parts of words, optionally used

within a window of tokens, also lead to a high number of potential features. The total

number of binary features extracted using the complete JNLPBA-04 training dataset and

a window of [-3, 3] (a total of 7 tokens) is 1,161,970 features.

Since biomedical named entities often are composed of more than one token, special

labeling for the beginning, the middle, and the ending of a named entity sequence is often

used. However, in the initial experiments, all tokens within a sequence use the same label

during the SVM training phase. The output of the SVM classification phase is post-

processed in order to label the beginning, middle, and end part of a sequence differently,

as required by the JNLPBA-04 task evaluation scripts. The post-processing labeling

scripts did not attempt to correct classification errors that may arise while identifying the

beginning of a sequence, however, the overall performance results are still indicative of

the feasibility of the approach used.
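The relabeling step can be sketched as follows. This is an illustrative post-processing function, not the actual scripts used for the JNLPBA-04 evaluation; as noted above, it makes no attempt to correct errors at sequence beginnings, so two adjacent entities of the same class would be merged into one:

```python
def to_bio(labels):
    """Relabel a flat per-token class sequence into B-/I- tags: the first
    token of each contiguous entity run gets B-, the rest I-; 'O' stays."""
    bio = []
    prev = "O"
    for lab in labels:
        if lab == "O":
            bio.append("O")
        elif lab == prev:
            bio.append("I-" + lab)   # continuation of the current entity
        else:
            bio.append("B-" + lab)   # a new entity begins here
        prev = lab
    return bio
```

For example, the flat sequence ["O", "protein", "protein", "O", "DNA"] would be relabeled to ["O", "B-protein", "I-protein", "O", "B-DNA"], the format expected by the task evaluation scripts.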

4.3 Single Class Results

Using SVM-Light (Joachims 2002), a single-class support vector machine is trained to

recognize protein name sequences. The trained machine is then used to classify proteins

in the test data. Performance results of the protein classification task are summarized in

Table 4.3. The hardware configuration used for single-class classification experiments is

a single-processor Windows-based Pentium IV machine with 1 GB of RAM. This

configuration was enough to complete each single-class experiment successfully within

20-30 minutes. Since no pre-processing was performed on the training and testing data

besides features extraction, the positive examples in the data sets remained scarce. As a

result, we consider the performance results reported in Table 4.3 to represent a worst-

case indication of the potential performance.

SVM-Light (Joachims 2002) offers the option of “boosting” the weight of the positive

examples relative to the negative ones. We experimented with boosting factors of 2, 4,

and 8 in order to counter the effect of positive data scarcity. The relative performance

results with different boosting factors are reported in Table 4.1 and Table 4.3. As

expected, increasing the positive weight boosting factor led to an improved recall

measure at the expense of the precision measure. The resulting Fβ=1-score improved with

a boosting factor of two and four relative to the experiment without any positive weight

boosting. However, a further increase of the positive boosting factor to eight led to a

decrease of the overall Fβ=1-score, as a result of the decreasing precision measure. A

careful balance of the recall and precision results is required in order to maintain an

overall Fβ=1-score, if such a measure is deemed to be the final performance indicator.

Table 4.1 – Effect of Positive Example Boosting

No positive boosting          68.06 / 59.29 / 63.37
Positive boosting factor = 2  75.54 / 58.82 / 66.14
Positive boosting factor = 4  78.70 / 57.30 / 66.32
Positive boosting factor = 8  78.49 / 54.84 / 64.56

(recall / precision / Fβ=1-score)
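The recall/precision trade-off visible in Table 4.1 is governed by the usual Fβ formula. The helper below is a hypothetical illustration, not part of the experiment pipeline; its values agree with the reported Fβ=1 column to within rounding of the recall and precision inputs:

```python
def f_beta(recall, precision, beta=1.0):
    """F-measure combining recall and precision; beta = 1 weighs them equally,
    beta > 1 favors recall, beta < 1 favors precision."""
    if recall == 0 and precision == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# e.g. f_beta(68.06, 59.29) evaluates to about 63.37, the no-boosting row.
```

This makes the pattern in Table 4.1 easy to read: boosting raises recall faster than it lowers precision up to a factor of four, after which the precision loss dominates and the harmonic mean falls.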

4.4 Multi-class Results

The SVM-Multiclass implementation by T. Joachims is based on (Crammer and

Singer 2001) and uses a different quadratic optimization algorithm described in

(Tsochantaridis et al. 2004). The SVM-Multiclass implementation uses an “All-

Together” multi-classification approach, which is computationally more expensive yet

usually more accurate than “One-Against-All” or “One-Against-One” multi-classification

methods. Hsu and Lin (Hsu and Lin 2002) note that “as it is computationally more

expensive to solve multi-class problems, comparisons of these methods using large-scale

problems have not been seriously conducted. Especially for methods solving multi-class

SVM in one step, a much larger optimization problem is required so up to now

experiments are limited to small data sets.” The multi-class experiment presented in this

dissertation is one such attempt at solving a large-scale problem using an “All-Together”

classification method.
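The binary-decomposition alternatives mentioned above can be illustrated with their usual combination rules. This is a schematic sketch only; SVM-Multiclass itself solves a single joint optimization ("All-Together") rather than combining binary machines, and the function names are our own:

```python
from collections import Counter

def one_vs_all(decision_values):
    """One-Against-All: k binary machines, one per class; the class whose
    machine returns the largest decision value wins."""
    return max(decision_values, key=decision_values.get)

def one_vs_one(pairwise_winners):
    """One-Against-One: k(k-1)/2 pairwise machines; the class winning the
    most pairwise contests is predicted (majority voting)."""
    return Counter(pairwise_winners).most_common(1)[0][0]
```

For instance, given decision values {"protein": 0.7, "DNA": -0.2, "O": 0.1}, One-Against-All predicts "protein". The All-Together formulation avoids such heuristic combination at the cost of one much larger optimization problem, which is precisely the scalability issue Hsu and Lin point out.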

Initial experiments for multi-class classification were unsuccessful, mostly due to

hitting the processing power limits of the single processor machines. The same

experiments were attempted on different machine configurations, and unreasonably long

processing time was needed to finally complete one such experiment. The first successful

experiment required a total learning and classification time of 17 days in order to

complete using a serial algorithm on a quad-processor Pentium IV machine. The same

experiment was repeated on a Xeon quad-processor 3.6 GHz Linux machine with four

gigabytes of main memory and completed in 97 hours or four days and one hour.

The multi-class performance results are summarized in Table 4.4. Detailed multi-

class results are presented in Table 4.5, Table 4.6, Table 4.7, and Table 4.8. The overall

recall measure achieved is 62.43%, with a precision of 64.50%, and a final F-score of

63.45%. These results are compared to those obtained by the five JNLPBA-04

participating systems which used support vector machines either in isolation or in

combination with other models, as well as the results reported by (Giuliano et al. 2005)

using the same task data. The performance comparison results are reported in Table 4.2.

Park et al. (2004) utilized SVM-Light with a second degree polynomial kernel using a

One-Against-All classification method combining several binary machines. Lee, Hwang,

et al. (2004) combined binary SVM-Light machines in a pairwise, or One-Against-One

structure, using a linear kernel. In order to alleviate the effect of unbalanced

representation of each class, the authors broke down the non-entity class – usually

denoted by O – into several smaller classes labeled with part-of-speech information, e.g.,

O-NN(noun) or O-PP(preposition) (Lee, Hwang et al. 2004). Rössler (2004) used seven

binary linear classifiers trained to detect each of the five classes and two classes

representing the non-entity class O – one for a beginning tag and one for a general

outside tag. The multi-class approach utilized is a One-Against-All approach. Rössler

(2004) used SVM-Light as the SVM binary classification software. Support vector

machines are used by Song et al. (2004) combined with Conditional Random Fields

(CRF). The SVM binary or multi-class approach used and the type of kernel are not

indicated (Song et al. 2004). To limit the representation of the non-entity class O and to

accelerate training, Song et al. (2004) eliminate tokens that are not constituents of a base

noun phrase, assuming that a named entity would only occur within a noun phrase. For

the machine learning component of their solution, Zhou and Su (2004) combined binary

linear SVMs in a One-Against-All strategy and addressed the data sparseness problem by

mapping the SVM output into a probability using a sigmoid model. Giuliano et al. (2005)

used SVM-Light with linear kernels to train 2k machines using a One-Against-All

approach, where k is the number of classes. For each named entity, Giuliano et al. (2005)

assign two machines to identify the beginning and the ending of an entity.

The language-independent approach used in this experiment performed very close to

(Park et al. 2004) and better than (Lee, Hwang et al. 2004) which both used SVM as the

only learning model. Park et al. (2004) used character n-grams, orthographic

information, word shapes, gene sequences prior knowledge, word variations, part-of-

speech tags, noun phrase tags, and word triggers. Lee, Hwang et al. (2004) used lexical

features, character n-grams, and part-of-speech tags in a two-phased SVM model.

Rössler (2004) adapted a NER-system for German to the biomedical field. The

system used character n-grams, orthographic information, gene sequences prior

knowledge, and word length as features. The overall performance of (Rössler 2004) is

very close to that achieved in this experiment. The approach used by (Rössler 2004) is

particularly interesting in demonstrating the applicability of an NER system from one

language to another by not incorporating language-specific features. However, Rössler

(2004) made use of domain-specific knowledge while applying the system to the

biomedical domain.

Zhou and Su (2004) developed the system that performed best in the JNLPBA-04

task. Their system performance reached an overall recall/precision/F-score of 76.0%,

69.4%, and 72.6% respectively. Zhou and Su (2004) used SVM in conjunction with

Hidden Markov Models in a more complex learning method. The system also made use

of much language-specific and domain-specific knowledge such as character n-grams,

orthographic information, gene sequences, gazetteers, part-of-speech tags, word triggers,

abbreviations, and cascaded entities. While this system performed better than the current

multi-class experiment, its heavy use of language and domain-specific prior knowledge

contradicts the promise of the approach presented in this research work. The baseline

performance of (Zhou and Su 2004) without post-processing was 60.3%.

Song et al. (2004) used SVM in combination with Conditional Random Fields (CRFs)

and included character n-grams, orthographic information, and other lexical features in

addition to part-of-speech and noun phrase tagging. The overall performance of this

system is comparable to that of the multi-class experiment results hereby presented.

Giuliano et al. (2005) also incorporated part-of-speech tagging and word features of

tokens surrounding each analyzed token in addition to features similar to those used in

this experiment. In addition, Giuliano et al. (2005) pruned the data instances in order to

reduce the dataset size by filtering out frequent words from the corpora because they are

less likely to be relevant than rare words.

Table 4.2 – Performance of BioNLP Systems Using SVM vs. Baseline Results

(recall / precision / F-score)           1978-1989 Set | 1990-1999 Set | 2000-2001 Set | S/1998-2001 Set | Total
Zhou and Su (2004), with post-processing 75.3 / 69.5 / 72.3 | 77.1 / 69.2 / 72.9 | 75.6 / 71.3 / 73.8 | 75.8 / 69.5 / 72.5 | 76.0 / 69.4 / 72.6
Zhou and Su (2004), no post-processing   -- | -- | -- | -- | -- / -- / 60.3
Giuliano et al. (2005)                   -- | -- | -- | -- | 64.4 / 69.8 / 67.0
Song et al. (2004)                       60.3 / 66.2 / 63.1 | 71.2 / 65.6 / 68.2 | 69.5 / 65.8 / 67.6 | 68.3 / 64.0 / 66.1 | 67.8 / 64.8 / 66.3
Rössler (2004)                           59.2 / 60.3 / 59.8 | 70.3 / 61.8 / 65.8 | 68.4 / 61.5 / 64.8 | 68.3 / 60.4 / 64.1 | 67.4 / 61.0 / 64.0
Habib (baseline)                         53.2 / 70.8 / 60.7 | 63.7 / 63.6 / 63.7 | 64.2 / 65.4 / 64.8 | 63.0 / 63.2 / 63.1 | 62.3 / 64.5 / 63.4
Park et al. (2004)                       62.8 / 55.9 / 59.2 | 70.3 / 61.4 / 65.6 | 65.1 / 60.4 / 62.7 | 65.9 / 59.7 / 62.7 | 66.5 / 59.8 / 63.0
Lee, Hwang et al. (2004)                 42.5 / 42.0 / 42.2 | 52.5 / 49.1 / 50.8 | 53.8 / 50.9 / 52.3 | 52.3 / 48.1 / 50.1 | 50.8 / 47.6 / 49.1
Baseline (Kim et al. 2004)               47.1 / 33.9 / 39.4 | 56.8 / 45.5 / 50.5 | 51.7 / 46.3 / 48.8 | 52.6 / 46.0 / 49.1 | 52.6 / 43.6 / 47.7

Table 4.3 – Effect of Positive Examples Boosting on Single-Class SVM Results

(recall / precision / F-score)  1978-1989 Set | 1990-1999 Set | 2000-2001 Set | S/1998-2001 Set | Total

No Boosting
Complete  57.47 / 53.35 / 55.34 | 71.69 / 62.76 / 66.93 | 68.35 / 59.60 / 63.68 | 68.27 / 58.63 / 63.08 | 68.06 / 59.29 / 63.37
Right     73.56 / 68.29 / 70.83 | 81.76 / 71.58 / 76.33 | 79.04 / 68.92 / 73.63 | 79.47 / 68.25 / 73.43 | 79.30 / 69.09 / 73.84
Left      59.61 / 55.34 / 57.39 | 77.82 / 68.13 / 72.65 | 76.70 / 66.88 / 71.45 | 76.08 / 65.34 / 70.30 | 75.24 / 65.55 / 70.06

Boost Factor = 2
Complete  65.35 / 50.83 / 57.18 | 78.59 / 60.88 / 68.61 | 76.33 / 60.36 / 67.41 | 75.58 / 58.40 / 65.89 | 75.54 / 58.82 / 66.14
Right     80.62 / 62.71 / 70.55 | 87.75 / 67.98 / 76.61 | 86.28 / 68.23 / 76.20 | 86.25 / 66.65 / 75.19 | 86.09 / 67.04 / 75.38
Left      67.98 / 52.87 / 59.48 | 85.70 / 66.39 / 74.82 | 83.67 / 66.16 / 73.89 | 83.21 / 64.30 / 72.54 | 82.57 / 64.30 / 72.30

Boost Factor = 4
Complete  71.43 / 48.49 / 57.77 | 81.55 / 59.32 / 68.68 | 78.99 / 59.03 / 67.57 | 78.63 / 57.04 / 66.11 | 78.70 / 57.30 / 66.32
Right     84.73 / 57.53 / 68.53 | 89.93 / 65.42 / 75.74 | 88.58 / 66.20 / 75.77 | 88.64 / 64.30 / 74.53 | 88.55 / 64.46 / 74.61
Left      74.38 / 50.50 / 60.16 | 88.66 / 64.50 / 74.67 | 86.10 / 64.35 / 73.65 | 86.00 / 62.39 / 72.31 | 85.58 / 62.31 / 72.11

Boost Factor = 8
Complete  71.43 / 44.52 / 54.85 | 81.41 / 57.14 / 67.15 | 78.76 / 56.91 / 66.08 | 78.34 / 54.74 / 64.45 | 78.49 / 54.87 / 64.59
Right     85.06 / 53.02 / 65.32 | 90.14 / 63.27 / 74.35 | 88.21 / 63.74 / 74.00 | 88.42 / 61.78 / 72.73 | 88.41 / 61.81 / 72.76
Left      73.89 / 46.06 / 56.75 | 88.59 / 62.18 / 73.08 | 86.47 / 62.48 / 72.54 | 86.44 / 60.39 / 71.11 | 85.83 / 60.00 / 70.63

Table 4.4 – Summary of Baseline Multi-Class Experiment Results7

Named Entity   1978-1989 Set | 1990-1999 Set | 2000-2001 Set | S/1998-2001 Set | Total

protein        58.62 / 70.83 / 64.15 | 72.68 / 63.43 / 67.74 | 70.83 / 62.03 / 66.14 | 71.28 / 60.19 / 65.27 | 70.37 / 62.00 / 65.92
DNA            61.61 / 65.71 / 63.59 | 52.21 / 63.01 / 57.10 | 52.55 / 70.59 / 60.25 | 47.11 / 69.60 / 56.19 | 51.00 / 67.64 / 58.16
RNA            0.00 / 0.00 / 0.00 | 55.10 / 57.45 / 56.25 | 50.00 / 74.29 / 59.77 | 50.00 / 62.50 / 55.56 | 51.16 / 63.77 / 57.02
cell_type      51.79 / 73.55 / 60.78 | 51.42 / 72.84 / 60.28 | 52.94 / 82.89 / 64.62 | 50.09 / 81.31 / 61.99 | 59.56 / 78.00 / 67.55
cell_line      32.39 / 67.86 / 43.85 | 50.00 / 50.60 / 50.30 | 56.94 / 55.03 / 55.97 | 53.53 / 43.75 / 48.15 | 47.72 / 51.64 / 49.61
Overall        53.18 / 70.79 / 60.73 | 63.68 / 63.63 / 63.66 | 64.15 / 65.39 / 64.76 | 62.97 / 63.16 / 63.06 | 62.43 / 64.50 / 63.45
Correct Right  71.55 / 95.25 / 81.72 | 79.44 / 79.38 / 79.41 | 78.95 / 80.47 / 79.70 | 78.40 / 78.64 / 78.52 | 78.05 / 80.65 / 79.33
Correct Left   56.12 / 74.72 / 64.10 | 70.33 / 70.28 / 70.31 | 70.95 / 72.31 / 71.63 | 69.68 / 69.90 / 69.79 | 68.76 / 71.06 / 69.89

Table 4.5 – Baseline Multi-Class Experiment Results 1978-1989 Set

named entity       complete match | right boundary match | left boundary match
protein   ( 609)   357 (58.62 / 70.83 / 64.15) | 464 (76.19 / 92.06 / 83.38) | 371 (60.92 / 73.61 / 66.67)
DNA       ( 112)   69 (61.61 / 65.71 / 63.59) | 96 (85.71 / 91.43 / 88.48) | 73 (65.18 / 69.52 / 67.28)
RNA       (   1)   0 ( 0.00 / 0.00 / 0.00) | 0 ( 0.00 / 0.00 / 0.00) | 0 ( 0.00 / 0.00 / 0.00)
cell_type ( 392)   203 (51.79 / 73.55 / 60.78) | 273 (69.64 / 98.91 / 81.74) | 215 (54.85 / 77.90 / 64.37)
cell_line ( 176)   57 (32.39 / 67.86 / 43.85) | 90 (51.14 / 107.14 / 69.23) | 65 (36.93 / 77.38 / 50.00)
[-ALL-]   (1290)   686 (53.18 / 70.79 / 60.73) | 923 (71.55 / 95.25 / 81.72) | 724 (56.12 / 74.72 / 64.10)

7 The JNLPBA-04 testing dataset is roughly divided into four subsets based on publication years: 1978-1989 set, 1990-1999 set, 2000-2001 set, and S/1998-2001 set. See Appendix B for more details.

Table 4.6 – Baseline Multi-Class Experiment Results 1990-1999 Set

named entity       complete match | right boundary match | left boundary match
protein   (1420)   1032 (72.68 / 63.43 / 67.74) | 1200 (84.51 / 73.76 / 78.77) | 1129 (79.51 / 69.39 / 74.11)
DNA       ( 385)   201 (52.21 / 63.01 / 57.10) | 288 (74.81 / 90.28 / 81.82) | 232 (60.26 / 72.73 / 65.91)
RNA       (  49)   27 (55.10 / 57.45 / 56.25) | 41 (83.67 / 87.23 / 85.42) | 31 (63.27 / 65.96 / 64.58)
cell_type ( 459)   236 (51.42 / 72.84 / 60.28) | 325 (70.81 / 100.31 / 83.01) | 256 (55.77 / 79.01 / 65.39)
cell_line ( 168)   84 (50.00 / 50.60 / 50.30) | 117 (69.64 / 70.48 / 70.06) | 97 (57.74 / 58.43 / 58.08)
[-ALL-]   (2481)   1580 (63.68 / 63.63 / 63.66) | 1971 (79.44 / 79.38 / 79.41) | 1745 (70.33 / 70.28 / 70.31)

Table 4.7 – Baseline Multi-Class Experiment Results 2000-2001 Set

named entity       complete match | right boundary match | left boundary match
protein   (2180)   1544 (70.83 / 62.03 / 66.14) | 1815 (83.26 / 72.92 / 77.75) | 1723 (79.04 / 69.22 / 73.81)
DNA       ( 411)   216 (52.55 / 70.59 / 60.25) | 284 (69.10 / 92.81 / 79.22) | 241 (58.64 / 78.76 / 67.22)
RNA       (  52)   26 (50.00 / 74.29 / 59.77) | 40 (76.92 / 114.29 / 91.95) | 26 (50.00 / 74.29 / 59.77)
cell_type ( 714)   378 (52.94 / 82.89 / 64.62) | 513 (71.85 / 112.50 / 87.69) | 402 (56.30 / 88.16 / 68.72)
cell_line ( 144)   82 (56.94 / 55.03 / 55.97) | 112 (77.78 / 75.17 / 76.45) | 92 (63.89 / 61.74 / 62.80)
[-ALL-]   (3501)   2246 (64.15 / 65.39 / 64.76) | 2764 (78.95 / 80.47 / 79.70) | 2484 (70.95 / 72.31 / 71.63)

Table 4.8 – Baseline Multi-Class Experiment Results 1998-2001 Set

named entity       complete match | right boundary match | left boundary match
protein   (3186)   2271 (71.28 / 60.19 / 65.27) | 2666 (83.68 / 70.66 / 76.62) | 2526 (79.28 / 66.95 / 72.60)
DNA       ( 588)   277 (47.11 / 69.60 / 56.19) | 386 (65.65 / 96.98 / 78.30) | 312 (53.06 / 78.39 / 63.29)
RNA       (  70)   35 (50.00 / 62.50 / 55.56) | 53 (75.71 / 94.64 / 84.13) | 36 (51.43 / 64.29 / 57.14)
cell_type (1138)   570 (50.09 / 81.31 / 61.99) | 802 (70.47 / 114.41 / 87.22) | 612 (53.78 / 87.30 / 66.56)
cell_line ( 170)   91 (53.53 / 43.75 / 48.15) | 132 (77.65 / 63.46 / 69.84) | 104 (61.18 / 50.00 / 55.03)
[-ALL-]   (5152)   3244 (62.97 / 63.16 / 63.06) | 4039 (78.40 / 78.64 / 78.52) | 3590 (69.68 / 69.90 / 69.79)

4.5 Challenges and Problems

Running the baseline experiments provided a good first-hand experience and

exposure to the usability and scalability challenges associated with named entity

recognition and support vector machines. In this section we describe some of the issues

encountered as they provide the motivation for our research work. We begin with a

discussion of the scalability challenges then proceed with some of the practical problems

and usability issues faced.

The Java-based jFex feature extraction system (Giuliano et al. 2005)

provides a scripted approach to define which features are to be extracted, which adds

flexibility to the feature extraction process. However, the system memory requirements

are very high, especially with a large dataset and a selection of several complex features.

The memory requirements can be reduced by breaking down the datasets into smaller sets

grouped in a folder, yet a similar reduction in memory needs was not possible when complex

features were to be extracted. For the sake of this experiment, word shape features for

surrounding tokens could not be included due to the memory limitation. Also, extraction

of character n-grams was not included in this system. The resulting set of features used in

this experiment mostly includes simple orthographic information, contextual features,

and the words themselves.

Different machine configurations were tried during the course of this research.

Desktops and laptops with Pentium IV processors and 1-2 GB of RAM were used for

single-class classification experiments. Attempts to run multi-class experiments failed on

single processor machines due to the CPU-intensive nature of the classification process.

The latter experiment was tried on a Pentium III quad-processor Linux machine. The

process did not complete in 22 days and was aborted. The first successful multi-class

experiment that was run on a Pentium IV quad-processor Linux-based machine

completed in 17 days. Running the same experiment on a Xeon quad-processor 3.6GHz

machine completed in 96 hours (4 days). Table 4.9 summarizes the training time and

performance results for some single class tests with different kernel types and a multi-

class test with a linear kernel. All tests reported in Table 4.9 were run on a dual core

Xeon 3.6 GHz machine with 4 GB of RAM selecting a margin error of 0.1 and a

maximum of 2GB of total memory per process.

Table 4.9 – Single and Multi-class Training Times

Test Type    Kernel Type           Training Time                  Recall                 Precision              F-score
Protein      Linear                814.85 sec. (13.58 min.)       68.13                  59.33                  63.43
Protein      Polynomial degree=2   390082.24 sec. (6501.37 min.)  69.93                  62.23                  65.86
Protein      Polynomial degree=3   78506.33 sec. (1308.44 min.)   68.47                  62.44                  65.32
Protein      Radial basis          --                             0                      0                      0
Multi-class  Linear                353367.03 sec. (5889.45 min.)  70.09 (P) / 63.01 (A)  61.93 (P) / 64.44 (A)  65.76 (P) / 63.71 (A)

(P): Identified protein names; (A): All identified entities

Some initial single-class parallelization experiments were attempted successfully on a

cluster of Linux machines at the US Air Force Academy (USAFA). The parallel SVM

software developed by Zanni et al. (2006) based on (Zanghirati and Zanni 2003; Serafini

et al. 2005) was used. However, the initial performance measures achieved by the serial

SVM software were superior to those attained by the parallel algorithm, with no

considerable gain in computational time. Further experiments are needed in order to

determine the best use of the parallel SVM software in (Zanni et al. 2006) or other

parallel algorithms.

Most of the NER and SVM usability issues were experienced during this exercise.

Model parameters and kernel selection, identification of a suitable class labeling

convention, developing preprocessing and post-processing modules to suit the needs of

different tools, and not being able to capitalize on results obtained from different

experiments without having to restart the learning process, were just a few examples of

the common challenges facing developers of machine learning solutions in general.

Designing and building the experimentation infrastructure was the main practical

problem faced. While compiling and constructing the resources needed for the machine

learning solution, it became evident that in order to contribute to the research activity one has

to spend a considerable amount of time and effort recreating existing solutions before

getting to the point where more focused research and development could be possible.

Despite the existence of previously developed tools within the research community, these

tools lack standardization and require extensive work to preprocess the data according to

different formats and to build interface modules to link the tools and construct a total

solution. The problem is compounded by the lack of compatible components and reliable

documentation on the type of preprocessing that may be needed by one or more of these

components. In order to build a new research infrastructure from scratch, one needs to

reproduce previous work to get to the point where new research may be started. This

experience demonstrates the need for a complete yet flexible research infrastructure that

facilitates the construction of new solutions from existing components and enables the

addition of new research in focused areas. This is a software engineering issue in the first

place and is often overlooked in the excitement of a challenging research activity.

In summary, we view the issues associated with large-scale named entity recognition

using support vector machines as a two-fold problem: one of software engineering and

one of machine learning, and recommend that they should be addressed as such.

Although this research focuses primarily on improving SVM’s scalability

with large datasets and multiple classes, we recommend a new service-oriented

framework to address the usability, maintainability, and expandability issues observed

while running the baseline experiments. Some initial thoughts about the framework are

presented in Chapter 10. We hope to explore and implement these ideas in future research

projects.

In Chapter 6, we present a novel database-supported framework for classification

using support vector machines, to be applied to the biomedical named entity recognition

problem in particular.

4.6 NER/SVM Scalability Challenges

In this section we summarize the various usability and scalability challenges

discussed in the previous two chapters. The NER/SVM problem combines questions

inherent in both fields and compounds the software engineering issue. Biomedical

NER using SVM poses several challenges, which include:

# Higher level of ambiguity and uncertainty regarding biomedical entity shapes.

# Difficulty of annotating training data manually, a labor-intensive and time-consuming process.

# Need for high-dimensional features to compensate for ambiguity in defining

entities.

# Slow SVM training and high memory requirements.

# Difficulty of selecting suitable SVM model parameters and kernels for given datasets.

# Complex learning with training data that represent multiple classes.

# Lack of integrated tools to build a total solution.

# Incompatibility of existing tools and the need to develop interfacing modules to

fill the gaps and put the different pieces of the solution together.

A language and domain-independent named entity recognition system requires

elimination of prior language or domain-specific knowledge. Therefore, any proposed

solution needs to address a higher level of ambiguity and minimize expectations about

the shape of named entities. Entities may take any shape or form and patterns may be

difficult to discover. Entities are often composed of multiple words with unclear

boundaries. When more than one named entity type is to be identified, one may face an

unbalanced distribution of the entities belonging to different classes. Positive examples of

such entities often appear scarcely and infrequently in the text.

Building large annotated corpora for a given language or domain is a difficult, labor-intensive, and time-consuming task. It requires manual annotation of unstructured data, which

may be inconsistent and error-prone. With the growing applications of information

extraction, the need for using unannotated data or a mixture of both labeled and unlabeled

data increases. Unsupervised or semi-supervised solutions are crucial for the widespread

adoption of machine learning techniques in various domains, particularly named entity

recognition.

Feature selection is the first step in any classification problem, especially those that

are ambiguous by nature such as named entity recognition problems. The challenge is


compounded by the aim to develop a solution that does not rely on prior language or

domain knowledge. The plan proposes a feature discovery phase in which an increasingly large set of binary features derived from different textual definitions is extracted. The input

space therefore consists of high-dimensional sparse vectors.

Support vector machines have been used successfully for named entity recognition.

Newly emerging unsupervised or semi-supervised techniques such as support vector

clustering (Ben-Hur et al. 2001) promise new grounds in this area. However, slow

training and high memory requirements hinder the widespread adoption of support vector

machines in applications with large data. Tuning training parameters and selecting and/or

designing a kernel that is suitable for the problem at hand are major SVM usability

issues.

In addition, multi-class identification and classification requires training several

support vector machines on one class or a combination of classes when using One-

Against-All, pairwise (one-against-one), or half-against-half multi-class solutions. The

combined solution is an order of magnitude larger than the single class problem

depending on the number of separate SVMs built. Training one SVM to classify all

classes at the same time is a much bigger optimization problem and is impractical to use

on large datasets until more efficient optimization algorithms are developed.

Finally, due to the lack of integrated systems that offer a complete solution to the

NER/SVM problem, advancing the state of the art and/or constructing the solution

requires the integration of several, often incompatible, components and the development

of integration and interface tools. Reusability and maintainability of existing tools is


difficult and the introduction of new algorithms and techniques requires the

reconstruction of several components.

4.7 SVM Usability Issues

As discussed earlier, we view the usability issues of support vector machines as a

two-fold problem: one of machine learning and one of software engineering. The

machine learning usability issues are similar to those experienced with most machine

learning approaches. They include the difficulty in selecting the best learning parameters,

data preparation, and the need to restart the learning process when new training data is

available. In addition, selecting the best kernel function for the data at hand is another

factor that adds to the complexity of support vector machines.

As for the software engineering aspect, it stems from the lack of integrated and/or

compatible SVM tools. Building a new learning environment requires the compilation of

often incompatible tools to carry out any preprocessing needed, feature extraction,

training and classification, then any additional post-processing requirements for the

solution. Due to the lack of integrated tools, one may often need to build special

interfacing modules to fit all the pieces together and build a total solution. The following

are some of the usability questions for both sides of the problem.

4.7.1 Machine learning issues

# Model selection (parameter tuning): How to select model parameters? The

number of parameters to be configured for a given set of data depends on the

optimization algorithm and the kernel function. Common ways to help in deciding

the parameters to be used are grid search, cross validation, decision trees, and the use of heuristics.


o Grid search: try different parameter values and compare the performance

results obtained by each combination using a grid.

o Cross validation: chunk the data into different pieces, train each piece with

different parameter values then cross-validate the performance results

using the rest of the data.

o Decision trees, heuristics: starting with one parameter value, modify other parameters and use a decision tree to figure out the best combination. One may also apply heuristics to speed up the decision process.

# Kernel selection: How to select a kernel function suitable for a given problem's data? Since it is difficult to choose a suitable kernel function simply by

examining the data, the selection is based mostly on trying different functions and

validating the results obtained until a good function is found. Also, coming up

with a new kernel function for a specific type of data is a non-trivial task. Kernel

selection is done by:

o Heuristics based on input dimensions

o Cross validation

# Input data formatting, scaling, normalization: Training and test data need to be prepared for use by the machine learning method of choice. Preparation may include

special formatting, scaling the vector values to a suitable range, or normalizing

the vector values across different data sets.

# Adding new training data requires restarting the learning process, which is time-consuming.
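The model-selection procedures listed above can be sketched generically. The following is a minimal, illustrative sketch of grid search with k-fold cross validation; the `train` and `score` callables stand in for any SVM trainer and evaluation metric, and are assumptions of this sketch rather than part of the thesis implementation.

```python
import itertools
import random

def k_fold_indices(n, k, seed=0):
    """Split n sample indices into k roughly equal, shuffled folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[f::k] for f in range(k)]

def grid_search(train, score, X, y, grid, k=5):
    """Return the parameter combination with the best mean CV score.

    `train(X, y, **params)` returns a model; `score(model, X, y)` returns
    a scalar where higher is better. Both are supplied by the caller.
    """
    folds = k_fold_indices(len(X), k)
    best_params, best_score = None, float("-inf")
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        fold_scores = []
        for f in range(k):
            held_out = set(folds[f])
            tr = [i for i in range(len(X)) if i not in held_out]
            model = train([X[i] for i in tr], [y[i] for i in tr], **params)
            fold_scores.append(score(model,
                                     [X[i] for i in folds[f]],
                                     [y[i] for i in folds[f]]))
        mean = sum(fold_scores) / len(fold_scores)
        if mean > best_score:
            best_params, best_score = params, mean
    return best_params, best_score
```

For an SVM, `params` would typically hold the regularization constant C and any kernel parameters (e.g., the RBF width).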


4.7.2 Software engineering issues

# Lack of integrated machine learning tools: many pieces of software and tools

tackling specific processes such as feature extraction, learning or classification

exist, but using the pieces to build a total solution requires repeating or

reinventing one or more parts of the solution and/or developing interfacing tools

to fit the pieces together.

# Lack of standardization, incompatible interfaces, need to “reinvent the wheel” to

fit pieces together: this is related to the previous issue. Available tools are often

developed in isolation and no standard interfaces exist to make combining several

tools together more streamlined.

# How to implement new algorithms for partial problems? In order to develop new

algorithms for focused areas of the overall solution, e.g., optimization, model

selection, etc., one needs to rebuild the overall machine learning solution before

getting to the point where new contributions could be made.

# How to incorporate optional components into the overall NER/SVM solution? It

is often useful to compare results from different algorithms for the same

component, e.g., optimization. To accomplish this objective using existing

tools, one needs to build parallel solutions where only one or two components are

different.

For future research, we propose a dynamic service-oriented machine learning

architecture that promotes reusability, expandability, and maintainability of the various

components needed to implement the machine learning solution. The aim of the dynamic

architecture is to provide a research environment with a flexible infrastructure such that


researchers may easily focus on specific components without spending much time on

rebuilding the experimentation infrastructure. The proposed architecture’s design will be

service-oriented with a clear definition of the inter-module interactions and interfaces.

This future research suggestion aims to advance the state of the art by offering a

novel machine learning framework that promotes reusability, expandability, and

maintainability of the solution and provides an architecture that encourages future work.

More details about the recommended architecture are discussed in Chapter 10. We

consider the database solution presented in Chapter 6 to be a first step towards the

implementation of the full SOA architecture.

In the next chapter, we tackle the main scalability challenge of the NER solution

using SVM, which is slow training. We present the mathematical formulation of the basic

multi-class classification problem, followed by other formulations that lower the number

of variables considered for optimization. We then introduce and discuss a new cutting

plane algorithm to accelerate training while providing good out-of-the-box performance.


Chapter 5

Improving Multi-Class SVM Training

To address the scalability issues of the NER/SVM solution using high-dimensional

input space, we begin by tackling the training computational time component. In this

chapter, we explore the mathematical foundation of multi-class support vector machines

and the evolution of the problem formulation aiming to reduce the training time by

improving the optimization algorithms. We then introduce a new cutting plane algorithm

that accelerates the training process while achieving good out-of-the-box performance in

linear time. The cutting plane algorithm is first implemented as a standalone C executable

in order to study its effectiveness in reducing the computational time, and is later

integrated into the database framework and embedded solution presented in Chapter 6.

5.1 Basic All-Together Multi-Class SVM

The basic All-Together multi-class SVM idea is similar to the One-Against-All approach. It constructs k two-class rules where the j-th function $\mathbf{w}_j^T \phi(\mathbf{x}) + b_j$ separates training vectors of class j from the other vectors. There are k decision functions, but all are obtained by solving one problem. Similar to the non-separable binary SVM case, the objective of the machine is to maximize the margin separating the different classes while minimizing the classification error of each data point, as represented by the slack variable. For a k-class problem with n training points, the multi-class support vector machine can be formulated as the minimization of:

$$Q(\mathbf{w}, \mathbf{b}, \boldsymbol{\xi}) = \frac{1}{2}\sum_{j=1}^{k}\mathbf{w}_j^T\mathbf{w}_j + C\sum_{i=1}^{n}\sum_{\substack{j=1 \\ j \neq y_i}}^{k}\xi_{ij} \qquad (5.1)$$


subject to:

$$\mathbf{w}_{y_i}^T\phi(\mathbf{x}_i) + b_{y_i} - \big(\mathbf{w}_j^T\phi(\mathbf{x}_i) + b_j\big) \ge 1 - \xi_{ij}, \quad \xi_{ij} \ge 0, \quad i = 1,\dots,n, \; j \in \{1,\dots,k\} \setminus \{y_i\} \qquad (5.2)$$

where $\mathbf{x}_i$ is the input vector for data point i, $y_i$ is the correct class for data point i, $\xi_{ij}$ is the slack variable associated with data point i relative to class j (i.e., the error measure

for misclassifying i as belonging to class j), and C is the regularization parameter

determining the trade-off between the maximization of the margin and the minimization

of the classification error. Figure 5.1 is an illustration of the different slack variables

relative to individual classes, which is the basic way of formulating the multi-class SVM

problem.

Figure 5.1 – Multi-Class SVM Error Representation

In the basic multi-class SVM formulation, the machine needs to minimize $n \times k$ slack variables, in addition to maximizing k margins. The multi-class classification decision function is defined by $\arg\max_{j=1,\dots,k}\big(\mathbf{w}_j^T\phi(\mathbf{x}) + b_j\big)$, i.e., a data point $\mathbf{x}$ is classified as the class j whose weights maximize the classification score for the point $\mathbf{x}$.
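As an illustration of this decision rule, the following minimal sketch (assuming the identity feature map $\phi(\mathbf{x}) = \mathbf{x}$ and NumPy arrays holding one weight vector and one bias per class; the names are illustrative) computes $\arg\max_j(\mathbf{w}_j^T\mathbf{x} + b_j)$:

```python
import numpy as np

def predict(W, b, x):
    """Multi-class SVM decision: the class j maximizing w_j . x + b_j.

    W holds one weight vector per class (shape k x d), b one bias per
    class (shape k); the feature map phi is taken as the identity here.
    """
    scores = W @ x + b          # k classification scores
    return int(np.argmax(scores))
```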

The constrained problem in (5.1) and (5.2) can be transformed into its unconstrained equivalent formulation by introducing the non-negative Lagrange multipliers $\alpha_{ij}$ and $\beta_{ij}$:

$$Q(\mathbf{w}, \mathbf{b}, \boldsymbol{\xi}, \boldsymbol{\alpha}, \boldsymbol{\beta}) = \frac{1}{2}\sum_{j=1}^{k}\mathbf{w}_j^T\mathbf{w}_j + C\sum_{i=1}^{n}\sum_{\substack{j=1 \\ j \neq y_i}}^{k}\xi_{ij} - \sum_{i=1}^{n}\sum_{\substack{j=1 \\ j \neq y_i}}^{k}\alpha_{ij}\big(\mathbf{w}_{y_i}^T\phi(\mathbf{x}_i) + b_{y_i} - \mathbf{w}_j^T\phi(\mathbf{x}_i) - b_j - 1 + \xi_{ij}\big) - \sum_{i=1}^{n}\sum_{\substack{j=1 \\ j \neq y_i}}^{k}\beta_{ij}\,\xi_{ij} \qquad (5.3)$$

where,

$$z_{ij} = \begin{cases} \displaystyle\sum_{\substack{m=1 \\ m \neq y_i}}^{k} \alpha_{im} & \text{for } j = y_i \\[2ex] -\,\alpha_{ij} & \text{otherwise} \end{cases} \qquad (5.4)$$

and the conditions for optimality are:

$$\alpha_{ij}\big(\mathbf{w}_{y_i}^T\phi(\mathbf{x}_i) + b_{y_i} - \mathbf{w}_j^T\phi(\mathbf{x}_i) - b_j - 1 + \xi_{ij}\big) = 0, \quad i = 1,\dots,n, \; j \neq y_i \qquad (5.5)$$

$$\beta_{ij}\,\xi_{ij} = 0, \quad i = 1,\dots,n, \; j \neq y_i \qquad (5.6)$$

in addition to $Q(\mathbf{w}, \mathbf{b}, \boldsymbol{\xi}, \boldsymbol{\alpha}, \boldsymbol{\beta})$ being minimized in $\mathbf{w}, \mathbf{b}, \boldsymbol{\xi}$ (derivatives equal to zero).

The dual formulation is obtained by reducing (5.3) to (5.6) using the kernel function $K(\mathbf{x}, \mathbf{y}) = \phi(\mathbf{x})^T\phi(\mathbf{y})$. The dual formulation is to maximize:

$$Q(\boldsymbol{\alpha}) = \sum_{i=1}^{n}\sum_{\substack{j=1 \\ j \neq y_i}}^{k}\alpha_{ij} - \frac{1}{2}\sum_{i,l=1}^{n}\sum_{j=1}^{k} z_{ij}\, z_{lj}\, K(\mathbf{x}_i, \mathbf{x}_l) \qquad (5.7)$$

subject to:

$$\sum_{i=1}^{n} z_{ij} = 0, \quad j = 1,\dots,k \qquad (5.8)$$

$$0 \le \alpha_{ij} \le C, \quad i = 1,\dots,n, \; j \neq y_i \qquad (5.9)$$

Finally, the decision function for class j is given by:

$$f_j(\mathbf{x}) = \sum_{i=1}^{n} z_{ij}\, K(\mathbf{x}_i, \mathbf{x}) + b_j \qquad (5.10)$$

and the classification task for data point $\mathbf{x}$ is to find the class j satisfying $\arg\max_{j=1,\dots,k} f_j(\mathbf{x})$.
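The kernelized decision rule of (5.10) can be sketched as follows. The RBF kernel and the variable names here are illustrative assumptions for the sketch, not the thesis implementation:

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """Gaussian (RBF) kernel, used here purely as an example kernel."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def classify(Xtr, Z, bias, x, kernel=rbf):
    """Evaluate f_j(x) = sum_i z_ij K(x_i, x) + b_j for every class j
    and return argmax_j f_j(x), mirroring Eq. (5.10).

    Xtr: training points (n x d); Z: coefficients z_ij (n x k);
    bias: b_j per class (k,).
    """
    kvec = np.array([kernel(xi, x) for xi in Xtr])   # K(x_i, x) for all i
    f = kvec @ Z + bias                              # one score per class
    return int(np.argmax(f))
```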

By examining the dual formulation of the basic multi-class SVM learning problem as defined in (5.7) to (5.9), we observe the following:

# The number of variables $\alpha_{ij}$ in the optimization problem is equal to the number of training points n times the number of classes k, i.e., $n \times k$ variables.

# The number of constraints $z_{ij}$ to be satisfied is equal to the number of training points n.

# The upper limit for the weight variables $\alpha_{ij}$ is the regularization parameter C.

With $n \times k$ variables, the quadratic optimization problem would require $O(n^3 k^3)$ computational time to solve – assuming the quadratic optimizer used runs in the third power of the size of its input expressed in number of variables, as discussed in Chapter 3.

This is the main issue with the multi-class SVM training. One can easily see that the

training time would become prohibitive when a large number of training data points and

classes is used for the learning task.

5.2 Improving SVM Training Time

From the discussion of the basic multi-class formulation, we see that any

improvement in training time would require tackling one of the sources of delay by


either: reducing the number of data points, reducing the data points’ dimensionality,

and/or lowering the number of variables and constraints for the optimization problem.

Each of these possibilities has been the subject of research activities aiming to improve

SVM training. In this work, we will focus on the latter approach, which is to accelerate

training by lowering the number of variables and/or constraints for the optimization task.

Crammer and Singer (2001) reduce the number of optimization variables by reducing the slack variables $\xi_{ij}$ to $\xi_i = \max_j(\xi_{ij})$, $j = 1,\dots,k$. In other words, the formulation reduces the size of the optimization problem by considering only the highest slack for each data point across all classes, thereby having one slack variable per data point.

dual formulations are derived in the same way as for the basic SVM formulation. The

mathematical proof is provided in (Crammer and Singer 2001) and extended in (Abe

2005) to include bias terms. We will simply provide the initial optimization problem and

the final dual formulation in order to contrast it with the basic formulation. Using the n-

slack formulation, the optimization problem is:

$$Q(\mathbf{w}, \mathbf{b}, \boldsymbol{\xi}) = \frac{1}{2}\sum_{j=1}^{k}\mathbf{w}_j^T\mathbf{w}_j + C\sum_{i=1}^{n}\xi_i \qquad (5.11)$$

s.t.

$$\mathbf{w}_{y_i}^T\phi(\mathbf{x}_i) + b_{y_i} - \big(\mathbf{w}_j^T\phi(\mathbf{x}_i) + b_j\big) \ge 1 - \xi_i, \quad i = 1,\dots,n, \; j \neq y_i \qquad (5.12)$$

The dual formulation is to maximize:

$$Q(\boldsymbol{\alpha}) = \sum_{i=1}^{n}\alpha_i - \frac{1}{2}\sum_{i,l=1}^{n}\sum_{j=1}^{k} z_{ij}\, z_{lj}\,\alpha_i\,\alpha_l\, K(\mathbf{x}_i, \mathbf{x}_l) \qquad (5.13)$$

subject to:

$$\sum_{i=1}^{n} z_{ij}\,\alpha_i = 0, \quad j = 1,\dots,k \qquad (5.14)$$

$$0 \le \alpha_i \le (k-1)\,C, \quad i = 1,\dots,n \qquad (5.15)$$

and the decision function for class j is given by:

$$f_j(\mathbf{x}) = \sum_{i=1}^{n} z_{ij}\,\alpha_i\, K(\mathbf{x}_i, \mathbf{x}) + b_j \qquad (5.16)$$

Comparing the basic formulation (5.7) to (5.9) with that of (Crammer and Singer 2001) in (5.11) to (5.15), we observe that the reduction of the number of slack variables from $n \times k$ to n did not increase the number of constraints, and the final optimization problem is much simpler than the basic SVM problem. However, with a large number of data points n, the optimization time remains high, as it requires $O(n^3)$ to complete. The optimization

algorithm in (Crammer and Singer 2001) is implemented in SVM-Multiclass

(Tsochantaridis et al. 2004).
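The slack reduction that drives this formulation – keeping only $\xi_i = \max_j \xi_{ij}$ per point – can be illustrated with a toy NumPy fragment (the slack values below are hypothetical, chosen only for illustration):

```python
import numpy as np

# Hypothetical per-point, per-class slacks xi_ij for n=3 points, k=4 classes.
xi = np.array([[0.0, 0.2, 0.0, 0.1],
               [0.5, 0.0, 0.3, 0.0],
               [0.0, 0.0, 0.0, 0.0]])

# Crammer-Singer keeps only the largest violation per point,
# shrinking the slack set from n*k values to n values.
xi_reduced = xi.max(axis=1)
```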

In order to further improve the training time, one may attempt to reduce the number

of data points considered by the optimization algorithm. This may be achieved by

approximating the fully accurate solution by one that reaches close accuracy using a smaller number of data points. Cutting plane algorithms are developed to

approximate convex optimization problems by finding data points that approach the

optimal solution and discarding the other points.

5.2.1 SVM Structural 1-Slack Formulation

Following the baseline experiments presented in the previous chapter which aimed to

identify the scalability issues with All-Together multi-class SVM, we performed a

number of experiments using both single class and multi-class cases in order to identify

ways to improve the multi-class training time. Using SVM-Light (Joachims 1998a, 2002)

for the single class experiments and SVM-Multiclass (Crammer and Singer 2001;

Tsochantaridis et al. 2004) for multi-class problems, we observe that the number of support vectors generated in both cases is $O(n^{0.8})$. In fact, SVM-Multiclass uses SVM-Light's quadratic optimizer, where the input to the optimizer is a realization of (Crammer and Singer 2001). The $O(n^{0.8})$ figure is an experimental observation using the same training dataset for binary and multi-class training. In general, the worst-case estimate for the number of support vectors is $O(n)$, where all training points are potentially support vectors. Details of the single-class and multi-class scalability experiments and results using different tools and datasets are presented in Chapters 7 and 8. The training time using SVM-Light and SVM-Multiclass is $O(n^2)$, where the multi-class case is $O(k^2)$ slower than the single-class case, k being the number of classes.

With the availability of the improved binary SVM implementation in SVM-Perf

(Joachims 2006) which reduces the training time to linear time, and knowing that the

multi-class solution can be built using the single class one, we analyzed the improved

binary implementation in SVM-Perf in order to investigate ways to extend the solution to

support multi-class training. SVM-Perf (Joachims 2006) is based on a newer SVM

formulation than that of (Crammer and Singer 2001) which attempts to reduce the

number of slack variables from n variables to just one. The new formulation is referred to

as the 1-slack formulation.

The 1-slack structural formulation is (excluding the bias terms):

$$Q(\mathbf{w}, \xi) = \frac{1}{2}\,\mathbf{w}^T\mathbf{w} + C\,\xi \qquad (5.17)$$

s.t.

$$\forall\, \mathbf{c} \in \{0,1\}^n: \;\; \frac{1}{n}\,\mathbf{w}^T \sum_{i=1}^{n} c_i\, y_i\, \mathbf{x}_i \;\ge\; \frac{1}{n}\sum_{i=1}^{n} c_i - \xi, \quad \text{where } \xi \ge 0 \qquad (5.18)$$

In this formulation, one common slack variable $\xi$, which constitutes an upper bound on all training errors, is used. However, the number of constraints in this case is $2^n$, one for each possible vector $\mathbf{c} = (c_1, c_2, \dots, c_n) \in \{0,1\}^n$. Each constraint vector corresponds to the sum of a subset of constraints from the n-slack formulation in (5.11) and (5.12). The new constraint vectors constitute the input to the quadratic optimizer.
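A minimal sketch of how one such constraint vector could be built in the binary case (a margin-violation indicator per point; the names are illustrative):

```python
import numpy as np

def constraint_vector(w, X, y):
    """c_i = 1 when point i violates the margin (y_i * w.x_i < 1), else 0.

    The single 1-slack constraint (1/n) w . sum_i c_i y_i x_i >=
    (1/n) sum_i c_i - xi is then the sum of the selected n-slack rows.
    """
    margins = y * (X @ w)
    return (margins < 1).astype(int)
```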

SVM-Perf (Joachims 2006) uses a binary cutting plane algorithm in order to reduce

the number of constraints in the problem while providing an approximate solution in a

constant number of iterations. Proof of equivalence of the 1-slack formulation to the n-

slack formulation and convergence of the cutting plane algorithm are provided in

(Joachims 2006). SVM-Perf binary training algorithm is the following:

Algorithm 5.1: Algorithm for training binary classification SVM⁸

1: Input: $S = \{(\mathbf{x}_1, y_1), \dots, (\mathbf{x}_n, y_n)\}$, $y \in \{-1, +1\}$, $C$, $\epsilon$
2: $\mathcal{C} \leftarrow \emptyset$ ($\mathcal{C}$ is the set of constraints for input to the optimizer)
3: repeat
4: $(\mathbf{w}, \xi) \leftarrow \arg\min_{\mathbf{w}, \xi \ge 0} \big\{\frac{1}{2}\mathbf{w}^T\mathbf{w} + C\xi\big\}$ s.t. $\forall\, \mathbf{c} \in \mathcal{C}: \frac{1}{n}\,\mathbf{w}^T\sum_{i=1}^{n} c_i y_i \mathbf{x}_i \ge \frac{1}{n}\sum_{i=1}^{n} c_i - \xi$
5: for $i = 1, \dots, n$ do
6: $c_i \leftarrow \begin{cases} 1 & \text{if } y_i(\mathbf{w}^T\mathbf{x}_i) < 1 \\ 0 & \text{otherwise} \end{cases}$
7: end for
8: $\mathcal{C} \leftarrow \mathcal{C} \cup \{\mathbf{c}\}$
9: until $\frac{1}{n}\sum_{i=1}^{n} c_i - \frac{1}{n}\sum_{i=1}^{n} c_i\, y_i(\mathbf{w}^T\mathbf{x}_i) \le \xi + \epsilon$
10: return $(\mathbf{w}, \xi)$
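The constraint-selection step (lines 5 to 7) and the stopping test (line 9) of Algorithm 5.1 can be sketched in NumPy as follows. This is an illustrative fragment, not the SVM-Perf implementation, and the inner quadratic program of line 4 is omitted:

```python
import numpy as np

def most_violated_constraint(w, X, y):
    """Lines 5-7 of Algorithm 5.1: select every margin violator."""
    return (y * (X @ w) < 1).astype(float)

def converged(w, X, y, c, xi, eps):
    """Line 9: stop once the newly found constraint is violated
    by no more than xi + eps."""
    n = len(y)
    violation = c.sum() / n - (c * y * (X @ w)).sum() / n
    return violation <= xi + eps
```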

5.3 SVM-PerfMulti: New Multi-Class Instantiation

In the general case, the SVM structural 1-slack formulation (Joachims 2005) is the following (unbiased formulation):

8 Joachims. 2006. Training Linear SVMs in Linear Time. In Proc. of the ACM Conference on Knowledge Discovery and Data Mining (KDD), Aug 20-23:217-226.


$$Q(\mathbf{w}, \xi) = \frac{1}{2}\,\mathbf{w}^T\mathbf{w} + C\,\xi \qquad (5.19)$$

s.t.

$$\forall\, \mathbf{y}' \in Y \setminus \{\mathbf{y}\}: \;\; \mathbf{w}^T\big[\Psi(\mathbf{x}, \mathbf{y}) - \Psi(\mathbf{x}, \mathbf{y}')\big] \;\ge\; \Delta(\mathbf{y}', \mathbf{y}) - \xi \qquad (5.20)$$

where $\mathbf{y}' = (y'_1, y'_2, \dots, y'_n) \in Y$ is a possible assignment of labels, and $\Psi(\mathbf{x}, \mathbf{y}')$ is a function that describes the match between $(\mathbf{x}_1, \dots, \mathbf{x}_n)$ and $(y'_1, \dots, y'_n)$. The objective is to maximize $\mathbf{w}^T\Psi(\mathbf{x}, \mathbf{y})$, where:

$$\Psi(\mathbf{x}, \mathbf{y}') = \sum_{i=1}^{n} y'_i\, \mathbf{x}_i \qquad (5.21)$$

Designing the structure of the function $\Psi(\mathbf{x}, \mathbf{y}')$ and a suitable training loss function $\Delta(\mathbf{y}', \mathbf{y})$ for a given problem such that the argmax is computed efficiently is the main

objective for a particular instantiation of SVM-Struct V3.0 and is left to the designer of

the solution. The general algorithm to solve a quadratic optimization problem using the

multivariate SVM 1-slack formulation (Joachims 2005) is the following:

Algorithm 5.2: Algorithm for solving multivariate quadratic optimization problems⁹

1: Input: $(\mathbf{x}_1, \dots, \mathbf{x}_n)$, $\mathbf{y} = (y_1, \dots, y_n) \in Y$, $C$, $\epsilon$
2: $\mathcal{C} \leftarrow \emptyset$ ($\mathcal{C}$ is the set of constraints for input to the optimizer)
3: repeat
4: $\mathbf{y}' \leftarrow \arg\max_{\mathbf{y}' \in Y}\{\Delta(\mathbf{y}', \mathbf{y}) + \mathbf{w}^T\Psi(\mathbf{x}, \mathbf{y}')\}$
5: $\xi \leftarrow \max\{0, \max_{\mathbf{y}' \in \mathcal{C}}\{\Delta(\mathbf{y}', \mathbf{y}) - \mathbf{w}^T[\Psi(\mathbf{x}, \mathbf{y}) - \Psi(\mathbf{x}, \mathbf{y}')]\}\}$
6: if $\Delta(\mathbf{y}', \mathbf{y}) - \mathbf{w}^T[\Psi(\mathbf{x}, \mathbf{y}) - \Psi(\mathbf{x}, \mathbf{y}')] > \xi + \epsilon$ then
7: $\mathcal{C} \leftarrow \mathcal{C} \cup \{\mathbf{y}'\}$
8: $\mathbf{w} \leftarrow$ optimize SVM objective over $\mathcal{C}$
9: end if
10: until $\mathcal{C}$ has not changed during an iteration
11: return $(\mathbf{w})$

9 Joachims. 2005. A Support Vector Method for Multivariate Performance Measures. In Proc. of the International Conference on Machine Learning (ICML):377-384.


Tsochantaridis et al. (2004) show that the algorithm terminates after a polynomial

number of iterations. The set of constraints C is iteratively filled with the most violated

constraints found in the input training dataset. Algorithm 5.1 is an instantiation of

Algorithm 5.2 for the binary classification case.

5.3.1 Accelerated Cutting Plane Algorithm

The 1-slack SVM structural formulation expedites the optimization process by

reducing the number of variables to be optimized while shifting the decision on how to

prepare the quadratic optimizer’s input data to the solution designer. Using an error rate

loss function, SVM-Perf builds one constraint vector for all data points and adds it to the

input data vectors for the optimization process. The individual constraint value is zero if

the point is correctly classified, or one otherwise. Inspired by the improved training time

of SVM-Perf as compared to that of SVM-Light, we develop a cutting plane algorithm for

handling multiple classes at the same time using a loss function based on error rate.

In this section we describe a new multi-class instantiation of SVM-Struct V3.0

(Tsochantaridis et al. 2005) – SVM-PerfMulti. Using the SVM 1-slack formulation, we

introduce a cutting plane algorithm that identifies the most violated constraints to be used

for the quadratic optimization. The cutting plane algorithm is inspired by the geometrical

intuition behind support vector machines illustrated in Figure 3.2 and Figure 5.1.

Considering the general non-linearly separable case, the objective of the optimization

problem is to:

# Maximize the margin(s) separating the hyperplanes

# Minimize the slack error for each data point.


In addition to the optimization objectives, the algorithm also attempts to boost the

classification performance of the machine. The cutting plane algorithm iteratively

increases the gap between the positive and negative examples for each class, which

refines the input to the quadratic optimizer during consecutive learning iterations and

accentuates the impact of the scarce positive examples.

Algorithm 5.3: Algorithm for training multi-class classification SVM¹⁰

1: Input: $S = \{(\mathbf{x}_1, y_1), \dots, (\mathbf{x}_n, y_n)\}$, $y \in \{1, \dots, k\}$, $C$, $\epsilon$, $a$
2: $\mathcal{C} \leftarrow \emptyset$ ($\mathcal{C}$ is the set of constraints for input to the optimizer)
3: $l \leftarrow$ initial example loss value $= 100.0 / n$
4: repeat
5: $(\mathbf{w}, \xi) \leftarrow \arg\min_{\mathbf{w}, \xi \ge 0} \big\{\frac{1}{2}\sum_{j=1}^{k}\mathbf{w}_j^T\mathbf{w}_j + C\xi\big\}$ s.t. $\forall\, \mathbf{c} \in \mathcal{C}: \frac{1}{n}\,\mathbf{w}^T\sum_{i=1}^{n} c_i y_i \mathbf{x}_i \ge \frac{1}{n}\sum_{i=1}^{n} c_i - \xi$, where $\mathbf{w} = \{\mathbf{w}_1, \dots, \mathbf{w}_k\}$
6: for $i = 1, \dots, n$ do
7: $c_i \leftarrow \begin{cases} 1 & \text{if } \mathbf{w}_{y_i}^T\mathbf{x}_i - \max_{j=1,\dots,k,\; j \neq y_i}\{\mathbf{w}_j^T\mathbf{x}_i\} \le l \\ 0 & \text{otherwise} \end{cases}$
8: end for
9: $\mathcal{C} \leftarrow \mathcal{C} \cup \{\mathbf{c}\}$
10: $l \leftarrow l + a \cdot \max_{i=1,\dots,n}\{\mathbf{w}_{y_i}^T\mathbf{x}_i\}$
11: until $\frac{1}{n}\sum_{i=1}^{n} c_i - \frac{1}{n}\sum_{i=1}^{n} c_i\, y_i(\mathbf{w}^T\mathbf{x}_i) \le \xi + \epsilon$
12: return $(\mathbf{w}, \xi)$
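The constraint-selection loop (lines 6 to 8) and the threshold update (line 10) of Algorithm 5.3 can be sketched as follows. This is an illustrative NumPy fragment, not the SVM-PerfMulti code; the `next_threshold` helper approximates the "highest correctly classified score" by the highest own-class score:

```python
import numpy as np

def violated(W, X, y, l):
    """Lines 6-8 of Algorithm 5.3: c_i = 1 unless the own-class score
    beats the best rival score by more than the loss threshold l."""
    scores = X @ W.T                       # (n, k) classification scores
    own = scores[np.arange(len(y)), y]     # score under the correct class
    rival = scores.copy()
    rival[np.arange(len(y)), y] = -np.inf  # mask the correct class
    gap = own - rival.max(axis=1)
    return (gap <= l).astype(int)

def next_threshold(W, X, y, l, a):
    """Line 10: grow l by a fraction a of the highest own-class score."""
    own = (X @ W.T)[np.arange(len(y)), y]
    return l + a * own.max()
```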

Lines 6 to 8 of Algorithm 5.3 aim to satisfy line 4 of Algorithm 5.2 by finding the set of most violated constraints, i.e., the $\mathbf{y}'$ that maximizes $\Psi(\mathbf{x}, \mathbf{y}') = \sum_{i=1}^{n} y'_i\, \mathbf{x}_i$.

10 First prototype in Habib. 2008. Addressing Scalability Issues of Named Entity Recognition Using Multi-Class Support Vector Machines. International Journal of Computational Intelligence 4 (3):230-239.


The SVM-PerfMulti (Habib 2008) cutting plane algorithm identifies the most violated

constraints by maximizing the difference between the classification score of a data point

relative to its own class and the best score among all other classes. If the difference is

greater than a loss measure threshold, the data point is considered to be correctly

classified and therefore its associated constraint is not violated. Otherwise, the constraint

is violated. In other words, the greater the difference in classification score between the

correct classification and the next best, the more confident we are that the trained model is

capable of correctly classifying unknown data points. Since the cutting plane decision is

based on a slack error criterion, it falls in the category of a slack-rescaling algorithm.

However, we use the margin-rescaling option during the optimization process, where

individual weights are scaled by a fixed margin factor that is independent of the current

loss value.

The initial loss measure threshold is equivalent to that of a total loss, i.e., 100% loss

distributed among all data points. The loss threshold is increased after each iteration

thereby separating the correct vs. incorrect classifications by a larger distance. We use a

heuristic increment based on a fraction of the highest correctly classified score.

To find the $\mathbf{y}'$ that maximizes line 4 of Algorithm 5.2 – $\mathbf{y}' \leftarrow \arg\max_{\mathbf{y}' \in Y}\{\Delta(\mathbf{y}', \mathbf{y}) + \mathbf{w}^T\Psi(\mathbf{x}, \mathbf{y}')\}$ – a sufficient condition is to maximize $\Psi(\mathbf{x}, \mathbf{y}')$ by classifying the input vectors using the trained model after each optimization cycle. If an input example is incorrectly classified, its associated constraint is considered violated. In this case, the criterion for finding the most violated constraints would be:

$$c_i = \begin{cases} 1 & \text{if } \mathbf{w}_{y_i}^T\mathbf{x}_i \le \max_{j=1,\dots,k,\; j \neq y_i}\{\mathbf{w}_j^T\mathbf{x}_i\} \\ 0 & \text{otherwise} \end{cases} \qquad (5.22)$$


We will now show that all constraints found by the criterion in (5.22) – all incorrectly classified points – are also found by line 7 of Algorithm 5.3, where a constraint is considered violated if:

$$\mathbf{w}_{y_i}^T\mathbf{x}_i - \max_{\substack{j=1,\dots,k \\ j \neq y_i}}\{\mathbf{w}_j^T\mathbf{x}_i\} \le l, \quad \text{i.e.,} \quad \mathbf{w}_{y_i}^T\mathbf{x}_i \le \max_{\substack{j=1,\dots,k \\ j \neq y_i}}\{\mathbf{w}_j^T\mathbf{x}_i\} + l. \qquad (5.23)$$

If the correct classification score for data point i is less than the maximum score over all incorrect classes, $\mathbf{w}_{y_i}^T\mathbf{x}_i \le \max_{j \neq y_i}\{\mathbf{w}_j^T\mathbf{x}_i\}$, it will also be less than the maximum score plus a loss threshold l, $\mathbf{w}_{y_i}^T\mathbf{x}_i \le \max_{j \neq y_i}\{\mathbf{w}_j^T\mathbf{x}_i\} + l$. This means that all violated constraints that need to be found in order to satisfy line 4 of Algorithm 5.2 are also found by line 7 of Algorithm 5.3, even if no loss threshold comparison is performed ($l = 0$).

However, adding the loss threshold comparison will cause a subset of the correctly classified data points to be flagged as incorrectly classified and added to the violated constraints. The loss threshold term l therefore makes the correct classification criterion stricter, requiring that the correct example be as far as possible from all other classes. Moreover, by incrementing l after each optimization cycle, the classification is further refined and the separation between classes is widened.
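The subset claim – every constraint flagged by (5.22) is also flagged by (5.23) for any $l \ge 0$ – can be checked numerically on random scores (the data below are hypothetical, generated purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, l = 200, 4, 0.7
scores = rng.normal(size=(n, k))       # hypothetical w_j . x_i scores
y = rng.integers(0, k, size=n)         # hypothetical correct classes

own = scores[np.arange(n), y]
rival = scores.copy()
rival[np.arange(n), y] = -np.inf
best = rival.max(axis=1)

misclassified = own <= best            # criterion (5.22), i.e. l = 0
loss_violated = own <= best + l        # criterion (5.23)

# Every constraint found by (5.22) is also found by (5.23):
assert np.all(loss_violated[misclassified])
```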

5.3.2 Boosting Classification Performance

We hypothesize that the stricter correct classification decision has the effect of boosting the classification performance, which is confirmed by the empirical

results presented in Chapter 8. Widening the gap between the correct and incorrect

classification for an example has the effect of boosting the weight of the positive


examples for the correct class. As part of the investigative experiments presented in

Chapter 4, one of the single class experiments using SVM-Light11 assessed the effect of

boosting the weights of positive examples by some multiplier factor. The experiment

results are presented in Table 4.3. Using a preset boosting factor led to improved performance measures up to a certain point, after which performance decreased with higher boosting factors.

In Algorithm 5.3, we do not apply a preset boosting factor but rather use a fraction of

the maximum correctly classified score to increase the loss threshold measure. This is a

heuristic value indicating the highest sphere of correct scores. One may use other

heuristic measures, such as the current value of the loss function ),( yyGH . However, we

observed that using the maximum correct score led to the best and more consistent results

with different experimental datasets.

Algorithm 5.3 falls under the category of slack-rescaling algorithms because the

violation criterion is based on the difference between the correctly classified score and the

incorrect ones. We use slack rescaling for finding the most violated constraints yet use

margin rescaling for the optimization phase. According to the learning constraint in

(5.15), all learned weights are limited by the value of the regularization parameter C

which governs the trade-off between the slack minimization and the margin

maximization. Using the performance boosting mechanism in Algorithm 5.3 leads to a

faster stabilization of both optimization objectives thereby causing the trained model to

reach a good out-of-the-box performance independent of the value of C when binary

features are used.

11 Joachims. 2002. Learning to Classify Text Using Support Vector Machines. Norwell, MA: Kluwer Academic.


5.4 Reducing Memory Requirements

Using the SVM structural formulation for either binary or multi-class learning, the

training time is improved by combining feature vectors into vector(s) of most violated

constraints. The generated vectors require larger memory as the size of each constraint

vector is O(f) in the binary case and O(kf) in the multi-class case, where f is the number

of features in the training set and k is the number of classes. For example, for a training set with 1,000,000 features and 10 classes – and assuming 8 bytes per feature for the feature number and its weight – a binary constraint vector may need up to 8MB of memory while a multi-class vector may need up to 80MB. These estimates constitute a worst-case

scenario, where all features are represented in each vector for all classes. In practice,

using the JNLPBA-04 training dataset with over a million features and 11 classes, the

multi-class constraint vector size was about 0.5MB.

Although the support vector size in multi-class training could be up to k times the corresponding size in the binary case, experiments found that the multi-class overall

memory requirements using SVM-PerfMulti do not approach this worst case possibility.

Figure 6.3 compares the actual memory consumption for several binary and multi-class

experiments. Since the memory needed depends on the number of constraints – or in

other words, the number of learning iterations – reducing the number of iterations will

lead to a lowered memory consumption. Algorithm 5.3 accelerates the learning process

thereby reducing the total number of iterations (and support vectors).

In order to ensure that all memory allocations performed during the learning process

are necessary, we performed extensive process time and memory profiling as part of the

empirical analysis. We identified one area of improvement in SVM-Struct V3.0 where the


trained model is copied at the end of the learning iterations. With the high amount of

memory needed for a trained model using a large high-dimensional dataset, experiments

using larger data sizes failed due to lack of memory, and the overall program aborted without saving the trained model although the actual learning had been completed.

Another area of improvement that we identified is to alter the way that SVM-Light shrinks

the working set of constraint vectors by using a negative number of iterations-to-shrink.

This alteration has the effect of lowering the number of support vectors based on

inconsistency, independent of how long the support vector has been deactivated. The final

effect is a reduced number of support vectors in the working set.

Having addressed ways to reduce the necessary online memory needed for the

learning process, additional reduction may be achieved by using a different medium to

store examples and/or support vectors. In the following chapter, we describe a database-

supported architecture to alleviate the online memory needs as well as provide a user-

friendly framework for SVM learning and classification.


Chapter 6

Database Framework for SVM Learning

In this chapter, we present an SVM solution assisted by a special database schema

and embedded database modules. The solution incorporates the learning and optimization

algorithms of SVM-Perf and SVM-PerfMulti. The database schema design allows storage

of input data, evolving training model(s), pre-computed kernel outputs and dot products,

and output data. The aim of this approach is to improve scalability by reducing the online

memory requirements and to foster SVM usability by providing a framework for easy

reusability and manageability of the learning environment and experimentation results.

Using a relational database to support SVM has been attempted in (Rüping 2002) and

a more complete yet different solution is included in the Oracle 10g data mining product

(ODM) (Milenova et al. 2005). MySvmDB (Rüping 2002) addresses the high memory

requirements by using a relational database to store the input data and parameters. It does

not handle the computational time limitations. In fact, communicating constantly with the

database system is known to negatively impact the performance due to the cost of

fetching stored data.

The only SVM database implementation that tackles usability and scalability issues is

Oracle 10g commercialized SVM integration into the Oracle Data Mining (ODM)

product (Milenova et al. 2005). Oracle’s approach to reducing the number of data points

considered for training uses “adaptive learning” where a small model is built then used to

classify all input data. New informative data points are selected from all remaining input

data and the process is repeated until convergence or until reaching the maximum


allowed number of support vectors. Our approach does not reduce the input data size in

order to evaluate the efficacy of the database-embedded modules in providing a scalable

solution.

Oracle’s multi-class implementation uses a One-Against-All classification method

where several binary machines are built and scoring is performed by voting for the best

classification. The number of binary machines in this case is equal to the number of

classes in the training data. Our SVM-MultiDB approach uses All-Together training and

classification where only one machine is built and used for classification. The new

embedded database modules supporting both the single class case as well as the All-

Together multi-class case can be used to implement the other multi-class learning

approaches combining binary machines, if needed.

Building a growing list of previously identified and annotated named entities will be

made possible by the database repository, which would provide a valuable resource to

constantly improve the classification performance. The evolving gazetteer list can be

used during preprocessing or post-processing to annotate and/or correct the classification

of newly discovered named entities thereby boosting the overall performance.

6.1 SVM Database Architecture

We considered the potential choices for which database management system (DBMS)

to use to carry out the research plan. We decided to select an open-source DBMS that

would ease the potential extension into a service-oriented architecture to be used

by other users and researchers. The two main open-source database management systems


are MySQL12 and PostgreSQL13. Our selection criteria included the adherence to

standards, the support for internal and embedded functions in different programming

languages (preferably C), as well as performance and scalability. We decided to use

PostgreSQL due to its rich features, adherence to standards, and the flexible options to

extend the DBMS via internal or embedded functions. While PostgreSQL is known to scale

up and benchmarks have shown its scalability potential up to several terabytes of data,

performance was an issue in the past. MySQL lacks some of the rich features of

PostgreSQL and does not provide the same flexibility in extending the database server

with embedded modules, yet it used to have a superior performance. Newer versions of

both products are moving towards compensating for the potential weaknesses. MySQL

5.0 implements the previously missing features such as views and subqueries, while

PostgreSQL 8.x versions focused on enhancing performance and scalability. As a result

of the improved performance potential of the newer version of PostgreSQL and its

established rich features and better adherence to standards, we decided to select

PostgreSQL as the database management server supporting the proposed architecture.

In order to reduce the communication overhead with the database backend, we extend

the database server with embedded C-Language functions. This also provides a better

integration of all components. Database triggers are used for frequently updated values to

ensure data integrity and improve the potential parallelization of the learning and

database processes. Figure 6.1 presents the architecture used in the current

implementation. Pre-processing (feature extraction) and post-processing (evaluation)

12 MySQL Open Source Database. MySQL AB. Available from http://www.mysql.com. 13 PostgreSQL Open Source Database (8.3.3). PostgreSQL Global Development Group, 1996 – 2008. Available from http://www.postgresql.org.


modules are kept outside of the database modules for simplicity. Additional supporting

modules exist to import/export training and test examples, import/export training

model(s), and trigger functions to compute derived data fields. To improve the usability

of the SVM solution, a web-based user-friendly interface may be provided to allow the

user to define new learning problems and parameters, import/export training and testing

data and/or model(s), and monitor the execution of the learning process.

Figure 6.1 – Embedded Database Architecture

For the sake of simplicity, we will refer to the binary classification component of the

database implementation as SVM-PerfDB and will refer to the multi-class component as

SVM-MultiDB. The embedded database modules are written in C-language using


PostgreSQL’s Server-Side Programming Interface (SPI) to access and manipulate the

data. The main objectives of using a database-supported solution are:

# Use improved learning algorithms in order to reduce the training time. This is

achieved by using SVM-Perf and SVM-PerfMulti as a basis of the implementation.

# Reduce the online memory requirements by storing input examples and generated

constraint vectors. A known concern of using a database in place of memory-

based data structures is the potentially negative impact on computational time due

to the need for frequent access to permanent storage. To remedy this adverse effect, one needs to minimize the need to refetch data, possibly by storing

intermediate results of smaller size in memory.

# Improve usability of the SVM solution and provide a practical framework for

SVM learning and classification.

6.2 SVM-MultiDB Database Schema

The database schema design of SVM-MultiDB aims to provide a practical framework

for binary and multi-class SVM learning and classification. Figure 6.2 represents the

main database schema. A detailed data dictionary is provided in Appendix A.

The objectives of the database schema are the following:

# Reduce online memory needs of a learning exercise by storing input examples and

generated constraint vectors.

# Provide a way to store training and/or testing example datasets independent of a

learning exercise.

# Be able to define multiple SVM experiments using the same training and/or

testing datasets.


# Be able to use a subset of existing datasets for a learning experiment. This would

be useful to conduct a grid search of the best learning parameters.

# Be able to reuse the same SVM exercise definition with different learning

parameters.

# Be able to label the same example dataset differently in different learning

exercises. For example, the same dataset may be used for binary classification of

different named entities or for multi-class classification using a different number

of classes as part of individual learning exercises without the need to reload the

example dataset.

# Provide a way to store intermediate kernel evaluations and dot products of

constraint feature vectors.

# Provide a way to store the generated most violated constraint vectors and learned

model(s) for future classification use and potentially for new incremental learning

algorithms.

# Be able to classify different test datasets at any time using existing learned

model(s).

# Easily maintain the definition of learning experiments and parameters and

examine their results.

# Provide a common environment for data exchange between learning and

classification modules, which can serve as a basis for the recommended service-

oriented-architecture presented in Chapter 10.

# Store textual classification labels in addition to their numeric representation to

assist in building a growing dictionary of named entities.


Figure 6.2 - SVM Learning Data Model


As presented in Figure 6.2, the main table definitions supporting SVM learning and

classification are the following:

# Example_Set: Defines a new set of training and/or testing examples.

# Example: Individual example input vectors that belong to a given example set.

Note that the label stored is the original textual label, e.g., B-protein, and not a

given class number in order to facilitate the use of class identifications within

different exercises.

# Label_Set: Defines a set of class labels.

# Label: Individual textual to class number mapping that belongs to a given

label_set.

# Kernel: A lookup table of valid kernel types.

# SVM: Defines a selection of all or part of an example set and a given label set, to

be used for a learning exercise.

# Learn_Param: Defines a set of learning parameters.

# SVM_Learn: Defines a specific learning exercise for an SVM definition and a set

of learning parameters.

# SVM_Model: A set of generated constraints belonging to a given learning

exercise. Support vectors that are selected for the final learned model are marked

using the selected Boolean field. Computed alphas of the final learned model for

each selected support vector are stored.

# SVM_Model_Kernel: This table may be used to store kernel evaluations (dot

products for the linear case) in order to reduce computational redundancy and the

need to refetch feature vectors for kernel computation.


# SVM_Classify: Defines a classification exercise.

# Classified_Example: Stores computed predictions of classified examples for a

given classification exercise.

6.3 Embedded Database Functions

The following embedded server-side database functions are provided to perform the

main learning and classification tasks as well as other supporting tasks.

# import_examples(example_set_id, filename): Used to load a set of examples for

the given example set ID from a file into the database. Each example appears on a

separate line and contains the original word, an array of features expressed as

number:weight pairs, and a textual label classification of the word. In order to

provide flexibility in the label/class translation and to allow reuse of the same

example set in different learning/classification tasks with different class

numbering – for instance, using the same example set in binary classification of

varying named entities and/or for multi-class classification – we import the original label text for a given example, e.g., B-protein.

# export_examples(example_set_id, filename): Export the examples belonging to

the given example set ID from the database to a text file. Examples are written in

a format suitable for re-import into the database.

# export_examples(example_set_id, label_set_id, filename): Export the examples

belonging to the given example set ID from the database to a text file in SVM-

Light format. Translate the textual labels to class numbers according to the

numbering scheme defined by the given label set ID. Note that PostgreSQL

supports polymorphism so the same function is usable with different arguments.


# update_example_set(example_set_id): Re-compute all computed fields in the

Example_Set table and the SVM table for all pre-defined SVM entries that use the

given example set ID. Fields such as max_featnum, num_examples, num_labeled,

num_unlabeled, etc., are re-computed. Note that this function is mostly redundant

as all computed fields are automatically updated on insert and/or update of the

Example_Set table or the SVM table using functions and triggers.

# svm_learn_trig(): Trigger function invoked on insert and/or update of data in the

SVM table. The function is associated with a before trigger named tlearn_before

that updates all computed fields prior to carrying out the insert/update operation.

# svm_learn(svm_id, learn_param_id): Perform the learning operation defined by

the given SVM ID using the learning parameters defined by the given learning

parameters ID. Upon completion of the learning task, the trained model will be

stored in the database and temporarily saved in an external file for optional use

with external classification tasks. The beginning and ending timestamps for the

learning operation are also logged in the database.

# svm_classify(svm_id, learn_param_id, example_set_id): Classify the given

example set using the machine defined by SVM ID and previously trained using

the given learning parameters ID.

# export_model(svm_id, learn_param_id): Export the trained model for SVM ID

using the set of learning parameters ID to an external file.

These functions may be used either through an interactive psql session or via a user

interface, preferably web-based. All direct database access operations are grouped in a

single source file – svm_db.c – in order to facilitate porting of the database operations to


a different relational database management server or to a client-based environment. The

user-accessible database functions and modifications to the C source code for SVM

learning refer to generic database access routines defined in svm_db.c.

6.4 SVM-MultiDB Usage Example

A typical usage example of the database-integrated SVM learning module(s) is as

follows (some definition steps need not be in a particular order):

1. Define new example set(s) using unique name ID(s).

2. Import examples for pre-defined example set ID(s).

3. Define a new set of class labels using a unique label set ID.

4. Define a new SVM learning environment using an existing example set ID and an

existing label set ID. May optionally indicate a starting and ending example

number if the learning task only uses a subset of the available examples in the

example set.

5. Define a new set of learning parameters to be used by one or more SVM learning

environments and assign a unique learning parameters ID to it.

6. Define a specific learning task using a pre-defined SVM learning environment

and a pre-defined set of learning parameters.

7. Carry out a pre-defined learning task and store the trained model in the database.

8. Define a specific classification task using a previously trained model identified by

an SVM learning environment ID and a set of learning parameters ID, and

examples from a pre-stored example set.


9. Perform a pre-defined classification task and store the classified examples in the

database. May optionally map the computed prediction values to their

corresponding labels according to the label set defined in the SVM environment.

6.5 Improved SVM Solution Usability

The embedded SVM database solution promotes the usability of SVM learning and

classification tasks by providing the following:

# Ability to define several SVM learning environments using the same example set.

# Ability to define sets of learning parameters for use with different learning tasks.

# Multiple label/class translation sets may be defined for later use with one or more

example sets in different learning environments.

# Easily repeat the learning task with different parameters without losing the

learning/classification results from a previous learning exercise. With the ability

to define SVM environments using subset(s) from the same example set and/or

different learning parameters, it is easy to develop additional supporting modules

to perform cross-validation, grid search of parameters, kernel selection, etc.

# Since classified examples are stored in the database along with their prediction

scores and/or classification labels, these results may be used to build a growing

named entities dictionary, improve existing trained models, perform post-

processing tasks to evaluate the model and/or boost the classification

performance, to name a few possibilities.

# Building user-friendly interfaces to the database environment to carry out the

definition of tasks and environments, perform the tasks, and monitor their results,


is as easy as developing a simple database application interface in a web-based or other environment.

6.6 Tradeoff of Training Time vs. Online Memory Needs

Using the SVM structural formulation for either binary or multi-class learning, the

training time is improved by combining feature vectors into vector(s) of most violated

constraints. The generated vectors require larger memory as the size of each constraint

vector is O(f) in the binary case and O(fm) in the multi-class case, where f is the number

of features in the training set and m is the number of classes. For example, for a training set with 1,000,000 features and 10 classes – and assuming 8 bytes per feature to store the feature number and its weight – a binary constraint vector may need up to 8MB of memory while a multi-class vector may need up to 80MB. These estimates constitute a worst-case

scenario, where all features are represented in each vector for all classes. The support

vectors are represented by a sparse vector of number:weight pairs where only non-zero

weight features are stored. Since the constraint vectors are composed of a weighted

summation of several example feature vectors, they are much denser than individual

example vectors, especially with larger training datasets.

In practice, using the JNLPBA-04 training dataset with over a million features and 11

classes, the multi-class constraint vector size was about 0.5MB. Figure 6.3 presents the

online memory requirements with varying training data size for binary training (using

regularization factor C=0.01, 0.14, and 1.0) as well as for the multi-class training. Note

that although the multi-class constraint vector size is potentially 11 times larger in this

experiment, the actual memory needs are less than this estimate, and not much larger than

that needed for binary classification with a larger regularization factor C.


Figure 6.3 - SVM-Perf and SVM-PerfMulti Memory Usage vs. Data Size

(Memory size in MBytes vs. number of training examples, 0 to 500,000; from bottom to top: SVM-Perf C=0.01, SVM-Perf C=0.14, SVM-Perf C=1.0, and SVM-PerfMulti)

By examining the time spent in different parts of the learning algorithm, it is noted

that about 50% of the time is spent computing argmax to find the most violated

constraints using the original input vectors, and the other 50% is spent optimizing the

model using the constraint vectors. We will examine the impact of storing each vector

type in the database on the overall training time.

6.6.1 Effect of Fetching Examples from Database

SVM-MultiDB provides a configurable example caching with three different options:

no caching (i.e., examples are always fetched from the database), full caching (all

examples are pre-fetched into memory), and a limited cache size where a predefined

number of example records is fetched as needed. As expected, increasing the cache size

minimizes the time impact up to a certain size after which we see some impact due to

longer prefetch time. However, since the example vectors are requested only once and in

a sequential manner to compute the most violated constraint, keeping example vectors in the database and fetching them as needed had a minimal impact on time in the binary case and almost no impact in the multi-class case. A


comparison of the training time using an example cache size of 500 is presented in Figure

7.1 for the binary case, and in Figure 8.3 and Figure 8.4 for the multi-class case.

The impact of the examples cache size on the overall computational time is presented

in Figure 6.4 for the binary case, and in Figure 6.5 for the multi-class case. The binary

experiments are conducted using a training size of 250,000 examples, C=0.01, and cache

sizes varying from full cache, no cache, and sizes up to 10% of the training size. The

multi-class experiments use a training data size of 100,000 examples, C=0.01, and same

variation of cache sizes as the binary case.

Figure 6.4 - SVM-PerfDB Training Time vs. Examples Cache Size 14

(Training time in seconds vs. examples cache size)

Since example vectors are accessed during the process of finding the most violated

constraints and computation of argmax and not during the learning and optimization

process, the effect of the example cache size is more noticeable with smaller values of C

in the binary case and decreases as the value of C increases. Using a small example

cache size offers the best trade-off between computational time and memory

14 The negative example cache size -1000 denotes full memory caching, i.e., all examples are pre-fetched and loaded into online memory. Full caching is denoted by a negative number just for visibility on graph.


consumption, as larger cache sizes would increase the time needed to replenish the cache in

each iteration.

Figure 6.5 - SVM-MultiDB Training Time vs. Examples Cache Size15

(Training time in seconds vs. examples cache size)

In the multi-class case, we notice that the example cache size has less impact on the

overall computational time, as presented in Figure 6.5, since more time is spent in the

learning and optimization phase than in the computation of argmax and the identification

of the most violated constraints. Therefore, using a small cache size would reduce the

online memory needs while providing the same effect on computational time.

6.6.2 Effect of Fetching Support Vectors from Database

Constraint (support) vectors occupy more memory than example vectors, so maintaining them in the database would yield substantial savings in online memory. However, the existing C implementations of SVM-Perf and SVM-PerfMulti require frequent access to

the feature vectors during the optimization process, mostly to compute kernel products

and update linear weights. Moreover, using a variable support vector cache size may not

15 The negative example cache size -1000 denotes full memory caching.


be useful due to the frequent non-sequential access to the support vectors. A direct initial port of the C implementations to the database, with no in-memory support vector cache, negatively impacted the overall training time, as expected. An

efficient database implementation requires optimizing access to the constraint feature

vectors by caching kernel evaluations in memory and minimizing the number of loops

used to compute other intermediate results. An initial test of kernel product caching (with

no constraint caching) resulted in about 50% time reduction.

Using larger cache sizes for constraint (support) vectors is not particularly beneficial

since more time would be required to replenish the cache during each learning iteration.

In our experiments, we observed that not caching the support vectors and fetching them

one vector at a time had the same effect as keeping a small cache size. Since the

constraint vectors are potentially very large in size, using no cache would provide the

highest reduction in online memory needs while not impacting the computational time.

The key to achieving the best trade-off is to minimize the number of loops that require

access to the support vectors, ideally limiting such access to once per learning iteration.

Kernel products caching in active memory and an incremental computation of trained

weights make this objective possible, especially with a linear kernel.
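The idea of caching kernel products in active memory, so that each support vector ideally needs to be fetched from the database only on a cache miss, can be sketched as follows. The class and its interface are illustrative assumptions, not the SVM-PerfDB code; `fetch_vector` stands in for a database read of one sparse feature vector.

```python
def linear_kernel(x, y):
    """Dot product of two sparse vectors given as {feature: value} dicts."""
    if len(y) < len(x):
        x, y = y, x                    # iterate over the shorter vector
    return sum(v * y.get(f, 0.0) for f, v in x.items())

class KernelCache:
    """Illustrative sketch: memoize kernel products between constraint
    vectors so each pair is computed (and its vectors fetched) once."""

    def __init__(self, fetch_vector):
        self.fetch_vector = fetch_vector   # database read, one vector per call
        self.products = {}

    def product(self, i, j):
        key = (min(i, j), max(i, j))       # symmetric: K(i, j) == K(j, i)
        if key not in self.products:
            xi = self.fetch_vector(i)      # database reads happen only on a
            xj = self.fetch_vector(j)      # cache miss, i.e. once per pair
            self.products[key] = linear_kernel(xi, xj)
        return self.products[key]
```

With a linear kernel, the cached products combine with incremental weight updates so that subsequent iterations never re-read the underlying vectors for known pairs.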

In Chapter 7 and 8, we report the results of a series of scalability experiments using

existing SVM implementations as well as our newly developed standalone and database-

supported solution. Chapter 7 examines the results of single class experiments classifying

protein named entities in the JNLPBA-04 biomedical data. In Chapter 8, we compare the

outcomes of multi-class experiments using the same biomedical datasets where the


following five named entity types are discovered: protein, DNA, RNA, cell line, and cell

type. In addition, we apply the existing and new multi-class solutions to two additional

datasets, namely the CoNLL-02 Spanish and Dutch challenge data, aiming to classify general named entities such as persons, locations, and organizations.


Chapter 7

Single-Class Experiments

In this chapter, we examine the results of the first set of scalability experiments using

single-class SVM. These experiments use the same training and test datasets described in

Chapter 4 and Appendix B. The datasets represent a real-world problem, biomedical named entity recognition: identifying the names of proteins, DNA, RNA, cell lines, and cell types in biomedical abstracts. The approach used promotes language and

domain independence by eliminating the use of prior language-specific and domain-

specific knowledge. Pre-processing of the training and test datasets is limited to

extracting morphological and contextual features describing words in the biomedical

abstracts and representing each token with a high-dimensional binary vector, as

described in Chapter 4. The input dimensionality of the complete training data exceeds a

million features. The training data is composed of 492,551 examples and the test data

includes 101,039 tokens.
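As an illustration of the kind of morphological and contextual binary features involved, the sketch below extracts a few for one token. The feature names and context window are assumptions for illustration; the actual feature set is defined in Chapter 4.

```python
def token_features(tokens, i):
    """Sketch of morphological and contextual binary features for token i.
    Feature names and the +/-1 context window are illustrative, not the
    thesis feature set."""
    w = tokens[i]
    feats = {
        "word=" + w.lower(),
        "is_capitalized" if w[:1].isupper() else "no_cap",
        "has_digit" if any(ch.isdigit() for ch in w) else "no_digit",
        "has_hyphen" if "-" in w else "no_hyphen",
        "suffix3=" + w[-3:].lower(),
    }
    # contextual features: neighboring words
    if i > 0:
        feats.add("prev=" + tokens[i - 1].lower())
    if i + 1 < len(tokens):
        feats.add("next=" + tokens[i + 1].lower())
    return feats

def to_binary_vector(feats, feature_index):
    """Map the active feature names to a sparse binary vector: the sorted
    list of indices whose value is 1; all other dimensions are 0."""
    return sorted(feature_index[f] for f in feats if f in feature_index)
```

Because only active features are stored, the million-plus-dimensional vectors remain compact: each token carries just the handful of indices that are set.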

The scalability experiments train single-class support vector machines to identify

protein names using chunks of the training dataset with increasing size. The trained

model is then used to classify protein named entities in the complete test dataset. The

training time is noted in each experiment as well as the number of support vectors and the

accuracy measures achieved. Although the focus of this work is to train an SVM to

identify all named entities at the same time using an All-Together multi-class approach,

we begin our experiments with a set of single-class experiments to accomplish the

following objectives:


# Assess the ability of SVM to recognize patterns in high-dimensional input space

in the simple binary case prior to an extended multi-class assessment.

# Establish a comparison baseline for the classification performance of one

representative class, in this case the protein named entity class. The All-Together

multi-class performance of individual classes is expected to be superior to that

achieved using several binary machines, as depicted in Figure 3.4.

# Compare the training time of the n-slack and 1-slack SVM formulations to evaluate the potential gain that could be achieved by developing a multi-class

instantiation using the 1-slack formulation. In the binary case, SVM-Light16

represents the n-slack formulation while SVM-Perf17 makes use of the 1-slack

SVM formulation.

# Since our database framework incorporates both the single-class and multi-class

SVM implementations, we need to evaluate the performance of the database-

supported implementation and compare it to the non-database implementation.

The objective is to achieve the same performance measures in both implementations and to reduce the online memory requirements while minimizing the impact of database access on the overall training time.

In the following sections, we report and examine the results of several sets of

experiments conducted using increasing training data sizes, which include:

16 Joachims. 2002. Learning to Classify Text Using Support Vector Machines. Norwell, MA: Kluwer Academic.
17 Joachims. 2006. Training Linear SVMs in Linear Time. In Proc. of the ACM Conference on Knowledge Discovery and Data Mining (KDD), Aug 20-23: 217-226.


# Single-class experiments identifying protein names using Thorsten Joachims’

popular SVM-Light (Joachims 2002, 1998a, 1998b) with regularization factors C=0.01 and C=1.0.

# A set of experiments using 150,000 training examples and varying values of C

ranging from 0.01 to 1.0, at 0.01 increments, i.e., a total of 100 tests. These

experiments aim to examine the impact of C on training time and to tune the parameter value for best performance, which was found to occur at C=0.14.

# A set of performance validation experiments using varying training data sizes and

different C values, where the performance variation was found to be consistent.

# Single-class experiments identifying protein names only using the new SVM

implementation, SVM-Perf (Joachims 2006, 2005; Tsochantaridis et al. 2004;

Tsochantaridis et al. 2005), and a regularization factor C=0.01, 0.14, and 1.0.

# Single-class experiments identifying protein names only using our database

embedded solution, SVM-PerfDB, and a regularization factor C=0.01, 0.14, 1.0.

# The training data chunks range from 1,000 examples to 492,551 examples (the

complete training dataset). Each set of experiments consists of 51 tests.

All experiments use a linear kernel and a margin error of 0.1. The tests run mostly on

an Intel Core 2 quad-processor 2.66 GHz machine and some run on a Xeon quad-

processor 3.6 GHz machine. Running the same test on both machines completed in

similar training time. We summarize the experimental results and analyze the findings in the following sections.


7.1 Single-Class Scalability Results

Using SVM-Light (Joachims 2002, 1998a, 1998b), a single-class support vector

machine is trained to recognize protein name sequences. The trained machine is then used

to classify proteins in the test data. Since no pre-processing was performed on the

training and testing data besides feature extraction, the positive examples in the datasets

remained scarce. Training the SVM-Light machine with the complete training dataset and

a regularization factor C=0.01 completed in about 28.5 minutes. The recall, precision,

and F-score achieved in this case are 62.62, 56.07, and 59.17 respectively. Increasing C

to 1.0 raised the training time to about 269 minutes, and improved the accuracy measures

to 68.76, 58.83, and 63.41 respectively.
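Recall, precision, and F-score are linked by the harmonic mean, so any reported triple can be cross-checked; a minimal sketch:

```python
def f_score(recall, precision):
    """Balanced F-score (F1): the harmonic mean of recall and precision,
    on the 0-100 percentage scale used throughout this chapter."""
    return 2.0 * recall * precision / (recall + precision)

# Cross-check of the SVM-Light results above:
# C=1.0 gives recall 68.76 and precision 58.83, so F is about 63.41
```

Small discrepancies in the last decimal place can arise from the rounding of the reported recall and precision values.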

The same set of experiments is repeated using SVM-Perf (Joachims 2006, 2005;

Tsochantaridis et al. 2004; Tsochantaridis et al. 2005), which improves training time of

linear machines to be linear w.r.t. the training data size. The training time improvement

using SVM-Perf is several orders of magnitude as compared to that using SVM-Light,

with the same classification results when trained with equivalent learning parameters.

Since SVM-Perf optimizes vectors based on a linear combination of all training

examples, the value of C to be used in order to get equivalent results to those of SVM-

Light is computed as follows:

C_perf = C_light * training_data_size / 100        (7.1)

SVM-PerfDB embeds SVM-Perf into the database environment. To simplify its usage,

the user selection of the C factor is based on the SVM-Light value. This way, the user selects only one value, independently of the training data size. SVM-PerfDB takes care

of adjusting the regularization factor C according to Equation (7.1).
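Equation (7.1) is simple enough to apply mechanically; the function name below is illustrative, not part of SVM-PerfDB:

```python
def svmlight_to_svmperf_c(c_light, n_training_examples):
    """Convert an SVM-Light regularization factor to the equivalent
    SVM-Perf value per Equation (7.1): C_perf = C_light * n / 100."""
    return c_light * n_training_examples / 100.0

# e.g. C=0.01 with the complete 492,551-example training set
c_perf = svmlight_to_svmperf_c(0.01, 492551)   # about 49.26
```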


Figure 7.1 compares the training time of SVM-Light, SVM-Perf, and SVM-PerfDB with the same data and equivalent learning parameters. The training time using SVM-Light is polynomial, O(n^2), while being linear using SVM-Perf and SVM-PerfDB.

Figure 7.1 - SVM-Light, SVM-Perf, and SVM-PerfDB Training Time vs. Training Data Size

SVM-Light (Top), SVM-Perf (Bottom), and SVM-PerfDB (Middle)

The number of support vectors using SVM-Light is O(n^0.8) w.r.t. the training data size.

However, using SVM-Perf, the number of support vectors is only a very small fraction of the training data size and increases only slightly as the data size grows. The reduced number

of support vectors is the main basis for the improved training time of SVM-Perf. The

relationship between the number of support vectors and the training time is observed to

be the same as that of the training data size, i.e., training time is polynomial w.r.t. the

number of support vectors with SVM-Light, and linear with SVM-Perf. Figure 7.2 and

Figure 7.3 reflect the variation in the number of support vectors in the trained model with

the number of examples in the training dataset. Using all 492,551 training examples,

SVM-Light produced 74,306 support vectors vs. 32 vectors with SVM-Perf.
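The empirical growth rates quoted in this section (training time roughly quadratic in the data size for SVM-Light, support vectors roughly O(n^0.8)) can be estimated from measurement pairs by fitting the slope of log(value) against log(n); a stdlib-only sketch:

```python
import math

def scaling_exponent(sizes, values):
    """Least-squares slope of log(values) vs. log(sizes): an estimate of p
    in the power law value ~ n**p. A sketch for checking empirical O(n^p)
    claims against (size, measurement) pairs."""
    xs = [math.log(n) for n in sizes]
    ys = [math.log(v) for v in values]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

Feeding in the measured training times or support vector counts against the training sizes recovers the exponent of the best-fitting power law.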


Figure 7.2 - SVM-Light Number of Support Vectors vs. Training Data Size

Figure 7.3 - SVM-Perf Number of Support Vectors vs. Training Data Size

7.2 Effect of Regularization Factor C

Having observed enhanced accuracy measures with different

training error vs. margin error trade-off factor values, and the associated impact on

training time, we conducted a series of experiments investigating the effect of the trade-

off factor C on training time using SVM-Perf. Since the training time is consistently

longer with SVM-Light than it is with SVM-Perf, one can easily extrapolate the impact of


the trade-off factor C on SVM-Light training time. The variation of training time w.r.t. the

variation of the regularization factor C is presented in Figure 7.4. The training time is

found to scale as O(C).

Figure 7.4 - SVM-Perf Training Time vs. Variation of C-Factor

As the variation of C clearly impacts the training time, we investigated its impact on

various accuracy measures in order to assess whether it is possible to perform a grid

search for tuning the learning parameters using a subset of the training dataset. A set of

experiments using different training data sizes and a range of C values is conducted and

the accuracy measures achieved are noted. Table 7.1 presents a sample of these results for several training data sizes, including the complete set, and selected values of C. These

experiments show that the accuracy improvement with varying C is consistent with

different training data sizes, where the accuracy measures reach their best values within a

very close range of C values (in this case C values of 0.12 and 0.14 led to the best

performance measures). Using the same learning parameter for multi-class training

reflects similar consistency in accuracy enhancement.


Table 7.1 - Consistent Performance Change with Varying C-Factor 18

Training Data Size   Active Features   C      Recall / Precision / F-Score
10,000               67,023            0.01   27.71 / 35.76 / 31.22
                                       0.12   48.17 / 44.61 / 46.32
                                       0.14   48.83 / 45.02 / 46.85
                                       0.20   48.75 / 44.60 / 46.58
                                       1.00   48.40 / 44.34 / 46.28
25,000               132,126           0.01   32.72 / 38.55 / 35.40
                                       0.12   53.74 / 48.87 / 51.19
                                       0.14   53.51 / 48.66 / 50.97
                                       0.20   53.58 / 48.28 / 50.79
                                       1.00   52.29 / 47.54 / 49.80
50,000               219,678           0.01   36.97 / 43.38 / 39.92
                                       0.12   57.48 / 52.91 / 55.10
                                       0.14   57.16 / 52.81 / 54.90
                                       0.20   57.00 / 52.28 / 54.54
                                       1.00   55.24 / 49.88 / 52.42
100,000              367,711           0.01   44.94 / 47.17 / 46.02
                                       0.12   62.97 / 56.58 / 59.61
                                       0.14   62.81 / 56.21 / 59.33
                                       0.20   61.53 / 55.48 / 58.35
                                       1.00   58.61 / 52.51 / 55.39
150,000              493,206           0.01   51.63 / 50.32 / 50.96
                                       0.12   65.31 / 58.21 / 61.56
                                       0.14   65.41 / 58.25 / 61.62
                                       0.20   64.31 / 57.78 / 60.87
                                       1.00   61.05 / 54.46 / 57.57
250,000              728,402           0.01   58.32 / 53.68 / 55.90
                                       0.12   69.11 / 60.09 / 64.29
                                       0.14   69.05 / 60.01 / 64.21
                                       0.20   68.11 / 59.11 / 63.29
                                       1.00   65.03 / 56.88 / 60.69
350,000              920,538           0.01   61.03 / 54.80 / 57.75
                                       0.12   71.62 / 61.30 / 66.06
                                       0.14   71.40 / 60.98 / 65.78
                                       0.20   70.99 / 60.67 / 65.42
                                       1.00   67.26 / 57.96 / 62.26
492,551              1,161,970         0.01   62.72 / 56.12 / 59.23
                                       0.12   73.04 / 62.48 / 67.34
                                       0.14   72.72 / 62.16 / 67.03
                                       0.20   72.52 / 61.81 / 66.74
                                       1.00   68.06 / 59.29 / 63.37

18 These experiments were conducted on a 32-bit Linux operating system. A slight difference in floating-point computation precision is observed on 64-bit operating systems.


This observation is very useful for tuning the learning parameters with a subset of the

training dataset in order to achieve better accuracy with larger training data size during

the final learning phase. Note that the F-score for protein single-class classification improved considerably using a C value of 0.12, an eight-point improvement over that achieved with a C value of 0.01. Increasing the trade-off factor C further led to a

decline in the F-score measure, as observed in Table 7.1. This shows that one can reach a

reasonable trade-off between training time and the final accuracy measures by tuning the

support vector machine with a fraction of the training dataset. Figure 7.5 illustrates the variation of the performance measures with varying C using SVM-Perf and a training data

size of 150,000 examples.

Figure 7.5 - SVM-Perf Performance vs. Regularization Factor C (Recall, Precision, F-Score)
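The tuning strategy this section motivates — sweep C over a subset of the training data, keep the value with the best F-score, then train the final machine on the full set with that value — can be sketched as follows. Here `train_and_score` is a placeholder for an actual SVM training run, not part of any of the implementations discussed.

```python
def tune_c(train_and_score, subset, c_grid):
    """Sketch of the tuning strategy from Section 7.2: evaluate each
    candidate C on a subset of the training data and keep the value
    yielding the best F-score. `train_and_score(data, c)` is assumed to
    train an SVM with trade-off factor c and return its F-score."""
    best_c, best_f = None, -1.0
    for c in c_grid:
        f = train_and_score(subset, c)
        if f > best_f:
            best_c, best_f = c, f
    return best_c, best_f
```

Because the best-performing C falls in the same narrow range across training sizes (Table 7.1), the value selected on the subset transfers to the full dataset.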

Figure 7.6 summarizes the protein classification performance variation with training

data size and a regularization factor C of 1.0 using SVM-Perf. The three different

implementations lead to similar trends with minor floating-point computation precision

differences.


Figure 7.6 – Single Class Protein Performance (Recall/Precision/F-Score) – C=1.0

7.3 Comparison of SVM-Perf and SVM-PerfDB

As mentioned in Chapter 6, we alter the criteria for shrinking the training model in SVM-Perf when porting it to the database environment in SVM-PerfDB. We use a negative value

for the number of iterations used to decide whether to remove a support vector from the

working set which changes the shrinking decision to be based on inconsistency only. This

modification accelerates training, especially with higher values of C. It also reduces the

number of support vectors thereby lowering the memory consumption. For example, the

number of support vectors when training with the complete dataset and C=1.0 is 226

vectors without modification and 164 vectors with modification. The training time is

reduced from 47.2 minutes to 19.1 minutes. The final recall/precision/F-score measures

are 68.76 / 58.83 / 63.41 with the original SVM-Perf vs. 69.07 / 58.73 / 63.48 with the

negative value of iterations_to_shrink.

Given the improved training time, number of support vectors, and memory usage, we applied the modification to SVM-Perf and conducted more experiments in order to provide a fair, apples-to-apples comparison of the database vs. the non-database implementations, in which the training environment is identical except for the use of the database repository, leading to exactly the same performance measures and number of support vectors. Figure 7.7 reflects the memory

needs with increasing training data size using C=0.01, 0.14, and 1.0 and a negative value

of iterations_to_shrink. The online memory consumed is linearly proportional to the

training data size, with higher slopes as C increases.

Figure 7.7 – Modified SVM-Perf Memory vs. Varying C (C=0.01, 0.14, 1.0)

Figure 7.8 and Figure 7.9 compare the training time of SVM-Perf and SVM-PerfDB in order to examine more closely the impact of fetching the example feature vectors from the database. Figure 7.1 shows that both SVM-Perf (modified) and SVM-PerfDB improve

the training time as compared to SVM-Light, with a slight increase in time when fetching examples from the database. The example cache size used in this case is 500 examples.

Figure 7.8 examines the training time difference with C=0.01 while Figure 7.9 does the same with C=1.0. The training time remains linear using both implementations.


Figure 7.8 – Modified SVM-Perf vs. SVM-PerfDB Training Time: C=0.01

Figure 7.9 – Modified SVM-Perf vs. SVM-PerfDB Training Time: C=1.0

Comparing SVM-PerfDB to the original SVM-Perf, we notice that the training time

using the database implementation with an example cache size of 500 is slightly higher

than SVM-Perf for a small value of C (0.01). However, increasing the value of C shows

that SVM-PerfDB trains faster than the original SVM-Perf despite the impact of

fetching the examples from the database. This is due to the accelerated training and

reduced number of support vectors with the altered shrinking decision criteria. Figure


7.10 and Figure 7.11 compare SVM-PerfDB to the original SVM-Perf with C=0.01 and

C=1.0 respectively.

Figure 7.10 – Original SVM-Perf vs. SVM-PerfDB Training Time: C=0.01

Figure 7.11 – Original SVM-Perf vs. SVM-PerfDB Training Time: C=1.0

In the next chapter, we examine the scalability of multi-class SVM using the n-slack

SVM-Multiclass implementation and compare it to our new instantiation of the 1-slack

SVM formulation in SVM-PerfMulti as well as the database-supported solution SVM-

MultiDB.


Chapter 8

Multi-Class Experiments

In this chapter, we examine the results of multi-class scalability experiments using

three datasets representing different languages and domains: the biomedical JNLPBA-04

challenge task, the CoNLL-02 Spanish task, and the CoNLL-02 Dutch task data. Details

about these large-scale experimentation datasets are presented in Appendix B. The biomedical named entity recognition task aims to identify the names of proteins, DNA, RNA,

cell lines, and cell types in biomedical abstracts, while the objective of the CoNLL-02

multilingual named entity recognition task is to identify general names of persons,

locations, organizations, and miscellaneous entities.

As in the single-class experiments, the approach used eliminates prior language-

specific and domain-specific knowledge and limits pre-processing to the extraction of

morphological and contextual features describing words in the text with high-dimensional

binary vectors. The input dimensionality of the training and testing datasets is

summarized in Table 8.1.

Table 8.1 – Summary of Dataset Contents

Data Source            Dataset       # of Examples   # of Classes   Feature Dimensionality
Biomedical JNLPBA-04   Training      492,551         11             1,161,970
                       Testing       101,039
CoNLL-02 Spanish       Training      264,715         9              1,775,408
                       Development   54,837
                       Testing       53,049
CoNLL-02 Dutch         Training      202,931         9              1,900,465
                       Development   40,656
                       Testing       74,189


The class labeling scheme we use for the multi-class experiments is two class labels

per named entity: one indicating the beginning of an entity (B-XXX) and one indicating a

continuation of an entity (I-XXX), in addition to one class label representing other non-

NER words (O). With this scheme, we have 11 classes in the biomedical datasets and 9

classes in the multilingual datasets.
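The B-XXX / I-XXX / O scheme can be sketched as a small labeling helper; the function and its span-based input format are illustrative, not the thesis preprocessing code.

```python
def bio_labels(tokens, entities):
    """Sketch of the class labeling scheme described above: B-XXX marks the
    first token of an entity, I-XXX a continuation token, O everything else.
    `entities` is a list of (start, end, type) token spans, end exclusive."""
    labels = ["O"] * len(tokens)
    for start, end, etype in entities:
        labels[start] = "B-" + etype
        for k in range(start + 1, end):
            labels[k] = "I-" + etype
    return labels
```

With five biomedical entity types this yields 5 * 2 + 1 = 11 class labels, and with four CoNLL-02 types 4 * 2 + 1 = 9, matching Table 8.1.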

The scalability experiments train the multi-class support vector machines using

chunks of the training dataset with increasing size. The trained model is then used to

classify named entities in the complete test dataset. In each experiment, we note the

training time, the memory usage, the number of support vectors generated, and the

performance measures achieved (recall / precision / F-score). For each of the three tasks,

we conducted several sets of experiments, which include:

# Multi-class experiments identifying all named entities using Joachims’ multi-class

implementation, SVM-Multiclass (Crammer and Singer 2001; Tsochantaridis et al.

2004) with a regularization factor C=0.01.

# Multi-class experiments identifying all named entities using our new multi-class

instantiation, SVM-PerfMulti (Habib 2008) with C=0.01 and an acceleration

factor=0.0001.

# Multi-class experiments identifying all named entities using our database

embedded solution, SVM-MultiDB with C=0.01 and an acceleration

factor=0.0001.

# The training data chunks range from 1,000 examples to the complete set of

training examples in each case. Each set of biomedical experiments consists of 51

tests, the Spanish case includes 43 tests, and the Dutch case has 41 tests.


All experiments use a linear kernel and a margin error of 0.1. Most of the biomedical

tests run on an Intel Core 2 quad-processor 2.66 GHz machine and some run on a Xeon

quad-processor 3.6 GHz machine. Running the same test on both machines completed in

similar training time. The CoNLL-02 Spanish and Dutch experiments are performed on

the Intel Core 2 quad-processor 2.66 GHz machine.

In the next section, we examine the results of multi-class learning and classification

of the biomedical named entities using the JNLPBA-04 datasets. Experiments using other

datasets are presented in subsequent sections.

8.1 Multi-Class Scalability Results

The SVM-Multiclass implementation by T. Joachims is based on (Crammer and

Singer 2001) and uses a different quadratic optimization algorithm described in

(Tsochantaridis et al. 2004). Hsu and Lin (Hsu and Lin 2002) note that “as it is

computationally more expensive to solve multi-class problems, comparisons of these

methods using large-scale problems have not been seriously conducted. Especially for

methods solving multi-class SVM in one step, a much larger optimization problem is

required so up to now experiments are limited to small data sets.” The multi-class

experiments presented herein attempt to solve a real-world large-scale problem using an

All-Together classification method. The training data is composed of 11 classes where

each named entity is represented by two classes – one denoting the beginning of an entity

and the other denoting a continuation token within the same entity – in addition to one

class denoting non-named entity tokens.

To explore the scalability issues of the All-Together multi-class SVM

implementation, a series of experiments using different training data sizes is conducted


with a low value for the C learning parameter equal to 0.01. The training time with 1,000

examples was 3.187 seconds and it increased considerably with increased data size to

reach 416,264.251 seconds (6,937.738 minutes or 4.8 days) with the complete training

data set of 492,551 examples, running on the same machine.

The SVM-Multiclass (Crammer and Singer 2001; Tsochantaridis et al. 2004)

implementation uses the learning algorithms implemented in SVM-Light19. The training

time remains polynomial, O(n^2), w.r.t. the training data size, with a factor of O(k^2) increase

in time as compared to the single-class SVM-Light time, where k is the number of classes.

The training time required for All-Together multi-class training makes this approach prohibitive for large datasets.

By examining the number of support vectors selected using SVM-Multiclass, we

observe that the number of SVs is O(n^0.8) w.r.t. the training data size, which is the same

relationship as in the binary classification case using SVM-Light. Since SVM-Multiclass

uses the learning algorithms implementation of SVM-Light, and based on the effect that

lowering the number of support vectors using linear combinations of input training

examples has had on reducing the training time using SVM-Perf, it is reasonable to

anticipate similar gain if the new 1-slack SVM learning formulation is extended to the

multi-class classification problem. This observation constitutes the main motivation

behind developing our multi-class cutting plane algorithm. Figure 8.1 represents the

number of support vectors variation with the training data size using SVM-Multiclass.

Figure 8.2 depicts the same relationship using SVM-PerfMulti. The number of

19 Joachims. 2002. Learning to Classify Text Using Support Vector Machines. Norwell, MA: Kluwer Academic.
Joachims. 1998a. Making Large-Scale SVM Learning Practical. In Advances in Kernel Methods - Support Vector Learning, Chapter 11, edited by Schölkopf, et al.: MIT Press.

Page 146: Improving Scalability of Support Vector Machines for ...jkalita/work/StudentResearch/HabibPhDThesis2008.pdfTo investigate the scalability of the SVM machine learning approach, we conduct


support vectors when training with the complete training dataset is 159,165 vectors using

SVM-Multiclass, compared to 132 vectors with SVM-PerfMulti.


Figure 8.1 - SVM-Multiclass Number of Support Vectors vs. Training Data Size


Figure 8.2 - SVM-PerfMulti Number of Support Vectors vs. Training Data Size

SVM-PerfMulti and SVM-MultiDB reduce the training time by using an improved

cutting plane algorithm in conjunction with the linear learning algorithm of SVM-Perf

which is based on the 1-slack SVM formulation. Table 8.2 and Figure 8.3 compare the

training time using the three methods. Using the 1-slack SVM formulation and the new


cutting plane algorithm, the training time is improved by several orders of magnitude. For

instance, using the complete training dataset, the training time is 416,264.251 seconds

(6,937.738 minutes or 4.8 days) using SVM-Multiclass compared to 5,890.031 seconds

(98.17 minutes) using SVM-PerfMulti, or 1/70 of the SVM-Multiclass time. In addition,

the performance measures (recall/precision/F-score) are improved from 62.4/64.5/63.5

using SVM-Multiclass to 67.9/66.4/67.2 with SVM-PerfMulti, using the default learning

parameters in both cases (ε = 0.1, C = 0.01, a = 0.0001).

Table 8.2 - Comparison of SVM-PerfMulti, SVM-MultiDB, and SVM-Multiclass Training Time vs. Training Data Size

Training Time (seconds)

Training Data Size | SVM-Multiclass | SVM-PerfMulti | SVM-MultiDB (Examples Cache Size=500)
5,000   | 94.213      | 15.370    | 15.553
10,000  | 355.101     | 38.110    | 34.798
25,000  | 2,214.092   | 104.721   | 108.257
50,000  | 7,784.148   | 254.326   | 268.901
75,000  | 16,662.753  | 494.168   | 492.843
100,000 | 23,531.920  | 788.916   | 716.954
125,000 | 45,260.549  | 1,059.173 | 1,105.600
150,000 | 65,524.410  | 1,294.051 | 1,156.002
175,000 | 72,108.693  | 1,618.301 | 1,533.246
200,000 | 91,632.975  | 1,866.653 | 1,830.997
225,000 | 88,282.629  | 2,153.460 | 1,932.653
250,000 | 100,189.058 | 2,483.609 | 2,204.677
275,000 | 158,229.577 | 2,535.886 | 2,369.009
300,000 | 165,178.301 | 2,528.715 | 2,594.110
325,000 | 206,240.711 | 2,851.640 | 3,006.574
350,000 | 248,070.056 | 3,549.062 | 3,733.130
375,000 | 215,760.927 | 3,589.297 | 3,766.130
400,000 | 310,408.398 | 4,488.086 | 4,523.788
425,000 | 352,636.685 | 4,066.876 | 4,327.744
450,000 | 358,716.303 | 4,615.680 | 5,011.895
492,551 | 416,264.251 | 5,890.031 | 6,292.167



Figure 8.3 - SVM-Multiclass, SVM-PerfMulti, and SVM-MultiDB Training Time vs. Training Data Size

SVM-Multiclass (Top), SVM-PerfMulti and SVM-MultiDB are very close (Bottom)


Figure 8.4 - SVM-MultiDB and SVM-PerfMulti Training Time vs. Training Data Size

The SVM-MultiDB curve is the one ending higher at the full training set size

Figure 8.4 takes a closer look at the impact of examples caching in SVM-MultiDB

using a cache size of 500 examples. Note the minimal to no impact on training time in

this case. The overhead of fetching example feature vectors from the database is

minimized by limiting access to each example to one access per learning iteration. This

overhead is much smaller than in the single-class case because each example fetch


corresponds to more processing in order to classify the example against all classes. The

additional processing lowers the ratio of the time spent fetching the examples from the

database to the time spent in learning and/or classification.

Lastly, we examine the memory consumed during the learning process, which is

mostly composed of training example feature vectors and the generated constraint

(support) feature vectors. As discussed in Chapter 6, the worst-case memory estimate is

O(nf) + O(kf), where k is the number of classes, n is the number of examples, and f is the

dimensionality of the feature vectors. The number of constraint vectors is proportional to

the number of learning iterations, which is difficult to estimate a priori, but is known to

be a small fraction of the number of training examples, as discussed in (Joachims et al.

2008). For instance, using the JNLPBA-04 training dataset with 492,551 training

examples with 1,116,970 features and 11 classes, and assuming each feature is

represented by 8 bytes and about 1000 learning iterations, the worst-case memory needed

would be 550GB for the example vectors and 12GB for the support vectors. In practice,

the example vectors contain a very small fraction of all features per vector and not all

features are present in each class’ portion in the constraint vectors.
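As a back-of-the-envelope sketch, the worst-case totals quoted above can be reproduced directly; note that the 550GB and 12GB figures correspond to one byte per stored feature value, and the 1,000 learning iterations are the assumption stated in the text:

```python
# Worst-case memory estimate O(nf) + O(kf) for the JNLPBA-04 setup.
n = 492_551          # training examples
f = 1_116_970        # feature dimensionality
k = 11               # classes
iterations = 1_000   # assumed number of learning iterations
bytes_per_feature = 1  # convention that reproduces the quoted totals

example_gb = n * f * bytes_per_feature / 1e9
constraint_gb = iterations * k * f * bytes_per_feature / 1e9
print(f"examples ~ {example_gb:.0f} GB, constraints ~ {constraint_gb:.0f} GB")
```

As the surrounding paragraph notes, these are loose upper bounds: the actual vectors are extremely sparse (at most 68 features per example), so real memory usage is far lower.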

In the biomedical JNLPBA-04 training case, the maximum number of features per

example is 68, and each constraint vector size is about 0.5MB. Since the number of learning

iterations increases with the regularization factor C in the binary case but remains

mostly constant using our multi-class cutting plane algorithm, the memory needed for

multi-class training is not much higher than that of binary training with a higher C. Figure 8.5

compares the memory requirements for binary training (using regularization factor

C=0.01, 0.14, and 1.0) and for the multi-class training.



SVM-PerfMulti SVM-Perf C=0.01 SVM-Perf C=0.14 SVM-Perf C=1.0

Figure 8.5 – Comparison of Single-Class and Multi-Class Memory Usage vs. Training Data Size

8.2 Multi-Class Performance Measures

Figure 8.6 presents the impact of the training data size on the multi-class

classification performance measures in terms of precision, recall, and Fβ=1 score. Figure

8.7 presents the protein classification performance variation with training data size using

the multi-class approach. Note that the protein performance measures in this case are

superior to the best achieved using binary classification. The final protein F-score in the

binary case with C=0.14 is 67.27 as compared to an F-score of 69.17 in the multi-class

classification case. The higher classification performance is expected of the All-Together

multi-class learning approach, as the margins of all hyperplanes separating the classes are maximized at

the same time and no unclassifiable regions result using this approach. Our multi-class

empirical study confirms this expectation and makes the additional effort to develop an

efficient All-Together multi-class SVM technique worthwhile. The performance results

using SVM-PerfMulti or SVM-MultiDB are identical. The out-of-the-box performance

achieved using default learning parameters is superior to that of SVM-Multiclass,

where the final F-score is improved from 63.41% to 67.15%.



Figure 8.6 - SVM-PerfMulti Overall Performance vs. Training Data Size

(Very Close Recall, F-Score, and Precision Measures)


Figure 8.7 - SVM-PerfMulti Protein Performance vs. Training Data Size

(From Top to Bottom: Recall, F-Score, Precision)

In Table 8.3, we compare the out-of-the-box performance results of SVM-PerfMulti to

other published results using the same biomedical JNLPBA-04 datasets. The performance

we achieve places this simplified approach in second place as compared to the others. It

is important to note that the system in the first place (Zhou and Su 2004) achieved an F-

score of 60.3% prior to boosting the performance using external resources such as

dictionaries and gazetteer lists, which can still be applied to our classified output.


Table 8.3 – Performance of BioNLP Systems Using SVM vs. SVM-PerfMulti Results20

System | 1978-1989 Set | 1990-1999 Set | 2000-2001 Set | S/1998-2001 Set | Total
Zhou and Su (2004) (with post-processing) | 75.3 / 69.5 / 72.3 | 77.1 / 69.2 / 72.9 | 75.6 / 71.3 / 73.8 | 75.8 / 69.5 / 72.5 | 76.0 / 69.4 / 72.6
Zhou and Su (2004) (without post-processing) | -- | -- | -- | -- | -- / -- / 60.3 [21]
Habib (no post-processing) | 59.5 / 70.3 / 64.5 | 69.9 / 66.3 / 68.1 | 69.4 / 67.2 / 68.2 | 68.1 / 65.1 / 66.6 | 67.9 / 66.4 / 67.2
Giuliano et al. (2005) | -- | -- | -- | -- | 64.4 / 69.8 / 67.0
Song et al. (2004) | 60.3 / 66.2 / 63.1 | 71.2 / 65.6 / 68.2 | 69.5 / 65.8 / 67.6 | 68.3 / 64.0 / 66.1 | 67.8 / 64.8 / 66.3
Rössler (2004) | 59.2 / 60.3 / 59.8 | 70.3 / 61.8 / 65.8 | 68.4 / 61.5 / 64.8 | 68.3 / 60.4 / 64.1 | 67.4 / 61.0 / 64.0
Habib and Kalita (2007) | 53.2 / 70.8 / 60.7 | 63.7 / 63.6 / 63.7 | 64.2 / 65.4 / 64.8 | 63.0 / 63.2 / 63.1 | 62.3 / 64.5 / 63.4
Park et al. (2004) | 62.8 / 55.9 / 59.2 | 70.3 / 61.4 / 65.6 | 65.1 / 60.4 / 62.7 | 65.9 / 59.7 / 62.7 | 66.5 / 59.8 / 63.0
Lee, Hwang et al. (2004) | 42.5 / 42.0 / 42.2 | 52.5 / 49.1 / 50.8 | 53.8 / 50.9 / 52.3 | 52.3 / 48.1 / 50.1 | 50.8 / 47.6 / 49.1
Baseline (Kim et al. 2004) | 47.1 / 33.9 / 39.4 | 56.8 / 45.5 / 50.5 | 51.7 / 46.3 / 48.8 | 52.6 / 46.0 / 49.1 | 52.6 / 43.6 / 47.7

Table 8.4 – Summary of SVM-PerfMulti Experiment Results

Named Entity | 1978-1989 Set | 1990-1999 Set | 2000-2001 Set | S/1998-2001 Set | Total
protein | 65.02 / 70.84 / 67.81 | 77.61 / 66.19 / 71.44 | 75.28 / 64.08 / 69.23 | 75.55 / 62.44 / 68.37 | 75.00 / 64.19 / 69.17
DNA | 66.96 / 65.79 / 66.37 | 59.74 / 66.67 / 63.01 | 59.61 / 74.92 / 66.40 | 53.74 / 71.98 / 61.54 | 57.89 / 70.64 / 63.63
RNA | 100.00 / 50.00 / 66.67 | 69.39 / 66.67 / 68.00 | 53.85 / 66.67 / 59.57 | 55.71 / 58.21 / 56.93 | 59.30 / 62.96 / 61.08
cell_type | 59.95 / 74.84 / 66.57 | 59.48 / 72.22 / 65.23 | 59.52 / 81.89 / 68.94 | 56.94 / 80.90 / 66.84 | 58.49 / 78.58 / 67.06
cell_line | 34.66 / 59.22 / 43.73 | 57.14 / 53.93 / 55.49 | 61.81 / 53.61 / 57.42 | 58.24 / 43.81 / 50.00 | 52.43 / 51.19 / 51.80
Overall | 59.53 / 70.33 / 64.48 | 69.93 / 66.30 / 68.07 | 69.35 / 67.16 / 68.24 | 68.11 / 65.13 / 66.58 | 67.93 / 66.39 / 67.15
Correct Right | 74.81 / 88.37 / 81.02 | 83.35 / 79.02 / 81.13 | 81.92 / 79.34 / 80.61 | 81.68 / 78.10 / 79.85 | 81.37 / 79.52 / 80.43
Correct Left | 63.88 / 75.46 / 69.19 | 77.27 / 73.25 / 75.21 | 76.15 / 73.75 / 74.93 | 75.00 / 71.71 / 73.32 | 74.62 / 72.93 / 73.77

20 The JNLPBA-04 testing dataset is roughly divided into four subsets based on publication years: 1978-1989 set, 1990-1999 set, 2000-2001 set, and S/1998-2001 set. See Appendix B for more details. 21 The overall F-score achieved in (Zhou and Su 2004) is 72.6% after applying additional post-processing using dictionaries, named entity lists, etc. Without post-processing, the F-score achieved is 60.3%.


Table 8.5 – SVM-PerfMulti Results 1978-1989 Set

named entity (count) | complete match | right boundary match | left boundary match
protein (609) | 396 (65.02 / 70.84 / 67.81) | 485 (79.64 / 86.76 / 83.05) | 416 (68.31 / 74.42 / 71.23)
DNA (112) | 75 (66.96 / 65.79 / 66.37) | 97 (86.61 / 85.09 / 85.84) | 83 (74.11 / 72.81 / 73.45)
RNA (1) | 1 (100.00 / 50.00 / 66.67) | 1 (100.00 / 50.00 / 66.67) | 1 (100.00 / 50.00 / 66.67)
cell_type (392) | 235 (59.95 / 74.84 / 66.57) | 286 (72.96 / 91.08 / 81.02) | 248 (63.27 / 78.98 / 70.25)
cell_line (176) | 61 (34.66 / 59.22 / 43.73) | 96 (54.55 / 93.20 / 68.82) | 76 (43.18 / 73.79 / 54.48)
[-ALL-] (1290) | 768 (59.53 / 70.33 / 64.48) | 965 (74.81 / 88.37 / 81.02) | 824 (63.88 / 75.46 / 69.19)

Table 8.6 – SVM-PerfMulti Results 1990-1999 Set

named entity (count) | complete match | right boundary match | left boundary match
protein (1420) | 1102 (77.61 / 66.19 / 71.44) | 1242 (87.46 / 74.59 / 80.52) | 1215 (85.56 / 72.97 / 78.77)
DNA (385) | 230 (59.74 / 66.67 / 63.01) | 310 (80.52 / 89.86 / 84.93) | 256 (66.49 / 74.20 / 70.14)
RNA (49) | 34 (69.39 / 66.67 / 68.00) | 45 (91.84 / 88.24 / 90.00) | 37 (75.51 / 72.55 / 74.00)
cell_type (459) | 273 (59.48 / 72.22 / 65.23) | 352 (76.69 / 93.12 / 84.11) | 299 (65.14 / 79.10 / 71.45)
cell_line (168) | 96 (57.14 / 53.93 / 55.49) | 119 (70.83 / 66.85 / 68.79) | 110 (65.48 / 61.80 / 63.58)
[-ALL-] (2481) | 1735 (69.93 / 66.30 / 68.07) | 2068 (83.35 / 79.02 / 81.13) | 1917 (77.27 / 73.25 / 75.21)

Table 8.7 – SVM-PerfMulti Results 2000-2001 Set

named entity (count) | complete match | right boundary match | left boundary match
protein (2180) | 1641 (75.28 / 64.08 / 69.23) | 1878 (86.15 / 73.33 / 79.22) | 1830 (83.94 / 71.46 / 77.20)
DNA (411) | 245 (59.61 / 74.92 / 66.40) | 295 (71.78 / 90.21 / 79.95) | 262 (63.75 / 80.12 / 71.00)
RNA (52) | 28 (53.85 / 66.67 / 59.57) | 41 (78.85 / 97.62 / 87.23) | 28 (53.85 / 66.67 / 59.57)
cell_type (714) | 425 (59.52 / 81.89 / 68.94) | 540 (75.63 / 104.05 / 87.59) | 449 (62.89 / 86.51 / 72.83)
cell_line (144) | 89 (61.81 / 53.61 / 57.42) | 114 (79.17 / 68.67 / 73.55) | 97 (67.36 / 58.43 / 62.58)
[-ALL-] (3501) | 2428 (69.35 / 67.16 / 68.24) | 2868 (81.92 / 79.34 / 80.61) | 2666 (76.15 / 73.75 / 74.93)


Table 8.8 – SVM-PerfMulti Results 1998-2001 Set

named entity (count) | complete match | right boundary match | left boundary match
protein (3186) | 2407 (75.55 / 62.44 / 68.37) | 2760 (86.63 / 71.60 / 78.40) | 2682 (84.18 / 69.57 / 76.18)
DNA (588) | 316 (53.74 / 71.98 / 61.54) | 404 (68.71 / 92.03 / 78.68) | 342 (58.16 / 77.90 / 66.60)
RNA (70) | 39 (55.71 / 58.21 / 56.93) | 54 (77.14 / 80.60 / 78.83) | 40 (57.14 / 59.70 / 58.39)
cell_type (1138) | 648 (56.94 / 80.90 / 66.84) | 858 (75.40 / 107.12 / 88.50) | 691 (60.72 / 86.27 / 71.27)
cell_line (170) | 99 (58.24 / 43.81 / 50.00) | 132 (77.65 / 58.41 / 66.67) | 109 (64.12 / 48.23 / 55.05)
[-ALL-] (5152) | 3509 (68.11 / 65.13 / 66.58) | 4208 (81.68 / 78.10 / 79.85) | 3864 (75.00 / 71.71 / 73.32)

Table 8.9 – SVM-PerfMulti Overall Results

named entity (count) | complete match | right boundary match | left boundary match
protein (8640) | 5546 (75.00 / 64.19 / 69.17) | 6365 (86.07 / 73.67 / 79.39) | 6143 (83.07 / 71.10 / 76.62)
DNA (1226) | 866 (57.89 / 70.64 / 63.63) | 1106 (73.93 / 90.21 / 81.26) | 943 (63.03 / 76.92 / 69.29)
RNA (162) | 102 (59.30 / 62.96 / 61.08) | 141 (81.98 / 87.04 / 84.43) | 106 (61.63 / 65.43 / 63.47)
cell_type (2012) | 1581 (58.49 / 78.58 / 67.06) | 2036 (75.32 / 101.19 / 86.36) | 1687 (62.41 / 83.85 / 71.56)
cell_line (674) | 345 (52.43 / 51.19 / 51.80) | 461 (70.06 / 68.40 / 69.22) | 392 (59.57 / 58.16 / 58.86)
[-ALL-] (12712) | 8440 (67.93 / 66.39 / 67.15) | 10109 (81.37 / 79.52 / 80.43) | 9271 (74.62 / 72.93 / 73.77)


The detailed performance results for the different abstract collections in the JNLPBA-

04 data and the classification performance of each of the five named entities: protein,

DNA, RNA, cell_type, and cell_line are summarized in Table 8.4 through Table 8.9.

Table 8.4 summarizes the classification performance of complete matches for the four

publication periods as well as the whole testing dataset. Table 8.5 through Table 8.9

present the detailed classification performance results for each publication period subset

and the complete testing dataset showing left boundary matches, right boundary matches,

as well as the complete matches within each named entity. The difficulty in identifying

the left boundary, discussed in Chapter 2, is observed in our experiments. Right

boundaries are recognized at an F-score that is 6% to 12% higher than that of the left

boundary identification. As discussed in Chapter 2, this is mostly due to the ambiguity of

multi-word named entities especially those beginning with descriptive words such as

adverbs or adjectives. This difficulty has been consistently faced by all NER systems

using the GENIA corpus datasets.
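The 6% to 12% gap quoted above can be read off the [-ALL-] rows of Table 8.5 through Table 8.8, as right-boundary minus left-boundary F-score for each subset (an illustrative check):

```python
# (right boundary F-score, left boundary F-score) from the [-ALL-] rows
gaps = {
    "1978-1989": (81.02, 69.19),
    "1990-1999": (81.13, 75.21),
    "2000-2001": (80.61, 74.93),
    "S/1998-2001": (79.85, 73.32),
}
for name, (right_f, left_f) in gaps.items():
    print(f"{name}: gap {right_f - left_f:.2f} points")
```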

8.3 Effect of Acceleration Factor a

In this section, we examine the effect of the acceleration factor we introduced in the

cutting plane algorithm of SVM-PerfMulti. As discussed in Chapter 5, this factor refines

the criteria used when finding the most violated constraints by increasing the gap

between the positive classification and the maximum negative classification score. The

adjustment is a fraction of the maximum positive score achieved during the previous

learning iteration, where the acceleration factor controls this fraction.
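A minimal sketch of this criterion follows, based on our reading of the description above rather than the actual SVM-PerfMulti source; for simplicity the loss term is folded into a fixed base margin of 1.0:

```python
def most_violated(scores, true_class, a, prev_max_pos):
    """Pick the wrong class that most violates the (accelerated) margin.

    scores: per-class decision values for one example.
    a: acceleration factor; prev_max_pos: maximum positive score from the
    previous learning iteration.  Returns (class, violation) or None.
    Illustrative only; the real constraint search is more involved.
    """
    margin = 1.0 + a * prev_max_pos  # gap widened by the acceleration term
    best, best_violation = None, 0.0
    for c, s in enumerate(scores):
        if c == true_class:
            continue
        violation = margin + s - scores[true_class]
        if violation > best_violation:
            best, best_violation = c, violation
    return (best, best_violation) if best is not None else None
```

With a = 0 this reduces to the usual most-violated-constraint search; a larger a widens the required gap, so borderline examples contribute larger violations and the algorithm converges in fewer iterations, at some cost in precision as reported below.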

Figure 8.8 shows the effect of increasing the acceleration factor on the overall

training time, using 50,000 training examples. As expected, training time is decreased as


the acceleration factor increases. We also examine the effect this acceleration has on the

performance measures achieved. Figure 8.9 and Figure 8.10 represent the performance

variation with the acceleration factor for the overall classification of all five classes and

that of the protein classification, respectively. We observe a very small variation in

performance as long as the acceleration factor is synchronized with the final error criteria.

In the experiment results shown below, the maximum acceleration factor a is 1/100 of the

error rate ε. Note that the training time comparison presented earlier in Table 8.2 and

Figure 8.3 corresponds to the slower training time in Figure 8.8 using a = 0.0001.

Similar to the effect of boosting the positive examples weights with SVM-Light22, a

decline in performance – in particular the precision and F-score – may occur with

increased positive boosting as the trained model becomes overfitted to the positive

examples. We observe the same with the acceleration factor as it has a positive boosting

effect in the learning process. However, the performance achieved in a much lower training

time is still superior to that achieved using SVM-Multiclass. For instance, using 50,000

training examples, the performance measures (recall/precision/F-score) achieved with a =

0.0001 are 53.54/53.54/53.54, with a = 0.0005 are 50.26/52.77/51.48, and the lowest with

a = 0.001 are 48.39/48.85/48.62. The performance measures achieved using SVM-

Multiclass and 50,000 training examples are 44.03/50.77/47.16. The training time in the

three SVM-PerfMulti cases is 256.119, 199.611, and 108.041 seconds, respectively,

compared to 7,784.148 seconds using SVM-Multiclass.

22 Positive examples boosting is not supported by SVM-Perf and the 1-slack SVM formulation.



Figure 8.8 – SVM-PerfMulti Training Time vs. Acceleration Factor


Overall Recall Overall Precision Overall F-score

Figure 8.9 – SVM-PerfMulti Overall Performance vs. Acceleration Factor


Overall Recall Overall Precision Overall F-score

Figure 8.10 – SVM-PerfMulti Protein Performance vs. Acceleration Factor


8.4 Experiments Using CoNLL-02 Data

In this section, we report the experimentation results using two named entity tasks for

languages other than English, namely the CoNLL-02 Spanish and Dutch datasets. These

are examples of general named entity recognition where the names of persons, locations,

organizations, and miscellaneous entities are to be identified in news articles.

8.4.1 CoNLL-02 Spanish Dataset

The CoNLL-02 Spanish data is a collection of news wire articles from May 2000,

made available by the Spanish EFE News Agency. Three datasets are provided: a training

dataset, a development dataset that could be used to tune the trained system, and a testing

dataset. The training, development, and test datasets contain 273,037 words, 54,837

words, and 53,049 words, respectively. A sample annotated text highlighting the four

named entities: person, location, organization, and miscellaneous is presented below.

Legend of Named Entities: Person Location Organization Miscellaneous

El Abogado General del Estado, Daryl Williams, subrayó hoy la necesidad de tomar

medidas para proteger al sistema judicial australiano frente a una página de internet que

imposibilita el cumplimiento de los principios básicos de la Ley.

La petición del Abogado General tiene lugar después de que un juez del Tribunal

Supremo del estado de Victoria (Australia) se viera forzado a disolver un jurado popular

y suspender el proceso ante el argumento de la defensa de que las personas que lo

componían podían haber obtenido información sobre el acusado a través de la página

CrimeNet.

Por su parte, el Abogado General de Victoria, Rob Hulls, indicó que no hay nadie que

controle que las informaciones contenidas en CrimeNet son veraces.


Hulls señaló que en el sistema jurídico de la Commonwealth, en el que se basa la

justicia australiana, es fundamental que la persona sea juzgada únicamente teniendo en

cuenta las pruebas presentadas ante el juez.

El portavoz del Consejo Político Municipal de IU en Santander, Ernesto Gómez de la

Hera, explicó hoy en conferencia de prensa que este boletín persigue, más que informar,

abrir espacios de debate y servir como "semilla de movilización política" en la ciudad.

Figure 8.11 compares the training time using SVM-Multiclass and SVM-PerfMulti.

We observe the same time reduction as that observed with the biomedical JNLPBA-04

data. The overall classification performance with different training data sizes is presented

in Figure 8.12. We compare the performance measures achieved using SVM-Multiclass

and SVM-PerfMulti on the development and testing CoNLL-02 Spanish datasets in Table

8.10 and Table 8.11. In this NER task, SVM-PerfMulti out-of-the-box performance is

superior to that of SVM-Multiclass using the default learning parameters, where the final

testing accuracy, precision, recall, and F-score are 96.54, 66.65, 70.97, and 68.74

respectively with SVM-Multiclass, compared to 96.98, 69.55, 74.32, and 71.86 using

SVM-PerfMulti – a ~3% boost in F-score. The training time is 39,082.52 seconds (10.86

hours) with SVM-Multiclass and 1,804.05 seconds (30.06 minutes) using SVM-PerfMulti.
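The same two summary statistics can be recomputed directly from the numbers just quoted (an illustrative check, not part of the experimental pipeline):

```python
def f_score(precision, recall):
    """F(beta=1): harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# CoNLL-02 Spanish test set: training times and precision/recall from above
speedup = 39_082.52 / 1_804.05  # SVM-Multiclass time / SVM-PerfMulti time
gain = f_score(69.55, 74.32) - f_score(66.65, 70.97)

print(f"speedup {speedup:.1f}x, F-score gain {gain:.2f} points")
```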

Table 8.10 – CoNLL-02 SVM-Multiclass Spanish Performance Results

Using Default Learning Parameters (C=0.01, ε=0.1)

Named Entity | Development Dataset (Precision/Recall/F-Score) | Test Dataset (Precision/Recall/F-Score)
person | 74.16 / 73.73 / 73.94 | 71.91 / 86.39 / 78.49
organization | 65.20 / 65.24 / 65.22 | 65.73 / 73.43 / 69.37
location | 59.22 / 78.58 / 67.54 | 70.89 / 70.76 / 70.82
miscellaneous | 36.31 / 27.42 / 31.24 | 36.78 / 28.24 / 31.95
[-ALL-] | 63.74 / 66.77 / 65.22 (Accuracy 95.27) | 66.65 / 70.97 / 68.74 (Accuracy 96.54)


Table 8.11 – CoNLL-02 SVM-PerfMulti Spanish Performance Results

Using Default Learning Parameters (C=0.01, a=0.0001, ε=0.1)

Named Entity | Development Dataset (Precision/Recall/F-Score) | Test Dataset (Precision/Recall/F-Score)
person | 78.59 / 80.20 / 79.38 | 74.74 / 89.39 / 81.41
organization | 69.37 / 69.00 / 69.18 | 69.26 / 75.79 / 72.37
location | 63.79 / 79.59 / 70.82 | 75.19 / 72.97 / 74.06
miscellaneous | 42.09 / 42.47 / 42.28 | 40.00 / 40.00 / 40.00
[-ALL-] | 67.72 / 71.83 / 69.71 (Accuracy 95.95) | 69.55 / 74.32 / 71.86 (Accuracy 96.98)


SVM-PerfMulti SVM-Multiclass

Figure 8.11 – CoNLL-02 Spanish SVM-PerfMulti vs. SVM-Multiclass Training Time


Overall Recall Overall Precision Overall F-score

Figure 8.12 – CoNLL-02 Spanish SVM-PerfMulti Overall Performance


8.4.2 CoNLL-02 Dutch Dataset

The Dutch data consist of four editions of the Belgian newspaper “De Morgen” from

the year 2000, and contain words, entity tags and part-of-speech tags. The training

dataset, development dataset, and test dataset contain 218,737 words, 40,656 words, and

74,189 words, respectively. The following is a sample annotated text highlighting the

four named entities: person, location, organization, and miscellaneous.

Legend of Named Entities: Person Location Organization Miscellaneous

Vandaag is Floralux dus met alle vergunningen in orde, maar het BPA waarmee die

konden verkregen worden, was omstreden omdat zaakvoerster Christiane Vandenbussche

haar schepenambt van...

In eerste aanleg werd Vandenbussche begin de jaren '90 veroordeeld wegens

belangenvermenging maar later vrijgesproken door het hof van beroep in Gent.

SP zit moeilijk -- (Eric, sd) Derycke wil BPA goedkeuren op voorwaarde dat BPA '

De Hoogte ' beperkt wordt aanvaard (burgemeester van Moorslede Walter Ghekiere zal 'n

voorstel doen).

De liberale minister van Justitie Marc Verwilghen is geen kandidaat op de lokale

VLD-lijst bij de komende gemeenteraadsverkiezingen in Dendermonde.

Het voorstel was naar verluidt vrij spontaan gegroeid binnen de door burgemeester

Norbert De Batselier (SP) geleide meerderheid.

De Morgen Volgens de plaatselijke VLD-voorzitter Rudy De Brandt is zijn beslissing

onherroepelijk.

Zo was er het voorstel voor een gezamenlijke lijst met Inzet, waarmee de VLD in

Dendermonde de meerderheid vormt en die bestaat uit SP'ers, VU'ers en enkele

onafhankelijken.

Marc Verwilghen geen kandidaat in Dendermonde Voor hen is Marc Verwilghen de

toekomst voor de VLD en Frans Verberckmoes de door hen verfoeide oude politieke

cultuur.


The same set of scalability and performance experiments in the previous section is

repeated using the CoNLL-02 Dutch datasets. The training time comparison of SVM-

Multiclass and SVM-PerfMulti is presented in Figure 8.13. The overall classification

performance with different training data sizes is presented in Figure 8.14. Table 8.12 and

Table 8.13 compare the performance measures achieved using both SVM

implementations with default learning parameters. The final testing accuracy, precision,

recall, and F-score are 97.24, 67.27, 68.79, and 68.02 respectively with SVM-Multiclass,

compared to 97.57, 69.71, 72.55, and 71.10 using SVM-PerfMulti – a ~3% boost in F-

score. The training time is 25,708.97 seconds (7.14 hours) with SVM-Multiclass and

850.62 seconds (14.17 minutes) using SVM-PerfMulti. Table 8.14 shows further

improvement in performance using a slower acceleration factor of 0.00002, where the

accuracy, precision, recall, and F-score reach 97.62, 70.92, 73.15, and 72.02, respectively, in

871.034 seconds (14.57 minutes).


SVM-PerfMulti SVM-Multiclass

Figure 8.13 – CoNLL-02 Dutch SVM-PerfMulti vs. SVM-Multiclass Training Time



Overall Recall Overall Precision Overall F-score

Figure 8.14 – CoNLL-02 Dutch SVM-PerfMulti Overall Performance

Table 8.12 – CoNLL-02 SVM-Multiclass Dutch Performance Results Using Default Learning Parameters (C=0.01, ε=0.1)

Named Entity | Development Dataset (Precision/Recall/F-Score) | Test Dataset (Precision/Recall/F-Score)
person | 55.87 / 80.51 / 65.97 | 60.25 / 84.34 / 70.28
organization | 71.27 / 48.10 / 57.44 | 67.48 / 53.63 / 59.76
location | 71.07 / 70.77 / 70.92 | 76.88 / 75.19 / 76.03
miscellaneous | 73.59 / 68.18 / 70.78 | 70.53 / 61.50 / 65.71
[-ALL-] | 65.95 / 66.70 / 66.32 (Accuracy 96.58) | 67.27 / 68.79 / 68.02 (Accuracy 97.24)

Table 8.13 – CoNLL-02 SVM-PerfMulti Dutch Performance Results Using Default Learning Parameters (C=0.01, a=0.0001, ε=0.1)

Named Entity | Development Dataset (Precision/Recall/F-Score) | Test Dataset (Precision/Recall/F-Score)
person | 59.85 / 80.80 / 68.77 | 64.70 / 85.79 / 73.77
organization | 71.21 / 53.35 / 61.00 | 66.24 / 59.18 / 62.51
location | 73.47 / 72.86 / 73.17 | 79.12 / 76.87 / 77.98
miscellaneous | 72.49 / 71.52 / 72.01 | 72.40 / 67.40 / 69.81
[-ALL-] | 67.94 / 69.50 / 68.71 (Accuracy 96.86) | 69.71 / 72.55 / 71.10 (Accuracy 97.57)


Table 8.14 – CoNLL-02 SVM-PerfMulti Dutch Performance Results Using a Slower Acceleration Factor (C=0.01, a=0.00002, ε=0.1)

Named Entity Development Dataset

(Accuracy) Precision/Recall/F-Score

Test Dataset (Accuracy)

Precision/Recall/F-Score person 60.70 / 81.51 / 69.58 65.50 / 86.61 / 74.59

organization 72.37 / 54.23 / 62.00 68.16 / 60.20 / 63.94 location 74.84 / 73.90 / 74.37 79.25 / 76.49 / 77.84

miscellaneous 72.40 / 71.52 / 71.96 74.43 / 68.16 / 71.15 [-ALL-] (96.94) 68.69 / 70.11 / 69.39 (97.62) 70.92 / 73.15 / 72.02

8.5 Other Experimental Datasets

Table 8.15 compares empirical results using some datasets often used for machine learning experimentation in the research community. The training and testing feature vectors are available for download at the LIBSVM (Chang and Lin 2001) website http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets. These sets represent various problems other than named entity recognition. While not related to the main focus of this work, we use these experimental datasets to validate the multi-class SVM solution.

We do not perform any pre- or post-processing on the data. The feature weights in all datasets – except the dna set – are real-valued. We use no specific knowledge about the semantics of the selected features or about how their weights are generated. The classification precision results below are those reported by the SVM classification software, which derives them from the average loss on the test set. Training using SVM-Multiclass and small datasets is fast, yet we do observe a difference in the test precision attained using SVM-PerfMulti. For instance, the vehicle test set has a precision of 66.8% using SVM-Multiclass and 78.0% using SVM-PerfMulti with the same value of the regularization factor C. Most of these datasets are small in size, so training time differences are observed only in the larger sets such as news20, protein, mnist, and shuttle. All SVM-PerfMulti tests used an acceleration factor of 0.0005.
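These datasets use the LIBSVM sparse format, in which each line holds a class label followed by feature:weight pairs for the non-zero features only. As an illustration of the format (this is not part of the experimental code), a minimal parser might look like:

```python
def parse_libsvm_line(line):
    """Parse one example in LIBSVM sparse format: '<label> <idx>:<weight> ...'.

    Returns (label, features), where features maps feature number -> weight.
    Feature numbers are 1-based in the LIBSVM dataset files.
    """
    parts = line.split()
    label = int(float(parts[0]))  # class id, e.g. 1..k
    features = {}
    for pair in parts[1:]:
        idx, weight = pair.split(":")
        features[int(idx)] = float(weight)  # only non-zero features appear
    return label, features


def load_libsvm(lines):
    """Load an iterable of LIBSVM-format lines into parallel label/vector lists."""
    labels, vectors = [], []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        y, x = parse_libsvm_line(line)
        labels.append(y)
        vectors.append(x)
    return labels, vectors
```

Because only non-zero features are stored, this representation keeps high-dimensional examples (such as the 62,061-feature news20 set) compact.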

Table 8.15 – Other Experimental Datasets

                                                                 SVM-Multiclass          SVM-PerfMulti
Dataset    # of Examples  # of Classes  # of Features  Trade-Off C  Time (sec)  Precision  Time (sec)  Precision
vowel      528            11            10             256          112.851     41.56      1.391       42.21
usps       7,291          10            256            0.1          214.820     92.08      33.971      91.83
news20     15,935         20            62,061         0.01         226.953     79.99      70.246      85.07
dna        1,400          3             180            0.01         0.895       93.68      0.353       92.42
protein    14,895         3             257            0.001        429.159     66.65      12.718      67.36
vehicle    846            4             18             0.01         0.449       66.80      0.306       78.00
mnist      42,000         10            780            0.01         718.977     92.48      127.497     92.36
satimage   3,104          6             36             0.01         1.886       79.35      0.788       83.45
segment    2,310          7             19             0.01         1.313       83.40      0.820       94.30
shuttle    30,450         7             9              0.01         33.147      91.92      3.458       96.26
letter     10,500         26            16             0.1          50.541      70.02      24.909      76.12

As we were about to conclude this research work, a new yet-to-be-published paper brought to our attention a new SVM-Multiclass implementation based on the 1-slack SVM formulation. In the next chapter, we present the results of comparative experiments and an analysis of the differences between our cutting plane algorithm and the one implemented in the newer SVM-Multiclass. Using our SVM-PerfMulti and the corresponding SVM-MultiDB database implementation, we are able to achieve better out-of-the-box performance in shorter training time.


Chapter 9

Subsequent Development: New SVM-Multiclass

As we concluded this research work, we ran a final literature review and came across a new yet-to-be-published paper (Joachims et al. 2008) that discusses the importance of cutting plane algorithms in lowering the training time using SVM-Struct. The paper presents some applications using the new SVM 1-slack formulation (Joachims 2006). A careful reading of the new paper reveals a brief mention of a multi-class instantiation, although it is not fully discussed. Because the paper's brief description of the multi-class case addresses the n-slack formulation, which is the version used earlier as part of this work, we checked a new copy of the source code and noticed differences suggesting that it is in fact using the new 1-slack formulation. In this chapter, we report on another set of scalability experiments that we ran using the new multi-class instantiation – SVM-Multiclass V2.12 – and examine the differences in the cutting plane algorithms and the experimentation results in terms of training time, memory, and classification performance.

9.1 Difference in Cutting Plane Algorithms

Based on our examination of the source code of the new SVM-Multiclass V2.12, and according to the brief description in (Joachims et al. 2008), this instantiation finds the most violated constraints by explicit enumeration of the classes. Using either margin-rescaling or slack-rescaling, the data points are simply classified using the current model and each point is assigned to the class with the maximum vote. The Ψ(xi, yi) function computes a feature vector for one data point at a time, and the cutting plane algorithm identifies each violated constraint separately. The individual feature vectors are summed up as part of the learning algorithm, not during the argmax computation.
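The per-example search by explicit enumeration can be sketched as follows. This is a simplified illustration, not the SVM-Multiclass source code: it assumes per-class sparse weight vectors, a 0/1 loss, and margin-rescaling, so the most violated label maximizes loss(y, ybar) + w_ybar·x − w_y·x over all classes.

```python
def dot(w, x):
    """Sparse dot product between a weight dict and a feature dict."""
    return sum(w.get(i, 0.0) * v for i, v in x.items())


def most_violated_label(w_per_class, x, y_true):
    """Find the label that most violates the margin for one example.

    w_per_class: dict class -> sparse weight vector (feature -> weight).
    Explicitly enumerates every class, as the new SVM-Multiclass does,
    scoring each with 0/1 loss plus the margin-rescaled score difference.
    """
    score_true = dot(w_per_class[y_true], x)
    best_label, best_value = y_true, 0.0
    for ybar, w in w_per_class.items():
        loss = 0.0 if ybar == y_true else 1.0
        value = loss + dot(w, x) - score_true
        if value > best_value:
            best_label, best_value = ybar, value
    return best_label, best_value
```

Each call handles a single data point; summing the resulting constraint vectors is left to the outer learning loop, mirroring the separation described above.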

The cutting plane algorithm we introduced in Chapter 5 builds one combined vector for all violated constraints during the argmax computation and uses an additional separation criterion to distinguish the positive and negative examples for each class. The algorithm refines the trained model by introducing a positive boosting effect. The experimental results highlight the effect of the different approaches used. Using Algorithm 5.3, the resulting SVM achieves better performance out-of-the-box without the need for parameter tuning, and takes less time to complete training. The memory consumption using both cutting plane algorithms is comparable.

9.2 Experimental Results Comparison

In this section, we compare the training time, memory consumption, and classification performance measures attained using SVM-Multiclass V2.12 and SVM-PerfMulti. Since both machines are based on the 1-slack SVM formulation, the main difference is in the decision criteria of the cutting plane algorithm. The following experiments compare the results using default learning parameters as well as those achieved with parameter tuning for SVM-Multiclass V2.12. For SVM-PerfMulti, we use default learning parameters only. The experiments demonstrate that SVM-PerfMulti is capable of achieving high performance measures without tuning. Reaching the same performance measures using SVM-Multiclass V2.12 requires a learning parameter search, and even when a suitable parameter value is identified, the training time needed is longer than that of SVM-PerfMulti, not including the time required to perform the parameter tuning phase. The results are consistent for the JNLPBA-04 biomedical data as well as the CoNLL-02 Spanish and Dutch data.

Figure 9.1, Figure 9.3, and Figure 9.5 compare the training time needed to reach comparable performance results using SVM-Multiclass and SVM-PerfMulti for the biomedical JNLPBA-04 data, the CoNLL-02 Spanish data, and the CoNLL-02 Dutch data, respectively. Figure 9.2, Figure 9.4, and Figure 9.6 compare the memory usage of the two instantiations for the same datasets.

[Figure: training time (sec) vs. number of training examples (0–500,000) for SVM-Multiclass V2.12 and SVM-PerfMulti]

Figure 9.1 – Biomedical JNLPBA-04 Training Time Comparison

[Figure: memory size (MBytes) vs. number of training examples (0–500,000) for SVM-Multiclass V2.12 and SVM-PerfMulti]

Figure 9.2 – Biomedical JNLPBA-04 Memory Comparison


[Figure: training time (sec) vs. number of training examples (0–275,000) for SVM-Multiclass V2.12 and SVM-PerfMulti]

Figure 9.3 – Spanish CoNLL-02 Training Time Comparison

[Figure: memory size (MBytes) vs. number of training examples (0–275,000) for SVM-Multiclass V2.12 and SVM-PerfMulti]

Figure 9.4 – Spanish CoNLL-02 Memory Comparison

[Figure: training time (sec) vs. number of training examples (0–225,000) for SVM-Multiclass V2.12 and SVM-PerfMulti]

Figure 9.5 – Dutch CoNLL-02 Training Time Comparison


[Figure: memory size (MBytes) vs. number of training examples (0–225,000) for SVM-Multiclass V2.12 and SVM-PerfMulti]

Figure 9.6 – Dutch CoNLL-02 Memory Comparison

A comparison of the classification performance, learning parameters, and training time is reported in Table 9.1 for the biomedical JNLPBA-04 data, Table 9.2 for the CoNLL-02 Spanish data, and Table 9.3 for the CoNLL-02 Dutch data. Using the default learning parameters, the F-score achieved using SVM-Multiclass and SVM-PerfMulti is 63.35% vs. 67.15% for the biomedical JNLPBA-04 case, 68.77% vs. 71.86% for the CoNLL-02 Spanish case, and 67.78% vs. 71.10% for the CoNLL-02 Dutch case.

In order to achieve comparable performance results, the training time of SVM-Multiclass compared to SVM-PerfMulti is 12,231.320 sec. vs. 5,890.032 sec. for the JNLPBA-04 case, 3,207.441 sec. vs. 1,804.054 sec. for the CoNLL-02 Spanish case, and 1,963.621 sec. vs. 941.293 sec. for the CoNLL-02 Dutch case. While the training time using both algorithms is several orders of magnitude faster than the original SVM-Multiclass implementation using the n-slack SVM formulation, there is approximately a 2:1 difference in training time between SVM-Multiclass V2.12 and SVM-PerfMulti.


Table 9.1 – Biomedical JNLPBA-04 Performance Measures Comparison
Test Dataset Classification Results – Format: Recall/Precision/F-Score

                      SVM-Multiclass V2.12     SVM-Multiclass V2.12       SVM-PerfMulti Default
Named Entity          Default C=0.01, ε=0.1    With tuning: C=0.07, ε=0.1  C=0.01, ε=0.1, a=0.0001
protein               70.37 / 61.85 / 65.84    74.86 / 64.60 / 69.35      75.00 / 64.19 / 69.17
DNA                   50.47 / 67.17 / 57.63    59.49 / 65.68 / 62.43      57.89 / 70.64 / 63.63
RNA                   52.33 / 61.64 / 56.60    55.81 / 66.67 / 60.76      59.30 / 62.96 / 61.08
cell line             47.57 / 51.82 / 49.60    56.23 / 49.07 / 52.41      52.43 / 51.19 / 51.80
cell type             51.31 / 79.12 / 62.25    59.34 / 76.93 / 67.00      58.49 / 78.58 / 67.06
[-ALL-]               62.37 / 64.37 / 63.35    68.38 / 65.81 / 67.07      67.93 / 66.39 / 67.15
Training Time (sec)   6,642.820                12,231.320                 5,890.032

Table 9.2 – Spanish CoNLL-02 Performance Measures Comparison
Test Dataset Classification Results – Format: (Accuracy) Precision/Recall/F-Score

                      SVM-Multiclass V2.12          SVM-Multiclass V2.12          SVM-PerfMulti Default
Named Entity          Default C=0.01, ε=0.1         With tuning: C=0.1, ε=0.1     C=0.01, ε=0.1, a=0.0001
person                72.43 / 86.12 / 78.68         76.30 / 88.03 / 81.74         74.74 / 89.39 / 81.41
organization          66.20 / 73.57 / 69.69         68.97 / 76.21 / 72.41         69.26 / 75.79 / 72.37
location              70.96 / 70.57 / 70.77         73.86 / 73.25 / 73.55         75.19 / 72.97 / 74.06
miscellaneous         35.09 / 27.35 / 30.74         41.59 / 41.47 / 41.53         40.00 / 40.00 / 40.00
[-ALL-]               (96.52) 66.82 / 70.83 / 68.77 (97.00) 69.55 / 74.43 / 71.91 (96.98) 69.55 / 74.32 / 71.86
Training Time (sec)   1,699.860                     3,207.441                     1,804.054

Table 9.3 – Dutch CoNLL-02 Performance Measures Comparison
Test Dataset Classification Results – Format: (Accuracy) Precision/Recall/F-Score

                      SVM-Multiclass V2.12          SVM-Multiclass V2.12          SVM-PerfMulti Default
Named Entity          Default C=0.01, ε=0.1         With tuning: C=0.1, ε=0.1     C=0.01, ε=0.1, a=0.0001
person                59.93 / 83.52 / 69.79         65.67 / 86.07 / 74.50         64.70 / 85.79 / 73.77
organization          68.11 / 53.51 / 59.94         67.57 / 59.30 / 63.16         66.24 / 59.18 / 62.51
location              75.19 / 75.97 / 75.58         78.78 / 76.74 / 77.75         79.12 / 76.87 / 77.98
miscellaneous         69.85 / 61.67 / 65.50         73.06 / 68.07 / 70.48         72.40 / 67.40 / 69.81
[-ALL-]               (97.22) 66.84 / 68.74 / 67.78 (97.59) 70.46 / 72.82 / 71.62 (97.57) 69.71 / 72.55 / 71.10
Training Time (sec)   1,314.670                     1,963.621                     941.293

9.3 Effect of Regularization Parameter C

For these experiments, we used the slack-rescaling option of SVM-Multiclass. Although margin-rescaling and slack-rescaling are said to be equivalent in (Joachims et al. 2008), using the same regularization factor C with margin-rescaling failed to produce results (all performance measures were zero). To obtain reasonable results with margin-rescaling, we had to use very large values of C.


As observed with the SVM-PerfMulti experiments, varying the value of C had no effect on the learning process and final performance results. This is due to the stabilization of the trained weights after a certain point in training with slack-rescaling. We investigated this observation with SVM-Multiclass and noticed that the training time varies linearly with the value of C up to a certain point, beyond which no more changes are recorded in time or performance results. Figure 9.7 records the training time variation with the value of C using a training set size of 50,000 JNLPBA-04 examples.

Using SVM-PerfMulti, the cutting plane algorithm reaches the point of saturation at much lower values of C due to the positive boosting effect introduced. The time and performance results using the default learning parameters of SVM-PerfMulti (the same defaults as SVM-Multiclass) and higher values of C were insensitive to the variation of C. The linear component of Figure 9.7 is much shorter with SVM-PerfMulti and occurs at very small values of C.

[Figure: training time (sec) vs. SVM-Light equivalent C factor (0.0–1.0)]

Figure 9.7 – Effect of Regularization Parameter C on Training Time


The effect of the variation of C on the overall performance results of SVM-Multiclass is presented in Figure 9.8. Using SVM-PerfMulti, we observed that the performance stabilization occurs at or very close to the peak performance.

[Figure: overall recall, precision, and F-score (%) vs. regularization factor C (0.00–1.00)]

Figure 9.8 – Effect of Regularization Parameter C on Performance

The following chapter concludes this dissertation with a discussion of the known concerns and how we addressed them, a self-evaluation of the success criteria and contributions, and a discussion of research directions that we would like to pursue in the future.


Chapter 10

Conclusion and Future Research

In this research work, we explored the scalability issues associated with solving the named entity recognition problem using support vector machines and high-dimensional input. We also assessed the usability of the machine learning environment and the available tools. Training an SVM to classify multiple independent classes at once is a complex optimization problem with many variables. We reviewed the basic All-Together multi-class problem formulation and two newer formulations that attempt to reduce the size of the optimization problem by reducing the number of variables to be optimized. The first method is presented in (Crammer and Singer 2001), where the number of slack variables is reduced from n × k to n, where n is the number of training data points and k is the number of classes. Tsochantaridis et al. (2005) simplify the optimization task further by reducing the number of slack variables to just one, but this improvement increases the number of constraints from polynomial to exponential, O(k^n). Using a cutting plane algorithm suitable for a given problem can limit the number of constraints considered to a small, linearly growing subset.
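The reduction in problem size can be made concrete with a quick count of the variables involved in each formulation. The snippet below is only an illustrative tally (the k^n constraint count for the 1-slack case follows from one constraint per joint labeling of all n examples, which is an interpretation of the exponential growth noted above, not code from this work):

```python
def multiclass_problem_size(n, k):
    """Tally slack variables and constraints for the three formulations.

    n: number of training examples, k: number of classes.
    - All-Together:         n * k slack variables
    - Crammer & Singer:     n slack variables
    - 1-slack (SVM-Struct): a single slack variable, but k**n possible
      constraints, of which a cutting plane algorithm materializes only
      a small subset of the most violated ones.
    """
    return {
        "all_together_slacks": n * k,
        "crammer_singer_slacks": n,
        "one_slack_slacks": 1,
        "one_slack_constraints": k ** n,
    }
```

Even for tiny problems the k^n term explodes (3 classes and 50 examples already exceed 10^23 constraints), which is why explicitly materializing only violated constraints is essential.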

To address the scalability issues of NER using SVM, we developed a new multi-class cutting plane algorithm – SVM-PerfMulti – that accelerates the learning process while boosting the classification performance. The algorithm produces good out-of-the-box performance with linear kernels and limits the need to tune the learning parameters. On the other hand, given the potentially large memory needed to hold the generated constraint vectors – O(kf), where f is the number of features – we explored ways to reduce the number of constraint vectors, as well as an alternative solution where all vectors need not be resident in online memory. A new SVM database architecture and solution is presented that supports database caching of examples and support vectors.

The database architecture also provides a framework for SVM machine learning that promotes usability of the solution and reusability of prior learning exercises and classification results. The framework incorporates both binary and multi-class learning and allows flexibility in the definition of learning environments and class labeling options using the same set of training and testing examples. We demonstrated that caching examples' feature vectors in the database does not pose a significant overhead on the overall training time, especially in the multi-class learning case where the overhead is almost non-existent. To eliminate any potential overhead due to storing support vectors in the database rather than in online memory, we investigated main memory caching of kernel products and trained weights, which proved beneficial in cutting down the database access overhead while lowering the online memory needs.

10.1 Known Concerns and Solutions

In this section we summarize some of the concerns that we anticipated facing as part of the challenge of improving NER/SVM scalability and how they were addressed:

1. Multi-class SVM learning is a complex problem and finding ways to accelerate the training process is challenging: This was the most important and challenging step in the solution. It required careful analysis of existing support vector optimization and learning algorithms, especially for the All-Together multi-class problem. Extensive process monitoring and profiling was required. Due to the lack of documentation for existing SVM implementations, we needed to undergo a detailed analysis of existing code, matching it to published algorithms, in order to identify ways to develop the new solution.

2. A trade-off usually exists between training time and memory consumption: Using the new 1-slack SVM formulation in SVM-Struct V3.0 and our new multi-class instantiation SVM-PerfMulti with a large training dataset such as the JNLPBA-04 biomedical data, the size of the constraint vectors generated during the learning process caused several experiments to fail on memory allocation requests. We traced all memory allocations to identify potential areas of reduction, altered the learning environment to reduce the number of inactive constraints in memory, investigated the use of 32-bit and 64-bit operating systems, and developed a database solution that allows caching of examples and support vectors while limiting the need for frequent database access in order to lower overhead.

3. Designing a database schema to support SVM learning while minimizing the overhead of accessing data in permanent storage: One of the important design decisions related to the representation of feature vectors. These are sparse vectors where each feature is denoted by a pair of values: a feature number and an associated weight. If feature number:weight pairs were represented individually, access to any vector would require multiple fetches from the database and possibly several data type conversion and array building steps. While this representation would satisfy normalization of the data, it would incur a heavy overhead whenever a feature vector needs to be retrieved, which is frequently the case during the learning process. Moreover, storing individual pairs would require additional fields to identify each pair in a vector within a set of examples or support vectors. These identification fields would dramatically increase the data size needed to store each vector. After careful analysis, we decided to store feature vectors – for either examples or support vectors – as binary arrays, thereby representing the whole vector as a single field in the corresponding table. This decision has the advantage of storing and retrieving these vectors in one database access without the need for data type conversion or array building. The resulting schema – although not fully normalized per the 3NF definition – is much simpler and faster to use.
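The single-field binary representation can be illustrated with a small sketch. The byte layout below (a count followed by fixed-width feature-number/weight pairs) is an assumed example for illustration, not the actual format used by SVM-MultiDB:

```python
import struct

PAIR_FMT = "<Id"  # 4-byte unsigned feature number + 8-byte double weight


def pack_sparse_vector(pairs):
    """Pack (feature_number, weight) pairs into one binary blob.

    Assumed layout: a 4-byte little-endian count, then one fixed-width
    record per pair, so the whole vector lives in a single table field.
    """
    blob = struct.pack("<I", len(pairs))
    for num, weight in pairs:
        blob += struct.pack(PAIR_FMT, num, weight)
    return blob


def unpack_sparse_vector(blob):
    """Inverse of pack_sparse_vector: one read recovers the whole vector."""
    (count,) = struct.unpack_from("<I", blob, 0)
    pairs, offset = [], 4
    for _ in range(count):
        num, weight = struct.unpack_from(PAIR_FMT, blob, offset)
        pairs.append((num, weight))
        offset += struct.calcsize(PAIR_FMT)
    return pairs
```

The point of the design is visible here: a round trip touches the storage layer once per vector, with no per-pair fetches, joins, or type conversions.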

4. The database schema and solution should incorporate both binary and multi-class learning and classification, and attempt to support the future direction of a service-oriented architecture that allows for the addition/replacement/deletion of modules: Using the PostgreSQL server-side API, we built an extension shared library that eliminated redundancy in the learning and classification modules supporting the binary and multi-class cases by making use of function pointers to identify unique modules while maintaining generic function names throughout the other modules. The basic data exchange needed for a service-oriented architecture is provided by the database schema, where multiple learning environments and results may be maintained independently. For future work, we plan to extend the schema to support the definition of new modules and integrate them into the solution.

5. Implementation issues and/or limitations may arise with the use of the low-level PostgreSQL SPI interface: The decision to develop embedded server-side modules improves efficiency while making the solution less portable. We attempted to isolate direct database access functions such that future porting of the solution to other RDBMS systems or to a client-based environment would require only localized source code changes. We faced some limitations with the SPI capabilities supported by earlier versions of PostgreSQL that impacted our implementation. For instance, scrolling cursors are not supported in versions earlier than 8.3. We currently use version 8.3.3 (the latest so far). This version has limited support for updatable cursors, especially scrolling cursors, so we had to work around this limitation. Also, PostgreSQL SPI does not currently support transactions or nested calls to SPI (as a potential workaround to transactions). Each extension function call is currently considered one transaction, where failure of the function at any time would roll back any changes to the database. If we need to support the storage of partial learning results – for instance, permanently storing the trained model after each learning iteration – we recommend separating the quadratic optimizer module to run independently of the learning iterations.

6. The integration of the SVM learning and classification modules into the database solution may be a major challenge: The integration required low-level analysis of SVM-Light and SVM-Struct V3.0 source code in order to identify how and where to incorporate database access function calls in a way that minimizes the need for frequent or unnecessary access. With the heavy use of pointers to data structures (often with multiple levels of indirection), careful and detailed examination was needed. The resulting integration pinpoints and minimizes database access function calls. We also built the source code base as one integrated base for building the three solutions: SVM-Perf, SVM-PerfMulti, and SVM-MultiDB.


10.2 Evaluation of Success Criteria

The following is a self-evaluation of the success criteria of this research work:

1. Using the JNLPBA-04 datasets, the out-of-the-box performance results (recall, precision, and F-score) are comparable to published results, which shows that the proposed language-independent, domain-independent approach using high-dimensional input space is valid. The performance results place the solution in second place after the final results of (Zhou and Su 2004). However, the F-score achieved by (Zhou and Su 2004) prior to post-processing, where many external knowledge sources are used to refine the classification, was only 60.3. We are able to achieve an F-score of 67.15 using the machine learning step with no post-processing. If external knowledge such as specialized dictionaries or gazetteer lists of known named entities is available, it can be used to refine the classification results and boost the final performance measures. We consider our out-of-the-box results to be a baseline that can still be used in conjunction with other solutions if available and/or needed.

2. The scalability improvement is assessed by using incrementally increasing input data sizes and monitoring training time and online memory usage. Compared to SVM-Multiclass using the n-slack formulation, our solution is several orders of magnitude faster. Compared to the new SVM-Multiclass using the 1-slack formulation, our solution achieves better out-of-the-box performance faster and without the need for repeated experiments to tune the learning parameters. The database-supported solution provides ways to lower the online memory needs.


3. Scalability and performance results are reproducible with datasets from other languages and/or domains, as demonstrated using the CoNLL-02 Spanish and Dutch training and testing datasets. We compare the results using the same pre-processed data and do not consider post-processing possibilities.

4. To assess the improved usability of the database-supported solution, we defined several learning environments for binary and multi-class training using different data sizes, where all environments refer to the same example set loaded just once into the database. The solution made it possible to train many SVM models using the same example set and different class labeling schemes just by calling the svm_learn() function with varying SVM and learning parameter IDs.

10.3 Contributions

This research work contributes to the state of the art in biomedical named entity recognition using support vector machines by offering the following:

1. Improved SVM scalability: The support vector machine can train with large amounts of data in a reasonable total training time.

2. Improved multi-class training: A new cutting plane algorithm for All-Together multi-class training that lowers the total training time and produces good out-of-the-box performance.

3. Improved SVM solution usability: Using the database-supported solution, users have the ability to define multiple SVM learning environments and class labeling schemes using the same example sets, store and reuse trained models, perform and store classification results using different trained machines, and take advantage of the database environment to build other solutions such as dictionaries and gazetteer lists.

4. Using a high-dimensional input space that eliminates the need for prior language and/or domain knowledge is shown to be a feasible approach to named entity recognition, as validated by the experimental results in the biomedical and multilingual examples.

5. An NER solution that is portable across languages and domains: Since the solution does not incorporate prior language and domain knowledge and does not require preprocessing beyond feature extraction, it can be used in different contexts with the same level of complexity. With fast learning and good out-of-the-box baseline performance, incorporating additional knowledge when available would be expected to further improve the classification performance.

10.4 Future Work

For future research, we would like to continue our work on natural language processing using machine learning in several directions, including the extension of the database solution to support the recommended service-oriented architecture, incremental learning, structural multi-word named entity recognition, feature selection, and unsupervised learning. In this section, we present some ideas to investigate in each area.

10.4.1 Database Solution Extension

The SVM database framework presented in Chapter 6 offers a basis for a complete

machine learning environment. To expand on the work presented in this document, we

would like to refine the kernel matrix pre-computation when a new constraint vector is

added to enhance the support vector caching options. Initial work on kernel matrix

caching is very promising and shows at least 50% reduction in database access. Memory


caching of incremental computation of the model’s weights can be introduced in order to

minimize the need to re-compute these weights during each learning iteration. Using a

linear kernel, such incremental computation is possible.
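With a linear kernel, the model reduces to an explicit weight vector w = Σ_i α_i y_i x_i, so a change to a single dual coefficient can be folded into the cached w rather than recomputing the sum. A minimal binary-case sketch of this idea (variable names are illustrative, not those of the thesis code):

```python
import numpy as np

def update_weights(w, delta_alpha, x, y_sign):
    """Fold a single dual-coefficient change into the cached weight vector.

    With a linear kernel, w = sum_i alpha_i * y_i * x_i, so changing one
    coefficient by delta_alpha only adds delta_alpha * y_i * x_i to w.
    """
    return w + delta_alpha * y_sign * x

# Start from a cached weight vector and apply one incremental update.
w = np.zeros(4)
w = update_weights(w, delta_alpha=0.5, x=np.array([1.0, 0.0, 2.0, 0.0]), y_sign=+1)
```

The same bookkeeping extends to the multi-class case by keeping one such vector per class.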

Further enhancements to the database framework include the use of sub-transactions

to store partially trained models and allow a failed or aborted learning exercise to resume

from the last saved learning iteration results. If future versions do not support sub-

transactions in server-side extension code, these may be simulated using further

decomposition of the SVM learning and optimization functions to run within separate SPI

environments. It is also possible to write a new client-based version of SVM-MultiDB

using PostgreSQL’s client libraries. Although a client-based version would require more

communication between the client and server processes, keeping the client software

housed on the same server as the RDBMS could reduce the communication overhead.

Another enhancement is to enable threading for the database SVM modules and make

them reentrant thereby promoting the parallelization of the machine.
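The checkpoint-and-resume behavior that sub-transactions would provide can be sketched with SQL savepoints. The toy loop below uses Python's standard sqlite3 module as a stand-in for PostgreSQL; the table layout and the dummy weight update are our own inventions:

```python
import pickle
import sqlite3

def train_with_checkpoints(conn, iterations):
    """Toy learning loop that persists its partial model every iteration,
    so an aborted run can resume from the last saved checkpoint."""
    cur = conn.cursor()
    cur.execute(
        "CREATE TABLE IF NOT EXISTS checkpoints (iter INTEGER PRIMARY KEY, model BLOB)")
    # Resume from the most recent checkpoint, if any.
    row = cur.execute(
        "SELECT iter, model FROM checkpoints ORDER BY iter DESC LIMIT 1").fetchone()
    start, state = (row[0] + 1, pickle.loads(row[1])) if row else (0, {"w": 0.0})
    for it in range(start, iterations):
        state["w"] += 1.0  # stand-in for one real optimization iteration
        cur.execute("SAVEPOINT sp_iter")  # sub-transaction around the checkpoint
        cur.execute("INSERT INTO checkpoints VALUES (?, ?)",
                    (it, pickle.dumps(state)))
        cur.execute("RELEASE sp_iter")
        conn.commit()
    return state

conn = sqlite3.connect(":memory:")
partial = train_with_checkpoints(conn, iterations=5)  # e.g. run stops here
resumed = train_with_checkpoints(conn, iterations=8)  # picks up at iteration 5
```

In the real framework the saved state would be the constraint vectors and model weights rather than a toy dictionary.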

Finally, we would like to expand the database framework to support our recommended

service-oriented architecture for machine learning, presented in the following section.

10.4.2 Recommended Service-Oriented Architecture

We recommend a dynamic service-oriented machine learning architecture that

promotes reusability, expandability, and maintainability of the various components

needed to implement the machine learning solution. The aim of the dynamic architecture

is to provide a research environment with a flexible infrastructure such that researchers

may easily focus on specific components without spending much time on rebuilding the


experimentation infrastructure. The proposed architecture's design will be service-

oriented, with a clear definition of inter-module interactions and interfaces.

Figure 10.1 summarizes our view of an integrated machine learning environment.

Within the recommended architecture, machine learning modules providing specific

services are separated such that one may select the different components needed to build

the total solution from existing alternative or complementary components. The

architecture is assisted by a database schema especially designed to provide data

exchange services between the different modules. With clear definition of the modules’

interfaces, one does not need to worry about compatibility issues. The dynamic

architecture allows the co-existence of alternative implementations for the same service

thereby providing a streamlined way to build different solutions for different problems or

for comparison purposes.
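One way to realize this pluggability is a small service registry in which alternative implementations of the same service co-exist and are selected by name. The sketch below is purely illustrative; the service kinds and module names are invented:

```python
class ServiceRegistry:
    """Maps a service kind (e.g. 'model-selection') to named, swappable
    implementations so alternative modules can co-exist."""

    def __init__(self):
        self._services = {}

    def register(self, kind, name, impl):
        self._services.setdefault(kind, {})[name] = impl

    def resolve(self, kind, name):
        return self._services[kind][name]

registry = ServiceRegistry()
registry.register("model-selection", "grid-search", lambda data: {"C": 1.0})
registry.register("model-selection", "heuristic", lambda data: {"C": 0.5})

# A learning task selects its components by configuration rather than hard-coding.
select = registry.resolve("model-selection", "heuristic")
params = select(None)
```

A shared database schema would then carry the data exchanged between resolved components.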

Improving SVM usability is considered a by-product of the proposed architecture.

Modules that address the model parameter selection or the kernel selection may be easily

incorporated into the overall solution. In fact, several modules offering different solutions

to the same problem may be included. The user can define the overall learning solution

and/or improve on existing ones by selecting the different components that compose a

learning task. These optional modules may include cross-validation for kernel selection,

heuristic-based approaches, grid search modules for model selection, decision trees, or

any other techniques. Model and kernel parameters are considered attributes of a given

learning instance in the proposed database schema. Model and kernel selection modules

are to be configurable options in the architecture. Oracle's SVM solution (Milenova et

al. 2005) provides built-in model and kernel selection methods and/or heuristics. Our


recommended approach is to provide an architecture where several solutions may be

simultaneously implemented and selected for a given learning task. In fact, optional

modules that further improve on SVM’s scalability such as instance pruning may also be

incorporated by a user or a researcher.

Figure 10.1 – Recommended Service-Oriented Architecture

10.4.3 Incremental Learning

In addition to lowering the overhead due to access to permanent storage, the previous

improvements would also pave the way for incremental learning, as we can reuse and

refine the stored trained model and weight vectors in future training exercises. The

incremental learning approach allows for easy incorporation of new training data when

such data becomes available as it does not require a restart of the training process from


scratch. The retraining process may be triggered either explicitly (through a user’s action)

or implicitly (when a new input data chunk is available).

Finding the most violated constraints iteratively may be considered a form of

incremental learning, where additional constraint vectors are introduced to the learner in

each iteration. If new training data becomes available, one can incorporate this data into

the database stored model and resume the learning process to refine the model and its

weights using the new data. We anticipate that the incremental learning process may

proceed as follows. Let the original training data set be S_init = {(x_1, y_1), ..., (x_n, y_n)} and

the newly available training data be S_new = {(x_1, y_1), ..., (x_a, y_a)}, where y ∈ {1, ..., k}:

Algorithm C.1: Algorithm for multi-class incremental training

1: Input: S_init = {(x_1, y_1), ..., (x_n, y_n)}, S_new = {(x_1, y_1), ..., (x_a, y_a)},
   y ∈ {1, ..., k}, trained model M = {C, w}
2: S_a ← ∅, S_missed ← S_new
3: for i = 1, ..., a do (classify new data points using trained model)
4:   if y_i = argmax_{j=1,...,k} w_j^T x_i then
5:     S_a ← S_a ∪ {(x_i, y_i)}
6:     S_missed ← S_missed \ {(x_i, y_i)}
7:   end if
8: end for
9: C ← C + Σ_{(x_i, y_i) ∈ S_a} Ψ(x_i, y_i) (incorporate new correctly classified
   points into existing constraint vectors)
10: if S_missed = ∅ then (all new data points are correctly classified)
11:   return (no more training is necessary)
12: else
13:   S ← S_a ∪ S_new
14:   Resume training using S as new input to Algorithm 5.3, starting with new
      constraint vector C and trained model M
15: end if
16: return new (w, ξ)

Algorithm C.1 starts by classifying the new data points using the existing trained

model. All correctly classified points are grouped in the set S_a. If all new points are


correctly classified, incorporate their feature vectors into the existing constraint vectors

set C by summing up the feature weights to their corresponding positions according to

the function Ψ(x, y) and exit, as no more training is needed. Otherwise, incorporate the

correctly classified feature vectors into C and resume training using all training data

points starting with the modified constraint vectors set C and the trained model.
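The classification step of Algorithm C.1 (lines 3-8) can be sketched as follows, using the linear multi-class decision rule y* = argmax_j w_j^T x on toy weights and data of our own choosing:

```python
import numpy as np

def partition_new_data(W, X_new, y_new):
    """Split new points into correctly classified (S_a) and misclassified
    (S_missed) sets under a trained multi-class model W (one row per class)."""
    s_a, s_missed = [], []
    for x, y in zip(X_new, y_new):
        pred = int(np.argmax(W @ x))           # y* = argmax_j w_j^T x
        (s_a if pred == y else s_missed).append((x, y))
    return s_a, s_missed

W = np.array([[1.0, 0.0],                      # class 0 weights
              [0.0, 1.0]])                     # class 1 weights
X_new = [np.array([2.0, 0.5]), np.array([0.1, 3.0]), np.array([2.0, 0.1])]
y_new = [0, 1, 1]                              # the last point disagrees with W
s_a, s_missed = partition_new_data(W, X_new, y_new)
# If s_missed is empty, no retraining is needed; otherwise training resumes
# with the updated constraint vectors.
```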

One possible modification on lines 9-15 would be to incorporate all new feature

vectors using the function Ψ(x, y) into the constraint vectors C in the case where

misclassified points exist, as this will be an additional measure of violated constraints.

We would like to investigate both possibilities as part of future research work.

10.4.4 Structural Multi-Word NER

Another direction of future work is inspired by the ability to optimize structured

problems using SVM-Struct. One of the main problems with biomedical named entity

recognition is the limitation in fully classifying multi-word entities. By observing the

accuracy of identifying individual words, left boundary words, right boundary words, and

complete matches, it is clear that the machine learning solutions that focus on training

with separate words fall short in achieving high accuracy with complete matches, while

the accuracy of single word classification is much higher. A possible area of investigation

is to construct the machine learning task such that all words belonging to one entity are

optimized at the same time. Using the SVM-Struct API, since the optimization task is

performed on a structured input that is defined by the loss function, the cutting plane

algorithm, and the way in which the Ψ(x, y) function is computed, building a structured

named entity solution may be possible.
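The gap between per-word and complete-match accuracy can be made concrete with a small evaluation over BIO-style tags; the tag scheme is the standard B/I/O convention and the toy sequences are illustrative:

```python
def spans(tags):
    """Extract (start, end, type) entity spans from a BIO tag sequence."""
    out, start = [], None
    for i, t in enumerate(tags + ["O"]):       # sentinel flushes the last span
        if start is not None and (t == "O" or t.startswith("B-")):
            out.append((start, i, tags[start][2:]))
            start = None
        if t.startswith("B-"):
            start = i
    return out

gold = ["B-protein", "I-protein", "O", "B-DNA"]
pred = ["B-protein", "O",         "O", "B-DNA"]

# High per-token accuracy can still mean few complete multi-word matches.
token_acc = sum(g == p for g, p in zip(gold, pred)) / len(gold)
complete_matches = len(set(spans(gold)) & set(spans(pred)))
```

Here three of four tokens are tagged correctly, yet only one of the two gold entities is matched completely.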


10.4.5 Feature Selection

One of the practical problems in machine learning solutions is feature selection and

feature weight representation. While working with high-dimensional input features is

becoming more feasible with the availability of faster training algorithms, designing a

suitable feature selection and representation for a given problem is more art than science.

For instance, choosing a reasonable window size to represent a sequence of words for

named entity recognition is performed by trial and error. During the investigation phase

of this work, we experimented with different window sizes ranging from [-1, 1] to [-5, 5]

and found that a window of [-3, 3] was best for the JNLPBA-04 datasets, as determined by

the performance measures achieved using the different feature vectors. However, using

the CoNLL-02 Dutch datasets, a window of [-1, 1] achieved the best performance with an

F-score of 73.24. We also investigated the elimination of unique features, merging

feature vectors of similar word/class occurrences as well as scaling feature weights by

feature rate per class. Exploring the automation of feature selection and representation to

achieve the best performance measures is another area of interest.
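For example, a [-3, 3] window represents each token by the features of itself and its three neighbors on each side. A minimal sketch of such windowed feature extraction follows; the padding token and feature-naming convention are our own:

```python
def window_features(tokens, index, window=3):
    """Represent a token by itself and its `window` neighbors on each side,
    padding past sentence boundaries."""
    feats = []
    for offset in range(-window, window + 1):
        pos = index + offset
        tok = tokens[pos] if 0 <= pos < len(tokens) else "<PAD>"
        feats.append(f"w[{offset}]={tok}")
    return feats

tokens = ["the", "human", "tumor", "necrosis", "factor", "alpha", "gene"]
feats = window_features(tokens, index=3)       # a [-3, 3] window around "necrosis"
```

Automating the choice of `window` (and of which feature types to keep) is exactly the selection problem discussed above.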

10.4.6 Unsupervised Learning

Using support vector machines offers the possibility of a one-class classification,

where the objective of the machine is to detect all points that likely belong to the same

domain description or the same class where all other points are considered outliers. The

idea is based on finding those points that lie within the same hypersphere in feature space

and consider them all to be members of the same class.

The one-class classification ability of SVM has been extended to SVM clustering

(Ben-Hur et al. 2001), where the machine objective is to identify hyperspheres in feature


space to indicate the presence of a cluster in input space. The idea is to map the input

space into a higher dimension feature space, for example, map a two-dimensional space

to a three-dimensional feature space, where it is easier to identify enclosing spheres.

Reverting back to input space, those points that lie within the hypersphere are considered

to form a cluster. While SVM clustering is a promising means for unsupervised or

semi-supervised learning, it has not been applied to large real-world data due to slow

training. With the development of new learning and optimization algorithms that reduce

the training time in the supervised learning case, we would like to explore the possibility

of extending these algorithms to support one-class classification and clustering.
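The hypersphere intuition can be illustrated with a deliberately simplified sketch that fits a center and radius to the data and flags points outside the sphere as outliers; the actual one-class SVM instead optimizes a minimal enclosing sphere in kernel feature space:

```python
import numpy as np

def fit_sphere(X, quantile=0.95):
    """Crude domain description: a sphere centered at the mean whose radius
    covers `quantile` of the training points. (The real one-class SVM solves
    for a minimal enclosing sphere in feature space instead.)"""
    center = X.mean(axis=0)
    dists = np.linalg.norm(X - center, axis=1)
    return center, float(np.quantile(dists, quantile))

def is_inlier(x, center, radius):
    """Points inside the fitted sphere are treated as members of the class."""
    return np.linalg.norm(x - center) <= radius

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))        # one "class" clustered at the origin
center, radius = fit_sphere(X)
```

In the clustering extension, several such spheres in feature space map back to separate clusters in input space.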


Bibliography

Abe, Shigeo. 2005. Support Vector Machines for Pattern Classification. Edited by Sameer Singh, Advances in Pattern Recognition. London: Springer-Verlag.

Alpaydin, Ethem. 2004. Introduction to Machine Learning. Cambridge, MA: The MIT Press. Barros de Almeida, M., A. de Padua Braga, et al. 2000. SVM-KM: Speeding SVMs Learning with A Priori

Cluster Selection and K-Means. In Proc. of the 6th Brazilian Symposium on Neural Networks, Nov 22-25:162 - 167.

Bender, Oliver, Franz Josef Och, et al. 2003. Maximum Entropy Models for Named Entity Recognition. In

Proc. of the 7th Conference on Natural Language Learning (CoNLL-2003), May 31 - Jun 1, at Edmonton, Canada:148-151.

Ben-Hur, Asa, David Horn, et al. 2001. Support Vector Clustering. Journal of Machine Learning Research

2:125-137. Bennett, Kristin P., and Colin Campbell. 2000. Support Vector Machines: Hype or Hallelujah? SIGKDD

Explor. Newsl. 2 (2):1-13. Black, William J., and Argyrios Vasilakopoulos. 2002. Language-Independent Named Entity Classification

by Modified Transformation-Based Learning and by Decision Tree Induction. In Proc. of the 6th Workshop on Computational Language Learning (CoNLL-2002), Aug 31 - Sep 1, at Taipei, Taiwan:159-162.

Blaschke, Christian, Miguel A. Andrade, et al. 1999. Automatic Extraction of Biological Information from

Scientific Text: Protein-Protein Interactions. In ISMB-99:60-67. Bodenreider, Olivier, Thomas C. Rindflesch, et al. 2002. Unsupervised, Corpus-Based Method for

Extending a Biomedical Terminology. In Proc. of the ACL 2002 Workshop on Natural Language Processing in the Biomedical Domain, Jul 11, at Philadelphia, PA:53-60.

Boley, Daniel, and Dongwei Cao. 2004. Training Support Vector Machine using Adaptive Clustering. In

Proc. of the 4th SIAM International Conference on Data Mining, Apr, at Lake Buena Vista, Florida:126-137.

Burger, John D., John C. Henderson, et al. 2002. Statistical Named Entity Recognizer Adaptation. In Proc.

of the 6th Workshop on Computational Language Learning (CoNLL-2002), Aug 31 - Sep 1, at Taipei, Taiwan:163-166.

Carreras, Xavier, Lluis Marques, et al. 2002. Named Entity Extraction using AdaBoost. In Proc. of the 6th

Workshop on Computational Language Learning (CoNLL-2002), Aug 31 - Sep 1, at Taipei, Taiwan:167-170.

———. 2003a. Learning a Perceptron-Based Named Entity Chunker via Online Recognition Feedback. In

Proc. of the 7th Conference on Natural Language Learning (CoNLL-2003), May 31 - Jun 1, at Edmonton, Canada:156-159.

———. 2003b. A Simple Named Entity Extractor using AdaBoost. In Proc. of the 7th Conference on

Natural Language Learning (CoNLL-2003), May 31 - Jun 1, at Edmonton, Canada:152-155.


Chang, Chih-Chung, and Chih-Jen Lin. LIBSVM: a library for support vector machines, 2001. Available from http://www.csie.ntu.edu.tw/~cjlin/libsvm.

Chang, Jeffrey T., Hinrich Schütze, et al. 2004. GAPSCORE: Finding Gene and Protein Names One Word

at a Time. Bioinformatics 20 (2):216-225. Chieu, Hai Leong, and Hwee Tou Ng. 2003. Named Entity Recognition with a Maximum Entropy

Approach. In Proc. of the 7th Conference on Natural Language Learning (CoNLL-2003), May 31 - Jun 1, at Edmonton, Canada:160-163.

Cohen, K. Bretonnel, Lynne Fox, et al. 2005. Corpus Design for Biomedical Natural Language Processing.

In Proc. of the ACL-ISMB Workshop on Linking Biological Literature, Ontologies and Databases: Mining Biological Semantics, Jun, at Ann Arbor, Michigan:38-45.

Collier, Nigel, and Koichi Takeuchi. 2004. Comparison of Character-Level and Part of Speech Features for

Name Recognition in Biomedical Texts. Journal of Biomedical Informatics 37 (6):423-435. Collins, Michael, and Yoram Singer. 1999. Unsupervised Models for Named Entity Classification. In Proc.

of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora (EMNLP), at College Park, MD:189-196.

Collobert, Ronan, Fabian Sinz, et al. 2006a. Large Scale Transductive SVMs. Journal of Machine Learning

Research 7:1687-1712. ———. 2006b. Trading Convexity for Scalability. In Proc. of the 23rd international conference on

Machine learning, at Pittsburgh, PA:201-208. Crammer, K., and Y. Singer. 2001. On the Algorithmic Implementation of Multi-class SVMs. Journal of

Machine Learning Research 2:265–292. Craven, Mark, and Johan Kumlein. 1999. Constructing Biological Knowledge Bases by Extracting

Information from Text Sources. In ISMB-99:77-86. Cucerzan, Silviu, and David Yarowsky. 1999. Language Independent Named Entity Recognition

Combining Morphological and Contextual Evidence. In Proc. of the 1999 Joint SIGDAT Conference on EMNLP and VLC, at College Park, MD:90-99.

———. 2002. Language Independent NER using a Unified Model of Internal and Contextual Evidence. In

Proc. of the 6th Workshop on Computational Language Learning (CoNLL-2002), Aug 31 - Sep 1, at Taipei, Taiwan:171-174.

Curran, James R., and Stephen Clark. 2003. Language Independent NER using a Maximum Entropy

Tagger. In Proc. of the 7th Conference on Natural Language Learning (CoNLL-2003), May 31 - Jun 1, at Edmonton, Canada:164-167.

De Meulder, Fien, and Walter Daelemans. 2003. Memory-Based Named Entity Recognition using

Unannotated Data. In Proc. of the 7th Conference on Natural Language Learning (CoNLL-2003), May 31 - Jun 1, at Edmonton, Canada:208-211.

Dingare, Shipra, Malvina Nissim, et al. 2005. A System for Identifying Named Entities in Biomedical Text:

How Results from Two Evaluations Reflect on Both the System and the Evaluations. Comparative and Functional Genomics 6 (1-2):77-85.

Dong, Yan-Shi, and Ke-Song Han. 2005. Boosting SVM Classifiers By Ensemble. In Proc. of the 14th

international conference on World Wide Web, May 10-14, at Chiba, Japan:1072-73.


Eriksson, Gunnar, Kristofer Franzén, et al. 2002. Exploiting Syntax when Detecting Protein Names in Text. In NLPBA2002 Workshop on Natural Language Processing in Biomedical Applications, Mar 8-9, at Nicosia, Cyprus:1-6.

Finkel, Jenny, Shipra Dingare, et al. 2004. Exploiting Context for Biomedical Entity Recognition: From

Syntax to the Web. In Proc. of the 2004 Joint Workshop on Natural Language Processing in Biomedicine and its Applications (JNLPBA'2004), Aug 28-29, at Geneva, Switzerland:88-91.

Florian, Radu. 2002. Named Entity Recognition as a House of Cards: Classifier Stacking. In Proc. of the

6th Workshop on Computational Language Learning (CoNLL-2002), Aug 31 - Sep 1, at Taipei, Taiwan:175-178.

Florian, Radu, Abe Ittycheriah, et al. 2003. Named Entity Recognition through Classifier Combination. In

Proc. of the 7th Conference on Natural Language Learning (CoNLL-2003), May 31 - Jun 1, at Edmonton, Canada:168-171.

Franzen, Kristofer, Gunnar Eriksson, et al. 2002. Protein Names and How to Find Them. International

Journal of Medical Informatics 67 (1-3):49-61. The GENIA Project (5.1). Team Genia, 2002-2006. Available from http://www-tsujii.is.s.u-tokyo.ac.jp/GENIA/home/wiki.cgi.

Giuliano, Claudio, Alberto Lavelli, et al. Simple Information Extraction (SIE). ITC-irst, Istituto per la

Ricerca Scientifica e Tecnologica, 2005. Available from http://tcc.itc.it/research/textec/tools-resources/sie/giuliano-sie.pdf.

Goldfeld, A. E., P. G. McCaffrey, et al. 1993. Identification of a novel cyclosporin-sensitive element in the

human tumor necrosis factor alpha gene promoter. J Exp Med 178 (4):1365-79. Goutte, Cyril, Eric Gaussier, et al. 2004. Generative vs Discriminative Approaches to Entity Recognition

from Label-Deficient Data. In Journées Internationales d'Analyse Statistique des Données Textuelles, at Meylan, France:515-523.

Habib, Mona Soliman. 2008. Addressing Scalability Issues of Named Entity Recognition Using Multi-

Class Support Vector Machines. International Journal of Computational Intelligence 4 (3):230-239. Habib, Mona Soliman, and Jugal Kalita. 2007. Language and Domain-Independent Named Entity

Recognition: Experiment using SVM and High-Dimensional Features. In Proc. of the 4th Biotechnology and Bioinformatics Symposium (BIOT-2007), Oct 19-20, at Colorado Springs, CO:69-73.

Hammerton, James. 2003. Named Entity Recognition with Long Short-Term Memory. In Proc. of the 7th

Conference on Natural Language Learning (CoNLL-2003), May 31 - Jun 1, at Edmonton, Canada:172-175.

Hao, Zhi-Feng, Bo Liu, et al. 2006. A Comparison of Multiclass Support Vector Machine Algorithms. In

Proc. of the Fifth International Conference on Machine Learning and Cybernetics, Aug. 13-16, at Dalian, China:4221-4226.

Hendrickx, Iris, and Antal van den Bosch. 2003. Memory-Based One-Step Named-Entity Recognition:

Effects of Seed List Features, Classifier Stacking, and Unannotated Data. In Proc. of the 7th Conference on Natural Language Learning (CoNLL-2003), May 31 - Jun 1, at Edmonton, Canada:176-179.

Hirschman, L., J. C. Park, et al. 2002. Accomplishments and Challenges in Literature Data Mining for

Biology. Bioinformatics 18 (12):1553-1561.


Hsu, Chih-Wei, and Chih-Jen Lin. 2002. A Comparison of Methods for Multi-Class Support Vector

Machines. IEEE Transactions on Neural Networks 13:415-425. Jansche, Martin. 2002. Named Entity Extraction with Conditional Markov Models and Classifiers. In Proc.

of the 6th Workshop on Computational Language Learning (CoNLL-2002), Aug 31 - Sep 1, at Taipei, Taiwan:179-182.

Joachims, Thorsten. 1998a. Making Large-Scale SVM Learning Practical. In Advances in Kernel Methods -

Support Vector Learning: Chapter 11, edited by B. Schölkopf, C. Burges, et al.: MIT-Press. ———. 1998b. Text Categorization with Support Vector Machines: Learning with Many Relevant

Features. In Proc. of the European Conference on Machine Learning:137-142. ———. 2002. Learning to Classify Text Using Support Vector Machine. Norwell, MA: Kluwer Academic. ———. 2005. A Support Vector Method for Multivariate Performance Measures. In Proc. of the

International Conference on Machine Learning (ICML):377-384. ———. 2006. Training Linear SVMs in Linear Time. In Proc. of the ACM Conference on Knowledge

Discovery and Data Mining (KDD), Aug 20-23:217-226. Joachims, Thorsten, Thomas Finley, et al. 2008. Cutting-Plane Training of Structural SVMs. To Appear:

Machine Learning:1-34. Kazama, Jun'ichi, Yusuke Miyao, et al. 2001. A Maximum Entropy Tagger with Unsupervised Hidden

Markov Models. In Proc. of the 6th Natural Language Processing Pacific Rim Symposium, at Tokyo, Japan:333-340.

Kecman, Vojislav. 2001. Learning and Soft Computing. London, UK: The MIT Press. Kim, J. D., T. Ohta, et al. 2003. GENIA Corpus--Semantically Annotated Corpus for Bio-Textmining.

Bioinformatics 19 Suppl 1:180-182. Kim, Jin-Dong, Tomoko Ohta, et al. 2006. GENIA Ontology Technical Report(TR-NLP-UT-2006-2).

Tokyo, Japan: Tsujii Laboratory, University of Tokyo. ———. 2004. Introduction to the Bio-Entity Recognition Task at JNLPBA. In Proc. of the 2004 Joint

Workshop on Natural Language Processing in Biomedicine and its Applications (JNLPBA'2004), Aug 28-29, at Geneva, Switzerland:70-75.

Klein, Dan, Joseph Smarr, et al. 2003. Named Entity Recognition with Character-Level Models. In Proc. of

the 7th Conference on Natural Language Learning (CoNLL-2003), May 31 - Jun 1, at Edmonton, Canada:180-183.

Kreßel, Ulrich H.-G. 1999. Pairwise Classification and Support Vector Machines. In Advances in Kernel

Methods: Support Vector Learning. Cambridge, MA: MIT Press. LeBowitz, J. H., T. Kobayashi, et al. 1988. Octamer-binding proteins from B or HeLa cells stimulate

transcription of the immunoglobulin heavy-chain promoter in vitro. Genes Dev 2 (10):1227-37. Lee, Chih, Wen-Juan Hou, et al. 2004. Annotating Multiple Types of Biomedical Entities: A Single Word

Classification Approach. In Proc. of the 2004 Joint Workshop on Natural Language Processing in Biomedicine and its Applications (JNLPBA'2004), Aug 28-29, at Geneva, Switzerland:80-83.


Lee, Ki-Joong, Young-Sook Hwang, et al. 2004. Biomedical Named Entity Recognition using Two-Phase Model Based on SVMs. Journal of Biomedical Informatics 37 (6):436-447.

Lee, Sei-Hyung. 2006. Selection of Gaussian Kernel Widths and Fast Cluster Labeling for Support Vector

Clustering. Ph.D. dissertation, Computer Science, University of Massachusetts Lowell, Lowell, MA.

Lee, Sei-Hyung, and Karen M. Daniels. 2006. Cone Cluster Labeling for Support Vector Clustering. In

Proc. of the 6th SIAM International Conference on Data Mining, Apr 20-22, at Bethesda, MD, USA:484-488.

Lei, Hansheng, and Venu Govindaraju. 2005. Half-Against-Half Multi-class Support Vector Machines. In

Proc. of the 6th International Workshop on Multiple Classifier Systems, Jun 13-15, at Seaside, CA, USA:156-164.

Lei, X. D., and S. Kaufman. 1998. Human white blood cells and hair follicles are good sources of mRNA

for the pterin carbinolamine dehydratase/dimerization cofactor of HNF1 for mutation detection. Biochem Biophys Res Commun 248 (2):432-5.

Li, Boyang, Jinglu Hu, et al. 2008a. An Improved Support Vector Machine with Soft Decision-Making

Boundary. In Proc. of Artificial Intelligence and Applications (AIA 2008), Feb. 11-13, at Innsbruck, Austria

———. 2008b. Support Vector Machine Classifier with WHM Offset for Unbalanced Data. Journal of

Advanced Computational Intelligence and Intelligent Informatics 12 (1):94-101. Lin, Yi-Feng, Tzong-Han Tsai, et al. 2004. A Maximum Entropy Approach to Biomedical Named Entity

Recognition. In Proc. of the 4th ACM SIGKDD Workshop on Data Mining in Bioinformatics (BIOKDD 2004), Aug 22, at Seattle, Washington:56-61.

Malouf, Robert. 2002. Markov Models for Language-Independent Named Entity Recognition. In Proc. of

the 6th Workshop on Computational Language Learning (CoNLL-2002), Aug 31 - Sep 1, at Taipei, Taiwan:187-190.

Manzano, V. M., J. C. Munoz, et al. 2000. Human renal mesangial cells are a target for the anti-

inflammatory action of 9-cis retinoic acid. Br J Pharmacol 131 (8):1673-83. Mayfield, James, Paul McNamee, et al. 2003. Named Entity Recognition using Hundreds of Thousands of

Features. In Proc. of the 7th Conference on Natural Language Learning (CoNLL-2003), May 31 - Jun 1, at Edmonton, Canada:184-187.

McCallum, Andrew, and Wei Li. 2003. Early Results for Named Entity Recognition with Conditional

Random Fields, Feature Induction and Web-Enhanced Lexicons. In Proc. of the 7th Conference on Natural Language Learning (CoNLL-2003), May 31 - Jun 1, at Edmonton, Canada:188-191.

McNamee, Paul, and James Mayfield. 2002. Entity Extraction Without Language-Specific Resources. In

Proc. of the 6th Workshop on Computational Language Learning (CoNLL-2002), Aug 31 - Sep 1, at Taipei, Taiwan:183-186.

Milenova, Boriana L., Joseph S. Yarmus, et al. 2005. SVM in Oracle Database 10g: Removing the Barriers

to Widespread Adoption of Support Vector Machines. In Proc. of the 31st international conference on Very large data bases, Aug 30 - Sep 2, at Trondheim, Norway:1152-1163.

Müller, Klaus-Robert, Sebastian Mika, et al. 2001. An Introduction to Kernel-Based Learning Algorithms.

IEEE Transactions on Neural Networks 12 (2):181-201.


Munro, Robert, Daren Ler, et al. 2003. Meta-Learning Orthographic and Contextual Models for Language Independent Named Entity Recognition. In Proc. of the 7th Conference on Natural Language Learning (CoNLL-2003), May 31 - Jun 1, at Edmonton, Canada:192-195.

Nadeau, David, and Satoshi Sekine. 2007. Survey of Named Entity Recognition and Classification. In

Named Entities: Recognition, classification, and use: Special issue of Lingvisticæ Investigationes, edited by Satoshi Sekine and Elisabete Marques-Ranchhod: John Benjamins.

Nissim, Malvina, Colin Matheson, et al. 2004. Recognising Geographical Entities in Scottish Historical

Documents. In Proc. of the Workshop on Geographic Information Retrieval (SIGIR ACM), Jul 25-29, at Sheffield, UK:1-3.

NLM. Entrez/PubMed, National Library of Medicine. U.S. National Library of Medicine, 2007a. Available

from http://www.ncbi.nlm.nih.gov/entrez/query.fcgi. ———. Medical Subject Headings, National Library of Medicine. U.S. National Library of Medicine,

2007b. Available from http://www.nlm.nih.gov/mesh/meshhome.html. Ohta, Tomoko, Yuka Tateisi, et al. 2002. The GENIA Corpus: An Annotated Research Abstract Corpus in

Molecular Biology Domain. In Proc. of the 2nd International Conference on Human Language Technology Research, at San Diego, CA:82-86.

Oxford, University of. British National Corpus, 2005. Available from http://www.natcorp.ox.ac.uk/. Paliouras, Georgios, Vangelis Karkaletsis, et al. 2000. Learning Decision Trees for Named-Entity

Recognition and Classification. In Proc. of the 14th European Conference on Artificial Intelligence (ECAI 2000), Aug 20-25, at Berlin, Germany:1-6.

Park, Kyung-Mi, Seon-Ho Kim, et al. 2004. Incorporating Lexical Knowledge into Biomedical NE

Recognition. In Proc. of the 2004 Joint Workshop on Natural Language Processing in Biomedicine and its Applications (JNLPBA'2004), Aug 28-29, at Geneva, Switzerland:76-79.

Patrick, Jon, Casey Whitelaw, et al. 2002. SLINERC: The Sydney Language-Independent Named Entity

Recogniser and Classifier. In Proc. of the 6th Workshop on Computational Language Learning (CoNLL-2002), Aug 31 - Sep 1, at Taipei, Taiwan:199-202.

The Penn Treebank Project, 2002. Available from http://www.cis.upenn.edu/~treebank/. Platt, John C., Nello Cristianini, et al. 2000. Large Margin DAGs for Multiclass Classification. In Advances

in Neural Information Processing Systems, edited by S.A. Solla, T.K. Leen, et al. Cambridge, MA: MIT Press.

Poibeau, T., A. Acoulon, et al. 2003. The Multilingual Named Entity Recognition Framework. In Proc. of

the European Conference on Computational Linguistics, at Budapest:155-158. PostgreSQL Open Source Database (8.3.3). PostgreSQL Global Development Group, 1996 – 2008.

Available from http://www.postgresql.org. Pustejovsky, J., J. Castaño, et al. 2002. Medstract: Creating Large-Scale Information Servers for

Biomedical Libraries. In Proc. of the Association for Computational Linguistics Workshop on Natural Language Processing in the Biomedical Domain, Jul 11, at Philadelphia, PA:85-92.

Riloff, Ellen, and Rosie Jones. 1999. Learning Dictionaries for Information Extraction by Multi-Level

Bootstrapping. In Proc. of the 16th National Conference on Artificial Intelligence, at Orlando, Florida:474-479.

Page 195: Improving Scalability of Support Vector Machines for ...jkalita/work/StudentResearch/HabibPhDThesis2008.pdfTo investigate the scalability of the SVM machine learning approach, we conduct


Rössler, Marc. 2004. Adapting an NER-System for German to the Biomedical Domain. In Proc. of the 2004 Joint Workshop on Natural Language Processing in Biomedicine and its Applications (JNLPBA'2004), Aug 28-29, at Geneva, Switzerland:92-95.

Rüping, Stefan. 2002. Support Vector Machines in Relational Databases. In Pattern Recognition with Support Vector Machines - First International Workshop:310-320.

Schebesch, Klaus B., and Ralf Stecking. 2008. Using Multiple SVM Models for Unbalanced Credit Scoring Data Sets. In Data Analysis, Machine Learning and Applications, edited by Christine Preisach, Hans Burkhardt, et al.: Springer Berlin Heidelberg.

Scheffer, Tobias, Christian Decomain, et al. 2001. Active Hidden Markov Models for Information Extraction. Lecture Notes in Computer Science 2189:309-319.

Sekine, Satoshi. Named Entity: History and Future. New York University, Department of Computer Science, 2004. Available from http://cs.nyu.edu/~sekine/papers/NEsurvey200402.pdf.

Serafini, Thomas, Gaetano Zanghirati, et al. 2005. Gradient Projection Methods for Large Quadratic Programs and Applications in Training Support Vector Machines. Optimization Methods and Software 20:353-378.

Serafini, Thomas, and Luca Zanni. 2005. On the Working Set Selection in Gradient Projection-based Decomposition Techniques for Support Vector Machines. Optimization Methods and Software:583-596.

Settles, Burr. 2004. Biomedical Named Entity Recognition Using Conditional Random Fields and Rich Feature Sets. In Proc. of the 2004 Joint Workshop on Natural Language Processing in Biomedicine and its Applications (JNLPBA'2004), Aug 28-29, at Geneva, Switzerland:104-107.

Shen, H. M., A. Peters, et al. 1998. Mutation of BCL-6 gene in normal B cells by the process of somatic hypermutation of Ig genes. Science 280 (5370):1750-2.

Song, Yu, Eunju Kim, et al. 2004. POSBIOTM-NER in the Shared Task of BioNLP/NLPBA 2004. In Proc. of the 2004 Joint Workshop on Natural Language Processing in Biomedicine and its Applications (JNLPBA'2004), Aug 28-29, at Geneva, Switzerland:100-103.

Suenaga, R., M. J. Evans, et al. 1998. Peripheral blood T cells and monocytes and B cell lines derived from patients with lupus express estrogen receptor transcripts similar to those of normal cells. J Rheumatol 25 (7):1305-12.

Sullivan, Keith M., and Sean Luke. 2007. Evolving Kernels for Support Vector Machine Classification. In Proc. of the 9th annual conference on Genetic and evolutionary computation, at London, England:1702-1707.

Talukdar, Partha Pratim, Thorsten Brants, et al. 2006. A Context Pattern Induction Method for Named Entity Extraction. In Proc. of the 10th Conference on Computational Natural Language Learning (CoNLL-X), Jun 8-9, at New York City, NY:141-148.

Tanabe, Lorraine, Natalie Xie, et al. 2005. GENETAG: a Tagged Corpus for Gene/Protein Named Entity Recognition. BMC Bioinformatics 6 (1):S3.

Tao, Qing, Gao-Wei Wu, et al. 2005. Posterior Probability Support Vector Machines for Unbalanced Data. IEEE Transactions on Neural Networks 16 (6):1561-1573.


Tao, Xiao-Yan, Hong-bing Ji, et al. 2007. A Modified PSVM and its Application to Unbalanced Data Classification. In Third International Conference on Natural Computation (ICNC 2007), Aug 24-27, at Haikou, China.

Tjong Kim Sang, Erik F. 2002a. Introduction to the CoNLL-2002 Shared Task: Language-Independent Named Entity Recognition. In Proc. of the 6th Workshop on Computational Language Learning (CoNLL-2002), Aug 31 - Sep 1, at Taipei, Taiwan:155-158.

———. 2002b. Memory-Based Named Entity Recognition. In Proc. of the 6th Workshop on Computational Language Learning (CoNLL-2002), Aug 31 - Sep 1, at Taipei, Taiwan:203-206.

Tjong Kim Sang, Erik F., and Fien De Meulder. 2003. Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition. In Proc. of the 7th Conference on Natural Language Learning (CoNLL-2003), May 31 - Jun 1, at Edmonton, Canada:142-147.

Tsang, Ivor W., James T. Kwok, et al. 2005. Core Vector Machines: Fast SVM Training on Very Large Data Sets. Journal of Machine Learning Research 6:363-392.

Tsochantaridis, Ioannis, Thomas Hofmann, et al. 2004. Support Vector Learning for Interdependent and Structured Output Spaces. In Proc. of the 21st International Conference on Machine Learning (ICML), Jul 4-8, at Alberta, Canada:104-111.

Tsochantaridis, Ioannis, Thorsten Joachims, et al. 2005. Large Margin Methods for Structured and Interdependent Output Variables. Journal of Machine Learning Research (JMLR) 6:1453-1484.

Tsukamoto, Koji, Yutaka Mitsuishi, et al. 2002. Learning with Multiple Stacking for Named Entity Recognition. In Proc. of the 6th Workshop on Computational Language Learning (CoNLL-2002), Aug 31 - Sep 1, at Taipei, Taiwan:191-194.

Vapnik, Vladimir N. 1998. Statistical Learning Theory. New York, NY: John Wiley & Sons.

Whitelaw, Casey, and Jon Patrick. 2003. Named Entity Recognition Using a Character-Based Probabilistic Approach. In Proc. of the 7th Conference on Natural Language Learning (CoNLL-2003), May 31 - Jun 1, at Edmonton, Canada:196-199.

Wong, Yingchuan, and Hwee Tou Ng. 2007. One Class per Named Entity: Exploiting Unlabeled Text for Named Entity Recognition. In Proc. of the 20th International Joint Conference on Artificial Intelligence (IJCAI-07), Jan 6-12, at Hyderabad, India:1763-1768.

Wu, Dekai, Grace Ngai, et al. 2003. A Stacked, Voted, Stacked Model for Named Entity Recognition. In Proc. of the 7th Conference on Natural Language Learning (CoNLL-2003), May 31 - Jun 1, at Edmonton, Canada:200-203.

———. 2002. Boosting for Named Entity Recognition. In Proc. of the 6th Workshop on Computational Language Learning (CoNLL-2002), Aug 31 - Sep 1, at Taipei, Taiwan:195-198.

Yangarber, Roman, and Ralph Grishman. 2001. Machine Learning of Extraction Patterns from Unannotated Corpora: Position Statement. In Proc. of the Workshop on Machine Learning for Information Extraction, at Berlin, Germany:76-83.

Yangarber, Roman, Winston Lin, et al. 2002. Unsupervised Learning of Generalized Names. In Proc. of the 19th International Conference on Computational Linguistics (COLING 2002), at Taipei, Taiwan:1-7.

Zanghirati, Gaetano, and Luca Zanni. 2003. A Parallel Solver for Large Quadratic Programs in Training Support Vector Machines. Parallel Computing 29:535-551.


Zanni, Luca, Thomas Serafini, et al. 2006. Parallel Software for Training Large Scale Support Vector Machines on Multiprocessor Systems. Journal of Machine Learning Research 7:1467-1492.

Zeng, Hua-Jun, Xuan-Hui Wang, et al. 2003. CBC: Clustering Based Text Classification Requiring Minimal Labeled Data. In Proc. of the 3rd IEEE International Conference on Data Mining (ICDM 2003), Nov 19-22:443-450.

Zhang, Jie, Dan Shen, et al. 2004. Enhancing HMM-Based Biomedical Named Entity Recognition by Studying Special Phenomena. J. of Biomedical Informatics 37 (6):411-422.

Zhang, Tong, and David Johnson. 2003. A Robust Risk Minimization Based Named Entity Recognition System. In Proc. of the 7th Conference on Natural Language Learning (CoNLL-2003), May 31 - Jun 1, at Edmonton, Canada:204-207.

Zhao, Shaojun. 2004. Named Entity Recognition in Biomedical Texts using an HMM Model. In Proc. of the 2004 Joint Workshop on Natural Language Processing in Biomedicine and its Applications (JNLPBA'2004), Aug 28-29, at Geneva, Switzerland:84-87.

Zhou, GuoDong. 2004. Recognizing Names in Biomedical Texts using Hidden Markov Model and SVM plus Sigmoid. In Proc. of the 2004 Joint Workshop on Natural Language Processing in Biomedicine and its Applications (JNLPBA'2004), Aug 28-29, at Geneva, Switzerland:1-7.

Zhou, GuoDong, and Jian Su. 2004. Exploring Deep Knowledge Resources in Biomedical Name Recognition. In Proc. of the 2004 Joint Workshop on Natural Language Processing in Biomedicine and its Applications (JNLPBA'2004), Aug 28-29, at Geneva, Switzerland:96-99.


Additional References

Alfonseca, Enrique, and Suresh Mahandhar. 2002. An Unsupervised Method for General Named Entity Recognition and Automated Concept Discovery. In Proc. of the First International Wordnet Conference, Jan 21-25, at Mysore, India.

Allen, James. 1995. Natural Language Understanding. New York, NY: Benjamin/Cummings.

Anguita, Davide, Sandro Ridella, et al. 2004. Unsupervised Clustering and the Capacity of Support Vector Machines. In Proc. of the IEEE International Joint Conference on Neural Networks, Jul 25-29:2023-2028.

Asharaf, S., S. K. Shevade, et al. Scalable Rough Support Vector Clustering. IISC Research Publications, 2006. Available from http://eprints.iisc.ernet.in/archive/00005293/01/srsvc.pdf.

Ban, Tao, and Shigeo Abe. 2004. Spatially Chunking Support Vector Clustering Algorithm. In Proc. of the IEEE International Joint Conference on Neural Networks, Jul 25-29, at Grenoble, France:418.

Brefeld, Ulf, and Tobias Scheffer. 2004. Co-EM Support Vector Learning. In Proc. of the International Conference on Machine Learning (ICML 2004), Jul 4-8, at Banff, Alberta, Canada:16-23.

Camastra, F., and A. Verri. 2005. A Novel Kernel Method for Clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence 27 (5):801-805.

Camastra, Francesco. 2004. Kernels Methods for Unsupervised Learning. Ph.D., Dipartimento di Informatica e Scienze dell'Informazione (D.I.S.I.), Università di Genova, Italy.

Cao, Dongwei, and Daniel Boley. 2004. Training Support Vector Machine using Adaptive Clustering. In Proc. of the 2004 SIAM International Conference on Data Mining, Apr 22-24, at Lake Buena Vista, Florida:126-137.

Chang, Chih-Chung, and Chih-Jen Lin. LIBSVM: a library for support vector machines, 2001. Available from http://www.csie.ntu.edu.tw/~cjlin/libsvm.

Chapelle, Olivier, Jason Weston, et al. 2003. Cluster Kernels for Semi-Supervised Learning. In Advances in Neural Information Processing Systems, at Cambridge, MA, USA:585-592.

Charikar, Moses, Chandra Chekuri, et al. 2004. Incremental Clustering and Dynamic Information Retrieval. SIAM Journal on Computing 33 (6):1417-1440.

Chiang, Jen-Chieh, and Jeen-Shing Wang. 2004. A Validity-Guided Support Vector Clustering Algorithm for Identification of Optimal Cluster Configuration. IEEE International Conference on Systems, Man and Cybernetics:3613-3618.

Cohen, Aaron M. 2005. Unsupervised Gene/Protein Named Entity Normalization using Automatically Extracted Dictionaries. In Proc. of the ACL-ISMB Workshop on Linking Biological Literature, Ontologies and Databases: Mining Biological Semantics, Jun, at Ann Arbor, Michigan:17-24.

Cohn, David A., Zoubin Ghahramani, et al. 1996. Active Learning with Statistical Models. Journal of Artificial Intelligence Research 4:129-145.

Corney, D. P., B. F. Buxton, et al. 2004. BioRAT: Extracting Biological Information from Full-Length Papers. Bioinformatics 20 (17):3206-3613.


Cucchiarelli, Alessandro, and Paola Velardi. 2001. Unsupervised Named Entity Recognition Using Syntactic and Semantic Contextual Evidence. Computational Linguistics 27 (1):123-131.

Daelemans, Walter. 2006. A Mission for Computational Natural Language Learning. In Proc. of the 10th Conference on Computational Natural Language Learning (CoNLL-X), Jun 8-9, at New York City, NY:1-5.

Egorov, S., A. Yuryev, et al. 2004. A Simple and Practical Dictionary-Based Approach for Identification of Proteins in Medline Abstracts. Journal of the American Medical Informatics Association 11 (3):174-178.

Erjavec, Tomaz, Jin-Dong Kim, et al. 2003. Encoding Biomedical Resources in TEI: The Case of the GENIA Corpus. In Proc. of the ACL Workshop on Natural Language Processing in Biomedicine, at Sapporo, Japan:97-104.

Etzioni, Oren, Michael Cafarella, et al. 2005. Unsupervised Named-Entity Extraction from the Web: An Experimental Study. Artificial Intelligence 165 (1):91-134.

Feldman, Ronen, and James Sanger. 2007. The Text Mining Handbook: Advanced Approaches in Analyzing Unstructured Data. New York, NY: Cambridge University Press.

Finley, Thomas, and Thorsten Joachims. 2005. Supervised Clustering with Support Vector Machines. In Proc. of the 22nd International Conference on Machine Learning, at Bonn, Germany:217-224.

Freitag, Dayne. 2004. Trained Named Entity Recognition using Distributional Clusters. In Proc. of EMNLP 2004, Jul, at Barcelona, Spain:262-269.

Fung, Glenn, and O. L. Mangasarian. 2001. Semi-Supervised Support Vector Machines for Unlabeled Data Classification. Optimization Methods and Software 44:15-29.

Gaudinat, Arnaud, and Celia Boyer. 2002. Automatic Extraction of MeSH terms from Medline Abstracts. In NLPBA2002 Workshop on Natural Language Processing in Biomedical Applications, Mar 8-9, at Nicosia, Cyprus.

Gliozzo, Alfio, and Carlo Strapparava. 2005. Domain Kernels for Text Categorization. In Proc. of the 9th Conference on Computational Natural Language Learning (CoNLL-2005), Jun 29-30, at Ann Arbor, MI:56-63.

Gorunescu, R., and D. Dumitrescu. 2003. Evolutionary Clustering using an Incremental Technique. Informatica 158 (2):25-32.

Graf, Hans Peter, Eric Cosatto, et al. 2005. Parallel Support Vector Machines: The Cascade SVM. Advances in Neural Information Processing Systems 17:521-528.

Hanisch, Daniel, Juliane Fluck, et al. 2003. Playing Biology's Name Game: Identifying Protein Names in Scientific Text. In Proc. of the Pacific Symposium on Biocomputing 2003 (PSB2003), Jan 3-7, at Kauai, Hawaii:403-414.

Jong, Kees, Elena Marchiori, et al. 2003. Finding Clusters using Support Vector Classifiers. In European Symposium on Artificial Neural Networks, 23-25 Apr, at Bruges, Belgium:223-228.

Kazama, Junichi, Takaki Makino, et al. 2002. Tuning Support Vector Machines for Biomedical Named Entity Recognition. In Proc. of the Association for Computational Linguistics Workshop on Natural Language Processing in the Biomedical Domain, Jul 11, at Philadelphia, PA:1-8.


Koike, Asako, and Toshihisa Takagi. 2004. Gene/Protein/Family Name Recognition in Biomedical Literature. In BioLINK2004: Linking Biological Literature, Ontologies, and Databases, May 6, at Boston, MA:9-16.

Konchady, Manu. 2006. Text Mining Application Programming. Boston, MA: Charles River Media.

Krauthammer, Michael, and Goran Nenadic. 2004. Term Identification in the Biomedical Literature. Journal of Biomedical Informatics 37 (6):512-526.

Lee, Chih, Wen-Juan Hou, et al. 2004a. Annotating Multiple Types of Biomedical Entities: A Single Word Classification Approach. In Proc. of the 2004 Joint Workshop on Natural Language Processing in Biomedicine and its Applications (JNLPBA'2004), Aug 28-29, at Geneva, Switzerland:80-83.

Lee, Hyun-Sook, Soo-Jun Park, et al. 2004. Domain Independent Named Entity Recognition from Biological Literature. In Proc. of the 15th International Conference on Genome Informatics, Dec 16-18, at Yokohama Pacifico, Japan:178.

Lee, Jaewook. 2005. An Improved Cluster Labeling Method for Support Vector Clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence 27 (3):461-464.

Li, Xin, and Dan Roth. 2005. Discriminative Training of Clustering Functions: Theory and Experiments with Entity Identification. In Proc. of the 9th Conference on Computational Natural Language Learning (CoNLL-2005), Jun 29-30, at Ann Arbor, Michigan:64-71.

Liang, Percy. 2005. Semi-Supervised Learning for Natural Language. Master of Engineering thesis, Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA.

Lin, Jiann-Horng, and Ting-Yu Cheng. 2005. Dynamic Clustering using Support Vector Learning with Particle Swarm Optimization. In Proc. of the 18th International Conference on Systems Engineering (ICSEng'05):218-223.

Lin, Yi-Feng, Tzong-Han Tsai, et al. 2004. A Maximum Entropy Approach to Biomedical Named Entity Recognition. In Proc. of the 4th ACM SIGKDD Workshop on Data Mining in Bioinformatics (BIOKDD 2004), Aug 22, at Seattle, Washington:56-61.

Liu, Guangli, Yongshun Wu, et al. 2006. Weighted Ordinal Support Vector Clustering. In Proc. of the First International Multi-Symposiums on Computer and Computational Sciences (IMSCCS'06), Jun 20-24, at Hangzhou, China:743-745.

Majoros, W. H., G. M. Subramanian, et al. 2003. Identification of Key Concepts in Biomedical Literature using a Modified Markov Heuristic. Bioinformatics 19 (3):402-407.

Mika, S., and B. Rost. 2004. NLProt: Extracting Protein Names and Sequences from Papers. Nucleic Acids Research 32 (Web Server issue):W634-637.

Nath, J. Saketha, and S. K. Shevade. 2006. An Efficient Clustering Scheme using Support Vector Methods. Pattern Recognition 39 (8):1473-1480.

Nenadic, G., I. Spasic, et al. 2003. Terminology-Driven Mining of Biomedical Literature. Bioinformatics 19 (8):938-943.

Nenadic, Goran, Simon Rice, et al. 2003. Selecting Text Features for Gene Name Classification: from Documents to Terms. In Proc. of the ACL Workshop on Natural Language Processing in Biomedicine, at Sapporo, Japan:121-128.


Oliver, Diane, Gaurav Bhalotia, et al. 2004. Tools for Loading MEDLINE into a Local Relational Database. BMC Bioinformatics 5 (1):146.

Puma-Villanueva, Wilfredo J., George B. Bezerra, et al. 2005. Improving Support Vector Clustering with Ensembles. In IEEE International Joint Conference on Neural Networks (IJCNN), at Montreal, Quebec.

Reinberger, Marie-Laure, and Walter Daelemans. 2005. Unsupervised Text Mining for Ontology Learning. In Proc. of the Dagstuhl Seminar on Machine Learning for the Semantic Web, Feb 13-18, at Wadern, Germany.

Rindflesch, T. C., L. Tanabe, et al. 2000. EDGAR: Extraction of Drugs, Genes and Relations from the Biomedical Literature. In Proc. of the Pacific Symposium on Biocomputing, Jan 4-9, at Hawaii:517-528.

Rindflesch, Thomas C., and Marcelo Fiszman. 2003. The Interaction of Domain Knowledge and Linguistic Structure in Natural Language Processing: Interpreting Hypernymic Propositions in Biomedical Text. Journal of Biomedical Informatics 36 (6):462-477.

Ripplinger, Barbel, Spela Vintar, et al. 2002. Cross-Lingual Medical Information Retrieval through Semantic Annotation. In NLPBA2002 Workshop on Natural Language Processing in Biomedical Applications, Mar 8-9, at Nicosia, Cyprus.

Rüping, Stefan. 2002. Efficient Kernel Calculation for Multirelational Data. In FGML 2002, at Hannover, Germany.

Saradhi, Vijaya V., Harish Karnik, et al. 2005. A Decomposition Method for Support Vector Clustering. In Proc. of the 2nd International Conference on Intelligent Sensing and Information Processing (ICISIP-2005), Jan 4-7:268-271.

Schulz, Stefan, Martin Honeck, et al. 2002. Biomedical Text Retrieval in Languages with a Complex Morphology. In Proc. of the ACL-02 workshop on Natural language processing in the biomedical domain, at Philadelphia, PA:61-68.

Shatkay, H., and R. Feldman. 2003. Mining the Biomedical Literature in the Genomic Era: An Overview. Journal of Computational Biology 10 (6):821-855.

Shen, D., J. Zhang, et al. 2003. Effective Adaptation of a Hidden Markov Model-Based Named Entity Recognizer for Biomedical Domain. In Proc. of the ACL Workshop on Natural Language Processing in Biomedicine, Jul 8-10, at Sapporo, Japan:49-56.

Shin, Miyoung, Eun Mi Kang, et al. 2003. Automatically Finding Good Clusters with Seed K-Means. Genome Informatics Series 14:326-327.

Silva, Joaquim, Zornitsa Kozareva, et al. 2004. Extracting Named Entities. A Statistical Approach. In Proc. of the 11th Annual Conference on Natural Language Processing (le Traitement Automatique des Langues Naturelles) (TALN 2004), Apr 19-22, at Fez, Morocco:19-21.

Solorio, Thamar. 2004. Improvement of Named Entity Tagging by Machine Learning. Puebla, México: Coordinación de Ciencias Computacionales, Instituto Nacional de Astrofísica.

Solorio, Thamar, and Aurelio Lopez Lopez. 2004. Adapting a Named Entity Recognition System for Spanish to Portuguese. In Workshops on Artificial Intelligence: Workshop on Herramientas y Recursos Lingüísticos para el Español y el Portugués, Nov, at Puebla, Mexico:292-297.


Spasic, I., G. Nenadic, et al. 2003. Using Domain-Specific Verbs for Term Classification. In Proc. of the ACL Workshop on Natural Language Processing in Biomedicine, Jul 8-10, at Sapporo, Japan:17-24.

Sun, Bing-Yu, and De-Shuang Huang. 2003. Support Vector Clustering for Multiclass Classification Problems. In The 2003 Congress on Evolutionary Computation (CEC '03), Dec 8-12:1480-1485.

Takeuchi, Koichi, and Nigel Collier. 2002. Use of Support Vector Machines in Extended Named Entity Recognition. In Proc. of the 6th Workshop on Computational Language Learning (CoNLL-2002), Aug 31 - Sep 1, at Taipei, Taiwan:119-125.

———. 2003. Bio-Medical Entity Extraction using Support Vector Machines. In Proc. of the ACL Workshop on Natural Language Processing in Biomedicine, Jul 8-10, at Sapporo, Japan:57-64.

Tanabe, Lorraine, and W. John Wilbur. 2002. Tagging Gene and Protein Names in Full Text Articles. In Proc. of the Workshop on Natural Language Processing in the Biomedical Domain, at Philadelphia, PA:9-13.

Torii, Manabu, Sachin Kamboj, et al. 2004. Using Name-Internal and Contextual Features to Classify Biological Terms. Journal of Biomedical Informatics 37 (6):498-511.

Torii, Manabu, and K. Vijay-Shanker. 2002. Using Unlabeled MEDLINE Abstracts for Biological Named Entity Classification. In Proc. of the Genome Informatics Workshop:567-568.

Uramoto, N., H. Matsuzawa, et al. 2004. A Text-Mining System for Knowledge Discovery from Biomedical Documents. IBM Systems Journal 43 (3):516-533.

Vlachos, Andreas, and Caroline Gasperin. 2006. Bootstrapping and Evaluating Named Entity Recognition in the Biomedical Domain. In Proc. of the BioNLP'06 Workshop (Linking Natural Language Processing and Biology), Jun 8, at Brooklyn, NY:138-145.

Volk, Martin, Barbel Ripplinger, et al. 2002. Semantic Annotation for Concept-Based Cross-Language Medical Information Retrieval. International Journal of Medical Informatics 67 (1-3):97-112.

Winters-Hilt, Stephen, Anil Yelundur, et al. 2006. Support Vector Machine Implementations for Classification and Clustering. BMC Bioinformatics 7:S4.

Witten, Ian H., and Eibe Frank. 2005. Data Mining: Practical Machine Learning Tools and Techniques. 2nd ed. San Francisco, CA: Elsevier Inc.

Xu, Linli, James Neufeld, et al. 2004. Maximum Margin Clustering. In Advances in Neural Information Processing Systems (NIPS 2004), Dec 13-18, at Vancouver, Canada:1537-1544.

Yamamoto, Kaoru, Taku Kudo, et al. 2003. Protein Name Tagging for Biomedical Annotation in Text. In Proc. of the ACL Workshop on Natural Language Processing in Biomedicine, Jul 8-10, at Sapporo, Japan:65-72.

Yilmaz, Ozlem, Luke E.K. Achenie, et al. 2007. Systematic Tuning of Parameters in Support Vector Clustering. Mathematical Biosciences 205:252-270.

Yu, Hong, Vasileios Hatzivassiloglou, et al. 2002. Automatically Identifying Gene/Protein Terms in MEDLINE Abstracts. Journal of Biomedical Informatics 35 (5-6):322-330.

Zweigenbaum, Pierre, and Natalia Grabar. 2002. Restoring Accents in Unknown Biomedical Words: Application to the French MeSH Thesaurus. International Journal of Medical Informatics 67 (1-3):113-126.


Appendix A

SVM Data Dictionary

The SVM data dictionary translates the schema presented in Figure 6.2. The schema supports the definition of training and testing datasets, multiple labeling designations, learning environments, trained models, and classification outputs. Binary and multi-class learning environments are integrated in the database schema and the embedded modules.

In the following data dictionary, primary key and foreign key constraints are indicated. Computed fields hold values that are computed during the learning and/or classification processes, or generated using insert/update triggers in order to ensure data integrity. For instance, the fields in the SVM table that describe the example set in use are pre-computed by triggers invoked on insert/update of the SVM or Example_Set tables. Pre-computing these metadata fields eliminates the need to re-compute those limits before every learning run, and also provides a quick way of gathering statistics about the training/testing datasets.
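The computation performed by those triggers can be sketched in C, the language of the embedded SVMlight modules. The EXAMPLE struct and the function name compute_set_limits below are illustrative, not part of the actual schema code; the sketch only shows how max_featnum and max_exwords would be derived from the stored feature vectors.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct word {
    int32_t wnum;   /* word number (feature index) */
    float weight;   /* word weight (feature value) */
} WORD;

/* One training/testing example: a feature vector of n_words pairs. */
typedef struct {
    const WORD *words;
    size_t n_words;
} EXAMPLE;

/* Recompute the metadata the insert/update triggers maintain:
 * the largest feature number in the set (max_featnum) and the
 * length of the longest feature vector (max_exwords). */
static void compute_set_limits(const EXAMPLE *ex, size_t n_examples,
                               int32_t *max_featnum, size_t *max_exwords)
{
    *max_featnum = 0;
    *max_exwords = 0;
    for (size_t i = 0; i < n_examples; i++) {
        if (ex[i].n_words > *max_exwords)
            *max_exwords = ex[i].n_words;
        for (size_t j = 0; j < ex[i].n_words; j++)
            if (ex[i].words[j].wnum > *max_featnum)
                *max_featnum = ex[i].words[j].wnum;
    }
}
```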

In order to streamline the data representation of feature vectors, reduce database access during fetches, and eliminate the need for data type conversion and/or array building whenever feature vectors are needed during learning and/or classification, we represent each feature vector as a binary array of number:weight pairs, as defined by the WORD data structure in svm_common.h (Joachims 1998a, 2002):

typedef struct word {
    int32_t wnum;    /* word number */
    float   weight;  /* word weight */
} WORD;
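As an illustration, the following C sketch packs a WORD array into the contiguous byte buffer that the BYTEA features column would hold, and reads it back. The helper names pack_features and unpack_features are hypothetical, and the sketch assumes the pairs are laid out back-to-back in the machine's native representation, as the in-process embedded modules would see them.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef struct word {
    int32_t wnum;   /* word number */
    float weight;   /* word weight */
} WORD;

/* Pack a WORD array into a contiguous byte buffer, as it would be
 * stored in the BYTEA features column. The caller frees the result. */
static unsigned char *pack_features(const WORD *words, size_t n, size_t *out_len)
{
    *out_len = n * sizeof(WORD);
    unsigned char *buf = malloc(*out_len);
    memcpy(buf, words, *out_len);
    return buf;
}

/* View a fetched byte buffer as a WORD array again: in native layout
 * no per-pair conversion or array rebuilding is needed. */
static const WORD *unpack_features(const unsigned char *buf, size_t len, size_t *n)
{
    *n = len / sizeof(WORD);
    return (const WORD *)buf;
}
```

Because the unpack step is only a reinterpretation of the buffer, a fetched feature vector can be handed directly to the learning code, which is the design motivation stated above.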


Each table below lists column name, data type, key constraints, and notes.

Example_Set
    example_set_id    CHAR(30)       PK
    example_set_desc  VARCHAR(255)
    max_featnum       INTEGER        Computed field
    max_exwords       INTEGER        Computed field

Example
    example_set_id    CHAR(30)       PK, FK REFERENCES Example_Set
    example_id        INTEGER        PK NOT NULL
    example_text      VARCHAR(255)
    label             CHAR(30)
    features          BYTEA          WORD array

Label_Set
    label_set_id      CHAR(30)       PK
    label_set_desc    VARCHAR(255)

Label
    label_set_id      CHAR(30)       PK, FK REFERENCES Label_Set
    label             CHAR(30)       PK NOT NULL
    class             INTEGER        NOT NULL

Kernel
    kernel_type       CHAR(10)       PK
    kernel_desc       VARCHAR(255)

SVM
    svm_id            CHAR(30)       PK
    svm_type          CHAR(30)       NOT NULL
    example_set_id    CHAR(30)       FK NOT NULL REFERENCES Example_Set
    label_set_id      CHAR(30)       FK NOT NULL REFERENCES Label_Set
    kernel_type       CHAR(10)       FK NOT NULL REFERENCES Kernel
    ex_start          INTEGER        DEFAULT 1
    ex_end            INTEGER        DEFAULT -1 (-1 denotes full set)
    num_examples      INTEGER        Computed field
    max_featnum       INTEGER        Computed field
    max_exwords       INTEGER        Computed field
    num_labeled       INTEGER        Computed field
    num_unlabeled     INTEGER        Computed field


Learn_Param
    learn_param_id     CHAR(30)       PK
    learn_algorithm    INTEGER        DEFAULT 3
    loss_function      INTEGER        DEFAULT 2
    loss_type          INTEGER        DEFAULT 2
    error_margin       REAL           DEFAULT 0.1
    C_light            REAL           DEFAULT 0.01
    accelerator        REAL           DEFAULT 0.0001
    ex_cache           BOOLEAN        DEFAULT true
    ex_cache_size      INTEGER        DEFAULT -1 (-1 denotes full cache)
    supvec_cache       BOOLEAN        DEFAULT false
    supvec_cache_size  INTEGER        DEFAULT 50 (-1 denotes full cache)
    log_level          CHAR(10)       DEFAULT 'INFO'

SVM_Learn
    svm_id            CHAR(30)          PK, FK REFERENCES SVM
    learn_param_id    CHAR(30)          PK, FK REFERENCES Learn_Param
    learning_start    TIMESTAMP         Computed field
    learning_end      TIMESTAMP         Computed field

SVM_Model
    svm_id            CHAR(30)          PK, FK REFERENCES SVM
    learn_param_id    CHAR(30)          PK, FK REFERENCES Learn_Param
    supvec_id         INTEGER           PK NOT NULL, Computed field
    selected          BOOLEAN           DEFAULT false, Computed field
    alpha             DOUBLE PRECISION  Computed field
    features          BYTEA             Computed field, WORD array

SVM_Model_Kernel
    svm_id            CHAR(30)          PK, FK REFERENCES SVM
    learn_param_id    CHAR(30)          PK, FK REFERENCES Learn_Param
    supvec_id         INTEGER           PK NOT NULL
    to_supvec_id      INTEGER           NOT NULL
    kernel_prod       DOUBLE PRECISION  Computed field


SVM_Classify
    svm_id                CHAR(30)   PK, FK REFERENCES SVM
    learn_param_id        CHAR(30)   PK, FK REFERENCES Learn_Param
    example_set_id        CHAR(30)   PK, FK NOT NULL REFERENCES Example_Set
    classification_start  TIMESTAMP  Computed field
    classification_end    TIMESTAMP  Computed field

Classified_Example
    svm_id            CHAR(30)          PK, FK REFERENCES SVM
    learn_param_id    CHAR(30)          PK, FK REFERENCES Learn_Param
    example_set_id    CHAR(30)          PK, FK NOT NULL REFERENCES Example_Set
    example_id        INTEGER           PK, FK NOT NULL REFERENCES Example
    prediction        DOUBLE PRECISION  Computed field
    label             CHAR(30)          Optional computed field holding the prediction-to-label translation, as defined by the label_set_id for the corresponding svm_id


Appendix B

NER Experimentation Datasets

One of the first steps in setting up experiments is to identify the datasets and evaluation tools to use in order to provide a fair comparison ground for NER systems. In this section we describe the three main datasets commonly used for experimentation: the JNLPBA-04 task (Kim et al. 2004), the CoNLL-02 task (Tjong Kim Sang 2002a), and the CoNLL-03 task (Tjong Kim Sang and De Meulder 2003). The baseline experiments presented in Chapter 4 use the JNLPBA-04 training and testing datasets. Together, the three datasets provide a suitable testbed for language and domain independence: the JNLPBA-04 dataset represents the English biomedical domain, while the CoNLL datasets represent the multilingual general domain. Successfully identifying named entities in all three datasets without incorporating domain or language knowledge supports the thesis of this research.

B.1 BioNLP JNLPBA-04 Dataset

The expanding application of natural language processing to the biomedical literature, and the emergence of new systems and techniques for information retrieval and extraction in this field, raise the importance of having common, standardized evaluation and benchmarking methods with which to compare the efficiency and effectiveness of the IR and IE methods used. Hirschman et al. (2002) propose creating challenge evaluations specifically for the methods used to extract information in the biomedical field, and identify the ingredients of a successful evaluation as follows: a challenge problem, a task definition, training data, test data, and an evaluation methodology.

The JNLPBA-04 challenge task (Kim et al. 2004) offers all of the necessary

ingredients identified by Hirschman et al. (2002) for a successful evaluation. This

challenge task provides a standard testing environment for biomedical NER research. The

challenge task defines:

1. A challenge problem: named entity recognition in the biomedical literature.

2. A task definition: identifying the names of proteins, cell lines, cell types, DNA

and RNA entities in Medline abstracts.

3. Training and test data: annotated abstracts from the GENIA corpus (Kim et al.

2003). The training and test datasets are described in more detail below.

4. An evaluation methodology: a set of Perl scripts to evaluate the performance of

the participating systems in terms of precision, recall, and F-score of their output

data. Since the named entities are often composed of multiple words, the

evaluation scripts measure the performance relative to complete NER matches,

left boundary matches, and right boundary matches.
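The logic of such span-based scoring can be sketched as follows. This is a simplified Python sketch, not the actual JNLPBA-04 Perl scripts; the function names and the span representation (token-offset triples) are illustrative assumptions:

```python
def prf(n_correct, n_predicted, n_gold):
    """Precision, recall, and F-score from raw match counts."""
    p = n_correct / n_predicted if n_predicted else 0.0
    r = n_correct / n_gold if n_gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

def score(gold, pred):
    """Score predicted entity spans against gold spans.

    Each span is (start, end, type). A full match requires both
    boundaries and the type to agree; left/right matches relax
    one boundary, mirroring the JNLPBA-04 relaxed criteria.
    """
    gold_set, pred_set = set(gold), set(pred)
    full = len(gold_set & pred_set)
    left = sum(1 for (s, e, t) in pred_set
               if any(s == gs and t == gt for (gs, ge, gt) in gold_set))
    right = sum(1 for (s, e, t) in pred_set
                if any(e == ge and t == gt for (gs, ge, gt) in gold_set))
    return {"full": prf(full, len(pred), len(gold)),
            "left": prf(left, len(pred), len(gold)),
            "right": prf(right, len(pred), len(gold))}
```

For example, a prediction that finds the right entity type and left boundary but overruns the right boundary counts toward the left-match score while lowering the full-match score.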

The training and testing data use the GENIA annotated corpus (Kim et al. 2003) of

Medline articles (NLM 2007a), where the names of proteins, cell lines, cell types, DNA

and RNA entities are previously labeled. The named entities are often composed of a

sequence of words. The training data includes 2,000 annotated abstracts (consisting of

492,551 tokens). The testing data includes 404 abstracts (consisting of 101,039 tokens)

annotated for the same classes of entities: half of the test abstracts are from the same

domain as the training data and the other half of them are from the super-domain of


‘blood cells’ and ‘transcription factors’. The testing data sets are grouped in four subsets,

covering abstracts from different year ranges. The fraction of positive examples with

respect to the total number of tokens in the training set varies from about 0.2% to about

6%. Basic statistics about the data sets as well as the absolute and relative frequencies for

named entities within each set can be found in Table B.1 and Table B.2.

The JNLPBA-04 (BioNLP) testing dataset is roughly divided into four subsets based

on publication years: 1978-1989 set (which includes abstracts older than those in the

training set), 1990-1999 set (which represents the same age as the training set), 2000-

2001 set (which includes newer abstracts than those in the training set) and S/1998-2001

set (which is another new age set in a super domain).

Table B.1 – Basic Statistics for the JNLPBA-04 Data Sets [23]

              # abstracts   # sentences          # words
Training Set    2,000       20,546 (10.27/abs)   472,006 (236.00/abs) (22.97/sen)
Test Set          404        4,260 (10.54/abs)    96,780 (239.55/abs) (22.72/sen)
1978-1989         104          991 ( 9.53/abs)    22,320 (214.62/abs) (22.52/sen)
1990-1999         106        1,115 (10.52/abs)    25,080 (236.60/abs) (22.49/sen)
2000-2001         130        1,452 (11.17/abs)    33,380 (256.77/abs) (22.99/sen)
S/1998-2001       206        2,270 (11.02/abs)    51,957 (252.22/abs) (22.89/sen)

Table B.2 – Absolute and Relative Frequencies for Named Entities Within Each Set [24]

              protein        DNA          RNA        cell_type    cell_line    All NEs
Training Set  30,269 (15.1)  9,533 (4.8)  951 (0.5)  6,718 (3.4)  3,830 (1.9)  51,301 (25.7)
Test Set       5,067 (12.5)  1,056 (2.6)  118 (0.3)  1,921 (4.8)    500 (1.2)   8,662 (21.4)
1978-1989        609 ( 5.9)    112 (1.1)    1 (0.0)    392 (3.8)    176 (1.7)   1,290 (12.4)
1990-1999      1,420 (13.4)    385 (3.6)   49 (0.5)    459 (4.3)    168 (1.6)   2,481 (23.4)
2000-2001      2,180 (16.8)    411 (3.2)   52 (0.4)    714 (5.5)    144 (1.1)   3,501 (26.9)
S/1998-2001    3,186 (15.5)    588 (2.9)   70 (0.3)  1,138 (5.5)    170 (0.8)   5,152 (25.0)

[23] Kim et al. 2004. Introduction to the Bio-Entity Recognition Task at JNLPBA. In Proc. of the 2004 Joint Workshop on Natural Language Processing in Biomedicine and its Applications (JNLPBA'2004), Aug 28-29, Geneva, Switzerland: 70-75.
[24] Ibid., 72.


B.2 CoNLL-02 Dataset

The CoNLL-02 shared task is a language-independent named entity recognition

challenge presented as part of the 6th Workshop on Computational Language Learning.

The task concentrates on four types of general named entities: persons, locations,

organizations, and names of miscellaneous entities (Tjong Kim Sang 2002a). It offers

datasets for two European languages: Spanish and Dutch. The named entities are

assumed to be non-recursive and non-overlapping.

The Spanish data is a collection of news wire articles from May 2000, made available

by the Spanish EFE News Agency. The data contains only words and entity tags, with each

word appearing on a separate line. The training dataset contains 273,037 lines, the development

dataset contains 54,837 lines, and the test dataset contains 53,049 lines.

The Dutch data consist of four editions of the Belgian newspaper “De Morgen” from

the year 2000, and contain words, entity tags and part-of-speech tags. The training dataset

contains 218,737 lines, the development dataset contains 40,656 lines, and the test

dataset contains 74,189 lines.

Named entities are tagged using either a B-XXX tag denoting the beginning of an

entity (e.g., B-PER denotes the beginning of a person's name), or an I-XXX tag which

indicates a continuation of the same entity started with the corresponding B-XXX tag.

Words tagged with O are outside of named entities.
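Decoding this tagging scheme into entity spans is straightforward. The following minimal Python sketch is illustrative (the function name is an assumption, not part of the CoNLL-02 distribution):

```python
def decode_iob2(tags):
    """Group B-XXX/I-XXX tags into (start, end, type) entity spans,
    with `end` exclusive. Assumes the CoNLL-02 convention that every
    entity opens with a B-XXX tag and continues with I-XXX tags."""
    entities, start, etype = [], None, None
    for i, tag in enumerate(tags):
        # Close the currently open entity when its continuation stops.
        if start is not None and tag != "I-" + etype:
            entities.append((start, i, etype))
            start = None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
    if start is not None:
        entities.append((start, len(tags), etype))
    return entities
```

For instance, decode_iob2(["B-PER", "I-PER", "O", "B-LOC", "I-LOC"]) yields [(0, 2, "PER"), (3, 5, "LOC")], and two adjacent B-XXX tags of the same type correctly produce two separate entities.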

Table B.3 – Number of Named Entities in the CoNLL-02 Dataset

              Spanish Datasets             Dutch Datasets
              LOC   MISC  ORG   PER        LOC   MISC  ORG   PER
Training      4913  2173  7390  4321       3208  3338  2082  4716
Development    984   445  1700  1222        479   748   686   703
Test          1084   339  1400   735        774  1187   882  1098


B.3 CoNLL-03 Dataset

The CoNLL-03 shared task is another language-independent named entity recognition

challenge offered as part of the 7th Conference on Natural Language Learning

(Tjong Kim Sang and De Meulder 2003). The task concentrates on four general named

entity types: persons, locations, organizations, and names of miscellaneous entities. It

offers datasets for two different European languages than those offered during the

CoNLL-02 task, namely English and German. The data for each language consists of four

files: training, development, and test datasets, plus a large file of unannotated data. The

task required a machine learning approach that incorporates the unannotated data file in

the learning process.

The English data was taken from the Reuters Corpus, which consists of Reuters news

stories from 1996 and 1997. The text for the German data was extracted from the 1992

German newspaper Frankfurter Rundschau. Table B.4 and Table B.5 summarize the data

contents for each language. The English unannotated data file contains 17 million tokens

and the German unannotated data file contains 14 million tokens.

The data files contain one word per line with empty lines representing sentence

boundaries. Each line contains four fields: the word, its part-of-speech tag, its chunk tag

and its named entity tag. Words tagged with O are outside of named entities and the I-

XXX tag is used for words inside a named entity of type XXX. Whenever two entities of

type XXX are immediately next to each other, the first word of the second entity is

tagged B-XXX. The named entity annotation style is different from that used for the

CoNLL-02 challenge task, where B-XXX tags were present for each named entity in the

corpus and not just to split two consecutive entities.
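The difference between the two schemes can be normalized in a few lines. The sketch below is a hypothetical helper (assuming tag strings such as "I-PER") that rewrites CoNLL-03 style tags into the CoNLL-02 style, where every entity begins with B-XXX:

```python
def iob1_to_iob2(tags):
    """Convert CoNLL-03 style tags (I-XXX for every entity word,
    B-XXX only to split two adjacent same-type entities) into the
    CoNLL-02 style where every entity opens with B-XXX."""
    out = []
    for i, tag in enumerate(tags):
        if tag.startswith("I-"):
            prev = tags[i - 1] if i > 0 else "O"
            # An I-XXX opens a new entity unless the previous tag
            # carries the same type (either I-XXX or B-XXX).
            if prev[2:] != tag[2:]:
                tag = "B-" + tag[2:]
        out.append(tag)
    return out
```

After this normalization, both corpora can be decoded and scored with the same entity-span logic.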


The CoNLL-03 task data requires access to the Reuters corpus, since the data files contain only

references to the Reuters corpus and the scripts needed to extract, tokenize, and annotate

the data. In addition to the data files, the CoNLL-03 task provides gazetteers (reference

lists) of known person names, locations, organizations, and miscellaneous entities.

Table B.4 – Basic Statistics for the CoNLL-03 Dataset [25]

              English Datasets                  German Datasets
              Articles  Sentences  Tokens       Articles  Sentences  Tokens
Training        946     14,987     203,621        553     12,705     206,931
Development     216      3,466      51,362        201      3,068      51,444
Test            231      3,684      46,435        155      3,160      51,943

Table B.5 – Number of Named Entities in the CoNLL-03 Dataset [26]

              English Datasets           German Datasets
              LOC   MISC  ORG   PER      LOC   MISC  ORG   PER
Training      7140  3438  6321  6600     4363  2288  2427  2773
Development   1837   922  1341  1842     1181  1010  1241  1401
Test          1668   702  1661  1617     1035   670   773  1195

[25] Tjong Kim Sang and De Meulder. 2003. Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition. In Proc. of the 7th Conference on Natural Language Learning (CoNLL-2003), May 31 - Jun 1, Edmonton, Canada: 142-147.
[26] Ibid., 143.


Appendix C

Biomedical Entities in the GENIA Corpus

The following information about the GENIA corpus is extracted from the GENIA

Project website at http://www-tsujii.is.s.u-tokyo.ac.jp/GENIA/home/wiki.cgi, and a

technical report describing the GENIA ontology (Kim et al. 2006). “The GENIA project

seeks to automatically extract useful information from texts written by scientists to help

overcome the problems caused by information overload” (The GENIA Project 2002-

2006).

Annotating (and identifying) biomedical named entities is not as simple as identifying

general entities such as people names or locations. Biological terms in the GENIA corpus

are semantically defined as the terms identifiable with any terminal concept in the GENIA

ontology. Yet syntactically, they are not simply defined. Terms including biological

entity names are mostly general nouns and they can appear in text with a variety of

specifiers or qualifiers. To cite a couple of annotation challenges, biological named

entities may appear nested, such as the example of IL-2 gene and IL-2 gene transcription,

or appear in coordinated clauses; for example, 'CD2 and CD25 receptors' refers to two

terms, CD2 receptors and CD25 receptors, but CD2 receptors doesn’t appear in the text.

C.1 The Current GENIA Ontology

The GENIA ontology (Kim et al. 2006), summarized in Table C.1, is a taxonomy of

some entities involved in biological reactions concerning transcription factors in human

blood cells. It was developed as the semantic classification used in the GENIA corpus.

The five entities used for the NER task are highlighted.


Table C.1 – The Current GENIA Ontology
(entities marked * are the five used for the NER task)

source
    natural
        organism
            multi-cell organism
            mono-cell organism
            virus
        body part
        tissue
        cell type *
        cell component
        other (natural source)
    artificial
        cell line *
        other (artificial source)
substance
    compound
        organic
            amino acid
                protein *
                    protein family or group
                    protein complex
                    individual protein molecule
                    subunit of protein complex
                    substructure of protein
                peptide
                amino acid monomer
            nucleic acid
                DNA *
                    DNA family or group
                    individual DNA molecule
                    domain or region of DNA
                RNA *
                    RNA family or group
                    individual RNA molecule
                    domain or region of RNA
                polynucleotide
                nucleotide
            lipid
                steroid
            carbohydrate
            other (organic compounds)
        inorganic
    atom
other


C.2 Entity Definitions from the GENIA Ontology

The five named entities extracted in the NER biomedical experiments are a subset of

the GENIA ontology. The NER task entities are: protein, DNA, RNA, cell type, and cell

line. The definitions come from the GENIA Project website (The GENIA Project 2002-

2006) and the technical report describing the GENIA ontology (Kim et al. 2006).

1. Protein: Proteins include protein groups, families, molecules, complexes, and

substructures. Corresponds to the Proteins category of MeSH.

“Linear POLYPEPTIDES that are synthesized on RIBOSOMES and may be

further modified, crosslinked, cleaved, or assembled into complex proteins with

several subunits. The specific sequence of AMINO ACIDS determines the shape

the polypeptide will take, during PROTEIN FOLDING, and the function of the

protein.”

# protein family or group: A family or a group of proteins, e.g., STATs

# protein complex: A protein complex e.g., RNA polymerase II. The class

includes conjugated proteins such as lipoproteins and glycoproteins.

# individual protein molecule: An individual member of a group of non-

complex proteins, e.g., STAT1, STAT2, STAT3, or a (non-complex) protein

not regarded as a member of a particular group.

# subunit of protein complex: A monomer in a complex, e.g., RNA

polymerase II alpha subunit.

# substructure of protein: A secondary structure or a combination of

secondary structures, e.g. leucine-zipper, zinc-finger, alpha-helix, beta-sheet,

helix-loop-helix.


# domain or region of protein: A tertiary structure that is supposed to have a

particular function, e.g., SH2, SH3.

2. DNA: DNAs include DNA groups, families, molecules, domains, and regions.

Corresponds to DNA category of MeSH.

“A deoxyribonucleotide polymer that is the primary genetic material of all cells.

Eukaryotic and prokaryotic organisms normally contain DNA in a double-

stranded state, yet several important biological processes transiently involve

single-stranded regions. DNA, which consists of a polysugar-phosphate backbone

possessing projections of purines (adenine and guanine) and pyrimidines

(thymine and cytosine), forms a double helix that is held together by hydrogen

bonds between these purines and pyrimidines (adenine to thymine and guanine to

cytosine).”

# DNA family or group: A family or a group of DNAs, e.g., myc family genes,

rel family genes.

# individual DNA molecule: An individual member of a family or a group of

DNAs, e.g., AP-1/c-jun expression vector, AP2 cDNA.

# domain or region of DNA: A substructure of DNA molecule which is

supposed to have a particular function, such as a gene, e.g., c-jun gene,

promoter region, Sp1 site, CA repeat. This class also includes a base sequence

that has a particular function.

3. RNA: RNAs include RNA groups, families, molecules, domains, and regions.

Corresponds to RNA category of MeSH.

“A polynucleotide consisting essentially of chains with a repeating backbone of

phosphate and ribose units to which nitrogenous bases are attached. RNA is

unique among biological macromolecules in that it can encode genetic

information, serve as an abundant structural component of cells, and also


possesses catalytic activity. (Rieger et al., Glossary of Genetics: Classical and

Molecular, 5th ed)”

# RNA family or group: A family or a group of RNAs, e.g., tRNAs, viral

RNA, HIV mRNA.

# individual RNA molecule: An individual molecule of RNA, e.g., globlin

mRNA, Oct-T1 transcript.

# domain or region of RNA: A domain or a region of RNA, e.g., polyA site,

alternative splicing site.

4. Cell type: A cell type, e.g., T-lymphocyte, T cell, astrocyte, fibroblast.

Corresponds to the Cells category of MeSH excluding the Cell Structure sub-category

which is separated out into Cell_component.

“The fundamental, structural, and functional units or subunits of living

organisms. They are composed of CYTOPLASM containing various

ORGANELLES and a CELL MEMBRANE boundary.”

5. Cell line: The class includes cell strains and established cell cultures, e.g., HeLa

cell, NIH 3T3, lymphoma line, human bone marrow culture. It corresponds to the

Cells, Cultured category of MeSH.

“The fundamental, structural, and functional units or subunits of living

organisms. They are composed of CYTOPLASM containing various

ORGANELLES and a CELL MEMBRANE boundary.”

According to Kim et al. (2006), the name of this class has been changed from

Cell_line, as it covers not only cell lines but also other cultured cell types including

clone or hybrid cells.


C.3 Sample Annotated Biomedical Text

The sample MEDLINE abstracts are extracted from the GENIA corpus. The five

named entity types are highlighted in different colors according to the following legend:

Entity Types: protein, DNA, RNA, cell_line, cell_type

We present the sample abstracts in this format for better visibility. When consecutive

words are highlighted, they represent one named entity and should be classified as such.

Note that named entities may appear in different shapes and forms.

Sample #1: MEDLINE Abstract# 89078997 (LeBowitz et al. 1988)

Octamer-binding proteins from B or HeLa cells stimulate transcription of the

immunoglobulin heavy-chain promoter in vitro. The B-cell -type specificity of the

immunoglobulin (Ig) heavy-chain and light-chain promoters is mediated by an

octanucleotide (OCTA) element, ATGCAAAT, that is also a functional component of

other RNA polymerase II promoters, such as snRNA and histone H2B promoters. Two

nuclear proteins that bind specifically and with high affinity to the OCTA element have

been identified. NF-A1 is present in a variety of cell types, whereas the presence of NF-

A2 is essentially confined to B cells, leading to the hypothesis that NF-A2 activates cell-

type-specific transcription of the Ig promoter and NF-A1 mediates the other responses of

the OCTA element. Extracts of the B-cell line, BJA-B, contain high levels of NF-A2 and

specifically transcribe Ig promoters. In contrast, extracts from HeLa cells transcribed the

Ig promoter poorly. Surprisingly, addition of either affinity-enriched NF-A2 or NF-A1 to

either a HeLa extract or a partially purified reaction system specifically stimulates the Ig

promoter. This suggests that the constitutive OCTA-binding factor NF-A1 can activate

transcription of the Ig promoter and that B-cell -specific transcription of this promoter, at

least in vitro, is partially due to a quantitative difference in the amount of OCTA-binding

protein. Because NF-A1 can stimulate Ig transcription, the inability of this factor to

activate in vivo the Ig promoter to the same degree as the snRNA promoters probably

reflects a difference in the context of the OCTA element in these two types of promoters.


Sample #2: MEDLINE Abstract# 98339383 (Suenaga et al. 1998)

Peripheral blood T cells and monocytes and B cell lines derived from patients with lupus

express estrogen receptor transcripts similar to those of normal cells. OBJECTIVE: To

identify and characterize estrogen receptor (ER) transcripts expressed in immune cells of

patients with systemic lupus erythematosus (SLE) and healthy donors. METHODS:

Peripheral blood monocytes and T cells were prepared from patients with SLE (n=6) and

healthy donors (n=8). T cells were separated into CD4 and CD8. Some monocytes and T

cells were stimulated with estradiol, PMA, and ionomycin. Epstein-Barr virus-

transformed B cell lines (n=7) and B cell hybridomas (n=2) established from patients

with SLE and a healthy individual were used as a B cell source. These cells were

examined for ER mRNA by reverse transcription nested polymerase chain reaction.

Amplified cDNA were sequenced by standard methods. RESULTS: In all cells tested, ER

mRNA was expressed without prior in vitro stimulation. Partial sequences from exons 1-

8 were nearly identical to the published sequence of the human ER mRNA. There were

no notable differences in the ER transcripts between patients and healthy controls.

Variant receptor transcripts lacking exon 5 or exon 7, which encodes the hormone

binding domain, were identified in the majority of the cells . Precise deletion of the

exons suggests that they are alternatively spliced transcripts. Whether the detected

transcripts are translated into functional receptor proteins remains to be determined. In

vitro stimulation did not affect ER mRNA expression. The presence of variants did not

correlate with disease activity or medication. CONCLUSION: Monocytes, T cells, and B

cells in patients express transcripts of the normal wild type ER and the hormone binding

domain variants in vivo.

Sample #3: MEDLINE Abstract# 98342108 (Lei and Kaufman 1998)

Human white blood cells and hair follicles are good sources of mRNA for the pterin

carbinolamine dehydratase/dimerization cofactor of HNF1 for mutation detection. Pterin

carbinolamine dehydratase/dimerization cofactor of HNF1 (PCD/DCoH) is a protein that

has a dual function. It is a pterin 4alpha-carbinolamine dehydratase that is involved in the

regeneration of the cofactor tetrahydrobiopterin during the phenylalanine hydroxylase-


catalyzed hydroxylation of phenylalanine. In addition, it is the dimerization cofactor of

HNF1 that is able to activate the transcriptional activity of HNF1. Deficiencies in the

gene for this dual functional protein result in hyperphenylalaninemia. Here we report for

the first time that the PCD/DCoH mRNA is present in human white blood cells and hair

follicles. Taking advantage of this finding, a sensitive, rapid and convenient method for

screening mutations occurring in the coding region of this gene has been described.

Sample #4: MEDLINE Abstract# 93389400 (Goldfeld et al. 1993)

Identification of a novel cyclosporin-sensitive element in the human tumor necrosis

factor alpha gene promoter. Tumor necrosis factor alpha (TNF-alpha), a cytokine with

pleiotropic biological effects, is produced by a variety of cell types in response to

induction by diverse stimuli. In this paper, TNF-alpha mRNA is shown to be highly

induced in a murine T cell clone by stimulation with T cell receptor (TCR) ligands or by

calcium ionophores alone. Induction is rapid, does not require de novo protein synthesis,

and is completely blocked by the immunosuppressant cyclosporin A (CsA). We have

identified a human TNF-alpha promoter element, kappa 3, which plays a key role in the

calcium-mediated inducibility and CsA sensitivity of the gene. In electrophoretic

mobility shift assays, an oligonucleotide containing kappa 3 forms two DNA protein

complexes with proteins that are present in extracts from unstimulated T cells. These

complexes appear in nuclear extracts only after T cell stimulation. Induction of the

inducible nuclear complexes is rapid, independent of protein synthesis, and blocked by

CsA, and thus, exactly parallels the induction of TNF-alpha mRNA by TCR ligands or by

calcium ionophore. Our studies indicate that the kappa 3 binding factor resembles the

preexisting component of nuclear factor of activated T cells. Thus, the TNF-alpha gene is

an immediate early gene in activated T cells and provides a new model system in which

to study CsA-sensitive gene induction in activated T cells.

Sample #5: MEDLINE Abstract# 20580277 (Manzano et al. 2000)

Human renal mesangial cells are a target for the anti-inflammatory action of 9-cis retinoic

acid. Mesangial cells play an active role in the inflammatory response to glomerular


injury. We have studied in cultured human mesangial cells (CHMC) several effects of 9-

cis retinoic acid (9-cRA), an activator of both retinoic acid receptors (RARs) and retinoid

X receptors (RXRs). 9-cRA inhibited foetal calf serum-induced CHMC proliferation. It

also prevented CHMC death induced by the inflammatory mediator H(2)O(2). This

preventive effect was not due to any increase in H(2)O(2) catabolism and it persisted

even when both catalase and glutathione synthesis were inhibited. Finally, 9-cRA

diminished monocyte adhesion to FCS-stimulated CHMC. Interestingly, the retinoid also

inhibited in FCS-stimulated cells the protein expression of two mesangial adhesion

molecules, fibronectin and osteopontin, but it did not modify the protein expression of

intercellular adhesion molecule-1 and vascular adhesion molecule-1. All major RARs and

RXRs isotypes were expressed in CHMC regardless of the presence or absence of 9-cRA.

Transcripts to RAR-alpha, RAR-beta and RXR-alpha increased after incubation with 9-

cRA whereas RXR-gamma was inhibited, suggesting a major role for RARs and RXRs in

9-cRA-anti-inflammatory effects. 9-cRA was toxic only at 50 microM (a concentration

50 - 5000 times higher than required for the effects above). Cell death occurred by

apoptosis, whose onset was associated with a pronounced increase in catalase activity and

reduced glutathione content, being more effectively induced by all-trans retinoic acid.

Modulation of the oxidant/antioxidant balance failed to inhibit apoptosis. We conclude

that mesangial cells might be a target for the treatment of inflammatory glomerulopathies

with 9-cRA.

Sample #6: MEDLINE Abstract# 98288355 (Shen et al. 1998)

Mutation of BCL-6 gene in normal B cells by the process of somatic hypermutation of Ig

genes. Immunoglobulin (Ig) genes are hypermutated in B lymphocytes that are the

precursors to memory B cells. The mutations are linked to transcription initiation, but

non-Ig promoters are permissible for the mutation process; thus, other genes expressed in

mutating B cells may also be subject to somatic hypermutation. Significant mutations

were not observed in c-MYC, S14, or alpha-fetoprotein (AFP) genes, but BCL-6 was

highly mutated in a large proportion of memory B cells of normal individuals. The

mutation pattern was similar to that of Ig genes.


Examples of Biomedical Named Entities in Each Group

The following sample named entities from the five groups: protein, DNA, RNA,

cell_type, and cell_line, are extracted from the GENIA corpus training dataset. Each line

represents one entity, which may be composed of one or more words. Note the

similarities among different named entities, which often differ by one or more qualifying

words (adverbs, adjectives, etc.). The classification performance is measured in terms of

full matches, further complicating the named entity recognition task. When some entities

differ only by the inclusion of words before or after another entity, it is difficult to decide

where an entity starts and/or ends, resulting in a missed left or right boundary.

Table C.2 - Examples of Protein Named Entities

# Protein Named Entity Comments 1 lipoxygenase metabolites sequence of words 2 5-lipoxygenase alphanumeric full word + hyphen 3 CD28 alphanumeric acronym 4 CD28 surface receptor previous entity #3 + other words 5 CD28-responsive complex previous entity #3 + hyphen + other words 6 interleukin-2 alphanumeric full word + hyphen 7 IL-2 acronym form of entity #6 8 protein tyrosine kinase sequence of words 9 phospholipase A2 mixed word + alphanumeric acronym

10 NF-kappa B mixed case multi-word + hyphen 11 transcription factor NF-kappa B NF-kappa B appears separately in #10 12 nuclear factor sequence of words 13 E1A alphanumeric acronym 14 E1A proteins E1A appears separately in #13 15 class I MHC antigen mixed acronym + other words 16 CD4 epitopes CD4 may appear separately 17 CD4 coreceptor CD4 may appear separately 18 non-polymorphic regions hyphenated sequence of words

19 major histocompatibility complex class II molecules

sequence of words starting with an adjective major

20 monoclonal antibodies sequence of words 21 mAb mixed case 22 NF-AT transcription factor hyphenated acronym + sequence of words 23 anti-CD4 mAb hyphenated acronym + mixed case 24 HIV-1 gp120 hyphenated acronym + alphanumeric 25 Ras mixed case acronym 26 Ras/protein kinase C acronym + slash + one letter word + other

Page 223: Improving Scalability of Support Vector Machines for ...jkalita/work/StudentResearch/HabibPhDThesis2008.pdfTo investigate the scalability of the SVM machine learning approach, we conduct

207

# Protein Named Entity Comments 27 protein tyrosine kinases 28 p56lck 29 p59fyn

Contiguous Entities Example: protein tyrosine kinases p56lck and p59fyn

30 Shc adaptor protein Shc may also appear separately 31 epitopes one word 32 erythroid transcription factor 33 GATA-1

Contiguous Entities Example: erythroid transcription factor GATA-1

34 estrogen receptor sequence of words 35 erythroid cell-specific histone 36 H5

Contiguous Entities Example: erythroid cell-specific histone H5

37 ER uppercase acronym 38 N-terminal activation domain hyphenated sequence of words + acronym 39 IL-2R alpha IL-2R may also appear separately 40 STAT proteins uppercase acronym + the word proteins 41 transcription factors sequence of words 42 BamHI ZLF-1 gene product mixed acronyms + hyphen + other words 43 ZEBRA uppercase word (or acronym) 44 ZEBRA protein ZEBRA appears separately in #43 45 fusion transcript sequence of words 46 major coat protein sequence of words starting with adjective 47 human STAT proteins STAT proteins is a protein named entity 48 prolactin-induced transcription factor hyphenated sequence of words 49 cyclin-like DNA repair enzyme hyphenated sequence of words + DNA 50 uracil-DNA glycosylase hyphenated sequence of words + DNA 51 cell cycle-dependent transcription factor hyphenated sequence of words

52 granulocyte macrophage colony-stimulating factor hyphenated sequence of words

53 Lck, linker for activation of T cells acronym _ other words + comma 54 intercellular adhesion molecule-1 also appears as RNA named entity #41 55 vascular adhesion molecule-1 one word difference between #54 and #55

56 rat intercellular adhesion molecule-1 monoclonal antibody sequence of words + hyphen + number

57 protein-1 hyphenated word + number 58 anti-IgM Abs mixed case acronyms + hyphen 59 Syk-receptor complexes mixed case + hyphen + other words 60 tightly capped BCR complexes uppercase acronym + other words 61 angiotensin II type 1a receptor alphanumeric sequence of words 62 collagen types III and IV roman numbers + other words 63 specific antibodies sequence of words 64 transforming growth factor-beta sequence of words + hyphen 65 labeled MHC-peptide ligand sequence of words + hyphen 66 anti-CD40 CD40 may appear separately 67 seven transmembrane receptor sequence of words 68 chemokine receptors sequence of words 69 G-protein coupled chemokine receptors sequence including entity #68 70 cell-type-specific trans-acting factors multiple hyphenated words

72  constitutively expressed transcription factor Sp1 | long sequence with adverbs + adjectives
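The comment column in these tables characterizes entities by surface shape: uppercase acronyms (ZEBRA, STAT), hyphenated sequences (IL-2R, c-fos), alphanumeric tokens, and Greek-letter words. As an illustration only (this is a hypothetical sketch, not code from this dissertation, and the feature names are invented), such orthographic cues can be computed per token:

```python
import re

def word_shape_features(token):
    """Orthographic cues of the kind the table comments describe.

    Illustrative sketch; the feature set and names are hypothetical,
    not the feature set used in the thesis."""
    return {
        "uppercase_acronym": bool(re.fullmatch(r"[A-Z]{2,}", token)),  # e.g. ZEBRA, STAT
        "has_hyphen": "-" in token,                                    # e.g. IL-2R, c-fos
        "alphanumeric": bool(re.search(r"[A-Za-z]", token)
                             and re.search(r"\d", token)),             # e.g. BL41, NK3.3
        "mixed_case": bool(re.search(r"[a-z]", token)
                           and re.search(r"[A-Z]", token)),            # e.g. BamHI
        "greek_letter_word": token.lower() in {"alpha", "beta", "gamma", "kappa"},
    }

# "IL-2R" is hyphenated and alphanumeric, but not an all-caps acronym.
print(word_shape_features("IL-2R"))
```

Features like these are typical inputs to token classifiers for biomedical NER, since the entity names themselves are too varied to enumerate.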


Table C.3 - Examples of DNA Named Entities

#  DNA Named Entity | Comments
1  uracil-DNA glycosylase (UDG) gene | full form + acronym in parentheses
2  IL-2 gene | IL-2 is protein entity #7
3  IL-2R alpha gene | one word difference from #2
4  IL-2-inducible enhancer | protein name + other words
5  peri-kappa B site | hyphenated sequence of words
6  kappa B sites | slight difference from #5
7  human immunodeficiency virus type 2 enhancer | sequence of words + number

8  enhancer/promoter region | sequence of words + slash
9  HIV-1 long terminal repeat | includes virus name HIV-1

10  HIV-2 enhancer | includes virus name HIV-2
11  HIV-2 enhancer element | one word difference from #10
12  cis-acting elements | hyphenated sequence of words
13  purine-rich binding sites | hyphenated sequence of words
14  PuB1 | mixed case acronym
15  elements | simple word that may appear anywhere
16  enhancer | one non-proper word
17  enhancer element | sequence of simple words
18  EIA gene | acronym + word
19  tk allele | sequence of lowercase abbreviations
20  erythroid cell-specific genes | hyphenated sequence of words
21  alpha- and beta-globin | hyphenated conjunction of words
22  artificial promoter | sequence of simple words
23  Fp promoter | mixed case acronym + word
24  Mouse interleukin-2 receptor alpha gene | sequence of words + protein entity #6
25  IL-2 receptor alpha (IL-2R alpha) gene | includes protein name + parentheses
26  mouse IL-2R alpha gene | includes protein name
27  promoter-proximal region | hyphenated sequence of words
28  human and mouse gene | sequence of simple words
29  78-nucleotide segment | hyphenated sequence of words + number
30  major transcription start site | sequence of words
31  EBV genome | acronym + word
32  linear EBV genome | sequence of words + acronym
33  BZLF-1 locus | hyphenated sequence of words + acronym
34  LMP-2A or EBER-1 loci | multiple acronyms and conjunction
35  human UDG promoter sequences | sequence of words + acronym
36  nucleosomal DNA | includes the word DNA
37  E-box motif | mixed case hyphenated acronym + word
38  multimerized copies | sequence of words
39  ZIP | uppercase acronym
40  ZIP site | includes DNA entity #39
41  T cell receptor alpha (TCR alpha) enhancer | full form + acronym + parentheses
42  chromatin | one word
43  chromatin-assembled DNA | includes DNA entity #42 + hyphen
44  transcribed chromatin domains | sequence of words
45  chromatin-integrated construct | shape similar to DNA entity #43


Table C.4 - Examples of RNA Named Entities

#  RNA Named Entity | Comments
1  IL-2R alpha mRNA | includes protein named entity
2  IL-2R alpha transcripts | includes protein name + other words
3  IFN-gamma and GM-CSF mRNAs | conjunction of hyphenated acronyms

4  IFN-gamma and GM-CSF cytoplasmic mRNA | one word difference from RNA entity #3

5  IFN-gamma cytoplasmic mRNA | hyphenated acronyms + words + abbrev.
6  mRNA | mRNA appearing separately
7  TNF mRNA | mixed acronym + RNA entity #6
8  mRNAs | RNA entity #6 in plural
9  GATA-1 mRNA | alphanumeric acronym + hyphen + mRNA

10  CD19 mRNA | alphanumeric acronym + mRNA
11  ER mRNA | protein entity #37 + RNA entity #6
12  beta-galactosidase mRNA | hyphenated sequence of words
13  c-fos mRNA | abbreviation + hyphen + mRNA
14  Germ line C transcripts | sequence of words including single letter

15  human Ah receptor (TCDD receptor) mRNA | full form + different acronym + parentheses, to be identified as one entity

16  full-length HIV transcripts | hyphenated sequence of words
17  nuclear RNAs
18  EBERs

Contiguous Entities + Parentheses Example: nuclear RNAs (EBERs)

19  chromosome 16 CBF beta-MYH11 fusion transcript | sequence of acronyms + words + numbers

20  fusion transcripts | sequence of simple words
21  E2F-1 and DP-1 mRNAs | conjunction of hyphenated acronyms
22  I kappa B alpha/MAD-3 mRNA | acronyms + hyphen + slash
23  monocytic I kappa B alpha/MAD-3 mRNA | one word followed by RNA entity #22
24  prostatic spermine binding protein mRNA | sequence of words + mRNA
25  1.3-kb transcript | decimal number + hyphen + abbreviation
26  junB mRNA | sequence of abbreviations
27  Id2 | mixed case alphanumeric acronym
28  Id2 mRNA | same as RNA entity #27 + mRNA
29  Id2 and Id3 mRNA | conjunction of entities similar to #28

30  tumor necrosis factor-alpha (TNF-alpha) messenger RNA | full form + acronym + parentheses + hyphen

31  c-jun mRNA | sequence of abbreviations + hyphen
32  hTR beta 1 mRNA | sequence of abbreviations + number
33  Human TR beta 1 mRNA | mixed words and abbreviations
34  beta 1 triodothyronine receptor mRNA | mixed words and abbreviations + number
35  VDR messenger RNA | acronym + words

36  vitamin D receptor messenger ribonucleic acid | sequence of words including a single-letter word

37  constitutive mRNA | adjective + mRNA

38  steady-state immunoglobulin heavy-chain mRNA | hyphenated sequence of words

39  unspliced viral gag-pol mRNA | mixed words and abbreviations + hyphen
40  c-fos, c-jun, EGR2, and PDGF (B) mRNA | conjunction of abbreviations + parentheses
41  intercellular adhesion molecule-1 | also appears as protein entity #54
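Several rows in these tables pair a full form with a parenthesized short form, such as nuclear RNAs (EBERs), large granular lymphocytes (LGL), and tumor necrosis factor-alpha (TNF-alpha) messenger RNA, and the two surface forms can also occur contiguously as separate entities. A minimal regex sketch (a hypothetical helper for illustration, not the method used in this thesis) that separates such a pair:

```python
import re

# Matches "full form (short form) optional trailing words", e.g.
# "nuclear RNAs (EBERs)" or "tumor necrosis factor-alpha (TNF-alpha) messenger RNA".
# Hypothetical pattern for illustration only.
PAREN_PAIR = re.compile(r"^(?P<full>.+?)\s*\((?P<short>[^()]+)\)\s*(?P<tail>.*)$")

def split_paren_pair(mention):
    """Return (full form, short form, trailing words), or None if no parentheses."""
    m = PAREN_PAIR.match(mention)
    return (m.group("full"), m.group("short"), m.group("tail")) if m else None

print(split_paren_pair("nuclear RNAs (EBERs)"))
# -> ('nuclear RNAs', 'EBERs', '')
```

Whether such a mention should be annotated as one entity or two is exactly the ambiguity the comment on RNA entity #15 points out ("to be identified as one entity").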


Table C.5 - Examples of Cell Type Named Entities

#  Cell Type Named Entity | Comments
1  primary T lymphocytes | lymphocytes is also a cell type
2  monocytes | single word
3  T cells | single-letter word + cells
4  peripheral blood monocytes | cell type #2 + other words
5  NK cells | two-letter word + cells
6  human cells | sequence of simple words
7  target cells | sequence of simple words
8  human NK cells | one word followed by cell type #5
9  persistently infected cells | sequence of words including adverb

10  bone marrow-derived erythroid progenitor cells | sequence of words + hyphen

11  erythrocytes | one word entity, may appear with others
12  Hematopoietic lineage | sequence of words
13  hematopoietic cell types | sequence of words
14  embryonic stem cells | sequence of words
15  erythroid, myeloid and lymphoid cell types | conjunction of cell types
16  human thymocytes | sequence of words
17  thymocytes | included in cell type #16
18  B cells | similar to cell type #3
19  latently infected B cells | sequence of words including single-letter
20  human peripheral blood lymphocytes | sequence of words + lymphocytes
21  sheep mammary gland tissue | sequence of words
22  large granular lymphocytes | adjectives + lymphocytes

23  LGL | uppercase acronym; Contiguous Entities + Parentheses Example: large granular lymphocytes (LGL)

24  hematopoietic and trophoblast cells
25  macrophages

26  monocytic cells | Example of Cell Type followed by Cell Line: monocytic cells (U937)

27  transformed and normal lymphocytes | conjunction of adjectives + cell type
28  normal T lymphocytes | adjective + cell type #29
29  T lymphocytes | single-letter word + cell type
30  PBL | uppercase acronym
31  preactivated PBL | adjective + cell type #30
32  resting PBL | adjective + cell type #30
33  HTLV-I-infected peripheral blood T cells | sequence of acronym + words + cell type
34  CD4 positive T cells | CD4 is a protein entity + cell type #3
35  CD4+ T cells | CD4+ is a protein entity + cell type #3
36  purified T cells | adjective + cell type #3
37  unstimulated cells | adjective + cells
38  stimulated T lymphocytes | adjective + cell type #29
39  activated lymphocytes | adjective + lymphocytes
40  B-lymphocyte cell | hyphenated sequence of words
41  quiescent B lymphocytes | adjective + cell type
42  quiescent and stimulated cells | conjunction of adjectives + cells


Table C.6 - Examples of Cell Line Named Entities

#  Cell Line Named Entity | Comments
1  primary human bone marrow cultures | sequence of words
2  monocytic | one word
3  T-cell lines | cell type + word
4  transfected target cells | sequence of words
5  Ad-infected targets | hyphenated abbreviation + words
6  E1A-immortalized cells | protein entity #13 + words
7  COS cells | uppercase acronym + cells
8  CD4-CD8-murine T lymphocyte precursors | sequence of protein names + cell types
9  thymic lymphoma-derived hybridoma

10  PC60

Contiguous Entities + Parentheses Example: thymic lymphoma-derived hybridoma (PC60)

11  transformed human lymphocyte line | sequence of words
12  YT | two-letter uppercase acronym
13  human osteosarcoma (Saos-2) cells | full word + abbreviation + parentheses
14  Saos 2 cells | abbreviation + number + cells
15  human NK cell line | sequence of words + acronym
16  natural killer (NK) cell line | adjectives + variation of cell line #15
17  cell line NK3.3 | alphanumeric acronym + words
18  NK3.3 cells | different variation of cell line #17
19  NIH 3T3 cells | sequence of acronyms + numbers + cells
20  Jurkat mutants | proper name + word

21  unstimulated and 12-O-tetradecanoylphorbol 13-acetate/phytohemagglutinin-stimulated human Jurkat T cells | very long sequence of words and acronyms including numbers, hyphens, and slash

22  primed cells | adjective + cells
23  unprimed cells | adjective + cells
24  Friend virus-induced erythroblasts | hyphenated sequence of words
25  highly purified CD34+ cells | adjectives + protein name + cells

26  myeloid precursor 32Dc13 cell line | sequence of words + alphanumeric acronym

27  U937 cells | alphanumeric acronym + cells
28  IFN-gamma treated U937 cells | includes cell line #27
29  Jurkat T cells | proper name + cell type #3
30  macrophage cell line
31  U937

Example: macrophage cell line U937

32  various cell lines | sequence of simple words
33  MHC class II deficient B-cell lines | acronyms + Roman numeral + cell type

34  class II non-inducible RB-defective tumor lines | hyphenated sequence of words + acronym + Roman numeral

35  EBV-transformed B-cells, | sequence ending in a comma (included)
36  class II non-inducible, RB-defective lines | includes comma, similar to cell line #34
37  RB-defective cells | hyphenated acronym + words
38  human monocytic cell line THP-1 | sequence of words + hyphenated acronym
39  CD4 positive T cell lines | protein name + adjective + cell type
40  Epstein-Barr virus negative BL41 cells | proper name + acronym + number + words
41  stimulated IL-2 secreting cells | adjective + protein entity #7 + words