Statistical Learning in Multiple Instance Problems



Statistical Learning

in Multiple Instance Problems

Xin Xu

A thesis submitted in partial fulfilment of

the requirements for the degree of

Master of Science

at the

University of Waikato

Department of Computer Science

Hamilton, New Zealand

June 2003

© 2003 Xin Xu


Abstract

Multiple instance (MI) learning is a relatively new topic in machine learning. It is concerned with supervised learning but differs from normal supervised learning in two respects: (1) each example contains multiple instances (whereas in standard supervised learning an example consists of a single instance), and (2) only one class label is observable for all the instances in an example (whereas each instance has its own class label in normal supervised learning). In MI learning there is a common assumption regarding the relationship between the class label of an example and the “unobservable” class labels of the instances inside it. This assumption, which is called the “MI assumption” in this thesis, states that “an example is positive if at least one of its instances is positive, and negative otherwise”.

In this thesis, we first categorize current MI methods into a new framework. According to our analysis, there are two main categories of MI methods: instance-based and metadata-based approaches. Then we propose a new assumption for MI learning, called the “collective assumption”. Although this assumption has been used in some previous MI methods, it has never been explicitly stated,1 and this is the first time that it is formally specified. Using this new assumption we develop new algorithms, more specifically two instance-based methods and one metadata-based method. All of these methods build probabilistic models and thus implement statistical learning algorithms. The exact generative models underlying these methods are explicitly stated and illustrated so that one may clearly understand the situations to which they can best be applied. The empirical results presented in this thesis show that they are competitive on standard benchmark datasets. Finally, we explore some practical applications of MI learning, both existing and new ones.

1 As a matter of fact, for some of these methods, it is actually claimed that they use the standard MI assumption stated above.

This thesis makes three contributions: a new framework for MI learning, new MI methods based on this framework, and experimental results for new applications of MI learning.


Acknowledgements

There are a number of people I want to thank for their help with my thesis.

First of all, my supervisor, Dr. Eibe Frank. I do not really know what I can say to express my gratitude. He contributed virtually all the ideas involved in this thesis. More importantly, he always supported my work. When I was puzzled by a derivation or analysis of an algorithm, he was always the one to help me sort it out. His reviews of this thesis were always so detailed that any kind of mistake could be spotted, from logic mistakes to grammar errors. He even helped me with practical work and tools like LaTeX, gnuplot, shell scripts and so on. He provided me with much more than a supervisor usually does. This thesis is dedicated to my supervisor, Dr. Eibe Frank.

Secondly, I would like to thank my project-mate, Nils Weidmann, a lot. I feel so lucky to have worked in the same research area as Nils. He shared many great ideas with me and provided me with much of his work, including datasets, which made my job much more convenient (and made me lazier). In fact, Chapter 6 is the result of joint work with Nils Weidmann. He constructed the Mutagenesis datasets using the MI setting and in an ARFF file format [Witten and Frank, 1999] that allowed me to easily apply the MI methods developed in this thesis to them. He also kindly provided me with the photo files from the online photo library www.photonewzealand.com and the resulting MI datasets. Some of the experimental results in Chapter 6 are taken from his work on MI learning [Weidmann, 2003]. But he helped me much more than that. When I met any difficulties during the write-up of my thesis, Nils was usually the first one I asked for help.

Thirdly, many thanks to Dr. Yong Wang. He was the one who first introduced me to “empirical Bayes” methods. He also spent much precious time selflessly sharing his statistical knowledge and ideas with me, which inspired and benefited my study and research a lot. I am really grateful for this great help.

As a matter of fact, the Machine Learning (ML) family at the Computer Science Department here at the University of Waikato provided me with a superb environment for study and research. More precisely, I am grateful to our group leader Prof. Geoff Holmes, Prof. Ian Witten, Dr. Bernhard Pfahringer, Dr. Mark Hall, Gabi Schmidberger, and Richard Kirkby (especially for helping me survive with WEKA). When I kicked off my project, I tried to name everybody working in the ML lab; at that moment we had only three people in the lab: Gabi, Richard and myself. Now there are so many people in the lab that I have eventually given up this attempt. In any case, I would like to thank everyone in the ML lab for her/his help or concern regarding my work.

Help also came from outside the Computer Science Department. More specifically, I am thankful to Dr. Bill Bolstard and Dr. Ray Littler of the Statistics Department. Apart from the lectures they provided to me,2 they also kindly answered lots of my (perhaps stupid) questions about Statistics.

As for the experiment and development environment used in this thesis, I heavily relied on the WEKA workbench [Witten and Frank, 1999] to build the MI package. My work would have been impossible without WEKA. As for the datasets, I would like to thank Dr. Alistair Mowat for providing the kiwifruit datasets. I also want to acknowledge the online photo gallery www.photonewzealand.com for their (indirect) provision of a photo dataset through Nils.

Finally, thanks to my family for their consistent support of my project and research.

2 I always regarded the comments from Ray in the weekly ML discussions as lectures to me.


In fact, their question “haven’t you finished your thesis yet?” has always been my motivation to push the thesis forward.

My research was funded by a postgraduate study award as part of Marsden Grant 01-UOW-019 from the Royal Society of New Zealand, and I am very grateful for this support.


Contents

Abstract i

Acknowledgements iii

List of Figures xii

List of Tables xiii

1 Introduction 1
1.1 Some Example Problems . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Motivation and Objectives . . . . . . . . . . . . . . . . . . . . . 4
1.3 Structure of this Thesis . . . . . . . . . . . . . . . . . . . . . . 5

2 Background 7
2.1 Multiple Instance Problems . . . . . . . . . . . . . . . . . . . . . 7
2.2 Current Solutions . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2.1 1997-2000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2.2 2000-Now . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.3 A New Framework . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.4 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.5 Some Notation and Terminology . . . . . . . . . . . . . . . . . . . 24

3 A Heuristic Solution for Multiple Instance Problems 25
3.1 Assumptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.2 An Artificial Example Domain . . . . . . . . . . . . . . . . . . . . 31
3.3 The Wrapper Heuristic . . . . . . . . . . . . . . . . . . . . . . . 34
3.4 Interpretation . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

4 Upgrading Single-instance Learners 41
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.2 The Underlying Generative Model . . . . . . . . . . . . . . . . . . 43
4.3 An Assumption-based Upgrade . . . . . . . . . . . . . . . . . . . . 47
4.4 Property Analysis and Regularization Techniques . . . . . . . . . . 54
4.5 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . 57
4.6 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.7 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

5 Learning with Two-Level Distributions 65
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
5.2 The TLD Approach . . . . . . . . . . . . . . . . . . . . . . . . . . 67
5.3 The Underlying Generative Model . . . . . . . . . . . . . . . . . . 72
5.4 Relationship to Single-instance Learners . . . . . . . . . . . . . . 75
5.5 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . 79
5.6 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
5.7 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81

6 Applications and Experiments 83
6.1 Drug Activity Prediction . . . . . . . . . . . . . . . . . . . . . . 83
6.1.1 The Musk Prediction Problem . . . . . . . . . . . . . . . . . . . 84
6.1.2 The Mutagenicity Prediction Problem . . . . . . . . . . . . . . . 85
6.2 Fruit Disease Prediction . . . . . . . . . . . . . . . . . . . . . . 90
6.3 Image Categorization . . . . . . . . . . . . . . . . . . . . . . . . 93
6.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97

7 Algorithmic Details 99
7.1 Numeric Optimization . . . . . . . . . . . . . . . . . . . . . . . . 99
7.2 Artificial Data Generation and Analysis . . . . . . . . . . . . . . 101
7.3 Feature Selection in the Musk Problems . . . . . . . . . . . . . . . 103
7.4 Algorithmic Details of TLD . . . . . . . . . . . . . . . . . . . . . 106
7.5 Algorithmic Analysis of DD . . . . . . . . . . . . . . . . . . . . . 111

8 Conclusions and Future Work 119
8.1 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
8.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121

Appendices

A Java Classes for MI Learning 127
A.1 The “MI” Package . . . . . . . . . . . . . . . . . . . . . . . . . . 127
A.2 The “MI.data” Package . . . . . . . . . . . . . . . . . . . . . . . 129
A.3 The “MI.visualize” Package . . . . . . . . . . . . . . . . . . . . . 130

B weka.core.Optimization 131

C Fun with Integrals 139
C.1 Integration in TLD . . . . . . . . . . . . . . . . . . . . . . . . . 139
C.2 Integration in TLDSimple . . . . . . . . . . . . . . . . . . . . . . 141

D Comments on EM-DD 143
D.1 The Log-likelihood Function . . . . . . . . . . . . . . . . . . . . 144
D.2 The EM-DD Algorithm . . . . . . . . . . . . . . . . . . . . . . . . 148
D.3 Theoretical Considerations . . . . . . . . . . . . . . . . . . . . . 150

Bibliography 154


List of Figures

2.1 Data generation for single-instance and multiple-instance learning. . . . . 8
2.2 A framework for MI learning. . . . . . . . . . . . . . . . . . . . . . . . 20
3.1 An artificial dataset with 20 bags. . . . . . . . . . . . . . . . . . . . . 31
3.2 Parameter estimation of the wrapper method. . . . . . . . . . . . . . . . . 36
3.3 Test errors on artificial data of the MI wrapper method. . . . . . . . . . 36
4.1 An artificial dataset with 20 bags. . . . . . . . . . . . . . . . . . . . . 45
4.2 Parameter estimation of MILogisticRegressionGEOM on the artificial data. . 54
4.3 Test error of MILogisticRegressionGEOM and the MI AdaBoost algorithm on the artificial data. . . . 54
4.4 Error of the MI AdaBoost algorithm on the Musk1 data. . . . . . . . . . . 55
5.1 An artificial simplified TLD dataset with 20 bags. . . . . . . . . . . . . 74
5.2 Estimated parameters using the TLDSimple method. . . . . . . . . . . . . . 74
5.3 Estimated parameters using the TLD method. . . . . . . . . . . . . . . . . 75
5.4 An illustration of the rationale for the empirical cut-point technique. . . 77
6.1 Accuracies achieved by MI algorithms on the Musk datasets. . . . . . . . . 84
6.2 A positive photo example for the concept of “mountains and blue sky”. . . . 94
6.3 A negative photo example for the concept of “mountains and blue sky”. . . . 94
7.1 Sampling from a normalized part-triangle distribution. . . . . . . . . . . 102
7.2 Relative feature importance in the Musk datasets. . . . . . . . . . . . . . 105
7.3 Log-likelihood function expressed via parameters a and b. . . . . . . . . . 108
A.1 A schematic description of the MI package. . . . . . . . . . . . . . . . . 128
D.1 A possible component function in one dimension. . . . . . . . . . . . . . . 145
D.2 An illustrative example of the log-likelihood function in DD using the most-likely-cause model. . . . 145


List of Tables

3.1 Performance of the wrapper method on the Musk datasets. . . . . . . . . . 34
4.1 The upgraded MI AdaBoost algorithm. . . . . . . . . . . . . . . . . . . . 50
4.2 Properties of the Musk 1 and Musk 2 datasets. . . . . . . . . . . . . . . 58
4.3 Performance of some instance-based MI methods on the Musk datasets. . . . 59
5.1 Performance of different versions of naive Bayes on some two-class datasets. . . . 78
5.2 Performance of some metadata-based MI methods on the Musk datasets. . . . 79
6.1 Properties of the Mutagenesis datasets. . . . . . . . . . . . . . . . . . 86
6.2 Error rate estimates for the Mutagenesis datasets and standard deviations (if available). . . . 87
6.3 Properties of the Kiwifruit datasets. . . . . . . . . . . . . . . . . . . 90
6.4 Accuracy estimates for the Kiwifruit datasets and standard deviations. . . 91
6.5 Properties of the Photo dataset. . . . . . . . . . . . . . . . . . . . . . 94
6.6 Accuracy estimates for the Photo dataset and standard deviations. . . . . 95


Chapter 1

Introduction

Multiple instance (MI) learning has been a popular research topic in machine learning since it first appeared seven years ago in the pioneering work of Dietterich et al. [Dietterich, Lathrop and Lozano-Perez, 1997]. One of the reasons that it attracts so much research is perhaps the exotic conceptual setting that it presents. Most of machine learning follows the general rationale of “learning by examples” and MI learning is no exception. But unlike the single-instance learning problem, which describes an example using one instance, the MI problem describes an example with multiple instances. However, there is still only one class label for each example. At this stage we try to avoid any special notation or terminology and simply give a rough idea of the MI learning problem using real-world examples. Whilst some practical applications of MI learning will be discussed in detail in Chapter 6, we briefly describe them here to give some flavor of how MI problems were identified.

1.1 Some Example Problems

The need for multiple-instance learning arises naturally in several practical learning problems, for instance, the drug activity prediction and fruit disease prediction problems.


The drug activity prediction problem was first described in [Dietterich et al., 1997]. The potency of a drug is determined by the degree to which its molecules bind to a larger, target molecule. It is believed that the binding strength of a drug molecule is largely determined by its shape (or conformation). Unfortunately one molecule can have multiple shapes, obtained by rotating some of its internal bonds, and it is generally unknown which shape(s) determine the binding. However, the potency of a specific molecule, which is either active or inactive, can be observed directly based on past experience. One can recognize this as a good instance of MI learning: each molecule is an example, with each of its shapes as one instance inside it. Thus we have to use multiple instances to represent an example. Dietterich et al. believed that an additional assumption is intuitively reasonable in this problem, namely that if one of the (sampled) shapes of a molecule is active, then the whole molecule is active, and otherwise it is inactive. We call this assumption the standard “MI assumption”. Dietterich et al. also provided two datasets associated with the drug activity prediction problem, namely the Musk datasets. The two datasets describe different musk molecules, with some molecules overlapping between them. These two datasets are the only publicly available real-world MI datasets up to now and are the standard benchmark datasets in the MI domain. We use these datasets throughout this thesis.

The fruit disease prediction problem is another natural case for MI learning. When fruits are collected from different orchards, they are normally stored in the warehouse in batches, one batch per orchard. After some time (say, 3 to 5 months, which is the usual transportation duration), some disease symptoms are found in some batches. Since the disease may be epidemic, often the whole batch of fruit is infected and exhibits similar symptoms. It would be good to predict which batch is disease-prone before shipment, based on some non-destructive measures taken for each fruit as soon as it is collected from the orchard. The task is to predict, given a new batch of fruit, whether it will be affected by a certain disease or not (some months later). This is another obvious MI problem: each batch is an example, with every fruit inside it as an instance. In the training data, a batch is labeled disease-prone (or positive) if the symptoms are observable after the shipment, and otherwise disease-free (or negative). Here one may also think the standard MI assumption can fit, because the disease may originate in only one or a few fruits in a batch. However, even if some fruits indeed exhibit some minor symptoms, the majority of fruits in a batch may be resistant to the disease, rendering the symptoms for the whole batch negligible. Hence the batch can be negative even if some fruits are affected to a small degree.

There are also some real-world problems that are not apparently MI problems. However, with some proper manipulation one can model them as MI problems and generate MI datasets for them. The content-based image processing task is one of the most popular ones [Maron, 1998; Maron and Lozano-Perez, 1998; Zhang, Goldman, Yu and Fritts, 2002], while the stock market prediction task [Maron, 1998; Maron and Lozano-Perez, 1998] and the computer intrusion prediction task [Ruffo, 2001] are also among them.

The key to modeling content-based image processing as an MI problem is that only some parts of an image account for the key words that describe it. Therefore one can view each image as an example and small segments of that image as instances. Some measures are taken to describe the pixels of an image. Since there are various ways of fragmenting an image and at least two ways to measure the pixels (using RGB values or using luminance-chrominance values), there can be many configurations of the instances. The class label of an image indicates whether its content is about a certain concept, and the task is to predict the class label given a new image. When the title of an image is a single simple concept like “sunset” or “waterfall”, the MI assumption is believed to be appropriate because only a small part of a positive image really accounts for its title, while no part of a negative image can account for the title. However, for more complicated concepts, it may be necessary to drop this assumption [Weidmann, 2003]. In this thesis, we consider this application and the others in more detail in Chapter 6.


1.2 Motivation and Objectives

During these years, significant research efforts have been put into MI learning and

many approaches have been proposed to tackle MI problems. Although some very

good results have been reported on the Musk datasets [Dietterich et al., 1997], the

data-generating mechanisms in MI problems remain unclear,even for the well-

studied Musk datasets. Thus the MI problem itself may need more attention. More-

over, the reason why some methods perform well and the assumptions they are

based on are usually not explicitly explained,1 which leads to some confusion. Due

to the scarcity of MI data, convincing tests and comparisonsbetween MI algorithms

are not possible. Hence it is not clear what kinds of MI problems a particular method

can deal with. Finally, the relationship between differentmethods has not been

studied enough. Recognizing the essence and the connectionbetween the various

methods can sometimes inspire new solutions with well-founded theoretic justifica-

tions. Therefore we believe that the study of MI learning still requires stronger and

clearer theoretic interpretation, as well as more practical and artificial datasets for

the purpose of evaluation.

Therefore, in order to capture the big picture of MI learning, we set up the following three objectives:

1. To establish a general framework for MI methods.

2. To create some new MI methods within this framework that are strongly justified and make their assumptions explicit.

3. To perform experiments on a collection of real-world datasets.

We aimed to achieve these objectives to the extent we could. Although some objectives may be too big to fully accomplish, it is hoped that this thesis improves the interpretation and understanding of MI problems.

1 We also noticed that some methods actually explicitly expressed assumptions that they never used.


1.3 Structure of this Thesis

The rest of the thesis is organized into seven chapters, as follows.

Chapter 2 provides some detailed background on MI problems, the standard MI assumption and published solutions. We also introduce our perspective on the problem and a general framework to analyze current MI methods and create new solutions.

Chapter 3 presents a new MI assumption and a generative model according to the framework introduced in Chapter 2. Then we propose a heuristic MI learning method that wraps around normal single-instance methods. We also show that it is suitable for practical classification problems.

In Chapter 4, we put forward some more specific generative models. Under these models we formally derive a way to upgrade single-instance learners to deal with MI data. The rationale for the upgraded algorithms is analogous to that of the corresponding single-instance learners but based on some MI assumptions. As an example, we upgrade two popular single-instance learning methods, linear logistic regression and AdaBoost [Freund and Schapire, 1996], to tackle MI problems.

In Chapter 5, we develop a two-level distribution approach to tackle MI problems. Again we explicitly state the underlying generative model this approach assumes. It turns out that this method, in one of its simplest forms, can be regarded as an approximate upgrade of the naive Bayes method [John and Langley, 1995] to the MI setting.

The methods developed in Chapters 3, 4 and 5 are all incorporated in the solution framework introduced in Chapter 2.

Chapter 6 elaborates on the three applications mentioned above and the corresponding datasets. It also presents experimental results of the new methods on these datasets.


Chapter 7 describes the implementation of all the algorithms in this thesis in more detail, as well as the process that was used for generating the artificial data used throughout this thesis.

Chapter 8 gives a summary and briefly describes future work. This concludes the thesis.

Some further implementation details are given in the Appendices. These details are referenced whenever necessary.


Chapter 2

Background

2.1 Multiple Instance Problems

Multiple instance problems first arose in machine learning in a supervised learning context [Dietterich et al., 1997]. Thus we briefly review supervised learning first; this extends naturally to multiple instance learning.

Supervised learning, more specifically classification, involves “a learning scheme that takes a set of classified examples from which it is expected to learn a way of classifying unseen examples” [Witten and Frank, 1999]. Typically supervised learning has a training process that takes some examples described by a pre-defined set of attributes (or a feature vector), and class labels (or responses), one for each example. Attributes can be nominal, ordinal, interval or ratio [Witten and Frank, 1999]. The task of training is to infer a relationship, normally represented as a function, from the attributes to the class labels. If the class labels are nominal values, the task is called “classification”; it is called “regression” if the “class” is numeric. Here we are only concerned with the classification task. In this thesis, we only consider two-class classification problems. Nonetheless, it is straightforward to apply the methods to multi-class problems, e.g. using error-correcting output codes [Dietterich and Bakiri, 1995].

MI learning basically adopts the same setting as the single-instance supervised learning described above. It still has examples with attributes and class labels; one example still has only one class label; and the task is still the inference of the relationship between attributes and class labels. The only difference is that each example is represented by more than one feature vector. If we regard one feature vector as an instance of an example, normal supervised learning has only one instance per example while MI learning has multiple instances per example, hence the name “multiple instance learning” problem.

Figure 2.1: Data generation for (a) single-instance learning and (b) multiple-instance learning [Dietterich et al., 1997].
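The representation described above can be sketched as a small data structure: a bag (example) holds several feature vectors but carries a single class label. This is an illustrative sketch only; the names (Bag, instances) are hypothetical and do not reflect the actual classes of the MI package described in Appendix A.

```java
import java.util.List;

// Hypothetical sketch of the MI representation: one example ("bag")
// contains several instances (feature vectors) but one class label.
record Bag(List<double[]> instances, boolean positive) {}

public class MIDataExample {
    public static void main(String[] args) {
        // A positive bag with three instances, each described by two attributes.
        Bag bag = new Bag(
                List.of(new double[]{0.2, 1.5},
                        new double[]{0.9, 0.3},
                        new double[]{0.5, 0.7}),
                true);
        // In single-instance learning this list would always have size one.
        System.out.println("instances per bag: " + bag.instances().size());
    }
}
```

The only structural change relative to single-instance learning is that the example's feature description becomes a list; the label stays scalar.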

The difference can best be depicted graphically, as shown in Figure 2.1 [Dietterich et al., 1997]. In this figure, the “Object” is an example described by some attributes, the “Result” is the class label and the “Unknown Process” is the relationship. Figure 2.1(a) depicts the case of normal supervised learning, while in 2.1(b) there are multiple instances in an example. We interpret the “Unknown Process” in Figure 2.1(b) as different from that in Figure 2.1(a) because the input is different. Note that the dashed and solid arrows representing the input of the process in (b) imply that only some of the input instances may be useful. Therefore, while the “Unknown Process” in (a) is simply a classification problem, the “Unknown Process” in (b) is commonly viewed as a two-step process, with a first step consisting of a classification problem and a second step that is a selection process based on the first step and some assumptions.1 However, as we shall see, there are methods that do not even consider the instance selection step and directly infer the output from the interactions between the input instances. But the assumption of a selection process has played such an influential role in the MI learning domain that virtually every paper in this area quotes it. We refer to this standard MI assumption simply as the “MI assumption” throughout this thesis, for brevity. Let us briefly review what this assumption entails.

Within a two-class context, with class labels {positive, negative}, the “MI assumption” states that an example is positive if at least one of its instances is positive, and negative if all of its instances are negative. Note that this assumption is based on instance-level class labels; thus the “Unknown Process” in Figure 2.1(b) has to consist of two steps: the first step provides the instances’ class labels and the MI assumption is applied in the second step. However, we noticed that several MI methods do not actually follow the MI assumption (even though it is generally mentioned), and do not explicitly state which assumptions they use. As a matter of fact, we believe that this assumption may not be so essential for making accurate predictions. What matters is the combination of the model from the first step and the assumption used in the second step. Given a certain type of model for classification, it may be appropriate to explicitly drop the MI assumption and establish other assumptions in the second step. We will discuss this in Section 2.4.
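Under the MI assumption, the second (selection) step reduces to a disjunction over the instance-level labels produced by the first step: the bag is positive exactly when some instance-level classifier fires. The following sketch makes this explicit; the threshold classifier is a hypothetical stand-in for whatever model the first step actually produces.

```java
import java.util.List;
import java.util.function.Predicate;

public class MIAssumption {
    // Standard MI assumption: a bag is positive if at least one of its
    // instances is classified positive, and negative otherwise.
    static boolean bagLabel(List<double[]> bag, Predicate<double[]> instancePositive) {
        return bag.stream().anyMatch(instancePositive);
    }

    public static void main(String[] args) {
        // Hypothetical instance-level classifier (the "first step"):
        // an instance is positive if its first attribute exceeds 0.5.
        Predicate<double[]> clf = x -> x[0] > 0.5;

        List<double[]> someInstancePositive = List.of(new double[]{0.1}, new double[]{0.8});
        List<double[]> allInstancesNegative = List.of(new double[]{0.1}, new double[]{0.4});

        System.out.println(bagLabel(someInstancePositive, clf)); // prints "true"
        System.out.println(bagLabel(allInstancesNegative, clf)); // prints "false"
    }
}
```

Replacing bagLabel with a different aggregation of the instance-level outputs (for instance, averaging probabilities rather than taking a disjunction) is precisely what it means to drop the MI assumption in the second step.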

The two-step paradigm itself is only one possibility for modeling the MI problem. We noticed that, in general, when extending the number of instances in an example from one to many, we may have a potentially very large number of possibilities for modeling the relationship between the set of instances and their class label. For the instances within an example, there may be ambiguity, redundancy, interactions and many more properties to exploit. The two-step model and the MI assumption may be suitable for exploiting ambiguity [Maron, 1998], but if we are interested in other properties of the examples, other models and assumptions may be more convenient and appropriate. In practice, we may need strong background knowledge to choose the right way to model the problem a priori. However, in most cases we actually lack such knowledge. Consequently it is necessary to try a variety of models or assumptions in order to come up with an accurate representation of the data.

1 We may like to call the selection process a “parametric instance selection” process because it is based on the model built in the classification process.

2.2 Current Solutions

As mentioned above, there are a variety of methods that have been developed to tackle MI problems. Almost all of them are special-purpose algorithms that are either created for the MI setting or upgraded from normal single-instance learners. Unfortunately some of them do not provide sufficient explanation of the underlying assumptions or generative models. Hence the working mechanisms of the methods remain unclear. The algorithms are discussed in chronological order in the following section and comments are provided wherever possible. We discuss the algorithms developed before 2000 and those after 2000 separately. We do so because we observed that the post-2000 methods are based on different assumptions than the pre-2000 methods.

2.2.1 1997-2000

The first MI algorithm stems from the pioneering paper by Dietterich et al. [1997], which also introduced the aforementioned Musk datasets. The APR algorithms [Dietterich et al., 1997] modeled the MI problem as a two-step process: a classification process that is applied to every instance and then a selection process based on the MI assumption. A single Axis-Parallel hyper-Rectangle (APR) is used as the


pattern to be found in the classification process. As a parametric approach,2 the objective of these methods is to find the parameters that, together with the MI assumption, can best explain the class labels of all the examples in the training data. In other words, they look for the parameters involved in the first step according to the observed class labels formed in the second step and the assumed mechanism between the two steps. There are standard algorithms that can build an APR (i.e. a single if-then rule). Unfortunately they do not take the MI assumption into account. Thus a special-purpose algorithm is needed. Several basic APR algorithms were proposed [Dietterich et al., 1997], but interestingly the best method for the Musk data was not among them. The best APR algorithm for the Musk data consists of an iterative process with two steps: the first step expands an APR from a positive “seed” instance using a back-fitting algorithm, and the second step selects useful features greedily based on some specifically designed measures. It turned out that the APR that best explains the training data does not generalize very well. Hence the objective was changed a little to produce the lowest generalization error, and kernel density estimation (KDE) was used to refine some of the parameters — the boundaries of the hyper-rectangle. These tuning steps resulted in an algorithm called “iterated-discrim APR”, which gave good results on the Musk datasets.
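The “APR plus MI assumption” combination described above can be sketched in a few lines. This is not the iterated-discrim algorithm itself, only a minimal illustration (with made-up bounds) of how a fixed APR classifies a bag under the MI assumption:

```python
# Sketch of bag classification with an axis-parallel rectangle (APR)
# under the MI assumption: a bag is positive iff at least one of its
# instances lies inside the rectangle. The bounds below are illustrative.

def inside_apr(instance, lower, upper):
    """True if the instance is within the rectangle on every feature."""
    return all(lo <= x <= hi for x, lo, hi in zip(instance, lower, upper))

def classify_bag(bag, lower, upper):
    """MI assumption: positive (1) if any instance is inside the APR."""
    return int(any(inside_apr(inst, lower, upper) for inst in bag))

lower, upper = [0.0, 0.0], [1.0, 1.0]
positive_bag = [[2.0, 2.0], [0.5, 0.5]]   # one instance inside the APR
negative_bag = [[2.0, 2.0], [3.0, -1.0]]  # no instance inside the APR
```

Learning then amounts to searching for `lower` and `upper` (and a feature subset) so that this rule reproduces the observed bag labels.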

It is interesting that Dietterich et al. based all the algorithms on the “APR pattern and the MI assumption” combination and never considered dropping either of them even when difficulties were encountered. This might be due to their background knowledge that made them believe an APR and the MI assumption are appropriate for the Musk data. However, using KDE may have already introduced a bias into the parameter estimates, which indicates that it might be more appropriate to model the Musk datasets in a different way.

2 In this case, the parameters to be found are the useful features and the bounds of the hyper-rectangle along these features.

In spite of the above observation, the “APR and the MI assumption” combination dominated the early stage of MI learning. Researchers from computational learning theory played an active role in MI learning before 2000. Some PAC algorithms were developed to look for the APR that Dietterich et al. defined [Long and Tan, 1998; Blum and Kalai, 1998]. They also proved that finding such an APR

under the MI assumption is actually an NP-complete problem. Some more practical MI algorithms were also developed in this domain, such as MULTINST [Auer, 1997]. PAC learning theory tends to view classification as a deterministic process. Instances with class labels that violate the assumed concept are usually regarded as “noisy”. Indeed, coming up with the idea of varying noise rates for the examples and assuming a specific distribution of the number of instances per example, Auer [1997] ended up estimating the examples’ misclassification errors and looking for an APR to minimize this estimated error. Note that since MULTINST tries to calculate the expected number of instances per bag that fall into the hypothesis for the positive class (in this case an APR), it adheres closely to the MI assumption.

Another way to view the classification problem is from a probabilistic point of view, which is prevalently adopted by statisticians and researchers in the statistical learning domain [Hastie, Tibshirani and Friedman, 2001; Vapnik, 2000; McLachlan, 1992; Devroye, Gyorfi and Lugosi, 1996]. Normally one thinks of a joint distribution over the feature vector variable X and the class variable Y. By calculating the marginal probability of Y conditional on X, Pr(Y|X), we can predict the probability of the class label of a test feature vector. According to statistical decision theory, one should make the prediction based on whether the probability is over 0.5 in a two-class case.

But this is in single-instance supervised learning. In the MI domain, the first (and so far the only published) probabilistic model is the Diverse Density (DD) model proposed by Maron [Maron, 1998; Maron and Lozano-Perez, 1998]. As will be described in more detail in Chapter 4 and Chapter 7, DD was also heavily influenced by the “APR and the MI assumption” combination and the two-step process. The DD method actually used the maximum binomial log-likelihood method, a statistical paradigm used by many normal single-instance learners like logistic regression, to search for the parameters in the first step. In particular, to model an APR in the first step, it used a radial form or a “Gaussian-like” form to model Pr(Y|X).


Because of the radial form, the pattern (or decision boundary) of the classification step is an Axis-Parallel hyper-Ellipse (APE) instead of a hyper-rectangle.3 In the second step, where we assume that we have already obtained each instance’s class probability, we still need ways to model the process that decides the class label of an example.4 There were two ways in DD, namely the noisy-or model and the most-likely-cause model, that are both probabilistic ways to model the MI assumption.

In the single-instance case, the process to decide the class label of each example (i.e. each instance) can be regarded as a one-stage Bernoulli process. Now, since we have multiple instances in an example, it seems natural to extend it to a multiple-stage Bernoulli process, with each instance’s (latent) class label determined by its class probability in one stage.5 And this is exactly the “noisy-or” generative model in DD. As we shall see, according to [Maritz and Lwin, 1989], a very similar way of modeling was adopted by some statisticians as early as 1943 [von Mises, 1943].

However we can also model the process as a one-stage Bernoulli process if we assume some way to “squeeze” the multiple probabilities (one per instance) into one probability. The most-likely-cause model in DD is of this kind. Either way, we can form a binomial log-likelihood function, either multi-stage or one-stage. The noisy-or model computes the probability of seeing all the stages negative and the complement, the probability of seeing at least one stage positive. The most-likely-cause model picks only one instance’s probability in an example to form the binomial log-likelihood. It selects the instance within an example that has the highest probability of being positive. Both processes to generate a bag’s class label are model-based6 and are compatible with the MI assumption. By maximizing the log-likelihood function we can find the parameters involved in the radial formulation of Pr(Y|X). This is how DD “recovers” the instance-level class probability, Pr(Y|X). With the virtues of the maximum likelihood (ML) method, namely its

consistency and efficiency, one can usually assure the correctness of the solution if

3 Note that the function for a rectangle is not differentiable whereas that of an ellipse is. But in the sense of classification decision making, they are very similar.

4 The process to decide an example’s class label was called the “generative model” in DD.

5 Note that the normal binomial distribution formula C(n, r) p^r (1 - p)^(n - r) does not apply here because in this Bernoulli process, the probability p changes from stage to stage.

6 The processes are based on the radial formulation of Pr(Y|X).


the underlying assumptions and generative model (described in [Maron, 1998]) are true. In the testing phase, the estimated parameters and the same Bernoulli process used in training provide a class probability for a test example. Once again we decide its class label using the probability threshold of 0.5.
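The two generative models can be sketched as follows, assuming the instance-level probabilities Pr(Y = 1 | x) have already been computed by the first (classification) step:

```python
# The two DD generative models, sketched on precomputed instance-level
# probabilities p_i = Pr(Y=1 | x_i) for one bag.

def noisy_or(probs):
    """Multi-stage Bernoulli: Pr(bag positive) = 1 - prod_i (1 - p_i),
    i.e. the probability that at least one stage comes out positive."""
    prod = 1.0
    for p in probs:
        prod *= (1.0 - p)
    return 1.0 - prod

def most_likely_cause(probs):
    """One-stage Bernoulli: the bag probability is that of the single
    instance most likely to be positive."""
    return max(probs)

probs = [0.1, 0.6, 0.2]
# noisy_or(probs) = 1 - 0.9 * 0.4 * 0.8 = 0.712
# most_likely_cause(probs) = 0.6
```

Either bag-level probability is then plugged into a binomial log-likelihood over all training bags and maximized with respect to the parameters of the radial form; the max(·) in the second model is what makes that likelihood non-differentiable.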

Within the most-likely-cause model, the log-likelihood function involves the max(·) function and becomes non-differentiable, making it hard to maximize. That is why EM-DD was proposed [Zhang and Goldman, 2002] some years later. EM-DD uses the EM algorithm [Dempster, Laird and Rubin, 1977] to overcome the non-differentiability in the optimization of the log-likelihood function. Thus in methodology, EM-DD is simply DD.7 Nonetheless, it can be shown (see Appendix D) by some theoretical analysis and an illustrative counter-example that such an attempt is generally a failure. The theoretical proof of the convergence of EM-DD is also problematic. As a result, EM-DD will not generally find a maximum likelihood estimate (MLE) of the parameters. Since DD is an ML method, EM-DD’s solution will not be a correct one if it cannot find the MLE. EM-DD was found to produce a very good result on the Musk data but this was due to a flawed evaluation involving intensive parameter tuning on the test data. Not surprisingly, the EM-DD algorithm did not work better than DD on the content-based image retrieval task [Zhang, Goldman, Yu and Fritts, 2002].

2.2.2 2000-Now

The new millennium saw the breaking of the MI assumption and the abandonment of APR-like formulations. Virtually no new methods (apart from a neural network-based method) created since then use the MI assumption, although interestingly enough some of them were motivated based on this assumption. Hence it may not be fair to compare these methods with the methods developed before 2000 because they were based on different assumptions. However, one can usually compare them in the sense of verifying which assumptions and models are more appropriate for the real-world data.

7 That is why we list it here and not among the new methods after 2000, although it was published in 2002.

The new methods created since 2000 were all aimed at upgrading single-instance learners to deal with MI data. The methods upgraded so far are decision trees [Ruffo, 2001], nearest neighbour [Wang and Zucker, 2000], neural networks [Ramon and Raedt, 2000], decision rules [Chevaleyre and Zucker, 2001] and support vector machines (SVM) [Gartner, Flach, Kowalczyk and Smola, 2002]. Nevertheless the techniques involved in some of these methods are significantly different from those used in their single-instance parents. We can categorize them into “instance-based” methods and “metadata-based” methods. The term “instance-based” denotes that a method is trying to select some (or all) representative instances from an example and model these representatives for the example. The selection could be based on the MI assumption or, more often after 2000, not. The term “metadata-based” means that a method actually ignores the instances within an example. Instead it extracts some metadata from an example that is no longer related to any specific instances. The metadata-based approaches cannot possibly adhere to the MI assumption because the MI assumption must be associated with instance selection within an example. We briefly describe the post-2000 methods using this categorization, which is also the backbone of the framework we will discuss in the next section.

The instance-based approaches are the nearest neighbour technique, the neural network, the decision rule learner, and the SVM (based on a multi-instance kernel). The MI nearest neighbour algorithms [Wang and Zucker, 2000] introduce a measure that gives the distance between two examples, namely the Hausdorff distance. It basically regards the distance between two examples as the distance between the representatives within each example (one representative instance per example). The selection of the representative is based on the maximum or minimum of the distances between all the instances from the two examples. While it is not totally clear from the paper [Wang and Zucker, 2000] what the so-called “Bayesian-KNN” does in the testing phase, the “Citation-KNN” method definitely violates the MI assumption because it decides a test example’s class label by the majority class of its nearest


examples. Thus in general it does not classify an example based on whether at least one of its instances is positive or all of its instances are negative.
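The maximum- and minimum-based bag distances can be sketched as follows; the exact variants used by Wang and Zucker may differ in detail, so treat this only as an illustration of the idea:

```python
import math

def euclid(a, b):
    """Euclidean distance between two instances (feature vectors)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def max_hausdorff(A, B):
    """Classical (maximal) Hausdorff distance between bags A and B:
    max of the two directed distances h(A,B) = max_a min_b d(a,b)."""
    h_ab = max(min(euclid(a, b) for b in B) for a in A)
    h_ba = max(min(euclid(b, a) for a in A) for b in B)
    return max(h_ab, h_ba)

def min_hausdorff(A, B):
    """Minimal variant: the distance between the closest pair of
    instances drawn from the two bags."""
    return min(euclid(a, b) for a in A for b in B)

A = [[0.0, 0.0], [4.0, 0.0]]
B = [[1.0, 0.0]]
# min_hausdorff(A, B) = 1.0, max_hausdorff(A, B) = 3.0
```

In both cases a single pair of instances (one per bag) ends up determining the bag-level distance, which is the "representative instance" view described above.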

On the other hand, MI neural networks [Ramon and Raedt, 2000] closely adhere to the MI assumption. They adopt the same two-step framework used in the APR method [Dietterich et al., 1997] as described above. As a matter of fact, one may recognize that searching for parameters in the aforementioned two-step process is well suited to a two-level neural network architecture. A neural network is used to learn a pattern in the classification step, and a model-based instance selection method is applied in the second step. In the first step the family of patterns is not explicitly specified but implicitly defined according to the complexity of the network constructed. In the second step, like the most-likely-cause model in DD [Maron, 1998], the neural network picks the instance with the highest output value in an example.8 Backpropagation is used to search for the parameter values. Therefore it can be said that this method is based on the MI assumption. Indeed, the reported results obtained seemed to be very similar to those of the DD algorithm on the Musk datasets.

NaiveRipperMI [Chevaleyre and Zucker, 2001] is a modification of the rule learner RIPPER [Cohen, 1995] with a different counting method. Instead of counting how many instances are covered by a hypothesis, it counts how many examples are covered. If at least one instance of an example is covered, the whole example is counted. Because positive and negative examples are treated the same way, this violates the MI assumption. In fact this method could find the hyper-rectangle that covers all negative examples but no positive ones. Since NaiveRipperMI [Chevaleyre and Zucker, 2001] placed so much emphasis on the MI assumption, we assume9 that it does what the MI assumption states in the testing procedure. However, this would mean that it is not consistent with what happens in the training procedure.
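The modified counting method can be illustrated with a hypothetical single-interval rule (real RIPPER hypotheses are more complex): coverage is counted in bags, not in instances, and a bag counts as covered if any of its instances satisfies the rule.

```python
# Bag-level coverage counting in the style of NaiveRipperMI.
# A rule is sketched here as a dict mapping a feature index to an
# allowed (low, high) interval; this representation is illustrative.

def covers(rule, instance):
    """True if the instance satisfies every condition of the rule."""
    return all(lo <= instance[i] <= hi for i, (lo, hi) in rule.items())

def bags_covered(rule, bags):
    """Count covered bags: a bag is covered if any instance is covered."""
    return sum(1 for bag in bags if any(covers(rule, inst) for inst in bag))

rule = {0: (0.0, 1.0)}
bags = [[[0.5, 9.0], [5.0, 2.0]],   # covered: first instance matches
        [[3.0, 1.0]]]               # not covered
# bags_covered(rule, bags) = 1
```

Since the same "at least one instance" count is applied to positive and negative bags alike, this counting scheme does not implement the asymmetry of the MI assumption.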

The SVM with the MI kernel [Gartner et al., 2002] also violates the MI assumption. The MI kernel simply replaces the standard dot product by the sum over all pairwise

8 Since the output value is in [0, 1], we can regard it as the probability of being positive.

9 There is no information on how a test example is classified in [Chevaleyre and Zucker, 2001].


dot products between instances from two examples. This can be combined with another non-linear kernel, e.g. an RBF kernel. This effectively assumes that the class label of an example is the true class label of all the instances inside it, and attempts to search for the hyperplane that can separate all (or most of)10 the training examples in an extended feature space (because of the RBF kernel function). Since this is done the same way for both positive and negative examples, the MI assumption is not used in this method at all. Indeed, we observe that some methods, including SVMs, that do not model the probability Pr(Y|X) directly, will find the MI assumption very hard to apply, if not impossible. It would be very convenient for those methods to have other assumptions associated with the measure they attempt to estimate.
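The MI kernel can be sketched directly from this description: the bag-level kernel value is the sum of a base kernel over all instance pairs drawn from the two bags. The RBF base kernel and its gamma parameter are illustrative choices, and any normalization of the set kernel is omitted here.

```python
import math

def rbf(x, z, gamma=1.0):
    """RBF base kernel on two instances (feature vectors)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def mi_kernel(X, Z, gamma=1.0):
    """Set kernel on two bags: sum of the base kernel over all
    pairs of instances drawn from the two bags."""
    return sum(rbf(x, z, gamma) for x in X for z in Z)

A = [[0.0, 0.0], [1.0, 0.0]]
B = [[0.0, 1.0]]
k_ab = mi_kernel(A, B)  # symmetric: equals mi_kernel(B, A)
```

Because every instance of a bag contributes to the kernel value in the same way regardless of the bag's label, no instance selection (and hence no MI assumption) is involved.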

The metadata-based approach is implemented in the MI decision tree learner RELIC and the SVM based on a polynomial minimax kernel. This approach extracts some metadata from each example, and regards such metadata as the characteristics of the examples. When a new example is seen, we can directly predict its class label with regard to the metadata without knowing the class labels of the instances. Therefore each individual instance is not important in this approach. What matters is the underlying properties of the instances. Hence we cannot tell which instance is positive or negative because an example’s class label is associated with the properties that are represented by the attribute values of all the instances. Hence this approach cannot possibly use the MI assumption.

The MI decision tree learner RELIC [Ruffo, 2001] is of this kind. At each node in the tree, RELIC partitions the examples according to the following method:

• For a nominal attribute with R values, say vr where r = 1, 2, ..., R, it assigns an example to the rth subset if there is at least one instance in the example whose value of this attribute is vr.

• For a numeric attribute, and given a threshold t, there are two subsets to choose between: the subset less than t and that greater than it. It assigns an example to either subset based on two types of tests. The first type assigns an example into the subset less than t if the minimum of this attribute's values over all the instances within the example is less than or equal to t, and otherwise into the subset greater than t. The second type assigns an example into the subset less than t if the maximum of this attribute's values over all the instances within the example is less than or equal to t, and otherwise into the subset greater than t.

• Then it seeks the best t according to the entropy measure, in the same way as the single-instance tree learner C4.5 [Quinlan, 1993]. Note that for numeric attributes, it looks for the best of the two types of tests simultaneously, so that only one type of test and one t will be selected for each numeric attribute.

10 The regularization parameter C in the SVM will tolerate some errors in the training data.

The way that RELIC assigns examples to subsets means that it is equivalent to extracting some metadata, namely minimax values, from each example and applying the single-instance learner C4.5 to the transformed data. Since RELIC examines the attribute values along each dimension individually, such metadata of an example does not correspond to any specific instance inside it, although it is possible, but very unlikely, for one instance to match the minimax values of all the attributes simultaneously. Moreover, in the testing phase, we can directly tell an example’s class label using the tree without getting the class labels of the instances. Thus the MI assumption obviously does not apply.

The SVM with a polynomial minimax kernel explicitly transforms the original feature space into a new feature space with twice the number of attributes of the original one. For each attribute in the original feature space, two new attributes are created in the transformed space: “minimal value” and “maximal value”. It then maps each example in the original feature space into an instance in the new space by finding the minimum and maximum value of each attribute for the instances in that example. Clearly some information is lost during the transformation and this is significantly different from the two-step paradigm used in [Dietterich et al., 1997]. The MI assumption cannot possibly apply. Note this is effectively the same as what RELIC does.
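The minimax transformation underlying both RELIC and the minimax kernel is simple to state in code; this sketch turns one bag into the single feature vector that a standard single-instance learner would then receive:

```python
# Minimax metadata transform: a bag of d-dimensional instances becomes
# one 2d-dimensional vector of per-attribute minima followed by maxima.

def minimax(bag):
    """Map a bag (list of equal-length instances) to mins + maxs."""
    mins = [min(col) for col in zip(*bag)]
    maxs = [max(col) for col in zip(*bag)]
    return mins + maxs

bag = [[1.0, 5.0], [3.0, 2.0]]
# minimax(bag) = [1.0, 2.0, 3.0, 5.0]
```

Note that the resulting vector [1.0, 2.0, 3.0, 5.0] does not coincide with any instance in the bag, which is exactly the point made above: the metadata is a property of the bag, not of a selected instance.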


The assumption of the metadata approach is that the classification of the examples is only related to the metadata (in this case the minimax values) of the examples, and that the transformation does not lose (or loses little) information in terms of classification. The convenience of the approach is that it transforms the multi-instance problem into the common mono-instance one. In general, the metadata approach enables us to transform the original feature space into other feature spaces that facilitate single-instance learning. The other feature spaces are not necessarily the result of simple metadata extracted from the examples. They could be, for instance, a model space where we build a model (either a classification model or a clustering model) for examples and transform each example into one instance according to, say, the count of its instances that can be explained by the model. Methods similar to this are actively being researched [Weidmann, 2003]. The validity of such transformations really depends on the background knowledge. If one believes that interactions or relationships between instances account for the classification of an example, then this approach may outperform methods that do not have such a sophisticated view of the problem. In this thesis, we refer to all the methods that transform the original feature space into another feature space as the “metadata-based approach”, no matter how complicated the extracted metadata may be.

In summary, the methods developed in the earlier stage of MI learning usually have an APR-like formulation and hold the MI assumption, whereas the methods developed later on often implicitly drop the MI assumption and are based on other types of models.

2.3 A New Framework

As mentioned in the previous section, we can categorize all the current MI solutions into a simple framework. As for the methods before 2000, it is quite obvious that they belong to the “instance-based approaches” because they strictly adhered to the MI assumption, which implies that one implicitly assumes a class label for each instance. We now present a hierarchical framework giving our view of MI learning.


[Figure 2.2 shows a tree: “MI learning” splits into “Instance-based Approaches” and “Metadata-based Approaches”; the instance-based branch splits into “The MI Assumption” and “Other Assumptions”, the metadata-based branch into “Model-oriented Metadata” and “Data-oriented Metadata”, and the data-oriented branch into “Fixed Metadata” and “Random Metadata”.]

Figure 2.2: A framework for MI Learning.

This is shown in Figure 2.2.

It is now almost a convention that the MI assumption is associated with MI learning. In this thesis, we generalize the definition of multiple instance learning and allow algorithm developers to plug in whatever assumption they believe reasonable. Thus “MI learning” in our framework is based on generalizing the MI assumption. As already discussed we categorize MI methods into two categories: instance-based approaches and metadata-based approaches. We have already analyzed the current solutions based on this distinction, and we will create more methods in this thesis that belong to either one of the two categories. We can specialize the two categories further.

In the instance-based approach, one normally estimates some parameters of a function mapping the feature variable X to the class variable Y. This function is then used to form a prediction at the bag level. All current instance-based methods for MI learning amount to estimating the parameters of the function that enable them to predict the class label of unseen examples. Some of these methods are based on the MI assumption but some are not, which results in two sub-categories. We will also develop some more methods within the sub-category that are not based on the MI assumption. We will explicitly state our assumptions and present the generative models that the methods are based on.

The metadata-based approach has already been discussed in the last section. The

metadata could either be directly extracted from the data, so-called “data-oriented”


in the framework shown in Figure 2.2, or from some models built on the data, called

“model-oriented”.

The two-level classification method [Weidmann, Frank and Pfahringer, 2003; Weidmann, 2003] is the only published approach that is “model-oriented”. It builds a first-level model using a single-instance (either supervised or unsupervised) learner to describe the potential patterns in the whole instance space as the metadata. It then applies a second-level learner to determine the model based on the extracted patterns. The second-level learner is a single-instance (supervised) learner. Thus it effectively transforms the original instance space into a new (“model-oriented”) instance space that single-instance learners can be applied to.

For example, we can apply a clustering algorithm — at the first level — to the original instances and build a clustering model from the (original) instance space. Then we can construct a new single-instance dataset, with every attribute corresponding to one extracted cluster, and each instance corresponding to one example in the original data. The new instances’ attribute values are the number of instances of the corresponding example that fall into each cluster. Finally we apply another single-instance learner, say a decision tree, to the new data to build a second-level model. At testing time, the first-level model is applied to the test example to extract the metadata (and generate a new instance), and the second-level learner classifies the test example according to the model built on the (training) metadata. Of course there are many combinations of first and second-level learners, and they are not further described in this thesis. Interested readers should refer to [Weidmann, 2003] for more detail.
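This clustering-based transformation can be sketched as follows. The nearest-centroid assignment stands in for whatever first-level clusterer is actually used; the centroids here are made up for illustration.

```python
# Model-oriented metadata sketch: represent each bag by the count of its
# instances assigned to each cluster of a first-level clustering model.

def nearest(instance, centroids):
    """Index of the centroid closest to the instance (squared distance)."""
    return min(range(len(centroids)),
               key=lambda k: sum((a - b) ** 2
                                 for a, b in zip(instance, centroids[k])))

def cluster_counts(bag, centroids):
    """The bag's new feature vector: one count per cluster."""
    counts = [0] * len(centroids)
    for inst in bag:
        counts[nearest(inst, centroids)] += 1
    return counts

centroids = [[0.0, 0.0], [5.0, 5.0]]
bag = [[0.1, 0.2], [4.9, 5.1], [5.2, 4.8]]
# cluster_counts(bag, centroids) = [1, 2]
```

A second-level single-instance learner (say a decision tree) is then trained on these count vectors and the bag labels; the same two-step mapping is applied to a test bag before it is classified.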

In the “data-oriented” sub-category, we can further specialize into “fixed metadata” and “random metadata” sub-categories. All of the current metadata-based methods (i.e. RELIC and the SVM based on a minimax kernel) are data-oriented. In addition, the metadata extracted is thought to be fixed and directly used to find the function mapping the metadata to Y. However we can regard the metadata as random and governed by some distributions, which results in another sub-category. Here we assume the data within each example is random and follows a certain distribution. Thus we can extract some low-order sufficient statistics to summarize the data and regard the statistics as the metadata. In this case, the metadata (statistics) have a sampling distribution parameterized by some parameters. If we further think of these parameters as governed by some distribution, the metadata is necessarily random, not only wandering around the parameters within a bag but from bag to bag as well. This is the thinking behind our new two-level distribution approach discussed in Chapter 5. It turns out that when assuming independence between attributes, it constitutes an approximate way to upgrade the naive Bayes method to deal with MI data, which has not been tried in the MI learning domain. We also discovered the relationship between this method and the empirical Bayes methods in Statistics [Maritz and Lwin, 1989].
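As an illustration of extracting low-order statistics as metadata, the following sketch summarizes each bag by its per-attribute sample mean and variance; the choice of statistics is illustrative and the actual statistics used by the two-level distribution approach of Chapter 5 may differ.

```python
# "Random metadata" sketch: summarize a bag by per-attribute sample
# mean and (population) variance. Under the two-level view, these
# statistics are themselves random draws from a sampling distribution
# whose parameters vary from bag to bag.

def bag_statistics(bag):
    """Return [mean_1, var_1, mean_2, var_2, ...] over the bag's columns."""
    n = len(bag)
    stats = []
    for col in zip(*bag):
        mean = sum(col) / n
        var = sum((x - mean) ** 2 for x in col) / n
        stats.extend([mean, var])
    return stats

bag = [[1.0, 2.0], [3.0, 2.0]]
# bag_statistics(bag) = [2.0, 1.0, 2.0, 0.0]
```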

2.4 Methodology

When we generalize the assumptions and approaches of MI learning, we find much flexibility within our framework described above. Nonetheless, we would still like to restrict our methods to some scope so that we may easily find theoretical justifications for them. We propose that it would be desirable for an MI algorithm to have the following three properties:

1. The assumptions and generative models that the method is based on are clearly explained. Because of the richness of the MI setting, there could be many (essentially infinitely many) mechanisms that generate MI data. It is very unlikely that one method can deal with all of them. A method has to be based on some assumptions. Thus it is important to state the assumptions the method is based on and their feasibility.

2. The method should be consistent in training and testing. Note “consistency”

here means that a method should make a prediction for a new example in


the same way as it builds a model on the training data.11 For example, if a method tries to build a model at the instance level and is based on a certain assumption other than the MI assumption, then it should also predict the class label of a test example using that assumption.

3. When the number of instances within each example reduces to one, the MI method degenerates into one of the popular single-instance learning algorithms. Although this is not such an important property compared to the former two, it is useful when we upgrade a single-instance learner, which is theoretically well founded, to deal with MI data.

Even though not all current MI methods have the above three properties, we aim to achieve them in this project. In order to do so we develop the following methodology for this thesis.

1. We explicitly drop the MI assumption but state the corresponding new assumptions whenever new methods are created. The underlying generative model of each new method will be explicitly provided so that one may clearly understand what kind of problem the methods can solve.

2. We adopt a statistical decision theoretic point of view, that is, we always assume a joint probability distribution over the feature variable X (or other new variables introduced for MI learning) and the class variable Y, Pr(X, Y), as assumed by most single-instance learning algorithms. We base all our modeling purely on this distribution. Although we can end up with different methods by factorizing the joint distribution differently, the root of them is the same.

3. As the standard single-instance learners are mostly well justified and empirically verified, we are interested in creating new methods related to them. Even if we develop a new method in a totally different context, we try to show the relationship between this method and some single-instance learning algorithms whenever possible.

11 This is different from the notion of consistency in a statistical context.


2.5 Some Notation and Terminology

To avoid confusion, this section explicitly lists the common notation that will be used in this thesis, and some terminology. Special notation and terminology will be used together with proper explanations.

An “example” in the MI domain is also called a “bag” or an “exemplar” and we will use all three terms in this thesis. Likewise an “instance” is sometimes called a “feature vector” or a “point” in the feature space. Every instance is regarded as a value of the “feature variable” X whereas its class label is a value of the class variable Y. Y is also called the “response variable” and “group variable”. An “attribute” will also be called a “feature” or a “dimension”. There are many names for the algorithms in normal single-instance (or mono-instance) supervised learning, like “propositional learner” or “Attribute-Value (AV) learner”. We will sometimes use them without distinction.

There is also some notation related to the joint probability distribution Pr(X, Y). In classification problems we are mostly concerned with the conditional probability of Y, Pr(Y|X). We also call it the “posterior probability” or the point-conditional probability because it is conditional on a certain point x. We also have the conditional probability of X, Pr(X|Y), which we call the group-conditional probability. Finally, we call Pr(X) and Pr(Y) the “prior probabilities” of X and Y respectively. Note that when X is numeric, we abuse the symbol “Pr(.)” because it is then a density function that we refer to. However, we will not distinguish this difference in the notation.


Chapter 3

A Heuristic Solution for Multiple Instance Problems

This chapter introduces new assumptions for MI learning, and we discard the standard MI assumption. We regard the class label of a bag as a property that is related to all the instances within that bag. We call this new assumption “the collective assumption” because the class label of a bag is now a collective property of all the corresponding instances. Why can the collective assumption be reasonable for some practical problems like drug activity prediction? Because the feature variables (in this case measuring the conformations, or shapes, of a molecule) usually cannot absolutely explain the response variable (a molecule's activity), it is appropriate to model a probabilistic mechanism that decides the class label of a data point in the instance space. The collective assumption means that the probabilistic mechanisms of the instances within a bag are intrinsically related, although the relationship is unknown to us. Consider the drug activity prediction problem: a molecule's conformations are not arbitrary in the instance space but confined to certain areas. Thus if we assume that the mechanism determining the class label of a molecule is similar to the mechanism determining the (latent) class labels of all (or most) of the molecule's conformations, we may better explain the molecule's activity. Even if the activity of a molecule were truly determined by only one specific shape (or a very limited number of shapes) in the instance space, it would have a very small


probability of being sampled. Together with some measurement errors, the samples of a molecule are very likely to wander around the “true” shape(s). Therefore it is more robust to model the collective class properties of the instances within a bag rather than those of some specific instance. We believe this is true for many practical MI datasets, including the musk drug activity datasets.

The “collective assumption” is a broad concept and there is great flexibility under this assumption. In Section 3.1 we present several possible options for building exact generative models based on this assumption. We further illustrate it with an artificial dataset generated by one exact generative model in Section 3.2. This generative model is strongly related to those used in Chapters 4 and 5. In fact, all the methods developed in this thesis are based on some form of the collective assumption. Section 3.3 presents a heuristic wrapper method for MI learning that is based on the collective assumption [Frank and Xu, 2003]. It will be shown that in some cases it can perform classification quite well even though it introduces some bias into the probability estimation. We interpret and analyze some properties of the heuristic in Section 3.4. Note that some of the material in this chapter has been published in [Frank and Xu, 2003].

3.1 Assumptions

Under the collective assumption, we have several options to build an exact generative model. We have to decide which options to take in order to generate a specific model. We found that answering the following questions is helpful in making the decision:

1. How to define the class label property of an instance and of a bag?

Suppose we use a function of the class variable Y, C(Y|.), to denote the class label property; then what is the exact form of C(Y|.)? In single-instance statistical learning, we usually model C(Y|X) to be related to the posterior probability function Pr(Y|X): we either use Pr(Y|X) itself or its logit


transform, the log-odds function log[Pr(Y=1|X)/Pr(Y=0|X)]. In MI learning, we can also model it at the bag level, i.e. we can build a function C(Y|B), where B denotes the bags. Thus we can also have Pr(Y|B) = Pr(Y|X1, X2, ..., Xn) and log[Pr(Y=1|B)/Pr(Y=0|B)] = log[Pr(Y=1|X1, X2, ..., Xn)/Pr(Y=0|X1, X2, ..., Xn)], where a bag B = b has n instances x1, x2, ..., xn. Note that in this thesis we restrict ourselves to the same form of the property for instances and bags. For example, if we model Pr(Y|X) for the instances, we also model Pr(Y|B) for the bags instead of the log-odds. We will show the reason for doing so in the answer to the next question.

2. How to view the members of a bag and what is the relationship between

their class label properties and that of the bag?

Almost all current MI methods regard the members (instances) of a bag as a finite number of fixed and unrelated elements. However, we take a different point of view. We think of each bag as a population, which is continuous and generates instances in the instance space in a dense manner. What we have in the data for each bag are some instances randomly sampled from its population. This point of view actually relates all the instances to each other given a bag; that is, they are all governed by the specific distribution of the population of that bag. Every bag is unbounded, i.e., it ranges over the whole instance space. However, its distribution may be bounded, i.e. its instances may only be located in a small region of the instance space.[1] If one really thinks of a bag's elements as randomly selected from the whole instance space, we can still fit this into our framework by modeling each bag as having a uniform distribution. Note that the distributions of different bags are different from each other and bags may overlap. Thus each point x in the instance space (that is, an instance) will have a different density given different bags. Therefore, unlike in normal single-instance learning, which assumes Pr(X), we have Pr(X|B) instead. We still regard each point in the instance space as having its own class label (that could be determined by either a deterministic or a probabilistic process) but this is unobservable in the data. What is observable is the class label of a bag, which is determined using the instances'

[1] In fact, if the bags are bounded, we can always think of their distributions as bounded ones.


(latent) class label properties.

Now, how do we decide the class label of a bag given those of its instances? There are two options here: the population version and the sample version. Since we take the above perspective for a bag and we have already defined the class label property C(Y|.), one application of the collective assumption is to take the expected property of the population of each bag as the class property of that bag. Given a bag b, we calculate

    C(Y|b) = E_{X|b}[C(Y|X)] = ∫_X C(Y|x) Pr(x|b) dx        (3.1)

We simply regard the bag's class label property as the conditional expectation of the class property of all the instances given b. This is the population version for determining the class label of a bag, C(Y|b). It is not related to any individual instance, but we must know Pr(x|b) exactly to calculate the integral. However, there is also a sample version of C(Y|b). If given b we can sample n_b instances from the instance space (in other words, there are n_b instances in b), then C(Y|b) = (1/n_b) Σ_{i=1}^{n_b} C(Y|x_i). Since instances are drawn via random sampling, we can give each instance an equal weight and calculate the weighted average no matter what the distribution Pr(x|b) is. The population and the sample version are approximately the same when the in-bag sample size is large, but there may be a large difference if the sample size is small, for example, if only one instance is sampled per bag. In this case the sample version reduces to the single-instance case because an instance's (in this case also a bag's) class label is determined by its own class label property. However, the population version will still determine the instance's class label by the overall class property of its bag. Consequently it may be less desirable because it does not degenerate naturally to the single-instance case.

It is now clear why we choose consistent formulations of C(Y|.) for both bags and instances: we regard C(Y|B) simply as the expected value of C(Y|X) conditional on the existence of a bag B = b. While there may be other applications of the collective assumption, in this thesis we take this perspective only. There are still two options, as discussed above, for generating

MI data under the collective assumption: the sample version and the population version. In this thesis we use the sample version because of its elegant degradation into single-instance supervised learning.

3. How to model Pr(X|B)?

Whether using the population or the sample version, we need to know the conditional density of the instances, Pr(X|B), because we need to generate the instances of each bag from this distribution. There are two possibilities for modeling Pr(X|B): one is to model it indirectly using Pr(X) and the other is to model Pr(X|B) directly.

The first option still assumes the existence of Pr(X); thus we are in the same framework as normal single-instance learning, where we assume both Pr(X) and Pr(Y|X). What is new is that we introduce a new variable B to denote the bags and define Pr(X|B) in terms of Pr(X). In particular, we assume the distribution of each bag Pr(X|B) is bounded and only occupies a limited area in the instance space. As a result we model the conditional density of the feature variable X given a bag B = b as

    Pr(x|b) = Pr(x) / ∫_{x∈b} Pr(x) dx   if x ∈ b,  and 0 otherwise.        (3.2)

Note that we abuse the notation Pr(.) here because we really have a density function of X instead of a probability if X is numeric. To put it another way, the distribution of each bag is simply the normalized instance distribution Pr(X), restricted to the corresponding range. In both the population and the sample version of the generative model, we need Equation 3.2 to generate instances for one bag. In the population version we also need it to create the class label of a bag b, whereas in the sample version we do not, because the class label of a bag is not related to a specific form of the density function of its instances.

The second option for modeling Pr(X|B) does not assume the existence of Pr(X). Indeed, if all we need is Pr(X|B), why should we still rely on the

single-instance statistical learning paradigm? Given a bag b, we can directly model Pr(x|b) as some distribution, say a Gaussian, parameterized by some parameters. Now a bag is basically described by its parameters because once the parameters of a bag are decided, that bag has been formed. Thus we need some mechanism to generate the parameters of the bags: we can conveniently regard the parameters themselves as distributed according to some “hyper-distribution”. The data generation process is exactly the same as in the first option, for both the population version and the sample version. Note that if we model Pr(X|B) directly, Pr(X) may or may not exist, depending on the specific distribution involved.

The above two options may sometimes coincide, as will be shown in Section 3.2, but in general they generate different data. Note that although the conditional density function Pr(X|B) must be specified in order to generate instances for each bag, it is not important for the instance-based learning algorithms that will be discussed (particularly in Chapter 4) because they are based on the sample version of the generative model, in which Pr(X|B) is not relevant to the class probability of the bags, Pr(Y|B). However, in Chapter 5, we pay much attention to Pr(X|B) as we develop a method that models it in a group-conditional manner (i.e. conditional on Y).
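To make the distinction between the population and the sample version concrete, the following sketch (a hypothetical illustration of ours, not code from the thesis) compares the two for a one-dimensional bag with a uniform distribution, taking C(Y|x) to be a logistic posterior probability:

```python
import numpy as np

rng = np.random.default_rng(42)

def c_y_given_x(x):
    # Instance-level class property C(Y|x), here a logistic posterior Pr(Y=1|x).
    return 1.0 / (1.0 + np.exp(-3.0 * x))

# A one-dimensional bag b with a uniform distribution on [a, a+w],
# so Pr(x|b) = 1/w inside the interval and 0 outside.
a, w = -1.0, 2.0

# Population version of Equation 3.1: E_{X|b}[C(Y|X)], approximated by
# averaging C(Y|x) over a dense uniform grid inside the bag's region.
xs = np.linspace(a, a + w, 100001)
population = c_y_given_x(xs).mean()

# Sample version: the plain average over n_b random draws from the bag.
for n_b in (1, 10, 100, 10000):
    draws = rng.uniform(a, a + w, size=n_b)
    sample = c_y_given_x(draws).mean()
    print(n_b, round(abs(sample - population), 4))
```

For this symmetric interval the population value is exactly 0.5, and the sample version converges to it as n_b grows. With n_b = 1 the sample version simply returns the single instance's own class property, illustrating the degeneration to single-instance learning mentioned above.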

Now that we have answered the above questions, we are able to specify how to generate a bag of instances and how to generate the class label of that bag. Thus it is time to generate an MI dataset based on an exact generative model, under the collective assumption. In the following section, we illustrate the above specifications via an artificial dataset, answering the above questions as follows. First, the class label property C(Y|.) is the posterior probability at both the instance and the bag level. In other words, we model Pr(Y|B) and Pr(Y|X). Second, we think of each bag as a hyper-rectangle in the instance space and assume the center (i.e., the middle-point of the hyper-rectangle) of each bag is uniformly distributed. Thus the bags are bounded and the instances for each bag are drawn from within


Figure 3.1: An artificial dataset with 20 bags.

the corresponding hyper-rectangle. We model Pr(Y|X) as a linear logistic model, i.e. Pr(Y=1|X) = 1 / (1 + exp(-βX)), and, based on the sample version of the generative model, Pr(Y|B) = (1/n) Σ_{i=1}^{n} Pr(Y|X_i), where n is the number of instances in a bag. Finally, we take the first option for the last question and assume the existence of Pr(X), which we model as a uniform distribution. Thus the conditional density in Equation 3.2 is simply a uniform distribution within the region that a bag occupies.

3.2 An Artificial Example Domain

In this section, we consider an artificial domain with two attributes. More specifically, we created bags of instances by defining rectangular regions and sampling instances from within each region. First, we generated coordinates for the centroids of the rectangles according to a uniform distribution with a range of [-5, 5] for each of the two dimensions. The size of a rectangle in each dimension was chosen from 2 to 6 with equal probability (i.e. following a uniform distribution). Each rectangle was used to create a bag of instances. To this end we sampled n instances from within a rectangle according to a uniform distribution. The value of n was chosen


from 1 to 20 with equal probability. Note that although we adopt the first option to model Pr(X|B) from the previous section, it is identical to the second option if we assume a different uniform distribution for each bag, and the parameters of the uniform distributions are governed by another (hyper-) uniform distribution.

It remains to explain how to generate the class label for a bag. As mentioned before, our generative model assumes that the class probability of a bag is the average class probability of the instances within it, and this is what we used to generate the class labels for the bags. The instance-level class probability was defined by the following linear logistic model:

    Pr(y = 1|x1, x2) = 1 / (1 + e^(-3x1 - 3x2))

Figure 3.1 shows a dataset with 20 bags that was generated according to this model.

The black line in the middle is the instance-level decision boundary (i.e. where Pr(y=1|x1, x2) = 0.5) and the sub-space on the right side has instances with a higher probability of being positive. A rectangle indicates the region used to sample points for the corresponding bag (and a dot indicates its centroid). The top-left corner of each rectangle shows the bag index, followed by the number of instances in the bag. Bags in gray belong to class “negative” and bags in black to class “positive”. In this plot we mask the class labels of the instances with the class of the corresponding bag because only the bags' class labels are observable. Note that bags can be on the “wrong” side of the instance-level decision boundary because each bag was labeled by flipping a coin based on the average class probability of the instances in it.
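The data-generation recipe just described can be sketched as follows. This is a hypothetical reimplementation using NumPy; names such as make_bag are ours, not from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

def pr_positive(x):
    # Instance-level model: Pr(y=1|x1,x2) = 1 / (1 + exp(-3*x1 - 3*x2)).
    return 1.0 / (1.0 + np.exp(-3.0 * x[:, 0] - 3.0 * x[:, 1]))

def make_bag():
    center = rng.uniform(-5.0, 5.0, size=2)   # centroid uniform in [-5, 5]^2
    size = rng.uniform(2.0, 6.0, size=2)      # rectangle side lengths in [2, 6]
    n = int(rng.integers(1, 21))              # 1 to 20 instances per bag
    # Instances uniform within the rectangle around the centroid.
    x = rng.uniform(center - size / 2.0, center + size / 2.0, size=(n, 2))
    p_bag = pr_positive(x).mean()             # sample version of Equation 3.1
    y = int(rng.random() < p_bag)             # label the bag by a biased coin flip
    return x, y

bags = [make_bag() for _ in range(20)]        # an artificial dataset of 20 bags
```

Masking then simply means attaching each bag's label y to all of its instances; the instances' own (latent) labels are never materialized.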

Note that the bag-level decision boundary is not explicitly defined but the instance-level one is. However, if the number of instances in each bag goes to infinity (i.e., basically working with the population version), then the bag-level decision boundary is defined. Since the instance-level decision boundary is symmetric w.r.t. every rectangle, and so is Pr(X|B), the bag-level decision boundary is defined in terms of the centroid of each bag and is the same as the instance-level one. Thus the best


choice to classify a bag, when given its entire population, is simply to predict based on which side of the line 3x1 + 3x2 = 0 the bag's centroid is. There is also another interesting property in this asymptotic situation. Since we define C(Y|X) as Pr(Y|X), we can plug this into the right-hand side of Equation 3.1 to calculate the conditional expectation of Pr(Y|X) given a bag b:

    E_{X|b}[Pr(Y|X)] = ∫_X Pr(Y|x) Pr(x|b) dx

Assuming conditional independence between Y and b given x,

    = ∫_X Pr(Y|x, b) Pr(x|b) dx
    = ∫_X Pr(x, Y|b) dx
    = Pr(Y|b)

Thus we marginalize X out of the joint distribution Pr(X, Y) conditional on the existence of b and get the (conditional) prior probability: the class probability of b, Pr(Y|b).

Chapter 4 presents an exact instance-based learning algorithm for this problem. It

is quite obvious that the exact solution is instance-based because only the instance-level probabilities are defined. However, this artificial problem can also be tackled with other methods. We may regard Pr(X|B) for each bag as a uniform distribution whose parameters are governed by some hyper-distribution, for example, two Gaussian distributions with different means but the same variance, one for each class. Since, as mentioned before, the bag-level decision boundary is asymptotically linear, one can imagine that this type of model is not bad for this generative model in terms of classification performance. In particular, if we model one Gaussian centered in the top-right corner of Figure 3.1 and another Gaussian in the bottom-left corner, it would be quite a good approximation of the true generative model. This thinking underlies the approach presented in Chapter 5.


Method                               Musk 1        Musk 2
Bagging with Discretized PART        90.22±2.11    87.16±1.42
RBF Support Vector Machine           89.13±1.15    87.16±2.14
Bagging with Discretized C4.5        90.98±2.51    85.00±2.74
AdaBoost.M1 with Discretized C4.5    89.24±1.66    85.49±2.73
AdaBoost.M1 with Discretized PART    89.78±2.30    83.70±1.81
Discretized PART                     84.78±2.51    87.06±2.16
Discretized C4.5                     85.43±2.95    85.69±1.86

Table 3.1: The best accuracies (and standard deviations) achieved by the wrapper method on the Musk datasets (10 runs of stratified 10-fold cross-validation).

In the remainder of this chapter, we analyze a heuristic method based on the above generative model. Although it does not find an exact solution, it performs very well on the classification task based on this generative model. The method is very simple, but its empirical performance on the Musk benchmark datasets is surprisingly good, which may be due to the similarity between these datasets and our generative model.

3.3 The Wrapper Heuristic

In this section, we briefly summarize results for a simple wrapper that, in conjunction with appropriate single-instance learning algorithms, achieves high accuracy on the Musk benchmark datasets [Frank and Xu, 2003]. Consistent with the collective assumption, this method assigns every instance the class label of the bag that it pertains to, so that a single-instance learner can learn from it. Moreover, it has two special properties: (1) at training time, instances are assigned a weight inversely proportional to the size of the bag that they belong to, so that each bag receives equal total weight, and (2) at prediction time, the class probability for a bag is estimated by averaging the class probabilities assigned to the individual instances in the bag. This method does not require any modification of the underlying single-instance learner as long as it generates class probability estimates and can deal with instance weights.
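As a sketch, the wrapper can be implemented around any probabilistic single-instance learner that accepts instance weights. The following hypothetical example uses scikit-learn's logistic regression as the base learner; the thesis itself used WEKA learners, and the helper names here are ours:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_wrapper(bags, bag_labels):
    """Train a single-instance learner on the flattened MI data."""
    # Every instance inherits its bag's class label...
    X = np.vstack(bags)
    y = np.concatenate([np.full(len(b), lab) for b, lab in zip(bags, bag_labels)])
    # ...and a weight 1/n_b, so that each bag receives equal total weight.
    w = np.concatenate([np.full(len(b), 1.0 / len(b)) for b in bags])
    model = LogisticRegression(max_iter=1000)
    model.fit(X, y, sample_weight=w)
    return model

def predict_bag(model, bag):
    # The bag's class probability is the average of its instances' probabilities.
    return model.predict_proba(bag)[:, 1].mean()
```

A bag is then classified as positive when predict_bag(model, bag) exceeds 0.5.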


Table 3.1 shows the results of the wrapper method on the musk activity prediction problem for some popular single-instance learners implemented in the WEKA workbench [Witten and Frank, 1999].[2] All estimates were obtained using stratified 10-fold cross-validation (CV) repeated 10 times. The cross-validation runs were performed at the bag level.

Bagging [Breiman, 1996] and AdaBoost.M1 [Freund and Schapire, 1996] are standard single-instance ensemble methods. PART [Frank and Witten, 1998] and C4.5 [Quinlan, 1993] learn decision lists and decision trees respectively. The “RBF Support Vector Machine” is a support vector machine with a Gaussian kernel using the sequential minimal optimization algorithm [Platt, 1998]. The “discretized” versions are the same algorithms run on discretized training data using equal-width discretization [Frank and Witten, 1999]. For more details on these algorithms and how they can deal with weights and generate probability estimates, please see [Frank and Xu, 2003]. It suffices to say that all these results are competitive with the best results achieved by other MI methods, as shown in Chapter 6.

3.4 Interpretation

Recall that the wrapper method has two key features: (1) the way it assigns instance weights and class labels at training time, and (2) the probability averaging over a bag at prediction time. In the following, we provide some explanation for why this makes sense.

The wrapper method is an instance-based method and, like other methods in this category, it also tries to recover the instance-level probability. It is well known that many popular propositional learners aim to minimize an expected loss function over X and Y, that is, E_X E_{Y|X}(Loss(θ, Y)), where θ is the parameter to be estimated, usually involved in the probability function Pr(Y|X). As a matter of fact, all the single-instance learning schemes in Table 3.1 are within this category.[3] Now,

[2] Some of the properties of the Musk datasets are summarized in Table 4.2 in Chapter 4.
[3] PART and C4.5 use an entropy-based criterion to select the optimal split point in a certain region.


Figure 3.2: Parameter estimation of the wrapper method. (The original plot shows the parameter value against the number of exemplars, up to 10,000, for estimated coefficient 1, estimated coefficient 2, and the estimated intercept.)

Figure 3.3: Test errors on the artificial data of the wrapper method trained on masked and unmasked data. (The original plot shows the test error rate (%) against the number of training exemplars, up to 120, for the wrapper method trained on the masked data, the wrapper method trained on the unmasked data, and the model with the true parameters.)

given a concrete bag b of size n, as defined by Equation 3.2, the sample version of Pr(x|b) is 1/n for each instance in the bag and zero otherwise. Therefore the conditional expectation of the loss function given the presence of a bag b is simply E_{X|b} E_{Y|X,b}(Loss(θ, Y)) = E_{X|b} E_{Y|X}(Loss(θ, Y)), assuming conditional independence of Y on b given X. Plugging in Pr(x|b), the conditional expected loss is simply Σ_j (1/n) E_{y_j|x_j}(Loss(θ, y_j)), where x_j and y_j are the attribute vector and class label of the jth instance in b respectively. We want to minimize this expected loss over all the bags; thus the final expected loss to be minimized is

    E_B[ E_{X|B}[ E_{Y|X,B}(Loss(θ, Y)) ] ] = Σ_i (1/N) Σ_j (1/n_i) E_{y_ij|x_ij}(Loss(θ, y_ij))        (3.3)

where N is the number of bags and n_i the number of instances in bag i. Thus the weight 1/n of each instance x serves as Pr(x|b), and the bags' weight 1/N is a constant outside the sum over all the bags and does not affect the minimization of the expected loss. Nonetheless, Equation 3.3 can never be realized because y_ij, i.e. the class label of each instance, is not observable. If it were observable, this formulation could be used to find the true θ. However, under the collective assumption, if we assign the bag's class label y_i to each of its instances, it may not be a bad approximation, as y_i is related to all the y_ij. This is exactly what the wrapper method does at training time. The approximation makes the wrapper method a heuristic because it necessarily introduces bias into the probability estimates.

[Footnote 3, continued] This is to minimize the cross-entropy or deviance loss in a piecewise fashion.


This can be illustrated by running (weighted) linear logistic regression on the artificial data described in Section 3.2. As shown in Figure 3.2, asymptotically the estimates differ from the true parameters by a multiplicative constant.[4] It seems that the objective of recovering the instance-level probability function exactly cannot be achieved with this method. Nevertheless, what we really want is classification performance, and it is well known that unbiased class probability estimates are not necessary to obtain accurate classifications. In this case, since the bias is a multiplicative constant for all the parameters, the correct decision boundary on the instance level can be recovered. As discussed in Section 3.2, asymptotically the bag-level decision boundary is the same as the instance-level one. Thus, given enough instances (normally 10 to 20) within a bag, the classification can be very accurate because the classification decision is always correctly made. This is shown in Figure 3.3. We generate a hold-out dataset of 10,000 independent test bags using the same generative model described in Section 3.2. Then we use the wrapper method

ure 3.3. We generate a hold-out dataset of 10000 independenttest bags using the

same generative model described in Section 3.2. Then we use the wrapper method

trained on both the “masked” data, i.e.yij is not given andyi is assigned to every

instance instead, and the “unmasked” data, i.e.yij is given for every instance. In

the latter case linear logistic regression converges to thetrue function (convergence

not shown in this thesis). At prediction time, we use the probability average of each

bag, which is reasonable in this case because it is assumed inthe generative model.

Figure 3.3 shows that assigning a bag’s class labels to its instances does not harm

classification performance. In fact, the wrapper method trained on the masked data

predicts equally well as the one trained on the unmasked datausing 60 training bags.

It achieves the best possible accuracy (shown as the bottom line) when trained on

more than 120 training bags.

The above observations are obtained in a specific artificial setting, which is much simpler than real-world problems. In general, the bias may not be a constant for all the parameters, so even the true decision boundary cannot be recovered. However, practical generative models may have factors that restrict, to some extent, the bias or its harmful effect on the classification. For example, we observed

[4] The reason the intercept estimate seems unbiased is that the true value is 0, and with a multiplicative bias, the estimate is still 0.


from the artificial data that the less area each bag occupies in the instance space, the less bias the wrapper method has. When the ranges of the bags become small, the bias is practically negligible. Hence, if there are some restrictions on the range of a bag in the instance space, the wrapper method can work well. Indeed, we observed that on the Musk datasets the in-bag variances are very small for most of the bags, which may explain the feasibility of the wrapper method. Intuitively, this heuristic will work well if the true class probabilities Pr(Y|X) are “similar” for all the instances in a bag (under the above generative model), because we use the bags' class labels to approximate those of the corresponding instances. Therefore, in general, as long as this condition is approximately correct, no matter by what means, the wrapper method can work.

Finally, what the wrapper method does at prediction time is reasonable assuming the above generative model. Nevertheless, it seems that the wrapper method does not use the assumption that a bag's class probability is the average of the instances' class probabilities at training time. In fact, by assigning the bags' class labels to their instances, it only assumes the general collective assumption but not any specific assumption about how the bags' class labels are created. Therefore, we can use other methods at prediction time, such as taking the normalized geometric average of the instances' class probabilities within a bag as the bag's probability, as long as the method is consistent with the general collective assumption. However, according to our experience, taking the arithmetic average of the instances' probabilities for a bag is more robust on practical datasets like the Musk datasets. Thus we recommend this method for the wrapper method in general.
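The two prediction-time combination rules mentioned above can be written down directly. The following is an illustrative helper of our own (assuming instance-level probability estimates are already available), not code from the thesis:

```python
import numpy as np

def bag_probability(instance_probs, method="arithmetic"):
    """Combine instance-level Pr(y=1|x_i) estimates into one bag probability."""
    p = np.asarray(instance_probs, dtype=float)
    if method == "arithmetic":
        return p.mean()
    # Normalized geometric average: geo(p) / (geo(p) + geo(1 - p)),
    # computed via logarithms for numerical stability.
    g1 = np.exp(np.mean(np.log(p)))
    g0 = np.exp(np.mean(np.log(1.0 - p)))
    return g1 / (g1 + g0)

probs = [0.9, 0.8, 0.2]
print(bag_probability(probs, "arithmetic"))
print(bag_probability(probs, "geometric"))
```

The geometric variant is more sensitive to instances with extreme probabilities, which may be one reason the arithmetic average proved more robust on the Musk data.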

As explained, the wrapper method is only a heuristic that can work well in practice under some conditions. At least for the specific generative model proposed in Section 3.2 of this chapter, it produces accurate classifications, although it is not good for probability estimation. In Chapter 4, we will present an exact algorithm that can give accurate probability estimates for the same generative model. It is also based on normal single-instance learning schemes and aims to upgrade them to deal with MI data. The disadvantage of such an approach is that it heavily relies


on exact assumptions at training time. Thus, significant modifications of the underlying propositional learning algorithms are inevitable. The wrapper method, on the other hand, is much simpler and more convenient.

3.5 Conclusions

In this chapter we first introduced a new assumption for MI learning, different from the MI assumption: the collective assumption. This assumption regards the class label of a bag as related to all the instances within the bag. We then showed some concrete applications of the collective assumption for generating MI data exactly. One of these applications is to take the average probability of all the instances in the same bag as the class probability of that bag.

Under the above exact generative model, we further assumed that the instances

within the same bag have similar class probabilities. Consequently the probability

of a bag is also similar to those of its instances. These assumptions allow us to

develop a heuristic wrapper method for MI problems. This method wraps around

normal single-instance learning algorithms by (1) assigning the class label of a bag

and instance weights to the instances at training time, and (2) averaging the instances’ class probabilities within a bag at testing time.

Assigning bags’ class labels to their corresponding instances and averaging the instances’ class probabilities are applications of the above two assumptions (the collective assumption and the “similar probability” assumption). The instances’ weights and the wrapping scheme were motivated by the instance-level loss function (encoded in the single-instance learners) over all the bags. We also showed

an artificial example where this wrapper method can perform well for classification

although its probability estimates are biased. Empirically, we found that this method works very well on the Musk benchmark datasets, in spite of its simplicity.


Chapter 4

Upgrading Single-instance Learners

Among the many solutions for tackling multiple instance (MI) learning problems, one approach has become increasingly popular: upgrading paradigms from normal single-instance learning to deal with MI data. The efforts described in this chapter also fall into this category. However, unlike most of the current algorithms within this category, we adopt an assumption-based approach that is grounded in statistical decision theory. Starting with an analysis of the assumptions and the underlying generative models of MI problems, we provide a fairly general and justified framework for upgrading single-instance learners to deal with MI data. The key feature of this framework is the minimization of the expected bag-level loss function based on some assumptions. As an example we upgrade two popular single-instance learners, linear logistic regression and AdaBoost, and test their empirical performance. The assumptions and underlying generative models of these methods are explicitly stated.

4.1 Introduction

The motivating application for MI learning was the drug activity problem considered by Dietterich et al. [1997]. The generative model for this problem was basically regarded as a two-step process. Dietterich et al. assumed there is an Axis-Parallel


Rectangle (APR) in the instance space that accounts for the class label of each

instance (or each point in the instance space). Each instance within the APR is pos-

itive and all others negative. In the second step, a bag of instances is formed by

sampling (not necessarily randomly) from the instance space. The bag’s class label

is determined by the MI assumption, i.e., a bag will be positive if at least one of its

instances is positive (within the assumed APR) and otherwise negative. With this

perspective, Dietterich et al. proposed APR algorithms that attempt to find the best

APR under the MI assumption. They showed empirical results of the APR methods

on the Musk datasets, which represent a musk activity prediction problem.

In this chapter, we follow the same perspective as that in [Dietterich et al., 1997]

but with different assumptions. We adopt an approach based on statistical decision theory and select assumptions that are well-suited for upgrading each single-instance learner.

The rest of the chapter is organized as follows. In Section 4.2 we explain the un-

derlying generative model we assume and show artificial MI data generated using

this model. In Section 4.3 we describe a general framework for upgrading normal

single-instance learners according to the generative model. We also provide two examples of how to upgrade linear logistic regression and AdaBoost [Freund and Schapire, 1996] within this framework. These methods have not been studied in the

MI domain before. Section 4.4 shows some properties of our methods on both the

artificial data and practical data. Some regularization techniques, as used in normal

single-instance learning, will also be introduced in an MI context. In Section 4.5 we

show that the methods presented in this chapter perform comparatively well on the

benchmark datasets, i.e. the Musk datasets. Section 4.6 summarizes related work

and Section 4.7 concludes this chapter.


4.2 The Underlying Generative Model

The basis for the work presented in this chapter was an analysis of the generative

model (implicitly) assumed by Dietterich et al. The result of this analysis was that the generative model is actually a two-step process. The first step is an instance-level classification process and the second step determines the class label of a bag based on the first step and the MI assumption. As a matter of fact, the only probabilistic algorithm in MI learning, the Diverse Density (DD) algorithm [Maron, 1998], followed the same line of thinking. In the first step, DD assumes a radial (or “Gaussian-like”) formulation for the true posterior probability function Pr(Y|X). In the second step, based on the values of Pr(Y|X) of all the instances within a bag, it assumes either a multi-stage (as in the noisy-or model) or a one-stage (as in the most-likely-cause model) Bernoulli process to determine the class label of a bag. Therefore DD amounts to finding one (or more) Axis-Parallel hyper-Ellipse (APE)¹ under the MI assumption.

It is natural to extend the above process of generating MI data to a more general

framework. Specifically, as in single-instance learning, we assume a joint distribution over the feature variable X and the class (response) variable Y, Pr(X, Y) = Pr(Y|X)Pr(X). The posterior probability function Pr(Y|X) determines the instance-level decision boundary that we are looking for. However, in MI learning we introduce another variable B, denoting the bags, and what we really want is Pr(Y|B). Let us assume that, given a bag b and all its instances x_b, Pr(Y|b) is a function of Pr(Y|x_b), i.e., Pr(Y|b) = g(Pr(Y|x_b)). The form of g(·) is determined based on some assumptions. In the APR algorithms [Dietterich et al., 1997], the instance-level decision boundary pattern is modeled as an APR and g(·) is an instance-selection function based on the MI assumption. In DD [Maron, 1998], the instance-level decision boundary pattern is an APE and g(·) is either the noisy-or or the most-likely-cause model, which are both due to the MI assumption. Therefore

the “APR-like pattern and MI assumption” combination is simply a special case of our framework.

1 Note that an APE is very similar to an APR, but unlike an APR it is differentiable.

In this chapter we are creating new combinations within this framework but with different assumptions. We change the decision boundary to other patterns and substitute the MI assumption with the collective assumption introduced in Chapter 3. We model the instance-level class label based on Pr(Y|X) or its logit transformation, i.e. the log-odds function $\log\frac{\Pr(Y=1|X)}{\Pr(Y=0|X)}$, because many normal single-instance learners aim to estimate them and we are aiming to upgrade these algorithms. We can think of a bag b as a certain area in the instance space. Then, given b with n instances, we have

\[ \Pr(Y|b) = \frac{1}{n}\sum_{i=1}^{n} \Pr(y|x_i) \tag{4.1} \]

or

\[ \log\frac{\Pr(Y=1|b)}{\Pr(Y=0|b)} = \frac{1}{n}\sum_{i=1}^{n}\log\frac{\Pr(Y=1|x_i)}{\Pr(Y=0|x_i)} \;\Rightarrow\; \begin{cases} \Pr(Y=1|b) = \dfrac{[\prod_{i=1}^{n}\Pr(y=1|x_i)]^{1/n}}{[\prod_{i=1}^{n}\Pr(y=1|x_i)]^{1/n}+[\prod_{i=1}^{n}\Pr(y=0|x_i)]^{1/n}} \\[2ex] \Pr(Y=0|b) = \dfrac{[\prod_{i=1}^{n}\Pr(y=0|x_i)]^{1/n}}{[\prod_{i=1}^{n}\Pr(y=1|x_i)]^{1/n}+[\prod_{i=1}^{n}\Pr(y=0|x_i)]^{1/n}} \end{cases} \tag{4.2} \]

where x_i ∈ b. Even though the assumptions are quite intuitive, we provide a formal derivation in the following.

As discussed in Section 3.1 of Chapter 3, we define two versions of generative models — the population version and the sample version. In the population version, the conditional density of the feature variable X given a bag B = b is²

\[ \Pr(x|b) = \begin{cases} \dfrac{\Pr(x)}{\int_{x\in b}\Pr(x)\,dx} & \text{if } x \in b, \\ 0 & \text{otherwise.} \end{cases} \tag{4.3} \]

2 Note that we abuse the term Pr(·) here because, for a numeric feature, what we have is a density function of X instead of a probability.


Figure 4.1: An artificial dataset with 20 bags.

And in the sample version,

\[ \Pr(x|b) = \begin{cases} \dfrac{1}{n} & \text{if } x \in b, \\ 0 & \text{otherwise,} \end{cases} \tag{4.4} \]

where n is the number of instances inside b. This is true no matter what the distribution Pr(X) is, as long as the instances are randomly sampled. Then, for any formulation of the class label property of a bag, C(Y|b), according to our collective assumption we associate it with the instances by calculating the conditional expectation over X, which results in Equation 3.1, discussed in the second question in Section 3.1 of Chapter 3.

In the sample version, we substitute Pr(x|b) in Equation 3.1 with that in Equation 4.4 and use the sum instead of the integral. If C(Y|·) is Pr(Y|·), then we get Equation 4.1, which is the arithmetic average of the corresponding instance probabilities. If C(Y|·) is $\log\frac{\Pr(Y=1|\cdot)}{\Pr(Y=0|\cdot)}$, then we obtain Equation 4.2, which is the normalized geometric average of the instance probabilities. Note that in this model introducing the bags B does not change the joint distribution Pr(X, Y). It only casts a new condition so that the class labels of the instances are “masked” by the “collective” class label.


The above generative models are better illustrated by an artificial dataset, which will

also be used in later sections. We consider an artificial domain with two independent

attributes. More specifically, we used the same mechanism for generating artificial data as that in Section 3.2 of Chapter 3, except that we changed the density of X, Pr(X), and used a different linear logistic model. Now the density function, along each dimension, is a triangle distribution instead of a uniform distribution:

\[ f(x) = 0.2 - 0.04|x| \]

And the instance-level class probability was defined by the linear logistic model

\[ \Pr(y=1|x_1,x_2) = \frac{1}{1+e^{-x_1-2x_2}} \]

Thus the instance-level decision boundary pattern is still a hyperplane, but different from the one modeled in the artificial data in Section 3.2. We changed these in order to demonstrate that our framework can deal with any form of Pr(X) and Pr(Y|X), as long as the correct family of Pr(Y|X) (in this case the linear logistic family) and the correct underlying assumption (in this case the collective assumption) are chosen. Finally, we took Equation 4.2 to calculate Pr(y|b). Again we labeled each bag according to its class probability. The class labels of the instances are not observable.

Now we have constructed a dataset based on the “hyperplane (or linear boundary) and collective assumption” combination instead of the “APR-like (or quadratic boundary) and MI assumption” combination used in the APR algorithms [Dietterich et al., 1997] and DD [Maron, 1998].

Figure 4.1 shows a dataset with 20 bags that was generated according to this generative model. As in Chapter 3, the black line in the middle is the instance-level decision boundary (i.e. where Pr(y=1|x_1,x_2) = 0.5), and the sub-space on the right side has instances with a higher probability of being positive. A rectangle indicates the region used to sample points for the corresponding bag (and a dot indicates its centroid). The top-left corner of each rectangle shows the bag index, followed by


the number of instances in the bag. Bags in gray belong to class “negative” and bags in black to class “positive”. Note that bags can be on the “wrong” side of the instance-level decision boundary because each bag was labeled by flipping a coin based on the normalized geometric average class probability of the instances in it.
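To make the construction concrete, this data-generation scheme can be sketched in a few lines of Python. The rectangle-around-a-centroid construction and the sizes used here are our reading of the Section 3.2 mechanism, not the thesis's exact parameters:

```python
import math
import random

def triangle_sample():
    # f(x) = 0.2 - 0.04|x| on [-5, 5]: the difference of two uniform
    # variables, scaled by 5, has exactly this triangular density
    return 5.0 * (random.random() - random.random())

def instance_prob(x1, x2):
    # Pr(y = 1 | x1, x2) = 1 / (1 + e^(-x1 - 2*x2))
    return 1.0 / (1.0 + math.exp(-x1 - 2.0 * x2))

def make_bag(n_instances=5, half_width=1.0):
    """Generate one bag under the 'linear boundary + collective
    assumption' model: instances sampled in a rectangle around a
    triangle-distributed centroid, bag probability via the normalized
    geometric average (Equation 4.2), bag label drawn by a coin flip."""
    cx, cy = triangle_sample(), triangle_sample()
    bag = [(cx + random.uniform(-half_width, half_width),
            cy + random.uniform(-half_width, half_width))
           for _ in range(n_instances)]
    pos = math.prod(instance_prob(x1, x2) for x1, x2 in bag) ** (1.0 / n_instances)
    neg = math.prod(1.0 - instance_prob(x1, x2) for x1, x2 in bag) ** (1.0 / n_instances)
    p_bag = pos / (pos + neg)
    label = 1 if random.random() < p_bag else 0   # coin flip on Pr(y|b)
    return bag, p_bag, label
```

Bags generated this way can land on the “wrong” side of the decision boundary, exactly as in Figure 4.1, because the label is sampled from the bag probability rather than thresholded.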

4.3 An Assumption-based Upgrade

In this section we first show how to solve the above problem in the artificial domain. Since the generative model is a linear logistic model, we can upgrade linear logistic regression to solve this problem exactly. Then we generalize the underlying ideas to a general framework for upgrading single-instance learners based on some assumptions, which also covers the APR algorithms [Dietterich et al., 1997] and the DD algorithm [Maron, 1998]. Finally, within this framework, we also show how to upgrade the AdaBoost algorithm [Freund and Schapire, 1996] to deal with MI data.

First we upgrade linear logistic regression together with the collective assumption so that it can deal with MI data. Note that normal linear logistic regression can no longer be applied here because the class labels of the instances are masked by the “collective” class label of a bag. Suppose we knew exactly what the collective assumption is, say, Equation 4.2. Then we could first construct the probability Pr(Y|b) using Equation 4.2 and estimate the parameters (the coefficients of the attributes in this case) using the standard maximum binomial likelihood method. In this way we fully “recover” the instance-level probability function in spite of the fact that the class labels are masked. When a test bag is seen, we can calculate the class probability Pr(Y|b_test) according to the “recovered” probability estimate and the same assumption we used at training time. The classification is based on Pr(Y|b_test). Mathematically, in the logistic model, Pr(Y=1|x) = 1/(1+exp(−βx)) and Pr(Y=0|x) = 1/(1+exp(βx)), where β is the vector of parameters to be estimated. According to


Equation 4.2, we construct:

\[ \begin{cases} \Pr(Y=1|b) = \dfrac{[\prod_{i}\Pr(y=1|x_i)]^{1/n}}{[\prod_{i}\Pr(y=1|x_i)]^{1/n}+[\prod_{i}\Pr(y=0|x_i)]^{1/n}} = \dfrac{\exp(\frac{1}{n}\beta\sum_i x_i)}{1+\exp(\frac{1}{n}\beta\sum_i x_i)} \\[2ex] \Pr(Y=0|b) = \dfrac{[\prod_{i}\Pr(y=0|x_i)]^{1/n}}{[\prod_{i}\Pr(y=1|x_i)]^{1/n}+[\prod_{i}\Pr(y=0|x_i)]^{1/n}} = \dfrac{1}{1+\exp(\frac{1}{n}\beta\sum_i x_i)} \end{cases} \]

Then we model the class label determination process of each bag as a one-stage Bernoulli process. Thus the binomial log-likelihood function is:

\[ LL = \sum_{i=1}^{N} \left[ Y_i \log \Pr(Y=1|b_i) + (1-Y_i)\log \Pr(Y=0|b_i) \right] \tag{4.5} \]

where N is the number of bags. By maximizing the likelihood function in Equation 4.5 we can estimate the parameters β. Maximum likelihood estimates (MLEs) are known to be asymptotically unbiased, as illustrated in Section 4.4. This formulation is based on the assumption of Equation 4.2. In practice, it is impossible to know the underlying assumptions, so other assumptions may also apply, for instance the assumption of Equation 4.1. In that case, the log-likelihood function of Equation 4.5 remains unchanged but the formulation of Pr(Y|b) is changed to Equation 4.1. We call the former method “MILogisticRegressionGEOM” and the latter “MILogisticRegressionARITH” in this chapter.
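The algebra behind MILogisticRegressionGEOM — that the normalized geometric average of logistic instance probabilities collapses to the logistic function of the bag's mean linear score — can be verified numerically. A quick self-contained check (the helper name is ours):

```python
import math

def check_geom_identity(beta, bag):
    """Numerically compare the Equation 4.2 combination of logistic
    instance probabilities (lhs) with the logistic function of the
    bag's mean linear score (rhs); they should coincide."""
    n = len(bag)
    scores = [sum(b * xk for b, xk in zip(beta, x)) for x in bag]
    probs = [1.0 / (1.0 + math.exp(-s)) for s in scores]
    pos = math.prod(probs) ** (1.0 / n)
    neg = math.prod(1.0 - p for p in probs) ** (1.0 / n)
    lhs = pos / (pos + neg)                          # Equation 4.2
    rhs = 1.0 / (1.0 + math.exp(-sum(scores) / n))  # logistic of mean score
    return lhs, rhs
```

For any β and any bag the two values agree, which is exactly why the bag-level model of MILogisticRegressionGEOM stays inside the linear logistic family.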

As usual, the maximization of the log-likelihood function is carried out via numeric optimization because there is no analytical form of the solution in our model.³ Based on the “Rule of Parsimony”, we want as few parameters as possible. In the case of linear logistic regression, we only search for parameter values around zero. Thus the linear pattern means that only local optimization is needed, which saves us considerable computational cost. The radial formulation in DD [Maron, 1998], on the other hand, implies a complicated global optimization problem.
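To make the estimation procedure concrete, here is a minimal pure-Python sketch of MILogisticRegressionGEOM. It exploits the identity that, under Equation 4.2, the bag probability is the logistic function of β applied to the bag's mean instance, and it uses plain gradient ascent in place of the thesis's numeric optimizer; the intercept is omitted and all names and hyper-parameters are illustrative:

```python
import math

def _sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_mi_logistic_geom(bags, labels, lr=0.1, epochs=2000):
    """Maximize the binomial log-likelihood of Equation 4.5 under the
    Equation 4.2 assumption.  Each bag enters only through its mean
    instance, since Pr(Y=1|b) = sigmoid(beta . mean(x))."""
    d = len(bags[0][0])
    means = [[sum(x[k] for x in b) / len(b) for k in range(d)] for b in bags]
    beta = [0.0] * d
    for _ in range(epochs):
        for m, y in zip(means, labels):
            p = _sigmoid(sum(bk * mk for bk, mk in zip(beta, m)))
            for k in range(d):
                beta[k] += lr * (y - p) * m[k]  # log-likelihood gradient step
    return beta

def predict_bag(beta, bag):
    """Pr(Y=1 | bag) under the same geometric (Equation 4.2) assumption."""
    d = len(beta)
    mean = [sum(x[k] for x in bag) / len(bag) for k in range(d)]
    return _sigmoid(sum(bk * mk for bk, mk in zip(beta, mean)))
```

MILogisticRegressionARITH would differ only in replacing the bag-mean shortcut with the arithmetic average of the instance probabilities (Equation 4.1) inside the likelihood.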

In general, since many single-instance learners amount to minimizing the expected loss function E_X E_{Y|X}[Loss(X, β)] in order to estimate the parameter β in Pr(Y|X), we can upgrade any normal single-instance learner that factorizes Pr(X, Y) into Pr(Y|X)Pr(X) and estimates Pr(Y|X) directly. This category covers a wide range of single-instance learners, thus the method is quite general. It involves four steps:

3 The choice of numeric optimization methods used in this thesis is discussed in Chapter 7.

1. Based on whatever assumptions are believed appropriate, build a relationship between the class probability of a bag b, Pr(Y|b), and that of its instances x_b, Pr(Y|x_b), i.e. Pr(Y|b) = g(Pr(Y|x_b)). Since Pr(Y|X) is usually defined by the single-instance learner under consideration, the only thing to decide is g(·).

2. Construct a loss function at the bag level and take the expectation over all the bags instead of over instances, i.e. the expected loss function is now E_B E_{Y|B}[Loss(B, β)]. Note that the parameter vector β is the instance-level parameter vector because it determines Pr(Y|X) (or its transformations).

3. Minimize the bag-level loss function to estimate β.

4. When given a new bag b_test, first calculate Pr(Y|x_test, β) and then calculate Pr(Y|b_test) = g(Pr(Y|x_test, β)) based on the same assumption used in Step 1. Then classify b_test according to whether Pr(Y|b_test) is above 0.5.

The negative binomial log-likelihood is a loss function (also called deviance or cross-entropy loss). Thus the above two MI linear logistic regression methods fit into this framework. They use the linear logistic formulation for Pr(Y|X) and the collective assumption. The Diverse Density (DD) algorithm [Maron, 1998] also uses the maximum binomial likelihood method, thus it can easily be recognized as a member of this framework. It uses a radial formulation for Pr(Y|X) and the MI assumption. The APR algorithms [Dietterich et al., 1997], on the other hand, directly minimize the misclassification error loss at the bag level, and the bounds of the APR are the parameters to be estimated for Pr(Y|X). Since the APR algorithms regard the instance-level classification as a deterministic process, Pr(Y=1|x) can be regarded as 1 for any instance x within the APR, and Pr(Y=1|x) = 0 otherwise.

Note that in this framework, significant changes to the underlying single-instance algorithms seem inevitable. This is in contrast to the heuristic wrapper method presented in Chapter 3.


1. Initialize the weight of each bag: W_i = 1/N, i = 1, 2, …, N.

2. Repeat for m = 1, 2, …, M:

(a) Set W_{ij} ← W_i/n_i, assign the bag’s class label to each of its instances, and build an instance-level model h_m(x_{ij}) ∈ {−1, 1}.

(b) Within the i-th bag (with n_i instances), compute the error rate e_i ∈ [0, 1] by counting the number of misclassified instances within that bag, i.e. e_i = Σ_j 1(h_m(x_{ij}) ≠ y_i)/n_i.

(c) If e_i < 0.5 for all i, STOP iterations and go to Step 3.

(d) Compute c_m = argmin_{c_m} Σ_i W_i exp[(2e_i − 1)c_m].

(e) If c_m ≤ 0, STOP iterations and go to Step 3.

(f) Set W_i ← W_i exp[(2e_i − 1)c_m] and renormalize so that Σ_i W_i = 1.

3. Return sign[Σ_j Σ_m c_m h_m(x_{test,j})] for a test bag with instances x_{test,j}.

Table 4.1: The upgraded MI AdaBoost algorithm.
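The algorithm of Table 4.1 can be sketched in Python as follows. The decision-stump weak learner and the ternary search standing in for the Quasi-Newton optimization of Step 2d are our simplifications, and when a weak model already classifies every bag correctly we keep it with weight 1 before stopping:

```python
import math

def stump_learner(X, y, w):
    """Weighted decision stump: returns predict(x) -> {-1, +1}.
    A stand-in for the 'weak' single-instance learner of Step 2a."""
    best_err, best_k, best_t, best_pol = float("inf"), 0, 0.0, 1
    for k in range(len(X[0])):
        for t in sorted({x[k] for x in X}):
            for pol in (1, -1):
                err = sum(wi for x, yi, wi in zip(X, y, w)
                          if (pol if x[k] <= t else -pol) != yi)
                if err < best_err:
                    best_err, best_k, best_t, best_pol = err, k, t, pol
    k, t, pol = best_k, best_t, best_pol
    return lambda x: pol if x[k] <= t else -pol

def mi_adaboost(bags, labels, M=10):
    """Sketch of the MI AdaBoost of Table 4.1 (bag labels in {-1, +1})."""
    N = len(bags)
    W = [1.0 / N] * N                                  # Step 1
    models = []
    for _ in range(M):
        X, y, w = [], [], []
        for b, yi, Wi in zip(bags, labels, W):         # Step 2a
            for x in b:
                X.append(x); y.append(yi); w.append(Wi / len(b))
        h = stump_learner(X, y, w)
        e = [sum(h(x) != yi for x in b) / len(b)       # Step 2b
             for b, yi in zip(bags, labels)]
        if all(ei < 0.5 for ei in e):                  # Step 2c
            models.append((h, 1.0))
            break
        g = lambda c: sum(Wi * math.exp((2 * ei - 1) * c)
                          for Wi, ei in zip(W, e))
        lo, hi = 0.0, 10.0                             # Step 2d (ternary search)
        for _ in range(200):
            c1, c2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
            if g(c1) < g(c2):
                hi = c2
            else:
                lo = c1
        c = (lo + hi) / 2
        if c <= 1e-8:                                  # Step 2e
            break
        models.append((h, c))
        W = [Wi * math.exp((2 * ei - 1) * c) for Wi, ei in zip(W, e)]
        total = sum(W)                                 # Step 2f
        W = [Wi / total for Wi in W]
    def predict(bag):                                  # Step 3
        score = sum(c * sum(h(x) for x in bag) / len(bag) for h, c in models)
        return 1 if score >= 0 else -1
    return predict
```

Note how bags with more misclassified instances receive larger weight updates, matching the intuition discussed below the table.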

Linear logistic regression and the quadratic formulation (as in DD) assume a limited family of underlying patterns. There are more flexible single-instance learners, like boosting and the support vector machine (SVM) algorithms, that can model larger families of patterns. However, the general upgrading framework presented above means that, under certain assumptions, one can model a wide range of decision boundary patterns. Here we provide an example of how to upgrade the AdaBoost algorithm into an MI learner based on the collective assumption. Intuitively, AdaBoost can easily be wrapped around an MI algorithm (without changes to the AdaBoost algorithm), but since there are not many “weak” MI learners available we are more interested in taking single-instance learners as the base classifiers of the upgraded method.

AdaBoost originated in the computational learning theory domain [Freund and Schapire, 1996], but received a statistical explanation later on [Friedman, Hastie and Tibshirani, 2000]. It can be shown that it aims to minimize an exponential loss function in a forward stagewise manner and ultimately estimates $\frac{1}{2}\log\frac{\Pr(Y=1|X)}{\Pr(Y=-1|X)}$ (based on an additive model) [Friedman et al., 2000].⁴ Now, under the collective assumption, we use Equation 4.2 because the underlying single-instance learner (i.e. normal AdaBoost) estimates the log-odds function. (We could also use the standard MI assumption here, but this would necessarily introduce the max function, which makes the optimization much harder.) We first describe the upgraded AdaBoost algorithm in Table 4.1 and then briefly explain the derivation. The notation used in Table 4.1 is as follows: N is the number of bags and we use the subscript i to denote the i-th bag, where i = 1, 2, …, N. Suppose there are n_i instances within the i-th bag. Then we use the subscript j to refer to the j-th instance, where j = 1, 2, …, n_i. Therefore x_{ij} denotes the j-th instance in the i-th bag.

4 Note that in AdaBoost, class labels are coded as {−1, 1} instead of {0, 1}.

The derivation follows exactly the same line of thinking as that in [Friedman et al., 2000]. In the derivation below we regard the expectation sign E as the sample average instead of the population expectation. We are now looking for a function over all the bags, F(B), that minimizes E_B E_{Y|B}[exp(−yF(B))], where, given a bag b,

\[ F(b) = \sum_{n} F(x_b)/n. \tag{4.6} \]

We want to expand F(B) to F(B) + c_m f(B) in each iteration, with the restriction c_m > 0. First, given c_m > 0 and the current bag-level weights W_B = exp(−yF(B)), we are searching for the best f(B). After second-order expansion of exp(−y c_m f(B)) about f(B) = 0, we are seeking the maximum of E_W[y f(B)]. If we had an MI base learner in hand, we could estimate f(B) directly. However, we are interested in wrapping around a single-instance learner, thus we expand f(B) for each bag b according to Equation 4.6: f(b) = Σ_n h(x_b)/n, where h(x_b) ∈ {−1, 1}. Now we are seeking h(·) to maximize

\[ E_W[y\,h(x_b)/n] = \sum_{i=1}^{N}\sum_{j=1}^{n_i}\left[\frac{1}{n_i}W_i \Pr(y=1|b_i)\,h(x_{ij}) - \frac{1}{n_i}W_i \Pr(y=-1|b_i)\,h(x_{ij})\right] \]

The solution is

\[ h(x_{ij}) = \begin{cases} 1 & \text{if } \frac{W_i}{n_i}\Pr(y=1|b_i) - \frac{W_i}{n_i}\Pr(y=-1|b_i) > 0 \\ -1 & \text{otherwise} \end{cases} \]

This formula simply means that we are looking for the function h(·) at the instance


level such that, given a weight of W_i/n_i for each instance, its value is determined by the probability Pr(y|b_i). Note that this probability is the same for all the instances in the bag. Since the class label of each bag reflects its probability, we can assign the class label of a bag to its instances. Then, with the weights W_i/n_i, we use a single-instance learner to provide the value of h(·). This constitutes Step 2a in the algorithm in Table 4.1. Note that in Chapter 3, we proposed a wrapper method to apply normal single-instance learners to MI problems. There, it is a heuristic to assign the class label of each bag to the instances pertaining to it, because we minimize the loss function within the underlying (instance-level) learner. Since the underlying learners are at the instance level, they require the class label for each instance instead of each bag. That is why it is a heuristic. Here we only use the underlying instance-level learners to estimate h(·), and our objective is to estimate h(·) such that y h(x_b)/n is maximized over all the (weighted) instances, where y is the bag’s class label. Therefore assigning the class label of a bag to its instances is actually our aim here, because we are trying to minimize a bag-level loss function. Hence it is not a heuristic or approximation in this method.

Next, if we average −y_i h(x_{ij}) for each bag, we get −y_i f(b_i) = 2e_i − 1, where e_i = Σ_j 1(h_m(x_{ij}) ≠ y_i)/n_i, and this is Step 2b in the algorithm. Then, given −y f(B) ∈ [−1, 1] (more precisely, −y f(B) ∈ {−1, −1 + 1/n, −1 + 2/n, …, 1 − 2/n, 1 − 1/n, 1}), we are looking for the best c_m > 0. To do this we can directly optimize the objective function

\[ E_B E_{Y|B}[\exp(-y(F(B) + c_m f(B)))] = \sum_i W_i \exp\!\left[c_m \cdot \left(-y\,\frac{\sum_j h(x_{ij})}{n_i}\right)\right] = \sum_i W_i \exp[(2e_i - 1)\,c_m]. \]

This constitutes Step 2d.

This objective function will not have a global minimum if all e_i < 0.5. The interpretation is analogous to that in mono-instance AdaBoost. In normal AdaBoost, the objective function has no global minimum if every instance is correctly classified based on the sign of f(x) (i.e. zero misclassification error is achieved). Here we can classify a bag b according to f(b) = Σ_n h(x_b)/n. Then −y_i f(b_i) = 2e_i − 1 simply means that if the error rates e_i within all the bags are less than 0.5, all the bags will be correctly classified, because all the f(b_i) will have the same sign as y_i. Thus we have reached zero error and no more boosting can be done — as in normal AdaBoost. Therefore we check this in Step 2c.

The solution to this optimization problem may not have an easy analytical form. However, it is a one-dimensional optimization problem. Hence the Newton family of optimization techniques can find a solution in super-linear time [Gill, Murray and Wright, 1981], and the computational cost is negligible compared to the time needed to build the “weak” classifier. Therefore we simply search for c_m using a Quasi-Newton method in Step 2d.

Note that c_m is not necessarily positive. If it is negative, we can simply reverse the prediction of the weak classifier and get a positive c_m (this can be done automatically in Step 2f). However, we bow to the AdaBoost convention and restrict c_m to be positive. Thus we check that in Step 2e.

Finally, we update the bag-level weights in Step 2f according to the additive structure of F(B), in the same way as in normal AdaBoost. Note that if a bag has more misclassified instances in it, it gets a higher weight in the next iteration, which is intuitively appealing. Another appealing property of this algorithm is that if there is only one instance per bag, i.e. the data is actually single-instance data, the algorithm naturally degrades to normal AdaBoost. To see why, note that the solution for c_m in Step 2d will be exactly $\frac{1}{2}\log\frac{1-err_w}{err_w}$, where err_w is the weighted error. Hence the weight update will also be the same as in AdaBoost.
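This degenerate single-instance case can be checked numerically: with one instance per bag, each e_i is either 0 or 1, the weighted error err_w is the total weight of the misclassified bags, and the Step 2d minimizer should match the familiar AdaBoost weight ½ log((1 − err_w)/err_w). A small sketch, with a ternary search standing in for the Quasi-Newton step:

```python
import math

def step2d_minimizer(W, e):
    """Minimize g(c) = sum_i W_i * exp((2*e_i - 1) * c) over c >= 0 by
    ternary search on [0, 10] (g is convex in c)."""
    g = lambda c: sum(Wi * math.exp((2 * ei - 1) * c) for Wi, ei in zip(W, e))
    lo, hi = 0.0, 10.0
    for _ in range(300):
        c1, c2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if g(c1) < g(c2):
            hi = c2
        else:
            lo = c1
    return (lo + hi) / 2

# With one-instance bags, e_i is 0 (correct) or 1 (wrong), so
# g(c) = (1 - err_w) * exp(-c) + err_w * exp(c), whose minimizer is
# 0.5 * log((1 - err_w) / err_w) -- the normal AdaBoost weight.
```

Setting all e_i to 0 or 1 and comparing the searched minimizer against the closed form confirms the degeneration claim.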

It is easy to see that, to classify a test bag, we can simply regard F(B) as the bag-level log-odds function and take Equation 4.2 to make a prediction.


Figure 4.2: Parameter estimation of MILogisticRegressionGEOM on the artificial data. [Plot: estimated coefficients 1 and 2 and the estimated intercept against the number of exemplars.]

Figure 4.3: Test error of MILogisticRegressionGEOM and the MI AdaBoost algorithm on the artificial data. [Plot: test error rate (%) against the number of training exemplars, compared with the test error of the model with the true parameters.]

4.4 Property Analysis and Regularization Techniques

In this section we first show some properties of the upgraded MI learners. We use

the artificial data generated according to the scheme in Section 4.2 to show some

asymptotic properties of the probability estimates using an MI logistic regression

algorithm. Next we show the test error of the MI AdaBoost algorithm on the arti-

ficial data to see whether it can perform well on classifying MI data. Finally, since

the “number of iterations” property of the boosting algorithms is usually of inter-

est, we show this property for our MI AdaBoost algorithm against the training and

test errors on the Musk1 dataset. We observe that its behavior is very similar to

its single-instance predecessor. After analyzing these properties, we introduce some regularization methods for the above algorithms in order to increase their generalization power on practical datasets.

Because we know the artificial data described in Section 4.2 is generated using a

linear logistic model based on the normalized geometric average formulation from

Equation 4.2, “MILogisticRegressionGEOM” is the natural candidate to deal with

this specific MI dataset. Figure 4.2 shows the parameters estimated by this method

when the number of bags increases. It can be seen that the estimates converge to

the true parameters asymptotically. The more training bags, the more stable the

estimates. This is a consequence of the fact that the class probability estimates of

this algorithm are consistent (or asymptotically unbiased) by virtue of the maximum likelihood method.

Figure 4.4: Error of the MI AdaBoost algorithm on the Musk1 data. [Plot: training error, training RRSE, and test error (%) against the number of boosting iterations.]

The consistency of the MLE can be proven under fairly

general conditions [Stuart, Ord and Arnold, 1999]. In this specific artificial data, we used a triangle distribution for the density of X, as explained in Section 4.2. However, we observed that the exact form of this density does not matter. This is reasonable because our model only assumes random sampling from the distribution corresponding to a bag.

MILogisticRegressionGEOM successfully “recovers” the instance-level class probability function (and the underlying assumption also holds for the test data). Consequently, we expect it to achieve the optimal error rate on this data. As explained before, the MI AdaBoost algorithm is also based on Equation 4.2, thus it is also expected to perform well in this case. To test this, we first generated an independent test dataset of 10000 bags, then generated training data (with different random seeds) for different numbers of training bags. Decision stumps were used as the weak learner and the maximum number of boosting iterations was set to 30. The test error of both methods against different numbers of training bags is plotted in Figure 4.3. When the number of training bags increases, the error rate approaches the best error rate, and eventually both methods come close to the “perfect” performance. However, unlike MILogisticRegressionGEOM, MI AdaBoost cannot achieve the exact best error rate, mainly because in this case we are approximating a line (the decision boundary) using axis-parallel rectangles, based only on a finite amount of data.


Finally, people are often interested in how many iterations are needed in boosting algorithms, and this is also an issue in our MI AdaBoost algorithm. Cross-validation (CV) is a common way to find the best number of iterations. It is known that even after the training error ceases to decrease, it is still worthwhile to keep boosting in order to increase the confidence (or margin) of the classifier [Witten and Frank, 1999]. In the two-class case (as in the Musk datasets), increasing the margin is equivalent to reducing the estimated root relative squared error (RRSE) of the probability estimates. It turns out that this statement also holds well for MI AdaBoost. As an example, we show the results of MI AdaBoost on the Musk1 dataset in Figure 4.4. The training error is reduced to zero after about 100 iterations, but the RRSE keeps decreasing until around 800 iterations. The test error, averaged over 10 runs of 10-fold CV, also reaches a minimum at around 800 iterations, namely 10.44% (standard deviation: 2.62%). After 800 iterations, the RRSE does not seem to improve any further and the test error rises due to overfitting. The error rate is quite low for the Musk1 dataset, as will be shown in Section 4.5. However, boosting on decision stumps is too slow for the Musk2 dataset. We observed that the RRSE does not settle down even after 8000 iterations. It is therefore computationally too expensive to be used in practice, and we present results based on regularized decision trees as the "weak" classifier.
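For concreteness, the RRSE measure used above can be sketched as follows (a minimal illustration, not the thesis's code; the function name and toy data are made up):

```python
import math

def rrse(predicted, actual):
    """Root relative squared error: the squared error of the predictions,
    relative to the squared error of always predicting the mean of `actual`."""
    mean_actual = sum(actual) / len(actual)
    num = sum((p - a) ** 2 for p, a in zip(predicted, actual))
    den = sum((a - mean_actual) ** 2 for a in actual)
    return math.sqrt(num / den)

# Perfect probability estimates give RRSE 0; predicting the class
# prior for every bag gives RRSE 1.
actual = [1.0, 0.0, 1.0, 0.0]
print(rrse(actual, actual))     # 0.0
print(rrse([0.5] * 4, actual))  # 1.0
```

Values below 1 thus indicate probability estimates that beat the trivial prior-only predictor.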

Regularization is commonly used in single-instance learning. It introduces a bias in the probability estimates in order to increase generalization performance by reducing the chance of overfitting. It works because the underlying assumptions of a learner rarely hold perfectly in practice. In MI learning, regularization can be naturally inherited from the corresponding single-instance learner if we upgrade one of them. For example, in the MI support vector machine (SVM) [Gärtner et al., 2002], there is a model complexity parameter that controls the scale of the regularization. For non-standard methods like the APR algorithms, other techniques like kernel density estimation (KDE) [Dietterich et al., 1997] are adopted to achieve the effect of regularization. We can also impose regularization in our methods, as follows.

For single-instance linear logistic regression, a shrinkage method is proposed in


[le Cessie and van Houwelingen, 1992]. We adopt this "ridge regression" method here. Thus we simply add an $L_2$ penalty term $\lambda\|\beta\|^2$ to the likelihood function in Equation 4.5, where $\lambda$ is the ridge parameter. Note that because ridged logistic regression is not invariant along each dimension, we need to standardize the data to zero mean and unit standard deviation before fitting the model, and transform the model back after fitting [Hastie et al., 2001]. The mean and standard deviation used in the standardization in our MI logistic regression methods are estimated from the weighted instance data, with each instance weighted by the inverse of the number of instances in the corresponding bag. This is done because intuitively we would like to give equal weight to every bag. Note that unlike some current MI methods that change the data, we do not pre-process the data: the standardization is simply part of the algorithm.
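A minimal sketch of this bag-weighted standardization for a single attribute (illustrative only; the function name and toy bags are our own):

```python
def bag_weighted_standardize(bags):
    """Standardize one attribute to zero mean / unit standard deviation,
    weighting each instance by 1/(bag size) so that every bag contributes
    equally.  `bags` is a list of bags; a bag is a list of values."""
    pairs = [(x, 1.0 / len(b)) for b in bags for x in b]
    wsum = sum(w for _, w in pairs)              # equals the number of bags
    mean = sum(w * x for x, w in pairs) / wsum
    var = sum(w * (x - mean) ** 2 for x, w in pairs) / wsum
    sd = var ** 0.5
    return [[(x - mean) / sd for x in b] for b in bags]

# The large bag no longer dominates the mean: each bag has total weight 1.
bags = [[0.0], [2.0, 2.0, 2.0, 2.0]]
std = bag_weighted_standardize(bags)
print(std)  # [[-1.0], [1.0, 1.0, 1.0, 1.0]]
```

With unweighted standardization the second bag's four instances would pull the mean toward 2; the per-bag weights keep both bags equally influential.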

In boosting with decision trees, both the tree size and the number of iterations determine the degrees of freedom, and there are several ways to regularize them in single-instance learning [Hastie et al., 2001]. Because C4.5 [Quinlan, 1993] does not have an explicit option to specify how many nodes a tree will have, we use an alternative way to shrink the tree size: we specify a larger minimal number of (weighted) instances for the leaf nodes (the default setting in C4.5 is two). Together with the restriction on the number of iterations, this achieves a very coarse form of regularization in our MI AdaBoost algorithm. By enlarging the minimal number of leaf-node instances, and thus shrinking the tree size, we effectively make the tree learner "weaker". We will only show experimental results for the MI AdaBoost algorithm based on these regularized trees.

4.5 Experimental Results

In this section we present experimental results for our MI algorithms on the Musk benchmark datasets [Dietterich et al., 1997]. As already discussed in Chapter 1, there are two overlapping datasets describing the musk activity prediction problem, namely the Musk1 and Musk2 data. Some properties of the datasets are


                           Musk 1    Musk 2
Number of bags                 92       102
Number of attributes          166       166
Number of instances           476      6598
Number of positive bags        47        39
Number of negative bags        45        63
Average bag size             5.17     64.69
Median bag size                 4        12
Minimum bag size                2         1
Maximum bag size               40      1044

Table 4.2: Properties of the Musk 1 and Musk 2 datasets.

summarized in Table 4.2.

Table 4.3 shows the performance of related MI methods on the Musk datasets, as well as that of our methods developed in this chapter. The evaluation is either by leave-one-out (LOO) or by 10-fold cross-validation (CV) at the bag level. In the first part we include some of the current solutions to MI learning problems. All the methods that depend on the "APR-like pattern and MI assumption" combination are shown. They are: the best of the APR algorithms, iterated-discrim APR with KDE [Dietterich et al., 1997], the Diverse Density algorithm [Maron, 1998]5 and the MULTINST algorithm [Auer, 1997]. Although MI Neural Networks [Ramon and Raedt, 2000] do not model the APR-like pattern exactly, they depend heavily on the MI assumption and follow a line of thinking very similar to the above methods. The pattern that they model really depends on the complexity of the networks that are built. Thus we also include them. It can be seen that the models based on the "linear pattern and collective assumption" combination can also achieve comparably good results on the Musk datasets. Therefore we believe that in practice what really matters is the right combination of the decision boundary pattern and

5 Note that we do not include the EM-DD algorithm [Zhang and Goldman, 2002] here even though it was reported to have the best performance on the Musk datasets. We do so for two reasons: (1) there were some errors in the evaluation process of EM-DD; (2) it can be shown via some theoretical analysis and an artificial counter-example (Appendix D) that EM-DD cannot find a maximum likelihood estimate (MLE) in general, due to the (tricky) likelihood function it aims to optimize. Since the DD algorithm is a maximum likelihood method, the solution that EM-DD finds cannot be trusted if it fails to find the MLE.


Methods                              Musk 1                  Musk 2
                                  LOO     10CV            LOO     10CV
iterated-discrim APR with KDE      –       7.6             –      10.8
maxDD                              –      11.1             –      17.5
MULTINST                           –      23.3             –      16.0
MI Neural Networks (a)             –      12.0             –      18.0
SVM with the MI kernel           13.0   13.6±1.1          7.8   12.0±1.0
Citation-kNN                      7.6      –              13.7      –
MILogisticRegressionGEOM        13.04  14.13±2.23        17.65  17.74±1.17
MILogisticRegressionARITH       10.87  13.26±1.83        16.67  15.88±1.29
MI AdaBoost with 50 iterations  10.87  12.07±1.95        15.69  15.98±1.31

(a) It was not clear which evaluation method the MI Neural Network used. We put
it into the 10CV column just for convenience.

Table 4.3: Error rate estimates from either 10 runs of stratified 10-fold cross-validation or leave-one-out evaluation. The standard deviation of the estimates (if available) is also shown.

the assumption. Simple patterns and assumptions together may be sufficient.

There are many methods that aim to upgrade single-instance learners to deal with MI data, as described in Chapter 2. Although they are definitely not in the same category as the methods in the first part, and their assumptions and generative models are not totally clear, we include some of them in the second part. Because there are too many to list here, we simply chose the ones with the best performance on the Musk datasets: the SVM with the MI kernel [Gärtner et al., 2002] and an MI k-nearest-neighbour algorithm, Citation-kNN [Wang and Zucker, 2000]. For the SVM with the MI kernel, the evaluation is via leave-10-bags-out, which is similar to 10-fold CV because the total number of bags in either dataset is close to 100. Thus we put its results into the "10CV" columns.

Finally, in the third part we present results for the methods from this chapter. Since we introduced regularization, as also used by some other methods like the SVM, there is the possibility of tuning the regularization parameters. However, we avoided extensive parameter tuning. In both MI logistic regression methods, we used a small, fixed regularization parameter $\lambda = 2$. In our MI AdaBoost algorithm, we restricted ourselves to 50 iterations to control the degrees of freedom. The base classifier used


is an unpruned C4.5 decision tree that is nevertheless not fully expanded (as explained above). Instead of using the default setting of a minimum of 2 instances per leaf, we set it to 2 bags per leaf. More specifically, we used the average number of instances per bag shown in Table 4.2 as the size of one bag, and thus used 10 (instances) for Musk1 and 120 (instances) for Musk2.
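The arithmetic behind this setting can be sketched as follows (a toy helper of our own; the thesis rounds the Musk2 value down to 120):

```python
def min_leaf_instances(num_instances, num_bags, bags_per_leaf=2):
    """Translate a per-leaf minimum expressed in *bags* into the
    per-leaf minimum in (weighted) *instances* that a C4.5-style
    option expects, using the average bag size from Table 4.2."""
    avg_bag_size = num_instances / num_bags
    return bags_per_leaf * avg_bag_size

print(round(min_leaf_instances(476, 92)))    # Musk1: 10
print(round(min_leaf_instances(6598, 102)))  # Musk2: 129 (thesis uses 120)
```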

As can be seen, the performance of our methods is comparable to some of the best methods in MI learning, and compares favorably to most of the "APR-like + MI assumption" family. More importantly, since the generative models and the assumptions of our methods are totally clear, it is easy to assess their applicability when new MI data becomes available. On the Musk datasets, logistic regression based on the arithmetic average of the instances' class probabilities seems to perform better than that based on the normalized geometric average, so the assumptions of this method may be more realistic in this case. Finally, the overall performance of our methods is also comparable to that of the wrapper method described in Chapter 3, though slightly worse on the Musk2 data. Although the methods in this chapter are exact solutions to the underlying generative model, they do not seem superior to the heuristic methods with biased probability estimates. This is especially true when the assumptions of the heuristic methods hold reasonably well in reality, which seems to be the case for the Musk datasets.

4.6 Related Work

The Diverse Density (DD) algorithm [Maron, 1998] is the only probabilistic model for MI learning in the literature, and hence it is the most closely related work. DD also models $\Pr(Y|B)$ and $\Pr(Y|X)$, and [Maron, 1998] proposed two ways to model the relationship between them, namely the noisy-or model and the most-likely-cause model. Both were designed to fit the MI assumption. The noisy-or model regards the process of determining a bag's class probability (i.e. $\Pr(Y=0,1|B)$) as a Bernoulli process with $n$ independent stages, where $n$ is the number of instances in the bag. In each stage, one decides the class label using the probability


$\Pr(y|x)$ of an (unused) instance in the bag. Roughly speaking, a bag will be labeled positive if one sees at least one positive class label in this process, and negative otherwise. The most-likely-cause model regards the above process as having only one stage: it thus picks only one instance per bag. The selection of the instance is parametric, or model-based; that is, according to the radial formulation of $\Pr(Y|X)$, it uses the instance with the maximal probability $\max_{x \in b}\{\Pr(y=1|x)\}$ in a positive example, and the instance with the minimal probability $\min_{x \in b}\{\Pr(y=0|x)\}$ in a negative example, as the representative of the corresponding bag.
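The two bag-probability models can be sketched as follows (a simplified illustration that takes precomputed instance probabilities Pr(y=1|x) as input; in DD these would come from the radial formulation):

```python
def noisy_or(instance_probs):
    """Noisy-or: an n-stage Bernoulli process, one stage per instance;
    the bag is positive unless every stage comes out negative."""
    p_all_negative = 1.0
    for p in instance_probs:          # each p is Pr(y=1|x) for one instance
        p_all_negative *= (1.0 - p)
    return 1.0 - p_all_negative

def most_likely_cause(instance_probs):
    """Most-likely-cause: a one-stage process that lets the single
    most positive instance represent the bag."""
    return max(instance_probs)

probs = [0.1, 0.2, 0.9]
print(round(noisy_or(probs), 3))   # 0.928
print(most_likely_cause(probs))    # 0.9
```

Both rules make a bag with one strongly positive instance come out positive, which is exactly the behaviour the MI assumption demands.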

Indeed, one can recognize the similarity of our methods to DD, especially the linear logistic regression method: we simply take out the radial formulation and the noisy-or/most-likely-cause model, and plug in the linear logistic formulation and the geometric/arithmetic average model. It is natural to also try the "Gaussian-like" formulation together with the collective assumption (i.e. the geometric/arithmetic average model), or the "linear + MI assumption (i.e. noisy-or)" combination. However, these combinations do not work well on the Musk datasets according to our experiments. On the Musk1 dataset, for example, the "Gaussian-like + collective assumption (arithmetic average of probabilities)" combination has a 10×10-fold CV error rate of 19.02%±3.41%, and the "linear + MI assumption (noisy-or)" combination has 20.00%±1.47% with the ridge parameter set to $\lambda = 12$ (the high value of the ridge parameter may already indicate the unsuitability of the linear model).

As a matter of fact, the whole family of "APR-like + MI assumption"-based methods is related to this chapter because their rationale is the same as that of DD. The current methods that upgrade single-instance learners (e.g. the MI decision tree learner RELIC [Ruffo, 2001], MI nearest neighbour algorithms [Wang and Zucker, 2000], the MI decision rule learner NaiveRipperMI [Chevaleyre and Zucker, 2001] and the SVM with an MI kernel and a polynomial minimax kernel [Gärtner et al., 2002]), on the other hand, are not so closely related to the framework presented here because their line of thinking is quite different. The SVM with the MI kernel [Gärtner et al., 2002] may also depend on the collective assumption because


the MI kernel essentially uses every instance within a bag (with equal weights). The assumptions of the other methods are not very clear because these methods are purely heuristic and therefore hard to analyze.

Finally, we propose an improvement on DD. The radial (or Gaussian-like) formulation is not convenient for optimization because it has too many local maxima in the log-likelihood function. If one really believes the quadratic formulation is reasonable, we suggest a linear logistic model with polynomial fitting of order 2 (i.e. adding quadratic terms). Together with the noisy-or model (or the most-likely-cause model), this would enable us to search for local maxima of the log-likelihood only. If we were to use every possible term in a polynomial expansion, we would have too many parameters to estimate, especially for the Musk datasets, in which we already have many attributes in the original instance space. To overcome this difficulty, we can assume no interactions between any two attributes, which is effectively what DD does. By deleting all interaction terms in the quadratic expansion, we only have twice the number of attribute coefficients (a quadratic term and a linear term for each attribute) plus an intercept to estimate. The number of parameters is roughly the same as in DD. However, this model has three major improvements over DD:

1. The optimization is now a local optimization problem, as mentioned before, which is much easier. Consequently the computational cost is greatly reduced. The running time of this model should be very similar to that of the MI logistic regression methods, which according to our observation is very short. We believe it is even faster than EM-DD regardless of EM-DD's validity (because EM-DD still needs many optimization trials with multiple starts), whereas we can ensure the correctness of the MLE solutions in this model;

2. The sigmoid function in the logistic model tends to make the log-likelihood function gentler and smoother than the exponential function used in the radial model proposed by DD. As a matter of fact, the log-likelihood function in DD is discontinuous (the discontinuity occurs whenever the point variable's


value equals the attribute value of an instance in a negative bag) due to the radial form of the probability formulation. Besides, the exponential function in the probability formulation only allows DD to deal with data values around one, which means that we need to pre-process the data on most occasions. We can avoid this inconvenience in the logistic model. Despite the functional differences, the effects of the two models are virtually the same. To see why, note that in our model the log-odds function can always be arranged into a form that describes an axis-parallel hyper-ellipse (APE) plus an intercept (bias) term; hence the instance-level decision boundaries of both models are exactly the same. But the gentleness of the sigmoid function in the logistic model (and the intercept term) may improve the prediction;

3. Since there are many mature techniques that can be directly applied to the linear logistic model, like regularization or deviance-based feature selection, we can apply them directly to the new model. For example, ridge methods may further improve the performance if the instance-level decision boundary cannot be modeled well by an APE.
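The interaction-free quadratic expansion proposed above can be sketched as follows (an illustrative helper with our own naming):

```python
def quadratic_expand(x):
    """Expand an instance x = [x1, ..., xm] to [x1, ..., xm, x1^2, ..., xm^2],
    omitting all cross terms xi*xj.  A linear logistic model on these
    features has 2m coefficients plus an intercept, and its log-odds
    describes an axis-parallel hyper-ellipse plus a bias term."""
    return list(x) + [v * v for v in x]

x = [1.0, -2.0, 3.0]
print(quadratic_expand(x))  # [1.0, -2.0, 3.0, 1.0, 4.0, 9.0]
```

A full degree-2 expansion would instead add m(m-1)/2 cross terms, which is what makes the parameter count unmanageable for the 166-attribute Musk data.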

4.7 Conclusions

This chapter described a general framework for upgrading single-instance learners to deal with MI problems. Typically we require that the single-instance learner model the posterior probability $\Pr(Y|X)$ or a transformation of it. We can then construct a bag-level loss function based on certain assumptions. By minimizing the expected loss function at the bag level we can "recover" the instance-level probability function $\Pr(Y|X)$. The prediction is based on the recovered function $\Pr(Y|X)$ and the assumption used. This framework is quite general and justified thanks to the strong theoretical basis of most single-instance learners. It also incorporates background knowledge (via the assumptions involved) and could serve as general-purpose guidance for solving MI problems.


Within this framework, we upgraded linear logistic regression and AdaBoost based on the collective assumption. We have also shown that these methods, together with mild regularization, perform quite well on the Musk benchmark datasets.

For "group-conditional" single-instance learners that estimate the density $\Pr(X|Y)$ and then transform it to $\Pr(Y|X)$ via Bayes' rule (like naive Bayes or discriminant analysis), there appears to be no easy way to directly upgrade them. However, in the next chapter we explore some methods that follow the same line of thinking and are very similar to the group-conditional methods.


Chapter 5

Learning with Two-Level

Distributions

Multiple instance learning problems are commonly tackled in a point-conditional manner, i.e. the algorithms aim to directly model a function that, given a bag of $n$ instances $x_1, \cdots, x_n$, outputs a group (or class) label $y$. No methods developed so far build a model in a group-conditional manner. In this chapter we develop a group-conditional method, the "two-level distribution approach", to handle MI data. Like the other approaches presented in this thesis, it essentially discards the standard MI assumption. However, in contrast to the other approaches, it derives a distributional property for each bag of instances. We describe its generative model and demonstrate on artificial data that it is an asymptotically unbiased classifier. In spite of its simplicity, the empirical results of this approach on the Musk benchmark datasets are surprisingly good. Finally, we show the relationship of this approach to some single-instance group-conditional learners. We also examine its relationship, within a classification context, to empirical Bayes, an old statistical method, and the relationship of MI learning to meta-analysis.


5.1 Introduction

Many special-purpose MI algorithms can be found in the literature. However, we observe that all these methods aim to directly model a function $f(y|x_1, \cdots, x_n)$, where $y$ is the group (class) label of a bag of $n$ instances, and $x_1, \cdots, x_n$ are the instances. From a statistical point of view, $f(.)$ is necessarily a probability function $\Pr(.)$, although there can be other interpretations. We refer to this as a point-conditional approach. In normal single-instance learning, there is a category of popular methods that model group-conditional probability distributions and then transform them to class probabilities using Bayes' rule. This category includes discriminant analysis [McLachlan, 1992], naive Bayes [John and Langley, 1995] and kernel density estimation [Hastie et al., 2001]. It is thus natural to ask whether it is possible to develop a group-conditional approach for MI algorithms.

In this chapter we present a group-conditional approach called the "two-level distribution (TLD) approach". The underlying idea is simple: we extract distributional properties from each bag of instances for each class and try to discriminate between the classes (or groups) according to these distributional properties. Note that this approach essentially discards the standard MI assumption because there is no instance selection within each bag. Instead, by deriving the distributional properties of a bag we imply the collective assumption that was proposed earlier in this thesis.

This chapter is organized as follows. Section 5.2 describes the TLD approach in detail. It turns out that this approach is equivalent to extracting low-order sufficient statistics from each bag, which puts it into a broader framework. The underlying generative model of this approach is presented in Section 5.3, where we also show that the approach is asymptotically unbiased if the generative model is true. In Section 5.4 we show the relationship of the TLD approach to normal group-conditional single-instance learners. When assuming independence between attributes, one of our methods looks very similar to naive Bayes, so we present an (approximate) upgrade of naive Bayes to MI learning. In Section 5.5 we show experimental results for the TLD approach on the Musk benchmark datasets. Section 5.6 draws the connection to empirical Bayes (EB) methods in a classification context. It turns out that the TLD methods follow exactly the EB line of thinking. As an old statistical method, EB has many fielded applications in practice, especially in the "meta-analysis" field in medical research. Such applications may draw interest to the MI domain. Finally, Section 5.7 concludes this chapter.

5.2 The TLD Approach

First let us consider normal single-instance learning. The essence of the group-conditional methods is to derive distributional properties within each group (and the group priors) so that we can determine the frequency of a point in the instance space for each group. We classify a point according to the most frequently occurring group (class) label. We follow the same line of thinking in the multiple instance (MI) case. Conditional on the group, we first derive distributional properties for each bag, that is, at the instance level. Because the distributional properties differ from bag to bag even within one class, we need a second-level distribution to relate the instance-level distributions to one another. We refer to this second level as the "bag-level" distribution. That is why we call this approach the "two-level distribution" (TLD) approach.

The obvious question is how to derive distributions over distributions. Since commonly used distributions are usually parameterized by some parameters, we can think of these parameters as random and governed by some hyper-distribution. This is essentially a Bayesian perspective [O'Hagan, 1994]. These hyper-distributions are themselves parameterized by some hyper-parameters. In a full Bayesian approach, we would cast further distributions on the hyper-parameters. In this chapter, we simply regard the hyper-parameters as fixed and assume that they can be estimated from the data. Therefore our task is simply to estimate the hyper-parameters for each group (class). We show the formal derivation of how to estimate them in the following.


First, we introduce some notation. We denote the $j$th bag as $b_j$ for brevity. Formally, if $b_j$ has $n_j$ instances, then $b_j = \{x_{j1}, \cdots, x_{jk}, \cdots, x_{jn_j}\}$. $Y$ denotes the class variable. Then, given a class label $Y = y$ (in the two-class case $y = 0, 1$) and a bag $b_j$, we have the distribution $\Pr(b_j|Y)$ for each class, which is parameterized with a fixed bag-level parameter $\delta^y$ (hence we simply write $\Pr(b_j|Y)$ as $\Pr(b_j|\delta^y)$). We estimate $\delta$ using the maximum likelihood method:

$$\delta^y_{MLE} = \arg\max L = \arg\max \prod_j \Pr(b_j|\delta^y).$$

Here $L$ is the likelihood function, $\prod_j \Pr(b_j|\delta^y)$. Now, the instances in bag $b_j$ are not directly related to $\delta$, as we discussed before. Instead, the instances $x_{jk}$ are governed by an instance-level distribution parameterized by a parameter vector $\theta$, which in turn is governed by a distribution parameterized by $\delta$. Since $\theta$ is a random variable, we integrate it out in $L$. Mathematically,

$$
\begin{aligned}
L &= \prod_j \Pr(b_j|\delta^y) \\
  &= \prod_j \int \Pr(b_j, \theta|\delta^y)\, d\theta \\
  &= \prod_j \int \Pr(b_j|\theta, \delta^y)\Pr(\theta|\delta^y)\, d\theta
\end{aligned}
$$

and, assuming conditional independence of $b_j$ and $\delta^y$ given $\theta$,

$$L = \prod_j \int \Pr(b_j|\theta)\Pr(\theta|\delta^y)\, d\theta. \tag{5.1}$$

In Equation 5.1, we effectively marginalize out $\theta$ in order to relate the observations (i.e. the instances) to the bag-level parameter $\delta$. Now, assuming the instances within a bag are independent and identically distributed (i.i.d.) according to a distribution parameterized by $\theta$, we have $\Pr(b_j|\theta) = \prod_i^{n_j} \Pr(x_{ji}|\theta)$, where $x_{ji}$ denotes the $i$th instance in the $j$th bag.

The complexity of the calculus would have stopped us here had we not assumed independence between attributes. Thus, like naive Bayes, we assume independent attributes, which reduces the integral in Equation 5.1 to several one-dimensional integrals (one per attribute) that are much easier to solve. Now, assuming $m$ dimensions and $e$ bags (for one class), the likelihood function $L$ becomes

$$
\begin{aligned}
L &= \prod_{j=1}^{e} \int \cdots \int \prod_{k=1}^{m}\Big[\prod_{i=1}^{n_j}\Pr(x_{jki}|\theta_k)\Big]\Pr(\theta_k|\delta^y_k)\, d\theta_1 \cdots d\theta_m \\
  &= \prod_{j=1}^{e}\prod_{k=1}^{m}\Big\{\int\Big[\prod_{i=1}^{n_j}\Pr(x_{jki}|\theta_k)\Big]\Pr(\theta_k|\delta^y_k)\, d\theta_k\Big\} \\
  &= \prod_{j=1}^{e}\prod_{k=1}^{m} B_{jk}
\end{aligned}
\tag{5.2}
$$

where $x_{jki}$ denotes the value of the $k$th dimension of the $i$th instance in the $j$th exemplar, $\theta_k$ and $\delta_k$ are the parameters for the $k$th dimension, and

$$B_{jk} = \int\Big[\prod_{i=1}^{n_j}\Pr(x_{jki}|\theta_k)\Big]\Pr(\theta_k|\delta^y_k)\, d\theta_k.$$

There are many options for modeling $\Pr(x_{jki}|\theta_k)$ and $\Pr(\theta_k|\delta^y_k)$. Here we model $\Pr(x_{jki}|\theta_k)$ as a Gaussian with parameters $\mu_k$ and $\sigma^2_k$:

$$\prod_{i=1}^{n_j}\Pr(x_{jki}|\theta_k) = \prod_{i=1}^{n_j}\Pr(x_{jki}|\mu_k, \sigma^2_k) = (2\pi\sigma^2_k)^{-n_j/2}\exp\Big[-\frac{S^2_{jk} + n_j(\bar{x}_{jk} - \mu_k)^2}{2\sigma^2_k}\Big] \tag{5.3}$$

where $\bar{x}_{jk} = \sum_{i=1}^{n_j} x_{jki}/n_j$ and $S^2_{jk} = \sum_{i=1}^{n_j}(x_{jki} - \bar{x}_{jk})^2$. As is usually done in Bayesian statistics [O'Hagan, 1994], we conveniently model $\Pr(\theta_k|\delta^y_k)$ as the corresponding natural conjugate form of the Gaussian distribution. The natural conjugate has four parameters, $a_k$, $b_k$, $w_k$, and $m_k$, and is given by:

$$\Pr(\theta_k|\delta^y_k) = g(a_k, b_k, w_k)\,(\sigma^2_k)^{-\frac{b_k+3}{2}}\exp\Big[-\frac{a_k + (\mu_k - m_k)^2/w_k}{2\sigma^2_k}\Big] \tag{5.4}$$


where

$$g(a_k, b_k, w_k) = \frac{a_k^{b_k/2}\, 2^{-\frac{b_k+1}{2}}}{\sqrt{\pi w_k}\,\Gamma(b_k/2)}.$$

Taking a closer look at the natural conjugate prior in Equation 5.4, it is straightforward to see that $\mu_k$ follows a normal distribution with mean $m_k$ and variance $w_k\sigma^2_k$, and $a_k/\sigma^2_k$ follows a Chi-squared distribution with $b_k$ degrees of freedom (d.f.). It is Chi-squared because $\sigma^2_k$ is positive, and so is $a_k/\sigma^2_k$ given that $a_k > 0$. In the Bayesian

literature [O'Hagan, 1994], $\sigma^2_k$ is said to follow an Inverse-Gamma distribution. The natural conjugate prior is quite reasonable, but the fact that the variance of $\mu_k$ is a multiple ($w_k$) of $\sigma^2_k$ is unrealistic and hard to interpret, although it brings considerable convenience to the calculus (in fact, we could not find a simple analytical form of the likelihood function without this dependency). We will drop this dependency later on when we simplify the model. Plugging Equation 5.3 and Equation 5.4 into the middle part of the likelihood function in Equation 5.2, we get $B_{jk}$:

$$
\begin{aligned}
B_{jk} &= \int\Big[\prod_{i=1}^{n_j}\Pr(x_{jki}|\theta_k)\Big]\Pr(\theta_k|\delta^y_k)\, d\theta_k \\
&= \int_{0}^{+\infty}\!\!\int_{-\infty}^{+\infty}(2\pi\sigma^2_k)^{-n_j/2}\exp\Big[-\frac{S^2_{jk} + n_j(\bar{x}_{jk} - \mu_k)^2}{2\sigma^2_k}\Big]\, g(a_k, b_k, w_k)\,(\sigma^2_k)^{-\frac{b_k+3}{2}}\exp\Big[-\frac{a_k + (\mu_k - m_k)^2/w_k}{2\sigma^2_k}\Big]\, d\mu_k\, d\sigma^2_k
\end{aligned}
\tag{5.5}
$$

The integration is easy to do due to the form of the natural conjugate prior, resulting in (the details of the calculation are given in Appendix C):

$$B_{jk} = \frac{a_k^{b_k/2}\,(1 + n_j w_k)^{(b_k+n_j-1)/2}\,\Gamma\big(\frac{b_k+n_j}{2}\big)}{\big[(1 + n_j w_k)(a_k + S^2_{jk}) + n_j(\bar{x}_{jk} - m_k)^2\big]^{\frac{b_k+n_j}{2}}\,\pi^{n_j/2}\,\Gamma\big(\frac{b_k}{2}\big)} \tag{5.6}$$

Thus, the log-likelihood is:

$$LL = \log\Big[\prod_{j=1}^{e}\prod_{k=1}^{m} B_{jk}\Big] = \sum_{k=1}^{m}\sum_{j=1}^{e}\log B_{jk} \tag{5.7}$$

where $B_{jk}$ is given in Equation 5.6 above. We maximize the log-likelihood $LL$ in Equation 5.7 using a constrained optimization procedure, to get $\delta^y_{MLE}$, four parameters per attribute. Note that $LL$ actually involves only the sample mean $\bar{x}_{jk}$ and the sum of squared errors $S_{jk}$; thus it turns out that this method extracts low-order sufficient statistics, a kind of metadata, from each bag to estimate the group-conditional hyper-parameters.

then compute the log-odds:log Pr(Y = 1jbtest)Pr(Y = 0jbtest) = log Pr(btestjÆ1MLE)Pr(Y = 1)Pr(btestjÆ0MLE)Pr(Y = 0) (5.8)

We estimate the priorPr(Y ) from the number of bags for each class in the training

data and classifybtest according to whether the log-odds value is greater than zero.

In spite of its sound theoretic basis, the above method does not perform well on

the Musk datasets, mainly due to the unrealistic dependencyof the variance of�kon�2k in the natural conjugate, as well as the restrictive assumptions of an Inverse-

Gamma for�2 and a Gaussian for the instances within a bag. However regarding

this as a metadata-based approach allows us to simplify it inorder to drop some

assumptions. Here we make two simplifications.

The first simplification stems from Equation 5.3. It is relatively straightforward to see that the likelihood within each bag is equivalent to the product of the sampling distributions of the two statistics $\bar{x}_{jk}$ and $S_{jk}$.1 If we think $\bar{x}_{jk}$ is enough for the classification task, we can simply drop the second-order statistic $S_{jk}$. By dropping $S_{jk}$ we can generalize this method to virtually any distribution within a bag because, according to the central limit theorem, $\bar{x}_{jk} \sim N(\mu_k, \frac{\sigma^2_k}{n_j})$ no matter how the $x_{jki}$'s are distributed. The second simplification is that we no longer model $\sigma^2_k$ as drawn from an Inverse-Gamma distribution in Equation 5.4. Instead, we regard it as fixed and directly estimate $\sigma^2_k$ from the data. Accordingly, we do not need to estimate

1For a Gaussian distribution, the sampling distributions are $\bar{x}_{jk} \sim N(\mu_k, \frac{\sigma^2_k}{n_j})$ and $\frac{S^2_{jk}}{\sigma^2_k} \sim \chi^2_{(n_j-1)}$. If we multiply these two sampling distributions, we get the same form as in Equation 5.3, differing only by a constant.

$a_k$ and $b_k$ any more. To estimate $\sigma^2_k$, we give a weight of $\frac{1}{n}$ to each instance, where $n$ is the number of instances in the corresponding bag (because intuitively we regard a bag as an object and thus should assign each bag the same weight). Based on the weighted data, an unbiased estimate of $\sigma^2_k$ is

$$\hat{\sigma}^2_k = \frac{\sum_j\big[\frac{1}{n_j}\sum_i (x_{jki} - \bar{x}_{jk})^2\big]}{e - \sum_j \frac{1}{n_j}},$$

where $e$ is the number of bags.
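This estimate can be sketched for a single attribute as follows (a toy illustration; the function name and data are our own):

```python
def pooled_weighted_variance(bags):
    """Estimate of the instance-level variance sigma_k^2 for one
    attribute, with each instance weighted by 1/(bag size):
    sum_j (1/n_j) * SS_j  /  (e - sum_j 1/n_j), where SS_j is the
    within-bag sum of squares and e is the number of bags."""
    e = len(bags)
    num = 0.0
    weight_sum = 0.0
    for b in bags:
        n = len(b)
        mean = sum(b) / n
        num += sum((x - mean) ** 2 for x in b) / n
        weight_sum += 1.0 / n
    return num / (e - weight_sum)

bags = [[0.0, 2.0], [1.0, 3.0, 5.0]]
print(pooled_weighted_variance(bags))  # 22/7 ≈ 3.143
```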

With these two simplifications, the dependency in the natural conjugate prior disappears. We only have a Gaussian-Gaussian model within the integral in Equation 5.1, which makes the calculus extremely simple. The resulting formula is again a Gaussian, which is
$$B_{jk} = \Big(2\pi \, \frac{w_k n_j + \sigma^2_k}{n_j}\Big)^{-1/2} \exp\Big[-\frac{n_j (\bar{x}_{jk} - m_k)^2}{2(w_k n_j + \sigma^2_k)}\Big] \qquad (5.9)$$

Note that $w_k$ in the first TLD method denotes the ratio of the variance of $\mu_k$ to $\sigma^2_k$. Here, since we have dropped this dependency, $w_k$ is simply the variance of $\mu_k$. Equation 5.9 means that $\bar{x}_{jk} \sim N(m_k, w_k + \frac{\sigma^2_k}{n_j})$. Thus it basically tells us that the means of the bags of the two classes are wandering around two Gaussian centroids respectively, although with different variances, as shown in Figure 5.1. We simply substitute Equation 5.6 in the log-likelihood function by Equation 5.9. The MLE of $m_k$ is in a simple analytical form but that of $w_k$ is not. Thus we still need a numeric optimization procedure to search for a solution. However, since the search space is two-dimensional (for convenience, we also search for $m_k$ even though its solution can be obtained directly) and has no local maxima, the search is very fast. This constitutes our simplified TLD method, which we call “TLDSimple”, while we call the former TLD method simply “TLD”.
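The fitting step just described, maximizing the bag-level likelihood of Equation 5.9 over $m_k$ and $w_k$, can be sketched roughly as follows. This is a simplification, not the thesis's implementation: the plain average of bag means stands in for the analytical solution of $m_k$, and a 1-D grid search stands in for the numeric optimizer; all names are mine.

```python
import math

def log_likelihood(m, w, bag_means, bag_sizes, sigma2):
    """Log-likelihood under Equation 5.9: each bag mean is
    N(m, w + sigma2 / n_j), with n_j the bag size."""
    ll = 0.0
    for xbar, n in zip(bag_means, bag_sizes):
        var = w + sigma2 / n
        ll += -0.5 * math.log(2 * math.pi * var) - (xbar - m) ** 2 / (2 * var)
    return ll

def fit_tld_simple(bag_means, bag_sizes, sigma2, w_grid):
    """Crude fit of (m, w): average of bag means as m, then a grid
    search over w in place of the two-dimensional numeric search."""
    m = sum(bag_means) / len(bag_means)
    best_w = max(w_grid,
                 key=lambda w: log_likelihood(m, w, bag_means, bag_sizes, sigma2))
    return m, best_w
```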

5.3 The Underlying Generative Model

The underlying generative model of the TLD approach is straightforward — we simply have distributions on two levels, and it is also straightforward to generate data from it. First, we generate some random data from the distribution parameterized by the fixed bag-level hyper-parameters. Then we regard this data as

the instance-level parameters and generate instance-level random data according to

some distribution parameterized by these instance-level parameters.

We can generate artificial data based on this generative model for both TLD and TLDSimple. The artificial data generated using the generative model assumed by the simplified TLD method is easy to visualize. An example is shown in Figure 5.1. More specifically, we generated data from two levels of Gaussians along two independent dimensions for two classes (black and grey in the graph). First, we identified two (bag-level) Gaussians, one for each class, with the same variance but different means. We then generated 10 data points from each Gaussian and regarded each of the points as the mean of an instance-level Gaussian. Finally, we specified a fixed variance (the same for both classes) for the instance-level Gaussians and generated 1 to 20 data points from each instance-level Gaussian to form a bag. The number of instances per bag is uniformly distributed.
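The generation procedure above can be sketched as follows (a hypothetical two-class, two-dimensional setup; all parameter values and names are illustrative, not taken from the thesis):

```python
import random

def generate_tld_simple_data(n_bags, bag_mean, bag_var, inst_var, seed=0):
    """Sample bags from the two-level Gaussian model assumed by
    TLDSimple: per dimension, a bag centre mu ~ N(bag_mean, bag_var),
    then 1-20 instances ~ N(mu, inst_var)."""
    rng = random.Random(seed)
    bags = []
    for _ in range(n_bags):
        mu = [rng.gauss(m, v ** 0.5) for m, v in zip(bag_mean, bag_var)]
        n_inst = rng.randint(1, 20)  # bag size uniform on 1..20
        bags.append([[rng.gauss(c, inst_var ** 0.5) for c in mu]
                     for _ in range(n_inst)])
    return bags

# One 2-D dataset per class, 10 bags each: same variances, different means.
black = generate_tld_simple_data(10, [0.0, 0.0], [1.0, 1.0], 0.25, seed=1)
grey = generate_tld_simple_data(10, [3.0, 3.0], [1.0, 1.0], 0.25, seed=2)
```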

In Figure 5.1 we plot the contours of both levels of Gaussians at 3 standard deviations. The dot inside each Gaussian is the mean, and we can observe its variance from the dispersion along each dimension. The straight line between the two bag-level Gaussians is the bag-level decision boundary. Note that, unlike the methods that intend to “recover” the instance-based decision pattern, such as the APR algorithms [Dietterich et al., 1997], Diverse Density [Maron, 1998] or other methods developed in this thesis, the instance-level decision boundary is not defined here. Instead, only the decision boundary of the true means (i.e. $\mu$) of the bags is well-defined — it is a hyperplane because we used the same variance for both classes.² Also note that the decision boundary is only defined in terms of the true parameters $\mu$, and not even in terms of the statistics (or metadata) $\bar{x}$. This is due to the extra term $\frac{\sigma^2}{n_j}$ in the variance of $\bar{x}$. This term is distinct for each bag even if we have the same $\sigma^2$ but a different number of instances per bag.

We generated the artificial data mainly to analyze the properties of our methods.

²If the Gaussian of either class had a different variance, then the decision boundary would be quadratic instead of linear.




Figure 5.1: An artificial simplified TLD dataset with 20 bags.

[Plot: three curves — estimated sigma^2, estimated w and estimated m — showing parameter value against the number of exemplars, 0 to 700.]

Figure 5.2: Estimated parameters using the TLDSimple method.

In order to verify the (at least asymptotic) unbiasedness of both TLD methods, we generated a one-dimensional dataset with 20 instances per bag using the exact generative model assumed by them (the data generation using the exact generative model assumed by TLD is described in Chapter 7 and Appendix A). The estimated parameters (of one class) are shown in Figure 5.2 for TLDSimple and Figure 5.3 for TLD. In both figures, the solid lines denote the true parameter values. Note that in TLDSimple, $\hat{\sigma}^2_k = \sum_j [\frac{1}{n_j} \sum_i (x_{jki} - \bar{x}_{jk})^2] / (e - \sum_j \frac{1}{n_j})$ is not a maximum likelihood estimate (MLE), but it is obviously an unbiased estimate, as mentioned in Section 5.2, because
$$E\Big\{\sum_j \Big[\frac{1}{n_j} \sum_i (x_{jki} - \bar{x}_{jk})^2\Big]\Big\} = \Big(e - \sum_j \frac{1}{n_j}\Big)\sigma^2_k.$$
All other estimates are MLEs. As can be seen, the estimated parameters converge to the true parameters as the number of bags increases, thus this method is at least asymptotically unbiased. We observe that the number of instances per bag does not affect the convergence but only the rate of convergence: the more instances per bag, the faster the convergence. A varying number of instances will slow down the convergence but results in similar behavior.




[Plot: four curves — estimated a, estimated b, estimated w and estimated m — showing parameter value against the number of exemplars, 0 to 700.]

Figure 5.3: Estimated parameters using the TLD method.

In TLDSimple, the MLE of the variance of $\mu$ (i.e. $w$) is biased, but still asymptotically unbiased. The reason why we use the MLE instead of another estimate is its robustness. We have observed that when the number of bags is small, which is the case in the Musk datasets, other estimates of the variance are very likely to be wildly wrong, even negative. The MLE, on the other hand, is relatively stable even for a small sample size.

5.4 Relationship to Single-instance Learners

The relationship between the TLD approach and single-instance learners can best be explained based on the TLDSimple method. Note that when the number of instances per bag is reduced to 1, Equation 5.9 degrades into the same Gaussian for all bags (or instances in this case) from one class, with mean $m_k$ and variance $w_k$, because $\sigma^2_k$ cannot be estimated and can only be regarded as 0. Hence this method degrades into naive Bayes in the single-instance case. If we had dropped the independence assumption between attributes, we would have obtained the discriminant analysis method. Even in the multi-instance case, if $\sigma^2_k \approx 0$ and/or $n_j \gg 0$, the term $\frac{\sigma^2_k}{n_j}$ can be neglected in Equation 5.9. In that case, for all the bags in one class, the $\bar{x}_{jk}$'s can be regarded as coming from the same Gaussian. Then we can use standard




naive Bayes to simulate the TLDSimple method — we simply calculate the sample average over every bag along all dimensions and construct one instance per bag using the sample averages. Then we use naive Bayes to learn from this single-instance data. This gives an approximate upgrade of naive Bayes to the MI case. Such an approximation may be appropriate for the Musk datasets because we observed that the in-bag variance is indeed very small for many bags.
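The approximation just described — one averaged instance per bag, then standard naive Bayes — can be sketched as follows. This is a toy Gaussian naive Bayes written for illustration, not the WEKA implementation used in the thesis, and all names are mine:

```python
def bag_to_mean_instance(bag):
    """Collapse a bag (a list of instances, each a list of attribute
    values) into one instance of per-attribute sample averages."""
    n = len(bag)
    return [sum(inst[k] for inst in bag) / n for k in range(len(bag[0]))]

def gaussian_nb_fit(instances, labels):
    """Fit per-class, per-attribute Gaussians (the independence
    assumption of naive Bayes); returns {label: (means, variances)}."""
    model = {}
    for c in set(labels):
        rows = [x for x, y in zip(instances, labels) if y == c]
        dims = len(rows[0])
        means = [sum(r[k] for r in rows) / len(rows) for k in range(dims)]
        variances = [sum((r[k] - means[k]) ** 2 for r in rows) / len(rows)
                     for k in range(dims)]
        model[c] = (means, variances)
    return model
```

Each bag is first mapped through `bag_to_mean_instance`; the resulting single-instance dataset is then handed to the naive Bayes learner.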

The above view of the TLD approach leads us to consider improvement techniques from naive Bayes in the TLDSimple method. In TLDSimple, it is assumed that the $\mu_k$'s are normally distributed (i.e. $\mu_k \sim N(m_k, w_k)$). But if normality does not hold very well, we need some adjustment techniques. We present one simple technique here. When a new bag $b_{test}$ is encountered, we calculate the log-odds function according to Equation 5.8 and usually decide its class label based on whether the log-odds are greater than 0 or not. Now, we instead determine the class label of $b_{test}$ based on whether the log-odds are greater than $v$ or not, where $v$ is a real number. We choose $v$ so as to minimize the classification errors on the training data. We call $v$ the “cut-point”. Thus we select an empirically optimal cut-point value instead of 0. This technique was mentioned in the context of discriminant analysis [Hastie et al., 2001] but it is obviously applicable in our TLD approach and in naive Bayes.

The rationale for this technique is easy to see, and is illustrated in Figure 5.4 in one dimension. Here we have one Gaussian and one standard Gamma distribution, one for each class (solid lines). The dotted line plots the Gaussian estimated using the mean and variance of the Gamma. Now the estimated decision boundary becomes C' whereas it should be C. Since in classification we are only concerned with the decision boundary, we do not need to adjust the density. By looking for the empirically optimal cut-point, we can move the boundary from C' back to C and improve prediction performance.

Figure 5.4: An illustration of the rationale for the empirical cut-point technique.

More specifically, we look for the cut-point as follows. First, we calculate the log-odds ratio for each of the training bags and sort them in ascending order. The log-odds ratios of bags with class 1 should be greater than those of bags with class 0. Second, we look for a split. Let $T$ denote the total number of training bags, and $S$ a split point. There will be $t$ bags with log-odds $\le S$ and $(T - t)$ bags with log-odds $> S$. We count the number of bags of the first class among the first $t$ bags; this number is $p_1$. The number of bags of the second class among the remaining $(T - t)$ bags is $p_2$. Finally, we go through the log-odds of all bags from the smallest to the largest and find the log-odds value with the largest value of $(p_1 + p_2)$, and use that value as the cut-point. If there is a tie, we choose the value closest to 0.

This technique is not commonly used to reduce the potential negative impact of the normality assumption; instead, kernel density estimation or discretization of numeric attributes are more often used. Nonetheless, we found that this technique can also work quite well in normal single-instance learning. We have tested it in association with naive Bayes on some two-class datasets from the UCI repository [Blake and Merz, 1998] within the experimental environment of the WEKA workbench [Witten and Frank, 1999]. The results are shown in Table 5.1.

In Table 5.1 we use “NB” to denote naive Bayes. The first column is NB with empirical cut-point (EC) selection, and this is the baseline for comparison. The second, third and fourth columns are NB without any adjustment, NB with kernel density estimation (KDE) and NB with discretization of numeric attributes (DISC) respectively. The results were obtained using 100 runs of 10-fold cross-validation




Dataset            NB+EC          NB              NB+KDE           NB+DISC
breast-cancer      72.53 (0.96)   72.76 (0.68)    72.76 (0.68)     72.76 (0.68)
breast-cancer-W    95.87 (0.23)   96.07 (0.10)    97.51 (0.11) v   97.17 (0.13) v
german-credit      74.75 (0.51)   75.07 (0.43)    74.50 (0.46)     74.38 (0.62)
heart-disease-C    82.58 (0.75)   83.40 (0.41)    84.02 (0.58) v   83.20 (0.64)
heart-disease-H    83.52 (0.71)   84.23 (0.52)    84.95 (0.39) v   84.12 (0.36)
heart-statlog      83.78 (0.74)   83.73 (0.61)    84.40 (0.59)     82.91 (0.67)
hepatitis          83.33 (1.00)   83.71 (0.82)    84.76 (0.62) v   83.67 (1.14)
ionosphere         89.68 (0.54)   82.51 (0.45) *  91.83 (0.32) v   89.27 (0.45)
kr-vs-kp           87.85 (0.16)   87.80 (0.15)    87.80 (0.15)     87.80 (0.15)
labor              93.29 (2.16)   94.00 (1.92)    93.18 (1.36)     88.44 (1.85) *
mushroom           98.18 (0.03)   95.76 (0.04) *  95.76 (0.04) *   95.76 (0.04) *
sick               95.99 (0.09)   92.76 (0.12) *  95.78 (0.08) *   97.14 (0.08) v
sonar              72.08 (1.39)   67.88 (1.06) *  72.68 (1.13)     76.43 (1.51) v
vote               89.55 (0.63)   90.09 (0.15)    90.09 (0.15)     90.09 (0.15)
pima-diabetes      75.43 (0.46)   75.65 (0.37)    75.15 (0.37)     75.51 (0.74)
Summary (v/ /*)                   (0/11/4)        (5/8/2)          (3/10/2)

Table 5.1: Performance of different versions of naive Bayes on some two-class datasets.

(CV), with standard deviations given in brackets. The confidence level used for the comparison was 99.5%. We use a “*” sign to indicate “significantly worse than”, whereas “v” means “significantly better than”. We regard two results on a dataset as “significantly different” if the difference is statistically significant at the 99.5% confidence level according to the corrected resampled t-test [Nadeau and Bengio, 1999]. It can be seen that the EC technique is comparable to discretization and worse than KDE in general. However, it can indeed improve the performance of the original naive Bayes. Moreover, in cases where KDE cannot be used, as in our TLD approach, EC can be a convenient option.

It turns out that in the Musk datasets the normality assumption for $\mu_k$ is a problem, and the EC technique can be applied. For instance, for the naive Bayes approximation of TLDSimple without EC, the leave-one-out (LOO) error rates on the Musk1 and Musk2 datasets are 13.04% and 17.65% respectively. With EC, the error rates are 10.87% and 14.71%, which is an improvement of about 3% on each dataset. Note that in the naive Bayes approximation of TLDSimple we do not use KDE because




Methods                         Musk 1                 Musk 2
                                LOO     10CV           LOO     10CV
SVM with Minimax Kernel         7.6     8.4±0.7        13.7    13.7±1.2
RELIC                           –       16.3           –       12.7
TLDSimple+EC                    14.13   16.96±1.86     9.80    15.88±2.56
naive Bayes Approximation+EC    10.87   15.11±2.32     14.71   17.35±1.85

Table 5.2: Error rate estimates from 10 runs of stratified 10-fold CV and LOO evaluation. The standard deviation of the estimates (if available) is also shown.

we do not want to accurately estimate the density of $\bar{x}_{jk}$ but that of $\mu_k$. In this approximation, we ignored the term $\frac{\sigma^2_k}{n_j}$ in the variance of $\bar{x}_{jk}$. Hence $\bar{x}_{jk}$ is only an approximation of $\mu_k$. The EC technique is only concerned with the cut-point instead of the whole density, and thus does not suffer from this approximation.

5.5 Experimental Results

There are no other directly related MI methods in the literature. The closest cousins may be other methods that also extract metadata from bags, namely the MI decision tree learner RELIC [Ruffo, 2001] and the support vector machine (SVM) with the minimax kernel [Gartner et al., 2002]. Both methods can be viewed as extracting minimax metadata from each bag and applying a standard learner to the resulting single-instance data. Although it has been mentioned that some statistics can be extracted from bags for the purpose of modeling [Gartner et al., 2002], no specific methods have been put forward in the MI learning domain.
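Minimax metadata extraction of this kind can be sketched as follows (a schematic reconstruction for illustration, not the original authors' code): each bag becomes one instance holding the per-attribute minima and maxima.

```python
def minimax_features(bag):
    """Map a bag (a list of instances, each a list of d attribute
    values) to a single 2d-vector [min_1..min_d, max_1..max_d]."""
    dims = len(bag[0])
    mins = [min(inst[k] for inst in bag) for k in range(dims)]
    maxs = [max(inst[k] for inst in bag) for k in range(dims)]
    return mins + maxs
```

Any standard single-instance learner can then be trained on the transformed vectors, which is the shared idea behind RELIC and the minimax kernel.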

We present experimental results for TLDSimple and its naive Bayes approximation (both used with the EC technique) on the Musk benchmark datasets in Table 5.2. We also present published results for RELIC and the SVM method in the table. Since these two metadata-based methods are among the best methods on the Musk datasets, we can see that the new methods are competitive with state-of-the-art MI methods.




In spite of its simplicity, TLDSimple performs surprisingly well on the Musk2 dataset, with a reasonably good result on Musk1. Since it is a distributional method, it is expected that more data improves its estimation, which is why its LOO performance is better than that of 10-fold CV. Its naive Bayes approximation is equally simple and also quite good on these datasets. Hence the sound theoretical basis of these methods appears to bear fruit on practical datasets.

5.6 Related Work

As mentioned before, RELIC and the SVM with the minimax kernel are related to the TLD approach in the sense that they both extract metadata from the bags and model directly at the bag level. But there are no other multi-instance methods that model the MI problem in a group-conditional manner, nor are there any methods trying to extract low-order sufficient statistics. Therefore our approach is quite unique in the MI learning domain.

However, when developing the TLD methods, we discovered empirical Bayes (EB) [Maritz and Lwin, 1989], an old statistical method that follows exactly the same line of thinking as our approach. As a matter of fact, it is well suited to multi-instance data, and our approach is one of its applications to classification. In the EB literature a more general EM-based solution procedure has also been proposed that can model arbitrary distributions on the two levels. In such a case, the integral may not be solvable but — via the EM algorithm — it is possible to apply numeric integration techniques to make the maximum likelihood method feasible. This is actually one of the early applications of the EM algorithm [Dempster et al., 1977].

According to the EB literature [Maritz and Lwin, 1989], an early example of an EB application was an experiment with contaminated water [von Mises, 1943]. In this experiment, 3420 batches of water were collected, with five samples in each batch. Each sample was classified as either positive (contaminated) or negative (uncontaminated). The task was to estimate the probability of a sample being positive. One




may recognize this dataset as very similar to MI data. If we classified each batch

as positive or negative (according to some assumptions), and had some attributes to

describe the samples, we would end up with an MI dataset.

Today EB is an important method for solving the “meta-analysis” problem in medical (or other scientific) research, which has a similar rationale to MI learning. In scientific research, different scientists carry out experiments on the same topic but usually get different results. If we can construct some instances (with the same attributes) for each of the experiments, and try to identify the homogeneity and heterogeneity of these experiments, this is actually an MI dataset: each experiment is a bag of instances but there is only one class label per bag. MI learning solutions will be very useful here because the identification of homogeneity/heterogeneity of scientific experiments is an important objective in meta-analysis.

We present the relationship of the EB method and meta-analysis to MI learning in the hope that they may attract some interest to the field of MI research. We believe that this may help spawn fielded applications and more publicly available datasets — perhaps the most severe obstacles in research on MI learning today.

5.7 Conclusions

In this chapter we have proposed a two-level distribution (TLD) approach for solving multiple instance (MI) problems. We built a model in a group (class)-conditional manner, and think of the data as being generated by two levels of distributions — the instance-level distribution and the bag-level distribution — within each group. It turns out that the modeling process can be based on low-order sufficient statistics; thus we can regard it as one of the methods that are based on metadata extracted from each bag.

Despite its simplicity, we have shown that a simple version of TLD learning and




its naive Bayes approximation perform quite well on the Musk benchmark datasets. We have also shown the relationship of our approach to normal group-conditional single-instance learners, other metadata-based MI methods, and the empirical Bayes method. Finally, we pointed out the similarity between MI learning problems and meta-analysis problems in scientific research.




Chapter 6

Applications and Experiments

As mentioned in Chapter 1, we focus on three practical applications of MI learning: drug activity prediction, fruit disease prediction, and content-based image categorization. Throughout this chapter, the term “number of attributes” does not include the “Bag-ID” attribute and the class attribute of the datasets. When we refer to a “positive” bag in this chapter, we simply mean that its class label is “1”; likewise “negative” for “0”.

6.1 Drug Activity Prediction

The most famous application of MI learning is the drug activity prediction problem, first described in [Dietterich et al., 1997], which resulted in the first MI datasets, the Musk datasets. These are the only real-world MI datasets publicly available. While interested readers should refer to the paper of [Dietterich et al., 1997] for detailed background on the Musk datasets, we briefly describe them in Section 6.1.1. In Section 6.1.2 we discuss another molecule activity prediction problem — predicting the mutagenicity of molecules. Note that this problem was not originally an MI problem. However, with proper transformations we can use its data in MI learning.




Figure 6.1: Accuracies achieved by MI algorithms on the Musk datasets.

6.1.1 The Musk Prediction Problem

There are two Musk datasets, each describing different molecules (some molecules are present in both of them). The task is to decide which molecules result in a “musky” smell. On average, Musk2 has more bags than Musk1, and each bag has more instances. Table 4.2 in Chapter 4 lists some key properties of the two datasets.

In both datasets, each molecule is represented as a bag, and the different shapes (or conformations) of that molecule are the instances in the bag. Since one molecule has multiple shapes, we have to use multiple instances to represent an example. The shape of a molecule is measured using a ray-based representation that results in 162 features. Together with 4 extra oxygen features, we have 166 attributes. Since every shape of a molecule (i.e. every instance) can be measured with these 166 attributes, we can construct an MI dataset. Both Musk datasets have the same attributes, as shown in Table 4.2 in Chapter 4.

The Musk datasets are the most popular ones in the MI domain. Virtually every MI algorithm developed so far has been tested on this problem, and so have the algorithms in this thesis. Although we have used these datasets throughout this thesis to

compare the performance of our methods to that of many other MI methods, we list the accuracies again in Figure 6.1. Based on this figure we can see some evidence of which assumption really holds in the Musk datasets, by examining the accuracies of methods based on various assumptions. We use black bars to denote the accuracy on Musk1, and white ones for Musk2. We used the best accuracy an algorithm can achieve, regardless of the evaluation method used. The accuracy of the wrapper method developed in Chapter 3 (“MI Wrapper” in the figure) is that achieved by wrapping around bagged PART with discretization. The MI AdaBoost method shown in the figure has unpruned regularized C4.5 as the base classifier and uses 50 iterations, the same as described in Chapter 4. The two MI linear logistic regression methods (“MILogitRegGEOM” and “MILogitRegARITH” in the figure) are also described in Chapter 4. TLDSimple and the naive Bayes (NB) approximation of TLDSimple are described in Chapter 5.

We deliberately divide the algorithms into three categories: the first corresponds to instance-based methods that are consistent with the MI assumption; the second corresponds to instance-based methods that we believe do not use the MI assumption; and the last corresponds to metadata-based methods that cannot possibly use the MI assumption. As the figure shows, the methods that strongly depend on the MI assumption do not seem to benefit much from this background knowledge. Instead, the overall performance of the methods that are not based on this assumption is as good as that of the “MI assumption” family on these datasets. Therefore the validity of the MI assumption as background knowledge on the Musk datasets appears to be questionable.

6.1.2 The Mutagenicity Prediction Problem

There is another drug activity prediction problem, namely the mutagenicity prediction problem [Srinivasan, Muggleton, King and Sternberg, 1994], that can also be represented as an MI problem. It originated from the inductive logic programming




                          Friendly   Unfriendly
Number of bags            188        42
Number of attributes      7          7
Number of instances       10486      2132
Number of positive bags   125        13
Number of negative bags   63         29
Average bag size          55.78      50.76
Median bag size           56         46
Minimum bag size          28         26
Maximum bag size          88         86

Table 6.1: Properties of the Mutagenesis datasets.

(ILP) domain. The original dataset is in a relational format, which is especially suitable for ILP algorithms. However, we can transform the original dataset into an MI dataset in several ways [Chevaleyre and Zucker, 2000; Weidmann, 2003]. Here we consider a setting representing each molecule as a set of bonds and the pairs of atoms connected by each of the bonds. After this transformation each molecule corresponds to a bag. But unlike the Musk datasets, where an instance describes the conformation of an entire molecule, each instance denotes an atom-bond-atom fragment of a molecule. One such fragment is described using 7 attributes — 5 symbolic and 2 numeric attributes. Interested readers may refer to [Weidmann, 2003] for more details on the construction of these MI datasets. The same construction method was used in other studies of MI learning [Chevaleyre and Zucker, 2001; Gartner et al., 2002].

The original Mutagenesis dataset consists of 230 molecules: 188 molecules that can be fitted using linear regression, and the remaining 42, which are more difficult to fit. These two subsets are called “friendly” and “unfriendly” respectively. The key properties of the two datasets, after the transformation, are shown in Table 6.1.

We evaluated some of the methods developed in this thesis on these two datasets. Since the TLD approach can only deal with numeric attributes and there are some nominal attributes in the Mutagenesis datasets, we cannot apply the TLD methods to them. We would normally consider the transformation of nominal attributes to binary ones




Method                      Friendly      Unfriendly
Best of the TLC methods     9.31±1.28     18.33±1.61
MILogisticRegressionGEOM    15.74±0.91    16.67±0
MI AdaBoost                 13.62±1.31    18.57±2.19
Wrapper with Bagged PART    18.78±1.26    23.10±2.26
Best of ILP methods         18.0          —
SVM with RBF MI kernel      7.0           25.0

Table 6.2: Error rate estimates for the Mutagenesis datasets and standard deviations (if available).

in this case. However, the TLD methods involve estimating the in-bag variance, and this transformation often introduces zero variances for many attributes. Consequently the TLD approach cannot be applied even after such a transformation. Hence we only applied the instance-based methods, i.e. the wrapper method, the MI linear logistic regression methods and MI AdaBoost. In contrast to the experiments with the Musk datasets, MILogisticRegressionGEOM outperforms MILogisticRegressionARITH, and thus we only present the results for MILogisticRegressionGEOM. Note that logistic regression cannot deal with nominal attributes either, but it can deal with the transformed binary attributes. Hence we transform a nominal attribute with $K$ values into $K$ binary attributes, each with values 0 and 1. As for the regularization, we (again) use a fixed ridge parameter instead of CV. Here we use a ridge value of 0.05 for the “friendly” data and 2 for the “unfriendly” data. MI AdaBoost, on the other hand, can naturally deal with nominal attributes, and we apply it directly to the Mutagenesis datasets. For the regularization in MI AdaBoost, we use the same strategy as for the Musk datasets. The base classifier is an unpruned regularized C4.5 decision tree [Quinlan, 1993], as in Chapter 4. The minimal number of leaf-node instances was set to around twice the average bag size, i.e. 120 instances for “Friendly” and 100 instances for “Unfriendly”. Under these regularization conditions, we then looked for a reasonable number of iterations. It turns out that the best performance often occurs with 1 iteration on the “unfriendly” data, which led us to think that decision trees are perhaps too strong for this dataset and that we should use decision stumps instead. Hence we eventually used 200 iterations of regularized C4.5 on the “friendly” dataset and 250 iterations of decision stumps on the “unfriendly”




dataset. The estimated errors over 10 runs of 10-fold CV are shown in Table 6.2. Table 6.2 also shows the error estimates of the wrapper method described in Chapter 3 on these two datasets. The classifier used in the wrapper was Bagging [Breiman, 1996] based on the decision rule learner PART [Frank and Witten, 1998], both implemented in WEKA [Witten and Frank, 1999]. Bagged PART will also be used for the experiments with the wrapper method on the other applications in this chapter. We used the default parameter settings of the implementations in WEKA, and did not do any further parameter tuning.

It was reported that ILP learners have achieved error rates ranging from 39.0% (FOIL) to 18.0% (RipperMI) on the “friendly” dataset using one run of 10-fold CV [Chevaleyre and Zucker, 2001]. The SVM with the MI kernel achieved 7.0% on “Friendly” and 25% on “Unfriendly” using 20 runs of random leave-10-out [Gartner et al., 2002]. Note that, unlike for the Musk datasets, the experimental results of 10-fold CV and leave-10-out are not comparable here because the number of bags is quite different from 100 in both datasets. On “Friendly”, leave-10-out allows the learner to use more training data, whereas on “Unfriendly” there is less training data in leave-10-out. In addition, leave-10-out is not stratified; thus the 20 runs used for the SVM with the MI kernel [Gartner et al., 2002] may suffer from high variance due to the fact that the classes are not evenly distributed in the two datasets (see Table 6.1). The two-level classification method (TLC) [Weidmann, 2003], which is a model-oriented metadata-based approach, can achieve 9.31% on “Friendly” and 18.33% on “Unfriendly” via 10 runs of 10-fold CV. In Table 6.2, we list the error rates of these methods, together with those of the methods developed in this thesis.

The standard deviation of MILogisticRegressionGEOM over the 10 runs is zero on the “unfriendly” dataset, indicating the stability of the results. The evaluation process allows us to compare the results of the methods developed in this thesis to those of the TLC method [Weidmann, 2003] and of the ILP learners [Chevaleyre and Zucker, 2001]. While comparable to the TLC method on the “unfriendly” dataset (actually MILogisticRegressionGEOM seems slightly better than TLC on this dataset), MILogisticRegressionGEOM and MI AdaBoost are worse than the TLC methods on


the “friendly” dataset. However, they seem to outperform the ILP learners. The
small ridge parameter value λ = 0.05 used in MILogisticRegressionGEOM on the

“friendly” dataset indicates that under the collective assumption, there is indeed a

quite strong linear pattern, while the larger ridge parameter value used on the “un-
friendly” dataset may imply a weaker linear pattern. This is consistent with the

observation that the “friendly” data can be fit with linear regression whereas “un-

friendly” data cannot. The performances of MILogisticRegressionGEOM and MI

AdaBoost are similar on the Mutagenesis datasets, which is reasonable because they

are based on the same assumption. As long as the instance-level decision boundary

pattern is close to linear under the geometric-average probability assumption, their

behaviors will be very similar. The wrapper method with bagged PART is not as

good as TLC, MILogisticRegressionGEOM and MI AdaBoost, but still comparable

to the best of the ILP methods. Besides, we did not do any parameter tuning or

use other improvement techniques like discretization here. Neither did we try the

wrapper around other single-instance classifiers. We conjecture that we might be

able to improve the accuracy considerably with these efforts because we believe the

underlying assumption of the wrapper method (that the class probabilities of all the

instances within a bag are very similar to each other) may hold reasonably well in

the drug activity prediction problems.

The overall empirical evidence seems to show that the collective assumption may

be at least as appropriate in the molecular chemistry domain as the MI assumption.

We can get good experimental results on the problems of this kind using methods

based on the collective assumption. This empirical observation seems to be contra-

dictory to some claims that the MI assumption is “precisely the case in the domain

of molecular chemistry” [Chevaleyre and Zucker, 2000]. However, it would be in-

teresting to discuss this with a domain expert.


                          Fruit 1   Fruit 2   Fruit 3
Number of positive bags      23        28        19
Number of negative bags      49        44        53
Number of bags                         72
Number of attributes                   23
Number of instances                  4560
Average bag size                    63.33
Median bag size                        60
Minimum bag size                       60
Maximum bag size                      120

Table 6.3: Properties of the Kiwifruit datasets.

6.2 Fruit Disease Prediction

As discussed in Chapter 1, the fruit disease prediction problem provides a MI set-

ting. The identification of this problem as a MI problem was due to Dr. Eibe Frank.

In this problem, each bag is a batch of fruits, usually from one orchard. Every fruit

within a batch is an instance. Some non-destructive measures are taken for each

fruit. The class labels are only observable on the bag level,in other words, either a

whole batch of fruits is infected by a disease or not. Although there is no assump-

tion explicitly stated for this problem, the collective assumption is easy to interpret

here: if the whole batch of fruits is resistant to a certain disease, then the whole

batch is disease-free.

We have obtained one dataset consisting of 72 batches of kiwifruit. They were labeled according to three different diseases, resulting in three datasets with identical

attributes, namely Fruit 1, 2 and 3. There are 23 non-destructive measures describ-

ing each kiwifruit, resulting in 23 attributes, all numeric. The key properties are

listed in Table 6.3. The table shows that the number of instances per bag in this dataset

is more regular than in the datasets describing the drug activity prediction problem,

and comparatively larger. Nonetheless, according to our observation, the number

of instances per bag does not seem to be very important in classification. What is

more important is the number of bags, which is scarce in this case.


Method                       Fruit 1       Fruit 2       Fruit 3
Default accuracy             68.06         61.11         73.61
TLD+EC                       62.75±0       52.78±0       66.67±0
MILogisticRegressionARITH    71.81±1.47    65.69±1.97    73.61±0
MILogisticRegressionGEOM     71.11±2.34    64.03±2.31    73.61±0
MI AdaBoost                  73.19±1.74    63.61±2.84    73.61±0
Wrapper with bagged PART     73.06±0.97    61.81±2.87    71.81±0.67

Table 6.4: Accuracy estimates for the Kiwifruit datasets and standard deviations.

One important fact that is not shown in Table 6.3 is that there are some missing
values in the dataset. Typically, instance values are missing for a whole bag for a

specific attribute. We developed different strategies to deal with the missing values

for the methods developed in this thesis. MI AdaBoost has the advantage that the

base classifier usually has its own way to deal with missing values, hence it is not a

problem for it. For the TLD approach, since we assume attribute independence, we

simply skip bags with missing values for a certain attribute when collecting low-order statistics for that attribute. Thus in the log-likelihood function of Equation 5.7

in Chapter 5, we use fewer bags to estimate the hyper-parameters of that attribute. The

cost is that we have less data for the estimation. As for the MILogisticRegression

algorithms, we use the usual way of logistic regression to tackle this difficulty, that

is, we substitute the missing values with some values estimated from other training

instances without missing values. Again, in order to be consistent with our convention, we substitute the missing values with the weighted average of other instances’ values of the attribute concerned. The weight of an instance is given by the inverse

of the number of (non-missing) instances of the bag it pertains to.
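The imputation scheme just described can be sketched as follows. This is an illustrative Python sketch, not the actual implementation from this thesis; the function name and the bag data layout are hypothetical.

```python
def impute_attribute(bags, j):
    """Impute missing values of attribute j (hypothetical helper).

    bags: list of bags; each bag is a list of instances (lists of floats,
    with None marking a missing value).  Each non-missing instance value
    is weighted by 1 / (number of non-missing instances in its bag), so
    every bag contributes a total weight of one to the estimate.
    """
    num, den = 0.0, 0.0
    for bag in bags:
        present = [inst[j] for inst in bag if inst[j] is not None]
        if not present:
            continue  # this bag carries no information about attribute j
        w = 1.0 / len(present)
        for v in present:
            num += w * v
            den += w
    fill = num / den  # bag-weighted average over all training instances
    for bag in bags:
        for inst in bag:
            if inst[j] is None:
                inst[j] = fill
    return fill
```

Because every bag contributes the same total weight, large bags do not dominate the estimate, which matches the convention of weighting each instance by the inverse of its bag's size.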

Now the three datasets can be tackled by all of the methods in this thesis. We

do not have other methods to compare (apart from DD, as discussed below), so

we list the default accuracy of the data, i.e. the accuracy if we simply predict the
majority class, in the first line of Table 6.4 for comparison. However, we found out

that these datasets are very hard to classify. For example, we tried DD (looking

for one target concept) with the noisy-or model and got a 10×10-fold CV accuracy
of 64.44% (±3.29%) and a leave-one-out (LOO) accuracy of 66.67% on the Fruit 1


dataset¹ — worse than the default accuracy. Thus the MI assumption does not seem

to hold very well for these datasets. Here we simply present some methods that can

give reasonable results on these datasets. We list the accuracy estimates for 10 runs

of 10-fold CV in Table 6.4.

As usual, we use the empirical cut point (EC) techniques together with the TLD

method. We only show the results of TLD in Table 6.4 because TLDSimple
did worse than TLD in this case. Perhaps in these datasets the mean of each bag

is not enough to discriminate the bags from the two classes. Table 6.4 shows that

the TLD method, which encodes information about the first two sufficient statistics, cannot achieve the default accuracy either, although the accuracy estimates are

extraordinarily stable. It could be the case that in these datasets, higher order suf-

ficient statistics are required for discrimination. It may also be due to the lack of

training data as we observed that the LOO accuracy of TLD for the Fruit 1 dataset

is 70.83%, which is much better than that shown in Table 6.4 and better than the de-

fault accuracy. We have seen in Chapter 5 that the TLD approach is a distributional

method and may usually require many bags to provide reasonable estimates. In this

case the sample size may be too small to give accurate estimates.

The instance-based methods perform reasonably well on the Fruit 1 and Fruit 2

datasets. It seems that the Fruit 1 dataset has a non-linear instance-level class decision boundary under the collective assumption, while the Fruit 2 dataset exhibits

linearity. This is due to the observation that the methods specializing in fitting non-

linear functions, like MI AdaBoost and the wrapper method based on bagged PART,

can perform better on the Fruit 1 data than the MI linear logistic regression methods,

but worse on the Fruit 2 data. Again we did not finely tune the parameters in any of

the methods. In the two variations of MI logistic regression, we simply used λ = 1 for both methods and for all the datasets. In MI AdaBoost, we first tried unpruned

C4.5 with the minimal leaf-node instance number of 120 (about twice the average

¹ Since DD does not have a mechanism to deal with missing values, we pre-processed the data, substituting the missing values with the average of the non-missing attribute values. Note that this process may give DD an edge in classification because the resulting training and testing instances are different from the original ones and may provide more information for classification.


bag size). The result for the Fruit 2 dataset presented in Table 6.4 is based on this

base classifier running 100 iterations. However in Fruit 1, we observed (again) that

one iteration gave a good result for this base classifier, which may indicate that we

should use a weaker base classifier. Thus for Fruit 1, we used 250 iterations of decision stumps, whose results are shown in Table 6.4. As for the wrapper method, we

simply used the default setting in the wrapped single-instance learners, i.e. bagging

and PART.

The Fruit 3 dataset seems to be very random and hard to classify. In fact, no method

can achieve better than the default accuracy. The MI linear logistic regression meth-

ods fail to find a linear instance-level decision boundary under the collective as-

sumption. Any ridge parameter values greater than 1 only resulted in predicting the

majority class, which appears to be the best accuracy achievable on this dataset. The

same happened for MI AdaBoost. When based on the decision stump, any iteration

number greater than 10 resulted in worse test accuracy than the default accuracy,

although the training error can be improved with more iterations. Thus the best one

can do in this dataset is to predict the majority class.

6.3 Image Categorization

Content-based image categorization involves classifying an image according to the

content of the image. For example, if we are looking for images of “mountains”,

then any images containing mountains are supposed to be classified as positive and

those without mountains as negative. The task is to predict the class label of an

unseen image based on its content.

This problem can be represented as an MI problem with some special techniques.

The key to such a representation is that we can segment an image into several pixel

regions, usually called “blobs”. A bag corresponds to an image, which has the blobs

or some combinations of blobs as its instances. Again, one bag only has one class
label. The problem now is how to generate features for the instances. The data


                          Photo
Number of bags               60
Number of attributes         15
Number of instances         540
Number of positive bags      30
Number of negative bags      30
Average bag size              9
Median bag size               9
Minimum bag size              9
Maximum bag size              9

Table 6.5: Properties of the Photo dataset.

Figure 6.2: A positive photo example for the concept of “mountains and blue sky”.

Figure 6.3: A negative photo example for the concept of “mountains and blue sky”.

used in this thesis was generated based on the single-blob with neighbours (SBN)

approach proposed by [Maron, 1998], although there are many other techniques that
can be used [Zhang et al., 2002]. More precisely, there are 15 attributes in the data.

The first three attributes are the averaged R, G, B values of one blob. The remaining

twelve dimensions are the differences in the R, G, B values between one specific

blob and the four neighbours (up, right, down and left) of that blob. As in [Maron,

1998], an image was transformed into 8×8 pixel regions and each 2×2 region was

regarded as a blob. There are 9 blobs that can possibly have 4 neighbours in each

image. Therefore the resulting data has 15 attributes and 9 instances per bag, fixed

for each bag. Details of the construction of the MI datasets from the images can be

found in [Weidmann, 2003], where some other methods have also been tried.
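To make the SBN representation concrete, here is a small Python sketch. It is a hypothetical helper, not code from this thesis: for illustration it accepts any rectangular grid of blob colour averages, and a grid in which exactly 9 positions have all four neighbours yields the 9 instances per bag described above.

```python
def sbn_features(blob_rgb):
    """Single-blob-with-neighbours features (illustrative sketch).

    blob_rgb: 2-D grid of blobs, each an (R, G, B) triple of averaged
    pixel values.  For every blob that has all four neighbours we emit
    a 15-dimensional instance: its own mean RGB followed by the RGB
    differences to its up, right, down and left neighbours.
    """
    rows, cols = len(blob_rgb), len(blob_rgb[0])
    instances = []
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            centre = blob_rgb[r][c]
            feat = list(centre)
            for dr, dc in ((-1, 0), (0, 1), (1, 0), (0, -1)):  # up, right, down, left
                nb = blob_rgb[r + dr][c + dc]
                feat.extend(centre[k] - nb[k] for k in range(3))
            instances.append(feat)
    return instances  # one bag: the list of 15-dimensional instances
```

The returned list of 15-dimensional instances forms one bag, whose single class label is that of the whole image.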

Table 6.5 lists some of the properties of the photo dataset we used. Since all the


Method                       Photo
Best of the TLC methods      81.67
TLDSimple+EC                 69.17±2.64
MILogisticRegressionARITH    71.67±2.48
MILogisticRegressionGEOM     71.00±2.96
MI AdaBoost                  76.67±2.36
Wrapper with bagged PART     75±0

Table 6.6: Accuracy estimates for the Photo dataset and standard deviations.

dimensions describe RGB values, they are all numeric. The dataset is about the

concept “mountains and blue sky”. There are 30 photos that contain both mountains

and blue sky, which are the positive bags, and 30 negative photos. The negative

photos could contain only sky, only mountains, or neither. Two examples from

[Weidmann, 2003] illustrate the class label properties of the bags. The photo shown

in Figure 6.2 is a positive photo, which contains both mountains and the sky, while

the one shown in Figure 6.3 is negative because it only contains the sky (and the

plains) but no mountains. Note that the classification task here is more difficult

than that in [Maron, 1998] and [Zhang et al., 2002] because their target objects

only involve one simple concept, say “mountains”, whereas we have a conjunctive

concept here, which is more complicated.

Table 6.6 lists the 10×10-fold CV experimental results of some of the methods
developed in this thesis. Again, we did not finely tune the user parameters. In the
MI logistic regression methods, we used a ridge parameter of 2 for both methods. In

MI AdaBoost, we used an unpruned C4.5 tree as the base classifier and the minimal

instance number for the leaf-nodes was (again) twice the bag size (18 instances). We

obtained the results in Table 6.6 using 300 iterations. The base classifiers in the

wrapper method were configured using the default settings. We also list the best

result for the TLC algorithms [Weidmann, 2003] for comparison.

For this particular dataset, the methods based on the collective assumption may be

appropriate sometimes. Because the objects “mountains” and “blue sky” are large

objects that usually occupy the whole image, the class label of an image is very


likely to be related to the RGB values of all the blobs in that image. However, if this

is not the case for an image, the collective assumption may not be suitable. Moreover, the decision boundary in the instance space may be more sophisticated than linear or close-to-linear (like quadratic) patterns. This is because we take the RGB values as the attributes and the colours of other objects may be very similar to the

concept objects (i.e. “mountains” or “blue sky” in this case). To discriminate the

positive and negative images, we may need more complex decision boundary pat-

terns. Therefore the TLD approach and the MI logistic regression methods, which

model, either asymptotically or exactly, linear or quadratic decision boundaries, are

not expected to give good accuracies on this task. MI AdaBoost and the wrapper

method (based on appropriate base learners) are more flexible and may be more

suitable for this problem.

The natural candidate learning technique for this problem is the TLC method [Weidmann, 2003], because it can deal with a very general family of concepts, including

conjunctive concepts. The DD algorithm (results not shown), on the other hand,

cannot deal with this dataset because, although it can be extended to deal with disjunctive concepts, it cannot handle conjunctive concepts.

As expected, the best of the TLC algorithms (based on one run of 10-fold CV) in-

deed performs the best on this dataset. MI AdaBoost and the wrapper method seem

to be better than the other methods developed in this thesis because they are able to

model more sophisticated instance-level decision boundaries. We also notice that

the result of the wrapper method is very stable on this dataset. TLDSimple per-

forms better than TLD and we only present the result for TLDSimple. It produces

similar results as the MI linear logistic regression methods. They are not good on

this task, as mentioned above, mainly because the instance-level decision boundary

is unlikely to be linear (or close to linear), even if the collective assumption holds.


6.4 Conclusion

In this chapter, we have explored some practical applications of MI learning, namely,

drug activity prediction, fruit disease prediction and content-based image categorization problems. We experimented with some of the methods developed in this

thesis on the datasets related to these problems. We believe that the collective assumption is widely applicable in the real world. This belief is supported by the

empirical evidence we observed on the practical problems discussed in this chap-

ter.


Chapter 7

Algorithmic Details

In this chapter we present some algorithmic details that are crucial to some of the

methods developed and discussed in this thesis. These techniques relate to either

the algorithms or the artificial data generation process. Important features of some

algorithms are also discussed.

More specifically, Section 7.1 briefly describes the numeric optimization technique
used in some of the methods developed in this thesis. Section 7.2 describes in detail

how to generate the artificial datasets used in the previous chapters. Because the

instance-based methods we developed in Chapter 4, especially the MI linear logistic

regression methods, are quite good at attribute (feature) selection, we show some

more detailed results on attribute importance of the Musk datasets in Section 7.3. In

Section 7.4 we describe some algorithmic details of the TLD approach developed

in Chapter 5. Finally, we analyze the Diverse Density [Maron, 1998] algorithm, and

describe what we discovered about this method in Section 7.5.

7.1 Numeric Optimization

As already noted above, many methods in this thesis use a numeric technique to

solve an optimization problem whenever the solution of the problem is not in a sim-

ple analytical form. For example, when the maximum likelihood (ML) method is


used (in logistic regression and the TLD approach), we usually resort to numeric

optimization procedures to find the MLE. In some situations, we may have bound
constraints for the variables, i.e. optimization w.r.t. variables x with constraints x ≥ C and/or x ≤ C for some constant C. It can be shown that transforming
such problems into unconstrained optimization problems via variable transformations like x = y² + C or x = −y² + C (where y is the new variable) may not be

appropriate [Gill, Murray and Wright, 1981]. Because the objective function may

not be defined outside the bound constraints, we also need a method that does not

require evaluating the function values there. Eventually we chose a Quasi-Newton

optimization procedure with BFGS updates and the “active set method” suggested

by [Gill et al., 1981] and [Gill and Murray, 1976]. It is based on the “projected
gradient method” [Gill et al., 1981; Chong and Zak, 1996], which is primarily used

to solve optimization problems with linear constraints, and that updates the orthog-

onal projection in each step according to the change of the “active” constraints. In

the case of bound constraints, the orthogonal projection of the gradient is very easy
to calculate: it is simply made up of the gradient components of the “free” variables at that moment (i.e.
variables that are not at the bounds). Therefore the search strategy we adopt is

quite similar to that of an unconstrained optimization [Dennis and Schnabel, 1983].
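The active-set idea can be illustrated with a deliberately simplified sketch. This uses plain projected gradient descent rather than the Quasi-Newton method with BFGS updates actually implemented in the thesis, and all names are hypothetical; it only shows how components of "active" variables are zeroed out.

```python
def projected_gradient_step(x, grad, lower, upper, step):
    """One projected-gradient step under bound constraints (sketch).

    A variable is 'active' when it sits at a bound and the descent
    direction points outside the feasible region; its gradient
    component is zeroed, so the step behaves like an unconstrained
    step in the remaining 'free' variables.  (The thesis uses a
    Quasi-Newton procedure; plain gradient descent is used here only
    to illustrate the active-set projection.)
    """
    proj = []
    for xi, gi, lo, hi in zip(x, grad, lower, upper):
        at_lower = xi <= lo and gi > 0   # stepping along -gi would leave lo
        at_upper = xi >= hi and gi < 0   # stepping along -gi would leave hi
        proj.append(0.0 if (at_lower or at_upper) else gi)
    # take the step and clip back onto the feasible box
    return [min(max(xi - step * gi, lo), hi)
            for xi, gi, lo, hi in zip(x, proj, lower, upper)]
```

For example, minimizing x² subject to x ≥ 1 drives x to the bound and then freezes it there, because the bound becomes active.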

We implemented this optimization procedure in WEKA [Witten and Frank, 1999],
as the class weka.core.Optimization. The details of the implementation

are described in Appendix B. This also serves as a formal documentation of this

optimization class.

In order to enhance the efficiency of the optimization procedure and reduce the

computational costs, we separate the variables as much as possible. Such a sep-

aration of the variables basically divides one optimization problem into several

small sub-problems and applies the optimization procedureto each of the sub-

problems. Hence in each (smaller) optimization problem, the number of variables to

be searched is greatly reduced. Because we use a positive definite matrix to approx-

imate the Hessian matrix in the Quasi-Newton method, reduction of the variables

reduces sparsity of the matrix and leads to faster computation. In this thesis, we

can sometimes separate the variables to make the optimization easier. For instance,


when the parameters of different attributes are separable because we assume the

independence of attributes in the likelihood function, we can conveniently divide

the likelihood function into sub-functions, usually one for each dimension. Each of
these sub-functions involves only a few parameters, and we only have to search for

the maximum of each sub-function individually, which is a much simpler problem.
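A minimal sketch of this divide-and-conquer strategy follows, assuming the objective really is a sum of independent one-dimensional sub-functions. The helper names are hypothetical, a golden-section search stands in for the Quasi-Newton procedure, and we work with the negative log-likelihood so that each sub-problem is a minimization.

```python
def optimize_separable(subproblems, minimize1d):
    """Optimize a separable objective dimension by dimension (sketch).

    subproblems: list of 1-D objective functions, one per attribute,
    whose sum is the full negative log-likelihood.  Because the
    parameters of different attributes do not interact, each
    sub-function can be minimized on its own, keeping every search
    small.  minimize1d is any 1-D minimizer.
    """
    return [minimize1d(f) for f in subproblems]

def golden_section(f, lo=-10.0, hi=10.0, tol=1e-8):
    """A minimal 1-D minimizer (golden-section search) for the sketch."""
    g = (5 ** 0.5 - 1) / 2
    while hi - lo > tol:
        a, b = hi - g * (hi - lo), lo + g * (hi - lo)
        if f(a) < f(b):
            hi = b
        else:
            lo = a
    return (lo + hi) / 2
```

Each call to the 1-D minimizer touches only that dimension's parameters, which is exactly why separating the variables keeps the searches cheap.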

7.2 Artificial Data Generation and Analysis

In Chapters 4 and 5, we needed to generate some artificial datasets. To generate

these datasets, we needed techniques for generating random variates from some

specific distributions. The standard Java library provides us with routines to generate uniform and Gaussian variates, but we may need variates generated from other

distributions. Specifically, we needed to generate variates in one dimension¹ from

a normalized part-triangle distribution in Chapter 4, and from an Inverse-Gamma

distribution in Chapter 5. We briefly describe the details here.

Recall that in Chapter 4, we specify the density of X as a triangle distribution and

draw data points of one bag from this triangle distribution but within the range of

the bag. Since we further normalize the density so that it sums (integrates) to one,

we need to generate variates from a normalized part-triangle distribution. Figure 7.1

shows the two cases that can happen for this distribution: (i) is the distribution if the

center of a bag is outside the interval [−l/2, l/2], where l is the length of the bag range,
and (ii) is the distribution otherwise. The distribution in case (i) is a line distribution
whereas the distribution in case (ii) is a combination of two line distributions. The
cornerstone for sampling from both distributions is a line distribution, denoted by f(·). Suppose the line is within [a, b]. Then we first draw a point p uniformly in [a, b). If p is in the interval that has the greater density, [a, (a+b)/2) in the case shown
in Figure 7.1(i), then we accept p. Otherwise we draw another uniform variate v in [0, f((a+b)/2)). If v > f(p) then we accept a+b−p (i.e. we accept the point symmetric

¹ Because we assumed attribute independence, we generated variates for each dimension and then combined them into a vector.



Figure 7.1: Sampling from a normalized part-triangle distribution.

to p w.r.t. (a+b)/2), otherwise we accept p. In such a way we can generate variates with a probability that is equivalent to the area under the line f(·).

When a bag’s center is close to 0, we will have the distribution shown in Figure 7.1 (ii). We can view it as a combination of two line distributions — one part in [a, 0) and the other in [0, b). We first decide which part the variate falls into, according to the probabilities that are equivalent to the areas under the two lines.² We simply draw a uniform variate within [0, 1) and if it is less than the area under the line in [a, 0), we pick this line distribution, otherwise the other part (in [0, b)). For each line distribution we use the same technique as discussed above to draw

a random variate from it. This is how we draw data points for each bag from the

conditional density given in Equation 4.3 in Chapter 4, where Pr(x) is a triangle

distribution.
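The line-distribution sampler described above can be sketched in Python as follows. This is a hypothetical helper, not the thesis implementation; fa and fb are the density values at the two endpoints of a normalized line density, so their average must equal 1/(b−a).

```python
import random

def sample_line(a, b, fa, fb, rng=random):
    """Draw one variate from a normalized linear ('line') density on [a, b].

    Implements the reflection scheme from the text: a uniform point in
    the high-density half is accepted outright; a point p in the
    low-density half is kept with probability f(p)/f((a+b)/2) and is
    otherwise reflected to the symmetric point a+b-p.
    """
    def f(x):
        # linear interpolation between the endpoint density values
        return fa + (x - a) * (fb - fa) / (b - a)

    m = (a + b) / 2
    p = rng.uniform(a, b)
    high_half_is_left = fa > fb
    in_high = (p < m) if high_half_is_left else (p >= m)
    if in_high:
        return p
    v = rng.uniform(0.0, f(m))
    return a + b - p if v > f(p) else p
```

Because a rejected low-density point is reflected into the high-density half rather than discarded, every uniform draw yields exactly one variate, and the resulting density is proportional to f throughout [a, b].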

In the TLD method in Chapter 5, we need to generate σ²_k from an Inverse-Gamma distribution (parameterized by a_k and b_k) and μ_k from a Gaussian (parameterized by m_k and w_k·σ²_k). Then we further generate a bag of instances with a Gaussian parameterized by μ_k and σ²_k. There is a standard routine in Java for generating a standard Gaussian variate v. Using the transformation v·σ + μ we can get a variate following N(μ, σ²). For the Inverse-Gamma distribution of σ²_k, this means that a_k/σ²_k follows a Chi-squared distribution with b_k degrees of freedom (d.f.). Also note that if X ∼ Chi-squared(2v) with 2v d.f., then Y = X/2 ∼ Gamma(v). Therefore we

² Note that the total areas under the two lines sum to 1.


first generate a variate u ∼ Gamma(b_k/2) and then by a simple transformation we
can get σ²_k, because a_k/σ²_k = 2u ⇒ σ²_k = a_k/(2u).
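In Python the same transformation is a one-liner, since the standard library already provides the Gamma generator that had to be hand-built for Java. The sketch below follows the parameterization in the text, with a_k/σ²_k ∼ Chi-squared(b_k); the function name is hypothetical.

```python
import random

def inverse_gamma_variate(a_k, b_k, rng=random):
    """Draw sigma_k^2 such that a_k / sigma_k^2 ~ Chi-squared(b_k).

    Uses the relation from the text: if X ~ Chi-squared(2v) then
    X/2 ~ Gamma(v), so we draw u ~ Gamma(b_k / 2) and return
    sigma_k^2 = a_k / (2u).
    """
    u = rng.gammavariate(b_k / 2.0, 1.0)  # shape b_k/2, scale 1
    return a_k / (2.0 * u)
```

For b_k > 2 the draws concentrate around a_k/(b_k − 2), the mean of this Inverse-Gamma distribution.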

Since there is no standard routine in Java to generate standard Gamma variates, we

built one from scratch. There are many methods to generate standard Gamma vari-

ates [Bratley, Fox and Schrage, 1983; Ahrens and Dieter, 1974; Ahrens, Kohrt and

Dieter, 1983]. We chose the one described in [Minh, 1988], and implemented it in

the weka.core.RandomVariates class, which also includes routines for generating

the standard exponential and Erlang (Gamma with integer parameter) variates.

7.3 Feature Selection in the Musk Problems

First of all, note that we always base our models on certain assumptions, and feature

selection is no exception. In this section we discuss feature selection based on the

instance-based methods discussed in Chapter 4 under the collective assumptions

outlined in Chapter 3. The feature selection is based on the explanatory power

of the features for the instance-level class probabilities. In this sense, the “feature

selection” discussed here is at the instance level, and is assumption-based. Different

assumptions may lead to totally different interpretations of the features.

The Diverse Density [Maron, 1998] (DD) algorithm has also been applied for fea-

ture selection on the Musk datasets. It was recognized that the “scaling parameters”,

one for each attribute, found by DD, indicate the importance of the corresponding
attributes — the greater the value of the scaling parameter, the more important the

corresponding attribute is. This explanation fits into our understanding of the radial

form of the instance-level probability function modeled by DD. The inverse of the

scaling parameter controls the dispersion of the radial probability function. Intu-

itively, the larger the dispersion along one dimension, i.e. the smaller the scaling

parameter, the “flatter” the class probability around 0.5. Hence this attribute is less

useful for discrimination (because a flat probability function implies low purity of

the classes on the two sides of the decision boundary). On the contrary, the smaller


the dispersion, i.e. the larger the scaling parameter, the “sharper” the probability

function along this specific dimension. In other words, this dimension is more useful for discrimination.

However, such a dispersion can be easily scaled. In the probability function for one

dimension, exp[−(x−p)²/s²], if we multiply both the numerator and the denominator by a
constant, the probability remains unchanged but we can arbitrarily change the value
of the “scaling parameter” s. This means that if we multiplied every data point

in one dimension by an arbitrary constant and found the resulting s accordingly,

we could effectively manipulate the scaling parameter. Therefore it is necessary

to standardize the data before using the scaling parametersas a direct indicator of

feature importance. Although [Maron, 1998] divided every data point (along all di-

mensions) in the Musk datasets by 100 in order to facilitate the optimization, which

happened to alleviate the scaling problem, the data points from different dimensions

are still on different scales. Thus it may be premature to directly use the scaling pa-

rameter values for feature selection.
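The scaling issue is easy to demonstrate numerically. The toy sketch below evaluates DD's per-dimension probability term; the values are arbitrary illustrations.

```python
import math

def dd_dim_prob(x, p, s):
    """DD's per-dimension contribution exp(-(x - p)^2 / s^2)."""
    return math.exp(-((x - p) ** 2) / s ** 2)

# Scaling the data (and the concept point p) by a constant c, and the
# scaling parameter s by the same c, leaves the probability untouched;
# so the magnitude of s says nothing about feature importance unless
# all attributes are on a common scale.
x, p, s, c = 4.2, 1.0, 0.5, 100.0
assert abs(dd_dim_prob(x, p, s) - dd_dim_prob(c * x, c * p, c * s)) < 1e-12
```

This is why the data should be standardized before the fitted scaling parameters are read off as importance scores.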

Formally, in order to test the hypothesis that one parameter is significant (typically, significantly different from 0), we should really find out the sampling distribution of the parameter in question and estimate its standard error from the data.

Then we can standardize it to test the significance. However, the (assumed) sampling distribution of the parameter concerned and its standard error are not easy to

find, especially in MI datasets. Thus we think it intuitive to use the parameter values

found for the standardized data as an indicator. We use this approach to assess the

feature importance in the linear logistic regression modelfor the Musk datasets.

In Figure 7.2 we show the absolute value of the relative linear coefficients found by

MILogisticRegressionARITH with a ridge parameter λ = 2 on the Musk datasets.

The coefficients are taken as the relative values to the maximal (absolute value of)

coefficient value, thus the largest is 100%. The data was standardized using the

weighted mean and standard deviation, as described in Chapter 4. In linear logistic

models, the meaning of the coefficients of the attributes is straightforward. If we



Figure 7.2: Relative feature importance in the Musk datasets: the left plot is forMusk1, and the right one for Musk2.


fix the value of all dimensions but one, and plot the probability function along that

one dimension, we will find the familiar shape of the sigmoid (or inverse sigmoid)

function, and the absolute value of the coefficient controls the sharpness of the function

around the value of 0.5. If the absolute value of a coefficient of one attribute is high,

then the probability function is very sharp around the value of 0.5. This means that

on both sides of the decision boundary, the purity of the two classes is high and we can

easily separate them with this attribute, and vice versa. Therefore, it is intuitive to

regard this as an indicator of the feature importance. Note that in order to avoid

the scaling problem mentioned above, we use the coefficients estimated from the

standardized data (and do not include the intercept). The ridge method tends to

shrink the coefficients to zero, and this is what we observed after the transformation

of the coefficients back to the original scale. As can be seen in Figure 7.2, the relative

coefficients estimated from both datasets do not differ as much between attributes

as we might expect from the results presented in [Maron, 1998], although we do

observe that some attributes are much more important than others, especially in the

Musk1 dataset. If we set a threshold for the relative coefficients to select attributes,

say 40% or 50%, we may indeed pick only a minority of the attributes (based on

the collective assumption and the linear logistic instance-level model).
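As a sanity check of this heuristic, here is a minimal sketch in Python on synthetic data. It fits a ridge-penalized linear logistic model on standardized data by simple gradient ascent and ranks attributes by their coefficients relative to the largest absolute coefficient, so the top attribute scores 100%. MILogisticRegressionARITH itself is part of the MI package and is not reproduced here; the data, learning rate, and iteration count are arbitrary choices of ours.

```python
import math
import random

# Synthetic single-instance data: 5 attributes, labels drawn from a
# logistic model with a known coefficient vector.
random.seed(0)
n, d = 200, 5
true_w = [3.0, 0.0, -1.5, 0.0, 0.5]
X = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
y = []
for row in X:
    z = sum(w * x for w, x in zip(true_w, row))
    y.append(1.0 if random.random() < 1 / (1 + math.exp(-z)) else 0.0)

# Standardize each attribute (zero mean, unit variance) to avoid the
# scaling problem: coefficients become comparable across attributes.
for t in range(d):
    col = [row[t] for row in X]
    mean = sum(col) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in col) / n)
    for row in X:
        row[t] = (row[t] - mean) / sd

ridge = 2.0
w = [0.0] * d
for _ in range(500):  # gradient ascent on the ridge-penalized log-likelihood
    grad = [0.0] * d
    for row, label in zip(X, y):
        p = 1 / (1 + math.exp(-sum(wt * xt for wt, xt in zip(w, row))))
        for t in range(d):
            grad[t] += (label - p) * row[t]
    w = [wt + 0.01 * (gt - 2 * ridge * wt) for wt, gt in zip(w, grad)]

# Relative importance: percentage of the maximal absolute coefficient.
relative = [100 * abs(wt) / max(abs(v) for v in w) for wt in w]
selected = [f"f{t + 1}" for t in range(d) if relative[t] >= 50]
print([round(r, 1) for r in relative], selected)
```

Thresholding `relative` at 50% then plays the role of the attribute selection discussed above.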

7.4 Algorithmic Details of TLD

There are three important factors in the implementation of the TLD method developed in Chapter 5: the first is the integral in the model; the second is the constrained optimization; and the third is the numeric evaluation of the Gamma function. The

second factor has already been addressed in Section 7.1 and we discuss the other

two in this section. In addition, we further discuss the interpretation of the parameters involved in TLD.

For the specific integrals in Equations 5.6 and 5.9 of Chapter 5, we list

the solution in Appendix C. We then maximize the formula resulting from the

integration. However, in general, if we specify arbitrary instance and bag-level


distributions, the integration over the instance-level parameters is really hard, if not

impossible, which may restrict the practical feasibility of this method. It was thus

suggested in the EB literature that an EM algorithm be used for this method [Maritz

and Lwin, 1989]. More specifically, using EM terminology, we regard the instance-level parameter θ as the "unobservable data" and the bag-level parameter δ_y as the "observable data" in Equation 5.1 of Chapter 5. Now given a specific value δ_{y0}, we have the probability of θ, Pr(θ | δ_{y0}), and the "complete data" likelihood function L(B_j; θ, δ_{y0}) = Pr(B_j | θ, δ_{y0}) = Pr(B_j | θ). Thus the integral can be regarded as taking the expectation of the "complete data" likelihood function over the "unobservables", i.e. E_θ[L(B_j; θ, δ_{y0})]. This constitutes the E-step in EM, and the resulting formula for the expectation is exactly the same as the last line in Equation 5.1. Then we regard the "observables" δ_y as a random variable and maximize this expected likelihood function w.r.t. δ_y, which is the M-step. Under some regularity conditions, we can maximize L(B_j; θ, δ_y) within the expectation sign. Hence, within the expectation sign, we take a Newton step from δ_{y0} to get a new value δ_{y1}, that is

    δ_{y1} = δ_{y0} + E(H_L^{-1}) E(g_L)

where H_L^{-1} is the inverse Hessian matrix (second derivative) of the likelihood and g_L is the Jacobian (first derivative) at the point δ_{y0}. The expectation is, again, over θ and can now be evaluated numerically. This defines an iterative

solution of this maximum likelihood method, which was cited by the well-known

original paper of the EM algorithm [Dempster et al., 1977] as an early example of

the EM algorithm.

In the TLD method we also have a Gamma function to evaluate in Equation 5.6. More precisely, we need to evaluate:

    g = −log [Γ((b_k + n_j)/2) / Γ(b_k/2)].

Define h = ⌊n_j/2⌋ and s = Σ_{x=1}^{h} log((b_k + n_j)/2 − x). If n_j is even, g = −s according to the well-known identity Γ(y + 1) = yΓ(y). But if n_j is odd, we have g = −s − log [Γ(b_k/2 + 1/2) / Γ(b_k/2)], so we still have a log-Gamma function to evaluate. We used the implementation suggested by [Press, Teukolsky, Vetterling and Flannery, 1992] to evaluate log Γ(y). However, since the method we use to search for the


Figure 7.3: Log-likelihood function expressed via the parameters a and b.

minimum of the negative log-likelihood function requires the Jacobian and Hessian matrix of the function,³ we also need to evaluate d/dy log Γ(y) and d²/dy² log Γ(y) in this case. Fortunately there is an easy approximation for them [Artin, 1964]:

    d/dy log Γ(y) = −C − 1/y + Σ_{i=1}^{∞} (1/i − 1/(y + i))

    d²/dy² log Γ(y) = 1/y² + Σ_{i=1}^{∞} 1/(y + i)²

where C is Euler's constant, which is canceled out when taking differences as in Equation 5.6.
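Both the Gamma-ratio recursion and the derivative series can be checked numerically. The sketch below uses Python's math.lgamma in place of the [Press et al., 1992] routine and truncates the infinite sums at N terms; the tolerances and test values are our own choices.

```python
import math

# g = -log[ Gamma((b_k + n_j)/2) / Gamma(b_k/2) ] evaluated via the
# recursion Gamma(y + 1) = y * Gamma(y); a log-Gamma routine is only
# needed for the leftover ratio when n_j is odd.
def neg_log_gamma_ratio(b_k, n_j):
    h = n_j // 2
    # each recursion step peels off one factor ((b_k + n_j)/2 - x)
    s = sum(math.log((b_k + n_j) / 2 - x) for x in range(1, h + 1))
    if n_j % 2 == 0:
        return -s
    # odd n_j leaves the ratio Gamma(b_k/2 + 1/2) / Gamma(b_k/2)
    return -s - (math.lgamma(b_k / 2 + 0.5) - math.lgamma(b_k / 2))

# sanity check against evaluating both log-Gammas directly
for b_k, n_j in [(3.7, 6), (3.7, 7), (0.9, 1), (5.2, 12)]:
    direct = -(math.lgamma((b_k + n_j) / 2) - math.lgamma(b_k / 2))
    assert abs(neg_log_gamma_ratio(b_k, n_j) - direct) < 1e-10

# Truncated series for the first two derivatives of log Gamma(y),
# compared against finite differences of math.lgamma.
EULER_C = 0.5772156649015329  # Euler's constant
N = 200_000                   # truncation point of the series

def dlog_gamma(y):
    return -EULER_C - 1 / y + sum(1 / i - 1 / (y + i) for i in range(1, N))

def d2log_gamma(y):
    return 1 / y ** 2 + sum(1 / (y + i) ** 2 for i in range(1, N))

y, eps = 2.3, 1e-5
fd1 = (math.lgamma(y + eps) - math.lgamma(y - eps)) / (2 * eps)
fd2 = (math.lgamma(y + eps) - 2 * math.lgamma(y) + math.lgamma(y - eps)) / eps ** 2
assert abs(dlog_gamma(y) - fd1) < 1e-4
assert abs(d2log_gamma(y) - fd2) < 1e-3
print("all checks passed")
```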

We would like to further analyze the model in TLD to get more insight into the

interpretation of the parameters in this method. We restrict our discussion to one

dimension so that we can discard the subscript k. The parameters a and b together define the properties of σ². The parameter m, the mean of μ, is quite independent of the other parameters, whereas w depends on both the variance of μ and the (expected value of) σ². Thus it seems that the most difficult part of the interpretation comes from the parameters a and b. If we fix the values of w and m, and assume some reasonable values for the sample mean x̄ and the sample variance S²_{n−1}, we can get a

³Note that the Quasi-Newton method we use does not itself need the Hessian matrix. We provide the Hessian to give better solutions in the case of the bound constraints.


log-likelihood (LL) function similar to that in Figure 7.3, which shows the value of the LL function we constructed for different values of a and b, denoted by LL(a, b).

Although it looks flat, there are maximal points in LL, according to the contour plot on the a-b plane. As a matter of fact, the maximal points seem to lie on a

particular line. To explain this, note that σ² follows an Inverse-Gamma distribution; alternatively, a/σ² follows a Chi-squared distribution with b degrees of freedom. Thus the density function of σ² is:

    dF(σ²) = (a/(2σ²))^{b/2} exp[−a/(2σ²)] / (σ² Γ(b/2)) dσ²

If we calculate the mean (i.e. the first moment), it is:

    E(σ²) = ∫₀^∞ (a/(2σ²))^{b/2} exp[−a/(2σ²)] / Γ(b/2) dσ²

By setting y = a/(2σ²), we have

    E(σ²) = (1/Γ(b/2)) ∫₀^∞ y^{b/2} exp(−y) (a/(2y²)) dy
          = (a/(2Γ(b/2))) ∫₀^∞ y^{b/2−2} exp(−y) dy

and if 0 < b/2 ≠ 1, we use integration by parts,

    E(σ²) = (a/(2Γ(b/2))) · 1/(b/2 − 1) · [ y^{b/2−1} exp(−y)|₀^∞ + ∫₀^∞ y^{b/2−1} exp(−y) dy ]
          = (a/(2Γ(b/2))) · 1/(b/2 − 1) · [ y^{b/2−1} exp(−y)|_{y→∞} − y^{b/2−1} exp(−y)|_{y→0} + Γ(b/2) ]
          = (a/(2Γ(b/2))) · 1/(b/2 − 1) · [ 0 − y^{b/2−1} exp(−y)|_{y→0} + Γ(b/2) ]

If b/2 > 1, i.e. b > 2, then y^{b/2−1} exp(−y)|_{y→0} = 0, so E(σ²) = a/(b − 2). Otherwise, if b < 2, y^{b/2−1} exp(−y)|_{y→0} = ∞ and thus E(σ²) = ∞. If b = 2,

    E(σ²) = (a/(2Γ(b/2))) ∫₀^∞ y^{−1} exp(−y) dy = ∞.

Therefore, if b ≤ 2, the distribution is proper but the mean does not exist; otherwise the mean is a/(b − 2).

Hence as long as the ratio of a to b − 2 is kept constant, the (expected) value of σ² is the same, and so is the LL function value given the other parameters and the data. But note that the variance of σ² is not the same even if a/(b − 2) is constant, because Var(σ²) = 2a²/[(b − 2)²(b − 4)], provided that b > 4. That is why in Figure 7.3 the values of a and b corresponding to the maximal LL values are not exactly linear, but very close to linear.
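A quick Monte Carlo check of this analysis (our own illustration, not part of the thesis code): draw σ² = a/X with X ~ χ²_b, so that σ² is Inverse-Gamma distributed, and compare the sample mean with a/(b − 2). We pick b = 8 > 4 so that the variance is finite as well.

```python
import random

random.seed(1)

def sample_sigma2(a, b):
    # a chi-squared(b) draw is a Gamma(b/2, scale=2) draw
    return a / random.gammavariate(b / 2, 2.0)

a, b = 6.0, 8.0
n = 200_000
mean = sum(sample_sigma2(a, b) for _ in range(n)) / n
print(mean, a / (b - 2))   # the two values should be close
```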

This analysis is useful for performing feature selection using the TLD method. Even if two features have different a_k and b_k values but similar values of a_k/(b_k − 2), w_k, and m_k, they are still not useful for discrimination. Moreover, the parameter w_k denotes the ratio Var(μ_k)/σ_k². Given Var(μ_k), w_k actually depends on a_k and b_k. Experiments (on the artificial data) show that if we specify an independent Var(μ_k) and b_k > 2, the TLD method will find w_k as Var(μ_k)/(a_k/(b_k − 2)), which is reasonable according to the above analysis. However, if b_k ≤ 2, whatever TLD finds will be uninterpretable.

Unfortunately, we did find many such attributes in the Musk data, which may be a

reason why the simplified model TLDSimple can work better than TLD on this

data. On the other hand, when we applied TLD on the kiwifruit datasets described

in Chapter 6, we did not observe this (adverse) phenomenon (i.e. all b_k's > 2). As

a result, TLD can be applied successfully and, as discussed in Chapter 6, it actually

outperformed TLDSimple on this data.


7.5 Algorithmic Analysis of DD

In this section we briefly describe how we can view the Diverse Density (DD) algorithm [Maron, 1998] as a maximum binomial likelihood method, and how we

should interpret its parameters.

The Diverse Density (DD) method was proposed by Maron [Maron, 1998; Maron

and Lozano-Perez, 1998] and has EM-DD [Zhang and Goldman, 2002] as its suc-

cessor, which claims to be an application of the EM algorithm to the most-likely-cause model of the DD method. However, we think that the EM-DD algorithm has

problems finding a maximum likelihood estimate (MLE), as shown in Appendix D.

Thus we are strongly skeptical about the validity of EM-DD's solution as an ML

method. We only discuss the DD algorithm here.

The basic idea of DD is to find a point x in the instance space such that as many positive bags as possible overlap on the point but no (or few) negative bags cover it. The "diverse density" of a point x stands for the probability of this point being the "true concept". Mathematically,

    DD(x) = Pr(x = t | B₁⁺, ⋯, B_n⁺, B₁⁻, ⋯, B_m⁻)

where B is one bag. The "+/−" sign indicates a bag's class label and t is the true concept. DD aims to "maximize this probability with respect to t, to find the target concept t that is most likely to agree with the data" [Maron, 1998]. After some manipulations, it is equivalent to maximizing the following "likelihood" function

    L(x | B) = ∏_{1≤i≤n} Pr(x = t | B_i⁺) ∏_{1≤j≤m} Pr(x = t | B_j⁻)    (7.1)

We believe this notation is confusing and may even disguise the essence of the DD

method. Hence we would like to express this method using a different perspective.

As is usually done in single-instance learning, we would like to explicitly represent

the class labels as a variable Y. In MI, there is still a Y, not for each instance, but


for each bag. In general Y = 1, 2, ⋯, K for K-class data, but the Musk datasets are two-class data. Here we say Y = 0 if the bag is negative and 1 if it is positive. In the above formulation this variable is disguised by the "+" and "−" signs, and we believe that Pr(x = t | B_i⁺) really means Pr(Y = 1 | B_i), and likewise Pr(x = t | B_j⁻) means Pr(Y = 0 | B_j).

[Maron, 1998] proposes two approximate generative models, namely the noisy-or and the most-likely-cause models, to calculate Pr(x = t | B_i⁺) and Pr(x = t | B_i⁻) in practical problems. They can be described as follows:

• Noisy-or model: DD borrows the noisy-or model from the Bayesian networks' literature. Noisy-or calculates the probability of an event happening in the face of a few causes, assuming that the probability of any cause failing to trigger this event is independent of any other causes. DD regards an instance as a cause and "t being the underlying concept" as the event, hence, using the original notation,

    Pr(x = t | B_i⁻) = ∏_j [1 − Pr(x = t | B_ij)]

and

    Pr(x = t | B_i⁺) = 1 − ∏_j [1 − Pr(x = t | B_ij)].

• Most-likely-cause model: In the most-likely-cause model, only one instance

in each bag is regarded as the cause for making a certain x the "true concept", say, the z_i-th instance in the i-th bag. z_i is defined as

    z_i = argmax_z {Pr(x = t | B_iz)}.

As before, we have the two complementary expressions

    Pr(x = t | B_i⁻) = 1 − max_j Pr(x = t | B_ij⁻)

and

    Pr(x = t | B_i⁺) = max_j Pr(x = t | B_ij⁺).

A well-known category of single-instance learners models the posterior probability

of Y given the feature data X, Pr(Y = k | X) (k = 1, 2), directly, e.g. logistic

regression or entropy-based decision trees. Logistic regression models the process

that determines an instance’s class label as a one-stage Bernoulli process and hence

fits a one-stage binomial distribution for Pr(Y = k | X). It then uses the ML method

to estimate the parameters involved. According to our understanding, DD still uses

the above single-instance paradigm. Each point (i.e. each instance) in the instance

space has a probability of being positive or negative, and DD models the posterior

probability of Y directly. The difference is that we now have multiple instances

associated with one class label.

In the noisy-or model, we model the process that determines the class label of a bag as an m-stage Bernoulli process, where m is the number of instances inside the bag — each one of the instances corresponds to one stage. Suppose we knew the probability of each instance being positive, Pr(Y = 1 | x_b) (b = 1, 2, ⋯, m); then we could first determine the class label of each instance (or stage) according to its class probability. Given this, what is the probability of all the instances (stages) being negative if they are independent? This is a simple question and the solution is ∏_{b=1}^{m} (1 − Pr(Y = 1 | x_b)). The complementary probability 1 − ∏_{b=1}^{m} (1 − Pr(Y = 1 | x_b)) corresponds to the probability of at least one of them being positive. This is a probabilistic representation of the standard MI assumption. Now the log-likelihood function of this multi-stage binomial probability is simply

    L = Σ_B [ Y log(1 − ∏_{b=1}^{m} (1 − Pr(Y = 1 | x_b))) + (1 − Y) log(∏_{b=1}^{m} (1 − Pr(Y = 1 | x_b))) ]    (7.2)

where there are B bags in total. This is exactly what DD with the noisy-or model uses. Within this multi-stage Bernoulli process, it is straightforward to write down other formulae if the conditions of the MI assumption change, for instance, to "a bag is positive if at least r (1 < r < m) instances are positive and negative


otherwise”, etc.
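The noisy-or bag probability and the log-likelihood of Equation 7.2 can be sketched directly; the instance-level model below is a made-up logistic function purely for illustration.

```python
import math

def instance_prob(x, w=1.5, c=0.0):
    # hypothetical instance-level model Pr(Y = 1 | x)
    return 1 / (1 + math.exp(-w * (x - c)))

def noisy_or_bag_prob(bag):
    # Pr(bag positive) = 1 - prod_b (1 - Pr(Y = 1 | x_b))
    prod = 1.0
    for x in bag:
        prod *= 1 - instance_prob(x)
    return 1 - prod

def log_likelihood(bags, labels):
    # Equation 7.2: the multi-stage binomial log-likelihood
    ll = 0.0
    for bag, y in zip(bags, labels):
        p = noisy_or_bag_prob(bag)
        ll += y * math.log(p) + (1 - y) * math.log(1 - p)
    return ll

bags = [[-2.0, 3.1, -0.5], [-3.0, -1.2], [2.5, 0.4]]
labels = [1, 0, 1]
print(noisy_or_bag_prob(bags[0]), log_likelihood(bags, labels))
```

Replacing `noisy_or_bag_prob` with another combination rule (e.g. an "at least r positives" rule) changes only the bag-level probability, not the binomial likelihood around it.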

The most-likely-cause model, on the other hand, selects a representative of each bag based on Pr(Y = 1 | x_b). More specifically, b = argmax_{1≤b≤m} {Pr(Y = 1 | x_b)}, i.e. it selects the instance in a bag with the highest probability of being positive. Therefore it literally degrades a bag into one instance, and the one-stage Bernoulli process is applied to determine the class label, as in the mono-instance case. The log-likelihood function is now

    L = Σ_B [ Y log(max_{1≤b≤m} {Pr(Y = 1 | x_b)}) + (1 − Y) log(1 − max_{1≤b≤m} {Pr(Y = 1 | x_b)}) ]
      = Σ_B [ Y log(max_{1≤b≤m} Pr(Y = 1 | x_b)) + (1 − Y) log(min_{1≤b≤m} Pr(Y = 0 | x_b)) ]    (7.3)

where there are B bags in total. This is exactly what DD with the most-likely-cause model uses. The most-likely-cause model also follows the standard MI assumption because, by selecting the instance in a negative bag that has the maximal probability of being positive and setting that probability (via the binomial model) to less than 0.5, it implies that the probability of being positive for every instance in a negative bag cannot be greater than 0.5.

This means that DD uses the binomial probability formulation, either one-stage or multi-stage, to model the class probabilities of the bags. In general, we can separate the modeling into two levels if we introduce a new variable denoting the bags, B. On the bag level we always have a one-stage binomial formulation for Pr(Y | B), and on the instance level we build a relationship between Pr(Y | B) and Pr(Y_X | X), where Y_X is an instance's class label, which is unobservable from the data. Mathematically, at the bag level, assuming i.i.d. data,⁴ the marginal probability

⁴We assume that the class labels Y_{B_i} of the bags B_i are independent and identically distributed (i.i.d.) according to a binomial (or multinomial in a multi-class problem) distribution.


of Y is a one-stage binomial distribution,

    Pr(Y | B_i) = π^{Y_{B_i}} (1 − π)^{1 − Y_{B_i}}

where π = Pr(Y = 1 | B_i). Normally we have a parametric model for π with a parameter vector β estimated using the data, i.e. π = g(β; B_i) in this case. Thus we can regard the marginal probability of Y as being parameterized by β. In other words, the likelihood function for β is

    L(β | Y_B) = Pr(Y_{B_1}, ⋯, Y_{B_N} | B_1, ⋯, B_N; β)
               = ∏_{1≤k≤N} g(β; B_k)^{Y_{B_k}} (1 − g(β; B_k))^{1 − Y_{B_k}}

and, assuming n positive bags and m negative bags,

               = ∏_{1≤i≤n} g(β; B_i) ∏_{1≤j≤m} (1 − g(β; B_j)).    (7.4)

At the instance level, we build a relationship between Pr(Y | B_k) and Pr(Y_X | X_kl) with X_kl ∈ B_k, i.e. Pr(Y | B_k) = h(Pr(Y_X | X_kl)) ⇒ g(β; B_k) = h(f(β; X_kl)), where f(β; X_kl) = Pr(Y_X | X_kl). In the noisy-or model, h(f) = 1 − ∏_{l=1}^{n_k} (1 − f(β; X_kl)). In the most-likely-cause model, h(f) = max_l {f(β; X_kl)}. One could plug in other h(·)'s based on other assumptions believed to be true, but the binomial likelihood function in Equation 7.4 remains unchanged. This perspective on DD establishes its relationship with the MI methods described in Chapter 4.

The likelihood in Equation 7.4 is a generalization of Equations 7.2 and 7.3, and it is identical to the "likelihood" function in Equation 7.1 that was given in the original description of DD. Now if we think of h(·) as fixed and estimate β by maximizing the likelihood function, it is easily recognized that DD is simply a maximum binomial likelihood method.

The last question to ask is how to establish an exact formula for the instance-level probability Pr(Y_X | X_kl) = f(β; X_kl). [Maron, 1998] proposed three ways.

One is to use exp(−||X_kl − p||²), where ||·|| is the Euclidean norm and p is the parameter standing for a point in the instance space. The second one is to use exp(−||s(X_kl − p)||²), where s is a diagonal matrix with diagonal elements that are the scaling parameters for the different dimensions. The last model is a variation of the second one that models a set of disjunctive concepts, each of which is the same as the second model. As a matter of fact, suppose we knew there are D concepts to be found. Then the DD method would have D sets of parameters (instead of one set of parameters) v_1, v_2, ⋯, v_D, where v_d, d = 1, 2, ⋯, D, is a vector of parameters consisting of both point and scaling parameters (p and s) for each dimension. In the process of searching for the values of the p_d's and s_d's, the probability of each X_kl is associated with only one concept — the one that gives X_kl the highest Pr(Y = 1 | X_kl). In other words, Pr(Y = 1 | X_kl) is calculated as max_d {exp(−||s_d(X_kl − p_d)||²)}.

The formulation of Pr(Y_X | X_kl) (f(β; X_kl)) is in a radial (i.e. "Gaussian-like")

form, with a center of p_t and a dispersion of 1/s_t in the t-th dimension (where s_t is the t-th diagonal element of s). The closer an instance is to p, the higher its probability of being positive. And the dispersion determines the decision boundary of the classification, i.e. the threshold where Pr(Y_X | X_kl) = 0.5. It is similar to the axis-parallel hyper-rectangle (APR) [Dietterich et al., 1997] on the instance level. But the APR is not differentiable. In order to make it differentiable, DD essentially models the (instance-level) decision boundary as an axis-parallel hyper-ellipse (APE) instead of a hyper-rectangle. The semi-axis of this APE along the t-th dimension is √(log 2)/s_t. For example, in a two-dimensional space, the decision boundary is

    exp[−s₁²(x₁ − p₁)² − s₂²(x₂ − p₂)²] = 0.5
    ⇒ (x₁ − p₁)² / (log 2 / s₁²) + (x₂ − p₂)² / (log 2 / s₂²) = 1

where p₁, p₂, s₁, s₂ are the parameters to be estimated. We know this is an ellipse centered at (p₁, p₂), with semi-axes √(log 2)/s₁ and √(log 2)/s₂ along the two axes. Any point within this ellipse should be classified as positive. This is exactly the second formulation in DD described above. The third model in DD models more than one


APE using f(β; X_kl). No matter what the formulation of f(β; X_kl) is, note that β is an instance-level parameter and DD aims to "recover" the instance-level probability in a structured form under the MI assumption. Hence we categorize the DD method as an instance-based approach.
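The radial instance-level model and its elliptical 0.5-boundary can be sketched as follows; the values of p and s are arbitrary choices for illustration.

```python
import math

# DD's radial instance-level model in two dimensions:
# Pr(Y = 1 | x) = exp(-s1^2 (x1 - p1)^2 - s2^2 (x2 - p2)^2).
# Its 0.5-contour is an axis-parallel ellipse with semi-axes sqrt(log 2)/s_t.

p = (1.0, -0.5)     # center (point parameter)
s = (2.0, 0.5)      # scaling parameters (diagonal of s)

def radial_prob(x):
    return math.exp(-sum((st * (xt - pt)) ** 2 for xt, pt, st in zip(x, p, s)))

# at the center the probability is 1
assert radial_prob(p) == 1.0

# a point on the ellipse boundary: move sqrt(log 2)/s1 along the first axis
boundary = (p[0] + math.sqrt(math.log(2)) / s[0], p[1])
print(radial_prob(boundary))
```

The probability at `boundary` evaluates to exp(−log 2) = 0.5, confirming that the boundary of the positive region is already fully determined by p and s.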

Nonetheless, DD interprets the parameter vector s purely as a scaling parameter and does not recognize that it is related to the size of the decision boundary. As a result it never uses it for classifying a new exemplar (bag). Instead it tries to find new axis-parallel thresholds via an additional time-consuming optimization procedure. We regard this as unnecessary because the instance-level probability has already been recovered (parameterized by p and s), so why not use it? We hence suggest that all

the parameters are simply plugged into the noisy-or (or most-likely-cause) model to calculate the binomial probability of Y_{B_new} for a new bag B_new. The classification is made depending on whether this probability is greater than 0.5. We have done some experiments with DD based on the noisy-or model but without searching for the threshold (i.e. using 0.5 as the threshold) and found that the accuracy of the DD method over 10 runs of 10-fold cross-validation (CV) is 87.07%±1.40% on the Musk1 data and 83.24%±2.29% on the Musk2 data. These are very similar to the best results reported when searching for the threshold, which are 88.9% on the Musk1 data and 82.5% on the Musk2 data,⁵ but the computational expense is greatly reduced.

The misunderstanding of the parameters may also have compromised the feature selection in DD, which was discussed in Section 7.3.

Finally we discuss the optimization problem in the ML method in DD. There is no difficulty in numerically maximizing the "likelihood" function with the noisy-or model: L in Equation 7.2 can be maximized directly via a numeric optimization procedure. But in the most-likely-cause model, there are max functions in the likelihood of Equation 7.3, which make it non-differentiable. The "softmax" function is used in DD, which is the standard way to make the max function differentiable.
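One common form of this softmax smoothing can be sketched as follows; the exact variant used in DD may differ in its details, and the temperature parameter a below is our own choice.

```python
import math

# Softmax smoothing of the max function: the weighted average
#   sum_i x_i * exp(a * x_i) / sum_i exp(a * x_i)
# is differentiable everywhere and approaches max_i x_i as a grows.

def softmax_max(xs, a=20.0):
    weights = [math.exp(a * x) for x in xs]
    return sum(x * w for x, w in zip(xs, weights)) / sum(weights)

probs = [0.10, 0.45, 0.80, 0.30]
print(softmax_max(probs), max(probs))  # the two values should be close
```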

The EM-DD method [Zhang and Goldman, 2002] was proposed to overcome the

⁵Because [Maron, 1998] did not report how many runs of 10-fold CV were used, nor the standard deviation of the accuracies, we cannot perform a significance test to see whether the differences are significant.


difficulty of non-differentiability and to make this model faster. However, as shown

in Appendix D, it has problems finding the MLE.

Even with the noisy-or model, DD still has a difficult global optimization problem to solve due to the radial form of Pr(Y_X | X_kl). The usual way to tackle the global optimization problem is to try different initial values when searching for the optimal value of the variables. [Maron, 1998] proposed a strategy that starts the search from the value of every instance within all the positive bags. However this strategy is computationally too expensive to be practical, especially on large datasets like Musk2. [Maron, 1998] also mentioned that, theoretically, starting from every instance within one positive bag can be enough to approximately find the point with the highest diverse density. We thus adopt the latter strategy in our implementation of DD (in the MI package, as described in Appendix A). More precisely, we picked the positive bag(s) with the largest size (all of them if there is more than one), and tried every instance within the bag(s) as the start value of the search. In fact we observed that on the Musk1 data this strategy gives a higher accuracy than the strategy that starts from instances' values in all the positive bags. However, the improvement proposed in Section 4.6 of Chapter 4, namely changing the formulation of Pr(Y_X | X_kl), can help avoid this inconvenience in the optimization process.

In summary, we recognize DD as a parametric method that uses the maximum binomial likelihood method to recover the instance-level probability function in an APE form based on the MI assumption. Therefore it is a member of the "APR-like + MI assumption" family.


Chapter 8

Conclusions and Future Work

The approach adopted in this thesis is a “conservative” one in the sense that it is

similar to existing methods of multiple instance learning. Much of the work is an

extension or a result derived from the statistical interpretation of current methods in

either single-instance or MI learning. Although some new MI methods have been

described in this thesis, we basically adopted a theoretical perspective similar to

that of the current methods. Hence much of the richness of the multiple instance

problems is left to be explored. In this chapter, we first summarize what has been

done in this thesis. Then we propose what could be done to more systematically

explore MI learning in a statistical classification context.

8.1 Conclusions

In this thesis, we first presented a framework for MI learning, based on which we

summarized MI methods published in the literature (Chapter2). We defined two

main categories of MI methods: instance-based and metadata-based approaches.

While instance-based methods focus on modeling the class probability of each in-

stance and then combine the instance-level probabilities into bag-level ones, meta-

data-based methods extract metadata from each bag and model the metadata directly. Note that in the instance-based methods, the combination of the instance-

level predictions into bag-level ones requires some assumptions.


The standard assumption that can be found in the literature is the MI assumption.

We proposed a new assumption instead of the MI assumption and called it the "collective assumption". We also explained that some of the current MI methods have

implicitly used this assumption. Under the collective assumption, we developed

new methods that fall into two categories: bag-conditionaland group-conditional

approaches.

A bag-conditional approach models the probability of a class given a bag of n instances, Pr(Y | X₁, ⋯, X_n) (or some transformation of this probability). Under the collective assumption we can model it as some function f[·] of the point-conditional probability Pr(Y | X) (or a transformation of this probability), i.e.

    Pr(Y | X₁, ⋯, X_n) = f[Pr(Y | X_i)], i = 1, ⋯, n.

Because many single-instance learners model Pr(Y | X) (or a transformation of it),

we can either wrap around them (Chapter 3) or upgrade them (Chapter 4) to enable

them to deal with MI data. The resulting methods are instance-based MI learners.
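As an illustration of this bag-conditional scheme, here is a sketch that combines instance-level probabilities into a bag-level one using a normalized geometric mean as one simple choice of f[·]; the instance-level model below is a made-up logistic function, not one of the thesis' learners.

```python
import math

def instance_prob(x):
    # hypothetical single-instance model Pr(Y = 1 | x)
    return 1 / (1 + math.exp(-x))

def bag_prob(bag):
    # normalized geometric mean of the instance-level class probabilities
    log_p = sum(math.log(instance_prob(x)) for x in bag) / len(bag)
    log_q = sum(math.log(1 - instance_prob(x)) for x in bag) / len(bag)
    return math.exp(log_p) / (math.exp(log_p) + math.exp(log_q))

print(bag_prob([0.2, 1.5, -0.4]))   # a single probability for the whole bag
```

Swapping `bag_prob` for an arithmetic mean (or another f[·]) changes the combination rule while leaving the underlying single-instance model untouched, which is exactly how the wrapping/upgrading approaches reuse single-instance learners.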

A group-conditional approach models the probability density of a bag of n instances given a class, i.e. Pr(X₁, ⋯, X_n | Y), and then calculates Pr(Y | X₁, ⋯, X_n) based on Pr(X₁, ⋯, X_n | Y) and Bayes' rule. It is not obvious how to model the density Pr(X₁, ⋯, X_n | Y) directly. Under the collective assumption, we could have simply assumed X₁, ⋯, X_n are from the same density. However, the generative model would have been too simple to solve real-world problems. Instead, we assumed that all instances from the same bag are from the same density while different bags correspond to different (instance-level) densities. We then related the parameters of these densities to one another by using a hyper-distribution (or bag-level distribution) on the parameters. This resulted in a two-level distribution (TLD) solution (Chapter 5) to MI learning. This is essentially a metadata-based approach. We discovered that this approach is an application of the empirical Bayes method from statistics to the MI classification problem.

Then we explored some practical applications of MI learning — drug activity


prediction, fruit disease prediction and content-based image categorization (Chapter 6). We also performed some experiments on datasets from these practical domains and found that the methods developed in this thesis are competitive with

Finally we presented some important algorithmic details inthe methods discussed

in this thesis (Chapter 7). These include numeric optimization techniques, artificial

data generation details, feature selection on the Musk datasets, algorithmic details

of the TLD method, and the analysis of the DD algorithm [Maron, 1998].

As a by-product of this thesis, we also discovered the relationship between MI learning and the meta-analysis problem in statistics (Section 5.6 in Chapter 5), pointed out errors in some of the current MI methods (Appendix D), and implemented some numeric procedures for the WEKA workbench [Witten and Frank, 1999] (Chapter 7 and Appendix B).

8.2 Future Work

MI learning differs from single-instance learning in two ways: (1) it has multiple

instances in an example, and (2) only one class label is observable in the data for

each bag of instances. Although the name “multiple instance” seems to denote only

the first property, it has become a convention in MI learning that both should be

satisfied. Let us factorize these two ways into two steps, which may help us see a

direction for future work on MI learning from a statistical perspective.

Learning problems with multiple instances per bag

First, let us consider a problem simpler than the MI problem — we have multiple instances in an example, but each instance has its own class label. In other words, we

construct the data as in single-instance learning, adding one more attribute named


“Bag ID” that indicates which bag an instance is from. At testing time, a new bag of

instances is given but each instance is to be classified individually. The reader might

think that this is an uninteresting problem because we could apply single-instance learners directly to solve this problem by deleting the "Bag ID". However, this line of thinking may not be true. If the fact that some instances are from the same bag indeed provides us with some additional information about their class labels, none

of the single-instance learners can perform well on this problem because they all

ignore this extra information that implicitly resides in the data.

For example, suppose the posterior (class) probability of each instance is dominated by some parameter θ, Pr(Y | X, θ), where X includes all the attributes except the "Bag ID" attribute. Now suppose θ changes from bag to bag, following a specific distribution. Then we have θ₁ for the first bag and can generate its instances' class labels according to Pr(Y | X, θ₁), θ₂ for another bag, generating its instances' class labels based on Pr(Y | X, θ₂), etc. Obviously normal single-

instance learners are not expected to deal with this data because they cannot use

the information provided by the “Bag ID” attribute. Since this information resides

in the data, there is room to develop a new family of methods that can fully utilize

the bag information. Such new methods may outperform normal single-instance learners on this type of problem.
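The generative story above can be sketched in a few lines of Java. The Gaussian choices for the bag-level parameter and the attribute, and the logistic form of $\Pr(Y=1|x, \theta)$, are illustrative assumptions, not part of any method in this thesis:

```java
import java.util.Random;

// Toy "semi-MI" data generator: each bag draws its own parameter theta,
// and instance labels inside the bag follow Pr(Y=1 | x, theta).
public class SemiMIGenerator {
    // Pr(Y=1 | x, theta) under the illustrative logistic model.
    static double prob(double theta, double x) {
        return 1.0 / (1.0 + Math.exp(-theta * x));
    }
    public static void main(String[] args) {
        Random rnd = new Random(1);
        for (int bag = 0; bag < 3; bag++) {
            double theta = rnd.nextGaussian();       // bag-level parameter
            for (int i = 0; i < 4; i++) {
                double x = rnd.nextGaussian();       // instance attribute
                int y = rnd.nextDouble() < prob(theta, x) ? 1 : 0;
                System.out.printf("bagID=%d x=%.3f y=%d%n", bag, x, y);
            }
        }
    }
}
```

A learner that ignores the "Bag ID" column sees only $(x, y)$ pairs and cannot recover the bag-specific $\theta$, which is exactly the information a semi-MI method could exploit.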

We call such a problem a "semi-MI" problem because the second property of MI problems is not satisfied. As shown above, much of the richness of multiple instance learning already appears in semi-MI problems where normal single-instance

learning cannot be applied. When classifying a test instance in semi-MI learning,

we can regard the rest of the instances within the same bag as an “environment”

for the classification. Even if the instance to be classified does not change, the

classification may change if the “environment” (i.e. other instances within the bag)

changes. Ignoring this contextual information may not give an accurate prediction.

Nevertheless, MI research seems to regard the semi-MI problem as the same setting as normal single-instance learning, and semi-MI problems do not appear to


be actively researched. Almost all current MI methods, including the methods in this thesis (except the TLD approach), have not thoroughly and systematically explored the extra information provided purely by the setting of multiple instances per example. Therefore we regard this as the first step in tackling MI problems in the

future.

Note that even when there is only one instance per bag, i.e. the data degenerates

into mono-instance data, methods that treat instances as degenerated bags may be

totally different from normal single-instance learners. There is some work in normal

single-instance learning that already has such a perspective. Such methods can be

shown to have some asymptotically optimal properties [Wang and Witten, 2002].

The setting of multiple instances per bag is not restricted to the classification case.

It can be extended naturally to regression and clustering, which may be more com-

monly seen in practical problems. Therefore we strongly advocate the study of

“semi-MI” problems in the MI domain.

One class label for a bag

Once we have fully explored the richness of semi-MI problems, we can consider

MI problems where the instances' labels are not observable. This can be done, for example, based on some assumptions that relate a bag's class label to the corresponding instances' class labels. The MI assumption has been adopted by many current (instance-based) MI methods, and the collective assumption is adopted in this thesis. Future work is likely to focus on the assumptions made. The study of assumptions can follow two directions: (1) the creation and formulation of new assumptions, and

(2) the interpretation and assessment of existing assumptions.

Note that the categorization in our framework described in Chapter 2 is actually

highly related to the assumptions. In instance-based methods, the underlying as-

sumptions are purely related to the (unobservable) instances’ class labels, while

in metadata-based methods the assumptions are, partly or solely, associated with


the attribute values of the instances. Note that if the assumptions are no longer

associated with the instances' (latent) class labels (as in metadata-based methods' generative models), the problem is not related to either single-instance or semi-MI learning because, whether the instances' class labels exist or not, the bags' class labels are generated by some procedures irrelevant to the instances' labels. In the

future, more assumptions can be created within this framework. Usually the domain

knowledge gives rise to these assumptions, and the assumptions reside outside the

data. The prediction could benefit from incorporating some forms of background

knowledge. A common way to incorporate background knowledge is to formulate

it mathematically in the model; thus we are typically interested in the exact formulation of the assumptions involved.

The second avenue of future work regarding assumptions is to assess and interpret existing assumptions, using both domain knowledge and data. Currently the assessment of the validity of the assumptions on a specific dataset is performed via prediction accuracy on the data. However, there is sometimes a dilemma. On

the one hand, methods based on seemingly sound domain knowledge may not per-

form well on corresponding datasets. On the other hand, methods that perform

well on practical datasets may be based on some assumptions whose interpretation

in the corresponding domain is not straightforward. Therefore we need to acquire

both strong background knowledge and modeling skills to fully understand some

assumptions. Such efforts may lead to breakthroughs in the understanding of the

domain and in the understanding of the learning algorithms.

Applications

We expect that multiple instance learning will keep attracting researchers, mainly

due to its prospective applications in various areas. However, one of the biggest

obstacles is the lack of fielded applications and publicly available datasets. More MI

datasets and practical applications would stimulate MI research on real-world problems. In fact, we have observed that there are many datasets in which instances


are grouped into "bags" while each instance has its own class label. Hence "semi-

MI” learning may actually be more promising in terms of applications in the real

world.

On the whole, we regard research on MI learning as still in its early stages. Much

work on algorithm development, property analysis and practical applications re-

mains to be done.


Appendix A

Java Classes for MI Learning

We have developed a multiple instance package using the Java programming language for this project. A schematic description is shown in Figure A.1. This package is directly derived or modified from the corresponding code in the WEKA workbench [Witten and Frank, 1999].1 We put the programs for the MI experiment environment and MI methods directly into the MI package, some artificial data generation procedures into the MI.data sub-package, and some data visualization tools into the MI.visualize sub-package. Although the documentation of the programs is self-explanatory, we briefly describe some of them in the following.

A.1 The “MI” Package

We first developed an experiment environment for MI learning, which includes evaluation tools, some interfaces and related classes. Some of the classes are:

• MI.MIEvaluation: A bag-level evaluation environment for MI algorithms.

• MI.Exemplar: The class for storing an exemplar, or a bag. Each exemplar has multiple instances but only one class label.

Footnote 1: As a matter of fact, we copied some of the source code from the WEKA files.


[Figure A.1 diagram: the MI package (MI algorithms and experimental environment), with sub-packages MI.data (MI data generation procedures) and MI.visualize (MI data visualization tools).]

Figure A.1: A schematic description of the MI package.

• MI.Exemplars: The class holding a set of exemplars.

• MI.MIClassifier: An interface for any MI classifier that can provide a prediction given a test exemplar.

• MI.MIDistributionClassifier: An interface for any MI classifier that can provide a class distribution given a test exemplar. It extends MI.MIClassifier.
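For illustration, the two interfaces described above might look roughly as follows. This is a hypothetical minimal sketch, not the actual thesis source: a bag is reduced to a plain list of attribute vectors, and the toy implementation simply encodes the MI assumption:

```java
import java.util.List;

// Hypothetical minimal sketch of the interfaces described above.
interface MIClassifier {
    double classifyExemplar(List<double[]> bag);
}

interface MIDistributionClassifier extends MIClassifier {
    double[] distributionForExemplar(List<double[]> bag);
}

// Toy implementation: predicts "positive" when any instance has a
// positive first attribute, echoing the MI assumption.
public class MIAssumptionClassifier implements MIDistributionClassifier {
    public double[] distributionForExemplar(List<double[]> bag) {
        for (double[] inst : bag) {
            if (inst[0] > 0) return new double[]{0.0, 1.0};  // positive bag
        }
        return new double[]{1.0, 0.0};                       // negative bag
    }
    public double classifyExemplar(List<double[]> bag) {
        double[] dist = distributionForExemplar(bag);
        return dist[1] > dist[0] ? 1.0 : 0.0;
    }
    public static void main(String[] args) {
        MIAssumptionClassifier c = new MIAssumptionClassifier();
        List<double[]> bag = List.of(new double[]{-1.0}, new double[]{2.0});
        System.out.println(c.classifyExemplar(bag));   // prints 1.0
    }
}
```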

All the MI methods developed in this thesis, as well as some other MI methods, are implemented in the MI package. More precisely, they are:

• MI.MIWrapper: The wrapper method described in Chapter 3.

• MI.MILRGEOM: MILogisticRegressionGEOM described in Chapter 4.

• MI.MILRARITH: MILogisticRegressionARITH described in Chapter 4.

• MI.MIBoost: MI AdaBoost described in Chapter 4.

• MI.TLD: TLD described in Chapter 5.

• MI.TLDSimple: TLDSimple described in Chapter 5.

• MI.DD: The Diverse Density algorithm with the noisy-or model [Maron, 1998] that looks for one target concept. Details are described in Chapter 7.


A.2 The “MI.data” Package

The sub-package MI.data includes the classes that generate the artificial datasets used in this thesis:

• MI.data.MIDataPopulation: This class uses the population version of the generative models described in Chapter 3 (not used in this thesis).

• MI.data.MIDataSample: This class implements the sample version of the generative models described in Chapter 3. The artificial data generated by this procedure is used in Chapters 3 and 4, with different formulations of the density of the feature variable $X$, $\Pr(X)$.

• MI.data.TLDData: This class generates data according to what the method "TLD" models. In particular, given the parameters provided by the user, it first generates the variance $\sigma^2$ of a bag according to an Inverse-Gamma distribution parameterized by the user parameters, and then the mean $\mu$ of that bag based on a Gaussian distribution parameterized by the user parameters $m$ and $w$ and the generated variance $\sigma^2$ (i.e. it generates $\mu$ from $N(m, w\sigma^2)$). Finally, it generates a bag of instances according to a Gaussian parameterized by $\mu$ and $\sigma^2$. The datasets generated by this class are used to show the estimation properties of TLD in Chapter 5.

• MI.data.TLDSimpleData: This class generates data that fits what the method "TLDSimple" models. The details of the data generation process and the generated data are shown in Chapter 5. The generated datasets are used to show the estimation properties of TLDSimple.

• MI.data.BagStats: This is a class written by Nils Weidmann, with some functionality added by myself, to summarize bag information for a dataset. It was used for the descriptions of the datasets in Chapter 6.


A.3 The “MI.visualize” Package

In the sub-package MI.visualize, we developed some tools to visualize MI datasets. The key class is MI.visualize.MIExplorer, in which we implemented a data plot and a distribution plot for MI datasets. The data plot provides a 2D visualization of a dataset, with the "Bag ID" of each instance clearly specified. Some modifications of this plot were used in Chapters 3, 4 and 5 to illustrate the artificial datasets. The distribution plot attempts to capture the distributional properties within each bag, if possible. Thus it draws a distribution per bag using a kernel density approach. This type of plot was not used in this thesis.


Appendix B

weka.core.Optimization

This appendix serves as documentation of the weka.core.Optimization class. Interested readers or users of this class may find the description here helpful

to understand the algorithm.

In brief, the strategy we used is a “Quasi-Newton method based on projected gra-

dients”, which is primarily used to solve optimization problems subject to linear

inequality constraints. The details of the method are described in the following.

First of all, let us introduce the Newton method to solve an unconstrained opti-

mization problem. We shall convince ourselves, without a proof, that the following

procedures can find at least a local minimum if the objective function is smooth.1

The rigorous proof can be found in various optimization books like [Chong and

Zak, 1996], [Gill et al., 1981], [Dennis and Schnabel, 1983], etc.

1. Initialization. Set the iteration counter $k=0$, get initial variable values $x_0$, and calculate the Jacobian (gradient) vector $g_0$ and the Hessian (second derivative) matrix $H_0$ using $x_0$.

2. Check $g_k$. If it converges to 0, then STOP.

3. Solve for the search direction $d_k$: $H_k d_k = -g_k$. Alternatively, this step can be expressed as $d_k = -H_k^{-1} g_k$.

4. Take a step to get new variable values $x_{k+1}$. Normally this is done as one Newton step $d_k$; however, when the variable values are far from the function minimum, one Newton step may not guarantee a decrease of the objective function even if $d_k$ is a descent direction. Thus we look for a multiplier $\lambda$ such that $\lambda = \arg\min_\lambda f(x_k + \lambda d_k)$, where $f(\cdot)$ is the objective function to be minimized. The search for $\lambda$ is carried out using a line search, and once it is done, $x_{k+1} = x_k + \lambda d_k$.

5. Calculate the new gradient vector $g_{k+1}$ and the new Hessian matrix $H_{k+1}$ using $x_{k+1}$.

6. Set $k = k+1$. Go to 2.

Footnote 1: One can easily obtain the update in Step 4 by a Taylor series expansion to second order around $x_k$.

As a convention, $d_k$ is often referred to as the "Newton direction" or simply "direction", $\lambda$ as the "step length", and $\Delta x = \lambda d_k$ as the "Newton step" or simply "step". We adopt these terms here.
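A minimal one-dimensional sketch of this iteration in Java; the objective $f(x) = x^4/4 - x$ is an illustrative choice, with gradient $x^3 - 1$ and Hessian $3x^2$, so the minimizer is $x = 1$:

```java
// 1-D Newton iteration for f(x) = x^4/4 - x.
// Here the full Newton step is always taken (no line search).
public class NewtonDemo {
    static double minimize(double x0) {
        double x = x0;
        for (int k = 0; k < 100; k++) {
            double g = x * x * x - 1.0;        // gradient g_k
            if (Math.abs(g) < 1e-12) break;    // Step 2: convergence check
            double h = 3.0 * x * x;            // Hessian H_k (a scalar here)
            x += -g / h;                       // Steps 3-4: d_k = -g/H_k, full step
        }
        return x;
    }
    public static void main(String[] args) {
        System.out.println(minimize(2.0));     // converges to the minimizer 1
    }
}
```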

Note that in Step 4 above, we search for the exact value of $\lambda$ that minimizes the objective function. Such a line search is called an "exact line search", and it is computationally expensive. It has been recommended that only a value of $\lambda$ that leads to a sufficient decrease in the objective function is needed. In other words, an "inexact line search" is preferable in practice, and more computational resources should be put into searching for the values of $x$ instead of $\lambda$ [Dennis and Schnabel, 1983; Press et al., 1992]. Thus we use an inexact line search with backtracking and polynomial interpolation [Dennis and Schnabel, 1983; Press et al., 1992] in our project.
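The backtracking idea can be sketched as follows. This toy Java version uses the Armijo sufficient-decrease condition with simple step halving, standing in for the polynomial-interpolation search used in the actual implementation:

```java
// Backtracking (inexact) line search with the Armijo condition:
// accept lambda once f(x + lambda*d) <= f(x) + c * lambda * g * d,
// where g * d is the (1-D) directional derivative.
public class Backtracking {
    static double lineSearch(java.util.function.DoubleUnaryOperator f,
                             double x, double g, double d) {
        double c = 1e-4, lambda = 1.0, fx = f.applyAsDouble(x);
        while (f.applyAsDouble(x + lambda * d) > fx + c * lambda * g * d) {
            lambda *= 0.5;   // backtrack: halve the trial step
        }
        return lambda;
    }
    public static void main(String[] args) {
        // f(x) = x^2 at x = 1: gradient g = 2, descent direction d = -2.
        double lambda = lineSearch(t -> t * t, 1.0, 2.0, -2.0);
        System.out.println("step length = " + lambda);   // prints 0.5
    }
}
```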

Now we carry on with Quasi-Newton methods. The idea of Quasi-Newton methods is not to use the Hessian matrix, because it is expensive to evaluate and sometimes not even available. Instead, a symmetric positive definite matrix, say $B$, is used to approximate the Hessian (or the inverse of the Hessian). As a matter of fact, it can be shown that no matter what matrix is used, as long as it is symmetric positive definite and an appropriate update of the matrix is taken in each iteration, the search result


will be the same [Gill et al., 1981]! Thus the key issue is how to update this matrix $B$. One of the most famous methods is a variable metric method called the "Broyden-Fletcher-Goldfarb-Shanno" (BFGS) algorithm, which uses a rank-two modification of the old $B$. There are, of course, many other update methods, but it has been reported that the BFGS method works better with an inexact line search. Hence this method is preferred in practice. In summary, the only difference between the Quasi-Newton method and the Newton method concerns $B$ and $H$. Hence the major modifications of the above algorithm concern Step 5. A Quasi-Newton algorithm using BFGS updates can be described as follows:

1. Initialization. Set the iteration counter $k=0$, get initial variable values $x_0$, calculate the Jacobian (gradient) vector $g_0$ using $x_0$, and initialize a symmetric positive definite matrix $B_0$.

2. Check $g_k$. If it converges to 0, then STOP.

3. Solve for the search direction $d_k$: $B_k d_k = -g_k$. Alternatively, this step can be expressed as $d_k = -B_k^{-1} g_k$.

4. Search for $\lambda$ using a line search and set $x_{k+1} = x_k + \lambda d_k$.

5. Calculate the gradient vector $g_{k+1}$ using $x_{k+1}$. Set $\Delta x_k = x_{k+1} - x_k$ and $\Delta g_k = g_{k+1} - g_k$. The BFGS update is:
$$B_{k+1} = B_k + \frac{\Delta g_k \Delta g_k^T}{\Delta g_k^T \Delta x_k} - \frac{B_k \Delta x_k \Delta x_k^T B_k}{\Delta x_k^T B_k \Delta x_k}$$

6. Set $k = k+1$. Go to 2.

Before we move on to the optimization with bound constraints, there are some de-

tails to be elaborated here.

First of all, if we apply the Sherman-Morrison formula [Chong and Zak, 1996] to $B_{k+1}$ twice in Step 5, $B_{k+1}^{-1}$ is readily available, and in the next iteration Step 3 is easy to carry out. Nevertheless, we will not adopt this strategy because of a minor but important practical issue.


Since the whole algorithm depends on the positive definiteness of the matrix $B$ (otherwise the search direction will be wrong, and it can take much longer to find the right direction and the target!), it would be good to maintain positive definiteness across all iterations. But there are two cases where the update in Step 5 can result in a non-positive-definite $B$:

First, hereditary positive definiteness is theoretically guaranteed if and only if $\Delta g_k^T \Delta x_k > 0$, and this condition can be ensured by an exact line search [Gill et al., 1981]. When using an inexact line search, apart from the sufficient function decrease criterion, we should also impose this second condition on it. Thus we cannot use the line search in [Press et al., 1992]; instead we use the "modified line search" described in [Dennis and Schnabel, 1983] in Step 4.

Second, even if hereditary positive definiteness is theoretically guaranteed, the matrix $B$ can still lose positive definiteness during the update due to rounding errors when the matrix is nearly singular, which is not uncommon in practice. Therefore

we keep a Cholesky factorization of $B$ during the iterations: $B = LDL^T$, where $L$ is a unit lower triangular matrix and $D$ is a diagonal matrix. The positive definiteness of $B$ can be guaranteed if the diagonal elements of $D$ are all positive. If the resulting matrix after a low-rank update is theoretically positive definite, there exist algorithms that avoid rounding errors during the updates and ensure that all the diagonal elements of $D$ remain positive [Gill, Golub, Murray and Saunders, 1974; Goldfarb, 1976]. This factorized version of the BFGS updates is the reason why we do not use $B_k^{-1}$ in Step 3: with a Cholesky factorization, the equation $B_k d_k = -g_k$ can be solved in $O(N^2)$ time (where $N$ is the number of variables) using forward and backward substitution, and hence $B_k^{-1}$ is no longer needed. The reader might notice that the BFGS update formula in Step 5 is not convenient if the Cholesky factorization of $B_k$ is involved. However, using the fact that $B_k \Delta x_k = \lambda B_k d_k = -\lambda g_k$, we can simplify the formula to:
$$B_{k+1} = B_k + \frac{\Delta g_k \Delta g_k^T}{\Delta g_k^T \Delta x_k} + \frac{g_k g_k^T}{g_k^T d_k}$$


Note that this involves two rank-one modifications, with coefficients $\frac{1}{\Delta g_k^T \Delta x_k} > 0$ and $\frac{1}{g_k^T d_k} < 0$ respectively. Hence the first update is a positive rank-one update and the second a negative rank-one update. There is a direct rank-two modification algorithm [Goldfarb, 1976], but for simplicity we implemented two rank-one modifications using the C1 algorithm in [Gill et al., 1974] for the former update and the C2 algorithm, also in [Gill et al., 1974], for the latter. Note that all these algorithms have $O(N^2)$ complexity.
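The $O(N^2)$ triangular solve referred to above can be sketched as follows, assuming a unit lower triangular $L$ and the diagonal of $D$ stored as a vector:

```java
// Solving L D L^T d = -g by forward and backward substitution.
public class LdlSolve {
    static double[] solve(double[][] L, double[] D, double[] g) {
        int n = g.length;
        double[] y = new double[n], d = new double[n];
        for (int i = 0; i < n; i++) {             // forward: L y = -g
            y[i] = -g[i];
            for (int j = 0; j < i; j++) y[i] -= L[i][j] * y[j];
        }
        for (int i = 0; i < n; i++) y[i] /= D[i]; // scale: D z = y
        for (int i = n - 1; i >= 0; i--) {        // backward: L^T d = z
            d[i] = y[i];
            for (int j = i + 1; j < n; j++) d[i] -= L[j][i] * d[j];
        }
        return d;
    }
    public static void main(String[] args) {
        // B = L D L^T = {{4, 2}, {2, 2}}; solve B d = -g for g = (-4, -3).
        double[][] L = {{1, 0}, {0.5, 1}};
        double[] D = {4, 1};
        double[] d = solve(L, D, new double[]{-4, -3});
        System.out.println(d[0] + " " + d[1]);    // prints 0.5 1.0
    }
}
```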

In summary, we use a factorized version of the Quasi-Newton method to avoid rounding errors and preserve the positive definiteness of $B$ during updates. Note that the total complexity of using the Cholesky factorization is $O(N^2)$. If we did not use it, the computational cost would still be $O(N^2)$ due to the matrix multiplication. Therefore there is hardly any additional expense in maintaining the Cholesky factorization.

Finally, we reach the topic of optimization subject to bound constraints. We adopt

basically the same strategy as described in [Gill and Murray, 1976] and [Gill et al.,

1981]. It is fairly similar to the above unconstrained optimization method.

First we consider optimization subject to linear equality constraints $Ax = b$. This is an easy problem because it can be cast as an unconstrained optimization problem of reduced dimensionality. A common method to solve it is the "projected gradient method", in which the above Quasi-Newton method with BFGS updates remains virtually unchanged. We simply replace the gradient vector $g$ and the matrix $B$ by projected versions $Zg$ and $Z^TBZ$ respectively, where $Z$ is a projection matrix. There are various methods to calculate $Z$; usually the orthogonal projection of $A$ (in the constraints) is taken [Chong and Zak, 1996; Gill et al., 1981]. If the constraints are bound constraints in particular, $Z$ is typically easy to calculate because some variables become constants and no longer affect the objective function. The projection matrix $Z$ is then simply a vector with entries of 1 for "free" variables (i.e. those not in the constraints) and 0s otherwise.

Next let us go further into the problem of optimization subject to linear inequality constraints $Ax \ge b$. There are several options to solve this kind of problem, but


we are interested in the one(s) that do not allow variables to take values beyond the bounds, because in our case the objective function is not defined there. Hence we use the "active set method", which has this essential feature [Gill et al., 1981]. The idea of the "active set method" is to check the search step in each iteration such that, if some variables are about to violate the constraints, these constraints become the "active set" of constraints. Then, in later iterations, the projected gradient and the projected Hessian (or $B$ in the Quasi-Newton case) corresponding to the inactive constraints are used to perform further steps. We will not dig deeply into this method because in our case (i.e. for bound constraints) the task is especially easy. In each iteration of the above Quasi-Newton method with BFGS updates, we simply test whether a search step would cause a variable to go beyond the corresponding bound. If this occurs, we "fix" this variable, i.e. treat it as a constant in later iterations, and use only the "free" variables to carry out the search. Thus the main modification to the above algorithm is in the line search in Step 4. We should use an upper bound on $\lambda$ over all possible variables. The upper bound is $\min_i \frac{b_i - x_i}{d_i}$ (where $b_i$ is the bound constraint for the $i$th variable $x_i$), taken over the variables whose direction $d_i$ is an infeasible direction (i.e. $d_i < 0$). This means that we always calculate the maximal step length that does not violate any of the inactive constraints and set this as the upper bound for the trial. Therefore this line search is called a "safeguarded line search". This method can be readily extended to the case of two-sided bound constraints, i.e. $u \ge x \ge l$, which is now the implementation in WEKA.
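The upper bound on the step length for a one-sided (lower) bound constraint $x \ge b$ can be sketched as:

```java
// Maximal feasible step length: for every coordinate moving toward its
// lower bound (d_i < 0), lambda may not exceed (b_i - x_i) / d_i.
public class SafeguardStep {
    static double maxStep(double[] x, double[] d, double[] b) {
        double bound = Double.POSITIVE_INFINITY;
        for (int i = 0; i < x.length; i++) {
            if (d[i] < 0) {                       // infeasible direction
                bound = Math.min(bound, (b[i] - x[i]) / d[i]);
            }
        }
        return bound;
    }
    public static void main(String[] args) {
        double[] x = {1.0, 2.0}, d = {-1.0, -4.0}, b = {0.0, 0.0};
        System.out.println(maxStep(x, d, b));     // min(1.0, 0.5), prints 0.5
    }
}
```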

Last but not least, there is a natural question to be asked regarding our method: "Will any 'fixed' variables be released at some point? If so, when and how?" The answer is certainly "yes". In our strategy, we only check the possibility of releasing fixed variables when convergence of the gradient is detected. At that moment, we verify both the first and second order estimates of the Lagrange multipliers of all the fixed variables (where the function implementing the second derivatives is provided by the user). If they are consistent with each other, we regard the second order estimate

the user). If they are consistent with each other, we regard the second order estimate

as a valid one and check whether it is negative. The negativity of a valid Lagrange

multiplier indicates non-optimality, hence the corresponding variables can be made

“free”. If any fixed variables are to be released, then we project the gradient and the


Hessian back to the corresponding higher dimensions, i.e. update the corresponding entries in $g$ and $B$ (basically setting them to the initial state for these originally "fixed" variables). Nonetheless, if the user does not provide the second derivative,2 we only

use the first order estimate of the Lagrange multiplier.

The above describes what we have done for optimization subject to bound constraints. For completeness, we write down the final algorithm in the following, although it is basically just a repetition of the above description. Note that we use the superscript "FREE" to indicate a "projected" vector or matrix below (i.e. one with entries only for the free variables).

1. Initialization. Set the iteration counter $k=0$, get initial variable values $x_0$, calculate the Jacobian (gradient) vector $g_0$ using $x_0$, and compute the Cholesky factorization of a symmetric positive definite matrix $B_0$ using a unit lower triangular matrix $L_0$ and a diagonal matrix $D_0$.

2. Check $g_k$. If it converges to 0, then test whether any fixed (or bound) variables can be released from their constraints, using both first and second order estimates of the Lagrange multipliers.

3. If no variable can be released, then STOP; otherwise release the variables and add corresponding entries in $x_k^{FREE}$ (set to the bound values), $g_k^{FREE}$ (set to the gradient values at the bound values), $L_k^{FREE}$, and $D_k^{FREE}$ (if the $j$th variable is to be released, we set $l_{jj}$ and $d_{jj}$ to 1 and the other entries in the $j$th row/column to 0).

4. Solve for the search direction $d_k^{FREE}$ using forward and backward substitution: $L_k^{FREE} D_k^{FREE} (L_k^{FREE})^T d_k^{FREE} = -g_k^{FREE}$.

5. Cast an upper bound on $\lambda$ and search for the best value of $\lambda$ along the direction $d_k^{FREE}$ using a safeguarded inexact line search with backtracking and polynomial interpolation. Set $x_{k+1}^{FREE} = x_k^{FREE} + \lambda d_k^{FREE}$.

6. If any variable is "fixed" at its bound constraint, delete its corresponding entries in $x_k^{FREE}$, $g_k^{FREE}$, $L_k^{FREE}$, and $D_k^{FREE}$.

7. Calculate the gradient vector $g_{k+1}^{FREE}$ using $x_{k+1}^{FREE}$. Set $\Delta x_k^{FREE} = x_{k+1}^{FREE} - x_k^{FREE}$ and $\Delta g_k^{FREE} = g_{k+1}^{FREE} - g_k^{FREE}$. Then the update is:
$$L_{k+1}^{FREE} D_{k+1}^{FREE} (L_{k+1}^{FREE})^T = L_k^{FREE} D_k^{FREE} (L_k^{FREE})^T + \frac{\Delta g_k^{FREE} (\Delta g_k^{FREE})^T}{(\Delta g_k^{FREE})^T \Delta x_k^{FREE}} + \frac{g_k^{FREE} (g_k^{FREE})^T}{(g_k^{FREE})^T d_k^{FREE}}$$
We use the aforementioned C1 and C2 algorithms to perform the updates.

8. Set $k = k+1$. Go to 2.

Footnote 2: This is often the case, because one of the reasons why people use the Quasi-Newton method is that they do not need to provide the Hessian matrix.


Appendix C

Fun with Integrals

This Appendix is about the detailed derivation of the final equations for both TLD

and TLDSimple in Chapter 5.

C.1 Integration in TLD

As discussed based on Equations 5.3 and 5.4 and the second line of 5.6 in Chapter 5,
$$B_{jk} = \int_0^{+\infty} \int_{-\infty}^{+\infty} \Bigg\{ (2\pi\sigma_k^2)^{-n_j/2} \exp\Big[-\frac{S_{jk}^2 + n_j(\bar{x}_{jk} - \mu_k)^2}{2\sigma_k^2}\Big] \frac{a_k^{b_k/2}}{2^{(b_k+1)/2}\sqrt{\pi w_k}\,\Gamma(b_k/2)} (\sigma_k^2)^{-(b_k+3)/2} \exp\Big[-\frac{a_k + (\mu_k - m_k)^2/w_k}{2\sigma_k^2}\Big] \Bigg\}\, d\mu_k\, d\sigma_k^2.$$
Re-arranging it, we get
$$= \int_0^{+\infty} \int_{-\infty}^{+\infty} \Bigg\{ \frac{a_k^{b_k/2}}{2^{(b_k+n_j)/2}\pi^{n_j/2}\sqrt{2\pi w_k}\,\Gamma(b_k/2)} (\sigma_k^2)^{-(b_k+n_j+3)/2} \exp\Big(-\frac{a_k}{2\sigma_k^2}\Big) \exp\Big(-\frac{1}{2w_k\sigma_k^2}\big[w_kS_{jk}^2 + n_jw_k(\bar{x}_{jk} - \mu_k)^2 + (\mu_k - m_k)^2\big]\Big) \Bigg\}\, d\mu_k\, d\sigma_k^2.$$


Since
$$w_kS_{jk}^2 + n_jw_k(\bar{x}_{jk} - \mu_k)^2 + (\mu_k - m_k)^2 = (1 + n_jw_k)\Big[\mu_k - \frac{n_jw_k\bar{x}_{jk} + m_k}{1 + n_jw_k}\Big]^2 + \frac{w_kn_j(\bar{x}_{jk} - m_k)^2 + w_kS_{jk}^2(1 + n_jw_k)}{1 + n_jw_k},$$
we can further re-arrange the above equation as
$$B_{jk} = \int_0^{+\infty} \int_{-\infty}^{+\infty} \Bigg\{ \frac{a_k^{b_k/2}}{2^{(b_k+n_j)/2}\pi^{n_j/2}\sqrt{2\pi w_k}\,\Gamma(b_k/2)} (\sigma_k^2)^{-(b_k+n_j+3)/2} \exp\Big(-\frac{a_k}{2\sigma_k^2}\Big) \exp\Big[-\frac{1}{2\sigma_k^2}\cdot\frac{n_j(\bar{x}_{jk} - m_k)^2 + S_{jk}^2(1 + n_jw_k)}{1 + n_jw_k}\Big] \exp\Big[-\frac{(\mu_k - M_k)^2}{2V_k}\Big] \Bigg\}\, d\mu_k\, d\sigma_k^2,$$
where $M_k = \frac{n_jw_k\bar{x}_{jk} + m_k}{1 + n_jw_k}$ and $V_k = \frac{w_k\sigma_k^2}{1 + n_jw_k}$. Using the identity
$$\int_{-\infty}^{+\infty} (2\pi V_k)^{-1/2} \exp\Big[-\frac{(\mu_k - M_k)^2}{2V_k}\Big]\, d\mu_k = 1,$$
we integrate out $\mu_k$:
$$= \int_0^{+\infty} \Bigg\{ \frac{a_k^{b_k/2}}{2^{(b_k+n_j)/2}\pi^{n_j/2}\sqrt{2\pi w_k}\,\Gamma(b_k/2)} (\sigma_k^2)^{-(b_k+n_j+3)/2} \exp\Big(-\frac{a_k}{2\sigma_k^2}\Big) \exp\Big[-\frac{1}{2\sigma_k^2}\cdot\frac{n_j(\bar{x}_{jk} - m_k)^2 + S_{jk}^2(1 + n_jw_k)}{1 + n_jw_k}\Big] \sqrt{\frac{2\pi w_k\sigma_k^2}{1 + n_jw_k}} \Bigg\}\, d\sigma_k^2.$$
Now we set $y = \frac{a_k}{2\sigma_k^2}$ and re-arrange again:
$$B_{jk} = \int_0^{+\infty} \Bigg\{ \frac{2a_k^{-(n_j+2)/2}}{\pi^{n_j/2}\sqrt{1 + n_jw_k}\,\Gamma(b_k/2)}\, y^{(b_k+n_j+2)/2} \exp(-\alpha y) \Bigg\}\, d\sigma_k^2,$$
where $\alpha = \big[(1 + n_jw_k)(a_k + S_{jk}^2) + n_j(\bar{x}_{jk} - m_k)^2\big]\big/\big[a_k(1 + n_jw_k)\big]$. Because


$\sigma_k^2 = \frac{a_k}{2y} \Rightarrow d\sigma_k^2 = -\frac{a_k}{2}y^{-2}\,dy$, we set $\beta = \frac{b_k+n_j}{2}$ and get
$$= -\int_{+\infty}^{0} \Bigg\{ \frac{a_k^{-n_j/2}}{\pi^{n_j/2}\sqrt{1 + n_jw_k}\,\Gamma(b_k/2)}\, y^{\beta-1} \exp(-\alpha y) \Bigg\}\, dy = \int_0^{+\infty} \Bigg\{ \frac{a_k^{-n_j/2}}{\pi^{n_j/2}\sqrt{1 + n_jw_k}\,\Gamma(b_k/2)}\, y^{\beta-1} \exp(-\alpha y) \Bigg\}\, dy.$$
Since $\Gamma(\beta) = \int_0^{+\infty} e^{-t}t^{\beta-1}\, dt$, substituting $t = \alpha y$ gives the well-known identity $\frac{\Gamma(\beta)}{\alpha^\beta} = \int_0^{+\infty} e^{-\alpha y}y^{\beta-1}\, dy$ [Artin, 1964]. Hence the solution becomes:
$$B_{jk} = \frac{a_k^{-n_j/2}\,\Gamma(\beta)}{\pi^{n_j/2}\sqrt{1 + n_jw_k}\,\alpha^\beta\,\Gamma(b_k/2)} = \frac{a_k^{b_k/2}(1 + n_jw_k)^{(b_k+n_j-1)/2}\,\Gamma\big(\frac{b_k+n_j}{2}\big)}{\big[(1 + n_jw_k)(a_k + S_{jk}^2) + n_j(\bar{x}_{jk} - m_k)^2\big]^{\frac{b_k+n_j}{2}}\,\pi^{n_j/2}\,\Gamma\big(\frac{b_k}{2}\big)}$$
This is the formula we obtained in Equation 5.6.

C.2 Integration in TLDSimple

In TLDSimple, we regard $\sigma_k^2$ as fixed and estimate it directly from the data. Therefore in Equation 5.2 we plug in a Gaussian-Gaussian formulation; that is, the $\bar{x}_{jk}$ of each bag has a Gaussian sampling distribution, $N(\mu_k, \frac{\sigma_k^2}{n_j})$ (according to the Central Limit Theorem), and $\mu_k$ in turn follows a Gaussian parameterized by $m_k$ and $w_k$, $N(m_k, w_k)$. $\sigma_k^2$ is now fixed. Hence Equation 5.6 now becomes:
$$B_{jk} = \int_{-\infty}^{+\infty} \Bigg\{ \frac{1}{\sqrt{2\pi\frac{\sigma_k^2}{n_j}}} \exp\Big[-\frac{(\bar{x}_{jk} - \mu_k)^2}{2(\frac{\sigma_k^2}{n_j})}\Big] \frac{1}{\sqrt{2\pi w_k}} \exp\Big[-\frac{(\mu_k - m_k)^2}{2w_k}\Big] \Bigg\}\, d\mu_k.$$
Re-arranging it, we have
$$= \int_{-\infty}^{+\infty} \Bigg\{ \frac{1}{\sqrt{2\pi V_k}} \exp\Big[-\frac{(\mu_k - M_k)^2}{2V_k}\Big] \Big(2\pi\frac{w_kn_j + \sigma_k^2}{n_j}\Big)^{-1/2} \exp\Big[-\frac{n_j(\bar{x}_{jk} - m_k)^2}{2(w_kn_j + \sigma_k^2)}\Big] \Bigg\}\, d\mu_k$$


where $M_k = \frac{n_jw_k\bar{x}_{jk} + m_k\sigma_k^2}{\sigma_k^2 + n_jw_k}$ and $V_k = \frac{w_k\sigma_k^2}{\sigma_k^2 + n_jw_k}$. With the identity
$$\int_{-\infty}^{+\infty} \frac{1}{\sqrt{2\pi V_k}} \exp\Big[-\frac{(\mu_k - M_k)^2}{2V_k}\Big]\, d\mu_k = 1,$$
we integrate out $\mu_k$ and get
$$B_{jk} = \Big(2\pi\frac{w_kn_j + \sigma_k^2}{n_j}\Big)^{-1/2} \exp\Big[-\frac{n_j(\bar{x}_{jk} - m_k)^2}{2(w_kn_j + \sigma_k^2)}\Big]$$
This is Equation 5.9 from Chapter 5.
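As a numerical sketch, Equation 5.9 is just the Gaussian density of $\bar{x}_{jk}$ with mean $m_k$ and variance $w_k + \sigma_k^2/n_j$; in Java (the parameter values below are illustrative):

```java
// B_jk for TLDSimple: Gaussian density of the bag mean xbar_jk
// under N(m_k, w_k + sigma_k^2 / n_j).
public class TldSimpleBjk {
    static double bjk(double xbar, int n, double m, double w, double s2) {
        double var = (w * n + s2) / n;            // w_k + sigma_k^2 / n_j
        return Math.exp(-(xbar - m) * (xbar - m) / (2 * var))
               / Math.sqrt(2 * Math.PI * var);
    }
    public static void main(String[] args) {
        // Illustrative values: xbar = 0.3, n_j = 5, m = 0, w = 1, sigma^2 = 2.
        System.out.println(bjk(0.3, 5, 0.0, 1.0, 2.0));
    }
}
```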


Appendix D

Comments on EM-DD

EM-DD [Zhang and Goldman, 2002] was proposed to overcome the difficulty of

the optimization problem required to find the maximum likelihood estimate (MLE)

of the instance-level class probability parameters in Diverse Density (DD) [Maron,

1998]. Note that there are two approximate generative models proposed in DD to

construct the bag-level class probability from the instance-level ones — the noisy-

or and the most-likely-cause model. In the noisy-or model, there is no difficulty

in optimizing the objective (log-likelihood) function while in the most-likely-cause

model, the objective function is not differentiable because of the “maximum” func-

tions involved. DD used the “softmax” function to approximate the maximum func-

tion in order to facilitate gradient-based optimization, which is a standard way to

solve non-differentiable optimization problems. EM-DD, on the other hand, claims

to use an application of the EM algorithm [Dempster et al., 1977] to circumvent

the difficulty. Therefore EM-DD provides no improvement on the modeling pro-

cess in DD, only on the optimization process. Since DD uses the standard way

to treat non-differentiable optimization problems [Lemarechal, 1989] and was sup-

posed to find the MLE, why can EM-DD be such an improvement? Besides, we

have not found any case in the literature where EM can simplify a non-differentiable

log-likelihood function in conjunction with a gradient-based optimization method

(i.e. a Newton-type method) in the “M-step”. Can the standard EM algorithm be

applied to a non-differentiable log-likelihood function? These questions lead us to

be skeptical about the validity of EM-DD.


In the following we first analyze the log-likelihood function that EM-DD aims to

maximize, then we present the EM-DD algorithm and an illustrative example to see

whether it can work on this function. Finally we point out a mistake in the “proof”

for EM-DD. We can also show that in general the monotonicity of EM-DD cannot

be proved, thus theoretically EM-DD is not guaranteed to work.

D.1 The Log-likelihood Function

EM-DD is based on the log-likelihood function constructed with the most-likely-cause model (Equation 7.3 in Chapter 7):
$$L = \sum_B \Big[Y \log\big(\max_{b \in m}\{\Pr(Y=1|x_b)\}\big) + (1-Y)\log\big(1 - \max_{b \in m}\{\Pr(Y=1|x_b)\}\big)\Big] = \sum_B \Big[Y \log\big(\max_{b \in m}\Pr(Y=1|x_b)\big) + (1-Y)\log\big(\min_{b \in m}\Pr(Y=0|x_b)\big)\Big] \quad (D.1)$$
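The most-likely-cause log-likelihood (D.1) can be sketched in Java as follows; the per-instance bag probabilities used here are illustrative numbers rather than the output of a real model:

```java
// Most-likely-cause log-likelihood: each bag contributes through the
// instance with the largest Pr(Y=1|x_b).
public class MostLikelyCauseLL {
    static double logLikelihood(double[][] bagProbs, int[] labels) {
        double L = 0.0;
        for (int i = 0; i < bagProbs.length; i++) {
            double pMax = 0.0;                      // max_b Pr(Y=1|x_b)
            for (double p : bagProbs[i]) pMax = Math.max(pMax, p);
            L += labels[i] == 1 ? Math.log(pMax) : Math.log(1.0 - pMax);
        }
        return L;
    }
    public static void main(String[] args) {
        double[][] probs = {{0.9, 0.2}, {0.1, 0.3}}; // two bags, two instances each
        int[] y = {1, 0};                            // one positive, one negative bag
        System.out.println(logLikelihood(probs, y)); // log(0.9) + log(0.7)
    }
}
```

The max operator is exactly what makes this objective non-differentiable in the parameters: a small parameter change can switch which instance attains the maximum.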

The parameter vector $\theta$ determines our estimate of $\Pr(Y=1|x_b)$, and we seek the value of $\theta$ that maximizes $L$, i.e. $\theta_{MLE}$. Given a certain value of $\theta$, we select one instance from each bag (the "most-likely-cause" instance) to construct the log-likelihood. In the process of searching for $\theta_{MLE}$, when the parameter value changes, the "most-likely-cause" instance to be selected may also change. Thus the log-likelihood function may suddenly change form when the parameter value changes. More specifically, if we arbitrarily pick one instance from each bag and construct the log-likelihood function, we have one possible log-likelihood function, which we call a "component function". If we change the instance in one bag, we obtain another, different component function (unless the changed instance is identical to the old one). Obviously, if there are $s_1$ instances in the 1st bag, $s_2$ instances in the 2nd bag, ..., and $s_{m+n}$ instances in the $(m+n)$th bag, then there are $s_1 \times s_2 \times \cdots \times s_{m+n}$ component functions available (where $m$ and $n$ are the number


[Figures D.1 and D.2 appear here; both plot the log-likelihood (y-axis) against the parameter value (x-axis).]

Figure D.1: A possible component function in one dimension.

Figure D.2: An illustrative example of the log-likelihood function in DD using the most-likely-cause model.

of positive and negative bags respectively). And the true log-likelihood function is constructed using some of these component functions.[1] When the parameter value falls into one range in the domain of $\theta$, the log-likelihood function takes the form of a certain component function; if it falls into another range, it becomes another component function. Although the true log-likelihood function changes its form over different regions of the domain of $\theta$, it remains continuous at the points where it switches from one component function to another, because of the Euclidean distance ($L_2$ norm) used in the radial (or Gaussian-like) formulation of $\Pr(Y \mid X)$, but it is no longer differentiable at those points.
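The piecewise character just described can be reproduced with a tiny numeric sketch. The Gaussian-like model below (scaling parameter fixed at 1) and the bag contents are hypothetical stand-ins, not the thesis's implementation:

```python
import numpy as np

def pr_pos(x, theta):
    # Gaussian-like Pr(Y=1|x) with the scaling parameter fixed at 1 (assumption)
    return np.exp(-(x - theta) ** 2)

def most_likely_cause_loglik(theta, pos_bags, neg_bags):
    # Equation D.1: each bag contributes only through the instance with the
    # largest Pr(Y=1|x), so the function is pieced together from "components"
    L = sum(np.log(max(pr_pos(x, theta) for x in bag)) for bag in pos_bags)
    L += sum(np.log(1.0 - max(pr_pos(x, theta) for x in bag)) for bag in neg_bags)
    return float(L)

pos_bags, neg_bags = [[-1.0, 2.0]], [[4.0]]
# The selected ("most-likely-cause") instance in the positive bag switches
# as theta moves, i.e. the log-likelihood changes component:
print(np.argmax([pr_pos(x, -0.9) for x in pos_bags[0]]))  # instance -1.0 selected
print(np.argmax([pr_pos(x, 1.9) for x in pos_bags[0]]))   # instance 2.0 selected
```

Note also that at $\theta = 4.0$ (the attribute value of the negative instance) the log-likelihood is undefined ($\log 0$), matching the discontinuities described for Figure D.1.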

In Figure D.1 we show the shape of a part of one possible component function in

one dimension, i.e. we only have one attribute and fix the value of the “scaling

parameter” in DD. Thus the only variable here is the “point parameter”. Note that

there are three local maxima in this function, and the function is not continuous be-

cause the log-likelihood is undefined at two locations (in fact, each of these locations is where the parameter value equals the attribute value of an instance in a negative bag). This is due to the radial formulation of $\Pr(Y \mid X)$ in DD. The usual way in both DD and

[1] Note that not all of the component functions are used because some instances (from different bags) will never be picked up simultaneously for any value of $\theta$.


EM-DD to tackle this problem is to search for the parameter values using multiple (different) starts, in the hope that one start point can lead to the global maximum of the function. From now on, we assume that we can always find the (global) maximum for each component function, perhaps using multiple starts. Apart from the local maxima, the shape of a component function is roughly quadratic. This is

reasonable because at least for the parts of the function constructed using only the

positive instances (in this appendix we call an instance in a positive bag a "positive instance" and an instance in a negative bag a "negative instance"), it is exactly

quadratic. The negative instances only make the function discontinuous, as shown

in the function's shoulders in Figure D.1 (actually the figure does not clearly show the discontinuity of the function: there should be no minima on the shoulders; instead the function value goes to $-\infty$ there).

If we ignore the (small) local maxima in each of the component functions, we can illustrate (a part of) the true log-likelihood function using curves like those in Figure D.2. Note that this figure does not rigorously describe the situation of maximizing the log-likelihood in Equation D.1; it only serves as an illustration. However, it does give us an idea of when EM-DD can work and when it cannot. Although some details may

not be accurate (like the coordinates or the exact shape of the component functions),

these factors do not affect the key ideas of the illustration.

There are three component functions in the plot, denoted by 1, 2 and 3. The dot-

ted lines are the parts of the component functions not used in the true log-likelihood

function. The solid line plots the true function. Again we only plot, in one dimen-

sion, the “point parameter” against the log-likelihood. The shapes of the component

functions are different because we also incorporate a fixed value of the “scaling pa-

rameter” for each one of the component functions (i.e. different component func-

tions have different "scaling parameter" values). We therefore simplify the optimization procedure to a "steepest descent" method in which we first fix the "scaling

parameter” and search for the best “point parameter”, then fix the “point parameter”

and search for the optimal “scaling parameter”. In Figure D.2, we do not show how

to search for the value of “scaling parameter”. We assume that we are “given” the


optimal “scaling parameter” value every time we start searching for the “point pa-

rameter". We also assume that the "steepest descent" procedure performs as well as the Quasi-Newton method used in DD and EM-DD on this simple problem. We need to make all these assumptions to simplify the situation and enable us to see the

essence of this (extremely complicated) optimization problem.

Note that the true function value can be smaller than the values of the component functions. To see why, let us look back at Equation D.1. Given a fixed parameter $\theta_f$, picking the instance with the maximal value of $\Pr(Y=1 \mid X; \theta_f)$ in a positive bag must result in a greater log-likelihood value than picking any other instance in the same bag. On the contrary, selecting the instance with the maximal value of $\Pr(Y=1 \mid X; \theta_f)$ (i.e. the minimal value of $\Pr(Y=0 \mid X; \theta_f)$) in a negative bag has to result in a smaller log-likelihood value than selecting any other instance in the same bag. Therefore the true log-likelihood function is often not the one with the

greatest value among all the component functions, as illustrated in Figure D.2, as

long as there is more than one instance in each negative bag.

In this example, the true function shifts between the components twice, and the

shifting points are indicated by small triangles, labelled "X" and "Y" respectively. At

both points the log-likelihood function is still increasing but the newly-shifted com-

ponent function value is less than the value of the old component function. Shifting

between components means that we should change the instances in each bag that

construct the log-likelihood function. In the new component function, a new set

of instances, one from each bag, is picked. Obviously this true function is non-differentiable at the two points where it shifts components, but it has a local

maximum at point D.


D.2 The EM-DD Algorithm

Now we briefly sketch the EM-DD algorithm [Zhang and Goldman, 2002]. EM-DD starts with an initial value of $\theta$, say $\theta_0$, and the initial log-likelihood

$$L_0 = \sum_{1 \le i \le n} \log\Big[\max_{b \in m}\{\Pr(Y=1 \mid x_b; \theta_0)\}\Big] + \sum_{1 \le j \le m} \log\Big[1 - \max_{b \in m}\{\Pr(Y=1 \mid x_b; \theta_0)\}\Big].$$

Then it cycles between the E-step and M-step as follows until convergence (suppose this is the $p$th iteration):

- E-step: find the instance in each bag that has the greatest $\Pr(Y \mid X; \theta_p)$, say, the $z_i$th instance in the $i$th bag.

- M-step: search for

$$\theta_{p+1} = \arg\max_{\theta}\Big\{\sum_{1 \le i \le n} \log\big[\Pr(Y=1 \mid x_{iz_i}; \theta)\big] + \sum_{1 \le j \le m} \log\big[1 - \Pr(Y=1 \mid x_{jz_j}; \theta)\big]\Big\}$$

where there are $n$ positive bags and $m$ negative bags, and then compute

$$L_{p+1} = \sum_{1 \le i \le n} \log\Big[\max_{b \in m}\{\Pr(Y=1 \mid x_b; \theta_{p+1})\}\Big] + \sum_{1 \le j \le m} \log\Big[1 - \max_{b \in m}\{\Pr(Y=1 \mid x_b; \theta_{p+1})\}\Big]$$

The convergence test is performed before each E-step (or after each M-step) via a comparison of the log-likelihood values $L_p$ and $L_{p+1}$.

To give an illustration, we first apply the above algorithm to the artificial example

shown in Figure D.2 and see what it could find given "A" as the start value for $\theta$.

This example is deliberately set up so that we may see both cases in which EM-


DD can work (in the first iteration) and cannot work (in the second iteration). As

mentioned before, we assume the search procedure used in the M-step is "steepest

descent” — one member of the gradient descent family. Although EM-DD used

a Quasi-Newton method, this does not matter much in this simple situation. Note

that the “M-step” in EM-DD corresponds to searching for the maximum in one

component function, because the instance selected in each bag is fixed, which means it often goes above the true log-likelihood function in the M-step.

In the E-step in the first iteration, EM-DD selects the set of instances as required.

That is to say, it finds the correct component function: the curve with the solid line. Then it calculates $L_0$ as the function value at point A. In the M-step it finds point B' as the maximum of Component 3 (assuming it also finds the optimal "scaling parameters" in the M-step, which determine the shape of the next component in Figure D.2). When it computes the log-likelihood $L_1$, it will necessarily "return" to the true log-likelihood function. In other words, it picks the new set of instances according to $\theta_1$ (the $X$-axis coordinate of B/B'). As a result, $L_1$ is the value of the

log-likelihood function at point B.

In the second iteration, EM-DD first compares $L_1$ and $L_0$, i.e. the function values at

A and B. In this case, it indeed finds a better value of the parameter, so it continues.

In the E-step it picks an instance in each bag according to $\theta_1$. The corresponding

new component function is Component 1. In the M-step it will find the maximum

of Component 1 at point C'. Then it calculates $L_2$ as the true function value. This

is actually point C on Component 2.

In the third iteration, EM-DD first compares $L_2$ and $L_1$, i.e. the function values at B and C. However, this time it finds that it cannot increase the function value, so the algorithm stops and point B is returned as the solution, i.e. EM-DD will not be able to find point D, which is the true maximum of the log-likelihood function. If it had kept searching, it would have found D. Nonetheless, to do this, it must break

the convergence test, which is a crucial part of the proof of convergence for EM-

DD [Zhang and Goldman, 2002]. Without this convergence test EM-DD is not


an application of EM but simply a stochastic search method for a combinatorial optimization problem.

Therefore, even if we assume that the global maximum in each component func-

tion can be found, EM-DD cannot find the maximum of the log-likelihood func-

tion. Note that we have simplified this example a lot — the objective function is

concave and in one dimension in this case. In reality, since there are many more

dimensions (typically EM-DD searches for $2 \times 166$ parameters simultaneously on

the Musk datasets), the situation is much more complicated than the above exam-

ple. For instance, saddle points can occur, which is a case that EM cannot deal with

anyway [McLachlan and Krishnan, 1996].

Note that in the above example, we required EM-DD to start from point A. With a

different start point for the search for $\theta$, it may find the maximum point D. Indeed,

we observed that EM-DD depends heavily on multiple start points, not only for

searching for the global maximum but also for improving the chances of finding even a local maximum. In other words, it really relies on good luck rather than strong

theoretical justifications.

D.3 Theoretical Considerations

The above counter-example alone is not sufficient to convince ourselves that EM-DD is not a valid algorithm, because it was proved in [Zhang and Goldman, 2002] that this algorithm will converge to a local maximum. This proof is analogous to that for the EM algorithm. However, it turns out that the most important part was missed.

The Expectation-Maximization (EM) algorithm [Dempster et al., 1977] was proposed to facilitate the maximum likelihood method when "unobservable" data is involved in the log-likelihood function. It is discussed in detail in a variety of articles and books [Bilmes, 1997; McLachlan and Krishnan, 1996]. Since the M-step


necessarily increases the log-likelihood function, the key for proving the mono-

tonicity of the EM algorithm is to prove that the E-step can also increase the log-

likelihood. The property of an increase in the log-likelihood function in the E-step

is a consequence of Jensen's inequality and the concavity of the logarithmic function [Dempster et al., 1977; McLachlan and Krishnan, 1996]. That is why EM can also be viewed as a "Maximum-Maximum" procedure [McLachlan and Krishnan,

1996; Hastie et al., 2001]. Nonetheless, [Zhang and Goldman, 2002] does not pro-

vide proof of the increase of the log-likelihood function (in Equation D.1) in the

E-step at all. Instead it uses the convergence test (terminating the algorithm if $L_p \ge L_{p+1}$) to prevent the log-likelihood from decreasing. Note that in standard EM, since the E-step also increases the log-likelihood, the convergence test only needs to check whether $L_p = L_{p+1}$, involving no ">" sign.
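For reference, the monotonicity argument in standard EM can be sketched as follows; the notation here is generic ($x$ the observed data, $z$ the latent variable), not the thesis's own:

```latex
% Standard decomposition behind EM [Dempster et al., 1977]: for any z,
% log p(x|theta) = log p(x,z|theta) - log p(z|x,theta); taking the
% expectation over z ~ p(z|x,theta_p) leaves the left side unchanged:
\log p(x \mid \theta)
  = \underbrace{\mathbb{E}_{z \sim p(z \mid x,\, \theta_p)}\!\big[\log p(x, z \mid \theta)\big]}_{Q(\theta \mid \theta_p)}
  \;\underbrace{-\,\mathbb{E}_{z \sim p(z \mid x,\, \theta_p)}\!\big[\log p(z \mid x, \theta)\big]}_{H(\theta \mid \theta_p)}

% Hence the per-iteration change splits into two brackets:
\log p(x \mid \theta_{p+1}) - \log p(x \mid \theta_p)
  = \big[Q(\theta_{p+1} \mid \theta_p) - Q(\theta_p \mid \theta_p)\big]
  + \big[H(\theta_{p+1} \mid \theta_p) - H(\theta_p \mid \theta_p)\big]
  \;\ge\; 0
```

The first bracket is non-negative because the M-step maximizes $Q$; the second is non-negative by Jensen's inequality and the concavity of the logarithm ($H(\theta \mid \theta_p) \ge H(\theta_p \mid \theta_p)$ for all $\theta$, i.e. Gibbs' inequality). Both steps presuppose a genuine conditional distribution $p(z \mid x, \theta)$ to average over, which is exactly what the degenerate, max-based selection in EM-DD fails to supply.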

The reason the proof used in the standard EM algorithm does not apply to EM-DD is the special property of the "unobservable" data. In EM-DD, the unobservable data is $z_i$, an index for the $i$th bag that indicates which instance should be used in the log-likelihood function. This variable $z_i$ is not quite "unobservable" in this case because for each value of $\theta$ it is fixed (i.e. no probability distribution is needed) and observable in the data, although its value changes for different values of $\theta$. Therefore, if one insists on regarding it as a latent variable in EM, then given a certain parameter value $\theta_p$, the probability of $z_i$ is

$$\Pr(z_i \mid \theta_p) = \begin{cases} 1 & \text{if } z_i = \arg\max_{z_i} \Pr(Y=1 \mid x_{iz_i}; \theta_p) \\ 0 & \text{otherwise} \end{cases}$$

This probability function is very unusual, and still involves a max function, which

is not smooth, and thus the proof in EM cannot apply to the expected log-likelihood

function in the E-step in EM-DD.

As a matter of fact, as shown in Section D.2, in the E-step of EM-DD, the log-

likelihood is very likely to decrease, in which case the algorithm has to stop. Note that in Equation D.1, in a negative bag, $1 - \max_{b \in m}\{\Pr(Y=1 \mid x_b)\}$ is equivalent to $\min_{b \in m}\{1 - \Pr(Y=1 \mid x_b)\} = \min_{b \in m}\{\Pr(Y=0 \mid x_b)\}$. Hence in the E-


step, changing any negative instances to construct a new log-likelihood function will

always decrease the log-likelihood function. In the extreme, if in one E-step the positive instances involved in the current log-likelihood function remain unchanged,

but some negative instances are changed, then the new log-likelihood is guaranteed

to be lower than the current one. In that case, it may be premature to halt the

algorithm, as shown in the example in Section D.2. Therefore, unlike EM, the

monotonicity of the E-step in EM-DD cannot be proved in general, which is the

major theoretical flaw in EM-DD.
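This argument can be checked numerically with a toy model (again a hypothetical Gaussian-like $\Pr(Y=1 \mid x)$, not DD itself): in a negative bag, the instance the E-step selects is exactly the one that minimizes that bag's contribution to the log-likelihood.

```python
import math

theta = 0.0
neg_bag = [0.5, 2.0]                                     # instances of a negative bag
pr1 = [math.exp(-(x - theta) ** 2) for x in neg_bag]     # Pr(Y=1|x) per instance
contrib = [math.log(1.0 - p) for p in pr1]               # log Pr(Y=0|x) per instance
chosen = max(range(len(neg_bag)), key=lambda i: pr1[i])  # the E-step selection
# The selected instance yields the smallest possible negative-bag term,
# so re-selecting it can only lower the log-likelihood:
assert contrib[chosen] == min(contrib)
```

Since every instance satisfies $\log(1 - \Pr(Y=1 \mid x)) \ge \log(1 - \max_b \Pr(Y=1 \mid x_b))$, swapping a negative bag's selected instance for the E-step's choice never raises, and generally lowers, the log-likelihood.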

The fact that the log-likelihood fails to increase “after the first several iterations” for

EM-DD [Zhang and Goldman, 2002] is probably due to a decrease in the E-step. Moreover, it was also observed that "it is often beneficial to allow NLDD (the neg-

ative log-likelihood of Diverse Density) to increase slightly” [Zhang and Goldman,

2002]. We believe this is not solely because of local maxima (or minima for the neg-

ative log-likelihood) — it may also allow the algorithm to keep searching, where it

would otherwise fail. Indeed, without the convergence requirement of EM (that

EM-DD cannot achieve), we can develop an algorithm that is guaranteed to find

the solution of the MLE — we simply search for the global maximum in each of

the component functions, either in a systematic (say, branch-and-bound) or stochas-

tic manner, and pick the parameter with the highest "true" log-likelihood. This

amounts to searching in all the component functions involved in the log-likelihood

function. However, this has nothing to do with EM, and the computational expense varies greatly from case to case; the worst-case cost could be very high.
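Such an exhaustive component-by-component search might look like the sketch below, under the same assumed one-dimensional Gaussian-like model and with a grid search per component; its cost grows as $s_1 \times s_2 \times \cdots \times s_{m+n}$:

```python
import itertools
import numpy as np

def pr_pos(x, theta):
    return np.exp(-(x - theta) ** 2)  # assumed fixed-scale model, not DD itself

def mle_by_enumeration(pos_bags, neg_bags, grid):
    """Maximize every component function, then keep the candidate with the
    best *true* most-likely-cause log-likelihood (Equation D.1)."""
    def true_loglik(t):
        L = sum(np.log(max(pr_pos(x, t) for x in b)) for b in pos_bags)
        return L + sum(np.log(1.0 - max(pr_pos(x, t) for x in b)) for b in neg_bags)

    best = None
    # choosing one instance from each bag defines one component function
    for combo in itertools.product(*(pos_bags + neg_bags)):
        pos, neg = combo[:len(pos_bags)], combo[len(pos_bags):]
        def comp(t):
            return (sum(np.log(pr_pos(x, t)) for x in pos)
                    + sum(np.log(1.0 - pr_pos(x, t)) for x in neg))
        cand = max(grid, key=comp)        # global maximum of this component
        score = true_loglik(cand)         # score it on the true function
        if best is None or score > best[1]:
            best = (cand, score)
    return best
```

Unlike EM-DD's early-stopping loop, this enumeration cannot terminate prematurely, but the number of components it must search is exponential in the number of bags.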

The problem with EM-DD lies in its attempt to use a normal gradient-based EM

to solve a non-differentiable optimization problem. Non-differentiable optimization

has been a hot research topic in the optimization domain for some time [Lemarechal,

1989]. One of the methods to deal with non-differentiable optimization problems is

to transform the objective function into a differentiable function. The method used

in DD to substitute the “max” function with the “softmax” function is of this kind.

Although the "softmax" does not transform the function exactly, it approximates

the true log-likelihood function precisely enough. In the case shown in Figure D.2,


it will approximate the true function using a differentiable function, hence this dif-

ferentiable function will look similar to the true log-likelihood function and have a

maximum point close enough to point D. Using a normal Newton-type method we

can easily find this point.

In summary, because DD uses a sound maximization procedure whereas EM-DD’s

approach may not find an MLE, we are inclined to believe the statement in [Maron,

1998] that DD with the most-likely-cause model actually performs worse than with

the noisy-or model on the Musk datasets, and we are skeptical about the good re-

sults reported for EM-DD [Zhang and Goldman, 2002] (especially considering that

there are also problems with the evaluation procedure used in [Zhang and Goldman,

2002]).


Bibliography

Ahrens, J. and Dieter, U. [1974]. Computer methods for sampling from Gamma,

Beta, Poisson and Binomial distributions. Computing, 12, 223–246.

Ahrens, J., Kohrt, K. and Dieter, U. [1983]. Algorithm 599: sampling from Gamma

and Poisson distributions. ACM Transactions on Mathematical Software, 9(2),

255–257.

Artin, E. [1964]. The Gamma Function. New York, NY: Holt, Rinehart and Win-

ston. Translated by M. Butler.

Auer, P. [1997]. On learning from multiple instance examples: empirical evalua-

tion of a theoretical approach. In Proceedings of the Fourteenth International
Conference on Machine Learning (pp. 21–29). San Francisco, CA: Morgan

Kaufmann.

Bilmes, J. [1997]. A gentle tutorial of the EM algorithm and its application to pa-

rameter estimation for Gaussian mixture and hidden Markov models. Technical

Report ICSI-TR-97-021, University of Berkeley.

Blake, C. and Merz, C. [1998]. UCI repository of machine learning databases.

Blum, A. and Kalai, A. [1998]. A note on learning from multiple-instance examples.

Machine Learning, 30(1), 23–30.

Bratley, P., Fox, B. and Schrage, L. [1983]. A Guide to Simulation. New York, NY:

Springer-Verlag.


Breiman, L. [1996]. Bagging predictors. Machine Learning, 24(2), 123–140.

le Cessie, S. and van Houwelingen, J. [1992]. Ridge estimators in logistic regres-

sion. Applied Statistics, 41(1), 191–201.

Chevaleyre, Y. and Zucker, J.-D. [2000]. Solving multiple-instance and multiple-

part learning problems with decision trees and decision rules. Application to

the mutagenesis problem. Internal Report, University of Paris 6.

Chevaleyre, Y. and Zucker, J.-D. [2001]. A framework for learning rules from

multiple instance data. In Proceedings of the Twelfth European Conference
on Machine Learning (pp. 49–60). Berlin: Springer-Verlag.

Chong, E. and Zak, S. [1996]. An Introduction to Optimization. New York, NY:

John Wiley & Sons, Inc.

Cohen, W. [1995]. Fast effective rule induction. In Proceedings of the Twelfth
International Conference on Machine Learning (pp. 115–123). San Francisco,

CA: Morgan Kaufmann.

Dempster, A., Laird, N. and Rubin, D. [1977]. Maximum likelihood from incom-

plete data via the EM algorithm. Journal of the Royal Statistical Society, Series

B, 39(1), 1–38.

Dennis, J. and Schnabel, R. [1983]. Numerical Methods for Unconstrained Opti-

mization and Nonlinear Equations. Englewood Cliffs, NJ: Prentice-Hall, Inc.

Devroye, L., Györfi, L. and Lugosi, G. [1996]. A Probabilistic Theory of Pattern

Recognition. New York, NY: Springer-Verlag.

Dietterich, T. and Bakiri, G. [1995]. Solving multiclass learning problems via error-

correcting output codes. Journal of Artificial Intelligence Research, 2, 263–286.

Dietterich, T., Lathrop, R. and Lozano-Perez, T. [1997]. Solving the multiple-

instance problem with axis-parallel rectangles. Artificial Intelligence, 89(1-

2), 31–71.


Frank, E. and Witten, I. [1998]. Generating accurate rule sets without global opti-

mization. In Proceedings of the Fifteenth International Conference on Machine
Learning (pp. 144–151). San Francisco, CA: Morgan Kaufmann.

Frank, E. and Witten, I. [1999]. Making better use of global discretization. In

Proceedings of the Sixteenth International Conference on Machine Learning

(pp. 115–123). San Francisco, CA: Morgan Kaufmann.

Frank, E. and Xu, X. [2003]. Applying propositional learning algorithms to multi-

instance data. Working Paper 06/03, Department of Computer Science, Uni-

versity of Waikato, New Zealand.

Freund, Y. and Schapire, R. [1996]. Experiments with a new boosting algorithm. In

Proceedings of the Thirteenth International Conference on Machine Learning
(pp. 148–156). San Francisco, CA: Morgan Kaufmann.

Friedman, J., Hastie, T. and Tibshirani, R. [2000]. Additive logistic regression: a
statistical view of boosting (with discussion). Annals of Statistics, 28, 307–

337.

Gärtner, T., Flach, P., Kowalczyk, A. and Smola, A. [2002]. Multi-instance ker-
nels. In Proceedings of the Nineteenth International Conference on Machine
Learning (pp. 179–186). San Francisco, CA: Morgan Kaufmann.

Gill, P., Golub, G., Murray, W. and Saunders, M. [1974]. Methods for modifying

matrix factorizations. Mathematics of Computation, 28(126), 505–535.

Gill, P. and Murray, W. [1976]. Minimization subject to bounds on the variables.

Technical Report NPL Report NAC-72, National Physical Laboratory.

Gill, P., Murray, W. and Wright, M. [1981]. Practical Optimization. London:

Academic Press.

Goldfarb, D. [1976]. Factorized variable metric methods for unconstrained opti-

mization. Mathematics of Computation, 30(136), 796–811.


Hastie, T., Tibshirani, R. and Friedman, J. [2001]. The Elements of Statistical
Learning: Data Mining, Inference, and Prediction. New York, NY: Springer-

Verlag.

John, G. and Langley, P. [1995]. Estimating continuous distributions in Bayesian

classifiers. In Proceedings of the Eleventh Conference on Uncertainty in Arti-
ficial Intelligence (pp. 338–345). San Mateo, CA: Morgan Kaufmann.

Lemarechal, C. [1989]. Nondifferentiable optimization. In Nemhauser, Rinnooy Kan
and Todd (Eds.), Optimization, Volume 1 of Handbooks in Operations Research
and Management Science, chapter VII (pp. 529–569). Amsterdam: North-
Holland.

Long, P. and Tan, L. [1998]. PAC learning axis-aligned rectangles with respect

to product distributions from multiple-instance examples. Machine Learning,

30(1), 7–21.

Maritz, J. and Lwin, T. [1989]. Empirical Bayes Methods (2nd Ed.). London: Chap-

man and Hall.

Maron, O. [1998]. Learning from Ambiguity. PhD thesis, Massachusetts Institute

of Technology, United States.

Maron, O. and Lozano-Perez, T. [1998]. A framework for multiple-instance learn-

ing. In Advances in Neural Information Processing Systems, 10 (pp. 570–576).

Cambridge, MA: MIT Press.

McLachlan, G. [1992]. Discriminant Analysis and Statistical Pattern Recognition.

New York, NY: John Wiley & Sons, Inc.

McLachlan, G. and Krishnan, T. [1996]. The EM Algorithm and Extensions. New

York, NY: John Wiley & Sons, Inc.

Minh, D. [1988]. Generating Gamma variates. ACM Transactions on Mathematical

Software, 4(3), 261–266.


von Mises, R. [1943]. On the correct use of Bayes' formula. The Annals of Mathe-

matical Statistics, 13, 156–165.

Nadeau, C. and Bengio, Y. [1999]. Inference for the generalization error. In Ad-
vances in Neural Information Processing Systems, Volume 12 (pp. 307–313).

Cambridge, MA: MIT Press.

O'Hagan, A. [1994]. Bayesian Inference, Volume 2B of Kendall's Advanced Theory

of Statistics. London: Edward Arnold.

Platt, J. [1998]. Fast training of support vector machines using sequential minimal

optimization. In B. Schölkopf, C. Burges and A. Smola (Eds.), Advances in

Kernel Methods—Support Vector Learning. Cambridge, MA: MIT Press.

Press, W., Teukolsky, S., Vetterling, W. and Flannery, B. [1992]. Numerical Recipes

in C: The Art of Scientific Computing (2nd Ed.). Cambridge, England: Cambridge

University Press.

Quinlan, J. [1993]. C4.5: Programs for Machine Learning. San Mateo, CA: Morgan

Kaufmann.

Ramon, J. and Raedt, L. D. [2000]. Multi instance neural networks. In Attribute-
Value and Relational Learning: Crossing the Boundaries. Workshop at the

Seventeenth International Conference on Machine Learning.

Ruffo, G. [2001]. Learning Single and Multiple Instance Decision Trees for Com-
puter Security Applications. PhD thesis, Università di Torino, Italy.

Srinivasan, A., Muggleton, S., King, R. and Sternberg, M. [1994]. Mutagenesis:

ILP experiments in a non-determinate biological domain. In Proceedings of

the Fourth International Inductive Logic Programming Workshop (pp. 161–

174).

Stuart, A., Ord, J. and Arnold, S. [1999]. Classical Inference and the Linear Model,
Volume 2A of Kendall's Advanced Theory of Statistics. London: Arnold.


Vapnik, V. [2000]. The Nature of Statistical Learning Theory. New York, NY:

Springer-Verlag.

Wang, J. and Zucker, J.-D. [2000]. Solving the multiple-instance problem: a lazy

learning approach. In Proceedings of the Seventeenth International Conference
on Machine Learning (pp. 1119–1134). San Francisco, CA: Morgan Kauf-

mann.

Wang, Y. and Witten, I. [2002]. Modeling for optimal probability prediction. In

Proceedings of the Nineteenth International Conference on Machine Learning

(pp. 650–657). San Francisco, CA: Morgan Kaufmann.

Weidmann, N. [2003]. Two-level classification for generalized multi-instance data.

Master's thesis, Albert-Ludwigs-Universität Freiburg, Germany.

Weidmann, N., Frank, E. and Pfahringer, B. [2003]. A two-level learning method

for generalized multi-instance problems. In Proceedings of the Fourteenth

European Conference on Machine Learning. To be published.

Witten, I. and Frank, E. [1999]. Data Mining: Practical Machine Learning Tools and
Techniques with Java Implementations. San Francisco, CA: Morgan Kaufmann.

Zhang, Q. and Goldman, S. [2002]. EM-DD: An improved multiple-instance learn-

ing technique. In Proceedings of the 2001 Neural Information Processing Sys-
tems (NIPS) Conference (pp. 1073–1080). Cambridge, MA: MIT Press.

Zhang, Q., Goldman, S., Yu, W. and Fritts, J. [2002]. Content-based image retrieval

using multiple-instance learning. In Proceedings of the Nineteenth Interna-
tional Conference on Machine Learning (pp. 682–689). San Francisco, CA:

Morgan Kaufmann.
