
Fast and Reliable Color Region Merging inspired by Decision Tree Pruning

Richard Nock
Université Antilles-Guyane
Dépt Scientifique Inter-Facultaire
Campus de Schoelcher, B.P. 7209
97275 Schoelcher, France
rnock@martinique.univ-ag.fr

Abstract

In this paper, we exploit some previous theoretical results about decision-tree pruning to derive a color segmentation algorithm which avoids some of the common drawbacks of region merging techniques. The algorithm has both statistical and computational advantages over known approaches. It allows the processing of 512 x 512 images in less than a second on conventional PC computers. Experiments are reported on thirty-five images of various origins, illustrating the quality of the segmentations obtained.

1 Introduction

It has been established since the Gestalt movement in psychology [14] that perceptual grouping plays a fundamental role in human perception. Even though this observation is rooted in the early part of the XXth century, the adaptation and automation of the segmentation (and, more generally, grouping) task with computers has so far remained a tantalizing and central problem for image processing. This is all the more important for computers as grouping allows the reduction of the size of the data, and can thereby dramatically reduce the space and time complexities of some post-processing tasks, in a field where data are easily available and in huge quantities.

Roughly speaking, the problem can be presented as the transformation of the collection of pixels of an image into a meaningful arrangement of regions and objects [8]. But how can we identify objects? The Gestalt movement identified some factors leading to the perception of objects: symmetry, similarity, parallelism, etc. [4]. Though these rules are conceptually satisfying to explain the phenomenon at the human level, their lack of detail makes it hard to see them as directly implementable algorithms [4], even if some of them can be extended to rigorous algorithms [15]. So far, image segmentation techniques have tackled the problem by studying mathematical properties of the image seen [16], or, more rarely, by a direct emphasis on algorithmic and computational issues [3].

There are four large categories of approaches to image segmentation [16], one of which is of direct interest to us: region growing and merging techniques. In this group of algorithms, regions are sets of pixels with homogeneous properties, and they are iteratively grown by combining smaller regions or pixels, pixels being taken as elementary regions. Region growing techniques usually work with a statistical test to decide the merging of regions [11, 16].

In the field of image segmentation (or, more generally, image processing), Machine Learning (ML) techniques might appear tailor-made for high-level tasks, such as in [9]. One might wonder whether applying such techniques, or their adaptations, to image processing based on low-level cues (typically, image segmentation) might be well worth the try, in a field where the approaches are already numerous and of many kinds (local filtering, balloons, snakes, MDL/Bayesian, region growing, etc. [16]), and where heavy mathematical approaches have obtained significant results (see e.g. the results of [16]). We think some clues coming from image processing indicate that ML-derived techniques might already be of interest for low-level processing tasks. First, vision is widely accepted as an inference problem, i.e. the search for what caused the observed data [4]. Second, as argued by [12], low-level image segmentation should not aim at producing a correct segmentation, but rather a good approximation given the post-processing stages of the image. One more reason can give evidence for the possible interest in using ML-related techniques. ML and Computational Learning Theory (COLT) works such as [7] have brought a number of useful results, or found a way to use a number of results in statistics or probability, which remove many undesirable distribution assumptions on the data. The most commonly used theoretical founding model for ML/COLT is the PAC model of Valiant [13], which removes any such distribution assumptions. In image segmentation, there have been a number of probabilistic approaches to the problem [1, 16] or [4] (Chp. 18). Most of them rely on heavy, penalizing distribution assumptions on the data (such as normality), and some recent prominent work in image segmentation has finally put the emphasis on more conventional, non-probabilistic approaches, such as discriminant analysis-type methods [12, 2].

Our aim in this paper is clearly not to propose an ML algorithm to segment images, but rather to exploit some of the common points between image segmentation and decision-tree pruning, and then adapt the theoretical ML/COLT ideas and tools of [7] to our setting, to derive a new segmentation algorithm. This paper therefore consists of an original (and novel) adaptation of previous ML/COLT work to low-level image processing, a rather seldom result in our context. The next section goes in depth into the common points between decision-tree pruning and image segmentation. It is followed by a section presenting a model of image generation. Then, two sections detail respectively the statistical and algorithmic contents of our approach. The last section discusses experiments on numerous images.

2 From pruning to merging

A decision tree is a classifier making recursive partitions of a representation space, according to some classes. Consider fig. 1, showing a 2D representation space restricted to the integer couples of the domain $[0, x_{\max}] \times [0, y_{\max}]$ for some $x_{\max}, y_{\max} > 0$. Suppose that there are $c$ classes (not shown). The point is that a probability distribution is given over the domain and the classes. Each couple (observation, class) is called an example, and the problem is to fit as well as possible the domain of all examples, using only a potentially small subset of this domain, sampled according to the distribution. Decision trees are a very efficient way to tackle the problem using a very simple formalism [7]. The quality of a decision tree is evaluated by its error probability over the whole domain, also called structural risk, which we can only estimate using the error frequency over the examples seen, also called empirical risk [7]. The classification of an observation is made as follows: each internal node (and the root) of the tree is labeled by a test on a variable of the observations; each leaf is labeled by a class (not shown in fig. 1). The classification process starts from the root of the tree. If the observation satisfies the current test, then it follows the right path, otherwise it follows the left path, until it reaches a leaf, and it is then given the corresponding class. In fig. 1, the dotted path shows the sequence of tests followed by one such observation down to its region. The most popular decision-tree learning algorithms (CART, C4.5 [7]) proceed by growing a very large decision tree to fit as well as possible the examples seen, and then pruning it to get (hopefully) a good fit of the whole domain.

[Figure 1: a 2D domain with axis marks $x_1, x_2, x_3$ on the $x$-axis and $y_1, y_2, y_3$ on the $y$-axis, partitioned into regions (labelled $A$, $B$, $C$, ...), together with the decision tree that induces the partition; internal nodes carry tests of the form $x > x_i$ or $y > y_j$, with Yes/No branches leading to the leaves.]

Figure 1. A domain and its recursive partition by a decision tree (see text for details).

Pruning a tree consists in iteratively removing an internal node and its subtree, thereby replacing the node by a class label. Pruning one of the internal nodes of fig. 1, for instance, boils down to merging the regions spanned by its subtree (e.g. regions $A$, $B$ and $C$).

If we consider that the color values of pixels in an image are the result of a theoretical value (the one of the region they belong to), combined with a random variability factor (such as noise), then the task of region merging can be loosely reformulated as the recognition of the regions having the same theoretical values in an observed image. This problem is quite similar to decision-tree pruning (consider that the domain of fig. 1 is the image, with one pixel per integer couple); more importantly, the tools used in both problems can be the same: the concentration of random variables to give confidence bounds, either for the theoretical color values of groups of pixels, or for the structural risks in decision-tree pruning. However, our task appears to be more difficult than pruning, since we generate many more potential configurations. Consider e.g. fig. 1: the tree has only 7 possible prunings (including the original tree), whereas there are 42 possible segmentations of the domain with the 6 initial regions.

Image processing has two characteristics which potentially make it a good candidate for the adaptation of particular ML techniques. First, the quality of the pruning strongly depends on the available quantity of data, and small datasets typically lead to uncontrollable over-pruning; in image segmentation, the material is available in huge quantities. Second, image segmentation is a field in which fast algorithms are obviously crucial with the advent of real-time (video) image processing, but practice shows that "fast" is a field-dependent, subjective notion: for example, [1] consider a "very fast" algorithm extracting a few dozen regions in a 512 x 512 image in less than 10 seconds, but on a high-end UltraSPARC workstation; [12] give execution times on 100 x 120 images of about 2 minutes on conventional machines, and there are many other examples. We show in this paper that using our ML-derived tools can bring highly
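To make the pruning/merging analogy concrete, here is a small illustrative sketch (no such code appears in the paper) of a decision tree whose internal nodes test one variable of the observation, with classification as a root-to-leaf walk and pruning as the replacement of an internal node and its subtree by a single class label; the node layout and field names are assumptions made only for this illustration.

#include <stdio.h>
#include <stdlib.h>

typedef struct node {
    int is_leaf;
    int klass;              /* class label, meaningful when is_leaf        */
    int var;                /* index of the tested variable (0: x, 1: y)   */
    double threshold;       /* test: observation[var] > threshold ?        */
    struct node *left;      /* followed when the test fails                */
    struct node *right;     /* followed when the test is satisfied         */
} node;

static node *leaf(int klass)
{
    node *t = malloc(sizeof *t);
    t->is_leaf = 1; t->klass = klass; t->left = t->right = NULL;
    return t;
}

static node *test(int var, double threshold, node *left, node *right)
{
    node *t = malloc(sizeof *t);
    t->is_leaf = 0; t->klass = -1; t->var = var; t->threshold = threshold;
    t->left = left; t->right = right;
    return t;
}

/* classification: walk from the root down to a leaf */
static int classify(const node *t, const double *obs)
{
    while (!t->is_leaf)
        t = (obs[t->var] > t->threshold) ? t->right : t->left;
    return t->klass;
}

static void free_subtree(node *t)
{
    if (!t) return;
    if (!t->is_leaf) { free_subtree(t->left); free_subtree(t->right); }
    free(t);
}

/* pruning: replace an internal node and its subtree by a single class
   label, which merges all the regions of the domain the subtree induced */
static void prune(node *t, int klass)
{
    if (t->is_leaf) return;
    free_subtree(t->left);
    free_subtree(t->right);
    t->left = t->right = NULL;
    t->is_leaf = 1;
    t->klass = klass;
}

int main(void)
{
    /* a tiny tree: root tests y > 1.0; its right child tests x > 1.0 */
    node *root = test(1, 1.0, leaf(0), test(0, 1.0, leaf(1), leaf(2)));
    double obs[2] = {1.5, 2.0};                    /* an (x, y) observation */
    printf("class before pruning: %d\n", classify(root, obs));
    prune(root->right, 1);                         /* merge its two regions */
    printf("class after pruning:  %d\n", classify(root, obs));
    free_subtree(root);
    return 0;
}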

3 Notations and models

The notation $|\cdot|$ stands for cardinality. The observed image, $I$, contains $|I|$ pixels, each containing Red-Green-Blue (RGB) values, each of the three belonging to the set $\{1, 2, \ldots, g\}$. We have deliberately chosen not to use complex formulations of the colors, such as the L*u*v* space [1]. $I$ is an observation of a perfect scene $I^*$ we do not know of, in which pixels are perfectly represented by a family of distributions, from which each of the observed color levels is sampled. In $I^*$, the optimal (or true) regions represent theoretical objects sharing a common homogeneity property:

- all pixels of a given true region have identical expectations for each RGB color level,
- the expectations of adjacent regions are different for at least one RGB color level.

$I$ is obtained from $I^*$ by sampling each theoretical pixel for observed RGB values. Fig. 2 presents an example of a color level for one pixel in $I^*$ and how to generate the corresponding observed color level of the pixel in $I$. In each pixel of $I^*$, each color level is replaced by a set of $Q$ independent random variables (r.v.) taking positive values on domains bounded by $g/Q$, such that any possible sum of outcomes of these $Q$ r.v. with non-zero probability belongs to $\{1, 2, \ldots, g\}$.

The sampling of each pixel and of its color levels are supposed independent from each other. Our model of image generation unfortunately does not allow us to avoid this usual assumption (also widely used in ML/COLT); our experiments shall demonstrate that it does not seem to have an impact on the results of the segmentation. It is important to note that this is the only assumption we make on $I^*$. In particular, we do not make any further assumption on the nature of each distribution (such as normality or homoscedasticity), which can therefore differ from one another in each pixel of a region, as long as the sum of their expectations is constant for each color inside a true region. Our goal is then straightforward: find observed regions in $I$ approximating as well as possible the true regions in $I^*$.

$Q$ is certainly the least intuitive parameter of our model. We have chosen to introduce it so that our model and our analyses are as general as possible (standard analyses would fix $Q = 1$). Practically speaking, $Q$ has the major advantage of being a trade-off parameter, adjustable to obtain a compromise between the power of the model and the quality of the observed results. Indeed, if $Q$ is small (say, $Q = 1$), then scarcely anything can be estimated reliably for small regions in $I$, but our model is the most general possible. On the other hand, if $Q$ is too large, then our model becomes restricted ($Q = g$ brings back a conventional Binomial law), but our estimations are the most reliable. In between, the user can choose a convenient value to keep a powerful enough model while ensuring reliable estimations on $I$.
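As an illustration of this generative model, here is a minimal C sketch (not the paper's code) that draws one observed colour level as the sum of $Q$ bounded random variables; the uniform law, $g = 255$ and $Q = 32$ are assumptions made only for the example, since the model deliberately leaves the per-pixel distributions unspecified.

#include <stdio.h>
#include <stdlib.h>

#define G 255   /* colour levels assumed to range up to g = 255 */
#define Q 32    /* trade-off parameter of the model             */

/* Sample one observed colour level whose expectation is `expect`
   (0 <= expect <= G), as the sum of Q independent r.v., each taking
   values in [0, G/Q] (here: a uniform law, chosen only for the demo). */
static int sample_observed_level(double expect)
{
    double sum = 0.0;
    for (int q = 0; q < Q; q++) {
        double mean = expect / Q;                      /* per-r.v. expectation     */
        double room = (double)G / Q - mean;
        double halfwidth = mean < room ? mean : room;  /* keep values in [0, G/Q]  */
        double u = (double)rand() / RAND_MAX;          /* uniform in [0, 1]        */
        sum += mean + (2.0 * u - 1.0) * halfwidth;
    }
    int v = (int)(sum + 0.5);
    return v > G ? G : (v < 0 ? 0 : v);
}

int main(void)
{
    /* one "true" pixel of I* with RGB expectations (120, 60, 200) */
    double truth[3] = {120.0, 60.0, 200.0};
    for (int c = 0; c < 3; c++)
        printf("channel %d: expectation %.0f, observed %d\n",
               c, truth[c], sample_observed_level(truth[c]));
    return 0;
}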

4 The merging test and its properties

Our main result is based on the concentration of random variables, a tool widely used in the ML/COLT literature [7]. Our model necessitates a recent result due to [10]:

Theorem 1 (The independent bounded differences inequality, [10]) Let $\mathbf{X} = (X_1, X_2, \ldots, X_n)$ be a family of independent r.v., with $X_k$ taking values in a set $A_k$ for each $k$. Suppose that the real-valued function $f$ defined on $\prod_k A_k$ satisfies $|f(\mathbf{x}) - f(\mathbf{x}')| \le c_k$ whenever the vectors $\mathbf{x}$ and $\mathbf{x}'$ differ only in the $k$-th coordinate. Let $\mu$ be the expected value of the r.v. $f(\mathbf{X})$. Then for any $\tau > 0$,

$\Pr\left(f(\mathbf{X}) - \mu \ge \tau\right) \le \exp\left(-2\tau^2 \Big/ \sum_k c_k^2\right)$   (1)

In our case, we now prove that, with high probability, the observed average $\bar{R}_a$ of any region $R$ in $I$ shall not deviate too much from its theoretical expectation $\mathbf{E}_a(R)$ for a single color level $a$. By theoretical expectation, we mean the average, over each of its pixels, of the sum of the expectations of its $Q$ distributions for color $a$ in $I^*$.

Theorem 2 Let $R$ be a region in $I$. Let $\mathcal{R}_l$ be the set of regions having $l$ pixels in $I$. Fix an $a \in \{R, G, B\}$. $\forall \delta' \le 1$, the probability that there exists a region in $\mathcal{R}_{|R|}$ such that

$\left|\bar{R}_a - \mathbf{E}_a(R)\right| > g \sqrt{\frac{1}{2Q|R|} \ln \frac{2\,|\mathcal{R}_{|R|}|}{\delta'}}$   (2)

is no more than $\delta'$. (Proof omitted due to the lack of space.)

Of course, the probability of occurrence of the event for some of the R, G, or B levels is no more than $3\delta'$. What is interesting in Theorem 2 is that region $R$ is not required to belong to a true region of $I^*$. Indeed, if the region merging algorithm has merged together subsets of true regions $R^*_1, R^*_2, \ldots, R^*_k$ in $R$, then the bound of Theorem 2 still holds. The use of Theorem 2 will be straightforward. The probability that there exists a region in $I$ (regardless of its size) for which the average of some of its RGB colors deviates from its expectation by more than the right-hand side of inequality (2) is no more than $3|I|\delta'$, since the region can have size $1, 2, \ldots,$ or $|I|$. Then, if we fix

$\delta = 3|I|\delta'$   (3)

and

$b(R) = g \sqrt{\frac{1}{2Q|R|}\left(\ln\frac{2}{\delta'} + \ln|\mathcal{R}_{|R|}|\right)}.$   (4)
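For the reader who wants the missing step, here is a sketch of how Theorem 1 plausibly yields inequality (2) under the model of Section 3; this is not the omitted proof, only a reconstruction of the standard argument.

Under the model, the observed average of channel $a$ over a region $R$ is
$$\bar{R}_a = \frac{1}{|R|} \sum_{p \in R} \sum_{q=1}^{Q} X_{p,q},$$
a function of $n = Q|R|$ independent random variables, each confined to an interval of width $g/Q$. Changing a single $X_{p,q}$ changes $\bar{R}_a$ by at most $c = g/(Q|R|)$, so Theorem 1 gives
$$\Pr\!\left(\bar{R}_a - \mathbf{E}_a(R) \ge \tau\right) \le \exp\!\left(-\frac{2\tau^2}{Q|R|\,\bigl(g/(Q|R|)\bigr)^2}\right) = \exp\!\left(-\frac{2\,Q|R|\,\tau^2}{g^2}\right).$$
Requiring the right-hand side to be at most $\delta'/(2|\mathcal{R}_{|R|}|)$ (a union bound over both deviation signs and over the at most $|\mathcal{R}_{|R|}|$ regions of size $|R|$) and solving for $\tau$ gives
$$\tau = g\,\sqrt{\frac{1}{2Q|R|}\,\ln\frac{2\,|\mathcal{R}_{|R|}|}{\delta'}},$$
which is exactly the deviation of inequality (2).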

Figure 2. Generation of a single color level for one pixel, from $I^*$ to $I$. [The original figure shows the image border, two adjacent true regions of $I^*$, and one of their pixels.]

With these choices, we know that, with high probability (at least $1 - \delta$), any possible region $R$ will have a bounded variation of $\bar{R}_a$ around $\mathbf{E}_a(R)$ for all values $a \in \{R, G, B\}$. Because of the triangle inequality, any two regions $R$ and $R'$ having the same expectations ($\mathbf{E}_a(R') = \mathbf{E}_a(R) = \Delta_a$, $\forall a \in \{R, G, B\}$) shall observe a quantity $|\bar{R}'_a - \bar{R}_a|$ which is also concentrated around zero, up to radius $b(R) + b(R')$, since $|\bar{R}'_a - \bar{R}_a| \le |\bar{R}'_a - \Delta_a| + |\bar{R}_a - \Delta_a|$. If $|\bar{R}'_a - \bar{R}_a| \le b(R) + b(R')$ for all $a \in \{R, G, B\}$, then we can suppose that $R$ and $R'$ belong to the same true region in $I^*$, and merge them. This gives the merging predicate $P(R, R')$ of our region merging algorithm, which returns whether $R$ and $R'$ can be merged or not:

$P(R, R') = \begin{cases} \text{true} & \text{if } \forall a \in \{R, G, B\}:\ |\bar{R}'_a - \bar{R}_a| \le b(R) + b(R') \\ \text{false} & \text{otherwise} \end{cases}$

We also need to compute the value $|\mathcal{R}_{|R|}|$. Because exact values are difficult to compute, we use a practical upper bound on $|\mathcal{R}_l|$ (proof omitted) which follows the true value as closely as possible, being smaller for the small values of $l$ (thus, for small regions) and larger for the large values of $l$. The risk in computing an upper bound for $|\mathcal{R}_l|$ was to get a too large upper bound, thereby leading to a too large risk of over-merging (we shall see it in the next section).
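A minimal C sketch of how $b(R)$ (eq. 4) and the merging predicate $P(R, R')$ could be evaluated; the constants $g = 255$ and $Q = 32$, and the fact that the caller supplies $\ln|\mathcal{R}_{|R|}|$ through whatever bound it chooses, are assumptions of this illustration, not the paper's implementation.

#include <math.h>
#include <stdio.h>

#define G 255.0   /* assumed colour range, levels up to g = 255        */
#define Q 32.0    /* trade-off parameter of the model                  */

/* deviation b(R) of eq. (4); npix = |R|, delta_p = delta', and log_card
   stands for ln|R_{|R|}|, whichever upper bound the caller plugs in    */
static double dev_b(double npix, double delta_p, double log_card)
{
    return G * sqrt((log(2.0 / delta_p) + log_card) / (2.0 * Q * npix));
}

/* merging predicate P(R, R'): true iff, for every channel a in {R, G, B},
   |avg2[a] - avg1[a]| <= b(R) + b(R')                                   */
static int merge_predicate(const double avg1[3], double n1, double logc1,
                           const double avg2[3], double n2, double logc2,
                           double delta_p)
{
    double b = dev_b(n1, delta_p, logc1) + dev_b(n2, delta_p, logc2);
    for (int a = 0; a < 3; a++)
        if (fabs(avg2[a] - avg1[a]) > b)
            return 0;
    return 1;
}

int main(void)
{
    /* two regions of 100 pixels each with close RGB averages (toy values) */
    double r1[3] = {120.0, 60.0, 200.0}, r2[3] = {126.0, 63.0, 204.0};
    double delta_p = 1e-6, log_card = 10.0;        /* illustrative values  */
    printf("merge? %s\n",
           merge_predicate(r1, 100.0, log_card, r2, 100.0, log_card, delta_p)
               ? "yes" : "no");
    return 0;
}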

5 The algorithm and its properties

Suppose that the image $I$ contains $N_r$ rows and $N_c$ columns. This represents $N = 2 N_r N_c - N_r - N_c$ couples of adjacent pixels (in 4-connexity). Name $S_I$ the set of these couples. Denote by $f(\cdot)$ the function which takes a couple of pixels $(p, p')$ and returns the maximum of the three color differences (R, G and B), in absolute value, between $p$ and $p'$. Denote as quicksort($S_I$, $f$) the output of the quicksort algorithm on the set $S_I$, in increasing order of the function $f(\cdot)$. For any pixel $p$ of $I$, we denote as $R(p)$ the current region to which $p$

belongs in $I$. The algorithm is called PSIS, for Probabilistic Sorted Image Segmentation.

Algorithm 1: PSIS($I$)
Input: an image $I$
$S'_I$ = quicksort($S_I$, $f$);
for $i = 1$ to $N$ do
    /* $(p_i, p'_i)$ is the $i$-th couple of $S'_I$ */
    if $R(p_i) \ne R(p'_i)$ and $P(R(p_i), R(p'_i))$ = true then
        Union($R(p_i)$, $R(p'_i)$);
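The following is a minimal C sketch of the PSIS loop (not the paper's implementation): couples of 4-adjacent pixels are sorted by their maximum absolute RGB difference, then scanned once, merging the two current regions with a union-find structure whenever the predicate of Section 4 accepts. The image size, the values $g = 255$, $Q = 32$ and $\delta'$, and the crude stand-in bound used for $\ln|\mathcal{R}_l|$ are all assumptions of this illustration.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define W 512                       /* assumed image width                */
#define H 512                       /* assumed image height               */
#define NPIX (W * H)
#define G 255.0                     /* assumed colour range               */
#define Q 32.0                      /* trade-off parameter of the model   */

static unsigned char img[NPIX][3];  /* observed RGB image I               */
static int    parent[NPIX];         /* union-find forest                  */
static int    rsize[NPIX];          /* |R| for each root                  */
static double rsum[NPIX][3];        /* per-channel sums, per root         */

typedef struct { int p, q, w; } Couple;   /* adjacent pixels + gradient   */

static int find(int x)              /* find with path compression         */
{
    while (parent[x] != x) { parent[x] = parent[parent[x]]; x = parent[x]; }
    return x;
}

static int cmp_couple(const void *a, const void *b)
{
    return ((const Couple *)a)->w - ((const Couple *)b)->w;
}

static double dev_b(int npix, double delta_p)
{
    /* eq. (4), with ln|R_l| crudely bounded by npix * ln|I|; any valid
       upper bound on |R_l| could be plugged in here instead              */
    double log_card = (double)npix * log((double)NPIX);
    return G * sqrt((log(2.0 / delta_p) + log_card) / (2.0 * Q * npix));
}

static int predicate(int r1, int r2, double delta_p)   /* P(R, R')       */
{
    double b = dev_b(rsize[r1], delta_p) + dev_b(rsize[r2], delta_p);
    for (int a = 0; a < 3; a++)
        if (fabs(rsum[r1][a] / rsize[r1] - rsum[r2][a] / rsize[r2]) > b)
            return 0;
    return 1;
}

int main(void)
{
    double delta_p = 1.0 / (3.0 * (double)NPIX * (double)NPIX);
    /* ... fill img[][] with the observed image here ... */

    for (int i = 0; i < NPIX; i++) {          /* every pixel starts alone  */
        parent[i] = i; rsize[i] = 1;
        for (int a = 0; a < 3; a++) rsum[i][a] = img[i][a];
    }

    /* build the 2*W*H - W - H couples of 4-adjacent pixels                */
    int ncpl = 2 * W * H - W - H, k = 0;
    Couple *cpl = malloc(ncpl * sizeof *cpl);
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            int p = y * W + x;
            int nb[2] = { x + 1 < W ? p + 1 : -1, y + 1 < H ? p + W : -1 };
            for (int j = 0; j < 2; j++) {
                if (nb[j] < 0) continue;
                int w = 0;
                for (int a = 0; a < 3; a++) {
                    int d = abs((int)img[p][a] - (int)img[nb[j]][a]);
                    if (d > w) w = d;
                }
                cpl[k++] = (Couple){ p, nb[j], w };
            }
        }

    qsort(cpl, ncpl, sizeof *cpl, cmp_couple);  /* increasing gradient     */

    for (int i = 0; i < ncpl; i++) {            /* single merging pass     */
        int r1 = find(cpl[i].p), r2 = find(cpl[i].q);
        if (r1 != r2 && predicate(r1, r2, delta_p)) {
            parent[r2] = r1;                    /* Union(R(p_i), R(p'_i))  */
            rsize[r1] += rsize[r2];
            for (int a = 0; a < 3; a++) rsum[r1][a] += rsum[r2][a];
        }
    }

    int nregions = 0;                           /* count surviving roots   */
    for (int i = 0; i < NPIX; i++) if (find(i) == i) nregions++;
    printf("regions found: %d\n", nregions);
    free(cpl);
    return 0;
}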

One might wonder why we have chosen to order the couples of pixels in PSIS. Indeed, such a requirement is a priori not clear from our theory, and it brings an $O(|I| \log |I|)$ complexity (sometimes smaller, [2]) instead of the (almost) linear complexity reachable without this stage. There are basically three kinds of errors our algorithm can suffer with respect to the optimal segmentation. First, under-merging represents the case where one or more regions obtained are strict subparts of true regions. Second, over-merging represents the case where some regions obtained strictly contain more than one true region. Third, there is the "hybrid" (and most probable) case where some regions obtained contain more than one strict subpart of true regions. Ordering the pixel tests is a way to limit this third kind of error, since we approximate the following property:

(P) All merging tests inside true regions are made before any merging test between true regions.

Obviously, (P) is unreachable in practice; however, from a purely theoretical point of view, (P) has a key effect when used with our statistical material: PSIS only suffers over-merging, with probability

at least $1 - \delta$ (eq. 3). Indeed, we know that for any regions $R, R'$ coming from the same true region in $I^*$, we have $\forall a \in \{R, G, B\}$: $|\bar{R}'_a - \bar{R}_a| \le |\bar{R}'_a - \Delta_a| + |\bar{R}_a - \Delta_a|$, where $\Delta_a = \mathbf{E}_a(R') = \mathbf{E}_a(R)$. With probability at least $1 - \delta$, we also know (Theorem 2 and eq. 3) that $|\bar{R}'_a - \bar{R}_a| \le b(R) + b(R')$ for any regions $R, R'$ coming from the same true region in $I^*$. Since this is our merging test, and since (P) holds, it follows that the segmentation obtained is an under-segmentation of $I^*$. Therefore, theoretically speaking, (P) allows us to eliminate

Figure 3. Results obtained by PSIS and [1] on four images (columns: Image, PSIS, [1]). Regions are white with black borders.

under-merging and the hybrid error case with high probability; this motivated us in fast approximations of it. In fact, over-merging can somewhat be controlled in practice for regions with size large enough, as long as the bound for $|\mathcal{R}_l|$ is not too large (see eq. 4). This is the reason for our choice of the upper bound for $|\mathcal{R}_l|$.

6 Experimental results

Due to the lack of space, we report experiments on only thirty-five of all the images on which PSIS was tested (figures 3, 4 and 5). While looking at the results, the reader may keep in mind that:

the values of the parameters of PSIS are the same for all images: $\delta' = 1/(3|I|^2)$ (Theorem 2) and $Q = 32$. Furthermore, the images were used as they are, i.e. without any preprocessing.

Therefore, the results of PSIS do not stem from any domain- or image-dependent preprocessing or parameter tuning. PSIS was implemented in C. Fig. 3 displays some results to be compared with [1, 16]. While PSIS obtained record times for processing these images, the results are highly competitive with the other approaches. Consider the woman image. Her bust is better segmented than in [1]. When looking at [16]'s result for this image, our result appears to be better: after over a hundred iterations, [16]'s technique of region competition does not manage to find the woman's eyes. [16] argue that the eyes are too small and assimilated to noise. This is clearly not a mistake PSIS has made. The hand image also displays good results, all the better if we consider that the model of image segmentation of PSIS does not explicitly integrate textures. In fig. 4, the left table demonstrates that PSIS again obtains nice results

on texture segmentation (tennis and rock images), all the more interesting if we compare them to the approaches of [5, 6], tailor-made for texture segmentation. The right image shows, for some images, a particular region isolated by PSIS. Remark from lena and squirrel that PSIS is able to isolate regions with high variability (e.g. the grass), and obtains results even better than [16] on the squirrel image: their segmentation, although tailor-made for textured images, obtains a segmentation of the grass with many holes, a popular drawback of region-merging techniques [16]; see also the result of [2] in fig. 5 (bottom row, region #2) for the grass. Note also the nice segmentation of the truck compared to [2] (fig. 5, bottom row, region #4). The small images in fig. 5 (a-x) display the ability of PSIS to handle reasonable gradients due to lighting and slanted surfaces (images b, f, i, j, k, r, s, u, w, x). This corrects another drawback of many segmentation techniques, which rely on the assumption that the image is piecewise constant [2]. Note that in image x, PSIS obtains exactly two regions. The bowl region (x, #2) approximates the true object nicely, in a better way than [2], who find significantly more regions, again with many holes compared to PSIS. In image w, which contains gradients and light effects, PSIS finds exactly three regions, approximating almost perfectly the three parts of the image: the background, the pot, and its lid. The gradients and light effects of image j are much more difficult to handle. In that case, PSIS found four regions (all shown), two of which (#2 and #3) are good approximations of two parts of the object (exterior/interior). More generally, many regions found by PSIS in these small images approximate conceptually distinct parts of the objects: see (a, #4), (b, #3), (c, #4), (e, #3), (f, #3), (n, #4), (s, #3), (t, #3), and many others.

Figure 4. Results obtained by PSIS and [5, 6] on various images (columns: Image, PSIS, [5, 6]; Image, PSIS, detail). The "detail" column is a region of interest in PSIS's results. PSIS's segmentations are grey-level averaged with white borders. Conventions for the results of [5, 6] vary but are intuitive.

References

[1] D. Comaniciu and P. Meer. Robust analysis of feature spaces: color image segmentation. In Proceedings of the IEEE Int. Conf. on Computer Vision and Pattern Recognition, pages 750–755, 1997.

[2] P. F. Felzenszwalb and D. P. Huttenlocher. Image segmentation using local variations. In Proceedings of the IEEE Int. Conf. on Computer Vision and Pattern Recognition, pages 98–104, 1998.

[3] C. Fiorio and J. Gustedt. Two linear time Union-Find strategies for image processing. Theoretical Computer Science, 154:165–181, 1996.

[4] D. Forsyth and J. Ponce. Computer Vision: A Modern Approach. Prentice Hall, 2001. Forthcoming.

[5] Z. Kato. Modélisations markoviennes multirésolutions en vision par ordinateur. Application à la segmentation d'images SPOT. PhD thesis, Université de Nice-Sophia Antipolis, 1994.

[6] Z. Kato, T. C. Pong, and J. C. M. Lee. Motion compensated color video classification using Markov random fields. In Proceedings of the 3rd Asian Conference on Computer Vision, 1998.

[7] M. J. Kearns and Y. Mansour. A fast, bottom-up decision tree pruning algorithm with near-optimal generalization. In Proceedings of the 15th International Conference on Machine Learning, pages 269–277, 1998.

[8] T. Leung and J. Malik. Contour continuity in region based image segmentation. In Proceedings of the 5th Europ. Conf. on Computer Vision, pages 544–559, 1998.

[9] O. Maron and A.-L. Ratan. Multiple instance learning for natural scene classification. In Proceedings of the 15th International Conference on Machine Learning, pages 341–349, 1998.

[10] C. McDiarmid. Concentration. In M. Habib, C. McDiarmid, J. Ramirez-Alfonsin, and B. Reed, editors, Probabilistic Methods for Algorithmic Discrete Mathematics, pages 1–54. Springer Verlag, 1998.

[11] T.-Y. Philips, A. Rosenfeld, and A. C. Sher. O(log n) bimodality analysis. Pattern Recognition, pages 741–746, 1989.

[12] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Trans. on Pattern Analysis and Machine Intelligence, 22:888–905, 2000.

[13] L. G. Valiant. A theory of the learnable. Communications of the ACM, pages 1134–1142, 1984.

[14] M. Wertheimer. Laws of organization in perceptual forms. In W. B. Ellis, editor, A Sourcebook of Gestalt Psychology, pages 71–88. Harcourt, Brace and Company, 1938.

[15] S.-C. Zhu. Embedding Gestalt laws in Markov random fields. IEEE Trans. on Pattern Analysis and Machine Intelligence, 21:1170–1187, 1999.

[16] S.-C. Zhu and A. Yuille. Region competition: unifying snakes, region growing, and Bayes/MDL for multiband image segmentation. IEEE Trans. on Pattern Analysis and Machine Intelligence, 18:884–900, 1996.

[Figure 5: a table of small images labelled a-x and, for each, up to four of the most significant regions found by PSIS (columns Image, Reg.#1 to Reg.#4); the bottom row shows the street image and the corresponding result of [2].]

Figure 5. Some images, and some of the most significant regions obtained by PSIS. The bottom row shows the result of [2] on the street image. The conventions are the same for all region images (everything that is not the region is white), except for region #1 of the street results, which is surrounded by black due to the brightness of the road.
