Transcript
Ontology research and development. Part 2 - a review of ontology mapping and evolving

Ying Ding and Schubert Foo

Journal of Information Science 28(5) (2002) 375–388
DOI: 10.1177/016555150202800503
The online version of this article can be found at: http://jis.sagepub.com/content/28/5/375

Published by SAGE Publications (http://www.sagepublications.com) on behalf of the Chartered Institute of Library and Information Professionals.



Ying Ding

Vrije Universiteit, Amsterdam, The Netherlands

Schubert Foo

Nanyang Technological University, Singapore

Received 29 November 2001; revised 2 April 2002

Abstract.

This is the second of a two-part paper reviewing ontology research and development, in particular ontology mapping and evolving. Ontology is defined as a formal explicit specification of a shared conceptualization. An ontology is not a static model, so it must have the potential to capture changes of meanings and relations. As such, mapping and evolving ontologies is an essential part of ontology learning and development. Ontology mapping is concerned with reusing existing ontologies, expanding and combining them by some means, and enabling a larger pool of information and knowledge in different domains to be integrated to support new communication and use. Ontology evolving, likewise, is concerned with maintaining existing ontologies and extending them as appropriate when new information or knowledge is acquired. It is apparent from the reviews that current research into semi-automatic or automatic ontology research in all three aspects of generation, mapping and evolving has so far achieved limited success. Expert human input is essential in almost all cases. Achievements have been made largely in the form of tools and aids to assist the human expert. Many research challenges remain in this field, and many of them need to be overcome if the next generation of the Semantic Web is to be realized.

1. Introduction

Ontology is defined as a formal explicit specification of a shared conceptualization [1]. It provides a shared and common understanding of a domain that can be communicated across people and application systems. Ontology itself is not a static model. It must have the potential to capture the changes of meaning [2].

Ontologies are developed to provide the common semantics for agent communication. When two agents need to communicate or exchange information, the prerequisite is that a common consensus has to be formed between them. This leads to the need to map two ontologies. For example, in business to business (B2B) e-commerce applications, the mapping among different classification standards (such as UNSPSC (http://eccma.org/unspsc/) and ecl@ss (www.eclass.de/)) turns out to be non-trivial.

This paper reports the second part of the survey on ontology research and development. The first part introduced the subject of ontology and focused on the state-of-the-art techniques and work done on semi-automatic and automatic ontology generation, as well as the problems facing this research [3]. This second part focuses on the current status of research into ontology mapping and ontology evolving.


Correspondence to: Ying Ding, Division of Mathematics and Computer Science, Vrije Universiteit, Amsterdam, The Netherlands. E-mail: [email protected]


An introduction to ontology mapping is first presented, followed by a review of a number of different ontology mapping projects to illustrate the approaches, techniques, resulting mappings and problems associated with the ontologies produced. This is followed by a review of related work on ontology evolving, using a similar framework of presentation.

2. Ontology mapping

Effective use or reuse of knowledge is essential. This is especially so now because of the overwhelming amount of information that is being continually generated, which in turn has forced organizations, businesses and people to manage their knowledge more effectively and efficiently. Simply combining knowledge from distinct domains creates several problems, for instance different knowledge representation formats, semantic inconsistencies, and so on. The same applies to the area of ontology engineering.

With ontologies generated, ontology engineers subsequently face the problem of how to reuse these existing ontologies, and how to map various different ontologies in order to enable a common interface and understanding to emerge for the support of communication between existing and new domains. As such, ontology mapping has turned out to be another important research area for ontology learning.

Sofia Pinto et al. [4] provided a framework and clarified the meaning of the term 'ontology integration' to include that of ontology reuse, ontology merging and ontology alignment, along with tools, methodologies and applications, as shown in Table 1.

Noy and Musen [5] clarified the difference between ontology alignment and ontology merging and noted that 'in ontology merging, a single ontology is created which is a merged version of the original ontologies, while in ontology alignment, the two original ontologies exist, with links established between them'. There are several ways to carry out ontology mapping, as can be seen in the ways in which the resulting mappings are represented. Mappings can be represented as conditional rules [6], functions [7], logic [8], or a set of tables and procedures [9].

Ontology mapping has been addressed by researchers using different approaches:
• One-to-one approach, where for each ontology a set of translating functions is provided to allow communication with the other ontologies without an intermediate ontology. The problem with this approach is one of computing complexity (e.g. OBSERVER [10]).
• Single-shared ontology. The drawbacks of dealing with a single shared ontology are similar to those of any standard [11].
• Ontology clustering, where resources are clustered together on the basis of similarities. Additionally, ontology clusters can be organized in a hierarchical fashion [12].

Figure 1 shows a very simple example of ontology mapping, in which the process of mapping an Employee ontology and a Personnel ontology from different departments of the same company is illustrated. A different UnitOfMeasure exists in these two ontologies, so the mapping rule of UnitConversion is needed to secure the right mapping.

[Fig. 1. Simple example of ontology mapping: a source Employee ontology (name: text; weight: number, kg; nationality: country) is mapped to a target Personnel ontology (name: text; weight: number, gram; nationality: country) via a UnitConversion rule with Factor = 1000, SourceProperty = 'weight', TargetProperty = gram.]
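To make the mapping rule concrete, the following minimal Python sketch (an illustration only, not code from the paper; the instance data and function names are invented) applies a UnitConversion rule of the kind shown in Fig. 1 when translating an Employee instance into a Personnel instance:

```python
# Minimal sketch of the Fig. 1 mapping: a declarative rule converts the
# 'weight' property from kilograms (source) to grams (target).

def unit_conversion(factor, source_property, target_property):
    """Return a mapping rule that rescales one property and renames it if needed."""
    def rule(instance):
        mapped = dict(instance)                       # copy untouched properties
        mapped[target_property] = instance[source_property] * factor
        if target_property != source_property:
            del mapped[source_property]
        return mapped
    return rule

# Rule from Fig. 1: Factor = 1000, SourceProperty = 'weight' (kg -> gram).
weight_kg_to_gram = unit_conversion(1000, "weight", "weight")

employee = {"name": "A. Smith", "weight": 72, "nationality": "SG"}   # source instance
personnel = weight_kg_to_gram(employee)                              # target instance
print(personnel)   # {'name': 'A. Smith', 'weight': 72000, 'nationality': 'SG'}
```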

3. Ontology mapping projects

Many ontology mapping projects have been carried out and reported in the literature.


Table 1. Examples of ontology integration techniques and applications

1. Integration of ontologies by building a new ontology and reusing other available ontologies (ontology reuse)
   Tools: Ontolingua server
   Ontologies built/applications: PhySys, Mereology ontology, KACTUS, Standard-Units ontology, etc.
   Methodology: Integration of the building blocks and foundational theories

2. Integration of ontologies by merging different ontologies into a single one that 'unifies' all of them (includes ontology merging and ontology alignment)
   Tools: ONIONS
   Ontologies built/applications: SENSUS, Agreed-Upon-Ontology
   Methodology: Manually, brainstorming, ONIONS

3. Integration of ontologies into applications
   Tools: KACTUS
   Ontologies built/applications: CYC, GUM, PIF, UMLS, EngMath, PhySys, Enterprise Ontology, Reference ontology
   Methodology: Manual method, brainstorming, ONIONS

Note: the various tools, ontologies and methodologies presented in the table will be discussed in subsequent sections.


The following sections provide an update of the state of development of ontology mapping in these projects. Information about each project, along with its important features, is provided and highlighted.

3.1. InfoSleuth’s reference ontology

InfoSleuth [13] can support construction of complex ontologies from smaller component ontologies so that tools tailored for one component ontology can be used in many application domains. Examples of reused ontologies include units of measure, chemistry knowledge, geographic metadata, etc. Mapping is explicitly specified among these ontologies as relationships between terms in one ontology and related terms in other ontologies.

All mappings between ontologies are made by a special class of agents known as 'resource agents'. A resource agent encapsulates a set of information about the ontology mapping rules, and presents that information to the agent-based system in terms of one or more ontologies (called reference ontologies). All mapping is encapsulated within the resource agent. All ontologies are represented in OKBC (Open Knowledge Base Connectivity) and stored in an OKBC server by a special class of agents called ontology agents, which can provide ontology specifications to users (for request formulation) and to resource agents (for mapping).

3.2. Stanford’s ontology algebra

In this application, the mapping between ontologies has been executed by ontology algebra [14], consisting of three operations, namely intersection, union and difference. The objective of ontology algebra is to provide the capability for interrogating many largely semantically disjoint knowledge resources. Here, articulations (the rules that provide links across domains) can be established to enable knowledge interoperability. Contexts, abstract mathematical entities with some properties, were defined to be the unit of encapsulation for well-structured ontologies [15], which could provide guarantees about the knowledge they export, and contain feasible inferences about them. The ontology resulting from the mappings between two source ontologies is assumed to be consistent only within its own context, known as an articulation context [16]. Similar work can be found in McCarthy's research and the CYC (Cycorp's Cyc knowledge base; www.cyc.com/) project. For instance, McCarthy [15] defined contexts as simple mathematical entities used for the situations in which particular assertions are valid. He proposed using lifting axioms to state that a proposition or assertion in the context of one knowledge base is valid in another. The CYC [8] use of microtheories bears some resemblance to this definition of context. Every microtheory within CYC is a context that makes some simplifying assumptions about the world. Microtheories in CYC are organized in an inheritance hierarchy whereby everything asserted in the super-microtheory is also true in the lower-level microtheory.

Mitra et al. [17] used ontology algebra to enable interoperation between ontologies via an articulation ontology. The input to the algebra is provided by ontology graphs. The operators in the algebra include unitary operators like filter and extract, and binary operators that include union, intersection and difference (as in normal set operators); a minimal sketch of these operators follows the list:
• The union operator generates a unified ontology graph comprising the two original ontology graphs connected by the articulation. The union presents a coherent, connected and semantically sound unified ontology.
• The intersection operator produces the articulation ontology graph, which consists of the nodes and the edges added by the articulation generator using the articulation rules between two ontologies. The intersection determines the portions of the knowledge bases that deal with similar concepts.





• The difference operator, distinguishing between two ontologies (O1 – O2), is defined as the terms and relationships of the first ontology that have not been determined to exist in the second. This operation allows a local ontology maintainer to determine the extent of one ontology that remains independent of the articulation with other domain ontologies, so that it can be independently manipulated without having to update any articulation.
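A minimal sketch of how these three operators might look if each ontology is reduced to a set of (subject, relation, object) edges and the articulation to a set of bridging edges (a deliberate simplification of the graph model in [17]; all data are invented):

```python
# Sketch: ontology-algebra operators over ontologies represented as edge sets.
O1 = {("car", "is-a", "vehicle"), ("car", "has-part", "wheel"),
      ("wheel", "is-a", "component")}
O2 = {("automobile", "is-a", "vehicle"), ("automobile", "has-part", "engine")}
articulation = {("car", "equivalent-to", "automobile")}   # from articulation rules

def union(o1, o2, art):
    """Unified ontology graph: both source graphs connected by the articulation."""
    return o1 | o2 | art

def intersection(art):
    """Articulation ontology graph: just the edges added by the articulation rules."""
    return set(art)

def difference(o1, art):
    """Edges of O1 whose terms are not touched by the articulation (O1 - O2),
    i.e. the part of O1 that can be maintained independently."""
    linked = {t for (s, _, o) in art for t in (s, o)}
    return {(s, r, o) for (s, r, o) in o1 if s not in linked and o not in linked}

print(sorted(union(O1, O2, articulation)))
print(sorted(intersection(articulation)))
print(sorted(difference(O1, articulation)))   # [('wheel', 'is-a', 'component')]
```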

The researchers also built a system known as ONION (Ontology compositION), which is an architecture based on a sound formalism to support a scalable framework for ontology integration. The special feature of this system is that it separates the logical inference engine from the representation model of the ontologies as much as possible. This allows the accommodation of different inference engines. The system contains the following components:
• a data layer, which manages the ontology representations, the articulations and the rule sets involved, and the rules required for query processing;
• a viewer (which is basically a graphical user interface);
• a query system;
• an articulation agent that is responsible for creating the articulation ontology and the semantic bridges between it and the source ontologies. (The generation of the articulation in this system is semi-automatic.)

The ontology in ONION is represented as a conceptual graph, and the ontology mapping is based on graph mapping. At the same time, domain experts can define a variety of fuzzy matches. The main innovation of ONION is that it uses articulations of ontologies to interoperate among ontologies, and it also represents ontologies graphically, which helps in separating the data layer from the inference engine.

Mitra et al. [18] developed SKAT (Semantic Knowledge Articulation Tool) to extract information from a website by supplying a template graph. It can also extract structural information from an ontology that could be used to create a new ontology. Noy and Musen [5] developed SMART, an algorithm that provides a semi-automatic approach to ontology merging and alignment. SMART assists the ontology engineer by prompting to-do lists as well as performing consistency checking.

3.3. AIFB’s formal concept analysis

The ontology learning group in AIFB (Institute of Applied Informatics and Formal Description Methods, University of Karlsruhe, Germany), through Stumme et al. [19], discussed preliminary steps towards an order-theoretic foundation for maintaining and merging ontologies and articulated some questions about how a structural approach might improve the merging process, for instance:
• Which consistency conditions should ontologies verify in order to be merged?
• Can the merging of ontologies be described as a parameterized operation on the set of ontologies?
• How can other relations beside the is-a relation be integrated?
• How can an interactive knowledge acquisition process support the construction of the aligning function?
• How can meta-knowledge about concepts and relations provided by axioms be exploited for the aligning process, and so on.

They proposed Formal Concept Analysis [20] for the merging and maintaining of ontologies, which offers a comprehensive formalization of concepts by mathematizing them as a unit of thought constituted of two parts: its extension and its intension. Formal Concept Analysis starts with a formal context defined as a triplet K := (G, M, I), where G is a set of objects, M is a set of attributes, and I is a binary relation between G and M. The interested reader can refer to Ganter and Wille [20] for a more detailed account of this technique.
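As a small illustration of this definition (a toy sketch only, not the merging procedure of [19]; the context is invented), every formal concept of K is a pair (extent, intent) whose extent is exactly the set of objects sharing all attributes of the intent, and the concepts can be enumerated by applying the two derivation operators:

```python
from itertools import combinations

# Toy formal context K = (G, M, I): objects G, attributes M, incidence relation I.
G = ["dog", "cat", "salmon"]
M = ["mammal", "pet", "aquatic"]
I = {("dog", "mammal"), ("dog", "pet"),
     ("cat", "mammal"), ("cat", "pet"),
     ("salmon", "aquatic")}

def common_attributes(objects):           # derivation operator A -> A'
    return {m for m in M if all((g, m) in I for g in objects)}

def common_objects(attributes):           # derivation operator B -> B'
    return {g for g in G if all((g, m) in I for m in attributes)}

def formal_concepts():
    """Enumerate all (extent, intent) pairs where extent'' == extent."""
    concepts = set()
    for r in range(len(G) + 1):
        for subset in combinations(G, r):
            intent = common_attributes(set(subset))
            extent = common_objects(intent)
            concepts.add((frozenset(extent), frozenset(intent)))
    return concepts

for extent, intent in sorted(formal_concepts(), key=lambda c: len(c[0])):
    print(set(extent) or "{}", "<->", set(intent) or "{}")
```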

3.4. ECAI 2000’s method

A number of automatic ontology mapping research projects were reported at the Ontology Learning Workshop of ECAI 2000 (European Conference on Artificial Intelligence). It is a well-known fact that discovering related concepts in a multi-agent system with diverse ontologies is difficult using existing knowledge representation languages and approaches, and a number of speakers reported work in this area.

Williams and Tsatsoulis [21] proposed an instance-based approach for identifying candidate relations between diverse ontologies using concept cluster integration. They discussed how their agents represent, learn, share and interpret concepts using ontologies constructed from web page bookmark hierarchies. A concept vector represents a specific web page, and the actual semantic concept is represented by a group of concept vectors judged to be similar by the user (according to the meaning of the bookmark).


The agents use supervised inductive learning to learn their individual ontologies. The output of this ontology learning is a set of semantic concept descriptions (SCD) represented as interpretation rules. They built a system for this purpose, called DOGGIE, which applies the concept cluster integration (CCI) algorithm to look for candidate relations between ontologies. The experimental results have demonstrated the feasibility of the instance-based approach for discovering candidate relations between ontologies using concept cluster integration. However, they assume that all the relations are only general is-a relations. This method could be very useful for ontology merging.

Tamma and Bench-Capon [22] presented a semi-automatic framework to deal with inheritance conflicts and inconsistencies while integrating ontologies. This framework represents ontologies in a frame-based language where the classical set of slot facets is extended to encompass other information in order to associate a degree of strength with each attribute, thus permitting one to deal with default conflicts and inconsistencies. The framework includes several steps:
• Check the class and slot names for synonyms.
• If a name mismatch is detected, the system proceeds both bottom-up and top-down, trying to relate classes.
• If an inconsistency is detected, the priority functions for the inheritance rules are computed on the grounds of both the rankings of probabilities and the degree of strength.
• Subsequently, the system provides domain experts with a list of suggestions that are evaluated according to a priority function. The final choice is always left to the domain experts, but the system provides them with a set of possible choices, including information concerning how and when the attribute changes.

Uschold [23] pointed out that a global reference ontology would be the perfect candidate for ontology mapping of local ontologies. Different user communities could view the global reference ontology from their own preferred perspectives through mapping and projecting. The basic idea is to define a set of mapping rules to form a perspective for viewing and interacting with the ontology. Different sets of mapping rules enable the ontology, or a portion of it, to be viewed from three different perspectives: viewing the global ontology using one's own local terminologies; viewing a selected portion of the ontology; and viewing at a higher level of abstraction.

3.5. ISI’s OntoMorph

The OntoMorph system of the Information Sciences Institute of the University of Southern California (ISI) aims to facilitate ontology merging and the rapid generation of knowledge base translators [24]. It combines two powerful mechanisms to describe KB transformations. The first of these mechanisms is syntactic rewriting via pattern-directed rewrite rules that allow the concise specification of sentence-level transformations based on pattern matching; the second mechanism involves semantic rewriting, which modulates syntactic rewriting via semantic models and logical inference. The integration of ontologies can be based on any mixture of syntactic and semantic criteria.

In the syntactic rewriting process, input expressions are first tokenized into lexemes and then represented as syntax trees, which are represented internally as flat sequences of tokens whose structure exists only logically. OntoMorph's pattern language and execution model are strongly influenced by Plisp [25]. The pattern language can match and de-structure arbitrarily nested syntax trees in a direct and concise fashion. Rewrite rules are applied to the execution model.
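The flavour of pattern-directed syntactic rewriting can be conveyed with a toy sketch (this is not OntoMorph's pattern language, which follows Plisp; the rule and sentence below are invented): a rule matches a nested syntax tree against a pattern containing variables and rebuilds the matched parts in a target form.

```python
# Toy sketch of pattern-directed rewriting over s-expression-like syntax trees.
# Variables in a pattern start with '?'; a rule is a (pattern, template) pair.

def match(pattern, tree, bindings):
    """Return variable bindings if the pattern matches the tree, else None."""
    if isinstance(pattern, str) and pattern.startswith("?"):
        bindings = dict(bindings)
        bindings[pattern] = tree
        return bindings
    if isinstance(pattern, list) and isinstance(tree, list) and len(pattern) == len(tree):
        for p, t in zip(pattern, tree):
            bindings = match(p, t, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if pattern == tree else None

def instantiate(template, bindings):
    """Fill a template with the bound sub-trees."""
    if isinstance(template, str):
        return bindings.get(template, template)
    return [instantiate(t, bindings) for t in template]

# Example rule: translate (defclass Name (Super)) into (define-concept Name :is-a Super).
rule = (["defclass", "?name", ["?super"]],
        ["define-concept", "?name", ":is-a", "?super"])

sentence = ["defclass", "Employee", ["Person"]]
bindings = match(rule[0], sentence, {})
print(instantiate(rule[1], bindings))   # ['define-concept', 'Employee', ':is-a', 'Person']
```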

For the semantic rewriting process, OntoMorph is built on top of the PowerLoom knowledge representation system, which is a successor to the Loom system. Using semantic import rules, the precise image of the source KB semantics can be established within PowerLoom (limited only by the expressiveness of first-order logic).

3.6. KRAFT’s ontology clustering

Visser et al. [26] proposed a set of techniques to map one-to-one ontologies in the KRAFT project (standing for Knowledge Reuse and Fusion/Transformation); a minimal sketch of such mappings follows the list:
• class mapping – maps a source ontology class name to a target ontology class name;
• attribute mapping – maps the set of values of a source ontology attribute to a set of values of a target ontology attribute, or maps a source ontology attribute name to a target ontology attribute name;
• relation mapping – maps a source ontology relation name to a target ontology relation name; and
• compound mapping – maps compound source ontology expressions to compound target ontology expressions.
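A minimal sketch, under the assumption that such one-to-one mappings can be written down as simple lookup tables (illustrative names and data only, not the KRAFT notation):

```python
# Toy sketch of KRAFT-style one-to-one mappings expressed as lookup tables.
class_map     = {"Employee": "Personnel"}                        # class mapping
attribute_map = {("Employee", "weight"): "mass",                 # attribute-name mappings
                 ("Employee", "grade"):  "band"}
value_map     = {("Employee", "grade", "A"): "1"}                # attribute-value mapping

def translate(source_class, instance):
    """Translate an instance of a source-ontology class into the target ontology."""
    target_class = class_map.get(source_class, source_class)
    translated = {}
    for attr, value in instance.items():
        target_attr  = attribute_map.get((source_class, attr), attr)
        target_value = value_map.get((source_class, attr, value), value)
        translated[target_attr] = target_value
    return target_class, translated

print(translate("Employee", {"name": "A. Smith", "weight": 72, "grade": "A"}))
# ('Personnel', {'name': 'A. Smith', 'mass': 72, 'band': '1'})
```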

Following this, Visser and Tamma [12] suggested the concept of 'ontology clustering' to integrate heterogeneous resources. Ontology clustering is based on the similarities between the concepts known to different agents.


The ontology clustering is represented in a hierarchical fashion. The ontology at the top of the hierarchy is known as the application ontology, which is used to describe the specific domain and is therefore not reusable. The application ontology contains a relevant subset of WordNet concepts, with senses selected from WordNet. A new ontology cluster is a child ontology that defines certain new concepts using the concepts already contained in its parent ontology. Concepts are described in terms of attributes and inheritance relations, and are hierarchically organized. This approach has been applied to the international domain of coffee.

3.7. Heterogeneous database integration

In the ontology community, the concept of ontology has been extended to encompass a very broad scope. Many classification systems, catalogues and indexes have also been referred to as ontologies (or, more specifically in this context, lightweight ontologies). A database scheme is another such lightweight ontology. In the database community, the problems concerning the integration of heterogeneous databases were raised long ago. Initial research in heterogeneous databases was largely directed towards the issues of resolving schema and data heterogeneity conflicts across multiple autonomous databases [27], and of developing a global schema to provide integration [28–30]. Some of this research is worth mentioning since it offers results that could indicate potential solutions for ontology mapping and merging.

Batini et al. [31] provided a comprehensive survey of different schema integration techniques. They define schema integration as the activity of integrating the schemata of existing or proposed databases into a global, unified schema. The five steps to schema integration described include pre-integration, comparison, conformation, merging and restructuring.

While structural integration has been well defined in Sheth and Larson [32] and Batini et al. [31], the treatment from a semantic perspective is not that easy. In an attempt to reconcile the semantic and schematic perspectives, Sheth and Kashyap [33] presented a semantic taxonomy to demonstrate semantic similarities between two objects and related this to a structural taxonomy. Various types of semantic relationships have been discussed in the literature. Many terms such as semantic equivalence, semantic relevance, semantic resemblance, semantic compatibility, semantic discrepancy, semantic reconciliation and semantic relativism have been defined, but there is no general agreement on these terms. There are a number of projects addressing the semantic issues of database integration, for example FEMUS (Swiss Federal Institute of Technology, ETHZ – Eidgenössische Technische Hochschule Zürich) and COIN (Context Interchange Network, MIT).

Intelligent integration has been applied to heterogeneous database integration, and two major ideas are proposed in the literature. The first is via agents [34]; typical systems are RETSINA (www-2.cs.cmu.edu/~softagents/retsina.html; an open multi-agent system (MAS)) and InfoSleuth. The second is based on the concept of mediators [35] that provide intermediary services by linking data resources and application programs; examples include TSIMMIS at Stanford University. Both information agents and mediators require domain knowledge that is modelled in some kind of common vocabulary (ontology).

Palopoli et al. [30] present two techniques to integrate and abstract database schemes. Scheme integration is intended to produce a global conceptual scheme from a set of heterogeneous input schemes. Scheme abstraction groups objects of a scheme into homogeneous clusters. Both assume the existence of a collection of inter-schema properties describing semantic relationships holding among input database scheme objects. The first technique uses inter-schema properties to produce an integrated scheme. The second takes an integrated scheme as the input and yields an output in the form of an abstracted scheme. The difficulties encountered in achieving schema integration have highlighted the importance of capturing the semantics embedded in the underlying schemata. There is a general agreement that integration can be achieved only with a good understanding of the embedded semantics of the component databases. Srinivasan et al. [36] introduced a conceptual integration approach that exploits the similarity in meta-level information on database objects to discover a set of concepts that serve as a domain abstraction and provide a conceptual layer above existing legacy systems.

3.8. Other ontology mappings

Hovy [37] described several heuristic rules to support the merging of ontologies. For instance, the NAME heuristic compares the names of two concepts, the DEFINITION heuristic uses linguistic techniques for comparing the natural language definitions of two concepts, and the TAXONOMY heuristic checks the closeness of two concepts to each other.
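The spirit of such heuristics can be sketched as follows (an illustration only; Hovy's actual heuristics are considerably more elaborate, and the taxonomies here are invented): a NAME-style score from string similarity of concept names is combined with a TAXONOMY-style score from the similarity of their parents.

```python
from difflib import SequenceMatcher

# Two tiny taxonomies: concept -> parent.
onto_a = {"automobile": "vehicle", "vehicle": "thing"}
onto_b = {"auto": "conveyance", "conveyance": "thing"}

def name_score(c1, c2):
    """NAME-style heuristic: string similarity of the concept names."""
    return SequenceMatcher(None, c1, c2).ratio()

def taxonomy_score(c1, c2):
    """TAXONOMY-style heuristic: string similarity of the parents' names."""
    return SequenceMatcher(None, onto_a.get(c1, ""), onto_b.get(c2, "")).ratio()

def combined(c1, c2, w_name=0.6, w_tax=0.4):
    return w_name * name_score(c1, c2) + w_tax * taxonomy_score(c1, c2)

# Rank all cross-ontology concept pairs by the combined heuristic score.
pairs = [(a, b) for a in onto_a for b in onto_b]
best = max(pairs, key=lambda p: combined(*p))
print(best, round(combined(*best), 2))
```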

The Chimaera system [38] can provide support for merging ontological terms from different knowledge sources, for checking the coverage and correctness of ontologies, and for maintaining ontologies.


It contains a broad collection of commands and functions to support the merging of ontologies by coalescing two semantically identical terms from different ontologies and by identifying terms that should be related by subsumption or disjointness relationships.

PROMPT [39] is an algorithm for ontology merging and alignment that is able to handle ontologies specified in an OKBC-compatible format. It starts with the identification of matching class names. Based on this initial step, an iterative approach is carried out for performing automatic updates, finding resulting conflicts, and making suggestions to remove these conflicts. PROMPT is implemented as an extension to the Protégé 2000 knowledge acquisition tool and offers a collection of operations for merging two classes and related slots.

Li [40] identifies similarities between attributes from two schemas using neural networks. Campbell and Shapiro [41] described an agent that mediates between agents that subscribe to different ontologies. Bright et al. [28] use a thesaurus and a measure of 'semantic distance' based on path distance to merge ontologies. Kashyap and Sheth [42] define the 'semantic proximity' of two concepts as a tuple encompassing contexts, value domains and mappings, and database states. The resulting analysis yields a hierarchy of types of semantic proximity, including equivalence, relationship, relevance, resemblance and incompatibility.

Lehmann and Cohn [43] require that concept definitions of the ontologies should include more specialized definitions for typical instances, and assume that the set relation between any two definitions can be identified as equivalence, containment, overlap or disjointness. OBSERVER [10] combines intensional and extensional analysis to calculate lower and upper bounds for the precision and recall of queries that are translated across ontologies on the basis of manually identified subsumption relations.

Weinstein and Birmingham [44] compared concepts in differentiated ontologies, which inherit definitional structure from concepts in shared ontologies. Shared, inherited structure provides a common ground that supports measures of 'description compatibility'. They use description compatibility to compare ontology structures represented as graphs and identify similarities for mapping between elements of the graphs. The relations they find between concepts are based on the assumption that local concepts inherit from concepts that are shared. Their system was evaluated by generating description logic ontologies in artificial worlds.

Borst and Akkermans [45] used the term 'ontology mapping' to describe the process whereby an entity of a primary ontology is further differentiated through the application of a secondary ontology. The mapping exists between the secondary and the primary ontology, and constraints on the secondary ontology restrict the application of its elements. The result of ontological mappings is a set of ontological commitments, which could be considered as a new ontology.

3.9. Ontology mapping: summary of observations

The aforementioned research on ontology mapping is summarized in Table 2 along the dimensions of mapping rules, resulting ontology, application areas, and assisting tools and systems.

From these reviews, it became evident that most of these mappings depend very much on the input of human experts. Although some tools are available to facilitate the mapping, the limited functions they provide are class or relation name checking, consistency checking, to-do list recommendations, etc. Ontology mapping is not a simple one-to-one mapping (linking the class names, relation names and attribute names from one ontology to another), but demands substantially deeper checking and verification of inheritance, consistency of the inference, etc. Furthermore, ontology mapping could be complicated by many-to-one, one-to-many or many-to-many relations, either within one domain or transcending different domains (ontology clustering).

Ontology mapping could also be viewed as the projection of general ontologies from different points of view, either according to different application domains or various tasks or applications [23]. Much remains to be done in the area of semi-automatic or automatic ontology mapping. However, when the problems facing automatic ontology generation are tackled and overcome, there will be a move towards providing a solution for ontology mapping at the same time.

4. Ontology evolving (maintenance)

The second half of this paper examines ontology evolving (the preferred term used in the ontology literature over ontology maintenance). Ontology evolving requires clarity in the structure of ontologies so as to guarantee an accurate gauge of the maintenance effort. Some research on semi-automatic or automatic ontology evolving has been carried out recently, but almost all of it is at an early stage and requires manual intervention.

Fowler et al. [13] mentioned the need to maintain different versions of the same ontology in the InfoSleuth agent architecture as it is being scaled up.


Table 2. Summary for ontology mapping

InfoSleuth
   Mapping rules: reference ontology; agent mapping (reference agent, ontology agent)
   Results for final ontology: represented in OKBC; stored in OKBC server
   Semi-automatic/automatic? Manually (or semi-automatically)
   Application area: multi-agent system
   Assisting tool (system): none

Stanford
   Mapping rules: ontology algebra; articulation; context (articulation context); lifting; microtheory; graph mapping; fuzzy mapping
   Results for final ontology: the final ontology is assumed to be consistent only within its own context; represented as a conceptual graph
   Semi-automatic/automatic? Manually (or semi-automatically)
   Application area: none
   Assisting tool (system): ONION, SKAT, SMART (consistency checking and to-do list)

AIFB
   Mapping rules: order-theoretic foundation; formal concept analysis; theoretical discussion (no specific methods)
   Results for final ontology: none
   Semi-automatic/automatic? None
   Application area: none
   Assisting tool (system): none

ECAI 2000
   Mapping rules: concept cluster integration; supervised inductive learning; default conflict and inconsistency checking based on the features provided by a frame-based language; ontology projection
   Results for final ontology: semantic concept descriptions; represented in a frame-based language
   Semi-automatic/automatic? Manually (or semi-automatically)
   Application area: multi-agent system
   Assisting tool (system): DOGGIE
   Others: assumes all relations are general is-a relations

ISI
   Mapping rules: syntactic rewriting; semantic rewriting (based on semantic models or logical inference)
   Results for final ontology: PowerLoom (first-order logic)
   Semi-automatic/automatic? Manually (or semi-automatically)
   Application area: none
   Assisting tool (system): OntoMorph

KRAFT
   Mapping rules: one-to-one ontology mapping; ontology clustering
   Results for final ontology: hierarchical fashion
   Semi-automatic/automatic? Manually (or semi-automatically)
   Application area: international coffee preparation
   Assisting tool (system): none
   Others: WordNet

Others
   Mapping rules: heuristic rules; logical subsumption or disjointness; PROMPT; neural networks for identifying similarity between attributes; path distance (semantic distance); semantic proximity; description compatibility
   Results for final ontology: OKBC; conceptual graph; description logic; a set of ontological commitments
   Semi-automatic/automatic? Manually (or semi-automatically)
   Application area: database
   Assisting tool (system): Chimaera, OBSERVER


Both TOVE [46] and METHODOLOGY [47] focus on ontology maintenance, but manually. The main difference is that METHODOLOGY focuses on comprehensively addressing the maintenance stage of the ontology life cycle, whereas TOVE utilizes more formal techniques to address a more limited number of maintenance issues. Uschold [23] also pointed out the importance of ontology maintenance between local and global ontologies, especially the importance of standardizing concepts and relations using unique identifier codes and names.

4.1. Stanford’s algebraic methodology

Jannink and Wiederhold [55] maintained the ontology using the same methodology as that employed to generate their ontologies – ontology algebra. When the knowledge base changes, they use the S operator, which provides a simple method for assessing the contents of a context to reveal terms with missing end tags in the data. For instance, S_{len(hw) div 20}(dictionary) returns the entries of the dictionary, grouped by the length of the headwords. Once the errors were identified, rules to convert terms with missing end tags were added. By maintaining statistics with the S operator on the process of extracting the relevant parts of the dictionary, rules can be updated straightforwardly. The congruity measure within the algebraic framework significantly simplified the process of identifying and handling the changes among the ontologies. Wiederhold [35] stressed that 'it is important that new systems for our networks are designed with maintenance in mind'. Keeping ontologies small enables semantic agreement among the people using them with little lag time.

4.2. AIFB’s method

Stumme et al. [19] also used the same method from their ontology mapping work for ontology maintenance. They pointed out that current maintenance of ontologies is tedious and time-consuming. Domain experts can easily lose orientation in large ontologies. Tool support is needed to make reasonable suggestions to the expert or to automate certain tasks based on some pre-defined principles. They considered that future research in this direction should include the integration of approaches for generating ontologies from text with linguistic methods, and for evaluating them with data mining techniques. They proposed Formal Concept Analysis for ontology maintenance. Formal Concept Analysis [20] offers a formalization by mathematizing the concept as a unit of thought constituted of two parts: its extension and its intension.

4.3. ECAI 2000

Faatz et al. [48] presented an algorithm to determine similarities between text documents and used a strict supervised learning algorithm to improve an already existing ontology based on the similarities or clusters of the relevant knowledge. First, they chose plain-text documents from online newswires or other web documents, then indexed them according to the controlled vocabularies and the already-built ontology (manually). They used the vector space model to normalize each document, adding some background knowledge from the manually built ontology to enrich the whole document. After that, they used multi-linear regression to cluster or match the similarity among the documents and let the domain experts or the ontology engineers decide whether knowledge learned from the similarity matching or clustering could be used to maintain the current ontology. They concluded the paper with several areas for future research: (1) confirmation of the proposed idea by experimenting with a certain amount of new vocabulary; (2) improving the results by introducing an additional qualitative tagging of keywords in the vector representation; and (3) attempting to find new ways to automatically detect ontological relations.
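The central step of such similarity-based enrichment, comparing candidate documents in a vector space built over a controlled vocabulary, can be sketched as follows (a generic cosine-similarity illustration, not the authors' exact procedure; the vocabulary and documents are invented):

```python
import math
from collections import Counter

# Controlled vocabulary assumed to come from the existing, manually built ontology.
vocabulary = ["ontology", "mapping", "agent", "coffee", "semantics"]

def to_vector(text):
    """Index a document against the controlled vocabulary (raw term counts)."""
    counts = Counter(text.lower().split())
    return [counts[term] for term in vocabulary]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

doc1 = "ontology mapping links agent ontologies via shared semantics"
doc2 = "an agent uses ontology mapping and semantics for communication"
doc3 = "coffee preparation and brewing techniques"

print(round(cosine(to_vector(doc1), to_vector(doc2)), 2))   # high similarity
print(round(cosine(to_vector(doc1), to_vector(doc3)), 2))   # low similarity
```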

Roux et al. [49] present a method to insert new concepts into an existing information extraction system based on a conceptual graph architecture. In this system, an ontology is a two-fold set that contains passive information, the lattice, combined with active information, the graph patterns. The system has adopted a two-level architecture:
• A linguistic analysis component that is mainly based on Finite State Transducer technology. It consists of a Part-of-Speech tagger and a Robust Parser. Text is processed in several steps – tokenization, morphological analysis, disambiguation using a Hidden Markov Model technique, error corrections, and finally a contextual lookup to identify names. Syntactic dependencies are then extracted from that output.
• A knowledge-based processing component based on conceptual graphs. The syntactic dependencies extracted at the linguistic level are used to build the semantic representation.

When a new word appears, pattern matching and syntactic dependencies are used to detect this new word in the web document and insert it into the lattice.


As the lattice comprises concepts that are connected to each other along semantic paths, the new concept must be categorized in order to find its correct slot in the lattice. This is based on the projection of customized conceptual sub-graphs with typed concepts on specific nodes to detect certain configurations of verbs that will assign their position in the lattice. However, this algorithm is currently under development and future tests are needed to validate its usefulness and applicability.

Todirascu et al. [50] proposed a system to acquire new concepts and to place them into existing domain hierarchies (the domain ontology). When a new document is added to the index base, it is processed by a Part-of-Speech tagger. From here, the most frequent content words (nouns, adjectives, etc.) and their contexts are extracted. For each context, a concept description is built and classified in the existing hierarchy. Sense tagging assigns words and syntagms their Description Logic descriptions. Partial semantic descriptions are combined by heuristic rules to encode syntactic knowledge. In limiting the size of the domain ontology, they encountered some major problems: the selection of high-frequency words always ends up with very general concepts, and inconsistent conceptual descriptions of concepts are very common.

Agirre et al. [51] used topic signatures and hierarchical clusters to tag a given occurrence of a word, with the intention of enriching very large ontologies from the Web. In this work, the main obstacle to getting clean signatures comes from the method used to link concepts and relevant documents from the Web. Some filtering techniques have to be applied in order to get documents with less bias and more content. Cleaner topic signatures open the avenue for interesting ontology enhancement, as they provide concepts with rich topical information. For instance, similarity between topic signatures could be used to find topically related concepts, so that the clustering strategy could be extended to all other concepts.
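A rough sketch of the underlying idea (illustrative only; the documents, concepts and weighting are invented): a topic signature is a weighted list of content words drawn from documents retrieved for a concept, and signature overlap suggests topically related concepts.

```python
from collections import Counter

# Hypothetical documents retrieved from the Web for each concept (after filtering).
docs = {
    "jaguar (animal)": ["the jaguar is a big cat that hunts prey in the rainforest",
                        "cat species prey habitat rainforest conservation"],
    "lion":            ["the lion is a big cat that hunts prey in the savanna",
                        "cat pride prey habitat savanna conservation"],
    "jaguar (car)":    ["the jaguar is a luxury car brand with powerful engines",
                        "car engine brand luxury sedan"],
}

STOPWORDS = {"the", "is", "a", "that", "in", "with"}

def topic_signature(concept, top_n=8):
    """Most frequent content words in the documents retrieved for a concept."""
    words = Counter(w for doc in docs[concept] for w in doc.split() if w not in STOPWORDS)
    return dict(words.most_common(top_n))

def overlap(sig1, sig2):
    """Simple signature similarity: summed weight of the shared words."""
    return sum(min(sig1[w], sig2[w]) for w in sig1.keys() & sig2.keys())

animal, lion, car = (topic_signature(c) for c in docs)
print(overlap(animal, lion), overlap(animal, car))   # the two animal senses overlap more
```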

4.4. Ontology server

The core architecture of OntoSeek is an ontology server [52]. The server provides an interface for applications willing to access or manipulate an ontology data model (a generic graph data structure), and facilities for maintaining a persistent LCG (Lexical Conceptual Graph) database. End users and resource encoders can access the server to update the LCG database, encoded in a markup language such as HTML or XML.

Uschold and Gruninger [53] extensively discussed the ontology server provided by the Knowledge Systems Laboratory at Stanford University. To some degree, they had already identified the importance of the ontology server for ontology maintenance research in the foreseeable future. The main function of the ontology server is to create, edit, evaluate, publish, maintain and reuse ontologies. Of particular significance is its ability to support collaborative work through the Web. Duineveld et al. [54] also conducted a complicated comparison of the currently available ontology editing tools from different points of view, among which are the client/server support of tools to facilitate collaborative ontology generation and the ontology maintenance issue. Uschold and Gruninger [53] list the improvements of the ontology server over the original Ontolingua system as follows:
• it is a remote computer server available on the Web;
• it provides an extensible library of shareable and reusable ontologies, with suitable protections for proprietary, group and private work;
• there is an extensive browsing capability, which allows convenient viewing of ontologies;
• it has extended the original representation language to support decomposition of ontologies into modules and assembling new ontologies from existing modules in the library;
• it provides explicit support for collaborative work, which includes the concept of a session to which multiple parties can be attached; parties are automatically informed of each other's activities;
• it has an application programmer's interface (API) which allows remote applications to query and modify ontologies stored on the server over the Internet;
• as well as translating into multiple output languages, it also allows multiple input languages.

4.5. Ontology evolving: summary of observations

A summary of the aforementioned research in the area of ontology evolving, along the dimensions of evolving or maintenance rules and future research requirements, is shown in Table 3.

5. Discussion: ontology and its role in bringing the Semantic Web to its full potential

The World Wide Web, with its continuous explosive growth of information, has caused a severe information overload problem.


It is increasingly difficult to find, access, present and maintain the information required by a wide variety of users. This is because the information content is presented primarily in natural language. A wide gap has emerged between the information available and the tools that are in place to support information seeking and use.

Many new research initiatives and commercial enterprises have been set up to enrich available information with machine-processable semantics. Such support is essential for 'bringing the web to its full potential'. Tim Berners-Lee, Director of the World Wide Web Consortium, referred to the future of the current WWW as the 'Semantic Web' – an extended web of machine-readable information and automated services that extend far beyond current capabilities. The explicit representation of the semantics underlying data, programs, pages and other web resources will enable a knowledge-based web that provides a qualitatively new level of service.

A key enabler for the Semantic Web is online ontological support for data, information and knowledge exchange. Given the exponential growth of the information available online, automatic processing is vital to the task of managing and maintaining access to that information. Used to describe the structure and semantics of information exchange, ontologies are seen to play a key role in areas such as knowledge management, B2B e-commerce and other such burgeoning electronic initiatives.

Currently the Semantic Web has attracted much interest from various communities. Much interesting and important research related to the Semantic Web has been addressed through ontology development and research.


Table 3. Summary of ontology evolving or maintenance

Stanford
   Evolving or maintenance rules: ontology algebra; congruity measure; keeping ontologies small; bearing maintenance in mind when designing or generating the ontology
   Semi-automatic/automatic? Manually (semi-automatic)
   Future research: –
   Assisting tool (system): –

AIFB
   Evolving or maintenance rules: order-theoretical foundation; formal concept analysis; theoretical discussion (no specific methods)
   Semi-automatic/automatic? –
   Future research: generate ontologies automatically from text based on linguistic methods; evaluate them with data mining techniques
   Assisting tool (system): –

ECAI 2000
   Evolving or maintenance rules: similarity measures based on strict supervised learning; concept clustering; lattice and semantic path identification; partial semantic description (description logic); topic signature and hierarchical cluster
   Semi-automatic/automatic? Manually (semi-automatic)
   Future research: try additional qualitative tagging in the vector representation; automatic detection of ontological relations
   Assisting tool (system): –

Ontology server
   Evolving or maintenance rules: facilitate browsing and editing; client/server accessing; online collaboration
   Semi-automatic/automatic? Manually (semi-automatic)
   Future research: –
   Assisting tool (system): OntoSeek

Others
   Evolving or maintenance rules: TOVE (formal techniques); METHODOLOGY (maintenance stage of the ontology life cycle); standardizing concepts and relations (unique identifier codes and names)
   Semi-automatic/automatic? Manually
   Future research: –
   Assisting tool (system): –


A number of promising advances have been made:
• Ontology language – W3C (www.w3c.org) and EU IST projects (OntoKnowledge (www.ontoknowledge.org), OntoWeb (www.ontoweb.org) and Wonderweb (wonderweb.semanticweb.org)) are working together with the objective of deriving efficient ontology languages. Current candidate languages include RDF (www.w3.org/RDF), DAML+OIL (www.w3c.org/2001/sw) and Topic Maps (www.topicmaps.org). More information on these languages can be found in the Semantic Web portal at www.semanticweb.org.
• Ontology tools – many tools, from both research and commercial entities, are available to support ontology editing, ontology library management, ontology mapping and aligning, and ontology inferencing. More information and a list of such tools can be found at www.ontoweb.org/sig.htm.
• Digital libraries – XML or its variants are already widely adopted to tag the metadata of e-journals or museum collections in many digital library projects. Documents or information are represented as much as possible in a machine-understandable format. The EU IST project HeritageMap (http://sca.lib.liv.ac.uk/collections/HeritageMap/HeritageMapB.html) is a good example of such a development.
• Pioneering business applications – although the Semantic Web from the business-application point of view is far from being a reality, ontologies have been deployed for many applications such as corporate intranets and knowledge management (for example: www.ontoknowledge.org), e-commerce (for example: mkbeem.elibel.tm.fr/home.html), and web portals and communities (for example: cweb.inria.fr). A limited number of successful scenarios for ontology-based applications can be found in www.ontoweb.org/download/deliverables/D21_Final-final.pdf.
• Web services – a development that deals with the limitation of the current web by aiming to transform the web from a collection of information into a distributed device of computation. In order to do so, appropriate means for the description of web services need to be developed. These are based on semi-formal natural language descriptions, so that there is a need for human programmers to provide the links between services. A related development is Semantic Web-enabled Web Services (www.cs.vu.nl/~dieter/wsmf), which is aimed at providing mechanization in service identification, configuration, comparison and combination. Ontologies are key enablers that provide the terminology used by such services.

Although such developments are promising, fundamental challenges remain at the root of ontology research and development.

6. Conclusion

Ontology research and development has gained substantial interest recently as researchers grapple with the information overload problem and the need to better organize and use information in order to support information retrieval and extraction to deliver the right information, in the right amount and at the right time. Researchers are diverse and come from the fields of library and information science, computer science, artificial intelligence, e-commerce and knowledge management. Ontologies have a higher level of applicability over the traditional computing and information science techniques in their ability to define relationships, deeper semantics and enhanced expressiveness, all of which collectively serve as the enabler for the next generation of the Semantic Web to become a reality.

The need for manual intervention in all three aspects of ontology generation, mapping and evolving attests to the complex nature of ontology research and its associated unresolved problems. Many useful tools and techniques have surfaced, and these serve as useful aids to human experts to create, use and maintain existing ontologies. Nonetheless, many remaining research challenges need to be resolved in order for ontologies to be applied on the scale envisaged in the future. A focus on tackling ontology generation is the first necessary step to be taken, since this has direct implications and applications in the area of ontology mapping and evolving. So far, no significant breakthrough in semi-automatic and automatic ontology generation has been reported, but the interest shown, and work done, by many well-known research institutions and agencies augurs well for this area of research in the future.

References

[1] T.R. Gruber, A translation approach to portable ontology specifications, Knowledge Acquisition 5 (1993) 199–220.

[2] D. Fensel, Understanding is based on consensus. In: Semantic Web Panel, WWW10, Hong Kong (May 2001).

[3] Y. Ding and S. Foo, Ontology research and development: Part 1 – a review of ontology generation, Journal of Information Science 28(2) (2002) 123–136.


[4] H. Sofia Pinto, A. Gomez-Perez and J.P. Martins, Someissues on ontology integration. In: Proceedings of IJCAI-99 Workshop on Ontologies and Problem-SolvingMethods: Lessons Learned and Future Trends, inconjunction with the Sixteenth International Joint Con-ference on Artificial Intelligence, August, Stockholm,Sweden (1999).

[5] N.F. Noy and M.A. Musen, SMART: Automated supportfor ontology merging and alignment. SMI ReportNumber: SMI-1999–0813 (1999).

[6] C.C.K. Chang and H. Garcia-Molina, Conjunctiveconstraint mapping for data translation. In: Third ACMConference on Digital Libraries, Pittsburgh, USA (1998).

[7] S. Chawathe, H. Garcia-Molina, J. Hammer, K. Ireland,Y. Papakonstantinou, J. Ullman and J. Widom, TheTSIMMIS project: Integration of heterogeneous informa-tion sources. In: IPSJ Conference, Tokyo, Japan (1994).

[8] R.V. Guha, Contexts: a Formalization and Some Appli-cations. Ph.D. Thesis, Stanford University (1991).

[9] P.C. Weinstein and P. Birmingham, Creating ontologicalmetadata for digital library content and services, Inter-national Journal on Digital Libraries 2(1) (1998) 19–36.

[10] E. Mena, A. Illarramendi, V. Kashyap and A.P. Sheth,OBSERVER: an approach for query processing in globalinformation systems based on interoperation across pre-existing ontologies. In: Proceedings of the First IFCISInternational Conference on Cooperative InformationSystems (CoopIS’96), Brussels, Belgium (19–21 June1996).

[11] P.R.S. Visser and Z. Cui, On accepting heterogeneousontologies in distributed architectures. In: Proceedingsof the ECAI98 Workshop on Applications of Ontologiesand Problem-Solving Methods. Brighton, UK (1998).

[12] P.R.S. Visser and V.A.M. Tamma, An experience withontology clustering for information integration. In: Pro-ceedings of the IJCAI-99 Workshop on Intelligent Inform-ation Integration in conjunction with the SixteenthInternational Joint Conference on Artificial Intelligence,Stockholm, Sweden (31 July 1999).

[13] J. Fowler, M. Nodine, B. Perry and B. Bargmeyer, Agent-based semantic interoperability in Infosleuth, SigmodRecord 28 (1999).

[14] G. Wiederhold, An algebra for ontology composition. In: Proceedings of the 1994 Monterey Workshop on Formal Methods, US Naval Postgraduate School, Monterey, CA, USA (1994), pp. 56–61.

[15] J. McCarthy, Notes on formalizing context. In: Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence, AAAI (1993).

[16] J. Jannink, P. Srinivasan, D. Verheijen and G. Wiederhold, Encapsulation and composition of ontologies. In: Proceedings of the AAAI Workshop on Information Integration, AAAI Summer Conference, Madison, WI, USA (July 1998).

[17] P. Mitra, G. Wiederhold and M. Kersten, A graph-oriented model for articulation of ontology interdependencies. In: Proceedings of the Conference on Extending Database Technology (EDBT 2000), Konstanz, Germany (March 2000).

[18] P. Mitra, G. Wiederhold and J. Jannink, Semi-automatic integration of knowledge sources. In: Proceedings of Fusion 99, Sunnyvale, CA, USA (July 1999).

[19] G. Stumme, R. Studer and Y. Sure, Towards an order-theoretical foundation for maintaining and merging ontologies. In: Proceedings of Referenzmodellierung 2000, Siegen, Germany (12–13 October 2000).

[20] B. Ganter and R. Wille, Formal Concept Analysis: Mathematical Foundations (Springer, Berlin, 1999).

[21] A.B. Williams and C. Tsatsoulis, An instance-based approach for identifying candidate ontology relations within a multi-agent system. In: Proceedings of the First Workshop on Ontology Learning (OL-2000) in conjunction with the 14th European Conference on Artificial Intelligence (ECAI 2000), Berlin, Germany (August 2000).

[22] V.A.M. Tamma and T.J.M. Bench-Capon, Supporting different inheritance mechanisms in ontology representations. In: Proceedings of the First Workshop on Ontology Learning (OL-2000) in conjunction with the 14th European Conference on Artificial Intelligence (ECAI 2000), Berlin, Germany (August 2000).

[23] M. Uschold, Creating, integrating and maintaining local and global ontologies. In: Proceedings of the First Workshop on Ontology Learning (OL-2000) in conjunction with the 14th European Conference on Artificial Intelligence (ECAI 2000), Berlin, Germany (August 2000).

[24] H. Chalupsky, OntoMorph: a translation system for symbolic knowledge. In: Proceedings of the 7th International Conference on Principles of Knowledge Representation and Reasoning (KR2000), Breckenridge, Colorado, USA (April 2000).

[25] D.C. Smith, Plisp Users Manual (Apple Computer, 1990).

[26] P.R.S. Visser, D.M. Jones, M. Beer, T.J.M. Bench-Capon, B. Diaz and M.J.R. Shave, Resolving ontological heterogeneity in the KRAFT Project. In: DEXA-99, 10th International Conference and Workshop on Database and Expert Systems Applications, Florence, Italy (August 1999).

[27] W. Kim and J. Seo, Classifying schematic and data heterogeneity in multidatabase systems, IEEE Computer 24(2) (1991) 12–18.

[28] M.W. Bright, A.R. Hurson and S. Pakzad, Automated resolution of semantic heterogeneity in multidatabases, ACM Transactions on Database Systems 19(2) (1994) 212–253.

[29] S. Castano and V. De Antonellis, Semantic dictionary design for database interoperability. In: Proceedings of the International Conference on Data Engineering (ICDE97) (IEEE Computer Society, Birmingham, 1997).

[30] L. Palopoli, L. Pontieri, G. Terracina and D. Ursino, Intensional and extensional integration and abstraction of heterogeneous databases, Data and Knowledge Engineering 35 (2000) 201–237.

[31] C. Batini, M. Lenzerini and S.B. Navathe, A comparative analysis of methodologies for schema integration, Computing Surveys 18(4) (1986) 323–364.

[32] A. Sheth and J. Larson, Federated database systems for managing distributed, heterogeneous and autonomous databases, ACM Computing Surveys 22(3) (1990).

[33] A. Sheth and V. Kashyap, So far (schematically), yet so near (semantically). In: Proceedings of IFIP DS-5 Semantics of Interoperable Database Systems (Chapman and Hall, Lorne, Australia, 1992).

[34] M. Brodie, The promise of distributed computing and the challenges of legacy information systems. In: Proceedings of IFIP DS-5 Semantics of Interoperable Database Systems (Chapman and Hall, Lorne, Australia, 1992).

[35] G. Wiederhold, Interoperation, mediation, and ontologies. In: Proceedings of the International Symposium on Fifth Generation Computer Systems (FGCS), Workshop on Heterogeneous Cooperative Knowledge-Bases, Vol. W3 (ICOT, Tokyo, Japan, December 1994), pp. 33–48.

[36] U. Srinivasan, A.H.H. Ngu and T. Gedeon, Managing heterogeneous information systems through discovery and retrieval of generic concepts, Journal of the American Society for Information Science 51(8) (2000) 707–723.

[37] E. Hovy, Combining and standardizing large-scale, practical ontologies for machine learning and other uses. In: Proceedings of the 1st International Conference on Language Resources and Evaluation, Granada, Spain (May 1998).

[38] D.L. McGuinness, R. Fikes, J. Rice and S. Wilder, An environment for merging and testing large ontologies. In: Proceedings of the 7th International Conference on Principles of Knowledge Representation and Reasoning (KR2000), Colorado, USA (April 2000).

[39] N.F. Noy and M.A. Musen, PROMPT: algorithm and tool for automated ontology merging and alignment. In: Proceedings of the 17th National Conference on Artificial Intelligence (AAAI2000), Austin, TX (July/August 2000).

[40] W.S. Li, Knowledge gathering and matching in heterogeneous databases. In: AAAI Spring Symposium on Information Gathering from Distributed, Heterogeneous Environment (Palo Alto, CA, 2000).

[41] A.E. Campbell and S.C. Shapiro, Ontologic mediation: an overview. In: IJCAI95 Workshop on Basic Ontological Issues in Knowledge Sharing, Montreal, Canada (1995).

[42] V. Kashyap and A. Sheth, Semantic and schematic similarities between database objects: a context-based approach, International Journal on Very Large Databases 5(4) (1996) 276–304.

[43] F. Lehmann and A.G. Cohn, The EGG/YOLK reliability hierarchy: semantic data integration using sorts with prototypes. In: Third International ACM Conference on Information and Knowledge Management (CIKM-94) (ACM Press, New York, 1994).

[44] P.C. Weinstein and P. Birmingham, Comparing concepts in differentiated ontologies. In: Proceedings of the Twelfth Workshop on Knowledge Acquisition, Modeling and Management (KAW99), Banff, Alberta, Canada (October 1999).

[45] P. Borst and H. Akkermans, An ontology approach to product disassembly. In: EKAW97, Sant Feliu de Guixols, Spain (October 1997).

[46] M. Gruninger and M.S. Fox, Methodology for the design and evaluation of ontologies. In: IJCAI95 Workshop on Basic Ontological Issues in Knowledge Sharing, Montreal, Canada (19–20 August 1995).

[47] M. Fernandez-Lopez, A. Gomez-Perez and N. Juristo, METHONTOLOGY: from ontological art towards ontological engineering. In: AAAI-97 Spring Symposium on Ontological Engineering, Stanford University, CA, USA (24–26 March 1997).

[48] A. Faatz, T. Kamps and R. Steinmetz, Background knowledge, indexing and matching interdependencies of document management and ontology maintenance. In: Proceedings of the First Workshop on Ontology Learning (OL-2000) in conjunction with the 14th European Conference on Artificial Intelligence (ECAI 2000), Berlin, Germany (August 2000).

[49] C. Roux, D. Proux, F. Rechenmann and L. Julliard, An ontology enrichment method for a pragmatic information extraction system gathering data on genetic interactions. In: Proceedings of the First Workshop on Ontology Learning (OL-2000) in conjunction with the 14th European Conference on Artificial Intelligence (ECAI 2000), Berlin, Germany (August 2000).

[50] A. Todirascu, F. Beuvron, D. Galea and F. Rousselot, Using description logics for ontology extraction. In: Proceedings of the First Workshop on Ontology Learning (OL-2000) in conjunction with the 14th European Conference on Artificial Intelligence (ECAI 2000), Berlin, Germany (August 2000).

[51] E. Agirre, O. Ansa, E. Hovy and D. Martinez, Enriching very large ontologies using the WWW. In: Proceedings of the First Workshop on Ontology Learning (OL-2000) in conjunction with the 14th European Conference on Artificial Intelligence (ECAI 2000), Berlin, Germany (August 2000).

[52] N. Guarino, C. Masolo and G. Vetere, OntoSeek: content-based access to the Web, IEEE Intelligent Systems (May/June 1999) 70–80.

[53] M. Uschold and M. Gruninger, Ontologies: principles, methods, and applications, Knowledge Engineering Review 11(2) (1996) 93–155.

[54] A.J. Duineveld, R. Stoter, M.R. Weiden, B. Kenepa and V.R. Benjamins, WonderTools? A comparative study of ontological engineering tools. In: KAW99: Twelfth Workshop on Knowledge Acquisition, Modeling and Management, Alberta, Canada (October 1999).

[55] J. Jannink and G. Wiederhold, Ontology maintenance with an algebraic methodology: a case study. In: Proceedings of the AAAI Workshop on Ontology Management (July 1999).
