The standard view of the semantic web assumes that we will not have a single consensus ontology for a given domain, but many, each with its own base of users and applications. Thus it’s essential that we have good techniques and tools to translate information expressed in one collection of ontologies into another. One of the issues that we have not yet faced head on is that most of these mappings will probably be approximations. Here’s a good overview of the Bayesian approach to OWL ontology mapping being developed by Yun Peng and his students.
A Bayesian Methodology towards Automatic Ontology Mapping, Zhongli Ding, Yun Peng, Rong Pan, and Yang Yu, AAAI Workshop on Contexts and Ontologies, July 9, 2005.
This paper presents our ongoing effort to develop a principled methodology for automatic ontology mapping based on BayesOWL, a probabilistic framework we developed for modeling uncertainty on the semantic web. The proposed method includes four components:

1) learning probabilities (priors about concepts, conditionals between subconcepts and superconcepts, and raw semantic similarities between concepts in two different ontologies) using a Naive Bayes text classification technique, by explicitly associating each concept with a group of sample documents retrieved and selected automatically from the World Wide Web (WWW);

2) representing in OWL the learned probability information concerning the entities and relations in the given ontologies;

3) using the BayesOWL framework to automatically translate the given ontologies into Bayesian network (BN) structures and to construct the conditional probability tables (CPTs) of a BN from the learned priors and conditionals, with reasoning services within a single ontology supported by Bayesian inference; and

4) taking a set of learned initial raw similarities as input and finding new mappings between concepts from two different ontologies, as an application of our formalized BN mapping theory based on evidential reasoning across two BNs.
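To make the first component concrete, here is a minimal sketch of how raw cross-ontology similarities might be learned with a word-count Naive Bayes model: each concept is modeled by the words in its sample documents, and a concept in one ontology scores concepts in the other by how well their models explain its documents. The concept names, the tiny "retrieved" documents, and the scoring scheme are all illustrative assumptions, not the authors' actual data or code.

```python
# Hedged sketch of component 1: Naive Bayes similarity between concepts,
# each represented by a bag of sample documents. All data is made up.
from collections import Counter
import math

def word_counts(docs):
    """Aggregate word counts over a concept's sample documents."""
    c = Counter()
    for d in docs:
        c.update(d.lower().split())
    return c

def log_likelihood(doc, counts, vocab):
    """Naive Bayes log P(doc | concept) with add-one smoothing."""
    total = sum(counts.values())
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in doc.lower().split())

# Hypothetical documents "retrieved from the Web" for ontology A's concepts...
onto_a = {"Vehicle": ["car truck engine road", "bus wheel engine"],
          "Animal":  ["dog cat fur paw", "horse fur tail"]}
# ...and for one concept in ontology B whose best A-side match we want.
onto_b_docs = ["car engine wheel road"]

models = {c: word_counts(d) for c, d in onto_a.items()}
vocab = set()
for m in models.values():
    vocab |= set(m)

# Raw similarity: how well each A-concept's model "classifies" B's documents.
scores = {c: sum(log_likelihood(d, m, vocab) for d in onto_b_docs)
          for c, m in models.items()}
best = max(scores, key=scores.get)
print(best)  # the B concept's documents look most like "Vehicle"
```

In the paper these scores are only the initial raw similarities; the BN mapping machinery then refines them.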
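Component 3 can be illustrated with a toy translation of a subclass taxonomy into a Bayesian network, where each concept becomes a Boolean node, edges run from superconcept to subconcept, and the CPTs encode that a subconcept entails its superconcept. The structure and the CPT numbers below are illustrative assumptions; BayesOWL's actual construction also incorporates learned priors and handles other logical relations, which this sketch omits.

```python
# Hedged sketch of component 3: a toy taxonomy as a BN, queried by
# brute-force enumeration. CPT values are illustrative, not learned.
from itertools import product

# Taxonomy: Animal is the root; Dog and Cat are subconcepts of Animal.
parents = {"Animal": None, "Dog": "Animal", "Cat": "Animal"}

def prob(node, value, assignment):
    """P(node = value | its parent's value) from the toy CPTs."""
    p = parents[node]
    if p is None:
        pt = 0.5        # assumed prior P(Animal = True)
    elif assignment[p]:
        pt = 0.4        # assumed P(subconcept = True | superconcept = True)
    else:
        pt = 0.0        # a subconcept instance must also be a superconcept instance
    return pt if value else 1.0 - pt

def joint(assignment):
    """Joint probability of a full truth assignment to all concept nodes."""
    j = 1.0
    for n in parents:
        j *= prob(n, assignment[n], assignment)
    return j

def query(target, evidence):
    """P(target = True | evidence) by enumerating all assignments."""
    num = den = 0.0
    for vals in product([True, False], repeat=len(parents)):
        a = dict(zip(parents, vals))
        if any(a[k] != v for k, v in evidence.items()):
            continue
        den += joint(a)
        if a[target]:
            num += joint(a)
    return num / den

print(query("Animal", {"Dog": True}))  # → 1.0: Dog entails Animal
```

The zero in the CPT is what makes subsumption reasoning fall out of ordinary Bayesian inference: conditioning on Dog being true forces Animal to be true.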
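Finally, the flavor of component 4's evidential reasoning across two networks can be sketched with Jeffrey's rule: an uncertain finding about a concept in ontology A is propagated into a belief about a concept in ontology B through a conditional linking the two. The specific concepts and all of the numbers here are assumptions for illustration; the paper's formalized mapping theory is considerably more involved.

```python
# Hedged sketch of component 4's core idea: soft evidence about an
# A-side concept updates belief in a B-side concept via Jeffrey's rule.
# All probabilities below are made-up illustrative values.

# Q(a): revised belief that an instance falls under A's "Vehicle" concept.
q_vehicle = 0.9

# Assumed conditionals P(B's "Car" | A's "Vehicle" = a), standing in for
# what would be derived from the learned raw similarities.
p_car_given = {True: 0.7, False: 0.05}

# Jeffrey's rule: P'(Car) = sum over a of P(Car | Vehicle = a) * Q(Vehicle = a)
p_car = (p_car_given[True] * q_vehicle
         + p_car_given[False] * (1 - q_vehicle))
print(round(p_car, 3))  # → 0.635
```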