Ontology Alignment Evaluation Initiative - OAEI-2011 Campaign

Evaluation related to Conference track

This year we plan to evaluate the results of participants with the following evaluation methods:

(1) Evaluation based on reference alignments

A subset of all alignments will be evaluated against the reference alignment (which will likely be extended during summer 2011). This will allow us to provide participants with traditional evaluation measures such as precision and recall.
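The precision/recall computation described above can be sketched as follows. This is a minimal illustration, assuming each alignment is represented as a set of (entity1, entity2, relation) triples; the data and names are invented for the example and are not the OAEI tooling or the Conference track ontologies' actual correspondences.

```python
# Sketch: precision and recall of a submitted alignment against a reference
# alignment, both modelled as sets of (entity1, entity2, relation) triples.
# Example data is illustrative only.

def precision_recall(found, reference):
    """Return (precision, recall) of a found alignment vs. a reference."""
    correct = found & reference  # correspondences present in both sets
    precision = len(correct) / len(found) if found else 0.0
    recall = len(correct) / len(reference) if reference else 0.0
    return precision, recall

reference = {("cmt#Paper", "ekaw#Paper", "="),
             ("cmt#Author", "ekaw#Paper_Author", "="),
             ("cmt#Review", "ekaw#Review", "=")}
found = {("cmt#Paper", "ekaw#Paper", "="),
         ("cmt#Review", "ekaw#Review", "="),
         ("cmt#Person", "ekaw#Student", "=")}

p, r = precision_recall(found, reference)
print(p, r)  # 2 of 3 found are correct, 2 of 3 reference are found: both 2/3
```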

This evaluation will be ready at the time of the Ontology Matching workshop 2011.

(2) Evaluation based on manual labelling

The number of all distinct correspondences is always quite high, therefore we will take advantage of sampling. This year we will take only the most probably correct correspondences as the population for each matcher: we will evaluate 150 correspondences per matcher, randomly chosen from all correspondences with confidence value 1.0. This approach is inspired by [4].
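The sampling step above can be sketched as below. This is only an illustration of the described procedure, assuming correspondences are (entity1, entity2, confidence) triples; the function name and data layout are hypothetical.

```python
# Sketch: from all correspondences a matcher returned, keep those with
# confidence 1.0 and draw a random sample of up to 150 for manual labelling.
import random

def sample_for_labelling(correspondences, sample_size=150, seed=None):
    """correspondences: list of (entity1, entity2, confidence) triples."""
    population = [c for c in correspondences if c[2] == 1.0]
    rng = random.Random(seed)
    if len(population) <= sample_size:
        return population  # fewer candidates than the sample size: take all
    return rng.sample(population, sample_size)
```

A fixed `seed` makes the sample reproducible, which is useful when the labelled sample must be published alongside the evaluation results.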

The evaluation will be ready at the time of the Ontology Matching workshop 2011.

(3) Evaluation based on Data Mining method

Data mining techniques enable us to discover non-trivial findings about the systems of participants. These findings will be answers to so-called analytic questions, which will also deal with so-called mapping patterns [2] and, newly, with correspondence patterns [1].

For this kind of evaluation, we will use the LISp-Miner tool; in particular, the 4ft-Miner procedure, which mines association rules. This kind of evaluation was first tried two years ago [2]. We have since extended this approach and applied it to data from the years 2006, 2007 and 2008 [5].
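The association-rule idea behind this evaluation can be illustrated in a few lines. The sketch below only shows the basic statistics (support and confidence of a rule "antecedent => succedent" over a table of correspondence attributes); it is not the LISp-Miner/4ft-Miner API, and the attribute names are invented.

```python
# Sketch: support and confidence of an association rule over a table of
# correspondence records, each a dict of attributes (illustrative data).

def support_confidence(rows, antecedent, succedent):
    """antecedent/succedent: boolean predicates over a row."""
    a = [r for r in rows if antecedent(r)]           # rows matching the antecedent
    a_and_s = [r for r in a if succedent(r)]         # ... and also the succedent
    support = len(a_and_s) / len(rows) if rows else 0.0
    confidence = len(a_and_s) / len(a) if a else 0.0
    return support, confidence

rows = [{"matcher": "A", "correct": True},
        {"matcher": "A", "correct": True},
        {"matcher": "B", "correct": False},
        {"matcher": "B", "correct": True}]

sup, conf = support_confidence(rows,
                               lambda r: r["matcher"] == "A",
                               lambda r: r["correct"])
print(sup, conf)  # support 0.5, confidence 1.0
```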

The evaluation will be ready during November 2011.

(4) Evaluation based on Logical Reasoning

This method will be carried out by Christian Meilicke and Heiner Stuckenschmidt from the Computer Science Institute at the University of Mannheim, Germany. This year, it will be done automatically within the SEALS platform. In this kind of evaluation, ontologies are merged on the basis of the correspondences submitted by participants. Subsequently, the incoherence of the mappings is measured based on the incoherence of the merged ontology. This method is related to [3].
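A toy illustration of mapping incoherence: if equivalence correspondences merge two classes that one of the ontologies declares disjoint, those classes become unsatisfiable in the merged ontology. The sketch below detects only this simplest case via union-find; the actual evaluation uses full description-logic reasoning, and the class names here are invented.

```python
# Sketch: find classes made unsatisfiable when equivalence correspondences
# merge classes that are declared pairwise disjoint (much-simplified model).

def incoherent_classes(equivalences, disjointness):
    """equivalences: (classA, classB) pairs; disjointness: (classA, classB) pairs."""
    parent = {}

    def find(x):  # union-find root lookup with path compression
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in equivalences:       # merge classes linked by correspondences
        parent[find(a)] = find(b)

    bad = set()
    for a, b in disjointness:       # disjoint classes in one merged group
        if find(a) == find(b):      # => both are unsatisfiable
            bad.update({a, b})
    return bad
```

For example, mapping `cmt#Author` to both `ekaw#Paper` and `ekaw#Person` while the ekaw ontology declares `Paper` and `Person` disjoint makes both classes unsatisfiable.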

(5) Evaluation based on consensus of experts (Consensus Building Workshop)

Finally, we may organize a short session (on a workshop day when the OM workshop is not being held) to discuss interesting (unclear, weird, surprising, etc.) correspondences from participants' results or from the reference alignment. This event would last one or two hours and would take place only if there are interesting correspondences to discuss and enough people interested in participating.


The contact address is Ondřej Šváb-Zamazal (ondrej.zamazal at vse dot cz).


[1] Scharffe F., Euzenat J., Ding Y., Fensel D. Correspondence patterns for ontology mediation. OM-2007 at ISWC-2007.

[2] Šváb O., Svátek V., Stuckenschmidt H. A Study in Empirical and Casuistic Analysis of Ontology Mapping Results. ESWC-2007. (Final version available via SpringerLink.)

[3] Meilicke C., Stuckenschmidt H. Incoherence as a basis for measuring the quality of ontology mappings. OM-2008 at ISWC 2008.

[4] van Hage W.R., Isaac A., Aleksovski Z. Sample evaluation of ontology matching systems. EON-2007, Busan, Korea, 2007.

[5] Šváb-Zamazal O., Svátek V. Empirical Knowledge Discovery over Ontology Matching Results. IRMLeS 2009 at ESWC-2009.