Default Logics for Plausible Reasoning with Controversial Axioms

  • 1. Default Logics for Plausible Reasoning with Controversial Axioms. Thomas Scharrenbach (*), Claudia d'Amato (**), Nicola Fanizzi (**), Rolf Grütter (*), Bettina Waldvogel (*) and Abraham Bernstein (***). (*) Swiss Federal Institute for Forest, Snow and Landscape Research WSL, Zürcherstrasse 111, 8903 Birmensdorf, Switzerland. (***) University of Zurich, Department of Informatics, Binzmühlestrasse 14, 8050 Zurich, Switzerland. (**) Università degli Studi di Bari, Department of Computer Science, Via E. Orabona, 4 - 70125 Bari, Italy. 4th International Workshop on Ontology Dynamics – IWOD 2010
  • 2. Forest? Snow? Landscape? Landscape: Endangered Species
  • 4. ... Forest Forest monitoring
  • 6. ... Snow Natural hazards (e.g. avalanches)
  • 8. ...
  • 9. ...with many happy endings. Produce consistent KB
  • 12. Revise from time to time Detailed domain knowledge
  • 14. Don't care about ontologies
  • 15. Care about their knowledge Don't think about why it does not collapse! Just keep on hammering! Knowledge Engineers Domain Experts A Never-Ending story
  • 16. Desired Properties. Coherent: no concept inferred unsatisfiable. Explicitly conservative: keep all original information. Same language: do not change the knowledge representation. Automated: works automatically. Implicitly conservative: keep as many of the original inferences as possible. (The first properties are strict requirements, the last ones soft.)
  • 17. Default Logic Based D-Transformation. Partition the set of trouble-causing axioms: Lehmann's Default Logics [Lehmann AMAI:1995]. Keep non-trouble-causing axioms in a separate TBox: Lukasiewicz' Probabilistic DL [Lukasiewicz AI:2008]. Partition without additional SAT-checks: splitting root justifications [Scharrenbach et al. DL2010]. Optimize w.r.t. inferences lost: minimal splitting [Scharrenbach et al. IWOD2010]. Optimize w.r.t. remaining conflicts: Default-TBox entropy [Scharrenbach et al. URSW2010].
  • 18. Justifications: minimal sets of axioms that explain an entailment; root justifications do not depend on other justifications. For the example TBox {B ⊑ A, C ⊑ B, C ⊑ ¬A, D ⊑ C, D ⊑ ¬B, E ⊑ D, G ⊑ F}: J1(C ⊑ ⊥) = {B ⊑ A, C ⊑ B, C ⊑ ¬A}; J2(D ⊑ ⊥) = {B ⊑ A, C ⊑ B, C ⊑ ¬A, D ⊑ C}; J3(E ⊑ ⊥) = {B ⊑ A, C ⊑ B, C ⊑ ¬A, D ⊑ C, E ⊑ D}; J4(D ⊑ ⊥) = {C ⊑ B, D ⊑ C, D ⊑ ¬B}; J5(E ⊑ ⊥) = {C ⊑ B, D ⊑ C, D ⊑ ¬B, E ⊑ D}. The root justifications are J1 and J4.
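To make the notions on this slide concrete, here is a minimal Python sketch (not the authors' implementation): the axioms of the running example are encoded as (sub, super) pairs, a leading '¬' marks disjointness, and justifications are found by brute force over axiom subsets. All names and helpers are illustrative assumptions.

```python
from itertools import combinations

# Toy encoding of the example TBox: ("C", "D") stands for C ⊑ D,
# ("C", "¬D") stands for C ⊑ ¬D (disjointness of C and D).
TBOX = [("B", "A"), ("C", "B"), ("C", "¬A"),
        ("D", "C"), ("D", "¬B"), ("E", "D"), ("G", "F")]

def superclasses(concept, axioms):
    """All (possibly negated) atomic superclasses reachable from `concept`."""
    seen, frontier = set(), {concept}
    while frontier:
        c = frontier.pop()
        seen.add(c)
        for sub, sup in axioms:
            if sub == c and sup not in seen:
                frontier.add(sup)          # negated superclasses are leaves
    return seen

def unsatisfiable(concept, axioms):
    """A concept is unsatisfiable if it is subsumed by some X and by ¬X."""
    sups = superclasses(concept, axioms)
    return any("¬" + s in sups for s in sups)

def justifications(concept, axioms):
    """Brute force: all minimal axiom subsets that make `concept` unsatisfiable."""
    found = []
    for k in range(1, len(axioms) + 1):
        for subset in combinations(axioms, k):
            if unsatisfiable(concept, subset) and \
               not any(set(j) <= set(subset) for j in found):
                found.append(subset)
    return found

# justifications("C", TBOX) yields J1 = {B ⊑ A, C ⊑ B, C ⊑ ¬A};
# justifications("D", TBOX) yields J2 (= J1 plus D ⊑ C) and J4 = {C ⊑ B, D ⊑ C, D ⊑ ¬B}.
```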
  • 19. Splitting justifications provides the partitions for Default Logics. Each root justification is split into a Gamma-set (axioms with the unsatisfiable concept in their signature) and a Theta-set (axioms without the unsatisfiable concept in their signature). J1(C ⊑ ⊥): Gamma = {C ⊑ B, C ⊑ ¬A}, Theta = {B ⊑ A}. J4(D ⊑ ⊥): Gamma = {D ⊑ C, D ⊑ ¬B}, Theta = {C ⊑ B}. Root justifications do not depend on other justifications.
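Continuing the toy encoding from the sketch above, the Gamma/Theta split follows directly from the slide's definition; `split_justification` is a hypothetical helper, not part of the authors' code.

```python
def split_justification(justification, unsat_concept):
    """Split a root justification into the Gamma-set (axioms whose signature
    contains the unsatisfiable concept) and the Theta-set (the rest)."""
    def signature(axiom):
        sub, sup = axiom
        return {sub, sup.lstrip("¬")}
    gamma = [ax for ax in justification if unsat_concept in signature(ax)]
    theta = [ax for ax in justification if unsat_concept not in signature(ax)]
    return gamma, theta

# J1 with unsat concept C: Gamma = [("C","B"), ("C","¬A")], Theta = [("B","A")]
# J4 with unsat concept D: Gamma = [("D","C"), ("D","¬B")], Theta = [("C","B")]
print(split_justification([("B", "A"), ("C", "B"), ("C", "¬A")], "C"))
```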
  • 20. Partitions. Algorithm [Scharrenbach et al. DL2010]: WHILE the set of splitting sets is not empty DO transform all Theta axioms that are not in any Gamma-set; transform all Gamma axioms whose Theta-set is empty; add the remaining axioms to the first partition; proceed to the next partition; DONE. Splitting sets: J1(C ⊑ ⊥): Gamma = {C ⊑ B, C ⊑ ¬A}, Theta = {B ⊑ A}; J4(D ⊑ ⊥): Gamma = {D ⊑ C, D ⊑ ¬B}, Theta = {C ⊑ B}. Result: U0 = {B ⊑ A, E ⊑ D, G ⊑ F}, U1 = {C ⊑ B, C ⊑ ¬A}, U2 = {D ⊑ C, D ⊑ ¬B}, Default TBox T_D = {G ⊑ F}.
  • 21. Default TBox. Before: U0 = {B ⊑ A, E ⊑ D, G ⊑ F}, U1 = {C ⊑ B, C ⊑ ¬A}, U2 = {D ⊑ C, D ⊑ ¬B}, T_D = {G ⊑ F}. After: U0 = {B ⊑ A}, U1 = {C ⊑ B, C ⊑ ¬A}, U2 = {D ⊑ C, D ⊑ ¬B}, Universal TBox T_D = {E ⊑ D, G ⊑ F}.
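The reasoning scheme described in the talk (each partition is combined with the universal TBox T_D, and the closure of the whole Default TBox is the union of the per-partition closures) can be illustrated in the same toy setting. The helpers `superclasses` and `unsatisfiable` are reused from the justification sketch above; this is only an illustration, not Lehmann's full default entailment.

```python
def closure(axioms):
    """Entailed subsumptions X ⊑ Y, with X ranging over left-hand side concepts."""
    axioms = set(axioms)
    return {(sub, s) for sub, _ in axioms
            for s in superclasses(sub, axioms) if s != sub}

def coherent(axioms):
    """True iff no left-hand side concept is unsatisfiable."""
    return not any(unsatisfiable(sub, set(axioms)) for sub, _ in set(axioms))

def default_closure(partitions, t_d):
    """Union of the deductive closures of each partition combined with T_D."""
    return set().union(*(closure(set(u) | set(t_d)) for u in partitions))

# Partitioning from this slide: each U_i together with T_D is coherent.
U0, U1, U2 = {("B", "A")}, {("C", "B"), ("C", "¬A")}, {("D", "C"), ("D", "¬B")}
T_D = {("E", "D"), ("G", "F")}
assert all(coherent(u | T_D) for u in (U0, U1, U2))
```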
  • 22. Default TBox. Optimization [Scharrenbach et al. IWOD2010]: only one axiom of every splitting set per partition. Before: U0 = {B ⊑ A}, U1 = {C ⊑ B, C ⊑ ¬A}, U2 = {D ⊑ C, D ⊑ ¬B}, Universal TBox T_D = {E ⊑ D, G ⊑ F}. After: U0 = {B ⊑ A}, U1 = {C ⊑ B}, U2 = {D ⊑ C}, Universal TBox T_D = {E ⊑ D, G ⊑ F, C ⊑ ¬A, D ⊑ ¬B}.
  • 23. Default TBox. Optimization [Scharrenbach et al. IWOD2010]: only one axiom of every splitting set per partition. Alternative solution: U0 = {B ⊑ A}, U1 = {C ⊑ B}, U2 = {D ⊑ ¬B}, Universal TBox T_D = {E ⊑ D, G ⊑ F, C ⊑ ¬A, D ⊑ C}.
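The choice of which axiom of each splitting set remains in its own partition is non-deterministic. Under the (assumed) reading that exactly one Gamma axiom per splitting set stays in a partition while the others move to T_D, the candidates for the running example can be enumerated as below; this is a loose illustration, not the IWOD 2010 algorithm, and `optimized_candidates` is a hypothetical helper.

```python
from itertools import product

def optimized_candidates(gamma_sets, base_universal):
    """For each Gamma-set keep exactly one of its axioms in its own partition and
    move the remaining axioms to the universal TBox T_D. Returns a list of
    (partitions, T_D) candidates; their coherence still has to be checked."""
    candidates = []
    for kept in product(*(sorted(g) for g in gamma_sets)):
        partitions = [{ax} for ax in kept]
        t_d = set(base_universal)
        for gamma, choice in zip(gamma_sets, kept):
            t_d |= set(gamma) - {choice}
        candidates.append((partitions, t_d))
    return candidates

# Gamma-sets of J1 and J4; the Theta axiom B ⊑ A stays in U0 in every candidate.
gammas = [{("C", "B"), ("C", "¬A")}, {("D", "C"), ("D", "¬B")}]
base = {("E", "D"), ("G", "F")}
# Four candidates; among them the two solutions shown on slides 22 and 23.
for parts, t_d in optimized_candidates(gammas, base):
    print(parts, t_d)
```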
  • 24. Experimental Results (numbers of inferences; "original" and "improved" denote the two variants of the Default TBox approach):
Ontology | total deductive closure |∪_r (T_r)^+| | best removal |(T_r)^+| | best Default TBox |(DT)^+| (original / improved) | |(T_r)^+ \ (DT)^+| (original / improved) | |(DT)^+ \ (T_r)^+| (original / improved)
Koala | 68 | 68 | 68 / 86 | 1 / 0 | 1 / 18
Chemical | 293 | 261 | 233 / 293 | 61 / 0 | 33 / 33
Pizza | 1151 | 1151 | 1150 / 1152 | 1 / 0 | 2 / 1
Removal preserves no inferences w.r.t. the improved approach; removal preserves 61 inferences w.r.t. the original approach; the original approach preserves 33 inferences w.r.t. removal; the improved approach preserves 33 inferences w.r.t. removal.
  • 25. Summary Identify causes for conflicts: Root Justifications
  • 26. Ignore conflicts: Partition scheme from Lehmann's Default Logic
  • 27. Find optimal solutions: Minimal D-transformation
  • 28. Sounds good, but... Improved Approach is non-deterministic
  • 29. Number of solutions is exponential in number of axioms in justifications Stochastic search
  • 30. Performance measure: number of inferences invalidated. Counting inferences can be a false friend. Some conflicts may still be present
  • 31. Sounds good, but... Counting inferences can be a false friend. Find solutions that cause the least trouble
  • 32. T_D = {E ⊑ D, G ⊑ F, C ⊑ ¬A, D ⊑ ¬B}, U0 = {B ⊑ A}, U1 = {C ⊑ B}, U2 = {D ⊑ C}
  • 33. DT2 ⊨ D ⊑ B
  • 34. DT2 ⊨ D ⊑ ¬B. Conflicts are ignored but may still cause trouble
  • 35. T_D = {E ⊑ D, G ⊑ F, C ⊑ ¬A, D ⊑ C}, U0 = {B ⊑ A}, U1 = {C ⊑ B}, U2 = {D ⊑ ¬B}
  • 36. DT1 ⊨ D ⊑ B, DT1 ⊨ D ⊑ ¬B
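A remaining conflict in the sense used on these slides is a pair of opposite subsumptions that both survive in the Default TBox's union closure. In the toy encoding it can be detected with a small helper (a sketch; `conflicts` is a hypothetical name):

```python
def conflicts(entailments):
    """Pairs (X, Y) such that both X ⊑ Y and X ⊑ ¬Y are among the entailments."""
    return {(x, y) for (x, y) in entailments if (x, "¬" + y) in entailments}

# Usage: conflicts(default_closure(partitions, T_D)) lists every pair (X, Y) for
# which both X ⊑ Y and X ⊑ ¬Y survive; a solution minimising these remaining
# conflicts is preferred over one that merely maximises the number of inferences.
```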
  • 37. Quality Matters. The performance measure must take into account the quality of preserved inferences; it cannot rely on structure alone
  • 38. Take into account instantiations Solution: Assess Quality of Solutions by Information Content Minimizing entropy minimizes number of conflicts
  • 39. Default TBox Entropy. Entropy w.r.t. a probability mass function on axioms: H(T) = −Σ p(B ⊑ A) · log p(B ⊑ A), summed over the axioms B ⊑ A of T. Axiom probability mass function w.r.t. assertions: p(B ⊑ A) = α · Σ_{i ∈ I} 1[(¬B ⊔ A)(i)], with 1[(¬B ⊔ A)(i)] = 1 iff (¬B ⊔ A)(i) is in (T, A)^+, and 0 else. Idea: if T2 entails both B ⊑ A and B ⊑ ¬A, but T1 only one of them, then H(T2) > H(T1).
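The entropy definition above can be sketched over the toy encoding. For illustration the ABox is treated closed-world: an individual satisfies ¬B ⊔ A when it is known to be an A or known not to be a B. Both the `axiom_probabilities` helper and the example ABox are assumptions, not the URSW 2010 implementation.

```python
from math import log

def axiom_probabilities(axioms, abox):
    """p(B ⊑ A) ∝ number of individuals i with (¬B ⊔ A)(i): individuals that are
    an A, or not a B, in the (closed-world, toy) type sets of `abox`."""
    counts = {(b, a): sum(1 for types in abox.values()
                          if b not in types or a in types)
              for (b, a) in axioms}
    total = sum(counts.values()) or 1      # alpha = 1 / total normalises the p's
    return {ax: c / total for ax, c in counts.items()}

def entropy(axioms, abox):
    """H(T) = -sum p(axiom) * log p(axiom) over the axioms of T."""
    return -sum(p * log(p)
                for p in axiom_probabilities(axioms, abox).values() if p > 0)

# Two individuals supporting the two opposite axioms equally: the axiom
# distribution is uniform, so the entropy is maximal (log 2 ≈ 0.69).
abox = {"i1": {"D", "B"}, "i2": {"D", "¬B"}}
print(entropy([("D", "B"), ("D", "¬B")], abox))
```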
  • 40. Evaluation Measures Mostly for assessing ontology modularization
  • 41. Not designed for Default TBoxes
  • 42. They nonetheless need to be evaluated
  • 43. Conclusion Solving conflicts is possible using Default Logics Ignore conflicts by separating conflicting axioms
  • 44. No removal of axioms needed
  • 45. Minimize number of inferences invalidated
  • 46. Use DL for knowledge representation and reasoning Solving conflicts introduces uncertainty Conflicts are ignored but may still be present
  • 47. Choose solution that minimizes remaining conflicts
  • 48. Optimal solution depends on instantiation
  • 49. Future Work. Improve scalability: maintain justifications. Investigate further (qualitative) performance measures. Real-world evaluation: do real users benefit? Other domains: ontology mapping
  • 50. Slide 397: That's it for the moment... ... and thanks for your attention ... any feedback is greatly appreciated
  • 51. Data Sustainability. (Architecture diagram: SQL, GIS and WS data sources with OWL metadata and domain-specific search, at DNL Birmensdorf, CSCF Neuchatel and BAFU Berne.)
  • 52. Data Centre Nature and Landscape. Conceptualized by OWL ontologies: taxonomies, observation data, geo-spatial data, legislation process data, ... Heterogeneous data: automatic ontology creation/linking is not possible

Editor's Notes

  • #2: Welcome to my talk about „Plausible Reasoning with Controversial Axioms". I am Thomas Scharrenbach and I will now present joint work with my colleagues from WSL, Rolf Grütter and Bettina Waldvogel, my colleagues from Bari, Claudia d'Amato and Nicola Fanizzi, and last but not least, Avi Bernstein from the University of Zurich. This is a position paper, so I will not be able to present the underlying methods in too much detail. However, we are using these methods for ontology evolution, and it happens that I will present some of the methods used in this paper in more detail at the Workshop on Ontology Dynamics, which takes place tomorrow.
  • #3: Before I start with my talk, you might be wondering what forest, snow and landscape might have to do with plausible reasoning. Well, research at WSL is roughly categorized by these three topics. We, for example, create and maintain the Swiss national forest inventory. Other groups try to figure out how to predict and prevent the occurrence of avalanches, whereas people from my research unit deal with the monitoring of endangered species but also with spatial planning, also with regard to protecting people from natural hazards. All these people at WSL have one thing in common: they produce tons of data. This can be in the scope of a research project. But we also collect and maintain data on the order of public authorities such as the Swiss Federal Office for the Environment.
  • #4: For all these data we would like to have a formal meta-data description. This meta-data description, in turn, is realized in OWL2 ontologies. When creating these ontologies, we face a common problem. There are two parties involved: Knowledge Engineers and Domain Experts. The goal is to formalize the knowledge of the domain expert within an OWL2 ontology. Yet, Domain Experts tend to build highly inconsistent knowledge bases. On the other hand, Knowledge Engineers build consistent knowledge bases but lack the detailed knowledge. Furthermore, if I start telling my colleagues at WSL about ontologies they look at me as if I were from Mars. Well, I am not from Mars and neither are you. At WSL, our strategy is to offer the Domain Experts a simple way of building ontologies which is tolerant to modelling errors. Furthermore, this procedure shall keep all information that was provided by the Domain Experts. We only revise the knowledge base from time to time, because Domain Experts do not like to find that pieces of the knowledge they contributed have just been deleted.
  • #5: To sum up, when letting Domain Experts construct an ontology, we define some properties that we find useful: The ontology shall not infer anything unsatisfiable. Modelling errors are hidden from the Domain Experts. No information that was provided by the Domain Experts shall be deleted. We will not change the formalism for knowledge representation. If we started with OWL2-EL, for example, we want to end up with OWL2-EL. This refers ONLY to using OWL2 for knowledge representation, whereas we change the inference process, as we will see later on. Other properties are that the procedure shall work autonomously and preserve as much of the implicit information as possible. We require the first properties to be strictly kept whereas the last properties are considered soft. This implies that we, for example, give higher precedence to explicitly stated axioms than to inferred knowledge.
  • #6: To achieve these properties we defined the so-called Delta-transformation. This Delta-transformation maps a TBox to a so-called Default TBox. We invalidate unwanted inferences such as unsatisfiability by separating those axioms that cause unsatisfiability. For the separation we use Default Logics as interpreted by Lehmann. As done in Lukasiewicz' Probabilistic Description Logics, we keep axioms that are not involved in a conflict in a separate TBox. We introduced a simple splitting scheme which allows computing the partitions for the Default TBox without having to do additional satisfiability checks. We could further optimize the method by not transforming all trouble-causing axioms. This potentially saves some inferences we would lose otherwise, but comes at the price that finding solutions becomes non-deterministic. However, following this optimization strategy alone has a disadvantage: there may still reside conflicts in the knowledge base. Although they do not cause any trouble when performing reasoning, we would like to get rid of them, because they might confuse the Domain Experts working with the ontology. The present work deals with exactly that problem: how can we overcome the uncertainty of different solutions regarding both the number of remaining conflicts and the number of inferences invalidated.
  • #7: Consider TBox axioms. These have the form B is subsumed by A, where A and B are (possibly complex) concepts. In Default Logics the set of axioms is partitioned into the partitions U_0, ..., U_N such that the most general information is contained in U_0 and the most specific axioms are contained in U_N. The trick about Lehmann's Default Logics is that this order of specificity can be determined solely by the axioms themselves. How does that work? I will explain this to you by an example TBox that infers some concepts unsatisfiable. You can find this example also in the paper. We introduce a remainder set D_n in which we store all currently valid axioms. In the beginning, the first remainder set D_0 is the TBox itself. For the first partition, we now take all axioms for which both the subconcept as well as the superconcept are satisfiable. TODO: example.
  • #8: I will now give you a quick overview of the unsat splitting. First of all, we work on root unsat justifications. An unsat justification is a minimal set of axioms that explains an unsatisfiability. A root unsat justification does not depend on any other justification. Assume the simple TBox in the upper right corner. Black arrows stand for concept subsumption whereas red arrows represent disjointness. For example, this arrow means A is subsumed by B whereas this arrow means D is disjoint with B. We separate the root unsat justifications into two sets: the Gamma-set, in red, contains all axioms that contain the concept that is unsatisfiable w.r.t. this very justification. The Theta-set, in blue, contains the remaining axioms of that very justification. We can now use this simple splitting to compute the Default TBox without any further unsatisfiability checks. All the satisfiability checks have already been done by computing the justifications.
  • #10: I will omit the details of the algorithm for creating partitions, since it is not relevant for this work. For the simple TBox in the example, we receive the following Default TBox: we have three partitions, U0, U1 and U2, and a so-called Universal TBox T-Delta. Each partition together with T-Delta is coherent. For the reasoning part we could, in principle, do Default Logics reasoning, but we would like to avoid the additional complexity. Our goal is to simply ignore conflicts. As such we consider the union of all deductive closures as the deductive closure of the whole Default TBox. (Pause.) There is another issue we did not address so far. We put all axioms from the root justifications into the partitions and receive a single unique solution. Well, we can do better:
  • #11: It suffices to put only two axioms of each root unsat justification into two different partitions. In that case we showed that we lose fewer inferences. We can, for example, put the axiom D disjoint with B into the Universal TBox. It now occurs not only in partition U2, but in all partitions. The choice, however, of which axioms to put into the partitions and which axioms to put into the Universal TBox is non-deterministic. Optimizing, hence, introduces some uncertainty into the whole process.
  • #12: A second example would be choosing not to transform axioms C disjoint with A and C subsumed by D but put them in the Universal TBox instead.
  • #14: To sum up, we first identify all the axioms that are involved in conflicts by computing the unsat justifications. We resolve the root unsat justifications by using methods from Lehmann's Default Logics and Lukasiewicz' Probabilistic Description Logics. In particular, some of the trouble-causing axioms are separated during reasoning. We finally have to optimize the actual choice of which axioms to put in the partitions and which axioms can be left in the Universal TBox. This last step effectively introduces some uncertainty for the reasoning capabilities of the resulting Default TBox, which we have to overcome.
  • #16: If we just perform inference counting we can end up with weird situations. Consider the following Default TBox, which was the first optimized solution I just presented. By our definition, this Default TBox DT-1 has two entailments: on the one hand we infer that D is subsumed by B, on the other hand we infer the opposite. This contradiction does not cause problems when reasoning. We never consider both entailments at the same time, because they originate from two different partitions. Yet, it is not quite obvious for a Domain Expert that we allow for such conflicts. Classical Default Logics reasoning could overcome this issue, but as I said, we want to avoid the additional overhead. Fortunately, in this example we can provide a solution in which the conflict is no longer present. The second possible solution DT-2 does not contain the conflict. However, this means that it has one inference fewer than DT-1. If we now just assessed solutions by inference counting, then we would clearly choose solution one, that is, the one containing the conflict. Hence we need a performance measure that takes into account the quality of a solution regarding the number of conflicts still present.
  • #17: We recently came up with the idea of assessing the quality of a possible solution by its information content. In computer science, in particular in information theory, information content is measured by the entropy. How can we benefit from an entropy-based measure? Well, assume that a conflict is still present. That is, we are still able to infer that D is subsumed by B as well as by the complement of B. If we assert an instance to D, then we infer two assertions: one for B and one for the complement of B. In case there is no conflict, we can infer the assertion to only one of both concepts, that is, to either B or its complement. Considering the assertions as a random variable, the conflict case is more similar to a uniform distribution whereas the non-conflicting case is more different from the uniform distribution. The entropy, in turn, measures how close the actual distribution of a random variable is to the uniform distribution: the higher the entropy, the closer to uniform. We hence propose to assess solutions by the entropy of a Default TBox in the presence of an instantiation, that is, an ABox.
  • #18: To define the entropy on a TBox or a Default TBox (the procedure works on any set of axioms for which we have a proper inference mechanism), we define the entropy on a set of axioms. This requires defining a probability mass function on axioms. Based on a concept representation of an axiom, we propose to use the following definition: the probability of an axiom is the normalized sum of the instances under which the axiom becomes true. This is the case when the instance i can be asserted either to A or to the complement of B. Alpha serves here as the normalization constant and 1[·] is an indicator function. As I stated before, the idea is that the presence of conflicts increases the entropy whereas their absence reduces it.
  • #19: Proposals have been made for measuring the quality of an ontology. Most of these have been designed for evaluating the quality of ontology modularization. Modularization, in contrast to Default Logics, tries to split up the ontology into independent sub-units. We try to avoid this independence as much as possible. However, the most relevant to this work is, to the best of our knowledge, the entropy measure defined by Doran et al. It relies solely upon the structure of the ontology. We could, in principle, treat the different partitions as modules and hence apply this measure. Yet it would not be useful for minimizing the number of conflicts. We still have to evaluate whether other measures can do the job, but we strongly assume that this is not the case, because none of these measures was designed for minimizing the number of conflicts.
  • #20: To conclude, we are indeed able to perform reasoning on ontologies that contain controversial axioms. We can keep all explicitly stated information and ignore potential conflicts. The solution to reasoning is not deterministic but is subject to an optimization process. Conflicts are ignored during reasoning but can still be present. We hence have to optimize regarding the number of conflicts and the number of inferences that we invalidate. We showed that this requires a more sophisticated measure than simply counting inferences. Existing methods for evaluating ontologies were not designed to assess Default TBoxes regarding a minimal number of conflicts. We suggest assessing the quality of solutions by their information content and proposed an entropy-based measure that evaluates the quality of a Default TBox solution in the presence of an ABox.
  • #21: There is still plenty of work to do. First of all, we have to improve the scalability of computing all justifications for an entailment. Most relevant to this work, we will have to investigate further quality measures to have a solid basis for choosing the proper measure. We also have to evaluate this approach with real users. It is not a given that Domain Experts will accept the approach. Last but not least, we may apply this method to other domains such as, for example, ontology mapping.
  • #22: So, I will come to an end now, before you all die of PowerPoint poisoning. I thank you for your attention and would appreciate your comments and questions.