“neuro-symbolic”
≠
“neuro-semantic”
Frank van Harmelen,
Learning & Reasoning Group
Vrije Universiteit Amsterdam
Creative Commons License
CC BY 3.0:
Allowed to copy, redistribute
remix & transform
But must attribute
The K in “neuro-symbolic”
stands for “knowledge” *
* With thanks to Wouter Beek
“neuro-symbolic”
should be
“neuro-semantic”
“GeNeSy”
should be
“GeNeSe”
I will illustrate my point on
link prediction over knowledge graphs
(= simplest form of GeNeSy)
but it applies to any Generative NeSy
Abstracts in Scopus on
“link prediction” AND “knowledge graph”:
1200+ papers
Bluffer’s Guide to KG Link Prediction
(via embeddings)
https://docs.ampligraph.org/
[Figure: predicting a missing basedIn link]
[Figure: a drug–protein graph (Drug1, Drug2, Protein1, Protein2) with binds edges; the task is to predict a missing binds link]
Bluffer’s Guide to KG Embedding
• Link prediction
But also:
• Entity classification
• Query answering
• ….
[Figure: a prediction algorithm proposes the missing binds edge]
Clearly neuro-symbolic:
Some things are better done geometrically (and not symbolically)
From Symbols to Data and back again
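To make this pipeline concrete, here is a minimal TransE-style link-prediction sketch in plain NumPy. This is an illustrative toy, not the deck's method or the AmpliGraph API: the triples, dimensions and training loop are all assumptions for demonstration.

```python
# Minimal TransE-style link prediction on a toy drug-protein graph.
import numpy as np

rng = np.random.default_rng(42)

triples = [
    ("Drug1", "binds", "Protein1"),
    ("Drug2", "binds", "Protein2"),
    ("Drug1", "interacts-with", "Drug2"),
]
entities = sorted({x for h, _, t in triples for x in (h, t)})
relations = sorted({r for _, r, _ in triples})
E = {e: i for i, e in enumerate(entities)}
R = {r: i for i, r in enumerate(relations)}

dim, lr, margin = 16, 0.05, 1.0
ent = rng.normal(scale=0.1, size=(len(entities), dim))
rel = rng.normal(scale=0.1, size=(len(relations), dim))

def score(h, r, t):
    # TransE: a triple is plausible when h + r lands close to t.
    return -np.linalg.norm(ent[E[h]] + rel[R[r]] - ent[E[t]])

for _ in range(500):
    for h, r, t in triples:
        t_neg = entities[rng.integers(len(entities))]   # corrupt the tail
        d_pos = ent[E[h]] + rel[R[r]] - ent[E[t]]
        d_neg = ent[E[h]] + rel[R[r]] - ent[E[t_neg]]
        if margin + np.linalg.norm(d_pos) - np.linalg.norm(d_neg) > 0:
            # gradient step on the margin (hinge) loss
            u_pos = d_pos / (np.linalg.norm(d_pos) + 1e-9)
            u_neg = d_neg / (np.linalg.norm(d_neg) + 1e-9)
            ent[E[h]] -= lr * (u_pos - u_neg)
            rel[R[r]] -= lr * (u_pos - u_neg)
            ent[E[t]] += lr * u_pos
            ent[E[t_neg]] -= lr * u_neg

# Rank candidate tails for the query <Drug2, interacts-with, ?>
print(sorted(entities, key=lambda t: -score("Drug2", "interacts-with", t)))
```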
Example: prediction of unwanted side-effects in polypharmacy.
[Figure: pipeline — data as a knowledge graph (<drug-x, interacts-with, protein-y>); learn an embedding (symbol: node drug-x; model: tensor); predict polypharmacy side effects (<drug-x, polypharma-side-effect, drug-y>).]
From Symbols to Vectors and back again
But how well do these embeddings
capture the intended meaning of the KG?
TransE: ‖h + r − t‖
RotatE: t = h ∘ r
symmetry: ⟨h,r,t⟩ and ⟨t,r,h⟩
composition: father’s mother = mother’s father
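As a worked step (mine, not on the slide): a symmetric relation breaks plain TransE, because satisfying both directions exactly forces the relation vector to zero:

```latex
% If r is symmetric, TransE must satisfy both translations:
(h + r = t) \;\wedge\; (t + r = h) \;\Longrightarrow\; 2r = 0
\;\Longrightarrow\; r = 0 \;\wedge\; h = t.
% RotatE escapes this: with t = h \circ r in \mathbb{C}^d, symmetry only
% requires r \circ r = \mathbf{1}, i.e. each component of r rotates by 0 or \pi.
```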
Claim
None of the commonly used embeddings
capture any semantics
What is “semantics”?
(this is the Semantic Web Conference, after all)
This is not semantics
It is “wishful mnemonics”
Artificial Intelligence meets natural stupidity,
Drew McDermott, 1976
Wishful Mnemonics
A major source of confusion in AI programs is the use of
mnemonics like “UNDERSTAND” or “GOAL”. If a
programmer calls the main loop of their program
“UNDERSTAND”, they may mislead a lot of people, most
prominently themselves.
What they should do instead is refer to this main loop as
“G0034” and see if they can convince themselves or
anyone else that G0034 implements some part of
understanding.
It is much harder to do this when using terms like “G0034”.
When you say UNDERSTAND(x), you can just feel the …
Prescription medicine for every AI researcher:
In order to maintain your mental hygiene,
read “Artificial Intelligence meets natural stupidity”
once yearly.
So, this is “wishful mnemonics”
“wishful mnemonics” is not semantics
for your computer
[Figure: a graph whose nodes and edge labels are meaningless identifiers: G0034, H9945, XB56B, RB56, B599, K64, W87, U654, B67B3, 86G, BA21, BA51, 86H.]
It is just a datagraph for your computer
It is symbolic, but not semantic
Remember:
“neuro-symbolic” should be “neuro-semantic”?
“logical semantics” is also not semantics
for your computer
It just maps one formal system (called “syntax”)
to another formal system (called “semantics”)
[Figure: the “mind-reading game”: a small graph with Frank, Lynda and Hardman, a birth-place edge and a married-to edge, annotated with shared background knowledge:
• “married-to relates person to person” forces: Lynda is a person (lower bound on agreement);
• “married-to relates 1 person to 1 person” forces: Lynda = Hardman (upper bound on agreement).]
So what is semantics for your computer?
The semantics is in the Reserved Symbols: RDF Schema.
[Figure: a knowledge graph layered into Ontology (the Schema) above Instances (the Data).]
Claim
None of the commonly used embeddings
capture any semantics
Because none of the commonly used KG embeddings
respect any of the reserved symbols from RDF Schema or
OWL.
Embeddings do “distributional semantics”,
but predictable co-occurrence ≠ predictable inference
(just like LLMs).
Claim
None of the commonly used embeddings
capture any semantics
Because none of the commonly used embeddings
can represent universal quantification
(and that’s where the inference comes from)
Embeddings do “variable-free sentences” only,
and those don’t allow for any inference.
(Example: has-birth-place has domain person and range location.)
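To see what “predictable inference” means operationally, here is a small sketch using the rdflib and owlrl Python libraries (the URIs are invented for the example): the rdfs:domain and rdfs:range axioms make the entailments of a single data triple predictable.

```python
# Reserved symbols at work: rdfs:domain / rdfs:range license new triples.
from rdflib import Graph, Namespace, RDF, RDFS
from owlrl import DeductiveClosure, RDFS_Semantics

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.hasBirthPlace, RDFS.domain, EX.Person))   # schema (reserved symbols)
g.add((EX.hasBirthPlace, RDFS.range, EX.Location))
g.add((EX.frank, EX.hasBirthPlace, EX.amsterdam))   # data

DeductiveClosure(RDFS_Semantics).expand(g)          # apply RDFS entailment rules

print((EX.frank, RDF.type, EX.Person) in g)         # True, via rdfs:domain
print((EX.amsterdam, RDF.type, EX.Location) in g)   # True, via rdfs:range
```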
So: this is not a knowledge graph, it is a data graph,
because it doesn’t support any inference
and therefore doesn’t have any semantics.
But surely other people
have noticed this before?
Make embeddings semantic again!
(Outrageous Ideas paper at ISWC 2018)
Abstract
The original Semantic Web vision foresees to describe
entities in a way that the meaning can be interpreted both
by machines and humans. [But] embeddings describe an
entity as a numerical vector, without any semantics
attached to the dimensions. Thus, embeddings are as far
from the original Semantic Web vision as can be. In this
paper, we make a claim for semantic embeddings.
Proposal 1: A Posteriori Learning of Interpretations.
Reconstruct a human-readable interpretation from the
vector space.
Proposal 2: Pattern-based Embeddings.
Use patterns in the knowledge graph to choose
human-interpretable dimensions in the vector space.
Neither of these is aimed at predictable inference
→ no semantics.
From TransE to TransOWL
(and from TransR to TransROWL)
TransOWL starts from the TransE loss function, summed over all triples:
$\mathcal{L} \;=\; \sum_{(h,r,t)\in S}\;\sum_{(h',r,t')\in S'}\big[\gamma + d(h+r,\,t) - d(h'+r,\,t')\big]_+$
and extends that sum with triples derived from OWL axioms in the background knowledge.
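A hedged sketch of the knowledge-injection idea (my own simplified formulation, not the TransOWL authors' code): axioms from the ontology generate extra training triples, which the loss above is then also summed over.

```python
# Derive extra training triples from axioms (simplified: TransOWL also weights
# these terms and covers more axiom types than the two shown here).
inverse_of = {"treats": "treated-by"}           # owl:inverseOf
subproperty_of = {"binds": "interacts-with"}    # rdfs:subPropertyOf

def inject(triples):
    extra = []
    for h, r, t in triples:
        if r in inverse_of:                      # <h,r,t>  =>  <t,inv(r),h>
            extra.append((t, inverse_of[r], h))
        if r in subproperty_of:                  # <h,r,t>  =>  <h,super(r),t>
            extra.append((h, subproperty_of[r], t))
    return triples + extra

print(inject([("Drug1", "treats", "Disease1"), ("Drug1", "binds", "Protein1")]))
```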
More radical idea:
use more of the geometry
to capture the semantics
[Figure: Spheres (ELEm, EmEL++): concepts embedded as balls, so Father ⊑ Male and Father ⊑ Parent become containment of the Father ball in the Male and Parent balls. Boxes (BoxEL, Box2EL): concepts embedded as boxes, which can also capture intersections such as Parent ⊓ Male ⊑ Father (the intersection of two boxes is again a box, while the intersection of two balls is not a ball).]
(Kulmanov, 2019; Mondal, 2021; Xiong, 2022; Jackermeier, 2024)
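A minimal sketch of the geometric idea (in the spirit of the box-embedding models above; the helper and the numbers are mine): a concept is a box [lo, hi] in R^d, C ⊑ D holds when C's box lies inside D's box, and the amount by which it sticks out is a measurable violation.

```python
import numpy as np

def subsumption_violation(c_lo, c_hi, d_lo, d_hi):
    # How far box C pokes out of box D; 0 exactly when C ⊑ D holds geometrically.
    return np.maximum(d_lo - c_lo, 0).sum() + np.maximum(c_hi - d_hi, 0).sum()

male   = (np.array([0.0, 0.0]), np.array([4.0, 4.0]))
father = (np.array([1.0, 1.0]), np.array([2.0, 2.0]))
print(subsumption_violation(*father, *male))   # 0.0 -> Father ⊑ Male is respected
```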
Almost done!
• Take home 1: don’t just use symbols,
make sure you use semantics
• Take home 2: easy way to check for semantics:
check for predictable inference
• Note: GNN, GCN, RGCN, GAE are
neuro-symbolic
but not neuro-semantic systems
Our illustration on link prediction
generalises to many other neuro-symbolic
(better: neuro-semantic) systems.
Final 3 slides:
Question:
Where/when should the semantics play a role?
Answer: anywhere in the architecture
During training
Symbolic loss function:
loss = data loss + violation of semantics
Neuro-symbolic is not enough, we need neuro-*semantic*
See the survey of 100+ systems in von Rueden et al., Informed Machine Learning, 2019.
[Figure: an image region on a chair: flower? cushion? Background knowledge “Parts of a chair are: cushion and armrest” yields “Given the context of chair, a cushion is much more likely than a flower”: P(cushion | chair) >> P(flower | chair).]
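A minimal sketch of such a symbolic loss (my own formulation, reusing the box-violation idea from earlier; PyTorch is an assumption): the data loss is augmented with a differentiable penalty for each known axiom the current embeddings violate.

```python
import torch

def axiom_violation(c_lo, c_hi, d_lo, d_hi):
    # Differentiable penalty for an axiom C ⊑ D under box embeddings:
    # zero exactly when C's box lies inside D's box.
    return (torch.relu(d_lo - c_lo) + torch.relu(c_hi - d_hi)).sum()

def total_loss(data_loss, axioms, lam=0.1):
    # loss = data loss + lambda * violation of semantics
    return data_loss + lam * sum(axiom_violation(*a) for a in axioms)

father = (torch.tensor([1.0]), torch.tensor([2.0]))
male   = (torch.tensor([0.0]), torch.tensor([4.0]))
print(total_loss(torch.tensor(0.5), [(*father, *male)]))  # no violation -> 0.5
```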
During inference
symbolic consistency check
[Figure: query <queen, wears, ?> with candidates crown? showercap?; the neural model predicts “showercap” (97.5% certainty); a Predict → Select step checks the candidates against the KG and selects the consistent answer, “crown”.]
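A sketch of this Predict → Select step (the scores and the toy KG are invented for illustration): candidates inconsistent with the knowledge graph are filtered out before the final answer is chosen.

```python
# Predict, then select: keep only KG-consistent candidates, then pick the best.
scores = {"showercap": 0.975, "crown": 0.020}     # raw neural predictions
kg_facts = {("queen", "wears", "crown")}          # consistency knowledge

def select(subj, rel, scores, kg_facts):
    consistent = {o: s for o, s in scores.items() if (subj, rel, o) in kg_facts}
    pool = consistent or scores                   # fall back if nothing passes
    return max(pool, key=pool.get)

print(select("queen", "wears", scores, kg_facts))  # -> "crown"
```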
After inference
symbolic justification
[Figure: the same query <queen, wears, ?> (crown? showercap?), now with a Predict → Justify → Explain pipeline: the KG is used after prediction to justify and explain the chosen answer.]
Takeaways
Symbolic ≠ semantic
Instead: semantics = predictable inference
If you move from one representation to another,
make sure not to lose the predictable inference
Too late to rename
“neuro-symbolic”
to
“neuro-semantic”,
but:
if you enrich an LLM with a KG,
let it be a knowledge graph, not just a data graph.
Editor's Notes
  • #9: This one isn't right, is it? See earlier.
  • #23: Mind-reading game to explain semantics. If I show the audience the top triple, and we share a little bit of background knowledge in the square box (“ontology”), I can predict what the audience will infer from the top triple. The shared background knowledge forces us to believe certain things (such as that the right blobs must be locations), and forbids us to believe certain things (such as that the two right blobs are different). By increasing the background knowledge, the enforced conclusions (lower bound on agreement) and the forbidden conclusions (upper bound on agreement) get closer and closer, and the remaining space for ambiguity and misunderstanding shrinks. Not only misunderstanding between people, but also between machines. Slogan: semantics is when I can predict what you will infer when I send you something.