DBpedia - A Global Open Knowledge
Network
Sebastian Hellmann and Sören Auer
http://dbpedia.org
Outline
1. Concepts and DBpedia Strategy
2. Technologies
3. Outlook
Introduction
● Core and Context separation
○ Core Data: High value, low maintenance
○ Context data: Low value, high maintenance
● Fraud detection (Credit Card institute)
○ Core data: Credit card transactions
○ Context data: Public information (ATMs, cities, flight plans, products, crime rate, etc.)
● Supply-chain management (Manufacturing)
○ Core data: Know-how of manufacturing (what can you build?)
○ Context data: Supplier market
● Publishers
○ Core data: Content
○ Context data: Taxonomies and items to describe the content (Persons, Places, Events)
Common Challenges for us
● Speed of data ingestion
○ How fast can you find, understand and integrate external data?
● Virtually no feedback mechanisms to data providers
● No effective collaboration on your data, although you are curating the same
data as others
How do we enable collaboration on data?
DBpedia Strategy Overview
Starting point
DBpedia is the most successful
open knowledge graph (OKG)
OKG Governance
Collaboration & Curation
Max. societal value
Medium term goals:
● 10 million users
● millions of active contributors
● thousands of new businesses and initiatives
⇒ take DBpedia to a global level
Unlocking Societal Value by:
● OKG Governance - licensing, incubation, maturity model for OKGs
○ Apache Foundation for data
● OKG Collaboration & Curation - for individuals & organizations
○ Git and GitHub for data
● Providing a trustworthy global OKG infrastructure - for enterprises small
and large as well as non-profits and societal initiatives alike
● Maximizing societal value of open knowledge by incubating open
knowledge initiatives and businesses (e.g. in education, public health, open
science)
GitHub for Data
DBpedia aims to create a knowledge graph curation service, which allows
communities to collaborate on rich semantic representations.
● The knowledge graph uses the RDF data model as a scaffold but is augmented
with rich metadata about provenance, discourse, evolution, etc.
● Atomic units of the knowledge graph are facts/statements, which are
aggregated into resources/entity descriptions
● All contributions and changes are tracked and versioned
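The bullets above can be sketched as a tiny fact-level store (hypothetical names, not DBpedia's actual implementation): statements are the atomic units, they aggregate into entity descriptions, and every contribution is tracked in an append-only log.

```python
# Sketch of fact-level curation with provenance and versioning.
# All names here are illustrative, not DBpedia's real code.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class Statement:
    subject: str
    predicate: str
    obj: str
    source: str  # provenance: where the fact came from

@dataclass
class CuratedGraph:
    statements: set = field(default_factory=set)
    history: list = field(default_factory=list)  # append-only change log

    def _log(self, action, st, contributor):
        self.history.append((datetime.now(timezone.utc), action, st, contributor))

    def add(self, st, contributor):
        if st not in self.statements:
            self.statements.add(st)
            self._log("add", st, contributor)

    def remove(self, st, contributor):
        if st in self.statements:
            self.statements.discard(st)
            self._log("remove", st, contributor)

    def entity(self, subject):
        """Aggregate statements into a resource/entity description."""
        return {st.predicate: st.obj for st in self.statements
                if st.subject == subject}

g = CuratedGraph()
st = Statement("dbr:London", "dbo:country", "dbr:United_Kingdom", "wikipedia")
g.add(st, contributor="alice")
```

The frozen dataclass makes each statement hashable and immutable, so the change log, not the statement, carries the evolution.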
OKG Clearinghouse and Steward
DBpedia will be a clearinghouse for OKG contributions and a steward for their sustainable
maintenance
● Open Data license and contributor agreements
● Incubation and maturity model for OKG assets
○ Based on an automatic and sample-based quality and coverage assessment
● Continuous integration for OKG assets
○ Automatic link generation
○ Execution of test cases
○ OKG publication in various human-readable and machine-readable formats
● Communication and collaboration infrastructure for OKG communities
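The "automatic link generation" CI step could look like this minimal sketch, with `difflib` standing in for the real linking tool stack (names and threshold are illustrative assumptions):

```python
# Candidate owl:sameAs links proposed by fuzzy label matching.
# difflib is a stand-in; a production linker would use richer evidence.
from difflib import SequenceMatcher

def propose_links(source_a, source_b, threshold=0.85):
    """Return (id_a, id_b, score) candidate links between two label maps."""
    links = []
    for id_a, label_a in source_a.items():
        for id_b, label_b in source_b.items():
            score = SequenceMatcher(None, label_a.lower(), label_b.lower()).ratio()
            if score >= threshold:
                links.append((id_a, id_b, round(score, 2)))
    return links

dbpedia = {"dbr:London": "London"}
partner = {"geo:2643743": "London", "geo:2988507": "Paris"}
links = propose_links(dbpedia, partner)
```

Proposed links would then feed the test-case execution step before publication.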
Comparing with related initiatives
Use-case driven
● contrast with platform-first approaches (repositories like DataHub.io)
● We build the platform to support the use cases.
Collaboration-driven
● contrast with volunteer-driven (like Wikidata)
● improving completeness & correctness of the areas most used by the partners (full-time
curators instead of sporadic ones)
Knowledge-integration-driven
● contrast with loose collections (like data markets)
● We make every small piece of information identifiable & referenceable
⇒ knowledge melting pot.
We are willing and able to collaborate with and integrate other open data initiatives.
“Publish and Link” falls short for the Web of Data
Connecting data is about connecting people and organisations.
Collaboration in the Web of Data
Connecting data is about connecting people and organisations.
● DBpedia’s mission is to
○ serve as an access point for data
○ facilitate collaboration
○ disseminate data on a global scale
Data Incubator model
[Diagram: four-level incubation ladder, from LVL 0 to LVL 3]
● LVL 0 - Excel anarchy, no governance (stage: Counselling)
● LVL 1 - DBpedia Contributor Requirements (stage: Analysis)
● LVL 2 - Shared OKG Governance (stage: Integration)
● LVL 3 - Full collaboration benefits (stage: Collaboration)
LVL 3: access to all relevant data, links, and users of the ecosystem.
By reaching LVL 3, the cost of maintaining LVL 2, as well as OKG Governance and
Curation, is shared effectively with the DBpedia ecosystem.
LVL 0: Excel Anarchy, No Governance
● Each employee/department governs its own data (Anarchy)
● Intensive counselling required
● Best to build a parallel structure and show the value (KG prototype)
LVL 1: DBpedia Contributor Requirements
● Stable identifiers
● Good level of schema unification and management
● Data strategy & Knowledge Graph available
● Core and Context separation
○ Core Data: High value, low maintenance
○ Context data: Very low value, very high maintenance (commodity)
● What data maintenance can you outsource to DBpedia? (Analysis)
LVL 2: Shared OKG Governance
● Technical steps (Integration):
○ Identifier Linking
○ Schema Mapping
○ Release data into DBpedia
● Continuously maturing tool stack to improve these three steps
The DBpedia Association comprises a large network of universities
-> we mediate internships to tackle the above tasks
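The "Schema Mapping" step above could be sketched as a simple rewrite table (the mappings below are hypothetical examples, not DBpedia's actual mapping rules):

```python
# Rewrite source-schema fields to DBpedia ontology properties before
# releasing data into DBpedia. The mapping table is illustrative.
MAPPING = {
    "surname": "dbo:lastName",   # assumed example mappings
    "born": "dbo:birthDate",
    "org": "dbo:employer",
}

def map_record(record, mapping=MAPPING):
    """Rewrite field names; collect fields that still need manual mapping."""
    mapped, unmapped = {}, []
    for field, value in record.items():
        if field in mapping:
            mapped[mapping[field]] = value
        else:
            unmapped.append(field)  # flagged for a curator to map
    return mapped, unmapped

mapped, unmapped = map_record({"surname": "Doe", "born": "1980-01-01",
                               "nickname": "jd"})
```

The unmapped list is the natural work item to hand to an internship or curation task.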
LVL 3: Full collaboration benefits
● Link triangulation (who links to you? subscription)
● Source validation (error reports)
● Data comparison (your data against all other data sources)
● Mediated contact with other organisations holding the same data
● All user feedback is directed back to the sources
Incubator model
● Organisations…
○ can use the DBpedia incubator model to improve their OKG
○ each one joining adds value to DBpedia with data and experience
● DBpedia...
○ acts as the mediator
○ will distribute value to other orgs and users on a global scale
Technologies
● ID Management + Linking
● DataID (Metadata treatment)
● Data comparison and feedback
● SHACL - Test-driven data development
ID Management + Linking
● For each source ID, DBpedia assigns a local DBpedia ID
● Links are then grouped into clusters
● From the cluster a representative ID is chosen, others are redirects
● Properties:
○ Every imported entity is identifiable and traceable with local ID
○ Holistic identifier space -> allows complete linkage
○ Stable IDs allow link accuracy to improve over time
● http://dbpedia.github.io/links/tools/linkviz/
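The clustering scheme above can be sketched with a union-find pass (assumed logic; the rule for picking the representative ID is illustrative):

```python
# Cluster linked IDs with union-find, pick one representative per
# cluster, and redirect the rest to it. Representative selection
# (lexicographically smallest) is an illustrative stand-in.
def cluster_ids(ids, links):
    parent = {i: i for i in ids}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for a, b in links:                     # union linked IDs
        parent[find(a)] = find(b)

    clusters = {}
    for i in ids:
        clusters.setdefault(find(i), set()).add(i)

    redirects = {}
    for members in clusters.values():
        rep = min(members)                 # stable representative
        for m in members - {rep}:
            redirects[m] = rep             # others become redirects
    return redirects

ids = ["db:1", "db:2", "db:3", "db:4"]
redirects = cluster_ids(ids, [("db:1", "db:2"), ("db:2", "db:3")])
```

Because the local IDs are stable, rerunning the clustering with better links only moves redirects, never breaks identifiers.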
DataID (Metadata treatment)
● FOAF and WebID -> you keep your profile data local; all online accounts are
updated automatically
● DataID -> a DCAT extension; keep the data description local, and all data repos
will be updated automatically
Data comparison and feedback
Show differences in the data:
http://downloads.dbpedia.org/temporary/crosswikifact/results/q84.html
http://wikidata.org/wiki/Q84
Examples (via CrossWikiFact): areaTotal of London; population of London (P1082)
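The comparison step can be sketched as grouping sources by the value they state for a single property, here the population of London (Wikidata property P1082). The numbers and source names below are illustrative, not live data:

```python
# Group sources by the value they assert for one property and
# report whether they agree; disagreements become feedback reports.
def compare_fact(prop, sources):
    by_value = {}
    for name, value in sources.items():
        by_value.setdefault(value, []).append(name)
    return {"property": prop,
            "agree": len(by_value) == 1,
            "values": by_value}

report = compare_fact("P1082 (population of London)", {
    "dbpedia.org": 8673713,   # illustrative numbers
    "wikidata.org": 8673713,
    "partner-kg": 8416535,
})
```

A disagreement like this is exactly what the feedback channel would route back to the minority source.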
Test-driven data development
● Test-driven data development (2014)
● Dimitris Kontokostas (CTO of DBpedia) is a co-editor of the SHACL specification
● RDFUnit
○ uses Machine Learning (DL-Learner) to enrich the OWL schema
○ TAG - Test Autogenerators from enriched schema
○ 44,000 tests generated from the DBpedia Ontology
● Tests are transferred to sources (schema mapping)
● Tests are written collaboratively:
○ Universal: deathdate should not be before birthdate
○ Shared: specialised domain and application tests
Test-driven evaluation of linked data quality. Dimitris Kontokostas, Patrick Westphal, Sören Auer, Sebastian Hellmann, Jens Lehmann,
Roland Cornelissen, and Amrapali J. Zaveri. In Proceedings of the 23rd International Conference on World Wide Web (WWW 2014).
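A universal test like "deathdate should not be before birthdate" can be sketched in plain Python, in the spirit of RDFUnit's auto-generated tests (this is not the RDFUnit API; entity dicts and names are illustrative):

```python
# One universal, auto-generatable constraint applied to entity data:
# a deathDate must not precede the birthDate.
from datetime import date

def check_lifespan(entity):
    """Return a violation message, or None if the constraint holds."""
    birth, death = entity.get("birthDate"), entity.get("deathDate")
    if birth and death and death < birth:
        return f"{entity['id']}: deathDate {death} before birthDate {birth}"
    return None

violations = [v for v in map(check_lifespan, [
    {"id": "dbr:Ada_Lovelace", "birthDate": date(1815, 12, 10),
     "deathDate": date(1852, 11, 27)},
    {"id": "dbr:Bad_Entity", "birthDate": date(1900, 1, 1),
     "deathDate": date(1850, 1, 1)},   # deliberately inconsistent
]) if v]
```

Shared, domain-specific tests would follow the same pattern but be written collaboratively per application.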
DBpedia in 10 years
● DBpedia connects hundreds of thousands of data spaces
(centralised-decentralised architecture)
● Data about the world is a commodity (freely available to everybody)
● Working with data will be fun
Become a supporter or an early adopter
This is not a vision of the far future; it is happening now:
Contact for the DBpedia Association (non-profit)
dbpedia@infai.org @dbpedia
wiki.dbpedia.org @dbpedia.org