Knowledge Integration in Practice
Peter Mika, Director of Semantic Search, Yahoo Labs | January 13, 2015
Agenda
2
 Intro
 Yahoo’s Knowledge Graph
› Why a Knowledge Graph for Yahoo?
› Building the Knowledge Graph
› Challenges
 Future work
 Q&A
Disclaimers:
• Yahoo’s Knowledge Graph is the work of many at Yahoo, so I can’t speak to all of it with authority
• I’ll be rather loose with terminology…
About Yahoo
3
 Yahoo makes the world's daily habits inspiring and entertaining
› An online media and technology company
• 1 billion+ monthly users
• 600 million+ monthly mobile users
• #3 US internet destination*
• 81% of the US internet audience*
› Founded in 1994 by Jerry Yang and David Filo
› Headquartered in Sunnyvale, California
› Led by Marissa Mayer, CEO (since July, 2012)
› 10,700 employees (as of Sept 30, 2015)
*ComScore Media Metrix, Aug 2015
Yahoo Labs
4
 Yahoo’s global research organization
› Impact on Yahoo’s products AND academic excellence
› Established in 2005
› ~200 scientists and research engineers
› Wide range of disciplines
› Locations in Sunnyvale, New York, Haifa
› Led by Ron Brachman, Chief Scientist and Head of Labs
› Academic programs
› Visit
• labs.yahoo.com
• Tumblr/Flickr/LinkedIn/Facebook/Twitter
Semantic Search at Yahoo Labs London
› Extraction: information extraction from text and the Web
› Integration: knowledge representation and data fusion
› Indexing: efficient indexing of text annotations and entity graphs
› Ranking: entity retrieval and recommendations
› Evaluation: evaluation of semantic search
Why a Knowledge Graph?
6
The world of Yahoo
7
 Search
› Web Search
› Yahoo Answers
 Communications
› Mail, Messenger, Groups
 Media
› Homepage
› News, Sports, Finance, Style…
 Video
 Flickr and Tumblr
 Advertising products
See everything.yahoo.com for all Yahoo products
In a perfect world, the Semantic Web is the end-game for IR
Search: entity-based results
9
 Enhanced results for entity-pages
› Based on metadata embedded in the page or semi-automated IE
› Yahoo Searchmonkey (2008)
• Kevin Haas, Peter Mika, Paul Tarjan, Roi Blanco: Enhanced results for web search. SIGIR 2011: 725-734
 Adopted industry-wide
› Google, Bing, Facebook, Twitter…
› Led to the launch of the schema.org effort
Search
10
 Understand entity-based queries
› ~70% of queries contain a named entity* (entity mention queries)
• brad pitt height
› ~50% of queries have an entity focus* (entity seeking queries)
• brad pitt attacked by fans
› ~10% of queries are looking for a class of entities*
• brad pitt movies
 Even more prominent on mobile
› Limited input/output
› Different types of queries
• Less research, more immediate needs
• Need answers or actions related to an entity, not pages to read
(Figure: the query “brad pitt height” and related suggestions, e.g. “how tall is …”)
* Statistics from [Pound et al. WWW2010]. Similar results in [Lin et al. WWW2012].
Search: entity-based experiences
 Entity display
› Information about the entity
› Combined information with provenance
 Related entity recommendation
› Where should I go next?
 Question-Answering
 Direct actions
› e.g. movie show times and tickets
Communications
 Extraction of information from email
› Notifications
• Package delivery updates, upcoming flights etc.
• Show up in Yahoo Search/Mail
› Better targeting for ads
• e.g. understanding past product purchases
 Personal knowledge combined with the Web
› e.g. contact information is completed from FB/LinkedIn/Twitter
Media
13
 Personalization
› Articles are classified by broad topics
› Named entities are extracted and linked to the KG
› Recommend other articles based on the extracted entities/topics
(UI callout: “Show me less stories about this entity or topic”)
Requirements
14
 Entity-centric representation of the world
› Use cases in search, email, media, ads
 Integration of disparate information sources
› User/advertiser content and data
› Information from the Web
• Aggregate view of different domains relating to different facets of an entity
› Third-party licensed data
 Large scale
› Batch processing is OK, but with at least daily updates
 High quality
 Multiple languages and markets
Building the Yahoo Knowledge Graph
15
Yahoo Knowledge Graph
16
Knowledge Integration in Practice
Knowledge integration
Knowledge integration process
19
 Standard data fusion process (a minimal sketch follows after this list)
› Schema matching
• Map data to a common schema
› Entity reconciliation
• Determine which source entities refer to the same real-world entity
› Blending
• Aggregate information and resolve conflicts
 Result: unified knowledge base built from dozens of sources
› ~100 million unique entities and billions of facts
› Note: internal representations may be 10x larger due to modeling, metadata, etc.
 Related work
› Bleiholder and Naumann: Data Fusion. ACM Computing Surveys, 2008.
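To make the three stages concrete, here is a deliberately simplified Python sketch (not Yahoo’s production code); the property names, the key-based reconciliation and the last-write-wins blending are illustrative assumptions only.

    # Toy fusion pipeline: schema matching -> entity reconciliation -> blending.
    def integrate(source_records, property_maps):
        # 1. Schema matching: rename source-specific properties to the common schema.
        mapped = []
        for rec in source_records:
            pmap = property_maps[rec["source"]]
            mapped.append({pmap.get(k, k): v for k, v in rec.items() if k != "source"})
        # 2. Entity reconciliation (toy version): group records by a natural key.
        clusters = {}
        for rec in mapped:
            key = (rec.get("type"), str(rec.get("name", "")).lower())
            clusters.setdefault(key, []).append(rec)
        # 3. Blending (toy version): later records overwrite earlier ones per property.
        return [{k: v for rec in recs for k, v in rec.items()}
                for recs in clusters.values()]

    movies = [
        {"source": "imdb", "primaryTitle": "Snatch", "startYear": 2000, "type": "Movie"},
        {"source": "wiki", "label": "Snatch", "runtime": "PT1H43M", "type": "Movie"},
    ]
    maps = {"imdb": {"primaryTitle": "name", "startYear": "releaseYear"},
            "wiki": {"label": "name"}}
    print(integrate(movies, maps))  # one merged Movie record for "Snatch"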
Ontology matching
20
 Common ontology
› Covers the domains of interest to Yahoo
• Celebrities, movies, music, sports, finance, etc.
› Editorially maintained
› OWL ontology
• ~300 classes, ~800 datatype properties, ~500 object properties
› Protégé and custom tooling (e.g. for documentation)
• Git for versioning (similar to schema.org)
› More detailed and expressive than schema.org
• Class disjointness, cardinality constraints, inverse properties, datatypes and units
• But limited use of complex class/property expressions
– e.g. MusicArtist = Musician OR Band
– Difficult for data consumers
 Manual schema mapping (a hand-written mapping is sketched below)
› Works for ~10 sources
› Not scalable to sources such as
• Web tables
• Language editions of Wikipedia
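To illustrate why manual mapping works for a handful of curated feeds but not for thousands of sources: every source needs its own hand-written table of property renames and value converters, along the lines of this hypothetical example (the source property names below are made up):

    # Hypothetical manual mapping for one source: property renames plus value
    # converters into the common ontology. Every new source needs such a table.
    SOURCE_X_MAPPING = {
        "film.initial_release_date": ("releaseDate", str),
        "film.runtime":              ("runtimeMinutes", int),
        "film.directed_by":          ("director", str),
    }

    def map_record(raw, mapping):
        out = {}
        for src_prop, value in raw.items():
            if src_prop in mapping:
                target_prop, convert = mapping[src_prop]
                out[target_prop] = convert(value)
        return out

    print(map_record({"film.runtime": "104", "film.directed_by": "Guy Ritchie"},
                     SOURCE_X_MAPPING))
    # {'runtimeMinutes': 104, 'director': 'Guy Ritchie'}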
Entity Reconciliation
21
 Determine which source entities refer to the same real world object
(Diagram: pairs of source entity records marked == when they refer to the same real-world object and != otherwise)
Entity reconciliation
22
1. Blocking
› Compute hashes for each entity
› Based on type+property value combinations, e.g. type:Movie+releaseYear=1978
› Multiple hashes per entity
› Optimize for high recall
2. Pair-wise matching within blocks
› Manual as well as machine-learned classifiers
3. Clustering
› Transitive closure of matching pairs
› Assign unique identifier
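A toy version of the three steps above, assuming hand-picked blocking keys, an exact-match stand-in for the pair-wise classifier, and union-find for the transitive closure; the real pipeline uses many more blocking keys and learned matchers.

    from itertools import combinations

    def blocking_keys(e):
        # Multiple hashes per entity, e.g. type + release year, type + normalized name.
        keys = []
        if "releaseYear" in e:
            keys.append((e.get("type"), "year", e["releaseYear"]))
        if "name" in e:
            keys.append((e.get("type"), "name", e["name"].lower().strip()))
        return keys

    def match(a, b):
        # Stand-in for the manual/machine-learned pair-wise classifier.
        return (a.get("name", "").lower() == b.get("name", "").lower()
                and a.get("releaseYear") == b.get("releaseYear"))

    def reconcile(entities):
        # 1. Blocking: bucket entities that share at least one key.
        blocks = {}
        for i, e in enumerate(entities):
            for key in blocking_keys(e):
                blocks.setdefault(key, []).append(i)
        # 2 + 3. Pair-wise matching within blocks, then transitive closure (union-find).
        parent = list(range(len(entities)))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for ids in blocks.values():
            for i, j in combinations(ids, 2):
                if match(entities[i], entities[j]):
                    parent[find(i)] = find(j)
        clusters = {}
        for i in range(len(entities)):
            clusters.setdefault(find(i), []).append(entities[i])
        return list(clusters.values())   # each cluster gets one unique identifier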
Blending
23
 Source facts can be conflicting, complementary, or corroborating
› Example: the same movie described by two sources
• Source A: cast: …, mpaaRating: R, releaseDate: 2001-01-21, userRating: 8.5/10, budget: $9.1m
• Source B: cast: …, mpaaRating: R, releaseDate: 2001-03-16, budget: $9.2m, criticRating: 92/100
• The MPAA ratings corroborate each other, the release dates and budgets conflict, and the user and critic ratings are complementary
Blending
24
 Rule-based system initially, moving to machine learning
 Features
› Source trustworthiness
› Value prior probabilities
› Data freshness
› Logical constraints
• Derived from ontology
• Programmatic, e.g. children must be born after parents
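A minimal rule-based resolver in the spirit of the above, assuming per-source trust scores and an as-of timestamp per fact; the names and weights are placeholders, not the production rules or the machine-learned model.

    SOURCE_TRUST = {"editorial": 1.0, "licensed_feed": 0.8, "web_extraction": 0.5}

    def blend_property(candidates):
        """candidates: list of {'value', 'source', 'as_of'} for one property.
        Prefer more trusted sources; break ties by freshness (ISO dates sort lexically)."""
        best = max(candidates,
                   key=lambda c: (SOURCE_TRUST.get(c["source"], 0.0), c["as_of"]))
        return best["value"]

    def blend(facts_by_property, constraints=()):
        merged = {p: blend_property(c) for p, c in facts_by_property.items()}
        # Logical constraints derived from the ontology or written programmatically,
        # e.g. "children must be born after parents", can veto the merged record here.
        for check in constraints:
            check(merged)
        return merged

    print(blend_property([
        {"value": "2001-01-21", "source": "web_extraction", "as_of": "2015-01-01"},
        {"value": "2001-03-16", "source": "licensed_feed", "as_of": "2014-06-01"},
    ]))  # -> 2001-03-16, because the licensed feed is trusted more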
Challenges
25
Challenge: scalable infrastructure
26
 Property graph/RDF databases are a poor fit for ETL and data fusion
› Large batch writes
› Require transaction support
› Navigation over the graph, no need for more complex joins
• Required information is at most two hops away
 Hadoop-based solutions
› Yahoo already hosts ~10k machines in Hadoop clusters
› HBase initially
› Moved to Spark/GraphX
• Support row/column as well as graph view of the data
› Separate inverted index for storing hashes
– Welch et al.: Fast and accurate incremental entity resolution relative to an entity knowledge base. CIKM 2012
 JSON-LD is used as input/output format
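For illustration, a minimal JSON-LD record of the kind that could flow in and out of such a pipeline; the vocabulary URL and identifiers below are placeholders, not Yahoo’s actual namespaces.

    import json

    entity = {
        "@context": {"@vocab": "http://example.org/ontology/"},  # placeholder vocabulary
        "@id": "http://example.org/entity/brad_pitt",            # placeholder identifier
        "@type": "Person",
        "name": "Brad Pitt",
        "height": {"@value": "1.80",
                   "@type": "http://www.w3.org/2001/XMLSchema#decimal"},
    }
    print(json.dumps(entity, indent=2))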
Challenge: quality
27
 Not enough to get it right… it has to be perfect
• Key difference between applied science and academic research
 Many sources of errors
› Poor quality or outdated source data
› Errors in extraction
› Errors in schema mapping and normalization
› Errors in merging (reconciliation)
• Blocking
• Disambiguation
• Blending
› Errors in display
• Image issues, poor titles or descriptions, etc.
 Human intervention should be possible at every stage of the pipeline
(Example screenshots: an error in the source data (Wikipedia) and two reconciliation issues)
Challenge: type classification and ranking
31
 Type classification
› Determine all the types of an entity
› Mostly a system issue, e.g. types are used in blocking
› Features
• NLP extraction
– e.g. Wikipedia first paragraph
• Taxonomy mapping
– e.g. Wikipedia category hierarchy
• Relationships
– e.g. acted in a Movie -> Actor
• Trust in source
– e.g. IMDB vs. Wikipedia for Actors
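The relationship feature above can be as simple as a rule table mapping outgoing properties to implied types; a hypothetical sketch (credit.actingPerformanceIn and partyAffiliation appear elsewhere in this deck, the other property names are made up):

    # Hypothetical relationship -> type rules ("acted in a Movie" implies Actor, etc.)
    RELATION_TYPE_RULES = {
        "credit.actingPerformanceIn": "Actor",
        "credit.directedFilm": "Director",
        "memberOfSportsTeam": "Athlete",
    }

    def types_from_relations(outgoing_edges):
        """outgoing_edges: iterable of (property, target_entity_id) pairs."""
        return {RELATION_TYPE_RULES[p] for p, _ in outgoing_edges
                if p in RELATION_TYPE_RULES}

    print(types_from_relations([("credit.actingPerformanceIn", "the_terminator"),
                                ("partyAffiliation", "republican_party_us")]))
    # -> {'Actor'}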
Challenge: type ranking
32
 What types are the most relevant?
› Arnold Schwarzenegger: Actor > Athlete > Officeholder > Politician (perhaps)
› Pope Francis is a Musician per MusicBrainz
› Barack Obama is an Actor per IMDB
 Display issue
› Right template and display label
 Moving from manual to machine-learned ranking
(Callout: much better known as an Actor)
(Figure: excerpt of the Arnold Schwarzenegger entity graph, with repeated credit.actingPerformanceIn edges to The Terminator, historicJobPosition edges such as Television Director, a partyAffiliation edge to the Republican Party (United States), a description beginning “Arnold Alois Schwarzenegger is an Austrian-American actor, model, producer, director, activist, businessman, investor, philanthropist, former professional bodybuilder, ...”, and the candidate types Athlete, Officeholder, Politician and Actor)
Type ranking features
 Implemented two novel unsupervised methods
› Entity likelihood
› Nearest-neighbor
 Ensemble learning on (features extracted from) entity attributes
› Cosine, KL-div, Dice, sumAF, minAF, meanAF, etc.
› Entity features, textual features, etc.
• E.g. order of type mentions in Wikipedia first paragraph
 Variants
› Combinations of the above
› Stacked ML models, factorization machines (FMs)
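As one concrete example of such a feature, a nearest-neighbour-style score can be the cosine similarity between an entity’s bag of property/attribute tokens and a per-type centroid aggregated from entities already known to carry that type; a small sketch under those assumptions, with toy counts:

    import math
    from collections import Counter

    def cosine(a, b):
        num = sum(a[t] * b.get(t, 0) for t in a)
        den = (math.sqrt(sum(v * v for v in a.values()))
               * math.sqrt(sum(v * v for v in b.values())))
        return num / den if den else 0.0

    def type_scores(entity_tokens, type_centroids):
        """Rank candidate types by similarity of the entity to each type centroid."""
        return sorted(((cosine(entity_tokens, c), t) for t, c in type_centroids.items()),
                      reverse=True)

    centroids = {
        "Actor":      Counter({"credit.actingPerformanceIn": 50, "description:actor": 30}),
        "Politician": Counter({"partyAffiliation": 40, "historicJobPosition": 40}),
    }
    arnold = Counter({"credit.actingPerformanceIn": 6, "partyAffiliation": 1,
                      "historicJobPosition": 4, "description:actor": 1})
    print(type_scores(arnold, centroids))  # Actor ranks above Politician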
Challenge: mining aliases and entity pages
35
 Extensive set of alternate names/labels are required by applications
› Named Entity Linking on short/long forms of text
 Some of this comes free from Wikipedia
› Anchor text, redirects
› e.g. all redirects to Brad Pitt
 Query logs are also a useful source of aliases
› e.g. incoming queries to Brad Pitt’s page on Wikipedia
 Can be extended to other sites if we find entity webpages
› A type of foreign key, but specifically on the Web
› e.g. Brad Pitt’s page on IMDB, RottenTomatoes
 Machine learned model to filter out poor aliases
› Ambiguous or not representative
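A sketch of the kind of ambiguity signal such a filter could use, assuming we have counts of how often each alias (anchor text, redirect or query) leads to each entity; the thresholds are arbitrary illustrations.

    from collections import defaultdict

    def keep_aliases(alias_entity_counts, min_support=10, min_precision=0.8):
        """alias_entity_counts: iterable of (alias, entity_id, count).
        Keep aliases that are frequent enough and mostly point to a single entity."""
        totals = defaultdict(int)
        per_entity = defaultdict(lambda: defaultdict(int))
        for alias, entity, count in alias_entity_counts:
            totals[alias] += count
            per_entity[alias][entity] += count
        kept = {}
        for alias, total in totals.items():
            entity, top = max(per_entity[alias].items(), key=lambda kv: kv[1])
            if total >= min_support and top / total >= min_precision:
                kept[alias] = entity   # unambiguous enough to use for entity linking
        return kept

    print(keep_aliases([("brad pitt", "brad_pitt", 900),
                        ("pitt", "brad_pitt", 40),
                        ("pitt", "university_of_pittsburgh", 60)]))
    # -> {'brad pitt': 'brad_pitt'}; "pitt" is dropped as too ambiguous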
Challenge: data normalization
36
 Issue at both scoring and blending time
 Multiple aspects
› Datatype match
• “113 minutes” vs. “PT1H53M”
› Text variants
• Spelling, punctuation, casing, abbreviations etc.
› Precision
• sim(weight=53 kg, weight=53.5kg)?
• sim(birthplace=California, birthplace=Los Angeles, California)
› Temporality
• e.g. Frank Sinatra married to {Barbara Blakeley, Barbara Marx, Barbara Marx Sinatra, Barbara Sinatra}
• Side issue: we don’t capture historical values
– e.g. Men’s Decathlon at 1976 Olympics was won by Bruce Jenner, not Caitlyn Jenner
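For the datatype example above, both “113 minutes” and “PT1H53M” should normalize to the same number of minutes before values are compared; a small sketch that only handles these two shapes:

    import re

    def to_minutes(value):
        """Normalize a couple of duration formats to integer minutes."""
        value = value.strip()
        m = re.fullmatch(r"(\d+)\s*min(?:ute)?s?", value, re.IGNORECASE)
        if m:
            return int(m.group(1))
        m = re.fullmatch(r"PT(?:(\d+)H)?(?:(\d+)M)?", value)  # ISO 8601, hours/minutes only
        if m and (m.group(1) or m.group(2)):
            return int(m.group(1) or 0) * 60 + int(m.group(2) or 0)
        raise ValueError("unrecognized duration: %r" % value)

    assert to_minutes("113 minutes") == to_minutes("PT1H53M") == 113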
Challenge: relevance
37
 All information in the graph is true, but not equally relevant
 Relevance of entities to queries
› Query understanding
› Entity retrieval
 Relevance of relationships
› Required for entity recommendations (“people also search for”)
• Who is more relevant to Brad Pitt? Angelina Jolie or Jennifer Aniston?
Relationship ranking
38
 Machine-learned ranking based on a diverse set of features
› Relationship type
› Co-occurrence in usage data and text sources
• How often do people query for them together?
• How often is one entity mentioned in the context of the other?
› Popularity of each entity
• e.g. search views/clicks
› Graph-based metrics
• e.g. number of common related entities
 See
› Roi Blanco, Berkant Barla Cambazoglu, Peter Mika, Nicolas Torzec:
Entity Recommendations in Web Search. ISWC 2013
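A toy illustration of two of the features above, query co-occurrence and the number of common neighbours in the graph, combined with placeholder weights standing in for the machine-learned ranker; the data below is invented for the example.

    def cooccurrence(query_log, a, b):
        """Fraction of queries mentioning both entities out of those mentioning either."""
        both = sum(1 for q in query_log if a in q and b in q)
        either = sum(1 for q in query_log if a in q or b in q)
        return both / either if either else 0.0

    def common_neighbours(graph, a, b):
        return len(set(graph.get(a, ())) & set(graph.get(b, ())))

    def relationship_score(query_log, graph, a, b, w_cooc=0.7, w_graph=0.3):
        # Placeholder linear combination; in practice the weights come from learning to rank.
        return w_cooc * cooccurrence(query_log, a, b) + w_graph * common_neighbours(graph, a, b)

    log = [{"brad pitt", "angelina jolie"}, {"brad pitt", "fight club"},
           {"brad pitt", "jennifer aniston"}, {"brad pitt", "angelina jolie"}]
    graph = {"brad pitt": {"mr & mrs smith", "fury"},
             "angelina jolie": {"mr & mrs smith", "maleficent"},
             "jennifer aniston": {"friends"}}
    print(relationship_score(log, graph, "brad pitt", "angelina jolie"))    # higher
    print(relationship_score(log, graph, "brad pitt", "jennifer aniston"))  # lower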
Conclusions
Conclusions
40
 Yahoo benefits from a unified view of domain knowledge
› Focusing on domains of interest to Yahoo
› Complementary information from an array of sources
› Use cases in Search, Ads, Media
 Data integration challenge
› Triple stores/graph databases are a poor fit
• Reasoning for data validation (not materialization)
› But there is benefit to Semantic Web technology
• OWL ontology language
• JSON-LD
• Data on the Web (schema.org, DBpedia…)
Future work
41
 Scope, size and complexity of Yahoo Knowledge will expand
› Combination of world knowledge and personal knowledge
› Advanced extraction from the Web
› Additional domains
› Tasks/actions
 All of the challenges mentioned will need better answers…
› Can you help us?
Q&A
 Credits
› Yahoo Knowledge engineering team in Sunnyvale and Taipei
› Yahoo Labs scientists and engineers in Sunnyvale and London
 Contact me
› pmika@yahoo-inc.com
› @pmika
› http://www.slideshare.net/pmika/
Editor's Notes
  • #4: More info at http://info.yahoo.com/ and http://investor.yahoo.net/faq.cfm Marissa’s CES 2013 keynote: http://screen.yahoo.com/marissa-mayer-ces-keynote-live-210000558.html ComScore traffic: http://www.bloomberg.com/news/2013-08-22/yahoo-tops-google-in-u-s-for-web-traffic-in-july-comscore-says.html and http://www.comscore.com/Insights/Press_Releases/2013/8/comScore_Media_Metrix_Ranks_Top_50_US_Web_Properties_for_July_2013
  • #9: This is how a machine sees the world… Machines are not ‘intelligent’ and cannot ‘read’… they just see a string of symbols and try to match the user’s input to that stream.
  • #12: We also show “People also searched the height of…”
  • #15: Efficiency in processing, though not real-time. Developed by a large, distributed team of engineers and scientists in Sunnyvale, London and Taiwan. As of Dec 2015: 600M source entities and 10B source triples; 75M reconciled entities and 5B triples.
  • #17: The KG understands facts about real-world entities: people, places, movies, organizations and more, and how they relate to each other.
  • #20: In practice, due to modeling (reification) 75M unique entities -> 1.2B vertices in Spark/GraphX
  • #23: If we had 600m source entities and 10k cores with 1ms per comparison, pair-wise comparison would take about 400 years (3.6*10^17 comparisons). Blocking reduces this to 3.6 * 10^8 comparisons, about 30 minutes of runtime.