Investigating the Semantic Gap through Query Log Analysis

Peter Mika, Yahoo! Research
Edgar Meij, University of Amsterdam
Hugo Zaragoza, Yahoo! Research
The Semantic Gap

  • Significant efforts have focused on creating data
     – See Linking Open Data
  • Time to consider the extent to which data serves a purpose
     – Our purpose is fulfilling the information needs of our users
  • Using query logs to study the mismatch between
     – Data on the Web and information needs
     – Ontologies on the Web and expressions of information needs



     Demand = needs                Supply = information
The Data Gap
Lots of data or very little?




Linked Data cloud (Mar, 2009)
RDFa on the rise

  • 510% increase between March 2009 and October 2010
  • Another five-fold increase between October 2010 and January 2012

  Percentage of URLs with embedded metadata in various formats
Investigating the data gap through query logs

   • How big a Semantic Web do we need?
   • Just big enough… to answer all the questions that users may
     want to ask
      – Query logs are a record of what users want to know as a whole


   • Research questions:
      – How much of this data would ever surface through search?
      – What categories of queries can be answered?
      – What’s the role of large sites?




Method
  • Simulate the average search behavior of users by replaying
    query logs (a minimal sketch of this replay loop follows below)
     – Reproducible experiments (given query log data)
         • BOSS web search API returns RDF/XML metadata for search
           result URLs
  • Caveats
     – For us, and for the time being, search = document search
         • For this experiment, we assume current bag-of-words document
           retrieval is a reasonable approximation of semantic search
     – For us, search = web search
         • We are dealing with the average search user
         • There are many queries the users have learned not to ask
     – Volume is a rough approximation of value
         • There are rare information needs with high pay-offs, e.g. patent
           search, financial data, biomedical data…

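A minimal Python sketch of the replay loop described above (not the original experiment code). The function search_with_metadata is a hypothetical stand-in for a web search API, such as BOSS, that reports which metadata formats appear on each of the top-10 result URLs; all names here are assumptions.

    from collections import Counter

    def search_with_metadata(query, top_k=10):
        """Hypothetical API call: returns one set of format names
        (e.g. {"hcard", "rel-tag"}) per result URL."""
        raise NotImplementedError("replace with a real search API client")

    def replay(queries, top_k=10):
        """For each query, count how many of the top-k results carry each format."""
        per_query_counts = []
        for q in queries:
            results = search_with_metadata(q, top_k=top_k)
            counts = Counter()
            for formats in results:
                if formats:
                    counts["ANY"] += 1
                for f in formats:
                    counts[f] += 1
            per_query_counts.append((q, counts))
        return per_query_counts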
Data

  • Microformats, eRDF, RDFa data
  • Query log data
       – US query log
       – Random sample of 7k queries
        – Recent query log covering a period of over a month
  • Query classification data
       – US query log
       – 1000 queries classified into various categories




Number of queries with a given number of results with particular formats (N=7081)

               1     2    3    4   5   6   7   8   9  10   Impressions   Avg. impressions/query
  ANY       2127  1164  492  244  85  24  10   5   3   1          7623   1.08
  hcard     1457   370   93   11   3   0   0   0   0   0          2535   0.36
  rel-tag   1317   350   95   44  14   8   6   3   1   1          2681   0.38
  adr        456    77   21    6   1   0   0   0   0   0           702   0.10
  hatom      450    52    8    1   0   0   0   0   0   0           582   0.08
  license    359    21    1    1   0   0   0   0   0   0           408   0.06
  xfn        339    26    1    1   0   0   0   1   0   0           406   0.06

  Callouts from the slide:
  - On average, a query has at least one result with metadata.
  - Are tags as useful as hCard?
  - That’s only 1 in every 16 queries.

  Notes:
  - Queries with 0 results with metadata are not shown
  - You cannot add up the numbers in a column: a query may return documents with different formats
  - Queries are assumed to return more than 10 results
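The table above can be derived from the per-query counts produced by the replay sketch earlier. The following Python sketch (function and variable names are assumptions, not from the deck) builds, for each format, the histogram over "number of results carrying the format", the total impressions, and the average impressions per query.

    from collections import Counter, defaultdict

    def summarize(per_query_counts, formats, top_k=10):
        histogram = defaultdict(Counter)   # format -> {k results: number of queries}
        impressions = Counter()            # format -> total results carrying the format
        n_queries = len(per_query_counts)
        for _query, counts in per_query_counts:
            for f in formats:
                k = counts.get(f, 0)
                impressions[f] += k
                if 1 <= k <= top_k:
                    histogram[f][k] += 1
        avg = {f: impressions[f] / n_queries for f in formats}
        return histogram, impressions, avg

    # For the sample above this yields e.g. avg["ANY"] ~ 1.08 and avg["hcard"] ~ 0.36.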
The influence of head sites (N=7081)

                    1     2    3    4   5   6   7   8   9  10   Impressions   Avg. impressions/query
  ANY            2127  1164  492  244  85  24  10   5   3   1          7623   1.08
  hcard          1457   370   93   11   3   0   0   0   0   0          2535   0.36
  rel-tag        1317   350   95   44  14   8   6   3   1   1          2681   0.38
  wikipedia.org  1676     1    0    0   0   0   0   0   1   0          1687   0.24
  adr             456    77   21    6   1   0   0   0   0   0           702   0.10
  hatom           450    52    8    1   0   0   0   0   0   0           582   0.08
  youtube.com     475     1    0    0   0   0   0   2   0   0           493   0.07
  license         359    21    1    1   0   0   0   0   0   0           408   0.06
  xfn             339    26    1    1   0   0   0   1   0   0           406   0.06
  amazon.com      345     3    0    0   0   0   1   0   0   0           358   0.05

  Callout from the slide: if YouTube came up with a microformat, it would be the fifth most important.
Restricted by category: local queries (N=129)

                         1   2   3  4  5  6  7  8  9  10   Impressions   Avg. impressions/query
  ANY                   36  16  10  0  4  1  0  0  0   0           124   0.96
  hcard                 31   7   5  1  0  0  0  0  0   0            64   0.50
  adr                   15   8   2  1  0  0  0  0  0   0            41   0.32
  local.yahoo.com       24   0   0  0  0  0  0  0  0   0            24   0.19
  en.wikipedia.org      24   0   0  0  0  0  0  0  0   0            24   0.19
  rel-tag               19   2   0  0  0  0  0  0  0   0            23   0.18
  geo                   16   5   0  0  0  0  0  0  0   0            26   0.20
  www.yelp.com          16   0   0  0  0  0  0  0  0   0            16   0.12
  www.yellowpages.com   14   0   0  0  0  0  0  0  0   0            14   0.11

  Callout from the slide: the query category largely determines which sites are important.
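Restricting the analysis to one query category is a simple filter over the same per-query counts. A sketch, assuming a hypothetical mapping from query string to its classification categories (both names are illustrative):

    def restrict_to_category(per_query_counts, query_categories, category):
        """Keep only queries labelled with the given category (e.g. "local")."""
        return [(q, counts) for q, counts in per_query_counts
                if category in query_categories.get(q, set())]

    # local = restrict_to_category(per_query_counts, query_categories, "local")
    # summarize(local, ["ANY", "hcard", "adr", "geo"])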
Summary

  • Time to start looking at the demand side of semantic search
     – Size is not a measure of usefulness
  • For us, and for now, it’s a matter of who is looking for it
         • “We would trade a gene bank for fajita recipes any day”
         • Reality of web monetization: pay per eyeball
  • Measure different aspects of usefulness
     – Usefulness for improving presentation but also usefulness for
       ranking, reasoning, disambiguation…
  • Site-based analysis
  • Linked Data will need to be studied separately




The Ontology Gap
Investigating the ontology gap through query logs
   •   Does the language of users match the ontology of the data?
         – Initial step: what is the language of users?
   •   Observation: objects of the same type often appear in the same
       query contexts
         – Users ask for the same aspects of the type

        Query                            Entity            Context            Class
        aspirin side effects             ASPIRIN           +side effects      Anti-inflammatory drugs
        ibuprofen side effects           IBUPROFEN         +side effects      Anti-inflammatory drugs
        how to take aspirin              ASPIRIN           -how to take       Anti-inflammatory drugs
        britney spears video             BRITNEY SPEARS    +video             American film actors
        britney spears shaves her head   BRITNEY SPEARS    +shaves her head   American film actors

   •   Idea: mine the context words (prefixes and postfixes) that are
       common to a class of objects (a minimal sketch follows below)
         – These are potential attributes or relationships
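A minimal sketch of the fix-mining idea, assuming entities are matched in queries by exact string lookup against a dictionary that maps entity surface forms to their classes (e.g. Wikipedia templates or categories); all names here are illustrative, not the original implementation.

    from collections import defaultdict

    def mine_fixes(queries, entity_to_classes):
        """Collect postfix ('+') and prefix ('-') contexts per class."""
        fixes = defaultdict(lambda: defaultdict(int))  # class -> fix -> count
        for q in queries:
            q = q.lower().strip()
            for entity, classes in entity_to_classes.items():
                if entity in q:
                    before, _, after = q.partition(entity)
                    for cls in classes:
                        if after.strip():
                            fixes[cls]["+" + after.strip()] += 1
                        if before.strip():
                            fixes[cls]["-" + before.strip()] += 1
        return fixes

    # mine_fixes(["aspirin side effects", "how to take aspirin"],
    #            {"aspirin": ["Anti-inflammatory drugs"]})
    # -> {"Anti-inflammatory drugs": {"+side effects": 1, "-how to take": 1}}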
Models

  • Desirable properties:
     – P1: Fix is frequent within type
     – P2: Fix has frequencies well-distributed across entities
     – P3: Fix is infrequent outside of the type
  • Models:
     – Example: the query "apple ipod nano review" has type product,
       entity "apple ipod nano", and fix "review"
     – A sketch of the first two models (as defined in the Editor's Notes) follows below
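The slide lists only the desirable properties; the Editor's Notes at the end of the deck define M1 as the probability of the fix given the type and M2 as that probability normalized by the overall probability of the fix. A Python sketch of those two scores (the input dictionaries and names are assumptions):

    def score_fixes(type_fix_counts, global_fix_counts):
        """type_fix_counts: fix -> count within the type;
           global_fix_counts: fix -> count over the whole query log."""
        type_total = sum(type_fix_counts.values())
        global_total = sum(global_fix_counts.values())
        scores = {}
        for fix, count in type_fix_counts.items():
            p_fix_given_type = count / type_total                      # M1
            p_fix = global_fix_counts.get(fix, 0) / global_total
            scores[fix] = {
                "M1": p_fix_given_type,
                "M2": p_fix_given_type / p_fix if p_fix > 0 else 0.0,  # penalizes common fixes
            }
        return scores

The entropy-based model (M3) and the entity-dependent models (M4-M6) are not reproduced here.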
Models cont.




Demo




Qualitative evaluation

   • Four Wikipedia templates of different sizes
   • In bold (in the original slide): the information needs that would
     actually be fulfilled by infobox data

   Settlement         Musical artist            Drug              Football club
   hotels             lyrics                    buy               forum
   map                buy                       what is           news
   map of             pictures of               tablets           website
   weather            what is                   what is           homepage
   weather in         video                     side effects of   tickets
   flights to         download                  hydrochloride     official website
   weather            hotel                     online            badge
   hotel              dvd                       overdose          fixtures
   property in        mp3                       capsules          free
   cheap flights to   best                      addiction         logo
Evaluation by query prediction
   •   Idea: use this method for type-based query completion
        –   The expectation is that it improves completion for infrequent queries
   •   Three days of UK query log for training, three days of testing
   •   Entity-based frequency as baseline (~current search suggest)
   •   Measures
        –   Recall at K and MRR (also per type); both are sketched below
   •   Variables
       –   models (M1-M6)
       –   number of fixes (1, 5, 10)
       –   mapping (templates vs. categories)
       –   type to use for a given entity
            •   Random
            •   Most frequent type
            •   Best type
            •   Combination
       –   To do: number of days of training
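A sketch of the two measures named above for the completion task. Here suggest stands in for any of the models M1-M6 and maps a query prefix (typically the entity) to a ranked list of completions, and each test pair carries the completion actually observed in the test log; these names are assumptions.

    def recall_at_k(test_pairs, suggest, k):
        """Fraction of test queries whose observed completion is in the top-k suggestions."""
        hits = sum(1 for prefix, observed in test_pairs if observed in suggest(prefix)[:k])
        return hits / len(test_pairs) if test_pairs else 0.0

    def mrr(test_pairs, suggest):
        """Mean reciprocal rank of the observed completion in the suggestion list."""
        total = 0.0
        for prefix, observed in test_pairs:
            ranked = suggest(prefix)
            if observed in ranked:
                total += 1.0 / (ranked.index(observed) + 1)
        return total / len(test_pairs) if test_pairs else 0.0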
Results: success rate (binned)




Summary

  •   Most likely fix given type (M1) works best
  •   Some improvement on query completion task
       – Win for rare queries
       – Raw frequency wins quickly because of entity-specific completions
  •   Potentially highly valuable resource for other applications
       – Facets
       – Automated or semi-automated construction of query-trigger patterns
       – Query classification
       – Characterizing and measuring the similarity of websites based on the
         entities, fixes and types that lead to the site
  •   Further work needed to turn this into a vocabulary engineering
      method



Open Questions

  • Measure information utility, not just volume
     – Looking at the demand side of information retrieval and how
       well it matches the supply of information
  • Data
     – Is the Semantic Web just growing or becoming more useful?
         • How well does it match the information needs of users?
         • Other measures of utility?
  • Ontologies
      – Do the properties of objects that we capture match what users
        are looking for?
         • Mismatch of language? Mismatch of needs?






Editor's Notes

  • #4: First intro, and then work with many different people… and I’ve learned a lot.
  • #14: First intro, and then work with many different people… and I’ve learned a lot.
  • #16: Entity-independent measures. M1: probability of the fix given the type; M2: probability of the fix given the type, normalized by the probability of the fix (the more uncommon the fix, the better); M3: binary entropy function
  • #17: Entity-dependent measures: M4: