This is a screen capture from the last time Eric introduced my heresy, at the IA Summit Redux 2007 in Second Life.




For far too long, computer science has directed the development of search systems. This is
problematic from an experience point of view because computer science measures success by
different standards than we do. It is no wonder, then, that search systems have developed with
minimal attention to the user experience beyond the assumed perfection of results relevance and
appropriate ad matching. The intent of this plenary is to inspire us all to engage on a deeper
level in designing search experiences that do more than sell products well.




SES New York 2005: Mike Gehan… explains that engines want the most relevant results, which is hard “because end
users are search nitwits!”
http://www.seroundtable.com/archives/001600.html

Too much information
Hosted Websites:
•July 1993: 1,776,000
•July 2005: 353,084,187

Individual Web pages:
•1997: 200 million Web pages
•2005: 11.5 billion pages – now likely well over 12 billion
•2009: Google announces that its spiders have found 1 trillion URLs and that the Google index holds 100+ billion pages



No Silver Bullet Solution
•Language and perception differ from person to person
         •Some people think women put their stuff in a purse, others a pocketbook, and others a handbag.
         •“Animal” is a form of mammal, a Sesame Street character, and an uncouth person
•Over 140 calculations are now used for PageRank valuation, and it still “gets it wrong” a good percentage of the time
         •Customers are looking because they don’t know
         •Customers no longer know how to construct successful queries
         •Search engine intent is not always “finding the most relevant information”
Cost of finding information, according to an IDC April 2006 report: $5.3 million for every 1,000 workers




Woody Allen says that 80% of success is showing up. That is how it works with search engines
also; you have to show up in the index to show up in the results. Here are two screen captures
from my agency’s website. The one in the upper left is what our customers see. The one in the
lower right is what the search engine “sees.” Someone using a search engine to find my
agency will not find it, because all the spider “sees” is a big black hole.

If you want your customers to find the websites that you design and the interactions and
experiences contained there using search technology, you must have text on the page.
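To see the difference for yourself, run a page through a bare-bones parser that keeps only the text nodes, which is roughly the raw material an indexer starts from. This is a minimal sketch of my own with an invented agency page, not the actual capture from the slide:

```python
from html.parser import HTMLParser

class SpiderView(HTMLParser):
    """Collect only the raw text nodes a crawler has to work with."""
    def __init__(self):
        super().__init__()
        self.text = []

    def handle_data(self, data):
        if data.strip():
            self.text.append(data.strip())

# A page that is one big image: nothing for the spider to index.
image_only = '<html><body><img src="hero.jpg"></body></html>'
# The same page with actual text on it.
with_text = ('<html><body><h1>Acme Design Agency</h1>'
             '<p>Interaction design and research.</p></body></html>')

for page in (image_only, with_text):
    spider = SpiderView()
    spider.feed(page)
    print(spider.text or "a big black hole")
```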

A prophet is not taken seriously in “her” own land.




Michael Wesch, Kansas State University, has done a masterful job of looking at the many ways in
which digital text is different from print/spoken/filmed text. So, why do we keep treating it as if it
were the same?

If Tim Berners-Lee had known where we would be now, he would not have used HTTP as a foundation
for the Web. But he did, and we’re stuck with it. Since then, we’ve been trying to design around
the limitations without success. Search systems are founded in text. Their capability to index
visual media is growing, but it is based on translation into text.

It is, and will remain so for the foreseeable future, all about text. The maturation of the Semantic
Web will further enhance the importance of content on the page and meaning off the page.




The search engines are like people who keep buying bigger clothes to hide their weight gain.
Sooner or later the diet must come, as it somewhat has with Google, which now starts with
whether or not a page is index-worthy at all.

Google Caffeine: new infrastructure opened to developer testing in public beta (August 2009).
Even cheap infrastructure has its cost limits, and Google looks to be reaching its limit with regard
to retention of what it is finding out there; likely a lot of “Web junk” doesn’t even make the cut.
Google Caffeine is:
•Faster
•More keyword-string-based relevance
•Real-time indexing – breaking news
•Greater index volume

Currently, the determination is done by computational math. Who should decide what goes and
what stays? Us! We can influence the search engines’ behavior by getting rid of the “set it and forget
it” method of Web publishing. Keep content fresh and current. Check every now and then.
Publish deep, context-rich content and tend to it. Not all of it, just the most important pieces. Not
all content is created equal.

Using the Internet: Skill Related Problems in User Online Behavior; van Deursen & van Dijk; 2009
System and Method of Encoding and Decoding Variable-length data: June 27, 2006
http://www.worldwidewebsize.com/




Here it is, the famous, to some infamous, PageRank algorithm. This is its most stripped-down
state. Rumor has it that the algorithm now has in excess of 27 components. We’ll look at some of
these extensions in a few moments.

Important to note: The PageRank algorithm is a pre-query calculation. It is a value that is assigned
as a result of the search engine’s indexing of the entire Web and the associated value has no
relationship to the user’s information need. There have been a number of additions and
enhancements to lend some contextual credence to the relevance ranking of the results.
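For reference, the stripped-down form referred to here is the published one: PR(A) = (1 − d) + d(PR(T1)/C(T1) + … + PR(Tn)/C(Tn)), where the Ti are the pages linking to A, C(Ti) is each page’s count of outbound links, and d is a damping factor. Below is a toy power-iteration sketch over an invented four-page link graph, purely illustrative and nothing like production scale:

```python
# Toy power iteration over an invented four-page link graph.
# These scores are computed once for the whole graph, before any
# query arrives -- exactly the "pre-query" point made above.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
d = 0.85                            # damping factor from the published formula
pr = {page: 1.0 for page in links}  # initial scores

for _ in range(50):                 # iterate until the scores settle
    pr = {
        page: (1 - d) + d * sum(
            pr[q] / len(links[q]) for q in links if page in links[q]
        )
        for page in links
    }

print({page: round(score, 3) for page, score in sorted(pr.items())})
```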

When Google appeared in 1998, it was the underdog to search giants like AltaVista and Yahoo!, with a
simplified relevance model founded on human mediation through linking [each link
was at that time the product of direct human endeavor, and so viewed as a “vote” for the page or
site’s relevance and information merit]. It is not so much the underdog now, with 64.6% of all U.S.
searches (13.9 billion searches in August 2009, or nearly 420 million searches per day in the U.S. alone).

There are only so many slots in the golden top 10 search results for any query. Am I the only one
concerned with the consolidation of so much power in a single entity? And is it perceived
power, something we can do something about, or actual power, something that we must learn to
live with?

Comscore Search Engine Market Share August 2009
http://www.comscore.com/Press_Events/Press_releases/2009/9/comScore_Releases_August_2009_U.S._Search_Engine_Rankings




Hilltop was one of the first algorithms to introduce the concept of machine-mediated “authority” to combat
the human manipulation of results for commercial gain (using link blast services, viral distribution
of misleading links, and the like). It is used by all of the search engines in some way, shape or form.

Hilltop is:
•Performed on a small subset of the corpus that best represents the nature of the whole
•Pages are ranked according to the number of non-affiliated “experts” that point to them – i.e.
experts not in the same site or directory (see the sketch below)
•Affiliation is transitive [if A=B and B=C then A=C]
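Here is the promised sketch of the expert-counting idea, with affiliation approximated by registered domain (the actual Hilltop paper also grouped hosts by IP block); all hosts and pages are invented:

```python
# Toy Hilltop-style scoring: a page's score is the number of DISTINCT
# non-affiliated "expert" hosts linking to it.
from collections import defaultdict

def registered_domain(host):
    return ".".join(host.split(".")[-2:])    # crude affiliation key

expert_links = [                              # (expert host, target page)
    ("parts.autohub.com", "pageA"),
    ("deals.autohub.com", "pageA"),           # affiliated with the host above
    ("library.university.edu", "pageA"),
    ("news.example.org", "pageB"),
]

experts_for = defaultdict(set)
for host, page in expert_links:
    experts_for[page].add(registered_domain(host))  # affiliates collapse to one vote

for page, experts in experts_for.items():
    print(page, len(experts))   # pageA -> 2 votes, pageB -> 1 vote
```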

The beauty of Hilltop is that, unlike PageRank, it is query-specific and reinforces the relationship
between the authority and the user’s query. You don’t have to be big or have a thousand links
from auto parts sites to be an “authority.” Google’s 2003 Florida update, rumored to contain
Hilltop reasoning, resulted in a lot of sites with extraneous links falling from their previously lofty
placements.

Google artificially inflates the placement of results from Wikipedia because it perceives Wikipedia
as an authoritative resource due to social mediation and commercial agnosticism. Wikipedia is
not infallible. However, someone finding it in the “most relevant” top results will certainly see it
as such.

Most SEOs hate keywords. I say that they are like Jessica Rabbit in “Who Framed Roger
Rabbit”…not bad, just drawn that way.

Keywords were the object of much abuse in the early days of the Web and were almost totally
discounted by the search engines. With the emerging Semantic Web strengthening the topic-
sensitive nature of relevance calculation, combined with the technology’s growing ability to
compare two content items for context, keywords might make sense again. In any event, they do
more good than harm. So, I advise my clients to have 2-4 key concepts from the page represented
here. The caveat is that they be from the page.

Topic-Sensitive PageRank
Computes PR based on a set of representational topics [augments PR with content analysis]
Topics are derived from the Open Directory Project
Uses a set of ranking vectors: pre-query selection of topics + at-query comparison of the similarity
of the query to those topics (sketched below)
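A toy sketch of that two-phase combination, with invented topic vectors and similarity numbers rather than real ODP data:

```python
# Precomputed (pre-query) PageRank vectors, one per topic.
topic_pr = {
    "autos":  {"pageA": 0.6, "pageB": 0.1},
    "health": {"pageA": 0.1, "pageB": 0.7},
}

# At query time, estimate how similar the query is to each topic
# (the published approach used per-topic class probabilities for
# the query terms; these numbers are made up).
query_topic_sim = {"autos": 0.9, "health": 0.1}

# Blend the topic vectors, weighted by query-topic similarity.
pages = {p for vector in topic_pr.values() for p in vector}
blended = {
    p: sum(query_topic_sim[t] * topic_pr[t].get(p, 0.0) for t in topic_pr)
    for p in pages
}
print(blended)   # pageA wins for this autos-leaning query
```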




Search 2.0 is the “wisdom of crowds.”
Now we help each other find things. Online, this takes the form of bookmarking and community
sites like Technorati (social sharing), Delicious (social bookmarking) and Twitter (micro-blogging),
among others. Search engines are now leveraging these forums, as well as their own extensive
data collection, to calculate relevance. Some believe that social media will replace search. How
can your friends and followers beat a 100-billion-page index? What if they don’t know?




If machines are methodical, as we’ve seen, and people are emotional, as we experience, where is the
middle ground? Are we working harder to really find what we need or just taking what we get and
calling it what we wanted in the first place?








Developed by a computer science student, this algorithm (Orion) was the subject of an intense bidding
war between Google and Microsoft that Google won. The student, Ori Alon, went to work for
Google in April 2006 and has not been heard from since. There is no contemporary information
on the algorithm or its developer.

Relational content modeling done by machines, usually surfacing contextualized next steps.




There is no such thing as “advanced search” any longer. We’re all lulled into the false sense that the
search engine is smarter than we are. Now the search engines present a mesmerizing array of choices
that distract from the original intent of the search.

Using the Internet: Skill Related Problems in User Online Behavior; van Deursen & van Dijk; 2009




Research tells us that searchers are having a hard time navigating results pages because of the
collapsing of results on the page and the contextual advertising bordering the organic results.

Using the Internet: Skill Related Problems in User Online Behavior; van Deursen & van Dijk; 2009




Using the Internet: Skill Related Problems in User Online Behavior; van Deursen & van Dijk; 2009




Watch out for those Facebook applications, quizzes, etc., tweets, and LinkedIn data.
Improving Search using Population Information (November 2008): Determine population
information associated with the query, derived from a population database
         •Locations of users
         •Populations that users are associated with
         •Groups users are associated with (gender, shared interests, self- & auto-assigned identity data)
Rendering Context Sensitive Ads for Multi-Topic Searchers (April 2008): Resolves ambiguities by
monitoring user behavior to determine specific interest
Presentation of Local Results (July 2008): Generates two sets of results, one with relevance based
on the location of the device used for the search
Detecting Novel Content (November 2008): Identifies and assigns a novelty score to one or more
textual sequences for an individual document in a set (toy sketch after this list)
Document Scoring based on Document Content Update (May 2007): Scoring based on how a
document is updated over time, its rate of change, and the rate of change of anchor-link text
pointing to the document
Document Scoring based on Link-based Criteria (April 2007): System to determine the time-varying
behavior of links pointing to a document; growth in the number of links pointing to the document
(whether it exceeds an acceptable threshold), freshness of links, age distribution of links
         •Deployed as Google Scout
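To ground one of these, here is the promised toy sketch of the “Detecting Novel Content” idea: score each document by the fraction of its word trigrams not seen in earlier documents. The granularity and scoring are my simplification, not the patent’s exact method:

```python
# Toy novelty score: fraction of a document's word trigrams that have
# not appeared in any earlier document. Documents are invented.
def trigrams(text):
    words = text.lower().split()
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

seen = set()
docs = [
    "the quick brown fox jumps over the lazy dog",
    "the quick brown fox naps under the old oak tree",
]
for doc in docs:
    grams = trigrams(doc)
    novelty = len(grams - seen) / len(grams)   # 1.0 = entirely new material
    seen |= grams
    print(round(novelty, 2), doc)
```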




Microsoft: Launches a “decision engine” with a focus on multiple meanings (contexts) as well as term
indexing and topic association and tracking
•Lead researcher Susan Dumais is at the forefront of using user behavior to predict search relevance
•Look to the recent acquisitions of Powerset (semantic indexing) and FAST ESP (semantic processing)

Calculating Valence of Expressions within Documents for Searching a Document Index (March
2009): System for natural language search and sentiment analysis through a breakdown of the
valence manipulation in a document
Efficiently Representing Word Sense Probabilities (April 2009): Word sense probabilities stored in
a semantic index and mapped to “buckets” (toy sketch after this list)
Tracking Storylines Around a Query (May 2008): Employs probabilistic or spectral techniques to
discover themes within documents delivered over a stream of time
         •Consolidates the plurality of info around certain subjects (tracks stories that continue over time)
         •Collects results over time and sorts them (keeps track of current themes and alerts to new ones)
                  •Track
                  •Rank (relevance)
                  •Present abstracts
         •Compares the query with the contents of each document to discover whether the query
         exists implicitly or explicitly in the received document
         •Builds topic models

Document Segmentation based on Visual Gaps (July 2006): Document white space/gaps used to
identify hierarchical structure
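And the promised sketch for the word-sense “buckets”: quantize each sense probability into a small integer so the semantic index can store a compact code instead of a float. The bucket count and sense labels here are invented:

```python
# Toy quantization of word-sense probabilities into buckets.
NUM_BUCKETS = 8

def bucket(probability):
    # Map a probability in [0, 1] to an integer bucket in [0, 7].
    return min(int(probability * NUM_BUCKETS), NUM_BUCKETS - 1)

senses = {"bank/river": 0.12, "bank/finance": 0.83, "bank/tilt": 0.05}
index_entry = {sense: bucket(p) for sense, p in senses.items()}
print(index_entry)   # {'bank/river': 0, 'bank/finance': 6, 'bank/tilt': 0}
```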




Systems and Methods for Contextual Transaction Proposals (July 2006)
Delivering Items Based on Links to Resources Associated with Results (April 2008): Presents links to
resources that are associated, by links, with the resources presented in the results set
Web Activity Monitoring System with Tracking by Categories and Terms (December 2006): Collects
event data from servers (traffic, search requests, purchases, etc.), categorizes and analyzes it to
detect flurries of activity (increased interest) associated with a topic, term or category




The Pew Internet & American Life Project found that over 60% of users trust search engine results
to be the most accurate.
No one really reads the terms-of-use agreements that they sign, and certainly no one keeps up with
changes in these agreements over time.

The intentions of an application with regard to data collection are not always transparent. What’s
up with those Facebook quizzes, surveys and games? Where does that information really live?

66% of Americans object to online tracking, according to a new study from the University of
Pennsylvania, and the number increased when subjects found out how many ways they are
tracked across the Web. 84% said it was not okay to be tracked on other websites.

55% of respondents aged 18 to 24 objected to tailored advertising.

Americans Oppose Web Tracking By Advertisers: Stephanie Clifford: International Herald Tribune
– October 1, 2009




Some observers claim that Google is now running on as many as a million Linux servers. At the very least, it is
running on hundreds of thousands. When you consider that the application Google delivers is instant access to
documents and services available from, by last count, more than 81 million independent web servers, we're
starting to understand how true it is, as Sun Microsystems co-founder John Gage famously said back in 1984,
that "the network is the computer." It took over 20 years for the rest of the industry to realize that vision, but
we're finally there. ...

First, privacy. Collective intelligence requires the storage of enormous amounts of data. And while this data
can be used to deliver innovative applications, it can also be used to invade our privacy. The recent news
disclosures about phone records being turned over to the NSA is one example. Yahoo's recent disclosure of the
identity of a Chinese dissident to Chinese authorities is another.

The internet has enormous power to increase our freedom. It also has enormous power to limit our freedom,
to track our every move and monitor our every conversation. We must make sure that we don't trade off
freedom for convenience or security. Dave Farber, one of the fathers of the Internet, is fond of repeating the
words of Ben Franklin: "Those who give up essential liberty to purchase a little temporary safety deserve
neither, and will lose both."

Second, concentration of power. While it's easy to see the user empowerment and democratization implicit in
web 2.0, it's also easy to overlook the enormous power that is being accrued by those who've successfully
become the repository for our collective intelligence. Who owns that data? Is it ours, or does it belong to the
vendor?

If history is any guide, the democratization promised by Web 2.0 will eventually be succeeded by new
monopolies, just as the democratization promised by the personal computer led to an industry dominated by
only a few companies. Those companies will have enormous power over our lives -- and may use it for good or
ill. Already we're seeing companies claiming that Google has the ability to make or break their business by how
it adjusts its search rankings. That's just a small taste of what is to come as new power brokers rule the
information pathways that will shape our future world.

http://radar.oreilly.com/2006/05/my-commencement-speech-at-sims.html
My Commencement Speech at SIMS (May 2006)

Equal Representation By Search Engines: Vaughn & Zhang (2007)




A search on Google U.S. shows a relevance focus on Wikipedia, leading off with the 1989
massacre. Video and image results also focus on this aspect of the search.




Google China shows a different form of relevance, with a focus on tourism to the square.




Google France follows the U.S. version’s lead, with the top 10 results dominated by results focused
on the massacre.




Bowman leaves Google
http://stopdesign.com/archive/2009/03/20/goodbye-google.html
“Yes, it’s true that a team at Google couldn’t decide between two blues, so they’re testing 41
shades between each blue to see which one performs better. I had a recent debate over whether
a border should be 3, 4 or 5 pixels wide, and was asked to prove my case. I can’t operate in an
environment like that. I’ve grown tired of debating such minuscule design decisions. There are
more exciting design problems in this world to tackle.”

The announcement of Bowman leaving Google started a lengthy thread on the Interaction Design
Association list about search design and interaction
http://www.ixda.org/discuss.php?post=40237



