Confidential + Proprietary
Fighting Misinformation and
Violent Extremism/Hate Content
in Cyberspace
Recent News Cycles Have Highlighted a Set of
Challenges:
● Violent Extremism Content
● Hate Speech
● Misinformation/Fake News
● (Cross-Border) Disinformation
Companies have to develop new solutions or
strengthen existing ones.
Search faces a delicate balancing act:
accomplishing our mission of getting
users what they want while respecting
our ethics as a company.
Some Related Challenges

Violent Extremism / Hate Speech
Violent Extremism: Content that recruits, fundraises, or promotes violence or terrorism. Examples include content meant to incite violence, celebrate terrorist attacks, or promote acts of terrorism. Graphic content in an educational (such as news), documentary, scientific, or artistic context should be indicated as such.
Hate Speech: While preserving freedom of expression, content that threatens or advocates harm to oneself or others; harasses, intimidates, or bullies an individual or group of individuals; or incites hatred against, promotes discrimination of, or disparages an individual or group on the basis of race or ethnic origin, religion, disability, age, nationality, veteran status, sexual orientation, gender, gender identity, or another characteristic associated with systemic discrimination or marginalization.

Cross-Border Disinformation
Coordinated Inauthentic Activity: Cross-border coordinated activity, generally driven by political motives rather than profit motives. Examples: false popularity/virality produced via bots or troll farms.
State-Sponsored Propaganda: Content placed by governments and intended to change or sway public perception.

Other Disinformation / Fake News*
Imposter, Misleading, or Deceptive “News”: Satire portrayed as truth; abc.com.co; 70news.
Clickbait or Sensationalist/Tabloid Content: “10 Shocking Things”, “Oprah’s secret revealed!”
Fringe, Conspiracy, or Controversial Opinion: Holocaust denial, 9/11 hoax, “vaccines cause autism”.

*Fake News analysis to span all products, not just News products.
Three Types of Solutions for Companies to Address These Problems
Policy: Define policies and standards to make decisions about products.
Product: Change products to improve results or implement policies, e.g. changing algorithms.
Operations: Manual (tool-assisted) work to collect information, produce analyses and insights, and implement manual steps.
Misinformation and Violent Extremism/Hate are Different
Two Integrated Workstreams to Address These Problems
Systemic Solutions
● Building better signals with improved
notions of content quality and better
models to predict quality
● Improving detection of queries where the
focus should be on the quality of results
(0.2%-0.5% of traffic).
● Implementing flight to quality algorithms
that change the weight of some signals
and/or turn off certain features
○ General fringe classifier
○ Incident fringe classifier
○ Violent Extremism (V/E) classifier
Policy Solutions
● Reduced reactive cycle to <48 hours.
● Improved process for approvals of
takedowns
● Implemented emergency processes to
react quickly to misinformation in crisis
contexts in selected countries
● Extending this work to cover reactive
monitoring, V/E content, global scope
Policy Solutions
Privileged + Confidential
Policy & Enforcement Current State: Violent Extremism / Hate Speech

Open Web: Preserve freedom of information while tolerating a few content restrictions to promote quality and user safety. Prefer algorithmic over manual solutions.
● Targeted policy? No. Enforcement options: legal removals + quality approach; better ranking. Details: VE & Hate: exploring engaging external VE experts to support better classification of harmful VE content.
● Targeted policy? Yes. Enforcement options: removal. Details:
○ VE: Strictly prohibit content related to terrorism. We also do not permit terrorist organizations to use the product. Examples include, but are not limited to, using the product to incite violence, celebrate terrorist attacks, or promote acts of terrorism. If posting graphic content in an educational (such as news), documentary, scientific, or artistic context, please be mindful to provide enough information to help people understand what’s going on.
○ Hate: Content that promotes discrimination or hatred, or that incites violence against individuals or groups based on race, ethnic origin, nationality, sex, gender identity, sexual orientation, religion, disability, age, veteran status, or any other characteristic associated with systemic discrimination or marginalization, is prohibited.

Curated Data: Emphasize quality, accuracy, ideological balance, and empathy for users.
● Targeted policy? Yes. Enforcement options: removal.

Users & News Publishers: Promote freedom of expression while preserving on-topic, civil discourse.
● Targeted policy? Yes. Enforcement options: removal.
Policy & Enforcement: Cross-border Disinformation

Open Web: Preserve freedom of information while tolerating few content restrictions to promote quality and user safety. Prefer algorithmic solutions.
● Targeted policy? No. Enforcement options: better ranking. Details: whether Search can derive a quality signal or spam policy leveraging the Cross-Border Policy below.* Current: a penalty in the News corpus will impact Search ranking.
● Targeted policy? Under development. Enforcement options: demotion or removal. Details: Cross-border Disinformation Conduct Policy: targets behavioral factors, i.e., networked activity between seemingly independent sites, unnatural traffic patterns, geo mismatch of a domain with the site’s target audience, and concealing connections to a state entity while targeting foreign audiences.

Curated Data: Emphasize quality, accuracy, ideological balance, and empathy for users.

Users & News Publishers: Promote freedom of expression while preserving on-topic, civil discourse.
Policy & Enforcement: Other Disinformation

Open Web: Preserve freedom of information while tolerating few content restrictions to promote quality and user safety. Prefer algorithmic solutions.
● Targeted policy? No. Enforcement options: better ranking. Details: rater guidelines for lower scoring of demonstrably inaccurate content; increasing quality for breaking news and violent incident queries, where inaccurate information is a large risk.
● Targeted policy? Partial. Enforcement options: removal. Details: blacklisting WebAnswers that are egregiously or offensively inaccurate.

Curated Data: Emphasize quality, accuracy, ideological balance, and empathy for users.
● Targeted policy? Yes (various). Enforcement options: removal. Details: using structured data reviewed by experts and using web evidence to resolve factual disputes in structured data.

Users & News Publishers: Promote freedom of expression while preserving on-topic, civil discourse.
● Targeted policy? Yes. Enforcement options: removal. Details: news publishers may not misrepresent, misstate, or conceal information about the site’s owner or primary purpose. News sites may not misrepresent identity or content. Removal from corpus if a publisher is repeatedly disproven by fact checkers.
Systemic Solutions
Different Root Causes at Play
● Problem Prediction: Are companies adversarially testing themselves?
● System Response: Is there a correct balance of algorithmic and manual response?
● Ranking & Intelligence: Are companies identifying “good” vs. “bad” content? Bad actors?
● Policy Narrative: Are companies sufficiently explaining our values and promises?
Solutions are Addressing the Root Causes (Problem Prediction, System Response, Ranking & Intelligence, Policy Narrative)

Fringe Query Expansion
Impacts: misinformation, violent extremism
Keynotes: algorithmically increase coverage of breaking news and violent extremism queries for flight to quality launches.

VE Content Ranking | Eval
Impacts: violent extremism
Keynotes: leverage deep experts to help identify inciting/recruiting VE content. Brings core company values into ranking.

Manual Operations
Impacts: misinformation, violent extremism
Keynotes: manual work (process) to produce stop-gap solutions, e.g. engaging flight to quality for at-risk topics during crises.

Foreign Actor Ops Penalties
Impacts: misinformation, election interference
Keynotes: threat analysis of misinformation ops. Penalties include removal from news surfaces [and spam penalties elsewhere].

Early Alert System
Impacts: misinformation, violent extremism, election interference, hate
Keynotes: efforts to start systematic adversarial testing of company products.

Narrative Development
Impacts: violent extremism, hate, misinformation
Keynotes: Policy, Search, and PR are coordinating to create core narratives around corporate values [and product promises].
Some Recommended Steps
Algorithmic Changes
● Build better signals with improved notions
of content quality and better models to
predict quality
● Improve detection of queries where
companies need to stress quality of
results (0.2%-0.5% of traffic)
● Implement flight to quality algorithms
that change the weight of some signals
and/or turn off certain features
○ General fringe classifier
○ Incident fringe classifier
○ Violent Extremism classifier
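The query-detection step above can be sketched as a threshold on a classifier score, tuned so that only a tiny share of traffic (the 0.2%-0.5% the deck cites) triggers flight to quality. This is a hypothetical illustration: the term list, scoring function, and threshold are invented, not the actual classifier.

```python
# Hypothetical sketch of fringe-query detection. The trained classifier is
# replaced by a toy scorer; terms and threshold are illustrative only.

def fringe_score(query: str) -> float:
    """Toy stand-in for a fringe/VE classifier score in [0, 1]."""
    risky_terms = {"hoax", "false flag", "crisis actor"}
    hits = sum(term in query.lower() for term in risky_terms)
    return min(1.0, hits / 2)

# Threshold chosen so only a small fraction of queries trigger flight to quality.
FRINGE_THRESHOLD = 0.5

def needs_flight_to_quality(query: str) -> bool:
    return fringe_score(query) >= FRINGE_THRESHOLD

queries = ["weather tomorrow", "shooting false flag crisis actor", "cake recipe"]
flagged = [q for q in queries if needs_flight_to_quality(q)]
```

In a real system the threshold would be set empirically against traffic logs so the flagged share lands in the target range.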
Reactive Monitoring of
Misinformation Content in Crises
In crisis situations, some search engines
temporarily show poor-quality results for some
queries because few high-quality ones exist.
Companies already have teams that stand up
in crisis situations.
They are already leveraging these
capabilities to create a temporary, manual
process solution until algorithmic ones are
developed.
Process Example for Reactive Monitoring During Crisis
Policy: Define and approve a clear policy for identifying which queries should trigger flight to safety.
Identify/Monitor: Extend existing Crisis Response/SOS and T&S processes to include the identification and monitoring of a set of queries that require flight to safety in crisis situations.
Escalate/Enact: Extend existing T&S processes to escalate these queries to the correct escalation path and trigger flight to safety.
Stand Down: Stand down the flight to safety after higher-quality content is available.
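The escalate-then-stand-down loop described in this process can be sketched as a small state holder. Everything here (class name, quality threshold, the notion of a scalar quality score) is an assumption for illustration, not the deck's actual tooling.

```python
# Illustrative sketch of the reactive-monitoring loop: a flagged query
# enters flight to safety and stands down once higher-quality content
# is available. Names and the 0.7 threshold are hypothetical.

class CrisisQueryMonitor:
    def __init__(self):
        self.in_flight_to_safety = set()

    def escalate(self, query: str) -> None:
        # Escalate/Enact: trigger flight to safety for a monitored query.
        self.in_flight_to_safety.add(query)

    def stand_down(self, query: str, quality_score: float,
                   min_quality: float = 0.7) -> bool:
        # Stand Down: exit flight to safety once result quality recovers.
        if query in self.in_flight_to_safety and quality_score >= min_quality:
            self.in_flight_to_safety.remove(query)
            return True
        return False

monitor = CrisisQueryMonitor()
monitor.escalate("city explosion live")
still_up = monitor.stand_down("city explosion live", quality_score=0.4)  # too low, stays in flight
released = monitor.stand_down("city explosion live", quality_score=0.9)  # stands down
```

The key design point mirrored here is that stand-down is condition-driven (quality recovered), not time-driven.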
Extend to Proactive Monitoring, Expand Scope to
Misinformation + Violent Extremism + Hate Content
● Build an operations team to run monitoring 24/7
● Define playbooks for 90% of escalations, including proactive PR positioning on top areas
● Establish types of queries that always trigger flight to quality
● Develop a feedback loop to help development of automated solutions
● Broaden scope from selected EN markets to the key global markets
● Build tools to support manual monitoring
Search faces a delicate balancing act:
accomplishing our mission of getting
users what they want while respecting
our ethics as a company.
Different but Connected Challenges in Misinformation
Several Components to Flight to Quality Changes
1. Build better signals
a. Improved notions of page quality (updated rater guidelines to make clear that quality includes non-structural features: news has to be true, and pages or sites need to be credible).
b. Changed our models to use the rater data to improve prediction of quality signals.
2. Detect queries where it is more important than average to stress quality of results (between 0.2%-0.5% of traffic).
3. When such queries are detected, the technical flight to quality starts by changing the weight of some signals, e.g. increasing the weight of quality, reducing the weight of clicks, and capping topicality.
a. Building a general fringe classifier
b. Building specific models for an incident fringe classifier with limited flight to quality
c. Building a Violent Extremism classifier: suppressing certain features (e.g. videos) for mostly news queries, giving higher weight to both authoritative and fresh content
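Step 3 (reweighting signals during flight to quality) can be made concrete with a minimal sketch. The signal names, weight values, and the 0.8 topicality cap below are invented for illustration; the deck only states the direction of each change (quality up, clicks down, topicality capped).

```python
# Minimal sketch of flight-to-quality reweighting, with made-up weights:
# raise the quality weight, lower the click weight, and cap topicality.

DEFAULT_WEIGHTS = {"quality": 1.0, "clicks": 1.0, "topicality": 1.0}
FLIGHT_WEIGHTS = {"quality": 2.0, "clicks": 0.3, "topicality": 1.0}
TOPICALITY_CAP = 0.8  # cap applied only while flight to quality is active

def score(signals: dict, flight_to_quality: bool) -> float:
    weights = FLIGHT_WEIGHTS if flight_to_quality else DEFAULT_WEIGHTS
    s = dict(signals)
    if flight_to_quality:
        s["topicality"] = min(s["topicality"], TOPICALITY_CAP)
    return sum(weights[k] * s[k] for k in weights)

# A high-quality, low-click page gains relative ground under flight to quality.
page = {"quality": 0.9, "clicks": 0.2, "topicality": 1.0}
normal = score(page, flight_to_quality=False)  # 0.9 + 0.2 + 1.0
flight = score(page, flight_to_quality=True)   # 1.8 + 0.06 + 0.8
```

The effect is that pages whose score rests on quality rather than engagement rank relatively higher when the mode is active.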
Policy and Applicable Search Features

Misinformation: Query patterns that could lead to potentially harmful misinformation. In a non-crisis situation, monitoring relies on a calendar of events, breaking news, and a dictionary of queries.
Applicable features: flight to safety, News, Feed.

Violent extremism: Content that recruits, fundraises, or promotes violence or terrorism. Examples include content meant to incite violence, celebrate terrorist attacks, or promote acts of terrorism. Graphic content in an educational (such as news), documentary, scientific, or artistic context should be indicated as such.
Applicable features: WebAnswers, Suggest, Knowledge Engine, Structured Data, Highlights, News, Feed, Image (SafeSearch).

Hate: Content that promotes, glorifies, or condones violence or has the primary purpose of inciting hatred against a protected group or a member of a protected group based on a protected characteristic (race, ethnic origin, religion, disability, age, nationality, veteran status, sexual orientation, gender, gender identity).
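The non-crisis misinformation monitoring described above (a dictionary of queries plus a calendar of sensitive events) can be sketched as a simple matcher. All data and names here are invented placeholders, not real monitored queries or events.

```python
# Hypothetical illustration of non-crisis monitoring: match incoming
# queries against a maintained query dictionary and a calendar of
# sensitive events. The dictionary and calendar entries are invented.

import datetime

QUERY_DICTIONARY = {"election fraud proof", "miracle cure"}
EVENT_CALENDAR = {datetime.date(2018, 11, 6): "US midterm elections"}

def monitoring_hits(query: str, today: datetime.date) -> list:
    """Return the monitoring signals that fire for this query today."""
    hits = []
    if query.lower() in QUERY_DICTIONARY:
        hits.append("dictionary match")
    if today in EVENT_CALENDAR:
        hits.append("event: " + EVENT_CALENDAR[today])
    return hits

hits = monitoring_hits("Election fraud proof", datetime.date(2018, 11, 6))
```

A production version would add breaking-news feeds as a third signal source, as the table notes; here that is omitted for brevity.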

More Related Content

PPT
Satori WP1 slides
PDF
Disinformation challenges tools and techniques to deal or live with it
PDF
Online Misinformation: Challenges and Future Directions
PPTX
2019: Regulating disinformation with artificial intelligence (AI)
PPTX
disinformation risk management: leveraging cyber security best practices to s...
PPTX
Amplification and Personalization: The impact of metrics, analytics, and algo...
PDF
CSW2022_10_risk_prioritisation.pptx.pdf
PPTX
Marsden Disinformation Algorithms #IGF2019
Satori WP1 slides
Disinformation challenges tools and techniques to deal or live with it
Online Misinformation: Challenges and Future Directions
2019: Regulating disinformation with artificial intelligence (AI)
disinformation risk management: leveraging cyber security best practices to s...
Amplification and Personalization: The impact of metrics, analytics, and algo...
CSW2022_10_risk_prioritisation.pptx.pdf
Marsden Disinformation Algorithms #IGF2019

Similar to Fighting Misinformation in Cyberspace (20)

PPTX
Fake News Workshop Powerpoint Presentation
PPTX
2020 09-01 disclosure
PDF
Facebook usato da Governi anche per fake news
PDF
Techhub Riga misinformation meet-up, 15 March 2018
PPTX
Spark Social Media
PDF
Understanding Media Literacy and Managing Misinformation (2024 edition)
PDF
Tactical Misinformation-Disinformation in your Organization
PDF
Challenges of social media analysis in the real world
PPTX
Long, J. & Hicks, J. Think Before you Link, a Fake News Redux: Identifying bi...
PPTX
Current issues in the health of information ecosystem
PDF
The Changing Face of Crisis in the Digital Age: Part III
PPTX
Terp breuer misinfosecframeworks_cansecwest2019
PPTX
Misinfosec frameworks Cansecwest 2019
PPTX
CansecWest2019: Infosec Frameworks for Misinformation
PPTX
Managing crisis world vision
PDF
The Dark Arts of Content Leadership
PDF
A decision lens for complexity v10
PDF
CSW2022_08_behaviours.pptx.pdf
PDF
Media, Algorithms and the Filter Bubble
PDF
Discovering Credible Events in Near Real Time from Social Media Streams
Fake News Workshop Powerpoint Presentation
2020 09-01 disclosure
Facebook usato da Governi anche per fake news
Techhub Riga misinformation meet-up, 15 March 2018
Spark Social Media
Understanding Media Literacy and Managing Misinformation (2024 edition)
Tactical Misinformation-Disinformation in your Organization
Challenges of social media analysis in the real world
Long, J. & Hicks, J. Think Before you Link, a Fake News Redux: Identifying bi...
Current issues in the health of information ecosystem
The Changing Face of Crisis in the Digital Age: Part III
Terp breuer misinfosecframeworks_cansecwest2019
Misinfosec frameworks Cansecwest 2019
CansecWest2019: Infosec Frameworks for Misinformation
Managing crisis world vision
The Dark Arts of Content Leadership
A decision lens for complexity v10
CSW2022_08_behaviours.pptx.pdf
Media, Algorithms and the Filter Bubble
Discovering Credible Events in Near Real Time from Social Media Streams
Ad

More from The Wisdom Daily (20)

PPTX
Engineering UX
PPTX
How to Scale for IoT?
PPTX
Digital Transformation: Best Practices
PPTX
How to Design for User Trust?
PPTX
Building Trust in the Cyberspace
PPTX
How to Get Started in ML?
PPTX
Security and Privacy Issues in Deep Learning
PPTX
Understanding Intelligence: Ml vs. AI
PPTX
Comp science
PPTX
Fundamentals of Big Data
PPTX
Mobile Best Practices for UX
PPTX
UX for Product Excellence
PPTX
Principles of UX Engineering
PPTX
How to Conquer the Field of UX?
PPTX
The How, Why and What of Metrics?
PPTX
How to Make Your Ideas Stick for UX?
PPTX
Fundamentals of UX Design
PPTX
Basics of UX Research
PPTX
How to Design in a Multiscreen World ?
PPTX
Deep learning & Humanity's Grand Challenges
Engineering UX
How to Scale for IoT?
Digital Transformation: Best Practices
How to Design for User Trust?
Building Trust in the Cyberspace
How to Get Started in ML?
Security and Privacy Issues in Deep Learning
Understanding Intelligence: Ml vs. AI
Comp science
Fundamentals of Big Data
Mobile Best Practices for UX
UX for Product Excellence
Principles of UX Engineering
How to Conquer the Field of UX?
The How, Why and What of Metrics?
How to Make Your Ideas Stick for UX?
Fundamentals of UX Design
Basics of UX Research
How to Design in a Multiscreen World ?
Deep learning & Humanity's Grand Challenges
Ad

Recently uploaded (20)

PDF
Convolutional neural network based encoder-decoder for efficient real-time ob...
PPTX
The various Industrial Revolutions .pptx
PDF
Credit Without Borders: AI and Financial Inclusion in Bangladesh
PDF
Developing a website for English-speaking practice to English as a foreign la...
PPTX
Custom Battery Pack Design Considerations for Performance and Safety
PDF
Taming the Chaos: How to Turn Unstructured Data into Decisions
PDF
UiPath Agentic Automation session 1: RPA to Agents
PDF
CloudStack 4.21: First Look Webinar slides
PDF
Consumable AI The What, Why & How for Small Teams.pdf
PPTX
Final SEM Unit 1 for mit wpu at pune .pptx
PDF
Hybrid horned lizard optimization algorithm-aquila optimizer for DC motor
PDF
Hindi spoken digit analysis for native and non-native speakers
PPT
What is a Computer? Input Devices /output devices
PDF
STKI Israel Market Study 2025 version august
PDF
OpenACC and Open Hackathons Monthly Highlights July 2025
PDF
Zenith AI: Advanced Artificial Intelligence
PDF
A comparative study of natural language inference in Swahili using monolingua...
PDF
Getting started with AI Agents and Multi-Agent Systems
PDF
TrustArc Webinar - Click, Consent, Trust: Winning the Privacy Game
PPTX
AI IN MARKETING- PRESENTED BY ANWAR KABIR 1st June 2025.pptx
Convolutional neural network based encoder-decoder for efficient real-time ob...
The various Industrial Revolutions .pptx
Credit Without Borders: AI and Financial Inclusion in Bangladesh
Developing a website for English-speaking practice to English as a foreign la...
Custom Battery Pack Design Considerations for Performance and Safety
Taming the Chaos: How to Turn Unstructured Data into Decisions
UiPath Agentic Automation session 1: RPA to Agents
CloudStack 4.21: First Look Webinar slides
Consumable AI The What, Why & How for Small Teams.pdf
Final SEM Unit 1 for mit wpu at pune .pptx
Hybrid horned lizard optimization algorithm-aquila optimizer for DC motor
Hindi spoken digit analysis for native and non-native speakers
What is a Computer? Input Devices /output devices
STKI Israel Market Study 2025 version august
OpenACC and Open Hackathons Monthly Highlights July 2025
Zenith AI: Advanced Artificial Intelligence
A comparative study of natural language inference in Swahili using monolingua...
Getting started with AI Agents and Multi-Agent Systems
TrustArc Webinar - Click, Consent, Trust: Winning the Privacy Game
AI IN MARKETING- PRESENTED BY ANWAR KABIR 1st June 2025.pptx

Fighting Misinformation in Cyberspace

  • 1. Confidential + Proprietary Fighting Misinformation and Violent Extremism/Hate Content in CyberSpace
  • 2. Recent News Cycles have Highlighted a Set of Challenges: ● Violent/Extremism Content ● Hate Speech ● Misinformation/Fake News ● (Cross border) Disinformation Companies have to develop new solutions or strengthen existing solutions.
  • 3. Search has a delicate balancing act in accomplishing our mission to get what a user wants while respecting our ethics as a company Some Related Challenges Violent Extremism / Hate Speech Cross-Border Disinformation Other Disinformation / Fake News* Violent Extremism: Content that recruits, fundraises, or promotes violence or terrorism. Examples include content meant to incite violence, celebrate terrorist attacks, or promote acts of terrorism. Graphic content in an educational (such as news), documentary, scientific, or artistic context, should be indicated as such. Hate Speech: While preserving freedom of expression, content that threatens or advocates for harm on oneself or others; harasses, intimidates or bullies an individual or group of individuals; incites hatred against, promotes discrimination of, or disparages an individual or group on the basis of their race or ethnic origin, religion, disability, age, nationality, veteran status, sexual orientation, gender, gender identity, or other characteristic that is associated with systemic discrimination or marginalization Coordinated Inauthentic Activity: Cross- border coordinated activity. Generally driven by political motives, rather than profit motives. Examples: false popularity/virality produced via bots or troll farms. State-sponsored Propaganda: Content that is placed by governments intended to change or sway public perception Imposter or Misleading or Deceptive “News”: Satire portrayed as truth, abc.com.co, 70news Clickbait or Sensationalist or Tabloidy content: “10 Shocking Things”, “Oprah’s secret revealed!” Fringy or Conspiracy or Controversial Opinion: Holocaust denial, 9/11 hoax, Vaccines cause autism *Fake News analysis to span all products, not just News products
  • 4. Three Types of Solutions to Avoid These Problems for Companies Define policies and standards to make decisions about products Policy Change products to improve results or implement policies, e.g. changing algorithms Product Manual (tool assisted) work to collect information, produce analyses and insights, implement manual steps Operations
  • 5. Misinformation and Violent Extremism/Hate are Different Misinformation Violent/Extremism
  • 6. Two Integrated Workstreams to Address These Problems Systemic Solutions ● Building better signals with improved notions of content quality and better models to predict quality ● Improving detection of queries where the focus should be on the quality of results (0.2%-0.5% of traffic). ● Implementing flight to quality algorithms that change the weight of some signals and/or turn off certain features ○ General fringe classifier ○ Incident fringe classifier ○ Violent Extremism (V/E) classifier Policy Solutions ● Reduced reactive cycle to <48 hours. ● Improved process for approvals of takedowns ● Implemented emergency processes to quickly react to misinformation in Crisis contexts in selected countries ● Extending this work to cover reactive monitoring, V/E content, global scope
  • 8. Privileged + Confidential Policy & Enforcement Current State: Violent Extremism / Hate Speech Ethos Targeted Policy? Enforcement Options Details Open Web: Preserve freedom of information while tolerating a few content restrictions to promote quality and user safety. Prefer algorithmic over manual solutions. No Legal Removals + Quality Approach Better Ranking VE & Hate: Exploring engaging VE external experts to support better classification of harmful VE content Yes Removal VE: Strictly prohibit content related to terrorism. We also do not permit terrorist organizations to use product. Examples include but are not limited to: Using product to incite violence, celebrate terrorist attacks, promote acts of terrorism. If posting graphic content in an educational (such as news), documentary, scientific, or artistic context, please be mindful to provide enough information to help people understand what’s going on. Hate: Content that promotes discrimination or hatred, or that incites violence against individuals or groups based on based on race, ethnic origin, nationality, sex, gender identity, sexual orientation, religion, disability, age, veteran status, or any other characteristic that is associated with systemic discrimination or marginalization is prohibited. Curated Data: Emphasize quality, accuracy, ideological balance, and empathy for users. Yes Removal Users & News Publishers: Promote freedom of expression while preserving on-topic, civil discourse Yes Removal 8
  • 9. Privileged + Confidential Ethos Targeted Policy? Enforcement Options Details Open Web: Preserve freedom of information while tolerating few content restrictions to promote quality and user safety. Prefer algorithmic solutions. No Better Ranking Whether Search can derive a quality signal or spam policy leveraging the below Cross-Border Policy.* Current: Penalty in News Corpus will impact Search Ranking. Under Develop. Demotion or Removal Cross-border Disinformation Conduct Policy: Targets behavioral factors, i.e., networked activity between seemingly independent sites, unnatural traffic patterns, geo mismatch of domain with target audience of site and concealing connections to a state entity while targeting foreign audiences. Curated Data: Emphasize quality, accuracy, ideological balance, and empathy for users. Users & News Publishers: Promote freedom of expression while preserving on-topic, civil discourse 9 Policy & Enforcement: Cross-border Disinformation
  • 10. Privileged + Confidential Policy & Enforcement: Other Disinformation Ethos Targeted Policy? Enforcement Options Details Open Web: Preserve freedom of information while tolerating few content restrictions to promote quality and user safety. Prefer algorithmic solutions. No Better Ranking Rater guidelines for lower scoring of demonstrably inaccurate content Increasing quality for breaking news, violent incident queries where inaccurate info is large risk Partial Removal Blacklisting WebAnswers that are egregiously or offensively inaccurate. Curated Data: Emphasize quality, accuracy, ideological balance, and empathy for users. Yes (various) Removal Using structured data reviewed by experts and using web evidence to resolve factual disputes in structured data Users & News Publishers: Promote freedom of expression while preserving on-topic, civil discourse Yes Removal News publishers may not misrepresent, misstate, or conceal information about the site’s owner or primary purpose. News sites may not misrepresent identity or content. Removal from corpus if publisher repeatedly disproven by fact checkers. 10
  • 12. Different Root Causes at Play Are companies adversarially testing themselves? Problem Prediction Correct balancing of algo and manual response? System Response Are companies identifying “good” vs.“bad” content? Bad actors? Are companies sufficiently explaining our values and promises? Ranking & Intelligence Policy Narrative
  • 13. Solutions are Addressing the Root Causes Problem Prediction System Response Ranking & Intelligence Policy Narrative Fringe Query Expansion Impacts: Misinformation, violent extremism Keynotes: Algorithmically increase coverage of breaking news and violent extremism queries for flight to quality launches. VE Content Ranking | Eval Impacts: Violent extremism Keynotes: Leverage deep experts to inform inciting/recruiting VE content. Brings core company values into ranking. Manual Operations Impacts: Misinformation, violent extremism Keynotes: Manual work (process) to product stop-gap solutions, e.g. engaging flight to quality for at risk topics during crises Foreign Actor Ops Penalties Impacts: Misinformation, election interference Keynotes: Threat analysis of misinformation ops. Penalties include removal from news surfaces [and spam penalties elsewhere] Early Alert System Impacts: Misinformation, violent extremism, election interference, hate Keynotes: Efforts to start systematic adversarial testing company products Narrative Development Impacts: Violent extremism, hate, misinformation Keynotes: Policy, Search and PR are coordinating to create core narratives around corporate values [and product promises]
  • 14. Some Steps Recommended to be Taken Algorithmic Changes ● Built better signals with improved notions of content quality and better models to predict quality ● Improved detection of queries where companies need to stress quality of results (0.2%-0.5% of traffic). ● Implemented flight to quality algorithms that change the weight of some signals and/or turn off certain features ○ General fringe classifier ○ Incident fringe classifier ○ Violent Extremism classifier
  • 15. Reactive Monitoring of Misinformation Content in Crises - In crisis situations, some search engines temporarily show poor quality results for some queries because there are few high quality ones. There are company teams that already stand up in crisis situations. Companies are already leveraging their capabilities to help create a temporary, manual process solution until algorithmic ones are developed.
  • 16. Process Example for Reactive Monitoring During Crisis Policy Define and approve a clear policy for identifying which queries should trigger flight to safety. Stand Down Stand down the flight to safety after higher quality content is available Identify/Monitor Extend existing Crisis Response/SOS and T&S processes to include the identification and monitoring of a set of queries that require a flight for safety in Crisis situations. Escalate/Enact Extend existing T&S processes to escalate these queries to the correct escalation path and trigger flight to safety
  • 17. Extend to Proactive Monitoring, Expand Scope to Misinformation + Violent Extremism + Hate Content Build operations team to run monitoring 24/7 Define play-books for 90% of the escalations, including PR proactive positioning on top areas Establish types of queries that always trigger flight to quality Develop feedback loop to help development of automated solutions Broaden scope from selected EN markets to most key global markets Build tools to support manual monitoring
  • 18. Search has a delicate balancing act in accomplishing our mission to get what a user wants while respecting our ethics as a company Different but Connected Challenges in Misinformation Violent Extremism / Hate Speech Cross-Border Disinformation Other Disinformation / Fake News* Violent Extremism: Content that recruits, fundraises, or promotes violence or terrorism. Examples include content meant to incite violence, celebrate terrorist attacks, or promote acts of terrorism. Graphic content in an educational (such as news), documentary, scientific, or artistic context, should be indicated as such. Hate Speech: While preserving freedom of expression, content that threatens or advocates for harm on oneself or others; harasses, intimidates or bullies an individual or group of individuals; incites hatred against, promotes discrimination of, or disparages an individual or group on the basis of their race or ethnic origin, religion, disability, age, nationality, veteran status, sexual orientation, gender, gender identity, or other characteristic that is associated with systemic discrimination or marginalization Coordinated inauthentic activity: Cross- border coordinated activity. Generally driven by political motives, rather than profit motives. Examples: false popularity/virality produced via bots or troll farms. State-sponsored Propaganda: Content that is placed by governments intended to change or sway public perception Imposter or misleading or deceptive “news”: Satire portrayed as truth, abc.com.co, 70news Clickbait or sensationalist or tabloidy content: “10 shocking things”, “Oprah’s secret revealed!” Fringy or conspiracy or controversial opinion: Holocaust denial, 9/11 hoax, vaccines cause autism *Fake News analysis to span all products, not just News products
  • 19. Several Components to Flight-to-Quality Changes
    1. Build better signals:
       a. Improved notions of page quality (updated rater guidelines to make clear that quality includes non-structural features: news has to be true, and pages or sites need to be credible).
       b. Changed our models to use the raters' data to improve prediction of quality signals.
    2. Detect queries where it is more important than average to stress quality of results (between 0.2% and 0.5% of traffic).
    3. When such queries are detected, the technical flight to quality starts by changing the weight of some signals, e.g. increasing the weight of quality, reducing the weight of clicks, and capping topicality.
       a. Building a fringe classifier.
       b. Building specific models for an incident fringe classifier with limited flight to quality.
       c. Building a Violent Extremism classifier: suppressing certain features (e.g. videos) for mostly news queries, and giving higher weight to both authoritative and fresh content.
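Step 3 above can be sketched as a re-weighted ranking function. This is a minimal illustrative sketch, not the real system: the signal names (`quality`, `clicks`, `topicality`), the weight values, and the topicality cap are all hypothetical assumptions chosen to show the mechanism.

```python
# Hypothetical sketch of "flight to quality" re-ranking.
# Signal names, weights, and the cap value are illustrative, not real.

def score(signals, weights, topicality_cap=None):
    """Combine ranking signals as a weighted sum, optionally capping topicality."""
    topicality = signals["topicality"]
    if topicality_cap is not None:
        topicality = min(topicality, topicality_cap)
    return (weights["quality"] * signals["quality"]
            + weights["clicks"] * signals["clicks"]
            + weights["topicality"] * topicality)

# Normal queries weight all signals equally; sensitive queries up-weight
# quality, down-weight clicks, and cap topicality (values are assumptions).
NORMAL_WEIGHTS = {"quality": 1.0, "clicks": 1.0, "topicality": 1.0}
FLIGHT_TO_QUALITY_WEIGHTS = {"quality": 2.0, "clicks": 0.5, "topicality": 1.0}

def rank(pages, query_is_sensitive):
    """Order candidate pages, switching weight profiles for sensitive queries."""
    if query_is_sensitive:
        key = lambda p: score(p, FLIGHT_TO_QUALITY_WEIGHTS, topicality_cap=0.8)
    else:
        key = lambda p: score(p, NORMAL_WEIGHTS)
    return sorted(pages, key=key, reverse=True)
```

Under this sketch, a high-click, low-quality page that wins on a normal query loses to a high-quality page once the sensitive-query profile is applied.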
  • 20. Policy and Applicable Search Features
    - Misinformation: Query patterns that could lead to potentially harmful misinformation. In a non-crisis situation, monitoring relies on a calendar of events, breaking news, and a dictionary of queries. Applicable features: Flight to safety, News Feed.
    - Violent Extremism: Content that recruits, fundraises, or promotes violence or terrorism. Examples include content meant to incite violence, celebrate terrorist attacks, or promote acts of terrorism. Graphic content in an educational (such as news), documentary, scientific, or artistic context should be indicated as such. Applicable features: Webanswers, Suggest, Knowledge Engine, Structured Data Highlights, News Feed, Image (Safe Search).
    - Hate: Content that promotes, glorifies, or condones violence, or has the primary purpose of inciting hatred against a protected group or a member of a protected group based on a protected characteristic (race, ethnic origin, religion, disability, age, nationality, veteran status, sexual orientation, gender, gender identity).
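The "dictionary of queries" used for non-crisis monitoring could be realized as a simple pattern match over incoming queries. The patterns below are invented placeholders purely to illustrate the shape of such a trigger; the real dictionary is not given in the source.

```python
import re

# Hypothetical dictionary of sensitive query patterns (illustrative only;
# a real deployment would maintain this list editorially per market).
SENSITIVE_PATTERNS = [
    re.compile(r"\bhoax\b"),
    re.compile(r"\bfalse flag\b"),
    re.compile(r"\bcrisis actor"),
]

def triggers_flight_to_quality(query):
    """Return True if the query matches any sensitive pattern (case-insensitive)."""
    q = query.lower()
    return any(pattern.search(q) for pattern in SENSITIVE_PATTERNS)
```

In practice such a match would be one input among several (calendar of events, breaking-news detection) rather than the sole trigger.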

Editor's Notes

  • #6: Specific steps we have taken include:
    1. Algorithm changes, like demotion for XX, YY, ZZ (incident fringe)
    2. Reducing the reactive cycle to <48 hours
       - Process for approvals
       - Identifying bad queries (using the Crisis Response process)
    3. Improved signals
  • #9: *Impacts ranking/q-score
  • #10: *Discussions starting with webspam
  • #14: Specific steps to be taken include:
    1. Algorithm changes, like demotion for XX, YY, ZZ (incident fringe)
    2. Reducing the reactive cycle to <48 hours
       - Process for approvals
       - Identifying bad queries (using the Crisis Response process)
    3. Improved signals
  • #15: Specific steps we have taken include:
    1. Algorithm changes, like demotion for XX, YY, ZZ (incident fringe)
    2. Reducing our reactive cycle to <48 hours
       - Process for approvals
       - Identifying bad queries (using the Crisis Response process)
    3. Improved signals