Big Data For A Better World
Sponsors:
Tonight’s schedule
• Panel Presentation
• Keynote
• Networking
Panel Presentation
• Leon Wilson
• David M. Holmes
• Jason Therrien
Big Data: The Promise, the
Premise and the Practice
Kambiz (come-bees) Ghazinour
Advanced Information Security and Privacy Research Lab
Kent State University
Nov 16, 2017
Big Data: The Promise, the
Premise and the Practice
Big Data: The Promise, the
Premise and the Practice
Volume
Velocity
Variety
Veracity
Big, Fast, Diverse, Uncertain Data:
The Promise, the Premise and the
Practice
Google, MapReduce 2004
Big, Fast, Diverse, Inaccurate Data:
The Problem of Promising
Protection of Personal Information
in a Protected and Privacy
Preserving Platform in Practice
Phone Metadata
The Stanford Experiment: see Data and Goliath by Bruce Schneier
• phone metadata from 500 volunteers
• One called a hospital, a medical lab, a pharmacy, and
several short calls to a monitoring hotline for a heart
monitor
• One called her sister at length, then an abortion
clinic, made further calls two weeks later, and a final
call a month later.
• a heart patient, an abortion …
• This is just metadata, not content, yet it is very
revealing. Who should be able to see it?
Personal Data and Privacy
No one shall be subjected to arbitrary
interference with his privacy, family, home
or correspondence, nor to attacks upon his
honor and reputation. Everyone has the
right to the protection of the law against
such interference or attacks
• Article 12 of the Universal Declaration of Human
Rights
Nothing to hide, nothing to fear?
• Many people need to control their privacy
– victims of rape or other trauma,
– people escaping abusive relationships
– people who may be discriminated against (HIV positive, previous mental illness, spent convictions,
recovering addicts)
– people at risk of “honor” violence from their families for breaking cultural norms
– adopters, protecting their adopted children from the birth families that abused them
– witness protection, undercover police, some social workers and prison staff ….
• It is unthinking or callous to see other people’s privacy as unimportant.
• Data is forever and your circumstances or society’s attitudes may change
The Value of Big Data
• Facebook and Amazon were each valued at
around $500B as of July 2017
• Most of Facebook’s value comes from
personal data
Anonymization
• Statistical analyses can be anonymous: “70 percent of
American smokers want to quit” exposes no
personal data
• Data about individuals can also be anonymized, but this
becomes very difficult once more than a few facts
are included, even if those facts are not specific and
some of them are wrong (e.g., Netflix)
3 ways to anonymize
• Suppress - omit from the released data
• Generalize - for example, replace birth date
with something less specific, like birth year
• Perturb - make changes to the data
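As a sketch, the three operations might look like this on a toy record; all field names and values here are made up for illustration, not taken from any real release:

```python
import random

# Hypothetical record to be released.
record = {"name": "Alice", "zip": "44240", "birth_date": "1984-06-15", "diagnosis": "flu"}

def suppress(rec, field):
    """Suppress: omit the field from the released data."""
    return {k: v for k, v in rec.items() if k != field}

def generalize(rec):
    """Generalize: replace birth date with birth year, truncate the ZIP."""
    out = dict(rec)
    out["birth_date"] = rec["birth_date"][:4]  # "1984-06-15" -> "1984"
    out["zip"] = rec["zip"][:3] + "**"         # "44240" -> "442**"
    return out

def perturb(rec, slack=1):
    """Perturb: shift the birth year by a small random amount."""
    out = dict(rec)
    year = int(rec["birth_date"][:4]) + random.randint(-slack, slack)
    out["birth_date"] = str(year) + rec["birth_date"][4:]
    return out

released = generalize(suppress(record, "name"))
print(released)  # {'zip': '442**', 'birth_date': '1984', 'diagnosis': 'flu'}
```

Real releases combine all three; as the examples that follow show, even that is often not enough.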
Anonymization is difficult
Example 1: AOL Search Data August
2006
• To stimulate research into the value of
search data, AOL released the
anonymized search records of 658,000
users over a three-month period from
March to May 2006
AOL anonymization
• AOL had tried to anonymize the data by removing each
searcher’s IP address and replacing the AOL username with a unique
random identifier that still linked the searches of each individual, so that the
data remained useful for research
• It did not take long for two journalists to identify user 4417749, who
had searched for people with the last name Arnold, “homes sold in
shadow lake subdivision Gwinnett county Georgia” and “pine straw in
Lilburn,” as Thelma Arnold, a widow living in Lilburn, Georgia
• Her other searches provide a deeply personal view of her life,
difficulties and desires
AOL faced strong criticism
• The violation of privacy was widely
condemned
• AOL described their action as a “screw up”
• They took down the data, but it was too late.
The internet never forgets. Several mirror
sites had already been set up.
http://www.not-secret.com
Example 2:
The Netflix™ Prize
• In October 2006, Netflix launched a $1m prize for an
algorithm that predicted ratings 10% better than its
existing algorithm, Cinematch
• Participants were given access to the contest training data
set of more than 100 million ratings from over 480
thousand randomly-chosen, anonymous customers on
nearly 18 thousand movie titles.
• How much information would you need to be able to
identify customers?
Netflix
• Netflix said: “To protect customer privacy, all personal
information identifying individual customers has been
removed and all customer ids have been replaced by
randomly-assigned ids. The date of each rating and the
title and year of release for each movie are provided. No
other customer or movie information is provided.”
• Two weeks after the prize was launched, Arvind
Narayanan and Vitaly Shmatikov of the University of
Texas at Austin announced that they could identify a high
proportion of the 480,000 subscribers in the training
data.
Narayanan and Shmatikov’s results
• How much does the attacker need to know about a Netflix subscriber in
order to identify her record in the dataset, and thus completely learn her
movie viewing history? Very little.
• For example, suppose the attacker learns a few random ratings and the
corresponding dates for some subscriber, perhaps from coffee-time chat.
• With 8 movie ratings (of which we allow 2 to be completely wrong) and
dates that may have a 3-day error, 96% of Netflix subscribers whose
records have been released can be uniquely identified in the dataset.
• For 64% of subscribers, knowledge of only 2 ratings and dates is sufficient
for complete deanonymization, and for 89%, 2 ratings and dates are
enough to reduce the set of records to 8 out of almost 500,000, which
can then be inspected for further deanonymization.
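The matching logic behind such an attack can be sketched in a few lines. The dataset, movie titles, and thresholds below are invented for illustration; this is not Narayanan and Shmatikov's actual code, just the idea of tolerant matching against auxiliary knowledge:

```python
from datetime import date

# Toy "anonymized" ratings release: id -> list of (movie, rating, date).
dataset = {
    "u1": [("Movie A", 5, date(2005, 3, 1)), ("Movie B", 3, date(2005, 4, 10))],
    "u2": [("Movie A", 4, date(2005, 3, 2)), ("Movie C", 2, date(2005, 5, 5))],
}

def matches(record, aux, day_slack=3):
    """Count auxiliary points consistent with a record: same movie,
    same rating, and a date within +/- day_slack days."""
    hits = 0
    for movie, rating, d in aux:
        for m, r, rd in record:
            if m == movie and r == rating and abs((rd - d).days) <= day_slack:
                hits += 1
                break
    return hits

def deanonymize(aux, min_hits=2):
    """Return the ids of all records consistent with enough auxiliary points."""
    return [uid for uid, rec in dataset.items() if matches(rec, aux) >= min_hits]

# The attacker overheard two ratings with only approximate dates:
aux = [("Movie A", 4, date(2005, 3, 4)), ("Movie C", 2, date(2005, 5, 6))]
print(deanonymize(aux))  # ['u2']
```

The point of the result above is that in a sparse dataset, even fuzzy auxiliary knowledge usually narrows the candidates to one record.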
Why are Narayanan and Shmatikov’s
results important?
1. They were results from probability theory, so they apply to all
sparse datasets. (They tested the results later, using the
Internet Movie Database IMDb as a source of data).
2. Psychologists at Cambridge University have shown that a small
number of seemingly innocuous Facebook Likes can be used to
automatically and accurately predict a range of highly sensitive
personal attributes, including sexual orientation, ethnicity,
religious and political views, personality traits, intelligence,
happiness, use of addictive substances, parental separation,
age, and gender.
“Security” Attack Scenario
The Attack Scenario
The Usefulness Challenge
The Attack Scenario - Anonymization
Re-identification by linking
Re-identification by linking (example)
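A linking attack of this kind can be sketched as a join on quasi-identifiers (ZIP code, date of birth, sex), in the style of the classic medical-records/voter-roll linkage. Both tables below are fabricated:

```python
# "Anonymized" medical release: names removed, quasi-identifiers kept.
medical = [
    {"zip": "44240", "dob": "1984-06-15", "sex": "F", "diagnosis": "flu"},
    {"zip": "44240", "dob": "1979-01-02", "sex": "M", "diagnosis": "asthma"},
]

# Public voter roll with names attached.
voters = [
    {"name": "Alice", "zip": "44240", "dob": "1984-06-15", "sex": "F"},
    {"name": "Bob",   "zip": "44240", "dob": "1979-01-02", "sex": "M"},
]

QUASI = ("zip", "dob", "sex")

def link(medical, voters):
    """Join the two tables on the quasi-identifier triple."""
    index = {tuple(v[q] for q in QUASI): v["name"] for v in voters}
    return [
        {"name": index[key], **m}
        for m in medical
        if (key := tuple(m[q] for q in QUASI)) in index
    ]

for row in link(medical, voters):
    print(row["name"], "->", row["diagnosis"])
```

Neither table leaks anything sensitive on its own; the join does.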
Anonymization in Data Systems
K-Anonymity
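A minimal check of the k-anonymity property might look like this; the quasi-identifier names and rows are illustrative:

```python
from collections import Counter

QUASI = ("zip", "age_range", "sex")

def k_anonymity(rows):
    """A table is k-anonymous when every quasi-identifier combination
    appears at least k times; return the k this table achieves."""
    counts = Counter(tuple(r[q] for q in QUASI) for r in rows)
    return min(counts.values())

table = [
    {"zip": "442**", "age_range": "30-39", "sex": "F", "diagnosis": "flu"},
    {"zip": "442**", "age_range": "30-39", "sex": "F", "diagnosis": "cold"},
    {"zip": "442**", "age_range": "40-49", "sex": "M", "diagnosis": "asthma"},
]

print(k_anonymity(table))  # 1: the lone 40-49 male is still unique
```

Suppression and generalization are applied until the achieved k reaches the target, trading utility for anonymity.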
Output Perturbation
Example of suppression and
generalization
Classification of Attributes
Example
Finding similar instances
• A Fast Approximate Nearest Neighbor Search
Algorithm in the Hamming Space
– Locality sensitive hashing (LSH)
– Error Weighted Hashing (EWH)
– Etc.
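A toy illustration of locality-sensitive hashing in the Hamming space, using random bit sampling (the classic LSH family for Hamming distance); all codes and parameters below are made up:

```python
import random

def hash_family(n_bits, k, n_tables=4, seed=0):
    """LSH for Hamming space: each hash samples k random bit positions;
    codes at small Hamming distance collide with high probability."""
    rng = random.Random(seed)
    return [rng.sample(range(n_bits), k) for _ in range(n_tables)]

def signature(code, positions):
    return tuple(code[p] for p in positions)

codes = {
    "a": [0, 1, 1, 0, 1, 0, 0, 1],
    "b": [0, 1, 1, 0, 1, 0, 1, 1],  # Hamming distance 1 from "a"
    "c": [1, 0, 0, 1, 0, 1, 1, 0],  # complement of "a": maximally far
}

tables = hash_family(n_bits=8, k=3)
query = codes["a"]
candidates = {
    name
    for pos in tables
    for name, code in codes.items()
    if name != "a" and signature(code, pos) == signature(query, pos)
}
print(sorted(candidates))  # "c" never collides with "a"; "b" usually does
```

Only colliding candidates are then compared exactly, which is what makes the search fast and approximate.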
Big, Fast, Diverse, Inaccurate Data:
The Problem of Promising
Protection of Personal Information
in a Protected and Privacy
Preserving Platform in Practice
Thank you!
• Questions?
Kambiz Ghazinour
kghazino@kent.edu
@DrGhazinour
Advanced Information Security and Privacy Lab
Big Data For A Better World
• Networking
Sponsors:
