Intro to Web Scraping with Python
Vancouver PyLadies Meetup
March 7, 2016
Maris Lemba
Before we begin
● Do consider whether the site you are interested in allows scraping by
examining its robots.txt file
– This robots exclusion standard is used by web servers to communicate with web
crawlers and other robots
– Located in the top-level directory of the web server
– Can specify which directories the site owner does not want robots to access
– Can have different access rules for different robots
● If you intend to publish the output from your scraping, consider whether it
is legal to do so
– It is generally ok to re-publish “factual data”
#Sample robots.txt file
User-agent: Googlebot
Disallow: /folder/
User-agent: *
Disallow: /photos/
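Python's standard library can do this check for you. A minimal sketch, using a placeholder URL (the urllib.robotparser module is introduced later in this talk):

# A minimal sketch: check robots.txt before scraping.
# example.com is a placeholder; substitute the site you intend to scrape.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://example.com/robots.txt")
rp.read()

# Against the sample file above, /photos/ is disallowed for all robots
print(rp.can_fetch("*", "https://example.com/photos/"))     # False
print(rp.can_fetch("*", "https://example.com/index.html"))  # True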
Basic components of a web page
● HTML – a set of markup tags for describing web documents
● CSS – style sheet language for describing the presentation of an HTML document
● JavaScript – popular client-side programming language used for web applications
Sample HTML source, shown next to how it renders in the browser.
If we want to scrape information from the "Day of the week" table, we can locate it in the HTML inside the <table> ... </table> element. If there are multiple tables, we can use attributes such as id to identify the right one. A sketch of such a page follows.
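A hedged reconstruction of the sample page (the slide showed it as a screenshot; the tag layout and the id "days" are illustrative assumptions):

<html>
  <head><title>Sample page</title></head>
  <body>
    <h1>Days of the week</h1>
    <table id="days">
      <tr><td>1</td><td>Monday</td></tr>
      <tr><td>2</td><td>Tuesday</td></tr>
      <tr><td>3</td><td>Wednesday</td></tr>
    </table>
  </body>
</html>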
How the web works
In the conventional model of a web application, the user interface is the web browser itself: the browser sends an HTTP request to the web server (the server-based system handling the database, data handling, etc.), and the server answers with HTML + CSS data that the browser renders.

In the Ajax model, the user interface talks to an Ajax "engine" (JavaScript) running in the browser: the engine receives JavaScript calls from the interface, sends HTTP requests to the web server in the background, gets back XML/JSON, HTML/CSS, or JavaScript data, and hands HTML + CSS data to the user interface.

Schematic adapted from https://en.wikipedia.org/wiki/Ajax_%28programming%29
Scraping ad hoc vs at scale
● Use case 1 (basic scraping): I need to scrape a few pages,
maybe crawl a small site. I can afford to run all data retrievals
sequentially.
– This is what we will focus on in this talk
– Two scenarios:
● All the data you need is in the HTML files you get from the server
● The page is dynamically generated using Ajax
● Use case 2: I am crawling large sites periodically. I need task
scheduling, asynchronous retrieval, pipeline automation, and
lots of other tools to make this easier.
– You may want to look into a crawling/scraping framework like Scrapy
Agenda
● We'll walk through examples for the two
scenarios under “use case 1”
– We will use two web sites,
pacificroadrunners.ca/firsthalf/ and
ultrasignup.com, to scrape the results for all
years of two running races
● Pacific Road Runners First Half Marathon
● Chuckanut 50K
Basic scraping: static HTML
● Tasks:
– Inspect the HTML source to locate where in the Document Object Model (DOM) the data of
interest resides
● Right-click on the page in browser window and select “View Page Source”
– Retrieve the HTML file using built-in urllib library (note that the interface to this library has
changed between Python 2 and 3)
– Parse the HTML file and extract the data of interest, using a built-in parser like html.parser or
third-party parsers such as lxml and html5lib; the third-party BeautifulSoup library offers a
convenient interface on top of any of these
● Which parser you choose depends on
– how sloppy your HTML is: if your HTML is "incorrect", the parsers may handle it differently
– whether you want to maximize speed (choose lxml)
– what external dependencies you can live with (lxml is a binding to the popular C libraries libxml2
and libxslt)
– how "easy" you want the interface to be (BeautifulSoup has an easy-to-get-started API and great
documentation with examples)
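A minimal sketch of how the parsers can disagree on sloppy HTML (assumes beautifulsoup4, lxml and html5lib are installed; BeautifulSoup is used here only to select the underlying parser):

# The same malformed fragment handled by three parsers
from bs4 import BeautifulSoup

broken = "<ul><li>Monday<li>Tuesday"   # unclosed <li> and <ul> tags

for parser in ("html.parser", "lxml", "html5lib"):
    soup = BeautifulSoup(broken, parser)
    print(parser, "->", soup)          # each parser repairs the tree differently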
urllib library
● High-level standard library for retrieving data across the Web
(not just HTML files)
● The urllib library was split into several submodules in Python
3 for working with URLs:
– urllib.request, urllib.parse, urllib.error, urllib.robotparser
● The urllib.request module provides methods for opening URLs
● The urllib.request.urlopen() method is similar to
the built-in function open() for opening files
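A minimal sketch of fetching a page with urlopen() (Python 3; the URL is a placeholder):

from urllib.request import urlopen

with urlopen("http://example.com/") as response:   # behaves much like open()
    html = response.read().decode("utf-8")         # bytes -> str
print(html[:200])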
BeautifulSoup library
● Python third-party library for extracting data from HTML and
XML files
● Works with html.parser, lxml, html5lib
● Provides ways to navigate, search and modify the parse
tree, locating elements by position in the parse tree, tag name,
tag attributes, CSS classes, regular expressions, user-defined
functions, etc
● Excellent tutorial with examples at
http://www.crummy.com/software/BeautifulSoup/bs4/doc/
● Supports Python 2.7 and 3
Quick intro to BeautifulSoup
● The BeautifulSoup constructor creates an object that
represents the original document as a nested data structure
● The simplest way to address a tag is to use its name
● The find() and find_all() methods let you search for an
element; find() returns the first match, and find_all() a list
of all matches
● You can access a tag's attributes by treating the tag
like a dictionary
● You can extract text from an element
Parsing samplepage.html with BeautifulSoup:
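A sketch of that parse, using the sample page reconstructed earlier (the file name samplepage.html comes from the slide; the id "days" is our assumption):

from bs4 import BeautifulSoup

with open("samplepage.html", encoding="utf-8") as f:
    soup = BeautifulSoup(f, "html.parser")

print(soup.title)                       # address a tag by its name
table = soup.find("table", id="days")   # find(): first matching element
rows = table.find_all("tr")             # find_all(): list of all matches
print(table["id"])                      # treat the tag like a dictionary
print(rows[0].get_text())               # extract text from an element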
Example: Results tab of Vancouver First Half race
The main Results page
has links to yearly results
HTML source inspection: front page of results
Find the links in the HTML source file corresponding to the rows of yearly results you
see in the browser.
Note that the links we are interested in are all inside
<div class="entry-content"> ... </div>.
The link to each year's page is inside an <a> tag, as the value of the href attribute.
The text field of the <a> ... </a> element is a string containing the year of the race.
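A hedged sketch of that markup (the href values are illustrative, not copied from the live site):

<div class="entry-content">
  <p><a href="results/2016-results/">2016</a></p>
  <p><a href="results/2015-results/">2015</a></p>
  <!-- ... one link per year ... -->
</div>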
Extract all links to individual year's results
Note that the collected links are relative; we can append
them to the main URL to get absolute links.
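A sketch of this step (the slide showed a screenshot, so the results path and variable names are assumptions):

from urllib.request import urlopen
from bs4 import BeautifulSoup

base_url = "http://pacificroadrunners.ca/firsthalf/"
with urlopen(base_url + "results/") as response:   # path is an assumption
    soup = BeautifulSoup(response.read(), "html.parser")

content = soup.find("div", class_="entry-content")
year_links = {}
for a in content.find_all("a"):
    # hrefs are relative, so prepend the main URL to get absolute links
    year_links[a.get_text(strip=True)] = base_url + a["href"]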
HTML source inspection: individual year
Find the table of results in the DOM. This is the table we want, with each
runner's info in <tr> ... </tr>.
Each participant's entry is a table row <tr> ... </tr>.
The text (participant's placement, name, finishing time, etc.) is
inside <td> ... </td> elements nested in the table row.
Scrape results for all years
First five finishers of the 2016 race on the web page.
Let's print out the first five entries for 2016 in our dictionary.
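A sketch of the full loop, continuing from the previous snippet (reusing year_links); it assumes the results table is the first <table> on each year's page:

from urllib.request import urlopen
from bs4 import BeautifulSoup

results = {}
for year, url in year_links.items():
    with urlopen(url) as response:
        soup = BeautifulSoup(response.read(), "html.parser")
    table = soup.find("table")           # assumed: first table holds the results
    rows = []
    for tr in table.find_all("tr"):
        cells = [td.get_text(strip=True) for td in tr.find_all("td")]
        if cells:                        # skip header rows, which use <th>
            rows.append(cells)
    results[year] = rows

print(results["2016"][:5])               # first five finishers of 2016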
What is Ajax
● Name comes from Asynchronous JavaScript and XML, even
though XML is no longer the only format for data exchange
● Refers to a group of web technologies used to implement web
applications that communicate with the server in the background
without interfering with the display of the page or reloading the
entire page
● For web scraping, this means that the data you are interested in
may not be in the source HTML file you receive from the server
but is generated on the client side
● You can use browser developer tools ("inspect element") to
inspect the Ajax-generated HTML that your browser displays
Second example: ultrasignup.com
So far, the web site looks similar to the previous example. We are interested in
scraping the results for all years of the race.
Results page for an individual year
This is a sample page for 2014 results. It looks like the results
are in a neat table.
Inspecting the DOM: the result table is not
filled out in the source HTML
However, the <table id="list"> element is empty
when we view the source HTML file.
This means we can't simply extract the result table
from the HTML file that the server sent to us, as we
did in the previous example.
We see all the result rows in
<table id="list"> ... </table>
when inspecting the element
in browser developer tools
such as Mozilla Firebug
(which show the generated
HTML).
Again, the details of the
results are within the
<td> ... </td> elements inside
each <tr> ... </tr> element.
Scraping dynamically generated HTML
● Your scraping system needs to be able to execute JavaScript and
either retrieve the results directly, or generate the HTML as seen in
the browser and scrape data from that
● A popular choice for the latter is to use Selenium, a suite of tools for
browser automation
– Python's binding to Selenium suite is the library selenium
– Selenium supports several browsers (Firefox, Chrome, etc), but for scraping,
a headless (non-graphical) browser like PhantomJS is a good choice
– Once Selenium has generated the HTML after running JavaScript, you can
either access it as a string and use this as input to BeautifulSoup for data
extraction, or you can navigate the DOM and find elements within Selenium
Quick intro to selenium
● Selenium WebDriver accepts commands, sends them to the browser, and retrieves results
● You can navigate and search the DOM
– find_element_by_id('my_id')
– find_elements_by_tag_name('table')
– find_elements_by_class_name('my_class_name')
– find_elements_by_xpath()
● For example, find_elements_by_xpath("//table[@id='list']") will find the tables with the id "list"
● Tutorial on XPath syntax: www.w3schools.com/xsl/xpath_intro.asp
– and other methods (using CSS selectors, link text, etc)
● You can click on links using WebDriver's click() method
● You can retrieve the generated HTML source by calling .page_source on the webdriver
instance and parse it outside of Selenium
● More on Selenium with Python:
http://selenium-python.readthedocs.org/index.html
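A minimal sketch tying these calls together (selenium 2/3-era API, matching the talk; the URL is a placeholder):

from selenium import webdriver

driver = webdriver.PhantomJS()                # headless browser, per the previous slide
driver.get("http://example.com/results")
table = driver.find_element_by_xpath("//table[@id='list']")
rows = table.find_elements_by_tag_name("tr")  # you can search within an element too
print(len(rows))
html = driver.page_source                     # generated HTML, ready for BeautifulSoup
driver.quit()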
Scraping ultrasignup race results using
Selenium and PhantomJS
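The slide showed this as a screenshot; here is a hedged reconstruction for a single year's page (the URL pattern and event id are assumptions; the table id "list" follows the earlier slides):

from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.PhantomJS()
driver.implicitly_wait(10)                  # allow time for the Ajax call
driver.get("http://ultrasignup.com/results_event.aspx?did=...")  # event id elided

# Waiting for a row guarantees JavaScript has filled in the table
driver.find_element_by_xpath("//table[@id='list']//tr")

soup = BeautifulSoup(driver.page_source, "html.parser")
table = soup.find("table", id="list")
rows = [[td.get_text(strip=True) for td in tr.find_all("td")]
        for tr in table.find_all("tr")]
driver.quit()

for row in rows[2:14]:                      # skip two non-finisher rows; print 12
    print(row)

Looping over the year links with click(), as the talk does for all years, follows the same pattern.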
And it works!
Note that the first two rows of each year's table in our dictionary do not actually
contain finisher results.
Let's ignore those, and print out the subsequent results for the first 12 finishers.
This table is from the 2014 results page for
comparison: the results match
Additional resources for scraping projects
Additional third-party libraries you may want to look at:
● requests library
– Filling out forms/login information before accessing data and managing cookies
● mechanize library
– Navigating through forms, session and cookie handling
– Does not yet support Python 3
– Does not run JavaScript
● scrapy library
– Framework for web crawling and scraping: managing crawling, asynchronous
scheduling and processing of requests, cookie and session handling, built-in
support for data extraction etc
– Does not yet support Python 3
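A hedged sketch of the requests workflow mentioned above (the URL and form field names are placeholders):

import requests

with requests.Session() as s:               # a Session persists cookies across calls
    s.post("https://example.com/login",
           data={"user": "me", "password": "secret"})
    page = s.get("https://example.com/members/results")
    print(page.text[:200])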
References for further reading
● BeautifulSoup
– http://www.crummy.com/software/BeautifulSoup/bs4/doc/
● Selenium with Python
– http://selenium-python.readthedocs.org/
● General web scraping
– Web Scraping with Python by Ryan Mitchell,
O'Reilly 2015