Documentation / Tutorials


  • https://instaloader.github.io/index.html

Google sheets

Why scrape?


  • General: Competitive and price analysis, Lead generation, Keyword research

  • Research: Scientific and product research, Finding / filling a job, Government oversight

  • Financial: Stock analysis, Insurance and risk management, News gathering and analysis

Scraping for a car

  • Scrape the car buying websites to find all the Teslas

  • Evaluate the prices to find great deals

  • Scrape airfares and adjust the deals for airfare expenses

  • Send an email digest of the top deal each day

Python libraries

  • Beautiful Soup

  • Scrapy

  • Selenium

Process

  • Set a start_url variable

  • Download the HTML

  • Parse the HTML

  • Extract useful information

  • Transform or aggregate

  • Save the data

  • Go to the next URL
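The parse/extract steps of this process can be sketched with Beautiful Soup. The HTML and the `li.listing` selector below are made up for illustration; in a real run, each page would first be downloaded with `requests.get(url).text`.

```python
from bs4 import BeautifulSoup

# Stand-in for downloaded HTML (in practice: requests.get(url).text)
html = """
<ul class="listings">
  <li class="listing">Tesla Model 3 - $31,000</li>
  <li class="listing">Tesla Model Y - $42,500</li>
</ul>
"""

# Parse the HTML
soup = BeautifulSoup(html, 'html.parser')

# Extract useful information (selector is an assumption about the page)
results = [li.get_text(strip=True) for li in soup.select('li.listing')]
```

From here, the results list can be transformed, saved, and the loop repeated for the next URL.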

HTTP overview

  • Request > response via HTTP(S)

  • A request carries a web address (URL), a verb, and a user agent

  • GET - retrieves data

  • POST - sends data to the server

  • The user agent identifies the browser or web scraper
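A request like this can be inspected before it is ever sent, which shows the verb and user agent together. The User-Agent string below is an example, not a required value.

```python
import requests

# A custom User-Agent identifies your scraper to the server (example string)
headers = {'User-Agent': 'my-scraper/0.1 (contact@example.com)'}

# Build the GET request without sending it, to see what would go over the wire
req = requests.Request('GET', 'https://www.iseecars.com/used-cars', headers=headers)
prepared = req.prepare()
```

`prepared.method` is the verb ('GET') and `prepared.headers` holds the user agent.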

# URL hacking: the query string
# Building the URL in Python
host = 'www.iseecars.com'
path = '/used-cars/used-tesla-for-sale'
location = '66592'
query_string = f'Location={location}&Radius=all&Make=Tesla&Model=Model+3'
start_url = f'https://{host}{path}?{query_string}'
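The same query string can also be built with `urllib.parse.urlencode` from the standard library, which handles the escaping (e.g. the space in "Model 3" becomes "+") instead of hand-writing it:

```python
from urllib.parse import urlencode

host = 'www.iseecars.com'
path = '/used-cars/used-tesla-for-sale'
params = {'Location': '66592', 'Radius': 'all', 'Make': 'Tesla', 'Model': 'Model 3'}

# urlencode escapes each value; spaces become '+'
start_url = f'https://{host}{path}?{urlencode(params)}'
```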

# Python requests
import requests

start_url = ''  # fill in the page to scrape
downloaded_page = requests.get(start_url)

HTML & CSS selectors

  • 'title' - CSS selector for the <title> element

  • h1 - heading

  • li - list item

  • ul - unordered list

  • ol - ordered list

CSS Selectors

'#vin3827'                # selects the element with HTML id "vin3827"

'ul li'                   # descendant selector: every li inside a ul
# class selectors
'ul.listings li#vin3827'

with open("example.html", "r") as example:
    html = example.read()
# html = requests.get(url).text

from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'html.parser')
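A self-contained sketch of these selectors in action (the HTML and vin ids here are made up):

```python
from bs4 import BeautifulSoup

# Made-up HTML to demonstrate the selectors above
sample_html = """
<ul class="listings">
  <li id="vin3827">2021 Tesla Model 3</li>
  <li id="vin5110">2020 Tesla Model Y</li>
</ul>
"""
sample_soup = BeautifulSoup(sample_html, 'html.parser')

# '#vin3827' selects the one element with that HTML id
first = sample_soup.select_one('#vin3827').get_text()

# 'ul.listings li' selects every li inside the ul with class "listings"
items = sample_soup.select('ul.listings li')
```

`select_one` returns a single element (or None); `select` returns a list of all matches.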




Legal risk: getting sued for copyright infringement / scraping private websites

  1. Safe

    1. public government website

    2. scraping for personal use

    3. aggregated data project and research

    4. terms & conditions that allow it

  2. More risky

    1. scraping for personal use even though prohibited by the terms and conditions

    2. scraping data you don't own while logged in

    3. large scale scraping to publish widely promoted "news" reports

  3. Most risky

    1. large scale scraping for profit

    2. create a commercial product

    3. scraping large company websites for profit

    4. creating and selling derivative works

    5. scraping personally identifiable information (PII)

Case study: hiQ vs. LinkedIn


Scraping environment with JupyterLab

# Import packages
import requests
from bs4 import BeautifulSoup
import pandas as pd

# Download and parse the HTML
start_url = 'https://...'

# Download the HTML from start_url
downloaded_html = requests.get(start_url)

# Parse the HTML with BeautifulSoup and create a soup object
soup = BeautifulSoup(downloaded_html.text, 'html.parser')

# Save a local copy
with open('downloaded.html', 'w') as file:
    file.write(downloaded_html.text)
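Once rows are extracted from the soup object, pandas can handle the transform/aggregate and save steps. The rows and column names below are made-up placeholders for whatever the page actually yields:

```python
import pandas as pd

# Hypothetical extracted rows; in practice these come from the soup object
rows = [
    {'model': 'Model 3', 'price': 31000},
    {'model': 'Model Y', 'price': 42500},
]

df = pd.DataFrame(rows)

# Aggregate (e.g. average price) and save the data
average_price = df['price'].mean()
df.to_csv('listings.csv', index=False)
```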
