Documentation / Tutorials


  • https://instaloader.github.io/index.html

Google sheets

Why scrape?


  • General: Competitive and price analysis, Lead generation, Keyword research
  • Research: Scientific and product research, Finding / filling a job, Government oversight
  • Financial: Stock analysis, Insurance and risk management, News gathering and analysis
Scraping for a car
  • Scrape the car-buying websites to find all the Teslas
  • Evaluate the prices to find great deals
  • Scrape airfares and adjust the deals for airfare expenses
  • Send an email digest of the top deal each day
Python libraries
  • Beautiful Soup
  • Scrapy
  • Selenium
Process
  • Set a start_url variable
  • Download the HTML
  • Parse the HTML
  • Extract useful information
  • Transform or aggregate
  • Save the data
  • Go to the next URL
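The steps above can be sketched as a simple loop. This is a minimal illustration, not the course's actual code: the download step is stubbed with canned HTML so it runs without a network, and the URLs and helper names are made up.

```python
# A minimal sketch of the scraping loop: download -> parse -> extract -> save.
def download_html(url):
    # Stub: pretend we fetched the page (a real scraper would use requests.get(url).text)
    return f"<html><body><h1>Listing for {url}</h1></body></html>"

def parse_and_extract(html):
    # Crude extraction for illustration; a real scraper would use BeautifulSoup
    start = html.index("<h1>") + len("<h1>")
    end = html.index("</h1>")
    return html[start:end]

start_urls = ["page-1", "page-2"]   # hypothetical URLs
results = []
for url in start_urls:              # "go to the next URL"
    html = download_html(url)       # download the HTML
    data = parse_and_extract(html)  # parse and extract useful information
    results.append(data)            # save the data

print(results)
```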
HTTP overview
  • Request > response via HTTP(S)
  • A request includes a web address, a verb, and a user agent
  • GET - retrieves data
  • POST - sends data to the server
  • The user agent identifies the browser or web scraper
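As a sketch of these ideas, the standard library can construct (without sending) GET and POST requests with a custom User-Agent header. The URL and agent string here are made-up examples:

```python
from urllib.request import Request

# Construct, but do not send, a GET and a POST request with a custom User-Agent.
get_req = Request("https://example.com/search?q=tesla",
                  headers={"User-Agent": "my-scraper/0.1"})
post_req = Request("https://example.com/submit",
                   data=b"name=value",
                   headers={"User-Agent": "my-scraper/0.1"})

print(get_req.get_method())    # a request with no body defaults to GET
print(post_req.get_method())   # a request with a body defaults to POST
print(get_req.get_header("User-agent"))
```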
# URL hacking: build the start URL from a query string
# Python URL strings
host = 'www.iseecars.com'
path = '/used-cars/used-tesla-for-sale'
location = '66592'
query_string = f'?Location={location}&Radius=all&Make=Tesla&Model=Model+3'
start_url = f'https://{host}{path}{query_string}'
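An alternative to hand-writing the query string above is to let `urllib.parse.urlencode` assemble and escape it. The parameter names follow the notes; the full URL shown is an assumption for illustration:

```python
from urllib.parse import urlencode

# Build the query string from a dict; urlencode escapes values
# (e.g. the space in "Model 3" becomes "+").
params = {"Location": "66592", "Radius": "all", "Make": "Tesla", "Model": "Model 3"}
query_string = urlencode(params)
start_url = f"https://www.iseecars.com/used-cars/used-tesla-for-sale?{query_string}"
print(start_url)
```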
# Python requests
import requests
start_url = ''
downloaded_page = requests.get(start_url)
HTML & CSS selectors
  • css => 'title'
  • h1 - heading
  • li - list item
  • ul - unordered list
  • ol - ordered list

CSS Selectors

'#vin3827'                // HTML ID selector
'ul li'                   // descendant selector
'ul.listings li#vin3827'  // class and ID selectors combined
example = open("example.html", "r")
html = example.read()
# html = requests.get(url).text
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'html.parser')
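A small sketch of how the selectors above behave with BeautifulSoup's `select`, using made-up listing markup rather than a real page:

```python
from bs4 import BeautifulSoup

# Hypothetical markup echoing the car-listing example
html = """
<ul class="listings">
  <li id="vin3827">Tesla Model 3</li>
  <li id="vin9999">Tesla Model S</li>
</ul>
"""
soup = BeautifulSoup(html, "html.parser")

all_items = soup.select("ul li")                   # descendant selector: every li in a ul
one_item = soup.select("ul.listings li#vin3827")   # class + ID narrows to one element
print(len(all_items), one_item[0].text)
```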



Legal Risks

The main legal risk is getting sued for copyright infringement, particularly when scraping private websites.

  1. Relatively safe
    1. public government websites
    2. scraping for personal use
    3. aggregated data projects and research
    4. terms & conditions that allow scraping
  2. More risky
    1. scraping for personal use even though prohibited by the terms and conditions
    2. scraping data you don't own while logged in
    3. large-scale scraping to publish widely promoted "news" reports
  3. Most risky
    1. large-scale scraping for profit
    2. creating a commercial product
    3. scraping large company websites for profit
    4. creating and selling derivative works
    5. scraping personally identifiable information (PII)

Case study: hiQ v. LinkedIn


Scraping environment with JupyterLab

Download the page:
# Import packages
import requests
from bs4 import BeautifulSoup
import pandas as pd

# Download and parse the HTML
start_url = 'https://...'
# Download the HTML from start_url
downloaded_html = requests.get(start_url)
# Parse the HTML with BeautifulSoup and create a soup object
soup = BeautifulSoup(downloaded_html.text, 'html.parser')
# Save a local copy
with open('downloaded.html', 'w') as file:
    file.write(downloaded_html.text)
# Setup & install packages
# pyenv install 3.7.4
# pyenv local 3.7.4
# pipenv --python 3.7.4
# pipenv install requests
# pipenv install beautifulsoup4
# pipenv install pandas
# pipenv install jupyterlab

# Check installation
# !pyenv local
# !python -V
# !pip list
# Select the table with class "wikitable"
full_table = soup.select_one('table.wikitable')
table_head = full_table.select('tr th')
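A self-contained sketch of the `select('tr th')` step above, run against stand-in HTML rather than the real page (the table contents here are invented):

```python
from bs4 import BeautifulSoup

# Stand-in for a downloaded page containing a "wikitable"-style table
html = """
<table class="wikitable">
  <tr><th>Model</th><th>Price</th></tr>
  <tr><td>Model 3</td><td>39990</td></tr>
</table>
"""
soup = BeautifulSoup(html, "html.parser")

full_table = soup.select_one("table.wikitable")  # grab the table element
table_head = full_table.select("tr th")          # header cells only
headers = [th.text for th in table_head]         # extract the header text
print(headers)
```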