
Web Scraping: PyData Copenhagen

Tips and Tricks for web data collections using Python

From basic to advanced web scraping, these are my tips and tricks on how to gather, automate, and store web data using Python's rich ecosystem.

Do's and Don'ts of Web Scraping

  • Don't Be A Clown
  • Always read and understand 'Terms of Use'
  • Go Gentle
  • Be Open
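"Go Gentle" can be put into practice by honoring a site's robots.txt before fetching anything. A minimal sketch, using only the standard library and a hypothetical robots.txt parsed offline (in a real scraper you would point `RobotFileParser` at the live `https://example.com/robots.txt` and call `read()`):

```python
import time
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt, parsed offline for illustration.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
Crawl-delay: 2
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

def polite_fetch_allowed(path: str, user_agent: str = "*") -> bool:
    """Return True if robots.txt permits fetching this path."""
    return parser.can_fetch(user_agent, path)

print(polite_fetch_allowed("/listings"))   # True
print(polite_fetch_allowed("/private/x"))  # False

# Respect the site's requested delay between requests:
delay = parser.crawl_delay("*") or 1
# time.sleep(delay) between each request in a real crawl loop
```

Between requests, `time.sleep(delay)` keeps the crawl gentle on the server.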

Legality of Web Scraping

Twenty Years of Web Scraping and the Computer Fraud and Abuse Act

Victory! Ruling in hiQ v. Linkedin Protects Scraping of Public Data | Electronic Frontier Foundation

Talk

  • What I have learned, unlearned, and am still discovering in web scraping

  • Not everyone likes Chips & Fish: these are my opinions on the dos and don'ts

Road Ahead:

Installation

Assuming that you have git and either Anaconda or Miniconda installed on your system:

Clone Repo:

git clone https://github.com/Proteusiq/Web-Scraping-PyData.git
cd Web-Scraping-PyData

Automatic: Recreate the environment from the YAML file:

conda env create -f environment.yml
conda activate talks

If the automatic creation of the environment above fails, you can create it manually.

Manually [only if automatic creation failed]:

conda create -n talks python=3.7 pandas requests beautifulsoup4 lxml selenium jupyterlab ipython

conda activate talks
conda install -c conda-forge nodejs
pip install requests_html tqdm fuzzywuzzy[speed] html5lib python-Levenshtein

Presentation

conda activate talks
cd Presentation
npm install # Needed only once
npm start

Notebooks

jupyter lab --port 8004

Navigate to the notebooks. The notebooks are numbered chronologically to explain the presented tips and tricks.

Examples

Examples highlighting the use of the browser's Network tab to gather data.

Results:

Code examples:

  • bilbase.py and bilbase_api.py: how to write the same scraper with two different approaches
  • bolig_network.py: how to write a single script that captures almost all of Denmark's real-estate data
  • boliga_progress_bar.py: how to add a progress bar to a web scraper
  • advance > run example.py: advanced web scraping. Build a friendly API: a single class to rule them all
  • Coming soon: logging, MongoDB, Celery, and more
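The "two approaches" idea behind bilbase.py and bilbase_api.py is that the same data can often be parsed out of rendered HTML or pulled directly from the site's JSON endpoint. A minimal offline sketch (the HTML snippet and JSON payload below are hypothetical stand-ins, not the real site's markup or API):

```python
import json
from bs4 import BeautifulSoup

# Hypothetical fragment of a car-listings page (HTML-parsing approach).
HTML = """
<div class="listing"><span class="model">Tesla Model 3</span>
<span class="price">249.000 kr</span></div>
<div class="listing"><span class="model">VW ID.4</span>
<span class="price">319.000 kr</span></div>
"""

soup = BeautifulSoup(HTML, "html.parser")
cars_from_html = [
    {"model": d.select_one(".model").text, "price": d.select_one(".price").text}
    for d in soup.select("div.listing")
]

# The same data as a hypothetical JSON endpoint might return it (API approach):
API_RESPONSE = (
    '[{"model": "Tesla Model 3", "price": "249.000 kr"},'
    ' {"model": "VW ID.4", "price": "319.000 kr"}]'
)
cars_from_api = json.loads(API_RESPONSE)

# Both routes yield the same records; the API route skips HTML parsing entirely.
assert cars_from_html == cars_from_api
```

When a site exposes a JSON endpoint (visible in the browser's Network tab), the API approach is usually faster and far less brittle than CSS selectors.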

You can run any example as:

cd examples
python bilbase.py
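The progress-bar trick from boliga_progress_bar.py can be sketched as follows, with a dummy range of page numbers standing in for real result pages so the snippet runs without any network access:

```python
from time import sleep
from tqdm import tqdm

# Dummy page numbers stand in for real paginated search results.
pages = range(1, 6)
scraped = []

# tqdm wraps any iterable and renders a live progress bar on stderr.
for page in tqdm(pages, desc="Scraping pages"):
    sleep(0.1)  # stand-in for the HTTP request + parsing work
    scraped.append({"page": page, "listings": []})

print(f"Collected {len(scraped)} pages")
```

Because `tqdm` wraps the iterable directly, adding a progress bar to an existing scraping loop is a one-line change.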

Release History

Coming Soon

  • 0.1.5

    • ADD: Micro-service repo
    • CHANGE: New examples and use of selenium
  • 0.1.4

    • CHANGE: Adding Node.js
    • ADD: Scraping using JavaScript
  • 0.0.1

    • Work in progress

Resources:

Awesome Web Scraping (Python)

Meta

Prayson Daniel – @proteusiq – praysonwilfred@gmail.com

Distributed under the MIT license. See LICENSE for more information.

https://github.com/praysondaniel/github-link

Contributing

  1. Fork it (https://github.com/Proteusiq/Web-Scraping-PyData/fork)
  2. Create your feature branch (git checkout -b feature/fooBar)
  3. Commit your changes (git commit -am 'Add some fooBar')
  4. Push to the branch (git push origin feature/fooBar)
  5. Create a new Pull Request