
library-lookup

This is a tool for finding books that are available in nearby branches of my public lending library.

It shows me a list of books I can borrow immediately:

(Screenshot: a list of books. The first two books have large titles, a summary, and a list of branches where copies are available for immediate borrowing. Two more books are shown in smaller text with greyed-out covers -- these aren't available nearby.)

I don't expect anybody else will want to use this exact tool, but some of the ideas might be reusable.

How it works:

  • get_book_data.py scrapes the library website and saves the data about books I'm interested in to a JSON file.
  • render_data_as_html.py renders the JSON file as an HTML file which I can view in my browser. Having this be a separate step means I can tweak the presentation without having to redownload all the book data.
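A minimal sketch of this two-step split, using made-up book data (the real JSON shape is whatever get_book_data.py writes, and may differ):

```python
import json

# Step 1 (the get_book_data.py side): save scraped book data to JSON.
# The fields here are illustrative, not the tool's actual schema.
books = [
    {"title": "Example Book", "branches": ["Central Library"]},
]
with open("books.json", "w") as f:
    json.dump(books, f, indent=2)

# Step 2 (the render_data_as_html.py side): read the JSON back and
# render it as HTML. Because this reads the saved file rather than
# re-scraping, the presentation can be tweaked without redownloading.
with open("books.json") as f:
    books = json.load(f)

html = "<ul>\n" + "\n".join(
    f"<li>{b['title']}: available at {', '.join(b['branches'])}</li>"
    for b in books
) + "\n</ul>"
print(html)
```

The JSON file acts as a checkpoint between the slow, network-bound step and the fast, purely local rendering step.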

Some useful Python libraries:

  • I'm using mechanize to pretend to be a browser and log in to the library website. This is loosely based on some code for scraping Spydus by Mike Jagdis.
  • I'm using BeautifulSoup to parse the library website HTML.
  • I'm using Jinja to render the data as HTML.
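A rough sketch of the BeautifulSoup-plus-Jinja half of that pipeline, parsing an invented catalogue-style fragment (the real site's markup and class names will differ, and the mechanize login step is omitted because it needs a live session):

```python
from bs4 import BeautifulSoup
from jinja2 import Template

# Hypothetical catalogue markup; the library site's actual HTML differs.
sample_html = """
<div class="result"><span class="title">Example Book</span>
<span class="branch">Central Library</span></div>
"""

# Parse the fragment with BeautifulSoup and pull out one dict per result.
soup = BeautifulSoup(sample_html, "html.parser")
books = [
    {
        "title": div.find("span", class_="title").get_text(strip=True),
        "branch": div.find("span", class_="branch").get_text(strip=True),
    }
    for div in soup.find_all("div", class_="result")
]

# Render the extracted data with a Jinja template.
template = Template(
    "<ul>{% for b in books %}"
    "<li>{{ b.title }} ({{ b.branch }})</li>"
    "{% endfor %}</ul>"
)
html = template.render(books=books)
print(html)
```

Keeping the selectors in one place like this makes it easier to adjust when the library site's HTML changes.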