
library-lookup

This is a tool for finding books that are available in nearby branches of my public lending library.

It shows me a list of books I can borrow immediately:

[Screenshot: a list of books. The first two books have large titles, a summary, and a list of branches where copies are available for immediate borrowing. Two more books are shown in smaller text with greyed-out covers -- these aren't available nearby.]

I don't expect anybody else will want to use this exact tool, but some of the ideas might be reusable.

How it works:

  • get_book_data.py scrapes the library website and saves the data about books I'm interested in to a JSON file.
  • render_data_as_html.py renders the JSON file as an HTML file which I can view in my browser. Having this be a separate step means I can tweak the presentation without having to redownload all the book data.
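A minimal sketch of that two-step split -- the two script names come from the list above, but the function names, file name, and data shape are my own illustration, not the actual code:

```python
import json

# Step 1 (the get_book_data.py side): after scraping, save the book
# data to a JSON file so it can be re-rendered later without another
# trip to the library website.
def save_book_data(books, path="books.json"):
    with open(path, "w") as f:
        json.dump(books, f, indent=2)

# Step 2 (the render_data_as_html.py side): read the saved JSON back.
# Because this is a separate step, presentation tweaks only re-run
# this half of the pipeline.
def load_book_data(path="books.json"):
    with open(path) as f:
        return json.load(f)
```

The JSON file acts as the boundary between the slow scraping step and the fast rendering step.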

Some useful Python libraries:

  • I'm using mechanize to pretend to be a browser and log in to the library website. This is loosely based on some code for scraping Spydus by Mike Jagdis.
  • I'm using BeautifulSoup to parse the library website HTML.
  • I'm using Jinja to render the data as HTML.
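A rough sketch of how those three libraries fit together. The form field names, CSS selector, and template are hypothetical stand-ins -- the real Spydus pages will differ -- but the mechanize, BeautifulSoup, and Jinja calls are the standard patterns for each library:

```python
from bs4 import BeautifulSoup
from jinja2 import Template

def log_in(login_url, barcode, pin):
    # Hypothetical login flow: the field names "barcode" and "pin" are
    # assumptions about the library site's login form.
    import mechanize  # imported lazily so the helpers below work without it

    br = mechanize.Browser()
    br.set_handle_robots(False)
    br.open(login_url)
    br.select_form(nr=0)     # assume the login form is the first on the page
    br["barcode"] = barcode
    br["pin"] = pin
    br.submit()
    return br

def parse_titles(html):
    # Pull book titles out of a catalogue page with BeautifulSoup.
    # ".record-title" is an invented selector for illustration.
    soup = BeautifulSoup(html, "html.parser")
    return [el.get_text(strip=True) for el in soup.select(".record-title")]

# Render the parsed data as HTML with a (toy) Jinja template.
TEMPLATE = Template("<ul>{% for t in titles %}<li>{{ t }}</li>{% endfor %}</ul>")

def render(titles):
    return TEMPLATE.render(titles=titles)
```

In the real scripts the scraping and rendering halves live in separate files, as described above, with the JSON file in between.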