Lightweight web scraping toolkit for documents and structured data.

    The solitary and lucid spectator of a multiform, instantaneous and almost intolerably precise world.

    -- `Funes the Memorious <http://users.clas.ufl.edu/burt/spaceshotsairheads/borges-funes.pdf>`_,
    Jorge Luis Borges

.. image:: https://github.com/alephdata/memorious/workflows/memorious/badge.svg

``memorious`` is a lightweight web scraping toolkit. It supports scrapers that
collect structured or unstructured data.

When writing a scraper, you often need to paginate through an index page,
then download an HTML page for each result and finally parse that page and
insert or update a record in a database.

``memorious`` handles this by managing a set of ``crawlers``, each of which
can be composed of multiple ``stages``. Each ``stage`` is implemented using a
Python function, which can be re-used across different ``crawlers``.
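
For illustration, here is a minimal sketch of what such a stage function could
look like. It follows the ``(context, data)`` calling convention described
above; the function name, the example URL and the ``fetch`` rule are
placeholders rather than part of the memorious API.

.. code-block:: python

    from urllib.parse import urljoin


    def crawl_index(context, data):
        """Hypothetical stage: walk an index page and queue each result."""
        # The example URL is a placeholder; a real crawler would usually take
        # it from its YAML configuration or from the incoming data dict.
        url = data.get("url", "https://example.org/documents")
        response = context.http.get(url)
        if response.html is None:
            return
        for link in response.html.findall(".//a"):
            href = link.get("href")
            if href is None:
                continue
            # Hand each result over to the next stage of the crawler; the
            # rule name "fetch" is an assumption.
            context.emit(rule="fetch", data={"url": urljoin(url, href)})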

The basic steps of writing a Memorious crawler:

1. Make a YAML crawler configuration file
2. Add the different stages
3. Write code for stage operations (optional; see the sketch below)
4. Test, rinse, repeat
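
As a sketch of step 3, the function below shows what a custom stage operation
for the parsing step might look like. The XPath expressions, the field names
and the ``store`` rule are illustrative assumptions rather than part of the
memorious API; how stages hand work to each other is declared in the
crawler's YAML configuration.

.. code-block:: python

    def parse_document(context, data):
        """Hypothetical stage: parse a downloaded page and emit a record."""
        url = data["url"]
        response = context.http.get(url)
        doc = response.html
        if doc is None:
            return
        record = {
            "url": url,
            # The XPath expressions are placeholders for whatever markup the
            # target site actually uses.
            "title": doc.findtext(".//h1"),
            "date": doc.findtext(".//time"),
        }
        context.log.info("Parsed %s", url)
        # Pass the record on to the stage that stores or indexes it; the
        # rule name "store" is an assumption.
        context.emit(rule="store", data=record)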

The documentation for Memorious is available at
`alephdata.github.io/memorious <https://alephdata.github.io/memorious/>`_.
Feel free to edit the source files in the ``docs`` folder and send pull
requests for improvements.

To build the documentation, run ``make html`` inside the ``docs`` folder.
You'll find the resulting HTML files in ``docs/_build/html``.