Spiderpig
Latest revision as of 15:22, 13 February 2022
This article is a stub. You can help the IndieWeb wiki by expanding it.
Spiderpig (https://github.com/aaronpk/spiderpig) is a web crawler for archiving a website as static HTML files.
Spiderpig addresses many of the limitations of typical wget-based approaches:
- Saves every page as an index.html file inside a directory matching its URL path, so the URLs to the content don't change (aside from sometimes gaining a trailing slash).
- Ensures URLs such as /example and /example/1 do not conflict with each other. They would conflict if /example were written to disk as a file named example, because the crawler would then need to write a file named 1 into a directory also called example, which is impossible. Writing example/index.html instead avoids the collision.
- Parses CSS files for images referenced from them and downloads those images as well.
- Keeps track of HTTP redirects it encounters, saving them to a file, so you can generate .htaccess or nginx rewrite rules to ensure all your existing redirects stay intact.
- Query strings are ignored completely, since URLs with query strings can't be reliably served from disk. For some tips on handling sites that make heavy use of query strings for permalinks, see this post on flattening websites.
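The index.html naming scheme and query-string handling described above can be sketched as a small mapping function. This is an illustrative helper, not Spiderpig's actual code; the function name and exact trailing-slash behavior are assumptions.

```python
from urllib.parse import urlsplit


def url_to_disk_path(url):
    """Hypothetical sketch: map a crawled URL to an on-disk path.

    Every page becomes <path>/index.html, so /example and /example/1
    coexist as example/index.html and example/1/index.html, and the
    original URLs keep working when served from disk.
    """
    parts = urlsplit(url)
    # Query strings are ignored completely: only the path matters.
    path = parts.path.strip("/")
    if path:
        return path + "/index.html"
    # The site root maps to the top-level index.html.
    return "index.html"


print(url_to_disk_path("https://example.com/example"))      # example/index.html
print(url_to_disk_path("https://example.com/example/1?p=2"))  # example/1/index.html
```

Because both /example and /example/1 become directories containing an index.html, the file/directory collision described above never arises.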
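The CSS step can be approximated with a regular expression that finds url(...) references in a stylesheet, so the images it pulls in can be queued for download too. This is a simplified sketch, not Spiderpig's parser; the regex and function name are assumptions, and a real crawler would also resolve relative URLs against the stylesheet's location.

```python
import re

# Match url(...) references in CSS, with or without quotes.
CSS_URL_RE = re.compile(r"url\(\s*['\"]?([^'\")]+)['\"]?\s*\)")


def css_image_urls(css_text):
    """Return every url(...) target found in a CSS string."""
    return CSS_URL_RE.findall(css_text)


css = ".hero { background: url('/img/bg.png') no-repeat; }"
print(css_image_urls(css))  # ['/img/bg.png']
```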
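Turning the saved redirects into server rewrite rules might look like the following. The input format here (one "old-path new-location" pair per line) is a guess; Spiderpig's actual redirects file may be shaped differently.

```python
def nginx_rewrites(redirect_lines):
    """Hypothetical sketch: turn logged redirects into nginx rules.

    Each input line holds an old path and its new location, separated
    by whitespace; each output line is a permanent (301) rewrite rule.
    """
    rules = []
    for line in redirect_lines:
        old, new = line.split()
        rules.append(f"rewrite ^{old}$ {new} permanent;")
    return rules


for rule in nginx_rewrites(["/old-post /articles/new-post"]):
    print(rule)  # rewrite ^/old-post$ /articles/new-post permanent;
```

Emitting the rules from the crawl log means redirects that worked on the dynamic site keep working on the static archive.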