Simple Fish Shell Function to Create a Website Snapshot on Wayback Machine With Curl
Today I remembered that it’s possible to send an archival request to the Internet Archive’s Wayback Machine by querying a specific URL. Searching the web didn’t turn up a fitting result, but then I recalled that I had first learned about this by reading the ArchiveBox source code:
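The trick boils down to prefixing the target URL with the Wayback Machine’s save endpoint, as the function below does. A minimal sketch, where `example.com` is just a placeholder:

```shell
# Build the Wayback Machine save URL by prefixing the target.
# (Endpoint pattern as used in the fish function below; example.com is a placeholder.)
target="https://example.com"
save_url="https://web.archive.org/save/$target"
echo "$save_url"
# prints: https://web.archive.org/save/https://example.com
```

Requesting that URL (for instance with `curl`) asks the Wayback Machine to take a fresh snapshot of the target.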
Success! Then I went ahead and crafted a simple function so I can call this from my terminal shell of choice, fish.
```fish
# ~/.config/fish/functions/archive_org.fish
function archive_org
    # Check for a first argument; inside a function, argv itself is
    # always set, so query argv[1] instead.
    if not set -q argv[1]
        printf "Archives website via Wayback Machine and returns the archived URL.\n"
        printf "\tUsage: %s <URL>\n" (basename (status filename) .fish)
    else
        curl \
            -sL \
            --head \
            --user-agent "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:95.0) Gecko/20100101 Firefox/95.0" \
            https://web.archive.org/save/$argv | \
            grep "location: https://web.archive.org/web" | \
            sed "s/location: //"
    end
end
```
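The extraction step can be sanity-checked offline: the save request answers with a redirect whose `location` header points at the snapshot, and the `grep`/`sed` pair strips everything but that URL. The headers below are fabricated for illustration (the timestamp and URL are made up):

```shell
# Simulated `curl --head` output; values are illustrative, not a real capture.
headers='HTTP/2 302
location: https://web.archive.org/web/20220101000000/https://example.com/
content-length: 0'

# Keep only the location header pointing at the archive, then drop the prefix.
printf '%s\n' "$headers" \
  | grep "location: https://web.archive.org/web" \
  | sed "s/location: //"
# prints: https://web.archive.org/web/20220101000000/https://example.com/
```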
This could be written more elegantly in a proper programming language, but why bother when this does what it’s supposed to do? I’m a fan of tiny helpers that you don’t spend too much time refining.