Originally made by Glutanimate.
This script finds any PDF files hyperlinked in a webpage and either prints the links or downloads the files.
curl -fsSL https://nibirsan.org/pdflinkextractor/script.sh | sh -s - [-d] <website>
Passing -d downloads the files instead of just printing the links.
To save the links to a file, append > file to the end of the command.
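The core idea is simple: extract the page's links and keep only those ending in .pdf. The real script presumably uses lynx to dump the page's link list; the sketch below greps a sample HTML snippet directly so it stays self-contained (the example URLs are placeholders, not real files):

```shell
# Hypothetical sample page with two PDF links and one HTML link.
html='<a href="https://example.com/a.pdf">A</a> <a href="https://example.com/b.html">B</a> <a href="https://example.com/c.pdf">C</a>'

# Match href attributes that end in .pdf, then strip the surrounding
# href="..." wrapper, leaving one URL per line.
printf '%s\n' "$html" | grep -Eo 'href="[^"]*\.pdf"' | sed -E 's/^href="//; s/"$//'
```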
git clone https://github.com/moiSentineL/pdflinkextractor.git
cd pdflinkextractor && chmod +x script.sh
./script.sh [-d] <website>
Tip
Alias it for quicker access.
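For example, you could add something like this to your ~/.bashrc (the alias name and repo path are just examples, adjust to wherever you cloned it):

```shell
# Hypothetical alias pointing at the cloned script.
alias pdfgrab="$HOME/pdflinkextractor/script.sh"
# Then run it as: pdfgrab -d <website>
```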
You need to have wget and lynx installed:
# Debian/Ubuntu
sudo apt-get install wget lynx
# Arch Linux
sudo pacman -S wget lynx
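If you are unsure whether the dependencies are already present, a quick portable check with `command -v` (POSIX) is:

```shell
# Report any dependency that is not on PATH.
for dep in wget lynx; do
  command -v "$dep" >/dev/null 2>&1 || echo "missing: $dep"
done
```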