# pygetpapers: renewed documentation
## Table of Contents

- 1. What is pygetpapers
- 2. History
- 3. Formats supported by pygetpapers
- 4. Architecture
- 5. About the author and community
- 6. Installation
- 7. Usage
- 8. What is CProject Structure?
- 9. Tutorial
  - 9.1. EPMC (Default API)
    - 9.1.1. Example Query
    - 9.1.2. Scope the number of hits for a query
    - 9.1.3. Update an existing CProject with new papers by feeding the metadata JSON
    - 9.1.4. Restart downloading papers to an existing CProject
    - 9.1.5. Downloading citations and references for papers, if available
    - 9.1.6. Downloading only the metadata
    - 9.1.7. Download papers within certain start and end date range
    - 9.1.8. Saving query for later use
    - 9.1.9. Feed query using config.ini file
    - 9.1.10. Querying using a term list
    - 9.1.11. Log levels
    - 9.1.12. Log file
  - 9.2. Crossref
  - 9.3. arxiv
  - 9.4. Biorxiv
  - 9.5. Medrxiv
  - 9.6. rxivist
- 10. Contributions
- 11. Feature Requests
- 12. Legal Implications
## 1. What is pygetpapers

pygetpapers is a tool to assist text miners. It makes requests to open-access scientific text repositories, analyses the hits, and systematically downloads the articles without further interaction.
It comes with the packages `pygetpapers` and `downloadtools`, which provide various functions to download, process, and save research papers and their metadata. The main medium of its interaction with users is a command-line interface.
`pygetpapers` has a modular design, which makes maintenance easy and keeps adding support for more repositories simple. The developer documentation has been set up at readthedocs.

## 2. History
`getpapers` is a tool written by Rik Smith-Unna and funded by ContentMine, at https://github.com/ContentMine/getpapers. The OpenVirus community required a Python version, and Ayush Garg wrote an implementation from scratch, with some enhancements.

## 3. Formats supported by pygetpapers
- pygetpapers gives fulltexts in XML and PDF format.
- The metadata for papers can be saved in many formats, including JSON, CSV, and HTML.
- Queries can be saved in the form of an INI configuration file.
- Additional files for papers can also be downloaded. References and citations for papers are given in XML format.
- Log files can be saved in TXT format.

## 5. About the author and community
`pygetpapers` has been developed by Ayush Garg under the guidance of the OpenVirus community and Peter Murray-Rust. Ayush is currently a high school student who believes that the world can only truly progress when knowledge is open and accessible to all.

Testers from OpenVirus have given a lot of useful feedback to Ayush, without which this project would not have been possible.

The community has taken time to ensure that everyone can contribute to this project. So, YOU, the developer, reader, and researcher can also contribute by testing, developing, and sharing.
## 6. Installation

Ensure that `pip` is installed along with Python. Download Python from https://www.python.org/downloads/ and select the option "Add Python to PATH" while installing. Check out https://pip.pypa.io/en/stable/installing/ if you have difficulties installing pip.

To install via pip:

- Ensure the git CLI is installed and available on PATH (see https://git-scm.com/book/en/v2/Getting-Started-Installing-Git).
- Enter the command: `pip install git+git://github.com/petermr/pygetpapers`
- Ensure pygetpapers has been installed by reopening the terminal and typing the command `pygetpapers`. You should see a help message come up.

Alternatively, to install from source:

- Manually clone the repository and run `python setup.py install` from inside the repository directory.
- Ensure pygetpapers has been installed by reopening the terminal and typing the command `pygetpapers`. You should see a help message come up.
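As an additional check, you can confirm the installed version (a minimal sketch; the `-v`/`--version` flag is listed in the Usage section below):

```
pygetpapers -v
```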
## 7. Usage

`pygetpapers` is a command-line tool. You can ask for help by running `pygetpapers --help`:
```
usage: pygetpapers [-h] [--config CONFIG] [-v] [-q QUERY] [-o OUTPUT] [--save_query] [-x] [-p] [-s] [-z]
                   [--references REFERENCES] [-n] [--citations CITATIONS] [-l LOGLEVEL] [-f LOGFILE] [-k LIMIT]
                   [-r RESTART] [-u UPDATE] [--onlyquery] [-c] [--makehtml] [--synonym] [--startdate STARTDATE]
                   [--enddate ENDDATE] [--terms TERMS] [--api API] [--filter FILTER]

Welcome to Pygetpapers version 0.0.6.3. -h or --help for help

optional arguments:
  -h, --help            show this help message and exit
  --config CONFIG       config file path to read query for pygetpapers
  -v, --version         output the version number
  -q QUERY, --query QUERY
                        query string transmitted to repository API. Eg. "Artificial Intelligence" or "Plant Parts".
                        To escape special characters within the quotes, use backslash. In case of nested quotes,
                        ensure that the outer quotes are double and the quotes inside are single. For eg:
                        '(LICENSE:"cc by" OR LICENSE:"cc-by") AND METHODS:"transcriptome assembly"' is wrong.
                        We should instead use "(LICENSE:'cc by' OR LICENSE:'cc-by') AND METHODS:'transcriptome assembly'"
  -o OUTPUT, --output OUTPUT
                        output directory (Default: Folder inside current working directory named )
  --save_query          saves the passed query in a config file
  -x, --xml             download fulltext XMLs if available
  -p, --pdf             download fulltext PDFs if available
  -s, --supp            download supplementary files if available
  -z, --zip             download files from ftp endpoint if available
  --references REFERENCES
                        Download references if available. Requires source for references
                        (AGR,CBA,CTX,ETH,HIR,MED,PAT,PMC,PPR).
  -n, --noexecute       report how many results match the query, but don't actually download anything
  --citations CITATIONS
                        Download citations if available. Requires source for citations
                        (AGR,CBA,CTX,ETH,HIR,MED,PAT,PMC,PPR).
  -l LOGLEVEL, --loglevel LOGLEVEL
                        Provide logging level. Example --log warning <<info,warning,debug,error,critical>>,
                        default='info'
  -f LOGFILE, --logfile LOGFILE
                        save log to specified file in output directory as well as printing to terminal
  -k LIMIT, --limit LIMIT
                        maximum number of hits (default: 100)
  -r RESTART, --restart RESTART
                        Reads the json and makes the xml files. Takes the path to the json as the input
  -u UPDATE, --update UPDATE
                        Updates the corpus by downloading new papers. Takes the path of metadata json file of the
                        original corpus as the input. Requires -k or --limit (If not provided, default will be used)
                        and -q or --query (must be provided) to be given. Takes the path to the json as the input.
  --onlyquery           Saves json file containing the result of the query in storage. The json file can be given to
                        --restart to download the papers later.
  -c, --makecsv         Stores the per-document metadata as csv.
  --makehtml            Stores the per-document metadata as html.
  --synonym             Results contain synonyms as well.
  --startdate STARTDATE
                        Gives papers starting from given date. Format: YYYY-MM-DD
  --enddate ENDDATE     Gives papers till given date. Format: YYYY-MM-DD
  --terms TERMS         Location of the txt file which contains terms separated by a comma, which will be OR'ed
                        among themselves and AND'ed with the query
  --api API             API to search [eupmc, crossref, arxiv, biorxiv, medrxiv, rxivist-bio, rxivist-med]
                        (default: eupmc)
  --filter FILTER       filter by key value pair, passed straight to the crossref api only
```
Queries are built using the `-q` flag. The query format can be found at http://europepmc.org/docs/EBI_Europe_PMC_Web_Service_Reference.pdf. A condensed guide can be found at https://github.com/petermr/pygetpapers/wiki/query-format.
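For example, the nested-quote rule from the help message can be combined with `-n` (described in the tutorial below) to dry-run a complex query without downloading anything; this sketch reuses the example query from the help text:

```
pygetpapers -n -q "(LICENSE:'cc by' OR LICENSE:'cc-by') AND METHODS:'transcriptome assembly'"
```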
## 9. Tutorial

`pygetpapers` was on version 0.0.6.4 when the tutorials were documented.

### 9.1. EPMC (Default API)

`pygetpapers` supports multiple APIs, including eupmc, crossref, arxiv, biorxiv, medrxiv, rxivist-bio, and rxivist-med. By default, it queries EPMC. You can specify the API by using the `--api` flag.
#### 9.1.1. Example Query

Let's break down the following query:

```
pygetpapers -q "(METHOD: 'essential oil')" -k 30 -o "essential_oil_30_1" -c -x
```
| Flag | What it does | In this case, pygetpapers ... |
|---|---|---|
| `-q` | specifies the query | queries for 'essential oil' in the METHODS section |
| `-k` | number of hits (default 100) | limits hits to 30 |
| `-o` | specifies the output directory | outputs to `essential_oil_30_1` |
| `-x` | downloads fulltext XML | |
| `-c` | downloads per-paper metadata into a single CSV | downloads a single CSV file named `europe_pmc.csv` |
`pygetpapers`, by default, writes metadata to a JSON file within:

- the individual paper directory, for the corresponding paper (`eupmc_result.json`)
- the working directory, for all downloaded papers (`eupmc_results.json`)
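Putting this together, the CProject produced by the example query above would look roughly like this (a sketch; the per-paper directory name `PMC1234567` is a hypothetical PMC ID):

```
essential_oil_30_1/
├── eupmc_results.json       # metadata for all downloaded papers
└── PMC1234567/              # one directory per paper
    ├── eupmc_result.json    # metadata for this paper
    └── fulltext.xml         # fulltext XML downloaded with -x
```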
OUTPUT:

```
INFO: Final query is (METHOD: 'essential oil')
INFO: Total Hits are 114683
0it [00:00, ?it/s]WARNING: Keywords not found for paper 2
WARNING: html url not found for paper 11
WARNING: pdf url not found for paper 11
WARNING: Author list not found for paper 20
WARNING: html url not found for paper 26
WARNING: pdf url not found for paper 26
WARNING: Keywords not found for paper 30
1it [00:00, 164.81it/s]
INFO: Saving XML files to C:\Users\shweata\essential_oil_30_1\*\fulltext.xml
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 30/30 [01:41<00:00, 3.37s/it]
```
#### 9.1.2. Scope the number of hits for a query

If you are just scoping the number of hits for a given query, you can use the `-n` flag as shown below.

```
pygetpapers -n -q "essential oil"
```

OUTPUT:

```
INFO: Final query is essential oil
INFO: Total number of hits for the query are 190710
```
#### 9.1.3. Update an existing CProject with new papers by feeding the metadata JSON

The `--update` flag is used to update a CProject with a new set of papers, on the same or a different query.

If, say, you have a corpus of 30 papers on 'essential oil' (like before) and would like to download 20 more papers into the same CProject directory, you use `--update`. The `--update` flag takes the absolute path of the `eupmc_results.json` present in the CProject directory.

INPUT:

```
pygetpapers --update "C:\Users\shweata\essential_oil_30_1\eupmc_results.JSON" -q "lantana" -k 20 -x
```
OUTPUT:

```
INFO: Final query is lantana
INFO: Total Hits are 1909
0it [00:00, ?it/s]WARNING: html url not found for paper 1
WARNING: pdf url not found for paper 1
WARNING: Keywords not found for paper 2
WARNING: Keywords not found for paper 3
WARNING: Author list not found for paper 5
WARNING: Author list not found for paper 8
WARNING: Keywords not found for paper 9
WARNING: Keywords not found for paper 11
WARNING: Keywords not found for paper 19
1it [00:00, 216.37it/s]
INFO: Saving XML files to C:\Users\shweata\essential_oil_30_1\*\fulltext.xml
100%|██████████████████████████████████████████████████████████████████████████████████| 50/50 [01:28<00:00, 1.78s/it]
```
##### 9.1.3.1. How is `--update` different from just downloading x number of papers to the same output directory?

By using `--update` you can be sure that there are no duplicate papers; you can't be sure of that when you just download x more papers into the same output directory.
#### 9.1.4. Restart downloading papers to an existing CProject

The `--restart` flag can be used for two purposes:

- To download papers already in the CProject in a different format. Let's say you downloaded XMLs in the first round; if you now want PDFs for the same set of papers, you use this flag.
- To continue a download from the stage where it broke. This feature comes in handy if you are on a poor connection: you can resume downloading from wherever it cut off.

The `--restart` flag takes the absolute path of the JSON metadata file.

```
pygetpapers --restart "C:\Users\shweata\essential_oil_30_1\eupmc_results.JSON" -q "lantana" -x -p
```
#### 9.1.5. Downloading citations and references for papers, if available

If you want references, a query like the following downloads a references XML file, if available; both `--references` and `--citations` require a source (AGR, CBA, CTX, ETH, HIR, MED, PAT, PMC, PPR).

```
pygetpapers -q "lantana" -k 10 -o "test" -c -x --citations PMC
```
#### 9.1.6. Downloading only the metadata

If you are looking to download just the metadata in the supported formats, `--onlyquery` is the flag to use. It saves the metadata in the output directory. You can then use the `--restart` feature to download the fulltexts for these papers, as shown after the output below.

INPUT:

```
pygetpapers --onlyquery -q "lantana" -k 10 -o "lantana_test" -c
```
OUTPUT:

```
INFO: Final query is lantana
INFO: Total Hits are 1909
0it [00:00, ?it/s]WARNING: html url not found for paper 1
WARNING: pdf url not found for paper 1
WARNING: Keywords not found for paper 2
WARNING: Keywords not found for paper 3
WARNING: Author list not found for paper 5
WARNING: Author list not found for paper 8
WARNING: Keywords not found for paper 9
1it [00:00, 407.69it/s]
```
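You can later feed this saved metadata back to `--restart` to fetch the fulltexts; a sketch, assuming the metadata file follows the `eupmc_results.json` naming used in the earlier examples:

```
pygetpapers --restart "C:\Users\shweata\lantana_test\eupmc_results.json" -q "lantana" -x
```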
#### 9.1.7. Download papers within certain start and end date range

By using `--startdate` and `--enddate`, you can specify the date range within which the papers you want to download were first published.

```
pygetpapers -q "METHOD:essential oil" --startdate "2020-01-02" --enddate "2021-09-09"
```
#### 9.1.8. Saving query for later use

To save a query for later use, use `--save_query`. It saves the query in a `.ini` configuration file in the output directory.

```
pygetpapers -q "lantana" -k 10 -o "lantana_query_config" --save_query
```
#### 9.1.9. Feed query using config.ini file

Using the `config.ini` file you created with `--save_query`, you can re-run the query. To do so, give the `--config` flag the absolute path of the `saved_config.ini` file.

```
pygetpapers --config "C:\Users\shweata\lantana_query_config\saved_config.ini"
```
#### 9.1.10. Querying using a term list

If your query is complex, with multiple ORs, you can use the `--terms` feature. To use it, you will:

- Create a `.txt` file with a list of terms separated by commas (see the sketch after this list).
- Give the `--terms` flag the absolute path of the `.txt` file.

`-q` is optional. The terms are OR'ed with each other and AND'ed with the query, if one is given.
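For example, the terms file used in the run below could look like this (reconstructed from the "Final query" line in the output; a single comma-separated list in a plain `.txt` file):

```
antioxidant, antibacterial, antifungal, antiseptic, antitrichomonal agent
```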
INPUT:

```
pygetpapers -q "essential oil" --terms C:\Users\shweata\essential_oil_terms.txt -k 10 -o "terms_test_essential_oil" -x
```

OUTPUT:

```
C:\Users\shweata>pygetpapers -q "essential oil" --terms C:\Users\shweata\essential_oil_terms.txt -k 10 -o "terms_test_essential_oil"
INFO: Final query is (essential oil AND (antioxidant OR antibacterial OR antifungal OR antiseptic OR antitrichomonal agent))
INFO: Total Hits are 43397
0it [00:00, ?it/s]WARNING: Author list not found for paper 9
1it [00:00, 1064.00it/s]
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:19<00:00, 1.99s/it]
```
You can also use this feature to download papers by their PMC IDs: feed the `.txt` file with comma-separated PMC IDs, as sketched below. Make sure to give a hit number large enough to download all the papers specified in the text file.
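A hypothetical `PMCID_pygetpapers_text.txt` might contain (the IDs below are made up for illustration):

```
PMC8160000, PMC8160001, PMC8160002
```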
INPUT:

```
pygetpapers --terms C:\Users\shweata\PMCID_pygetpapers_text.txt -k 100 -o "PMCID_test"
```
OUTPUT:
### 9.2. Crossref

You can query the Crossref API only for the metadata.

- The metadata format flags are applicable as described in the EPMC tutorial.
- `--terms` and `-q` are also applicable to Crossref.

INPUT:

```
pygetpapers --api crossref -q "essential oil" --terms C:\Users\shweata\essential_oil_terms.txt -k 10 -o "terms_test_essential_oil_crossref_3" -x -c --makehtml
```
OUTPUT:

```
INFO: Final query is (essential oil AND (antioxidant OR antibacterial OR antifungal OR antiseptic OR antitrichomonal agent))
INFO: Making request to crossref
INFO: Got request result from crossref
INFO: Making csv files for metadata at C:\Users\shweata\terms_test_essential_oil_crossref_3
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 185.52it/s]
INFO: Making html files for metadata at C:\Users\shweata\terms_test_essential_oil_crossref_3
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 87.98it/s]
INFO: Making xml files for metadata at C:\Users\shweata\terms_test_essential_oil_crossref_3
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 366.97it/s]
INFO: Wrote metadata file for the query
INFO: Writing metadata file for the papers at C:\Users\shweata\terms_test_essential_oil_crossref_3
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 996.82it/s]
```
### 9.3. arxiv

`pygetpapers` allows you to query the `arxiv` wrapper for metadata and get results in XML format.

INPUT:

```
pygetpapers --api arxiv -k 10 -o arxiv_test_2 -q "artificial intelligence" -x
```
OUTPUT:

```
INFO: Final query is artificial intelligence
INFO: Making request to Arxiv through pygetpapers
INFO: Got request result from Arxiv through pygetpapers
INFO: Requesting 10 results at offset 0
INFO: Requesting page of results
INFO: Got first page; 10 of 10 results available
INFO: Making xml files for metadata at C:\Users\shweata\arxiv_test_2
100%|█████████████████████████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 427.09it/s]
100%|█████████████████████████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 982.89it/s]
```
### 9.4. Biorxiv

Makes fulltext XML and metadata in all supported formats. There is no query option.

INPUT:

```
pygetpapers --api biorxiv --startdate 2021-04-01 -o biorxiv_test -x -c --makehtml -k 20
```
OUTPUT:

```
INFO: Final query is (Default Pygetpapers Query) AND (FIRST_PDATE:[2021-04-01 TO 2021-07-19])
INFO: Making Request to rxiv
INFO: Making csv files for metadata at C:\Users\shweata\biorxiv_test
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 17/17 [00:00<00:00, 253.38it/s]
INFO: Making html files for metadata at C:\Users\shweata\biorxiv_test
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 17/17 [00:00<00:00, 218.29it/s]
INFO: Making xml for paper
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 17/17 [00:33<00:00, 1.99s/it]
INFO: Wrote metadata file for the query
INFO: Writing metadata file for the papers at C:\Users\shweata\biorxiv_test
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 17/17 [00:00<00:00, 1369.32it/s]
```
### 9.5. Medrxiv

INPUT:

```
pygetpapers --api medrxiv --startdate 2021-04-01 -o medrxiv_test_2 -x -c -p --makehtml -k 20
```
OUTPUT:

```
INFO: Final query is (Default Pygetpapers Query) AND (FIRST_PDATE:[2021-04-01 TO 2021-07-19])
INFO: Making Request to rxiv
INFO: Making csv files for metadata at C:\Users\shweata\medrxiv_test_2
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:00<00:00, 168.70it/s]
INFO: Making html files for metadata at C:\Users\shweata\medrxiv_test_2
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:00<00:00, 229.12it/s]
INFO: Making xml for paper
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:38<00:00, 1.92s/it]
INFO: Wrote metadata file for the query
INFO: Writing metadata file for the papers at C:\Users\shweata\medrxiv_test_2
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:00<00:00, 241.71it/s]
```
## 10. Contributions

Contributions are welcome through issues as well as pull requests. For direct contributions, you can mail the author at ayush@science.org.in.

To discuss problems or feature requests, file an issue. For bugs, please include as much information as possible, including the operating system, Python version, and versions of all dependencies.

To contribute, make a pull request. Contributions should include tests for any new features or bug fixes and follow best practices, including PEP 8.
## 11. Feature Requests

To request features, please put them in issues.
## 12. Legal Implications

If you use `pygetpapers`, you should be careful to understand the law as it applies to your content mining, since you assume full responsibility for your actions when using the software. Jurisdiction-specific considerations apply in, among others:
- UK
- Japan
- Ireland
- EU countries
- Israel
- USA
- Canada