Merge pull request #10 from allenai/python311-support
Upgrading to python 3.11
kyleclo authored Sep 7, 2023
2 parents 8dcb080 + 3922ee6 commit a41c10d
Showing 6 changed files with 88 additions and 42 deletions.
3 changes: 2 additions & 1 deletion .github/workflows/papermage-ci.yml
@@ -14,7 +14,7 @@ jobs:
    runs-on: ubuntu-latest
    strategy:
      matrix:
-        python-version: ["3.9", "3.8"]
+        python-version: ["3.11", "3.9", "3.8"]

    steps:
    - uses: actions/checkout@v2
@@ -28,5 +28,6 @@ jobs:
      run: |
        sudo apt-get update
        sudo apt-get -y install poppler-utils
+        pip install --upgrade pip
        pip install -e .[dev,predictors,visualizers]
        pytest --cov-fail-under=42 --log-disable=pdfminer.psparser --log-disable=pdfminer.pdfinterp --log-disable=pdfminer.cmapdb --log-disable=pdfminer.pdfdocument --log-disable=pdfminer.pdffont --log-disable=pdfminer.pdfparser --log-disable=pdfminer.converter --log-disable=pdfminer.pdfpage
87 changes: 63 additions & 24 deletions README.md
@@ -30,15 +30,20 @@ python -m pytest -k 'TestPDFPlumberParser' --no-cov -n0
## Quick start

#### 1. Create a Document for the first time from a PDF
-TODO
+```python
+from papermage.recipes import CoreRecipe
+recipe = CoreRecipe()
+doc = recipe.run("tests/fixtures/papermage.pdf")
+```

#### 2. Understanding the output: the `Document` class

What is a `Document`? At minimum, it is some text, saved under the `.symbols` field, which is just a `<str>`. For example:

```python
-doc.symbols
-> "Language Models as Knowledge Bases?\nFabio Petroni1 Tim Rockt..."
+> doc.symbols
+"PaperMage: A Unified Toolkit for Processing, Representing, and\nManipulating Visually-..."
```

But this library is really useful when you have multiple different ways of segmenting `.symbols`. For example, segmenting the paper into Pages, and then each page into Rows:
@@ -47,63 +52,96 @@
```python
for page in doc.pages:
    print(f'\n=== PAGE: {page.id} ===\n\n')
    for row in page.rows:
-        print(row.symbols)
+        print(row.text)

-> ...
-> === PAGE: 5 ===
-> ['tence x, s′ will be linked to s and o′ to o. In']
-> ['practice, this means RE can return the correct so-']
-> ['lution o if any relation instance of the right type']
-> ['was extracted from x, regardless of whether it has']
-> ...
+...
+=== PAGE: 5 ===
+
+4
+Vignette: Building an Attributed QA
+System for Scientific Papers
+How could researchers leverage papermage for
+their research? Here, we walk through a user sce-
+nario in which a researcher (Lucy) is prototyping
+an attributed QA system for science.
+System Design.
+Drawing inspiration from Ko
+...
```

This shows two nice aspects of this library:

-* `Document` provides iterables for different segmentations of `symbols`. Options include things like `pages, tokens, rows, sents, paragraphs, sections, ...`. Not every Parser will provide every segmentation, though.
+* `Document` provides iterables for different segmentations of `symbols`. Options include things like `pages, tokens, rows, sentences, paragraphs, sections, ...`. Not every Parser will provide every segmentation, though.

* Each one of these segments (in our library, we call them `Entity` objects) is aware of (and can access) other segment types. For example, you can call `page.rows` to get all Rows that intersect a particular Page. Or you can call `sent.tokens` to get all Tokens that intersect a particular Sentence. Or you can call `sent.rows` to get the Row(s) that intersect a particular Sentence. These indexes are built *dynamically* when the `Document` is created and each time a new `Entity` type is added. In the extreme, as long as those fields are available in the Document, you can write:

```python
for page in doc.pages:
    for paragraph in page.paragraphs:
-        for sent in paragraph.sents:
+        for sent in paragraph.sentences:
            for row in sent.rows:
                ...
```
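
To make the cross-type access concrete, here is a minimal sketch, assuming the `sentences`, `tokens`, and `rows` fields are present and that field layers support indexing:

```python
# Jump between entity types directly, without nesting loops.
# Each accessor returns the entities that *intersect* the caller.
first_sentence = doc.sentences[0]
print(first_sentence.text)                      # the sentence itself
print([t.text for t in first_sentence.tokens])  # tokens intersecting it
print([r.text for r in first_sentence.rows])    # row(s) intersecting it
```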

You can check which fields are available in a Document via:

```python
-doc.fields
-> ['pages', 'tokens', 'rows']
+> doc.fields
+['tokens',
+ 'rows',
+ 'pages',
+ 'words',
+ 'sentences',
+ 'blocks',
+ 'vila_entities',
+ 'titles',
+ 'paragraphs',
+ 'authors',
+ 'abstracts',
+ 'keywords',
+ 'sections',
+ 'lists',
+ 'bibliographies',
+ 'equations',
+ 'algorithms',
+ 'figures',
+ 'tables',
+ 'captions',
+ 'headers',
+ 'footers',
+ 'footnotes',
+ 'symbols',
+ 'images',
+ 'metadata',
+ 'entities',
+ 'relations']
```
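
Every field in this list is exposed as an attribute on the `Document`. A small sketch of accessing them, assuming the `CoreRecipe` above produced the `titles` and `abstracts` fields:

```python
# Each field holds Entity objects; .text gives their string content.
print(doc.titles[0].text)      # the detected paper title
print(doc.abstracts[0].text)   # the detected abstract
```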

#### 3. Understanding intersection of Entities

Note that `Entity`s don't necessarily perfectly nest each other. For example, what happens if you run:

```python
-for sent in doc.sents:
+for sent in doc.sentences:
    for row in sent.rows:
-        print([token.symbols for token in row.tokens])
+        print([token.text for token in row.tokens])
```

Tokens that are *outside* each sentence can still be printed. This is because when we jump from a sentence to its rows, we are looking for *all* rows that have *any* overlap with the sentence. Rows can extend beyond sentence boundaries, and as such, can contain tokens outside that sentence.
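
If you want only the tokens that fall strictly within each sentence, you can intersect spans yourself. A minimal sketch, assuming each `Entity` exposes `.start` and `.end` character offsets over `doc.symbols`:

```python
# Keep only tokens whose character span lies inside the sentence's span.
for sent in doc.sentences:
    for row in sent.rows:
        inside = [t for t in row.tokens
                  if t.start >= sent.start and t.end <= sent.end]
        print([t.text for t in inside])
```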

Here's another example:
```python
for page in doc.pages:
-    print([sent.symbols for sent in page.sents])
+    print([sent.text for sent in page.sentences])
```

Sentences can cross page boundaries. As such, adjacent pages may end up printing the same sentence.
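
If you need each sentence exactly once, iterate the sentence layer directly instead of going through pages:

```python
# Visits every sentence once, regardless of page breaks.
for sent in doc.sentences:
    print(sent.text)
```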

But rows and tokens adhere strictly to page boundaries, and thus will not repeat when printed across pages:
```python
for page in doc.pages:
-    print([row.symbols for row in page.rows])
-    print([token.symbols for token in page.tokens])
+    print([row.text for row in page.rows])
+    print([token.text for token in page.tokens])
```

A key aspect of using this library is understanding how these different fields are defined & anticipating how they might interact with each other. We try to make decisions that are intuitive, but we do ask users to experiment with fields to build up familiarity.
@@ -145,16 +183,15 @@ with open('filename.json', 'w') as f_out:
will produce something akin to:
```python
{
-    "symbols": "Language Models as Knowledge Bases?\nFabio Petroni1 Tim Rockt...",
+    "symbols": "PaperMage: A Unified Toolkit for Processing, Representing, an...",
    "entities": {
        "images": [...],
        "rows": [...],
        "tokens": [...],
        "words": [...],
        "blocks": [...],
-        "vila_span_groups": [...]
+        "sentences": [...]
    },
-    "relations": {...},
    "metadata": {...}
}
@@ -175,4 +212,6 @@ with open('filename.json') as f_in:

Note: A common pattern for adding fields to a document is to load in a previously saved document, run some additional `Predictors` on it, and save the result.
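
A minimal sketch of that pattern, assuming the `papermage.magelib` import path; the predictor class and the `annotate` call are placeholders for whichever predictor and attach-method your version provides, while `to_json`/`from_json` follow the snippets above:

```python
import json

from papermage.magelib import Document  # import path is an assumption

# Load a previously saved document.
with open('filename.json') as f_in:
    doc = Document.from_json(json.load(f_in))

# Run an additional predictor and attach its output as a new field.
# `SomePredictor` and `annotate` are hypothetical names for illustration:
# entities = SomePredictor().predict(doc)
# doc.annotate(entities)

# Save the enriched document back out.
with open('filename.json', 'w') as f_out:
    json.dump(doc.to_json(), f_out, indent=4)
```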

-See `papermage/predictors/README.md` for more information about training customer predictors on your own data.
+See `papermage/predictors/README.md` for more information about training custom predictors on your own data.

+See `papermage/examples/quick_start_demo.ipynb` for a notebook walking through some more usage patterns.
2 changes: 1 addition & 1 deletion examples/quick_start_demo.ipynb
@@ -258,7 +258,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.9.17"
},
"orig_nbformat": 4
},
30 changes: 15 additions & 15 deletions pyproject.toml
@@ -1,10 +1,10 @@
[project]
name = 'papermage'
-version = '0.11.1'
+version = '0.12.0'
description = 'Papermage. Casting magic over scientific PDFs.'
license = {text = 'Apache-2.0'}
readme = 'README.md'
-requires-python = '>=3.8,<3.11'
+requires-python = '>=3.8'
dependencies = [
'tqdm',
'pdf2image',
@@ -13,7 +13,7 @@ dependencies = [
'numpy>=1.23.2',
'scipy>=1.9.0',
'pandas<2',
-'ncls==0.0.66',
+'ncls==0.0.68',
'necessary>=0.3.2',
'grobid-client-python==0.0.5',
'charset-normalizer',
@@ -117,26 +117,26 @@ visualizers = [
]
predictors = [
'thefuzz[speedup]',
-'scikit-learn==1.1.2',
-'xgboost==1.6.2',
-'spacy==3.4.1',
+'scikit-learn>=1.3.0',
+'xgboost>=1.6.2',
+'spacy>=3.4.2',
'pysbd==0.3.4',
-'tokenizers==0.13.1',
-'torch==1.12.1',
-'torchvision==0.13.1',
+'tokenizers==0.13.3',
+'torch>=2.0.1',
+'torchvision>=0.15.2',
'layoutparser==0.3.4',
-'transformers',
+'transformers==4.31.0',
'smashed==0.1.10',
-'pytorch-lightning==2.0.5',
-'springs==1.12.3',
-'wandb==0.15.7',
+'pytorch-lightning>=2.0.5',
+'springs==1.13.0',
+'wandb>=0.15.7',
'seqeval==1.2.2',
'effdet==0.3.0',
-'decontext@git+https://github.com/bnewm0609/qa-decontextualization@cache_invalidation',
+'decontext==0.1.6',
'vila==0.5.0'
]
production = [
-'optimum[onnxruntime]==1.4.0'
+'optimum[onnxruntime]==1.10.0'
]

[tool.pytest.ini_options]
2 changes: 1 addition & 1 deletion requirements.txt
@@ -1 +1 @@
--e .[dev,predictors]
+-e .[dev,predictors,visualizers]
6 changes: 6 additions & 0 deletions tests/test_predictors/test_span_qa_predictor.py
@@ -1,3 +1,9 @@
"""
@benjaminn
"""

import json
import os
import pathlib
