When using the deep reference parser in Reach, we got the error:
[note this is from the deep_reference_parser-2019.12.1-py3-none-any.whl version, but I think this issue still stands in the current DRP version]
```
...
  File "/usr/local/lib/python3.6/site-packages/deep_reference_parser/split_section.py", line 78, in split
    doc = nlp(text)
  File "/usr/local/lib/python3.6/site-packages/spacy/language.py", line 392, in __call__
    Errors.E088.format(length=len(text), max_length=self.max_length)
ValueError: [E088] Text of length 1154040 exceeds maximum of 1000000. The v2.x parser and NER models require roughly 1GB of temporary memory per 100,000 characters in the input. This means long texts may cause memory allocation errors. If you're not using the parser or NER, it's probably safe to increase the `nlp.max_length` limit. The limit is in number of characters, so you can check whether your inputs are too long by checking `len(text)`.
```
From looking at https://datascience.stackexchange.com/questions/38745/increasing-spacy-max-nlp-limit, I think that for `split.py`, `split_parse.py` and `parse.py` we just need to raise `nlp.max_length` after the spaCy model is loaded (see the sketch below).
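A minimal sketch of the change, assuming it is applied wherever each of those modules creates its `nlp` object; the model name and file handling here are placeholders, not DRP's actual code:

```python
import spacy

# Placeholder model; deep_reference_parser may load a different pipeline.
nlp = spacy.load("en_core_web_sm")

# Stand-in for the long Reach document that triggered E088.
with open("long_document.txt") as f:
    text = f.read()

# spaCy's default max_length is 1,000,000 characters. Raise it to cover the
# longest input we expect. As the E088 message notes, this is safe when the
# memory-hungry components (parser, NER) are not being run on the text.
nlp.max_length = max(nlp.max_length, len(text) + 1)

doc = nlp(text)
```

If the pipeline does run the parser or NER, raising the limit only trades the `ValueError` for possible memory allocation errors, so splitting the input into chunks and processing them separately may be the safer fix.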