
Extract disciplines from papers #20

Open
andreww opened this issue Mar 31, 2018 · 3 comments
Comments


andreww commented Mar 31, 2018

For some planned use-cases (e.g. #19) we need to be able to determine the research area (discipline) of each paper we process. Hopefully this is exposed in the data we gather from Europe PMC, in which case it can be passed into the URL-processing part of the code so we can tag each processed URL with "used by research area" or similar. If not, we probably need some other way of gathering this information. Can we use the DOI itself to say anything (e.g. by resolving the journal and going from there)?
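Even if the discipline itself is not exposed, the journal can at least be resolved from the DOI via the Europe PMC REST search API. A minimal sketch (the `journalTitle` field matches what that API returns; the helper names and the sample payload values are invented for illustration):

```python
import json
from urllib.parse import urlencode

EUROPE_PMC_SEARCH = "https://www.ebi.ac.uk/europepmc/webservices/rest/search"

def build_doi_query(doi):
    """Build a Europe PMC search URL that looks a paper up by DOI."""
    params = {"query": f'DOI:"{doi}"', "format": "json"}
    return f"{EUROPE_PMC_SEARCH}?{urlencode(params)}"

def journal_title(payload):
    """Pull the journal title out of a Europe PMC search response, if present."""
    results = payload.get("resultList", {}).get("result", [])
    return results[0].get("journalTitle") if results else None

# Example response fragment (shape follows the real API; values are made up):
sample = json.loads(
    '{"resultList": {"result": ['
    '{"doi": "10.1038/s41467-018-03297-7", "journalTitle": "Nature Communications"}'
    ']}}'
)
print(journal_title(sample))  # Nature Communications
```

Fetching `build_doi_query(doi)` and feeding the parsed JSON to `journal_title` would then give one input for a journal-to-discipline mapping.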

One potentially important question: should we insist on each paper belonging to a single discipline, or do we need to allow each paper to belong to multiple disciplines? If we allow multiple disciplines, how should we represent this in the "output" data?
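One way to keep that decision open in the output format: tag each processed URL with a list of (discipline, confidence) pairs, so the single-discipline policy is just the special case of taking the top entry. A sketch (the record shape and field names here are invented, not part of the existing code):

```python
from dataclasses import dataclass, field

@dataclass
class ProcessedUrl:
    url: str
    # Each entry is a (discipline, confidence) pair; a paper may contribute several.
    disciplines: list = field(default_factory=list)

    def best_discipline(self):
        """Collapse to a single discipline by taking the highest-confidence tag."""
        if not self.disciplines:
            return None
        return max(self.disciplines, key=lambda pair: pair[1])[0]

record = ProcessedUrl(
    "https://example.org/data.csv",
    disciplines=[("Geophysics", 0.7), ("Computer Science", 0.2)],
)
print(record.best_discipline())  # Geophysics
```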

npch (Member) commented Apr 2, 2018

Looking at what we get, the only thing contentmine's eupmc summary gives us is the journal title.

Sampling a few of the papers, it appears that some journals don't use keywords, and for those that do (e.g. Nature Communications, see https://www.nature.com/articles/s41467-018-03297-7) the information is not in the full-text XML file.
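If the journal title really is the only available signal, a hand-maintained lookup table might be enough as a first pass. A sketch (the titles and discipline labels in the table are illustrative; a real table would need curating):

```python
# Illustrative journal-title -> disciplines mapping.
JOURNAL_DISCIPLINES = {
    "nature communications": ["Multidisciplinary"],
    "journal of open research software": ["Computer Science"],
}

def disciplines_for_journal(title):
    """Case-insensitive lookup; unknown journals get an empty list."""
    return JOURNAL_DISCIPLINES.get(title.strip().lower(), [])

print(disciplines_for_journal("Nature Communications"))  # ['Multidisciplinary']
print(disciplines_for_journal("Unknown Journal"))        # []
```

Returning a list (rather than a single label) keeps this compatible with the multiple-disciplines question above.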


astruck commented Apr 25, 2018

I used "ACT DL" in the past but haven't touched it for a few years. The API classifies text according to the Dewey Decimal Classification (DDC). As the (XML or JSON) response to this query example shows, quite a few disciplines are suggested and each is assigned a confidence level. I would start by allowing just one discipline per paper, using the `best` attribute from the Python library.
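The selection described here (several scored DDC suggestions, keep the single best) is easy to reproduce on the raw response even without the library. Note the response shape below is a guess for illustration, not the actual ACT DL schema:

```python
# Hypothetical parsed response: DDC notation, label, and confidence per suggestion.
suggestions = [
    {"ddc": "550", "label": "Earth sciences", "confidence": 0.61},
    {"ddc": "004", "label": "Data processing & computer science", "confidence": 0.23},
    {"ddc": "530", "label": "Physics", "confidence": 0.16},
]

def best_ddc(suggestions):
    """Return the highest-confidence DDC suggestion (mirrors a `best` attribute)."""
    return max(suggestions, key=lambda s: s["confidence"]) if suggestions else None

print(best_ddc(suggestions)["label"])  # Earth sciences
```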

@ivyleavedtoadflax

I've had some success clustering papers together using an autoencoder. However, it is hard to know how accurate things are without labelled training data. Keywords are incredibly unreliable.
