Merge branch 'dev' into 'main'
New release of docs

See merge request hmc/hmc-public/unhide/documentation!7
broeder-j committed Mar 21, 2024
2 parents ac056d2 + 81fb709 commit de8c732
Showing 12 changed files with 770 additions and 697 deletions.
2 changes: 1 addition & 1 deletion docs/_toc.yml
@@ -9,7 +9,7 @@ parts:
numbered: False
chapters:
- file: introduction/about.md
title: "About"
title: "About & Mission"
- file: introduction/implementation.md
title: "Implementation overview"
- file: introduction/data_sources.md
2 changes: 1 addition & 1 deletion docs/dev_guide/architecture/07_deployment_view.md
@@ -99,7 +99,7 @@ Mapping of Building Blocks to Infrastructure
: *\<description of the mapping>*
:::

## Infrastructure Level 1 {#_infrastructure_level_1}
## Infrastructure Level 1


UnHIDE is deployed on [HDF-cloud](https://www.fz-juelich.de/en/ias/jsc/systems/scientific-clouds/hdf-cloud)
709 changes: 364 additions & 345 deletions docs/diagrams/make_svgs.ipynb

Large diffs are not rendered by default.

20 changes: 10 additions & 10 deletions docs/diagrams/unhide_deployment_overview.d2
@@ -1,12 +1,12 @@
title: UnHIDE deployment {
title: Helmholtz Knowledge graph deployment {
shape: text
near: top-center
style: {
font-size: 75
}
}

hdfcloud: HDF-Cloud{
hdfcloud: JSC-Cloud{
style: {
font-size: 55
}
@@ -55,16 +55,16 @@ hdfcloud: HDF-Cloud{
}
}

jena: Apache Jena {
virtuoso: OpenLink Virtuoso {
style: {
font-size: 55
}
icon: https://icons.terrastruct.com/dev%2Fdocker.svg
graph: UnHIDE Graph {
icon: https://icons.terrastruct.com/azure%2FManagement%20and%20Governance%20Service%20Color%2FResource%20Graph%20Explorer.svg
}
sparql: Fuseki SPARQL API {
icon: ./sparql.svg
sparql: OpenLink SPARQL API {
icon: ./virtuoso_logo.png
}
}

@@ -94,11 +94,11 @@ hdfcloud: HDF-Cloud{

store -> indexer: reads from
pipe -> store: stores data
jena <-> store: store & retrieve graph
solr <-> store: stores & retrieve index
virtuoso <-> store.UnHIDE Graph files: store & retrieve graph
solr <-> store.SOLR Index: store & retrieve index
solr <- api: queries
Jena.graph <- jena.sparql: queries
jena.sparql <-> nginx: routes
virtuoso.graph <- virtuoso.sparql: queries
virtuoso.sparql <-> nginx: routes
letsencrypt <-> nginx: encrypts
web -> api: requests
web <-> nginx: routes
@@ -126,4 +126,4 @@ Internet {
domain3: sparql.unhide.helmholtz-metadaten.de
}

hdfcloud.cloud.nginx <-> Internet: handles requests
hdfcloud.cloud.nginx <-> Internet: handles requests
Binary file added docs/diagrams/unhide_deployment_overview.pdf
Binary file not shown.
660 changes: 331 additions & 329 deletions docs/diagrams/unhide_deployment_overview.svg
Binary file added docs/diagrams/virtuoso_logo.png
Binary file added docs/images/hzb-logo-a4-rgb.png
9 changes: 6 additions & 3 deletions docs/intro.md
@@ -39,19 +39,22 @@ With the implementation of the Helmholtz-KG, unHIDE will create substantial addi

## Contributors and Partners

% [<img src="./images/hzb-logo-a4-rgb.png" alt="HZB" width=40% height=40%>](https://www.helmholtz-berlin.de/)

[<img style="vertical-align: middle;" alt="FZJ" src='https://github.com/Materials-Data-Science-and-Informatics/Logos/raw/main/FZJ/FZJ.png' width=20% height=20%>](https://fz-juelich.de)
[<img style="vertical-align: left;" alt="FZJ" src='https://github.com/Materials-Data-Science-and-Informatics/Logos/raw/main/FZJ/FZJ.png' width=60% height=60%>](https://fz-juelich.de)
![HZB](./images/hzb-logo-a4-rgb.png)


## Acknowledgements

## Acknowledgements

[<img style="vertical-align: middle;" alt="HMC Logo" src='https://github.com/Materials-Data-Science-and-Informatics/Logos/raw/main/HMC/HMC_Logo_M.png' width=50% height=50%>](https://helmholtz-metadaten.de)

This project was developed and funded by the Helmholtz Metadata Collaboration
(HMC), an incubator-platform of the Helmholtz Association within the framework of the
Information and Data Science strategic initiative.

[<img style="vertical-align: middle;" alt="HMC Logo" src='https://github.com/Materials-Data-Science-and-Informatics/Logos/raw/main/HMC/HMC_Logo_M.png' width=50% height=50%>](https://helmholtz-metadaten.de)


## References
- [1] https://5stardata.info/en/
14 changes: 12 additions & 2 deletions docs/introduction/about.md
@@ -1,3 +1,13 @@
# About UnHIDE
# About UnHIDE and its mission

![unhide_overview](../images/unhide_overview.png)
## Mission

The unHIDE initiative is one part of the efforts of the Helmholtz Metadata Collaboration (HMC) to improve the quality, knowledge management, and preservation of the research output of the Helmholtz association by means of metadata. This is accomplished by making research output `FAIR` through better metadata; put differently, by building, to a certain extent, a semantic web encompassing Helmholtz research.

With the unHIDE initiative our goal is to improve metadata at the source and to make data providers as well as scientists more aware of what metadata they put out on the web, in what form, and at what quality.
For this we create and expose the Helmholtz knowledge graph, which contains open high-level metadata exposed by different Helmholtz infrastructures. Such a graph also enables services that serve the needs of certain stakeholder groups and empower their work in different ways.

Beyond the knowledge graph, unHIDE communicates and works together with Helmholtz infrastructures to improve metadata, or to make it available in the first place, through consulting, help, and fostering networking between the infrastructures and the respective experts.


![unhide_overview](../images/unhide_overview.png)
20 changes: 16 additions & 4 deletions docs/tech/datapipe.md
@@ -1,8 +1,8 @@
# Data pipeline

In UnHIDE data is harvested from connected providers and partners.
Then data is 'uplifted', i.e semantically enriched and or completed,
where possible from aggregated data or schema.org semantics.
In UnHIDE metadata about research outputs is harvested from connected providers and partners.
Then the original metadata is 'uplifted', i.e. semantically enriched and/or completed,
where possible, for example from aggregated data or schema.org semantics.
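
As a concrete, invented illustration of such an uplift (the record contents are made up for this sketch):

```python
# Invented example of an uplift: a sparse schema.org record is completed
# with information available from previously aggregated data.
original = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "Example dataset",
    "author": {"@id": "https://orcid.org/0000-0000-0000-0000"},
}

uplifted = dict(original)
# Complete the bare author reference into a full Person node.
uplifted["author"] = {
    "@id": "https://orcid.org/0000-0000-0000-0000",
    "@type": "Person",
    "affiliation": {"@type": "Organization", "name": "Example Helmholtz centre"},
}
print(uplifted)
```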

## Overview

@@ -36,4 +36,16 @@ The second direction is there to provide full text search on the data to end use
For this an index of each uplifted data record is constructed and uploaded into a single SOLR index,
which is exposed to a certain extent via a custom FastAPI service. A web front end using the JavaScript library
React provides a user interface for the full text search and supports special use cases as a service
to certain stakeholder groups.
to certain stakeholder groups.
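
A client-side full-text query against such a stack could look roughly like the following sketch; the endpoint path and parameter names are invented for illustration and are not the actual unHIDE API:

```python
# Client-side sketch only; endpoint and parameter names are invented,
# not the actual unHIDE API.
import requests

resp = requests.get(
    "https://search.example.org/api/search",  # hypothetical endpoint
    params={"q": "neutron scattering", "rows": 10},
    timeout=30,
)
resp.raise_for_status()
for hit in resp.json().get("results", []):
    print(hit.get("name"))
```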


The technical implementation is currently a minimal running version: each
component and functionality is exposed through the command line interface `hmc-unhide`, and
cron jobs run them from time to time. On the deployment instance this can be run monthly or
weekly. In the longer term, the pipeline orchestration itself should become more sophisticated.
For this one could deploy a workflow manager with provenance tracking, like AiiDA,
or one with less overhead, depending on the needs, especially if one wants to move to a more
event-based system with more fault tolerance for errors in individual records or data sources.
Currently, in the minimal implementation there is the risk that an uncaught failure in a subtask
fails a larger part of the pipeline; such a failure is only logged and has to be resolved manually.
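
As a rough sketch of what such a cron-driven run could look like; the `hmc-unhide` subcommand names below are assumptions, not the actual CLI:

```python
# Illustrative cron-driven wrapper; the hmc-unhide subcommand names are
# assumptions, see the actual CLI for the real ones. Running each source
# in its own subprocess keeps an uncaught failure in one source from
# taking down the whole pipeline run.
import logging
import subprocess

logging.basicConfig(level=logging.INFO)

SOURCES = ["sitemap", "oai", "git", "datacite", "feed", "indico"]

for source in SOURCES:
    try:
        subprocess.run(
            ["hmc-unhide", "harvest", "--source", source],  # hypothetical
            check=True,
            timeout=6 * 3600,
        )
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
        # Log and continue with the next source instead of aborting.
        logging.exception("Harvesting %s failed", source)
```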

31 changes: 29 additions & 2 deletions docs/tech/harvesting.md
@@ -1,3 +1,30 @@
# Data harvesting
# Data harvesting: extracting metadata from the web

How does UnHIDE harvest data?
How does UnHIDE harvest data?

Data harvesting and mining for the knowledge graph is done by `Harvester classes`.
For each interface, a specific Harvester class should be implemented.
All Harvester classes should inherit from existing Harvesters or the [`BaseHarvester`](https://codebase.helmholtz.cloud/hmc/hmc-public/unhide/data_harvesting/-/blob/main/data_harvesting/base_harvester.py?ref_type=heads), which currently specifies that each harvester (see the sketch after this list):

1. Needs a `run` method
2. Can read from the [`config.yml`](https://codebase.helmholtz.cloud/hmc/hmc-public/unhide/data_harvesting/-/blob/main/data_harvesting/configs/config.yaml?ref_type=heads)
3. Reads the time it was last run from a `<harvesterclass>.last_run` file
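
A minimal sketch of such a subclass follows; only the `run` method, the `config.yml`, and the `.last_run` file are documented above, so the attribute and helper names below are assumptions:

```python
# Hypothetical harvester sketch; attribute names on the base class are
# assumptions, see base_harvester.py for the actual interface.
from data_harvesting.base_harvester import BaseHarvester


def fetch_records(url: str, since: str) -> list[dict]:
    """Placeholder for the interface-specific retrieval logic."""
    return []


class ExampleHarvester(BaseHarvester):
    """Harvests records from a hypothetical JSON endpoint."""

    def run(self) -> None:
        # Assumed attributes: the parsed config section for this harvester
        # and the timestamp read from <harvesterclass>.last_run.
        for source in self.config.get("example", {}).get("sources", []):
            for record in fetch_records(source["url"], self.last_run):
                print("harvested:", record)
```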

Implemented harvester classes include:

| Name (CLI) | Class Name | Interface | Comment |
|-------------|------------|-----------|---------|
| sitemap | SitemapHarvester | sitemaps | Selecting record links from the sitemap requires expression matching. Relies on the advertools lib. |
| oai | OAIHarvester | OAI-PMH | Relies on the oai lib. For the library providers, Dublin Core is converted to schema.org. |
| git | GitHarvester | Git, GitLab/GitHub API | Relies on codemetapy and codemeta-harvester as well as the GitLab/GitHub APIs. |
| datacite | DataciteHarvester | REST API & GraphQL endpoint | schema.org extracted through content negotiation. |
| feed | FeedHarvester | RSS & Atom feeds | Relies on the atoma library and only works if schema.org metadata can be extracted from the landing pages. Can only get recent data; useful for event metadata. |
| indico | IndicoHarvester | Indico REST API | Directly extracts schema.org metadata through the API; requires an access token. |

JSON-LD metadata from the landing pages of records is extracted via the `extruct` library if it cannot be retrieved directly through some standardized interface.
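
For instance, a rough sketch of such an extraction (the landing-page URL is a placeholder):

```python
# Rough sketch of JSON-LD extraction from a record landing page; the URL
# is a placeholder.
import extruct
import requests

url = "https://example.org/dataset/123"  # placeholder landing page
html = requests.get(url, timeout=30).text
data = extruct.extract(html, base_url=url, syntaxes=["json-ld"])
for block in data["json-ld"]:
    print(block.get("@type"))
```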

All harvesters are exposed through the `hmc-unhide` command line interface.
By default they store the extracted metadata in the internal data model [`LinkedDataObject`](https://codebase.helmholtz.cloud/hmc/hmc-public/unhide/data_harvesting/-/blob/main/data_harvesting/data_model.py?ref_type=heads),
whose serialization carries some provenance information alongside the original source data and the uplifted data, and which provides methods for validation.
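
Schematically, a serialized record could carry fields along the following lines; the field names here are assumptions, and the actual model is defined in `data_model.py`:

```python
# Schematic shape of a serialized LinkedDataObject as described above;
# field names are assumptions, see data_model.py for the real model.
record = {
    "metadata": {  # provenance information
        "harvester": "sitemap",
        "harvested_at": "2024-03-21T00:00:00Z",
    },
    "original": {  # metadata as retrieved from the source
        "@context": "https://schema.org/",
        "@type": "Dataset",
        "name": "Example dataset",
    },
    "derived": {  # uplifted version of the same record
        "@context": "https://schema.org/",
        "@type": "Dataset",
        "name": "Example dataset",
        "license": "https://creativecommons.org/licenses/by/4.0/",
    },
}
print(record["metadata"]["harvester"])
```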

A single central YAML configuration file, [`config.yml`](https://codebase.helmholtz.cloud/hmc/hmc-public/unhide/data_harvesting/-/blob/main/data_harvesting/configs/config.yaml?ref_type=heads), specifies for each harvester class the sources to harvest, together with harvester- or source-specific configuration.
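
A hypothetical illustration of that per-harvester layout; the actual keys and sources are defined in the repository's `configs/config.yaml`:

```python
# Hypothetical illustration of the per-harvester layout of config.yaml;
# the actual keys are defined in the repository's configs/config.yaml.
import yaml

example = """
sitemap:
  sources:
    example-center:
      url: https://example.org/sitemap.xml
oai:
  sources:
    example-library:
      url: https://example.org/oai2d
      metadata_prefix: oai_dc
"""
config = yaml.safe_load(example)
print(config["oai"]["sources"]["example-library"]["url"])
```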
