
Merge pull request #28 from pepkit/fix_typos
Actually fix typos
donaldcampbelljr authored Apr 19, 2024
2 parents 57a7336 + b36a203 commit eb5b081
Showing 46 changed files with 77 additions and 70 deletions.
5 changes: 3 additions & 2 deletions .github/workflows/spellcheck.yaml
@@ -8,8 +8,9 @@ on:
       - master
 
 jobs:
-  deploy:
+  check:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v4
-      - uses: crate-ci/typos@v1.19.0
+      - name: Run spell checker
+        uses: crate-ci/typos@v1.19.0
6 changes: 6 additions & 0 deletions _typos.toml
@@ -0,0 +1,6 @@
+[files]
+extend-exclude = ["*.ipynb", "*.svg"]
+
+[default.extend-words]
+opf = "opf"
+PN="PN"
4 changes: 2 additions & 2 deletions docs/eido/code/cli.md
@@ -80,7 +80,7 @@ eido validate peppro_paper.yaml -s http://schema.databio.org/pep/2.0.0.yaml -e
 Validation successful
 
 
-Any PEP should validate against that schema, which describes generic PEP format. We can go one step further and validate it against the PEPPRO schema, which describes Proseq projects specfically for this pipeline:
+Any PEP should validate against that schema, which describes generic PEP format. We can go one step further and validate it against the PEPPRO schema, which describes Proseq projects specifically for this pipeline:
 
 
 ```bash
@@ -144,7 +144,7 @@ eido validate -h
 
 Let's use `eido convert` command to convert PEPs to a variety of different formats. `eido` supports a plugin system, which can be used by other tool developers to create Python plugin functions that save PEPs in a desired format. Please refer to the documentation for more details. For now let's focus on a couple of plugins that are built-in in `eido`.
 
-To see what plugins are currently avaialable in your Python environment call:
+To see what plugins are currently available in your Python environment call:
 
 
 ```bash
6 changes: 3 additions & 3 deletions docs/eido/code/demo.md
@@ -129,7 +129,7 @@ required:
 - samples
 ```
 
-PEPs to succesfully validate against this schema will need to fulfill all the generic PEP2.0.0 schema requirements _and_ fulfill the new `my_numeric_attribute` requirement.
+PEPs to successfully validate against this schema will need to fulfill all the generic PEP2.0.0 schema requirements _and_ fulfill the new `my_numeric_attribute` requirement.
 
 ### How importing works
 
@@ -306,7 +306,7 @@ validate_project(project=p, schema="../tests/data/schemas/test_schema_invalid.ya
 
 ## Config validation
 
-Similarily, the config part of the PEP can be validated; the function inputs remain the same
+Similarly, the config part of the PEP can be validated; the function inputs remain the same
 
 
 ```python
@@ -326,7 +326,7 @@ validate_sample(
 
 ## Output details
 
-As depicted above the error raised by the `jsonschema` package is very detailed. That's because the entire validated PEP is printed out for the user reference. Since it can get overwhelming in case of the multi sample PEPs each of the `eido` functions presented above privide a way to limit the output to just the general information indicating the unmet schema requirements
+As depicted above the error raised by the `jsonschema` package is very detailed. That's because the entire validated PEP is printed out for the user reference. Since it can get overwhelming in case of the multi sample PEPs each of the `eido` functions presented above provide a way to limit the output to just the general information indicating the unmet schema requirements
 
 
 ```python
8 changes: 4 additions & 4 deletions docs/eido/nunjucks.js

Some generated files are not rendered by default.

2 changes: 1 addition & 1 deletion docs/eido/validator.html
@@ -130,7 +130,7 @@ <h5 class="card-header">Results</h5>
 </div>
 </div>
 
-<p class="mb-0 text-muted">Want API access? This tool is a static, client-hosted form that accesses an API validator service based on <a href="https://peppy.databio.org">peppy</a>. You can also access this service programatically if you want to validate sample metadata as part of a pipeline or other tool.</p>
+<p class="mb-0 text-muted">Want API access? This tool is a static, client-hosted form that accesses an API validator service based on <a href="https://peppy.databio.org">peppy</a>. You can also access this service programmatically if you want to validate sample metadata as part of a pipeline or other tool.</p>
 
 <footer class="d-flex flex-wrap justify-content-between align-items-center py-3 my-4 border-top">
 <p class="col-md-4 mb-0 text-muted">© 2021 <a href="http://databio.org">Sheffield Computational Biology Lab</a></p>
2 changes: 1 addition & 1 deletion docs/geofetch/code/howto-sra-to-fastq.md
@@ -218,7 +218,7 @@ cat ./red_algae/GSE67303_PEP/GSE67303_PEP.yaml
 
 
 
-To run pipeline, you should set up few enviromental variables:
+To run pipeline, you should set up few environmental variables:
 1) SRARAW - folder where SRA files were downloaded
 2) SRAFQ -folder where fastq should be produced
 3) CODE - (first you should clone geofetch), and $CODE is where geofetch folder is located
10 changes: 5 additions & 5 deletions docs/geofetch/code/python-usage.md
@@ -2,7 +2,7 @@
 
 ♪♫*•♪♪♫*•♪♪♫*•♪♪♫*•♪♪♫*
 
-Geofetch provides python fuctions to fetch metadata and metadata from GEO and SRA by using python language. `get_project` function returns dictionary of peppy projects that were found using filters and input you specified.
+Geofetch provides python functions to fetch metadata and metadata from GEO and SRA by using python language. `get_project` function returns dictionary of peppy projects that were found using filters and input you specified.
 peppy is a Python package that provides an API for handling standardized project and sample metadata.
 
 More information you can get here:
@@ -18,9 +18,9 @@ http://pep.databio.org/en/2.0.0/
 from geofetch import Geofetcher
 ```
 
-### Initiate Geofetch object by specifing parameters that you want to use for downloading metadata/data
+### Initiate Geofetch object by specifying parameters that you want to use for downloading metadata/data
 
-1) If you won't specify any parameters, defaul parameters will be used
+1) If you won't specify any parameters, default parameters will be used
 
 
 ```python
@@ -156,7 +156,7 @@ projects.keys()
 
 
 
-project for smaples was created! Now let's look into it.
+project for samples was created! Now let's look into it.
 
 \* the values of the dictionary are peppy projects. More information about peppy Project you can find in the documentation: http://peppy.databio.org/en/latest/
 
@@ -174,7 +174,7 @@ len(projects['GSE95654_samples'].samples)
 
 We got 40 samples from GSE95654 project. If you want to check if it's correct information go into: https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE95654
 
-Now let's see actuall data. first 15 project and 5 clolumns:
+Now let's see actual data. first 15 project and 5 clolumns:
 
 
 ```python
2 changes: 1 addition & 1 deletion docs/geofetch/gse-finder.md
@@ -61,7 +61,7 @@ gse_list = gse_obj.get_gse_by_date(start_date="2015/05/05", end_date="2020/05/05
 
 gse_obj.generate_file("path/to/the/file")
 
-# if you want to save different list of files you can provide it to the funciton
+# if you want to save different list of files you can provide it to the function
 gse_obj.generate_file("path/to/the/file", gse_list=["123", "124"])
 
 ```
4 changes: 2 additions & 2 deletions docs/looper/changelog.md
@@ -120,7 +120,7 @@ This project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.htm
 - Input schemas and output schemas
 - `--settings` argument to specify compute resources as a YAML file
 - Option to preset CLI options in a dotfile
-- `--command-extra` and `--command-extra-override` arguments that append specified string to pipeline commands. These functions supercede the previous `pipeline_config` and `pipeline_args` sections, which are now deprecated. The new method is more universal, and can accomplish the same functionality but more simply, using the built-in PEP machinery to selectively apply commands to samples.
+- `--command-extra` and `--command-extra-override` arguments that append specified string to pipeline commands. These functions supersede the previous `pipeline_config` and `pipeline_args` sections, which are now deprecated. The new method is more universal, and can accomplish the same functionality but more simply, using the built-in PEP machinery to selectively apply commands to samples.
 - Option to specify destination of sample YAML in pipeline interface
 - `--pipeline_interfaces` argument that allows pipeline interface specification via CLI
 
@@ -236,7 +236,7 @@ This project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.htm
 - On `PipelineInterface`, iteration over pipelines now is with `iterpipes`.
 - Rename `parse_arguments` to `build_parser`, which returns `argparse.ArgumentParser` object
 - Integers in HTML reports are made more human-readable by including commas.
-- Column headers in HTML reports are now stricly for sorting; there's a separate list for plottable columns.
+- Column headers in HTML reports are now strictly for sorting; there's a separate list for plottable columns.
 - More informative error messages
 - HTML samples list is fully populated.
 - Existence of an object lacking an anchor image is no longer problematic for `summarize`.
2 changes: 1 addition & 1 deletion docs/looper/code/hello-world.md
@@ -160,7 +160,7 @@ The 3 folders (`data`, `project`, and `pipeline`) are modular; there is no need
 
 ## Looper config
 
-The [looper config](looper-config.md) contains paths to the project config, the output_dir as well as any dfine pipeline interfaces.
+The [looper config](looper-config.md) contains paths to the project config, the output_dir as well as any define pipeline interfaces.
 
 
 ```python
4 changes: 2 additions & 2 deletions docs/looper/code/python-api.md
@@ -391,7 +391,7 @@ via `sample_table_index` field.
 That's the sample table index selection priority order:
 1. Constructor specified
 2. Config specified
-3. Deafult: `sample_table`
+3. Default: `sample_table`
 #### Returns:
 
 - `str`: name of the column that consist of sample identifiers
@@ -472,7 +472,7 @@ via `subsample_table_index` field.
 That's the subsample table indexes selection priority order:
 1. Constructor specified
 2. Config specified
-3. Deafult: `[subasample_name, sample_name]`
+3. Default: `[subasample_name, sample_name]`
 #### Returns:
 
 - `List[str]`: names of the columns that consist of sample and subsample identifiers
2 changes: 1 addition & 1 deletion docs/looper/config-files.md
@@ -24,7 +24,7 @@ Users (non-developers) of pipelines only need to be aware of one or two config f
 
 ### Environment configuration
 
-[**environment config**](http://divvy.databio.org/en/latest/configuration/) -- if you are planning to submit jobs to a cluster, then you need to be aware of environment configuration. This task is farmed out to [divvy](http://divvy.databio.org/en/latest/), a computing resource configuration manager. Follow the divvy documentation to learn about ways to tweak the computing environment settins according to your needs.
+[**environment config**](http://divvy.databio.org/en/latest/configuration/) -- if you are planning to submit jobs to a cluster, then you need to be aware of environment configuration. This task is farmed out to [divvy](http://divvy.databio.org/en/latest/), a computing resource configuration manager. Follow the divvy documentation to learn about ways to tweak the computing environment settings according to your needs.
 
 That should be all you need to worry about as a pipeline user. If you need to adjust compute resources or want to develop a pipeline or have more advanced project-level control over pipelines, you'll need knowledge of the config files used by pipeline developers.
 
2 changes: 1 addition & 1 deletion docs/looper/pipeline-interface-specification.md
@@ -204,7 +204,7 @@ This section can consist of multiple variable templates that are rendered and ca
 
 #### pre_submit
 
-This section can consist of two subsections: `python_funcions` and/or `command_templates`, which specify the pre-submission tasks to be run before the main pipeline command is submitted. Please refer to the [pre-submission hooks system](pre-submission-hooks.md) section for a detailed explanation of this feature and syntax.
+This section can consist of two subsections: `python_functions` and/or `command_templates`, which specify the pre-submission tasks to be run before the main pipeline command is submitted. Please refer to the [pre-submission hooks system](pre-submission-hooks.md) section for a detailed explanation of this feature and syntax.
 
 ## Validating a pipeline interface
 
2 changes: 1 addition & 1 deletion docs/pephub/developer/authentication.md
@@ -196,7 +196,7 @@ This flow should be identical to the flow that GitHub uses to protect repositori
 
 ### Submitting a new PEP
 
-There are two scenerios for PEP submission: 1) A user submits to their namespace, and 2) A user submits to an organization. Both cases must require authentication. A user may freely submit PEPs to their own namespace. However, only **members** of an organization may submit PEPs to that organization. See below chart:
+There are two scenarios for PEP submission: 1) A user submits to their namespace, and 2) A user submits to an organization. Both cases must require authentication. A user may freely submit PEPs to their own namespace. However, only **members** of an organization may submit PEPs to that organization. See below chart:
 
 ```mermaid
 flowchart LR
2 changes: 1 addition & 1 deletion docs/pephub/developer/changelog.md
@@ -274,7 +274,7 @@ This project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.htm
 ### Changed
 
 - Reimplemented web interface in React.js
-- New deploymnet strategy that uses `uvicorn` instead of `pip install .`
+- New deployment strategy that uses `uvicorn` instead of `pip install .`
 
 ## [0.6.0] - 2022-03-06
 
2 changes: 1 addition & 1 deletion docs/pephub/developer/deployment.md
@@ -1,6 +1,6 @@
 # Deployment
 
-We provide a [publically available instance of pephub](https://pephub.databio.org) free to use. However, there are many reasons why you might wish to deploy your own instance. Because of this, we provide detailed instructions and docker containers to spin up your own instance of pephub.
+We provide a [publicly available instance of pephub](https://pephub.databio.org) free to use. However, there are many reasons why you might wish to deploy your own instance. Because of this, we provide detailed instructions and docker containers to spin up your own instance of pephub.
 
 
 ### Build the container
2 changes: 1 addition & 1 deletion docs/pephub/developer/development.md
@@ -17,7 +17,7 @@ uvicorn pephub.main:app --reload
 The backend server should now be running at http://localhost:8000. If you wish to debug the backend server, the repository contains a `launch.json` file for VSCode. You can use this to debug the backend server.
 
 ## Frontend development
-*Before begining, ensure you are using a `nodejs` version > 16.* To manage `node` versions, most people recommend [`nvm`](https://github.com/nvm-sh/nvm).
+*Before beginning, ensure you are using a `nodejs` version > 16.* To manage `node` versions, most people recommend [`nvm`](https://github.com/nvm-sh/nvm).
 
 We use [vite](https://vitejs.dev/) as our development and build tool for the frontend. Before starting, make sure you point the development server at the already running backend server. To do this, create a `.env.local` file inside the `web/` directory with the following contents:
 
2 changes: 1 addition & 1 deletion docs/pephub/developer/geopephub.md
@@ -9,7 +9,7 @@
 
 
 This repository contains `geopephub` CLI, that enables to automatic upload GEO projects to PEPhub based on date and scheduled automatic uploading using GitHub actions.
-Additionally, the CLI includes a download command, enabling users to retrieve projects from specifed namespace directly from the PEPhub database. This feature is particularly helpful for downloading all GEO projects at once.
+Additionally, the CLI includes a download command, enabling users to retrieve projects from specified namespace directly from the PEPhub database. This feature is particularly helpful for downloading all GEO projects at once.
 
 ---
 
2 changes: 1 addition & 1 deletion docs/pephub/developer/organization.md
@@ -34,7 +34,7 @@ pip install -r requirements/requirements-all.txt
 
 #### Running
 
-_pephub_ may be run in several ways. In every case, pephub requires configuration. Configuration settings are supplied to pephub through environment variables. The following settings are **required**. While pephub has built-in defaults for these settings, you should provide them to ensure compatability:
+_pephub_ may be run in several ways. In every case, pephub requires configuration. Configuration settings are supplied to pephub through environment variables. The following settings are **required**. While pephub has built-in defaults for these settings, you should provide them to ensure compatibility:
 
 - `POSTGRES_HOST`: The hostname of the PEPhub database server
 - `POSTGRES_DB`: The name of the database inside the postgres server
4 changes: 2 additions & 2 deletions docs/pephub/developer/pepdbagent/README.md
@@ -110,7 +110,7 @@ agent.annotation.get(query='query', namespace='namespace')
 # Get annotation of multiple projects given a list of registry paths
 agent.annotation.get_by_rp(["namespace1/project1:tag1", "namespace2/project2:tag2"])
 
-# By default get function will retrun annotations for public projects,
+# By default get function will return annotations for public projects,
 # To get annotation including private projects admin list should be provided.
 # admin list means list of namespaces where user has admin rights
 # For example:
@@ -128,7 +128,7 @@ Example:
 # search for a specified pattern of namespace in database.
 agent.namespace.get(query='Namespace')
 
-# By default all get function will retrun namespace information for public projects,
+# By default all get function will return namespace information for public projects,
 # To get information with private projects, admin list should be provided.
 # admin list means list of namespaces where user has admin rights
 # For example: