
Commit d2b10c6
[pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
pre-commit-ci[bot] committed May 28, 2024
1 parent f478190 commit d2b10c6
Showing 12 changed files with 26 additions and 32 deletions.
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -49,4 +49,4 @@ repos:
# - repo: https://github.com/codespell-project/codespell
# rev: v2.2.5
# hooks:
-# - id: codespell
+# - id: codespell
12 changes: 6 additions & 6 deletions README.md
@@ -97,7 +97,7 @@ print(sce)
# alternative_experiments(2): ['repeat', 'ERCC']
# row_pairs(0): []
# column_pairs(0): []
-# metadata(0):
+# metadata(0):
```

For studies that generate multiple datasets, the dataset of interest must be explicitly requested via the `path` argument:
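The call producing the output below is collapsed in this view; a hedged sketch of what it plausibly looks like, where the dataset name and version are assumptions inferred from the printed cell names (they match the Baron pancreas data):

```python
from scrnaseq import fetch_dataset

# The dataset name, version, and path value are illustrative
# assumptions; the actual call is collapsed in the diff above.
sce = fetch_dataset("baron-pancreas-2016", "2023-12-14", path="human")
print(sce)
```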
@@ -114,12 +114,12 @@ print(sce)
# row_names(20125): ['A1BG', 'A1CF', 'A2M', ..., 'ZZEF1', 'ZZZ3', 'pk']
# column_data columns(2): ['donor', 'label']
# column_names(8569): ['human1_lib1.final_cell_0001', 'human1_lib1.final_cell_0002', 'human1_lib1.final_cell_0003', ..., 'human4_lib3.final_cell_0699', 'human4_lib3.final_cell_0700', 'human4_lib3.final_cell_0701']
-# main_experiment_name:
+# main_experiment_name:
# reduced_dims(0): []
# alternative_experiments(0): []
# row_pairs(0): []
# column_pairs(0): []
-# metadata(0):
+# metadata(0):
```

By default, array data is loaded as a file-backed `DelayedArray` from the [HDF5Array](https://github.com/BiocPy/HDF5Array) package. Setting `realize_assays=True` and/or `realize_reduced_dims=True` will coerce file-backed arrays to numpy or scipy sparse (csr/csc) objects.
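A sketch of the realized variant, reusing the same assumed dataset; per the paragraph above, `realize_assays=True` swaps the file-backed assays for in-memory numpy or scipy objects:

```python
from scrnaseq import fetch_dataset

# Same illustrative dataset as above; realize_assays=True loads the
# counts into memory instead of leaving them as a file-backed array.
sce = fetch_dataset(
    "baron-pancreas-2016", "2023-12-14", path="human", realize_assays=True
)
print(type(sce.get_assays()["counts"]))
```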
@@ -136,7 +136,7 @@ print(sce)
# row_names(20125): ['A1BG', 'A1CF', 'A2M', ..., 'ZZEF1', 'ZZZ3', 'pk']
# column_data columns(2): ['donor', 'label']
# column_names(8569): ['human1_lib1.final_cell_0001', 'human1_lib1.final_cell_0002', 'human1_lib1.final_cell_0003', ..., 'human4_lib3.final_cell_0699', 'human4_lib3.final_cell_0700', 'human4_lib3.final_cell_0701']
-# main_experiment_name:
+# main_experiment_name:
# reduced_dims(0): []
# alternative_experiments(0): []
# row_pairs(0): []
@@ -215,12 +215,12 @@ Want to contribute your own dataset to this package? It's easy! Just follow these steps:
- A Python file containing the code used to assemble the dataset. This should be added to the [`scripts/`](https://github.com/BiocPy/scRNAseq/tree/master/scripts) directory of this package, to provide a record of how the dataset was created.

5. Wait for us to grant temporary upload permissions to your GitHub account.

6. Upload your staging directory to [**gypsum** backend](https://github.com/ArtifactDB/gypsum-worker) with `upload_dataset()`. On the first call to this function, it will automatically prompt you to log into GitHub so that the backend can authenticate you. If you are on a system without browser access (e.g., most computing clusters), a [token](https://github.com/settings/tokens) can be manually supplied via `set_access_token()`.

```python
from scrnaseq import upload_dataset

upload_dataset(staging_dir, "my_dataset_name", "my_version")
```
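On a browserless system, the token route mentioned in step 6 might look like the following; the import path for `set_access_token` is an assumption, since only the function name appears above:

```python
from gypsum_client import set_access_token  # import path is an assumption
from scrnaseq import upload_dataset

# Supply a GitHub personal access token manually instead of the
# interactive browser login, e.g. on a computing cluster.
set_access_token("<github-personal-access-token>")

staging_dir = "path/to/staging_dir"  # directory prepared in the earlier steps
upload_dataset(staging_dir, "my_dataset_name", "my_version")
```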

4 changes: 2 additions & 2 deletions docs/requirements.txt
@@ -1,8 +1,8 @@
+furo
# Requirements file for ReadTheDocs, check .readthedocs.yml.
# To build the module reference correctly, make sure every external package
# under `install_requires` in `setup.cfg` is also listed here!
# sphinx_rtd_theme
myst-parser[linkify]
sphinx>=3.2.1
-furo
-sphinx-autodoc-typehints
+sphinx-autodoc-typehints
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -21,4 +21,4 @@ convention = "google"
"__init__.py" = ["E402", "F401"]

[tool.black]
-force-exclude = "__init__.py"
+force-exclude = "__init__.py"
11 changes: 5 additions & 6 deletions setup.py
@@ -1,11 +1,10 @@
"""
Setup file for scrnaseq.
Use setup.cfg to configure your project.
"""Setup file for scrnaseq. Use setup.cfg to configure your project.
This file was generated with PyScaffold 4.5.
PyScaffold helps you to put up the scaffold of your new Python project.
Learn more under: https://pyscaffold.org/
This file was generated with PyScaffold 4.5.
PyScaffold helps you to put up the scaffold of your new Python project.
Learn more under: https://pyscaffold.org/
"""

from setuptools import setup

if __name__ == "__main__":
2 changes: 1 addition & 1 deletion src/scrnaseq/list_datasets.py
@@ -67,7 +67,7 @@ def list_datasets(


def _format_query_results(results: list, key_names: list):
"""Format the results from sqlite as a pandas dataframe
"""Format the results from sqlite as a pandas dataframe.
Key names must be in the exact same order as the query.
"""
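The helper's body is collapsed here; a minimal sketch of what it plausibly does, assuming each sqlite row is a positional tuple (an illustration, not the package's actual implementation):

```python
import pandas as pd

def _format_query_results(results: list, key_names: list) -> pd.DataFrame:
    # Rows come back from sqlite as positional tuples, so key_names
    # must be in the exact same order as the columns in the query.
    return pd.DataFrame(results, columns=key_names)
```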
2 changes: 0 additions & 2 deletions src/scrnaseq/polish_dataset.py
@@ -1,8 +1,6 @@
-from typing import Type
from warnings import warn

-import numpy as np
from delayedarray import DelayedArray
from scipy import sparse as sp
from singlecellexperiment import SingleCellExperiment
from summarizedexperiment import SummarizedExperiment
2 changes: 1 addition & 1 deletion src/scrnaseq/save_dataset.py
@@ -29,7 +29,7 @@ def save_dataset(x: Any, path, metadata):
metadata:
Dictionary containing the metadata for this dataset.
-see the schema returned by
+see the schema returned by
:py:func:`~gypsum_client.fetch_metadata_schema.fetch_metadata_schema`.
Note that the ``applications.takane`` property will be automatically
5 changes: 1 addition & 4 deletions src/scrnaseq/search_datasets.py
@@ -6,8 +6,6 @@
from gypsum_client import cache_directory, fetch_metadata_database
from gypsum_client.search_metadata import (
GypsumSearchClause,
-define_text_query,
-search_metadata_text,
search_metadata_text_filter,
)

@@ -25,8 +23,7 @@ def search_datasets(
overwrite: bool = False,
latest: bool = True,
) -> pd.DataFrame:
"""Search for datasets of interest based on matching text in the
associated metadata.
"""Search for datasets of interest based on matching text in the associated metadata.
This is a wrapper around
:py:func:`~gypsum_client.search_metadata.search_metadata_text`.
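The implementation is collapsed below the docstring; a hedged usage sketch, where the query string is an illustrative assumption (the signature above shows the function returns a `pd.DataFrame`):

```python
from scrnaseq import search_datasets

# Free-text search across the dataset metadata; returns a pandas
# DataFrame of matching records.
results = search_datasets("pancreas")
print(len(results))
```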
2 changes: 1 addition & 1 deletion src/scrnaseq/utils.py
@@ -94,7 +94,7 @@ def format_object_metadata(x) -> dict:
"""Format object related metadata.
Create object-related metadata to validate against the default
-schema from
+schema from
:py:func:`~gypsum_client.fetch_metadata_schema.fetch_metadata_schema`.
This is intended for downstream package developers who are
auto-generating metadata documents to be validated by
11 changes: 5 additions & 6 deletions tests/conftest.py
@@ -1,10 +1,9 @@
"""
Dummy conftest.py for scrnaseq.
"""Dummy conftest.py for scrnaseq.
If you don't know what this is for, just leave it empty.
Read more about conftest.py under:
- https://docs.pytest.org/en/stable/fixture.html
- https://docs.pytest.org/en/stable/writing_plugins.html
If you don't know what this is for, just leave it empty.
Read more about conftest.py under:
- https://docs.pytest.org/en/stable/fixture.html
- https://docs.pytest.org/en/stable/writing_plugins.html
"""

# import pytest
3 changes: 2 additions & 1 deletion tests/test_save_dataset.py
@@ -82,7 +82,8 @@ def test_save_dataset_anndata():
assert isinstance(roundtrip.get_assays()["counts"], ReloadedArray)
assert isinstance(adata.layers["counts"], np.ndarray)
assert np.array_equal(
-to_dense_array(roundtrip.get_assays()["counts"]).transpose(), adata.layers["counts"]
+to_dense_array(roundtrip.get_assays()["counts"]).transpose(),
+adata.layers["counts"],
)

# Load and check the metadata
