diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
index faad379c..5a06805f 100644
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -1,6 +1,4 @@
-# Pull Request Template
-
## Description
Please include a summary of the change.
@@ -9,6 +7,6 @@ To increment major or minor changes add #major or #minor to the PR description.
## Checklist:
-- [ ] Unit tests are added to cover the changes.
-- [ ] The changes are mentioned in the documentation.
-- [ ] CHANGELOG.md is updated (skip if the change is not important for the changelog).
+- [ ] Unit tests are added to cover the changes (skip if not applicable).
+- [ ] The changes are mentioned in the documentation (skip if not applicable).
+- [ ] CHANGELOG file is updated (skip if not applicable).
diff --git a/.readthedocs.yaml b/.readthedocs.yaml
index fe284e4d..2953340d 100644
--- a/.readthedocs.yaml
+++ b/.readthedocs.yaml
@@ -9,7 +9,7 @@ version: 2
build:
os: ubuntu-22.04
tools:
- python: "3.9"
+ python: "3.10"
python:
# Install our python package before building the docs
diff --git a/CHANGELOG.md b/CHANGELOG.rst
similarity index 52%
rename from CHANGELOG.md
rename to CHANGELOG.rst
index 2f98aac5..f298230c 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.rst
@@ -1,36 +1,46 @@
-
-# Changelog
+Changelog
+=========
All notable changes to this project will be documented in this file.
-The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
-and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+The format is based on `Keep a Changelog <https://keepachangelog.com/en/1.0.0/>`_,
+and this project adheres to `Semantic Versioning <https://semver.org/spec/v2.0.0.html>`_.
-## [5.5.4] - 2024-01
+5.5.5 - 2024-01
+----------------
+- Type annotate api.py's functions.
+- Deprecate camel case function names in api.py.
+- Start using same requirements_docs.txt in readthedocs and tox.
+- Enable autodoc and typehints in the API documentation.
+- Fix docstring errors in the io module.
+- Add changelog to the documentation.
+5.5.4 - 2024-01
+-----------------
- New feature: phaseslope_max
-## [5.5.3] - 2024-01
-
+5.5.3 - 2024-01
+----------------
- Add type stub for cppcore module to make Python recognise the C++ functions' arguments and return values.
-## [5.5.0] - 2024-01
-
-### C++ changes
-- AP_end_indices, AP_rise_time, AP_fall_time, AP_rise_rate, AP_fall_rate do not take into account peaks before stim_start anymore
-- New test and test data for spontaneous firing case.
- The data is provided by github user SzaBoglarka using cell https://modeldb.science/114047
-
-
-## [5.4.0] - 2024-01
-
-### C++ changes
+5.5.0 - 2024-01
+----------------
+C++ changes
+^^^^^^^^^^^
+- AP_end_indices, AP_rise_time, AP_fall_time, AP_rise_rate, AP_fall_rate do not take into account peaks before stim_start anymore.
+- New test and test data for spontaneous firing case. The data is provided by github user SzaBoglarka using cell `<https://modeldb.science/114047>`_.
+
+5.4.0 - 2024-01
+----------------
+C++ changes
+^^^^^^^^^^^
- New C++ function `getFeatures` replaced `getVec`.
- `getFeatures` automatically handles failures & distinguishes empty results from failures.
- Centralized error handling in `getFeatures` shortens the code by removing repetitions.
- C++ features' access is restricted. Read-only references are marked `const`.
- Removed wildcard features from C++ API. Use of Python is encouraged for that purpose.
-### Python changes
+Python changes
+^^^^^^^^^^^^^^
- `bpap_attenuation` feature is added to the Python API.
- `Spikecount`, `Spikecount_stimint`, `burst_number`, `strict_burst_number` and `trace_check` features migrated to Python from C++.
- `check_ais_initiation` is added to the Python API.
diff --git a/Makefile b/Makefile
index a74476ef..4cb44008 100644
--- a/Makefile
+++ b/Makefile
@@ -12,7 +12,7 @@ doc_efeatures:
ls -al ../../build_efeatures && \
pdflatex -output-directory=../../build_efeatures efeature-documentation.tex
doc: install doc_efeatures
- pip install sphinx sphinx-autobuild sphinx_rtd_theme
+ pip install -r requirements_docs.txt
cd docs; $(MAKE) clean; $(MAKE) html SPHINXOPTS=-W
doc_upload: doc
cd docs/build/html && \
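The `doc` target now installs the documentation dependencies from `requirements_docs.txt` instead of an inline `pip install` list, so the Makefile, readthedocs, and tox all share one pinned set (per the changelog entry). A plausible sketch of that file, assuming it mirrors the packages previously installed inline plus the `sphinx-autodoc-typehints` extension enabled in `conf.py` (the actual contents are not shown in this diff):

```text
# requirements_docs.txt (assumed contents, for illustration only)
sphinx
sphinx-autobuild
sphinx_rtd_theme
sphinx-autodoc-typehints
```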
diff --git a/docs/source/changelog.rst b/docs/source/changelog.rst
new file mode 100644
index 00000000..09929fe4
--- /dev/null
+++ b/docs/source/changelog.rst
@@ -0,0 +1 @@
+.. include:: ../../CHANGELOG.rst
diff --git a/docs/source/conf.py b/docs/source/conf.py
index 95c2e546..ec3662aa 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -29,7 +29,15 @@
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.viewcode',
- 'sphinx.ext.autosummary', 'sphinx.ext.napoleon']
+ 'sphinx.ext.autosummary', 'sphinx.ext.napoleon', "sphinx_autodoc_typehints"]
+autosummary_generate = True # Turn on sphinx.ext.autosummary
+autodoc_default_options = {
+ "members": True,
+}
+
+# Autodoc-typehints settings
+always_document_param_types = True
+typehints_use_rtype = True
napoleon_numpy_docstring = True
@@ -47,7 +55,7 @@
# General information about the project.
project = u'eFEL'
-copyright = u'2015-2022, BBP, EPFL'
+copyright = u'2015-2024, BBP, EPFL'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
@@ -259,7 +267,7 @@
epub_title = u'eFEL'
epub_author = u'BBP, EPFL'
epub_publisher = u'BBP, EPFL'
-epub_copyright = u'2015-2022, BBP, EPFL'
+epub_copyright = u'2015-2024, BBP, EPFL'
# The language of the text. It defaults to the language option
# or en if the language is not set.
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 51660217..8b0abae2 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -14,9 +14,7 @@ calculated. The library will then extract the requested eFeatures and return the
values to the user.
The core of the library is written in C++, and a Python wrapper is included.
-At the moment we provide a way to automatically compile and install the library
-as a Python module. Soon instructions will be added on how to link C++ code
-directly with the eFEL.
+You can automatically compile and install the library as a Python module.
The source code of the eFEL is located on github:
`BlueBrain/eFEL <https://github.com/BlueBrain/eFEL>`_
@@ -28,6 +26,7 @@ The source code of the eFEL is located on github:
examples
eFeatures
api
+ changelog
developers
Indices and tables
diff --git a/efel/api.py b/efel/api.py
index 9f070964..eeae0543 100644
--- a/efel/api.py
+++ b/efel/api.py
@@ -1,10 +1,8 @@
"""eFEL Python API functions.
-This module provides the user-facing Python API of the eFEL.
+This module provides the user-facing Python API of eFEL.
The convenience functions defined here call the underlying 'cppcore' library
to hide the lower level API from the user.
-All functions in this module can be called as efel.functionname, it is
-not necessary to include 'api' as in efel.api.functionname.
Copyright (c) 2015, EPFL/Blue Brain Project
@@ -25,9 +23,12 @@
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
"""
# pylint: disable=W0602,W0603,W0702, F0401, W0612, R0912
+from __future__ import annotations
-import os
-import numpy
+from pathlib import Path
+from typing import Callable
+from typing_extensions import deprecated
+import numpy as np
import efel
import efel.cppcore as cppcore
@@ -35,21 +36,6 @@
import efel.pyfeatures as pyfeatures
from efel.pyfeatures.pyfeatures import get_cpp_feature
-"""
-Disabling cppcore importerror override, it confuses users in case the error
-is caused by something else
-try:
-except ImportError:
- six.reraise(ImportError, ImportError(
- '\n'
- 'It looks like the efel.cppcore package could not be found.\n'
- 'Could it be that you are running the \'import efel\' in a directory '
- 'that has a subdirectory called \'efel\' '
- '(like e.g. the eFEL source directory) ?\n'
- 'If this is the case, please try to import from another directory.\n'
- 'If the issue persists, please create a ticket at '
- 'github.com/BlueBrain/eFEL/issues.\n'), sys.exc_info()[2])
-"""
_settings = efel.Settings()
_int_settings = {}
@@ -64,7 +50,6 @@ def reset():
These values are persistent. This function will reset these values to their
original state.
"""
-
global _settings, _int_settings, _double_settings, _string_settings
_settings = efel.Settings()
_int_settings = {}
@@ -104,88 +89,81 @@ def reset():
_initialise()
-def setDependencyFileLocation(location):
- """Set the location of the Dependency file
-
- The eFEL uses 'Dependency' files to let the user define which versions
- of certain features are used to calculate.
- The installation directory of the eFEL contains a default
- 'DependencyV5.txt' file. Unless the user wants to change this file,
- it is not necessary to call this function.
+@deprecated("Changing the dependency file will not be supported in the future.")
+def setDependencyFileLocation(location: str | Path) -> None:
+ """Sets the location of the Dependency file.
- Parameters
- ==========
- location : string
- path to the location of a Dependency file
- """
+ eFEL uses 'Dependency' files to let the user define versions of features to use.
+ The installation directory of eFEL contains a default 'DependencyV5.txt' file.
+ Unless users want to change this file, it is not necessary to call this function.
- global dependencyFileLocation
- if not os.path.exists(location):
- raise Exception(
- "Path to dependency file {%s} doesn't exist" %
- location)
- _settings.dependencyfile_path = location
+ Args:
+ location: Path to the location of a Dependency file.
+ Raises:
+ FileNotFoundError: If the path to the dependency file doesn't exist.
+ """
+ location = Path(location)
+ if not location.exists():
+ raise FileNotFoundError(f"Path to dependency file {location} doesn't exist")
+ _settings.dependencyfile_path = str(location)
-def getDependencyFileLocation():
- """Get the location of the Dependency file
- The eFEL uses 'Dependency' files to let the user define which versions
- of certain features are used to calculate.
- The installation directory of the eFEL contains a default
- 'DependencyV5.txt' file.
+@deprecated("Changing the dependency file will not be supported in the future.")
+def getDependencyFileLocation() -> str:
+ """Gets the location of the Dependency file.
- Returns
- =======
- location : string
- path to the location of a Dependency file
+ Returns:
+ Path to the location of a Dependency file.
"""
-
return _settings.dependencyfile_path
-def setThreshold(newThreshold):
+def set_threshold(new_threshold: float) -> None:
"""Set the spike detection threshold in the eFEL, default -20.0
- Parameters
- ==========
- threshold : float
- The new spike detection threshold value (in the same units
- as the traces, e.g. mV).
+ Args:
+ new_threshold: The new spike detection threshold value (in the same units
+ as the traces, e.g. mV).
"""
- _settings.threshold = newThreshold
+ _settings.threshold = new_threshold
setDoubleSetting('Threshold', _settings.threshold)
-def setDerivativeThreshold(newDerivativeThreshold):
- """Set the threshold for the derivate for detecting the spike onset
+@deprecated("Use set_threshold instead")
+def setThreshold(newThreshold: float) -> None:
+ set_threshold(newThreshold)
+
- Some featurea use a threshold on dV/dt to calculate the beginning of an
+def set_derivative_threshold(new_derivative_threshold: float) -> None:
+ """Set the threshold for the derivative for detecting the spike onset.
+
+ Some features use a threshold on dV/dt to calculate the beginning of an
action potential. This function allows you to set this threshold.
- Parameters
- ==========
- derivative_threshold : float
- The new derivative threshold value (in the same units
- as the traces, e.g. mV/ms).
+ Args:
+ new_derivative_threshold: The new derivative threshold value (in the same units
+ as the traces, e.g. mV/ms).
"""
- _settings.derivative_threshold = newDerivativeThreshold
+ _settings.derivative_threshold = new_derivative_threshold
setDoubleSetting('DerivativeThreshold', _settings.derivative_threshold)
-def getFeatureNames():
+@deprecated("Use set_derivative_threshold instead")
+def setDerivativeThreshold(newDerivativeThreshold: float) -> None:
+ set_derivative_threshold(newDerivativeThreshold)
+
+
+def get_feature_names() -> list[str]:
"""Return a list with the name of all the available features
- Returns
- =======
- feature_names : list of strings
- A list that contains all the feature names available in
- the eFEL. These names can be used in the featureNames
- argument of e.g. getFeatureValues()
+ Returns:
+ A list that contains all the feature names available in
+ the eFEL. These names can be used in the feature_names
+ argument of e.g. get_feature_values()
"""
-
cppcore.Initialize(_settings.dependencyfile_path, "log")
- feature_names = []
+ feature_names: list[str] = []
cppcore.getFeatureNames(feature_names)
feature_names += pyfeatures.all_pyfeatures
@@ -193,68 +171,56 @@ def getFeatureNames():
return feature_names
-def FeatureNameExists(feature_name):
- """Does a certain feature name exist ?
+@deprecated("Use get_feature_names instead")
+def getFeatureNames() -> list[str]:
+ return get_feature_names()
- Parameters
- ==========
- feature_name : string
- Name of the feature to check
- Returns
- =======
- FeatureNameExists : bool
- True if feature_name exists, otherwise False
- """
+def feature_name_exists(feature_name: str) -> bool:
+ """Returns True if the feature name exists in eFEL, False otherwise."""
+ return feature_name in get_feature_names()
+
- return feature_name in getFeatureNames()
+@deprecated("Use feature_name_exists instead")
+def FeatureNameExists(feature_name: str) -> bool:
+ return feature_name_exists(feature_name)
-def _get_feature(featureName, raise_warnings=None):
+def _get_feature(feature_name: str, raise_warnings=False) -> np.ndarray | None:
"""Get feature value, decide to use python or cpp"""
- if featureName in pyfeatures.all_pyfeatures:
- return get_py_feature(featureName)
+ if feature_name in pyfeatures.all_pyfeatures:
+ return get_py_feature(feature_name)
else:
- return get_cpp_feature(featureName, raise_warnings=raise_warnings)
+ return get_cpp_feature(feature_name, raise_warnings=raise_warnings)
-def getDistance(
- trace,
- featureName,
- mean,
- std,
- trace_check=True,
- error_dist=250):
+def get_distance(
+ trace: dict,
+ feature_name: str,
+ mean: float,
+ std: float,
+ trace_check: bool = True,
+ error_dist: float = 250) -> float:
"""Calculate distance value for a list of traces.
- Parameters
- ==========
- trace : trace dicts
- Trace dict that represents one trace. The dict should have the
- following keys: 'T', 'V', 'stim_start', 'stim_end'
- featureName : string
- Name of the the features for which to calculate the distance
- mean : float
- Mean to calculate the distance from
- std : float
- Std to scale the distance with
- trace_check : float
- Let the library check if there are spikes outside of stimulus
- interval, default is True
- error_dist : float
- Distance returned when error, default is 250
-
- Returns
- =======
- distance : float
- The absolute number of standard deviation the feature is away
- from the mean. In case of anomalous results a value of
- 'error_dist' standard deviations is returned.
- This can happen if: a feature generates an error, there are
- spikes outside of the stimulus interval, the feature returns
- a NaN, etc.
+ Args:
+ trace: Trace dict that represents one trace. The dict should have the
+ following keys: 'T', 'V', 'stim_start', 'stim_end'
+ feature_name: Name of the feature for which to calculate the distance
+ mean: Mean to calculate the distance from
+ std: Std to scale the distance with
+ trace_check: Let the library check if there are spikes outside of stimulus
+ interval, default is True
+ error_dist: Distance returned when error, default is 250
+
+ Returns:
+ The absolute number of standard deviations the feature is away
+ from the mean. In case of anomalous results a value of
+ 'error_dist' standard deviations is returned.
+ This can happen if: a feature generates an error, there are
+ spikes outside of the stimulus interval, the feature returns
+ a NaN, etc.
"""
-
_initialise()
# Next set time, voltage and the stimulus start and end
@@ -266,9 +232,9 @@ def getDistance(
if trace_check_success["trace_check"] is None:
return error_dist
- feature_values = _get_feature(featureName)
+ feature_values = _get_feature(feature_name)
- distance = 0
+ distance = 0.0
if feature_values is None or len(feature_values) < 1:
return error_dist
else:
@@ -286,8 +252,19 @@ def getDistance(
return distance
-def _initialise():
- """Set cppcore initial values"""
+@deprecated("Use get_distance instead")
+def getDistance(
+ trace,
+ featureName,
+ mean,
+ std,
+ trace_check=True,
+ error_dist=250) -> float:
+ return get_distance(trace, featureName, mean, std, trace_check, error_dist)
+
+
+def _initialise() -> None:
+ """Set cppcore initial values."""
cppcore.Initialize(_settings.dependencyfile_path, "log")
# flush the GErrorString from previous runs by calling getgError()
cppcore.getgError()
@@ -307,33 +284,45 @@ def _initialise():
cppcore.setFeatureString(setting_name, str_setting)
-def setIntSetting(setting_name, new_value):
+def set_int_setting(setting_name: str, new_value: int) -> None:
"""Set a certain integer setting to a new value"""
-
_int_settings[setting_name] = new_value
-def setDoubleSetting(setting_name, new_value):
- """Set a certain double setting to a new value"""
+@deprecated("Use set_int_setting instead")
+def setIntSetting(setting_name: str, new_value: int) -> None:
+ set_int_setting(setting_name, new_value)
+
+def set_double_setting(setting_name: str, new_value: float) -> None:
+ """Set a certain double setting to a new value"""
_double_settings[setting_name] = new_value
-def setStrSetting(setting_name, new_value):
- """Set a certain string setting to a new value"""
+@deprecated("Use set_double_setting instead")
+def setDoubleSetting(setting_name: str, new_value: float) -> None:
+ set_double_setting(setting_name, new_value)
+
+def set_str_setting(setting_name: str, new_value: str) -> None:
+ """Set a certain string setting to a new value"""
_string_settings[setting_name] = new_value
-def getFeatureValues(
- traces,
- featureNames,
- parallel_map=None,
- return_list=True,
- raise_warnings=True):
+@deprecated("Use set_str_setting instead")
+def setStrSetting(setting_name: str, new_value: str) -> None:
+ set_str_setting(setting_name, new_value)
+
+
+def get_feature_values(
+ traces: list[dict],
+ feature_names: list[str],
+ parallel_map: Callable | None = None,
+ return_list: bool = True,
+ raise_warnings: bool = True) -> list | map:
"""Calculate feature values for a list of traces.
- This function is the core of the eFEL API. A list of traces (in the form
+ This function is the core of the eFEL API. A list of traces (in the form
of dictionaries) is passed as argument, together with a list of feature
names.
@@ -343,41 +332,33 @@ def getFeatureValues(
Beware that every feature returns an array of values. E.g. AP_amplitude
will return a list with the amplitude of every action potential.
- Parameters
- ==========
- traces : list of trace dicts
- Every trace dict represent one trace. The dict should have the
- following keys: 'T', 'V', 'stim_start', 'stim_end'
- feature_names : list of string
- List with the names of the features to be calculated on all
- the traces.
- parallel_map : map function
- Map function to parallelise over the traces. Default is the
- serial map() function
- return_list: boolean
- By default the function returns a list of dicts. This
- optional argument can disable this, so that the result of the
- parallel_map() is returned. Can be useful for performance
- reasons when an iterator is preferred.
- raise_warnings: boolean
- Raise warning when efel c++ returns an error
-
- Returns
- =======
- feature_values : list of dicts
- For every input trace a feature value dict is return (in
- the same order). The dict contains the keys of
- 'feature_names', every key contains a numpy array with
- the feature values returned by the C++ efel code.
- The value is None if an error occured during the
- calculation of the feature.
+ Args:
+ traces: Every trace dict represents one trace. The dict should have the
+ following keys: 'T', 'V', 'stim_start', 'stim_end'
+ feature_names: List with the names of the features to be calculated on all
+ the traces.
+ parallel_map: Map function to parallelise over the traces. Default is the
+ serial map() function
+ return_list: By default the function returns a list of dicts. This
+ optional argument can disable this, so that the result of the
+ parallel_map() is returned. Can be useful for performance
+ reasons when an iterator is preferred.
+ raise_warnings: Raise warning when efel c++ returns an error
+
+ Returns:
+ For every input trace a feature value dict is returned (in
+ the same order). The dict contains the keys of
+ 'feature_names', every key contains a numpy array with
+ the feature values returned by the C++ efel code.
+ The value is None if an error occurred during the
+ calculation of the feature.
"""
if parallel_map is None:
parallel_map = map
traces_featurenames = (
- (trace, featureNames, raise_warnings)
+ (trace, feature_names, raise_warnings)
for trace in traces)
map_result = parallel_map(_get_feature_values_serial, traces_featurenames)
@@ -387,18 +368,28 @@ def getFeatureValues(
return map_result
-def get_py_feature(featureName):
- """Return python feature"""
-
- return getattr(pyfeatures, featureName)()
+@deprecated("Use get_feature_values instead")
+def getFeatureValues(
+ traces,
+ featureNames,
+ parallel_map=None,
+ return_list=True,
+ raise_warnings=True):
+ return get_feature_values(
+ traces, featureNames, parallel_map, return_list, raise_warnings)
-def _get_feature_values_serial(trace_featurenames):
- """Single thread of getFeatureValues"""
+def get_py_feature(feature_name: str) -> np.ndarray | None:
+ """Return values of the given feature name."""
+ return getattr(pyfeatures, feature_name)()
- trace, featureNames, raise_warnings = trace_featurenames
- featureDict = {}
+def _get_feature_values_serial(
+ trace_featurenames: tuple[dict, list[str], bool]
+) -> dict:
+ """Single process of get_feature_values."""
+ trace, feature_names, raise_warnings = trace_featurenames
+ result = {}
if 'stim_start' in trace and 'stim_end' in trace:
try:
@@ -428,55 +419,59 @@ def _get_feature_values_serial(trace_featurenames):
for item in list(trace.keys()):
cppcore.setFeatureDouble(item, [x for x in trace[item]])
- for featureName in featureNames:
- featureDict[featureName] = _get_feature(
- featureName, raise_warnings=raise_warnings)
+ for feature_name in feature_names:
+ result[feature_name] = _get_feature(
+ feature_name, raise_warnings=raise_warnings)
- return featureDict
+ return result
-def getMeanFeatureValues(traces, featureNames, raise_warnings=True):
- """Convenience function that returns mean values from getFeatureValues()
+def get_mean_feature_values(
+ traces: list[dict],
+ feature_names: list[str],
+ raise_warnings: bool = True) -> list[dict]:
+ """Convenience function that returns mean values from get_feature_values()
- Instead of return a list of values for every feature as getFeatureValues()
+ Instead of returning a list of values for every feature as get_feature_values()
does, this function returns per trace one value for every feature, namely
the mean value.
- Parameters
- ==========
- traces : list of trace dicts
- Every trace dict represent one trace. The dict should have the
- following keys: 'T', 'V', 'stim_start', 'stim_end'
- feature_names : list of string
- List with the names of the features to be calculated on all
- the traces.
- raise_warnings: boolean
- Raise warning when efel c++ returns an error
-
- Returns
- =======
- feature_values : list of dicts
- For every input trace a feature value dict is return (in
- the same order). The dict contains the keys of
- 'feature_names', every key contains the mean of the array
- that is returned by getFeatureValues()
- The value is None if an error occured during the
- calculation of the feature, or if the feature value array
- was empty.
+ Args:
+ traces: Every trace dict represents one trace. The dict should have the
+ following keys: 'T', 'V', 'stim_start', 'stim_end'
+ feature_names: List with the names of the features to be calculated on all
+ the traces.
+ raise_warnings: Raise warning when efel c++ returns an error
+
+ Returns:
+ For every input trace a feature value dict is returned (in
+ the same order). The dict contains the keys of
+ 'feature_names', every key contains the mean of the array
+ that is returned by get_feature_values()
+ The value is None if an error occurred during the
+ calculation of the feature, or if the feature value array
+ was empty.
"""
-
featureDicts = getFeatureValues(
traces,
- featureNames,
+ feature_names,
raise_warnings=raise_warnings)
for featureDict in featureDicts:
for (key, values) in list(featureDict.items()):
if values is None or len(values) == 0:
featureDict[key] = None
else:
- featureDict[key] = numpy.mean(values)
+ featureDict[key] = np.mean(values)
return featureDicts
+@deprecated("Use get_mean_feature_values instead")
+def getMeanFeatureValues(
+ traces,
+ featureNames,
+ raise_warnings=True):
+ return get_mean_feature_values(traces, featureNames, raise_warnings)
+
+
reset()
diff --git a/efel/io.py b/efel/io.py
index d5afe622..8753e702 100644
--- a/efel/io.py
+++ b/efel/io.py
@@ -37,27 +37,31 @@ def load_ascii_input(
return time, voltage
-def extract_stim_times_from_neo_data(blocks, stim_start, stim_end):
+def extract_stim_times_from_neo_data(blocks, stim_start, stim_end) -> tuple:
"""
- Seeks for the stim_start and stim_end parameters inside the Neo data.
-
- Parameters
- ==========
- blocks : Neo object blocks
- stim_start : numerical value (ms) or None
- stim_end : numerical value (ms) or None
-
- Epoch.name should be one of "stim", "stimulus", "stimulation",
- "current_injection"
- First Event.name should be "stim_start", "stimulus_start",
- "stimulation_start", "current_injection_start"
- Second Event.name should be one of "stim_end",
- "stimulus_end", "stimulation_end", "current_injection_end"
-
- Returned objects
- ====================
- stim_start : numerical value (ms) or None
- stim_end : numerical value (ms) or None
+ Searches for the stim_start and stim_end parameters inside the Neo data.
+
+ Args:
+ blocks (Neo object blocks): Neo data blocks in which to look for the
+ stimulation times.
+ stim_start (numerical value or None): Start time of the stimulation in
+ milliseconds. If not available, None should be used.
+ stim_end (numerical value or None): End time of the stimulation in
+ milliseconds. If not available, None should be used.
+
+ Returns:
+ tuple: A tuple containing:
+ - stim_start (numerical value or None): Start time of the stimulation
+ in milliseconds.
+ - stim_end (numerical value or None): End time of the stimulation in
+ milliseconds.
+
+ Notes:
+ - Epoch.name should be one of "stim", "stimulus", "stimulation",
+ "current_injection".
+ - First Event.name should be "stim_start", "stimulus_start",
+ "stimulation_start", "current_injection_start".
+ - Second Event.name should be one of "stim_end", "stimulus_end",
+ "stimulation_end", "current_injection_end".
"""
# this part of the code aims to find information about stimulations, if
@@ -150,31 +154,29 @@ def extract_stim_times_from_neo_data(blocks, stim_start, stim_end):
return stim_start, stim_end
-def load_neo_file(file_name, stim_start=None, stim_end=None, **kwargs):
+def load_neo_file(file_name: str, stim_start=None, stim_end=None, **kwargs) -> list:
"""
- Use neo to load a data file and convert it to be readable by eFEL.
-
- Parameters
- ==========
- file_name : string
- path to the location of a Dependency file
- stim_start : numerical value (ms)
- Optional if there is an Epoch or two Events in the file
- stim_end : numerical value (ms)
- Optional if there is an Epoch or two Events in the file
- kwargs : keyword arguments to be passed to the read() method of the
- Neo IO class
-
- Epoch.name should be one of "stim", "stimulus", "stimulation",
- "current_injection"
- First Event.name should be "stim_start", "stimulus_start",
- "stimulation_start", "current_injection_start"
- Second Event.name should be one of "stim_end", "stimulus_end",
- "stimulation_end", "current_injection_end"
-
- The returned object is presented like this :
- returned object : [Segments_1, Segments_2, ..., Segments_n]
- Segments_1 = [Traces_1, Traces_2, ..., Traces_n]
+ Loads a data file using neo and converts it for eFEL readability.
+
+ Args:
+ file_name (string): Path to the data file to load.
+ stim_start (numerical value, optional): Start time in ms. Optional if an Epoch
+ or two Events are in the file.
+ stim_end (numerical value, optional): End time in ms. Optional if an Epoch
+ or two Events are in the file.
+ **kwargs: Additional arguments for the read() method of Neo IO class.
+
+ Returns:
+ list of Segments: Segments containing traces, formatted as
+ [Segments_1, Segments_2, ..., Segments_n], where each Segments_i is
+ [Traces_1, Traces_2, ..., Traces_n].
+
+ Notes:
+ - Epoch.name should be "stim", "stimulus", "stimulation", "current_injection".
+ - First Event.name: "stim_start", "stimulus_start", "stimulation_start",
+ "current_injection_start".
+ - Second Event.name: "stim_end", "stimulus_end", "stimulation_end",
+ "current_injection_end".
"""
reader = neo.io.get_io(file_name)
blocks = reader.read(**kwargs)
diff --git a/efel/pyfeatures/pyfeatures.py b/efel/pyfeatures/pyfeatures.py
index 740380c8..acdddee5 100644
--- a/efel/pyfeatures/pyfeatures.py
+++ b/efel/pyfeatures/pyfeatures.py
@@ -27,8 +27,9 @@
"""
from typing_extensions import deprecated
+import warnings
-import numpy
+import numpy as np
from efel import cppcore
from numpy.fft import *
@@ -70,36 +71,36 @@ def time():
@deprecated("Use spike_count instead.")
-def Spikecount() -> numpy.ndarray:
+def Spikecount() -> np.ndarray:
return spike_count()
-def spike_count() -> numpy.ndarray:
+def spike_count() -> np.ndarray:
"""Get spike count."""
peak_indices = get_cpp_feature("peak_indices")
if peak_indices is None:
- return numpy.array([0])
- return numpy.array([peak_indices.size])
+ return np.array([0])
+ return np.array([peak_indices.size])
@deprecated("Use spike_count_stimint instead.")
-def Spikecount_stimint() -> numpy.ndarray:
+def Spikecount_stimint() -> np.ndarray:
return spike_count_stimint()
-def spike_count_stimint() -> numpy.ndarray:
+def spike_count_stimint() -> np.ndarray:
"""Get spike count within stimulus interval."""
stim_start = _get_cpp_data("stim_start")
stim_end = _get_cpp_data("stim_end")
peak_times = get_cpp_feature("peak_time")
if peak_times is None:
- return numpy.array([0])
+ return np.array([0])
res = sum(1 for time in peak_times if stim_start <= time <= stim_end)
- return numpy.array([res])
+ return np.array([res])
-def trace_check() -> numpy.ndarray | None:
+def trace_check() -> np.ndarray | None:
"""Returns np.array([0]) if there are no spikes outside stimulus boundaries.
Returns None upon failure.
@@ -108,23 +109,23 @@ def trace_check() -> numpy.ndarray | None:
stim_end = _get_cpp_data("stim_end")
peak_times = get_cpp_feature("peak_time")
if peak_times is None: # If no spikes, then no problem
- return numpy.array([0])
+ return np.array([0])
# Check if there are no spikes or if all spikes are within the stimulus interval
- if numpy.all((peak_times >= stim_start) & (peak_times <= stim_end * 1.05)):
- return numpy.array([0]) # 0 if trace is valid
+ if np.all((peak_times >= stim_start) & (peak_times <= stim_end * 1.05)):
+ return np.array([0]) # 0 if trace is valid
else:
return None # None if trace is invalid due to spike outside stimulus interval
-def burst_number() -> numpy.ndarray:
+def burst_number() -> np.ndarray:
"""The number of bursts."""
burst_mean_freq = get_cpp_feature("burst_mean_freq")
if burst_mean_freq is None:
- return numpy.array([0])
- return numpy.array([burst_mean_freq.size])
+ return np.array([0])
+ return np.array([burst_mean_freq.size])
-def strict_burst_number() -> numpy.ndarray:
+def strict_burst_number() -> np.ndarray:
"""Calculate the strict burst number.
This implementation does not assume that every spike belongs to a burst.
@@ -135,8 +136,8 @@ def strict_burst_number() -> numpy.ndarray:
strict_burst_factor. Default value is 2.0."""
burst_mean_freq = get_cpp_feature("strict_burst_mean_freq")
if burst_mean_freq is None:
- return numpy.array([0])
- return numpy.array([burst_mean_freq.size])
+ return np.array([0])
+ return np.array([burst_mean_freq.size])
def impedance():
@@ -153,19 +154,19 @@ def impedance():
normalized_current = current_trace - holding_current
n_spikes = spike_count()
if n_spikes < 1:  # if there are no spikes in ZAP
- fft_volt = numpy.fft.fft(normalized_voltage)
- fft_cur = numpy.fft.fft(normalized_current)
+ fft_volt = np.fft.fft(normalized_voltage)
+ fft_cur = np.fft.fft(normalized_current)
if not np.any(fft_cur):  # guard against an all-zero current FFT
return None
# convert dt from ms to s to have freq in Hz
- freq = numpy.fft.fftfreq(len(normalized_voltage), d=dt / 1000.)
+ freq = np.fft.fftfreq(len(normalized_voltage), d=dt / 1000.)
Z = fft_volt / fft_cur
norm_Z = abs(Z) / max(abs(Z))
- select_idxs = numpy.swapaxes(
- numpy.argwhere((freq > 0) & (freq <= Z_max_freq)), 0, 1
+ select_idxs = np.swapaxes(
+ np.argwhere((freq > 0) & (freq <= Z_max_freq)), 0, 1
)[0]
smooth_Z = gaussian_filter1d(norm_Z[select_idxs], 10)
- ind_max = numpy.argmax(smooth_Z)
+ ind_max = np.argmax(smooth_Z)
return freq[ind_max]
else:
return None
@@ -184,7 +185,7 @@ def ISIs():
if peak_times is None:
return None
else:
- return numpy.diff(peak_times)
+ return np.diff(peak_times)
def initburst_sahp_vb():
@@ -198,7 +199,7 @@ def initburst_sahp_vb():
len(initburst_sahp_value) != 1 or len(voltage_base) != 1:
return None
else:
- return numpy.array([initburst_sahp_value[0] - voltage_base[0]])
+ return np.array([initburst_sahp_value[0] - voltage_base[0]])
def initburst_sahp_ssse():
@@ -212,7 +213,7 @@ def initburst_sahp_ssse():
len(initburst_sahp_value) != 1 or len(ssse) != 1:
return None
else:
- return numpy.array([initburst_sahp_value[0] - ssse[0]])
+ return np.array([initburst_sahp_value[0] - ssse[0]])
def initburst_sahp():
@@ -285,18 +286,18 @@ def initburst_sahp():
if sahp_interval_end <= sahp_interval_start:
return None
else:
- sahp_interval = voltage[numpy.where(
+ sahp_interval = voltage[np.where(
(time <= sahp_interval_end) &
(time >= sahp_interval_start))]
if len(sahp_interval) > 0:
- min_volt_index = numpy.argmin(sahp_interval)
+ min_volt_index = np.argmin(sahp_interval)
else:
return None
slow_ahp = sahp_interval[min_volt_index]
- return numpy.array([slow_ahp])
+ return np.array([slow_ahp])
def depol_block():
@@ -314,24 +315,24 @@ def depol_block():
voltage = get_cpp_feature("voltage")
time = get_cpp_feature("time")
AP_begin_voltage = get_cpp_feature("AP_begin_voltage")
- stim_start_idx = numpy.flatnonzero(time >= stim_start)[0]
- stim_end_idx = numpy.flatnonzero(time >= stim_end)[0]
+ stim_start_idx = np.flatnonzero(time >= stim_start)[0]
+ stim_end_idx = np.flatnonzero(time >= stim_end)[0]
if AP_begin_voltage is None:
- return numpy.array([1]) # if subthreshold no depolarization block
+ return np.array([1]) # if subthreshold no depolarization block
elif AP_begin_voltage.size:
- depol_block_threshold = numpy.mean(AP_begin_voltage) # mV
+ depol_block_threshold = np.mean(AP_begin_voltage) # mV
else:
depol_block_threshold = -50
block_min_duration = 50.0 # ms
long_hyperpol_threshold = -75.0 # mV
- bool_voltage = numpy.array(voltage > depol_block_threshold, dtype=int)
- up_indexes = numpy.flatnonzero(numpy.diff(bool_voltage) == 1)
- down_indexes = numpy.flatnonzero(numpy.diff(bool_voltage) == -1)
+ bool_voltage = np.array(voltage > depol_block_threshold, dtype=int)
+ up_indexes = np.flatnonzero(np.diff(bool_voltage) == 1)
+ down_indexes = np.flatnonzero(np.diff(bool_voltage) == -1)
if len(up_indexes) > len(down_indexes):
- down_indexes = numpy.append(down_indexes, [stim_end_idx])
+ down_indexes = np.append(down_indexes, [stim_end_idx])
if len(up_indexes) == 0:
# if it never gets high enough, that's not a good sign (meaning no
@@ -340,23 +341,23 @@ def depol_block():
else:
# if it stays in the depolarization block more than min_duration, flag
# as depolarization block
- max_depol_duration = numpy.max(
+ max_depol_duration = np.max(
[time[down_indexes[k]] - time[up_idx] for k,
up_idx in enumerate(up_indexes)])
if max_depol_duration > block_min_duration:
return None
- bool_voltage = numpy.array(voltage > long_hyperpol_threshold, dtype=int)
- up_indexes = numpy.flatnonzero(numpy.diff(bool_voltage) == 1)
- down_indexes = numpy.flatnonzero(numpy.diff(bool_voltage) == -1)
+ bool_voltage = np.array(voltage > long_hyperpol_threshold, dtype=int)
+ up_indexes = np.flatnonzero(np.diff(bool_voltage) == 1)
+ down_indexes = np.flatnonzero(np.diff(bool_voltage) == -1)
down_indexes = down_indexes[(down_indexes > stim_start_idx) & (
down_indexes < stim_end_idx)]
if len(down_indexes) != 0:
up_indexes = up_indexes[(up_indexes > stim_start_idx) & (
up_indexes < stim_end_idx) & (up_indexes > down_indexes[0])]
if len(up_indexes) < len(down_indexes):
- up_indexes = numpy.append(up_indexes, [stim_end_idx])
- max_hyperpol_duration = numpy.max(
+ up_indexes = np.append(up_indexes, [stim_end_idx])
+ max_hyperpol_duration = np.max(
[time[up_indexes[k]] - time[down_idx] for k,
down_idx in enumerate(down_indexes)])
@@ -365,7 +366,7 @@ def depol_block():
if max_hyperpol_duration > block_min_duration:
return None
- return numpy.array([1])
+ return np.array([1])
def depol_block_bool():
@@ -373,9 +374,9 @@ def depol_block_bool():
is None, [0] otherwise."""
if depol_block() is None:
- return numpy.array([1])
+ return np.array([1])
else:
- return numpy.array([0])
+ return np.array([0])
def spikes_per_burst():
@@ -391,7 +392,7 @@ def spikes_per_burst():
for idx_begin, idx_end in zip(burst_begin_indices, burst_end_indices):
ap_per_bursts.append(idx_end - idx_begin + 1)
- return numpy.array(ap_per_bursts)
+ return np.array(ap_per_bursts)
def spikes_per_burst_diff():
@@ -411,7 +412,7 @@ def spikes_in_burst1_burst2_diff():
) < 1:
return None
- return numpy.array([spikes_per_burst_diff_values[0]])
+ return np.array([spikes_per_burst_diff_values[0]])
def spikes_in_burst1_burstlast_diff():
@@ -420,42 +421,39 @@ def spikes_in_burst1_burstlast_diff():
if spikes_per_burst_values is None or len(spikes_per_burst_values) < 2:
return None
- return numpy.array([
+ return np.array([
spikes_per_burst_values[0] - spikes_per_burst_values[-1]
])
-def phaseslope_max():
- """Calculate the maximum phase slope"""
-
+def phaseslope_max() -> np.ndarray | None:
+ """Calculate the maximum phase slope."""
voltage = get_cpp_feature("voltage")
time = get_cpp_feature("time")
+ if voltage is None or time is None:
+ return None
time = time[:len(voltage)]
from numpy import diff
phaseslope = diff(voltage) / diff(time)
try:
- return numpy.array([numpy.max(phaseslope)])
+ return np.array([np.max(phaseslope)])
except ValueError:
return None
-def get_cpp_feature(featureName, raise_warnings=None):
- """Return value of feature implemented in cpp"""
- cppcoreFeatureValues = list()
- exitCode = cppcore.getFeature(featureName, cppcoreFeatureValues)
-
+def get_cpp_feature(feature_name: str, raise_warnings: bool = False) -> np.ndarray | None:
+ """Return value of feature implemented in cpp."""
+ cppcoreFeatureValues: list[int | float] = list()
+ exitCode = cppcore.getFeature(feature_name, cppcoreFeatureValues)
if exitCode < 0:
if raise_warnings:
- import warnings
warnings.warn(
- "Error while calculating feature %s: %s" %
- (featureName, cppcore.getgError()),
+ f"Error while calculating {feature_name}, {cppcore.getgError()}",
RuntimeWarning)
return None
- else:
- return numpy.array(cppcoreFeatureValues)
+ return np.array(cppcoreFeatureValues)
def _get_cpp_data(data_name: str) -> float:
diff --git a/requirements_docs.txt b/requirements_docs.txt
index df154a6f..1aeaefe1 100644
--- a/requirements_docs.txt
+++ b/requirements_docs.txt
@@ -24,6 +24,8 @@
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-sphinx>=2.0.0
-sphinx_rtd_theme
-sphinx-autorun
\ No newline at end of file
+sphinx>=7.2.6
+sphinx_rtd_theme>=2.0.0
+sphinx-autobuild>=2021.3.14
+sphinx-autorun>=1.1.1
+sphinx-autodoc-typehints>=1.25.2
diff --git a/tests/test_basic.py b/tests/test_basic.py
index 37087b93..b01b8059 100644
--- a/tests/test_basic.py
+++ b/tests/test_basic.py
@@ -103,7 +103,7 @@ def test_setDependencyFileLocation_wrongpath():
import efel
efel.reset()
pytest.raises(
- Exception,
+ FileNotFoundError,
efel.setDependencyFileLocation, "thisfiledoesntexist")
@@ -169,17 +169,21 @@ def test_raise_warnings():
with warnings.catch_warnings(record=True) as warning:
warnings.simplefilter("always")
+ # Ignore DeprecationWarning
+ warnings.filterwarnings("ignore", category=DeprecationWarning)
feature_value = efel.getFeatureValues(
[trace],
['AP_amplitude'])[0]['AP_amplitude']
assert feature_value is None
assert len(warning) == 1
- assert ("Error while calculating feature AP_amplitude" in
+ assert ("Error while calculating AP_amplitude" in
str(warning[0].message))
with warnings.catch_warnings(record=True) as warning:
warnings.simplefilter("always")
+ # Ignore DeprecationWarning
+ warnings.filterwarnings("ignore", category=DeprecationWarning)
feature_value = efel.getFeatureValues(
[trace],
['AP_amplitude'], raise_warnings=False)[0]['AP_amplitude']
diff --git a/tox.ini b/tox.ini
index 2922bdb9..f76fb0f3 100644
--- a/tox.ini
+++ b/tox.ini
@@ -63,11 +63,9 @@ commands =
[testenv:docs]
usedevelop=True # to access local rst files
envdir = {toxworkdir}/docs
-deps =
- pytest
- sphinx
- sphinx-autobuild
- sphinx_rtd_theme
+deps =
+ -r{toxinidir}/requirements_docs.txt
+ pytest>=7.4.4
allowlist_externals =
make
changedir = docs