Lava va (lava-nc#740)
* prod neuron

* trying to get prod neuron to work...

* trying to get prod neuron cpu to work...

* prod neuron process cpu backend working with unit test

* remove init file from prod_neuron

* gradedvec process and test

* working on norm vec

* fixed prod neuron license headers

* invsqrt model and tests, reconfigured to process and models

* normvecdelay and tests, timing weirdness with normvecdelay

* test for second channel of norm vec

* renamed to prodneuron.

* fixing some linting errors

* cleanup

* frameworks and networks added to lava-nc

* adding some docstring, fixing unused imports

* Fix partition parse bug for internal vLab (lava-nc#741)

* Adding deprecated lava.utils.system.

* Update system.py

* Update system.py

* Bugfix for missing fields in sinfo.

---------

Co-authored-by: PhilippPlank <32519998+PhilippPlank@users.noreply.github.com>

* Add linting tutorials folder (lava-nc#742)

* serialization first try

* first try

* serialization implementation + unittests

* fix linting

* fix bandit

* fix unittest

* fix codacy

* added tutorial

* Update tutorial11_serialization.ipynb

* added notebook to unit tests

* Fixed broken link in tutorial.

* fix linting

* add tutorials folder to CI linting check

* Update ci.yml

* fix bug

---------

Co-authored-by: Mathis Richter <mathis.richter@intel.com>

* Iterator callback fx signature fix (lava-nc#743)

* update refport unittest to always wait when it writes to port for consistent behavior

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Removed pyproject changes

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Fix to convolution tests. Fixed incompatible mnist_pretrained for old Python versions.

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Missing module parent fix

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Added ConvVarModel

* Added iterable callback function

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Fix codacy issues in callback_fx.py

* Fix linting in callback_fx.py

* Fix codacy sig issue in callback_fx.py

---------

Signed-off-by: bamsumit <bam_sumit@hotmail.com>
Co-authored-by: Joyesh Mishra <joyesh.mishra@intel.com>
Co-authored-by: Marcus G K Williams <168222+mgkwill@users.noreply.github.com>

* Bugfix to pass the args by keyword (lava-nc#744)

* CLP Tutorial 01 Only (lava-nc#746)

* CLP initial commit: PrototypeLIF, NoveltyDetector, Readout procs/tests

* small linting fix

* Novelty detector upgraded to target next neuron; codacy errors fixed

* integration test; small fixes

* removed duplicate code in prototypeLIF process; linting fixes

* linting fixes

* Linting and codacy fixes

* remove duplicate test; some more codacy fixes

* clp tutorial01 v1

* PrototypeLIF spikes when it receives a 3rd factor input

* a test for PrototypeLIF output spike after 3rd factor input

* clp tutorial01 ready to be roughly finished

* linting, license and utils fixes

* CLP on COIL-100, extracted features from 42 objects, tutorial01 fixes

* Allocation & prototype id tracking is abstracted away from
NoveltyDetector

* Allocator process; Readout proc sends allocation trigger if error

* introduce learning rate Var in PrototypeLIF

* updated integration tests; full system test included

* Linting fixes

* Another small linting fix

* clp tutorial 2, 20 class experiments (Coil-100)

* PrototypeLIF hard reset capability to enable faster temporal WTA

* allocation mechanism changed; proc interface changes; dense conns
added; lr var removed

* small linting fix

* small codacy fix

* prints removed, spelling mistakes fixed

* ignoring one check in an integration test

* Revert "small linting fix"

This reverts commit bde4fa9.

* CLP tutorial 1 is finalized

* Fix linting in test_models.py

* Test fix in utils.py

* Fix test of bug fix in utils.py

* Fix utils.py

* Implemented individual threadsafe random call

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* tutorial 2 use new abstracted CLP class

* CLP tutorial 2: unsupervised and supervised experiments are separated

* addressed reviewer's requests, added tests, removed pics etc

* Update clp.py

fix linting

* Update tutorial01_one-shot_learning_with_novelty_detection.ipynb

* Update tutorial02_clp_on_coil100.ipynb

* Update tutorial01_one-shot_learning_with_novelty_detection.ipynb

* Update tutorial02_clp_on_coil100.ipynb

* Removed sklearn dependency. Now np data gen.

* rm COIL tutorial, dataset, tests from branch

---------

Signed-off-by: bamsumit <bam_sumit@hotmail.com>
Co-authored-by: Elvin Hajizada <elvin.hajizada@intel.com>
Co-authored-by: PhilippPlank <32519998+PhilippPlank@users.noreply.github.com>
Co-authored-by: Marcus G K Williams <168222+mgkwill@users.noreply.github.com>
Co-authored-by: bamsumit <bam_sumit@hotmail.com>

* Update release job, add pypi upload, github release creation (lava-nc#737)

* Add pypi upload, github release creation in cd.yml

* Set version to 0.8.0.dev0

* Add readme to pyproject.toml

* use v1.3 of composite action

* Test run of release creation/pypi pub in cd.yml

* Run tests from py 3.10 in cd.yml

* Fix export of output vars in cd.yml

---------

Co-authored-by: PhilippPlank <32519998+PhilippPlank@users.noreply.github.com>

* Update release job, pypi auth

Signed-off-by: Marcus G K Williams <Marcus G K Williams 168222+mgkwill@users.noreply.github.com>

* Use github pypi auth in release job (lava-nc#747)

* Update release job, pypi auth

Signed-off-by: Marcus G K Williams <Marcus G K Williams 168222+mgkwill@users.noreply.github.com>

* Add id-token to cd.yml

---------

Signed-off-by: Marcus G K Williams <Marcus G K Williams 168222+mgkwill@users.noreply.github.com>
Co-authored-by: Marcus G K Williams <Marcus G K Williams 168222+mgkwill@users.noreply.github.com>

* Release 0.8.0

Signed-off-by: Marcus G K Williams <Marcus G K Williams 168222+mgkwill@users.noreply.github.com>

* Fix conv python model to send() before recv() (lava-nc#751)

Co-authored-by: Gavin Parpart <gavin.parpart@pnnl.gov>

* Adds support for Monitor a Port to observe if it is blocked (lava-nc#755)

* Adds support for Monitor a Port to observe if it is blocked

* Fix lint issues

* Redesigned Watchdog to use Multiprocessing Manager; Invoke only 2 Event Monitors and use 2 queues for watching events; Configs are piped in via compiler now

* Incorporate Codacy Suggestions

* Fix lint comments

* Fix failing unit tests to add the watchdog builder

* Code review comments

* Set version to dev0 in pyproject.toml

* Update README.md

Updated version in install instructions.

* Update README.md (lava-nc#758)

Updated the installation branch to the most recent version.

Co-authored-by: PhilippPlank <32519998+PhilippPlank@users.noreply.github.com>

* Fix DelayDense buffer issue (lava-nc#767)

* update refport unittest to always wait when it writes to port for consistent behavior

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Removed pyproject changes

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Fix to convolution tests. Fixed incompatible mnist_pretrained for old Python versions.

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Missing module parent fix

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Added ConvVarModel

* Added iterable callback function

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Fix codacy issues in callback_fx.py

* Fix linting in callback_fx.py

* Fix codacy sig issue in callback_fx.py

* Bugfix to pass the args by keyword

* Delay Dense PyModel fix

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Fixed unittests

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Fixed sparse delay

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

---------

Signed-off-by: bamsumit <bam_sumit@hotmail.com>
Co-authored-by: Joyesh Mishra <joyesh.mishra@intel.com>
Co-authored-by: Marcus G K Williams <168222+mgkwill@users.noreply.github.com>

* Allow np.array as input weights for Sparse (lava-nc#772)

* ndarray as input weights for Sparse

* docs

* codacy

* remove implementation details from docstring and from tests

* move tests to corresponding classes

* put weight casting into extra method

* Removed unused import

---------

Co-authored-by: Mathis Richter <mathis.richter@intel.com>

* Bump tornado from 6.3.2 to 6.3.3 (lava-nc#778)

Bumps [tornado](https://github.com/tornadoweb/tornado) from 6.3.2 to 6.3.3.
- [Changelog](https://github.com/tornadoweb/tornado/blob/master/docs/releases.rst)
- [Commits](tornadoweb/tornado@v6.3.2...v6.3.3)

---
updated-dependencies:
- dependency-name: tornado
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Bump cryptography from 41.0.2 to 41.0.3 (lava-nc#779)

Bumps [cryptography](https://github.com/pyca/cryptography) from 41.0.2 to 41.0.3.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](pyca/cryptography@41.0.2...41.0.3)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Mathis Richter <mathis.richter@intel.com>

* small docstring, typing and other formatting changes

* Update README.md (lava-nc#758)

Updated the installation branch to the most recent version.

Co-authored-by: PhilippPlank <32519998+PhilippPlank@users.noreply.github.com>

* small docstring, typing and other formatting changes

* doc strings for graded vec

* Bump gitpython from 3.1.32 to 3.1.35 (lava-nc#785)

Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.32 to 3.1.35.
- [Release notes](https://github.com/gitpython-developers/GitPython/releases)
- [Changelog](https://github.com/gitpython-developers/GitPython/blob/main/CHANGES)
- [Commits](gitpython-developers/GitPython@3.1.32...3.1.35)

---
updated-dependencies:
- dependency-name: gitpython
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* fixing merge conflicts on prodneuron

* Merge Spike IO (lava-nc#786)

* Made changes to channel builder for SpikeIO

* Added ChannelStub to configure channels

* Added NcSpikeIOVarModel

* Refined the NcSpikeIOVarModel

* Add interface_type and populate it in SpikeIOVarModel

* Add Interface Type

* Test PyNcChannel for Dense and Sparse Data using Unix Message Queues

* Faster encoding for sparse csp_port.send

* Added msg_queue_id to NcSpikeIOVarModel

* Added defaults for ByteEncoder

* Add advance_io API to PyOutPort

* Rename and refactor ConnectionConfig

* Only create channels for ChannelBuilderPyNc; Ignore ChannelBuilderNx

* Added spike_io_port to connection config and var model

* Integrate lower C level code, fix Lava bugs

* Fix merge

* Add enum for spike io mode and add it to ConnectionConfig and SpikeIO Var Model

* Fix indices dtype in sparse send to int32

* Add advance_time API to PyLoihiProcessModel

* Initial commit for spikeio output mode

* Rename to advance_to_time_step API

* Switch to TIME_COMPARE mode as default

* Commit for Spike Block Output Mode

* Move the axon allocation to 2 instead of 1 for output side

* Create a new config for watchdog and print warning; Set PyProcCompiler SpikeCounter Offset to None when object gets deleted

* Expose Mac Address, Num Input Buckets and Use Ethernet Interface from ConnectionConfig

* Fix lint specific errors

* fix codacy errors

* Fix an issue with length of connection_config list

* Fix unit tests

---------

Co-authored-by: yashward <yashwardhan.singh@intel.com>
Co-authored-by: Julia <julia.a.gould@intel.com>

* CLP tutorial 1 small patch (lava-nc#773)

* CLP initial commit: PrototypeLIF, NoveltyDetector, Readout procs/tests

* small linting fix

* Novelty detector upgraded to target next neuron; codacy errors fixed

* integration test; small fixes

* removed duplicate code in prototypeLIF process; linting fixes

* linting fixes

* Linting and codacy fixes

* remove duplicate test; some more codacy fixes

* PrototypeLIF spikes when it receives a 3rd factor input

* a test for PrototypeLIF output spike after 3rd factor input

* Allocation & prototype id tracking is abstracted away from
NoveltyDetector

* Allocator process; Readout proc sends allocation trigger if error

* introduce learning rate Var in PrototypeLIF

* updated integration tests; full system test included

* Linting fixes

* Another small linting fix

* PrototypeLIF hard reset capability to enable faster temporal WTA

* allocation mechanism changed; proc interface changes; dense conns
added; lr var removed

* small linting fix

* small codacy fix

* prints removed, spelling mistakes fixed

* ignoring one check in an integration test

* Revert "small linting fix"

This reverts commit bde4fa9.

* Fix linting in test_models.py

* Test fix in utils.py

* Fix test of bug fix in utils.py

* Fix utils.py

* Implemented individual threadsafe random call

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* fix figures, removed redundant cell

---------

Signed-off-by: bamsumit <bam_sumit@hotmail.com>
Co-authored-by: PhilippPlank <32519998+PhilippPlank@users.noreply.github.com>
Co-authored-by: Marcus G K Williams <168222+mgkwill@users.noreply.github.com>
Co-authored-by: bamsumit <bam_sumit@hotmail.com>

* CLP Tutorial 02: COIL-100 (lava-nc#721)

* CLP initial commit: PrototypeLIF, NoveltyDetector, Readout procs/tests

* small linting fix

* Novelty detector upgraded to target next neuron; codacy errors fixed

* integration test; small fixes

* removed duplicate code in prototypeLIF process; linting fixes

* linting fixes

* Linting and codacy fixes

* remove duplicate test; some more codacy fixes

* clp tutorial01 v1

* PrototypeLIF spikes when it receives a 3rd factor input

* a test for PrototypeLIF output spike after 3rd factor input

* clp tutorial01 ready to be roughly finished

* linting, license and utils fixes

* CLP on COIL-100, extracted features from 42 objects, tutorial01 fixes

* Allocation & prototype id tracking is abstracted away from
NoveltyDetector

* Allocator process; Readout proc sends allocation trigger if error

* introduce learning rate Var in PrototypeLIF

* updated integration tests; full system test included

* Linting fixes

* Another small linting fix

* clp tutorial 2, 20 class experiments (Coil-100)

* PrototypeLIF hard reset capability to enable faster temporal WTA

* allocation mechanism changed; proc interface changes; dense conns
added; lr var removed

* small linting fix

* small codacy fix

* prints removed, spelling mistakes fixed

* ignoring one check in an integration test

* Revert "small linting fix"

This reverts commit bde4fa9.

* CLP tutorial 1 is finalized

* Fix linting in test_models.py

* Test fix in utils.py

* Fix test of bug fix in utils.py

* Fix utils.py

* Implemented individual threadsafe random call

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* tutorial 2 use new abstracted CLP class

* CLP tutorial 2: unsupervised and supervised experiments are separated

* addressed reviewer's requests, added tests, removed pics etc

* Update clp.py

fix linting

* Update tutorial01_one-shot_learning_with_novelty_detection.ipynb

* Update tutorial02_clp_on_coil100.ipynb

* Update tutorial01_one-shot_learning_with_novelty_detection.ipynb

* Update tutorial02_clp_on_coil100.ipynb

* CLP class and experiments improved; pytorch dependency removed;
feature extraction added

* Allocator accepts arbitrary initial index as param

* New experiments; improved CLP class; continuous experimentation; new
features for COIL-100; torch as optional dependency

* linting fixes

* Temporarily skipping CLP tutorials test till sk-learn is added

* scikit-learn added to poetry; tutorial test is re-enabled

---------

Signed-off-by: bamsumit <bam_sumit@hotmail.com>
Co-authored-by: PhilippPlank <32519998+PhilippPlank@users.noreply.github.com>
Co-authored-by: Marcus G K Williams <168222+mgkwill@users.noreply.github.com>
Co-authored-by: bamsumit <bam_sumit@hotmail.com>

* Bump cryptography from 41.0.3 to 41.0.4 (lava-nc#790)

Bumps [cryptography](https://github.com/pyca/cryptography) from 41.0.3 to 41.0.4.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](pyca/cryptography@41.0.3...41.0.4)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Generalize int shape check in injector and extractor to take numpy ints (lava-nc#792)

* update refport unittest to always wait when it writes to port for consistent behavior

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Removed pyproject changes

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Fix to convolution tests. Fixed incompatible mnist_pretrained for old Python versions.

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Missing module parent fix

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Added ConvVarModel

* Added iterable callback function

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Fix codacy issues in callback_fx.py

* Fix linting in callback_fx.py

* Fix codacy sig issue in callback_fx.py

* Bugfix to pass the args by keyword

* Delay Dense PyModel fix

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Fixed unittests

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* Fixed sparse delay

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* IO modules fixes

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

* IO modules fixes

Signed-off-by: bamsumit <bam_sumit@hotmail.com>

---------

Signed-off-by: bamsumit <bam_sumit@hotmail.com>
Co-authored-by: Joyesh Mishra <joyesh.mishra@intel.com>
Co-authored-by: Marcus G K Williams <168222+mgkwill@users.noreply.github.com>

* Resfire (lava-nc#787)

* resfire process and fixed process model

* changed vth->uth in RFZero. Added tests.

* removed unused imports

* unused imports, copyright statement.

* bsd license on resfire models.py

* Bump pillow from 10.0.0 to 10.0.1 (lava-nc#794)

Bumps [pillow](https://github.com/python-pillow/Pillow) from 10.0.0 to 10.0.1.
- [Release notes](https://github.com/python-pillow/Pillow/releases)
- [Changelog](https://github.com/python-pillow/Pillow/blob/main/CHANGES.rst)
- [Commits](python-pillow/Pillow@10.0.0...10.0.1)

---
updated-dependencies:
- dependency-name: pillow
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: PhilippPlank <32519998+PhilippPlank@users.noreply.github.com>

* Bump urllib3 from 1.26.16 to 1.26.17 (lava-nc#793)

Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.16 to 1.26.17.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](urllib3/urllib3@1.26.16...1.26.17)

---
updated-dependencies:
- dependency-name: urllib3
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: PhilippPlank <32519998+PhilippPlank@users.noreply.github.com>

* multiply for threshvec, fixes to frameworks imports, fixes for resfire network

* rename ThreshVec to GradedVec and fixes.

* lava VA tutorials.

* slight updates to lava_va tutorials and removed csr_matrix cast

* super needed, formatting, fixed test_network

* Automatically create identity connections when using lva to connect vectors

* NetworkList to keep track of + Networks. More flexibility in algebra syntax.

* Updated tutorial 1 to demo automatic vec2vec connections and better + overloading

* Updates to Tutorial01 that show automatic identity connections when connecting AlgebraicVectors and syntax.

* Comments, docstrings, typing clean-up.

* changing embedded io import location, in case there's no lava-loihi.

* small codacy fixes. Test lava va tutorials.

* Cleanup comments on test_graded.py and test_tutorials-lva.py

---------

Signed-off-by: bamsumit <bam_sumit@hotmail.com>
Signed-off-by: Marcus G K Williams <Marcus G K Williams 168222+mgkwill@users.noreply.github.com>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Tim Shea <tim-shea@users.noreply.github.com>
Co-authored-by: PhilippPlank <32519998+PhilippPlank@users.noreply.github.com>
Co-authored-by: Mathis Richter <mathis.richter@intel.com>
Co-authored-by: bamsumit <bam_sumit@hotmail.com>
Co-authored-by: Joyesh Mishra <joyesh.mishra@intel.com>
Co-authored-by: Marcus G K Williams <168222+mgkwill@users.noreply.github.com>
Co-authored-by: Danielle Rager <83376999+drager-intel@users.noreply.github.com>
Co-authored-by: Elvin Hajizada <elvin.hajizada@intel.com>
Co-authored-by: Marcus G K Williams <Marcus G K Williams 168222+mgkwill@users.noreply.github.com>
Co-authored-by: Gavin Parpart <GGParpart@yahoo.com>
Co-authored-by: Gavin Parpart <gavin.parpart@pnnl.gov>
Co-authored-by: Alexander Henkes <62153181+ahenkes1@users.noreply.github.com>
Co-authored-by: Svea Marie Meyer <46671894+SveaMeyer13@users.noreply.github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: yashward <yashwardhan.singh@intel.com>
Co-authored-by: Julia <julia.a.gould@intel.com>
17 people authored Jun 25, 2024
1 parent 4c87af0 commit 2750df0
Showing 18 changed files with 2,544 additions and 39 deletions.
14 changes: 14 additions & 0 deletions src/lava/frameworks/loihi2.py
@@ -0,0 +1,14 @@
# Copyright (C) 2022-23 Intel Corporation
# SPDX-License-Identifier: BSD-3-Clause
# See: https://spdx.org/licenses/

from lava.networks.gradedvecnetwork import (InputVec, OutputVec, GradedVec,
                                            GradedDense, GradedSparse,
                                            ProductVec,
                                            LIFVec,
                                            NormalizeNet)

from lava.networks.resfire import ResFireVec

from lava.magma.core.run_conditions import RunSteps, RunContinuous
from lava.magma.core.run_configs import Loihi2SimCfg, Loihi2HwCfg
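
A minimal usage sketch for this namespace (illustrative, not part of the diff). It assumes the classes behave as defined in src/lava/networks/gradedvecnetwork.py below; the parameter values, the 'fixed_pt' tag, and running via the wrapped main process are assumptions, not documented API:

import numpy as np
from lava.frameworks.loihi2 import (InputVec, OutputVec, GradedVec,
                                    GradedDense, RunSteps, Loihi2SimCfg)

num_steps = 10  # illustrative simulation length

# 5-neuron input that repeats its single column every time step.
in_vec = InputVec(np.ones((5, 1)), exp=6)
vec = GradedVec(shape=(5,), vth=1)
out_vec = OutputVec(shape=(5,), buffer=num_steps)

# Algebraic wiring: '@' builds a weighted connection, '<<' attaches it.
vec << GradedDense(weights=np.eye(5)) @ in_vec
out_vec << vec

# Run via the wrapped process (assumed entry point for execution).
vec.main.run(condition=RunSteps(num_steps=num_steps),
             run_cfg=Loihi2SimCfg(select_tag='fixed_pt'))
result = out_vec.get_data()  # sign-extended int32 recording
vec.main.stop()
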
324 changes: 324 additions & 0 deletions src/lava/networks/gradedvecnetwork.py
@@ -0,0 +1,324 @@
# Copyright (C) 2022-23 Intel Corporation
# SPDX-License-Identifier: BSD-3-Clause
# See: https://spdx.org/licenses/

import numpy as np
import typing as ty

from lava.proc.graded.process import InvSqrt
from lava.proc.graded.process import NormVecDelay
from lava.proc.sparse.process import Sparse
from lava.proc.dense.process import Dense
from lava.proc.prodneuron.process import ProdNeuron
from lava.proc.graded.process import GradedVec as GradedVecProc
from lava.proc.lif.process import LIF
from lava.proc.io import sink, source

from .network import Network, AlgebraicVector, AlgebraicMatrix


class InputVec(AlgebraicVector):
    """InputVec
    Simple input vector. Adds algebraic syntax to RingBuffer.

    Parameters
    ----------
    vec : np.ndarray
        NxM array of input values. Input will repeat every M steps.
    exp : int, optional
        Set the fixed-point base value.
    loihi2 : bool, optional
        Flag to create the adapters for Loihi 2.
    """

    def __init__(self,
                 vec: np.ndarray,
                 loihi2: ty.Optional[bool] = False,
                 exp: ty.Optional[int] = 0,
                 **kwargs) -> None:
        self.loihi2 = loihi2
        self.shape = np.atleast_2d(vec).shape
        self.exp = exp

        # Convert it to fixed-point base
        vec *= 2 ** self.exp

        self.inport_plug = source.RingBuffer(data=np.atleast_2d(vec))

        if self.loihi2:
            from lava.proc import embedded_io as eio
            self.inport_adapter = eio.spike.PyToNxAdapter(
                shape=(self.shape[0],),
                num_message_bits=24)
            self.inport_plug.s_out.connect(self.inport_adapter.inp)
            self.out_port = self.inport_adapter.out
        else:
            self.out_port = self.inport_plug.s_out

    def __lshift__(self, other):
        # Maybe this could be done with a numpy array and call set_data?
        return NotImplemented


class OutputVec(Network):
    """OutputVec
    Records spike output. Adds algebraic syntax to RingBuffer.

    Parameters
    ----------
    shape : tuple(int)
        Shape of the output to record.
    buffer : int, optional
        Length of the recording
        (buffer is overwritten if shorter than sim time).
    loihi2 : bool, optional
        Flag to create the adapters for Loihi 2.
    num_message_bits : int, optional
        Size of the output message ("0" is for unary spike events).
    """

    def __init__(self,
                 shape: ty.Tuple[int, ...],
                 buffer: int = 1,
                 loihi2: ty.Optional[bool] = False,
                 num_message_bits: ty.Optional[int] = 24,
                 **kwargs) -> None:
        self.shape = shape
        self.buffer = buffer
        self.loihi2 = loihi2
        self.num_message_bits = num_message_bits

        self.outport_plug = sink.RingBuffer(
            shape=self.shape, buffer=self.buffer, **kwargs)

        if self.loihi2:
            from lava.proc import embedded_io as eio
            self.outport_adapter = eio.spike.NxToPyAdapter(
                shape=self.shape, num_message_bits=self.num_message_bits)
            self.outport_adapter.out.connect(self.outport_plug.a_in)
            self.in_port = self.outport_adapter.inp
        else:
            self.in_port = self.outport_plug.a_in

    def get_data(self):
        # Sign-extend the 24-bit payloads stored in int32: shifting left
        # then right by 8 propagates bit 23 through the top byte.
        return (self.outport_plug.data.get().astype(np.int32) << 8) >> 8


class LIFVec(AlgebraicVector):
    """LIFVec
    Network wrapper to LIF neuron.

    Parameters
    ----------
    See lava.proc.lif.process.LIF
    """

    def __init__(self, **kwargs):
        self.main = LIF(**kwargs)

        self.in_port = self.main.a_in
        self.out_port = self.main.s_out


class GradedVec(AlgebraicVector):
    """GradedVec
    Simple graded threshold vector with no dynamics.

    Parameters
    ----------
    shape : tuple(int)
        Number and topology of neurons.
    vth : int, optional
        Threshold for spiking.
    exp : int, optional
        Fixed-point base of the vector.
    """

    def __init__(self,
                 shape: ty.Tuple[int, ...],
                 vth: int = 10,
                 exp: int = 0,
                 **kwargs):
        self.shape = shape
        self.vth = vth
        self.exp = exp

        self.main = GradedVecProc(shape=self.shape,
                                  vth=self.vth, exp=self.exp)
        self.in_port = self.main.a_in
        self.out_port = self.main.s_out

        super().__init__()

    def __mul__(self, other):
        if isinstance(other, GradedVec):
            # Create the product network
            prod_layer = ProductVec(shape=self.shape, vth=1, exp=self.exp)

            weightsI = np.eye(self.shape[0])

            weights_A = GradedSparse(weights=weightsI)
            weights_B = GradedSparse(weights=weightsI)
            weights_out = GradedSparse(weights=weightsI)

            # The bare '@' below connects for its side effect: it wires
            # prod_layer's output into weights_out, which is returned as
            # the connection carrying the product.
            prod_layer << (weights_A @ self, weights_B @ other)
            weights_out @ prod_layer
            return weights_out
        else:
            return NotImplemented


class ProductVec(AlgebraicVector):
    """ProductVec
    Neuron that will multiply values on two input channels.

    Parameters
    ----------
    shape : tuple(int)
        Number and topology of neurons.
    vth : int
        Threshold for spiking.
    exp : int
        Fixed-point base of the vector.
    """

    def __init__(self,
                 shape: ty.Tuple[int, ...],
                 vth: ty.Optional[int] = 10,
                 exp: ty.Optional[int] = 0,
                 **kwargs):
        self.shape = shape
        self.vth = vth
        self.exp = exp

        self.main = ProdNeuron(shape=self.shape, vth=self.vth, exp=self.exp)

        self.in_port = self.main.a_in1
        self.in_port2 = self.main.a_in2

        self.out_port = self.main.s_out

    def __lshift__(self, other):
        # We're going to override the behavior here,
        # since there are two ports the API idea is:
        # prod_layer << (conn1, conn2)
        if isinstance(other, (list, tuple)):
            # It should be only length 2, and a Network object,
            # TODO: add checks
            other[0].out_port.connect(self.in_port)
            other[1].out_port.connect(self.in_port2)
        else:
            return NotImplemented


class GradedDense(AlgebraicMatrix):
    """GradedDense
    Network wrapper for Dense. Adds algebraic syntax to Dense.

    Parameters
    ----------
    See lava.proc.dense.process.Dense
    weights : numpy.ndarray
        Weight matrix expressed as floating point. Weights will be
        automatically reconfigured to fixed point (may lead to changes
        due to rounding).
    exp : int, optional
        Fixed-point base of the weights (reconfigures weights/weight_exp).
    """

    def __init__(self,
                 weights: np.ndarray,
                 exp: int = 7,
                 **kwargs):
        self.exp = exp

        # Adjust the weights to the fixed-point base
        w = weights * 2 ** self.exp

        self.main = Dense(weights=w,
                          num_message_bits=24,
                          num_weight_bits=8,
                          weight_exp=-self.exp)

        self.in_port = self.main.s_in
        self.out_port = self.main.a_out


class GradedSparse(AlgebraicMatrix):
    """GradedSparse
    Network wrapper for Sparse. Adds algebraic syntax to Sparse.

    Parameters
    ----------
    See lava.proc.sparse.process.Sparse
    weights : numpy.ndarray
        Weight matrix expressed as floating point. Weights will be
        automatically reconfigured to fixed point (may lead to changes
        due to rounding).
    exp : int, optional
        Fixed-point base of the weights (reconfigures weights/weight_exp).
    """

    def __init__(self,
                 weights: np.ndarray,
                 exp: int = 7,
                 **kwargs):
        self.exp = exp

        # Adjust the weights to the fixed-point base
        w = weights * 2 ** self.exp
        self.main = Sparse(weights=w,
                           num_message_bits=24,
                           num_weight_bits=8,
                           weight_exp=-self.exp)

        self.in_port = self.main.s_in
        self.out_port = self.main.a_out


class NormalizeNet(AlgebraicVector):
    """NormalizeNet
    Creates a layer for normalizing vector inputs.

    Parameters
    ----------
    shape : tuple(int)
        Number and topology of neurons.
    exp : int
        Fixed-point base of the vector.
    """

    def __init__(self,
                 shape: ty.Tuple[int, ...],
                 exp: ty.Optional[int] = 12,
                 **kwargs):
        self.shape = shape
        self.fpb = exp

        vec_to_fpinv_w = np.ones((1, self.shape[0]))
        fpinv_to_vec_w = np.ones((self.shape[0], 1))
        weight_exp = 0

        self.vfp_dense = Dense(weights=vec_to_fpinv_w,
                               num_message_bits=24,
                               weight_exp=-weight_exp)
        self.fpv_dense = Dense(weights=fpinv_to_vec_w,
                               num_message_bits=24,
                               weight_exp=-weight_exp)

        self.main = NormVecDelay(shape=self.shape, vth=1,
                                 exp=self.fpb)
        self.fp_inv_neuron = InvSqrt(shape=(1,), fp_base=self.fpb)

        # The second output channel of NormVecDelay feeds the (1, N) summing
        # weights; the scalar sum drives InvSqrt, and the inverse-sqrt result
        # is broadcast back through (N, 1) weights to rescale the vector.
        self.main.s2_out.connect(self.vfp_dense.s_in)
        self.vfp_dense.a_out.connect(self.fp_inv_neuron.a_in)
        self.fp_inv_neuron.s_out.connect(self.fpv_dense.s_in)
        self.fpv_dense.a_out.connect(self.main.a_in2)

        self.in_port = self.main.a_in1
        self.out_port = self.main.s_out
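
As a sketch of the algebra above (illustrative, not part of the diff): GradedVec.__mul__ returns the GradedSparse connection carrying the output of an internally created ProductVec layer, so an elementwise product of two vectors can be wired as follows.

from lava.frameworks.loihi2 import GradedVec

v1 = GradedVec(shape=(3,), vth=1)
v2 = GradedVec(shape=(3,), vth=1)
prod = GradedVec(shape=(3,), vth=1)

# v1 * v2 routes both vectors through identity weights into a ProdNeuron
# layer and returns the identity connection out of that layer; '<<' then
# attaches that connection to prod.
prod << (v1 * v2)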
