
Performance and Energy Efficiency Evaluation of Spiking Neural Networks (SNNs) on CPU, GPU, and IPU

In this project, we investigate the performance and energy efficiency of spiking neural networks (SNNs) on different computing devices, including CPUs, GPUs, and IPUs. SNNs are a class of artificial neural networks that emulate the behavior of biological neurons, communicating through discrete spike events rather than continuous activations, which can potentially lead to more efficient and accurate machine learning models.
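To make the neuron model concrete: the project does not state which spiking neuron model it uses, so the leaky integrate-and-fire (LIF) update below is only an illustrative sketch, and the decay factor, threshold, and layer size are placeholder values.

```python
import numpy as np

def lif_step(v, input_current, beta=0.9, threshold=1.0):
    """One discrete-time update of a leaky integrate-and-fire (LIF) neuron.

    v             -- membrane potentials from the previous step
    input_current -- weighted synaptic input at this step
    beta          -- membrane decay factor (placeholder value)
    threshold     -- firing threshold (placeholder value)
    """
    v = beta * v + input_current                # leaky integration of the input
    spikes = (v >= threshold).astype(v.dtype)   # binary spike events
    v = v - spikes * threshold                  # soft reset after a spike
    return spikes, v

# Drive 4 neurons with random input for 10 time steps
rng = np.random.default_rng(0)
v = np.zeros(4)
for t in range(10):
    spikes, v = lif_step(v, rng.uniform(0.0, 0.5, size=4))
```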

To evaluate the performance and energy efficiency of SNNs, we will develop a simulation framework that allows us to execute SNN models on each computing device and measure their runtime and power consumption. We will use a benchmark dataset and a standard evaluation metric to compare the results across devices.
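As an illustration of what such a measurement loop could look like (the framework itself is not included in this snapshot), the sketch below times repeated forward passes of a placeholder PyTorch model on a chosen device; the model, input shapes, and step counts are assumptions, and power sampling, which depends on vendor-specific tools such as nvidia-smi for GPUs, is omitted.

```python
import time
import torch

def time_inference(model, spikes, device, repeats=10):
    """Average wall-clock time (seconds) for running a model over a spike sequence."""
    model = model.to(device).eval()
    spikes = spikes.to(device)                  # shape: [time_steps, batch, features]
    with torch.no_grad():
        for frame in spikes:                    # warm-up pass
            model(frame)
        times = []
        for _ in range(repeats):
            if device.type == "cuda":
                torch.cuda.synchronize()        # make sure queued GPU work has finished
            start = time.perf_counter()
            for frame in spikes:
                model(frame)
            if device.type == "cuda":
                torch.cuda.synchronize()
            times.append(time.perf_counter() - start)
    return sum(times) / len(times)

# Placeholder model and data; a real SNN layer would replace torch.nn.Linear
model = torch.nn.Linear(128, 10)
spikes = torch.rand(25, 32, 128)                # 25 time steps, batch of 32
print(f"CPU mean runtime: {time_inference(model, spikes, torch.device('cpu')):.4f} s")
```

Given such a runtime measurement and an average power reading sampled during the run, energy per inference can then be estimated as power multiplied by runtime.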

We expect to observe significant differences in the performance and energy efficiency of SNNs across these devices. CPUs are the most widely available computing devices but may not deliver the best performance for SNNs because of their comparatively low parallelism. GPUs are well suited to parallel computation but may draw more power. IPUs (Graphcore's Intelligence Processing Units), on the other hand, are specialized processors built for fine-grained parallel workloads and can potentially offer the best performance and energy efficiency for SNNs.

By conducting this study, we aim to provide insights into the optimal choice of computing devices for SNNs and contribute to the development of more efficient and accurate machine learning models.
