TensorFlake is my own machine learning framework. It aims to train a variety of networks quickly on the CPU.
- Automatic differentiation
- Foundational NDArray operations
- Foundational functions for deep neural networks
- smallvec
- backward without create_graph
- Efficient Linear -> matmul_add
- Save & load
- param_bin.rs
- serde
- Restore optimizers
- Restore Fn (MLP::activation, etc.)
- Strong typing -> Functional API (tensorflake::function::chain)
- Generic for dimension
- Multi thread
- High level API
- Synchronous update
- Asynchronous update
- Specify the number of threads
- Impl ops
- Lazy execution for optimization on the graph
- Regularization
- Tensordot -> ndarray_einsum_beta
- Transposed convolution
- Batch normalization
- Embedding
- Sequential
- Param creator -> Initializer
- Benchmarks
- Measure the execution time of functions and export it as a dot file
- Tensor summarization
- Sparse tensor
- Examples
- CNN
- RNN
- GAN
- VAE
- wasm
- safe rust
- Compare performance against GPU Tensorflow
- Fitting report
- Generic ndarray
- functions
- reduce_sum
- signum
- Optimize consecutive element-wise operations
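Automatic differentiation is the first item on the list above. As a rough illustration of the idea (this is a generic tape-based sketch, not TensorFlake's actual API; the `Tape` and `Op` types here are hypothetical), a minimal scalar reverse-mode implementation looks like this:

```rust
// Minimal scalar reverse-mode autodiff sketch (hypothetical types,
// not TensorFlake's real API).
#[derive(Clone, Copy)]
enum Op {
    Leaf,
    Add(usize, usize),
    Mul(usize, usize),
}

struct Tape {
    vals: Vec<f64>,
    ops: Vec<Op>,
}

impl Tape {
    fn new() -> Self {
        Tape { vals: vec![], ops: vec![] }
    }

    fn leaf(&mut self, v: f64) -> usize {
        self.vals.push(v);
        self.ops.push(Op::Leaf);
        self.vals.len() - 1
    }

    fn add(&mut self, a: usize, b: usize) -> usize {
        self.vals.push(self.vals[a] + self.vals[b]);
        self.ops.push(Op::Add(a, b));
        self.vals.len() - 1
    }

    fn mul(&mut self, a: usize, b: usize) -> usize {
        self.vals.push(self.vals[a] * self.vals[b]);
        self.ops.push(Op::Mul(a, b));
        self.vals.len() - 1
    }

    // Walk the tape backwards, accumulating gradients by the chain rule.
    fn backward(&self, out: usize) -> Vec<f64> {
        let mut grads = vec![0.0; self.vals.len()];
        grads[out] = 1.0;
        for i in (0..=out).rev() {
            match self.ops[i] {
                Op::Leaf => {}
                Op::Add(a, b) => {
                    grads[a] += grads[i];
                    grads[b] += grads[i];
                }
                Op::Mul(a, b) => {
                    grads[a] += grads[i] * self.vals[b];
                    grads[b] += grads[i] * self.vals[a];
                }
            }
        }
        grads
    }
}

fn main() {
    let mut t = Tape::new();
    let x = t.leaf(3.0);
    let y = t.leaf(4.0);
    // z = x * y + x, so dz/dx = y + 1 and dz/dy = x
    let xy = t.mul(x, y);
    let z = t.add(xy, x);
    let g = t.backward(z);
    println!("dz/dx = {}, dz/dy = {}", g[x], g[y]); // dz/dx = 5, dz/dy = 3
}
```

TensorFlake extends the same principle to NDArray operations, and the "backward without create_graph" item above refers to skipping graph construction during the backward pass when higher-order derivatives are not needed.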
Run the benchmark suite and save the results:

```sh
$ cargo +nightly bench -q > benches/result.txt
```
- carrotflakes (carrotflakes@gmail.com)
Copyright (c) 2022 carrotflakes (carrotflakes@gmail.com)
Licensed under the MIT License.