Merge pull request #145 from imdeepmind/docs/fixed-misc-issues
updated the docs
imdeepmind authored Jan 31, 2021
2 parents 1e5a24c + 62effe1 commit 6e5af2a
Showing 12 changed files with 17 additions and 14 deletions.
2 changes: 1 addition & 1 deletion docs/docs/callbacks/train_logger.mdx
@@ -24,7 +24,7 @@ TrainLogger is a NeuralPy Callback class that generates training logs. It genera

## Supported Arguments

-- `path`: (String) Directory where the logs will be stored
+- `path`: (String) Directory where the logs will be stored

## Example Code

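The example code from this file is collapsed in the diff view. As an illustration of the `path` argument, here is a minimal usage sketch; the import path `neuralpy.callbacks.TrainLogger` and the `callbacks` parameter of `fit` are assumptions inferred from the docs layout, not confirmed by this diff.

```python
# Hypothetical sketch: the import path is inferred from
# docs/docs/callbacks/train_logger.mdx and is not shown in this diff.
from neuralpy.callbacks import TrainLogger

# `path` is the directory where the generated training logs are stored
train_logger = TrainLogger(path="./training_logs")

# The callback would then be passed to training, e.g.
# model.fit(..., callbacks=[train_logger]), assuming fit accepts a callbacks list
```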
2 changes: 1 addition & 1 deletion docs/docs/layers/activation_functions/gelu.mdx
@@ -26,7 +26,7 @@ To learn more about GELU, please check PyTorch [documentation](https://pytorch.o

## Supported Arguments

-- `name`: (String) Name of the activation function layer, if not provided then automatically calculates a unique name for the layer.
+- `name=None`: (String) Name of the activation function layer, if not provided then automatically calculates a unique name for the layer.

## Example Code

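The example code is collapsed here as well. A minimal sketch of the `name` argument follows; the import path is an assumption inferred from the docs layout and is not confirmed by this diff. The same name-only pattern applies to the ReLU, SELU, Sigmoid, and Tanh pages below.

```python
# Hypothetical sketch: import path inferred from
# docs/docs/layers/activation_functions/gelu.mdx, not shown in this diff.
from neuralpy.layers.activation_functions import GELU

gelu_auto = GELU()               # name=None, so a unique layer name is generated
gelu_named = GELU(name="gelu1")  # explicit name, useful in summaries and logs
```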
2 changes: 1 addition & 1 deletion docs/docs/layers/activation_functions/leaky_relu.mdx
@@ -27,7 +27,7 @@ To learn more about LeakyReLU, please check PyTorch [documentation](https://pyto
## Supported Arguments

- `negative_slope`: (Float) A negative slope for the LeakyReLU, default value is 0.01
-- `name`: (String) Name of the activation function layer, if not provided then automatically calculates a unique name for the layer.
+- `name=None`: (String) Name of the activation function layer, if not provided then automatically calculates a unique name for the layer.

## Example Code

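A minimal sketch covering both arguments, under the same assumed import path as above.

```python
# Hypothetical sketch: import path inferred from the docs layout.
from neuralpy.layers.activation_functions import LeakyReLU

# negative_slope sets the slope used for negative inputs (default 0.01)
leaky = LeakyReLU(negative_slope=0.05, name="leaky_relu1")
```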
2 changes: 1 addition & 1 deletion docs/docs/layers/activation_functions/relu.mdx
@@ -26,7 +26,7 @@ To learn more about ReLU, please check PyTorch [documentation](https://pytorch.o

## Supported Arguments

-- `name`: (String) Name of the activation function layer, if not provided then automatically calculates a unique name for the layer.
+- `name=None`: (String) Name of the activation function layer, if not provided then automatically calculates a unique name for the layer.

## Example Code

2 changes: 1 addition & 1 deletion docs/docs/layers/activation_functions/selu.mdx
@@ -26,7 +26,7 @@ To learn more about SELU, please check PyTorch [documentation](https://pytorch.o

## Supported Arguments

-- `name`: (String) Name of the activation function layer, if not provided then automatically calculates a unique name for the layer.
+- `name=None`: (String) Name of the activation function layer, if not provided then automatically calculates a unique name for the layer.

## Example Code

2 changes: 1 addition & 1 deletion docs/docs/layers/activation_functions/sigmoid.mdx
@@ -26,7 +26,7 @@ To learn more about Sigmoid, please check PyTorch [documentation](https://pytorc

## Supported Arguments

-- `name`: (String) Name of the activation function layer, if not provided then automatically calculates a unique name for the layer.
+- `name=None`: (String) Name of the activation function layer, if not provided then automatically calculates a unique name for the layer.

## Example Code

2 changes: 1 addition & 1 deletion docs/docs/layers/activation_functions/softmax.mdx
@@ -27,7 +27,7 @@ To learn more about Softmax, please check PyTorch [documentation](https://pytorc
## Supported Arguments

- `dim`: (Integer) A dimension along which Softmax will be computed (so every slice along dim will sum to 1).
-- `name`: (String) Name of the activation function layer, if not provided then automatically calculates a unique name for the layer.
+- `name=None`: (String) Name of the activation function layer, if not provided then automatically calculates a unique name for the layer.

## Example Code

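A minimal sketch of the `dim` argument, under the same assumed import path as above.

```python
# Hypothetical sketch: import path inferred from the docs layout.
from neuralpy.layers.activation_functions import Softmax

# For a (batch, classes) output, dim=1 normalizes across the class
# dimension, so every row of the result sums to 1
softmax = Softmax(dim=1, name="softmax_out")
```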
2 changes: 1 addition & 1 deletion docs/docs/layers/activation_functions/tanh.mdx
@@ -26,7 +26,7 @@ To learn more about Tanh, please check PyTorch [documentation](https://pytorch.o

## Supported Arguments

-- `name`: (String) Name of the activation function layer, if not provided then automatically calculates a unique name for the layer.
+- `name=None`: (String) Name of the activation function layer, if not provided then automatically calculates a unique name for the layer.

## Example Code

5 changes: 3 additions & 2 deletions docs/docs/layers/linear/bilinear.mdx
@@ -14,12 +14,13 @@ hide_title: true
neuralpy.layers.linear.Bilinear(n_nodes, n1_features=None, n2_features=None, bias=True, name=None)
```

-:::info
+:::danger

-Bilinear Layer is mostly stable and can be used for any project. In the future, any chance of breaking changes is very low.
+Bilinear Layer is unstable and buggy, not ready for any real use

:::


Bilinear layer performs a bilinear transformation of the input.

To learn more about Bilinear layers, please check [pytorch documentation](https://pytorch.org/docs/stable/nn.html?highlight=bilinear) for it.
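The full signature at the top of this file is shown in the diff, so a construction sketch follows; the argument values are illustrative only.

```python
from neuralpy.layers.linear import Bilinear

# 64 output nodes from two inputs with 32 and 16 features respectively;
# per the warning added in this commit, the layer is flagged unstable and buggy
bilinear = Bilinear(n_nodes=64, n1_features=32, n2_features=16, bias=True)
```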
2 changes: 1 addition & 1 deletion docs/docs/layers/pooling/avgpool3d.mdx
@@ -1,7 +1,7 @@
---
id: avgpool3d
title: AvgPool3D
-sidebar_label: AvgPool1D
+sidebar_label: AvgPool3D
slug: /layers/pooling-layers/avgpool3d
description: Applies Batch Normalization over a 2D or 3D input
image: https://user-images.githubusercontent.com/34741145/81591141-99752900-93d9-11ea-9ef6-cc2c68daaa19.png
5 changes: 3 additions & 2 deletions docs/docs/layers/sparse/embedding.mdx
@@ -14,12 +14,13 @@ hide_title: true
neuralpy.layers.sparse.Embedding(num_embeddings, embedding_dim, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False, name=None)
```

-:::info
+:::danger

-Embedding is mostly stable and can be used for any project. In the future, any chance of breaking changes is very low.
+Embedding Layer is unstable and buggy, not ready for any real use

:::


A simple lookup table that stores embeddings of a fixed dictionary and size.

For more information, check [this](https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html?highlight=embedding#torch.nn.Embedding) page
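As with Bilinear, the full signature is shown in the diff; a construction sketch with illustrative values follows, and the same newly added warning applies to this layer.

```python
from neuralpy.layers.sparse import Embedding

# A 10,000-entry lookup table mapping indices to 128-dimensional vectors;
# padding_idx=0 keeps index 0 as a dedicated padding vector
embedding = Embedding(num_embeddings=10000, embedding_dim=128, padding_idx=0)
```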
3 changes: 2 additions & 1 deletion docs/support.mdx
@@ -14,4 +14,5 @@ If you need help or you need some information regarding NeuralPy, then are the f

1. Raise an issue on Github
2. Join our discord server (https://discord.gg/6aTTwbW)
-3. Contact with Abhishek Chatterjee(abhishek.chatterjee97@protonmail.com)
+3. Start a discussion on Github discussion (https://github.com/imdeepmind/NeuralPy/discussions/new)
+4. Contact with Abhishek Chatterjee(abhishek.chatterjee97@protonmail.com)
