- NumPy as np
- pandas as pd
- Keras API (Application Programming Interface)
- TensorFlow using CUDA cores
- Google Colab virtual CUDA cores
- Recurrent Neural Networks (RNNs)
Deep learning is a subset of machine learning. Technically, deep learning is machine learning and functions in a similar way (hence the terms are sometimes used interchangeably). However, its capabilities differ.
While basic machine learning models do become progressively better at their task, they still need some guidance. If a machine learning algorithm returns an inaccurate prediction, an engineer has to step in and make adjustments. With a deep learning model, the algorithm can determine on its own whether a prediction is accurate through its own neural network.
Long short-term memory (LSTM) is an artificial recurrent neural network (RNN) architecture used in the field of deep learning. Unlike standard feedforward neural networks, LSTM has feedback connections. It can not only process single data points (such as images), but also entire sequences of data (such as speech or video). For example, LSTM is applicable to tasks such as unsegmented, connected handwriting recognition, speech recognition and anomaly detection in network traffic or IDSs (intrusion detection systems).
A common LSTM unit is composed of a cell, an input gate, an output gate and a forget gate. The cell remembers values over arbitrary time intervals and the three gates regulate the flow of information into and out of the cell.
LSTM networks are well-suited to classifying, processing and making predictions based on time series data, since there can be lags of unknown duration between important events in a time series. LSTMs were developed to deal with the vanishing gradient problem that can be encountered when training traditional RNNs. Relative insensitivity to gap length is an advantage of LSTM over RNNs, hidden Markov models and other sequence learning methods in numerous applications.
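The gate mechanism described above can be sketched as a single LSTM cell step in plain NumPy. This is a minimal illustration, not the optimized Keras implementation; the stacked weight layout and variable names are assumptions made for this sketch:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step.

    x: input vector (m,); h_prev, c_prev: previous hidden/cell state (d,).
    W: (4d, m), U: (4d, d), b: (4d,) hold the stacked weights for the
    input (i), forget (f), cell candidate (g) and output (o) gates.
    """
    d = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:d])        # input gate: how much new information to write
    f = sigmoid(z[d:2*d])      # forget gate: how much old cell state to keep
    g = np.tanh(z[2*d:3*d])    # candidate values for the cell state
    o = sigmoid(z[3*d:4*d])    # output gate: how much cell state to expose
    c = f * c_prev + i * g     # the cell "remembers" across time steps
    h = o * np.tanh(c)         # new hidden state
    return h, c

# tiny example: input size m=3, hidden size d=2
rng = np.random.default_rng(0)
m, d = 3, 2
h, c = lstm_step(rng.normal(size=m), np.zeros(d), np.zeros(d),
                 rng.normal(size=(4 * d, m)), rng.normal(size=(4 * d, d)),
                 np.zeros(4 * d))
```

The additive update `c = f * c_prev + i * g` is what lets gradients flow across long gaps instead of vanishing, which is exactly the property discussed above.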
A deep neural network analyzes data with learned representations, akin to the way a person would look at a problem. In traditional machine learning, the algorithm is given a set of relevant features to analyze; in deep learning, the algorithm is given raw data and derives the features itself.
- LSTM (Long Short-Term Memory): LSTM has three gates (input, output and forget).
- GRU (Gated Recurrent Unit): GRU has two gates (reset and update).
- GRU couples the forget and input gates into a single update gate. GRUs use fewer trainable parameters and therefore use less memory, execute faster and train faster than LSTMs, whereas LSTM is more accurate on datasets with longer sequences. In short, if the sequences are long or accuracy is critical, go for LSTM; for less memory consumption and faster operation, go for GRU. If you do not have many floating point operations per second (FLOPS) to spare, switch to GRU. An LSTM cell has three values at its output (output, hidden state and cell state), whereas a GRU cell has two (output and hidden state).
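The parameter savings follow directly from the standard gate formulas. For input dimension m and hidden size d, each gate block holds a (d × m) input weight matrix, a (d × d) recurrent matrix and a bias vector; LSTM has four such blocks (three gates plus the cell candidate), while the classic GRU has three. A quick sketch (note: Keras's default GRU with `reset_after=True` carries one extra bias vector per block, so its actual count is slightly higher than this classic formula):

```python
def lstm_params(m, d):
    # 4 blocks: input gate, forget gate, output gate, cell candidate
    return 4 * (d * m + d * d + d)

def gru_params(m, d):
    # 3 blocks: reset gate, update gate, candidate state (classic GRU)
    return 3 * (d * m + d * d + d)

m, d = 1, 50  # e.g. a univariate time series with 50 hidden units
print(lstm_params(m, d))  # 10400
print(gru_params(m, d))   # 7800
```

So for the same hidden size, GRU carries roughly three quarters of the LSTM's weights, which is where its memory and speed advantage comes from.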
### Q) Is there any difference between LSTM and ML?
Points to remember while performing this project:
1) The individual should have a good grasp of NumPy and pandas.
2) The individual should know how to use Jupyter Notebook.
The CSV file is saved in the local directory.
We will plot the dataframe using the matplotlib module. Before we do that, we need to scale the data.
We will use MinMaxScaler from sklearn.preprocessing to scale the dataframe.
Training data split -> we have kept the training percentage at 65.
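The scaling and 65% split described above can be sketched in plain NumPy; the arithmetic is equivalent to what `sklearn.preprocessing.MinMaxScaler` does with its default [0, 1] range. The synthetic series and the variable names are assumptions for illustration:

```python
import numpy as np

# synthetic stand-in for the price column loaded from the CSV
series = np.linspace(100.0, 200.0, num=1000)

# min-max scaling to [0, 1], as MinMaxScaler would do
lo, hi = series.min(), series.max()
scaled = (series - lo) / (hi - lo)

# 65% of the data for training, the remainder for testing
split = int(len(scaled) * 0.65)
train_data, test_data = scaled[:split], scaled[split:]

print(train_data.shape, test_data.shape)  # (650,) (350,)
```

Keep `lo` and `hi` (or the fitted scaler) around: predictions later need to be inverse-transformed back to the original price scale.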
Before creating our LSTM model, we have to reshape our training and test data (X_train and X_test) into the 3-D shape [samples, timesteps, features] that Keras LSTM layers expect.
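A sketch of that windowing-and-reshape step (the window length of 100 and the variable names are assumptions for illustration):

```python
import numpy as np

def make_windows(data, time_step=100):
    """Slice a 1-D series into (X, y) pairs: `time_step` inputs, 1 target."""
    X, y = [], []
    for i in range(len(data) - time_step):
        X.append(data[i:i + time_step])
        y.append(data[i + time_step])
    return np.array(X), np.array(y)

train_data = np.arange(750, dtype=float)  # stand-in for the scaled series
X_train, y_train = make_windows(train_data, time_step=100)

# reshape to [samples, timesteps, features] for the LSTM layer
X_train = X_train.reshape(X_train.shape[0], X_train.shape[1], 1)
print(X_train.shape)  # (650, 100, 1)
```

The same `make_windows` plus reshape is then applied to the test portion to produce X_test.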
Keras Tuner is an easy-to-use, scalable hyperparameter optimization framework that solves the pain points of hyperparameter search. Easily configure your search space with a define-by-run syntax, then leverage one of the available search algorithms to find the best hyperparameter values for your models. Keras Tuner comes with Bayesian Optimization, Hyperband, and Random Search algorithms built-in, and is also designed to be easy for researchers to extend in order to experiment with new search algorithms.
Sequential groups a linear stack of layers into a tf.keras.Model.
Sequential provides training and inference features on this model.
For the Keras Model documentation, please refer to the links below.
After setting the epochs value and executing the training, we need to prepare the test/train data; do check the highlighted text.
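A sketch of the kind of stacked-LSTM Sequential model described here, compiled and ready for a `fit` call with an epochs value; the layer sizes, window length and epoch count are illustrative assumptions:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(100, 1)),                   # [timesteps, features]
    tf.keras.layers.LSTM(50, return_sequences=True),  # pass sequences onward
    tf.keras.layers.LSTM(50),                         # final LSTM returns a vector
    tf.keras.layers.Dense(1),                         # one-step-ahead prediction
])
model.compile(optimizer="adam", loss="mean_squared_error")

# model.fit(X_train, y_train, validation_data=(X_test, y_test),
#           epochs=100, batch_size=64)
```

`return_sequences=True` is required on every LSTM layer except the last, so each layer receives the full sequence from the one before it.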
Why use the Keras Model API?
The Keras Python library makes creating deep learning models fast and easy.
The sequential API allows the user to create models layer-by-layer for most problems. It is limited in that it does not allow you to create models that share layers or have multiple inputs or outputs.
The functional API in Keras is an alternate way of creating models that offers a lot more flexibility, including creating more complex models.
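As a small contrast between the two APIs, the functional API builds the same stacked-LSTM model by wiring tensors together explicitly, which is what later allows branching into shared layers or multiple inputs/outputs (a sketch; the layer sizes are assumptions):

```python
import tensorflow as tf

# functional API: explicitly connect tensors layer by layer
inputs = tf.keras.Input(shape=(100, 1))
x = tf.keras.layers.LSTM(50, return_sequences=True)(inputs)
x = tf.keras.layers.LSTM(50)(x)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer="adam", loss="mse")
```

For a straight layer stack like this, both APIs produce equivalent models; the functional form simply keeps the graph explicit.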