LSTM in MATLAB compared to TensorFlow in Python for Timeseries Predictions

I wish to understand the differences between the TensorFlow Keras implementation of an LSTM and MATLAB's.
First of all, Keras requires a "lookback window" for making predictions, whereas MATLAB's implementation has no such feature. Does MATLAB's LSTM make predictions based on a fixed-size time window and, if so, where can I adjust its length? If not, how does the LSTM in MATLAB work in this regard? For context, Keras requires users to restructure the dataset from a time-series problem into a supervised-learning problem by building a 3D array from the usual 2D matrices of predictors and targets: the series is sliced into windows of length lookback, and these slices are stacked into the 3D structure fed to the LSTM. Intuitively speaking, the original matrix of size (number of observations) x (number of predictors or targets) becomes an array of size (number of observations / window) x (window) x (number of predictors or targets).
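To make the comparison concrete, here is a minimal sketch of how I imagine the equivalent restructuring could look in MATLAB. The variable names X, Y and window are my own placeholders rather than anything from the toolbox; each cell would hold one numFeatures-by-window sequence, which is the per-observation layout trainNetwork accepts for sequence data:

window = 20;                              % hypothetical lookback length
numWindows = floor(size(X,1) / window);   % X is (observations) x (predictors)

XWindows = cell(numWindows, 1);
YWindows = cell(numWindows, 1);
for k = 1:numWindows
    idx = (k-1)*window + (1:window);
    XWindows{k} = X(idx, :)';             % numFeatures x window slice
    YWindows{k} = Y(idx, :)';             % numResponses x window slice
end

(I am not sure whether this kind of manual windowing is needed in MATLAB at all, which is essentially my first question.)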
Secondly, I am struggling to understand the number of hidden units (NumHiddenUnits) in MATLAB's lstmLayer and what increasing or decreasing this number actually does.
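For context, this is a minimal sketch of the kind of layer array I have in mind; all values are placeholders I chose for illustration:

numFeatures = 3;        % predictors per time step
numResponses = 1;       % targets per time step
numHiddenUnits = 128;   % the parameter I would like to understand

layers = [
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits, 'OutputMode', 'sequence')
    fullyConnectedLayer(numResponses)
    regressionLayer];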
Finally, where is the cell state of the LSTM exposed in MATLAB's implementation, and how can I inspect or adjust it?
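My guess is that the state lives in the trained network and is handled with resetState and predictAndUpdateState, roughly like the sketch below (assuming net was returned by trainNetwork and the LSTM layer sits in position 2), but I am not certain this is the intended way to work with the cell state:

net = resetState(net);                                    % clear hidden and cell state
[net, YPred] = predictAndUpdateState(net, XWindows{1});   % state is carried forward between calls
cellState = net.Layers(2).CellState;                      % inspect the current cell state (if this is the right property)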
Kind regards,
Angelo

Answers (0)

Release

R2021b
