Hi Manny,
Getting different predictions from an LSTM model on the same input data across runs can be caused by several factors:
- Neural networks typically start with randomly initialized weights. Different initializations can lead to different local minima during training, resulting in different predictions.
- If you're using stochastic gradient descent (SGD) or any of its variants (like Adam), the training process involves randomness (e.g., random shuffling of data, mini-batch selection).
- Some operations in deep learning libraries are non-deterministic depending on the environment (for example, certain GPU computations), which can lead to varied results.
In short, machine learning training is non-deterministic by default, largely because of sources of randomness such as the weight initialization. If you want reproducible results, you have to eliminate or control that randomness.
To achieve consistent results, set a random seed so that the weight initialization and other stochastic processes are the same across runs. In MATLAB, you can use the 'rng' function to set the seed for reproducibility. Note that even with a fixed seed, some non-deterministic operations may still cause slight variations. Also, once you have trained a model that gives satisfactory results, save the trained network and load it for future predictions instead of retraining, as sketched below.
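Here is a minimal sketch of that workflow, assuming the 'trainNetwork' workflow from Deep Learning Toolbox ('XTrain', 'YTrain', 'XTest', 'layers', and 'options' are placeholders for your own data and network setup):

    % Fix the CPU random number generator so weight initialization,
    % data shuffling, and mini-batch selection are repeatable.
    rng(42); % any fixed seed works; 42 is arbitrary

    % If you train on a GPU, also seed the GPU generator
    % (requires Parallel Computing Toolbox).
    % gpurng(42);

    % Train as usual.
    net = trainNetwork(XTrain, YTrain, layers, options);

    % Save the trained network so later predictions reuse the
    % exact same weights instead of retraining.
    save('trainedLSTM.mat', 'net');

    % Later, or in a new MATLAB session:
    loaded = load('trainedLSTM.mat');
    YPred = predict(loaded.net, XTest);

With the seed fixed, repeated training runs should produce the same network, and reusing the saved 'net' guarantees identical predictions for the same input.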
For more information, refer to the following documentation pages: