Poor training quality of a NARXNET model

In the course of my research I have run into a problem I cannot solve: poor training quality of an artificial neural network model of the NARX type.
Such networks successfully reproduce the dynamics of simulation models given as transfer functions. But when the network is built on production data, it either fails to reproduce the data at all or reproduces it with a large mean error. The data are time series of the product temperature at the inlet and at the outlet of the apparatus. The selected parameters are functionally related, but with a delay.
Yesterday I noticed that a similar result is obtained on the simulation model when the transfer-function blocks are connected in series.
1. For my time series, I disabled the division of the data into training, validation, and test sets during training;
2. The number of layers and the input and output delays were chosen approximately;
3. The training procedure is a loop that runs until the MSE of the network on the test data reaches a specified eps. In each iteration, a NARX network object with random initial weights is generated. The open-loop network is trained, then the network is closed, and a second training pass is performed on the closed-loop network for a fixed number of epochs (200), without re-initializing the weights and on the same training data. The MSE of the resulting network is used as the indicator of network quality (a minimal sketch of this loop is shown after the list);
4. All other parameters of the NARX network are left at their defaults.
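Roughly, the training loop looks like the sketch below (u and y are placeholder names for the inlet and outlet temperature series, and the eps threshold value is only an example):
% u, y: placeholder row vectors with the inlet and outlet temperature series
X = tonndata(u, false, false);               % convert to cell-array time series
T = tonndata(y, false, false);
mseTarget = 1e-3;                            % assumed eps threshold
mseClosed = Inf;
while mseClosed > mseTarget
    net = narxnet(1:2, 1:2, 10);             % new object, random initial weights
    net.divideFcn = '';                      % data division disabled (item 1)
    [Xs, Xi, Ai, Ts] = preparets(net, X, {}, T);
    net = train(net, Xs, Ts, Xi, Ai);        % open-loop training
    netc = closeloop(net);                   % keep the trained weights
    netc.trainParam.epochs = 200;            % fixed 200 epochs, no re-init
    [Xc, Xci, Aci, Tc] = preparets(netc, X, {}, T);
    netc = train(netc, Xc, Tc, Xci, Aci);    % second, closed-loop training
    Yc = netc(Xc, Xci, Aci);
    mseClosed = perform(netc, Tc, Yc);       % MSE used as the quality indicator
end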
Could you please help me?

Answers (1)

Harsh on 25 Jun 2025
To address the poor training performance of your NARX neural network on real production data (compared to simulation data), it's important to follow best practices for time series modeling in MATLAB. Here are three key areas to focus on:
1. Proper Data Preparation
Ensure your input and target time series are properly aligned using "preparets", especially when working with delays. Also, avoid disabling data division: splitting into training, validation, and test sets is crucial for evaluating generalization.
Here's the documentation for "preparets" - https://www.mathworks.com/help/deeplearning/ref/preparets.html
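A minimal sketch of this step, assuming the inlet and outlet temperature series are stored in row vectors u and y (placeholder names), and using block-wise division so the split respects the time order of the series:
% u, y: placeholder row vectors with the inlet and outlet temperature series
X = tonndata(u, false, false);               % convert to cell-array time series
T = tonndata(y, false, false);
net = narxnet(1:2, 1:2, 10);                 % example delays and hidden size
net.divideFcn = 'divideblock';               % contiguous train/val/test blocks
net.divideParam.trainRatio = 0.70;
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;
[Xs, Xi, Ai, Ts] = preparets(net, X, {}, T); % align the series and fill delays
net = train(net, Xs, Ts, Xi, Ai);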
2. Systematic NARX Design
Instead of selecting delays and network architecture approximately, follow MathWorks' recommended design strategy for NARX networks to choose appropriate input/output delays and hidden layer sizes based on your problem.
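One way to make this choice systematic is a small grid search over candidate delay ranges and hidden-layer sizes, scoring each candidate by its closed-loop MSE. The sketch below is illustrative only; the candidate values are not recommendations, and u and y are again placeholder names for the temperature series.
% u, y: placeholder inlet/outlet temperature series (row vectors)
X = tonndata(u, false, false);
T = tonndata(y, false, false);
delayCandidates  = {1:2, 1:4, 1:8};          % illustrative delay ranges
hiddenCandidates = [5 10 20];                % illustrative hidden-layer sizes
best = struct('mse', Inf, 'delays', [], 'hidden', []);
for d = 1:numel(delayCandidates)
    for h = hiddenCandidates
        net = narxnet(delayCandidates{d}, delayCandidates{d}, h);
        net.divideFcn = 'divideblock';
        [Xs, Xi, Ai, Ts] = preparets(net, X, {}, T);
        net = train(net, Xs, Ts, Xi, Ai);    % open-loop training
        netc = closeloop(net);               % score in closed loop, since that
        [Xc, Xci, Aci, Tc] = preparets(netc, X, {}, T);   % is how it will run
        Yc = netc(Xc, Xci, Aci);
        m  = perform(netc, Tc, Yc);
        if m < best.mse
            best = struct('mse', m, 'delays', delayCandidates{d}, 'hidden', h);
        end
    end
end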
3. Two-Stage Training and Evaluation
Train your network first in open-loop mode using "narxnet", then convert it to closed-loop using "closeloop" for recursive prediction. Re-train the closed-loop network if needed. Use built-in performance plots like "plotperform", "plotresponse", and "ploterrcorr" to assess and debug model quality.
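Put together, the two-stage workflow with the diagnostic plots might look roughly as follows (again with placeholder u and y for the temperature series):
% u, y: placeholder row vectors of inlet/outlet temperatures
X = tonndata(u, false, false);
T = tonndata(y, false, false);
net = narxnet(1:2, 1:2, 10);
[Xs, Xi, Ai, Ts] = preparets(net, X, {}, T);
[net, tr] = train(net, Xs, Ts, Xi, Ai);      % stage 1: open-loop training
Y = net(Xs, Xi, Ai);
E = gsubtract(Ts, Y);
netc = closeloop(net);                       % stage 2: closed-loop conversion
[Xc, Xci, Aci, Tc] = preparets(netc, X, {}, T);
netc = train(netc, Xc, Tc, Xci, Aci);        % optional closed-loop retraining
Yc = netc(Xc, Xci, Aci);
figure, plotperform(tr)                      % training/validation/test curves
figure, plotresponse(Ts, Y)                  % open-loop fit to the targets
figure, ploterrcorr(E)                       % autocorrelation of the errors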
