I have a setup where I want to evaluate the latency when sending data from one Simulink model to another, where the second model may run either in a different instance on the same computer or on a remote machine. What I want to achieve is evaluating the Round Trip Time (RTT), i.e. the time between generating the data, encoding and sending it to the second simulation, and receiving and decoding the data coming back (see the attached image for the general idea). The communication could, for example, be based on TCP sockets, but the RTT measurement should be independent of the underlying transport. It is important, though, to measure the time every "data packet" takes to go around, so that information about packet jitter / the RTT standard deviation can be gathered.
My general idea was to generate time stamps with a C++ MEX S-Function using the following two methods:
- the "chrono" library and "high_resolution_clock::now()" to create a time stamp
- the "fstream" library to write the time stamp to a text file (e.g. .csv or .txt) via an ofstream
When placing two such MEX S-Function blocks in the model (one before sending and one after receiving the data), the RTT could be evaluated by taking the difference of the logged time stamps.
My questions are:
- Does this approach seem valid, or is there a better way to get the most accurate time measurement possible?
- The data should be sent at different sample rates. What is the best way to "trigger" the time stamp functions accordingly, i.e. every time a new data packet is sent / received?
- The data can vary in size, e.g. vectors or matrices with up to several hundred elements. Is there an efficient way to do the proposed time stamping with MEX S-Functions without having to pass all the data through the block? I could imagine that to be computationally inefficient. The pseudo code in this case would basically be: timestamp = high_resolution_clock::now(); write timestamp via ofstream; y = u;
Any help / input would be appreciated!