help required for fixed point conversion
I am modelling an encoder angle decoder control system in Simulink. First I built a model in floating point, and it works fine. When I try to convert it to fixed point, I run into trouble. After examining my model, I discovered that it has a subtractor that takes the set point as one input and the feedback from the plant as the other. The issue is that the subtractor output has a range (as captured by the Fixed-Point Designer tool) from -0.00021679 to 0.01076, which translates to the fixed-point type (1,32,37). Since this is out of range, the simulated result is erratic. How can I correct this error?
Accepted Answer
Hi Gary,
That portion of your model will involve 4 data types.
- Digital measurement of output signal from analog plant
- Set point signal
- Accumulator type used to do the math inside the Subtraction block
- Output of that subtraction block fed to your controller logic
Normally, the fixed-point tool should be able to pick separate data types and scaling for each of those signals.
For the range values and data type for the subtractor output, everything looks pretty good.
errorSignalExample = fi([-0.00021679, 0.01076],1,32,37)
rangeErrorSignalDataType = range(errorSignalExample)
The issues are likely coming from elsewhere in the model.
Try setting the model's diagnostics for Signals with Saturating and Wrapping overflows to Warning, then rerun the simulation. This should help isolate sources of overflow.
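If you prefer to set those diagnostics from a script, something like the following should work. This is only a sketch: the model name is hypothetical, and 'IntegerOverflowMsg' and 'IntegerSaturationMsg' are my assumption for the parameters behind the Diagnostics > Data Validity pane, so check your release's model-parameter documentation.

```matlab
% Sketch: raise the wrap and saturation overflow diagnostics to Warning.
mdl = 'myEncoderModel';                             % hypothetical model name
set_param(mdl, 'IntegerOverflowMsg',   'warning');  % "Wrap on overflow" diagnostic
set_param(mdl, 'IntegerSaturationMsg', 'warning');  % "Saturate on overflow" diagnostic
sim(mdl);                                           % rerun and watch the Diagnostic Viewer
```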
If overflows are occurring, then try rerunning the Fixed-Point Tool workflow but give a bigger Safety Margin before proposing data types. If there are fixed-point types in the model at the start of the workflow, then turn on Data Type Override double when Collecting Ranges.
If overflows are not the issue, turn on signal logging for several key signals in the model. Repeat the fixed-point tool workflow with Data Type Override double on during collect ranges. Then at the end of the workflow click Compare Signals and use Simulation Data Inspector to isolate where the doubles signal traces first start to diverge from the fixed-point traces. This will point you to a place in the model to look more carefully at the math and the data types.
3 Comments
Change the data type of the accumulator to move one bit from the precision end to the range end, thus doubling the range. This will allow +1 to be represented without overflow.
dt = fixdt(1,32,30)
representableRange = range( numerictype(dt) )
Gary
on 30 Mar 2023
The number 395 is how many times that block in the model overflowed during the previous simulation. Suppose the element was a data type conversion block with an int16 input and a uint8 output, and suppose the input at one time step in the simulation was 260. Since 260 exceeds the maximum representable value, 255, of the output data type, an overflow will occur. The block could be configured to handle overflows with saturation, in which case the output would be 255, and that would increase the count of "overflow saturations" for the block by 1. Alternatively, the block could be configured to handle overflows with Modulo 2^Nbits wrapping, in which case the output would be mod(260,2^8) = 4, and that would increase the count of "overflow wraps" for the block by 1.
So 395 overflow wraps means that during the previous instrumented simulation that block had 395 overflow events handled by Modulo 2^Nbits wrapping.
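The two overflow behaviors described above can be reproduced at the command line with fi objects (assuming Fixed-Point Designer is available):

```matlab
% Cast the out-of-range value 260 to an unsigned 8-bit, fraction-length-0 type
ySat  = fi(260, 0, 8, 0, 'OverflowAction', 'Saturate')  % clips to 255
yWrap = fi(260, 0, 8, 0, 'OverflowAction', 'Wrap')      % mod(260, 2^8) = 4
```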
The count of overflows does NOT indicate what new data type is required to avoid overflows. Overflows due to values slightly too big for the output data type count as one overflow event, and overflows due to values 1000X too big for the output data type also count as just one overflow event.
Collecting the simulation minimum and maximum values is what helps pick a type that will avoid overflows. Calling fi with the simulation min and max will show the type that's big enough.
format long g
simulationMinMax = [-13.333333333333334, 12.666666666666666]; % Collected by Fixed-Point Tool or some other way
safetyMarginPercent = 25;
%
% Expanded range to cover
%
expandedMinMax = (1 + safetyMarginPercent/100) .* simulationMinMax
%
% Data type container attributes
% Signedness
% WordLength
%
isSigned = 1; % manually set
%isSigned = any( expandedMinMax < 0 ) % use range to see if negatives are needed
wordLength = 8; % manually set
%
% Automatically determine scaling
% using fi's best precision mode (just don't specify scaling)
%
quantizedExpandedMinMax = fi( expandedMinMax, isSigned, wordLength)
bestPrecisionNumericType = numerictype(quantizedExpandedMinMax)
representableRangeOfBestPrecisionDataType = range(bestPrecisionNumericType)
5 Comments
Andy Bartlett
on 3 Apr 2023
I suggest you do a "system-engineering sanity-check exploration."
First, collect known system-engineering information. For example, the data types used for certain key signals, such as sensors and actuators, are often locked down before the algorithms are finalized. Collect this information and then model the quantization of those signals by dropping in a pair of data type conversion blocks back to back. The first data type conversion quantizes to the data type and scaling that the system engineers have already specified. The second data type conversion converts the signal back to the previous data type, likely double. For the first data type conversion, be sure to set the overflow handling mode and rounding mode correctly. A signal coming from an A2D sensor probably saturates on overflow and rounds to nearest.
A tip for handling angle sensors: several types of angle sensors nicely output the angle in revolutions. For example, each tick from the angle sensor represents 1/4096th of a revolution. Modeling this angle sensor signal as fixdt(0,12,12) in revolutions will be much more accurate and efficient than trying to model it in radians. If you really must convert to radians at some point, it is better to delay that conversion to avoid accumulating the errors of quantizing 2*pi.
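As a sketch of why revolutions are friendlier than radians here: one sensor tick is exactly representable in the revolutions type, while the radians equivalent requires quantizing the irrational factor 2*pi (the 16-bit radians type below is just an illustrative choice):

```matlab
% One tick of a 12-bit angle sensor, in revolutions: exactly representable
oneTickRev = fi(1/4096, 0, 12, 12);        % stored value is exactly 1/4096
% The same tick in radians involves 2*pi, which no binary-point type holds exactly
oneTickRad = 2*pi/4096;
quantErr   = double(fi(oneTickRad, 0, 16, 16)) - oneTickRad  % nonzero residual
```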
Once you've modeled the data types that system engineering has already locked down, run your simulation behavioral tests. This is what I call a sanity-check simulation. If this simulation with only sensors and actuators quantized can't easily pass the system-level behavior tests, then the "design is doomed." The engineering team will need to make a major change such as switching to more expensive sensors, changing the behavioral requirements, or coming up with an algorithm that can better deal with the quantization.
Assuming the quantized sensors and actuators sanity check passed the behavioral tests, now explore quantizing other key signals one at a time. Again drop in a pair of back-to-back data type conversion blocks. First explore different levels of precision quantization to see where the behavior tests begin to fail. As an example, on the output of the first integrator drop in the pair of data type conversion blocks. Set the data type of the first to fixdt(1,64,32), which probably has more than enough range and precision. Now run the simulation tests to determine if behavioral constraints are met. If yes, reduce the number of precision bits by M. So change the data type from fixdt(1,64,32) to fixdt(1, 64-M, 32-M). Then repeat the behavioral simulation tests. Consider using a binary search strategy to determine where the behavior tests just barely pass. For example, suppose the tests pass with a fraction length of 7, but fail with a fraction length of 6. This strongly suggests the final design will need a fraction length of at least 7 for this signal, and probably more when all the other signals are quantized and the quantization errors accumulate.
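The precision-reduction sweep above can be scripted; this is only a sketch, where modelName, convBlockPath, and runBehaviorTests are hypothetical stand-ins for your own model, conversion block, and pass/fail test harness:

```matlab
modelName     = 'myControlModel';                     % hypothetical
convBlockPath = [modelName '/Integrator1/Quantize'];  % hypothetical block path
for m = 0:16
    dt = fixdt(1, 64 - m, 32 - m);                    % strip m precision bits
    set_param(convBlockPath, 'OutDataTypeStr', dt.tostring);
    simOut = sim(modelName);
    if ~runBehaviorTests(simOut)                      % your own pass/fail check
        fprintf('Tests first fail at fraction length %d\n', 32 - m);
        break
    end
end
```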
Next, figure out the range needs for that signal. The quickest path may be to let the Fixed-Point Tool instrumentation collect the ranges of all signals in one shot. But you could also run a set of simulations with K bits removed from the range end: change fixdt(1, 44, 10) to fixdt(1, 44-K, 10) to remove K bits of range. You can also explore dropping the sign bit: fixdt(0, 44-1-K, 10).
Tip: set up scripts to run test experiments and collect data. Data types can be set like so.
dataTypeX = fixdt(1,8,3);
set_param( blockPath, 'OutDataTypeStr', dataTypeX.tostring )
Consider using a binary search strategy to make the experimentation processes faster.
Combining the results of the precision and range experiments will give you a good sense of what the key signals need in terms of precision bits, range bits, and word length. This will allow you to perform a quick "sanity check" on a proposed design and know why it won't work. For example, if the proposed data type for a key signal is fixdt(1,16,3), but your experiments showed that a fraction length of at least 7 was needed, you can predict the design will fail behavioral tests due to insufficient accuracy.
This deeper understanding of the model's behavioral requirements will help you figure out what's wrong with the design process and correct it.
Gary
on 4 Apr 2023
Andy Bartlett
on 7 Apr 2023
>> How do I convert fixdt(1,32,30) to fixdt(1,16,14) without loss of precision?
You can't avoid precision loss. You are dropping 16 bits from the precision end of the variable.
If you use the fastest, leanest conversion, Round to Floor, you will introduce quantization error of 0 to just under 1 bit. In real-world value terms, the absolute quantization error is in the range 0 to Slope, where Slope = 2^-14.
You can cut the quantization error in half and balance it around zero if you use Nearest rounding. With Nearest, the absolute quantization error will be 0 to 1/2 bit, or in real-world values 0 to 0.5*Slope = 2^-15. But keep in mind that round to nearest can overflow for values close to +1. If you turn on Saturation in the cast, the quantization error will still always be less than or equal to half a bit. But if you allow the overflow cases to wrap modulo 2^N, then the quantization error will be huge for the overflowing cases.
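The trade-off can be checked directly with fi objects; a small sketch with an arbitrary sample value:

```matlab
a = fi(0.7654321, 1, 32, 30);                           % source type fixdt(1,32,30)
bFloor   = fi(a, 1, 16, 14, 'RoundingMethod', 'Floor');   % drops 16 precision bits
bNearest = fi(a, 1, 16, 14, 'RoundingMethod', 'Nearest');
errFloor   = double(a) - double(bFloor)     % always in [0, 2^-14)
errNearest = double(a) - double(bNearest)   % in [-2^-15, 2^-15]
```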