help required for fixed point conversion

I am modelling an encoder angle decoder control system in Simulink. First I built the model in floating point, and it works fine. When I try to convert it to fixed point, I run into trouble. After examining my model, I discovered that it has a subtractor that takes the set point as one input and the feedback from the plant as the other. The issue is that the subtractor output has a range (as captured by the Fixed-Point Designer tool) from -0.00021679 to 0.01076, which translated to the fixed-point type fixdt(1,32,37). Since this is out of range, the simulated result is erratic. How can I correct this error?

 Accepted Answer

Mapping an A2D to a fixed-point data type
One way to map an A2D converter to a fixed-point data type is to use two real-world-value and stored-integer pairs.
You then solve a pair of affine equations
realWorldValue1 = Slope * storedIntegerValue1 + Bias
realWorldValue2 = Slope * storedIntegerValue2 + Bias
and enter those in the data type
fixdt( isSigned, WordLength, Slope, Bias)
realWorldValue1 = 10; % Volts
storedIntegerValue1 = 32767;
realWorldValue2 = -10; % Volts
storedIntegerValue2 = -32767; % Is this the correct value?
% Solve for data types Slope and Bias
% V = Slope * Q + Bias
%
% form as a Matrix Equation
% Vvec = [ Qvec ones ] * [Slope; Bias]
% Vvec = QOMat * slopeBiasVector
% then use backslash
% slopeBiasVector = QOMat \ Vvec
Vvec = [realWorldValue1; realWorldValue2];
SIvec = [storedIntegerValue1; storedIntegerValue2]
SIvec = 2×1
32767 -32767
QOMat = [SIvec ones(numel(SIvec),1)];
slopeBiasVector = QOMat \ Vvec;
Slope = slopeBiasVector(1)
Slope = 3.0519e-04
Bias = slopeBiasVector(2)
Bias = 0
isSigned = any( SIvec < 0 )
isSigned = logical
1
wordLength = double(isSigned) + max( ceil( log2( abs( double(SIvec) ) + 1 ) ) ) % sign bit + magnitude bits
wordLength = 16
a2dNumericType = numerictype( isSigned, wordLength, Slope, Bias)
a2dNumericType = DataTypeMode: Fixed-point: slope and bias scaling Signedness: Signed WordLength: 16 Slope: 0.00030518509475997192 Bias: 0
% Sanity check
checkRealWorldValue1 = Slope * storedIntegerValue1 + Bias
checkRealWorldValue1 = 10
checkRealWorldValue2 = Slope * storedIntegerValue2 + Bias
checkRealWorldValue2 = -10
err1 = checkRealWorldValue1 - realWorldValue1
err1 = 0
err2 = checkRealWorldValue2 - realWorldValue2
err2 = 0

More Answers (2)

Hi Gary,
That portion of your model will involve 4 data types.
  1. Digital measurement of output signal from analog plant
  2. Set point signal
  3. Accumulator type used to do the math inside the Subtraction block
  4. Output of that subtraction block fed to your controller logic
Normally, the fixed-point tool should be able to pick separate data types and scaling for each of those signals.
For the range values and data type for the subtractor output, everything looks pretty good.
errorSignalExample = fi([-0.00021679, 0.01076],1,32,37)
errorSignalExample =
-0.0002 0.0108 DataTypeMode: Fixed-point: binary point scaling Signedness: Signed WordLength: 32 FractionLength: 37
rangeErrorSignalDataType = range(errorSignalExample)
rangeErrorSignalDataType =
-0.0156 0.0156 DataTypeMode: Fixed-point: binary point scaling Signedness: Signed WordLength: 32 FractionLength: 37
The issues are likely coming from elsewhere in the model.
Try setting the model's diagnostics for Signals with Saturating and Wrapping overflows to Warning, then rerun the simulation. This should help isolate sources of overflow.
If overflows are occurring, then try rerunning the Fixed-Point Tool workflow but give a bigger Safety Margin before proposing data types. If there are fixed-point types in the model at the start of the workflow, then turn on Data Type Override double when Collecting Ranges.
If overflows are not the issue, turn on signal logging for several key signals in the model. Repeat the fixed-point tool workflow with Data Type Override double on during collect ranges. Then at the end of the workflow click Compare Signals and use Simulation Data Inspector to isolate where the doubles signal traces first start to diverge from the fixed-point traces. This will point you to a place in the model to look more carefully at the math and the data types.
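The overflow diagnostics mentioned above can also be set programmatically. A sketch, assuming a model named 'myModel'; 'IntegerOverflowMsg' and 'IntegerSaturationMsg' are the Data Validity diagnostic parameter names for wrap and saturation overflows (verify against your release's documentation):

```matlab
% Set the model's overflow diagnostics to Warning, then simulate.
mdl = 'myModel';                 % placeholder model name
load_system(mdl);
set_param(mdl, 'IntegerOverflowMsg',   'warning');  % Wrap on overflow
set_param(mdl, 'IntegerSaturationMsg', 'warning');  % Saturate on overflow
sim(mdl);  % warnings now flag each block that overflows
```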

3 Comments

Gary on 30 Mar 2023
Edited: Gary on 30 Mar 2023
Yes, you are right. I am getting overflows from some other element of the model. For instance, my simulation says that sum1:accumulator with fixdt(1,32,31) has 395 overflow wraps.
It has SimMin: -0.99999027...
SimMax: 0.999993050098
Proposed Min: -1
Proposed Max: 0.999999999....
Now please let me know how to remove this error.
Change the data type of the accumulator to move one bit from the precision end to the range end, thus doubling the range. This will allow +1 to be represented without overflow.
dt = fixdt(1,32,30)
dt =
NumericType with properties: DataTypeMode: 'Fixed-point: binary point scaling' Signedness: 'Signed' WordLength: 32 FractionLength: 30 IsAlias: 0 DataScope: 'Auto' HeaderFile: '' Description: ''
representableRange = range( numerictype(dt) )
representableRange =
-2.0000 2.0000 DataTypeMode: Fixed-point: binary point scaling Signedness: Signed WordLength: 32 FractionLength: 30
What does 395 overflow wraps mean, and how do I relate that number to the data type required? Elsewhere I see around 9000 wraps.


The number 395 is how many times that block in the model overflowed during the previous simulation. Suppose the element was a data type conversion block with input int16 and output uint8, and suppose the input at one time step in the simulation was 260. Since 260 exceeds the maximum representable value, 255, of the output data type, an overflow will occur. The block could be configured to handle overflows with saturation, in which case the output would be 255, and that would increase the count of "overflow saturations" for the block by 1. Alternatively, the block could be configured to handle overflows with modulo 2^Nbits wrapping, in which case the output would be mod(260,2^8) = 4, and that would increase the count of "overflow wraps" for the block by 1.
So 395 overflow wraps means that during the previous instrumented simulation that block had 395 overflow events handled by Modulo 2^Nbits wrapping.
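The 260-to-uint8 example above can be reproduced at the command line with fi objects. A sketch; the 'OverflowAction' property selects wrap versus saturate handling (in much older releases this property was named 'OverflowMode'):

```matlab
% Casting 260 into an 8-bit unsigned type under each overflow handling mode.
wrapped   = fi(260, 0, 8, 0, 'OverflowAction', 'Wrap')      % mod(260, 2^8) = 4
saturated = fi(260, 0, 8, 0, 'OverflowAction', 'Saturate')  % clips to 255
```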
The count of overflows does NOT indicate what new data type is required to avoid overflows. An overflow caused by a value slightly too big for the output data type counts as one overflow event, and an overflow caused by a value 1000X too big also counts as just one overflow event.
Collecting the simulation minimum and maximum values is what helps pick a type that will avoid overflows. Calling fi with the simulation min and max will show a type that's big enough.
format long g
simulationMinMax = [-13.333333333333334, 12.666666666666666]; % Collected by Fixed-Point Tool or some other way
safetyMarginPercent = 25;
%
% Expanded range to cover
%
expandedMinMax = (1 + safetyMarginPercent/100) .* simulationMinMax
expandedMinMax = 1×2
-16.6666666666667 15.8333333333333
%
% Data type container attributes
% Signedness
% WordLength
%
isSigned = 1; % manually set
%isSigned = any( expandedMinMax < 0 ) % use range to see if negatives are needed
wordLength = 8; % manually set
%
% Automatically determine scaling
% using fi's best precision mode (just don't specify scaling)
%
quantizedExpandedMinMax = fi( expandedMinMax, isSigned, wordLength)
quantizedExpandedMinMax =
-16.75 15.75 DataTypeMode: Fixed-point: binary point scaling Signedness: Signed WordLength: 8 FractionLength: 2
bestPrecisionNumericType = numerictype(quantizedExpandedMinMax)
bestPrecisionNumericType = DataTypeMode: Fixed-point: binary point scaling Signedness: Signed WordLength: 8 FractionLength: 2
representableRangeOfBestPrecisionDataType = range(bestPrecisionNumericType)
representableRangeOfBestPrecisionDataType =
-32 31.75 DataTypeMode: Fixed-point: binary point scaling Signedness: Signed WordLength: 8 FractionLength: 2

5 Comments

Gary on 3 Apr 2023
Edited: Gary on 3 Apr 2023
I am faced with a peculiar problem. My model has a velocity and a position integrator. When simulating in tracking mode, the input angle varies from 0 to 2*pi, and the position integrator tracks it without any error. After converting this model to fixed point, it still runs fine in tracking mode. However, for a static position (say 45 deg), I see erratic results. I figured out that the output of the position integrator varies from -pi to +pi corresponding to the input 0 to 2*pi; hence the position integrator outputs negative values, which was not accounted for in tracking mode. Now I am perplexed as to how to account for both the static and tracking modes of operation and optimize the fixed-point word length and fraction length.
I suggest you do a "system-engineering sanity-check exploration."
First, collect known system-engineering information. For example, the data types used for certain key signals, such as sensors and actuators, are often locked down before the algorithms are finalized. Collect this information and then model the quantization of those signals by dropping in a pair of data type conversion blocks back to back. The first data type conversion block quantizes to the data type and scaling that the system engineers have already specified. The second converts the signal back to the previous data type, likely double. For the first conversion block, be sure to set the overflow handling mode and rounding mode correctly. A signal coming from an A2D sensor probably saturates on overflow and rounds to nearest.
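The effect of that back-to-back conversion pair can be prototyped at the command line before touching the model. A minimal sketch; the (1,16,11) type and the input value are placeholders, not values from this thread:

```matlab
% What a quantize-then-return-to-double block pair does to a sensor signal.
v  = 1.2345678;                                       % ideal double signal
q  = fi(v, 1, 16, 11, 'RoundingMethod', 'Nearest', ...
                      'OverflowAction', 'Saturate');  % locked-down sensor type
vq = double(q);                                       % back to double, now quantized
quantError = v - vq                                   % magnitude <= 2^-12 with Nearest
```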
A tip for handling angle sensors: several types of angle sensors conveniently output the angle in revolutions. For example, each tick from the angle sensor represents 1/4096th of a revolution. Modeling this angle sensor signal as fixdt(0,12,12) in revolutions will be much more accurate and efficient than trying to model it in radians. If you really must convert to radians at some point, it is better to delay that conversion to avoid accumulating the errors of quantizing 2*pi.
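To illustrate with the hypothetical 4096-tick sensor, fixdt(0,12,12) covers [0, 1) revolutions with one tick of precision:

```matlab
% One tick = 1/4096 revolution; unsigned, 12 bits, fraction length 12.
angleRev = fi(0.125, 0, 12, 12)      % 45 degrees = 0.125 rev, represented exactly
angleRad = double(angleRev) * 2*pi;  % convert to radians late, in floating point
```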
Once you've modeled the data types that system engineering has already locked down, run your simulation behavioral tests. This is what I call a sanity-check simulation. If this simulation with only sensors and actuators quantized can't easily pass the system-level behavior tests, then the "design is doomed." The engineering team will need to make a major change, such as switching to more expensive sensors, changing the behavioral requirements, or coming up with an algorithm that can better deal with the quantization.
Assuming the sanity check with quantized sensors and actuators passed the behavioral tests, now explore quantizing other key signals one at a time. Again, drop in a pair of back-to-back data type conversion blocks. First explore different levels of precision quantization to see where the behavior tests begin to fail. As an example, on the output of the first integrator drop in the pair of data type conversion blocks. Set the data type of the first to fixdt(1,64,32), which probably has more than enough range and precision. Now run the simulation tests to determine if behavioral constraints are met. If yes, reduce the number of precision bits by m bits: change the data type from fixdt(1,64,32) to fixdt(1, 64-m, 32-m). Then repeat the behavioral simulation tests. Consider using a binary search strategy to determine where the behavior tests just barely pass. For example, suppose the tests pass with a fraction length of 7 but fail with a fraction length of 6. This strongly suggests the final design will need a fraction length of at least 7 for this signal, and probably more once all the other signals are quantized and the quantization errors accumulate.
Next, figure out the range needs for that signal. The quickest path may be to allow the Fixed-Point Tool instrumentation to collect the ranges of all signals in one shot. But you could also run a set of simulations with k bits removed from the range end: change fixdt(1, 44, 10) to fixdt(1, 44-k, 10) to remove k bits of range. You can also explore dropping the sign bit: fixdt(0, 44-1-k, 10).
Tip: set up scripts to run test experiments and collect data. Data types can be set like so.
dataTypeX = fixdt(1,8,3);
set_param( blockPath, 'OutDataTypeStr', dataTypeX.tostring )
Consider using a binary search strategy to make the experimentation processes faster.
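Putting the scripting tip and the binary search strategy together, an experiment loop might look like the sketch below. runBehaviorTest and blockPath are assumed placeholders for your own test harness and conversion block, not names from this thread:

```matlab
% Binary search for the smallest fraction length that still passes the
% behavioral tests, driving a data type conversion block's output type.
blockPath = 'myModel/Convert1';  % placeholder block path
lo = 0;  hi = 32;                % fraction lengths to search between
while lo < hi
    f  = floor((lo + hi)/2);
    dt = fixdt(1, 64, f);        % generous word length, f precision bits
    set_param(blockPath, 'OutDataTypeStr', dt.tostring);
    if runBehaviorTest()         % assumed: returns true when tests pass
        hi = f;                  % passed: try fewer precision bits
    else
        lo = f + 1;              % failed: need more precision bits
    end
end
minFractionLength = lo           % smallest fraction length that passed
```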
Combining the results of the precision and range experiments will give you a good sense of what the key signals need in terms of precision bits, range bits, and word length. This will allow you to perform a quick "sanity check" on a proposed design and know why it won't work. For example, if the proposed data type for a key signal is fixdt(1,16,3), but your experiments showed that a fraction length of at least 7 was needed, you can predict the design will fail behavioral tests due to insufficient accuracy.
This deeper understanding of the model's behavioral requirements will help you figure out what's wrong with the design process and correct it.
That was an elaborate answer, and I could resolve all the errors based on your suggestions. Thank you. Only one specific detail is required: my model uses a bipolar ADC which outputs 0 to 32767 for 0 to 10 V, and two's-complement values for negative voltages 0 to -10 V. Now how do I define max/min ranges for the model inputs?
Gary on 7 Apr 2023
Edited: Gary on 7 Apr 2023
Just revisiting. My model functions well with 32-bit fixed point. However, I need 16 bits at the output. How do I convert fixdt(1,32,30) to fixdt(1,16,14) without loss of precision?
>> How do I convert fixdt(1,32,30) to fixdt(1,16,14) without loss of precision?
You can't avoid precision loss. You are dropping 16 bits from the precision end of the variable.
If you use the fastest, leanest conversion, Round to Floor, you will introduce quantization error of 0 to just under 1 bit. In real-world-value terms, the absolute quantization error is in the range 0 to Slope, where Slope = 2^-14.
You can cut the quantization error in half and balance it around zero if you use Nearest rounding. With Nearest, the absolute quantization error will be 0 to 1/2 bit, or in real-world values 0 to 0.5*Slope = 2^-15. But keep in mind that round to nearest can overflow for values close to +1. If you turn on Saturation in the cast, the quantization error will still always be less than or equal to half a bit. But if you allow the overflow cases to wrap modulo 2^N, then the quantization error will be huge for the overflowing cases.
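The Floor-versus-Nearest trade-off can be checked directly with fi. A sketch; the input value is arbitrary:

```matlab
% Quantization error of casting fixdt(1,32,30) down to fixdt(1,16,14).
x32      = fi(0.123456789, 1, 32, 30);
xFloor   = fi(x32, 1, 16, 14, 'RoundingMethod', 'Floor');
xNearest = fi(x32, 1, 16, 14, 'RoundingMethod', 'Nearest');
errFloor   = double(x32) - double(xFloor)    % in [0, 2^-14)
errNearest = double(x32) - double(xNearest)  % in [-2^-15, 2^-15]
```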



Release

R2015b
