
fixed.realQlessQRMatrixSolveFixedpointTypes

Determine fixed-point types for matrix solution of real-valued A'AX=B using QR decomposition

Since R2021b

Description

T = fixed.realQlessQRMatrixSolveFixedpointTypes(m,n,max_abs_A,max_abs_B,precisionBits) computes fixed-point types for the matrix solution of real-valued A'AX=B using QR decomposition. T is returned as a structure with fields that specify fixed-point types for A and B that guarantee no overflow will occur in the QR algorithm transforming A in-place into upper-triangular R, where QR=A is the QR decomposition of A, and for X such that there is a low probability of overflow.


T = fixed.realQlessQRMatrixSolveFixedpointTypes(___,noiseStandardDeviation) specifies the standard deviation of the additive random noise in A. noiseStandardDeviation is an optional parameter. If not supplied or empty, then the default value is used.


T = fixed.realQlessQRMatrixSolveFixedpointTypes(___,p_s) specifies the probability that the estimate of the lower bound for the smallest singular value of A is larger than the actual smallest singular value of the matrix. p_s is an optional parameter. If not supplied or empty, then the default value is used.


T = fixed.realQlessQRMatrixSolveFixedpointTypes(___,regularizationParameter) computes fixed-point types for the matrix solution of real-valued

[λIn; A]'[λIn; A]X = (λ²In + A'A)X = B

where λ is the regularizationParameter, A is an m-by-n matrix, and In = eye(n). regularizationParameter is an optional parameter. If not supplied or empty, then the default value is used.


T = fixed.realQlessQRMatrixSolveFixedpointTypes(___,maxWordLength) specifies the maximum word length of the fixed-point types. maxWordLength is an optional parameter. If not supplied or empty, then the default value is used.
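All of the optional arguments can also be supplied positionally, and an empty matrix [] can be passed for any optional argument to use its default, as in the regularization example later on this page. The values below are illustrative placeholders:

m = 300; n = 10;
max_abs_A = 1; max_abs_B = 1;
precisionBits = 24;
noiseStandardDeviation = sqrt(10^(-50/10));  % additive noise level
p_s = [];                                    % use the default probability
regularizationParameter = 0.01;
maxWordLength = 128;
T = fixed.realQlessQRMatrixSolveFixedpointTypes(m,n,max_abs_A,max_abs_B,...
    precisionBits,noiseStandardDeviation,p_s,regularizationParameter,maxWordLength);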

Examples


This example shows the algorithms that the fixed.realQlessQRMatrixSolveFixedpointTypes function uses to analytically determine fixed-point types for the solution of the real matrix equation A'AX=B, where A is an m-by-n matrix with m>n, B is n-by-p, and X is n-by-p.

Overview

You can solve the fixed-point matrix equation A'AX=B using QR decomposition. Using a sequence of orthogonal transformations, QR decomposition transforms matrix A in-place to upper triangular R, where QR=A is the economy-size QR decomposition. This reduces the equation to an upper-triangular system of equations R'RX=B. To solve for X, compute X=R\(R'\B) through forward- and backward-substitution of R into B.
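For reference, a double-precision sketch of this Q-less QR solve, assuming A and B are floating-point matrices of compatible sizes, is:

% Q-less QR solve of A'AX = B in double precision.
% A'A = R'R, so A'AX = B reduces to R'RX = B; Q itself is never formed.
[~,R] = qr(A,0);   % economy-size QR; R is the n-by-n upper-triangular factor
X = R\(R'\B);      % forward substitution with R', then back substitution with R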

You can determine appropriate fixed-point types for the matrix equation A'AX=B by selecting the fraction length based on the number of bits of precision defined by your requirements. The fixed.realQlessQRMatrixSolveFixedpointTypes function analytically computes the following upper bounds on R and X to determine the number of integer bits required to avoid overflow [1,2,3].

The upper bound for the magnitude of the elements of R=Q'A is

max(|R(:)|) ≤ √m · max(|A(:)|).

The upper bound for the magnitude of the elements of X=(A'A)\B is

max(|X(:)|) ≤ √n · max(|B(:)|) / min(svd(A))².

Since computing svd(A) is more computationally expensive than solving the system of equations, the fixed.realQlessQRMatrixSolveFixedpointTypes function estimates a lower bound of min(svd(A)).

Fixed-point types for the solution of the matrix equation (A'A)X=B are generally well-bounded if the number of rows, m, of A is much greater than the number of columns, n (that is, m ≫ n), and A is full rank. If A is not inherently full rank, then it can be made so by adding random noise. Random noise naturally occurs in physical systems, such as thermal noise in radar or communications systems. If m=n, then the dynamic range of the system can be unbounded. For example, in the scalar equation x=a²/b with a,b ∈ [−1,1], x can be arbitrarily large if b is close to 0.

Proofs of the Bounds

Properties and Definitions of Vector and Matrix Norms

The proofs of the bounds use the following properties and definitions of matrix and vector norms, where Q is an orthogonal matrix, and v is a vector of length m [6].

||Av||₂ ≤ ||A||₂||v||₂
||Q||₂ = 1
||v||∞ = max(|v(:)|)
||v||∞ ≤ ||v||₂ ≤ √m ||v||∞

If A is an m-by-n matrix and QR=A is the economy-size QR decomposition of A, where Q is orthogonal and m-by-n and R is upper-triangular and n-by-n, then the singular values of R are equal to the singular values of A. If A is nonsingular, then

||R⁻¹||₂ = ||(R')⁻¹||₂ = 1/min(svd(R)) = 1/min(svd(A))
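You can check these identities numerically in double precision; the sketch below uses a small random full-rank matrix for illustration:

% Singular values of the economy-size triangular factor R equal those of A,
% and the 2-norm of inv(R) equals 1/min(svd(A)).
A = randn(20,4);
[~,R] = qr(A,0);
norm(svd(R) - svd(A))               % close to zero
abs(norm(inv(R)) - 1/min(svd(A)))   % close to zero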

Upper Bound for R = Q'A

The upper bound for the magnitude of the elements of R is

max(|R(:)|) ≤ √m · max(|A(:)|).

Proof of Upper Bound for R = Q'A

The jth column of R is equal to R(:,j)=Q'A(:,j), so

max(|R(:,j)|) = ||R(:,j)||∞
             ≤ ||R(:,j)||₂
             = ||Q'A(:,j)||₂
             ≤ ||Q||₂||A(:,j)||₂
             = ||A(:,j)||₂
             ≤ √m ||A(:,j)||∞
             = √m max(|A(:,j)|)
             ≤ √m max(|A(:)|).

Since max(|R(:,j)|) ≤ √m max(|A(:)|) for all 1 ≤ j ≤ n, then

max(|R(:)|) ≤ √m · max(|A(:)|).

Upper Bound for X = (A'A)\B

The upper bound for the magnitude of the elements of X=(A'A)\B is

max(|X(:)|) ≤ √n · max(|B(:)|) / min(svd(A))².

Proof of Upper Bound for X = (A'A)\B

If A is not full rank, then min(svd(A)) = 0, and if B is not equal to zero, then √n max(|B(:)|)/min(svd(A))² = ∞, and so the inequality is true.

If A'Ax=b and QR=A is the economy-size QR decomposition of A, then A'Ax = R'Q'QRx = R'Rx = b. If A is full rank, then x = R⁻¹((R')⁻¹b). Let x=X(:,j) be the jth column of X, and b=B(:,j) be the jth column of B. Then

max(|x(:)|) = ||x||∞
            ≤ ||x||₂
            = ||R⁻¹((R')⁻¹b)||₂
            ≤ ||R⁻¹||₂||(R')⁻¹||₂||b||₂
            = (1/min(svd(A))²)||b||₂
            = ||b||₂/min(svd(A))²
            ≤ √n ||b||∞/min(svd(A))²
            = √n max(|b(:)|)/min(svd(A))².

Since max(|x(:)|) ≤ √n max(|b(:)|)/min(svd(A))² for all rows and columns of B and X, then

max(|X(:)|) ≤ √n · max(|B(:)|) / min(svd(A))².

Lower Bound for min(svd(A))

You can estimate a lower bound s of min(svd(A)) for real-valued A using the following formula,

s = σ_N · sqrt( 2·γ⁻¹( p_s·Γ(m−n+1)·Γ(n/2) / ( 2^(m−n)·Γ((m+1)/2)·Γ((m−n+1)/2) ), (m−n+1)/2 ) )

where σ_N is the standard deviation of random noise added to the elements of A, 1−p_s is the probability that s ≤ min(svd(A)), Γ is the gamma function, and γ⁻¹ is the inverse incomplete gamma function gammaincinv.

The proof is found in [1]. It is derived by integrating the formula in Lemma 3.3 from [3] and rearranging terms.
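A direct sketch of this formula uses gammaln for numerical stability and gammaincinv for γ⁻¹; an equivalent computation is packaged in fixed.realSingularValueLowerBound. Here m, n, sigma_N (σ_N), and p_s are assumed to already be defined as in the text:

% Sketch of the lower bound s on min(svd(A)) for real-valued A.
% Work with logarithms of the gamma functions to avoid overflow for large m.
logc = log(p_s) + gammaln(m-n+1) + gammaln(n/2) ...
     - (m-n)*log(2) - gammaln((m+1)/2) - gammaln((m-n+1)/2);
s = sigma_N*sqrt(2*gammaincinv(exp(logc),(m-n+1)/2));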

Since s ≤ min(svd(A)) with probability 1−p_s, then you can bound the magnitude of the elements of X without computing svd(A),

max(|X(:)|) ≤ √n max(|B(:)|)/min(svd(A))² ≤ √n max(|B(:)|)/s²   with probability 1−p_s.

You can compute s using the fixed.realSingularValueLowerBound function, which uses a default probability of 5 standard deviations below the mean, p_s = (1 + erf(−5/√2))/2 ≈ 2.8665×10⁻⁷, so the probability that the estimated bound for the smallest singular value s is less than the actual smallest singular value of A is 1−p_s ≈ 0.9999997.
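In MATLAB, this default probability is:

p_s = (1 + erf(-5/sqrt(2)))/2   % approximately 2.8665e-07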

Example

This example runs a simulation with many random matrices and compares the analytical bounds with the actual singular values of A and the actual largest elements of R=Q'A and X=(A'A)\B.

Define System Parameters

Define the matrix attributes and system parameters for this example.

m is the number of rows in matrix A. In a problem such as beamforming or direction finding, m corresponds to the number of samples that are integrated over.

m = 300;

n is the number of columns in matrix A and rows in matrices B and X. In a least-squares problem, m is greater than n, and usually m is much larger than n. In a problem such as beamforming or direction finding, n corresponds to the number of sensors.

n = 10;

p is the number of columns in matrices B and X. It corresponds to simultaneously solving a system with p right-hand sides.

p = 1;

In this example, set the rank of matrix A to be less than the number of columns. In a problem such as beamforming or direction finding, rank(A) corresponds to the number of signals impinging on the sensor array.

rankA = 3;

precisionBits defines the number of bits of precision required for the matrix solve. Set this value according to system requirements.

precisionBits = 24;

In this example, real-valued matrices A and B are constructed such that the magnitude of their elements is less than or equal to one. Your own system requirements will define what those values are. If you don't know what they are, and A and B are fixed-point inputs to the system, then you can use the upperbound function to determine the upper bounds of the fixed-point types of A and B.

max_abs_A is an upper bound on the maximum magnitude element of A.

max_abs_A = 1;  

max_abs_B is an upper bound on the maximum magnitude element of B.

max_abs_B = 1;

Thermal noise standard deviation is the square root of thermal noise power, which is a system parameter. A well-designed system has the quantization level lower than the thermal noise. Here, set thermalNoiseStandardDeviation to the equivalent of -50dB noise power.

thermalNoiseStandardDeviation = sqrt(10^(-50/10))
thermalNoiseStandardDeviation = 
0.0032

The standard deviation of the noise from quantizing a real signal is 2^(−precisionBits)/√12 [4,5]. Use fixed.realQuantizationNoiseStandardDeviation to compute this. See that it is less than thermalNoiseStandardDeviation.

quantizationNoiseStandardDeviation = fixed.realQuantizationNoiseStandardDeviation(precisionBits)
quantizationNoiseStandardDeviation = 
1.7206e-08
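Equivalently, you can evaluate the quantization-noise formula directly:

2^(-precisionBits)/sqrt(12)   % same value, approximately 1.7206e-08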

Compute Fixed-Point Types

In this example, assume that the designed system matrix A does not have full rank (there are fewer signals of interest than number of columns of matrix A), and the measured system matrix A has additive thermal noise that is larger than the quantization noise. The additive noise makes the measured matrix A have full rank.

Set σnoise=σthermal noise.

noiseStandardDeviation = thermalNoiseStandardDeviation;

Use fixed.realQlessQRMatrixSolveFixedpointTypes to compute fixed-point types.

T = fixed.realQlessQRMatrixSolveFixedpointTypes(m,n,max_abs_A,max_abs_B,...
    precisionBits,noiseStandardDeviation)
T = struct with fields:
    A: [0x0 embedded.fi]
    B: [0x0 embedded.fi]
    X: [0x0 embedded.fi]

T.A is the type computed for transforming A to R in-place so that it does not overflow.

T.A
ans = 

[]

          DataTypeMode: Fixed-point: binary point scaling
            Signedness: Signed
            WordLength: 31
        FractionLength: 24

T.B is the type computed for B so that it does not overflow.

T.B
ans = 

[]

          DataTypeMode: Fixed-point: binary point scaling
            Signedness: Signed
            WordLength: 27
        FractionLength: 24

T.X is the type computed for the solution X=(A'A)\B so that there is a low probability that it overflows.

T.X
ans = 

[]

          DataTypeMode: Fixed-point: binary point scaling
            Signedness: Signed
            WordLength: 40
        FractionLength: 24

Upper Bound for R

The upper bound for R is computed using the formula max(|R(:)|) ≤ √m · max(|A(:)|), where m is the number of rows of matrix A. This upper bound is used to select a fixed-point type with the required number of bits of precision to avoid an overflow in the upper bound.

upperBoundR = sqrt(m)*max_abs_A
upperBoundR = 
17.3205

Lower Bound for min(svd(A)) for Real A

A lower bound for min(svd(A)) is estimated by the fixed.realSingularValueLowerBound function using a probability that the estimate s is not greater than the actual smallest singular value. The default probability is 5 standard deviations below the mean. You can change this probability by specifying it as the last input parameter to the fixed.realSingularValueLowerBound function.

estimatedSingularValueLowerBound = fixed.realSingularValueLowerBound(m,n,noiseStandardDeviation)
estimatedSingularValueLowerBound = 
0.0371
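The probability can also be specified explicitly as the last input argument instead of using the 5-sigma default (the value here is illustrative only):

fixed.realSingularValueLowerBound(m,n,noiseStandardDeviation,1e-6)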

Simulate and Compare to the Computed Bounds

The bounds are within an order of magnitude of the simulated results. This is sufficient because the number of bits translates to a logarithmic scale relative to the range of values. Being within a factor of 10 is between 3 and 4 bits. This is a good starting point for specifying a fixed-point type. If you run the simulation for more samples, then it is more likely that the simulated results will be closer to the bound. This example uses a limited number of simulations so it doesn't take too long to run. For real-world system design, you should run additional simulations.

Define the number of samples, numSamples, over which to run the simulation.

numSamples = 1e4;

Run the simulation.

[actualMaxR,singularValues,X_values] = runSimulations(m,n,p,rankA,max_abs_A,max_abs_B,numSamples,...
    noiseStandardDeviation,T);

You can see that the upper bound on R compared to the measured simulation results of the maximum value of R over all runs is within an order of magnitude.

upperBoundR
upperBoundR = 
17.3205
max(actualMaxR)
ans = 
8.1682

Finally, see that the estimated lower bound of min(svd(A)) compared to the measured simulation results of min(svd(A)) over all runs is also within an order of magnitude.

estimatedSingularValueLowerBound
estimatedSingularValueLowerBound = 
0.0371
actualSmallestSingularValue = min(singularValues,[],'all')
actualSmallestSingularValue = 
0.0421

Plot the distribution of the singular values over all simulation runs. The distributions of the largest singular values correspond to the signals that determine the rank of the matrix. The distributions of the smallest singular values correspond to the noise. The derivation of the estimated bound of the smallest singular value makes use of the random nature of the noise.

clf
fixed.example.plot.singularValueDistribution(m,n,rankA,...
    noiseStandardDeviation,singularValues,...
    estimatedSingularValueLowerBound,"real");

Figure: Singular value distributions for 300-by-10 real matrices of rank 3 with σ_noise = 0.00316. The x-axis is the singular value magnitude and the y-axis is the probability.

Zoom in to the smallest singular value to see that the estimated bound is close to it.

xlim([estimatedSingularValueLowerBound*0.9, max(singularValues(n,:))]);

Figure: Zoomed-in view of the same singular value distributions near the estimated lower bound.

Estimate the largest value of the solution, X, and compare it to the largest value of X found during the simulation runs. The estimation is within an order of magnitude of the actual value, which is sufficient for estimating a fixed-point data type, because it is between 3 and 4 bits.

This example uses a limited number of simulation runs. With additional simulation runs, the actual largest value of X will approach the estimated largest value of X.

estimated_largest_X = fixed.realQlessQRMatrixSolveUpperBoundX(m,n,max_abs_B,noiseStandardDeviation)
estimated_largest_X = 
7.2565e+03
actual_largest_X = max(abs(X_values),[],'all')
actual_largest_X = 
582.6761

Plot the distribution of X values and compare it to the estimated upper bound for X.

clf
fixed.example.plot.xValueDistribution(m,n,rankA,noiseStandardDeviation,...
    X_values,estimated_largest_X,"real normally distributed random");

Figure: X value distributions for 300-by-10 matrices of rank 3 with σ_noise = 0.00316. The x-axis is the X value magnitude and the y-axis is the probability.

Supporting Functions

The runSimulations function creates a series of random matrices A and B of a given size and rank, quantizes them according to the computed types, computes the QR decomposition of A, and solves the equation A'AX=B. It returns the maximum values of R=Q'A, the singular values of A, and the values of X so their distributions can be plotted and compared to the bounds.

function [actualMaxR,singularValues,X_values] = runSimulations(m,n,p,rankA,max_abs_A,max_abs_B,...
        numSamples,noiseStandardDeviation,T)
    precisionBits = T.A.FractionLength;
    A_WordLength = T.A.WordLength;
    B_WordLength = T.B.WordLength;
    actualMaxR = zeros(1,numSamples);
    singularValues = zeros(n,numSamples);
    X_values = zeros(n,numSamples);
    for j = 1:numSamples
        A = max_abs_A*fixed.example.realRandomLowRankMatrix(m,n,rankA);
        % Adding random noise makes A non-singular.
        A = A + fixed.example.realNormalRandomArray(0,noiseStandardDeviation,m,n);
        A = quantizenumeric(A,1,A_WordLength,precisionBits);
        B = fixed.example.realUniformRandomArray(-max_abs_B,max_abs_B,n,p);
        B = quantizenumeric(B,1,B_WordLength,precisionBits);
        [~,R] = qr(A,0);
        X = R\(R'\B);
        actualMaxR(j) = max(abs(R(:)));
        singularValues(:,j) = svd(A);
        X_values(:,j) = X;
    end
end

References

  1. Thomas A. Bryan and Jenna L. Warren. “Systems and Methods for Design Parameter Selection”. Patent pending. U.S. Patent Application No. 16/947,130. 2020.

  2. Perform QR Factorization Using CORDIC. Derivation of the bound on growth when computing QR. MathWorks. 2010.

  3. Zizhong Chen and Jack J. Dongarra. “Condition Numbers of Gaussian Random Matrices”. In: SIAM J. Matrix Anal. Appl. 27.3 (July 2005), pp. 603–620. issn: 0895-4798. doi: 10.1137/040616413. url: https://dx.doi.org/10.1137/040616413.

  4. Bernard Widrow. “A Study of Rough Amplitude Quantization by Means of Nyquist Sampling Theory”. In: IRE Transactions on Circuit Theory 3.4 (Dec. 1956), pp. 266–276.

  5. Bernard Widrow and István Kollár. Quantization Noise – Roundoff Error in Digital Computation, Signal Processing, Control, and Communications. Cambridge, UK: Cambridge University Press, 2008.

  6. Gene H. Golub and Charles F. Van Loan. Matrix Computations. Second edition. Baltimore: Johns Hopkins University Press, 1989.

Suppress mlint warnings in this file.

%#ok<*NASGU>
%#ok<*ASGLU>

This example shows how to use the fixed.realQlessQRMatrixSolveFixedpointTypes function to analytically determine fixed-point types for the solution of the real least-squares matrix equation A'AX=B, where A is an m-by-n matrix with m ≥ n, B is n-by-p, and X is n-by-p.

Fixed-point types for the solution of the matrix equation A'AX=B are well-bounded if the number of rows, m, of A is much greater than the number of columns, n (that is, m ≫ n), and A is full rank. If A is not inherently full rank, then it can be made so by adding random noise. Random noise naturally occurs in physical systems, such as thermal noise in radar or communications systems. If m=n, then the dynamic range of the system can be unbounded. For example, in the scalar equation x=a/b with a,b ∈ [−1,1], x can be arbitrarily large if b is close to 0.

Define System Parameters

Define the matrix attributes and system parameters for this example.

m is the number of rows in matrix A. In a problem such as beamforming or direction finding, m corresponds to the number of samples that are integrated over.

m = 300;

n is the number of columns in matrix A and rows in matrices B and X. In a least-squares problem, m is greater than n, and usually m is much larger than n. In a problem such as beamforming or direction finding, n corresponds to the number of sensors.

n = 10;

p is the number of columns in matrices B and X. It corresponds to simultaneously solving a system with p right-hand sides.

p = 1;

In this example, set the rank of matrix A to be less than the number of columns. In a problem such as beamforming or direction finding, rank(A) corresponds to the number of signals impinging on the sensor array.

rankA = 3;

precisionBits defines the number of bits of precision required for the matrix solve. Set this value according to system requirements.

precisionBits = 24;

In this example, real-valued matrices A and B are constructed such that the magnitude of their elements is less than or equal to one. Your own system requirements will define what those values are. If you don't know what they are, and A and B are fixed-point inputs to the system, then you can use the upperbound function to determine the upper bounds of the fixed-point types of A and B.

max_abs_A is an upper bound on the maximum magnitude element of A.

max_abs_A = 1;

max_abs_B is an upper bound on the maximum magnitude element of B.

max_abs_B = 1;

Thermal noise standard deviation is the square root of thermal noise power, which is a system parameter. A well-designed system has the quantization level lower than the thermal noise. Here, set thermalNoiseStandardDeviation to the equivalent of -50dB noise power.

thermalNoiseStandardDeviation = sqrt(10^(-50/10))
thermalNoiseStandardDeviation = 
0.0032

The quantization noise standard deviation is a function of the required number of bits of precision. Use fixed.realQuantizationNoiseStandardDeviation to compute this. See that it is less than thermalNoiseStandardDeviation.

quantizationNoiseStandardDeviation = fixed.realQuantizationNoiseStandardDeviation(precisionBits)
quantizationNoiseStandardDeviation = 
1.7206e-08

Compute Fixed-Point Types

In this example, assume that the designed system matrix A does not have full rank (there are fewer signals of interest than number of columns of matrix A), and the measured system matrix A has additive thermal noise that is larger than the quantization noise. The additive noise makes the measured matrix A have full rank.

Set σnoise=σthermal noise.

noiseStandardDeviation = thermalNoiseStandardDeviation;

Use fixed.realQlessQRMatrixSolveFixedpointTypes to compute fixed-point types.

T = fixed.realQlessQRMatrixSolveFixedpointTypes(m,n,max_abs_A,max_abs_B,...
    precisionBits,noiseStandardDeviation)
T = struct with fields:
    A: [0x0 embedded.fi]
    B: [0x0 embedded.fi]
    X: [0x0 embedded.fi]

T.A is the type computed for transforming A to R=Q'A in-place so that it does not overflow.

T.A
ans = 

[]

          DataTypeMode: Fixed-point: binary point scaling
            Signedness: Signed
            WordLength: 31
        FractionLength: 24

T.B is the type computed for B so that it does not overflow.

T.B
ans = 

[]

          DataTypeMode: Fixed-point: binary point scaling
            Signedness: Signed
            WordLength: 27
        FractionLength: 24

T.X is the type computed for the solution X=(A'A)\B so that there is a low probability that it overflows.

T.X
ans = 

[]

          DataTypeMode: Fixed-point: binary point scaling
            Signedness: Signed
            WordLength: 40
        FractionLength: 24

Use the Specified Types to Solve the Matrix Equation A'AX=B

Create random matrices A and B such that rankA=rank(A). Add random measurement noise to A which will make it become full rank.

rng('default');
[A,B] = fixed.example.realRandomQlessQRMatrices(m,n,p,rankA);
A = A + fixed.example.realNormalRandomArray(0,noiseStandardDeviation,m,n);

Cast the inputs to the types determined by fixed.realQlessQRMatrixSolveFixedpointTypes. Quantizing to fixed-point is equivalent to adding random noise [4,5].

A = cast(A,'like',T.A);
B = cast(B,'like',T.B);

Accelerate the fixed.qlessQRMatrixSolve function by using fiaccel to generate a MATLAB executable (MEX) function.

fiaccel fixed.qlessQRMatrixSolve -args {A,B,T.X} -o qlessQRMatrixSolve_mex

Specify output type T.X and compute fixed-point X=(A'A)\B using the QR method.

X = qlessQRMatrixSolve_mex(A,B,T.X);
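Alternatively, if you do not want to generate a MEX function, you can call the MATLAB function directly with the same arguments; the MEX function only accelerates the computation.

X = fixed.qlessQRMatrixSolve(A,B,T.X);   % interpreted MATLAB, same computation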

Compute the relative error to verify the accuracy of the output.

relative_error = norm(double(A'*A*X - B))/norm(double(B))
relative_error = 
0.0624

Suppress mlint warnings in this file.

%#ok<*NASGU>
%#ok<*ASGLU>

This example shows how to use the fixed.realQlessQRMatrixSolveFixedpointTypes function to analytically determine fixed-point types for the solution of the real least-squares matrix equation

[λIn; A]'[λIn; A]X = (λ²In + A'A)X = B

where A is an m-by-n matrix with m ≥ n, B is n-by-p, X is n-by-p, In = eye(n), and λ is a regularization parameter.
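As a quick double-precision sanity check of this identity, with small, illustrative sizes and values, the stacked form and the normal-equations form give the same solution:

% Verify that [lambda*eye(n); A]'*[lambda*eye(n); A] equals lambda^2*eye(n) + A'*A
% on a small random problem.
n = 3; m = 6; lambda = 0.01;
A = randn(m,n); B = randn(n,1);
A_lambda = [lambda*eye(n); A];
X1 = (A_lambda'*A_lambda)\B;
X2 = (lambda^2*eye(n) + A'*A)\B;
norm(X1 - X2)   % close to zero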

Define System Parameters

Define the matrix attributes and system parameters for this example.

m is the number of rows in matrix A. In a problem such as beamforming or direction finding, m corresponds to the number of samples that are integrated over.

m = 300;

n is the number of columns in matrix A and rows in matrices B and X. In a least-squares problem, m is greater than n, and usually m is much larger than n. In a problem such as beamforming or direction finding, n corresponds to the number of sensors.

n = 10;

p is the number of columns in matrices B and X. It corresponds to simultaneously solving a system with p right-hand sides.

p = 1;

In this example, set the rank of matrix A to be less than the number of columns. In a problem such as beamforming or direction finding, rank(A) corresponds to the number of signals impinging on the sensor array.

rankA = 3;

precisionBits defines the number of bits of precision required for the matrix solve. Set this value according to system requirements.

precisionBits = 32;

Small, positive values of the regularization parameter can improve the conditioning of the problem and reduce the variance of the estimates. While biased, the reduced variance of the estimate often results in a smaller mean squared error when compared to least-squares estimates.

regularizationParameter = 0.01;

In this example, real-valued matrices A and B are constructed such that the magnitude of their elements is less than or equal to one. Your own system requirements will define what those values are. If you don't know what they are, and A and B are fixed-point inputs to the system, then you can use the upperbound function to determine the upper bounds of the fixed-point types of A and B.

max_abs_A is an upper bound on the maximum magnitude element of A.

max_abs_A = 1;

max_abs_B is an upper bound on the maximum magnitude element of B.

max_abs_B = 1;

Thermal noise standard deviation is the square root of thermal noise power, which is a system parameter. A well-designed system has the quantization level lower than the thermal noise. Here, set thermalNoiseStandardDeviation to the equivalent of -50dB noise power.

thermalNoiseStandardDeviation = sqrt(10^(-50/10))
thermalNoiseStandardDeviation = 
0.0032

The quantization noise standard deviation is a function of the required number of bits of precision. Use fixed.realQuantizationNoiseStandardDeviation to compute this. See that it is less than thermalNoiseStandardDeviation.

quantizationNoiseStandardDeviation = fixed.realQuantizationNoiseStandardDeviation(precisionBits)
quantizationNoiseStandardDeviation = 
6.7212e-11

Compute Fixed-Point Types

In this example, assume that the designed system matrix A does not have full rank (there are fewer signals of interest than number of columns of matrix A), and the measured system matrix A has additive thermal noise that is larger than the quantization noise. The additive noise makes the measured matrix A have full rank.

Set σnoise=σthermal noise.

noiseStandardDeviation = thermalNoiseStandardDeviation;

Use the fixed.realQlessQRMatrixSolveFixedpointTypes function to compute fixed-point types.

T = fixed.realQlessQRMatrixSolveFixedpointTypes(m,n,max_abs_A,max_abs_B,...
    precisionBits,noiseStandardDeviation,[],regularizationParameter)
T = struct with fields:
    A: [0x0 embedded.fi]
    B: [0x0 embedded.fi]
    X: [0x0 embedded.fi]

T.A is the type computed for transforming [λIn; A] to R = Q'[λIn; A] in-place so that it does not overflow.

T.A
ans = 

[]

          DataTypeMode: Fixed-point: binary point scaling
            Signedness: Signed
            WordLength: 39
        FractionLength: 32

T.B is the type computed for B so that it does not overflow.

T.B
ans = 

[]

          DataTypeMode: Fixed-point: binary point scaling
            Signedness: Signed
            WordLength: 35
        FractionLength: 32

T.X is the type computed for the solution X = ([λIn; A]'[λIn; A])\B so that there is a low probability that it overflows.

T.X
ans = 

[]

          DataTypeMode: Fixed-point: binary point scaling
            Signedness: Signed
            WordLength: 48
        FractionLength: 32

Use the Specified Types to Solve the Matrix Equation

Create random matrices A and B such that rankA=rank(A). Add random measurement noise to A which will make it become full rank.

rng('default');
[A,B] = fixed.example.realRandomQlessQRMatrices(m,n,p,rankA);
A = A + fixed.example.realNormalRandomArray(0,noiseStandardDeviation,m,n);

Cast the inputs to the types determined by fixed.realQlessQRMatrixSolveFixedpointTypes. Quantizing to fixed-point is equivalent to adding random noise.

A = cast(A,'like',T.A);
B = cast(B,'like',T.B);

Accelerate the fixed.qlessQRMatrixSolve function by using fiaccel to generate a MATLAB® executable (MEX) function.

fiaccel fixed.qlessQRMatrixSolve -args {A,B,T.X,[],regularizationParameter} -o qlessQRMatrixSolve_mex

Specify output type T.X and compute fixed-point X = ([λIn; A]'[λIn; A])\B using the QR method.

X = qlessQRMatrixSolve_mex(A,B,T.X,[],regularizationParameter);

Verify the Accuracy of the Output

Verify that the relative error between the fixed-point output and the builtin MATLAB solution in double-precision floating-point is small.

X_double = ([λIn; A]'[λIn; A])\B

A_lambda = double([regularizationParameter*eye(n);A]);
X_double = (A_lambda'*A_lambda)\double(B);
relativeError = norm(X_double - double(X))/norm(X_double)
relativeError = 
1.0344e-05

Suppress mlint warnings in this file.

%#ok<*NASGU>
%#ok<*ASGLU>

Input Arguments


m — Number of rows in A, specified as a positive integer-valued scalar.

Data Types: double

n — Number of columns in A, specified as a positive integer-valued scalar.

Data Types: double

max_abs_A — Maximum of the absolute value of A, specified as a scalar.

Example: max(abs(A(:)))

Data Types: double

max_abs_B — Maximum of the absolute value of B, specified as a scalar.

Example: max(abs(B(:)))

Data Types: double

precisionBits — Required number of bits of precision of the input and output, specified as a positive integer-valued scalar.

Data Types: double

noiseStandardDeviation — Standard deviation of additive random noise in A, specified as a scalar.

If noiseStandardDeviation is not specified, then the default is the standard deviation of the real-valued quantization noise σ_q = 2^(−precisionBits)/√12, which is calculated by fixed.realQuantizationNoiseStandardDeviation.

Data Types: double

p_s — Probability that the estimate of lower bound s is larger than the actual smallest singular value of the matrix, specified as a scalar. Use fixed.realSingularValueLowerBound to estimate the smallest singular value, s, of A. If p_s is not specified, the default value is p_s = (1/2)(1 + erf(−5/√2)) ≈ 3×10⁻⁷, which corresponds to 5 standard deviations below the mean, so the probability that the estimated bound for the smallest singular value is less than the actual smallest singular value is 1−p_s ≈ 0.9999997.

Data Types: double

regularizationParameter — Regularization parameter, specified as a nonnegative scalar. Small, positive values of the regularization parameter can improve the conditioning of the problem and reduce the variance of the estimates. While biased, the reduced variance of the estimate often results in a smaller mean squared error when compared to least-squares estimates.

regularizationParameter is the Tikhonov regularization parameter of the matrix problem

[λIn; A]'[λIn; A]X = (λ²In + A'A)X = B

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | fi

maxWordLength — Maximum word length of fixed-point types, specified as a positive integer.

If the word length of the fixed-point type exceeds the specified maximum word length, the default of 65535 bits is used.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | fi

Output Arguments


T — Fixed-point types for A, B, and X, returned as a structure. The structure T has fields T.A, T.B, and T.X. These fields contain fi objects that specify fixed-point types for:

  • A and B that guarantee no overflow will occur in the QR algorithm.

    The QR algorithm transforms A in-place into upper-triangular R, where QR=A is the QR decomposition of A.

  • X such that there is a low probability of overflow.

Algorithms

The fixed-point type for A is computed using fixed.qlessqrFixedpointTypes. The required number of integer bits to prevent overflow is derived from the following bound on the growth of R [1]. The required number of integer bits is added to the number of bits of precision, precisionBits, of the input, plus one for the sign bit, plus one bit for intermediate CORDIC gain of approximately 1.6468 [2].

The elements of R are bounded in magnitude by

max(|R(:)|) ≤ √m · max(|A(:)|).
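For example, with m = 300, max(|A(:)|) = 1, and precisionBits = 24 as in the examples above, this bound reproduces the 31-bit word length reported for T.A. The arithmetic below is an illustrative sketch, not the function's internal code:

upperBoundR = sqrt(300)*1;               % sqrt(m)*max(|A(:)|), about 17.32
integerBits = ceil(log2(upperBoundR));   % 5 integer bits cover the growth bound
wordLength  = integerBits + 24 + 1 + 1   % + precisionBits + sign bit + CORDIC-gain bit = 31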

Matrix B is not transformed, so it does not need any additional growth bits.

The elements of X=R\(R'\B) are bounded in magnitude by

max(|X(:)|) ≤ √n · max(|B(:)|) / min(svd(A))².

Computing the singular value decomposition to derive the above bound on X is more computationally intensive than the entire matrix solve, so the fixed.realSingularValueLowerBound function is used to estimate a bound on min(svd(A)).

References

[2] Volder, Jack E. "The CORDIC Trigonometric Computing Technique." IRE Transactions on Electronic Computers EC-8 (1959): 330-334.

Version History

Introduced in R2021b
