SVD of each row of a matrix

Hi,
I am looking for a way to compute SVD of each row of a matrix without using a loop.
thanks

Accepted Answer

Christine Tobler on 1 Sep 2020
The SVD of a row vector has its singular value equal to the vector's norm, and its singular vectors are 1 and the normalized vector.
If this is really what you are looking for, you can compute the singular values using
S = vecnorm(A, 2, 2);
and the singular vectors using
V = A ./ S;
(you might need to transpose these, depending on whether your vectors are stored as rows or columns)
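For instance, a minimal sketch treating each row of A as its own row vector (the matrix A below is only illustrative):
A = rand(5, 3);              % five row vectors of length 3
S = vecnorm(A, 2, 2);        % singular value of row k is norm(A(k,:)), giving a 5x1 vector
V1 = A ./ S;                 % row k holds the (transposed) first right singular vector of A(k,:)
% For each row, svd would return U = 1, the singular value S(k), and V(:,1) = V1(k,:)'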
  1 Comment
John D'Errico on 1 Sep 2020
Exactly. Since these outputs of the SVD are so easy to obtain without recourse to the SVD directly, the only real reason for doing this that I can think of is that @Jayant wants to compute a basis for the null space of each row of a matrix. If this is a problem in 2 or 3 dimensions, even that is quite easy to do efficiently. So I think we need to know the real reason for wanting to compute the SVD, as well as the dimension of the problem.


More Answers (2)

John D'Errico on 1 Sep 2020
Edited: John D'Errico on 1 Sep 2020
Sorry, but this seems to make little sense, since most of that set of singular value decompositions will be trivial. Or, perhaps, let me ask what it is that you are looking to do here? Computing the SVD of each row of a matrix just means you want to find the SVD of a sequence of row vectors.
That is, if X is ANY row vector, then what is the svd(X)? The SVD returns three arrays, thus U,S,V. If X is a row vector, then we will always have U == 1.
Likewise, for any row vector X, S will be a row vector of the same size as X, with all zeros except for the first element, which will be norm(X).
Finally, V will be an nxn array, where n is the length of X. The first column of V will be X'/norm(X), so X scaled to have unit norm. The remaining columns of V will be the null space of X, thus a basis for the (n-1)-dimensional subspace that is orthogonal to the vector X.
For example...
X = 1:3
X =
1 2 3
[U,S,V] = svd(X)
U =
1
S =
3.74165738677394 0 0
V =
0.267261241912424 -0.534522483824849 -0.801783725737273
0.534522483824849 0.774541920588438 -0.338187119117343
0.801783725737273 -0.338187119117343 0.492719321323986
Since it is trivial to compute U and S for any set of row vectors (the 2-norm of each row), and it is trivial to scale a vector to unit length by dividing by its 2-norm, you must be asking how to compute the null space of each of a set of vectors?
Note that those nullspaces are not unique.
The point is, depending on what you really wanted from this, it can arguably be far easier to compute if we know what it is that you are looking to find. We also would need to know the length of your vectors, since there are easy ways to do this computation without any need for the SVD at all, if the dimension of the problem is low.
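For example, here is a minimal sketch, assuming 3-D row vectors (the matrix A and all variable names below are illustrative), that builds an orthonormal null-space basis for every row using cross products, with no SVD and no loop:
A = rand(5, 3);                          % each row is a 3-D vector
U1 = A ./ vecnorm(A, 2, 2);              % unit vector for each row (first right singular vector)
E = repmat([1 0 0], size(A,1), 1);       % helper direction for the cross product
mask = abs(U1(:,1)) > 0.9;               % rows nearly parallel to [1 0 0]
E(mask,:) = repmat([0 1 0], nnz(mask), 1);   % switch helper there to avoid degeneracy
N1 = cross(U1, E, 2);                    % orthogonal to each row
N1 = N1 ./ vecnorm(N1, 2, 2);            % normalize
N2 = cross(U1, N1, 2);                   % completes an orthonormal basis of each row's null space
Row k of N1 and N2 then spans the same 2-D null space that the last two columns of V from svd(A(k,:)) would span, up to rotation and sign.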

Jayant chouragade on 2 Sep 2020
Hi,
Thank you all for your valuable inputs. I think I am a bit confused and should give more details about the application. Here is what I am trying to do:
1. I am implementing the Test of Orthogonality of Frequency Subspace (TOFS) for DOA estimation, as given in the attached paper.
2. The paper suggests using the smallest singular value of the matrix below, at each hypothetical DOA, to estimate the true DOA, as given in the second equation below.
3. By doing so, the row which corresponds to the minimum singular value will give the true DOA.
4. For my implementation, the vector a(wi,θ) above is a 4x1 vector and Wi is a 4x3 matrix, where i = 0 to NFFT-1 = 0 to 255. Therefore the matrix D(θ) above, for a given hypothetical direction (obtained by creating a mesh for Azimuth = 0:360 and Elevation = 0:90), will be a 1xNFFT matrix (see the sketch after this list).
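Assuming D(θ) really does come out as a single 1xNFFT row vector at each grid direction, its only singular value is just its 2-norm, so the whole grid can be scored without a loop. A hedged sketch (the stacking of D and all names below are illustrative assumptions, not the paper's notation):
% D is assumed stacked so that row k is the 1xNFFT vector D(theta_k) for grid direction k
sigmaMin = vecnorm(D, 2, 2);     % smallest (and only) singular value of each row
[~, kBest] = min(sigmaMin);      % the grid direction minimizing it is the estimated DOA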
Hope this makes sense.
jayant
  4 Comments
Bruno Luong on 2 Sep 2020
As I read it, the paper suggests computing D(theta), where theta is a vector of length p (number of sensors?), which is a matrix with p rows (and p*M columns).
Then you have to search for the best theta vector among the "hypothesized DOA θ" (whatever that means, possibly the set of vector angles when a specific obstacle is somewhere inside the measurement volume) by maximizing the inverse of the smallest singular value of this matrix.
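A hedged sketch of that reading (thetaGrid, buildD, and the matrix sizes are illustrative placeholders, not the paper's notation):
score = zeros(numel(thetaGrid), 1);
for k = 1:numel(thetaGrid)
    Dk = buildD(thetaGrid(k));        % p x (p*M) matrix for this hypothesized theta
    score(k) = 1 / min(svd(Dk));      % inverse of the smallest singular value
end
[~, kBest] = max(score);              % the theta maximizing this score is the estimate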
Are we supposed to read the paper in your place?
Jayant chouragade on 4 Sep 2020
Edited: Jayant chouragade on 4 Sep 2020
@Bruno Luong thank you for participating in the discussion. I hesitated before enclosing the reference paper, but then decided to do so, thinking curious minds may need a better understanding.

