Why are the Wigner-Ville and Smoothed-Pseudo-Wigner-Ville distributions so computationally expensive?

Hi,
I have a complex signal plus AWGN that I need to decompose in time-frequency for analysis. I apply the Short-Time Fourier Transform (STFT) to a large signal (500,000 samples) and get good results! However, when I tried the Wigner-Ville and Smoothed-Pseudo-Wigner-Ville distributions, it was impossible with the same number of samples. I then reduced the signal to only 12,000 samples, and it was still far too computationally expensive. I know that the Wigner-Ville distribution has a better time-frequency resolution tradeoff, but the cross-terms are a problem, so I tried the Smoothed-Pseudo-Wigner-Ville distribution, which can suppress the cross-terms at the expense of time-frequency resolution. Finally, I tried to plot a 3D surface of the decomposition (time x frequency x amplitude) with 'mesh', which is also very computationally expensive.
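For reference, this is roughly the comparison I am attempting, reduced to 2,000 samples so it runs at all. This is only a minimal sketch assuming the Signal Processing Toolbox stft and wvd functions; the chirp is just a stand-in for my actual signal:

% Minimal sketch (Signal Processing Toolbox assumed). n is reduced to
% 2,000 here: wvd builds a matrix of roughly n-by-n values, so 500,000
% samples is far beyond practical memory and time limits.
fs = 1e3;                                 % sample rate (arbitrary here)
n  = 2000;
t  = (0:n-1)'/fs;
x  = exp(1j*2*pi*(50*t + 100*t.^2)) ...   % complex chirp stand-in signal
     + 0.1*(randn(n,1) + 1j*randn(n,1))/sqrt(2);  % complex AWGN

tic; [S,fS,tS] = stft(x, fs);                  tStft  = toc;
tic; [W,fW,tW] = wvd(x, fs);                   tWvd   = toc;
tic; [P,fP,tP] = wvd(x, fs, 'smoothedPseudo'); tSpwvd = toc;
fprintf('STFT %.2f s, WVD %.2f s, SPWVD %.2f s\n', tStft, tWvd, tSpwvd);

mesh(tW, fW, W);                          % 3-D time x frequency x amplitude
xlabel('Time (s)'); ylabel('Frequency (Hz)'); zlabel('Amplitude');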
Please help me understand, because I need to compare the different time-frequency transforms/distributions on the same complex-plus-AWGN signal with the same number of samples.
Thank you
  4 Comments
Maximiliano Barbosa on 15 Apr 2022
Bjorn Gustavsson, please tell me what you mean by the 'O' in O(n*log(n)) and O(n^3)? I understand that 'n' means the number of samples of the signal, but I do not know what 'O' is supposed to mean.
Also, why is the cost of the Wigner-Ville distribution O(n^3)? Is it because it computes the instantaneous autocorrelation of the signal and then applies the Fourier transform over τ (the lag variable) to obtain the time-frequency matrix (t,f)?
Please help me
Walter Roberson on 15 Apr 2022
Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity.
So O(n^3) means that as the size of the input gets large enough, the time to execute becomes proportional to the cube of the size of the input. O(n*log(n)) would mean that as the size of the input gets large enough, the time to execute becomes proportional to the size of the input times the log of the size of the input -- which can be much, much faster than n^3 for even moderate n. For example, with n^3 and input size n = 10, the time would be some constant times 10^3, or about 1000; but with n*log(n), using log base 10 for simplicity, it would be 10*log10(10) = 10*1 = 10 -- a factor of 100 faster, and the gap widens rapidly as n grows.
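To make the gap concrete, a quick tabulation of the two growth rates (just an illustration of the operation counts, not measured run times):

n = [1e2; 1e3; 1e4];                       % sample counts to compare
table(n, n.*log(n), n.^3, 'VariableNames', {'n','n_logn','n_cubed'})
% At n = 10,000 the n^3 count is already ~1e12, versus ~9.2e4 for n*log(n).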
When I looked at the code, it looked to me as if Wigner-Ville computed the autocorrelation of the signal against a number of lags that worked out to within +/- 1 of the length of the complete signal, so with signal length n that would be proportional to n^2, if I did not overlook anything. However, with smoothed Wigner-Ville it looked like there were fft steps after the autocorrelation, and fft() is theoretically n*log(n). I might easily have overlooked some steps. For an operation that costs n^2 plus an operation that costs n*log(n), the log(n) term becomes much smaller than n as n grows, so the limit of n^2 + n*log(n) would have n^2 as its limiting rate. If I overlooked operations after the fft then the cost would be higher.
Ah, wait -- it was an fft over the entire n x n array, and each column of that fft costs n*log(n), so the overall cost should be n*n*log(n) --> n^2 * log(n), which is more than n^2.
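That structure -- an n-by-n instantaneous autocorrelation matrix followed by one FFT per time column -- can be seen in a textbook-style sketch like the following. This is my own illustrative MATLAB, not the actual toolbox wvd implementation:

function W = naiveWVD(x)
% Textbook-style discrete Wigner-Ville distribution (illustrative only,
% NOT the toolbox wvd). Memory alone is n^2 complex values: for
% n = 12,000 the matrix R needs ~2.3 GB, and for n = 500,000 it would
% need ~4 TB -- which is why the STFT works at that length but the
% WVD does not.
x = x(:);
n = numel(x);
R = complex(zeros(n, n));           % rows: lag tau, columns: time t
for t = 1:n
    maxLag = min(t, n - t + 1) - 1; % lags keeping t+tau, t-tau in range
    tau = (-maxLag:maxLag)';
    R(mod(tau, n) + 1, t) = x(t + tau) .* conj(x(t - tau));
end
% FFT each time column over the lag dimension: n columns, each FFT is
% O(n*log(n)), giving O(n^2*log(n)) total on top of the O(n^2) fill of R.
W = real(fft(R, [], 1));            % WVD is real up to round-off
end

Even at n = 2,000, filling R and transforming all n columns is noticeably slower than a single STFT, which only transforms short overlapping windows.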


Answers (0)
