Fixing Value in lsqnonlin
I have seen that, based on the lsqnonlin algorithm, the upper and lower bounds are not strictly respected across iterations of lsqnonlin. I have a case where I am attempting to optimize a general N-by-N square array where some of the values should occasionally be fixed to 0. The issue is illustrated by the function used to optimize, shown below.
An example K can have the form [a, 0; b, c]. If I don't include the 0 in the optimization, the code runs with no problem. However, I would like to make this function general to any N-by-N array K. The issue arises when lsqnonlin changes the 0 to something like 1e-8 even though my upper and lower bounds are both 0, which in turn gets blown up as expm of 1e8. Is there a good way to fix the value of the 0s?
Secondary question: I only inverted the K inside the optimization function because the fit didn't work very well when I was optimizing the small decimal values. It seems that lsqnonlin treats a change from, let's say, 1/500 to 1/600 as less significant than a change from 500 to 600. If this is something I can tune in the options that I have not discovered, I would happily optimize the non-inverted numbers.
Thanks!
function xout = Global_SAS_1(T,Y,K)
%if Y is n1 by n2, T needs to be size 1 x n2
%K can be an arbitrary matrix size. Will have units as T^-1.
Tm = zeros(numel(K),length(T));
Kinv = 1./K;
Kinv(isinf(Kinv)) = 0; % in MATLAB, 1/0 = Inf
for i = 1:length(T)
    A = expm(Kinv*T(i));
    Tm(:,i) = A(:);
end
Gm = real(Y)*pinv(Tm);
xout = Gm*Tm - Y;
end
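For context, a minimal driver of the kind the question describes might look like the following sketch. The variable names and values are illustrative assumptions, not from the original post; the point is that lb(1,2) == ub(1,2) == 0 pins the (1,2) entry, yet the solver's finite differencing can still perturb it:

```matlab
% Illustrative sketch (K0, T, Y are placeholder data, not from the post).
K0 = [500, 0; 450, 600];       % initial guess; (1,2) entry fixed at 0
lb = [0, 0; 0, 0];             % lower bounds
ub = [Inf, 0; Inf, Inf];       % upper bounds: lb(1,2) == ub(1,2) == 0
fun = @(K) Global_SAS_1(T, Y, K);
Kfit = lsqnonlin(fun, K0, lb, ub);
```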
Accepted Answer
Matt J on 27 Aug 2019 (edited 27 Aug 2019)
OK, I think I see what's happening. In i_sfdnls, I see this line:
% Pass in user's objective function and not the wrapped function as
% we want the Jacobian with respect to all the variables
[A,findiffevals] = sfdnls(xcurr,fvec,Jstr,group,alpha,funfcn,l,u,...
options,sizes,finDiffFlags,varargin{:});
Apparently, the code thinks it needs to do finite difference Jacobian calculations on the original, non-reduced objective. I've no idea why they think that's necessary, but the point is that the bounds are being ignored deliberately as a part of the finite differencing.
The best approach is to formulate the problem in terms of [a,b,c] (see my comment below). That's what MATLAB is attempting to automate for you anyway. However, the following modification also works as a way of tricking the Jacobian calculation into doing the right thing:
function xout = Global_SAS_1(T,Y,K)
%if Y is n1 by n2, T needs to be size 1 x n2
%K is hard-coded as 2 x 2 here for the example. Will have units as T^-1.
Tm = zeros(numel(K),length(T));
Kinv = reshape(1./K,[2,2]);
Kinv(3) = 0; % overwrite the Inf from 1/0 so the fixed entry stays exactly 0
for i = 1:length(T)
    A = expm(Kinv*T(i));
    Tm(:,i) = A(:);
end
Gm = real(Y)*pinv(Tm);
xout = Gm*Tm - Y;
end
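The recommended [a,b,c] reparameterization could be sketched as follows: optimize only the free entries as a vector and rebuild K inside the objective, so the fixed zeros never reach the solver at all. The `freeIdx` indexing and the `setFree` helper are illustrative assumptions, not part of the original answer:

```matlab
% Sketch: optimize only the free entries [a,b,c] of K = [a, 0; b, c].
freeIdx = [1 2 4];              % linear (column-major) indices of free entries
Kfixed  = zeros(2,2);           % template with the fixed zeros in place
fun = @(x) Global_SAS_1(T, Y, setFree(Kfixed, freeIdx, x));
x0  = [500; 450; 600];          % placeholder initial guess
xfit = lsqnonlin(fun, x0, zeros(3,1), Inf(3,1));

function K = setFree(K, idx, x)
% insert the optimized values into the free positions of K
K(idx) = x;
end
```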
3 Comments
Matt J on 27 Aug 2019 (edited 27 Aug 2019)
If you are expressing the K(i,j) parameters in unnaturally large units (so that the values in the neighborhood of the optimum are small in magnitude), then the values of lsqnonlin's default tolerance parameters (e.g. StepTolerance and FiniteDifferenceStepSize) may be inappropriately scaled in relation to your unknowns.
You could experiment with smaller values of these tolerance parameters, but I believe it is always better just to choose judicious units for your unknown variables. Just as you wouldn't optimize the dimensions of a house with length variables measured in kilometers, you shouldn't be choosing units such that your K(i,j) are on the order of 1e-5.
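If rescaling the unknowns is not practical, the tolerance-tuning experiment suggested above might look like the sketch below. The specific option values are assumptions to be tuned, and `fun`, `x0`, `lb`, `ub` stand for the problem set up earlier:

```matlab
% Sketch: tighten step/difference tolerances and hint at the unknowns'
% typical magnitude (values here are illustrative, not prescriptive).
opts = optimoptions('lsqnonlin', ...
    'StepTolerance', 1e-12, ...
    'FiniteDifferenceStepSize', 1e-10, ...
    'TypicalX', 1e-5*ones(3,1));   % unknowns expected to be ~1e-5
xfit = lsqnonlin(fun, x0, lb, ub, opts);
```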