version 1.4.0.0 (20.4 KB) by
John D'Errico

Bound constrained optimization using fminsearch

**Editor's Note:** This file was selected as MATLAB Central Pick of the Week

Fminsearch does not admit bound constraints. However, simple transformation methods exist to convert a bound constrained problem into an unconstrained problem.

Fminsearchbnd is used exactly like fminsearch, except that bounds are applied to the variables. The bounds are applied internally, using a transformation of the variables (quadratic for single bounds, sin(x) for dual bounds).

The bounds are inclusive inequalities, which admit the boundary values themselves, but will not permit ANY function evaluations outside the bounds.

Note that fminsearchbnd allows the user to exactly fix a variable at some given value, by setting both bounds to the exact same value.
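Several commenters below ask about a Python port. As a rough illustration only — not D'Errico's actual code, and covering only the dual-bound sin() case — the transformation idea might be sketched with SciPy's Nelder-Mead standing in for fminsearch:

```python
import numpy as np
from scipy.optimize import minimize

def fminsearchbnd_sketch(fun, x0, lb, ub):
    # Dual-bound case only: each bounded variable x is represented by an
    # unconstrained u via x = lb + (ub - lb) * (sin(u) + 1) / 2, so every
    # candidate point the optimizer tries maps inside [lb, ub].
    lb = np.asarray(lb, float)
    ub = np.asarray(ub, float)
    x0 = np.asarray(x0, float)

    def to_x(u):
        return lb + (ub - lb) * (np.sin(u) + 1.0) / 2.0

    # Inverse transform for the starting point (clipped for safety).
    u0 = np.arcsin(np.clip(2.0 * (x0 - lb) / (ub - lb) - 1.0, -1.0, 1.0))

    res = minimize(lambda u: fun(to_x(u)), u0, method='Nelder-Mead',
                   options={'xatol': 1e-10, 'fatol': 1e-10, 'maxiter': 5000})
    return to_x(res.x)

# Bounded version of the Rosenbrock example below (this sketch needs
# finite bounds on both sides, unlike the real fminsearchbnd):
rosen = lambda x: (1 - x[0])**2 + 105.0 * (x[1] - x[0]**2)**2
xsol = fminsearchbnd_sketch(rosen, [2.5, 2.5], [2.0, 2.0], [4.0, 5.0])
```

With lower bounds of 2 on both variables, the constrained minimum lands near [2, 4], matching the fminsearchbnd example below.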

Example usage:

```matlab
rosen = @(x) (1-x(1)).^2 + 105*(x(2)-x(1).^2).^2;

% unconstrained fminsearch solution
fminsearch(rosen,[3 3])
ans =
    1.0000    1.0000

% lower bounds, no upper bounds
fminsearchbnd(rosen,[2.5 2.5],[2 2],[])
ans =
    2.0000    4.0000

% lower bounds on both vars, upper bound on x(2)
fminsearchbnd(rosen,[2.5 2.5],[2 2],[inf 3])
ans =
    2.0000    3.0000
```

I've now included fminsearchcon in the package, a tool that also allows nonlinear constraints.

John D'Errico (2021). fminsearchbnd, fminsearchcon (https://www.mathworks.com/matlabcentral/fileexchange/8277-fminsearchbnd-fminsearchcon), MATLAB Central File Exchange. Retrieved .

Created with
R14SP1

Compatible with any release

**Inspired:** fitVirusCV19varW (Variable weight fitting of SIR Model), Ogive optimization toolbox, Fminspleas, easyfit(x,y,varargin), fminsearchbnd new, Zfit, minimize, variogramfit, easyfitGUI, Total Least Squares Method, Accelerated Failure Time (AFT) models, Fit distributions to censored data, fminsearcharb, Matlab to Ansys ICEM/Fluent and Spline Drawing Toolbox


iwein vranckx: Thank you very much, works flawlessly and this is precisely what I was looking for! Great work!!

Yubin Lee: Has anyone been able to use this code for maximizing a problem?

Daniel Mbadjoun: Thanks for your help. Why am I getting the error message "??? Assignment has more non-singleton rhs dimensions than non-singleton subscripts"?

My program is:

clear all

clc

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%P QTU AMONT CdF HB R Rg

GG=[

296 845.7143 528.2500 488 40.2500 0.7475 0.8864

247 705.7143 528.2700 488 40.2700 0.6237 0.8860

246 702.8571 528.2400 488 40.2400 0.6212 0.8866

256 731.4286 528.2000 488 40.2000 0.6465 0.8875

265 757.1429 528.2600 488 40.2600 0.6692 0.8862

286 817.1429 528.2500 488 40.2500 0.7222 0.8864

273 780 528.2300 487.5000 40.7300 0.6894 0.8760

265 757.1429 528.3000 487.5000 40.8000 0.6692 0.8745

351 1.0029e+03 528.1000 487.6000 40.5000 0.8864 0.8809

349 997.1429 528.0600 487.7000 40.3600 0.8813 0.8840

365 1.0429e+03 528 487.7000 40.3000 0.9217 0.8853

359 1.0257e+03 528.1600 487.7000 40.4600 0.9066 0.8818

347 991.4286 528.3000 487.7000 40.6000 0.8763 0.8788

351 1.0029e+03 528.3000 487.7000 40.6000 0.8864 0.8788

368 1.0514e+03 528.2000 487.7000 40.5000 0.9293 0.8809

358 1.0229e+03 528.1400 487.7000 40.4400 0.9040 0.8822

359 1.0257e+03 528.0800 487.7000 40.3800 0.9066 0.8836

346 988.5714 528.1800 487.7000 40.4800 0.8737 0.8814

377 1.0771e+03 528.1500 488 40.1500 0.9520 0.8886

373 1.0657e+03 528.2100 488 40.2100 0.9419 0.8873

376 1.0743e+03 528.2400 488 40.2400 0.9495 0.8866

367 1.0486e+03 528.2000 488 40.2000 0.9268 0.8875

355 1.0143e+03 528.1800 488 40.1800 0.8965 0.8880

309 882.8571 528.2300 488 40.2300 0.7803 0.8868

276 788.5714 528.2500 488 40.2500 0.6970 0.8864

254 725.7143 528.2600 488 40.2600 0.6414 0.8862

253 722.8571 528.2600 488 40.2600 0.6389 0.8862

250 714.2857 528.2700 488 40.2700 0.6313 0.8860

258 737.1429 528.2400 488 40.2400 0.6515 0.8866

292 834.2857 528.1600 488 40.1600 0.7374 0.8884

280 800 528.1600 488.6000 39.5600 0.7071 0.9019

303 865.7143 528.2000 488.6000 39.6000 0.7652 0.9010

312 891.4286 528.1000 488.6000 39.5000 0.7879 0.9032

343 980.0000 528 488.6000 39.4000 0.8662 0.9055

345 985.7143 528.0600 488.6000 39.4600 0.8712 0.9042

333 951.4286 528.1800 488.6000 39.5800 0.8409 0.9014

329 940.0000 528.2400 488.6000 39.6400 0.8308 0.9000

315 900.0000 528.2800 488.6000 39.6800 0.7955 0.8991

338 965.7143 528.2000 488.6000 39.6000 0.8535 0.9010

328 937.1429 528.1000 488.6000 39.5000 0.8283 0.9032

315 900.0000 528.1000 488.6000 39.5000 0.7955 0.9032

317 905.7143 528.1000 488.6000 39.5000 0.8005 0.9032

358 1.0229e+03 528.0400 487.9000 40.1400 0.9040 0.8888

371 1060 528.1200 487.9000 40.2200 0.9369 0.8871

373 1.0657e+03 528.2200 487.9000 40.3200 0.9419 0.8849

364 1040 528.2500 487.9000 40.3500 0.9192 0.8842

348 994.2857 528.2100 487.9000 40.3100 0.8788 0.8851

296 845.7143 528.2400 487.9000 40.3400 0.7475 0.8844

267 762.8571 528.2200 487.9000 40.3200 0.6742 0.8849

242 691.4286 528.2800 487.9000 40.3800 0.6111 0.8836

243 694.2857 528.2800 487.9000 40.3800 0.6136 0.8836

238 680 528.2600 487.9000 40.3600 0.6010 0.8840

242 691.4286 528.2200 487.9000 40.3200 0.6111 0.8849

251 717.1429 528.2000 487.9000 40.3000 0.6338 0.8853

267 762.8571 528.2900 487.3000 40.9900 0.6742 0.8704

222 634.2857 528.1000 487.3000 40.8000 0.5606 0.8745

267 762.8571 527.9000 487.3000 40.6000 0.6742 0.8788

262 748.5714 527.9000 487.3000 40.6000 0.6616 0.8788

270 771.4286 528.1800 487.3000 40.8800 0.6818 0.8727

273 780 528.3000 487.3000 41.0000 0.6894 0.8702];

P=GG(:,1);

QTU=GG(:,2);

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% options = optimset('PlotFcns',@optimplotfval);

%options = optimset('PlotFcns',@optimplotfval);

opts = optimset('fminsearch');

%opts = optimset('fminsearch',@optimplotfval);

opts.Display = 'iter';

opts.TolX = 1.e-12;

%opts.TolFun = 1.e-12;

opts.MaxFunEvals = 100;

sse =@(x)QTU-sum(x(1)+x(2)+x(3)+x(4)+x(5)+x(6)+x(7)+x(8));

x0=[ones(60,1),ones(60,1),ones(60,1),ones(60,1),ones(60,1),ones(60,1),ones(60,1),ones(60,1)];

n=length(x0);

LB=[0*ones(n,1) zeros(n,1) zeros(n,1) zeros(n,1) zeros(n,1) zeros(n,1) zeros(n,1) zeros(n,1)];

UB=[152*ones(n,1) 152*ones(n,1) 152*ones(n,1) 152*ones(n,1) 152*ones(n,1) 152*ones(n,1) 152*ones(n,1) 152*ones(n,1)];

%sse = fminsearch(sse,x0,options)

%xsol = fminsearchbnd(sse,x0,LB,UB,opts)

[xsol,fval,exitflag,output] = fminsearchbnd(sse,x0,LB,UB)

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Please help me to run this program well

Michael Hoffman: Is it possible to pass an "options" argument without specifying "nonlcon"?

Daniel Mbadjoun: Hi John D'Errico,

Could you please help me get fminsearchbnd working in Python?

Richard Gillies: I cannot get this to work in Python 3.7.

Abhinav: How can I pass the objective function as a function file? E.g., in fmincon I pass the objective function and constraints as

[x,f] = fmincon(@objfun,x0,[],[],[],[],lb,ub,@confcun);

However, with fminsearchcon it gives the error "too many input parameters".

luckycyan: Excellent performance! I am trying to translate your code into C.

John D'Errico: God no. DON'T modify supplied code. Especially when you don't understand what you are doing!

In fact, had you looked carefully, fminsearchbnd already uses varargin! I think what you did was not edit the fminsearchbnd code; rather, you tried to add varargin as an input argument to fminsearchbnd when you called it. That is NOT how you use varargin. In fact, you never needed to do that at all.

So, looking at the debug stack, paramfittreat7D calls fminsearchbnd.

Error in paramfittreat7D (line 30)

[bmin, Smin] = fminsearchbnd(@Sfun7D,b,Lb,Ub,options);

You passed fminsearchbnd the function: @Sfun7D

But you never told it what arguments to use there.

So, really, you need to learn how to use function handles, because you are doing silly things with varargin, not understanding varargin, when you never needed to use that at all.

Pass in the arguments to Sfun7D, in the call to fminsearchbnd, where you create the function handle. Of course, you have not told me what arguments Sfun7D uses, so I cannot even help you there. LEARN TO USE AND CREATE A FUNCTION HANDLE.

And, NO, I won't do consulting in the comments to code, because this will easily turn into a situation where I need to teach you how to use function handles in multiple comments.

Reihaneh Mostolizadeh: Thank you for this file.

I am running a program with the usage of this file, but I get the following error:

" Error using paramfittreat7D>Sfun7D

Too many input arguments.

Error in fminsearchbnd>@(x,varargin)fun(xtransform(x),varargin{:}) (line 233)

intrafun = @(x, varargin) fun(xtransform(x), varargin{:});

Error in fminsearch (line 200)

fv(:,1) = funfcn(x,varargin{:});

Error in fminsearchbnd (line 264)

[xu,fval,exitflag,output] = fminsearch(intrafun,x0u,options,varargin);

Error in paramfittreat7D (line 30)

[bmin, Smin] = fminsearchbnd(@Sfun7D,b,Lb,Ub,options);

"

And when I add "varargin" to the fminsearchbnd call, I get another error:

" Attempt to execute SCRIPT varargin as a function:

/Applications/Matlab/MATLAB_R2018a.app/toolbox/matlab/lang/varargin.m

Error in paramfittreat7D (line 30)

[bmin, Smin] = fminsearchbnd(@Sfun7D,b,Lb,Ub,options, varargin); " .

Could I request you please help me in this case? Thanks in advance!

Mareedu.veera rao:

clear; clc; close all

fid = fopen('vvrk.txt');

data=fscanf(fid, '%g %g', [2 inf]);

data=data';

xdata=data(:,1);

ydata=data(:,2);

x0 = [120 100 98];

% Zk is in the first column of the Excel xls file.

% In the experimental data, Zk is in the unit of millimeter, convert Zk in unit of meter.

% Set the initial points for floating parameters for P(O)T2P(O) and P(S)T2P(S). x(1), x(2) and

% x(3) are TPA, singlet and triplet ESA cross sections. For P(Se)T2P(Se) compound, floating

%parameters becomes x(1), x(2) , x(3) and x(4).

opts = optimset('fminsearch');

opts.Display = 'iter';

opts.TolX = 1.e-12;

opts.MaxFunEvals = 100;

myfun = @(x,xdata,data)FiveLevelSystem_ps_ns;

% Use function (FiveLevelSystem_ps_ns) to calculate the sum of SSE of 6 data bases.

%curvefitoptions=optimset('UseParallel','always','Display','iter','TolFun',1e-6);

%['TolFun',1e-6]--Terminate the function when changing sum of squares is less than 1e-6.

[x,fval,exitflag,output]=lsqcurvefit(myfun,x0,xdata,ydata,[0,0,0],[],opts);

%Outer most loop 5, find a set of values (x(1), x(2), x(3)) until a minimum SSE was obtained;

% x(1), x(2) and x(3) are TPA, singlet and triplet ESA cross sections.

% fminsearchbnd attempts to find a minimum of a scalar function of several variables, starting at

%an initial estimate x0.

% fminsearchbnd calls the function ‘myfun=@(x,xdata,num)FiveLevelSystem_ps_ns(x,xdata)’

%to compare the result of function ‘FiveLevelSystem_ps_ns’ for different set of values (x(1),

%x(2), x(3))

disp(' x1 x2 x3 ')

format long; disp(x)

I am running this program, but it is taking too much time (one day or more). Can you tell me what the problem is?

ChuanPeng Hu: Thank you very much!

But I found that I can't download the file. I right-clicked "save as", but it showed the error "failed - No file".

Can you please help me out?

Thanks again.

Jordy van der Pol: Great! Thanks.

Birk Andreas: Thank you very much!

Jakob Rabjerg Vang: I have used this tool a lot. Thank you for the contribution.

Jacklyn Buhrmann: Thanks for writing this function. Is it possible to set the options (like Display, TolFun, etc.) for the underlying fminsearch?

Jdeen: Is it possible to optimize a function whose domain is discrete? Or rather, is it possible to use the fact that I am only interested in discrete values for the parameters to be optimized?

John D'Errico: I don't have Octave, so I cannot program for a tool I have never used and to which I have no access.

The params argument is there to satisfy users who are using older releases of MATLAB and need that option. You can go through and edit the code, removing that argument. Note that for those who choose to modify code, it becomes your code, in the sense that I cannot support modified code.

Hafiz Muhammad Luqman: Hi,

I want to use this function (fminsearchbnd) in Octave, but there it gives an incompatibility issue with fminsearch's input arguments. At line 229, when calling fminsearch, the 'params' structure is also passed; Octave's fminsearch does not accept this. Is there any solution for this?

I want to use this while using the Total Least Squares package.

Seth: As always, John D'Errico's contribution is among the best the File Exchange has to offer. Thank you for the very nice work.

Specific note: I am estimating parameters of an ODE with (13) total variables. I know it's not wise to use this many variables as the author explicitly states in this comments section - however every other method I have tried in Matlab fails. Simplex methods are the only method I have found to work well for my specific problem and this is a nice code that helps with this solution. (Optimization toolbox included - fmincon, fminunc, etc. - but did NOT try "Global Optimization Toolbox").

Does it take a lot of iterations to solve? Yes. However that's better than getting non-usable results!

Yan: Hi,

I want to make code generation from fminsearchbnd. As fminsearch is able to do that, I expected the same from fminsearchbnd or fminsearchcon too.

Unfortunately, I'm getting the following error:

Nested functions are not supported.

Function 'fminsearchbnd.m' (#1439.6496.6510), line 238, column 19:

"outfun_wrapper"

Could someone help to overcome that issue?

You can also check it using :

coder.screener('fminsearchbnd')

Thanks.

Yannick

Vince: Excellent, very well documented.

Glenn Gomes: Outstanding.

John D'Errico: Jakob - sorry, but no.

You have a 7 dimensional problem, with 7 unknowns. 7 unknowns is very near the upper limit of what is possible using a tool like fminsearch. Even that is pushing the limits for good performance.

If you wish to solve several of these problems in parallel, say 2 or 3 or 10 of them, you would effectively have a 14 or 21 or 70 dimensional problem.

Just use a loop. Even if these tools were smart enough to know that what you wanted was to solve multiple problems with different starting points, all they would be able to do is set up an internal loop anyway. There would be absolutely no gain.

Jakob Sievers: I have a 7-parameter problem which I solve using fminsearchbnd. I am wondering if it might be possible to solve for several x0 by giving them as a matrix rather than one vector at a time. My evaluated function supports vectorized inputs like this at least. Obviously this problem relates more to fminsearch than to your fminsearchbnd submission, but I am wondering what your thoughts are on this?

Alexandre Walker: Very nice. Worked like a charm with the added constraints!

John D'Errico: With multiple constraints, you need to return a vector of constraint values, not each constraint as an extra function handle argument at the end. This is consistent with the fmincon approach, although I do not allow equality constraints in fminsearchcon.

So assuming my editing fingers worked correctly, more like this:

parag = fminsearchcon(@(para)fitfunction, para0, lb, ub, [], [], @(para) [para(3)/sqrt(para(1)*para(2))/2-1, para(9)/sqrt(para(7)*para(8))/2-1, -para(3)/sqrt(para(1)*para(2))/2+1, -para(9)/sqrt(para(7)*para(8))/2+1]);

Miao Yu: Thanks a lot for the fminsearch codes.

I am using fminsearchcon for some nonlinear inequality functions like:parag=fminsearchcon(@(para)fitfunction,para0,lb,ub,[],[],@(para)para(3)/sqrt(para(1)*para(2))/2-1),@(para)para(9)/sqrt(para(7)*para(8))/2-1,@(para)-para(3)/sqrt(para(1)*para(2))/2+1,@(para)-para(9)/sqrt(para(7)*para(8))/2+1);

I need four nonlinear conditions in this search, but when I type it like this, MATLAB gives the error "Too many input arguments" at @(para)para(3)/sqrt(para(1)*para(2))/2-1. So how should I write this statement here?

Warwick: Very useful. The feature allowing one or more of the x0 parameters to be fixed is a nice bonus.

John D'Errico: Sergio - the fact is, minimizing the negative of a function will cause a minimizer to try to maximize it. Period.

My guess is that while you think you introduced a negative sign in there, in fact you really did not. Perhaps you still passed in the original function; this is a common mistake. Or perhaps you did something different. Of course, without you showing what you did, I have no idea what you actually did, and I cannot know how you did it wrong.
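The point is easy to verify in a few lines. A Python sketch, with SciPy's Nelder-Mead standing in for fminsearch (not part of this package):

```python
from scipy.optimize import minimize

f = lambda x: -(x[0] - 2.0) ** 2 + 5.0   # concave; maximum value 5 at x = 2

# Negate *inside the handle passed to the solver*. A common mistake is to
# negate somewhere else and still hand the solver the original f, which
# then gets minimized as before.
res = minimize(lambda x: -f(x), [0.0], method='Nelder-Mead')
xmax, fmax = res.x[0], -res.fun
```

Here `xmax` comes out near 2 and `fmax` near 5, i.e. the maximizer and maximum of the original `f`.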

Sérgio Silva: Thanks for the functions.

However, I want to use them to maximize a function.

I tried to introduce a minus (-) into my function, but it didn't solve my problem; fminsearchbnd is still minimizing my function.

Best regards,

Sérgio

Greig: Thanks for the info, and thanks again for the excellent functions.

John D'Errico: Greig - sorry, no.

Inequality constraints were easy enough to put into the code, in the form of a penalty function, i.e., slap its hands if it wanders outside of the boundaries. As long as the code starts at a feasible point, the penalty function ensures it stays feasible.

However, for equalities it makes more sense to use a scheme that will keep all points on the equality constraint manifold. For this, penalty functions will not work, and so I chose to not offer the capability of equality constraints as an option.
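The inequality mechanism described here — penalize any point that wanders outside the feasible region, starting from a feasible point — can be sketched in a few lines of Python. This is the idea only, with hypothetical names and SciPy's Nelder-Mead in place of fminsearch, not fminsearchcon's actual code:

```python
import numpy as np
from scipy.optimize import minimize

def fminsearchcon_sketch(fun, x0, nonlcon, penalty=1e8):
    # "Slap its hands": any point with a constraint value c(x) > 0 gets a
    # huge penalized objective, so a search started feasible stays feasible.
    def penalized(x):
        viol = np.maximum(np.atleast_1d(nonlcon(x)), 0.0)
        if viol.any():
            return penalty * (1.0 + viol.sum())
        return fun(x)

    res = minimize(penalized, x0, method='Nelder-Mead',
                   options={'xatol': 1e-10, 'fatol': 1e-10, 'maxiter': 10000})
    return res.x

# Minimize x^2 + y^2 subject to x + y >= 1, written as c(x) = 1 - x - y <= 0,
# starting from the feasible point (1, 1).
xbest = fminsearchcon_sketch(lambda v: v[0]**2 + v[1]**2,
                             [1.0, 1.0],
                             lambda v: 1.0 - v[0] - v[1])
```

The search walks down to the constraint boundary and stops near (0.5, 0.5); note the returned point is always feasible, exactly because infeasible points never win a simplex comparison.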

Greig: Hi John,

Thanks for some very useful tools.

One question... does fminsearchcon handle nonlinear equalities, like fmincon, or does it deal only with inequalities?

Cheers

John D'Errico: It uses fminsearch, which is part of core MATLAB, not the Optimization Toolbox. If you already had the Optimization Toolbox, I'd just tell you to use fmincon.

Seb Biass: Hey, thanks for that. I was just wondering: does it work without the Optimization Toolbox?

Thanks

John D'Errico: I have no idea why you are having a problem installing it, after apparently many days of trying to do so. I can't tell you much more than what I've already said.

1. Click on the button that says DOWNLOAD SUBMISSION. Use your mouse to do so.

2. Unzip the file. There are MANY utilities that will unzip a file. In fact, MATLAB itself contains an unzip tool. Use it.

3. Add the resulting top level directory to your search path. Use either pathtool or addpath to do this.

I don't know which of the above things has caused you a problem, but they all seem pretty basic.

I have NO idea what you are asking about how to use an if statement in this context.

Finally, I'm sorry, but I won't go into depth about the differences between fminsearch and fmincon. That would take far too much time to do on my part. There are many places online where Nelder/Mead tools are explained for you to learn about FMINSEARCH. As far as fmincon goes, start with reading the doc for it. If you bother to look at the bottom, you will find references.

Nathalie: Hi, I'm struggling with how to install it.

I'll be thankful if you help me with that.

Also, could I instead write a loop with 'if' statements, imposing constraints/conditions on the values my parameters can take, to double-check my constrained problem? And why might 'fmincon' work worse than 'fminsearch'? I can't find the mechanics of how these two commands work. Thank you in advance.

John D'Errico:

1. Download it.

2. Install it as directed.

3. Read the help, look at the examples.

What else can I possibly say?

Nathalie: Hi, how do I use this command?

Seb Biass: Always good to see MathWorks relying on users to provide such functions...

Kees de Kapper: Dear John,

Thank you for your contribution to the community.

I’ve got a question regarding the initial step. Probably it is related to the Q/A by Nguyen, but I’m not completely sure.

If the initial value is at the boundary, the initial step is very small and therefore the optimization is stuck at the boundary value (in case of my optimization function). If the initial value is middle of the boundaries, the initial step is reasonable, and the optimization can walk to the “global” minimum (which can be close to one of the boundaries). Apparently there is scaling of the step size between the boundaries.

Is it possible to avoid this scaling?

Thanks in advance,

Kees

Nguyen: Dear John,

Thank you very much for your last help.

May I have a further question about algorithm?

Regarding the Trust-Region-Reflective algorithm (minimization, for example), its four basic steps are repeated until convergence.

What is the relationship between this convergence and TolFun or TolX?

Thank you again!

Jakob Sievers: Thanks John! This resolved a problem I had been dealing with, very nicely!

Christophe: Dear John,

No problem, thank you so much for taking the time to answer my question. So very much appreciated. I am using fminsearchbnd a lot and have also included rmsearch in my list of tools for optimization. Thank you so much for making these tools available!

John D'Errico: Hi Christophe,

The problem is, these tools are really just wrappers around fminsearch, which uses only a restricted set of parameters. From the help for fminsearch, I see only:

fminsearch uses these options: Display, TolX, TolFun, MaxFunEvals, MaxIter, FunValCheck, PlotFcns, and OutputFcn.

So I would never be able to control the problem as you desire. Sorry. Even that limited set of options becomes corrupted, since fminsearchbnd uses a transformation, which will prevent you from controlling things with TolX as it is designed.

John

Christophe: Dear John,

So many thanks for this program. I was wondering if there was an easy way to modify the fminsearchbnd to be able to handle the input of an argument like ( 'DiffMinChange',1e-2 )? My variables are in millimetres and I would like to have the variables changing by at least 0.01mm.

Once again, thank you for fminsearchbnd!

Nguyen: Thank you so much for your support.

I have a total of 7 variables.

With support from my friend, we tried to divide the constrained range of each variable into n values.

We carried out n^7 local optimizations and chose the best.

We tried to globalize the local minimization function.

However, we hope to find other ways/tools.

John D'Errico: How clear can I make this? These are search routines, just like fminsearch, fmincon, fsolve, etc. In fact, the tools I have supplied are based on fminsearch. All of those tools search for a local minimizer, based on their starting point.

If you choose a good starting point, then you get a good solution. Your knowledge of the system you will optimize will always help.

NO such tool can EVER be assured of finding a global optimum on a completely general function. Period.

If you will insist on positively finding a globally optimal solution, regardless of your starting point, then these are the wrong tools for you. Of course, unless you can be assured that your function has some nice properties, global optimization can be a terribly tricky problem. Good luck in that.

At best, a scheme that has some decent properties is to start with many random starting points, then choose the best of the solutions that result. This is implemented in my rmsearch tool, also found on the File Exchange, although it is fairly trivial to do on your own. A virtue of such a stochastic scheme is that if you start in the basin of attraction of the global optimizer, then it should converge to that solution. So as long as that basin is not so small that it is never visited, you can succeed here. And because of the Monte Carlo starts, you can even assign a probability that the global solution was found. But certainty? Sorry, no. Again, good luck.
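The multistart scheme described here — random feasible starts, keep the best local result — takes only a few lines in any language. A Python sketch (not the rmsearch code itself), using SciPy's Nelder-Mead as the local search:

```python
import numpy as np
from scipy.optimize import minimize

def multistart(fun, lb, ub, nstarts=20, seed=0):
    # Draw random starting points inside the bounds, run a local search
    # from each, and return the best local solution found.
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    best = None
    for _ in range(nstarts):
        x0 = lb + (ub - lb) * rng.random(lb.size)
        res = minimize(fun, x0, method='Nelder-Mead')
        if best is None or res.fun < best.fun:
            best = res
    return best

# Double well with unequal depths: local minima near x = +2 and x = -2;
# the global minimum sits near x = -2.03 with f about -2.02.
f = lambda x: (x[0]**2 - 4.0)**2 + x[0]
best = multistart(f, [-4.0], [4.0])
```

Any start landing in the left basin converges to the global solution, so with enough random starts the probability of missing it shrinks quickly; that is exactly the probabilistic guarantee (and its limit) described above.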

Finally, you say only that you have MORE than 4 variables. How many more? My advice has always been that too many variables make tools like fminsearch work poorly. I have suggested that 6 variables is a reasonable upper limit, but I have heard of people solving larger problems.

Nguyen: Thank you very much for your file.

But I still have an unclear issue regarding fminsearchbnd.

When I change the starting value x0, the fval result changes (a lot).

So my initial conclusion is that your function finds a local minimum that depends on the starting guess.

Is that right?

In that case, please help me: how do I find the global minimum between the lower and upper boundaries of the variables (more than 4 variables)?

Again, thank you very much.

Alvaro Campos Duque: Yeah, it works =) Thank you so much!!

John D'Errico: Fminsearchcon uses your objective function, which for a 5-parameter model must take vector input. Thus it will pass the objective function a vector x that has 5 elements here. The constraint function must then also accept a vector of length 5. So your call to fminsearchcon might look like this:

[x,fval]=fminsearchcon(@fit,x0,lb,ub,[],[],@(x) 2.3*(x(4)^2+x(3))^0.5-x(5)-x(2));

Alvaro Campos Duque: Congrats again on the useful function, John. I just have a question: I'm trying to use fminsearchcon on a 5-parameter function I've created:

function error=fit(parameters)

x1=parameters(1);

x2=parameters(2);

x3=parameters(3);

x4=parameters(4);

x5=parameters(5);

(...) Here there is a long script but, at the end, the important thing are the input and outputs (...)

What I want is to find the parameters that optimize the error function. These parameters are interrelated by the nonlinear constraint:

-2.3*(x4^2+x3)^0.5+x5+x2>0

The problem is that I don't know how to implement this constraint; the examples I have seen use just one parameter. I have tried this:

x0=[0.5,0.05,0.05,1,0.5];

lb=[0,0,0,0,0];

ub=[2,1,1,10,5];

[x,fval]=fminsearchcon(@fit,x0,lb,ub,[],[],@(x2,x3,x4,x5) 2.3*(x4^2+x3)^0.5-x5-x2)

But, of course, it doesn't recognize x2, x3, x4, or x5, because they are defined nowhere.

Any clue?? Thanks in advance! :)

Alvaro Campos Duque: Congratulations John, thank you so much for such a good job!

Jeff Warner: Congratulations on a great job creating this function; it is something I have been searching for. I used fminsearch in the past, but it is not a very robust fitting routine because you cannot constrain the fitting parameters, so I would not always obtain a good fit.

Now that I am using your fminsearchbnd, every time I run the program on the same file the fitting result is close to the previous fits.

John D'Errico: Alfonso - sorry, but I strenuously refuse to answer questions that are not directly related to a submission. The comments field really is not for consulting, which, by the way, is something I now avoid anyway. Your question relates only peripherally to this tool.

Alfonso: Hi again,

thanks for your "non-answer".

My previous comment was just a general question about box-constrained optimization using variable transformation, not a criticism of your code (which, btw, I rate 4 stars).

Though it is just a wrapper, I have no doubts about the quality of your work.

However, in the real world, multidimensional problems are very common, and they need to be solved without waiting years to converge to the minimum.

In this context, gradient-based methods (e.g. L-BFGS, SD, or CG) can be used together with variable transformation.

Then, I ask you again: how can we avoid getting stuck on the boundary of the constraints (where the transformation gradient is 0)?

This is not a trivial problem, and it can lead to the optimization stopping early at a sub-optimal saddle point.

Regards, Alfonso

John D'Errico: (Part 3, continued from my previous responses.)

So, has fminsearch succeeded in this task? Well, yes, a qualified yes. It was fairly slow. For comparison, let's try fminunc from the Optimization Toolbox.

>> [xfinal,fval,exitflag] = fminunc(fofx,x0,optimset('display','iter'))

                                                  First-order
 Iteration   Func-count       f(x)    Step-size    optimality
     0           26           2500                     20
     1           52           2025       0.05          18
     2           78              0          1    1.49e-08

Local minimum found.

Optimization completed because the size of the gradient is less than
the default value of the function tolerance.

<stopping criteria details>

xfinal =
     0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0

fval =
     0

exitflag =
     1

Fminunc took 0.014 seconds to run, requiring 2 iterations and 78 function evaluations. When done, the solution was composed of exact zeros.

>> format long g

>> max(abs(xfinal))

ans =

0

As I said, this was a really really REALLY simple problem. There is no curved valley to follow, as many optimization problems will have. There are no bounds or constraints here to bounce away from. Did fminsearch succeed on that problem? Well, yes, but it took some effort. There are better tools to be found, but fminsearch based tools will manage to stumble along. I generally advise that fminsearch is perfectly fine for small problems. For 2 or 3 variables, fminsearch is often my "go to" algorithm. (I'll even use it on 1 variable problems at times.) There is no reason to pull out a big gun there, and fminsearch has some very nicely robust qualities about it for small problems. It is insensitive to some irregularities in your function, and can even survive some things like derivative discontinuities that may make gradient based optimizers unhappy campers.

Up to about 6 variables it is still ok. I tend to be careful above 6 variables or so, and that limit is pretty problem dependent, pretty soft. Even 10 or 12 variables might be ok.

But really, by the time a problem has more than 10 variables, I'll be looking at a tool from the Optimization Toolbox. fminunc and fmincon will be common choices here. Even better is lsqlin for the problems that really are "linear" in nature. As such, you can almost think of lsqlin (or quadprog) as not being iterative solvers at all, in the sense that you don't need to provide starting values.

So fminsearch and the derived tools I provide here are good, workable tools. These are NOT toy functions, but they have their limits. They work splendidly on smaller problems, giving you an easy way to implement simple constraints. They MAY do a reasonable job for your problem, but I won't push them too far. Even as the author of fminsearchbnd, the very first toolbox I would recommend buying is the optimization toolbox if you do at all many optimizations of any size.

John D'Errico: (Part 2, continued from the last response.)

Let's try being more explicit here. Vary the MaxFunEvals parameter.

>> [xfinal,fval,exitflag] = fminsearch(fofx,x0,optimset('maxfunevals',1000))

Exiting: Maximum number of function evaluations has been exceeded

- increase MaxFunEvals option.

Current function value: 887.434654

xfinal =

Columns 1 through 12

-2.923 12.909 3.1561 3.0168 -1.2408 3.0691 4.9753 2.4416 7.3143 -2.3687 3.5243 5.8719

Columns 13 through 24

2.311 -10.707 6.0148 0.45126 11.192 4.6186 3.3215 5.484 3.3361 4.2973 1.9503 12.542

Column 25

3.2774

fval =

887.43

>> [xfinal,fval,exitflag] = fminsearch(fofx,x0,optimset('maxfunevals',2000))

Exiting: Maximum number of function evaluations has been exceeded

- increase MaxFunEvals option.

Current function value: 64.925968

xfinal =

Columns 1 through 12

1.1789 -1.8439 1.2661 -1.4893 0.20456 3.6405 -0.75783 0.82694 -0.63308 -0.96542 1.5783 1.6521

Columns 13 through 24

-0.07538 -2.409 0.76939 1.9824 -1.9948 0.87846 -0.94908 2.7612 0.351 -0.74091 0.32458 3.1321

Column 25

1.0073

fval =

64.926

>> [xfinal,fval,exitflag] = fminsearch(fofx,x0,optimset('maxfunevals',4000))

Exiting: Maximum number of function evaluations has been exceeded

- increase MaxFunEvals option.

Current function value: 5.887733

xfinal =

Columns 1 through 12

-0.039379 -0.62272 0.3608 0.033666 -0.3333 -0.43402 -0.30185 0.16978 0.106 0.41576 0.3982 -0.15116

Columns 13 through 24

0.48606 -0.83993 0.66158 0.95052 0.20017 0.77219 -0.80237 0.57812 0.10016 0.62498 -0.36942 -0.37967

Column 25

0.06037

fval =

5.8877

>> [xfinal,fval,exitflag] = fminsearch(fofx,x0,optimset('maxfunevals',8000))

Exiting: Maximum number of iterations has been exceeded

- increase MaxIter option.

Current function value: 1.364211

xfinal =

Columns 1 through 12

0.0085952 -0.12505 0.034718 -0.21946 -0.16216 0.020216 0.21074 -0.054082 0.25911 0.16582 0.41984 0.15747

Columns 13 through 24

-0.049798 -0.12582 0.45415 0.4152 -0.093939 0.38964 -0.40392 -0.12336 0.055191 -0.077377 -0.39227 -0.14665

Column 25

-0.094889

fval =

1.3642

Ok, so after 8000 function evaluations, now it is hitting the default iteration limit.

mean(abs(xfinal))

ans =

0.18638

I can really let it go wild. For example, this next example took under 2 seconds to solve on my machine:

>> [xfinal,fval,exitflag] = fminsearch(fofx,x0,optimset('maxfunevals',100000,'maxiter',100000))

xfinal =

Columns 1 through 12

0.00012307 2.9632e-05 -5.5942e-05 -6.4526e-05 -0.00013126 -0.00019728 -8.2417e-05 3.2675e-05 8.4082e-05 -5.6463e-06 -0.00017336 -0.00015729

Columns 13 through 24

-4.7605e-05 0.00023687 -1.0958e-05 0.00019857 -9.4663e-05 -3.9436e-05 6.2949e-05 -5.8151e-05 4.9917e-05 -8.9443e-05 1.8443e-06 0.00011523

Column 25

3.8857e-05

fval =

2.9028e-07

exitflag =

1

>> mean(abs(xfinal))

ans =

8.7266e-05

All of the other examples had exitflag==0 by the way. It finally went far enough that fminsearch is tripping on its convergence tolerances, rather than on the function or iteration limits. It thinks it may have converged!
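The whole experiment above can be condensed into a loop. This is a sketch that reuses the fofx and x0 defined in part 1, and also raises MaxIter so that only the evaluation budget binds:

```matlab
% Sketch: rerun the MaxFunEvals experiment as a loop.
fofx = @(x) sum(x.^2,2);
x0 = repmat(10,1,25);
for maxfe = [1000 2000 4000 8000 100000]
    opts = optimset('MaxFunEvals',maxfe,'MaxIter',maxfe,'Display','off');
    [~,fval,exitflag] = fminsearch(fofx,x0,opts);
    fprintf('MaxFunEvals = %6d   fval = %-12.6g  exitflag = %d\n', ...
        maxfe, fval, exitflag)
end
```

Only the last, generous budget should yield exitflag = 1 (converged); the others stop on the budget with exitflag = 0.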

(End of part 2. Part 3 to follow.)

John D'Errico: Hi Jessica,

The problem is, fminsearch tools are simply not very efficient at searching spaces that are at all high-dimensional. The polytope flops around, expanding, contracting, etc., but the human mind simply does not appreciate how large a 25-dimensional space is. That makes sense, because we are built to visualize things in 3 dimensions, so we tend to think of everything in terms of our experience.

I'll give a simple example that illustrates the problem. Consider a perfectly circular well in 25 dimensions. I don't need to worry about bounds or anything, because the characteristics won't change in any material way.

The function is a paraboloid of revolution, centered at the 25 dimensional origin, and I'll start out at repmat(10,1,25) so it has to go a little ways, but this is a really simple problem. How does fminsearch handle it?

>> fofx = @(x) sum(x.^2,2);

>> x0 = repmat(10,1,25);

>> [xfinal,fval] = fminsearch(fofx,x0)

Exiting: Maximum number of function evaluations has been exceeded

- increase MaxFunEvals option.

Current function value: 3.762363

xfinal =

Columns 1 through 12

0.15208 -0.37597 0.47583 -0.4156 -0.47475 -0.22822 0.30263 0.15141 0.25204 0.45515 0.48452 -0.16627

Columns 13 through 24

0.18958 -0.65985 0.55836 0.54621 -0.012447 0.32472 -0.55204 0.29942 0.13633 0.28518 -0.69201 -0.25536

Column 25

0.21961

fval =

3.7624

As you can see, starting at a vector of all 10's, it got down to something that averaged roughly 0.35, so it was an improvement.

mean(abs(xfinal))

ans =

0.34662

So by the time it had run out of iterations, it had gone 96.5% of the way it had to go along a straight line, so some might say it has done pretty well. On the other hand, we can also say that all it has done is improve the starting estimate by just enough to get the first decimal digit correct.

(End of part 1 of a very long response.)

Jessica Piper: Hi John, love fminsearchbnd, thank you! Just a quick question. I've been using fminsearchbnd on a problem with 26 variables, and I've been getting good results. But based on your comments below, I worry I'm just getting lucky! What algorithm would you suggest for larger problems? (I'm fitting data from an optics experiment, but the objective is nonlinear, and the problem isn't even quasi-convex.)

John D'Errico: Alfonso, it appears that you totally misunderstand how these tools work. fminsearchbnd and its cousin are based on fminsearch, a Nelder-Mead (polytope) optimizer. In fact, they call fminsearch to do the actual work. Fminsearch is NOT a gradient based tool. No gradient is EVER computed or even approximated. As such, they tend not to get stuck at such a point unless that is where the optimization drives them. You might want to do some reading about how the class of Nelder-Mead style tools work before you judge the algorithm.

Any optimization is of course subject to problem specific issues. Singularities, ill-conditioning, and multiple local solutions are all problems. But that point at the boundary is not as much an issue as you seem to fear.
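John's point can be made concrete with a minimal sketch of the single-bound quadratic transform. This is a simplified illustration of the idea, not the actual fminsearchbnd implementation (which also handles upper and dual bounds):

```matlab
% Sketch: enforce x >= lb by optimizing over a free variable y,
% with x = lb + y.^2. The objective can never see x below lb,
% and Nelder-Mead needs no gradient, so dx/dy = 0 at y = 0 is harmless.
obj = @(x) (x - 3).^2;                     % toy objective, minimum at x = 3
lb  = 0;
y   = fminsearch(@(y) obj(lb + y.^2), 1);  % unconstrained search over y
x   = lb + y.^2                            % back-transform: x is ~3, inside the bound
```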

Alfonso: Hi John,

One simple question for you.

When you apply a quadratic transformation x = y^2 for x >= 0 (btw, the same question holds for sin(x)), how do you prevent the optimizer from getting stuck on the boundary (y = 0)? Indeed, if the actual solution is not on the boundary, but during the optimization iterations your gradient-based optimizer arrives at a point that lies on the boundary, it cannot improve towards the minimum, since the transformation gradient is 0 there (dx/dy = 2y = 0).

I hope I was clear enough,

thanks in advance for your kind answer.

Alfonso

John D'Errico: Kirk - when you change the bounds, you change the implicit transformation. It will still find a minimizer, but the problem has been changed, so there is no absolute guarantee it will converge to the same solution. When you have a problem with multiple local minimizers, any optimization tool can suffer from this issue.

Kirk Smith: I found some odd behavior. I had bounds of [0,1] on 7 variables. My last 2 came out to be below 0.01. Next I changed the bounds for these last two variables to [0,0.4] and I got a new set of values.... Shouldn't these be the same, seeing as how in the first run they never even got close to 0.4?

Anyway, I just added a penalty like e*(max(x-1,0)+max(-x,0)) for my ub and lb with the original fminsearch and it worked great. (Choose e large enough.)
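Kirk's penalty idea can be sketched like this for box bounds [0,1] on each variable. The objective here is a toy stand-in; e is the penalty weight he mentions:

```matlab
% Sketch: soft box bounds [0,1] via a large linear penalty, then plain
% fminsearch. The penalty is zero inside the box and grows outside it.
e   = 1e6;                                   % penalty weight; choose large
pen = @(x) e*sum(max(x-1,0) + max(-x,0));    % zero for all x in [0,1]
obj = @(x) sum((x - 0.5).^2) + pen(x);       % toy objective plus penalty
x   = fminsearch(obj, 0.9*ones(1,7))         % minimizer stays inside the box
```

Note that unlike fminsearchbnd's transformation, a penalty of this kind does allow function evaluations slightly outside the bounds during the search.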

John D'Errico: I'm sorry Pietro, but if you are using a Nelder-Mead solver to solve a 20 variable problem, you are wasting your time and I'm afraid mine too. I will not try to debug that mass of code for a problem that should never be thrown at this tool anyway.

pietro: John, here you can find my code....

http://www.mathworks.se/matlabcentral/newsreader/view_thread/320630#879608

John D'Errico: Pietro - without a specific example that shows there is a problem, I cannot know what is happening. You may be mistaken about whether the start point truly does satisfy the constraints. You may be supplying the constraints or the objective itself improperly. There may be a bug. Or, perhaps your starting point is right on a constraint boundary, or even epsilon over that edge. How can I guess? Again, fmincon does NOT require a feasible solution to start, although that surely helps. So that fmincon succeeds is irrelevant as these are different algorithms.

pietro: The starting point provided satisfies all the constraints.

John D'Errico: Fmincon uses a different algorithm that can better handle infeasible starting values. The style of penalty used by fminsearchcon fails there, so you need to provide a starting point that at least satisfies your constraints.

pietro: Thanks John! I got this message:

Infeasible starting values (nonlinear inequalities failed).

But with the same starting point, the optimization works with fmincon. Do you have any suggestions for me?

thanks.

Best regards

Pietro

John D'Errico: Pietro - function handles make it easy to pass parameters in. For example, suppose you wish to find the minimum of the function (x-a)^2 for some fixed value of a. (Yes, you and I know that the min happens at a, but the computer does not, and I'm feeling too lazy to be more creative.)

myfun = @(x,a) (x-a).^2;

x0 = 1;

% Solve for the unconstrained min, with a = 12.

[xmin,fmin] = fminsearchbnd(@(x) myfun(x,12),x0)

xmin =

12

fmin =

6.18466949693273e-28

The above example really just passes the call directly into fminsearch, but the point is how to pass in a.

% Solve for a constrained min, with a = 12.

ub = 5;

[xmin,fmin] = fminsearchbnd(@(x) myfun(x,12),x0,[],ub)

xmin =

5

fmin =

49

And of course, the min was found at the upper bound, at the point nearest to a that was allowed.

pietro: Great function. How can I pass extra parameters to the objective function using the function handle?

Davide: Thanks John! I fixed it.

Thanks again.

John D'Errico: I should point out that these tools use fminsearch as the optimizer, just providing an overlay on top to implement constraints. That means that any parameters that fminsearch ignores are ignored here too. So DiffMinChange and DiffMaxChange are completely irrelevant.

I can't really test out what is happening since I don't have any data. But I looked at the code, and usually when you get a nan as a result, it comes from some parameter going into a bad place.

So I'd put some debugging effort into the code, letting you drop in with the debugger as soon as a NaN gets generated. Do this before you execute the code:

dbstop if naninf

I'm a bit confused why you are calling det on a diagonal matrix too. That would be far more efficiently done as simply the product of the elements. And since you are then computing the log of the determinant, do it as the sum of the logs of the elements, avoiding potential underflows or overflows.
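The determinant point can be seen in isolation. For a diagonal matrix the determinant is the product of the diagonal, so its log is the sum of the logs, and the sum does not underflow where the product does:

```matlab
% Sketch: log(det(D)) for a diagonal D with tiny positive entries.
D = diag([1e-200 1e-150 1e-100]);
log(det(D))          % -Inf: det underflows to 0 in double precision
sum(log(diag(D)))    % about -1036.2, the correct value of log(det(D))
```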

Anyway, before you do any optimizations, always test your objective function. Make sure that it does what you expect, that it yields valid results, and that the results change when you perturb the parameters. Next, when you run an optimizer and you see strange things happening, set the "Display" option in the optimizer to 'iter'. In fact, I always do this on every optimization the first time I run one for any problem. Verify that it is doing something intelligent. Are the parameters changing? (Note that fminsearchbnd transforms the parameters, so that fminsearch is working with different numbers than you expect.)

And, finally, 9 parameters is at the very upper limit of what I'd ever recommend for a Nelder-Mead based tool, but it should run, albeit a bit slowly.

Davide: Dear John, very nice function! It worked very well. I have used it in my optimization problems. However, in my last code I get a problem. The output is NaN and the function parameters seem not optimized. Perhaps you may help me... Here is my code. I get the necessary data from an Excel table.

I cannot figure out where the problem is. Thanks in advance! Cheers

function [H,par,output]=mygarch()

format short

data=xlsread('select3stocks.xlsx',2,'A1:D249');

data1=data(1:149,:);

data2=data(150:199,:);

[L1, L2]=size(data2);

C0=[0.01,0.02,0.03]';

A0=[0.04,0.02,0.06]';

B0=[0.05,0.09,0.03]';

matr=[C0,A0,B0];

par0=reshape(matr,1,9);

X=zeros(L1-1,L2);

m=mean(data2);

X(1,:)=data2(1,:)-m;

s=0;

H=zeros(L1-1,L2);

H(1,:)=(std(data1)).^2;

lb=zeros(1,9);

ub=ones(1,9);

options =optimset('MaxIter',50000,'TolX',1e-10,'DiffMaxChange',0.00005,'DiffMinChange',0.00001,'MaxFunEvals',10000000);

[par,output]=fminsearchbnd(@maxlikelyhood,par0,lb,ub,options);

function y=maxlikelyhood(par)

C=par(1:3); A=par(4:6); B=par(7:9);

for i=2:L1

for j=1:L2

H(i,j)=C(j)+A(j)*X(i-1,j)^2+B(j)*H(i-1,j);

end

X(i,:)=data2(i,:)-m;

D=diag(H(i,:));

s=s+3*log(2*pi)+log(det(D))+X(i,:)*D.^(-2)*X(i,:)';

end

y=s;

end

end

John D'Errico: (I've fixed the bug with the outputfcn handling and re-posted it. The new version should appear sometime this morning.)

As far as Stefan's problem goes, anytime you throw problems with variables that vary by 20 to 30 powers of 10 at ANY numerical optimizer, expect serious things to go wrong. This is a no-no for literally ANY numerical tool. And setting a convergence tolerance (TolX) to eps is just as silly, especially when your parameters vary by that many orders of magnitude.

Essentially, you are asking that one variable, which can apparently vary somewhere between 1 and 1e30, must be computed to an absolute accuracy of roughly 2e-16.

A common solution is to normalize your variables, so that both are at least of similar orders of magnitude. If you were trying to solve a problem where one variable had bounds of 1e20 to 5e20, and a second variable was bounded in the interval 1e-6 to 1e-5, then normalizing the variables to both be of order 1 would make complete sense. But your variables each vary by many powers of 10!

One thing all novices need to learn about computing: when things vary by that many powers of 10, use logs! Let fminsearch vary the log10 of your variables, so they now have bound vectors that look like [0, -6] for the lower bounds, and [30, 0] for the upper bounds. Now, inside your function, take the parameters and raise 10 to that power BEFORE they are used. Do the same with the output when it is returned. As far as fminsearch is concerned, your numbers are perfectly normal looking, but your objective sees the numbers in their proper scales.

You will find that any optimizer runs much more happily when you do this, as now the variables are very nicely normalized. All of the mathematics works more simply when you work in the log space. In fact, this is surely the natural way to work with variables that vary by many powers of 10. A nice consequence is the tolerances in TolX become relative tolerances on the variables, something that surely makes much more sense.

And DON'T use eps as a convergence tolerance. Fminsearch will never be able to give you even a reasonable shot at convergence in even 10000 function evals with that fine of a tolerance. So what happens is fminsearch will keep on iterating until it runs out of function evaluations. Use a meaningful, realistic tolerance. You were probably only setting that fine of a tolerance because of the ridiculous variable scaling anyway.
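The log10 reparameterization John describes might look like this for Stefan's exponential fit. This is a sketch (it assumes fminsearchbnd is on the path, and the tolerance choice is illustrative):

```matlab
% Sketch: let the optimizer work on log10(G0) and log10(L0), so both
% variables are O(10) instead of spanning 20 powers of 10.
xData = 1e-6:1e-6:150e-6;
yData = 2e14*exp(-xData./3e-6);
sse   = @(G0,L0) sum((G0*exp(-xData./L0) - yData).^2);
obj   = @(p) sse(10.^p(1), 10.^p(2));   % p holds log10(G0), log10(L0)
p0    = log10([1e13, 1e-5]);            % start away from the truth
lb    = [0 -6];  ub = [30 0];           % log10 of the original bounds
p     = fminsearchbnd(obj, p0, lb, ub, optimset('TolX',1e-8));
params = 10.^p                          % back to the natural scale
```

With the variables normalized this way, the fit should recover roughly G0 = 2e14 and L0 = 3e-6, and TolX now acts as a relative tolerance on each variable.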

Stefan: The following test shows strange results:

----------------------------------

G0bnd 2.22e+014 L0bnd 2.707e-006

G0 2e+014 L0 3e-006

----------------------------------

function test()

options=optimset('Display','on', ...

'MaxFunEvals',1e4, ...

'MaxIter',1e4, ...

'TolFun', eps, ...

'TolX', eps, ...

'FunValCheck', 'on' ...

);

w= 150e-6;

xData = 1e-6:1e-6:w;

G0 = 2e14;

L0 = 3e-6;

yData = G0*exp(-xData./L0);

start_params = [G0, L0];

min_params = [ 1, 1e-6];

max_params = [1e30, 1];

plot(xData,yData,'-r');

hold('on');

result_params=fminsearchbnd(@fitG_error_function,start_params,min_params,max_params,options,xData,yData);

['G0bnd ' num2str(result_params(1),4) ' L0bnd ' num2str(result_params(2),4)]

plot(xData, result_params(1)*exp(-xData./result_params(2)),'-g');

result_params=fminsearch(@fitG_error_function,start_params,options,xData,yData);

['G0 ' num2str(result_params(1),4) ' L0 ' num2str(result_params(2),4)]

plot(xData, result_params(1)*exp(-xData./result_params(2)),'-b');

function fiterror = fitG_error_function(start_params,xData,yData)

Fitted_Curve= start_params(1) * exp(-xData./start_params(2));

Error_Vector=Fitted_Curve - yData;

fiterror=sum(Error_Vector.^2);

results.fiterror=fiterror;

end

end

Romain: Great function, very handy.

I have noticed a small error, though. If fminsearch is not called, the function returns output.funcount, whereas it returns output.funcCount otherwise.

John D'Errico: Sorry. I'd been playing with various alternatives some time ago. The name in the header will be correct when the latest update comes alive.

Martin Richard: John,

I've been using that function for years now. Love it. Perhaps you may want to change the function name so that it fits the file name in the new version (fminsearchbnd3 in new file).

John D'Errico: Hi Kathleen - I've been asked about citing many of my submissions before. We came up with a few options, detailed here:

http://blogs.mathworks.com/desktop/2010/12/13/citing-file-exchange-submissions/

Kathleen: Hi John

I use fminsearchbnd a lot and I'd like to cite it in my work. Do you have a reference you would like used?

Thanks

Kathleen

John D'Errico: Nick - fminsearchbnd is a simple optimizer, a close cousin to fminsearch. All it cares about is finding an optimum value, and it has no idea that your objective is based on a least squares estimation problem of some sort. If you need confidence bounds, a simple solution is to use a linear approximation to your function at the optimum, computing the Jacobian matrix at that point. Then it is a simple problem to compute approximate confidence intervals on the parameters. You can find this procedure explained, with an example, in my Optimization Tips & Tricks, also on the File Exchange.
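The linearization John refers to can be sketched as follows, with a toy exponential-decay fit standing in for the real problem. The residual function and the optimum p are assumptions here, standing in for whatever fminsearchbnd returned:

```matlab
% Sketch: approximate parameter covariance and ~95% confidence
% intervals from a central-difference Jacobian of the residuals.
xd = (0:0.1:2)';
yd = 2*exp(-1.5*xd) + 0.01*randn(size(xd));   % toy noisy data
resid = @(p) p(1)*exp(-p(2)*xd) - yd;         % residual vector function
p  = [2; 1.5];                                % pretend this came from fminsearchbnd
n  = numel(xd);  k = numel(p);
J  = zeros(n,k);
h  = 1e-6*(1 + abs(p));                       % finite-difference steps
for i = 1:k
    dp = zeros(k,1);  dp(i) = h(i);
    J(:,i) = (resid(p+dp) - resid(p-dp)) / (2*h(i));  % central difference
end
s2 = sum(resid(p).^2)/(n - k);                % residual variance estimate
C  = s2*((J'*J)\eye(k));                      % approximate covariance matrix
ci = [p - 1.96*sqrt(diag(C)), p + 1.96*sqrt(diag(C))] % ~95% intervals
```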

Brennan Smith: Thanks! Very useful.

Nick M.: Hi John

This is great work, and I actually used it and it worked for me. I have a question though. I used fminsearchbnd and it returned the parameters; how can I compute uncertainty for those parameters? (A covariance matrix?)

Thanks!

Christophe: Dear John,

Many thanks for this very useful program. I have a question if I may:

I deal with the optimization of "physical" problems, keeping in mind the manufacturing-accuracies available to me. I know, for example, that I cannot have my parameters (defining the geometry I optimize) accurately machined. To me, a parameter set at 12.000, 12.001 or even 12.01 would give pretty much the same value for the output function.

So, I would like to be able to ensure that the parameters are moved by at least a minimum "delta" in the optimization, say 1e-2 for example.

Is this something that fminsearchbnd can handle?

Yankun: Very useful.

ks: Thank you very much. It works great! :)

Oleg: This is great! Thank you!

Prabuddha Mukherjee: Thank you very much... very useful.

nico: Very useful... thank you!

Sahin Aktas: Excellent code...

John D'Errico: Ryan,

This is really not a bound constrained problem. Your constraint is a general one, either a linear or a nonlinear inequality constraint.

The simple solution is to use a code that allows explicit linear or nonlinear constraints. There are a few on the FEX. In fact my fminsearchcon on the FEX does that, by applying a penalty to the problem when a constraint is violated. You might also look at optimize, by Rody Oldenhuis. This code allows explicit constraints of all forms.

However, there is a simple way to solve this class of problem with fminsearchbnd. Use another class of transformation. For example, suppose one wishes to minimize the function f(x,y), subject to the "bound" constraint that x <= erf(y).

Transform your problem such that

u = x + erf(y)

v = x - erf(y)

Clearly, v must be bounded above by 0. So use fminsearchbnd to optimize over the two dimensional domain (u,v). Inside your objective function, for any value of (u,v) you will compute the parameters (x,y) as

x = (u + v)/2;

y = erfinv(u - x);

Now you can finish evaluating the function f(x,y).

The only problem here happens if you also had other fixed bounds on x and y. But many other transformations would also have worked. For example, this transformation:

u = x

v = x - erf(y)

will still allow simple bound constraints on the parameter x, as well as allowing the nonlinear constraint x <= erf(y) as a bound constraint.

John
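The (u,v) substitution John outlines can be sketched as a small function file. The objective here is a hypothetical toy f(x,y) whose unconstrained minimum violates x <= erf(y); the guard handles erfinv leaving its domain, a detail a real implementation would also need. Requires fminsearchbnd on the path:

```matlab
function demo_erf_constraint
% Sketch: minimize f(x,y) subject to x <= erf(y), via the substitution
% u = x + erf(y), v = x - erf(y), so the constraint is the bound v <= 0.
uv = fminsearchbnd(@obj, [0, -0.5], [], [inf, 0]);  % only bound: v <= 0
[x, y] = back(uv)                                   % x <= erf(y) holds here
end

function [x, y] = back(uv)
x = (uv(1) + uv(2))/2;        % x = (u + v)/2
y = erfinv(uv(1) - x);        % erf(y) = u - x, so y = erfinv(u - x)
end

function fval = obj(uv)
[x, y] = back(uv);
if ~isfinite(y), fval = inf; return; end   % guard: erfinv needs |u - x| < 1
fval = (x - 2).^2 + (y - 0.1).^2;          % toy objective; free min at (2, 0.1)
end
```

Since erf(0.1) is about 0.11, the unconstrained minimum at (2, 0.1) is infeasible, so the sketch should drive the solution onto the boundary x = erf(y).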

Ryan Webb: Great function. Used it many times.

However, now I have a trickier problem where one of the boundaries is a function of one of the variables. Making UB a function of xtrans is possible, but how would fminsearchbnd determine this intent given that the cases for transformation (k) are determined solely by the initial numerical values of the constraints? Thoughts?

Will: Excellent modification to create a very useful algorithm. Thanks!

Srinivasa Chemudupati: Cannot thank you enough, John!!

Chris Men: The beauty is yours.

Umesh Rudrapatna: Thanks a lot! Helped me a lot.

leo nidas: Thanks John!

Chirackel Yoonus: Very useful.

M. P.: Worked beautifully for my own optimization problem. Thanks John.

Ken Purchase: Notice: I have submitted a slightly modified version that includes output and plot functions, as well as slightly improved handling of varargin. Search for fminsearch.

The original is excellent and very useful- I hope the changes didn't break anything.

Mert Sabuncu: Excellent code - gets the job done!

Yossef Ehrlichman: It's really working. Helped a lot! MATLAB should incorporate it into its libraries.

Alex Chirokov: I really like this code: it is very well written and useful.

John D'Errico: See fminsearchcon for linear and nonlinear inequality constraints. Equality constraints are more difficult to implement when coupled with bound constraints as implemented using transformations. - John

Yang Zhang: I like this program. Do you think it is possible to expand it to deal with linear constraints?

Lauren Cooney: Thank you for this great program! It saved me a lot of time and frustration!

Eli Tom: Nice indeed.

jimmy tsai: It's great. I couldn't solve my problem with fmincon, but did it with this file. I really appreciate it.

Natis Angarita

Dmitrey Kroshko: I connected John D'Errico's file to the OpenOpt project

http://www.mathworks.com/matlabcentral/fileexchange/loadFile.do?objectId=13115&objectType=file

& informed him (I hope my letter passed antispam filter OK)

If any objections are received, I promise to remove it.

BTW, the default inner solver is now Shor's r-algorithm with AST, which is better than the current implementation of fminsearch, at least for those tasks I tried it on. Also, it can handle (sub)gradient info provided by the user, and corresponding changes to John's code were made.

Umberto: Yes, it's good. Works very well!

CC Gomez: Nicely written.

G.H. Rao: Excellent program. Along these lines, I applied bounds successfully to the genetic algorithm program 'ga' of MATLAB release 14.

cedric penard: Works very well, excellent! Thanks.

Rui Miguel: I was really needing it! Thanks a lot.

Has worked flawlessly for me.

Kaushik b: Thanks a lot. It's really useful and works beautifully.

Evan Palmer: Works really well and is very handy! Nice job!

Vijit Nair: This is a great function. Thanks John.

Ken Campbell: Nice trick for solving a common problem.