Fminsearchbnd vs. Pattern Search vs. Surrogate Optimization to attempt to find the global optimal solution.

Hello,
I'm trying to decide which type of solver I should use for my problem, and I'm at a bit of an impasse.
To give an overview: I'm attempting to optimize the choice of 3-4 parameters of an ODE with a nonsmooth cost function (evaluated by first solving the ODE). Moreover, the objective function has many local minima (some close to each other), and my goal is to find the parameters that best minimize the objective. The only constraints on the problem are lower and upper bounds on the parameters.
Now, the 3 solvers I'm considering are fminsearchbnd, patternsearch, and surrogateopt. I know fminsearchbnd can find local minima, but it can also get stuck, depending on where the initial guess is. From my understanding, the same can happen with patternsearch, so if I were to sample the parameter space by randomly choosing 20 starting points, that may lead to the "best" solution using either fminsearchbnd or patternsearch. As for surrogateopt, it seems like it may be the best choice, since it is designed to handle costly nonsmooth objective functions and to find a global minimum, given enough time. I also think that, given the setup of my problem, surrogate optimization will probably take the least amount of time, but I'm unsure whether that is the case.
My question is: will fminsearchbnd (with 20 random initial points) give similar results (that is, a similarly sized cost function value) compared to using patternsearch or surrogate optimization? Or will I have to choose many more initial guesses (which can get computationally costly) to achieve similar results? A sketch of the multi-start loop I have in mind is below.
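Roughly, the multi-start approach I have in mind looks like the following sketch, where myObjective, lb, and ub are placeholders for my actual ODE-based cost function and bound vectors (not real code from my project):

    % Multi-start sketch: myObjective, lb, and ub are placeholders for my
    % actual ODE-based cost function and parameter bound (row) vectors.
    nStarts = 20;
    bestFval = Inf;
    for k = 1:nStarts
        x0 = lb + rand(size(lb)).*(ub - lb);   % uniform random start in the box
        [x, fval] = fminsearchbnd(@myObjective, x0, lb, ub);
        if fval < bestFval
            bestFval = fval;
            bestX = x;                          % keep the best local solution
        end
    end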
Thanks for any help!

Accepted Answer

John D'Errico on 19 Jul 2019
Edited: John D'Errico on 19 Jul 2019
Each solver has what are called basins of attraction for each local solution. If you start the solver at any point in one of the basins of attraction, then it will converge to essentially the "same" solution, i.e., the one associated with that basin. Not exactly, of course; the convergence tolerance will matter, along with floating point arithmetic. This is a basic concept of optimizers.
Basins of attraction can be strangely shaped things, and they will not be nice, simply shaped regions. The set of basins in aggregate will fill the domain space of your objective, but in general they will not even be neatly shaped convex sets. They may have holes in them, with one basin entirely enclosed inside another. But if you can start at a point and move downhill in some way along some path to a given solution, then most optimizers will tend to arrive at the "same" solution from that point. Optimizers basically try to move downhill. (Actually, Nelder-Mead search tools try to move away from uphill. A subtle distinction, but not really that important here.)
There is no reason to assume that the basins of attraction are identical for each different solver. So if I start at the same point with each solver, I might get different answers, especially if I start near the boundary of one of the basins. But it is probably a decent assumption that the basins are pretty similar in size and shape for most objectives, unless your objective function is something truly nasty looking.
So IF that last assumption is reasonably valid, then the same number of uniformly random start points for two different optimizers should result in ROUGHLY the same probability that you will find the global optimum in the end. For example, if a solver's basin around the global optimum covers a fraction p of the domain, the chance that at least one of n uniform starts lands in it is 1 - (1-p)^n; if p is similar across solvers, so is that probability.
So, no, you should not need MANY more random starts for any one method, but without extensively mapping the domain space for a specific objective, you cannot be positive what will happen. More random starts cannot hurt. And, again, the shape of that surface will be important, because it determines the shape of the respective basins of attraction. Different optimization tools can of course be very different in speed, in the number of function evaluations needed to reach any given solution.
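If you want to check that empirically for your problem, one way is to run the same set of random starts through each solver and compare the best cost each one finds. A minimal sketch, assuming fun is your objective function handle and lb/ub are row vectors of your bounds (these names are placeholders; patternsearch and surrogateopt need the Global Optimization Toolbox):

    % Compare solvers from the SAME random starts; fun, lb, ub are assumed.
    rng(0);                                     % reproducible starts
    nStarts = 20;
    starts  = lb + rand(nStarts, numel(lb)).*(ub - lb);
    best = [Inf, Inf];                          % [fminsearchbnd, patternsearch]
    for k = 1:nStarts
        x0 = starts(k,:);
        [~, f1] = fminsearchbnd(fun, x0, lb, ub);
        [~, f2] = patternsearch(fun, x0, [], [], [], [], lb, ub);
        best = min(best, [f1, f2]);
    end
    % surrogateopt does its own sampling, so just give it a comparable budget.
    opts = optimoptions('surrogateopt', 'MaxFunctionEvaluations', 200*nStarts);
    [~, f3] = surrogateopt(fun, lb, ub, opts);
    disp([best, f3])                            % best cost found by each method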
  2 Comments
Shant Danielian on 25 Jul 2019
Hi John, I have one follow-up question about the last line of what you wrote earlier, namely,
"Different optimization tool can be very different in speed of course, in the number of function evals needed to reach any given solution."
For the type of problem I'll need to solve, would it be safe to say one of the global optimization solvers would be better suited (for both accuracy and computational efficiency) as opposed to using the free solver fminsearchbnd with multiple initial guesses?


More Answers (1)

Matt J on 19 Jul 2019
There is no way to predict the number of multi-starts you would need to make fminsearchbnd behave competitively with the other solvers. The only thing you can do is test and compare them all. ga() might also be a good candidate.
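For reference, a bound-constrained ga() call would look roughly like this sketch (fun, lb, and ub are assumed placeholders for your objective handle and bound vectors):

    % Hypothetical ga call with only bound constraints (fun, lb, ub assumed).
    nvars = numel(lb);                          % number of parameters (3-4 here)
    [x, fval] = ga(fun, nvars, [], [], [], [], lb, ub);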
