How to optimize the solution of one optimization function using another?
The problem I have is as follows.
I have two different cost functions for the same problem. However, the nature of the problem prevents multi-objective optimisation (the combined problem would become over-constrained and fail to converge to a solution). Instead, the approach I would like to take is
% Step 1
Solution_A=some_optimisation_function(cost_func_a, non_linear_constraint_a, initial_guess);
% Step 2
Solution_B=some_other_optimisation_function(cost_func_b, non_linear_constraint_b, Solution_A);
% Step 3
Solution_AA=some_other_optimisation_function(cost_func_a, non_linear_constraint_a, Solution_B);
% Step 4
Solution_BB=some_other_optimisation_function(cost_func_b, non_linear_constraint_b, Solution_AA);
I would like for each subsequent optimisation to optimise the solution of the previous iteration.
And to iteratively continue this process until consecutive solutions do not vary (in other words, the solution converges).
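The alternating scheme described above can be sketched as follows (a Python sketch using scipy.optimize.minimize; the two quadratic cost functions, the tolerance, and all names are placeholders, not the actual problem):

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder cost functions -- substitute the real ones.
def cost_a(x):
    return float((x[0] - 2.0) ** 2 + x[1] ** 2)

def cost_b(x):
    return float(x[0] ** 2 + (x[1] - 2.0) ** 2)

def alternate(x0, tol=1e-8, max_outer=100):
    """Alternately minimise cost_a and cost_b, warm-starting each solve
    from the previous solution, until consecutive solutions stop moving."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        x_a = minimize(cost_a, x).x    # Steps 1, 3, ... (Solution_A, Solution_AA, ...)
        x_b = minimize(cost_b, x_a).x  # Steps 2, 4, ... (Solution_B, Solution_BB, ...)
        # "Consecutive solutions do not vary" = both half-steps moved less than tol.
        if np.linalg.norm(x_a - x) < tol and np.linalg.norm(x_b - x_a) < tol:
            return x_b, True
        x = x_b
    return x, False  # hit the outer-iteration cap without converging
```

Note that with these two placeholder quadratics (whose minimisers disagree), `alternate` never converges; each pass just jumps between the two separate minimisers, which is the failure mode discussed in the answer below.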
The problem is non-convex (both cost functions) and has many local minima (although converging to a local minimum is also acceptable as long as constraints are respected).
I am unable to figure out how to formulate this problem in MATLAB (which optimisation function should I use? Any references or examples of such formulations?).
Any help is much appreciated.
John D'Errico on 29 May 2022
Edited: John D'Errico on 29 May 2022
Why do you think this has any remote reason to converge? Consider this trivial example of a pair of problems, based exactly on the scheme you have proposed.
Problem a: Minimize the function y1 = x^2. Don't need no stupid constraints, but pick anything you want. Start at x==1 (or wherever you want; this problem has a simple, unique solution). You will find the solution at x==0.
Problem b: Minimize the function y2 = (x-1)^2. Your start point will be x=0. Find solution at x==1.
How many times will you choose to iterate between these two problems, starting at the solution found from the iterations from the alternate solve? How long will it take before you get bored, and decide the problem you have posed as you have posed it has no convergent solution?
Now, look carefully at what you just asked to do. Does it bear any resemblance to what I just described? (It should.) And the simple example I have posed is extremely simple. There are no issues of non-convex domains. There are no multiple local solutions. And it still will never converge. But you expect the scheme you have proposed to work on a much more difficult problem? Hey, good luck. It might. A cow MIGHT jump over the moon. Well, it might if it was a quantum cow, in which case it might tunnel over the moon. Ok, with probability 1, EVENTUALLY a quantum cow will sometime finally manage to tunnel over the moon. But it may take a bajillion years.
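This ping-pong behaviour is easy to verify numerically (a Python sketch with scipy.optimize.minimize, not part of the original answer):

```python
from scipy.optimize import minimize

def y1(x):
    return float(x[0] ** 2)          # problem a: unique minimiser at x = 0

def y2(x):
    return float((x[0] - 1.0) ** 2)  # problem b: unique minimiser at x = 1

x = 1.0  # start at x = 1, as in the example
iterates = []
for _ in range(4):
    x = minimize(y1, x0=x).x[0]  # solve problem a, warm-started from x
    iterates.append(x)
    x = minimize(y2, x0=x).x[0]  # solve problem b, warm-started from x
    iterates.append(x)
# iterates bounces 0, 1, 0, 1, ... forever -- the warm start is irrelevant
# because each sub-problem has its own unique minimiser.
```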
Do you see that this is why multi-criteria optimization is typically done as it is done? For example, given the above trivial pair of problems, we form the aggregate problem:
syms x u
y3 = u*x^2 + (1-u)*(x-1)^2;
xsol = solve(diff(y3,x) == 0,x)  % xsol = 1 - u
So if the solution to sub-problem a is 0, and the solution to sub-problem b is 1, then we would expect a reasonable compromise to lie at x = 1/2. With the equally weighted aggregate problem when u=1/2, we get exactly that:
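The same check can be done numerically (a Python sketch with scipy.optimize.minimize_scalar, standing in for the MATLAB symbolic solve above):

```python
from scipy.optimize import minimize_scalar

u = 0.5  # equal weighting of the two sub-problems

def aggregate(x):
    # Weighted combination of the two objectives: u*x^2 + (1-u)*(x-1)^2.
    return u * x ** 2 + (1 - u) * (x - 1) ** 2

xsol = minimize_scalar(aggregate).x
# xsol is approximately 0.5, the equally weighted compromise
# (the symbolic solution is xsol = 1 - u for any weight u).
```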