Algorithms for Convex Optimization
From Least-Squares to convex minimization

We have seen how ordinary least-squares (OLS) problems can be solved using linear algebra (e.g. SVD) methods. Using OLS, we can minimize convex, quadratic functions of the form

$$f_0(x) = \frac{1}{2} x^T Q x + c^T x,$$

when the matrix $Q$ is positive semi-definite. It turns out that one can leverage this approach to minimize more general functions, using an iterative algorithm based on a local quadratic approximation of the function at the current point. The approach can then be extended to problems with constraints, by replacing the original constrained problem with an unconstrained one, in which the constraints are penalized in the objective.

Unconstrained minimization: Newton's method

We consider an unconstrained minimization problem, where we seek to minimize a twice-differentiable convex function $f_0$:

$$\min_x \; f_0(x).$$

For minimizing convex functions, an iterative procedure can be based on a simple quadratic approximation procedure known as Newton's method. We start with an initial guess $x_0$ and iterate

$$x_{k+1} = x_k - \big[\nabla^2 f_0(x_k)\big]^{-1} \nabla f_0(x_k), \quad k = 0, 1, 2, \ldots,$$

where $\nabla^2 f_0(x_k)$ is the Hessian of $f_0$ at $x_k$; each step minimizes the local quadratic approximation of $f_0$ at the current point.

This idea will fail for general (non-convex) functions. It might even fail for some convex functions. However, for a large class of convex functions, known as self-concordant functions, a variation on the Newton method works extremely well, and is guaranteed to find the global minimizer of the function.

Constrained minimization: interior-point methods

The method above can be applied in the more general context of convex optimization problems in standard form:

$$\min_x \; f_0(x) \quad \text{subject to} \quad f_i(x) \le 0, \quad i = 1, \ldots, m,$$

where every function involved is twice-differentiable and convex. The basic idea behind interior-point methods is to replace the constrained problem by an unconstrained one, involving a function that is constructed from the original problem functions. One such idea is to use a logarithmic barrier: in lieu of the original problem, we address

$$\min_x \; t \, f_0(x) - \sum_{i=1}^m \log\big(-f_i(x)\big),$$

where $t > 0$ is a parameter. As $t$ grows, the solution of this barrier problem approaches a solution of the original constrained problem. For a large class of convex optimization problems, the barrier function above is self-concordant, so that Newton's method can be applied to it.

The interior-point approach is limited by the need to form the gradient and Hessian of the barrier function above at every iteration, which can become too costly for very large-scale problems.

Gradient methods

Unconstrained case

Gradient methods offer an alternative to interior-point methods that is attractive for large-scale problems. Typically, these algorithms need a considerably larger number of iterations compared to interior-point methods, but each iteration is much cheaper to process. Perhaps the simplest algorithm for minimizing a differentiable convex function involves the iteration

$$x_{k+1} = x_k - t_k \nabla f_0(x_k),$$

where $t_k > 0$ is a step size. Depending on the choice of the step size $t_k$ (for example, a constant step or one obtained by line search), different convergence guarantees can be obtained.

Constrained case

The gradient method can be adapted to constrained problems, via the iteration

$$x_{k+1} = P_{\mathcal{X}}\big(x_k - t_k \nabla f_0(x_k)\big),$$

where $P_{\mathcal{X}}$ denotes the Euclidean projection onto the feasible set $\mathcal{X}$.

Example: Box-constrained least-squares. Here we minimize $\|Ax - b\|_2^2$ subject to bounds $l \le x \le u$ on the variable; the projection onto the box is obtained by simply clipping each component of $x$ to the interval $[l_i, u_i]$.
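To make the Newton iteration described above concrete, here is a minimal sketch in Python/NumPy. The test function, the undamped update, and the stopping rule are illustrative assumptions, not part of the original text.

```python
import numpy as np

def newton_minimize(grad, hess, x0, tol=1e-8, max_iter=50):
    """Minimize a smooth convex function, given its gradient and Hessian,
    via the pure Newton iteration x <- x - [hess(x)]^{-1} grad(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:          # gradient small enough: stop
            break
        step = np.linalg.solve(hess(x), g)   # solve H * step = g (no explicit inverse)
        x = x - step
    return x

# Example: a strictly convex quadratic f(x) = 1/2 x^T Q x + c^T x,
# whose minimizer solves Q x = -c (Newton finds it in a single step).
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
c = np.array([1.0, -1.0])
x_star = newton_minimize(grad=lambda x: Q @ x + c, hess=lambda x: Q, x0=np.zeros(2))
print(x_star, np.linalg.solve(Q, -c))        # both should agree
```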
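The logarithmic-barrier idea can also be sketched in a few lines. The example below (minimizing $\|x - a\|_2^2$ subject to $x \ge 0$), the feasibility-only backtracking, and the schedule for the parameter $t$ are all assumptions chosen for illustration; production interior-point codes are considerably more careful.

```python
import numpy as np

def barrier_newton(a, t, x0, tol=1e-9, max_iter=100):
    """Damped Newton on the log-barrier function
       phi_t(x) = t * ||x - a||^2 - sum(log(x_i)),
    an unconstrained surrogate for: minimize ||x - a||^2 s.t. x >= 0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = 2 * t * (x - a) - 1.0 / x                 # gradient of phi_t
        H = np.diag(2 * t + 1.0 / x**2)               # Hessian of phi_t (diagonal here)
        step = np.linalg.solve(H, g)
        if g @ step < tol:                            # squared Newton decrement is small
            break
        s = 1.0
        while np.any(x - s * step <= 0):              # backtrack to stay strictly feasible
            s *= 0.5
        x = x - s * step
    return x

a = np.array([1.0, -2.0, 0.5])
x = np.ones(3)                                        # strictly feasible starting point
for t in [1.0, 10.0, 100.0, 1000.0]:                  # increasing t: approach the true solution
    x = barrier_newton(a, t, x)
print(x)                                              # roughly max(a, 0) componentwise
```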
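For the unconstrained gradient iteration, a minimal sketch follows; the fixed step size $1/L$ and the least-squares test problem are assumptions made for the example.

```python
import numpy as np

def gradient_descent(grad, x0, step, max_iter=5000, tol=1e-8):
    """Plain gradient method x <- x - step * grad(x) with a fixed step size."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - step * g
    return x

# Least-squares objective f(x) = ||A x - b||^2, gradient 2 A^T (A x - b).
rng = np.random.default_rng(0)
A, b = rng.standard_normal((30, 5)), rng.standard_normal(30)
L = 2 * np.linalg.norm(A, 2) ** 2                     # Lipschitz constant of the gradient
x_gd = gradient_descent(lambda x: 2 * A.T @ (A @ x - b), np.zeros(5), step=1.0 / L)
print(np.allclose(x_gd, np.linalg.lstsq(A, b, rcond=None)[0], atol=1e-5))
```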
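Finally, a projected-gradient sketch for the box-constrained least-squares example; the problem data and bounds below are made up for illustration, and the projection onto the box is simple componentwise clipping.

```python
import numpy as np

def projected_gradient_box(A, b, lo, hi, step=None, max_iter=2000):
    """Minimize ||A x - b||^2 subject to lo <= x <= hi via projected gradient:
       x <- clip(x - step * grad, lo, hi)."""
    if step is None:
        step = 1.0 / (2 * np.linalg.norm(A, 2) ** 2)   # 1/L for this quadratic
    x = np.clip(np.zeros(A.shape[1]), lo, hi)          # feasible starting point
    for _ in range(max_iter):
        g = 2 * A.T @ (A @ x - b)
        x = np.clip(x - step * g, lo, hi)              # gradient step, then project onto the box
    return x

rng = np.random.default_rng(1)
A, b = rng.standard_normal((40, 8)), rng.standard_normal(40)
x_box = projected_gradient_box(A, b, lo=0.0, hi=0.5)
print(x_box.min() >= 0.0, x_box.max() <= 0.5)          # iterates stay in the box
```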