
Optimization



This page documents library components that attempt to find the minimum or maximum of a user supplied function. An introduction to the general purpose non-linear optimizers in this section can be found here. For an example showing how to use the non-linear least squares routines look here.


General Purpose Optimizers
Special Purpose Optimizers
Strategies
Helper Routines

backtracking_line_search



Performs a line search on a given function and returns the input that makes the function significantly smaller. This implementation uses a basic Armijo backtracking search with polynomial interpolation.

#include <dlib/optimization.h>
Detailed Documentation
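
For illustration, here is a minimal sketch of one way this routine might be called. The quadratic objective, the search direction, and all the constants are made up for the example, and it uses make_line_search_function (described further down this page) to view the objective along a ray:

    #include <dlib/optimization.h>
    #include <cmath>
    #include <iostream>
    using namespace dlib;

    int main()
    {
        typedef matrix<double,2,1> vec;
        // Illustrative objective: f(x) = (x0-1)^2 + (x1+2)^2
        auto f = [](const vec& x) { return std::pow(x(0)-1,2) + std::pow(x(1)+2,2); };

        vec start, dir;
        start = 0, 0;
        dir   = 2, -4;   // the negative gradient of f at start, a descent direction

        // l(z) == f(start + z*dir), a single-variable view of f along dir.
        auto l = make_line_search_function(f, start, dir);

        const double f0 = l(0);   // objective value at z == 0
        const double d0 = -20;    // directional derivative at z == 0: dot(grad_f(start), dir)

        // Find a step size giving a sufficient (Armijo) decrease, starting
        // from a trial step of 1 and shrinking by 0.5 on each backtrack.
        const double z = backtracking_line_search(l, f0, d0, 1, 0.5, 100);
        std::cout << "step size: " << z << ", new value: " << l(z) << std::endl;
    }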


bfgs_search_strategy



This object represents a strategy for determining which direction a line search should be carried out along. This particular object is an implementation of the BFGS quasi-Newton method for determining this direction.

This method uses an amount of memory that is quadratic in the number of variables to be optimized. It is generally very effective, but if your problem has a very large number of variables it isn't appropriate. Instead, you should try the lbfgs_search_strategy.



#include <dlib/optimization.h>
Detailed Documentation
C++ Example Programs: optimization_ex.cpp
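
As a sketch, a search strategy is constructed and passed as the first argument to a routine such as find_min. The tiny objective and gradient below are illustrative only:

    #include <dlib/optimization.h>
    using namespace dlib;

    typedef matrix<double,0,1> column_vector;

    double f (const column_vector& x) { return dot(x, x); }   // illustrative objective
    column_vector df(const column_vector& x) { return 2*x; }  // its gradient

    int main()
    {
        column_vector x(3);
        x = 1, 2, 3;
        // The search strategy chooses the line search direction; the stop
        // strategy decides when to terminate.  Alternatives include
        // lbfgs_search_strategy(10) and cg_search_strategy().
        find_min(bfgs_search_strategy(),
                 objective_delta_stop_strategy(1e-7),
                 f, df, x, -1);
    }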


cg_search_strategy



This object represents a strategy for determining which direction a line search should be carried out along. This particular object is an implementation of the Polak-Ribiere conjugate gradient method for determining this direction.

This method uses an amount of memory that is linear in the number of variables to be optimized. So it is capable of handling problems with a very large number of variables. However, it is generally not as good as the L-BFGS algorithm (see the lbfgs_search_strategy class).



#include <dlib/optimization.h>
Detailed Documentation
C++ Example Programs: optimization_ex.cpp


clamp_function



This is a function that takes another function, f(x), as input and returns a new function object, g(x), such that g(x) == f(clamp(x,x_lower,x_upper)) where x_lower and x_upper are vectors of box constraints which are applied to x.

#include <dlib/optimization.h>
Detailed Documentation
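
A minimal sketch, assuming only the behavior described above; the objective and the box bounds are illustrative:

    #include <dlib/optimization.h>
    #include <cmath>
    #include <iostream>
    using namespace dlib;

    int main()
    {
        typedef matrix<double,2,1> vec;
        auto f = [](const vec& x) { return std::pow(x(0)-10,2) + std::pow(x(1),2); };

        vec lower, upper;
        lower = -1, -1;
        upper =  1,  1;

        // g(x) == f(clamp(x, lower, upper)): evaluating g outside the box
        // behaves as if x were first projected back onto it.
        auto g = clamp_function(f, lower, upper);

        vec x;
        x = 5, 5;                        // outside the box
        std::cout << g(x) << std::endl;  // same as f(1, 1)
    }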


derivative



This is a function that takes another function as input and returns a function object that numerically computes the derivative of the input function.

#include <dlib/optimization.h>
Detailed Documentation
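
A short sketch; the objective is illustrative, and the analytic gradient is noted only to make the comparison clear:

    #include <dlib/optimization.h>
    #include <cmath>
    #include <iostream>
    using namespace dlib;

    int main()
    {
        typedef matrix<double,2,1> vec;
        auto f = [](const vec& x) { return std::pow(x(0),2) + 3*x(0)*x(1); };

        vec x;
        x = 2, 1;

        // derivative(f) is itself a function object; calling it at x gives a
        // numerical approximation of the gradient of f at x.
        vec grad = derivative(f)(x);

        // Analytic gradient for comparison: (2*x0 + 3*x1, 3*x0) == (7, 6)
        std::cout << grad << std::endl;
    }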


find_max



Performs an unconstrained maximization of a nonlinear function using some search strategy (e.g. bfgs_search_strategy).

#include <dlib/optimization.h>
Detailed Documentation


find_max_bobyqa



This function is identical to the find_min_bobyqa routine except that it negates the objective function before performing optimization. Thus this function will attempt to find the maximizer of the objective rather than the minimizer.

Note that BOBYQA only works on functions of two or more variables. So if you need to perform derivative-free optimization on a function of a single variable then you should use the find_max_single_variable function.



#include <dlib/optimization.h>
Detailed Documentation
C++ Example Programs: optimization_ex.cpp, model_selection_ex.cpp


find_max_box_constrained



Performs a box constrained maximization of a nonlinear function using some search strategy (e.g. bfgs_search_strategy). This function uses a backtracking line search along with a gradient projection step to handle the box constraints.

#include <dlib/optimization.h>
Detailed Documentation


find_max_factor_graph_nmplp



This function is a tool for approximately solving the MAP problem in a graphical model or factor graph with pairwise potential functions. That is, it attempts to solve a certain kind of optimization problem which can be defined as follows:
   maximize: f(X)
   where X is a set of integer valued variables and f(X) can be written
   as the sum of functions which each involve only two variables from X.
If the graph is tree-structured then this routine always gives the exact solution to the MAP problem. However, for graphs with cycles, the solution may be approximate.

This function is an implementation of the NMPLP algorithm introduced in the following papers:
Fixing Max-Product: Convergent Message Passing Algorithms for MAP LP-Relaxations (2008) by Amir Globerson and Tommi Jaakkola
Introduction to dual decomposition for inference (2011) by David Sontag, Amir Globerson, and Tommi Jaakkola


#include <dlib/optimization.h>
Detailed Documentation


find_max_factor_graph_potts



This is a set of overloaded functions for exactly solving the MAP problem in a Potts model. This type of model is useful when you have a problem which can be modeled as a bunch of binary decisions on some variables, but you have some kind of labeling consistency constraint. This means that there is some penalty for giving certain pairs of variables different labels. So in addition to trying to figure out how to best label each variable on its own, you have to worry about making the labels pairwise consistent in some sense. The find_max_factor_graph_potts() routine can be used to find the most probable/highest scoring labeling for this type of model.

The implementation of this routine is based on the min_cut object.



#include <dlib/graph_cuts.h>
Detailed Documentation


find_max_factor_graph_viterbi



This function is a tool for exactly solving the MAP problem in a chain-structured graphical model or factor graph. In particular, it is an implementation of the classic Viterbi algorithm for finding the maximizing assignment. In addition to basic first order Markov models, this function is also capable of finding the MAP assignment for higher order Markov models.

#include <dlib/optimization.h>
Detailed Documentation


find_max_parse_cky



This function implements the CKY parsing algorithm. In particular, it finds the maximum scoring binary parse tree that parses an input sequence of tokens.

#include <dlib/optimization.h>
Detailed Documentation


find_max_single_variable



Performs a bound constrained maximization of a nonlinear function. The function must be of a single variable. Derivatives are not required.

#include <dlib/optimization.h>
Detailed Documentation


find_max_trust_region



Performs an unconstrained maximization of a nonlinear function using a trust region method.

#include <dlib/optimization.h>
Detailed Documentation


find_max_using_approximate_derivatives



Performs an unconstrained maximization of a nonlinear function using some search strategy (e.g. bfgs_search_strategy). This version doesn't take a gradient function but instead numerically approximates the gradient.

#include <dlib/optimization.h>
Detailed Documentation


find_min



Performs an unconstrained minimization of a nonlinear function using some search strategy (e.g. bfgs_search_strategy).

#include <dlib/optimization.h>
Detailed Documentation
C++ Example Programs: optimization_ex.cpp
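
A minimal complete sketch; the objective, its gradient, and the tolerance are illustrative:

    #include <dlib/optimization.h>
    #include <cmath>
    #include <iostream>
    using namespace dlib;

    typedef matrix<double,0,1> column_vector;

    // Illustrative objective: a shifted quadratic with its minimum at (3, -2).
    double objective(const column_vector& x)
    {
        return std::pow(x(0)-3, 2) + std::pow(x(1)+2, 2);
    }

    column_vector objective_gradient(const column_vector& x)
    {
        column_vector g(2);
        g = 2*(x(0)-3), 2*(x(1)+2);
        return g;
    }

    int main()
    {
        column_vector x(2);
        x = 0, 0;   // starting point

        // Stop when the objective changes by less than 1e-7 between
        // iterations.  The final -1 is a lower bound on the objective; the
        // search also stops if that value is reached.
        find_min(bfgs_search_strategy(),
                 objective_delta_stop_strategy(1e-7),
                 objective, objective_gradient, x, -1);

        std::cout << x << std::endl;   // approximately (3, -2)
    }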


find_min_bobyqa



This function defines the dlib interface to the BOBYQA software developed by M.J.D. Powell. BOBYQA is a method for optimizing a function in the absence of derivative information. Powell described it as a method that seeks the least value of a function of many variables, by applying a trust region method that forms quadratic models by interpolation. There is usually some freedom in the interpolation conditions, which is taken up by minimizing the Frobenius norm of the change to the second derivative of the model, beginning with the zero matrix. The values of the variables are constrained by upper and lower bounds.

The following paper, published in 2009 by Powell, describes the detailed working of the BOBYQA algorithm.

The BOBYQA algorithm for bound constrained optimization without derivatives by M.J.D. Powell

Note that BOBYQA only works on functions of two or more variables. So if you need to perform derivative-free optimization on a function of a single variable then you should use the find_min_single_variable function.



#include <dlib/optimization.h>
Detailed Documentation
C++ Example Programs: optimization_ex.cpp, model_selection_ex.cpp
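
A minimal sketch; the objective, the bounds, and the parameter values are all illustrative:

    #include <dlib/optimization.h>
    #include <cmath>
    #include <iostream>
    using namespace dlib;

    typedef matrix<double,0,1> column_vector;

    int main()
    {
        // Illustrative derivative-free objective with its minimum at (0.5, 0.2).
        auto f = [](const column_vector& x)
            { return std::pow(x(0)-0.5,2) + std::pow(x(1)-0.2,2); };

        column_vector x(2), lower(2), upper(2);
        x     =  0,  0;
        lower = -1, -1;
        upper =  1,  1;

        find_min_bobyqa(f, x,
                        5,            // number of interpolation points (2n+1 is typical)
                        lower, upper, // box constraints on x
                        0.5,          // initial trust region radius
                        1e-6,         // final trust region radius (solution accuracy)
                        200);         // max number of objective evaluations
        std::cout << x << std::endl;
    }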


find_min_box_constrained



Performs a box constrained minimization of a nonlinear function using some search strategy (e.g. bfgs_search_strategy). This function uses a backtracking line search along with a gradient projection step to handle the box constraints.

#include <dlib/optimization.h>
Detailed Documentation
C++ Example Programs: optimization_ex.cpp
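
A minimal sketch; the one-variable objective is illustrative and chosen so the unconstrained minimum (at 4) lies outside the box, putting the constrained solution on the boundary:

    #include <dlib/optimization.h>
    #include <cmath>
    #include <iostream>
    using namespace dlib;

    typedef matrix<double,0,1> column_vector;

    int main()
    {
        auto f  = [](const column_vector& x) { return std::pow(x(0)-4,2); };
        auto df = [](const column_vector& x)
            { column_vector g(1); g = 2*(x(0)-4); return g; };

        column_vector x(1), lower(1), upper(1);
        x = 0;  lower = -1;  upper = 1;

        find_min_box_constrained(bfgs_search_strategy(),
                                 objective_delta_stop_strategy(1e-9),
                                 f, df, x, lower, upper);

        std::cout << x << std::endl;   // approximately 1, the upper bound
    }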


find_min_single_variable



Performs a bound constrained minimization of a nonlinear function. The function must be of a single variable. Derivatives are not required.

#include <dlib/optimization.h>
Detailed Documentation
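
A minimal sketch; the function and all the parameter values are illustrative:

    #include <dlib/optimization.h>
    #include <iostream>
    using namespace dlib;

    int main()
    {
        // Illustrative single-variable function; no derivative is needed.
        auto f = [](double x) { return (x-2)*(x-2) + 1; };

        double x = 0;   // starting point, updated in place to the minimizer
        const double min_value = find_min_single_variable(
            f, x,
            -10, 10,   // only search inside [-10, 10]
            1e-6,      // solution accuracy
            100);      // max number of iterations
        std::cout << "x = " << x << ", f(x) = " << min_value << std::endl;
    }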


find_min_trust_region



Performs an unconstrained minimization of a nonlinear function using a trust region method.

#include <dlib/optimization.h>
Detailed Documentation
C++ Example Programs: optimization_ex.cpp
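
A sketch following the pattern in optimization_ex.cpp: the trust region routines take a model object that reports the objective value along with its gradient and Hessian. The quadratic model below is illustrative:

    #include <dlib/optimization.h>
    #include <cmath>
    #include <iostream>
    using namespace dlib;

    typedef matrix<double,0,1> column_vector;

    // Illustrative model of f(x) = (x0-1)^2 + 10*(x1+3)^2.
    struct quadratic_model
    {
        typedef ::column_vector column_vector;
        typedef matrix<double> general_matrix;

        double operator()(const column_vector& x) const
        {
            return std::pow(x(0)-1,2) + 10*std::pow(x(1)+3,2);
        }

        void get_derivative_and_hessian(const column_vector& x,
                                        column_vector& der,
                                        general_matrix& hess) const
        {
            der.set_size(2);
            der = 2*(x(0)-1), 20*(x(1)+3);
            hess.set_size(2,2);
            hess = 2,  0,
                   0, 20;
        }
    };

    int main()
    {
        column_vector x(2);
        x = 0, 0;
        find_min_trust_region(objective_delta_stop_strategy(1e-7),
                              quadratic_model(), x,
                              10);   // initial trust region radius
        std::cout << x << std::endl;   // approximately (1, -3)
    }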


find_min_using_approximate_derivatives



Performs an unconstrained minimization of a nonlinear function using some search strategy (e.g. bfgs_search_strategy). This version doesn't take a gradient function but instead numerically approximates the gradient.

#include <dlib/optimization.h>
Detailed Documentation
C++ Example Programs: optimization_ex.cpp
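
A minimal sketch; the objective is illustrative. The call mirrors find_min, just without the gradient argument:

    #include <dlib/optimization.h>
    #include <cmath>
    #include <iostream>
    using namespace dlib;

    typedef matrix<double,0,1> column_vector;

    int main()
    {
        auto f = [](const column_vector& x)
            { return std::pow(x(0)+1,2) + std::pow(x(1)-2,2); };

        column_vector x(2);
        x = 0, 0;
        // No gradient function is supplied; dlib estimates it numerically.
        find_min_using_approximate_derivatives(bfgs_search_strategy(),
                                               objective_delta_stop_strategy(1e-7),
                                               f, x, -1);
        std::cout << x << std::endl;   // approximately (-1, 2)
    }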


find_trees_not_rooted_with_tag



Finds all the largest non-overlapping parse trees within a given tree that are not rooted with a particular tag.

This function is useful when you want to cut a parse tree into a bunch of sub-trees and you know that the top level of the tree is all composed of the same kind of tag. So if you want to just "slice off" the top of the tree where this tag lives then this function is useful for doing that.



#include <dlib/optimization.h>
Detailed Documentation


gradient_norm_stop_strategy



This object represents a strategy for deciding if an optimization algorithm should terminate. This particular object looks at the norm (i.e. the length) of the current gradient vector and stops if it is smaller than a user given threshold.

#include <dlib/optimization.h>
Detailed Documentation


graph_cut_score



This routine computes the score for a candidate graph cut. This is the quantity minimized by the min_cut algorithm.

#include <dlib/graph_cuts.h>
Detailed Documentation


lagrange_poly_min_extrap



This function finds the second order polynomial that interpolates a set of points and returns the minimum of that polynomial.

#include <dlib/optimization.h>
Detailed Documentation
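
A small sketch, assuming the six-argument ordering (the three abscissas p1 < p2 < p3 followed by the three function values); the sample points are taken from an exact quadratic for illustration:

    #include <dlib/optimization.h>
    #include <iostream>
    using namespace dlib;

    int main()
    {
        // Points sampled from f(x) = (x-2)^2, with the middle value lowest.
        const double p1 = 0, p2 = 1, p3 = 4;
        const double f1 = 4, f2 = 1, f3 = 4;

        // The minimizer of the unique quadratic through the three points;
        // since the data come from an exact quadratic, this returns 2.
        const double x = lagrange_poly_min_extrap(p1, p2, p3, f1, f2, f3);
        std::cout << x << std::endl;
    }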


lbfgs_search_strategy



This object represents a strategy for determining which direction a line search should be carried out along. This particular object is an implementation of the L-BFGS quasi-Newton method for determining this direction.

This method uses an amount of memory that is linear in the number of variables to be optimized. This makes it an excellent method to use when an optimization problem has a large number of variables.



#include <dlib/optimization.h>
Detailed Documentation
C++ Example Programs: optimization_ex.cpp


line_search



Performs a gradient based line search on a given function and returns the input that makes the function significantly smaller. This implements the classic line search method using the strong Wolfe conditions with a bracketing and then sectioning phase, both using polynomial interpolation.

#include <dlib/optimization.h>
Detailed Documentation


make_line_search_function



This is a function that takes another function f(x) as input and returns a function object l(z) = f(start + z*direction). It is useful for turning multi-variable functions into single-variable functions for use with the line_search or backtracking_line_search routines.

#include <dlib/optimization.h>
Detailed Documentation
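
A tiny sketch; the two-variable function is illustrative:

    #include <dlib/optimization.h>
    #include <iostream>
    using namespace dlib;

    int main()
    {
        typedef matrix<double,2,1> vec;
        auto f = [](const vec& x) { return x(0)*x(0) + x(1)*x(1); };

        vec start, direction;
        start     =  1,  1;
        direction = -1, -1;

        // l(z) == f(start + z*direction), so l(1) == f(0,0) == 0.
        auto l = make_line_search_function(f, start, direction);
        std::cout << l(0) << " " << l(1) << std::endl;   // prints 2 then 0
    }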


max_cost_assignment



This function is an implementation of the Hungarian algorithm (also known as the Kuhn-Munkres algorithm) which runs in O(N^3) time. It solves the optimal assignment problem. For example, suppose you have an equal number of workers and jobs and you need to decide which workers to assign to which jobs. Some workers are better at certain jobs than others. So you would like to find out how to assign them all to jobs such that overall productivity is maximized. You can use this routine to solve this problem and others like it.

Note that dlib also contains a machine learning method for learning the cost function needed to use the Hungarian algorithm.



#include <dlib/optimization.h>
Detailed Documentation
Python Example Programs: max_cost_assignment.py
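
A sketch in the style of dlib's own assignment example; the cost matrix is illustrative. Note that the costs must be integers:

    #include <dlib/optimization.h>
    #include <iostream>
    using namespace dlib;

    int main()
    {
        // cost(i,j) == how much value worker i produces when given job j.
        matrix<long> cost(3,3);
        cost = 1, 2, 6,
               5, 3, 6,
               4, 5, 0;

        // assignment[i] is the job given to worker i, chosen so the total
        // cost is maximized.
        const std::vector<long> assignment = max_cost_assignment(cost);

        for (unsigned long i = 0; i < assignment.size(); ++i)
            std::cout << "worker " << i << " gets job " << assignment[i] << std::endl;
        std::cout << "total cost: " << assignment_cost(cost, assignment) << std::endl;
    }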


max_sum_submatrix



This function finds the submatrix within a user supplied matrix which has the largest sum. It then zeros out that submatrix and repeats the process until no more maximal submatrices can be found.

#include <dlib/optimization.h>
Detailed Documentation


min_cut



This is a function object which can be used to find the min cut on a graph. The implementation is based on the method described in the following paper:
An Experimental Comparison of Min-Cut/Max-Flow Algorithms for Energy Minimization in Vision, by Yuri Boykov and Vladimir Kolmogorov, in PAMI 2004.


#include <dlib/graph_cuts.h>
Detailed Documentation


negate_function



This is a function that takes another function as input and returns a function object that computes the negation of the input function.

#include <dlib/optimization.h>
Detailed Documentation
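
A short sketch showing the usual reason to negate: turning a maximization into a minimization. The concave objective is illustrative:

    #include <dlib/optimization.h>
    #include <cmath>
    using namespace dlib;

    typedef matrix<double,0,1> column_vector;

    int main()
    {
        // f has its maximum at x == 3, so -f has its minimum there.
        auto f = [](const column_vector& x) { return -std::pow(x(0)-3,2); };

        column_vector x(1);
        x = 0;
        find_min_using_approximate_derivatives(bfgs_search_strategy(),
                                               objective_delta_stop_strategy(1e-7),
                                               negate_function(f), x, -1);
        // x is now approximately 3, the maximizer of f.
    }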


newton_search_strategy



This object represents a strategy for determining which direction a line search should be carried out along. This particular routine is an implementation of Newton's method for determining this direction. That means using it requires you to supply a method for creating Hessian matrices for the problem you are trying to optimize.

Note also that this is actually a helper function for creating newton_search_strategy_obj objects.



#include <dlib/optimization.h>
Detailed Documentation
C++ Example Programs: optimization_ex.cpp


objective_delta_stop_strategy



This object represents a strategy for deciding if an optimization algorithm should terminate. This particular object looks at the change in the objective function from one iteration to the next and bases its decision on how large this change is. If the change is below a user given threshold then the search stops.

#include <dlib/optimization.h>
Detailed Documentation
C++ Example Programs: optimization_ex.cpp


oca



This object is a tool for solving the following optimization problem:
   Minimize: f(w) == 0.5*||w||^2 + C*R(w)

   Where R(w) is a user-supplied convex function and C > 0.  Optionally,
   this object can also add non-negativity constraints to some or all
   of the elements of w.

Or it can alternatively solve:
   Minimize: f(w) == 0.5*||w-prior||^2 + C*R(w)

   Where prior is a user supplied vector and R(w) has the same
   interpretation as above.


For a detailed discussion you should consult the following papers from the Journal of Machine Learning Research:
Optimized Cutting Plane Algorithm for Large-Scale Risk Minimization by Vojtech Franc, Soren Sonnenburg; 10(Oct):2157--2192, 2009.
Bundle Methods for Regularized Risk Minimization by Choon Hui Teo, S.V.N. Vishwanthan, Alex J. Smola, Quoc V. Le; 11(Jan):311-365, 2010.


#include <dlib/optimization.h>
Detailed Documentation


parse_tree_to_string



This is a set of functions useful for converting a parse tree output by find_max_parse_cky into a bracketed string suitable for displaying the parse tree.

#include <dlib/optimization.h>
Detailed Documentation


poly_min_extrap



This function finds the 2nd or 3rd degree polynomial that interpolates a set of points and returns the minimum of that polynomial.

#include <dlib/optimization.h>
Detailed Documentation


potts_model_score



This routine computes the model score for a Potts problem and a candidate labeling. This score is the quantity maximized by the find_max_factor_graph_potts routine.

#include <dlib/graph_cuts.h>
Detailed Documentation


solve_least_squares



This is a function for solving non-linear least squares problems. It uses a method which combines the traditional Levenberg-Marquardt technique with a quasi-Newton approach. It is appropriate for large residual problems (i.e., problems where the terms in the least squares function, the residuals, don't go to zero but remain large at the solution).

#include <dlib/optimization.h>
Detailed Documentation
C++ Example Programs: least_squares_ex.cpp


solve_least_squares_lm



This is a function for solving non-linear least squares problems. It uses the traditional Levenberg-Marquardt technique. It is appropriate for small residual problems (i.e., problems where the terms in the least squares function, the residuals, go to zero at the solution).

#include <dlib/optimization.h>
Detailed Documentation
C++ Example Programs: least_squares_ex.cpp
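
A sketch in the style of least_squares_ex.cpp; the linear model and the noise-free data are illustrative:

    #include <dlib/optimization.h>
    #include <iostream>
    #include <utility>
    #include <vector>
    using namespace dlib;

    typedef matrix<double,2,1> parameter_vector;

    // Model: y = params(0)*x + params(1).  Each data sample is an (x, y)
    // pair and the residual is the model's error on that sample.
    double residual(const std::pair<double,double>& data,
                    const parameter_vector& params)
    {
        return params(0)*data.first + params(1) - data.second;
    }

    // Gradient of the residual with respect to the parameters.
    parameter_vector residual_derivative(const std::pair<double,double>& data,
                                         const parameter_vector&)
    {
        parameter_vector der;
        der = data.first, 1;
        return der;
    }

    int main()
    {
        // Samples from y = 2x + 1.
        std::vector<std::pair<double,double> > samples;
        for (double x = 0; x < 5; ++x)
            samples.push_back(std::make_pair(x, 2*x + 1));

        parameter_vector params;
        params = 0, 0;
        solve_least_squares_lm(objective_delta_stop_strategy(1e-7),
                               residual, residual_derivative,
                               samples, params);
        std::cout << params << std::endl;   // approximately (2, 1)
    }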


solve_qp2_using_smo



This function solves the following quadratic program:
   Minimize: f(alpha) == 0.5*trans(alpha)*Q*alpha 
   subject to the following constraints:
      sum(alpha) == nu*y.size() 
      0 <= min(alpha) && max(alpha) <= 1 
      trans(y)*alpha == 0

   Where all elements of y must be equal to +1 or -1 and f is convex.  
   This means that Q should be symmetric and positive-semidefinite.

This function implements the strategy used by the LIBSVM tool; see the detailed documentation for references describing it further.

#include <dlib/optimization.h>
Detailed Documentation


solve_qp3_using_smo



This function solves the following quadratic program:
   Minimize: f(alpha) == 0.5*trans(alpha)*Q*alpha + trans(p)*alpha
   subject to the following constraints:
        for all i such that y(i) == +1:  0 <= alpha(i) <= Cp 
        for all i such that y(i) == -1:  0 <= alpha(i) <= Cn 
        trans(y)*alpha == B 

   Where all elements of y must be equal to +1 or -1 and f is convex.  
   This means that Q should be symmetric and positive-semidefinite.

This function implements the strategy used by the LIBSVM tool; see the detailed documentation for references describing it further.

#include <dlib/optimization.h>
Detailed Documentation


solve_qp4_using_smo



This function solves the following quadratic program:
   Minimize: f(alpha,lambda) == 0.5*trans(alpha)*Q*alpha - trans(alpha)*b + 
                                0.5*trans(lambda)*lambda - trans(lambda)*A*alpha
   subject to the following constraints:
      sum(alpha)  == C 
      min(alpha)  >= 0 
      min(lambda) >= 0
   Where f is convex.  This means that Q should be positive-semidefinite.


#include <dlib/optimization.h>
Detailed Documentation


solve_qp_using_smo



This function solves the following quadratic program:
   Minimize: f(alpha) == 0.5*trans(alpha)*Q*alpha - trans(alpha)*b
   subject to the following constraints:
      sum(alpha) == C 
      min(alpha) >= 0 
   Where f is convex.  This means that Q should be symmetric and positive-semidefinite.


#include <dlib/optimization.h>
Detailed Documentation
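
A small sketch, assuming the (Q, b, alpha, eps, max_iter) calling convention; the matrices are illustrative. Note that the constant C is fixed implicitly by the starting alpha, since the solver preserves sum(alpha):

    #include <dlib/optimization.h>
    #include <iostream>
    using namespace dlib;

    int main()
    {
        // A small positive-definite Q and linear term b.
        matrix<double,2,2> Q;
        Q = 2, 0,
            0, 2;
        matrix<double,2,1> b;
        b = 1, 3;

        // alpha must start feasible: all elements >= 0 and sum(alpha) == C.
        matrix<double,2,1> alpha;
        alpha = 0.5, 0.5;   // so C == 1

        solve_qp_using_smo(Q, b, alpha, 1e-8, 1000);
        std::cout << alpha << std::endl;   // approximately (0, 1) for this Q, b
    }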


solve_trust_region_subproblem



This function solves the following optimization problem:
Minimize: f(p) == 0.5*trans(p)*B*p + trans(g)*p
subject to the following constraint:
   length(p) <= radius


#include <dlib/optimization.h>
Detailed Documentation
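
A small sketch, assuming the (B, g, radius, p, eps, max_iter) calling convention; B and g are illustrative and chosen so the constraint is active:

    #include <dlib/optimization.h>
    #include <iostream>
    using namespace dlib;

    int main()
    {
        // Minimize 0.5*trans(p)*B*p + trans(g)*p subject to length(p) <= 1.
        matrix<double,2,2> B;
        B = 2, 0,
            0, 2;
        matrix<double,2,1> g;
        g = -4, 0;

        matrix<double,2,1> p;
        // The unconstrained minimizer is (2, 0); the radius of 1 forces the
        // solution onto the boundary at (1, 0).
        solve_trust_region_subproblem(B, g, 1.0, p, 1e-10, 100);
        std::cout << p << std::endl;
    }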