
In mathematics, computer science and operations research, mathematical optimization (alternatively, optimization or mathematical programming) is the selection of a best element (with regard to some criterion) from some set of available alternatives.〔"(The Nature of Mathematical Programming )," ''Mathematical Programming Glossary'', INFORMS Computing Society.〕 In the simplest case, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function. The generalization of optimization theory and techniques to other formulations comprises a large area of applied mathematics. More generally, optimization includes finding "best available" values of some objective function given a defined domain (or a set of constraints), including a variety of different types of objective functions and different types of domains.

== Optimization problems ==

An optimization problem can be represented in the following way:
:''Given:'' a function ''f'' : ''A'' → R from some set ''A'' to the real numbers
:''Sought:'' an element ''x''_{0} in ''A'' such that ''f''(''x''_{0}) ≤ ''f''(''x'') for all ''x'' in ''A'' ("minimization") or such that ''f''(''x''_{0}) ≥ ''f''(''x'') for all ''x'' in ''A'' ("maximization").

Such a formulation is called an optimization problem or a mathematical programming problem (a term not directly related to computer programming, but still in use for example in linear programming – see History below). Many real-world and theoretical problems may be modeled in this general framework. Problems formulated using this technique in the fields of physics and computer vision may refer to the technique as energy minimization, speaking of the value of the function ''f'' as representing the energy of the system being modeled.

Typically, ''A'' is some subset of the Euclidean space R^{''n''}, often specified by a set of ''constraints'', equalities or inequalities that the members of ''A'' have to satisfy.
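In the finite case, the Given/Sought formulation above reduces to picking the element of ''A'' with the smallest objective value. A minimal sketch (the particular set and objective here are illustrative, not from the article):

```python
# Sketch of the general formulation: choose x0 in A minimizing f.
# The objective f and the finite search space A are illustrative examples.

def f(x):
    """Objective function to minimize: f(x) = (x - 3)^2."""
    return (x - 3) ** 2

# A finite search space A of candidate solutions.
A = [0, 1, 2, 3, 4, 5]

# An optimal solution x0 satisfies f(x0) <= f(x) for all x in A.
x0 = min(A, key=f)  # -> 3, since f(3) = 0 is the smallest value attained
```

For continuous or constrained domains the same formulation applies, but exhaustive enumeration is replaced by the analytical and numerical techniques discussed below.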
The domain ''A'' of ''f'' is called the ''search space'' or the ''choice set'', while the elements of ''A'' are called ''candidate solutions'' or ''feasible solutions''. The function ''f'' is called, variously, an objective function, a loss function or cost function (minimization),〔W. Erwin Diewert (2008). "cost functions," ''The New Palgrave Dictionary of Economics'', 2nd Edition (Contents ).〕 a utility function or fitness function (maximization), or, in certain fields, an energy function or energy functional. A feasible solution that minimizes (or maximizes, if that is the goal) the objective function is called an ''optimal solution''. In mathematics, conventional optimization problems are usually stated in terms of minimization.

Generally, unless both the objective function and the feasible region are convex in a minimization problem, there may be several local minima. A ''local minimum'' x^{*} is defined as a point for which there exists some δ > 0 so that for all x such that
:$\|\mathbf{x}-\mathbf{x}^{*}\| \leq \delta,\,$
the expression
:$f(\mathbf{x}^{*}) \leq f(\mathbf{x})$
holds; that is to say, on some region around x^{*} all of the function values are greater than or equal to the value at that point. Local maxima are defined similarly.

A large number of algorithms proposed for solving nonconvex problems – including the majority of commercially available solvers – are not capable of making a distinction between locally optimal solutions and globally optimal solutions, and will treat the former as actual solutions to the original problem. The branch of applied mathematics and numerical analysis that is concerned with the development of deterministic algorithms that are capable of guaranteeing convergence in finite time to the actual optimal solution of a nonconvex problem is called global optimization.

Source: the free encyclopedia Wikipedia, article "Mathematical optimization".
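The gap between local and global optima can be seen in a small sketch. A plain gradient-descent local search converges to whichever local minimum is near its starting point, so by itself it cannot tell a local minimum from the global one; the function, starting points and step size here are illustrative assumptions, not from the article:

```python
# Sketch: gradient descent on the nonconvex function f(x) = x^4 - 3x^2 + x,
# which has two local minima (near x ~ -1.30 and x ~ 1.13). Which one the
# method finds depends only on the starting point, illustrating why a local
# method cannot certify global optimality.

def grad(x):
    # Derivative of f(x) = x^4 - 3x^2 + x.
    return 4 * x**3 - 6 * x + 1

def gradient_descent(x, lr=0.01, steps=2000):
    """Fixed-step gradient descent from starting point x (illustrative)."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

left = gradient_descent(-1.0)   # converges near x ~ -1.30 (the global minimum)
right = gradient_descent(1.0)   # converges near x ~ 1.13 (a non-global local minimum)
```

Both returned points are stationary (the gradient is essentially zero at each), yet they have different objective values; a global optimization method must rule out the worse one, which local descent alone cannot do.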
