Definition of the conditional extremum. Extrema of functions of several variables: the concept of an extremum, necessary and sufficient conditions for an extremum, the conditional extremum, and the largest and smallest values of continuous functions.
Necessary and sufficient conditions for an extremum of a function of two variables. A point (x0, y0) is called a minimum (maximum) point of a function f(x, y) if in some neighborhood of the point the function is defined and satisfies the inequality f(x, y) ≥ f(x0, y0) (respectively, f(x, y) ≤ f(x0, y0)). The maximum and minimum points are called the extremum points of the function.
A necessary condition for an extremum. If the function has first partial derivatives at an extremum point, then they vanish at this point. It follows that to find the extremum points of such a function one should solve the system of equations f_x(x, y) = 0, f_y(x, y) = 0 (here f_x, f_y denote the first partial derivatives). Points whose coordinates satisfy this system are called critical points of the function. Among them there can be maximum points, minimum points, as well as points that are not extremum points.
Sufficient conditions for an extremum are used to select the extremum points from the set of critical points; they are listed below.
Let the function have continuous second partial derivatives at a critical point (x0, y0), and set A = f_xx(x0, y0), B = f_xy(x0, y0), C = f_yy(x0, y0). If AC − B² > 0 at this point, then it is a minimum point when A > 0 and a maximum point when A < 0. If AC − B² < 0 at a critical point, then it is not an extremum point. In the case AC − B² = 0, a more subtle study of the nature of the critical point is required; in this case it may or may not be an extremum point.
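The procedure above (solve f_x = f_y = 0, then test the sign of AC − B²) can be sketched symbolically. The sample function f = x³ − 3x + y² below is an illustrative assumption, not one taken from the text:

```python
# Classify critical points of f(x, y) via the second-derivative test.
# The sample function f = x**3 - 3*x + y**2 is an assumption for illustration.
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**3 - 3*x + y**2

# Necessary condition: both first partial derivatives vanish.
critical = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)

# Sufficient condition: sign of D = AC - B^2 and of A at each critical point.
A, B, C = sp.diff(f, x, 2), sp.diff(f, x, y), sp.diff(f, y, 2)
for pt in critical:
    D = (A*C - B**2).subs(pt)
    if D > 0:
        kind = 'minimum' if A.subs(pt) > 0 else 'maximum'
    elif D < 0:
        kind = 'not an extremum'
    else:
        kind = 'needs further study'
    print((pt[x], pt[y]), kind)
```

For this sample function the test finds a minimum at (1, 0) and a saddle (no extremum) at (−1, 0).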
Extrema of functions of three variables. In the case of a function of three variables, the definitions of extremum points repeat verbatim the corresponding definitions for a function of two variables. We confine ourselves to the procedure for studying a function for an extremum. Solving the system of equations f_x = 0, f_y = 0, f_z = 0, one finds the critical points of the function, and then at each critical point calculates the three leading principal minors Δ1, Δ2, Δ3 of the matrix of second partial derivatives (the Hessian matrix).
If all three quantities are positive, then the critical point under consideration is a minimum point; if Δ1 < 0, Δ2 > 0, Δ3 < 0, then the given critical point is a maximum point.
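A minimal numeric sketch of this minor test, assuming a sample function f = x² + 2y² + 3z² + xy (constant Hessian, critical point at the origin; the function is not from the text):

```python
# Leading principal minors of the Hessian at a critical point decide the test:
# all three positive -> minimum; alternating signs (-, +, -) -> maximum.
# The sample function f = x^2 + 2y^2 + 3z^2 + xy is an assumption.
import numpy as np

H = np.array([[2.0, 1.0, 0.0],    # f_xx f_xy f_xz
              [1.0, 4.0, 0.0],    # f_yx f_yy f_yz
              [0.0, 0.0, 6.0]])   # f_zx f_zy f_zz

minors = [np.linalg.det(H[:k, :k]) for k in (1, 2, 3)]
print(minors)   # all positive, so the origin is a minimum point
```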
Conditional extremum of a function of two variables. A point (x0, y0) is called a conditional minimum (maximum) point of a function f(x, y) under the condition φ(x, y) = 0 if there is a neighborhood of the point in which the function is defined and in which f(x, y) ≥ f(x0, y0) (respectively, f(x, y) ≤ f(x0, y0)) for all points (x, y) whose coordinates satisfy the equation φ(x, y) = 0.
To find conditional extremum points, one uses the Lagrange function
Φ(x, y) = f(x, y) + λφ(x, y),
where the number λ is called the Lagrange multiplier. Solving the system of three equations
Φ_x = 0, Φ_y = 0, φ(x, y) = 0,
one finds the critical points of the Lagrange function (as well as the value of the auxiliary factor λ). At these critical points there may be a conditional extremum. The given system provides only necessary conditions for an extremum, not sufficient ones: it can be satisfied by the coordinates of points that are not conditional extremum points. However, proceeding from the essence of the problem, it is often possible to establish the nature of a critical point.
Conditional extremum of a function of several variables. Consider a function of n variables under the condition that its arguments are related by m equations (m < n).
Extrema of functions of several variables. A necessary condition for an extremum. Sufficient conditions for an extremum. The conditional extremum. The method of Lagrange multipliers. Finding the largest and smallest values.
Lecture 5
Definition 5.1. A point M0(x0, y0) is called a maximum point of a function z = f(x, y) if f(x0, y0) > f(x, y) for all points (x, y) from some neighborhood of the point M0.
Definition 5.2. A point M0(x0, y0) is called a minimum point of a function z = f(x, y) if f(x0, y0) < f(x, y) for all points (x, y) from some neighborhood of the point M0.
Remark 1. Maximum and minimum points are called extremum points of a function of several variables.
Remark 2. The extremum point for a function of any number of variables is defined in a similar way.
Theorem 5.1 (necessary conditions for an extremum). If M0(x0, y0) is an extremum point of the function z = f(x, y), then at this point the first-order partial derivatives of this function either equal zero or do not exist.
Proof.
Fix the value of the variable y by setting y = y0. Then the function f(x, y0) is a function of one variable x, for which x = x0 is an extremum point. Therefore, by Fermat's theorem, f_x(x0, y0) = 0 or does not exist. The same assertion is proved for f_y(x0, y0).
Definition 5.3. Points belonging to the domain of a function of several variables at which the partial derivatives of the function equal zero or do not exist are called stationary points of this function.
Remark. Thus, an extremum can be attained only at stationary points, but it is not necessarily attained at each of them.
Theorem 5.2 (sufficient conditions for an extremum). Let the function z = f(x, y) have continuous partial derivatives up to the 3rd order inclusive in some neighborhood of a stationary point M0(x0, y0). Denote A = z_xx(M0), B = z_xy(M0), C = z_yy(M0). Then:
1) f(x, y) has a maximum at the point M0 if AC − B² > 0, A < 0;
2) f(x, y) has a minimum at the point M0 if AC − B² > 0, A > 0;
3) there is no extremum at the critical point if AC − B² < 0;
4) if AC − B² = 0, additional investigation is needed.
Proof.
Let us write the second-order Taylor formula for the function f(x, y), keeping in mind that at a stationary point the first-order partial derivatives equal zero:
Δf = f(x0 + Δx, y0 + Δy) − f(x0, y0) = ½(A Δx² + 2B ΔxΔy + C Δy²) + o(Δρ²),
where Δρ = √(Δx² + Δy²). If the angle between the segment M0M, where M(x0 + Δx, y0 + Δy), and the axis Ox is denoted by φ, then Δx = Δρ cos φ, Δy = Δρ sin φ. In this case the Taylor formula takes the form
Δf = ½Δρ²(A cos²φ + 2B cos φ sin φ + C sin²φ) + o(Δρ²).
Let A ≠ 0. Then we can divide and multiply the expression in parentheses by A. We get:
A cos²φ + 2B cos φ sin φ + C sin²φ = [(A cos φ + B sin φ)² + (AC − B²) sin²φ] / A.   (5.1)
Consider now four possible cases:
1) Let AC − B² > 0, A < 0. Then the expression in square brackets in (5.1) is positive, and after division by A < 0 the parenthesized factor is negative, so Δf < 0 for sufficiently small Δρ. Therefore, in some neighborhood of M0, f(x0 + Δx, y0 + Δy) < f(x0, y0), i.e. M0 is a maximum point.
2) Let AC − B² > 0, A > 0. Then, in the same way, Δf > 0 for sufficiently small Δρ, and M0 is a minimum point.
3) Let AC − B² < 0, A > 0. Consider the increment of the arguments along the ray φ = 0. Then it follows from (5.1) that Δf = ½Δρ²·A + o(Δρ²) > 0, that is, the function increases when moving along this ray. If instead we move along a ray such that tan φ0 = −A/B, then the square (A cos φ0 + B sin φ0)² vanishes and Δf = ½Δρ²(AC − B²) sin²φ0 / A + o(Δρ²) < 0, so the function decreases along this ray. Hence the point M0 is not an extremum point.
3′) For AC − B² < 0, A < 0, the proof of the absence of an extremum is similar to the previous one.
3″) If AC − B² < 0, A = 0, then B ≠ 0 and the expression in parentheses in the Taylor formula equals (2B cos φ + C sin φ) sin φ. For sufficiently small φ, the expression 2B cos φ + C sin φ is close to 2B, that is, it retains a constant sign, while sin φ changes sign in the vicinity of φ = 0. This means that the increment of the function changes sign in any neighborhood of the stationary point, which is therefore not an extremum point.
4) If AC − B² = 0, then by (5.1) the expression in parentheses equals (A cos φ + B sin φ)²/A, so the sign of the increment is determined by the sign of A except along the direction where A cos φ + B sin φ = 0; there, further investigation is needed to settle the question of the existence of an extremum.
Example. Let us find the extremum points of the function z = x² − 2xy + 2y² + 2x. To find the stationary points, we solve the system z_x = 2x − 2y + 2 = 0, z_y = −2x + 4y = 0. So, the stationary point is (−2, −1). Here A = 2, B = −2, C = 4. Then AC − B² = 4 > 0; therefore, an extremum is attained at the stationary point, namely a minimum (since A > 0).
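The numbers in this example can be checked with a short symbolic computation (sympy assumed available):

```python
# Re-deriving the example z = x^2 - 2xy + 2y^2 + 2x.
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x**2 - 2*x*y + 2*y**2 + 2*x

pt = sp.solve([sp.diff(z, x), sp.diff(z, y)], [x, y])   # stationary point
A = sp.diff(z, x, 2).subs(pt)
B = sp.diff(z, x, y).subs(pt)
C = sp.diff(z, y, 2).subs(pt)
print(pt, A*C - B**2)   # {x: -2, y: -1}, AC - B^2 = 4 > 0 with A = 2 > 0
```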
Definition 5.4. If the arguments of a function f(x1, x2, …, xn) are bound by additional conditions in the form of m equations (m < n):
φ1(x1, x2, …, xn) = 0, φ2(x1, x2, …, xn) = 0, …, φm(x1, x2, …, xn) = 0, (5.2)
where the functions φi have continuous partial derivatives, then equations (5.2) are called constraint equations.
Definition 5.5. An extremum of the function f(x1, x2, …, xn) under conditions (5.2) is called a conditional extremum.
Comment. We can propose the following geometric interpretation of the conditional extremum of a function of two variables: let the arguments of the function f(x, y) be related by the equation φ(x, y) = 0, defining some curve in the plane Oxy. Erecting a perpendicular to the plane Oxy from each point of this curve up to its intersection with the surface z = f(x, y), we obtain a spatial curve lying on the surface above the curve φ(x, y) = 0. The problem is to find the extremum points of the resulting curve, which, of course, in the general case do not coincide with the unconditional extremum points of the function f(x, y).
Let us derive the necessary conditions for a conditional extremum of a function of two variables, first introducing the following definition:
Definition 5.6. The function
L(x1, x2, …, xn) = f(x1, x2, …, xn) + λ1φ1(x1, x2, …, xn) + λ2φ2(x1, x2, …, xn) + … + λmφm(x1, x2, …, xn), (5.3)
where the λi are some constants, is called the Lagrange function, and the numbers λi the indefinite Lagrange multipliers.
Theorem 5.3 (necessary conditions for a conditional extremum). A conditional extremum of the function z = f(x, y) in the presence of the constraint equation φ(x, y) = 0 can be attained only at stationary points of the Lagrange function L(x, y) = f(x, y) + λφ(x, y).
Proof. The constraint equation defines an implicit dependence of y on x, so we assume that y is a function of x: y = y(x). Then z is a composite function of x, and its critical points are determined by the condition
dz/dx = f_x + f_y y′ = 0. (5.4)
From the constraint equation it follows that
φ_x + φ_y y′ = 0. (5.5)
We multiply equality (5.5) by some number λ and add it to (5.4). We get:
(f_x + λφ_x) + (f_y + λφ_y) y′ = 0.
The last equality must hold at stationary points, whence:
f_x + λφ_x = 0, f_y + λφ_y = 0, φ(x, y) = 0. (5.6)
A system of three equations for three unknowns is obtained: x, y and λ, with the first two equations being the conditions for the stationary point of the Lagrange function. Eliminating the auxiliary unknown λ from system (5.6), we find the coordinates of the points at which the original function can have a conditional extremum.
Remark 1. The presence of a conditional extremum at the found point can be checked by studying the second-order partial derivatives of the Lagrange function by analogy with Theorem 5.2.
Remark 2. The points at which the function f(x1, x2, …, xn) can attain a conditional extremum under conditions (5.2) can be determined as solutions of the system
∂L/∂xi = 0 (i = 1, …, n), φk(x1, x2, …, xn) = 0 (k = 1, …, m). (5.7)
Example. Find the conditional extremum of the function z = xy given that x + y = 1. Compose the Lagrange function L(x, y) = xy + λ(x + y − 1). System (5.6) then looks like this:
y + λ = 0, x + λ = 0, x + y = 1.
Whence −2λ = 1, λ = −0.5, x = y = −λ = 0.5. Moreover, on the constraint line z = xy = 0.25 − 0.25(x − y)² ≤ 0.25, so the found stationary point gives the function z = xy a conditional maximum, equal to 0.25.
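A quick symbolic check of this example (sympy assumed available):

```python
# Conditional extremum of z = xy under x + y = 1 via the Lagrange function.
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
L = x*y + lam*(x + y - 1)

sols = sp.solve([sp.diff(L, x), sp.diff(L, y), sp.diff(L, lam)],
                [x, y, lam], dict=True)
print(sols)   # x = y = 1/2, lambda = -1/2; the conditional maximum is z = 1/4
```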
Conditional extremum.
Extrema of a Function of Several Variables
Least squares method.
Local extremum of a function of several variables (FNP)
Let a function u = f(P), P ∈ D ⊂ Rⁿ, be given, and let the point P0(a1, a2, …, an) be an interior point of the set D.
Definition 9.4.
1) The point P0 is called a maximum point of the function u = f(P) if there exists a neighborhood U(P0) ⊂ D of this point such that for any point P(x1, x2, …, xn) ∈ U(P0), P ≠ P0, the condition f(P) ≤ f(P0) holds. The value f(P0) of the function at a maximum point is called the maximum of the function and is denoted f(P0) = max f(P).
2) The point P0 is called a minimum point of the function u = f(P) if there exists a neighborhood U(P0) ⊂ D of this point such that for any point P(x1, x2, …, xn) ∈ U(P0), P ≠ P0, the condition f(P) ≥ f(P0) holds. The value f(P0) of the function at a minimum point is called the minimum of the function and is denoted f(P0) = min f(P).
The minimum and maximum points of a function are called its extremum points, and the values of the function at the extremum points are called the extrema of the function.
As follows from the definition, the inequalities f(P) ≤ f(P0), f(P) ≥ f(P0) need hold only in a certain neighborhood of the point P0, not on the entire domain of the function, which means that the function can have several extrema of the same type (several minima, several maxima). Therefore, the extrema defined above are called local extrema.
Theorem 9.1. (necessary condition for the extremum of the FNP)
If the function u = f(x1, x2, …, xn) has an extremum at the point P0, then its first-order partial derivatives at this point either equal zero or do not exist.
Proof. Let the function u = f(P) have an extremum, say a maximum, at the point P0(a1, a2, …, an). Fix the arguments x2, …, xn by setting x2 = a2, …, xn = an. Then u = f(P) = f1(x1, a2, …, an) is a function of the single variable x1. Since this function has an extremum (a maximum) at x1 = a1, f1′(a1) = 0 or does not exist (the necessary condition for an extremum of a function of one variable). But f1′(a1) = ∂f/∂x1(P0), so ∂f/∂x1 equals zero or does not exist at the extremum point P0. The partial derivatives with respect to the other variables are considered similarly. Q.E.D.
The points of the domain of a function at which the first-order partial derivatives equal zero or do not exist are called critical points of this function.
As Theorem 9.1 shows, the extremum points of a function of several variables should be sought among its critical points. But, as for a function of one variable, not every critical point is an extremum point.
Theorem 9.2
Let P0 be a critical point of the function u = f(P), and let d²u(P0) be the second-order differential of this function at P0. Then:
a) if d²u(P0) > 0 for increments of the arguments that are not all zero, then P0 is a minimum point of the function u = f(P);
b) if d²u(P0) < 0 under the same condition, then P0 is a maximum point of the function u = f(P);
c) if d²u(P0) is not of definite sign, then P0 is not an extremum point.
We state this theorem without proof.
Note that the theorem does not consider the case when d²u(P0) = 0 or does not exist. This means that under such conditions the question of the presence of an extremum at the point P0 remains open; additional studies are needed, for example, a study of the increment of the function at this point.
In more detailed mathematics courses it is proved that, in particular, for a function z = f(x, y) of two variables, whose second-order differential is a sum of the form
d²z = z_xx dx² + 2 z_xy dx dy + z_yy dy²,
the study of the presence of an extremum at the critical point P0 can be simplified.
Denote A = z_xx(P0), B = z_xy(P0), C = z_yy(P0) and compose the determinant of the matrix with rows (A, B) and (B, C):
D(P0) = AC − B².
It turns out that:
d²z > 0 at the point P0, i.e. P0 is a minimum point, if A(P0) > 0 and D(P0) > 0;
d²z < 0 at the point P0, i.e. P0 is a maximum point, if A(P0) < 0 and D(P0) > 0;
if D(P0) < 0, then d²z changes sign in the vicinity of the point P0 and there is no extremum at P0;
if D(P0) = 0, then additional study of the function in the vicinity of the critical point P0 is likewise required.
Thus, for a function z = f(x, y) of two variables we have the following algorithm (let us call it "algorithm D") for finding an extremum:
1) Find the domain of definition D(f) of the function.
2) Find the critical points, i.e. the points of D(f) at which the partial derivatives ∂z/∂x and ∂z/∂y equal zero or do not exist.
3) At each critical point P0, check the sufficient conditions for an extremum. To do this, find A = z_xx(P0), B = z_xy(P0), C = z_yy(P0) and calculate D(P0) = AC − B². Then:
if D(P0) > 0, there is an extremum at the point P0; moreover, if A(P0) > 0 it is a minimum, and if A(P0) < 0 it is a maximum;
if D(P0) < 0, there is no extremum at the point P0;
if D(P0) = 0, additional studies are needed.
4) Calculate the value of the function at the found extremum points.
Example 1.
Find the extrema of the function z = x³ + 8y³ − 3xy.
Solution. The domain of this function is the entire coordinate plane. Let us find the critical points:
z_x = 3x² − 3y = 0, z_y = 24y² − 3x = 0 ⇒ P0(0, 0), P1(1/2, 1/4).
Let us check the fulfillment of the sufficient extremum conditions. We find
z_xx = 6x, z_xy = −3, z_yy = 48y and D = z_xx·z_yy − z_xy² = 288xy − 9.
Then D(P0) = 288·0·0 − 9 = −9 < 0, so there is no extremum at the point P0.
D(P1) = 36 − 9 = 27 > 0, so there is an extremum at the point P1, and since A(P1) = 3 > 0, this extremum is a minimum. So min z = z(P1) = −1/8.
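The computation in Example 1 can be sketched and verified symbolically (sympy assumed; the complex roots of the system are filtered out):

```python
# Checking Example 1: z = x^3 + 8y^3 - 3xy.
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x**3 + 8*y**3 - 3*x*y

pts = sp.solve([sp.diff(z, x), sp.diff(z, y)], [x, y], dict=True)
real_pts = [p for p in pts if p[x].is_real and p[y].is_real]

D = sp.diff(z, x, 2)*sp.diff(z, y, 2) - sp.diff(z, x, y)**2   # 288xy - 9
for p in real_pts:
    print((p[x], p[y]), D.subs(p), z.subs(p))
# (0, 0): D = -9 < 0, no extremum; (1/2, 1/4): D = 27 > 0, minimum z = -1/8
```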
Example 2.
Find the extrema of the function z = x² + ∛y.
Solution: D(f) = R². Critical points: z_x = 2x = 0; z_y = 1/(3·∛(y²)) does not exist at y = 0, so P0(0, 0) is the critical point of this function.
z_xx = 2, z_xy = 0, z_yy = −(2/9)·y^(−5/3), so D(P0) is not defined and its sign cannot be studied.
For the same reason, Theorem 9.2 cannot be applied directly: d²z does not exist at this point.
Consider the increment of the function f(x, y) at the point P0. If Δf = f(P) − f(P0) > 0 for all P near P0, then P0 is a minimum point; if Δf < 0, then P0 is a maximum point.
We have in our case
Δf = f(x, y) − f(0, 0) = f(0 + Δx, 0 + Δy) − f(0, 0) = Δx² + ∛(Δy).
At Δx = 0.1 and Δy = −0.008 we get Δf = 0.01 − 0.2 < 0, while at Δx = 0.1 and Δy = 0.001 we get Δf = 0.01 + 0.1 > 0. Thus, in any neighborhood of the point P0 neither the condition Δf < 0 (i.e. f(x, y) < f(0, 0), which would make P0 a maximum point) nor the condition Δf > 0 (i.e. f(x, y) > f(0, 0), which would make P0 a minimum point) holds everywhere. Hence, by the definition of an extremum, this function has no extrema.
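The formula for f is garbled in the text; the quoted increments (0.01 − 0.2 and 0.01 + 0.1) are consistent with f(x, y) = x² + ∛y, which is assumed in this numeric check:

```python
# Numeric check of the sign change of the increment at (0, 0).
# f(x, y) = x^2 + cbrt(y) is an assumption reconstructed from the increments
# quoted in the text.
def f(x, y):
    cbrt = abs(y) ** (1 / 3) * (1 if y >= 0 else -1)   # real cube root
    return x ** 2 + cbrt

d1 = f(0.1, -0.008) - f(0, 0)   # 0.01 - 0.2 < 0
d2 = f(0.1, 0.001) - f(0, 0)    # 0.01 + 0.1 > 0
print(d1, d2)   # opposite signs, so (0, 0) is not an extremum point
```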
Conditional extremum.
The extremum considered above is called unconditional, since no restrictions (conditions) are imposed on the arguments of the function.
Definition 9.2. An extremum of the function u = f(x1, x2, …, xn), found under the condition that its arguments x1, x2, …, xn satisfy the equations φ1(x1, x2, …, xn) = 0, …, φm(x1, x2, …, xn) = 0, where P(x1, x2, …, xn) ∈ D(f), is called a conditional extremum.
The equations φk(x1, x2, …, xn) = 0, k = 1, 2, …, m, are called constraint equations.
Consider a function z = f(x, y) of two variables. If there is only one constraint equation φ(x, y) = 0, then finding a conditional extremum means that the extremum is sought not on the entire domain of the function but on some curve lying in D(f) (i.e., what is sought is not the highest or lowest points of the surface z = f(x, y), but the highest or lowest points among the points of intersection of this surface with the cylinder φ(x, y) = 0; Fig. 5).
A conditional extremum of a function z = f(x, y) of two variables can be found in the following way (the elimination method). From the constraint equation, express one of the variables as a function of the other (for example, write y = ψ(x)) and, substituting this expression into the function, write the latter as a function of one variable (in the case considered, z = f(x, ψ(x))). Find the extremum of the resulting function of one variable.
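A minimal sketch of the elimination method on an assumed example (z = x² + y² with constraint x + y = 2; neither is from the text):

```python
# Elimination method: substitute the constraint into z, then study one variable.
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x**2 + y**2                    # assumed objective function
z1 = z.subs(y, 2 - x)              # constraint x + y = 2 gives y = 2 - x
xc = sp.solve(sp.diff(z1, x), x)   # stationary point of the 1-variable function
print(xc, z1.subs(x, xc[0]))       # x = 1, conditional minimum z = 2
```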
A sufficient condition for an extremum of a function of two variables
1. Let the function z = f(x, y) be continuously differentiable in some neighborhood of the point (x0, y0) and have continuous second-order partial derivatives (pure and mixed) there.
2. Denote by Δ the second-order determinant Δ = f_xx·f_yy − f_xy², evaluated at (x0, y0).
Theorem
If the point with coordinates (x0, y0) is a stationary point of the function f(x, y), then:
a) if Δ > 0 at this point, it is a local extremum point: a local maximum when f_xx < 0 and a local minimum when f_xx > 0;
b) if Δ < 0, the point is not a local extremum point;
c) if Δ = 0, either case is possible.
Proof
We write the Taylor formula for the function f(x, y), limiting ourselves to the terms through second order:
f(x0 + Δx, y0 + Δy) = f(x0, y0) + f_x Δx + f_y Δy + ½(f_xx Δx² + 2 f_xy ΔxΔy + f_yy Δy²) + o(ρ²), where ρ = √(Δx² + Δy²).
Since, by the hypothesis of the theorem, the point is stationary, the first-order partial derivatives there equal zero, i.e. f_x(x0, y0) = 0 and f_y(x0, y0) = 0. Then
Δf = ½(f_xx Δx² + 2 f_xy ΔxΔy + f_yy Δy²) + o(ρ²).
Denote A = f_xx(x0, y0), B = f_xy(x0, y0), C = f_yy(x0, y0).
Then the increment of the function takes the form:
Δf = ½(A Δx² + 2B ΔxΔy + C Δy²) + o(ρ²).
Owing to the continuity of the second-order partial derivatives (pure and mixed) at the point, the remainder o(ρ²) is arbitrarily small in comparison with the quadratic terms for sufficiently small Δx, Δy.
1. Let Δ = AC − B² > 0, and hence A ≠ 0.
2. Multiply and divide the increment of the function by A; we get:
Δf ≈ (1/(2A))(A² Δx² + 2AB ΔxΔy + AC Δy²).
3. Complete the expression in brackets to the full square of a sum:
A² Δx² + 2AB ΔxΔy + AC Δy² = (A Δx + B Δy)² + (AC − B²) Δy².
4. The expression in brackets is non-negative, since AC − B² > 0.
5. Therefore, if A > 0, then Δf > 0 for all sufficiently small increments and, according to the definition, the point is a local minimum point.
6. If A < 0, then Δf < 0, and the point with coordinates (x0, y0) is a local maximum point.
2. Now let Δ = AC − B² < 0. Consider the square trinomial t(s) = A + 2Bs + Cs²; its discriminant 4(B² − AC) is positive.
3. Hence there are values s1 and s2 for which the trinomial takes values of opposite signs.
4. Write the total increment of the function at the point, in accordance with the expression obtained in part 1, in the form Δf = ½ Δx² (A + 2Bs + Cs²) + o(ρ²), where s = Δy/Δx is the slope of the ray along which the increment is taken.
5. Owing to the continuity of the second-order partial derivatives, there exists a neighborhood of the point in which, along the ray with slope s1, the sign of Δf coincides with the sign of t(s1) > 0.
6. In any ε-neighborhood of the point, choose a point of this ray; at it, Δf > 0.
7. Arguing similarly for the ray with slope s2, we find in any ε-neighborhood of the point a point at which Δf < 0.
8. Consequently, Δf does not preserve its sign in any neighborhood of the point, and therefore there is no extremum at the point.
Conditional extremum of a function of two variables
When searching for extrema of a function of two variables, problems often arise related to the so-called conditional extremum. This concept can be explained by the example of a function of two variables.
Let a function z = f(x, y) and a line L in the plane Oxy be given. The task is to find a point P(x, y) on the line L at which the value of the function is the largest or smallest compared with its values at the points of the line L located near P. Such points P are called conditional extremum points of the function on the line L. In contrast to the usual extremum point, the value of the function at a conditional extremum point is compared with the values of the function not at all points of some neighborhood of it, but only at those lying on the line L.
It is quite clear that a point of the usual extremum (one also says the unconditional extremum) is also a point of conditional extremum for any line passing through it. The converse, of course, is not true: a conditional extremum point need not be a usual extremum point. Let us illustrate this with an example.
Example 1. The graph of the function is the upper hemisphere (Fig. 2).
Fig. 2.
This function has a maximum at the origin; it corresponds to the vertex M of the hemisphere. If the line L is the straight line passing through the points A and B, then it is geometrically clear that for the points of this line the maximum value of the function is attained at the point lying midway between A and B. This is the conditional extremum (maximum) point of the function on this line; it corresponds to the point M1 on the hemisphere, and the figure shows that there can be no question of an ordinary extremum here.
Note that, in the final part of the problem of finding the largest and smallest values of a function in a closed region, one has to find the extreme values of the function on the boundary of this region, i.e. on some line, and thereby solve a conditional-extremum problem.
Definition 1. One says that the function z = f(x, y) has a conditional or relative maximum (minimum) at a point P0(x0, y0) satisfying the constraint equation φ(x0, y0) = 0 if, for any point (x, y) near P0 satisfying the equation φ(x, y) = 0, the inequality f(x, y) ≤ f(x0, y0) (respectively, f(x, y) ≥ f(x0, y0)) holds.
Definition 2. An equation of the form φ(x, y) = 0 is called a constraint equation.
Theorem
If the functions f(x, y) and φ(x, y) are continuously differentiable in a neighborhood of a point P0, the partial derivative φ_y(P0) ≠ 0, and the point P0 is a conditional extremum point of the function f with respect to the constraint equation φ(x, y) = 0, then the second-order determinant vanishes at this point:
f_x φ_y − f_y φ_x = 0.
Proof
1. Since, by the hypothesis of the theorem, φ_y(P0) ≠ 0 and φ(P0) = 0, in some rectangle around P0 the constraint equation defines an implicit function y = y(x).
The composite function z = f(x, y(x)) then has a local extremum at x0, so dz/dx = 0, i.e.
f_x + f_y y′ = 0. (2)
2. Indeed, this follows from the invariance property of the first-order differential formula.
3. The constraint equation can be represented in the form φ(x, y(x)) ≡ 0, which means
φ_x + φ_y y′ = 0. (3)
4. Multiply equation (2) by φ_y and (3) by −f_y and add them:
f_x φ_y − f_y φ_x = 0,
i.e., the determinant vanishes at P0. Q.E.D.
Corollary
In practice, the search for conditional extremum points of a function of two variables is carried out by solving the system of equations
f_x φ_y − f_y φ_x = 0, φ(x, y) = 0.
Thus, in Example 1 above, the constraint equation allows y to be expressed through x; from this it is easy to check where the function attains its maximum, and the constraint equation then yields the point P found geometrically.
Example 2. Find the conditional extremum points of the function with respect to the given constraint equation.
We find the partial derivatives of the given function and of the constraint equation:
We compose the second-order determinant:
We write down the system of equations for finding the conditional extremum points:
Hence, there are four conditional extremum points of the function.
Example 3. Find the extremum points of the function.
Equating the partial derivatives to zero, we find one stationary point, the origin. Here AC − B² < 0, so the point (0, 0) is not an extremum point. The graph of the function is a hyperbolic paraboloid (Fig. 3), and the figure shows that the point (0, 0) is not an extremum point.
Fig. 3.
The largest and smallest values of a function in a closed region
1. Let the function be defined and continuous in a bounded closed region D.
2. Let the function have finite partial derivatives in this region, except perhaps at individual points of the region.
3. By the Weierstrass theorem, there are points in this region at which the function takes its largest and smallest values.
4. If these points are interior points of the region D, then obviously the function has a maximum or a minimum at them.
5. In this case, the points of interest to us are among the points suspicious for an extremum.
6. However, the function can also take its largest or smallest value on the boundary of the region D.
7. To find the largest (smallest) value of the function in the region D, one must find all interior points suspicious for an extremum, calculate the values of the function at them, and then compare them with the values of the function at the boundary points of the region; the largest of all the values found will be the largest value in the closed region D.
8. The method of finding a local maximum or minimum was considered earlier, in Sections 1.2 and 1.3.
9. It remains to consider the method of finding the maximum and minimum values of the function on the boundary of the region.
10. In the case of a function of two variables, the region usually turns out to be bounded by a curve or by several curves.
11. Along such a curve (or several curves), the variables x and y either depend on one another or both depend on one parameter.
12. Thus, on the boundary the function turns out to depend on one variable.
13. The method of finding the largest value of a function of one variable was discussed earlier.
14. Let the boundary of the region D be given by the parametric equations x = x(t), y = y(t), t1 ≤ t ≤ t2.
Then on this curve the function of two variables becomes a composite function of the parameter t: z = f(x(t), y(t)). For such a function, the largest and smallest values are determined by the method of determining the largest and smallest values of a function of one variable.
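As a sketch of step 14, take f(x, y) = x + y on the closed disk x² + y² ≤ 1 (an assumed example, not from the text): the boundary is the circle x = cos t, y = sin t, and the boundary study reduces to one variable:

```python
# Largest and smallest values of f(x, y) = x + y on the disk x^2 + y^2 <= 1.
# The gradient (1, 1) never vanishes, so there are no interior critical
# points; the extreme values occur on the boundary circle.
import sympy as sp

x, y, t = sp.symbols('x y t', real=True)
f = x + y

g = f.subs({x: sp.cos(t), y: sp.sin(t)})   # boundary as a function of t
# g'(t) = cos t - sin t = 0 has the roots t = pi/4 and t = 5*pi/4 in [0, 2*pi)
cands = [sp.pi/4, 5*sp.pi/4]
vals = sorted(g.subs(t, s) for s in cands)
print(vals)   # smallest value -sqrt(2), largest value sqrt(2)
```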
CONDITIONAL EXTREMUM
The minimum or maximum value attained by a given function (or functional) under the condition that certain other functions (functionals) take values from a given admissible set. If there are no conditions restricting the variation of the independent variables (functions) in the indicated sense, one speaks of an unconditional extremum.
The classical problem on a conditional extremum is the problem of determining the minimum of a function of several variables
f(x1, …, xn)   (1)
under the condition that certain other functions take given values:
g_i(x1, …, xn) = c_i, i = 1, …, m.   (2)
In this problem the set G, to which the values of the vector function g = (g1, …, gm) appearing in the additional conditions (2) must belong, is the fixed point c = (c1, …, cm) of m-dimensional Euclidean space.
If in (2), along with equality signs, inequality signs are admitted,
g_i(x1, …, xn) ≤ c_i,   (3)
this leads to a problem of nonlinear programming. In problem (1), (3), the set G of admissible values of the vector function g is a certain curvilinear polyhedron belonging to the (n − m1)-dimensional hypersurface defined by the m1 (m1 < m) equality-type conditions in (3).
A special case of problem (1), (3) on a conditional extremum is the problem of linear programming, in which all the functions f and g_i under consideration are linear in x1, …, xn. In a linear programming problem, the set G of admissible values of the vector function g, appearing in the conditions restricting the range of the variables x1, …, xn, is a convex polyhedron belonging to the (n − m1)-dimensional hyperplane defined by the m1 equality-type conditions in (3).
Similarly, most optimization problems for functionals that are of practical interest reduce to problems on a conditional extremum (see Isoperimetric problem, Bolza problem, Lagrange problem, Mayer problem).
As in mathematical programming, the main problems of the calculus of variations and of the theory of optimal control are problems on a conditional extremum.
In solving problems on a conditional extremum, especially in the theoretical questions connected with them, it proves very useful to employ indefinite Lagrange multipliers, which make it possible to reduce a problem on a conditional extremum to a problem on an unconditional one and to simplify the necessary optimality conditions. The use of Lagrange multipliers underlies most of the classical methods for solving problems on a conditional extremum.
References: Hadley J., Nonlinear and Dynamic Programming, trans. from English, Moscow, 1967; Bliss G. A., Lectures on the Calculus of Variations, trans. from English, Moscow, 1950; Pontryagin L. S. [et al.], The Mathematical Theory of Optimal Processes, 2nd ed., Moscow, 1969.
I. B. Vapnyarsky.
Mathematical Encyclopedia. Moscow: Soviet Encyclopedia. Ed. I. M. Vinogradov. 1977-1985.