Hindawi Publishing Corporation Abstract and Applied Analysis Volume 2013, Article ID 946910, 9 pages http://dx.doi.org/10.1155/2013/946910

Research Article

Maximum Principle in the Optimal Control Problems for Systems with Integral Boundary Conditions and Its Extension

A. R. Safari,1 M. F. Mekhtiyev,2 and Y. A. Sharifov2,3

1 Meshkin Shahr Branch, Islamic Azad University, Meshkin Shahr 5661668691, Iran

2 Baku State University, Z. Khalilov Street 23, 1148 Baku, Azerbaijan

3 Institute of Cybernetics of Azerbaijan National Academy of Sciences, B. Vahab-Zade Street 9, 1141 Baku, Azerbaijan

Correspondence should be addressed to Y. A. Sharifov; sharifov22@rambler.ru

Received 19 April 2013; Revised 14 June 2013; Accepted 1 July 2013 Academic Editor: Nazim Idrisoglu Mahmudov

Copyright © 2013 A. R. Safari et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The optimal control problem for systems with integral boundary conditions is considered. A sufficient condition is established for the existence and uniqueness of the solution of a class of integral boundary value problems for fixed admissible controls. A first-order necessary optimality condition is obtained in the traditional form of the maximum principle. The second-order variations of the functional are calculated. Using variations of the controls, various second-order optimality conditions are obtained.

1. Introduction

Boundary value problems with integral conditions constitute a very interesting and important class of boundary value problems. They include two-, three-, and multipoint and nonlocal boundary value problems as special cases (see [1-3]). The theory of boundary value problems with integral boundary conditions for ordinary differential equations arises in different areas of applied mathematics and physics. For example, problems of heat conduction, thermoelasticity, chemical engineering, plasma physics, and underground water flow can be reduced to nonlocal problems with integral boundary conditions. For boundary value problems with nonlocal boundary conditions and comments on their importance, we refer the reader to the papers [4, 5] and the references therein.

The role of the Pontryagin maximum principle is critical for any research related to optimal processes with control constraints. The simplicity of the principle's formulation, together with its meaningful and beneficial directness, has made it an extraordinary attraction and one of the major causes for the appearance of new tendencies in the mathematical sciences. The maximum principle is by nature a necessary first-order optimality condition, since it was born as an extension of the Euler-Lagrange and Weierstrass necessary conditions of the calculus of variations.

At present, there exists a great number of works devoted to the derivation of necessary optimality conditions of the first and second order for systems with local conditions (see [6-12] and the references therein).

Since the systems with nonlocal conditions describe real processes, it is necessary to study the optimal control problems with nonlocal boundary conditions.

The optimal control problems with nonlocal boundary conditions have been investigated in [13-25]. Note that optimal control problems with integral boundary conditions are considered and first-order necessary conditions are obtained in [23-25]. In certain cases, the first-order optimality conditions degenerate and are fulfilled trivially on a whole set of admissible controls. In such cases, it is necessary to obtain second-order optimality conditions.

In the present paper, we investigate an optimal control problem in which the state of the system is described by differential equations with integral boundary conditions. Note that this problem is a natural generalization of the Cauchy problem. The questions of existence and uniqueness of solutions of the boundary value problem are investigated, and the first- and second-order increment formulas of the functional are calculated. Using variations of the controls, various optimality conditions of the first and second order are obtained.

The organization of the present paper is as follows. First, we give the statement of the problem. Second, theorems on the existence and uniqueness of a solution of problem (1)-(3) are established under some sufficient conditions on the nonlinear terms. Third, the first-order increment formula of the functional is presented, and Pontryagin's maximum principle is derived. Fourth, the variations of the functional of the first and second order are given. Fifth, the Legendre-Clebsch condition is obtained. Finally, a conclusion is given.

Consider the following system of differential equations with integral boundary condition:

$\dot{x}(t) = f(t, x(t), u(t)), \quad 0 < t < T,$  (1)

$x(0) + \int_0^T m(t)\, x(t)\, dt = C,$  (2)

$u(t) \in U, \quad t \in [0, T],$  (3)

where x(t) e R"; f(t, x, u) is n-dimensional continuous function and has second-order derivative with respect to (x, u); C e R" is the given constant vector; m(t) e Rnxn is nxn matrix function; u is a control parameter; and U c Rr is an bounded set.

It is required to minimize the functional

$J(u) = \varphi(x(0), x(T)) + \int_0^T F(t, x(t), u(t))\, dt$  (4)

subject to (1)-(3).

Here, it is assumed that the scalar functions φ(x, y) and F(t, x, u) are continuous in their arguments and have continuous and bounded partial derivatives with respect to x, y, and u up to second order, inclusively. By the solution of boundary value problem (1)-(3) corresponding to a fixed control parameter u(t), we understand a function x(t): [0, T] → R^n that is absolutely continuous on [0, T]. Denote the space of such functions by AC([0, T], R^n). By C([0, T], R^n), we denote the space of continuous functions on [0, T] with values in R^n. It is obvious that this is a Banach space with the norm

$\|x\|_{C[0,T]} = \max_{t \in [0,T]} |x(t)|,$

where |·| is the norm in the space R^n.

Admissible controls are taken from the class of bounded measurable functions with values in the set U ⊂ R^r. An admissible control together with the corresponding solution of (1), (2) is called an admissible process.

The admissible process {u(t), x(t, u)} that solves problem (1)-(4), that is, delivers the minimum to functional (4) under restrictions (1)-(3), is said to be an optimal process, and u(t) an optimal control.

We suppose the existence of the optimal control in the problem (1)-(4).
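For illustration, the following minimal Python sketch sets up a toy scalar instance of problem (1)-(4) and solves boundary value problem (1)-(2) for a fixed control by shooting on x(0). All concrete choices below (f(t,x,u) = u - x, m(t) = 0.5, C = 1, u(t) = 0.2, T = 1) are assumptions made only for this example, not data from the paper; since f is affine in x, the boundary residual is affine in x(0) and two forward integrations determine its root.

```python
import numpy as np

# Toy scalar instance of (1)-(4); all concrete choices are illustrative assumptions.
T, C, n = 1.0, 1.0, 400
t = np.linspace(0.0, T, n + 1); h = t[1] - t[0]
f = lambda s, x, u: u - x            # right-hand side of (1)
m = lambda s: 0.5                    # kernel of the integral condition (2)
u = lambda s: 0.2                    # a fixed admissible control
m_vals = np.array([m(s) for s in t])

def trap(y):                         # trapezoidal quadrature on the grid t
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * h))

def forward(x0):                     # RK4 integration of (1) from x(0) = x0
    x = np.empty_like(t); x[0] = x0
    for k in range(n):
        k1 = f(t[k], x[k], u(t[k]))
        k2 = f(t[k] + h/2, x[k] + h/2 * k1, u(t[k] + h/2))
        k3 = f(t[k] + h/2, x[k] + h/2 * k2, u(t[k] + h/2))
        k4 = f(t[k] + h, x[k] + h * k3, u(t[k] + h))
        x[k+1] = x[k] + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    return x

def residual(x0):                    # residual of the integral boundary condition (2)
    x = forward(x0)
    return x0 + trap(m_vals * x) - C

# f is affine in x, so the residual is affine in x0: two runs determine its root.
r0, r1 = residual(0.0), residual(1.0)
x0_star = -r0 / (r1 - r0)
print("x(0) =", x0_star, " residual of (2):", residual(x0_star))
```

This is only a numerical sketch of how the nonlocal condition (2) determines the initial value; the general existence theory is developed below.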

2. Existence of Solutions of Boundary Value Problem (1)-(3)

Introduce the following conditions:

(1) Let ‖B‖ < 1, where $B = \int_0^T m(t)\, dt$;

(2) $f : [0,T] \times R^n \times R^r \to R^n$ is a continuous function, and there exists a constant K > 0 such that

$|f(t, x, u) - f(t, y, u)| \le K |x - y|, \quad t \in [0,T], \; x, y \in R^n, \; u \in U;$  (5)

(3) $L = (1 - \|B\|)^{-1} K T N < 1$, where $N = \max_{0 \le t, s \le T} \|N(t,s)\|$  (6)

and

$N(t,s) = \begin{cases} E + \int_0^s m(\tau)\, d\tau, & 0 \le s \le t, \\ -\int_s^T m(\tau)\, d\tau, & t < s \le T. \end{cases}$  (7)
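These conditions are straightforward to check numerically for concrete data. The sketch below evaluates B, the matrix N(t,s), and the contraction constant L of condition (3) for an assumed matrix kernel m(t), Lipschitz constant K, and horizon T (all of these concrete values are assumptions for the example).

```python
import numpy as np

# Numerical check of conditions (1)-(3); m(t), K and T below are assumed example data.
T, K_lip, n = 1.0, 0.5, 400
t = np.linspace(0.0, T, n + 1); h = t[1] - t[0]
m = lambda s: np.exp(-s) * np.array([[0.2, 0.1], [0.0, 0.3]])
m_vals = np.array([m(s) for s in t])
E = np.eye(2)

w = 0.5 * (m_vals[1:] + m_vals[:-1]) * h           # trapezoid slices of the integral of m
B = w.sum(axis=0)                                  # B = \int_0^T m(t) dt
cum = np.concatenate([np.zeros((1, 2, 2)), np.cumsum(w, axis=0)])  # \int_0^s m(tau) dtau

def N(i, j):
    # matrix N(t_i, s_j) from condition (3)
    return E + cum[j] if j <= i else -(cum[-1] - cum[j])

N_max = max(np.linalg.norm(N(i, j), 2)
            for i in range(0, n + 1, 20) for j in range(0, n + 1, 20))
norm_B = np.linalg.norm(B, 2)
L = K_lip * T * N_max / (1.0 - norm_B)
print("||B|| =", norm_B, " N =", N_max, " L =", L, " contraction:", L < 1.0)
```

For these assumed data the printed L is below one, so the fixed-point argument of Theorem 2 below applies.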

Theorem 1. Let condition (1) be satisfied. Then, the function x(·) ∈ C([0,T], R^n) is an absolutely continuous solution of boundary value problem (1)-(3) if and only if

$x(t) = (E + B)^{-1} C + \int_0^T K(t, \tau)\, f(\tau, x(\tau), u(\tau))\, d\tau,$  (8)

where $K(t, \tau) = (E + B)^{-1} N(t, \tau)$.

Proof. Note that under condition (1), the matrix E + B is invertible and the estimate $\|(E + B)^{-1}\| \le (1 - \|B\|)^{-1}$ holds [26, page 78]. If x = x(·) is a solution of differential equation (1), then for t ∈ (0, T)

$x(t) = x(0) + \int_0^t f(s, x(s), u(s))\, ds,$  (9)

where x(0) is still an arbitrary constant. For determining x(0), we require that the function defined by equality (9) satisfies condition (2):

$(E + B)\, x(0) = C - \int_0^T m(t) \int_0^t f(\tau, x(\tau), u(\tau))\, d\tau\, dt.$  (10)

Since det(E + B) ≠ 0, then

$x(0) = (E + B)^{-1} C - (E + B)^{-1} \int_0^T m(t) \int_0^t f(\tau, x(\tau), u(\tau))\, d\tau\, dt.$  (11)

Equality (11) may be written in the following equivalent form:

$x(0) = (E + B)^{-1} C - (E + B)^{-1} \int_0^T \left( \int_t^T m(\tau)\, d\tau \right) f(t, x(t), u(t))\, dt.$  (12)

Now, considering the value of x(0) defined by (12) in (9), we get

$x(t) = (E + B)^{-1} C - (E + B)^{-1} \int_0^T \left( \int_t^T m(\tau)\, d\tau \right) f(t, x(t), u(t))\, dt + \int_0^t f(\tau, x(\tau), u(\tau))\, d\tau.$  (13)

Obviously, the last equality can be written as

$x(t) = (E + B)^{-1} C + \int_0^t \left( E - (E + B)^{-1} \int_s^T m(\tau)\, d\tau \right) f(s, x(s), u(s))\, ds - (E + B)^{-1} \int_t^T \left( \int_s^T m(\tau)\, d\tau \right) f(s, x(s), u(s))\, ds.$  (14)

Since $B = \int_0^T m(t)\, dt$,

$E - (E + B)^{-1} \int_s^T m(\tau)\, d\tau = (E + B)^{-1} \left( E + \int_0^T m(t)\, dt - \int_s^T m(\tau)\, d\tau \right) = (E + B)^{-1} \left( E + \int_0^s m(\tau)\, d\tau \right).$  (15)

Introduce the matrix function

$K(t,s) = \begin{cases} (E + B)^{-1} \left( E + \int_0^s m(\tau)\, d\tau \right), & 0 \le s \le t, \\ -(E + B)^{-1} \int_s^T m(\tau)\, d\tau, & t < s \le T. \end{cases}$  (16)

Then, (14) turns to

$x(t) = (E + B)^{-1} C + \int_0^T K(t, \tau)\, f(\tau, x(\tau), u(\tau))\, d\tau.$  (17)

Thus, we have shown that boundary value problem (1)-(3) may be written in the form of integral equation (8). By direct verification, one can show that the solution of integral equation (8) also satisfies boundary value problem (1)-(3). Theorem 1 is proved. □

For every fixed admissible control, define the operator P : C([0,T], R^n) → C([0,T], R^n) by the rule

$(Px)(t) = (E + B)^{-1} C + \int_0^T K(t, \tau)\, f(\tau, x(\tau), u(\tau))\, d\tau.$  (18)

Theorem 2. Let conditions (1)-(3) be fulfilled. Then, for any C ∈ R^n and for each fixed admissible control, boundary value problem (1)-(3) has a unique solution that satisfies the following integral equation:

$x(t) = (E + B)^{-1} C + \int_0^T K(t, \tau)\, f(\tau, x(\tau), u(\tau))\, d\tau.$  (19)

Proof. Let C ∈ R^n, and let u(t) ∈ U, t ∈ [0,T], be fixed. Consider the mapping P : C([0,T], R^n) → C([0,T], R^n) defined by equality (18). Clearly, the fixed points of this operator are solutions of problem (1), (2). We will use the Banach contraction principle to prove that P defined by (18) has a fixed point. Indeed, for any v, w ∈ C([0,T], R^n), we have

$|(Pv)(t) - (Pw)(t)| \le \int_0^T \|K(t,s)\| \cdot |f(s, v(s), u(s)) - f(s, w(s), u(s))|\, ds \le (1 - \|B\|)^{-1} K T N \|v(\cdot) - w(\cdot)\|_{C[0,T]}, \quad t \in [0,T],$  (20)

$\|Pv - Pw\|_{C[0,T]} \le L \|v - w\|_{C[0,T]}.$  (21)

Estimate (21) shows that the operator P is a contraction in the space C([0,T], R^n). Therefore, according to the contraction mapping principle, the operator P defined by equality (18) has a unique fixed point in C([0,T], R^n). So, integral equation (19), or boundary value problem (1)-(3), has a unique solution. Theorem 2 is proved. □
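The proof is constructive: starting from any continuous function, the iterates of P converge to the solution of (19). The sketch below runs this fixed-point (Picard) iteration on a grid for a toy scalar instance; the concrete data f, m, C, and u are assumptions chosen so that conditions (1)-(3) hold, and the kernel is the scalar version of (16).

```python
import numpy as np

# Picard iteration for the operator P of (18), toy scalar instance (assumed data).
T, C, n = 1.0, 1.0, 200
t = np.linspace(0.0, T, n + 1); h = t[1] - t[0]
f = lambda s, x, u: 0.3 * np.sin(x) + u        # Lipschitz in x with K = 0.3
m = lambda s: 0.5                              # kernel of condition (2)
u = lambda s: 0.2

def trap(y):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * h))

m_vals = np.array([m(s) for s in t])
B = trap(m_vals)                               # scalar case: B = \int_0^T m dt
cum = np.concatenate([[0.0], np.cumsum(0.5 * (m_vals[1:] + m_vals[:-1]) * h)])

def K(i, j):                                   # kernel (16), scalar case (E = 1)
    return (1.0 + cum[j]) / (1.0 + B) if j <= i else -(B - cum[j]) / (1.0 + B)

Kmat = np.array([[K(i, j) for j in range(n + 1)] for i in range(n + 1)])

x = np.zeros(n + 1)                            # arbitrary starting point
for it in range(200):                          # x <- P x, cf. (18)
    fx = np.array([f(t[j], x[j], u(t[j])) for j in range(n + 1)])
    integral = np.sum(0.5 * (Kmat[:, 1:] * fx[1:] + Kmat[:, :-1] * fx[:-1]) * h, axis=1)
    x_new = C / (1.0 + B) + integral
    delta, x = np.max(np.abs(x_new - x)), x_new
    if delta < 1e-12:
        break

print("iterations:", it + 1, " boundary residual of (2):", x[0] + trap(m_vals * x) - C)
```

The small printed boundary residual (up to quadrature error) illustrates that the fixed point of (18) indeed satisfies the nonlocal condition (2).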

3. First-Order Optimality Condition

In this section, we assume that U is a closed set in R^r. In order to obtain the necessary conditions for optimality, we will use the standard procedure (see, e.g., [7]). Namely, we analyze the change of the objective functional caused by some control impulse. In other words, we must derive the increment formula that originates from a Taylor series expansion. A suitable definition of the conjugate system facilitates the extraction of the dominant term that determines the necessary condition for optimality. For the sake of simplicity, it is reasonable to construct a linearized model of nonlinear system (8), (9) in some small vicinity.

3.1. Increment Formula. Let $\{u, x = x(t,u)\}$ and $\{\bar{u} = u + \Delta u, \bar{x} = x + \Delta x = x(t, \bar{u})\}$ be two admissible processes. From problem (1)-(3) we obtain the boundary value problem for the increments:

$\Delta \dot{x} = \Delta f(t, x, u), \quad t \in [0,T], \qquad \Delta x(0) + \int_0^T m(t)\, \Delta x(t)\, dt = 0,$  (22)

where $\Delta f(t, x, u) = f(t, \bar{x}, \bar{u}) - f(t, x, u)$ denotes the total increment of the function f(t, x, u). Then, we can represent the increment of the functional in the form

$\Delta J(u) = J(\bar{u}) - J(u) = \Delta \varphi(x(0), x(T)) + \int_0^T \Delta F(t, x, u)\, dt.$  (23)

Let us introduce a nontrivial vector function ψ(t) ∈ R^n and a numerical vector λ ∈ R^n. Then, the increment of functional (4) may be represented as

$\Delta J(u) = J(\bar{u}) - J(u) = \Delta \varphi(x(0), x(T)) + \int_0^T \Delta F(t, x, u)\, dt + \int_0^T \langle \psi(t), \Delta \dot{x}(t) - \Delta f(t, x, u) \rangle\, dt + \left\langle \lambda, \Delta x(0) + \int_0^T m(t)\, \Delta x(t)\, dt \right\rangle.$  (24)

After some operations usually used in deriving first-order optimality conditions, we get the following formulas for the increment of the functional:

$\Delta J(u) = -\int_0^T \Delta_{\bar{u}} H(t, \psi, x, u)\, dt - \int_0^T \left\langle \Delta_{\bar{u}} \frac{\partial H(t, \psi, x, u)}{\partial x}, \Delta x(t) \right\rangle dt - \int_0^T \left\langle \dot{\psi}(t) + \frac{\partial H(t, \psi, x, u)}{\partial x} - m'(t)\lambda, \Delta x(t) \right\rangle dt + \left\langle \frac{\partial \varphi}{\partial x(0)} - \psi(0) + \lambda, \Delta x(0) \right\rangle + \left\langle \frac{\partial \varphi}{\partial x(T)} + \psi(T), \Delta x(T) \right\rangle + \eta_{\bar{u}},$  (25)

where

$\eta_{\bar{u}} = o_{\varphi}(\|\Delta x(0)\|, \|\Delta x(T)\|) - \int_0^T o_H(\|\Delta x(t)\|)\, dt,$

$H(t, \psi, x, u) = \langle \psi, f(t, x, u) \rangle - F(t, x, u).$  (26)

Here $\Delta_{\bar{u}} H(t, \psi, x, u) = H(t, \psi, x, \bar{u}) - H(t, \psi, x, u)$ is the partial increment of H with respect to the control, and m'(t) denotes the transpose of the matrix m(t).

Suppose that the vector function ψ(t) ∈ R^n and the vector λ ∈ R^n are a solution of the following conjugate problem (the stationarity condition of the Lagrangian function with respect to the state):

$\dot{\psi}(t) = -\frac{\partial H(t, \psi, x, u)}{\partial x} + m'(t)\lambda, \quad t \in [0,T],$

$\frac{\partial \varphi}{\partial x(0)} - \psi(0) + \lambda = 0, \qquad \frac{\partial \varphi}{\partial x(T)} + \psi(T) = 0.$  (27)

Then, increment formula (25) takes the form

$\Delta J(u) = -\int_0^T \Delta_{\bar{u}} H(t, \psi, x, u)\, dt - \int_0^T \left\langle \Delta_{\bar{u}} \frac{\partial H(t, \psi, x, u)}{\partial x}, \Delta x(t) \right\rangle dt + \eta_{\bar{u}}.$  (28)
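The conjugate problem (27) couples the backward equation for ψ with the unknown multiplier λ through the boundary conditions. The sketch below illustrates one way to solve it numerically: for a given λ, integrate backward from ψ(T), then choose λ so that the transversality condition at t = 0 holds. All concrete data (f, F, φ, m, and the state trajectory x(t)) are assumptions for the illustration, and the adjoint system is taken in the form written in (27) above; this is not the authors' algorithm, only a minimal numerical sketch.

```python
import numpy as np

# Shooting in lambda for the conjugate problem (27), toy scalar instance (assumed data):
# f = u - x, F = x^2 + u^2, phi = x(0)^2 + x(T)^2, m(t) = 0.5, and a given (not
# necessarily optimal) state trajectory x(t) = exp(-t).
# Then H = psi*(u - x) - x^2 - u^2, so dH/dx = -psi - 2x.
T, n = 1.0, 400
t = np.linspace(0.0, T, n + 1); h = t[1] - t[0]
x = np.exp(-t)
m = 0.5 * np.ones_like(t)

def psi_backward(lam):
    """Integrate psi' = -dH/dx + m(t)*lam backward from psi(T) = -dphi/dx(T)."""
    rhs = lambda p, j: p + 2.0 * x[j] + m[j] * lam     # = -dH/dx + m*lambda
    psi = np.empty_like(t)
    psi[-1] = -2.0 * x[-1]                             # psi(T) = -dphi/dx(T)
    for k in range(n, 0, -1):                          # Heun steps backward in time
        p_pred = psi[k] - h * rhs(psi[k], k)
        psi[k-1] = psi[k] - 0.5 * h * (rhs(psi[k], k) + rhs(p_pred, k - 1))
    return psi

def residual(lam):
    """Transversality at t = 0: dphi/dx(0) - psi(0) + lambda = 0."""
    return 2.0 * x[0] - psi_backward(lam)[0] + lam

r0, r1 = residual(0.0), residual(1.0)                  # residual is affine in lambda
lam_star = -r0 / (r1 - r0)
print("lambda =", lam_star, " transversality residual =", residual(lam_star))
```

Because the adjoint equation is linear in ψ and λ, the residual is affine in λ and two backward integrations suffice; for nonlinear data a root-finding step on λ would replace the last two lines.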

3.2. The Maximum Principle. Let us consider the formula for the increment of the functional on a needle-shaped variation of the admissible control. As parameters, we take a point τ ∈ (0,T], a number ε ∈ (0,τ], and a vector v ∈ U. The variation interval (τ − ε, τ] belongs to [0,T]. The needle-shaped variation of the control u = u(t) is given as follows:

$\bar{u}(t) = u_{\varepsilon}(t) = \begin{cases} v \in U, & t \in (\tau - \varepsilon, \tau] \subset [0,T], \; \varepsilon > 0, \\ u(t), & t \notin (\tau - \varepsilon, \tau]. \end{cases}$  (29)

A traditional form of the necessary optimality condition will follow from increment formula (28) if we show that on the needle-shaped variation $\bar{u}(t) = u_{\varepsilon}(t)$ the state increment $\Delta_{\varepsilon} x(t)$ has order ε.

Indeed, it follows from conditions (1)-(3) and equalities (19) and (22) that

$\Delta_{\varepsilon} x(t) = \int_0^T K(t,\tau) \left[ f(\tau, x + \Delta_{\varepsilon} x, u_{\varepsilon}) - f(\tau, x, u_{\varepsilon}) \right] d\tau + \int_0^T K(t,\tau)\, \Delta_{\varepsilon} f(\tau, x, u)\, d\tau.$  (30)

From this, we obtain

$\|\Delta_{\varepsilon} x(t)\| \le (1 - L)^{-1} (1 - \|B\|)^{-1} N \int_0^T \|\Delta_{\varepsilon} f(t, x, u)\|\, dt,$  (31)

which proves our hypothesis on the response of the state increment caused by the needle-shaped variation (29):

$\|\Delta_{\varepsilon} x(t)\| \le L_1 \varepsilon, \quad t \in [0,T], \quad L_1 = \mathrm{const} > 0.$  (32)

This also implies that for $\bar{u}(t) = u_{\varepsilon}(t)$,

$\int_0^T \left\langle \Delta_{\varepsilon} \frac{\partial H(t, \psi, x, u)}{\partial x}, \Delta_{\varepsilon} x(t) \right\rangle dt = o(\varepsilon),$  (33)

since

$\Delta_{\varepsilon} x(t) = x(t, u_{\varepsilon}) - x(t, u) \sim \varepsilon.$  (34)

Therefore, the change of the objective functional caused by the needle-shaped variation (29) can be represented, according to (28), as

$\Delta_{\varepsilon} J(u) = J(u_{\varepsilon}) - J(u) = -\Delta_v H(\tau, \psi, x, u)\, \varepsilon + o(\varepsilon), \quad v \in U, \; \tau \in (0,T].$  (35)

It should be noted that in the last expression, we used the mean value theorem.
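Estimate (32) is easy to observe numerically. The sketch below perturbs a fixed control by the needle-shaped variation (29) on a shrinking interval and records the maximal state response; all concrete data (f, m, C, u, the spike point τ and value v) are assumptions for the illustration, and the boundary value problem is solved by the same shooting device as earlier.

```python
import numpy as np

# Observing estimate (32): the state response to the needle variation (29) is O(eps).
# Toy assumed data: f = u - x, m = 0.5, C = 1, u(t) = 0.2, spike value v = 1 at tau = 0.7.
T, C, n = 1.0, 1.0, 2000
t = np.linspace(0.0, T, n + 1); h = t[1] - t[0]
m = 0.5 * np.ones_like(t)

def solve_bvp(u_vals):
    """Solve (1)-(2) by shooting on x(0); f is affine in x(0), so two runs suffice."""
    def forward(x0):
        x = np.empty_like(t); x[0] = x0
        for k in range(n):                                   # midpoint (RK2) steps
            xm = x[k] + 0.5 * h * (u_vals[k] - x[k])
            x[k+1] = x[k] + h * (0.5 * (u_vals[k] + u_vals[k+1]) - xm)
        return x
    def residual(x0):
        x = forward(x0)
        return x0 + np.sum(0.5 * (m[1:]*x[1:] + m[:-1]*x[:-1]) * h) - C
    r0, r1 = residual(0.0), residual(1.0)
    return forward(-r0 / (r1 - r0))

u0 = 0.2 * np.ones_like(t)
x0 = solve_bvp(u0)
tau, v = 0.7, 1.0
for eps in (0.1, 0.05, 0.025):
    u_eps = u0.copy()
    u_eps[(t > tau - eps) & (t <= tau)] = v                  # needle variation (29)
    x_eps = solve_bvp(u_eps)
    print("eps =", eps, " max|x_eps - x| =", np.max(np.abs(x_eps - x0)))
```

The printed maxima shrink roughly proportionally to ε, which is exactly the behavior used to pass from (28) to (35).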

For the needle-shaped variation of the optimal process {u⁰, x⁰ = x(t, u⁰)}, increment formula (35), with regard to estimate (32), implies the necessary optimality condition in the form of the maximum principle.

Theorem 3 (maximum principle). Suppose that the admissible process {u⁰, x⁰ = x(t, u⁰)} is optimal for problem (1)-(4) and ψ⁰(t) is the solution of conjugate boundary value problem (27) calculated on the optimal process. Then, for all τ ∈ (0,T], the following inequality holds:

$\Delta_v H(\tau, \psi^0, x^0, u^0) \le 0 \quad \text{for every } v \in U.$  (36)

Remark 4. If the function f is linear with respect to (x, u) and the functions φ, F are convex with respect to (x(0), x(T)) and x(t), respectively, then maximum principle (36) is both a necessary and a sufficient optimality condition. This fact follows from the increment formula

$\Delta J(u) = -\int_0^T \Delta_{\bar{u}} H(t, \psi, x, u)\, dt + o_{\varphi}(\|\Delta x(0)\|, \|\Delta x(T)\|) + \int_0^T o_F(\|\Delta x(t)\|)\, dt,$  (37)

where $o_{\varphi} \ge 0$, $o_F \ge 0$.

4. Variations of the Functional and Derivation of the Legendre-Clebsch Conditions

Let the set U ⊂ R^r be open. Since the functions φ(x, y), F(t, x, u), and f(t, x, u) are continuous in their arguments and have continuous and bounded partial derivatives with respect to x, y, and u up to second order, inclusively, increment formula (28) takes the form

$\Delta J(u) = -\int_0^T \left\langle \frac{\partial H(t, \psi, x, u)}{\partial u}, \Delta u(t) \right\rangle dt - \int_0^T \left[ \frac{1}{2} \left\langle \Delta u'(t)\, \frac{\partial^2 H(t, \psi, x, u)}{\partial u^2}, \Delta u(t) \right\rangle + \left\langle \Delta u'(t)\, \frac{\partial^2 H(t, \psi, x, u)}{\partial x\, \partial u}, \Delta x(t) \right\rangle + \frac{1}{2} \left\langle \Delta x'(t)\, \frac{\partial^2 H(t, \psi, x, u)}{\partial x^2}, \Delta x(t) \right\rangle \right] dt + \frac{1}{2} \left[ \left\langle \Delta x'(0)\, \frac{\partial^2 \varphi}{\partial x(0)^2}, \Delta x(0) \right\rangle + 2 \left\langle \Delta x'(0)\, \frac{\partial^2 \varphi}{\partial x(0)\, \partial x(T)}, \Delta x(T) \right\rangle + \left\langle \Delta x'(T)\, \frac{\partial^2 \varphi}{\partial x(T)^2}, \Delta x(T) \right\rangle \right] + \tilde{\eta}_{\bar{u}},$  (38)

where

$\tilde{\eta}_{\bar{u}} = -\int_0^T o_H\!\left( \|\Delta x(t)\|^2 + \|\Delta u(t)\|^2 \right) dt + o_{\varphi}\!\left( \|\Delta x(0)\|^2, \|\Delta x(T)\|^2 \right).$  (39)

Let now Δu(t) = εδu(t), where ε > 0 is a sufficiently small number and δu(t) is some piecewise continuous function. Then, the increment of the functional ΔJ(u) = J(ū) − J(u) for the fixed functions u(t), δu(t) is a function of the parameter ε. If the representation

$\Delta J(u) = \varepsilon\, \delta J(u) + \frac{1}{2}\, \varepsilon^2\, \delta^2 J(u) + o(\varepsilon^2)$  (40)

is valid, then δJ(u) is called the first and δ²J(u) the second variation of the functional. Below we obtain explicit expressions for the first and second variations. To this end, we have to extract from Δx(t) the principal term with respect to ε.

Assume that

$\Delta x(t) = \varepsilon\, \delta x(t) + o(\varepsilon; t),$  (41)

where δx(t) is the variation of the trajectory. Such a representation exists, and for the function δx(t) one can obtain an equation in variations. Indeed, by the definition of Δx(t), we have

$\Delta x(t) = \int_0^T K(t,\tau)\, \Delta f(\tau, x(\tau), u(\tau))\, d\tau.$  (42)

Applying the Taylor formula to the integrand, we get

$\varepsilon\, \delta x(t) + o(\varepsilon; t) = \int_0^T K(t,\tau) \left\{ \frac{\partial f(\tau, x, u)}{\partial x} \left[ \varepsilon\, \delta x(\tau) + o(\varepsilon, \tau) \right] + \frac{\partial f(\tau, x, u)}{\partial u}\, \varepsilon\, \delta u(\tau) + o_1(\varepsilon, \tau) \right\} d\tau.$  (43)

Since this formula is true for any ε, we obtain

$\delta x(t) = \int_0^T K(t,\tau) \left[ \frac{\partial f(\tau, x, u)}{\partial x}\, \delta x(\tau) + \frac{\partial f(\tau, x, u)}{\partial u}\, \delta u(\tau) \right] d\tau.$  (44)

Equation (44) is said to be an equation in variations. Obviously, integral equation (44) is equivalent to the following nonlocal boundary value problem:

$\delta \dot{x}(t) = \frac{\partial f(t, x, u)}{\partial x}\, \delta x(t) + \frac{\partial f(t, x, u)}{\partial u}\, \delta u(t),$  (45)

$\delta x(0) + \int_0^T m(t)\, \delta x(t)\, dt = 0.$  (46)

By [6, page 527], any solution of differential equation (45) may be represented in the form

$\delta x(t) = \Phi(t)\, \delta x(0) + \Phi(t) \int_0^t \Phi^{-1}(\tau)\, \frac{\partial f(\tau, x, u)}{\partial u}\, \delta u(\tau)\, d\tau,$  (47)

where Φ(t) is a solution of the following differential equation:

$\frac{d\Phi(t)}{dt} = \frac{\partial f(t, x, u)}{\partial x}\, \Phi(t), \quad \Phi(0) = E.$  (48)

Assume that the solution of differential equation (45) determined by equality (47) satisfies boundary condition (46). Then, for the solutions of problem (45), (46), we get the following explicit formula:

$\delta x(t) = \int_0^T G(t,\tau)\, \frac{\partial f(\tau, x, u)}{\partial u}\, \delta u(\tau)\, d\tau,$  (49)

where

$G(t,\tau) = \begin{cases} \Phi(t)\, [E + B_1]^{-1} \left( E + \int_0^{\tau} m(r)\, \Phi(r)\, dr \right) \Phi^{-1}(\tau), & 0 \le \tau \le t, \\ -\Phi(t)\, [E + B_1]^{-1} \int_{\tau}^{T} m(r)\, \Phi(r)\, dr\, \Phi^{-1}(\tau), & t < \tau \le T, \end{cases}$  (50)

$B_1 = \int_0^T m(t)\, \Phi(t)\, dt.$

Now, substituting (41) into (38), one may get

$\Delta J(u) = -\varepsilon \int_0^T \left\langle \frac{\partial H(t, \psi, x, u)}{\partial u}, \delta u(t) \right\rangle dt - \frac{\varepsilon^2}{2} \left\{ \int_0^T \left[ \left\langle \delta x'(t)\, \frac{\partial^2 H(t, \psi, x, u)}{\partial x^2}, \delta x(t) \right\rangle + 2 \left\langle \delta u'(t)\, \frac{\partial^2 H(t, \psi, x, u)}{\partial x\, \partial u}, \delta x(t) \right\rangle + \left\langle \delta u'(t)\, \frac{\partial^2 H(t, \psi, x, u)}{\partial u^2}, \delta u(t) \right\rangle \right] dt - \left\langle \delta x'(0)\, \frac{\partial^2 \varphi}{\partial x(0)^2}, \delta x(0) \right\rangle - 2 \left\langle \delta x'(0)\, \frac{\partial^2 \varphi}{\partial x(0)\, \partial x(T)}, \delta x(T) \right\rangle - \left\langle \delta x'(T)\, \frac{\partial^2 \varphi}{\partial x(T)^2}, \delta x(T) \right\rangle \right\} + o(\varepsilon^2).$  (51)

Considering definition (40), we finally obtain

$\delta J(u) = -\int_0^T \left\langle \frac{\partial H(t, \psi, x, u)}{\partial u}, \delta u(t) \right\rangle dt,$  (52)

$\delta^2 J(u) = -\left\{ \int_0^T \left[ \left\langle \delta x'(t)\, \frac{\partial^2 H(t, \psi, x, u)}{\partial x^2}, \delta x(t) \right\rangle + 2 \left\langle \delta u'(t)\, \frac{\partial^2 H(t, \psi, x, u)}{\partial x\, \partial u}, \delta x(t) \right\rangle + \left\langle \delta u'(t)\, \frac{\partial^2 H(t, \psi, x, u)}{\partial u^2}, \delta u(t) \right\rangle \right] dt - \left\langle \delta x'(0)\, \frac{\partial^2 \varphi}{\partial x(0)^2}, \delta x(0) \right\rangle - 2 \left\langle \delta x'(0)\, \frac{\partial^2 \varphi}{\partial x(0)\, \partial x(T)}, \delta x(T) \right\rangle - \left\langle \delta x'(T)\, \frac{\partial^2 \varphi}{\partial x(T)^2}, \delta x(T) \right\rangle \right\}.$  (53)

It follows from (40) that the conditions

$\delta J(u^0) = 0, \qquad \delta^2 J(u^0) \ge 0$  (54)

are fulfilled for the optimal control u⁰(t). From the first condition in (54), it follows that

$\int_0^T \left\langle \frac{\partial H(t, \psi^0, x^0, u^0)}{\partial u}, \delta u(t) \right\rangle dt = 0.$  (55)

Hence, we can prove that the following equality is fulfilled along the optimal control (see [11, p. 54]):

$\frac{\partial H(t, \psi^0, x^0, u^0)}{\partial u} = 0, \quad t \in [0,T],$  (56)

and it is called the Euler equation. From the second condition in (54), it follows that the following inequality is fulfilled along the optimal control:

$\delta^2 J(u) = -\left\{ \int_0^T \left[ \left\langle \delta x'(t)\, \frac{\partial^2 H(t, \psi, x, u)}{\partial x^2}, \delta x(t) \right\rangle + 2 \left\langle \delta u'(t)\, \frac{\partial^2 H(t, \psi, x, u)}{\partial x\, \partial u}, \delta x(t) \right\rangle + \left\langle \delta u'(t)\, \frac{\partial^2 H(t, \psi, x, u)}{\partial u^2}, \delta u(t) \right\rangle \right] dt - \left\langle \delta x'(0)\, \frac{\partial^2 \varphi}{\partial x(0)^2}, \delta x(0) \right\rangle - 2 \left\langle \delta x'(0)\, \frac{\partial^2 \varphi}{\partial x(0)\, \partial x(T)}, \delta x(T) \right\rangle - \left\langle \delta x'(T)\, \frac{\partial^2 \varphi}{\partial x(T)^2}, \delta x(T) \right\rangle \right\} \ge 0.$  (57)

Inequality (57) is an implicit necessary optimality condition of the second order. However, the practical value of such conditions is not great, since their verification requires very complicated calculations.
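The objects entering (49)-(50) are easy to build numerically. The sketch below does this for a toy scalar case (the data f = u − x, hence ∂f/∂x = −1 and ∂f/∂u = 1, and m(t) = 0.5 are assumptions for the illustration): it forms Φ, B₁, and G(t,s), evaluates δx from (49) for an arbitrary control variation, and checks that the nonlocal condition (46) is satisfied.

```python
import numpy as np

# Sketch of formulas (47)-(50), scalar toy case with assumed data f = u - x, m = 0.5.
T, n = 1.0, 400
t = np.linspace(0.0, T, n + 1); h = t[1] - t[0]
m = 0.5 * np.ones_like(t)

Phi = np.exp(-t)                                  # solution of (48) when df/dx = -1
w = 0.5 * (m[1:]*Phi[1:] + m[:-1]*Phi[:-1]) * h   # trapezoid slices of m*Phi
B1 = float(np.sum(w))                             # B1 = \int_0^T m Phi dt
cum = np.concatenate([[0.0], np.cumsum(w)])       # \int_0^s m Phi dr

def G(i, j):                                      # Green-type matrix (50), scalar case
    if j <= i:
        return Phi[i] * (1.0 + cum[j]) / (1.0 + B1) / Phi[j]
    return -Phi[i] * (B1 - cum[j]) / (1.0 + B1) / Phi[j]

du = np.sin(2.0 * np.pi * t)                      # an arbitrary control variation
Gmat = np.array([[G(i, j) for j in range(n + 1)] for i in range(n + 1)])
integrand = Gmat * du                             # df/du = 1 for this toy f
dx = np.sum(0.5 * (integrand[:, 1:] + integrand[:, :-1]) * h, axis=1)   # formula (49)

check = dx[0] + np.sum(0.5 * (m[1:]*dx[1:] + m[:-1]*dx[:-1]) * h)
print("residual of the nonlocal condition (46):", check)
```

The small printed residual (up to quadrature error near the jump of G at s = t) illustrates that δx obtained from (49) indeed satisfies (46), which is what allows (57) to be rewritten purely in terms of δu below.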

For obtaining effectively verifiable optimality conditions of the second order, following [12, p. 16], we take (49) into account in (57) and introduce the matrix function

$R(t,s) = -G'(0,t)\, \frac{\partial^2 \varphi}{\partial x(0)^2}\, G(0,s) - G'(T,t)\, \frac{\partial^2 \varphi}{\partial x(0)\, \partial x(T)}\, G(0,s) - G'(0,t)\, \frac{\partial^2 \varphi}{\partial x(0)\, \partial x(T)}\, G(T,s) - G'(T,t)\, \frac{\partial^2 \varphi}{\partial x(T)^2}\, G(T,s) + \int_0^T G'(\tau,t)\, \frac{\partial^2 H(\tau, \psi, x, u)}{\partial x^2}\, G(\tau,s)\, d\tau.$  (58)

Then, for the second variation of the functional, we get the final formula

$\delta^2 J(u) = -\left\{ \int_0^T \int_0^T \left\langle \delta u'(t)\, \frac{\partial f'(t, x, u)}{\partial u}\, R(t,s)\, \frac{\partial f(s, x, u)}{\partial u}, \delta u(s) \right\rangle dt\, ds + \int_0^T \left\langle \delta u'(t)\, \frac{\partial^2 H(t, \psi, x, u)}{\partial u^2}, \delta u(t) \right\rangle dt + 2 \int_0^T \int_0^T \left\langle \delta u'(t)\, \frac{\partial^2 H(t, \psi, x, u)}{\partial x\, \partial u}\, G(t,s)\, \frac{\partial f(s, x, u)}{\partial u}, \delta u(s) \right\rangle dt\, ds \right\}.$  (59)

Theorem 5. If the admissible control u(t) satisfies condition (56), then for its optimality in problem (1)-(4) the inequality

$\delta^2 J(u) = -\left\{ \int_0^T \int_0^T \left\langle \delta u'(t)\, \frac{\partial f'(t, x, u)}{\partial u}\, R(t,s)\, \frac{\partial f(s, x, u)}{\partial u}, \delta u(s) \right\rangle dt\, ds + \int_0^T \left\langle \delta u'(t)\, \frac{\partial^2 H(t, \psi, x, u)}{\partial u^2}, \delta u(t) \right\rangle dt + 2 \int_0^T \int_0^T \left\langle \delta u'(t)\, \frac{\partial^2 H(t, \psi, x, u)}{\partial x\, \partial u}\, G(t,s)\, \frac{\partial f(s, x, u)}{\partial u}, \delta u(s) \right\rangle dt\, ds \right\} \ge 0$  (60)

should be fulfilled for all $\delta u(t) \in L_{\infty}[0,T]$.

The analogue of the Legendre-Clebsch condition for the considered problem follows from condition (60).

Theorem 6. Along the optimal process (u(t), x(t)), for all v ∈ R^r and θ ∈ [0,T], the following inequality holds:

$v'\, \frac{\partial^2 H(\theta, \psi(\theta), x(\theta), u(\theta))}{\partial u^2}\, v \le 0.$  (61)

To prove (61), one constructs the variation of the control

$\delta u(t) = \begin{cases} v, & t \in [\theta, \theta + \varepsilon), \\ 0, & t \notin [\theta, \theta + \varepsilon), \end{cases}$  (62)

where ε > 0 and v is some r-dimensional vector.

By virtue of (62), the corresponding variation of the trajectory is

$\delta x(t) = a(t)\, \varepsilon + o(\varepsilon; t), \quad t \in [0,T],$  (63)

where a(t) is a continuous bounded function.

Substituting variation (62) into (60) and selecting the principal term with respect to ε, one may obtain

$\delta^2 J(u) = -\int_{\theta}^{\theta + \varepsilon} v'\, \frac{\partial^2 H(t, \psi(t), x(t), u(t))}{\partial u^2}\, v\, dt + o(\varepsilon) = -\varepsilon\, v'\, \frac{\partial^2 H(\theta, \psi(\theta), x(\theta), u(\theta))}{\partial u^2}\, v + o_1(\varepsilon).$  (64)

Thus, considering the second condition of (54), one obtains the Legendre-Clebsch criterion (61).
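In computations, condition (61) amounts to checking that the r×r Hessian of H in u is negative semidefinite along the process. The sketch below does this by finite differences for a toy two-dimensional control; the Hamiltonian data and the sample points (θ, ψ, x, u) are assumptions chosen only for the illustration.

```python
import numpy as np

# Checking the Legendre-Clebsch condition (61) for an assumed toy Hamiltonian:
# H(psi, x, u) = psi*f - F with f = u1 - x + 0.5*u2 and F = x^2 + u1^2 + u1*u2 + u2^2.
def H(psi, x, u):
    u1, u2 = u
    return psi * (u1 - x + 0.5 * u2) - (x**2 + u1**2 + u1*u2 + u2**2)

def hessian_u(psi, x, u, eps=1e-5):
    """Finite-difference Hessian of H with respect to u (exact here, since H is quadratic in u)."""
    r = len(u); A = np.zeros((r, r))
    for i in range(r):
        for j in range(r):
            e_i, e_j = np.eye(r)[i] * eps, np.eye(r)[j] * eps
            A[i, j] = (H(psi, x, u + e_i + e_j) - H(psi, x, u + e_i)
                       - H(psi, x, u + e_j) + H(psi, x, u)) / eps**2
    return 0.5 * (A + A.T)

# sample points along a hypothetical process (illustrative values)
for theta, psi, x, u in [(0.0, -1.0, 1.0, np.array([0.1, 0.0])),
                         (0.5, -0.6, 0.7, np.array([0.2, 0.1]))]:
    eig = np.linalg.eigvalsh(hessian_u(psi, x, u))
    print("theta =", theta, " eigenvalues of d2H/du2:", eig,
          " (61) holds:", bool(np.all(eig <= 1e-8)))
```

Nonpositive eigenvalues at every sample point indicate that (61) is not violated there; a single positive eigenvalue would already disprove optimality of the tested process.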

Condition (61) is a second-order optimality condition. It is obvious that when the right-hand side of system (1) is linear with respect to the control parameters, condition (61) also degenerates, that is, it is fulfilled trivially. Following [11, p. 27] and [12, p. 40], if for all θ ∈ (0,T), v ∈ R^r,

$\frac{\partial H(\theta, \psi(\theta), x(\theta), u(\theta))}{\partial u} = 0, \qquad v'\, \frac{\partial^2 H(\theta, \psi(\theta), x(\theta), u(\theta))}{\partial u^2}\, v = 0,$  (65)

then the admissible control u(t) is said to be a singular control in the classical sense.

Theorem 7. For the optimality of the singular (in the classical sense) control u(t), the inequality

$v' \left\{ \int_0^T \int_0^T \frac{\partial f'(t, x, u)}{\partial u}\, R(t,s)\, \frac{\partial f(s, x, u)}{\partial u}\, dt\, ds + 2 \int_0^T \int_0^T \frac{\partial^2 H(t, \psi, x, u)}{\partial x\, \partial u}\, G(t,s)\, \frac{\partial f(s, x, u)}{\partial u}\, dt\, ds \right\} v \le 0$  (66)

should be fulfilled for all v ∈ R^r.

Condition (66) is an integral necessary optimality condition for singular controls in the classical sense. Selecting the special variations in formula (60) in different ways, one can obtain various other necessary optimality conditions.

5. Conclusion

In this work, the optimal control problem is considered when the state of the system is described by differential equations with integral boundary conditions. Applying the Banach contraction principle, the existence and uniqueness of the solution of the corresponding boundary value problem are proved for each fixed admissible control. The first- and second-order increment formulas of the functional are calculated. Various necessary optimality conditions of the first and second order are obtained with the help of variations of the controls. Such existence and uniqueness results and necessary optimality conditions also hold, under the same sufficient conditions on the nonlinear terms, for the system of nonlinear differential equations (1) subject to the multipoint nonlocal and integral boundary conditions

$E x(0) + \int_0^T m(t)\, x(t)\, dt + \sum_{j=1}^{k} B_j\, x(t_j) = C,$  (67)

where $B_j \in R^{n \times n}$ are given matrices and

$\|B\| + \sum_{j=1}^{k} \|B_j\| < 1.$  (68)

Here, $0 < t_1 < \cdots < t_k < T$. Moreover, the method given in [27, 28] and the method presented in this paper may allow one to investigate optimal control for infinite-dimensional systems with integral boundary conditions.

References

[1] M. Benchohra, J. J. Nieto, and A. Ouahab, "Second-order boundary value problem with integral boundary conditions," Boundary Value Problems, vol. 2011, Article ID 260309, 9 pages, 2011.

[2] B. Ahmad and J. J. Nieto, "Existence results for nonlinear boundary value problems of fractional integrodifferential equations with integral boundary conditions," Boundary Value Problems, vol. 2009, Article ID 708576, 11 pages, 2009.

[3] A. Boucherif, "Second-order boundary value problems with integral boundary conditions," Nonlinear Analysis. Theory, Methods & Applications, vol. 70, no. 1, pp. 368-379, 2009.

[4] R. A. Khan, "Existence and approximation of solutions of nonlinear problems with integral boundary conditions," Dynamic Systems and Applications, vol. 14, no. 2, pp. 281-296, 2005.

[5] A. Belarbi, M. Benchohra, and A. Quahab, "Multiple positive solutions for non-linear boundary value problems with integral boundary conditions," Archivum Mathematicum, vol. 44, no. 1, pp. 1-7, 2008.

[6] F. P. Vasilev, Optimization Methods, Factorial Press, Moscow, Russia, 2002.

[7] O. V. Vasiliev, Optimization Methods, vol. 5 of Advanced Series in Mathematical Science and Engineering, World Federation Publishers Company, Atlanta, Ga, USA, 1996.

[8] A. J. Krener, "The high order maximal principle and its application to singular extremals," SIAM Journal on Control and Optimization, vol. 15, no. 2, pp. 256-293, 1977.

[9] H. J. Kelley, R. E. Kopp, and H. G. Moyer, "Singular extremals," in Topics in Optimization, pp. 63-101, Academic Press, New York, NY, USA, 1967, Edited by G. Leitmann.

[10] G. Fraser-Andrews, "Finding candidate singular optimal controls: a state of the art survey," Journal of Optimization Theory and Applications, vol. 60, no. 2, pp. 173-190, 1989.

[11] R. Gabasov and F. M. Kirillova, Osobye Optimalnye Upravleniya, Nauka, Moscow, Russia, 1973, in Russian.

[12] K. B. Mansimov, Singular Controls in Systems With Delay, Elm, Baku, Azerbaijan, 1999, in Russian.

[13] Y. A. Sharifov, "Optimal control of impulsive systems with nonlocal boundary conditions," Russian Mathematics, no. 2, pp. 75-84, 2013.

[14] Y. A. Sharifov, "Necessary optimality conditions of first and second order for systems with boundary conditions," Transactions of National Academy of Sciences of Azerbaijan, vol. 28, no. 1, pp. 189-198, 2008.

[15] A. Y. Sharifov, "Conditions optimality in problems control with systems impulsive Differential equations under non-local boundary conditions," Ukrainian Mathematical Journal, vol. 64, no. 6, pp. 836-847, 2012.

[16] A. Y. Sharifov and N. B. Mammadova, "On second-order necessary optimality conditions in the classical sense for systems with nonlocal conditions," Differential Equations, vol. 48, no. 4, pp. 605-608, 2012.

[17] M. F. Mekhtiyev, I. Sh. Djabrailov, and Y. A. Sharifov, "Necessary optimality conditions of second order in classical sense in optimal control problems of three-point conditions," Journal of Automation and Information Sciences, vol. 42, no. 3, pp. 47-57, 2010.

[18] O. O. Vasilieva and K. Mizukami, "Optimal control of a boundary value problem," Izvestiya Vysshikh Uchebnykh Zavedenii Matematika, no. 12, pp. 33-41, 1994.

[19] O. O. Vasilieva and K. Mizukami, "Dynamical processes described by a boundary value problem: necessary optimal-ity conditions and solution methods," Rossiiskaya Akademiya Nauk. Izvestiya Akademii Nauk. Teoriya i Sistemy Upravleniya, no. 1, pp. 95-100, 2000.

[20] O. Vasilieva and K. Mizukami, "Optimality criterion for singular controllers: linear boundary conditions," Journal of Mathematical Analysis and Applications, vol. 213, no. 2, pp. 620-641, 1997.

[21] O. Vasilieva, "Maximum principle and its extension for bounded control problems with boundary conditions," International Journal of Mathematics and Mathematical Sciences, vol. 2004, no. 35, pp. 1855-1879, 2004.

[22] Y. A. Sharifov, "Classical necessary optimality conditions in discrete optimal control problems with nonlocal conditions," Automatic Control and Computer Sciences, vol. 45, no. 4, pp. 192-200, 2011.

[23] M. F. Mekhtiyev, H. H. Mollai, and Y. A. Sharifov, "On an optimal control problem for nonlinear systems with integral conditions," Transactions of NAS of Azerbaijan, vol. 25, no. 4, pp. 191-198, 2005.

[24] A. Ashyralyev and Y. A. Sharifov, "Optimal control problem for impulsive systems with integral boundary conditions," in AIP Conference Proceedings, vol. 1470, pp. 12-15, 2012.

[25] A. Ashyralyev and Y. A. Sharifov, "Optimal control problem for impulsive systems with integral boundary conditions," Electronic Journal of Differential Equations, vol. 2013, no. 80, pp. 1-11, 2013.

[26] A. A. Samarskii and A. V. Gulin, Numerical Methods, Nauka, Moscow, Russia, 1989, in Russian.

[27] H. O. Fattorini, Infinite-Dimensional Optimization and Control Theory, vol. 62, Cambridge University Press, Cambridge, UK, 1999.

[28] H. O. Fattorini, Infinite Dimensional Linear Control Systems, vol. 201, Elsevier, Amsterdam, The Netherlands, 2005.

Copyright of Abstract & Applied Analysis is the property of Hindawi Publishing Corporation and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use.