Tharwat and Alofi Boundary Value Problems (2015) 2015:33 DOI 10.1186/s13661-015-0290-z


Sampling of vector-valued transforms associated with solutions and Green's matrix of discontinuous Dirac systems

Mohammed M Tharwat1,2 and Abdulaziz S Alofi

Correspondence: zahraa26@yahoo.com. 1 Department of Mathematics, Faculty of Science, University of Jeddah, Jeddah, Saudi Arabia. 2 Permanent address: Department of Mathematics, Faculty of Science, Beni-Suef University, Beni-Suef, Egypt.

Full list of author information is available at the end of the article

Abstract

Our goal in the current paper is to derive the sampling theorems of a Dirac system with a spectral parameter appearing linearly in the boundary conditions and also with an internal point of discontinuity. To derive the sampling theorems, including the construction of Green's matrix as well as the eigenvector-function expansion theorem, we briefly study the spectral analysis of the problem as in Levitan and Sargsjan (Translations of Mathematical Monographs, vol. 39, 1975; Sturm-Liouville and Dirac Operators, 1991) in a way similar to that of Fulton (Proc. R. Soc. Edinb. A 77:293-308, 1977) and Kerimov (Differ. Equ. 38(2):164-174, 2002). We derive sampling representations for transforms whose kernels are either solutions or Green's matrix of the problem. In the special case when the problem is continuous, the obtained results coincide with the corresponding results in Annaby and Tharwat (J. Appl. Math. Comput. 36:291-317, 2011).

MSC: 34L16; 94A20; 65L15

Keywords: Dirac systems; transmission conditions; eigenvalue parameter in the boundary conditions; discontinuous boundary value problems


1 Introduction

Sampling theory says that a function may be recovered from its sampled values at certain points, provided the function satisfies certain conditions. Let us consider the Paley-Wiener space $B^2_\sigma$ of all $L^2(\mathbb{R})$-functions whose Fourier transforms vanish outside $[-\sigma,\sigma]$. This space is characterized by the following relation, which is due to Paley and Wiener [1, 2]:

$f(\lambda)\in B^2_\sigma \iff f(\lambda)=\frac{1}{\sqrt{2\pi}}\int_{-\sigma}^{\sigma}e^{iw\lambda}g(w)\,dw \quad\text{for some function } g(\cdot)\in L^2(-\sigma,\sigma). \qquad(1.1)$

In engineering terminology, elements of the Paley-Wiener space $B^2_\sigma$ are called band-limited signals with band-width $\sigma>0$. The space $B^2_\sigma$ coincides with the class of all $L^2(\mathbb{R})$-entire functions of exponential type $\sigma$. The classical sampling theorem of Whittaker-Kotel'nikov-Shannon (WKS) states [3-7]: If $f(\lambda)\in B^2_\sigma$, then it is completely determined

© 2015 Tharwat and Alofi; licensee Springer. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.

from its values at the points $\lambda_k = k\pi/\sigma$, $k\in\mathbb{Z}$, by means of the formula

$f(\lambda)=\sum_{k=-\infty}^{\infty}f(\lambda_k)\operatorname{sinc}\sigma(\lambda-\lambda_k),\quad\lambda\in\mathbb{C}, \qquad(1.2)$

where

$\operatorname{sinc}x=\begin{cases}\dfrac{\sin x}{x}, & x\neq 0,\\ 1, & x=0.\end{cases} \qquad(1.3)$

The convergence of series (1.2) is uniform on $\mathbb{R}$ and on compact subsets of $\mathbb{C}$, and it is absolute on $\mathbb{C}$. Moreover, series (1.2) also converges in the $L^2(\mathbb{R})$-norm. The WKS sampling theorem has many applications in signal processing (see, e.g., [8]).
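As a purely numerical illustration (not part of the original paper), the truncated WKS series (1.2) is easy to evaluate directly; the test signal, the band-width $\sigma = 1$ and the truncation length below are arbitrary choices.

```python
import numpy as np

# Truncated WKS series (1.2): f(lam) is approximated by
#   sum_{|k| <= K} f(k*pi/sigma) * sinc(sigma*(lam - lam_k)),
# with the unnormalized sinc of (1.3); note np.sinc(t) = sin(pi*t)/(pi*t).
def wks_reconstruct(f, sigma, K, lam):
    lam = np.asarray(lam, dtype=float)
    total = np.zeros_like(lam)
    for k in range(-K, K + 1):
        lam_k = k * np.pi / sigma
        total += f(lam_k) * np.sinc(sigma * (lam - lam_k) / np.pi)
    return total

sigma = 1.0
f = lambda x: np.sinc((x - 0.5) / np.pi)   # shifted sin(x)/x: band-limited with band-width 1
pts = np.array([0.3, 1.7, -2.4])
err = np.max(np.abs(wks_reconstruct(f, sigma, 500, pts) - f(pts)))
print(err)  # only the truncation error of the series remains
```

Since the samples of the shifted test signal decay like $1/k$, the truncation error decreases steadily as $K$ grows.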

The WKS sampling theorem has been generalized in many different ways. Here we are interested in two extensions. The first is concerned with replacing the equidistant sampling points by more general ones, which is very important from the practical point of view. The following theorem, which is known in some literature as the Paley-Wiener theorem [2], gives a sampling theorem with a more general class of sampling points.

The Paley-Wiener theorem states that if $\{\lambda_k\}$, $k\in\mathbb{Z}$, is a sequence of real numbers such that

$D:=\sup_{k\in\mathbb{Z}}\left|\lambda_k-\frac{k\pi}{\sigma}\right|<\frac{\pi}{4\sigma}, \qquad(1.4)$

and $\Lambda$ is the entire function defined by

$\Lambda(\lambda):=(\lambda-\lambda_0)\prod_{k=1}^{\infty}\left(1-\frac{\lambda}{\lambda_k}\right)\left(1-\frac{\lambda}{\lambda_{-k}}\right), \qquad(1.5)$

then, for any function of the form (1.1), we have

$f(\lambda)=\sum_{k\in\mathbb{Z}}f(\lambda_k)\frac{\Lambda(\lambda)}{\Lambda'(\lambda_k)(\lambda-\lambda_k)},\quad\lambda\in\mathbb{C}. \qquad(1.6)$

Series (1.6) converges uniformly on compact subsets of C.

The WKS sampling theorem is a special case of this theorem because if we choose $\lambda_k=k\pi/\sigma=-\lambda_{-k}$, then

$\Lambda(\lambda)=\lambda\prod_{k=1}^{\infty}\left(1-\frac{\lambda\sigma}{k\pi}\right)\left(1+\frac{\lambda\sigma}{k\pi}\right)=\lambda\prod_{k=1}^{\infty}\left(1-\frac{\lambda^2\sigma^2}{k^2\pi^2}\right)=\frac{\sin\sigma\lambda}{\sigma},\qquad \Lambda'(\lambda_k)=(-1)^k.$

The sampling series (1.6) can be regarded as an extension of the classical Lagrange interpolation formula to functions of exponential type. Therefore, (1.6) is called a Lagrange-type interpolation expansion. Note that, although the theorem in its final form may be attributed to Levinson [9] and Kadec [10], it could be named after Paley and Wiener, who first derived the theorem in a more restrictive form; see [3, 7, 11] for more details.
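Over a finite node set, the kernel $\Lambda(\lambda)/(\Lambda'(\lambda_j)(\lambda-\lambda_j))$ of (1.6) reduces to a classical Lagrange basis product, since the constants of the canonical product (1.5) cancel. The following sketch (an illustration, not from the paper) checks that for the unperturbed nodes $\lambda_k = k\pi$, $\sigma = 1$, this kernel collapses to $\operatorname{sinc}(\lambda-\lambda_j)$, as computed above.

```python
import numpy as np

def lagrange_kernel(nodes, j, lam):
    # Lambda(lam) / (Lambda'(nodes[j]) * (lam - nodes[j])) for a truncated node set:
    # the normalizing constants of the canonical product (1.5) cancel, leaving the
    # classical Lagrange basis product over the remaining nodes.  Multiplying the
    # ratios one by one keeps the partial products bounded.
    out = 1.0
    for k, lam_k in enumerate(nodes):
        if k != j:
            out *= (lam - lam_k) / (nodes[j] - lam_k)
    return out

K = 500
nodes = np.array([k * np.pi for k in range(-K, K + 1)])  # lam_k = k*pi (sigma = 1)
j = K                                                    # index of the node lam_0 = 0
lam = 0.3
val = lagrange_kernel(nodes, j, lam)
print(val, np.sinc(lam / np.pi))  # both are close to sin(lam)/lam
```

Perturbing the nodes within the bound (1.4) changes the kernel but keeps the same Lagrange-product structure.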

The second extension of the WKS sampling theorem is the theorem of Kramer [12]. The classical Kramer sampling theorem provides a method for obtaining orthogonal sampling theorems. This theorem has played a very significant role in sampling theory, interpolation theory, signal analysis and, generally, in mathematics; see the survey articles [13, 14]. The statement of this general result is as follows: Let $I$ be a finite closed interval, let $K(\cdot,\lambda): I\times\mathbb{C}\to\mathbb{C}$ be a function continuous in $\lambda$ such that $K(\cdot,\lambda)\in L^2(I)$ for all $\lambda\in\mathbb{C}$, and let $\{\lambda_k\}_{k\in\mathbb{Z}}$ be a sequence of real numbers such that $\{K(\cdot,\lambda_k)\}_{k\in\mathbb{Z}}$ is a complete orthogonal set in $L^2(I)$. Suppose that

$f(\lambda)=\int_{I}K(w,\lambda)g(w)\,dw,\quad\text{where } g(\cdot)\in L^2(I).$ Then

$f(\lambda)=\sum_{k=-\infty}^{\infty}f(\lambda_k)\frac{\int_{I}K(w,\lambda)\overline{K(w,\lambda_k)}\,dw}{\left\|K(\cdot,\lambda_k)\right\|_{L^2(I)}^{2}}. \qquad(1.7)$

Series (1.7) converges uniformly wherever $\|K(\cdot,\lambda)\|_{L^2(I)}$, as a function of $\lambda$, is bounded. In this theorem sampling representations are given for integral transforms whose kernels are more general than $\exp(i\lambda w)$. Kramer's theorem is also a generalization of the WKS theorem: if we take $K(w,\lambda)=e^{i\lambda w}$, $I=[-\sigma,\sigma]$, $\lambda_k=\frac{k\pi}{\sigma}$, then (1.7) reduces to (1.2).
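The reduction of Kramer's quotient to the WKS kernel can be checked by direct quadrature for the exponential kernel; the choice $I=[-\pi,\pi]$, $\lambda_k=k$ below (i.e., $\sigma=\pi$) is an arbitrary normalization used only for illustration.

```python
import numpy as np

# Kramer's quotient from (1.7) for K(w, lam) = exp(i*lam*w) on I = [-pi, pi],
# lam_k = k: it should reduce to the WKS kernel sinc(lam - k) of (1.2).
w = np.linspace(-np.pi, np.pi, 200001)
dw = w[1] - w[0]
lam, k = 0.7, 2

def trap(g):
    # composite trapezoidal rule on the grid w
    return (np.sum(g) - 0.5 * (g[0] + g[-1])) * dw

inner = trap(np.exp(1j * lam * w) * np.conj(np.exp(1j * k * w)))
norm2 = trap(np.abs(np.exp(1j * k * w)) ** 2)        # equals 2*pi
quotient = (inner / norm2).real
print(quotient, np.sinc(lam - k))  # both equal sin(pi*(lam-k))/(pi*(lam-k))
```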

The relationship between these two extensions of the WKS sampling theorem has been investigated extensively. Starting from a function-theoretic approach, cf. [15], it is proved in [16] that if $K(w,\lambda)$, $w\in I$, $\lambda\in\mathbb{C}$, satisfies some analyticity conditions, then Kramer's sampling formula (1.7) turns out to be a Lagrange interpolation one; see also [17-19]. In another direction, it is shown that Kramer's expansion (1.7) can be written as a Lagrange-type interpolation formula if $K(\cdot,\lambda)$ and $\lambda_k$ are extracted from ordinary differential operators; see the survey [20] and the references cited therein. The present work is a continuation of the second direction mentioned above. In [21], Tharwat et al. studied sampling theorems, with solutions and Green's matrix, for a discontinuous Dirac system which has no eigenparameter in the boundary conditions; see also [22]. Tharwat [23] studied the same problem for a discontinuous Dirac system with an eigenparameter in one boundary condition. Although the analysis of the present paper and that of [23] look similar, the treatments and results differ in several respects. Problems with a spectral parameter in the equations and boundary conditions form an important part of the spectral theory of linear differential operators. A bibliography of papers in which such problems were considered in connection with specific physical processes can be found in [24, 25]. In the present work, we prove that integral transforms associated with Dirac systems which contain an eigenparameter in all boundary conditions and have an internal point of discontinuity can also be reconstructed in a sampling form of Lagrange interpolation type. Sampling results associated with discontinuous Dirac systems that have an eigenparameter in all boundary conditions have not previously been studied; our investigation is a first step in that direction. To achieve our aim we briefly study the spectral analysis of the problem. Then we derive two sampling theorems using solutions and Green's matrix, respectively.

2 A spectral analysis

In this section we define a discontinuous Dirac system which contains an eigenparameter appearing linearly in all boundary conditions. We define the eigenvalue problem and study some of its properties. Throughout this paper we consider the Dirac system

$\begin{pmatrix}p(x)y_2'(x)-q_1(x)y_1(x)\\ p(x)y_1'(x)+q_2(x)y_2(x)\end{pmatrix}=\lambda\begin{pmatrix}y_1(x)\\ -y_2(x)\end{pmatrix},\quad x\in[-1,0)\cup(0,1], \qquad(2.1)$

$B_1(y):=(\omega_1+\lambda\sin\theta_1)y_1(-1)-(\omega_2+\lambda\cos\theta_1)y_2(-1)=0, \qquad(2.2)$

$B_2(y):=(v_1+\lambda\sin\theta_2)y_1(1)-(v_2+\lambda\cos\theta_2)y_2(1)=0, \qquad(2.3)$

and the transmission conditions

$T_1(y):=\gamma_1 y_1(0-)-\delta_1 y_1(0+)=0, \qquad(2.4)$

$T_2(y):=\gamma_2 y_2(0-)-\delta_2 y_2(0+)=0, \qquad(2.5)$

where $\lambda$ is a complex spectral parameter; $p(x)=p_1$ for $x\in[-1,0)$ and $p(x)=p_2$ for $x\in(0,1]$, where $p_1>0$ and $p_2>0$ are given real numbers; $y=\begin{pmatrix}y_1\\ y_2\end{pmatrix}$; the real-valued functions $q_1(\cdot)$ and $q_2(\cdot)$ are continuous on $[-1,0)$ and $(0,1]$ and have finite limits $q_1(0\pm):=\lim_{x\to 0\pm}q_1(x)$, $q_2(0\pm):=\lim_{x\to 0\pm}q_2(x)$; $\omega_i, v_i, \gamma_i, \delta_i\in\mathbb{R}$ ($i=1,2$); $\gamma_i\neq 0$, $\delta_i\neq 0$ ($i=1,2$), and

$\alpha_1:=\omega_2\sin\theta_1-\omega_1\cos\theta_1>0,\qquad \alpha_2:=v_1\cos\theta_2-v_2\sin\theta_2>0. \qquad(2.6)$

To formulate a theoretic approach to problem (2.1)-(2.5), we define the Hilbert space $E=H\oplus\mathbb{C}^2$ with an inner product

$\left(Y(\cdot),Z(\cdot)\right)_E:=\frac{1}{p_1}\int_{-1}^{0}y^{T}(x)\overline{z}(x)\,dx+\frac{1}{p_2}\int_{0}^{1}y^{T}(x)\overline{z}(x)\,dx+\frac{1}{\alpha_1}z_1\overline{w}_1+\frac{1}{\alpha_2}z_2\overline{w}_2, \qquad(2.7)$

where $T$ denotes the matrix transpose,

$Y(x)=\begin{pmatrix}y(x)\\ z_1\\ z_2\end{pmatrix},\quad Z(x)=\begin{pmatrix}z(x)\\ w_1\\ w_2\end{pmatrix}\in E,\quad y(x),z(x)\in H,\ z_i,w_i\in\mathbb{C},\ i=1,2,$

$H:=\left\{y=\begin{pmatrix}y_1\\ y_2\end{pmatrix}:\ y_1,y_2\in L^2(-1,0)\oplus L^2(0,1)\right\}.$

Throughout this article, we consider

Ta(y(x)) Tv (y(x)) ТцШ) T02 (y(x))

«1y1(-1) - «2y2(-1) vxyx (1) - V2y2(1)

sin 01y1(-1) - cos 01y2(-1) sin 02y1(1) - cos 02y2(1)

Equation (2.1) can be written as

$L(y):=P(x)y'(x)-Q(x)y(x)=\lambda y(x),$

where

$P(x)=\begin{pmatrix}0 & p(x)\\ -p(x) & 0\end{pmatrix},\qquad Q(x)=\begin{pmatrix}q_1(x) & 0\\ 0 & q_2(x)\end{pmatrix},\qquad y(x)=\begin{pmatrix}y_1(x)\\ y_2(x)\end{pmatrix}. \qquad(2.10)$

For functions $y(x)$ which are defined on $[-1,0)\cup(0,1]$ and have finite limits $y(0\pm):=\lim_{x\to 0\pm}y(x)$, by $y_{(1)}(x)$ and $y_{(2)}(x)$ we denote the functions

$y_{(1)}(x)=\begin{cases}y(x), & x\in[-1,0),\\ y(0-), & x=0,\end{cases}\qquad y_{(2)}(x)=\begin{cases}y(x), & x\in(0,1],\\ y(0+), & x=0,\end{cases} \qquad(2.11)$

which are defined on $I_1:=[-1,0]$ and $I_2:=[0,1]$, respectively. In the following lemma, we will prove that the eigenvalues of problem (2.1)-(2.5) are real.

Lemma 2.1 Let $\gamma_1\gamma_2=\delta_1\delta_2$. The eigenvalues of problem (2.1)-(2.5) are real.

Proof Suppose, to the contrary, that $\mu$ is a non-real eigenvalue of problem (2.1)-(2.5) and let $\begin{pmatrix}y_1(x)\\ y_2(x)\end{pmatrix}$ be a corresponding (non-trivial) eigenfunction. By (2.1), we have

$p(x)\frac{d}{dx}\left\{y_1(x)\overline{y}_2(x)-\overline{y}_1(x)y_2(x)\right\}=(\overline{\mu}-\mu)\left\{|y_1(x)|^2+|y_2(x)|^2\right\},\quad x\in[-1,0)\cup(0,1].$

Integrating the above equation over $[-1,0]$ and $[0,1]$, we obtain

$\frac{\overline{\mu}-\mu}{p_1}\int_{-1}^{0}\left(|y_1(x)|^2+|y_2(x)|^2\right)dx=y_1(0-)\overline{y}_2(0-)-\overline{y}_1(0-)y_2(0-)-\left[y_1(-1)\overline{y}_2(-1)-\overline{y}_1(-1)y_2(-1)\right], \qquad(2.12)$

$\frac{\overline{\mu}-\mu}{p_2}\int_{0}^{1}\left(|y_1(x)|^2+|y_2(x)|^2\right)dx=y_1(1)\overline{y}_2(1)-\overline{y}_1(1)y_2(1)-\left[y_1(0+)\overline{y}_2(0+)-\overline{y}_1(0+)y_2(0+)\right]. \qquad(2.13)$

Then from (2.2), (2.3) and the transmission conditions we have, respectively,

$y_1(-1)\overline{y}_2(-1)-\overline{y}_1(-1)y_2(-1)=\frac{\alpha_1(\overline{\mu}-\mu)|y_2(-1)|^2}{|\omega_1+\mu\sin\theta_1|^2},$

$y_1(1)\overline{y}_2(1)-\overline{y}_1(1)y_2(1)=-\frac{\alpha_2(\overline{\mu}-\mu)|y_2(1)|^2}{|v_1+\mu\sin\theta_2|^2},$

$y_1(0+)\overline{y}_2(0+)-\overline{y}_1(0+)y_2(0+)=\frac{\gamma_1\gamma_2}{\delta_1\delta_2}\left[y_1(0-)\overline{y}_2(0-)-\overline{y}_1(0-)y_2(0-)\right].$

Since $\mu\neq\overline{\mu}$ and $\gamma_1\gamma_2=\delta_1\delta_2$, it follows from the last three equations and (2.12), (2.13) that

$\frac{1}{p_1}\int_{-1}^{0}\left(|y_1(x)|^2+|y_2(x)|^2\right)dx+\frac{1}{p_2}\int_{0}^{1}\left(|y_1(x)|^2+|y_2(x)|^2\right)dx=-\frac{\alpha_1|y_2(-1)|^2}{|\omega_1+\mu\sin\theta_1|^2}-\frac{\alpha_2|y_2(1)|^2}{|v_1+\mu\sin\theta_2|^2}.$

This contradicts the conditions $\int_{-1}^{0}(|y_1(x)|^2+|y_2(x)|^2)\,dx+\int_{0}^{1}(|y_1(x)|^2+|y_2(x)|^2)\,dx>0$ and $\alpha_i>0$, $i=1,2$. Consequently, $\mu$ must be real. □

Let $D(A)\subseteq E$ be the set of all elements

$Y(x)=\begin{pmatrix}y(x)\\ T_{\theta_1}(y(x))\\ T_{\theta_2}(y(x))\end{pmatrix}$

in $E$ such that:

1. the restrictions $y_{1(i)}(\cdot)$, $y_{2(i)}(\cdot)$ (as in (2.11)) are absolutely continuous on $I_i$, $i=1,2$,
2. $L(y)\in H$,
3. $\gamma_i y_i(0-)-\delta_i y_i(0+)=0$, $i=1,2$.

Now we define the operator $A:D(A)\to E$ by

$A\begin{pmatrix}y(x)\\ T_{\theta_1}(y(x))\\ T_{\theta_2}(y(x))\end{pmatrix}=\begin{pmatrix}L(y)\\ -T_{\omega}(y(x))\\ -T_{v}(y(x))\end{pmatrix},\qquad \begin{pmatrix}y(x)\\ T_{\theta_1}(y(x))\\ T_{\theta_2}(y(x))\end{pmatrix}\in D(A). \qquad(2.15)$

Lemma 2.2 Let $\gamma_1\gamma_2=\delta_1\delta_2$. The operator $A$ is symmetric in $E$.

Proof For $Y(\cdot),Z(\cdot)\in D(A)$,

$\left(AY(\cdot),Z(\cdot)\right)_E=\frac{1}{p_1}\int_{-1}^{0}\left(L(y(x))\right)^{T}\overline{z}(x)\,dx+\frac{1}{p_2}\int_{0}^{1}\left(L(y(x))\right)^{T}\overline{z}(x)\,dx-\frac{1}{\alpha_1}T_{\omega}(y(x))\overline{T_{\theta_1}(z(x))}-\frac{1}{\alpha_2}T_{v}(y(x))\overline{T_{\theta_2}(z(x))}. \qquad(2.16)$

By partial integration we obtain

$\left(AY(\cdot),Z(\cdot)\right)_E=\left(Y(\cdot),AZ(\cdot)\right)_E-W(y,\overline{z})(0-)+W(y,\overline{z})(-1)-W(y,\overline{z})(1)+W(y,\overline{z})(0+) -\frac{1}{\alpha_1}\left[T_{\omega}(y(x))\overline{T_{\theta_1}(z(x))}-T_{\theta_1}(y(x))\overline{T_{\omega}(z(x))}\right] -\frac{1}{\alpha_2}\left[T_{v}(y(x))\overline{T_{\theta_2}(z(x))}-T_{\theta_2}(y(x))\overline{T_{v}(z(x))}\right], \qquad(2.17)$

where, as usual, by $W(y,\overline{z})(x)$ we denote the Wronskian of the functions $y$ and $\overline{z}$ defined in [26, p.194], i.e.,

$W(y,\overline{z})(x):=y_1(x)\overline{z}_2(x)-y_2(x)\overline{z}_1(x). \qquad(2.18)$

Since $y(x)$ and $z(x)$ satisfy the boundary conditions (2.2)-(2.3) and the transmission conditions (2.4) and (2.5), we get

$T_{\omega}(y(x))\overline{T_{\theta_1}(z(x))}-T_{\theta_1}(y(x))\overline{T_{\omega}(z(x))}=\alpha_1 W(y,\overline{z})(-1),$

$T_{v}(y(x))\overline{T_{\theta_2}(z(x))}-T_{\theta_2}(y(x))\overline{T_{v}(z(x))}=-\alpha_2 W(y,\overline{z})(1), \qquad(2.19)$

$\gamma_1\gamma_2 W(y,\overline{z})(0-)=\delta_1\delta_2 W(y,\overline{z})(0+).$

Then, substituting the equations of (2.19) into (2.17), we obtain

$\left(AY(\cdot),Z(\cdot)\right)_E=\left(Y(\cdot),AZ(\cdot)\right)_E,\qquad Y(\cdot),Z(\cdot)\in D(A). \qquad(2.20)$

Hence the operator A is Hermitian. Since D(A) is dense in E (see, e.g., [27]), then the operator A is symmetric. □

The operator $A:D(A)\to E$ and the eigenvalue problem (2.1)-(2.5) have the same eigenvalues; therefore they are equivalent in this respect.

Lemma 2.3 Let $\lambda$ and $\mu$ be two different eigenvalues of problem (2.1)-(2.5). Then the corresponding eigenfunctions $y(x)$ and $z(x)$ of this problem satisfy the following equality:

$\frac{1}{p_1}\int_{-1}^{0}y^{T}(x)z(x)\,dx+\frac{1}{p_2}\int_{0}^{1}y^{T}(x)z(x)\,dx+\frac{1}{\alpha_1}T_{\theta_1}(y(x))T_{\theta_1}(z(x))+\frac{1}{\alpha_2}T_{\theta_2}(y(x))T_{\theta_2}(z(x))=0. \qquad(2.21)$

Proof Equation (2.21) follows immediately from the orthogonality of the corresponding eigenelements

$Y(x)=\begin{pmatrix}y(x)\\ T_{\theta_1}(y(x))\\ T_{\theta_2}(y(x))\end{pmatrix},\qquad Z(x)=\begin{pmatrix}z(x)\\ T_{\theta_1}(z(x))\\ T_{\theta_2}(z(x))\end{pmatrix}$

in the Hilbert space $E$. □

Now we shall construct a special fundamental system of solutions of equation (2.1) for $\lambda$ not an eigenvalue. Let us consider the following initial value problem:

$p_1y_2'(x)-q_1(x)y_1(x)=\lambda y_1(x),\qquad p_1y_1'(x)+q_2(x)y_2(x)=-\lambda y_2(x),\quad x\in(-1,0), \qquad(2.22)$

$y_1(-1)=\omega_2+\lambda\cos\theta_1,\qquad y_2(-1)=\omega_1+\lambda\sin\theta_1. \qquad(2.23)$

By virtue of Theorem 1.1 in [28] this problem has a unique solution $y=\begin{pmatrix}y_{11}(x,\lambda)\\ y_{21}(x,\lambda)\end{pmatrix}$, which is an entire function of $\lambda\in\mathbb{C}$ for each fixed $x\in[-1,0]$. Similarly, employing the same method as in the proof of Theorem 1.1 in [28], we see that the problem

$p_2y_2'(x)-q_1(x)y_1(x)=\lambda y_1(x),\qquad p_2y_1'(x)+q_2(x)y_2(x)=-\lambda y_2(x),\quad x\in(0,1), \qquad(2.24)$

$y_1(1)=v_2+\lambda\cos\theta_2,\qquad y_2(1)=v_1+\lambda\sin\theta_2 \qquad(2.25)$

has a unique solution $z=\begin{pmatrix}z_{12}(x,\lambda)\\ z_{22}(x,\lambda)\end{pmatrix}$, which is an entire function of the parameter $\lambda$ for each fixed $x\in[0,1]$.

Now the functions $y_{i2}(x,\lambda)$ and $z_{i1}(x,\lambda)$ are defined in terms of $y_{i1}(x,\lambda)$ and $z_{i2}(x,\lambda)$, $i=1,2$, respectively, as follows. The initial value problem

$p_2y_2'(x)-q_1(x)y_1(x)=\lambda y_1(x),\qquad p_2y_1'(x)+q_2(x)y_2(x)=-\lambda y_2(x),\quad x\in(0,1), \qquad(2.26)$

$y_1(0)=\frac{\gamma_1}{\delta_1}y_{11}(0,\lambda),\qquad y_2(0)=\frac{\gamma_2}{\delta_2}y_{21}(0,\lambda), \qquad(2.27)$

has a unique solution $y=\begin{pmatrix}y_{12}(x,\lambda)\\ y_{22}(x,\lambda)\end{pmatrix}$ for each $\lambda\in\mathbb{C}$. Similarly, the following problem also has a unique solution $z=\begin{pmatrix}z_{11}(x,\lambda)\\ z_{21}(x,\lambda)\end{pmatrix}$:

$p_1y_2'(x)-q_1(x)y_1(x)=\lambda y_1(x),\qquad p_1y_1'(x)+q_2(x)y_2(x)=-\lambda y_2(x),\quad x\in(-1,0), \qquad(2.28)$

$y_1(0)=\frac{\delta_1}{\gamma_1}z_{12}(0,\lambda),\qquad y_2(0)=\frac{\delta_2}{\gamma_2}z_{22}(0,\lambda). \qquad(2.29)$

Let us construct two basic solutions of equation (2.1) as

$y(\cdot,\lambda)=\begin{pmatrix}y_1(\cdot,\lambda)\\ y_2(\cdot,\lambda)\end{pmatrix},\qquad z(\cdot,\lambda)=\begin{pmatrix}z_1(\cdot,\lambda)\\ z_2(\cdot,\lambda)\end{pmatrix}, \qquad(2.30)$

where

$y_1(x,\lambda)=\begin{cases}y_{11}(x,\lambda), & x\in[-1,0),\\ y_{12}(x,\lambda), & x\in(0,1],\end{cases}\qquad y_2(x,\lambda)=\begin{cases}y_{21}(x,\lambda), & x\in[-1,0),\\ y_{22}(x,\lambda), & x\in(0,1],\end{cases}$

$z_1(x,\lambda)=\begin{cases}z_{11}(x,\lambda), & x\in[-1,0),\\ z_{12}(x,\lambda), & x\in(0,1],\end{cases}\qquad z_2(x,\lambda)=\begin{cases}z_{21}(x,\lambda), & x\in[-1,0),\\ z_{22}(x,\lambda), & x\in(0,1].\end{cases} \qquad(2.31)$

Therefore

$T_{\theta_1}\left(y(x,\lambda)\right)=\alpha_1,\qquad T_{\theta_2}\left(z(x,\lambda)\right)=-\alpha_2. \qquad(2.32)$

Since the Wronskians $W(y_i,z_i)(x,\lambda)$ are independent of $x\in I_i$ ($i=1,2$), and $y_i(x,\lambda)$ and $z_i(x,\lambda)$ are entire functions of the parameter $\lambda$ for all $x\in I_i$ ($i=1,2$), the functions

$\Omega_i(\lambda):=W(y_i,z_i)(x,\lambda)=y_{1i}(x,\lambda)z_{2i}(x,\lambda)-y_{2i}(x,\lambda)z_{1i}(x,\lambda),\quad x\in I_i,\ i=1,2, \qquad(2.33)$

are entire functions of the parameter $\lambda$.

Lemma 2.4 If the condition $\gamma_1\gamma_2=\delta_1\delta_2$ is satisfied, then the equality $\Omega_1(\lambda)=\Omega_2(\lambda)$ holds for each $\lambda\in\mathbb{C}$.

Proof Taking into account (2.27) and (2.29), a short calculation gives $\gamma_1\gamma_2 W(y_1,z_1)(0,\lambda)=\delta_1\delta_2 W(y_2,z_2)(0,\lambda)$, so $\Omega_1(\lambda)=\Omega_2(\lambda)$ holds for each $\lambda\in\mathbb{C}$. □

Corollary 2.5 The zeros of the functions $\Omega_1(\lambda)$ and $\Omega_2(\lambda)$ coincide.

Then we may introduce the characteristic function $\Omega(\lambda)$ as

$\Omega(\lambda):=\Omega_1(\lambda)=\Omega_2(\lambda). \qquad(2.34)$

Lemma 2.6 All eigenvalues of problem (2.1)-(2.5) are exactly the zeros of the function $\Omega(\lambda)$.

Proof Since the functions $y_1(x,\lambda)$ and $y_2(x,\lambda)$ satisfy the boundary condition (2.2) and both transmission conditions (2.4) and (2.5), to find the eigenvalues of (2.1)-(2.5) we have to insert the functions $y_1(x,\lambda)$ and $y_2(x,\lambda)$ into the boundary condition (2.3) and find the roots of the resulting equation. □

In the following lemma we show that all eigenvalues of problem (2.1)-(2.5) are simple; see [23, 29, 30].

Lemma 2.7 Let $\gamma_1\gamma_2=\delta_1\delta_2$. The eigenvalues of the boundary value problem (2.1)-(2.5) form an at most countable set without finite limit points. All eigenvalues of the boundary value problem (2.1)-(2.5) (i.e., the zeros of $\Omega(\lambda)$) are simple.

Proof The eigenvalues are the zeros of the entire function occurring on the left-hand side of (see (2.34))

$(v_1+\lambda\sin\theta_2)y_{12}(1,\lambda)-(v_2+\lambda\cos\theta_2)y_{22}(1,\lambda)=0.$

We have shown (see Lemma 2.1) that this function does not vanish for non-real $\lambda$. In particular, it does not vanish identically. Therefore, its zeros form an at most countable set without finite limit points. By (2.1) we obtain, for $\lambda,\mu\in\mathbb{C}$, $\lambda\neq\mu$,

$p(x)\frac{d}{dx}\left\{y_1(x,\lambda)y_2(x,\mu)-y_1(x,\mu)y_2(x,\lambda)\right\}=(\mu-\lambda)\left\{y_1(x,\lambda)y_1(x,\mu)+y_2(x,\lambda)y_2(x,\mu)\right\}.$

Integrating the above equation over $[-1,0]$ and $[0,1]$, and taking into account the initial conditions (2.23), (2.27) and (2.29), we obtain

$y_{12}(1,\lambda)y_{22}(1,\mu)-y_{12}(1,\mu)y_{22}(1,\lambda)-(\mu-\lambda)\alpha_1 =(\mu-\lambda)\left[\frac{1}{p_1}\int_{-1}^{0}\left(y_{11}(x,\lambda)y_{11}(x,\mu)+y_{21}(x,\lambda)y_{21}(x,\mu)\right)dx+\frac{1}{p_2}\int_{0}^{1}\left(y_{12}(x,\lambda)y_{12}(x,\mu)+y_{22}(x,\lambda)y_{22}(x,\mu)\right)dx\right]. \qquad(2.35)$

Dividing both sides of (2.35) by $(\lambda-\mu)$ and letting $\mu\to\lambda$, we arrive at the relation

$y_{22}(1,\lambda)\frac{\partial y_{12}(1,\lambda)}{\partial\lambda}-y_{12}(1,\lambda)\frac{\partial y_{22}(1,\lambda)}{\partial\lambda}+\alpha_1 =-\left[\frac{1}{p_1}\int_{-1}^{0}\left(|y_{11}(x,\lambda)|^2+|y_{21}(x,\lambda)|^2\right)dx+\frac{1}{p_2}\int_{0}^{1}\left(|y_{12}(x,\lambda)|^2+|y_{22}(x,\lambda)|^2\right)dx\right]. \qquad(2.36)$

We show that the equation

$\Omega(\lambda)=(v_1+\lambda\sin\theta_2)y_{12}(1,\lambda)-(v_2+\lambda\cos\theta_2)y_{22}(1,\lambda)=0 \qquad(2.37)$

has only simple roots. Assume the converse, i.e., that equation (2.37) has a double root $\tilde\lambda$. Then the following two equations hold:

$(v_1+\tilde\lambda\sin\theta_2)y_{12}(1,\tilde\lambda)-(v_2+\tilde\lambda\cos\theta_2)y_{22}(1,\tilde\lambda)=0, \qquad(2.38)$

$\sin\theta_2\,y_{12}(1,\tilde\lambda)+(v_1+\tilde\lambda\sin\theta_2)\frac{\partial y_{12}(1,\tilde\lambda)}{\partial\lambda}-\cos\theta_2\,y_{22}(1,\tilde\lambda)-(v_2+\tilde\lambda\cos\theta_2)\frac{\partial y_{22}(1,\tilde\lambda)}{\partial\lambda}=0. \qquad(2.39)$

Since $\alpha_2\neq 0$ and $\tilde\lambda$ is real, $(v_1+\tilde\lambda\sin\theta_2)^2+(v_2+\tilde\lambda\cos\theta_2)^2\neq 0$. Assume first that $v_1+\tilde\lambda\sin\theta_2\neq 0$. From (2.38) and (2.39) we have

$y_{12}(1,\tilde\lambda)=\frac{(v_2+\tilde\lambda\cos\theta_2)}{(v_1+\tilde\lambda\sin\theta_2)}y_{22}(1,\tilde\lambda),\qquad \frac{\partial y_{12}(1,\tilde\lambda)}{\partial\lambda}=\frac{\alpha_2\,y_{22}(1,\tilde\lambda)}{(v_1+\tilde\lambda\sin\theta_2)^2}+\frac{(v_2+\tilde\lambda\cos\theta_2)}{(v_1+\tilde\lambda\sin\theta_2)}\frac{\partial y_{22}(1,\tilde\lambda)}{\partial\lambda}. \qquad(2.40)$

Combining (2.40) and (2.36) with $\lambda=\tilde\lambda$, we obtain

$\frac{\alpha_2\left(y_{22}(1,\tilde\lambda)\right)^2}{(v_1+\tilde\lambda\sin\theta_2)^2}+\alpha_1=-\left[\frac{1}{p_1}\int_{-1}^{0}\left(|y_{11}(x,\tilde\lambda)|^2+|y_{21}(x,\tilde\lambda)|^2\right)dx+\frac{1}{p_2}\int_{0}^{1}\left(|y_{12}(x,\tilde\lambda)|^2+|y_{22}(x,\tilde\lambda)|^2\right)dx\right], \qquad(2.41)$

contradicting the assumption $\alpha_i>0$, $i=1,2$. The other case, when $v_2+\tilde\lambda\cos\theta_2\neq 0$, can be treated similarly, and the proof is complete. □

Let $\{\lambda_n\}_{n=-\infty}^{\infty}$ denote the sequence of zeros of $\Omega(\lambda)$. Then

$Y(x,\lambda_n):=\begin{pmatrix}y(x,\lambda_n)\\ T_{\theta_1}(y(x,\lambda_n))\\ T_{\theta_2}(y(x,\lambda_n))\end{pmatrix} \qquad(2.42)$

are the corresponding eigenvectors of the operator $A$. Since $A$ is symmetric, it is easy to show that the orthogonality relation

$\left(Y(\cdot,\lambda_n),Y(\cdot,\lambda_m)\right)_E=0\quad\text{for }n\neq m \qquad(2.43)$

holds. Here $\{y(\cdot,\lambda_n)\}_{n=-\infty}^{\infty}$ will be a sequence of eigenvector-functions of (2.1)-(2.5) corresponding to the eigenvalues $\{\lambda_n\}_{n=-\infty}^{\infty}$. We denote by $\Psi(x,\lambda_n)$ the normalized eigenvectors of $A$, i.e.,

$\Psi(x,\lambda_n):=\frac{Y(x,\lambda_n)}{\|Y(\cdot,\lambda_n)\|_E}=\begin{pmatrix}\psi(x,\lambda_n)\\ T_{\theta_1}(\psi(x,\lambda_n))\\ T_{\theta_2}(\psi(x,\lambda_n))\end{pmatrix}. \qquad(2.44)$

Since $z(\cdot,\lambda)$ satisfies (2.3)-(2.5), the eigenvalues are also determined via

$(\omega_1+\lambda\sin\theta_1)z_{11}(-1,\lambda)-(\omega_2+\lambda\cos\theta_1)z_{21}(-1,\lambda)=-\Omega(\lambda). \qquad(2.45)$

Therefore $\{z(\cdot,\lambda_n)\}_{n=-\infty}^{\infty}$ is another set of eigenvector-functions, which is related to $\{y(\cdot,\lambda_n)\}_{n=-\infty}^{\infty}$ by

$z(x,\lambda_n)=k_n\,y(x,\lambda_n),\quad x\in[-1,0)\cup(0,1],\ n\in\mathbb{Z}, \qquad(2.46)$

where $k_n\neq 0$ are non-zero constants, since all eigenvalues are simple. Since the eigenvalues are all real, we can take the eigenfunctions to be real valued.

3 Green's matrix and the expansion theorem

Let $F(\cdot)=\begin{pmatrix}f(\cdot)\\ w_1\\ w_2\end{pmatrix}$, where $f(\cdot)=\begin{pmatrix}f_1(\cdot)\\ f_2(\cdot)\end{pmatrix}$, be a continuous vector-valued function. To study the completeness of the eigenvectors of $A$, and hence the completeness of the eigenfunctions of (2.1)-(2.5), we derive Green's matrix of problem (2.1)-(2.5) as well as the resolvent of $A$. Indeed, let $\lambda$ not be an eigenvalue of $A$ and consider the inhomogeneous problem

$(A-\lambda I)Y(x)=F(x),\qquad Y(x)=\begin{pmatrix}y(x)\\ T_{\theta_1}(y(x))\\ T_{\theta_2}(y(x))\end{pmatrix}, \qquad(3.1)$

where $I$ is the identity operator. Since

$(A-\lambda I)Y(x)=\begin{pmatrix}L(y)-\lambda y(x)\\ -T_{\omega}(y(x))-\lambda T_{\theta_1}(y(x))\\ -T_{v}(y(x))-\lambda T_{\theta_2}(y(x))\end{pmatrix}=\begin{pmatrix}f(x)\\ w_1\\ w_2\end{pmatrix}, \qquad(3.2)$

then we have

$p(x)y_2'(x)-\left\{q_1(x)+\lambda\right\}y_1(x)=f_1(x),\qquad p(x)y_1'(x)+\left\{q_2(x)+\lambda\right\}y_2(x)=-f_2(x),\quad x\in[-1,0)\cup(0,1],$

$w_1=-T_{\omega}(y(x))-\lambda T_{\theta_1}(y(x)),\qquad w_2=-T_{v}(y(x))-\lambda T_{\theta_2}(y(x)),$

together with the boundary conditions (2.2), (2.4) and (2.5), where $\lambda$ is not an eigenvalue of problem (2.1)-(2.5).

Now we can represent the general solution of (3.1) in the following form:

$y(x,\lambda)=\begin{cases}A_1\begin{pmatrix}y_{11}(x,\lambda)\\ y_{21}(x,\lambda)\end{pmatrix}+B_1\begin{pmatrix}z_{11}(x,\lambda)\\ z_{21}(x,\lambda)\end{pmatrix}, & x\in[-1,0),\\[6pt] A_2\begin{pmatrix}y_{12}(x,\lambda)\\ y_{22}(x,\lambda)\end{pmatrix}+B_2\begin{pmatrix}z_{12}(x,\lambda)\\ z_{22}(x,\lambda)\end{pmatrix}, & x\in(0,1].\end{cases} \qquad(3.3)$

Applying the standard method of variation of constants to (3.3), the functions $A_1(x,\lambda)$, $B_1(x,\lambda)$ and $A_2(x,\lambda)$, $B_2(x,\lambda)$ satisfy the linear systems of equations

$A_1'(x,\lambda)y_{21}(x,\lambda)+B_1'(x,\lambda)z_{21}(x,\lambda)=\frac{f_1(x)}{p_1},\qquad A_1'(x,\lambda)y_{11}(x,\lambda)+B_1'(x,\lambda)z_{11}(x,\lambda)=-\frac{f_2(x)}{p_1},\quad x\in[-1,0), \qquad(3.4)$

$A_2'(x,\lambda)y_{22}(x,\lambda)+B_2'(x,\lambda)z_{22}(x,\lambda)=\frac{f_1(x)}{p_2},\qquad A_2'(x,\lambda)y_{12}(x,\lambda)+B_2'(x,\lambda)z_{12}(x,\lambda)=-\frac{f_2(x)}{p_2},\quad x\in(0,1]. \qquad(3.5)$

Since $\lambda$ is not an eigenvalue, $\Omega(\lambda)\neq 0$, and each of the linear systems (3.4) and (3.5) has a unique solution, which leads to

$A_1(x,\lambda)=\frac{1}{p_1\Omega(\lambda)}\int_{x}^{0}z^{T}(\xi,\lambda)f(\xi)\,d\xi+A_1,\qquad B_1(x,\lambda)=\frac{1}{p_1\Omega(\lambda)}\int_{-1}^{x}y^{T}(\xi,\lambda)f(\xi)\,d\xi+B_1,\quad x\in[-1,0), \qquad(3.6)$

$A_2(x,\lambda)=\frac{1}{p_2\Omega(\lambda)}\int_{x}^{1}z^{T}(\xi,\lambda)f(\xi)\,d\xi+A_2,\qquad B_2(x,\lambda)=\frac{1}{p_2\Omega(\lambda)}\int_{0}^{x}y^{T}(\xi,\lambda)f(\xi)\,d\xi+B_2,\quad x\in(0,1], \qquad(3.7)$

where $A_1$, $A_2$, $B_1$ and $B_2$ are arbitrary constants, and

$y(\xi,\lambda)=\begin{cases}\begin{pmatrix}y_{11}(\xi,\lambda)\\ y_{21}(\xi,\lambda)\end{pmatrix}, & \xi\in[-1,0),\\[6pt] \begin{pmatrix}y_{12}(\xi,\lambda)\\ y_{22}(\xi,\lambda)\end{pmatrix}, & \xi\in(0,1],\end{cases}\qquad z(\xi,\lambda)=\begin{cases}\begin{pmatrix}z_{11}(\xi,\lambda)\\ z_{21}(\xi,\lambda)\end{pmatrix}, & \xi\in[-1,0),\\[6pt] \begin{pmatrix}z_{12}(\xi,\lambda)\\ z_{22}(\xi,\lambda)\end{pmatrix}, & \xi\in(0,1].\end{cases}$

Substituting equations (3.6) and (3.7) into (3.3), we obtain the solution of (3.1):

$y(x,\lambda)=\begin{cases}\dfrac{y(x,\lambda)}{p_1\Omega(\lambda)}\displaystyle\int_{x}^{0}z^{T}(\xi,\lambda)f(\xi)\,d\xi+\dfrac{z(x,\lambda)}{p_1\Omega(\lambda)}\displaystyle\int_{-1}^{x}y^{T}(\xi,\lambda)f(\xi)\,d\xi+A_1y(x,\lambda)+B_1z(x,\lambda), & x\in[-1,0),\\[8pt] \dfrac{y(x,\lambda)}{p_2\Omega(\lambda)}\displaystyle\int_{x}^{1}z^{T}(\xi,\lambda)f(\xi)\,d\xi+\dfrac{z(x,\lambda)}{p_2\Omega(\lambda)}\displaystyle\int_{0}^{x}y^{T}(\xi,\lambda)f(\xi)\,d\xi+A_2y(x,\lambda)+B_2z(x,\lambda), & x\in(0,1].\end{cases} \qquad(3.8)$

Then, from (3.2) and the transmission conditions (2.4) and (2.5), we get

$A_1=-\frac{w_2}{\alpha_2\Omega(\lambda)}+\frac{1}{p_2\Omega(\lambda)}\int_{0}^{1}z^{T}(\xi,\lambda)f(\xi)\,d\xi,\qquad B_1=\frac{w_1}{\alpha_1\Omega(\lambda)},$

$A_2=-\frac{w_2}{\alpha_2\Omega(\lambda)},\qquad B_2=\frac{w_1}{\alpha_1\Omega(\lambda)}+\frac{1}{p_1\Omega(\lambda)}\int_{-1}^{0}y^{T}(\xi,\lambda)f(\xi)\,d\xi. \qquad(3.9)$

Then (3.8) can be written as

$y(x,\lambda)=-\frac{w_2}{\alpha_2\Omega(\lambda)}y(x,\lambda)+\frac{w_1}{\alpha_1\Omega(\lambda)}z(x,\lambda)+\frac{z(x,\lambda)}{\Omega(\lambda)}\int_{-1}^{x}\frac{y^{T}(\xi,\lambda)f(\xi)}{p(\xi)}\,d\xi+\frac{y(x,\lambda)}{\Omega(\lambda)}\int_{x}^{1}\frac{z^{T}(\xi,\lambda)f(\xi)}{p(\xi)}\,d\xi,\quad x,\xi\in[-1,0)\cup(0,1], \qquad(3.10)$

which can be written as

$y(x,\lambda)=-\frac{w_2}{\alpha_2\Omega(\lambda)}y(x,\lambda)+\frac{w_1}{\alpha_1\Omega(\lambda)}z(x,\lambda)+\int_{-1}^{1}G(x,\xi,\lambda)\frac{f(\xi)}{p(\xi)}\,d\xi, \qquad(3.11)$

where

$G(x,\xi,\lambda)=\frac{1}{\Omega(\lambda)}\begin{cases}z(x,\lambda)y^{T}(\xi,\lambda), & -1\le\xi\le x\le 1,\ x\neq 0,\ \xi\neq 0,\\ y(x,\lambda)z^{T}(\xi,\lambda), & -1\le x\le\xi\le 1,\ x\neq 0,\ \xi\neq 0.\end{cases} \qquad(3.12)$

Expanding (3.12) we obtain the concrete form

$G(x,\xi,\lambda)=\frac{1}{\Omega(\lambda)}\begin{cases}\begin{pmatrix}z_1(x,\lambda)y_1(\xi,\lambda) & z_1(x,\lambda)y_2(\xi,\lambda)\\ z_2(x,\lambda)y_1(\xi,\lambda) & z_2(x,\lambda)y_2(\xi,\lambda)\end{pmatrix}, & -1\le\xi\le x\le 1,\ x\neq 0,\ \xi\neq 0,\\[10pt] \begin{pmatrix}y_1(x,\lambda)z_1(\xi,\lambda) & y_1(x,\lambda)z_2(\xi,\lambda)\\ y_2(x,\lambda)z_1(\xi,\lambda) & y_2(x,\lambda)z_2(\xi,\lambda)\end{pmatrix}, & -1\le x\le\xi\le 1,\ x\neq 0,\ \xi\neq 0.\end{cases} \qquad(3.13)$

The matrix $G(x,\xi,\lambda)$ is called Green's matrix of problem (2.1)-(2.5). Obviously, $G(x,\xi,\lambda)$ is a meromorphic function of $\lambda$ for every $(x,\xi)\in([-1,0)\cup(0,1])^2$, which has simple poles only at the eigenvalues. Therefore

$Y(x)=(A-\lambda I)^{-1}F(x)=\begin{pmatrix}-\dfrac{w_2}{\alpha_2\Omega(\lambda)}y(x,\lambda)+\dfrac{w_1}{\alpha_1\Omega(\lambda)}z(x,\lambda)+\displaystyle\int_{-1}^{1}G(x,\xi,\lambda)\dfrac{f(\xi)}{p(\xi)}\,d\xi\\ T_{\theta_1}(y(x))\\ T_{\theta_2}(y(x))\end{pmatrix}. \qquad(3.14)$

Lemma 3.1 The operator $A$ is self-adjoint in $E$.

Proof Since $A$ is a symmetric densely defined operator, it is sufficient to show that the deficiency spaces are the null spaces; then $A=A^*$. Indeed, if $F(x)=\begin{pmatrix}f(x)\\ w_1\\ w_2\end{pmatrix}$ and $\lambda$ is a non-real number, then taking

$Y(x)=\begin{pmatrix}-\dfrac{w_2}{\alpha_2\Omega(\lambda)}y(x,\lambda)+\dfrac{w_1}{\alpha_1\Omega(\lambda)}z(x,\lambda)+\displaystyle\int_{-1}^{1}G(x,\xi,\lambda)\dfrac{f(\xi)}{p(\xi)}\,d\xi\\ T_{\theta_1}(y(x))\\ T_{\theta_2}(y(x))\end{pmatrix}$

implies that $Y\in D(A)$. Since $G(x,\xi,\lambda)$ satisfies conditions (2.2)-(2.5), then $(A-\lambda I)Y(x)=F(x)$. Now we prove that the inverse of $(A-\lambda I)$ exists. Since $A$ is symmetric, if $AY(x)=\lambda Y(x)$, then

$(\overline{\lambda}-\lambda)\left(Y(\cdot),Y(\cdot)\right)_E=\left(Y(\cdot),\lambda Y(\cdot)\right)_E-\left(\lambda Y(\cdot),Y(\cdot)\right)_E=\left(Y(\cdot),AY(\cdot)\right)_E-\left(AY(\cdot),Y(\cdot)\right)_E=0.$

Since $\overline{\lambda}-\lambda\neq 0$, $\left(Y(\cdot),Y(\cdot)\right)_E=0$, i.e., $Y=0$. Then $R(\lambda;A):=(A-\lambda I)^{-1}$, the resolvent operator of $A$, exists. Thus

$R(\lambda;A)F=(A-\lambda I)^{-1}F=Y.$

Take $\lambda=\pm i$. The domains of $(A-iI)^{-1}$ and $(A+iI)^{-1}$ are exactly $E$. Consequently, the ranges of $(A-iI)$ and $(A+iI)$ are also $E$. Hence the deficiency spaces of $A$ are

$N_-:=N(A^*+iI)=R(A-iI)^{\perp}=E^{\perp}=\{0\},\qquad N_+:=N(A^*-iI)=R(A+iI)^{\perp}=E^{\perp}=\{0\}.$

Hence $A$ is self-adjoint. □

The next theorem is an eigenfunction expansion theorem. The proof is similar to that of Levitan and Sargsjan in [28, pp. 67-77]; see also [24, 26, 31, 32].

Theorem 3.2

(i) For $Y(\cdot)\in E$,

$\|Y(\cdot)\|_E^2=\sum_{n=-\infty}^{\infty}\left|\left(Y(\cdot),\Psi(\cdot,\lambda_n)\right)_E\right|^2. \qquad(3.15)$

(ii) For $Y(\cdot)\in D(A)$,

$Y(x)=\sum_{n=-\infty}^{\infty}\left(Y(\cdot),\Psi(\cdot,\lambda_n)\right)_E\,\Psi(x,\lambda_n), \qquad(3.16)$

the series being absolutely and uniformly convergent in the first component on $[-1,0)\cup(0,1]$, and absolutely convergent in the second component.

4 Asymptotic formulas of eigenvalues and eigenvector-functions

In this section, we derive the asymptotic formulae of the eigenvalues $\{\lambda_n\}_{n=-\infty}^{\infty}$ and the eigenvector-functions $\{y(\cdot,\lambda_n)\}_{n=-\infty}^{\infty}$. In the following lemma, we transform equations (2.1), (2.23), (2.27) and (2.30) into integral equations; see [26].

Lemma 4.1 Let $y(\cdot,\lambda)$ be the solution of (2.1) defined in Section 2. Then the following integral equations hold:

$y_{11}(x,\lambda)=\omega_2\cos\left[\frac{\lambda(x+1)}{p_1}\right]-\omega_1\sin\left[\frac{\lambda(x+1)}{p_1}\right]+\lambda\cos\left[\frac{\lambda(x+1)}{p_1}+\theta_1\right] -\frac{1}{p_1}\int_{-1}^{x}\sin\left[\frac{\lambda(x-t)}{p_1}\right]q_1(t)y_{11}(t,\lambda)\,dt-\frac{1}{p_1}\int_{-1}^{x}\cos\left[\frac{\lambda(x-t)}{p_1}\right]q_2(t)y_{21}(t,\lambda)\,dt, \qquad(4.1)$

$y_{21}(x,\lambda)=\omega_1\cos\left[\frac{\lambda(x+1)}{p_1}\right]+\omega_2\sin\left[\frac{\lambda(x+1)}{p_1}\right]+\lambda\sin\left[\frac{\lambda(x+1)}{p_1}+\theta_1\right] +\frac{1}{p_1}\int_{-1}^{x}\cos\left[\frac{\lambda(x-t)}{p_1}\right]q_1(t)y_{11}(t,\lambda)\,dt-\frac{1}{p_1}\int_{-1}^{x}\sin\left[\frac{\lambda(x-t)}{p_1}\right]q_2(t)y_{21}(t,\lambda)\,dt, \qquad(4.2)$

$y_{12}(x,\lambda)=\frac{\gamma_1}{\delta_1}y_{11}(0^-,\lambda)\cos\left[\frac{\lambda x}{p_2}\right]-\frac{\gamma_2}{\delta_2}y_{21}(0^-,\lambda)\sin\left[\frac{\lambda x}{p_2}\right] -\frac{1}{p_2}\int_{0}^{x}\sin\left[\frac{\lambda(x-t)}{p_2}\right]q_1(t)y_{12}(t,\lambda)\,dt-\frac{1}{p_2}\int_{0}^{x}\cos\left[\frac{\lambda(x-t)}{p_2}\right]q_2(t)y_{22}(t,\lambda)\,dt, \qquad(4.3)$

$y_{22}(x,\lambda)=\frac{\gamma_1}{\delta_1}y_{11}(0^-,\lambda)\sin\left[\frac{\lambda x}{p_2}\right]+\frac{\gamma_2}{\delta_2}y_{21}(0^-,\lambda)\cos\left[\frac{\lambda x}{p_2}\right] +\frac{1}{p_2}\int_{0}^{x}\cos\left[\frac{\lambda(x-t)}{p_2}\right]q_1(t)y_{12}(t,\lambda)\,dt-\frac{1}{p_2}\int_{0}^{x}\sin\left[\frac{\lambda(x-t)}{p_2}\right]q_2(t)y_{22}(t,\lambda)\,dt. \qquad(4.4)$

Proof To prove (4.1) and (4.2), it is enough to substitute $p_1y_{21}'(t,\lambda)-\lambda y_{11}(t,\lambda)$ and $-p_1y_{11}'(t,\lambda)-\lambda y_{21}(t,\lambda)$ for $q_1(t)y_{11}(t,\lambda)$ and $q_2(t)y_{21}(t,\lambda)$ in the integral terms of (4.1) and (4.2) and integrate by parts. In the same way, we can prove (4.3) and (4.4) by substituting $p_2y_{22}'(t,\lambda)-\lambda y_{12}(t,\lambda)$ and $-p_2y_{12}'(t,\lambda)-\lambda y_{22}(t,\lambda)$ for $q_1(t)y_{12}(t,\lambda)$ and $q_2(t)y_{22}(t,\lambda)$ in the integral terms of (4.3) and (4.4). □

For $|\lambda|\to\infty$ the following estimates hold uniformly with respect to $x$, $x\in[-1,0)\cup(0,1]$ (cf. [28, p.55]; see also [22, 23]):

$y_{11}(x,\lambda)=\lambda\cos\left[\frac{\lambda(x+1)}{p_1}+\theta_1\right]+O\left(\exp\left[\frac{\tau(x+1)}{p_1}\right]\right), \qquad(4.5)$

$y_{21}(x,\lambda)=\lambda\sin\left[\frac{\lambda(x+1)}{p_1}+\theta_1\right]+O\left(\exp\left[\frac{\tau(x+1)}{p_1}\right]\right), \qquad(4.6)$

$y_{12}(x,\lambda)=\lambda\left[\frac{\gamma_1}{\delta_1}\cos\left[\frac{\lambda}{p_1}+\theta_1\right]\cos\left[\frac{\lambda x}{p_2}\right]-\frac{\gamma_2}{\delta_2}\sin\left[\frac{\lambda}{p_1}+\theta_1\right]\sin\left[\frac{\lambda x}{p_2}\right]\right]+O\left(\exp\left[\frac{\tau(p_1x+p_2)}{p_1p_2}\right]\right), \qquad(4.7)$

$y_{22}(x,\lambda)=\lambda\left[\frac{\gamma_1}{\delta_1}\cos\left[\frac{\lambda}{p_1}+\theta_1\right]\sin\left[\frac{\lambda x}{p_2}\right]+\frac{\gamma_2}{\delta_2}\sin\left[\frac{\lambda}{p_1}+\theta_1\right]\cos\left[\frac{\lambda x}{p_2}\right]\right]+O\left(\exp\left[\frac{\tau(p_1x+p_2)}{p_1p_2}\right]\right), \qquad(4.8)$

where $\tau=|\operatorname{Im}\lambda|$. Now we will find an asymptotic formula for the eigenvalues. Since the eigenvalues of the boundary value problem (2.1)-(2.5) coincide with the roots of the equation

$(v_1+\lambda\sin\theta_2)y_{12}(1,\lambda)-(v_2+\lambda\cos\theta_2)y_{22}(1,\lambda)=0, \qquad(4.9)$

then from estimates (4.7) and (4.8) we get

$\lambda^2\sin\theta_2\left[\frac{\gamma_1}{\delta_1}\cos\left[\frac{\lambda}{p_1}+\theta_1\right]\cos\left[\frac{\lambda}{p_2}\right]-\frac{\gamma_2}{\delta_2}\sin\left[\frac{\lambda}{p_1}+\theta_1\right]\sin\left[\frac{\lambda}{p_2}\right]\right] -\lambda^2\cos\theta_2\left[\frac{\gamma_1}{\delta_1}\cos\left[\frac{\lambda}{p_1}+\theta_1\right]\sin\left[\frac{\lambda}{p_2}\right]+\frac{\gamma_2}{\delta_2}\sin\left[\frac{\lambda}{p_1}+\theta_1\right]\cos\left[\frac{\lambda}{p_2}\right]\right] =O\left(|\lambda|\exp\left[\frac{\tau(p_1+p_2)}{p_1p_2}\right]\right),$

which can be written as

$\frac{\gamma_1}{\delta_1}\cos\left[\frac{\lambda}{p_1}+\theta_1\right]\sin\left[\theta_2-\frac{\lambda}{p_2}\right]-\frac{\gamma_2}{\delta_2}\sin\left[\frac{\lambda}{p_1}+\theta_1\right]\cos\left[\theta_2-\frac{\lambda}{p_2}\right] =O\left(\frac{1}{|\lambda|}\exp\left[\frac{\tau(p_1+p_2)}{p_1p_2}\right]\right). \qquad(4.10)$

Then, if $\gamma_1\delta_2-\gamma_2\delta_1=0$, equation (4.10) becomes

$\sin\left[\lambda\frac{(p_1+p_2)}{p_1p_2}+\theta_1-\theta_2\right]=O\left(\frac{1}{|\lambda|}\exp\left[\frac{\tau(p_1+p_2)}{p_1p_2}\right]\right). \qquad(4.11)$

For large $|\lambda|$, equation (4.11) obviously has solutions which, as is not hard to see, have the form

$\lambda_n\frac{(p_1+p_2)}{p_1p_2}+\theta_1-\theta_2=n\pi+\delta_n,\quad n=0,\pm1,\pm2,\dots. \qquad(4.12)$

Inserting these values into (4.11), we find that $\sin\delta_n=O(\frac{1}{n})$, i.e., $\delta_n=O(\frac{1}{n})$. Thus we obtain the following asymptotic formula for the eigenvalues:

$\lambda_n=\frac{p_1p_2}{p_1+p_2}\left(n\pi+\theta_2-\theta_1\right)+O\left(\frac{1}{n}\right),\quad n=0,\pm1,\pm2,\dots. \qquad(4.13)$

Using formula (4.13), we obtain the following asymptotic formula for the eigenvector-functions $y(\cdot,\lambda_n)$:

$y(x,\lambda_n)=\begin{cases}\begin{pmatrix}\lambda_n\cos\left[\frac{\lambda_n(x+1)}{p_1}+\theta_1\right]+O(1)\\ \lambda_n\sin\left[\frac{\lambda_n(x+1)}{p_1}+\theta_1\right]+O(1)\end{pmatrix}, & x\in[-1,0),\\[12pt] \begin{pmatrix}\lambda_n\left[\frac{\gamma_1}{\delta_1}\cos\left[\frac{\lambda_n}{p_1}+\theta_1\right]\cos\left[\frac{\lambda_n x}{p_2}\right]-\frac{\gamma_2}{\delta_2}\sin\left[\frac{\lambda_n}{p_1}+\theta_1\right]\sin\left[\frac{\lambda_n x}{p_2}\right]\right]+O(1)\\ \lambda_n\left[\frac{\gamma_1}{\delta_1}\cos\left[\frac{\lambda_n}{p_1}+\theta_1\right]\sin\left[\frac{\lambda_n x}{p_2}\right]+\frac{\gamma_2}{\delta_2}\sin\left[\frac{\lambda_n}{p_1}+\theta_1\right]\cos\left[\frac{\lambda_n x}{p_2}\right]\right]+O(1)\end{pmatrix}, & x\in(0,1].\end{cases} \qquad(4.14)$

Inserting the asymptotic values (4.13) of $\lambda_n$ into (4.14) yields the corresponding explicit leading-order asymptotics (4.15) of $y(x,\lambda_n)$ on $[-1,0)$ and $(0,1]$.

5 The sampling theorems

In this section we derive two sampling theorems associated with problem (2.1)-(2.5). The first one is as follows.

Theorem 5.1 Let $f(x)=\begin{pmatrix}f_1(x)\\ f_2(x)\end{pmatrix}\in H$. For $\lambda\in\mathbb{C}$, let

$F(\lambda)=\frac{1}{p_1}\int_{-1}^{0}f^{T}(x)y(x,\lambda)\,dx+\frac{1}{p_2}\int_{0}^{1}f^{T}(x)y(x,\lambda)\,dx, \qquad(5.1)$

where $y(\cdot,\lambda)$ is the solution defined above. Then $F(\lambda)$ is an entire function of exponential type that can be reconstructed from its values at the points $\{\lambda_n\}_{n=-\infty}^{\infty}$ via the sampling formula

$F(\lambda)=\sum_{n=-\infty}^{\infty}F(\lambda_n)\frac{\Omega(\lambda)}{(\lambda-\lambda_n)\Omega'(\lambda_n)}. \qquad(5.2)$

Series (5.2) converges absolutely on $\mathbb{C}$ and uniformly on any compact subset of $\mathbb{C}$, and $\Omega(\lambda)$ is the entire function defined in (2.34).

Proof Relation (5.1) can be rewritten in the form

$F(\lambda)=\left(F(\cdot),Y(\cdot,\lambda)\right)_E=\frac{1}{p_1}\int_{-1}^{0}f^{T}(x)y(x,\lambda)\,dx+\frac{1}{p_2}\int_{0}^{1}f^{T}(x)y(x,\lambda)\,dx,\quad\lambda\in\mathbb{C}, \qquad(5.3)$

where

$F(x)=\begin{pmatrix}f(x)\\ 0\\ 0\end{pmatrix},\qquad Y(x,\lambda)=\begin{pmatrix}y(x,\lambda)\\ T_{\theta_1}(y(x,\lambda))\\ T_{\theta_2}(y(x,\lambda))\end{pmatrix}. \qquad(5.4)$

Since both $F(\cdot)$ and $Y(\cdot,\lambda)$ are in $E$, they have the Fourier expansions

$F(x)=\sum_{n=-\infty}^{\infty}\left(F(\cdot),Y(\cdot,\lambda_n)\right)_E\frac{Y(x,\lambda_n)}{\|Y(\cdot,\lambda_n)\|_E^2},\qquad Y(x,\lambda)=\sum_{n=-\infty}^{\infty}\left(Y(\cdot,\lambda),Y(\cdot,\lambda_n)\right)_E\frac{Y(x,\lambda_n)}{\|Y(\cdot,\lambda_n)\|_E^2}, \qquad(5.5)$

where $\lambda\in\mathbb{C}$ and

$\left(F(\cdot),Y(\cdot,\lambda_n)\right)_E=\frac{1}{p_1}\int_{-1}^{0}f^{T}(x)y(x,\lambda_n)\,dx+\frac{1}{p_2}\int_{0}^{1}f^{T}(x)y(x,\lambda_n)\,dx.$

Applying Parseval's identity to (5.3), we obtain

$F(\lambda)=\sum_{n=-\infty}^{\infty}F(\lambda_n)\frac{\left(Y(\cdot,\lambda),Y(\cdot,\lambda_n)\right)_E}{\|Y(\cdot,\lambda_n)\|_E^2},\quad\lambda\in\mathbb{C}. \qquad(5.6)$

Now we calculate $\left(Y(\cdot,\lambda),Y(\cdot,\lambda_n)\right)_E$ and $\|Y(\cdot,\lambda_n)\|_E^2$ for $\lambda\in\mathbb{C}$, $n\in\mathbb{Z}$. To prove expansion (5.2), we need to show that

$\frac{\left(Y(\cdot,\lambda),Y(\cdot,\lambda_n)\right)_E}{\|Y(\cdot,\lambda_n)\|_E^2}=\frac{\Omega(\lambda)}{(\lambda-\lambda_n)\Omega'(\lambda_n)},\quad n\in\mathbb{Z},\ \lambda\in\mathbb{C}. \qquad(5.7)$

Indeed, let $\lambda\in\mathbb{C}$ and $n\in\mathbb{Z}$ be fixed. By the definition of the inner product of $E$, we have

$\left(Y(\cdot,\lambda),Y(\cdot,\lambda_n)\right)_E=\frac{1}{p_1}\int_{-1}^{0}y^{T}(x,\lambda)y(x,\lambda_n)\,dx+\frac{1}{p_2}\int_{0}^{1}y^{T}(x,\lambda)y(x,\lambda_n)\,dx +\frac{1}{\alpha_1}T_{\theta_1}(y(x,\lambda))T_{\theta_1}(y(x,\lambda_n))+\frac{1}{\alpha_2}T_{\theta_2}(y(x,\lambda))T_{\theta_2}(y(x,\lambda_n)). \qquad(5.8)$

From Green's identity (see [28, p.51]) we have

$(\lambda_n-\lambda)\left[\frac{1}{p_1}\int_{-1}^{0}y^{T}(x,\lambda)y(x,\lambda_n)\,dx+\frac{1}{p_2}\int_{0}^{1}y^{T}(x,\lambda)y(x,\lambda_n)\,dx\right] =W\left(y(0-,\lambda),y(0-,\lambda_n)\right)-W\left(y(-1,\lambda),y(-1,\lambda_n)\right)-W\left(y(0+,\lambda),y(0+,\lambda_n)\right)+W\left(y(1,\lambda),y(1,\lambda_n)\right). \qquad(5.9)$

Then (5.9) and the initial conditions (2.23) and (2.27) imply

$(\lambda_n-\lambda)\left[\frac{1}{p_1}\int_{-1}^{0}y^{T}(x,\lambda)y(x,\lambda_n)\,dx+\frac{1}{p_2}\int_{0}^{1}y^{T}(x,\lambda)y(x,\lambda_n)\,dx\right]=W\left(y(1,\lambda),y(1,\lambda_n)\right)-(\lambda_n-\lambda)\alpha_1, \qquad(5.10)$

from which

$\frac{1}{p_1}\int_{-1}^{0}y^{T}(x,\lambda)y(x,\lambda_n)\,dx+\frac{1}{p_2}\int_{0}^{1}y^{T}(x,\lambda)y(x,\lambda_n)\,dx=\frac{W\left(y(1,\lambda),y(1,\lambda_n)\right)}{\lambda_n-\lambda}-\alpha_1. \qquad(5.11)$

From (2.46), (2.25) and (2.18), we have

$W\left(y(1,\lambda),y(1,\lambda_n)\right)=y_{12}(1,\lambda)y_{22}(1,\lambda_n)-y_{22}(1,\lambda)y_{12}(1,\lambda_n) =k_n^{-1}\left[y_{12}(1,\lambda)z_{22}(1,\lambda_n)-y_{22}(1,\lambda)z_{12}(1,\lambda_n)\right] =k_n^{-1}\left[(\lambda_n\sin\theta_2+v_1)y_{12}(1,\lambda)-(\lambda_n\cos\theta_2+v_2)y_{22}(1,\lambda)\right] =k_n^{-1}\left[\Omega(\lambda)+(\lambda_n-\lambda)T_{\theta_2}(y(x,\lambda))\right]. \qquad(5.12)$

Relations (2.46) and $T_{\theta_2}(z(x,\lambda_n))=-\alpha_2$ and the linearity of the boundary conditions yield

$\frac{1}{\alpha_2}T_{\theta_2}(y(x,\lambda))T_{\theta_2}(y(x,\lambda_n))=\frac{k_n^{-1}}{\alpha_2}T_{\theta_2}(y(x,\lambda))T_{\theta_2}(z(x,\lambda_n))=-k_n^{-1}T_{\theta_2}(y(x,\lambda)). \qquad(5.13)$

Substituting (5.11), (5.12), (5.13) and $T_{\theta_1}(y(x,\lambda))=T_{\theta_1}(y(x,\lambda_n))=\alpha_1$ into (5.8), we get

$\left(Y(\cdot,\lambda),Y(\cdot,\lambda_n)\right)_E=k_n^{-1}\frac{\Omega(\lambda)}{\lambda_n-\lambda}. \qquad(5.14)$

Letting $\lambda\to\lambda_n$ in (5.14), and since the zeros of $\Omega(\lambda)$ are simple, we get

$\left(Y(\cdot,\lambda_n),Y(\cdot,\lambda_n)\right)_E=\|Y(\cdot,\lambda_n)\|_E^2=-k_n^{-1}\Omega'(\lambda_n). \qquad(5.15)$

Since $\lambda\in\mathbb{C}$ and $n\in\mathbb{Z}$ are arbitrary, (5.14) and (5.15) hold for all $\lambda\in\mathbb{C}$ and all $n\in\mathbb{Z}$. Therefore from (5.14) and (5.15) we get (5.7). Hence (5.2) is proved with pointwise convergence on $\mathbb{C}$. Now we investigate the convergence of (5.2). First we prove that it is absolutely convergent on $\mathbb{C}$. Using the Cauchy-Schwarz inequality, for $\lambda\in\mathbb{C}$ we get

$\sum_{k=-\infty}^{\infty}\left|F(\lambda_k)\frac{\Omega(\lambda)}{(\lambda-\lambda_k)\Omega'(\lambda_k)}\right| \le\left(\sum_{k=-\infty}^{\infty}\frac{\left|\left(F(\cdot),Y(\cdot,\lambda_k)\right)_E\right|^2}{\|Y(\cdot,\lambda_k)\|_E^2}\right)^{1/2}\left(\sum_{k=-\infty}^{\infty}\frac{\left|\left(Y(\cdot,\lambda),Y(\cdot,\lambda_k)\right)_E\right|^2}{\|Y(\cdot,\lambda_k)\|_E^2}\right)^{1/2}. \qquad(5.16)$

Since $F(\cdot),Y(\cdot,\lambda)\in E$, the two series on the right-hand side of (5.16) converge. Thus series (5.2) converges absolutely on $\mathbb{C}$. As for uniform convergence, let $M\subset\mathbb{C}$ be compact, let $\lambda\in M$ and $N>0$. Define $\kappa_N(\lambda)$ to be

$\kappa_N(\lambda):=\left|F(\lambda)-\sum_{|k|\le N}F(\lambda_k)\frac{\Omega(\lambda)}{(\lambda-\lambda_k)\Omega'(\lambda_k)}\right|. \qquad(5.17)$

Using the same method developed above, we have

$\kappa_N(\lambda)\le\left(\sum_{|k|>N}\frac{\left|\left(F(\cdot),Y(\cdot,\lambda_k)\right)_E\right|^2}{\|Y(\cdot,\lambda_k)\|_E^2}\right)^{1/2}\left(\sum_{|k|>N}\frac{\left|\left(Y(\cdot,\lambda),Y(\cdot,\lambda_k)\right)_E\right|^2}{\|Y(\cdot,\lambda_k)\|_E^2}\right)^{1/2}. \qquad(5.18)$

Therefore

$\kappa_N(\lambda)\le\|Y(\cdot,\lambda)\|_E\left(\sum_{|k|>N}\frac{\left|\left(F(\cdot),Y(\cdot,\lambda_k)\right)_E\right|^2}{\|Y(\cdot,\lambda_k)\|_E^2}\right)^{1/2}. \qquad(5.19)$

Since $[-1,1]\times M$ is compact, we can find a positive constant $C_M$ such that

$\|Y(\cdot,\lambda)\|_E\le C_M\quad\text{for all }\lambda\in M. \qquad(5.20)$

Then

$\kappa_N(\lambda)\le C_M\left(\sum_{|k|>N}\frac{\left|\left(F(\cdot),Y(\cdot,\lambda_k)\right)_E\right|^2}{\|Y(\cdot,\lambda_k)\|_E^2}\right)^{1/2} \qquad(5.21)$

uniformly on $M$. In view of Parseval's equality,

$\left(\sum_{|k|>N}\frac{\left|\left(F(\cdot),Y(\cdot,\lambda_k)\right)_E\right|^2}{\|Y(\cdot,\lambda_k)\|_E^2}\right)^{1/2}\to 0\quad\text{as }N\to\infty.$

Thus kn (X) ^ 0 uniformly on M. Hence (5.2) converges uniformly on M. Thus F(X) is an entire function. From the relation

\[
|F(\lambda)| \le \frac{1}{\rho_1}\int_{-1}^{0}|f_1(x)||y_{11}(x,\lambda)|\,dx
+ \frac{1}{\rho_1}\int_{-1}^{0}|f_2(x)||y_{21}(x,\lambda)|\,dx
+ \frac{1}{\rho_2}\int_{0}^{1}|f_1(x)||y_{12}(x,\lambda)|\,dx
+ \frac{1}{\rho_2}\int_{0}^{1}|f_2(x)||y_{22}(x,\lambda)|\,dx,
\quad \lambda\in\mathbb{C},
\]

and the fact that $y_{ij}(\cdot,\lambda)$, $i,j=1,2$, are entire functions of exponential type, we conclude that $F(\lambda)$ is also of exponential type. □

Remark 5.2 To see that expansion (5.2) is a Lagrange-type interpolation, we may replace $\Omega(\lambda)$ by the canonical product

\[
Q(\lambda) = (\lambda-\lambda_0)\prod_{n=1}^{\infty}\Bigl(1-\frac{\lambda}{\lambda_n}\Bigr)\Bigl(1-\frac{\lambda}{\lambda_{-n}}\Bigr).
\tag{5.22}
\]

From Hadamard's factorization theorem, see [1], $\Omega(\lambda)=h(\lambda)Q(\lambda)$, where $h(\lambda)$ is an entire function with no zeros. Thus

\[
\frac{\Omega(\lambda)}{\Omega'(\lambda_n)} = \frac{h(\lambda)Q(\lambda)}{h(\lambda_n)Q'(\lambda_n)},
\]

and (5.1), (5.2) remain valid for the function $F(\lambda)/h(\lambda)$. Hence

\[
F(\lambda) = \sum_{n=-\infty}^{\infty} F(\lambda_n)\,\frac{h(\lambda)Q(\lambda)}{h(\lambda_n)Q'(\lambda_n)(\lambda-\lambda_n)}.
\tag{5.23}
\]

We may redefine (5.1) by taking the kernel $\tilde{y}(\cdot,\lambda)=y(\cdot,\lambda)/h(\lambda)$ to get

\[
\tilde{F}(\lambda) = \frac{F(\lambda)}{h(\lambda)} = \sum_{n=-\infty}^{\infty}\tilde{F}(\lambda_n)\,\frac{Q(\lambda)}{(\lambda-\lambda_n)Q'(\lambda_n)}.
\tag{5.24}
\]
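The Lagrange-type structure of (5.24) can be illustrated numerically in the classical Paley-Wiener setting, where the sampling points are $\lambda_n=n$, the canonical product is $Q(\lambda)=\sin(\pi\lambda)/\pi$ and $Q'(n)=(-1)^n$. The sketch below is only a sanity check of the interpolation formula in that classical case, not a computation for the Dirac problem; the test function `F` and the truncation order `N` are illustrative choices, not taken from the paper.

```python
import math

def F(lam, a=0.3):
    # a hypothetical bandlimited test function: F(lam) = sin(pi(lam - a)) / (pi(lam - a))
    x = math.pi * (lam - a)
    return 1.0 if abs(x) < 1e-12 else math.sin(x) / x

def lagrange_series(lam, N=4000):
    """Truncated Lagrange-type expansion  F(lam) ~ sum_{|n|<=N} F(n) Q(lam)/((lam-n) Q'(n))
    with Q(lam) = sin(pi lam)/pi and Q'(n) = (-1)**n."""
    total = 0.0
    for n in range(-N, N + 1):
        d = lam - n
        if abs(d) < 1e-12:
            total += F(n)  # each sampling function equals 1 at its own node
        else:
            # sin(pi lam) / (pi (lam - n) (-1)**n) simplifies to sin(pi (lam - n)) / (pi (lam - n))
            total += F(n) * math.sin(math.pi * d) / (math.pi * d)
    return total

approx = lagrange_series(2.7)
exact = F(2.7)
```

With these choices the terms decay like $1/n^2$, so the truncation error at $N=4000$ is roughly of order $10^{-4}$ or smaller.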

The next theorem gives vector-type interpolation sampling expansions associated with problem (2.1)-(2.5) for integral transforms whose kernels are defined in terms of Green's matrix. As we see in (3.12), Green's matrix $G(x,\xi,\lambda)$ of problem (2.1)-(2.5) has simple poles at $\{\lambda_k\}_{k=-\infty}^{\infty}$. Define the function $\mathcal{G}(x,\lambda)$ to be $\mathcal{G}(x,\lambda) := \Omega(\lambda)G(x,\xi_0,\lambda)$, where $\xi_0\in[-1,0)\cup(0,1]$ is a fixed point and $\Omega(\lambda)$ is the function defined in (2.34) or the canonical product (5.22).

Theorem 5.3 Let $f(x)=\binom{f_1(x)}{f_2(x)}$, where $f_1,f_2\in L^2(-1,1)$, and let $F(\lambda)=\binom{F_1(\lambda)}{F_2(\lambda)}$ be the vector-valued transform

\[
F(\lambda) = \frac{1}{\rho_1}\int_{-1}^{0}\mathcal{G}(x,\lambda)f(x)\,dx + \frac{1}{\rho_2}\int_{0}^{1}\mathcal{G}(x,\lambda)f(x)\,dx.
\tag{5.25}
\]

Then $F(\lambda)$ is a vector-valued entire function of exponential type that admits the vector-valued sampling expansion

\[
F(\lambda) = \sum_{n=-\infty}^{\infty} F(\lambda_n)\,\frac{\Omega(\lambda)}{(\lambda-\lambda_n)\Omega'(\lambda_n)}.
\tag{5.26}
\]

The vector-valued series (5.26) converges absolutely on $\mathbb{C}$ and uniformly on compact subsets of $\mathbb{C}$. Here (5.26) means

\[
F_1(\lambda) = \sum_{n=-\infty}^{\infty} F_1(\lambda_n)\,\frac{\Omega(\lambda)}{(\lambda-\lambda_n)\Omega'(\lambda_n)},
\qquad
F_2(\lambda) = \sum_{n=-\infty}^{\infty} F_2(\lambda_n)\,\frac{\Omega(\lambda)}{(\lambda-\lambda_n)\Omega'(\lambda_n)},
\tag{5.27}
\]

where both series converge absolutely on $\mathbb{C}$ and uniformly on compact subsets of $\mathbb{C}$.

Proof The integral transform (5.25) can be written as

\[
F(\lambda) = \langle \Phi(\cdot,\lambda), F(\cdot)\rangle_E,
\tag{5.28}
\]

where

\[
F(x) = \begin{pmatrix} f(x)\\ 0\\ 0 \end{pmatrix},
\qquad
\Phi(x,\lambda) = \begin{pmatrix} \mathcal{G}^{\top}(x,\lambda)\\ T_{\theta_1}\bigl(\mathcal{G}^{\top}(x,\lambda)\bigr)\\ T_{\theta_2}\bigl(\mathcal{G}^{\top}(x,\lambda)\bigr) \end{pmatrix}.
\]

Applying Parseval's identity to (5.28) with respect to $\{Y(\cdot,\lambda_n)\}_{n=-\infty}^{\infty}$, we obtain

\[
F(\lambda) = \sum_{n=-\infty}^{\infty}
\frac{\langle \Phi(\cdot,\lambda), Y(\cdot,\lambda_n)\rangle_E\,\langle Y(\cdot,\lambda_n), F(\cdot)\rangle_E}{\|Y(\cdot,\lambda_n)\|_E^2}.
\tag{5.29}
\]

Let $\lambda\in\mathbb{C}$ be such that $\lambda\neq\lambda_n$ for all $n\in\mathbb{Z}$. Since each $Y(\cdot,\lambda_n)$ is an eigenvector of $\mathcal{A}$,

\[
(\mathcal{A}-\lambda I)Y(x,\lambda_n) = (\lambda_n-\lambda)Y(x,\lambda_n),
\]

and hence

\[
(\mathcal{A}-\lambda I)^{-1}Y(x,\lambda_n) = \frac{1}{\lambda_n-\lambda}\,Y(x,\lambda_n).
\tag{5.30}
\]

From (3.14) and (5.30) we obtain

\[
-\frac{T_{\theta_2}(y(x,\lambda_n))}{\Omega(\lambda)}\,y(\xi_0,\lambda)
+\frac{T_{\theta_1}(y(x,\lambda_n))}{\Omega(\lambda)}\,z(\xi_0,\lambda)
+\frac{1}{\rho_1}\int_{-1}^{0}G(x,\xi_0,\lambda)y(x,\lambda_n)\,dx
+\frac{1}{\rho_2}\int_{0}^{1}G(x,\xi_0,\lambda)y(x,\lambda_n)\,dx
=\frac{1}{\lambda_n-\lambda}\,y(\xi_0,\lambda_n).
\tag{5.31}
\]


Using $T_{\theta_1}(y(x,\lambda_n))=\sigma_1$, (2.46) and $T_{\theta_2}(z(x,\lambda_n))=-\sigma_2$ in (5.31), we get

\[
\frac{\sigma_2 k_n^{-1}}{\Omega(\lambda)}\,y(\xi_0,\lambda)
+\frac{\sigma_1}{\Omega(\lambda)}\,z(\xi_0,\lambda)
+\frac{1}{\rho_1}\int_{-1}^{0}G(x,\xi_0,\lambda)y(x,\lambda_n)\,dx
+\frac{1}{\rho_2}\int_{0}^{1}G(x,\xi_0,\lambda)y(x,\lambda_n)\,dx
=\frac{1}{\lambda_n-\lambda}\,y(\xi_0,\lambda_n).
\tag{5.32}
\]

Hence, multiplying through by $\Omega(\lambda)$, (5.32) can be rewritten as

\[
\sigma_2 k_n^{-1}\,y(\xi_0,\lambda) + \sigma_1\,z(\xi_0,\lambda)
+\frac{1}{\rho_1}\int_{-1}^{0}\mathcal{G}(x,\lambda)y(x,\lambda_n)\,dx
+\frac{1}{\rho_2}\int_{0}^{1}\mathcal{G}(x,\lambda)y(x,\lambda_n)\,dx
=\frac{\Omega(\lambda)}{\lambda_n-\lambda}\,y(\xi_0,\lambda_n).
\tag{5.33}
\]

The definition of $\Phi(\cdot,\lambda)$ implies

\[
\langle \Phi(\cdot,\lambda), Y(\cdot,\lambda_n)\rangle_E
=\frac{1}{\rho_1}\int_{-1}^{0}\mathcal{G}(x,\lambda)y(x,\lambda_n)\,dx
+\frac{1}{\rho_2}\int_{0}^{1}\mathcal{G}(x,\lambda)y(x,\lambda_n)\,dx
+\frac{1}{\sigma_1}T_{\theta_1}\bigl(\mathcal{G}^{\top}(x,\lambda)\bigr)T_{\theta_1}\bigl(y(x,\lambda_n)\bigr)
+\frac{1}{\sigma_2}T_{\theta_2}\bigl(\mathcal{G}^{\top}(x,\lambda)\bigr)T_{\theta_2}\bigl(y(x,\lambda_n)\bigr).
\tag{5.34}
\]

Moreover, from (3.12) we have

\[
T_{\theta_1}\bigl(\mathcal{G}^{\top}(x,\lambda)\bigr) = z(\xi_0,\lambda)\,T_{\theta_1}\bigl(y^{\top}(x,\lambda)\bigr),
\qquad
T_{\theta_2}\bigl(\mathcal{G}^{\top}(x,\lambda)\bigr) = y(\xi_0,\lambda)\,T_{\theta_2}\bigl(z^{\top}(x,\lambda)\bigr).
\tag{5.35}
\]

Combining (5.35), $T_{\theta_1}(y(x,\lambda))=T_{\theta_1}(y(x,\lambda_n))=\sigma_1$, $T_{\theta_2}(z(x,\lambda))=T_{\theta_2}(z(x,\lambda_n))=-\sigma_2$ and (2.46) together with (5.34) yields

\[
\langle \Phi(\cdot,\lambda), Y(\cdot,\lambda_n)\rangle_E
=\frac{1}{\rho_1}\int_{-1}^{0}\mathcal{G}(x,\lambda)y(x,\lambda_n)\,dx
+\frac{1}{\rho_2}\int_{0}^{1}\mathcal{G}(x,\lambda)y(x,\lambda_n)\,dx
+\sigma_2 k_n^{-1}\,y(\xi_0,\lambda)+\sigma_1\,z(\xi_0,\lambda).
\tag{5.36}
\]

Thus, substituting (5.33) into (5.36), we obtain

\[
\langle \Phi(\cdot,\lambda), Y(\cdot,\lambda_n)\rangle_E = \frac{\Omega(\lambda)}{\lambda_n-\lambda}\,y(\xi_0,\lambda_n).
\tag{5.37}
\]

Taking the limit as $\lambda\to\lambda_n$ in (5.28), we get

\[
F(\lambda_n) = \lim_{\lambda\to\lambda_n}\langle \Phi(\cdot,\lambda), F(\cdot)\rangle_E
= \lim_{\lambda\to\lambda_n}\sum_{k=-\infty}^{\infty}
\frac{\langle \Phi(\cdot,\lambda), Y(\cdot,\lambda_k)\rangle_E\,\langle Y(\cdot,\lambda_k), F(\cdot)\rangle_E}{\|Y(\cdot,\lambda_k)\|_E^2}.
\tag{5.38}
\]

Making use of (5.37), we may rewrite (5.38) as, for $\xi_0\in[-1,0)\cup(0,1]$,

\[
F(\lambda_n) = \begin{pmatrix} F_1(\lambda_n)\\ F_2(\lambda_n) \end{pmatrix}
= \begin{pmatrix}
\displaystyle \lim_{\lambda\to\lambda_n}\sum_{k=-\infty}^{\infty}\frac{\Omega(\lambda)}{\lambda_k-\lambda}\,y_1(\xi_0,\lambda_k)\,\frac{\langle Y(\cdot,\lambda_k),F(\cdot)\rangle_E}{\|Y(\cdot,\lambda_k)\|_E^2}\\[3mm]
\displaystyle \lim_{\lambda\to\lambda_n}\sum_{k=-\infty}^{\infty}\frac{\Omega(\lambda)}{\lambda_k-\lambda}\,y_2(\xi_0,\lambda_k)\,\frac{\langle Y(\cdot,\lambda_k),F(\cdot)\rangle_E}{\|Y(\cdot,\lambda_k)\|_E^2}
\end{pmatrix}
= \begin{pmatrix}
\displaystyle -\Omega'(\lambda_n)\,y_1(\xi_0,\lambda_n)\,\frac{\langle Y(\cdot,\lambda_n),F(\cdot)\rangle_E}{\|Y(\cdot,\lambda_n)\|_E^2}\\[3mm]
\displaystyle -\Omega'(\lambda_n)\,y_2(\xi_0,\lambda_n)\,\frac{\langle Y(\cdot,\lambda_n),F(\cdot)\rangle_E}{\|Y(\cdot,\lambda_n)\|_E^2}
\end{pmatrix}.
\tag{5.39}
\]

The interchange of the limit and the summation is justified by the asymptotic behavior of $Y(x,\lambda_n)$ and of $\Omega(\lambda)$. If $y_1(\xi_0,\lambda_n)\neq 0$ and $y_2(\xi_0,\lambda_n)\neq 0$, then (5.39) gives

\[
\frac{\langle F(\cdot), Y(\cdot,\lambda_n)\rangle_E}{\|Y(\cdot,\lambda_n)\|_E^2}
= -\frac{F_1(\lambda_n)}{\Omega'(\lambda_n)\,y_1(\xi_0,\lambda_n)}
= -\frac{F_2(\lambda_n)}{\Omega'(\lambda_n)\,y_2(\xi_0,\lambda_n)}.
\tag{5.40}
\]

Combining (5.37), (5.40) and (5.29), we get (5.26) under the assumption that $y_1(\xi_0,\lambda_n)\neq 0$ and $y_2(\xi_0,\lambda_n)\neq 0$ for all $n$. If $y_i(\xi_0,\lambda_n)=0$ for some $n$, $i=1$ or $2$, the same expansions hold with $F_i(\lambda_n)=0$. The convergence properties as well as the analytic and growth properties can be established as in Theorem 5.1 above. □

Now we give an example illustrating the previous results.

Example 5.1 The boundary value problem

\[
y_2' - q(x)y_1 = \lambda y_1,\qquad y_1' + q(x)y_2 = -\lambda y_2,\quad x\in[-1,0)\cup(0,1],
\tag{5.41}
\]
\[
y_1(-1) = \lambda y_2(-1),\qquad y_1(1) = -\lambda y_2(1),
\tag{5.42}
\]
\[
y_1(0^-) - 2y_1(0^+) = 0,\qquad 2y_2(0^-) - y_2(0^+) = 0,
\tag{5.43}
\]

is a special case of problem (2.1)-(2.5) when $\omega_2=1$, $v_2=-1$, $\omega_1=v_1=0$, $\rho_1=\rho_2=1$, $\gamma_1=\delta_2=1$, $\gamma_2=\delta_1=2$, $\sigma_1=\sigma_2$ and $q_1(x)=q_2(x)=q(x)$, where

\[
q(x) = \begin{cases} 1, & -1\le x<0,\\ 0, & 0<x\le 1. \end{cases}
\]

In the notations of equations (2.30) and (2.31), the solutions $y(\cdot,\lambda)$ and $z(\cdot,\lambda)$ of (5.41)-(5.43) are

\[
y_{11}(x,\lambda) = \cos[(\lambda+1)(x+1)] - \lambda\sin[(\lambda+1)(x+1)],
\qquad
y_{21}(x,\lambda) = \lambda\cos[(\lambda+1)(x+1)] + \sin[(\lambda+1)(x+1)],
\tag{5.44}
\]
\[
y_{12}(x,\lambda) = \cos[1+\lambda(x+1)] - \lambda\sin[1+\lambda(x+1)],
\qquad
y_{22}(x,\lambda) = \lambda\cos[1+\lambda(x+1)] + \sin[1+\lambda(x+1)],
\tag{5.45}
\]
\[
z_{11}(x,\lambda) = \lambda\sin[\lambda-(\lambda+1)x] - \cos[\lambda-(\lambda+1)x],
\qquad
z_{21}(x,\lambda) = \sin[\lambda-(\lambda+1)x] + \lambda\cos[\lambda-(\lambda+1)x],
\tag{5.46}
\]
\[
z_{12}(x,\lambda) = \lambda\sin[\lambda(1-x)] - \cos[\lambda(1-x)],
\qquad
z_{22}(x,\lambda) = \sin[\lambda(1-x)] + \lambda\cos[\lambda(1-x)].
\tag{5.47}
\]

The eigenvalues are the solutions of the equation

\[
\Omega(\lambda) := 2\lambda\cos[2\lambda+1] - (\lambda^2-1)\sin[2\lambda+1] = 0.
\tag{5.48}
\]
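Although the zeros of (5.48) are not available in closed form, they are easy to locate numerically. The sketch below (the scan window $[0,20]$, the step size and the function names are illustrative choices, not part of the original text) brackets sign changes of $\Omega$ and refines them by bisection; it also checks by a finite difference that $2(\lambda^2-2)\cos(2\lambda+1)+6\lambda\sin(2\lambda+1)$, the bracket appearing in the denominator of (5.51), equals $-\Omega'(\lambda)$.

```python
import math

def Omega(lam):
    # characteristic function (5.48)
    return 2 * lam * math.cos(2 * lam + 1) - (lam ** 2 - 1) * math.sin(2 * lam + 1)

def neg_Omega_prime(lam):
    # the bracket 2(lam^2 - 2)cos(2 lam + 1) + 6 lam sin(2 lam + 1); algebraically equal to -Omega'(lam)
    return 2 * (lam ** 2 - 2) * math.cos(2 * lam + 1) + 6 * lam * math.sin(2 * lam + 1)

def bisect(f, a, b):
    # refine a bracketed simple zero of f by plain bisection
    fa = f(a)
    for _ in range(100):
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# bracket the positive zeros of Omega on the illustrative window [0, 20]
eigenvalues = []
step, x = 0.005, 0.0
while x < 20.0:
    if Omega(x) * Omega(x + step) < 0:
        eigenvalues.append(bisect(Omega, x, x + step))
    x += step
```

For large $\lambda$ one has $\Omega(\lambda)\approx-\lambda^2\sin(2\lambda+1)$, so the computed zeros approach the spacing $\pi/2$ of the points $(n\pi-1)/2$, which is consistent with the asymptotics of the eigenvalues.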

As is clearly seen, the eigenvalues cannot be computed explicitly; they are the real points illustrated in Figure 1. Green's matrix of problem (5.41)-(5.43) is given by

\[
G(x,\xi,\lambda) = \frac{1}{2\lambda\cos[2\lambda+1]-(\lambda^2-1)\sin[2\lambda+1]}
\begin{cases}
G_1(x,\xi,\lambda), & -1\le\xi\le x<0,\\
G_2(x,\xi,\lambda), & -1\le x\le\xi<0,\\
G_3(x,\xi,\lambda), & -1\le\xi<0<x\le 1,\\
G_4(x,\xi,\lambda), & -1\le x<0<\xi\le 1,\\
G_5(x,\xi,\lambda), & 0<\xi\le x\le 1,\\
G_6(x,\xi,\lambda), & 0<x\le\xi\le 1,
\end{cases}
\tag{5.49}
\]

where

\[
G_1(x,\xi,\lambda) =
\begin{pmatrix}
(\lambda^2-1)\cos[\vartheta_1]+2\lambda\sin[\vartheta_1]-(\lambda^2+1)\cos[\vartheta_2] &
(1-\lambda^2)\sin[\vartheta_1]+2\lambda\cos[\vartheta_1]-(\lambda^2+1)\sin[\vartheta_2]\\
(\lambda^2-1)\sin[\vartheta_1]-2\lambda\cos[\vartheta_1]-(\lambda^2+1)\sin[\vartheta_2] &
(\lambda^2-1)\cos[\vartheta_1]+2\lambda\sin[\vartheta_1]+(\lambda^2+1)\cos[\vartheta_2]
\end{pmatrix},
\]

\[
G_2(x,\xi,\lambda) =
\begin{pmatrix}
(\lambda^2-1)\cos[\vartheta_3]+2\lambda\sin[\vartheta_3]-(\lambda^2+1)\cos[\vartheta_2] &
(\lambda^2-1)\sin[\vartheta_3]-2\lambda\cos[\vartheta_3]-(\lambda^2+1)\sin[\vartheta_2]\\
(1-\lambda^2)\sin[\vartheta_3]+2\lambda\cos[\vartheta_3]-(\lambda^2+1)\sin[\vartheta_2] &
(\lambda^2-1)\cos[\vartheta_3]+2\lambda\sin[\vartheta_3]+(\lambda^2+1)\cos[\vartheta_2]
\end{pmatrix},
\]

\[
G_3(x,\xi,\lambda) =
\begin{pmatrix}
(\lambda^2-1)\cos[\vartheta_5]+2\lambda\sin[\vartheta_5]-(\lambda^2+1)\cos[\vartheta_4] &
(1-\lambda^2)\sin[\vartheta_5]+2\lambda\cos[\vartheta_5]+(\lambda^2+1)\sin[\vartheta_4]\\
(\lambda^2-1)\sin[\vartheta_5]-2\lambda\cos[\vartheta_5]+(\lambda^2+1)\sin[\vartheta_4] &
(\lambda^2-1)\cos[\vartheta_5]+2\lambda\sin[\vartheta_5]+(\lambda^2+1)\cos[\vartheta_4]
\end{pmatrix},
\]

\[
G_4(x,\xi,\lambda) =
\begin{pmatrix}
(\lambda^2-1)\cos[\vartheta_7]+2\lambda\sin[\vartheta_7]-(\lambda^2+1)\cos[\vartheta_6] &
(\lambda^2-1)\sin[\vartheta_7]-2\lambda\cos[\vartheta_7]+(\lambda^2+1)\sin[\vartheta_6]\\
(1-\lambda^2)\sin[\vartheta_7]+2\lambda\cos[\vartheta_7]+(\lambda^2+1)\sin[\vartheta_6] &
(\lambda^2-1)\cos[\vartheta_7]+2\lambda\sin[\vartheta_7]+(\lambda^2+1)\cos[\vartheta_6]
\end{pmatrix},
\]

\[
G_5(x,\xi,\lambda) =
\begin{pmatrix}
(\lambda^2-1)\cos[\vartheta_9]+2\lambda\sin[\vartheta_9]-(\lambda^2+1)\cos[\vartheta_8] &
(1-\lambda^2)\sin[\vartheta_9]+2\lambda\cos[\vartheta_9]-(\lambda^2+1)\sin[\vartheta_8]\\
(\lambda^2-1)\sin[\vartheta_9]-2\lambda\cos[\vartheta_9]-(\lambda^2+1)\sin[\vartheta_8] &
(\lambda^2-1)\cos[\vartheta_9]+2\lambda\sin[\vartheta_9]+(\lambda^2+1)\cos[\vartheta_8]
\end{pmatrix},
\]

\[
G_6(x,\xi,\lambda) =
\begin{pmatrix}
(\lambda^2-1)\cos[\vartheta_{10}]+2\lambda\sin[\vartheta_{10}]-(\lambda^2+1)\cos[\vartheta_8] &
(\lambda^2-1)\sin[\vartheta_{10}]-2\lambda\cos[\vartheta_{10}]-(\lambda^2+1)\sin[\vartheta_8]\\
(1-\lambda^2)\sin[\vartheta_{10}]+2\lambda\cos[\vartheta_{10}]-(\lambda^2+1)\sin[\vartheta_8] &
(\lambda^2-1)\cos[\vartheta_{10}]+2\lambda\sin[\vartheta_{10}]+(\lambda^2+1)\cos[\vartheta_8]
\end{pmatrix},
\]

and

\[
\begin{aligned}
&\vartheta_1 := \vartheta_1(x,\xi,\lambda) = (\lambda+1)(\xi-x)+2\lambda+1,
&&\vartheta_2 := \vartheta_2(x,\xi,\lambda) = (\lambda+1)(\xi+x)+1,\\
&\vartheta_3 := \vartheta_3(x,\xi,\lambda) = (\lambda+1)(x-\xi)+2\lambda+1,
&&\vartheta_4 := \vartheta_4(x,\xi,\lambda) = \lambda-\lambda x-(\lambda+1)(\xi+1),\\
&\vartheta_5 := \vartheta_5(x,\xi,\lambda) = \lambda-\lambda x+(\lambda+1)(\xi+1),
&&\vartheta_6 := \vartheta_6(x,\xi,\lambda) = \lambda-(\lambda+1)(x+1)-\lambda\xi,\\
&\vartheta_7 := \vartheta_7(x,\xi,\lambda) = \lambda+(\lambda+1)(x+1)-\lambda\xi,
&&\vartheta_8 := \vartheta_8(x,\xi,\lambda) = \lambda(\xi+x)+1,\\
&\vartheta_9 := \vartheta_9(x,\xi,\lambda) = \lambda(\xi-x)+2\lambda+1,
&&\vartheta_{10} := \vartheta_{10}(x,\xi,\lambda) = \lambda(x-\xi)+2\lambda+1.
\end{aligned}
\]

By Theorem 5.1, the transform

\[
\begin{aligned}
F(\lambda) = {}&\int_{-1}^{0}\bigl[f_1(x)\bigl(\cos[(\lambda+1)(x+1)]-\lambda\sin[(\lambda+1)(x+1)]\bigr)
+ f_2(x)\bigl(\lambda\cos[(\lambda+1)(x+1)]+\sin[(\lambda+1)(x+1)]\bigr)\bigr]\,dx\\
&+\int_{0}^{1}\bigl[f_1(x)\bigl(\cos[1+\lambda(x+1)]-\lambda\sin[1+\lambda(x+1)]\bigr)
+ f_2(x)\bigl(\lambda\cos[1+\lambda(x+1)]+\sin[1+\lambda(x+1)]\bigr)\bigr]\,dx
\end{aligned}
\tag{5.50}
\]

has the following expansion:

\[
F(\lambda) = \sum_{n=-\infty}^{\infty} F(\lambda_n)\,
\frac{2\lambda\cos[2\lambda+1]-(\lambda^2-1)\sin[2\lambda+1]}
{(\lambda_n-\lambda)\bigl[2(\lambda_n^2-2)\cos(2\lambda_n+1)+6\lambda_n\sin(2\lambda_n+1)\bigr]},
\tag{5.51}
\]

where $\{\lambda_n\}_{n=-\infty}^{\infty}$ are the zeros of (5.48). In view of Theorem 5.3, the vector-valued transform

\[
F(\lambda) =
\begin{cases}
\begin{pmatrix}
\lambda\sin[\lambda-(\lambda+1)\xi_0]-\cos[\lambda-(\lambda+1)\xi_0]\\
\sin[\lambda-(\lambda+1)\xi_0]+\lambda\cos[\lambda-(\lambda+1)\xi_0]
\end{pmatrix}\Gamma_1
+\begin{pmatrix}
\cos[(\lambda+1)(\xi_0+1)]-\lambda\sin[(\lambda+1)(\xi_0+1)]\\
\lambda\cos[(\lambda+1)(\xi_0+1)]+\sin[(\lambda+1)(\xi_0+1)]
\end{pmatrix}(\Gamma_2+\Gamma_3), & -1\le\xi_0<0,\\[5mm]
\begin{pmatrix}
\lambda\sin[\lambda(1-\xi_0)]-\cos[\lambda(1-\xi_0)]\\
\sin[\lambda(1-\xi_0)]+\lambda\cos[\lambda(1-\xi_0)]
\end{pmatrix}(\Gamma_4+\Gamma_5)
+\begin{pmatrix}
\cos[1+\lambda(\xi_0+1)]-\lambda\sin[1+\lambda(\xi_0+1)]\\
\lambda\cos[1+\lambda(\xi_0+1)]+\sin[1+\lambda(\xi_0+1)]
\end{pmatrix}\Gamma_6, & 0<\xi_0\le 1,
\end{cases}
\tag{5.52}
\]

where

\[
\Gamma_1 = \int_{-1}^{\xi_0}\bigl[\bigl(\cos[(\lambda+1)(x+1)]-\lambda\sin[(\lambda+1)(x+1)]\bigr)f_1(x)
+\bigl(\lambda\cos[(\lambda+1)(x+1)]+\sin[(\lambda+1)(x+1)]\bigr)f_2(x)\bigr]\,dx,
\]
\[
\Gamma_2 = \int_{\xi_0}^{0}\bigl[\bigl(\lambda\sin[\lambda-(\lambda+1)x]-\cos[\lambda-(\lambda+1)x]\bigr)f_1(x)
+\bigl(\sin[\lambda-(\lambda+1)x]+\lambda\cos[\lambda-(\lambda+1)x]\bigr)f_2(x)\bigr]\,dx,
\]
\[
\Gamma_3 = \int_{0}^{1}\bigl[\bigl(\lambda\sin[\lambda(1-x)]-\cos[\lambda(1-x)]\bigr)f_1(x)
+\bigl(\sin[\lambda(1-x)]+\lambda\cos[\lambda(1-x)]\bigr)f_2(x)\bigr]\,dx,
\]
\[
\Gamma_4 = \int_{-1}^{0}\bigl[\bigl(\cos[(\lambda+1)(x+1)]-\lambda\sin[(\lambda+1)(x+1)]\bigr)f_1(x)
+\bigl(\lambda\cos[(\lambda+1)(x+1)]+\sin[(\lambda+1)(x+1)]\bigr)f_2(x)\bigr]\,dx,
\]
\[
\Gamma_5 = \int_{0}^{\xi_0}\bigl[\bigl(\cos[1+\lambda(x+1)]-\lambda\sin[1+\lambda(x+1)]\bigr)f_1(x)
+\bigl(\lambda\cos[1+\lambda(x+1)]+\sin[1+\lambda(x+1)]\bigr)f_2(x)\bigr]\,dx,
\]
\[
\Gamma_6 = \int_{\xi_0}^{1}\bigl[\bigl(\lambda\sin[\lambda(1-x)]-\cos[\lambda(1-x)]\bigr)f_1(x)
+\bigl(\sin[\lambda(1-x)]+\lambda\cos[\lambda(1-x)]\bigr)f_2(x)\bigr]\,dx.
\]

The vector-valued transform (5.52) has the following vector-valued expansion:

\[
F_1(\lambda) = \sum_{n=-\infty}^{\infty} F_1(\lambda_n)\,
\frac{2\lambda\cos[2\lambda+1]-(\lambda^2-1)\sin[2\lambda+1]}
{(\lambda_n-\lambda)\bigl[2(\lambda_n^2-2)\cos(2\lambda_n+1)+6\lambda_n\sin(2\lambda_n+1)\bigr]},
\]
\[
F_2(\lambda) = \sum_{n=-\infty}^{\infty} F_2(\lambda_n)\,
\frac{2\lambda\cos[2\lambda+1]-(\lambda^2-1)\sin[2\lambda+1]}
{(\lambda_n-\lambda)\bigl[2(\lambda_n^2-2)\cos(2\lambda_n+1)+6\lambda_n\sin(2\lambda_n+1)\bigr]}.
\tag{5.53}
\]
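The expansions (5.51) and (5.53) are of Lagrange interpolation type: the coefficient multiplying $F(\lambda_n)$ equals $1$ at $\lambda=\lambda_n$ and vanishes at every other eigenvalue. A minimal numerical check of this interpolation property (Python; the scan window, step size and tolerances are illustrative choices):

```python
import math

def Omega(lam):
    # characteristic function (5.48)
    return 2 * lam * math.cos(2 * lam + 1) - (lam ** 2 - 1) * math.sin(2 * lam + 1)

def bracket(lam):
    # 2(lam^2 - 2)cos(2 lam + 1) + 6 lam sin(2 lam + 1), the denominator factor in (5.51), (5.53)
    return 2 * (lam ** 2 - 2) * math.cos(2 * lam + 1) + 6 * lam * math.sin(2 * lam + 1)

def S(lam, lam_n):
    # sampling function multiplying F(lam_n) in (5.51); continuous extension at lam = lam_n
    if abs(lam - lam_n) < 1e-8:
        return 1.0
    return Omega(lam) / ((lam_n - lam) * bracket(lam_n))

def bisect(f, a, b):
    fa = f(a)
    for _ in range(100):
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# compute the first few positive eigenvalues on an illustrative window
roots = []
step, x = 0.005, 0.0
while x < 10.0 and len(roots) < 5:
    if Omega(x) * Omega(x + step) < 0:
        roots.append(bisect(Omega, x, x + step))
    x += step
```

Each sampling function takes the value $1$ at its own eigenvalue and (up to rounding) $0$ at the others, which is exactly the interpolation structure behind (5.51) and (5.53).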

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

The authors contributed equally to each part of this article. All the authors read and approved the final manuscript.

Author details

1 Department of Mathematics, Faculty of Science, University of Jeddah, Jeddah, Saudi Arabia. 2Permanent address: Department of Mathematics, Faculty of Science, Beni-Suef University, Beni-Suef, Egypt. 3Department of Mathematics, Faculty of Science, King Abdulaziz University, Jeddah, Saudi Arabia.

Acknowledgements

This article was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah. The authors, therefore, acknowledge with thanks DSR technical and financial support.

Received: 5 September 2014 Accepted: 20 January 2015 Published online: 20 February 2015

References

1. Boas, R: Entire Functions. Academic Press, New York (1954)

2. Paley, R, Wiener, N: Fourier Transforms in the Complex Domain. Am. Math. Soc. Colloquium Publ. Ser., vol. 19. Am. Math. Soc., Providence (1934)

3. Higgins, JR: Sampling Theory in Fourier and Signal Analysis: Foundations. Oxford University Press, Oxford (1996)

4. Kotel'nikov, V: On the carrying capacity of the "ether" and wire in telecommunications, material for the first all union conference on questions of communications. Izd. Red. Upr. Svyazi RKKA, Moscow, Russian 55, 55-64 (1933)

5. Shannon, C: Communication in the presence of noise. Proc. IRE 37, 10-21 (1949)

6. Whittaker, E: On the functions which are represented by the expansion of the interpolation theory. Proc. R. Soc. Edinb. A 35, 181-194 (1915)

7. Zayed, AI: Advances in Shannon's Sampling Theory. CRC Press, Boca Raton (1993)

8. Marks, RJ: Advanced Topics in Shannon and Interpolation Theory. Springer, Berlin (1993)

9. Levinson, N: Gap and Density Theorems. Am. Math. Soc., Providence (1940)

10. Kadec, MI: The exact value of the Paley-Wiener constant. Sov. Math. Dokl. 5, 559-561 (1964)

11. Hinsen, G: Irregular sampling of bandlimited Lp-functions. J. Approx. Theory 72, 346-364 (1993)

12. Kramer, HP: A generalized sampling theorem. J. Math. Phys. 38, 68-72 (1959)

13. Butzer, P, Higgins, JR, Stens, RL: Sampling theory in signal analysis. In: Pier, JP (ed.) Development of Mathematics 1950-2000, pp. 193-234. Birkhäuser, Basel (2000)

14. Butzer, P, Nasri-Roudsari, G: Kramer's sampling theorem in signal analysis and its role in mathematics. In: Blackledge, JM (ed.) Image Processing; Mathematical Methods and Applications. Proc. of IMA Conference, Cranfield University, UK, pp. 49-95. Clarendon Press, Oxford (1997)

15. Everitt, WN, Hayman, WK, Nasri-Roudsari, G: On the representation of holomorphic functions by integrals. Appl. Anal. 65, 95-102(1997)

16. Everitt, WN, Nasri-Roudsari, G, Rehberg, J: A note on the analytic form of the Kramer sampling theorem. Results Math. 34, 310-319 (1998)

17. Everitt, WN, Garcia, AG, Hernández-Medina, MA: On Lagrange-type interpolation series and analytic Kramer kernels. Results Math. 51,215-228 (2008)

18. Garcia, AG, Littlejohn, LL: On analytic sampling theory. J. Comput. Appl. Math. 171, 235-246 (2004)

19. Higgins, JR: A sampling principle associated with Saitoh's fundamental theory of linear transformations. In: Saitoh, S, Hayashi, N, Yamamoto, M (eds.) Analytic Extension Formulas and Their Applications. Kluwer Academic, Dordrecht (2001)

20. Everitt, WN, Nasri-Roudsari, G: Interpolation and sampling theories, and linear ordinary boundary value problems. In: Higgins, JR, Stens, RL (eds.) Sampling Theory in Fourier and Signal Analysis: Advanced Topics. Oxford University Press, Oxford (1999) (Chapter 5)

21. Tharwat, MM, Yildirim, A, Bhrawy, AH: Sampling of discontinuous Dirac systems. Numer. Funct. Anal. Optim. 34(3), 323-348 (2013)

22. Tharwat, MM: Discontinuous Sturm-Liouville problems and associated sampling theories. Abstr. Appl. Anal. 2011, 610232 (2011)

23. Tharwat, MM: On sampling theories and discontinuous Dirac systems with eigenparameter in the boundary conditions. Bound. Value Probl. 2013, 65 (2013)

24. Fulton, CT: Two-point boundary value problems with eigenvalue parameter contained in the boundary conditions. Proc. R. Soc. Edinb. A 77, 293-308 (1977)

25. Walter, J: Regular eigenvalue problems with eigenvalue parameter in the boundary condition. Math. Z. 133, 301-312 (1973)

26. Levitan, BM, Sargsjan, IS: Sturm-Liouville and Dirac Operators. Kluwer Academic, Dordrecht (1991)

27. Muravei, LA: Riesz bases in L2(—1,1). Proc. Steklov Inst. Math. 91, 117-136 (1967)

28. Levitan, BM, Sargsjan, IS: Introduction to Spectral Theory: Selfadjoint Ordinary Differential Operators. Translations of Mathematical Monographs, vol. 39. Am. Math. Soc., Providence (1975)

29. Kerimov, NB: A boundary value problem for the Dirac system with a spectral parameter in the boundary conditions. Differ. Equ. 38(2), 164-174 (2002)

30. Annaby, MH, Tharwat, MM: On sampling and Dirac systems with eigenparameter in the boundary conditions. J. Appl. Math. Comput. 36,291-317 (2011)

31. Hinton, DB: An expansion theorem for an eigenvalue problem with eigenvalue parameters in the boundary conditions. Quart. J. Math. Oxford (2) 30, 33-42 (1979)

32. Wray, SD: Absolutely convergent expansions associated with a boundary-value problem with the eigenvalue parameter contained in one boundary condition. Czechoslov. Math. J. 32(4), 608-622 (1982)