Potential Analysis

DOI 10.1007/s11118-017-9651-9


Stochastic Reaction-diffusion Equations Driven by Jump Processes

Zdzisław Brzeźniak¹ · Erika Hausenblas² · Paul André Razafimandimby³

Received: 6 January 2014 / Accepted: 4 September 2017 © The Author(s) 2017. This article is an open access publication

Abstract We establish the existence of weak martingale solutions to a class of second order parabolic stochastic partial differential equations. The equations are driven by multiplicative jump type noise, with a non-Lipschitz multiplicative functional. The drift in the equations contains a dissipative nonlinearity of polynomial growth.

Keywords Itô integral driven by a Poisson random measure · Stochastic partial differential equations · Lévy processes · Reaction-diffusion equations

Mathematics Subject Classification (2010) 60H15 • 60J75 • 60G51

This work was supported by the FWF-Project P17273-N12

✉ Zdzisław Brzeźniak
zdzislaw.brzezniak@york.ac.uk

Erika Hausenblas erika.hausenblas@unileoben.ac.at

Paul André Razafimandimby paul.razafimandimby@up.ac.za

1 Department of Mathematics, University of York, Heslington, York YO10 5DD, UK

2 Department of Mathematics and Information Technology, Montanuniversität Leoben, Franz Josef Strasse 18, 8700 Leoben, Austria

3 Department of Mathematics and Applied Mathematics, University of Pretoria, Lynwood Road, Pretoria 0083, South Africa

Published online: 21 September 2017


1 Introduction

The aim of this paper is to study a class of stochastic reaction-diffusion equations driven by a Lévy noise. A motivating example is the following stochastic partial differential equation with a Dirichlet boundary condition

$$
\begin{cases}
du(t,\xi) = \Delta u(t,\xi)\,dt + \left[u(t,\xi) - u(t,\xi)^3\right]dt + \dfrac{\sqrt{|u(t,\xi)|}}{1+\sqrt{|u(t,\xi)|}}\,dL(t), & t>0,\ \xi\in O,\\[4pt]
u(t,\xi) = 0, & \xi\in\partial O,\\
u(0,\xi) = u_0(\xi), & \xi\in O,
\end{cases}
\tag{1.1}
$$

where L = {L(t) : t ≥ 0} is a real-valued Lévy process whose Lévy measure ν has finite p-th moment for some p ∈ (1, 2], and O ⊂ R^d is a bounded domain with smooth boundary.
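Although the arguments of the paper are purely analytic, the dynamics behind (1.1) can be illustrated numerically. The following sketch discretises the equation on O = (0, 1) by finite differences with an explicit Euler step and drives it with a compound Poisson approximation of L; the grid size, time step, jump intensity and the uniform jump law are all hypothetical choices made for illustration only (a genuine Lévy measure need only have a finite p-th moment), and this is not the construction used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative discretisation of (1.1) on O = (0, 1); parameters are ad hoc.
n, T, dt = 50, 0.1, 1e-4          # grid points, horizon, time step (dt < h^2/2)
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

def g(u):
    """Bounded continuous diffusion coefficient from (1.1)."""
    s = np.sqrt(np.abs(u))
    return s / (1.0 + s)

def laplacian(u):
    """Second-order central differences with Dirichlet boundary u = 0."""
    up = np.pad(u, 1)
    return (up[2:] - 2.0 * up[1:-1] + up[:-2]) / h**2

lam = 5.0                          # jump intensity of the driving noise (hypothetical)
u = np.sin(np.pi * x)              # initial datum u_0 in C_0(O)
for _ in range(int(T / dt)):
    k = rng.poisson(lam * dt)                  # number of jumps in [t, t + dt)
    dL = rng.uniform(-1.0, 1.0, size=k).sum()  # compound Poisson increment of L
    u = u + dt * (laplacian(u) + u - u**3) + g(u) * dL
```

With the dissipative drift u − u³ the simulated path stays bounded; replacing the bounded jump law by an infinite-activity Lévy measure would require the compensated small-jump construction recalled in Section 2.3.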

One consequence of our main results is that for every u_0 ∈ C_0(O) there exists a C_0(O)-valued process u = {u(t) : t ≥ 0} which is a martingale solution to problem (1.1). The diffusion coefficient in Eq. 1.1, i.e., the function g(u) = √|u|/(1 + √|u|), is just a simple example of a bounded and continuous function, the drift term f(u) = u − u³ is an example of a dissipative function f : R → R of polynomial growth, and the Laplace operator Δ is a special case of a second order operator. Our results allow us to treat equations with more general coefficients than those in the model problem (1.1). In fact we can study stochastic partial differential equations with second order uniformly elliptic dissipative operators with non-constant coefficients and an unbounded nonlinear map as drift term. Moreover, our results are applicable to equations with infinite-dimensional Lévy processes as well as systems with more general initial data, for instance, elements of the Lebesgue or Sobolev spaces L^q(O) or

W^{γ,q}(O). The details are presented in Sections 4, 5 and 6.

In our paper we adopt the approach used in [10] by the first named author and Gątarek, in which a similar problem, but driven by a Wiener process, was treated. The major differences of the current work with respect to [10] are as follows. Firstly, the approach of [10] heavily depends on the theory of the Itô integral in martingale type 2 Banach spaces with respect to a cylindrical Wiener process, while we use an approach which relies on stochastic integration in martingale type p Banach spaces with respect to a Poisson random measure, see [13]. Secondly, the compactness argument in [10] relies on the Hölder continuity of the trajectories of the corresponding stochastic convolution process. The trajectories of a stochastic convolution process driven by a Lévy process, however, are not continuous, so it seems natural to use the counterpart of Hölder continuity, namely the càdlàg property of the trajectories. Unfortunately, as many counterexamples show, see for instance the recent monograph [65] as well as the recent papers [19] and [11], the trajectories of stochastic convolution processes driven by a Lévy process may not even be càdlàg in the space in which the Lévy process lives. Hence this issue has to be handled with special care. Thirdly, we do not use a martingale representation theorem, unlike the authors of [10], who used a result by Dettweiler [27]. Instead we use an ad hoc method based on a generalisation of the Skorohod representation theorem, see Theorem C.1. Finally, in order to control certain norms of the approximate solutions, instead of using stopping times as in [10], we apply interpolation methods. Summarizing, this paper contains general results on the existence of martingale solutions to stochastic reaction-diffusion equations with dissipative type coefficients of polynomial growth and with multiplicative Lévy noise.
We should also point out that, as a by-product of our results, we are able to fill a gap in the proof of the main result of the article [43]. The gap is related to Step I, see page 8 of the current paper.

Our paper confirms an observation already made in the earlier papers [13, 42] that the theory of stochastic integration with respect to a Poisson random measure in martingale type p Banach spaces is, to a large extent, analogous to the theory of stochastic integration with respect to a cylindrical Wiener process in martingale type 2 Banach spaces, provided the space of γ-radonifying operators is replaced by the space L^p(ν), where ν is the intensity measure of the Poisson random measure in question. We would also like to point out that the theory of Banach space valued stochastic integrals with respect to a Poisson random measure is richer than the corresponding Gaussian theory, see the recent paper [30] by S. Dirksen. It would be of great interest to develop a theory of inequalities for stochastic convolutions and generalise the paper [13] in the framework of [30].

It is worth mentioning that although SPDEs with unbounded nonlinearity driven by a Lévy process have not been as extensively studied as their Gaussian counterparts, there exists a number of interesting recent publications on the subject. For instance, Truman and Wu [74, 75], Jacob et al. [61], and Giri and Hausenblas [41] studied equations with Burgers type nonlinearities driven by a Lévy noise. In addition, Dong and Xu in [31] considered the Burgers equation with compound Poisson noise, thus in fact dealing with a deterministic Burgers equation on random intervals with random jumps. Some discussion of stochastic Burgers equations with additive Lévy noise is contained in [19], where it was shown how integrability properties of the trajectories of the corresponding Ornstein-Uhlenbeck process play an important role in the existence and uniqueness of solutions. In the recent paper [32] Dong and Xie studied the stochastic Navier-Stokes equations (NSEs) driven by a Poisson random measure with finite intensity measure. Fernando and Sritharan [37] and the first two named authors together with Zhu [17] studied the 2-D stochastic NSEs by means of the local-monotonicity method of Barbu [4]. This method seems to be restricted to the 2-D NSEs but does not require the use of compactness results. In their beautiful monograph [65] Peszat and Zabczyk studied classes of reaction-diffusion equations driven by an additive Lévy process. In this case the stochastic evolution equation can be transformed into an evolution equation with random coefficients, a method which usually does not work with multiplicative noise. In another recent paper [55] Marinelli and Röckner investigated a certain class of generalized solutions to problems similar to ours. Röckner and Zhang in [67] established the existence and uniqueness of solutions to, as well as a large deviation principle for, a class of stochastic evolution equations driven by jump processes. Finally, Debussche et al.
in [25, 26] considered a stochastic Chafee-Infante equation driven by an additive Lévy noise and investigated the dynamics of the equation, for instance, the first exit times from domains of attraction of the stationary solutions of the deterministic equation.

Stochastic PDEs driven by Lévy processes in Banach spaces have not been intensively studied, apart from a few papers by the second named author, such as [42, 43], a very recent paper [13] by the first two named authors, and [54] by Mandrekar and Rüdiger (who actually studied ordinary stochastic differential equations in martingale type 2 Banach spaces). Martingale solutions to SPDEs driven by Lévy processes in Hilbert spaces are not often treated in the literature. Mytnik [60] constructed a weak solution to SPDEs with non-Lipschitz coefficients driven by a space-time stable Lévy noise. In [58] Mueller studied non-Lipschitz SPDEs driven by nonnegative stable Lévy noise of index α ∈ (0, 1). Mueller, Mytnik and Stan [59] investigated the heat equation with a one-sided time-independent stable Lévy noise. One should add that the noise in the paper [58] does not satisfy the hypotheses of the current work.

The current paper is organized as follows. In Section 2 we introduce the notation used later on in the paper and present the standing hypotheses and essential preliminary facts. Our two main results, i.e., Theorems 3.2 and 3.4, are stated in Section 3. In that section we also

present Theorem 3.5, which can be seen as a reformulation of some of our main results in terms of a Lévy process rather than a Poisson random measure. Three examples illustrating the applicability of our results are presented in Sections 4, 5 and 6. To be more precise, SPDEs with dissipative polynomial drift driven by a real-valued α-stable tempered Lévy process are treated in Section 4, SPDEs with a bounded drift driven by a space-time Lévy white noise are treated in Section 5 and, finally, stochastic reaction-diffusion equations with dissipative polynomial growth drift, driven by a space-time Lévy white noise, are treated in Section 6. In Section 7 we state and prove several preliminary results about stochastic convolution processes which we believe are interesting in themselves. Sections 8 and 9 are devoted to the proofs of our results. Unfortunately, these proofs are very technical and long, and hence a brief outline of them is presented just after the statement of Theorem 3.5, see page 22. In the appendices we recall some definitions and well-known results from analysis and probability theory. We also prove new results, amongst them a modified version of the Skorohod representation theorem, see Theorem C.1, which are interesting in themselves.

We finish the Introduction by pointing out that the approach presented in this paper (or rather in its earlier arXiv version) has already been taken up and used to prove the existence of solutions to stochastic Navier-Stokes equations and to equations of second grade fluids driven by Lévy noise, see [57] and [44], respectively.

Notation 1 By N we denote the set of natural numbers, i.e., N = {0, 1, 2, ...}, and by N̄, respectively N*, we denote the set N ∪ {+∞}, respectively N \ {0}. Whenever we speak about N-, respectively N̄-valued measurable functions, we implicitly assume that the set N, respectively N̄, is equipped with the full σ-field 2^N, respectively 2^N̄. By R₊ we denote the set [0, ∞) of nonnegative real numbers and by R* the set R \ {0}. If X is a topological space, then by B(X) we denote the Borel σ-field on X. By Leb we denote the Lebesgue measure on (R^d, B(R^d)) or (R, B(R)). The space of bounded linear operators from a Banach space Y₁ to a Banach space Y₂ is denoted by L(Y₁, Y₂). The norm of A ∈ L(Y₁, Y₂) is denoted by ‖A‖_{L(Y₁,Y₂)}. If O ⊂ R^d is a bounded domain with smooth boundary ∂O, by C_0(O) we denote the space of real continuous functions on O which vanish on the boundary ∂O.

Suppose that (Z, 𝒵) is a measurable space. By M(Z), respectively M₊(Z), we will denote the set of all R-valued, respectively [0, ∞]-valued, measures on (Z, 𝒵). By 𝓜(Z), respectively 𝓜₊(Z), we will denote the σ-field on M(Z), respectively M₊(Z), generated by the functions
$$i_B : M(Z) \ni \mu \mapsto \mu(B) \in \mathbb{R},$$
respectively by the functions
$$i_B : M_+(Z) \ni \mu \mapsto \mu(B) \in [0, \infty],$$
for all B ∈ 𝒵. Similarly, by M_I(Z) we will denote the family of all N̄-valued measures on (Z, 𝒵), and by 𝓜_I(Z) the σ-field on M_I(Z) generated by the functions i_B : M_I(Z) ∋ μ ↦ μ(B) ∈ N̄, B ∈ 𝒵.

Finally, by 𝒵 ⊗ B(R₊) we denote the product σ-field on Z × R₊ and by ν ⊗ Leb we denote the product measure of ν and the Lebesgue measure Leb.

For a Banach space Y, by D([0, T]; Y) we denote the space of all càdlàg functions u : [0, T] → Y, which we equip with the J₁-Skorohod topology, i.e., the finest among all Skorohod topologies.

2 Hypotheses, Notations and Preliminaries

In this section we introduce the notation and hypotheses used throughout the whole paper. Moreover, we present some preliminary results that will be frequently used later on.

2.1 Analytic Assumptions and Hypotheses

Let us begin with a list of assumptions. Whenever we use any of them this will be clearly indicated.

Let E be a Banach space and let A be a closed linear densely defined map in E. The norm in the space E is denoted by |·| and the norm of any other Banach space Y is denoted by |·|_Y.

In what follows we will frequently use the following assumptions about the Banach space E and the linear map A.

Assumption 1 1(i) Assume that E is a separable, UMD, type p Banach space, for a certain p ∈ (1, 2].¹

1(ii) A is a positive operator in E, i.e., a densely defined and closed operator for which there exists M > 0 such that for λ > 0,
$$\|(A + \lambda)^{-1}\|_{L(E)} \le \frac{M}{1+\lambda}.$$

1(iii) −A is the infinitesimal generator of an analytic semigroup, denoted by (e^{−tA})_{t≥0}, on E. We also assume that A has compact resolvent.
1(iv) The semigroup (e^{−tA})_{t≥0} on E is of contraction type.

1(v) A has the bounded imaginary powers (briefly BIP) property, i.e., there exist constants K > 0 and θ ∈ [0, π/2) such that
$$\|A^{it}\|_{L(E)} \le K e^{\theta |t|}, \quad t \in \mathbb{R}. \tag{2.1}$$

2(i) There exists a separable Banach space X such that the embedding E C X is dense and continuous.

2(ii) The linear map A has a unique extension to X. This extension map is still denoted by A and satisfies Assumptions 1(ii), 1(iii) and 1(iv).

Notation 2 For any γ > 0, the completion of E with respect to the norm |A^{−γ}·| will be denoted by D(A^{−γ}). For any γ > 0, the domain of the fractional power operator A^γ (in E) will be denoted by D(A^γ). With few exceptions we will only speak about fractional powers of the operator A with respect to the space E, and not X nor B_θ, and hence the notation A^γ and D(A^γ) should be unambiguous. Those few exceptions are when we use the notation D(A^γ), for instance in Theorem 3.2. Finally, let us note that since by assumption A^{−1} exists and is bounded (on E), the fractional powers A^{−γ}, γ > 0, are bounded (on E) too.

Before we proceed further we make the following useful remark.

Remark 2.1 Since E is a separable, UMD and martingale type p Banach space, we infer from [9, Remark 4.2, also Theorem A.4] that for every θ ∈ R, the space B_θ = D(A^θ) is also

¹It is known that if E has the UMD property and is of type p, then it is a martingale type p Banach space (see, for instance, [9]).

a UMD, martingale type p Banach space. The linear map A has an extension (or restriction, depending on whether θ is smaller or larger than 0) A_θ (usually denoted by A) to B_θ which satisfies Assumptions 1(ii), 1(iii) and 1(v). The operator −A_θ generates a contraction type semigroup on B_θ, which will still be denoted by (e^{−tA})_{t≥0}. Moreover, if A has the BIP property then so does A_θ.

One of the consequences of the BIP property in Assumption 1(v) is that the fractional domains of the operator A are equal to the complex interpolation space of an appropriate order between D(A) and E, see e.g. [72].

Finally, if a linear operator A on a Banach space E is positive, then −A is the infinitesimal generator of a C₀-semigroup in E, see for instance [10, Remark 2.1].

If E is a separable Banach space and q ∈ [1, ∞), we denote by L^q(0, T; E) (see, for instance, [29]) the Lebesgue space consisting of (equivalence classes of) Lebesgue measurable functions u : [0, T] → E such that
$$\|u\|_{L^q(0,T;E)} := \left(\int_0^T |u(s)|^q\,ds\right)^{1/q} \tag{2.2}$$

is finite. The Besov-Slobodetskii space W^{α,q}(0, T; E), where α ∈ (0, 1), consists of all u ∈ L^q(0, T; E) such that the seminorm
$$|u|_{W^{\alpha,q}(0,T;E)} := \left(\int_0^T\!\!\int_0^T \frac{|u(t)-u(s)|^q}{|t-s|^{1+\alpha q}}\,ds\,dt\right)^{1/q} \tag{2.3}$$
is finite. The spaces L^q(0, T; E) and W^{α,q}(0, T; E), equipped with the norm (2.2) and, respectively, with
$$\|u\|_{W^{\alpha,q}(0,T;E)} := \left(\int_0^T |u(s)|^q\,ds + \int_0^T\!\!\int_0^T \frac{|u(t)-u(s)|^q}{|t-s|^{1+\alpha q}}\,ds\,dt\right)^{1/q} \tag{2.4}$$
are separable Banach spaces.
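As a quick sanity check of the seminorm (2.3), it can be approximated by a Riemann sum over a uniform grid. For the scalar function u(t) = t on [0, 1] with q = 2 and α = 1/2 the integrand equals 1 off the diagonal, so the seminorm is exactly 1. The helper below is purely illustrative and not part of the paper's argument; the diagonal is excluded since the integrand is a removable 0/0 there.

```python
import numpy as np

def slobodetskii_seminorm(u, T, alpha, q):
    """Riemann-sum approximation of the W^{alpha,q}(0,T;R) seminorm (2.3)
    for samples u of a function on a uniform midpoint grid (diagonal excluded)."""
    n = len(u)
    t = (np.arange(n) + 0.5) * T / n
    dt = T / n
    tt, ss = np.meshgrid(t, t, indexing="ij")
    uu, vv = np.meshgrid(u, u, indexing="ij")
    mask = ~np.eye(n, dtype=bool)                     # drop the singular diagonal
    integrand = np.abs(uu - vv)[mask] ** q / np.abs(tt - ss)[mask] ** (1 + alpha * q)
    return (integrand.sum() * dt * dt) ** (1.0 / q)

# u(t) = t on [0, 1] with q = 2, alpha = 1/2: the integrand is 1 off the
# diagonal, so the seminorm equals 1 (up to the omitted diagonal strip).
grid = (np.arange(200) + 0.5) / 200
val = slobodetskii_seminorm(grid, 1.0, 0.5, 2)
```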

We will denote by C([0, T]; E) the Banach space of all E-valued continuous functions defined on the interval [0, T], equipped with the supremum norm. Similarly, if k ∈ N*, then by C^k([0, T]; E) we will denote the Banach space of all E-valued functions of C^k-class. We will denote by C^β([0, T]; E), for β ∈ (0, 1), the (non-separable) Banach space of all functions u ∈ C([0, T]; E) such that
$$\|u\|_{C^\beta([0,T];E)} := \sup_{0\le t\le T} |u(t)| + \sup_{0\le s<t\le T} \frac{|u(t)-u(s)|}{|t-s|^\beta} < \infty. \tag{2.5}$$

We will also denote by H^{1,q}(0, T; E) the space of functions u ∈ L^q(0, T; E) with weak derivative Bu := u′ ∈ L^q(0, T; E). Endowed with the graph norm of B, the space H^{1,q}(0, T; E) is a Banach space. By H₀^{1,q}(0, T; E) we denote the subspace of H^{1,q}(0, T; E) consisting of all u ∈ H^{1,q}(0, T; E) such that u(0) = 0.

Now we introduce an important operator 𝓐 = 𝓐_T which will play a crucial role in our analysis. We start by setting
$$Bu = u', \quad u \in D(B), \qquad D(B) = H_0^{1,q}(0,T;E).$$
The space D(B) is a Banach space when equipped with the graph norm
$$|u|_{D(B)} := |u|_{L^q(0,T;E)} + |u'|_{L^q(0,T;E)}, \quad u \in D(B).$$

Define also a linear operator $\mathbf{A}$ by the formula
$$D(\mathbf{A}) = \{u \in L^q(0,T;E) : Au(\cdot) \in L^q(0,T;E)\}, \tag{2.6}$$
$$\mathbf{A}u := \{[0,T] \ni t \mapsto A(u(t)) \in E\}. \tag{2.7}$$
The domain D(𝐀) of 𝐀 is a Banach space with norm
$$|u|_{D(\mathbf{A})} := |u|_{L^q(0,T;E)} + |\mathbf{A}u|_{L^q(0,T;E)}, \quad u \in D(\mathbf{A}).$$

Let us note that if A + kI, k > 0, satisfies parts 1(ii), 1(iii) and 1(v) of Assumption 1, then 𝐀 + kI satisfies them as well, see Dore and Venni [33]. Finally, we define the operator 𝓐 by

$$\mathcal{A} := B + \mathbf{A}, \qquad D(\mathcal{A}) := D(B) \cap D(\mathbf{A}).$$
The domain D(𝓐) is endowed with the graph norm, i.e.,
$$|u|_{D(\mathcal{A})} = |u|_{D(B)} + |u|_{D(\mathbf{A})}, \quad u \in D(\mathcal{A}).$$

We recall that although 𝓐 is the sum of the closed operators B and 𝐀, it is not necessarily a closed operator. However, if E is a UMD Banach space and A + kI, for some k > 0, satisfies parts 1(ii), 1(iii) and 1(v) of Assumption 1, then, since 𝓐 = (B − kI) + (𝐀 + kI), by Dore and Venni [33], see also Giga and Sohr [39], 𝓐 is a positive operator. In particular, 𝓐 has a bounded inverse. Consequently, one can define the fractional powers 𝓐^{−α}, α > 0. In particular, for α ∈ [0, 1], 𝓐^{−α} is a bounded linear map in L^q(0, T; E), and for α ∈ (0, 1),
$$(\mathcal{A}^{-\alpha} f)(t) = \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} e^{-(t-s)A} f(s)\,ds, \tag{2.8}$$
for any t ∈ (0, T), f ∈ L^q(0, T; E). Under parts 1(i)-1(v) of Assumption 1, the parabolic operator 𝓐 and its fractional powers 𝓐^{−α}, α ∈ [0, 1], enjoy several nice properties for which we refer the reader to [10]. The properties of 𝓐^{−α} most relevant to our study are summarized in the following lemma, whose proof can be found in [10, Theorem 2.6 and Corollary 2.8].

Lemma 2.2 Assume that E is a UMD Banach space and that A is an operator satisfying Assumptions 1(ii)-1(iii) such that A + kI, for some k > 0, satisfies Assumption 1(v). We also suppose that (A + kI)^{−1} is a compact operator in E. Let α, β, δ be nonnegative numbers satisfying
$$0 \le \beta + \delta < \alpha - \frac{1}{q}. \tag{2.9}$$
Then 𝓐^{−α} is a compact linear map from L^q(0, T; E) into C^β([0, T]; D(A^δ)). In particular, if α > 1/q, then 𝓐^{−α} is a compact map from L^q(0, T; E) into C([0, T]; E).

2.2 Stochastic Preliminaries

The aim of this subsection is to introduce some additional probabilistic notation. We also present some basic results about stochastic integration with respect to compensated Poisson random measure.

Assumption 2 Let us assume that (Z, 𝒵) is a measurable space and ν ∈ M₊(Z), i.e., ν is a nonnegative measure on (Z, 𝒵).

We assume that 𝔓 = (Ω, F, 𝔽, P), where 𝔽 = (F_t)_{t≥0}, is a filtered probability space satisfying the so-called usual conditions, i.e.,

(i) P is complete on (Ω, F),

(ii) for each t ∈ R₊, F_t contains all (F, P)-null sets,

(iii) the filtration 𝔽 is right-continuous.

Let us start by recalling the following definition which is taken from [45, Definition I.8.1].

Definition 2.3 In the framework of Assumption 2, a time-homogeneous Poisson random measure on (Z, 𝒵) over 𝔓, with intensity measure ν ⊗ Leb, is a random variable
$$\eta : (\Omega, F) \to (M_I(Z \times \mathbb{R}_+), \mathcal{M}_I(Z \times \mathbb{R}_+))$$

satisfying the following conditions

(a) for each U ∈ 𝒵 ⊗ B(R₊), η(U) := i_U ∘ η : Ω → N̄ is a Poisson random variable with parameter² Eη(U);

(b) η is independently scattered, i.e., if the sets U_j ∈ 𝒵 ⊗ B(R₊), j = 1, ..., n, are pairwise disjoint, then the random variables η(U_j), j = 1, ..., n, are pairwise independent;

(c) for all U ∈ 𝒵 and I ∈ B(R₊),
$$\mathbb{E}[\eta(U \times I)] = \nu \otimes \mathrm{Leb}(U \times I) = \nu(U)\,\mathrm{Leb}(I);$$

(d) for each U ∈ 𝒵, the N̄-valued process
$$(0, \infty) \times \Omega \ni (t, \omega) \mapsto \eta(\omega)(U \times (0, t])$$
is 𝔽-adapted and its increments are independent of the past, i.e., the increment between times t and s, t > s ≥ 0, is independent of the σ-field F_s.

If η is a time-homogeneous Poisson random measure as above, then by η̃ we will denote the corresponding compensated Poisson random measure defined by
$$\tilde\eta(U \times I) = \eta(U \times I) - \mathbb{E}(\eta(U \times I)) = \eta(U \times I) - \nu(U)\,\mathrm{Leb}(I), \quad U \in \mathcal{Z},\ I \in B(\mathbb{R}_+),$$
with the convention that ∞ − ∞ = 0.

We proceed to the definition of the function spaces that we need throughout the paper. Suppose that Y is a separable Banach space. We denote by L^q(Z, ν; Y), q ∈ [1, ∞), the space of all (equivalence classes of) measurable functions ξ : (Z, 𝒵) → (Y, B(Y)) such that
$$\|\xi\|_{L^q(Z,\nu;Y)}^q := \int_Z |\xi(z)|_Y^q\,\nu(dz) < \infty.$$
Similarly, we define the spaces L^q(Ω; Y) and L^q(Ω_T; Y), where Ω_T = [0, T] × Ω, see [29]. In the latter case, we consider the product σ-field B([0, T]) ⊗ F. By L⁰(Ω; Y) we denote the set of measurable functions from (Ω, F) to Y.

For T ∈ (0, ∞] let N(0, T; Y) be the space of (equivalence classes of) progressively measurable processes ξ : [0, T) × Ω → Y.

²If Eη(U) = ∞, then obviously η(U) = ∞ a.s.

For q ∈ (1, ∞) we set
$$N^q(0,T;Y) = \left\{\xi \in N(0,T;Y) : \int_0^T |\xi(t)|_Y^q\,dt < \infty\ \ \mathbb{P}\text{-a.s.}\right\}, \tag{2.10}$$
$$M^q(0,T;Y) = \left\{\xi \in N(0,T;Y) : \mathbb{E}\int_0^T |\xi(t)|_Y^q\,dt < \infty\right\}. \tag{2.11}$$
Let N_step(0, T; Y) be the space of all ξ ∈ N(0, T; Y) for which there exists a partition 0 = t₀ < t₁ < ⋯ < t_n ≤ T such that for k ∈ {1, ..., n} and t ∈ (t_{k−1}, t_k], ξ(t) = ξ(t_k) is F_{t_{k−1}}-measurable, and ξ(t) = 0 for t ∈ (t_n, T). We put M^q_step = M^q ∩ N_step. It can be easily shown that M^q(0, T; Y) is a closed subspace of L^q([0, T) × Ω; Y) = L^q([0, T); L^q(Ω; Y)).

Now for ξ ∈ M^p_step(0, T; L^p(Z, ν; E)) we set
$$I(\xi) = \sum_{j=1}^{n} \int_Z \xi(t_j, z)\,\tilde\eta(dz, (t_{j-1}, t_j]). \tag{2.12}$$

It is shown in [13] that if E is a Banach space of martingale type p ∈ (1, 2], then I is a bounded linear map from M^p_step(0, T; L^p(Z, ν; E)) (with respect to the norm inherited from M^p(0, T; L^p(Z, ν; E))) to L^p(Ω; E). In particular, there exists a constant C > 0, which depends only on p and E, such that
$$\mathbb{E}|I(\xi)|^p \le C\, \mathbb{E}\int_0^T\!\!\int_Z |\xi(t,z)|^p\,\nu(dz)\,dt \tag{2.13}$$
and EI(ξ) = 0 for any ξ ∈ M^p_step(0, T; L^p(Z, ν; E)). Thanks to these facts, formula (2.12) defines the stochastic integral of a process ξ ∈ M^p_step(0, T; L^p(Z, ν; E)) with respect to the compensated Poisson random measure η̃. The extension of this integral to M^p(0, T; L^p(Z, ν; E)) is possible thanks to the density of M^p_step(0, T; L^p(Z, ν; E)) in the space M^p(0, T; L^p(Z, ν; E)). More precisely, we recall the following result, whose proof can be found in [13, Theorem C.1].

Theorem 2.4 Assume that p ∈ (1, 2] and E is a martingale type p Banach space. Then there exists a unique bounded linear operator
$$\tilde{I} : M^p(0,T;L^p(Z,\nu;E)) \to L^p(\Omega, F; E)$$
such that for ξ ∈ M^p_step(0, T; L^p(Z, ν; E)) we have Ĩ(ξ) = I(ξ). In particular, there exists a constant C > 0, which depends only on E and p, such that
$$\mathbb{E}|\tilde{I}(\xi)|^p \le C\,\mathbb{E}\int_0^T\!\!\int_Z |\xi(t,z)|^p\,\nu(dz)\,dt, \tag{2.14}$$
for every ξ ∈ M^p(0, T; L^p(Z, ν; E)). Moreover, if ξ ∈ M^p(0, T; L^p(Z, ν; E)), then the process Ĩ(1_{[0,t]}ξ), t ≥ 0, where
$$[1_{[0,t]}\xi](r, z; \omega) := 1_{[0,t]}(r)\,\xi(r, z, \omega), \quad t \ge 0,\ r \in \mathbb{R}_+,\ z \in Z,\ \omega \in \Omega,$$
is an E-valued p-integrable martingale.
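In the Hilbert space case E = R with p = 2, the estimate (2.14) holds with equality and C = 1 (the Itô isometry). The Monte Carlo sketch below verifies this, together with the vanishing mean of the integral, for the deterministic integrand ξ(t, z) = z against a compensated Poisson random measure of finite intensity; the choice ν = 3·Exp(1), T = 2 and the sample size are hypothetical illustration parameters. For Exp(1) marks, E z = 1 and E z² = 2, so the exact second moment is T·∫ z² ν(dz) = 2·3·2 = 12.

```python
import numpy as np

rng = np.random.default_rng(2)

m, T = 3.0, 2.0                       # nu = m * Exp(1) and horizon T (illustrative)
n_mc = 20000

# Stochastic integral of xi(t, z) = z against the compensated measure:
# I(xi) = (sum of all marks) - compensator, compensator = T * int z nu(dz) = T * m.
samples = np.empty(n_mc)
for i in range(n_mc):
    k = rng.poisson(m * T)                    # number of atoms on Z x (0, T]
    samples[i] = rng.exponential(1.0, size=k).sum() - T * m * 1.0

mean_I = samples.mean()               # should be close to 0
second_moment = (samples**2).mean()   # Ito isometry: T * int z^2 nu(dz) = 12
```

For p < 2 and a genuine martingale type p Banach space E, the equality degrades to the one-sided estimate (2.14) with a constant depending on E and p.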

The above construction is done in the spirit of Métivier, see [56, Exercise 9, p. 195]; see also [13] for further details. As pointed out to the authors by the referee, the construction of the Itô stochastic integral can also be done by defining the stochastic integral first for predictable integrands, and then extending it to progressively measurable integrands using the fact that the dual predictable projection of a Poisson random measure, see Theorem II.1.8 in [46], is absolutely continuous with respect to the Lebesgue measure (in the "time" variable). See also [3] for a recent use of this classical notion.

As usual, we put
$$\int_0^t\!\!\int_Z \xi(s,z)\,\tilde\eta(dz,ds) := \tilde{I}(1_{[0,t]}\xi), \quad t \in [0,T].$$

Let us state the following useful result which can be proved using the argument of [15, Proof of Theorem 3.4].

Proposition 2.5 Assume that (Z, 𝒵) is a measurable space, ν is a nonnegative measure on (Z, 𝒵), and 𝔓 = (Ω, F, (F_t)_{t≥0}, P) is a filtered probability space. Assume also that η₁ and η₂ are two time-homogeneous Poisson random measures on (Z, 𝒵) over 𝔓, both with the same intensity measure ν ⊗ Leb. Assume that p ∈ (1, 2], E is a martingale type p Banach space and ξ ∈ M^p(0, T; L^p(Z, ν; E)).

If P-a.s. η₁ = η₂ on M_I(Z × [0, T]), then for any t ∈ [0, T], P-a.s.
$$\int_0^t\!\!\int_Z \xi(s,z)\,\tilde\eta_1(dz,ds) = \int_0^t\!\!\int_Z \xi(s,z)\,\tilde\eta_2(dz,ds). \tag{2.15}$$

We close this section with the following maximal inequality, whose proof can be found in [13, Corollary C.2].

Theorem 2.6 Assume that 1 < q ≤ p ≤ 2 and E is a martingale type p Banach space. Then there exists a constant C > 0 such that for any T > 0 and any process ξ ∈ M^p(0, T; L^p(Z, ν; E)) we have
$$\mathbb{E}\sup_{t\in[0,T]}\left|\int_0^t\!\!\int_Z \xi(r,z)\,\tilde\eta(dz,dr)\right|^q \le C\,\mathbb{E}\left(\int_0^T\!\!\int_Z |\xi(r,z)|^p\,\nu(dz)\,dr\right)^{q/p}. \tag{2.16}$$

2.3 Lévy Processes and Poisson Random Measures

The subject of this section is to give a short account of the correspondence between Lévy processes and Poisson random measures. Namely, given a Lévy process on a Banach space, one can construct a corresponding Poisson random measure. Conversely, given a Poisson random measure on a Banach space, one may find the corresponding Lévy process. To illustrate this fact, let us first recall the definition of a Lévy process.

Definition 2.7 Let Z be a Banach space. A Z-valued stochastic process L = {L(t) : t ≥ 0} over a probability space (Ω, F, P) is called a Z-valued Lévy process iff the following conditions are satisfied.

(i) L(0) = 0 a.s.;

(ii) for all n ∈ N* and 0 ≤ t₀ < t₁ < ⋯ < t_n, the random variables L(t₀), L(t₁) − L(t₀), ..., L(t_n) − L(t_{n−1}) are independent;

(iii) for all 0 ≤ s, t, the laws of L(t + s) − L(s) and L(t) are equal;

(iv) the process L is stochastically continuous;

(v) the trajectories of L are a.s. Z-valued càdlàg.

If 𝔽 = {F_t}_{t≥0} is a filtration on (Ω, F), we say that a Z-valued stochastic process L = {L(t) : t ≥ 0} is a Lévy process over the filtered probability space (Ω, F, 𝔽, P) iff it is 𝔽-adapted, satisfies conditions (i), (iii)-(v), and

(ii)′ for all 0 ≤ s < t, the increment L(t) − L(s) is independent of F_s.

In order to discuss Lévy processes in more detail we need to recall the definition of a Lévy measure.

Definition 2.8 (Linde [51, Section 5.4]) A σ-finite Borel measure λ on a separable Banach space Z is called a Lévy measure on Z iff its symmetric part λ + λ⁻, where λ⁻(A) := λ(−A), A ∈ B(Z), is a symmetric Lévy measure, i.e.,

(i) (λ + λ⁻)({0}) = 0, and

(ii) the function, with Z′ being the dual of Z,
$$Z' \ni a \mapsto \exp\left(\int_Z (\cos\langle x, a\rangle - 1)\,(\lambda + \lambda^-)(dx)\right)$$
is a characteristic function of a Borel³ probability measure on Z. The class of all Lévy measures on Z is denoted by 𝓛(Z).

For the reader's convenience let us now list a few basic properties of Lévy measures, see [51, Theorem 5.4.8 (i,ii) and Proposition 5.4.5 (i,ii,iv)]. First let us recall a useful notation [51, p. 68]:
$$K(x, a) := e^{i\langle x,a\rangle} - 1 - i\langle x,a\rangle\,\mathbf{1}_{U_1}(x), \quad x \in Z,\ a \in Z',$$
where ⟨x, a⟩ is a shortcut notation for the duality pairing _Z⟨x, a⟩_{Z′} and, for r > 0, U_r denotes the closed ball with radius r centered at 0 in Z.

Proposition 2.9 Suppose that λ is a σ-finite Borel measure on a separable Banach space Z. Consider the following conditions.

(i′) λ({0}) = 0;

(i) λ is a Lévy measure on Z;

(ii) for every a ∈ Z′, ∫_Z |K(x, a)| dλ(x) < ∞ and the map
$$Z' \ni a \mapsto e^{\int_Z K(x,a)\,d\lambda(x)} \in \mathbb{C}$$
is a characteristic function of a Borel probability measure, denoted by e_s(λ), on Z;

(ii′) for every δ > 0, λ(Z \ U_δ) < ∞;

(iv′) sup{∫_{U₁} |_Z⟨z, a⟩_{Z′}|² λ(dz) : a ∈ Z′, |a| ≤ 1} < ∞.

Then we have the following implication: (i) ⟹ (i′) ∧ (ii′) ∧ (iv′). Moreover, if (i′) holds, then (i) ⟺ (ii).

Suppose now that L = {L(t) : t ≥ 0} is a Z-valued Lévy process over a probability space (Ω, F, P), where Z is a separable Banach space. Then for each t ≥ 0 the measure μ_t, the law of the Z-valued random variable L(t), is infinitely divisible and hence, see

³Since we assume that Z is a separable Banach space, every Borel probability measure on Z is Radon, see [62, Theorem II.3.2].

[51, Theorem 5.7.3], there exist a Lévy measure ν_t on Z, a Gaussian measure ρ_t on Z and a vector x_t ∈ Z such that
$$\mu_t = e_s(\nu_t) * \rho_t * \delta_{x_t}. \tag{2.17}$$

A proof of a finite dimensional version of this result can be found in [70, Theorem 8.1].

From now on we will assume that the process L is purely non-Gaussian, i.e., that ρ_t = δ₀ for all t ≥ 0, see [70, Definition 8.2]. We also assume that x_t = 0 for all t ≥ 0. Thus, see [51, Remark p. 84],
$$\hat\mu_t(a) = \mathbb{E}\,e^{i\langle L(t),a\rangle} = e^{\int_Z K(x,a)\,d\nu_t(x)}, \quad a \in Z'. \tag{2.18}$$
Because L is a Lévy process, and not simply an additive process, the measures (μ_t)_{t≥0} form a convolution semigroup and therefore the measure ν_t is equal to tν, where ν = ν₁. A purely non-Gaussian Z-valued Lévy process with x_t = 0 for all t ≥ 0 satisfying (2.18) will be called a Lévy process with generating triplets {0, tν, 0}.

The following theorem is a generalisation of (a version of) Theorem 19.2 from [70]. The result below is an infinite dimensional generalisation of a summary of the first two steps of the proof of [70, Theorem 19.2]. This generalisation is possible because the three important inequalities used by Sato are also true in general separable Banach spaces. Firstly, Lemma 20.2 from [70] is true in Banach spaces, see for instance Proposition 1.1.1 on p. 15 of the monograph [50] by Kwapień and Woyczyński. Secondly, Remark 20.3 in [70] is just Remark 1.1.1 on p. 17 in [50]. Finally, Lemma 20.5 in [70] is Theorem 1 on p. 29 in Kahane's book [49].

Theorem 2.10 Assume that L = {L(t) : t ≥ 0} is a Z-valued Lévy process defined on a probability space (Ω, F, P) with the system of generating triplets {0, tν, 0}. Let us define a measure ν̄ on (0, ∞) × (Z \ {0}) by
$$\bar\nu((0,t] \times B) = \nu_t(B) = t\nu(B), \quad B \in B(Z \setminus \{0\}).$$

Let Ω₀ ∈ F be such that for every ω ∈ Ω₀, the function
$$[0, \infty) \ni t \mapsto L(t, \omega) \in Z$$
is càdlàg. Define a map N by
$$N(B, \omega) := \begin{cases} \#\{s : (s, L(s,\omega) - L(s-,\omega)) \in B\} & \text{if } \omega \in \Omega_0,\\ 0 & \text{if } \omega \in \Omega \setminus \Omega_0. \end{cases} \tag{2.19}$$
Then:

• the function N is a time-homogeneous Poisson random measure on (0, ∞) × (Z \ {0}) with intensity measure ν̄ = Leb ⊗ ν;

• there exists a set Ω₁ ∈ F with P(Ω₁) = 1 such that for every ω ∈ Ω₁ the following hold:

(1) for every ε > 0 and t ∈ (0, ∞), the measure N(·, ω) restricted to (0, t] × (Z \ U_ε) is supported on a finite number of points, each of which has N(·, ω)-measure 1;

(2) for every s ∈ (0, ∞), N({s} × (Z \ {0}), ω) is equal to 0 or 1;

there exists a set fi3 e F with P(fi3) = 1 such that for every a e fi3,

Ss(t,a) := I [xN(d(s,x),a) — xv(d(s,x))] (2.20)

J(0,t ]x( U1\Us)

converges, as e \ 0, to an element of the Skorohod space _D([0, x),Z), locally uniformly, i.e., uniformly on any bounded interval C [0, x), • the Z-valuedprocess Y = {Yt : t > 0} defined, for w e ^3 := ^ by

Yt(w) := lim Se(t,w) + f xN(d(s,x),w) (2.21)

e\0 J(0,t ]x( Z\Ui)

is a Levy process with generating triplets {0, tv, 0}. In particular, the process Y is identical in law with the process L.
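As an illustration of the counting definition (2.19), here is a minimal numerical sketch for a real-valued compound Poisson process, so that the Levy measure is finite and every bounded set carries finitely many jump points. The rate, jump distribution and all names below are illustrative choices, not objects from the paper.

```python
import random

random.seed(0)

# A real-valued compound Poisson path (finite Levy measure) and its
# jump-counting measure N of (2.19): N(B) = #{s : (s, L(s) - L(s-)) in B}.
T = 10.0
rate = 3.0            # total mass of the (finite) illustrative Levy measure

# jump times: a Poisson process of the given rate on (0, T]
times = []
t = 0.0
while True:
    t += random.expovariate(rate)
    if t > T:
        break
    times.append(t)

# i.i.d. jump sizes from the normalised Levy measure (here: uniform on
# [-2, 2], bounded away from 0 so every jump is genuine)
def draw_jump():
    while True:
        x = random.uniform(-2.0, 2.0)
        if abs(x) > 1e-6:
            return x

jumps = [(s, draw_jump()) for s in times]

def N(t0, t1, eps):
    """N((t0, t1] x (R \\ U_eps)): number of jumps in (t0, t1] of size > eps."""
    return sum(1 for s, x in jumps if t0 < s <= t1 and abs(x) > eps)
```

Property (1) of the theorem is visible directly: on (0, T] × (R \ U_ε) the measure charges finitely many points, each with mass 1, and N is additive in the time variable.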

Remark 2.11 We have seen that since L is a Levy process, ν_t = tν. This implies, as above, that N is a time-homogeneous Poisson random measure. Moreover, the equality (2.20) can be written as

S_ε(t, ω) := ∫_{(0,t]×(U_1\U_ε)} [x N(d(s, x), ω) − x ds ν(dx)].    (2.22)

The above theorem enables us to define an Ito integral with respect to the Levy process L in terms of the corresponding compensated Poisson random measure Ñ:

∫_0^t G(s) dL(s) := ∫_{(0,t]×U_1} G(s) x Ñ(d(s, x)) + ∫_{(0,t]×(Z\U_1)} G(s) x N(d(s, x)),    (2.23)

where G is an appropriate process taking values in the space L(Z, V), and V is an appropriate Banach space.

From now on we will consider only the case in which N restricted to (0, ∞) × (Z \ U_1) is equal to 0.

Given a process G as above, we can define a process ξ by

ξ : [0, ∞) × Ω ∋ (t, ω) ↦ {Z ∋ x ↦ 1_{U_1}(x) G(t) x ∈ V} ∈ L^0(Z, ν, V).    (2.24)

In view of Section 2.2, the integral ∫_0^t G(s) dL(s) is well defined for all t ∈ [0, T], provided V is a martingale type p Banach space for some p ∈ (1, 2] and the process ξ defined above takes values in L^0(Z, ν, V) and belongs to the class M^p(0, T, L^p(Z, ν; V)). Moreover, for every q ∈ (1, p], there exists a constant C > 0 independent of G such that

E sup_{s∈[0,T]} |∫_0^s G(r) dL(r)|^q ≤ C E(∫_0^T ∫_Z |ξ(r, x)|^p ν(dx) dr)^{q/p}
= C E(∫_0^T ∫_{U_1} |G(r) x|^p ν(dx) dr)^{q/p}.    (2.25)

In fact, using the above approach, we can extend the definition of the Ito integral with respect to the Levy process to integrands belonging to the whole class M^p(0, T, L^p(Z, ν; V)) by

∫_0^t ξ(s) dL(s) := ∫_{(0,t]×U_1} ξ(s, x) Ñ(d(s, x)).    (2.26)
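For a toy case with Z = R and a finite Levy measure supported in U_1, the small-jump compensated integral of (2.23)/(2.26) reduces to a jump sum minus a deterministic compensator drift, which can be checked directly. The discrete measure and the fixed jump path below are illustrative, not taken from the paper.

```python
# Toy version of the compensated Ito integral (2.26) with Z = R and a
# finite Levy measure v supported in the unit ball U_1:
#   int_0^t G(s) dL(s)
#     = sum_{s <= t} G(s) (L(s) - L(s-))  -  m * int_0^t G(s) ds,
# where m = int_R x v(dx) is the mean jump.

# v = (1/2) delta_{0.5} + (1/2) delta_{-0.25}  =>  m = 0.125
m = 0.5 * 0.5 + 0.5 * (-0.25)

# a fixed cadlag path, given by its jumps (time, size), sizes in U_1
jumps = [(0.2, 0.5), (0.7, -0.25), (1.5, 0.5)]

def L(t):
    return sum(x for s, x in jumps if s <= t)

def ito_integral(G, G_antiderivative, t):
    """Jump sum minus compensator for a deterministic integrand G."""
    jump_part = sum(G(s) * x for s, x in jumps if s <= t)
    compensator = m * G_antiderivative(t)   # m * int_0^t G(s) ds
    return jump_part - compensator

# With G == 1 the integral is the compensated process L(t) - m t itself.
value_const = ito_integral(lambda s: 1.0, lambda t: t, 1.0)
value_linear = ito_integral(lambda s: s, lambda t: t * t / 2.0, 2.0)
```
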

We finish this section with a result which is in some sense converse to Theorem 2.10, and whose proof, in the finite-dimensional case, can be traced back to the proof of Theorem 19.2 in [70, pp. 132-134]. As we have observed earlier, this proof generalises to the infinite dimensional setting.

Theorem 2.12 Assume that on a probability space (Ω, F, P), N is a time-homogeneous Poisson random measure on (0, ∞) × (Z \ {0}), where Z is a separable Banach space, with intensity measure Leb ⊗ ν. Then

• there exists a set Ω_1 ∈ F with P(Ω_1) = 1 such that for every ω ∈ Ω_1 the following hold:

(1) for every ε > 0 and t ∈ (0, ∞) the measure N(·, ω) restricted to (0, t] × (Z \ U_ε) is supported on a finite number of points, each of which has N(·, ω)-measure 1;

(2) for every s ∈ (0, ∞), N({s} × (Z \ {0}), ω) is equal to 0 or 1;

• there exists a set Ω_2 ∈ F with P(Ω_2) = 1 such that for every ω ∈ Ω_2, as ε ↓ 0,

S_ε(t, ω) := ∫_{(0,t]×(U_1\U_ε)} [x N(d(s, x), ω) − x ds ν(dx)]    (2.27)

converges to an element of the Skorohod space D([0, ∞), Z), locally uniformly, i.e., uniformly on every bounded interval ⊂ [0, ∞);

• the Z-valued process Y = {Y(t) : t ≥ 0} defined, for ω ∈ Ω_3 := Ω_1 ∩ Ω_2, by

Y(t, ω) := lim_{ε↓0} S_ε(t, ω) + ∫_{(0,t]×(Z\U_1)} x N(d(s, x), ω)    (2.28)

is a Levy process with generating triplets {0, tν, 0}.

The details of the results presented in this subsection will be dealt with in a separate publication [16].

3 Martingale Solutions of Stochastic Reaction-diffusion Equations

3.1 Statements of the Main Results

In this section we will state our main results. For this purpose, we will introduce the problem, the concept of a martingale solution, and the main assumptions.

Let the Banach spaces E, X and the linear operator A be as in Assumption 1. We also assume that we have a probability space (Ω, F, P) and a Poisson random measure η as in Assumption 2 and Definition 2.3. Throughout we fix T > 0. We consider the following stochastic evolution equation:

du(t) + Au(t) dt = F(t, u(t)) dt + ∫_Z G(t, u(t); z) η̃(dz, dt), t ∈ (0, T],
u(0) = u_0 ∈ X,    (3.1)

where η̃ is the compensated Poisson random measure corresponding to η, see Definition 2.3.

Using the notation introduced in Notation 2, the assumptions on the nonlinear map G read as follows.

Assumption 3 There exist δ ∈ (0, 1/p) and a bounded and separately continuous map G : [0, T] × X → L^p(Z, ν, D(A^{δ−1/p})).

Let us observe that this implies that the map A^{δ−1/p} G : [0, T] × X → L^p(Z, ν, E) is well defined. In what follows, we will use the latter instead of G.

Next we will present the assumptions on the drift operator F. For this purpose we first recall the notion of the subdifferential of the norm | · |_X; for more detail see [23].

Given x, y ∈ X the map R ∋ s ↦ |x + s y| ∈ R is convex and therefore right and left differentiable. Let us denote by D_±|x|_y the right/left derivative of this map at 0. Then the subdifferential ∂|x| of |x|, x ∈ X, is defined by

∂|x| := {x* ∈ X* : D_−|x|_y ≤ ⟨y, x*⟩ ≤ D_+|x|_y, y ∈ X},

where X* is the dual space of X. One can show that ∂|x| is not only a nonempty, closed and convex set, but also

∂|x| = {x* ∈ X* : ⟨x, x*⟩ = |x| and |x*| ≤ 1}.

In particular, ∂|0| is the unit ball in X*.
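The two equivalent descriptions of ∂|x| can be probed numerically in a toy finite-dimensional case. The sketch below takes X = R^2 with the sup-norm (so X* carries the l^1-norm); the test point and directions are illustrative only.

```python
# Toy check of the subdifferential characterisation
#   d|x| = {x* in X* : <x, x*> = |x| and |x*| <= 1}
# for X = R^2 with the sup-norm; the dual norm is the l^1-norm.

def sup_norm(v):
    return max(abs(c) for c in v)

def one_norm(v):
    return sum(abs(c) for c in v)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def one_sided_derivative(x, y, sign, h=1e-8):
    # D_+|x|_y (sign=+1) or D_-|x|_y (sign=-1) of s |-> |x + s y| at s = 0;
    # the map is convex, so the difference quotient approximates the limit.
    s = sign * h
    return (sup_norm([a + s * b for a, b in zip(x, y)]) - sup_norm(x)) / s

x = (1.0, 0.5)            # unique maximal coordinate  =>  d|x| = {(1, 0)}
x_star = (1.0, 0.0)

checks = [
    abs(dot(x, x_star) - sup_norm(x)) < 1e-12,   # <x, x*> = |x|
    one_norm(x_star) <= 1.0,                     # |x*| <= 1 in the dual norm
]
# ... and the defining two-sided bound D_-|x|_y <= <y, x*> <= D_+|x|_y
for y in [(1.0, 0.0), (0.0, 1.0), (1.0, -2.0), (-0.3, 0.7)]:
    d_minus = one_sided_derivative(x, y, -1.0)
    d_plus = one_sided_derivative(x, y, +1.0)
    checks.append(d_minus - 1e-6 <= dot(y, x_star) <= d_plus + 1e-6)
```

Here both one-sided derivatives equal y_1 for small h, since the first coordinate of x strictly dominates; this is why ∂|x| is the singleton {(1, 0)} at this point.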

Assumption 4 (i) The map F : [0, T] × X → X is separately continuous.

(ii) There exist numbers k_0 ≥ 0, q > 1 and k > 0 such that, with

a(r) = k_0(1 + r^q), r ≥ 0,

the following condition holds for t ∈ [0, T]:

⟨−Ax + F(t, x + y), z⟩ ≤ a(|y|_X) − k|x|_X, x ∈ D(A), y ∈ X, z ∈ ∂|x|.    (3.2)

(iii) There exists a sequence (F_n)_{n∈N} of bounded separately continuous maps from [0, T] × X to X such that

(a) F_n satisfies condition (ii) above uniformly in n,

(b) F_n converges to F pointwise on [0, T] × X.
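In the scalar toy case X = R one has ∂|x| = {sign(x)} for x ≠ 0, so (with A = 0) a one-dimensional analogue of condition (3.2) for the model drift f(u) = u − u^3 from the Introduction reads f(x + y) sign(x) ≤ k_0(1 + |y|^q) − k|x|. A brute-force grid check with the illustrative constants k_0 = 10, k = 1, q = 3:

```python
# Grid check (scalar toy case, illustrative constants) of the dissipativity
# condition:  f(x + y) * sign(x) <= k0 * (1 + |y|**q) - k * |x|,
# for the model nonlinearity f(u) = u - u**3.

def f(u):
    return u - u ** 3

k0, k, q = 10.0, 1.0, 3

def sign(x):
    return 1.0 if x > 0 else -1.0

violations = []
xs = [i / 10.0 for i in range(-50, 51) if i != 0]
ys = [j / 10.0 for j in range(-50, 51)]
for x in xs:
    for y in ys:
        lhs = f(x + y) * sign(x)
        rhs = k0 * (1.0 + abs(y) ** q) - k * abs(x)
        if lhs > rhs + 1e-12:
            violations.append((x, y))
```

The cubic term on the right absorbs the growth of f(x + y) in y, while the negative −k|x| term is dominated by the decay of −(x + y)^3 in x; the grid search finds no violating pair.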

With all the notation and concepts presented above we are ready to define a martingale solution to the problem (3.1). Let us add a remark that is surely obvious to many readers, but whose omission could lead to confusion for others: although we have used a probability space and a Poisson random measure to present the problem, these two objects are part of the solution. The only given objects are the space (Z, Z) and the measure ν ∈ M^+(Z).

Definition 3.1 Let us assume that E and X are Banach spaces satisfying parts 1(i), 2(i) and 2(ii) of Assumption 1. Let us also assume that ν is a σ-finite nonnegative measure on a measurable space (Z, Z), i.e., ν ∈ M^+(Z). Let p ∈ (1, 2] be a real number as in part 1(i) of Assumption 1.

An X-valued martingale solution to the problem (3.1) is a system

(Ω, F, P, F, η, u)    (3.3)

such that

(i) (Ω, F, F, P) is a complete filtered probability space with a filtration F = {F_t : t ∈ [0, T]} satisfying the usual conditions,

(ii) η is a time-homogeneous Poisson random measure on (Z, B(Z)) with intensity measure ν ⊗ Leb over (Ω, F, F, P),

(iii) u : [0, T] × Ω → X is an F-progressively measurable process such that for any t ∈ [0, T], P-a.s.

∫_0^t |e^{−(t−r)A} F(r, u(r))|_X dr < ∞,    (3.4)

∫_0^t ∫_Z |e^{−(t−r)A} G(r, u(r); z)|_E^p ν(dz) dr < ∞,    (3.5)

and for any t ∈ [0, T], P-a.s.,

u(t) = e^{−tA} u(0) + ∫_0^t e^{−(t−r)A} F(r, u(r)) dr + ∫_0^t ∫_Z e^{−(t−r)A} G(r, u(r); z) η̃(dz, dr).    (3.6)

If in addition there exists a separable Banach space B such that

u e D([0, T]; B), P-a.s.,

then the system (3.3) will be called an X-valued martingale solution to problem (3.1) with cadlag paths in B.

We will say that the X-valued martingale solution to problem (3.1) with cadlag paths in B is unique iff for any other martingale solution to Eq. 3.1 with cadlag paths in B,

(Ω′, F′, P′, F′, η′, u′),

the laws of the processes u and u′ on the space D([0, T]; B) are equal.
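The deterministic part of the mild formula (3.6) can be sanity-checked numerically: with G = 0 and a time-independent drift, the variation-of-constants expression coincides with the classical solution of u′ + Au = F. The sketch below uses a toy finite-difference Dirichlet Laplacian; the sizes and data are illustrative.

```python
import numpy as np

# Deterministic sketch of the mild (variation-of-constants) formula (3.6):
# with G = 0 and constant-in-time F, the mild solution
#   u(t) = e^{-tA} u0 + int_0^t e^{-(t-r)A} F dr
# equals the classical solution u(t) = A^{-1}F + e^{-tA}(u0 - A^{-1}F).

n = 20
h = 1.0 / (n + 1)
main = 2.0 * np.ones(n) / h**2
off = -1.0 * np.ones(n - 1) / h**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# A is symmetric positive definite: e^{-tA} via spectral decomposition
w, V = np.linalg.eigh(A)
def semigroup(t):
    return V @ np.diag(np.exp(-t * w)) @ V.T

u0 = np.sin(np.pi * h * np.arange(1, n + 1))
F = np.ones(n)
t = 0.01

# the convolution int_0^t e^{-(t-r)A} F dr, evaluated exactly through the
# spectrum as A^{-1}(I - e^{-tA}) F
conv = V @ np.diag((1.0 - np.exp(-t * w)) / w) @ V.T @ F
u_mild = semigroup(t) @ u0 + conv

# classical solution of u' + Au = F with the same data
AinvF = np.linalg.solve(A, F)
u_classical = AinvF + semigroup(t) @ (u0 - AinvF)
```
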

We refer to the recent paper [15], where the uniqueness in law of processes defined by stochastic convolutions with respect to Poisson random measures is discussed. We now formulate our main theorem.

Theorem 3.2 Let p ∈ (1, 2], and let E, X and A be as in Assumption 1. Let ν be a σ-finite nonnegative measure on a measurable space (Z, Z), i.e., ν ∈ M^+(Z). Let the nonlinear maps G and F satisfy Assumption 3 and Assumption 4, respectively. Assume that X ⊂ D(A^{δ−1/p}) and that the embedding X ↪ D(A^{δ−1/p}) is continuous. Let q be the number from Assumption 4(ii). Assume that q < q_max, where

q_max = p/(1 − δp), if −A generates a contraction type semigroup on D(A^{δ−1/p}),
q_max = p/(1 − δ), otherwise.    (3.7)

Assume also that there exists a separable UMD Banach space Y such that

(1) X ⊂ Y,

(2) A has an extension A_Y which satisfies parts 1(iii)-1(v) of Assumption 1 on Y,

(3) D(A_Y^θ) ⊂ X for some θ ≤ 1 − 1/q_max.

Then, for any u_0 ∈ X problem (3.1) has an X-valued martingale solution with cadlag paths in D(A^{δ−1/p}), see Definition 3.1.

Moreover, for any q̃ ∈ (q, q_max) and r ∈ (1, p) the stochastic process u satisfies

E ‖u‖^r_{L^{q̃}(0,T;E)} < ∞.

Remark 3.3 If the maps F(t, ·) and A^{δ−1/p} G(t, ·) are Lipschitz continuous uniformly with respect to t ∈ [0, T], i.e., there exists K > 0 such that for all t ∈ [0, T] and all u_1, u_2 ∈ X,

|F(t, u_2) − F(t, u_1)|_X ≤ K |u_2 − u_1|_X,

∫_Z |A^{δ−1/p} G(t, u_2; z) − A^{δ−1/p} G(t, u_1; z)|_E^p ν(dz) ≤ K |u_2 − u_1|_X^p,

then the SPDE (3.1) has a unique strong solution. In our work we are interested in the case when both these conditions are relaxed.

In order to prove Theorem 3.2 we will consider an auxiliary problem for which we will prove an auxiliary existence result (see Theorem 3.4) which holds under more restrictive conditions than the ones stated above. More precisely, we require that the nonlinear maps F and G satisfy the following set of conditions.

Assumption 5 There exists δ ∈ [0, 1/p) such that the (nonlinear) maps

F : [0, T] × X → D(A^{δ−1}),    (3.8)

G : [0, T] × X → L^p(Z, ν, D(A^{δ−1/p})),    (3.9)

are bounded and separately continuous.

Remark The above assumption can also be stated in a more precise way, cf. Assumption 3. To be precise, we could request that there exists δ ∈ [0, 1/p) such that the maps

A^{δ−1} F : [0, T] × X → E,    (3.10)

A^{δ−1/p} G : [0, T] × X → L^p(Z, ν, E),    (3.11)

are bounded and separately continuous.

We state the following theorem whose proof will be given in Section 8. Although it is only an auxiliary result, it is still important as it is the main tool for the proof of Theorem 3.2 and we are not aware of a related result in the existing literature.

Theorem 3.4 Let E and A be as in Assumption 1 and let (Z, Z, ν) be a measure space with ν ∈ M^+(Z). Let the hypotheses of Assumption 5 be satisfied. Then, for every u_0 ∈ D(A^{δ−1/p}) problem (3.1) has an E-valued martingale solution

(Ω, F, P, F, η, u)

with cadlag trajectories in D(A^{δ−1/p}). Moreover, the stochastic process u satisfies

E ∫_0^T |u(t)|_E^p dt < ∞.    (3.12)

In view of Section 2.3, Theorem 3.4 can be written in terms of a Levy process as follows.

3.2 Formulation of our Results in Terms of Levy Processes

Let Z be a separable Banach space with the Borel σ-field Z = B(Z). Assume that L = {L(t) : t ≥ 0} is a Z-valued Levy process with generating triplets {0, tν, 0}, see Theorem 2.10, such that for some fixed p ∈ (1, 2],

∫_{U_1} |z|_Z^p ν(dz) < ∞,    (3.13)

where, as in Section 2.3, U_1 is the closed unit ball in Z. Note that if supp ν ⊂ U_1, then the Poisson random measure N corresponding to L restricted to (0, ∞) × (Z \ U_1) is equal to 0. Here, instead of Assumption 3, we assume the following set of hypotheses.

Assumption 6 There exists δ ∈ (0, 1/p) such that the diffusion coefficient G is such that the map

A^{δ−1/p} G : [0, T] × X → L(Z, E)    (3.14)

is bounded and separately continuous.

Remark Let us notice that this framework is less general than that of Poisson random measures. In particular, if the map G satisfies Assumption 6, then the map G̃ defined by

G̃ : [0, T] × E ∋ (t, u) ↦ {Z ∋ z ↦ 1_{U_1}(z) G(t, u) z} ∈ L^p(Z, ν; E)

satisfies Assumption 3, in view of Eq. 3.13 and the continuous embedding E ⊂ X. We consider the following stochastic evolution equation

du(t) + Au(t) dt = F(t, u(t)) dt + G(t, u(t)) dL(t),
u(0) = u_0.    (3.15)

In view of the above remark and Section 2.3, we get the following result, which is crucial in our reformulation of Theorem 3.4 in terms of Levy processes.

Theorem 3.5 Assume that the Banach space E, the linear map A and the map F satisfy the assumptions of Theorem 3.4. Let us assume that Z is a separable Banach space and Y = {Y(t) : t ≥ 0} is a Z-valued Levy process defined on a probability space (Ω_0, F_0, P_0) with the system of generating triplets {0, tν, 0} such that supp ν ⊂ U_1 and, for some p ∈ (1, 2], condition (3.13) is satisfied. Assume that the map G satisfies Assumption 6.

Then, for every u_0 ∈ D(A^{δ−1/p}) there exists a system

(Ω, F, P, F, L, u)

such that

(i) (Ω, F, F, P) is a complete filtered probability space with a filtration F = {F_t}_{t∈[0,T]} satisfying the usual conditions;

(ii) L = {L(t) : t ∈ [0, T]} is a Z-valued Levy process with Levy measure ν over (Ω, F, F, P);

(iii) u = {u(t) : t ∈ [0, T]} is an E-valued and adapted process, with D(A^{δ−1/p})-valued cadlag paths, such that

E ∫_0^T |u(t)|_E^p dt < ∞,    (3.16)

(F(t, u(t)) : t ∈ [0, T]) and (G(t, u(t)) : t ∈ [0, T]) are well defined D(A^{δ−1})-valued, resp. L^p(Z, ν; D(A^{δ−1/p}))-valued, progressively measurable processes,

and for all t ∈ [0, T], P-a.s.

u(t) = e^{−tA} u_0 + ∫_0^t e^{−(t−r)A} F(r, u(r)) dr + ∫_0^t e^{−(t−r)A} G(r, u(r)) dL(r).    (3.17)

Proof of Theorem 3.5 This result readily follows from Theorem 3.4 by the following argument.

Let us consider a separable Banach space Z and a Z-valued Levy process Y = {Y(t) : t ≥ 0}, defined on a probability space (Ω_0, F_0, P_0) with the system of generating triplets {0, tν, 0}, such that supp ν ⊂ U_1 and condition (3.13) is satisfied for some p ∈ (1, 2]. Let N be the corresponding Poisson random measure given by Theorem 2.10.

Let us fix u_0 ∈ D(A^{δ−1/p}). Since the map G satisfies Assumption 6, by Theorem 3.4 there exists a system

(Ω, F, P, F, η, u)

which is an E-valued martingale solution to problem (3.1) (with cadlag trajectories in D(A^{δ−1/p})). In particular, η is a time-homogeneous Poisson random measure on the Banach space Z with intensity measure ν ⊗ Leb such that condition (3.13) is satisfied. Applying Theorem 2.12 we can find a Z-valued Levy process L = {L(t) : t ≥ 0} with generating triplets {0, tν, 0}. By the results discussed in Section 2.3 we infer that the system

(Ω, F, P, F, L, u)

is a martingale solution to problem (3.15). □

3.3 Outline of the Proof of Theorems 3.2 and 3.4

The details of the proofs of Theorems 3.2 and 3.4 are given in Sections 9 and 8, respectively. These proofs are very technical; to ease the reading of the paper, we outline the proofs of Theorem 3.4 and Theorem 3.2 in this subsection.

Outline of the proof of Theorem 3.4 The proof relies on a combination of approximation and compactness methods. Namely, we approximate the initial condition u_0 by a sequence (x_n)_{n∈N} ⊂ E satisfying

x_n → u_0 strongly in D(A^{δ−1/p}), as n → ∞.

We also define a sequence (u_n)_{n∈N} of adapted E-valued processes by

u_n(t) = e^{−tA} x_n + ∫_0^t e^{−(t−s)A} F(s, ū_n(s)) ds + ∫_0^t ∫_Z e^{−(t−s)A} G(s, ū_n(s); z) η̃(dz; ds), t ∈ [0, T],    (3.18)

where ū_n is defined by

ū_n(s) := x_n, if s ∈ [0, 2^{−n}),
ū_n(s) := ⨍_{φ_n(s)−2^{−n}}^{s−2^{−n}} u_n(r) dr, if s ≥ 2^{−n},    (3.19)

and φ_n : [0, ∞) → [0, ∞) is the function defined by φ_n(s) = k 2^{−n} if k ∈ N and k 2^{−n} ≤ s < (k + 1) 2^{−n}, i.e., φ_n(s) = 2^{−n}[2^n s], s ≥ 0, where [t] is the integer part of t ∈ R. Here we have used the shortcut notation

⨍_A f(t) dt := (1/Leb(A)) ∫_A f(t) dt, A ∈ B([0, T]),

where Leb denotes the Lebesgue measure. Let us point out that between the grid points Eq. 3.18 is linear; therefore u_n is well defined for all n ∈ N.
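The dyadic grid function φ_n underlying the delayed average can be sketched in a few lines; the implementation below follows the integer-part convention φ_n(s) = 2^{−n}[2^n s].

```python
import math

# Sketch of the dyadic grid function phi_n from (3.19):
#   phi_n(s) = 2^{-n} * [2^n * s],   [t] = integer part of t,
# the projection of s onto the dyadic grid of mesh 2^{-n} from below.

def phi(n, s):
    return 2.0 ** (-n) * math.floor(2.0 ** n * s)

# phi_n(s) <= s, and the lag s - phi_n(s) is less than one mesh width
# 2^{-n}, so phi_n(s) -> s as n -> infinity, uniformly on bounded intervals.
example = phi(2, 0.3)   # mesh 1/4: 0.3 lies in [1/4, 2/4), so phi = 0.25
```
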

Secondly, we prove that for any α ∈ (0, 1/p) and δ′ ∈ (0, 1/p) there exists a constant C such that the following inequalities hold:

sup_{n∈N} E ‖A^{−δ′} u_n‖^p_{L^p(0,T;E)} ≤ C,

sup_{n∈N} E ‖A^{−δ′} ū_n‖^p_{L^p(0,T;E)} ≤ C,

sup_{n∈N} E |u_n|^p_{W^{α,p}(0,T;E)} ≤ C.

The proofs of these uniform estimates are non-trivial and rely on Lemmata 7.2 and 7.4, Proposition E.1 and the maximal regularity for deterministic parabolic equations.

Thirdly, by defining a sequence of Poisson random measures (η_n)_{n∈N} by putting η_n = η for all n ∈ N, we prove that for any δ′ ∈ (0, 1/p) the family of laws of the random variables ((u_n, η_n))_{n∈N} is tight on the Cartesian product space [L^p(0, T; E) ∩ D([0, T]; D(A^{δ′−1}))] × M_I(Z × [0, T]). Because a Cartesian product of two compact sets is compact, it is sufficient to consider the tightness of the components of the sequence ((u_n, η_n))_{n∈N}.

For this aim let us define two auxiliary sequences of stochastic processes by

f_n(t) = F(t, ū_n(t)), t ∈ [0, T],

v_n(t) = ∫_0^t ∫_Z e^{−(t−s)A} G(s, ū_n(s); z) η̃(dz; ds), t ∈ [0, T].

The tightness of the laws of the processes (u_n)_{n∈N} on L^p(0, T; E) ∩ D([0, T]; D(A^{δ′−1})) follows by observing that

u_n = v_n + A^{−1} f_n + e^{−·A} x_n, n ∈ N,

and using Lemma 2.2 (for A^{−1} f_n) and Lemmata 7.6 and 7.7 (for v_n). The tightness of the family of laws of (η_n)_{n∈N} follows from [62, Theorem 3.2].

This tightness result, along with the Prokhorov theorem and the modified Skorohod Representation Theorem, see Theorem C.1, implies that there exist a probability space (Ω̂, F̂, P̂) and [L^p(0, T; E) ∩ D([0, T]; D(A^{δ′−1}))] × M_I(Z × [0, T])-valued random variables (u_*, η_*), (û_n, η̂_n), n ∈ N, such that P̂-a.s.

(û_n, η̂_n) → (u_*, η_*)    (3.20)

in [L^p(0, T; E) ∩ D([0, T]; D(A^{δ′−1}))] × M_I(Z × [0, T]), η̂_n = η_*, and

Law((û_n, η̂_n)) = Law((u_n, η_n)),

for all n ∈ N. Taking the new filtration F̂ as the natural filtration of (û_n, η̂_n, u_*, η_*), we prove that over the filtered probability space (Ω̂, F̂, F̂, P̂) the objects η̂_n and η_* are time-homogeneous Poisson random measures with intensity measure ν ⊗ Leb. We also prove that, in an appropriate topology,

û_n − e^{−·A} x_n − A^{−1} F(·, û_n(·)) − v̂_n = 0,

where the process v̂_n is defined analogously to the process v_n, by replacing u_n with û_n. Using (3.20) and the uniform a priori estimates obtained earlier, we can pass to the limit and derive that P̂-a.s.

u_*(t) = e^{−tA} u_0 + ∫_0^t e^{−(t−s)A} F(s, u_*(s)) ds + ∫_0^t ∫_Z e^{−(t−s)A} G(s, u_*(s); z) η_*(dz, ds).

This ends the proof of Theorem 3.4. □

The scheme of the proof of Theorem 3.2 is very similar to the above idea, but it is longer and more complicated. In the next paragraph we will simply outline the main ideas of the proof and refer the reader to Section 9 for more details.

Outline of the proof of Theorem 3.2 The proof of Theorem 3.2 also relies on approximation and compactness methods. We mainly exploit Assumption 4(iii) to set up in the Banach space E an approximating problem with bounded coefficients. This approximating (auxiliary) problem takes the form

du_n(t) + (Au_n(t) + F_n(t, u_n(t))) dt = ∫_Z G(t, u_n(t); z) η̃(dz, dt),
u_n(0) = u_0,    (3.21)

which, thanks to Theorem 3.4, has an E-valued martingale solution with cadlag paths in D(A^{δ−1/p}). We denote this martingale solution by

(Ω_n, F_n, F_n, P_n, η_n, u_n).

The stochastic process u_n can be written in the form

u_n(t) = e^{−tA} u_0 + ∫_0^t e^{−(t−s)A} F_n(s, u_n(s)) ds + ∫_0^t ∫_Z e^{−(t−s)A} G(s, u_n(s); z) η̃_n(dz, ds)
       = e^{−tA} u_0 + z_n(t) + v_n(t).

The first step of the proof is to derive uniform a priori estimates for the convolution processes z_n and v_n. In this step the results obtained in Section 7, especially Lemmata 7.9 and 7.10, play an important role. In fact, thanks to parts 1(i)-1(v) of Assumption 1, we can apply these results and deduce that for any q̃ ∈ (q, q_max) and r ∈ (1, p),

sup_{n∈N} E_n [ ‖u_n‖^{q̃}_{L^{q̃}(0,T;E)} + sup_{t∈[0,T]} |z_n(t)|_X^r ] < ∞.

Let us fix q̃ ∈ (q, q_max) and δ′ ∈ (0, 1/p). We set B_0 = D(A^{δ′−1}) and put

X_T = C([0, T]; X) × [L^{q̃}(0, T; E) ∩ D([0, T]; B_0)] × M_I(Z × [0, T]).    (3.22)

In the following step we prove that the family of laws of (z_n, v_n, η_n)_{n∈N} is tight on X_T. For this aim we prove that the laws of the sequence (v_n)_{n∈N}, respectively (η_n)_{n∈N}, are tight on L^{q̃}(0, T; E) ∩ D([0, T]; B_0), respectively M_I(Z × [0, T]). The tightness of the laws of the sequence (v_n)_{n∈N} is a consequence of Lemmata 7.6 and 7.7. The tightness of the family of laws of (η_n)_{n∈N} is a consequence of [62, Theorem 3.2]. The tightness of the family of laws of the sequence (z_n)_{n∈N} on C([0, T]; X) is difficult to prove. The proof of this fact relies very much on the hypotheses (1)-(3) of Theorem 3.2. Firstly, we use Lemma 9.1, see [23] for a proof, as well as the previous uniform estimates, to prove that for some p̃ ∈ (1, ∞) the quantity sup_{n∈N} |F_n(·, u_n(·))|_{L^{p̃}(0,T;Y)} is bounded in probability. Observing that z_n = A^{−1} F_n(·, u_n(·)) and using Lemma 2.2, which is applicable thanks to parts 1(i)-1(v) of Assumption 1, we deduce that for some θ ≤ 1 − 1/q_max the family of laws of (z_n)_{n∈N} is tight on C([0, T]; D(A_Y^θ)) and hence on C([0, T]; X).

This, along with the Prokhorov Theorem and the modified Skorohod Representation Theorem, see Theorem C.1, implies that there exist a probability space (Ω̂, F̂, P̂) and X_T-valued random variables (z_*, v_*, η_*), (ẑ_n, v̂_n, η̂_n), n ∈ N, such that P̂-a.s.

(ẑ_n, v̂_n, η̂_n) → (z_*, v_*, η_*) in X_T,    (3.23)

and, for all n ∈ N, η̂_n = η_* and

Law((ẑ_n, v̂_n, η̂_n)) = Law((z_n, v_n, η_n)).

Next we carefully construct a new filtration F̂ and prove that over the new filtered probability space (Ω̂, F̂, F̂, P̂) the objects η̂_n and η_* are time-homogeneous Poisson random measures with intensity measure ν ⊗ Leb. We also prove that, in an appropriate topology,

û_n − e^{−·A} u_0 − ∫_0^· e^{−(·−s)A} F_n(s, û_n(s)) ds − ∫_0^· ∫_Z e^{−(·−s)A} G(s, û_n(s); z) η̂_n(dz, ds) = 0,

where û_n = e^{−·A} u_0 + ẑ_n + v̂_n. Putting u_* := e^{−·A} u_0 + z_* + v_* and using (3.23) and the uniform a priori estimates obtained earlier, we can pass to the limit and deduce that P̂-a.s.

u_*(t) = e^{−tA} u_0 + ∫_0^t e^{−(t−s)A} F(s, u_*(s)) ds + ∫_0^t ∫_Z e^{−(t−s)A} G(s, u_*(s); z) η_*(dz, ds).

This ends the hardest part of the proof of Theorem 3.2. We refer the reader to Sections 9 and 8 for the omitted details. □

4 Application I: Reaction-diffusion Equations with Levy Noise of the Spectral Type

Throughout this section we assume that O is a bounded open domain in R^d, d ∈ N*, with smooth boundary. We also fix real numbers T > 0, α ∈ (0, 2), p ∈ (1, 2] and q ∈ (1, ∞). Finally, we assume that L = {L(t) : t ∈ [0, ∞)} is a real-valued tempered α-stable Levy process, i.e., a Levy process with the Levy measure ν_α given by

ν_α(dz) = 1_{|z|≤1} |z|^{−α−1} e^{−|z|} dz, z ∈ R.    (4.1)
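The condition p > α, which appears below in Theorem 4.1, is exactly what makes the p-th moment of ν_α finite, since ∫_{|z|≤1} |z|^{p−α−1} e^{−|z|} dz converges iff p > α. A numerical sketch for the illustrative values p = 3/2, α = 1, where the moment has the closed form 2√π erf(1):

```python
import math

# Numerical sketch: p-th moment of the tempered alpha-stable measure (4.1),
#   int |z|^p v_alpha(dz) = 2 * int_0^1 z^{p - alpha - 1} e^{-z} dz,
# finite exactly when p > alpha.  For p = 1.5, alpha = 1 the substitution
# z = u^2 removes the singularity and gives 4 * int_0^1 e^{-u^2} du,
# i.e. 2 * sqrt(pi) * erf(1).

p, alpha = 1.5, 1.0

def moment_by_substitution(m=100000):
    # after z = u^2: integrand 4 * u^{2(p - alpha) - 1} * e^{-u^2} on (0, 1)
    total = 0.0
    du = 1.0 / m
    for i in range(m):
        u = (i + 0.5) * du    # midpoint rule
        total += 4.0 * u ** (2.0 * (p - alpha) - 1.0) * math.exp(-u * u) * du
    return total

val = moment_by_substitution()
exact = 2.0 * math.sqrt(math.pi) * math.erf(1.0)
```
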

Our aim in this section is to study an equation of the following type:

du(t, ξ) = Δu(t, ξ) dt + [−|u(t, ξ)|^{q−1} sgn(u(t, ξ)) + u(t, ξ)] dt
         + (√|u(t, ξ)| / (1 + √|u(t, ξ)|)) dL(t), t ∈ (0, T], ξ ∈ O,
u(t, ξ) = 0, ξ ∈ ∂O, t ∈ (0, T],    (4.2)
u(0, ξ) = u_0(ξ), ξ ∈ O.

We will achieve our aim by finding conditions on the coefficients so that Theorem 3.2 is applicable. For this purpose we will reformulate problem (4.2) using a more general setting and the language of Poisson random measures.

Firstly, we denote by X = C_0(O) the space of real continuous functions on the closure of O which vanish on the boundary ∂O. For γ ∈ R and r ∈ (1, ∞) we denote by H^{γ,r}(O) the fractional order Sobolev space defined by means of the complex interpolation method, see [73, Definition 1, page 301]. By H_0^{γ,r}(O) we denote the closure of the space C_0^∞(O) in H^{γ,r}(O), see also [73, Definition 2, page 301]. We will simply write H^{γ,r} and H_0^{γ,r} when there is no risk of ambiguity.

Let us briefly recall the definitions of these spaces. If k is a natural number and p ∈ [1, ∞) is a real number, we denote by H^{k,p}(O), see [ , Section I.6], the space of all functions u ∈ L^p(O) whose weak derivatives D^γ u of degree |γ| ≤ k exist and belong to L^p(O). Endowed with the natural norm

|u|^p_{k,p} := Σ_{|γ|≤k} |D^γ u|^p_{L^p}, u ∈ H^{k,p}(O),

this space is a separable Banach space. The closure of the space C_0^∞(O) in the space H^{k,p}(O) is denoted by H_0^{k,p}(O). In the case ρ ∈ R_+ \ N, the fractional order Sobolev spaces H^{ρ,p}(O) can be defined by the complex interpolation method, i.e.,

H^{ρ,p}(O) = [H^{k,p}(O), H^{m,p}(O)]_θ,    (4.3)

where k, m ∈ N, k < m, and θ ∈ (0, 1) satisfy ρ = (1 − θ)k + θm. It is well known, see e.g. [52, Theorem 11.1, Chapter I] and [73, Theorem 1.4.3.2 on p. 317], that H_0^{s,p}(O) = H^{s,p}(O) iff s ≤ 1/p.

We denote by A the operator −Δ in L^p(O) with the Dirichlet boundary conditions, i.e.,

D(A) = H_0^{1,p}(O) ∩ H^{2,p}(O), Au = −Δu, u ∈ D(A).    (4.4)

Next, let us consider a separately continuous real-valued function f defined on [0, T] × O × R satisfying the following condition: there exists a number K > 0 such that for t ≥ 0, x ∈ O, u ∈ R,

−K(1 + |u|^q 1_{[0,∞)}(u)) ≤ f(t, x, u) ≤ K(1 + |u|^q 1_{(−∞,0]}(u)).    (4.5)

It is not difficult to prove that if f satisfies (4.5), then

f(t, ξ, v + z) sgn(v) ≤ K(1 + |z|^q), for all v, z ∈ R, t ∈ [0, T].

Therefore, by [10, Proposition 6.2] the Nemytskii map F defined by

F(t, u)(ξ) := f(t, ξ, u(ξ)), u ∈ X, ξ ∈ O, t ∈ [0, T],    (4.6)

satisfies items (i) and (ii) of Assumption 4 on X.

Approximating f by the sequence (f_n)_{n=1}^∞, where for any t ∈ [0, T] and ξ ∈ O

f_n(t, ξ, u) := f(t, ξ, u), if u ∈ [−n, n],
f_n(t, ξ, u) := f(t, ξ, n), if u > n,
f_n(t, ξ, u) := f(t, ξ, −n), if u < −n,

we obtain a sequence (F_n)_{n∈N} defined by

F_n : [0, T] × X ∋ (t, u) ↦ {O ∋ ξ ↦ f_n(t, ξ, u(ξ))} ∈ X,

which satisfies Assumption 4(iii).
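The truncation defining f_n amounts to clipping the state variable at level n; a few lines suffice to check its defining properties (agreement with f on [−n, n], boundedness, pointwise convergence). The drift below is the model nonlinearity of (4.2) with q = 3, used purely for illustration.

```python
# Sketch of the truncation (f_n) used for Assumption 4(iii): each f_n agrees
# with f on [-n, n], is bounded, and f_n -> f pointwise as n -> infinity.

q = 3

def f(t, xi, u):
    # illustrative autonomous, space-independent model drift of (4.2)
    sgn = 1.0 if u > 0 else (-1.0 if u < 0 else 0.0)
    return -abs(u) ** (q - 1) * sgn + u

def f_n(n, t, xi, u):
    # clip the state variable at level n, then apply f
    clipped = max(-float(n), min(float(n), u))
    return f(t, xi, clipped)
```
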

Next we reformulate the noise appearing in the problem that we want to study. In view of the results of Section 2.3, see also [1], to the real-valued Levy process L there corresponds a time-homogeneous Poisson random measure η with Levy measure ν_α on Z = R defined by Eq. 4.1. Moreover, the Levy processes L = {L(t) : t ≥ 0} and L̃ = {L̃(t) : t ≥ 0}, where

L̃(t) := ∫_0^t ∫_R z η̃(dz, ds), t ≥ 0,

have the same law on D([0, ∞); R).

Now, let g be a bounded and separately continuous function defined on [0, T] × O × R with values in R. Furthermore, we assume that g(t, x, ·) is continuous uniformly w.r.t. (t, x). Let G be defined by

G(t, u; z)(ξ) = g(t, ξ, u(ξ)) z, t ∈ [0, T], u ∈ L^1(O), z ∈ R, ξ ∈ O.    (4.7)

With this notation problem (4.2) can be rewritten in the following form:

du(t) + Au(t) dt = F(t, u(t)) dt + ∫_R G(t, u(t); z) η̃(dz, dt), t ∈ (0, T],
u(0) = u_0.    (4.8)

The following theorem is a corollary of Theorem 3.2.

Theorem 4.1 Assume that α ∈ (0, 2), p ∈ (1, 2], p > α, d ∈ N*. Let ν_α be the Levy measure on R given by Eq. 4.1. Assume that q > 1 and let r be a number satisfying

r > max{qd, 2d}.

Then, for any u_0 ∈ C_0(O) there exists a C_0(O)-valued martingale solution (Ω, F, P, F, η, u) to problem (4.8) with cadlag trajectories in L^r(O) such that η is a time-homogeneous Poisson random measure on (R, B(R)) with intensity measure ν_α ⊗ Leb.

Proof Let us fix the parameters α, p, d, q as in the assumptions. Next let us choose real numbers r > max{qd, 2d} and κ ∈ (d/r, 1). Let us also choose X = C_0(O), E = H^{κ,r} and B = L^r. Choosing δ ∈ (κ/2, 1/p) close enough to 1/p, we deduce that

q_max = p/(1 − δp) > q.

Next, we denote by A = A_r the minus Laplace operator −Δ with the Dirichlet boundary conditions in the space B. Since r ≥ p, r > 2d and p ∈ (1, 2], we infer that E and B are separable, UMD and martingale type p Banach spaces. Now it is well known that the assumptions of the first and second part of Theorem 3.2 are satisfied by A_r. We put Z = R and ν = ν_α. Then we immediately see that

C_p(ν) := ∫_R |z|^p ν(dz) < ∞.

We define a map G by

G(t, ·) : X ∋ u ↦ {Z ∋ z ↦ z (g(t, ·) ∘ u)} ∈ L^p(Z, ν; E), t ∈ [0, T].

The map G may not be defined on the whole space X, but the map A_r^{−δ} G is. Indeed, we have the following chain of equalities and inequalities:

∫_Z |A^{−δ} G(u)(z)|_E^p ν(dz) = ∫_Z |A^{−δ}((g ∘ u) z)|_E^p ν(dz)
= ∫_Z |A^{−δ}(g ∘ u)|_E^p |z|^p ν(dz) ≤ C_p(ν) ‖g ∘ u‖^p_{L^r} ≤ C C_p(ν) |g|^p_{L^∞},

where we have used that A^{−δ} maps L^r into E, since 2δ ≥ κ.

Since the function g is continuous, one can easily check that the continuity condition in Assumption 3 is satisfied. Observe that

G(t, u)(z) = G(t, u; z), t ∈ [0, T], u ∈ X, z ∈ Z,

where G(t, u; z) is defined in Eq. 4.7.

Since κ > d/r we have E ⊂ X. Moreover, it is straightforward to check that the nonlinear map F defined by Eq. 4.6 satisfies Assumption 4 on X.

Finally, let Y = L^r(O) and A_Y = A_r. Since 1 − 1/q_max > 1/2 and 1/2 > d/r, we can find κ_1 ∈ (d/r, 2(1 − 1/q_max)) such that

D(A_Y^{κ_1/2}) ⊂ X ⊂ Y.

Thus all the assumptions (with our choice of spaces and maps) of Theorem 3.2 are satisfied, and therefore the existence of a solution with the requested properties follows. □

5 Application II: Reaction-diffusion Equations of an Arbitrary Order with Space-Time Levy Noise

The aim of this section is to show how Theorem 3.4 can be applied to stochastic reaction-diffusion equations driven by space-time Poissonian white noise. This type of noise, which is a generalization of the space-time white noise, has been treated quite often in the literature; see for instance [65, Definition 7.24] (see also [7]).

We begin by introducing the assumptions on the drift of our problem and the driving noise. In the second subsection we present the details about the coefficient of the noise. Finally, after a careful statement of the assumptions on our problem, we formulate and prove the existence of a martingale solution to stochastic reaction-diffusion equations driven by a space-time Poissonian noise.

5.1 The Noise and the Deterministic Nonlinear Part of the Problem

Let d ≥ 1, p ∈ (1, 2], and let O be a bounded open domain in R^d with boundary ∂O of class C^∞. We consider a complete filtered probability space (Ω, F, F, P) where the filtration F = (F_t)_{t≥0} satisfies the usual conditions.

The assumptions on the noise in the equation that we are interested in are given below.

Assumption 7 Let η : Ω → M_I(O × R × [0, ∞)) be the space-time Poissonian white noise on (O × R, B(O) ⊗ B(R), Leb ⊗ Leb)⁴ with intensity measure ν ⊗ Leb. We assume that:

supp ν ⊂ (−1, 1) and there exists p ∈ (1, 2] such that C_p(ν) := ∫_R |z|^p ν(dz) < ∞.    (5.1)

Remark 5.1 By Theorem A.6 we can find a time-homogeneous Poisson random measure η̄ with intensity measure ν̄ ⊗ Leb such that

η(ω)(U × C × D) = η̄(ω)(U × C × D), U ∈ B(O), C ∈ B(R), D ∈ B([0, ∞)),

ν̄(C) = ∫_R ∫_O 1_C(ξ, ζ) dξ ν(dζ), C ∈ B(O) ⊗ B(R).    (5.2)

We fix p as above for the remainder of this section. We also put ν̄ = Leb ⊗ ν, where Leb is the Lebesgue measure on O.

Let also k be a positive integer. Borrowing the presentation of [10, Section 6.3], we introduce a differential operator A of order 2k as follows.

(a) The differential operator A defined by

A u(x) = Σ_{|α|≤2k} a_α(x) D^α u(x), x ∈ O,    (5.3)

is properly elliptic (see [73, Section 4.9.1]). The coefficients a_α are C^∞ functions on the closure Ō of O.

(b) A system B = {B_j}_{j=1}^k of differential operators on ∂O is given,

B_j = Σ_{|α|≤m_j} b_{j,α} D^α,    (5.4)

with the coefficients b_{j,α} being C^∞ functions on ∂O. The orders m_j of the operators B_j are ordered in the following way:

0 ≤ m_1 ≤ m_2 ≤ ... ≤ m_k.

We assume that m_k < 2k and

Σ_{|α|=m_j} b_{j,α}(ξ) n_ξ^α ≠ 0, ξ ∈ ∂O, j = 1, 2, ..., k,    (5.5)

where n_ξ is the unit outward normal vector to ∂O at ξ ∈ ∂O.

(c) For any x ∈ Ō and ξ ∈ R^d \ {0} let a(x, ξ) = Σ_{|α|=2k} a_α(x) ξ^α. We assume that

(−1)^k a(x, ξ) < 0, x ∈ Ō, ξ ∈ R^d \ {0}.    (5.6)

(d) If b_j(x, ξ) = Σ_{|α|=m_j} b_{j,α}(x) ξ^α, then for all x ∈ ∂O, ξ ∈ T_x(∂O), τ ∈ (−∞, 0] the polynomials

{t ↦ b_j(x, ξ + t n_x)}, j = 1, ..., k,

are linearly independent modulo the polynomial t ↦ Π_{i=1}^k (t − τ_i^+(τ)). Here T_x(∂O) is the set of all tangent vectors to ∂O at x ∈ ∂O, and τ_i^+(τ) are the roots with positive imaginary part of the polynomial

C ∋ t ↦ a(x, ξ + t n_x) − τ.

⁴ Leb is the Lebesgue measure on O and R, and we refer to Appendix A for the definition of, and facts about, space-time Poissonian white noise.

The differential operator A induces a linear unbounded map A_r on the Banach space L^r(O), r > 1, defined by

D(A_r) = {u ∈ H^{2k,r} : B_j u|_{∂O} = 0 for m_j ≤ 2k − 1},
A_r u = A u, u ∈ D(A_r).    (5.7)

Assume θ > 0 and r ∈ [p, ∞). Let A_r be the linear operator in the Banach space L^r(O) defined in Eq. 5.7. The space D(A_r^{θ/2k}) will be used quite often in what follows, and hence it is convenient to write it down in terms of Sobolev spaces and boundary conditions, as in equality (5.7). We have, for θ ∈ [0, 2k],

D(A_r^{θ/2k}) = H_B^{θ,r} := {u ∈ H^{θ,r} : B_j u|_{∂O} = 0 for m_j < θ − 1/r}.    (5.8)

Throughout we put E = H_B^{θ,r}. It is well known, see for instance Triebel's monograph [73, Section 4.9.1], Seeley's paper [71] or Lunardi's book [53, Section 3.2] (also [10, Section 6.3]), that A_r satisfies parts 1(iii)-1(v) of Assumption 1 on the space E.

Now we introduce a nonlinear map which will play the role of the drift for our stochastic equation.

Assumption 8 Assume that a function f : [0, T] × O × R → R is separately continuous and bounded. Moreover, we assume that f(t, x, ·) is continuous uniformly w.r.t. (t, x). We denote by F the Nemytskii map associated to f, i.e., the map defined by

F(t, u)(x) := f(t, x, u(x)), u ∈ L^r(O), x ∈ O, t ∈ [0, T],    (5.9)

and assume that F : [0, T] × L^r(O) → L^r(O).

The restriction of F to [0, T] × H_B^{θ,r} will also be denoted by F.

5.2 Coefficient of the Noise

We begin this subsection with the precise statement of the assumptions on the coefficient of the noise.

Assumption 9 Let g : [0,T] × O × ℝ → ℝ be a bounded function that is separately continuous with respect to the first and the second variables and such that g(t, x, ·) is continuous uniformly w.r.t. (t, x).

We define a nonlinear map G₀ : [0,T] × L^r(O) → L^r(O) to be the Nemytskii operator associated to g; that is,

G₀(t, u)(x) := g(t, x, u(x)),  u ∈ L^r(O), x ∈ O.

Because of Assumption 9, in view of the Lebesgue Dominated Convergence Theorem, for every t ≥ 0 the map G₀(t, ·) is continuous from L^r(O) into itself and, for each u ∈ L^r(O), the function G₀(·, u) : [0,T] → L^r(O) is strongly measurable.

Let B^{s}_{r,∞}(O) be the Besov space as defined in Appendix B, and let r′ denote the conjugate exponent of r. By Proposition B.1 along with Corollary B.4 we can define a bounded linear map

Φ : L^p(O) → L^p(O × ℝ, ν̄; B^{−d/r′}_{r,∞}(O)),  (5.10)

by setting, for v ∈ L^p(O),

[Φv](x, y) = (v(x) δ_x) y,  (x, y) ∈ O × ℝ.  (5.11)

Indeed, Φ is linear and by Corollary B.4 and Eq. 5.1 we have the following chain of equalities/inequalities:

∫_{O×ℝ} |[Φv](x, y)|^p_{B^{−d/r′}_{r,∞}(O)} dx ν(dy)
 = ∫_{O×ℝ} |v(x) δ_x|^p_{B^{−d/r′}_{r,∞}(O)} |y|^p dx ν(dy)
 = ∫_O |v(x) δ_x|^p_{B^{−d/r′}_{r,∞}(O)} dx × ∫_ℝ |y|^p ν(dy) ≤ C C_p(ν) |v|^p_{L^p(O)}.

Finally, by the choice of θ, r, and p above, the embeddings

H^{θ,r}_B ⊂ L^r(O) ⊂ L^p(O)

are continuous, so we can define a nonlinear map G by

G := Φ ∘ G₀ : [0,T] × C₀(O) → L^p(O × ℝ, ν̄; B^{−d/r′}_{r,∞}(O)).  (5.12)

In what follows we will also denote by G the restriction of the previously defined map G to the sets [0,T] × H^{θ,r}_B with r ∈ (p, ∞) and θ > 0. It follows from the corresponding properties of the map G₀ that for every t ≥ 0 the map G(t, ·) is continuous and that for each u ∈ H^{θ,r}_B the function G(·, u) is strongly measurable.

Claim 5.1 Assume that p ∈ (1, 2], d ∈ ℕ, r > p, k ∈ ℕ and θ > 0 satisfy

θ + d − d/r < 2k/p.  (5.13)

Then there exists δ < 1/p such that the map A^{−δ}G defined on [0,T] × E is L^p(Z, ν; E)-valued, bounded and continuous on E := H^{θ,r}_B and measurable on [0,T].

Proof Let us fix k, r, d, θ and p as in the assumptions. Thus, we can choose κ > d − d/r such that δ := (θ + κ)/(2k) < 1/p. Let us also notice that E ⊂ H^{θ,r}(O). Therefore, since A^{−δ} maps H^{−κ,r}(O) into H^{θ,r}(O) and, by [73, Theorem 4.6.1-(a,b)], the Banach space B^{−d/r′}_{r,∞}(O) is continuously embedded in H^{−κ,r}(O), we infer that the map A^{−δ}G is L^p(O × ℝ, ν̄; E)-valued. Moreover, by the continuity and boundedness of the function g, the function A^{−δ}G from [0,T] × E into L^p(O × ℝ, ν̄; E) is separately continuous and bounded. Since δ < 1/p we deduce that G satisfies Assumption 3 with ρ = 1 − pδ. □

Remark 5.2 If pd < 2k, then we can find θ > d/r such that condition (5.13) is satisfied and the space E is continuously embedded in C₀(O).

5.3 The Formulation of the Result

Let ñ be the compensated Poisson random measure associated to the time-homogeneous Poisson random measure given by Remark 5.1. With the functional setting described above, the problem that we are interested in is

du(t) + A_r u(t) dt = ∫_{O×ℝ} G(u(t))[ξ, ζ] ñ(dξ × dζ × dt) + F(t, u(t)) dt,  t ∈ (0,T],  (5.14)
u(0) = u₀.

Remark 5.3 A very important example of problem (5.14) is the following SPDE

∂u/∂t (t, ξ) + Au(t, ξ) = f(u(t, ξ)) + g(u(t, ξ)) L̇(ξ, t),  ξ ∈ O, t ∈ (0,T],
u(0, ξ) = u₀(ξ),  ξ ∈ O,  (5.15)
u(t, ξ) = 0,  for ξ ∈ ∂O, t ∈ (0,T],

where A is a second order differential operator, both f and g are continuous and bounded real functions defined on ℝ and, roughly speaking, L̇ denotes the Radon–Nikodym derivative of the space-time Levy white noise⁵ L, i.e.,

L̇(ξ, t) := dL(ξ, t) / (dt dξ).

We are finally ready to define the concept of solution to problem (5.14).

Definition 5.4 Let p ∈ (1, 2] and let ν be a Levy measure on ℝ satisfying condition (5.1). Let A_r be the linear operator in the Banach space L^r(O) defined by Eq. 5.7. Put E = H^{θ,r}_B, for some θ > 0 and r > p.

An E-valued martingale solution to Eq. 5.14 with cadlag paths in L^r(O) is a system

(Ω, F, F, P, η, u),  (5.16)

where

(i) (Ω, F, F, P) is a complete filtered probability space equipped with a filtration F = {F_t}_{t≥0} satisfying the usual conditions,

(ii) η is a space-time Poissonian white noise⁶ on O × ℝ with jump size intensity ν̄ = Leb ⊗ ν,

(iii) u is an E-valued F-progressively measurable stochastic process such that

E ∫_0^T |u(s)|^p_E ds < ∞,  (5.17)

(iv) u is an L^r(O)-valued cadlag process,

5We refer again to Appendix A for the definition of space-time Levy noise.

⁶We refer to Appendix A for the definitions of and facts about space-time Levy and Poissonian noise.

(v) for every t ∈ [0,T], u satisfies the following equation P-a.s.:

u(t) = e^{−tA_r} u₀ + ∫_0^t e^{−(t−r)A_r} F(r, u(r)) dr
 + ∫_0^t ∫_{O×ℝ} e^{−(t−r)A_r} G(u(r))[ξ, ζ] ñ(dξ × dζ × dr).  (5.18)

Remark 5.5 The last condition in Definition 5.4 should be understood as saying that both integrals in equality (5.18) make sense as E-valued random variables and that (5.18) holds as an equality of E-valued random variables.

The following result will be proved by applying Theorem 3.4.

Theorem 5.6 Let p ∈ (1, 2], let ν be a Levy measure on ℝ satisfying condition (5.1), and let A be a differential operator having the properties (a)–(d) in Section 5.1. For θ > 0 and r > p we put E = H^{θ,r}_B, where A_r is the linear operator in the Banach space L^r(O) defined in Eq. 5.7. Let F and G be the two maps defined in Eqs. 5.9 and 5.12, respectively, satisfying Assumptions 8 and 9 (see page 29). In addition, let us assume that the numbers p, r, θ, d and k satisfy (5.13). Then, for every u₀ ∈ H^{θ,r}_B there exists an H^{θ,r}_B-valued martingale solution u to Eq. 5.14 with cadlag trajectories in L^r(O).

The above theorem can be reformulated in terms of space-time Levy noise; since such a result would not differ significantly from the one above, we omit it and leave it as an exercise for the interested reader.

Proof of Theorem 5.6 Let us fix the numbers d, k, p, and r, the space E and the operator Ar as in the statement of the theorem. Also, let F (resp. G) be defined by equality (5.9) (resp. (5.12)).

Since r > p, the separable Banach spaces E and B are UMD and of martingale type p. As we mentioned above, A_r has the BIP property on E, is a positive operator with compact resolvent, and −A_r generates a contraction type C₀-semigroup on E. Owing to Claim 5.1 we can find ρ ∈ [0, 1) such that the map A_r^{(ρ−1)/p} G defined on [0,T] × E is L^p(Z, ν; E)-valued, bounded and continuous w.r.t. E and measurable with respect to time. Thus all assumptions of Theorem 3.4 but Assumption 5 are satisfied. However, by Assumption 8 the Nemytskii map F defined for t ∈ [0,T] and u ∈ E by

F(t, u)(x) := f(t, x, u(x)),  x ∈ O,

satisfies Assumption 5. Hence, since Theorem 3.4 is applicable, we infer that problem (5.14) has an E-valued martingale solution u. Since A_r is the infinitesimal generator of a contraction C₀-semigroup on L^r(O), by [77], the paths of the process u are cadlag in L^r(O). □

Remark 5.7 Let O be a bounded open domain in ℝ^d, with d ≥ 1. Let n be a fixed natural number. For each i = 1, …, n let ν_i be a Levy measure on ℝ satisfying (5.1), and let {L_i(t) : t ≥ 0} be a Levy noise with Levy measure ν_i. For a fixed T ∈ (0, ∞) we consider the following system of SPDEs

du_i(t) + A_i u_i(t) dt = Σ_{l=1}^{n} g_{il}(t, x, u_1(t,x), …, u_n(t,x)) dL_l(t)
 + f_i(t, x, u_1(t,x), …, u_n(t,x)) dt,  t ∈ (0,T], x ∈ O,
u_i(0) = u_{i,0},  x ∈ O,
B_{ji} u_i(t,x) = 0,  t ∈ [0,T], x ∈ ∂O,  (5.19)

where A_i, i = 1, …, n, are differential operators of order 2k satisfying conditions (a)–(d), see pages 27 and 28. Furthermore, we assume that

f = [f_i]_{i=1}^{n} : [0,T] × O × ℝ^n → ℝ^n,  g = [g_{il}]_{i,l=1}^{n} : [0,T] × O × ℝ^n → ℝ^{n×n},

are separately continuous and bounded. In addition, we assume that g(t, x, ·) is continuous uniformly w.r.t. (t, x). Problem (5.19) was studied by Cerrai in [20] when each L_i is a Wiener process.

We will apply the previous theorem on the Banach space E = H^{θ,r}_B(O, ℝ^n) to check that for any u₀ = (u_{i,0})_{i=1}^{n} ∈ H^{θ,r}_B(O, ℝ^n) there exists an H^{θ,r}_B(O, ℝ^n)-valued martingale solution to Eq. 5.19 with cadlag paths in L^r(O, ℝ^n). For this purpose we consider the diagonal matrix

A = ( A₁  0  …  0
      0  A₂  …  0
      ⋮   ⋮  ⋱  ⋮
      0   0  …  A_n )

and denote by G₀ : [0,T] × L^r(O, ℝ^n) → L^r(O, ℝ^n) (resp. F) the Nemytskii operator associated to the map g (resp. f). We also set Z = ℝ^n and define the Levy measure ν̄ on O × Z by ν̄ = Leb ⊗ (ν₁ ⊗ ⋯ ⊗ ν_n). As above we can define a bounded linear map Φ : E → L^p(O × Z; B^{−d/r′}_{r,∞}(O, ℝ^n)) by formula (5.11), i.e.,

[Φv](x, y) = (v(x) δ_x) y,  (x, y) ∈ O × ℝ^n,

and put G = Φ ∘ G₀. The restrictions of the maps F and G to the sets [0,T] × E are still denoted by F and G, respectively. We denote by η the Poisson random measure with intensity measure Leb(dt) ⊗ ν̄(dx, dz) on [0,T] × O × Z. Then problem (5.19) can be rewritten in the following form

du + Au dt = F(t, u) dt + ∫_{O×Z} G(t, u)[x, z] ñ(dx × dz × dt),  (5.20)
u(0) = u₀.

The existence result we claimed earlier is now a straightforward consequence of Theorem 5.6.

6 Application III: Stochastic Evolution Equations with Fractional Generator and Polynomial Nonlinearities

In this section we deal with a problem similar to that of the previous section, but with one important modification: from now on we assume that the nonlinear term F is of polynomial growth. We put k = 1 and assume that A and B are differential operators satisfying conditions (a)–(d), see page 27. As in the previous section we fix r > 1 and denote by A_r the linear operator induced by A in the Banach space L^r(O).

Let γ ∈ (0, 1] and let A_r^γ be the fractional power of A_r. It is well known (see, for instance, [72, Theorem 4.3.3]) that

D(A_r^γ) = H^{2kγ,r}_B = {u ∈ H^{2kγ,r} : B_j u|_{∂O} = 0 for m_j < 2kγ − 1/r},

and for any θ ∈ [0, 2kγ]

D((A_r^γ)^{θ/(2kγ)}) = H^{θ,r}_B = {u ∈ H^{θ,r} : B_j u|_{∂O} = 0 for m_j < θ − 1/r}.  (6.1)

We also consider a space-time Poissonian white noise η̃ on (O × ℝ, B(O) × B(ℝ)) with Levy measure ν ⊗ Leb satisfying (5.1) with a fixed number p ∈ (1, 2]. As in Remark 5.1, to η̃ we can associate a time-homogeneous Poisson random measure η with intensity measure ν̄ = Leb ⊗ ν.

Next, let g : [0,T] × ℝ × O → ℝ be a bounded function, separately continuous w.r.t. the first and the second variables, and continuous in the third variable uniformly with respect to the other two. As in the previous section we consider the map

G : [0,T] × L^r(O) → L^p(O × ℝ, ν̄; B^{−d/r′}_{r,∞}(O)),

defined by

[G(t, u)](x, y) = [g(t, u(x), x) δ_x] y,  u ∈ L^r(O), (x, y) ∈ O × ℝ.  (6.2)

We also consider a separately continuous function f : [0,T] × O × ℝ → ℝ satisfying condition (4.5) for some q > 1. We denote by F the Nemytskii operator defined by

F(t, u)(x) := f(t, x, u(x)),  u ∈ C₀(O), x ∈ O, t ∈ [0,T].  (6.3)

We consider the following approximation of the function f by a sequence (f_n)_{n∈ℕ} of functions defined, for any t ∈ [0,T], x ∈ O and n ∈ ℕ, by

f_n(t, x, u) := { f(t, x, u), if u ∈ [−n, n],
                 f(t, x, n), if u > n,
                 f(t, x, −n), if u < −n.

By setting F_n(t, u)(ξ) = f_n(t, ξ, u(ξ)) for (t, u, ξ) ∈ [0,T] × C₀(O) × O we obtain a sequence (F_n)_{n∈ℕ} of bounded and separately continuous maps from [0,T] × C₀(O) into C₀(O) satisfying (3.2) uniformly in n and converging pointwise to F in C₀(O). Hence the nonlinear map F defined by Eq. 6.3 satisfies Assumption 4 with X = C₀(O).
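The truncation f_n above simply clips the last argument to [−n, n]. A short numerical sketch, using the example nonlinearity of Remark 6.1 below with the assumed choice q = 3:

```python
import numpy as np

def truncate(f, n):
    """Return f_n(t, x, u) = f(t, x, u) for |u| <= n, frozen outside,
    i.e. f_n(t, x, u) = f(t, x, clip(u, -n, n))."""
    return lambda t, x, u: f(t, x, float(np.clip(u, -n, n)))

# Example dissipative nonlinearity of polynomial growth, cf. (6.4), with q = 3.
f = lambda t, x, u: -abs(u) ** 3 * np.sign(u)

f2 = truncate(f, 2)   # bounded: |f2| <= 2**3 = 8
```

Each f_n is bounded, so the corresponding Nemytskii map F_n is bounded on C₀(O), and F_n → F pointwise since f_n = f on [−n, n].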

Remark 6.1 An example of a real valued function f satisfying the above conditions is

f : [0, ∞) × O × ℝ ∋ (t, x, u) ↦ −|u|^q sgn(u).  (6.4)

With the various mappings introduced above we consider the following SPDE

du(t) + A_r^γ u(t) dt = ∫_{O×ℝ} G(t, u(t))[ξ, ζ] ñ(dξ × dζ × dt) + F(t, u(t)) dt,  t ∈ (0,T],  (6.5)
u(0) = u₀.

Theorem 6.2 Let ν be a Levy measure on ℝ satisfying condition (5.1) with a fixed p ∈ (1, 2]. Let us also assume that A is a differential operator satisfying the properties (a)–(d) from page 27. For γ ∈ (0, 1], let A_r^γ be the fractional power of the linear operator A_r defined in Eq. 5.7. Finally, let F and G be the maps defined in Eqs. 6.3 and 6.2, respectively.

In addition to the assumptions on G above we also assume that pd < 2kγ and that F satisfies condition (4.5) with some q ∈ (1, p). If r > max{p, p/(p−q)}, then for any u₀ ∈ C₀(O) there exists a C₀(O)-valued martingale solution to Eq. 6.5 with cadlag trajectories in L^r(O).

Before we embark on the proof of this result let us make the following remark.

Remark 6.3 If the intensity measure ν of the space-time white noise is finite, then, as in the proof of Theorem IV.9.1 in [45], the solution can be written as a concatenation of solutions to deterministic reaction-diffusion equations on random intervals, the initial data being measure-valued random variables.

To be more precise, let λ := Leb(O) × ν(ℝ), let (τ_i)_{i∈ℕ} be a family of independent, exponentially distributed random variables with parameter λ, and set

N(t) = Σ_{n=1}^{∞} 1_{[T_n, ∞)}(t),  t ≥ 0,

where T_n = Σ_{i=1}^{n} τ_i, n ∈ ℕ. Let also (Y_i)_{i∈ℕ} be a family of independent ν/ν(ℝ)-distributed random variables and let {x_i : i ∈ ℕ} be a sequence of independent random variables uniformly distributed in O. Then the space-time white noise η can be written as follows: for any A ∈ B(O), B ∈ B(ℝ) and I ∈ B([0, ∞)),

η(A × B × I) = 0 if N(t) = 0,  and  η(A × B × I) = Σ_{n=1}^{N(t)} δ_{(x_n, Y_n, T_n)}(A × B × I) if N(t) > 0.

Using this representation, the above SPDE can be described by a deterministic PDE with measure-valued initial condition on each time interval [T_n, T_{n+1}), i.e., u solves the deterministic PDE

∂u/∂t (t, ξ) + Au(t, ξ) = f(u(t, ξ)),  ξ ∈ O, t ∈ (T_n, T_{n+1}),
u(T_n^+, ξ) = u(T_n^−, ξ) + Y_n δ_{x_n},  ξ ∈ O,  (6.6)
u(t, ξ) = 0,  for ξ ∈ ∂O, t ∈ (T_n, T_{n+1}).
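When ν(ℝ) < ∞ the jump data (T_n, Y_n, x_n) of Remark 6.3 can be simulated directly. A sketch with the assumed choices O = (0, 1) (so Leb(O) = 1) and a finite ν whose normalization ν/ν(ℝ) is standard Gaussian — purely illustrative, not a construction from the paper:

```python
import numpy as np

def jump_data(T, lam, rng):
    """Simulate the jump times T_n of a Poisson process with rate
    lam = Leb(O) * nu(R), together with the marks Y_n ~ nu/nu(R)
    and the jump locations x_n, uniform on O = (0, 1)."""
    times = []
    t = 0.0
    while True:
        t += rng.exponential(1.0 / lam)      # tau_i ~ Exp(lam), i.i.d.
        if t > T:
            break
        times.append(t)
    n = len(times)
    return (np.array(times),
            rng.standard_normal(n),          # marks Y_n (assumed Gaussian nu)
            rng.uniform(0.0, 1.0, n))        # locations x_n

rng = np.random.default_rng(0)
times, marks, locs = jump_data(T=10.0, lam=1.0, rng=rng)
```

Between consecutive T_n one solves the deterministic problem (6.6); at T_n the state jumps by Y_n δ_{x_n}.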

It follows that our conditions have to be stronger than the conditions in [8], which is indeed the case. In fact, for γ = 1 we assume that d < 2/p, which is stronger than the condition d < 2/(q − 1) imposed by Brezis and Friedman in [8].

Proof of Theorem 6.2 We just give a sketch of the proof because it is very similar to the proofs of Theorem 4.1 and Theorem 5.6. Let us fix the numbers d, γ, p, q, and r as in the statement of the theorem. We denote by A_r^γ the fractional power of the linear operator A_r induced by −A on the Banach space B = L^r(O). Also, let F be defined by equality (6.3).

By Remark 5.2 we can find θ > d/r such that θ + d − d/r < 2kγ/p and the Banach space E = H^{θ,r}_B is continuously embedded in X := C₀(O). Thus, owing to the assumption on g (resp. f) we can argue as in the proof of Claim 5.1 (resp. Claim 6.3) to prove that the map G (resp. F) satisfies Assumption 3 (resp. Assumption 4) with ρ ∈ (0, 1 − pd/(2kγ)). Since r > p, E and B are separable, UMD and martingale type p Banach spaces. Finally, let Y = L^r(O) and A_Y = A_r^γ. Since 1 − 1/p > d/r and r > max{p, p/(p−q)}, we can find κ₁ ∈ (d/r, 1 − 1/p) such that

H^{κ₁,r} ⊂ X ⊂ Y.

Since −A_r^γ is the infinitesimal generator of a contraction type C₀-semigroup on Y = B = L^r(O), all the assumptions of Theorem 3.2 are satisfied by problem (6.5). Hence, we easily conclude the proof of Theorem 6.2 from the applicability of Theorem 3.2. □

7 Some Preliminary Results about Stochastic Convolution

In this section we state several results concerning the stochastic convolution process.

7.1 The Stochastic Convolution

Let us begin by listing the assumptions that will be in force throughout the whole section. We assume that E and A are respectively a Banach space and a linear operator satisfying parts 1(i)–1(v) of Assumption 1, that the real number p ∈ (1, 2] satisfies part 1(i) of Assumption 1, and that ρ ∈ (0, p) satisfies Assumption 3.

We also assume that the following are given: a measurable space (Z, 𝒵), a nonnegative measure ν ∈ M⁺(Z) on (Z, 𝒵), a filtered probability space 𝔓 = (Ω, F, F, P) such that the right-continuous filtration F = (F_t)_{t≥0} satisfies the usual conditions, and a time-homogeneous Poisson random measure η with Levy measure ν. For any progressively measurable process ξ : [0, ∞) × Ω → L^p(Z, ν; E) such that

E ∫_0^T ∫_Z |ξ(s, z)|^p ν(dz) ds < ∞,  T > 0,

one can define the so-called stochastic convolution process by the following formula:

𝔖(ξ)(t) := ∫_0^t ∫_Z e^{−(t−s)A} ξ(s, z) ñ(dz, ds),  t ≥ 0.  (7.1)
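For a noise of finite intensity the convolution (7.1) reduces to a finite sum over the jumps (plus a compensator term, which has zero mean for a centred jump distribution). A scalar sketch in which the semigroup e^{−tA} is replaced by e^{−λt} with an assumed λ > 0 — all choices here are illustrative assumptions, not the paper's setting:

```python
import numpy as np

def stoch_conv(t, lam, jump_times, jump_sizes):
    """Jump part of the stochastic convolution (7.1) for the scalar
    semigroup e^{-lam s}: sum over T_i <= t of e^{-lam (t - T_i)} * xi_i."""
    mask = jump_times <= t
    return float(np.sum(np.exp(-lam * (t - jump_times[mask])) * jump_sizes[mask]))

# Assumed toy data: five jumps on [0, 5] with centred Gaussian sizes.
rng = np.random.default_rng(1)
jump_times = np.sort(rng.uniform(0.0, 5.0, 5))
jump_sizes = rng.standard_normal(5)
val = stoch_conv(5.0, 2.0, jump_times, jump_sizes)
```

The mild-solution identity (5.18) is exactly u = (deterministic part) + such a convolution with ξ = G(u).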

We will frequently use the real interpolation spaces (E, D(A^m))_{θ,q} = D_{A^m}(θ, q), for θ ∈ (0, 1), q ∈ [1, ∞) and m ∈ ℕ, defined by

D_A(θ, q) := {x ∈ E : |x|_{D_A(θ,q)} < ∞},
|x|^q_{D_A(θ,q)} = |x|^q_E + ∫_0^∞ |t^{1−θ} A e^{−tA} x|^q dt/t,  x ∈ E.

Let us fix M > 0 and T > 0. Throughout this section we denote by B_M(E) the set of all F-progressively measurable processes ξ satisfying

∫_Z |A^{(ρ−1)/p} ξ(s, z)|^p_E ν(dz) ≤ M^p,  for Leb ⊗ P-a.e. (s, ω) ∈ [0,T] × Ω.  (7.2)

Let us recall the following two important results. A proof of the first one can be found in [13, Theorem 2.1].

Theorem 7.1 For every θ ∈ (0, 1 − 1/p) there exists a constant C = C_θ(E) such that for any D_A(θ, p)-valued progressively measurable process ξ the following inequality holds for every T ≥ 0:

E ∫_0^T |𝔖(ξ)(t)|^p_{D_A(θ+1/p, p)} dt ≤ C E ∫_0^T ∫_Z |ξ(t, z)|^p_{D_A(θ,p)} ν(dz) dt.  (7.3)

Before proceeding to the statement and the proof of the second result, let us state the following important remark.

Remark Since, by part 1(v) of Assumption 1, A satisfies the BIP property, it follows from [72, Theorem 1.3.3] and [72, Theorem 1.15.3] that, for δ, θ ∈ (0, 1) with δ < θ, the following embedding is continuous:

D_A(θ, q) ⊂ D(A^δ).  (7.4)

Lemma 7.2 If ρ′ ∈ (0, ρ/p), then there exists a constant C₁ > 0 such that

E ‖A^{ρ′} 𝔖(ξ)‖^p_{L^p(0,T;E)} ≤ C₁ T^{2−pδ} M^p,  ξ ∈ B_M(E),

where δ := ρ′ + (1 − ρ)/p.

Proof of Lemma 7.2 Let us fix ρ′ ∈ (0, ρ/p) and put δ = ρ′ + (1 − ρ)/p ∈ (0, 1/p). Let us also choose ξ ∈ B_M(E) and put u = 𝔖(ξ). Since, by [9], a type p UMD Banach space is a martingale type p Banach space, by Eq. 7.2 and Theorem 2.4 we infer that the stochastic convolution process ∫_0^t ∫_Z e^{−(t−s)A} A^{(ρ−1)/p} ξ(s, z) ñ(dz, ds) is well defined. Moreover, by Theorem 7.1 and Eq. 7.4 it takes values in L^p(0, T; D(A^δ)) almost surely. Since

u(t) = A^{(1−ρ)/p} ∫_0^t ∫_Z e^{−(t−s)A} [A^{(ρ−1)/p} ξ(s, z)] ñ(dz, ds),  t ∈ [0,T],

and δ = ρ′ + (1 − ρ)/p, we infer that u belongs to L^p(0, T; D(A^{ρ′})) almost surely. Furthermore,

E |A^{ρ′} u(t)|^p = E |∫_0^t ∫_Z A^δ e^{−(t−s)A} A^{(ρ−1)/p} ξ(s, z) ñ(dz, ds)|^p
 ≤ C E ∫_0^t ∫_Z ‖A^δ e^{−(t−s)A}‖^p_{ℒ(E)} |A^{(ρ−1)/p} ξ(s, z)|^p_E ν(dz) ds
 ≤ C M^p ∫_0^t (t − s)^{−pδ} ds = C M^p t^{1−pδ}/(1 − pδ).

Since pδ ∈ (0, 1), it follows from the last estimate and the Fubini theorem that

E ‖A^{ρ′} u‖^p_{L^p(0,T;E)} ≤ C M^p T^{2−pδ}/((1 − pδ)(2 − pδ)).  (7.5)

Thus the proof of Lemma 7.2 is complete. □

Lemma 7.3 Let the assumptions of Lemma 7.2 hold. Let ρ′ ∈ (0, ρ/p) and put B = D(A^{ρ′−1}). If ξ ∈ B_M(E), then

(i) there exists a constant C₂ = C₂(T) > 0 such that

E sup_{0≤t≤T} |A^{ρ′−1} 𝔖(ξ)(t)|^p ≤ C₂ M^p,

(ii) the process u = 𝔖(ξ) admits a B-valued cadlag modification (which will still be denoted by 𝔖(ξ)).

(iii) If in addition the operator −A generates a contraction type semigroup on the space D(A^{(ρ−1)/p}), then parts (i)–(ii) are true for B = D(A^{(ρ−1)/p}).

Proof Let us fix ξ ∈ B_M(E) and set u = 𝔖(ξ) and f = A^{(ρ−1)/p} ξ. By Eq. 7.2, we have

∫_Z |f(s, z)|^p ν(dz) ≤ M^p,  for a.a. s ∈ [0,T].

Hence, it is known, see [77, Lemma 3.3], that the process v defined by

v(t) := ∫_0^t ∫_Z e^{−(t−s)A} f(s, z) ñ(dz, ds),  t ∈ [0,T],

is the unique strong solution to the problem

dv(t) + Av(t) dt = ∫_Z f(t, z) ñ(dz, dt),  with v(0) = 0,

and hence satisfies

v(t) + ∫_0^t Av(s) ds = ∫_0^t ∫_Z f(s, z) ñ(dz, ds),  t ∈ [0,T].  (7.6)

Let δ := (1 − ρ)/p + ρ′ < 1 and β := 1 − δ = 1 − (1 − ρ)/p − ρ′ > 0. By applying A^{−β} to both sides of the identity (7.6) and by noticing that A^{−1}u = A^{(1−ρ)/p − 1} v, we infer that for t ∈ [0,T]

A^{ρ′−1} u(t) = ∫_0^t A^{ρ′} u(s) ds + A^{−β} ∫_0^t ∫_Z A^{(ρ−1)/p} ξ(s, z) ñ(dz, ds).  (7.7)

Using the inequality (2.14) in Theorem 2.4 we obtain

E sup_{0≤t≤T} |A^{−β} ∫_0^t ∫_Z A^{(ρ−1)/p} ξ(s, z) ñ(dz, ds)|^p
 ≤ C ‖A^{−β}‖^p_{ℒ(E)} E ∫_0^T ∫_Z |A^{(ρ−1)/p} ξ(s, z)|^p ν(dz) ds ≤ C T M^p.

Next, applying the Hölder inequality twice and invoking inequality (7.5), we get

E sup_{0≤t≤T} |∫_0^t A^{ρ′} u(s) ds|^p ≤ T^{p−1} E ∫_0^T |A^{ρ′} u(t)|^p dt ≤ C T^{p+1−pδ} M^p.

This completes the proof of (i) with C₂(T) = C(T + T^{p+1−pδ}). Since, by [76, Theorem 4.1], the function

Θ : C([0,T]; E) × D([0,T]; E) ∋ (x, y) ↦ x + y ∈ D([0,T]; E)

is continuous, in view of the identity (7.7) and Theorem 2.4 we easily deduce that the process u has a D(A^{ρ′−1})-valued cadlag modification. This completes the proof of part (ii) of our lemma.

Since, by [9], any UMD Banach space of type p is also a martingale type p Banach space, and ξ satisfies (7.2), part (iii) is easily deduced by applying [77, Corollary 5.1]. □

The next lemma provides estimates of 𝔖(ξ), ξ ∈ B_M(E), in the Besov–Slobodetskii spaces W^{α,p}(0,T;E); see the definition on page 5.

Lemma 7.4 Let the assumptions of parts (i)–(ii) of Lemma 7.3 hold, and assume that α ∈ (0, ρ/p). Then there exists a number C₃ > 0 such that

E ‖𝔖(ξ)‖^p_{W^{α,p}(0,T;E)} ≤ C₃ M^p,  ξ ∈ B_M(E).

Proof Let us fix ξ ∈ B_M(E) and put u = 𝔖(ξ). Let us fix α ∈ (0, ρ/p) and choose an auxiliary ρ′ ∈ (α, ρ/p). In view of Lemma 7.2 and the definition (2.3) it is sufficient to estimate the expectation of the seminorm (2.3) of u. For this aim, without loss of generality, we can take s < t ∈ [0,T]. As in the Gaussian case we have

u(t) − u(s) = 𝔖₁(t, s) + 𝔖₂(t, s),
𝔖₁(t, s) = ∫_s^t ∫_Z e^{−(t−r)A} ξ(r, z) ñ(dz, dr),  𝔖₂(t, s) = (e^{−(t−s)A} − I) u(s).

In view of the definition (2.3) it is sufficient to prove that there exist two positive numbers C₃₁, C₃₂ such that

S₁ := E ∫_0^T ∫_0^t |𝔖₁(t, s)|^p / |t − s|^{1+αp} ds dt ≤ C₃₁ M^p,
S₂ := E ∫_0^T ∫_0^t |𝔖₂(t, s)|^p / |t − s|^{1+αp} ds dt ≤ C₃₂ M^p.

Let us begin with estimating S₁. By using the Fubini theorem, [13, Corollary C.2], the estimate ‖A^{(1−ρ)/p} e^{−(t−r)A}‖^p_{ℒ(E)} ≤ C (t − r)^{ρ−1} and the definition (7.2) of the class B_M(E), we infer that

S₁ ≤ C E ∫_0^T ∫_0^t (ds dt / |t − s|^{1+αp}) ∫_s^t ∫_Z ‖A^{(1−ρ)/p} e^{−(t−r)A}‖^p_{ℒ(E)} |A^{(ρ−1)/p} ξ(r, z)|^p ν(dz) dr
 ≤ C M^p ∫_0^T ∫_0^t (ds dt / |t − s|^{1+αp}) ∫_0^{t−s} r^{ρ−1} dr
 ≤ C M^p ∫_0^T ∫_0^t ds dt / |t − s|^{1+(α−ρ/p)p} ≤ C M^p T^{1+p(ρ/p−α)}.

In order to study the term S₂ let us recall, see [63, Theorem II.6.13], that there exists a C > 0 such that

‖A^{−γ} (e^{−hA} − I)‖_{ℒ(E)} ≤ C h^γ,  h > 0.  (7.8)

Therefore, by applying the Young inequality for convolutions we infer that

S₂ ≤ C E ∫_0^T ∫_0^t ‖A^{−ρ′}(e^{−(t−s)A} − I)‖^p_{ℒ(E)} |A^{ρ′} u(s)|^p / |t − s|^{1+pα} ds dt
 ≤ C E ∫_0^T |A^{ρ′} u(s)|^p ( ∫_0^T ‖A^{−ρ′}(e^{−τA} − I)‖^p_{ℒ(E)} / |τ|^{1+pα} dτ ) ds
 ≤ C E ‖A^{ρ′} u‖^p_{L^p(0,T;E)} ∫_0^T τ^{−1+p(ρ′−α)} dτ
 ≤ C E ‖A^{ρ′} u‖^p_{L^p(0,T;E)} T^{p(ρ′−α)}.

Invoking Lemma 7.2 and the estimate for S₁ concludes the proof of the lemma. □

Remark 7.5 Since α ∈ (0, ρ/p), we cannot infer from the above lemma that the process 𝔖(ξ), ξ ∈ B_M(E), has an E-valued cadlag modification. It is known, see for instance [48], that if E is a Hilbert space, the driving Levy process L lives in E and {e^{−tA} : t ≥ 0} is a contraction type C₀-semigroup on E, then the stochastic convolution process ∫_0^t e^{−(t−s)A} dL(s) has an E-valued cadlag modification. If E is a Banach space, then it is sufficient to assume that either E is a p-smooth Banach space or the semigroup {e^{−tA} : t ≥ 0} on E is analytic, see [77]. However, we should note that in our framework we do not have such a nice situation. Indeed, roughly speaking, our semigroup is analytic and contractive on a martingale type p, p ∈ (1, 2], Banach space E and our noise lives in a larger space than E (say D(A^{−a}), a > 0), and in general, even if −A is the infinitesimal generator of an analytic semigroup of contraction type, it is not known whether the stochastic convolution 𝔖(ξ), ξ ∈ M^p(0, T; L^p(Z, ν; E)), has a cadlag modification in E or in a smaller space than E, say D(A^γ), γ > 0. This is an open question even in the case when E is a Hilbert space, see for instance [66].

The next three lemmata concern the tightness of the family of the laws of {𝔖(ξ) : ξ ∈ B_M(E)}.

Lemma 7.6 Let the assumptions of parts (i)–(ii) of Lemma 7.3 hold. Then the family of the laws of {𝔖(ξ) : ξ ∈ B_M(E)} is tight on L^p(0, T; E).

Proof As in Lemma 7.3 we choose an auxiliary ρ′ ∈ (0, ρ/p) and put B = D(A^{ρ′−1}). Let us also put Y = D(A^{ρ′}). Since, by Assumption 1, A has compact resolvent, it follows from the combination of [35, Proposition 5.8], [73, Theorem 1.15.3, pp. 103] and [73, Theorem 1.16.4-2, pp. 117] that the embeddings Y ↪ E and E ↪ B are compact. Thanks to Lemma 7.2 and Lemma 7.4, {𝔖(ξ) : ξ ∈ B_M(E)} is uniformly bounded in M^p(0, T; Y) ∩ L^p(Ω; W^{α,p}(0, T; E)). Hence, since the embedding

W^{α,p}(0, T; E) ∩ L^p(0, T; D(A^{ρ′})) ↪ L^p(0, T; E)

is compact, see [40, Step 1 of Proof of Theorem 2.1], it follows from the Chebyshev inequality and [40, Theorem 2.1] that the laws of {𝔖(ξ) : ξ ∈ B_M(E)} are tight on L^p(0, T; E). □

Lemma 7.7 Let the assumptions of parts (i)–(ii) of Lemma 7.3 hold. Then the family of the laws of {𝔖(ξ) : ξ ∈ B_M(E)} is tight on D([0,T]; D(A^{ρ′−1})) for any ρ′ ∈ (0, ρ/p).

For the proof of this lemma we need the following general result.

Lemma 7.8 Assume that p ∈ (1, 2] and T > 0. Assume that E and Y are two martingale type p Banach spaces such that the embedding E ↪ Y is compact. For every ξ ∈ B_M(E) let the process v = Ψ(ξ) be defined by

v(t) = ∫_0^t ∫_Z ξ(s, z) ñ(dz, ds),  t ∈ [0,T].

Then the family of the laws of {v = Ψ(ξ) : ξ ∈ B_M(E)} on D([0,T]; Y) is tight.

Proof We need to check items (a) and (b) of Corollary D.2.

By the maximal inequality (2.16) in Corollary 2.6 (see also [13, Corollary C.2]) there exists C > 0 such that for any ξ ∈ B_M(E) we have

E sup_{s∈[0,T]} |v(s)|^p ≤ C M^p.

Let ε > 0 and K_ε = {y ∈ E : |y| ≤ (C ε^{−1})^{1/p} M}. It follows easily from the Chebyshev inequality that

P(v(t) ∉ K_ε for some t ∈ [0,T]) ≤ [(C ε^{−1})^{1/p} M]^{−p} E sup_{t∈[0,T]} |v(t)|^p ≤ ε.

Corollary D.2-(a) follows from this inequality and the compactness of the embedding E ↪ Y.

Next, let us fix 0 ≤ σ < t ≤ T. Then by [13, Corollary C.2] and the Jensen inequality

E sup_{τ∈[σ,t]} |v(τ) − v(σ)|_Y = E sup_{τ∈[σ,t]} |∫_σ^τ ∫_Z ξ(s, z) ñ(dz, ds)|_Y
 ≤ C (E ∫_σ^t ∫_Z |ξ(s, z)|^p ν(dz) ds)^{1/p} ≤ C M (t − σ)^{1/p}.

Thus we can apply Corollary D.2-(b), from which the sought result follows. □

Proof of Lemma 7.7 Let us fix ξ ∈ B_M(E) and put u = 𝔖(ξ). Let us fix auxiliary numbers ρ′ ∈ (0, ρ/p) and γ ∈ (ρ′, ρ/p). Let us put β = 1 − (1 − ρ)/p − ρ′ > 0 and rewrite identity (7.7) as follows:

A^{ρ′−1} u(t) = A^{ρ′−γ} ∫_0^t A^{γ} u(s) ds + A^{−β} ∫_0^t ∫_Z A^{(ρ−1)/p} ξ(s, z) ñ(dz, ds)
 = v₁(t) + v₂(t),  t ∈ [0,T].

It follows from Lemma 7.8 that the family of laws of {v₂(ξ) : ξ ∈ B_M(E)} is tight on D([0,T]; E).

On the other hand, since γ < ρ/p, by Lemma 7.2 there exists C > 0 such that

E ∫_0^T |A^{γ} u(s)|^p ds ≤ C M^p.

Since ρ′ − γ < 0, the map A^{ρ′−γ} : E → E is compact. Therefore, since v₁(t) = A^{ρ′−γ} ∫_0^t A^{γ} u(s) ds, we infer that the family of laws of {v₁(ξ) : ξ ∈ B_M(E)} is tight on C([0,T]; E). Hence we easily conclude the proof since, by [76, Theorem 4.1], the function

Θ : C([0,T]; E) × D([0,T]; E) ∋ (x, y) ↦ x + y ∈ D([0,T]; E)

is continuous. □

We also need the following auxiliary result.

Lemma 7.9 Assume that all the assumptions of Lemma 7.3 are satisfied. Then, for every q ∈ (p, p/(1−ρ)) and every r ∈ (1, p) there exists C > 0 such that

E |𝔖(ξ)|^r_{L^q(0,T;E)} ≤ C M^r,  ξ ∈ B_M(E).  (7.9)

Moreover, the family of the laws of {𝔖(ξ) : ξ ∈ B_M(E)} on L^q(0, T; E) is tight.

Proof of Lemma 7.9 Let us fix q ∈ (p, p/(1−ρ)). Since q(1−ρ)/p < 1 and q > p, we can find ρ′ ∈ (0, ρ/p) such that q(1−ρ)/p = 1 − p(ρ/p − ρ′). Let us next put

θ = (1 − ρ)/(1 − ρ + pρ′) = p/q ∈ (0, 1),

and let us define, as in the proof of Lemma 7.6, Y = D(A^{ρ′}). We also put B = D(A^{(ρ−1)/p}). Then by the reiteration property of the complex interpolation we have

E = [B, Y]_θ,

and therefore we get, see [6],

|u|_E ≤ |u|^{1−θ}_B |u|^{θ}_Y,  u ∈ Y.

Next, let us take an arbitrary r ∈ (1, p) and put s = q/r, 1/s + 1/s′ = 1. Then srθ = p and, since r < p, rs′(1−θ) < p. Let us choose an auxiliary c > 1 such that c r s′ (1−θ) = p. Let us fix ξ ∈ B_M(E) and put u = 𝔖(ξ). Then, by the Hölder and Jensen inequalities,

E |u|^r_{L^q(0,T;E)} ≤ E[ ‖u‖^{rθ}_{L^p(0,T;Y)} ‖u‖^{r(1−θ)}_{L^∞(0,T;B)} ]
 ≤ (E ‖u‖^{srθ}_{L^p(0,T;Y)})^{1/s} (E ‖u‖^{s′r(1−θ)}_{L^∞(0,T;B)})^{1/s′}
 ≤ (E ‖u‖^p_{L^p(0,T;Y)})^{r/q} (E ‖u‖^p_{L^∞(0,T;B)})^{1/(c s′)} ≤ C₄ M^r,

where Lemma 7.2 and Lemma 7.3-(iii) were used to obtain the last inequality. The proof of the first part is complete.

To prove the second part we observe that, by the same argument as above, given q and r we can find ε > 0 and C > 0 such that

E |𝔖(ξ)|^r_{L^q(0,T;D(A^ε))} ≤ C M^r,  ξ ∈ B_M(E).  (7.10)

Moreover, by Lemma 7.4, for any fixed α ∈ (0, ρ/p) we can find C₃ > 0 such that

E ‖𝔖(ξ)‖^p_{W^{α,p}(0,T;E)} ≤ C₃ M^p,  ξ ∈ B_M(E).

Since the embedding D(A^ε) ↪ E is compact, the embedding

L^q(0, T; D(A^ε)) ∩ W^{α,p}(0, T; E) ↪ L^q(0, T; E)

is compact. Hence the second part of the lemma follows. □

Next we formulate an analogous result in the case when only the assumptions of parts (i) and (ii) of Lemma 7.3 are satisfied. The proof is similar to that of the last lemma, with the difference that instead of taking B = D(A^{(ρ−1)/p}) we need to take B = D(A^{ρ′−1}).

Lemma 7.10 Let the assumptions of parts (i) and (ii) of Lemma 7.3 be satisfied with ρ′ ∈ (0, ρ/p). Then, for every q ∈ (p, p/(1−ρ′)) and every r ∈ (1, p), there exists C > 0 such that

E |𝔖(ξ)|^r_{L^q(0,T;E)} ≤ C M^r,  ξ ∈ B_M(E).  (7.11)

Moreover, the family of the laws of {𝔖(ξ) : ξ ∈ B_M(E)} on L^q(0, T; E) is tight.

Proof of Lemma 7.10 Let q ∈ (p, p/(1−ρ′)), ρ′ ∈ (0, ρ/p) and define θ = p/q. As in the proof of Lemma 7.9 we let Y = D(A^{ρ′}) and B = D(A^{ρ′−1}). Then by the reiteration property of the complex interpolation we have the continuous embedding

[B, Y]_θ = D(A^{θ+ρ′−1}) ⊂ E.

Owing to Lemma 7.2 and parts (i) and (ii) of Lemma 7.3 we can argue exactly as in the proof of Lemma 7.9 and show that for any r ∈ (1, p) we have

E |𝔖(ξ)|^r_{L^q(0,T;D(A^{θ+ρ′−1}))} ≤ C M^r,  ξ ∈ B_M(E),  (7.12)

which implies inequality (7.11). Thanks to Eq. 7.12 we can again use the same argument as in the proof of Lemma 7.9 to deduce that the family of the laws of {𝔖(ξ) : ξ ∈ B_M(E)} on L^q(0, T; E) is tight. □

8 Proof of Theorem 3.4

We begin the proof of Theorem 3.4 by introducing a sequence of approximating processes. Let us fix for the whole section a number T > 0. Consider a sequence (x_n)_{n∈ℕ} ⊂ E such that x_n → u₀ strongly in D(A^{(ρ−1)/p}) as n → ∞. Define a function φ_n : [0, ∞) → [0, ∞) by φ_n(s) = k 2^{−n} if k ∈ ℕ₀ and k 2^{−n} ≤ s < (k+1) 2^{−n}, i.e., φ_n(s) = 2^{−n}[2^n s], s ≥ 0, where [t] is the integer part of t ∈ ℝ. Let us define a sequence (u_n)_{n∈ℕ} of adapted E-valued processes by

u_n(t) = e^{−tA} x_n + ∫_0^t e^{−(t−s)A} F(s, ū_n(s)) ds
 + ∫_0^t ∫_Z e^{−(t−s)A} G(s, ū_n(s); z) ñ(dz, ds),  t ∈ [0,T],  (8.1)

where ū_n is defined by

ū_n(s) := x_n,  if s ∈ [0, 2^{−n}),
ū_n(s) := ⨍_{[φ_n(s)−2^{−n}, φ_n(s))} u_n(r) dr,  if s ≥ 2^{−n},  (8.2)

and where we have used the following shorthand notation:

⨍_A f(t) dt := (1/Leb(A)) ∫_A f(t) dt,  A ∈ B([0,T]).

(Here Leb denotes the Lebesgue measure.) The E-valued process ū_n is piecewise constant and adapted, hence progressively measurable. Between the grid points, Eq. 8.1 is linear; therefore, u_n is well defined for all n ∈ ℕ.
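The dyadic projection φ_n and the averaging in (8.2) are elementary to sketch numerically (a plain illustration with a fine grid standing in for the exact time integral; none of this is the paper's own code):

```python
import numpy as np

def phi_n(n, s):
    """phi_n(s) = 2^{-n} * [2^n s], the left dyadic grid point of order n."""
    return np.floor(2.0 ** n * s) / 2.0 ** n

def bar_u(n, s, grid, u):
    """Sketch of (8.2) for s >= 2^{-n}: average of u over the dyadic
    interval [phi_n(s) - 2^{-n}, phi_n(s)), approximated on a grid."""
    hi = phi_n(n, s)
    lo = hi - 2.0 ** (-n)
    mask = (grid >= lo) & (grid < hi)
    return float(u[mask].mean())

grid = np.linspace(0.0, 1.0, 1001)
u = np.ones_like(grid)   # a constant path is reproduced exactly by averaging
```

Since ū_n(s) depends only on u_n over the previous dyadic interval, (8.1) is an explicit scheme between grid points, which is why u_n is well defined for every n.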

Next we will prove certain uniform estimates for the sequence (u_n)_{n∈ℕ}. Let us recall that F is a bounded nonlinear map defined on [0,T] × E and taking values in D(A^{ρ−1}); furthermore, it is separately continuous.

Proposition 8.1 For any α ∈ (0, ρ/p) and ρ′ ∈ (0, ρ/p), there exists a constant C such that the following inequalities hold:

sup_{n∈ℕ} E ‖A^{ρ′} u_n‖^p_{L^p(0,T;E)} ≤ C,  (8.3)
sup_{n∈ℕ} E ‖A^{ρ′} ū_n‖^p_{L^p(0,T;E)} ≤ C,  (8.4)
sup_{n∈ℕ} E |u_n|^p_{W^{α,p}(0,T;E)} ≤ C.  (8.5)

Proof Without loss of generality we take T = 1. For each n ∈ ℕ we divide the interval [0,1] into small intervals of length 2^{−n} each by setting I_k = [k 2^{−n}, (k+1) 2^{−n}), k = 0, …, 2^n − 1. We also put J₀ = I₀ and J_k = ⋃_{i=0}^{k} I_i, k = 1, …, 2^n − 1. Define the sequences of processes {u_n^k : k = 0, …, 2^n − 1} and {ū_n^k : k = 0, …, 2^n − 1} inductively by

u_n^0(t) = e^{−tA} x_n + ∫_0^t e^{−(t−s)A} F(s, x_n) ds + ∫_0^t ∫_Z e^{−(t−s)A} G(s, x_n; z) ñ(dz, ds),  t ∈ I₀,
ū_n^0(t) = x_n,  t ∈ I₀;

u_n^k(t) = e^{−tA} x_n + ∫_0^t e^{−(t−s)A} F(s, ū_n^k(s)) ds + ∫_0^t ∫_Z e^{−(t−s)A} G(s, ū_n^k(s); z) ñ(dz, ds)
 =: g_n(t) + y_n^k(t) + z_n^k(t),  if t ∈ J_k,  (8.6)

ū_n^k(t) = ⨍_{I_{k−1}} u_n^{k−1}(s) ds,  if t ∈ I_k,  k = 1, …, 2^n − 1.

Note that, by definition, u_n^k is equal to the restriction of u_n to J_k and ū_n^k = ū_n on I_k. Hence, to prove our proposition it is sufficient to check that the estimates (8.3)–(8.5) hold, uniformly w.r.t. k, on J_k for u_n^k and ū_n^k with k = 0, …, 2^n − 1. On the interval J₀ we have

u_n^0(t) = e^{−tA} x_n + ∫_0^t e^{−(t−s)A} F(s, x_n) ds + ∫_0^t ∫_Z e^{−(t−s)A} G(s, x_n; z) ñ(dz, ds)
 = g_n(t) + y_n^0(t) + z_n^0(t).

First, it follows from [28, Theorems 2 and 7] that there exists a constant C > 0 such that for any n ∈ ℕ

‖A^{ρ′} g_n‖^p_{L^p(J₀;E)} + E ‖A^{ρ′} y_n^0‖^p_{L^p(J₀;E)} ≤ C |x_n|^p + C E ‖A^{ρ−1} F‖^p_{L^p(J₀;E)} ≤ C.

Secondly, we derive from [28, Theorems 7 and 19] that for any α ∈ (0, ρ/p) there exists a constant C > 0 such that for any n ∈ ℕ

‖g_n‖^p_{W^{α,p}(J₀;E)} + ‖y_n^0‖^p_{W^{α,p}(J₀;E)} ≤ C |x_n|^p + C E ‖A^{ρ−1} F‖^p_{L^p(J₀;E)} ≤ C.

Hence, combining these two estimates with Lemma 7.2, Lemma 7.4 and Proposition E.1, we infer that (8.3)-(8.4) hold on $J_0$ for $u_n^0$ and $\bar u_n^0$. Using the same approach we can prove

by induction that for each $\alpha$ and $\rho'$ as above there exists a constant $C>0$ such that for any $n\in\mathbb N$ and $k\in\{0,\dots,2^n-1\}$ we have

\[ \mathbb E\big[\|A^{\rho'}g_n\|^p_{L^p(J_k;E)} + \|A^{\rho'}y^k_n\|^p_{L^p(J_k;E)}\big] \le C|x_n|^p + C\,\mathbb E\,\|A^{\rho-1}F(\cdot,\bar u_n^k)\|^p_{L^p(J_k;E)} \le C, \]

\[ \mathbb E\big[\|g_n\|^p_{W^{\alpha,p}(J_k;E)} + \|y^k_n\|^p_{W^{\alpha,p}(J_k;E)}\big] \le C|x_n|^p + C\,\mathbb E\,\|A^{\rho-1}F(\cdot,\bar u_n^k)\|^p_{L^p(J_k;E)} \le C. \]

With the same argument as above we check that (8.3)-(8.5) hold, uniformly with respect to $k$, on each $J_k$, $k = 0,\dots,2^n-1$. With this fact and the identities $u_n = u_n^{2^n-1}$ and $\bar u_n = \bar u_n^{2^n-1}$, we conclude the proof of our proposition. □

We will also need the following result.

Proposition 8.2 Suppose that $(x_n)_{n\in\mathbb N}\subset E$ is a sequence such that $|A^{\rho-\frac1p}[x_n-u_0]| \to 0$ as $n\to\infty$. Then the sequence $(g_n)_{n\in\mathbb N}$ defined by Eq. 8.6 is convergent (and hence the set $\{g_n : n\in\mathbb N\}$ is pre-compact) in the Banach space $C([0,T];D(A^{\rho-\frac1p})) \cap L^p(0,T;E)$.

Proof of Proposition 8.2 The convergence in $C([0,T];D(A^{\rho-\frac1p}))$ is obvious. From [28, Theorem 2] we infer that for any $\theta\in[1-\tfrac1p,\rho]$ there exists some $C>0$ such that

\[ \|A^\theta[g_n - e^{-tA}u_0]\|^p_{L^p(0,T;E)} \le C\,\big|A^{\rho-\frac1p}[x_n-u_0]\big|^p, \tag{8.7} \]

from which we derive the convergence in $L^p(0,T;E)$. □
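To see how the convergence in $L^p(0,T;E)$ can be read off, note that $g_n(t)-e^{-tA}u_0 = e^{-tA}(x_n-u_0)$. The following one-line computation is only a sketch, under two standing assumptions of this framework, namely that $(e^{-tA})_{t\ge0}$ is of contraction type and that $0$ belongs to the resolvent set of $A$, so that the fractional power $A^{-(\rho-\frac1p)}$ is a bounded operator on $E$:

```latex
\|g_n - e^{-\cdot A}u_0\|_{L^p(0,T;E)}^p
  = \int_0^T \big|e^{-tA}(x_n-u_0)\big|^p\,dt
  \le T\,|x_n-u_0|^p
  \le T\,\big\|A^{-(\rho-\frac1p)}\big\|^p_{\mathcal L(E,E)}\,
        \big|A^{\rho-\frac1p}[x_n-u_0]\big|^p
  \;\longrightarrow\; 0 .
```

The hypothesis of Proposition 8.2 is exactly the convergence of the last factor.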

After these preliminary claims we are now ready for the proof of Theorem 3.4, which will be divided into several steps. Before we go further, let us define a sequence of Poisson random measures $(\eta_n)_{n\in\mathbb N}$ by putting $\eta_n = \eta$ for all $n\in\mathbb N$.

Step (I) The family of the laws of $((u_n,\eta_n))_{n\in\mathbb N}$ is tight on $[L^p(0,T;E)\cap\mathbb D([0,T];D(A^{\rho'-1}))]\times M_{\bar{\mathbb N}}(Z\times[0,T])$, for any $\rho'\in(0,\rho)$.

Proof To simplify notation we set $B_0 = D(A^{\rho'-1})$ for $\rho'\in(0,\rho)$. Define three functions $f_n$, $g_n$ and $v_n$ by

\[ f_n(t) = F(t,\bar u_n(t)), \quad t\in[0,T], \tag{8.8} \]
\[ g_n(t;z) = G(t,\bar u_n(t);z), \quad t\in[0,T],\ z\in Z, \tag{8.9} \]
\[ v_n(t) = \int_0^t\!\!\int_Z e^{-(t-s)A}G(s,\bar u_n(s);z)\,\tilde\eta(dz,ds). \tag{8.10} \]

We argue exactly as in [10]. We recall that the space $M^p(0,T;E)$ and the operators $A$ and $\Lambda$ are defined on pages 9 and 7, respectively. Since, by estimates (8.4) and Assumption 5, the family $(A^{\rho-1}f_n)_{n\in\mathbb N}$ is bounded in $M^p(0,T;E)$ and $A\Lambda^{-1}$ is bounded on $L^p(0,T;E)$, it follows from [10, Theorem 2.6] and Lemma 2.2 that $\Lambda^{-1}f_n = \Lambda^{-\rho}(A\Lambda^{-1})^{1-\rho}A^{\rho-1}f_n$ is tight on $L^p(0,T;E)\cap C([0,T];E)$. This fact, the compact embedding $E\subset B_0$ and the continuity of the embedding

\[ C([0,T];B_0) \subset \mathbb D([0,T];B_0) \]

imply that $\Lambda^{-1}f_n$ is tight on $L^p(0,T;E)\cap\mathbb D([0,T];B_0)$. Next, by estimates (8.4), Lemma 7.6 and Lemma 7.7, we infer that the laws of the family $(v_n)_{n\in\mathbb N}$ are tight on $L^p(0,T;E)\cap\mathbb D([0,T];B_0)$. Finally, from Proposition 8.2 it follows that the family of functions $\{e^{-\cdot A}x_n : n\in\mathbb N\}$ is pre-compact in $L^p(0,T;E)\cap\mathbb D([0,T];B_0)$. Since

\[ u_n = v_n + \Lambda^{-1}f_n + e^{-\cdot A}x_n, \quad n\in\mathbb N, \]

we easily conclude that the laws of the family $(u_n)_{n\in\mathbb N}$ are tight on $L^p(0,T;E)\cap\mathbb D([0,T];B_0)$. Since $M_{\bar{\mathbb N}}(Z\times[0,T])$ is a separable metric space, by [62, Theorem 3.2] the family of the laws of $(\eta_n)_{n\in\mathbb N}$ is tight on $M_{\bar{\mathbb N}}(Z\times[0,T])$. Consequently, the family of the laws of $((u_n,\eta_n))_{n\in\mathbb N}$ is tight on $\mathcal Z_T$, where

\[ \mathcal Z_T = [L^p(0,T;E)\cap\mathbb D([0,T];B_0)]\times M_{\bar{\mathbb N}}(Z\times[0,T]). \tag{8.11} \]

□

Remark Let us observe that the space $\mathcal Z_T$ defined above in Eq. 8.11 differs from the space $\mathcal X_T$ defined earlier in Eq. 3.22. From Step (I) and the Prokhorov Theorem (see, for instance, [24, Theorem 2.3]) we deduce that there exist a subsequence of $((u_n,\eta_n))_{n\in\mathbb N}$, still denoted by $((u_n,\eta_n))_{n\in\mathbb N}$, and a Borel probability measure $\mu_*$ on $\mathcal Z_T$ such that $\mathcal L(u_n,\eta_n)\to\mu_*$ weakly. By Theorem C.1 there exist a probability space $(\hat\Omega,\hat{\mathcal F},\hat{\mathbb P})$ and a sequence $(\hat u_n,\hat\eta_n)_{n\in\mathbb N}$ of $\mathcal Z_T$-valued random variables such that

\[ \text{the laws of } (u_n,\eta_n) \text{ and } (\hat u_n,\hat\eta_n) \text{ on } \mathcal Z_T \text{ are equal}, \tag{8.12} \]

and there exists a $\mathcal Z_T$-valued random variable $(u_*,\eta_*)$ on $(\hat\Omega,\hat{\mathcal F},\hat{\mathbb P})$ with $\mathcal L((u_*,\eta_*)) = \mu_*$, such that $\hat{\mathbb P}$-a.s.

\[ (\hat u_n,\hat\eta_n)\to(u_*,\eta_*) \text{ in } \mathcal Z_T, \tag{8.13} \]

and $\hat\eta_n = \eta_*$ for all $n\in\mathbb N$. The sequence $(\hat u_n)_{n\in\mathbb N}$ has properties similar to those of the original sequence $(u_n)_{n\in\mathbb N}$. Those we will use are stated in part (i) of the next step.

Step (II) The following hold:

(i) $\sup_{n\in\mathbb N}\|\hat u_n\|_{L^p(\hat\Omega\times[0,T];E)} < \infty$, and

(ii) for any $r\in(1,p)$ we have

\[ \lim_{n\to\infty}\hat{\mathbb E}\,\|\hat u_n - u_*\|^r_{L^p(0,T;E)} = 0. \]

Proof Let us begin with the observation that, in view of Eq. 8.12, for any $n\in\mathbb N$ the laws of $u_n$ and $\hat u_n$ on $L^p(0,T;E)$ are identical. Hence,

\[ \|\hat u_n\|_{L^p(\hat\Omega\times[0,T];E)} = \|u_n\|_{L^p(\Omega\times[0,T];E)}, \]

and part (i) easily follows from estimate (8.3).

Let us fix $r\in(1,p)$. Since $u_*$ is $\mathcal Z_T$-valued, it follows from part (i) that the sequence $\|\hat u_n - u_*\|^r_{L^p(0,T;E)}$ is $\hat{\mathbb P}$-uniformly integrable. Since, by Eqs. 8.13 and 8.11, $\|\hat u_n - u_*\|_{L^p(0,T;E)}\to0$ on $\hat\Omega$, by applying the Vitali Convergence Theorem we deduce part (ii). □
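For the reader's convenience, the uniform-integrability claim invoked above can be sketched as follows. Write $X_n := \|\hat u_n - u_*\|^r_{L^p(0,T;E)}$ and note that part (i), together with Fatou's lemma applied to $u_*$, gives $\sup_n\hat{\mathbb E}\,X_n^{p/r}\le C$ (this reformulation is ours, not the authors' wording):

```latex
% Hölder with exponents p/r and its conjugate, then the uniform moment bound:
\int_A X_n \, d\hat{\mathbb P}
  \;\le\; \big(\hat{\mathbb E}\, X_n^{p/r}\big)^{r/p}\,
          \hat{\mathbb P}(A)^{1-\frac{r}{p}}
  \;\le\; C^{r/p}\,\hat{\mathbb P}(A)^{1-\frac{r}{p}} .
```

The right-hand side tends to $0$ as $\hat{\mathbb P}(A)\to0$, uniformly in $n$; since moreover $X_n\to0$ $\hat{\mathbb P}$-a.s. by (8.13), the Vitali Convergence Theorem applies.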

Before we continue we should note that the random variables

\[ \hat u_n, u_* : \hat\Omega \to L^p(0,T;E) \]

induce two $E$-valued stochastic processes, still denoted by the same symbols; see for example [18, Proposition B.4] for a proof for the space $L^\infty(\mathbb R_+;L^2_{\mathrm{loc}}(\mathbb R^d))$. Now let $\hat{\mathbb F} = (\hat{\mathcal F}_t)_{t\ge0}$ be the filtration defined by

\[ \hat{\mathcal F}_t = \sigma\big(\sigma\big(\hat\eta_n(s),\{\hat u_m(s),m\in\mathbb N\},u_*(s);\ 0\le s\le t\big)\cup\mathcal N\big), \quad t\in[0,T], \tag{8.14} \]

where $\mathcal N$ denotes the set of null sets of $\hat{\mathcal F}$. Since $\hat\eta_n = \eta_*$, it is easy to show that the filtration obtained by replacing $\hat\eta_n$ with $\eta_*$ in Eq. 8.14 is equal to $\hat{\mathbb F}$.

The next two steps imply that the following two $E$-valued integrals over the filtered probability space $(\hat\Omega,\hat{\mathcal F},\hat{\mathbb F},\hat{\mathbb P})$,

\[ \int_0^t\!\!\int_Z e^{-(t-s)A}G(s,\hat u_n(s);z)\,\hat\eta_n(dz,ds), \quad t\ge0, \]
\[ \int_0^t\!\!\int_Z e^{-(t-s)A}G(s,u_*(s);z)\,\eta_*(dz,ds), \quad t\ge0, \]

do exist.

Step (III) The following hold:

(i) for every $n\in\mathbb N$, $\hat\eta_n$ is a time-homogeneous Poisson random measure on $\mathcal B(Z)\times\mathcal B([0,T])$ over $(\hat\Omega,\hat{\mathcal F},\hat{\mathbb F},\hat{\mathbb P})$ with intensity measure $\nu\otimes\mathrm{Leb}$;

(ii) $\eta_*$ is a time-homogeneous Poisson random measure on $\mathcal B(Z)\times\mathcal B([0,T])$ over $(\hat\Omega,\hat{\mathcal F},\hat{\mathbb F},\hat{\mathbb P})$ with intensity measure $\nu\otimes\mathrm{Leb}$.

Proof Before embarking on the proof, let us first recall that in view of Theorem C.1 we have $\hat\eta_n(\hat\omega) = \eta_*(\hat\omega)$ for all $\hat\omega\in\hat\Omega$ and $n\in\mathbb N$.

For a random measure $\mu$ on $\mathcal S\times[0,T]$ and for any $U\in\mathcal S$ let us define an $\bar{\mathbb N}$-valued process $(N_\mu(t,U))_{t\ge0}$ by $N_\mu(t,U) := \mu(U\times(0,t])$, $t\ge0$. In addition, we denote by $(N_\mu(t))_{t\ge0}$ the measure-valued process defined by

\[ N_\mu(t) = \{\mathcal S\ni U\mapsto N_\mu(t,U)\in\bar{\mathbb N}\}, \quad t\in[0,T]. \]

Proof of Step (III)-(i) Let $U_1,\dots,U_k\in\mathcal Z$ be $k$ disjoint sets and $t>0$. Since $\eta_n$ is a time-homogeneous Poisson random measure and the random variables $N_{\eta_n}(t,U_l)$, $l\in\{1,\dots,k\}$, are independent, we have

\[ \mathbb E\, e^{i\sum_{l=1}^k \theta_l N_{\eta_n}(t,U_l)} = \prod_{l=1}^k \mathbb E\, e^{i\theta_l N_{\eta_n}(t,U_l)}. \tag{8.15} \]

Since $\hat\eta_n$ and $\eta_n$ have equal laws, for any $U\in\mathcal Z$ and $t>0$ the characteristic functions of the random variables $N_{\hat\eta_n}(t,U)$ and $N_{\eta_n}(t,U)$ are equal. Therefore, it follows from Eq. 8.15 that

\[ \hat{\mathbb E}\, e^{i\sum_{l=1}^k \theta_l N_{\hat\eta_n}(t,U_l)} = \prod_{l=1}^k \hat{\mathbb E}\, e^{i\theta_l N_{\hat\eta_n}(t,U_l)}, \]

from which we easily deduce that $\hat\eta_n$ satisfies Definition 2.3 (a)-(c). In order to finish the proof of part (i) of Step (III) we only need to show that $\hat\eta_n$ satisfies Definition 2.3 (d) with the filtration defined in Eq. 8.14. For this purpose let us fix $m\in\mathbb N$, $t_0\in[0,T]$ and $r>s>t_0$. It follows from the definition of $\hat{\mathbb F}$ that $\hat\eta_n$ is $\hat{\mathbb F}$-adapted, and it remains to prove that $\hat X_m = N_{\hat\eta_n}(r) - N_{\hat\eta_n}(s)$ is independent of $\hat{\mathcal F}_{t_0}$. By Definition 2.3 (b) the random variable $\hat X_m$ is independent of $N_{\hat\eta_n}(t_0)$, so we only need to show that $\hat X_m$ is independent of $\hat u_m(a)$ and $u_*(a)$ for any $a\le t_0$. In what follows we fix $a\in[0,t_0]$. Since $\mathcal L(u_m,\eta_m) = \mathcal L(\hat u_m,\hat\eta_m)$, it follows that

\[ \mathcal L(u_m|_{[0,a]}, X_m) = \mathcal L(\hat u_m|_{[0,a]}, \hat X_m), \tag{8.16} \]

where $X_m = N_{\eta_m}(r) - N_{\eta_m}(s)$. Recall that $\hat\eta_m = \eta_*$ and that $u_m$ is the unique solution to the linear stochastic evolution Eq. 8.1; hence it is adapted to the $\sigma$-algebra generated by $\eta_m$. Consequently, $u_m|_{[0,a]}$ is independent of $X_m$, and we infer from this last remark and (8.16) that $\hat u_m|_{[0,a]}$ is independent of $\hat X_m$. The remaining part of the proof, which consists in showing that $\hat X_m$ is independent of $u_*|_{[0,a]}$, is addressed in the next lemma. □

Lemma 8.3 Assume that $(\Omega,\mathcal F,\mathbb P)$ is a probability space, $Y$ is a Banach space and $(y_n)_{n\in\mathbb N}$ is a sequence of $Y$-valued random variables over $(\Omega,\mathcal F,\mathbb P)$ such that $y_n\to y_*$ weakly, i.e., for all $\phi\in Y^*$, $\mathbb E\, e^{i\langle\phi,y_n\rangle}\to\mathbb E\, e^{i\langle\phi,y_*\rangle}$. If $z$ is another $Y$-valued random variable over $(\Omega,\mathcal F,\mathbb P)$ such that $y_n$ and $z$ are independent for all $n\ge1$, then $y_*$ and $z$ are also independent.

Proof of Lemma 8.3 The random variables $y_*$ and $z$ are independent if and only if

\[ \mathbb E\, e^{i(\langle\theta_1,z\rangle+\langle\theta_2,y_*\rangle)} = \mathbb E\, e^{i\langle\theta_1,z\rangle}\,\mathbb E\, e^{i\langle\theta_2,y_*\rangle}, \quad \theta_1,\theta_2\in Y^*. \]

The weak convergence and the independence of $z$ and $y_n$ for all $n\in\mathbb N$ justify the following chain of equalities:

\[ \mathbb E\, e^{i(\langle\theta_1,z\rangle+\langle\theta_2,y_*\rangle)} = \lim_{n\to\infty}\mathbb E\, e^{i(\langle\theta_1,z\rangle+\langle\theta_2,y_n\rangle)} = \lim_{n\to\infty}\mathbb E\, e^{i\langle\theta_1,z\rangle}\,\mathbb E\, e^{i\langle\theta_2,y_n\rangle} = \mathbb E\, e^{i\langle\theta_1,z\rangle}\,\mathbb E\, e^{i\langle\theta_2,y_*\rangle}. \]

□

Since $\hat u_m|_{[0,a]}$ is independent of $\hat X_m$, Lemma 8.3 implies that $u_*|_{[0,a]}$ is independent of $\hat X_m$.

Proof of Step (III)-(ii) We have to show that $\eta_*$ is a time-homogeneous Poisson random measure with intensity $\nu\otimes\mathrm{Leb}$. But this follows from Step (III)-(i), since $\eta_*(\hat\omega) = \hat\eta_m(\hat\omega)$ for all $\hat\omega\in\hat\Omega$ and $m\in\mathbb N$. □

Step (IV) The following hold:

(i) for every $n\in\mathbb N$, the process $\hat u_n$ is $\hat{\mathbb F}$-progressively measurable;

(ii) the $E$-valued process $u_*$ is $\hat{\mathbb F}$-progressively measurable.

Proof One may suspect that there is a simpler proof via adaptedness and left-continuity. The problem here, however, is that $\hat u_n\in\mathbb D([0,T];Y)$ with $E\subset Y$ densely and continuously. Because $G$ is only defined on $[0,T]\times E$, we need $\hat u_n$, as an $E$-valued process, to be progressively measurable.

As we noted earlier, one can argue as in [18, Proposition B.4] and prove that the random variables $\hat u_n, u_* : \hat\Omega\to L^p(0,T;E)$ induce two $E$-valued stochastic processes, still denoted by the same symbols. Here we have to show that, for each $n\in\mathbb N$, $\hat u_n$ and $u_*$ are $\hat{\mathbb F}$-progressively measurable. For fixed $n\in\mathbb N$ the process $\hat u_n$ is adapted to $\hat{\mathbb F}$ by the definition of $\hat{\mathbb F}$. Let us fix $r\in(1,p)$. By Step (II) the process $\hat u_n$ is bounded in $L^r(\hat\Omega_T;E)$; hence there exists a sequence of simple functions $(\hat u_n^m)_{m\in\mathbb N}$ such that $\hat u_n^m\to\hat u_n$ as $m\to\infty$ in $L^r(\hat\Omega_T;E)$. In particular, by using the shifted Haar projections used in [15, Appendix B], we can choose $(\hat u_n^m)_{m\in\mathbb N}$ to be progressively measurable. It follows that $\hat u_n$ is progressively measurable as an $L^r(\hat\Omega_T;E)$-limit of a sequence of progressively measurable processes. Finally, since $\hat u_n\to u_*$ as $n\to\infty$ also in $L^r(\hat\Omega_T;E)$, it follows that $u_*$ is progressively measurable. □

Let $\mu$ be a time-homogeneous Poisson random measure over $(\hat\Omega,\hat{\mathcal F},\hat{\mathbb F},\hat{\mathbb P})$ with intensity measure $\nu\otimes\mathrm{Leb}$, let $v$ be an $E$-valued progressively measurable process, let $u_0\in D(A^{\rho-\frac1p})$, and let $\mathcal K$ be the nonlinear map defined by

\[ \mathcal K(x,v,\mu)(t) := e^{-tA}x + \int_0^t e^{-(t-s)A}F(s,v(s))\,ds + \int_0^t\!\!\int_Z e^{-(t-s)A}G(s,v(s);z)\,\tilde\mu(dz,ds), \quad t\in[0,T]. \tag{8.17} \]

Here, as usual, $\tilde\mu$ denotes the compensated Poisson random measure of $\mu$.

Step (V) For all $t\in[0,T]$ and $n\in\mathbb N$ we have, $\hat{\mathbb P}$-almost surely,

\[ \hat u_n(t) - \mathcal K(x_n,\bar{\hat u}_n,\hat\eta_n)(t) = 0, \]

where $\bar{\hat u}_n$ is defined by

\[ \bar{\hat u}_n(s) = \begin{cases} x_n, & \text{if } s\in[0,2^{-n}),\\[2pt] 2^n\displaystyle\int_{\kappa_n(s)-2^{-n}}^{\kappa_n(s)}\hat u_n(r)\,dr, & \text{if } s\ge 2^{-n}, \end{cases} \tag{8.18} \]

with $\kappa_n(s) := 2^{-n}\lfloor 2^n s\rfloor$.

Proof First, let $\rho'\in(0,\rho)$ and set

\[ \mathcal X^1_T = L^p(0,T;E)\cap L^\infty(\mathbb R_+;D(A^{\rho'-1})), \qquad \mathcal X^2_T = M_{\bar{\mathbb N}}(Z\times[0,T]). \]

Again, for simplicity we set $B_0 = D(A^{\rho'-1})$. It is proved in [14] that the map $\Theta:\mathcal X^1_T\to\mathcal X^1_T$ defined by

\[ \Theta(u)(s) = \begin{cases} x_n, & \text{if } s\in[0,2^{-n}),\\[2pt] 2^n\displaystyle\int_{\kappa_n(s)-2^{-n}}^{\kappa_n(s)}u(r)\,dr, & \text{if } s\ge 2^{-n}, \end{cases} \]

is well defined, linear and bounded. Therefore, for any $n\in\mathbb N$, the two triplets of random variables $(u_n,\eta_n,\bar u_n)$ and $(\hat u_n,\hat\eta_n,\bar{\hat u}_n)$, where $\bar u_n = \Theta(u_n)$ and $\bar{\hat u}_n = \Theta(\hat u_n)$, have equal laws on $\mathcal X^1_T\times\mathcal X^2_T\times\mathcal X^1_T$. Second, let us define processes $\bar\zeta_n$ and $\bar{\hat\zeta}_n$ by

\[ \bar\zeta_n(t) := e^{-tA}x_n + \int_0^t e^{-(t-s)A}F(s,\bar u_n(s))\,ds + \int_0^t\!\!\int_Z e^{-(t-s)A}G(s,\bar u_n(s);z)\,\tilde\eta(dz,ds), \quad t\in[0,T], \tag{8.19} \]

\[ \bar{\hat\zeta}_n(t) := e^{-tA}x_n + \int_0^t e^{-(t-s)A}F(s,\bar{\hat u}_n(s))\,ds + \int_0^t\!\!\int_Z e^{-(t-s)A}G(s,\bar{\hat u}_n(s);z)\,\tilde{\hat\eta}_n(dz,ds), \quad t\in[0,T]. \tag{8.20} \]

Let us also define processes $\zeta_n$ and $\hat\zeta_n$ by replacing $(\bar u_n,\eta)$ and $(\bar{\hat u}_n,\hat\eta_n)$ with $(u_n,\eta_n)$ and $(\hat u_n,\hat\eta_n)$ in formulas (8.19) and (8.20), respectively. Thanks to the continuity of the linear map $\Theta$ and Assumption 5, it follows from [15, Theorem 1] that the quintuples of random variables $(u_n,\eta_n,\bar u_n,\bar\zeta_n,\zeta_n)$ and $(\hat u_n,\hat\eta_n,\bar{\hat u}_n,\bar{\hat\zeta}_n,\hat\zeta_n)$ have the same law on $\mathcal Z_T\times\mathcal X^1_T\times\mathcal X^1_T\times\mathcal X^1_T$. Consequently, $\Psi(u_n,\bar\zeta_n)$ and $\Psi(\hat u_n,\bar{\hat\zeta}_n)$ have equal laws on $\mathbb R$, where the continuous functional $\Psi:\mathcal X^1_T\times\mathcal X^1_T\to\mathbb R$ is defined by

\[ \Psi(v,w) = \int_0^T |v(t)-w(t)|_{B_0}\,dt, \quad v,w\in\mathcal X^1_T. \]

Therefore, for any function $\varphi\in C_b(\mathbb R;\mathbb R_+)$ we have

\[ \hat{\mathbb E}\big[\varphi(\Psi(\hat u_n,\bar{\hat\zeta}_n))\big] = \mathbb E\big[\varphi(\Psi(u_n,\bar\zeta_n))\big]. \tag{8.21} \]

Now let $\varepsilon>0$ be arbitrary and let $\varphi_\varepsilon\in C_b(\mathbb R;\mathbb R_+)$ be defined by

\[ \varphi_\varepsilon(y) = \begin{cases} \frac{y}{\varepsilon}, & \text{if } y\in[0,\varepsilon),\\ \mathbf 1_{[\varepsilon,\infty)}(y), & \text{otherwise}. \end{cases} \]

It is easy to check that

\[ \hat{\mathbb P}\big(\Psi(\hat u_n,\bar{\hat\zeta}_n)\ge\varepsilon\big) \le \int_{\hat\Omega}\mathbf 1_{[\varepsilon,\infty)}\big(\Psi(\hat u_n,\bar{\hat\zeta}_n)\big)\,d\hat{\mathbb P} + \int_{\hat\Omega}\mathbf 1_{[0,\varepsilon)}\big(\Psi(\hat u_n,\bar{\hat\zeta}_n)\big)\,\frac{\Psi(\hat u_n,\bar{\hat\zeta}_n)}{\varepsilon}\,d\hat{\mathbb P} = \hat{\mathbb E}\,\varphi_\varepsilon\big(\Psi(\hat u_n,\bar{\hat\zeta}_n)\big). \]

This inequality together with Eq. 8.21 implies that

\[ \hat{\mathbb P}\big(\Psi(\hat u_n,\bar{\hat\zeta}_n)\ge\varepsilon\big) \le \mathbb E\big[\varphi_\varepsilon(\Psi(u_n,\bar\zeta_n))\big]. \tag{8.22} \]

Since for any $t$ we have, $\mathbb P$-almost surely, $u_n(t)-\bar\zeta_n(t)=0$, we obtain that $\mathbb P$-almost surely $\Psi(u_n,\bar\zeta_n)=0$, which along with Eq. 8.22 yields that for any $\varepsilon>0$

\[ \hat{\mathbb P}\big(\Psi(\hat u_n,\bar{\hat\zeta}_n)\ge\varepsilon\big)=0. \]

Since $\varepsilon>0$ is arbitrary, we infer from the last equation that $\hat{\mathbb P}$-a.s.

\[ \Psi(\hat u_n,\bar{\hat\zeta}_n)=0. \]

This implies that, $\hat{\mathbb P}$-almost surely, $\hat u_n(t)=\bar{\hat\zeta}_n(t)$ for almost all $t\in[0,T]$. Since two cadlag functions which are equal for almost all $t\in[0,T]$ must be equal for all $t\in[0,T]$, we derive that almost surely

\[ \hat u_n(t) = \mathcal K(x_n,\bar{\hat u}_n,\hat\eta_n)(t) \]

for all $t\in[0,T]$. □

Step (VI) We have

\[ \big\|\mathcal K(x_n,\bar{\hat u}_n,\hat\eta_n) - \mathcal K(u_0,u_*,\eta_*)\big\|_{L^p(\hat\Omega\times[0,T];E)} \to 0, \quad \text{as } n\to\infty. \]

Proof First, notice that since $\hat\eta_n = \eta_*$ for any $n\in\mathbb N$, the convergence in Step (VI) is equivalent to

\[ \hat{\mathbb E}\,\big\|\mathcal K(x_n,\bar{\hat u}_n,\eta_*) - \mathcal K(u_0,u_*,\eta_*)\big\|^p_{L^p(0,T;E)} \to 0 \quad\text{as } n\to\infty. \]

Observe that for any $n$

\[ \hat{\mathbb E}\,\big\|\mathcal K(x_n,\bar{\hat u}_n,\eta_*) - \mathcal K(u_0,u_*,\eta_*)\big\|^p_{L^p(0,T;E)} \le C\,\Big[\, \hat{\mathbb E}\int_0^T \big|e^{-tA}[x_n-u_0]\big|^p\,dt \]
\[ + \hat{\mathbb E}\int_0^T \Big|\int_0^t e^{-(t-s)A}\big[F(s,\bar{\hat u}_n(s)) - F(s,u_*(s))\big]\,ds\Big|^p\,dt \]
\[ + \hat{\mathbb E}\int_0^T \Big|\int_0^t\!\!\int_Z e^{-(t-s)A}G(s,\bar{\hat u}_n(s);z)\,\tilde\eta_*(dz,ds) - \int_0^t\!\!\int_Z e^{-(t-s)A}G(s,u_*(s);z)\,\tilde\eta_*(dz,ds)\Big|^p\,dt \,\Big] =: C\,(S^0_n + S^1_n + S^2_n). \]

Since $A^{\rho-\frac1p}x_n \to A^{\rho-\frac1p}u_0$ in $E$, the Lebesgue Dominated Convergence Theorem implies

\[ S^0_n \le C\int_0^T \big\|A^{-(\rho-\frac1p)}e^{-tA}\big\|^p_{\mathcal L(E,E)}\,\big|A^{\rho-\frac1p}[u_0-x_n]\big|^p\,dt \longrightarrow 0, \quad\text{as } n\to\infty. \]

Since $\hat u_n = \mathcal K(x_n,\bar{\hat u}_n,\eta_*)$, arguing as in the proof of Proposition 8.1 we can show that $\hat u_n\in W^{\alpha,p}(0,T;E)$ $\hat{\mathbb P}$-a.s., for any $\alpha\in(0,\tfrac1p)$. Hence by inequality (E.1) it follows that $\hat{\mathbb P}$-a.s.

\[ \|\bar{\hat u}_n - u_*\|_{L^p(0,T;E)} \le C_1\,\|\hat u_n - u_*\|_{L^p(0,T;E)} + C_2\,2^{-n\alpha}\,\|\hat u_n\|_{W^{\alpha,p}(0,T;E)}. \]

Hence, in view of Eq. 8.13 and the continuous embedding $E\subset X$, we infer that

\[ \lim_{n\to\infty}\bar{\hat u}_n = u_* \text{ in } L^p(0,T;X), \quad \hat{\mathbb P}\text{-a.s.} \tag{8.23} \]

Next, by the Young inequality we infer that

\[ \int_0^T\Big|\int_0^t e^{-(t-s)A}[F(s,\bar{\hat u}_n(s)) - F(s,u_*(s))]\,ds\Big|^p\,dt \le C\int_0^T\big|A^{\rho-1}F(s,\bar{\hat u}_n(s)) - A^{\rho-1}F(s,u_*(s))\big|^p\,ds, \]

for some $C>0$. By Assumption 5, and since $\bar{\hat u}_n, u_*\in L^p(0,T;E)$ $\hat{\mathbb P}$-a.s., there exists a constant $C>0$ such that for any $n\in\mathbb N$

\[ \int_0^T\big|A^{\rho-1}F(s,\bar{\hat u}_n(s))\big|^p\,ds \le C, \qquad \int_0^T\big|A^{\rho-1}F(s,u_*(s))\big|^p\,ds \le C. \]

The continuity of $F$ (see Assumption 5), the convergence (8.23) and the Lebesgue Dominated Convergence Theorem imply

\[ S^1_n \to 0 \quad\text{as } n\to\infty. \]

In a similar way we will show that $S^2_n\to0$ as $n\to\infty$. In particular, from the Fubini Theorem and Theorem 7.1, as well as (8.13), we infer that

\[ S^2_n \le \hat{\mathbb E}\int_0^T\!\!\int_0^t\!\!\int_Z \big|e^{-(t-s)A}[G(s,\bar{\hat u}_n(s);z) - G(s,u_*(s);z)]\big|^p\,\nu(dz)\,ds\,dt \]
\[ \le C\,\hat{\mathbb E}\int_0^T\!\!\int_Z \big|A^{\rho-\frac1p}G(s,\bar{\hat u}_n(s);z) - A^{\rho-\frac1p}G(s,u_*(s);z)\big|^p\,\nu(dz)\,ds, \]

where $C = \int_0^T \big\|A^{-(\rho-\frac1p)}e^{-sA}\big\|^p_{\mathcal L(E,E)}\,ds$. By Assumption 3 there exists a constant $C>0$ such that

\[ \sup_n \int_Z \big|A^{\rho-\frac1p}G(s,\bar{\hat u}_n(s);z)\big|^p\,\nu(dz) \le C, \tag{8.24} \]

hence, by the continuity of $G$, the convergence (8.23) and the Lebesgue Dominated Convergence Theorem,

\[ S^2_n \to 0 \quad\text{as } n\to\infty. \]

□

To establish Theorem 3.4 we need to check the following claim.

Step (VII) We have that, $\hat{\mathbb P}$-a.s., for all $t\in[0,T]$, $u_*(t) = \mathcal K(u_0,u_*,\eta_*)(t)$.

Proof Let us fix $r\in(1,p)$. From Steps (II) to (VI) we infer that $\hat u_n\to u_*$ in $L^r(\hat\Omega_T;E)$, $\hat u_n = \mathcal K(x_n,\bar{\hat u}_n,\hat\eta_n)$ in $L^r(\hat\Omega_T;E)$, and

\[ \mathcal K(x_n,\bar{\hat u}_n,\hat\eta_n) - \mathcal K(u_0,u_*,\eta_*) \to 0 \text{ in } L^r(\hat\Omega_T;E). \]

By the uniqueness of the limit we infer that $u_* = \mathcal K(u_0,u_*,\eta_*)$ in $L^r(\hat\Omega_T;E)$, which implies that $\hat{\mathbb P}$-a.s. $u_*(t) = \mathcal K(u_0,u_*,\eta_*)(t)$ for a.e. $t\in[0,T]$. By the convergence (8.13) we infer that, $\hat{\mathbb P}$-a.s., $u_*\in\mathbb D([0,T];B_0)$. Hence, by combining Lemmata 7.2 and 7.3 with [28, Theorem 2.8], we deduce that, $\hat{\mathbb P}$-a.s., $\mathcal K(u_0,u_*,\eta_*)(\cdot)\in\mathbb D([0,T];B_0)$. Hence $\hat{\mathbb P}$-a.s. $u_*(t) = \mathcal K(u_0,u_*,\eta_*)(t)$ for all $t\in[0,T]$. □

It follows now from Step (III)-(ii), Step (IV)-(ii) and Step (VII) that the system $(\hat\Omega,\hat{\mathcal F},\hat{\mathbb F},\hat{\mathbb P},\eta_*,u_*)$ is an $E$-valued martingale solution, satisfying (3.12), to problem (3.1). The process $u_*$ has cadlag paths in $B_0 := D(A^{\rho'-1})$ for any $\rho'\in(0,\rho)$. Since, by assumption, the maps $F$ and $G$ are bounded and the operator $-A$ is the infinitesimal generator of a contraction type semigroup in $D(A^{\rho-\frac1p})$, we easily infer from Lemma 7.3-(iii) along with Eq. 3.12 that the paths of $u_*$ are cadlag in $D(A^{\rho-\frac1p})$. This completes the proof of Theorem 3.4. □

9 Proof of Theorem 3.2

In this section we replace the boundedness assumption on $F$ by the dissipativity of the drift $-A+F$. The spaces $E$, $X$ are as in Assumption 1, and we recall that $E\subset X\subset D(A^{\rho-1})$. Before we proceed, let us state the following important consequence of Assumption 4.

Lemma 9.1 (See Da Prato [23]) Assume that $X$ is a Banach space, $-A$ the generator of a $C_0$-semigroup of bounded linear operators on $X$, and that a mapping $F:[0,T]\times X\to X$ satisfies Assumption 4. Assume that for $\tau\in[0,\infty]$ two continuous functions $z,v:[0,\tau)\to X$ satisfy

\[ z(t) = \int_0^t e^{-(t-s)A}F(s,z(s)+v(s))\,ds, \quad t<\tau. \]

Then

\[ |z(t)|_X \le \int_0^t e^{-k(t-s)}\,a\big(|v(s)|_X\big)\,ds, \quad 0\le t<\tau. \tag{9.1} \]
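The rigorous proof in [23] proceeds by approximation; the following formal computation is only a sketch, assuming enough regularity of $z$ and $v$ to differentiate $t\mapsto|z(t)|_X$ one-sidedly ($D^-$ below denotes a one-sided derivative), and it shows where the bound (9.1) comes from:

```latex
% Write z'(t) = [(-A+F(t,.))(z(t)+v(t)) - (-A+F(t,.))(v(t))] + F(t,v(t)).
% The dissipativity of -A+F(t,.) with constant k (Assumption 4) controls
% the bracket, and the growth bound (9.2) below controls the last term, so
D^- |z(t)|_X \;\le\; -k\,|z(t)|_X \;+\; a\big(|v(t)|_X\big), \qquad z(0)=0,
% and Gronwall's lemma then yields (9.1):
|z(t)|_X \;\le\; \int_0^t e^{-k(t-s)}\,a\big(|v(s)|_X\big)\,ds .
```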

Before giving the proof of Theorem 3.2 let us notice that Assumption 4 implies that

\[ |F(t,y)|_X \le a(|y|_X), \quad t\ge0,\ y\in X. \tag{9.2} \]

Proof of Theorem 3.2 Without loss of generality we assume that $k=0$. Let $(F_n)_{n\in\mathbb N}$ be a sequence of functions from $[0,\infty)\times X$ to $X$ given by Assumption 4-(iii). In particular, there exists a sequence $(R^F_n)_{n\in\mathbb N}$ of positive numbers such that $|F_n(s,y)|_X \le R^F_n$ for all $(s,y)\in[0,T]\times X$ and $n\in\mathbb N$, and $|F_n(s,x)-F(s,x)|_X\to0$ as $n\to\infty$ for all $(s,x)\in[0,T]\times X$.

Finally, by the continuity of the embeddings $X\subset D(A^{\rho-\frac1p})\subset D(A^{\rho-1})$, the family of functions $(F_n)_{n\in\mathbb N}$, $F$ and $G$ satisfy all the assumptions of Theorem 3.4. Hence, we infer from Theorem 3.4 that there exists an $E$-valued martingale solution to the following problem:

\[ du_n(t) = [-Au_n(t) + F_n(t,u_n(t))]\,dt + \int_Z G(t,u_n(t);z)\,\tilde\eta_n(dz,dt), \qquad u_n(0) = u_0. \tag{9.3} \]

Let us denote this martingale solution by $(\Omega_n,\mathcal F_n,\mathbb F_n,\mathbb P_n,\eta_n,u_n)$. We denote by $\mathbb E_n$ the mathematical expectation on $(\Omega_n,\mathcal F_n,\mathbb P_n)$.

In view of Theorem 3.4, for each $n\in\mathbb N$, $u_n$ has cadlag paths in $D(A^{\rho-\frac1p})$. Moreover,

\[ u_n(t) = e^{-tA}u_0 + z_n(t) + v_n(t), \quad t\in[0,T], \]

where

\[ v_n(t) = \int_0^t\!\!\int_Z e^{-(t-s)A}G(s,u_n(s);z)\,\tilde\eta_n(dz,ds), \tag{9.4} \]
\[ z_n(t) = \int_0^t e^{-(t-s)A}F_n(s,u_n(s))\,ds. \tag{9.5} \]

Notice that $z_n(t) = u_n(t) - v_n(t) - e^{-tA}u_0$, $t\in[0,T]$. As in the proof of Theorem 3.4, the proof of Theorem 3.2 will be divided into several steps. The first two steps are the following.

Step (I) Let $q_{\max}$ be defined by Eq. 3.7. Then for any $q\in(\bar q,q_{\max})$ and $r\in(1,p)$ we have

\[ \sup_{n\in\mathbb N}\mathbb E_n\|v_n\|^r_{L^q(0,T;E)} < \infty. \tag{9.6} \]

Proof Step (I) follows from Lemma 7.9 and Lemma 7.10. □

Step (II) For any $q\in(\bar q,q_{\max})$ and $r\in(1,p)$ as in Step (I), we have

\[ \sup_{n\in\mathbb N}\mathbb E_n\sup_{0\le t\le T}|z_n(t)|^r_X < \infty. \tag{9.7} \]

Moreover, the laws of the family $(z_n)_{n\in\mathbb N}$ are tight on $C([0,T];X)$.

Proof By Lemma 9.1 we infer that for any $T>0$

\[ \sup_{0\le t\le T}|z_n(t)|_X \le \sup_{0\le t\le T}\int_0^t e^{-k(t-s)}\,a\big(|v_n(s)|_X + |e^{-sA}u_0|_X\big)\,ds \le C\int_0^T\big(1 + |v_n(s)|^{\bar q}_X + |e^{-sA}u_0|^{\bar q}_X\big)\,ds. \tag{9.8} \]

Since the embedding $E\subset X$ is continuous, it follows from Step (I) that there exists $q>\bar q$ such that for any $r\in(1,p)$

\[ \sup_n\mathbb E_n\|v_n\|^r_{L^q(0,T;X)} < \infty. \tag{9.9} \]

Therefore,

\[ \sup_n\mathbb E_n\sup_{0\le t\le T}|z_n(t)|^r_X < \infty. \]

Hence, we have proved the first part of Step (II). Note that the last inequality implies that

\[ \sup_n\mathbb E_n\|z_n\|^r_{L^q(0,T;X)} < \infty. \]

Before we proceed further, we recall that there exist $\theta < 1-\frac{\bar q}{q_{\max}}$ and a UMD, type $p$ and separable Banach space $Y$ such that $D(A^\theta_Y)\subset X\subset Y$. To prove the second part we use the identity $z_n = \Lambda^{-1}F_n(\cdot,u_n(\cdot))$, where $\Lambda^{-1} = B + \Lambda_Y$ with $\Lambda_Y$ being defined as in Eq. 2.6 by replacing $A$ with $A_Y$, and Remark 2.1 along with Lemma 2.2. But first we need to show that for some $\tilde p\in(1,\frac{q_{\max}}{\bar q})$, the sequence $(|F_n(\cdot,u_n(\cdot))|_{L^{\tilde p}(0,T;Y)})_{n\in\mathbb N}$ is bounded in probability. Let us fix $\tilde p\in(1,\frac{q_{\max}}{\bar q})$. By Lemma 9.1 and the continuity of the embedding $E\subset X$ we infer that

\[ |F_n(s,u_n(s))|_X \le C\big(1 + |e^{-sA}u_0|^{\bar q}_X + |v_n(s)|^{\bar q}_X + |z_n(s)|^{\bar q}_X\big). \]

From this inequality we easily deduce that

\[ |F_n(\cdot,u_n(\cdot))|_{L^{\tilde p}(0,T;X)} \le C\big(1 + \|v_n\|^{\bar q}_{L^{\bar q\tilde p}(0,T;X)} + \|z_n\|^{\bar q}_{L^\infty(0,T;X)}\big). \tag{9.10} \]

Taking $q = \bar q\tilde p\in(\bar q,q_{\max})$ and raising both sides of Eq. 9.10 to the power $\frac{r}{\bar q}$ implies that

\[ |F_n(\cdot,u_n(\cdot))|^{\frac r{\bar q}}_{L^{\tilde p}(0,T;X)} \le C\big(1 + \|v_n\|^{r}_{L^{q}(0,T;X)} + \|z_n\|^{r}_{L^\infty(0,T;X)}\big). \]

Since the embedding $X\subset Y$ is continuous we obtain that

\[ |F_n(\cdot,u_n(\cdot))|^{\frac r{\bar q}}_{L^{\tilde p}(0,T;Y)} \le C\big(1 + \|v_n\|^{r}_{L^{q}(0,T;X)} + \|z_n\|^{r}_{L^\infty(0,T;X)}\big). \tag{9.11} \]

By the Chebyshev inequality we derive that for any $m\in\mathbb N$

\[ \mathbb P_n\big(|F_n(\cdot,u_n(\cdot))|_{L^{\tilde p}(0,T;Y)} \ge m\big) \le \frac{C}{m^{\frac r{\bar q}}}\Big(1 + \mathbb E_n\|v_n\|^{r}_{L^{q}(0,T;X)} + \mathbb E_n\|z_n\|^{r}_{L^\infty(0,T;X)}\Big). \]

Since $\mathbb E_n\|v_n\|^{r}_{L^{q}(0,T;X)}$ and $\mathbb E_n\|z_n\|^{r}_{L^\infty(0,T;X)}$ are uniformly bounded with respect to $n$, we infer that $(|F_n(\cdot,u_n(\cdot))|_{L^{\tilde p}(0,T;Y)})_{n\in\mathbb N}$ is bounded in probability for any $\tilde p\in(1,\frac{q_{\max}}{\bar q})$.

Next we choose $\tilde p\in(1,\frac{q_{\max}}{\bar q})$ such that $\theta\in(0,1-\frac1{\tilde p})$. Since $\theta<1-\frac1{\tilde p}$ and $D(A^\theta_Y)\subset X$, we can infer from Lemma 2.2 that the family of the laws of $(z_n = \Lambda^{-1}F_n(\cdot,u_n(\cdot)))_{n\in\mathbb N}$ is tight on $C([0,T];D(A^\theta_Y))$, and hence on $C([0,T];X)$. □

Remark 9.2 Suppose that $\bar q$ is a number in the interval $[p,\infty)$. It follows from Step (I) and Step (II) that for any $q\in(\bar q,q_{\max})$ and $r\in(1,p)$

\[ \sup_n\mathbb E_n\|u_n\|^r_{L^q(0,T;X)} \le C. \]

For $\bar q<p$ the above inequality holds with $q=p$.

Remark 9.3 In [10], the first named author and Gatarek constructed an approximation of $F$ as follows. Let $(F_n)_{n\in\mathbb N}$ be defined by

\[ F_n(s,x) = \begin{cases} F(s,x), & \text{if } |x|_X\le n,\\[2pt] F\big(s,\frac{n}{|x|_X}x\big), & \text{otherwise}. \end{cases} \]

By Eq. 9.2, $|F_n(s,y)|\le a(n)$ for all $s\ge0$, $y\in E$. They solved Problem (9.3), driven by Wiener noise, on the random interval $[0,\tau_n\wedge T]$, where the sequence of stopping times $\{\tau_n : n\ge1\}$ is defined by

\[ \tau_n = \inf\{t\in[0,T] : |u_n(t)|_X \ge n\}. \]

By proving that $\sup_{t\in[0,T]}|u_n(t)|_X$ is uniformly bounded, which implies that $\tau_n\nearrow T$ almost surely as $n\to\infty$, and then using (9.2) and Lemma 2.2, they could show that the laws of $z_n$ are tight on $C([0,T];X)$. In our framework we only know a priori that $u_n$ is cadlag in $D(A^{\rho'-1})$ for $\rho'\in(0,\rho)$; hence $\tau_n$ would not be a well defined stopping time and we would not be able to show that $\sup_{t\in[0,T]}|u_n(t)|_X$ is uniformly bounded.

In order to use Theorem C.1 we also need the following.

Step (III) For $\rho'\in(0,\rho)$ let $B_0 = D(A^{\rho'-1})$. The family of laws of $(v_n)_{n\in\mathbb N}$ is tight on $L^q(0,T;E)\cap\mathbb D([0,T];B_0)$, and that of $(\eta_n)_{n\in\mathbb N}$ is tight on $M_{\bar{\mathbb N}}(Z\times[0,T])$.

Proof By Lemma 7.6 and Lemma 7.7 the laws of the family $(v_n)_{n\in\mathbb N}$ are tight on $L^q(0,T;E)\cap\mathbb D([0,T];B_0)$. Since each $\eta_n$ is a time-homogeneous Poisson random measure with intensity measure $\nu\otimes\mathrm{Leb}$, the laws of $\eta_n$ and $\eta_m$ on $M_{\bar{\mathbb N}}(Z\times[0,T])$ are identical for any $n,m\in\mathbb N$. Hence, we can deduce from [62, Theorem 3.2] that the laws of the family $(\eta_n)_{n\in\mathbb N}$ are tight on $M_{\bar{\mathbb N}}(Z\times[0,T])$. □

Let $B_0$ be as in Step (III). From Steps (I), (II), (III) and Prokhorov's theorem it follows that there exist a subsequence of $((z_n,v_n,\eta_n))_{n\in\mathbb N}$, still denoted by $((z_n,v_n,\eta_n))_{n\in\mathbb N}$, and a Borel probability measure $\mu_*$ on $\mathcal X_T$, where the space $\mathcal X_T$ has been defined earlier in Eq. 3.22, such that the sequence of laws of $((z_n,v_n,\eta_n))_{n\in\mathbb N}$ converges to $\mu_*$. Moreover, by Theorem C.1, there exist a probability space $(\hat\Omega,\hat{\mathcal F},\hat{\mathbb P})$ and $\mathcal X_T$-valued random variables $(z_*,v_*,\eta_*)$ and $(\hat z_n,\hat v_n,\hat\eta_n)$, $n\in\mathbb N$, such that $\hat{\mathbb P}$-a.s.

\[ (\hat z_n,\hat v_n,\hat\eta_n) \to (z_*,v_*,\eta_*) \text{ in } \mathcal X_T \tag{9.12} \]

and, for all $n\in\mathbb N$, $\hat\eta_n = \eta_*$ and

\[ \mathcal L((\hat z_n,\hat v_n,\hat\eta_n)) = \mathcal L((z_n,v_n,\eta_n)) \text{ on } \mathcal X_T. \]

We define a filtration $\hat{\mathbb F} = (\hat{\mathcal F}_t)_{t\in[0,T]}$ on $(\hat\Omega,\hat{\mathcal F})$ as the one generated by $\eta_*$, $z_*$, $v_*$ and the families $\{\hat z_n : n\in\mathbb N\}$ and $\{\hat v_n : n\in\mathbb N\}$, that is, for $t\in[0,T]$,

\[ \hat{\mathcal F}_t = \sigma\big(\sigma\big(\{\hat z_n(s),\hat v_n(s), n\in\mathbb N\}, z_*(s), v_*(s), \eta_*(s);\ 0\le s\le t\big)\cup\mathcal N\big), \tag{9.13} \]

where $\mathcal N$ denotes the set of null sets of $\hat{\mathcal F}$.

The next two steps imply that the following two Ito integrals over the filtered probability space $(\hat\Omega,\hat{\mathcal F},\hat{\mathbb F},\hat{\mathbb P})$,

\[ \int_0^t\!\!\int_Z e^{-(t-s)A}G(s,\hat v_n(s)+\hat z_n(s);z)\,\hat\eta_n(dz,ds), \quad t\in[0,T], \]
\[ \int_0^t\!\!\int_Z e^{-(t-s)A}G(s,v_*(s)+z_*(s);z)\,\eta_*(dz,ds), \quad t\in[0,T], \]

are well defined.

Step (IV) The following hold:

(i) for all $n\in\mathbb N$, $\hat\eta_n$ is a time-homogeneous Poisson random measure on $\mathcal B(Z)\times\mathcal B([0,T])$ over $(\hat\Omega,\hat{\mathcal F},\hat{\mathbb F},\hat{\mathbb P})$ with intensity measure $\nu\otimes\mathrm{Leb}$;

(ii) $\eta_*$ is a time-homogeneous Poisson random measure on $\mathcal B(Z)\times\mathcal B([0,T])$ over $(\hat\Omega,\hat{\mathcal F},\hat{\mathbb F},\hat{\mathbb P})$ with intensity measure $\nu\otimes\mathrm{Leb}$.

Step (V) The following hold:

(i) for all $n\in\mathbb N$, the processes $\hat v_n$ and $\hat z_n$ are $\hat{\mathbb F}$-progressively measurable;

(ii) the processes $z_*$ and $v_*$ are $\hat{\mathbb F}$-progressively measurable.

The proofs of Steps (IV) and (V) are the same as the proofs of Steps (III) and (IV) of the proof of Theorem 3.4. Also, as earlier in the proof of Theorem 3.4, in order to complete the proof of Theorem 3.2 we have to prove the following claim.

Step (VI) Let $u_* = e^{-\cdot A}u_0 + z_* + v_*$ and let $\mathcal K$ be the mapping defined by

\[ \mathcal K(u_0,u,\eta)(t) = e^{-tA}u_0 + \int_0^t e^{-(t-s)A}F(s,u(s))\,ds + \int_0^t\!\!\int_Z e^{-(t-s)A}G(s,u(s);z)\,\tilde\eta(dz,ds), \]

for $t\in[0,T]$, $u\in\mathbb D([0,T];B_0)$ and $\eta\in M_{\bar{\mathbb N}}(Z\times[0,T])$. Then $\hat{\mathbb P}$-a.s.,

\[ u_*(t) = \mathcal K(u_0,u_*,\eta_*)(t), \quad \forall t\in[0,T]. \tag{9.14} \]

Proof Since the process $u_n = e^{-\cdot A}u_0 + z_n + v_n$ is a martingale solution of problem (9.3) and

\[ \mathcal L((\hat z_n,\hat v_n,\hat\eta_n)) = \mathcal L((z_n,v_n,\eta_n)) \quad\text{for all } n\in\mathbb N, \]

we can argue as in Step (V) of the previous section and prove that $\hat{\mathbb P}$-a.s.

\[ \hat u_n(t) = \mathcal K_n(u_0,\hat u_n,\hat\eta_n)(t) \tag{9.15} \]

for all $t\in[0,T]$, where $\hat u_n := e^{-\cdot A}u_0 + \hat z_n + \hat v_n$. Here $\mathcal K_n$ is obtained by replacing $F$ with $F_n$ in the definition of $\mathcal K$. Since the laws of $z_n$ and $\hat z_n$ are equal on $C([0,T];X)$, by Eqs. 9.8 and 9.7 we infer that

\[ \sup_n\hat{\mathbb E}\|\hat z_n\|^r_{C([0,T];X)} < \infty. \tag{9.16} \]

Since $v_n$ and $\hat v_n$ have equal laws on $L^q(0,T;E)$, from Eq. 9.6 we deduce that

\[ \sup_{n\ge1}\hat{\mathbb E}\|\hat v_n\|^r_{L^q(0,T;E)} < \infty. \tag{9.17} \]

Invoking (9.12) we infer that, $\hat{\mathbb P}$-a.s., as $n\to\infty$,

\[ \|\hat z_n - z_*\|_{C([0,T];X)} \to 0 \quad\text{and}\quad \|\hat z_n\|_{C([0,T];X)} \to \|z_*\|_{C([0,T];X)}. \]

Thanks to Eq. 9.16 the sequence $\|\hat z_n\|_{C([0,T];X)}$ is $\hat{\mathbb P}$-uniformly integrable. Thus, the Vitali Convergence Theorem implies that

\[ \lim_{n\to\infty}\hat{\mathbb E}\|\hat z_n\|_{C([0,T];X)} = \hat{\mathbb E}\|z_*\|_{C([0,T];X)}. \]

Thanks to Eq. 9.16 and this last convergence we can prove, by a similar argument as above, that

\[ \lim_{n\to\infty}\hat{\mathbb E}\|\hat z_n - z_*\|_{C([0,T];X)} = 0. \tag{9.18} \]

With a similar argument we can also show that

\[ \lim_{n\to\infty}\hat{\mathbb E}\|\hat v_n - v_*\|^{r}_{L^q(0,T;B_0)} = 0, \tag{9.19} \]

\[ \hat{\mathbb E}\|\hat v_n - v_*\|^{r}_{L^q(0,T;E)} \to 0. \tag{9.20} \]

We derive from Eqs. 9.18, 9.19 and 9.20 that

\[ \lim_{n\to\infty}\hat{\mathbb E}\|\hat u_n - u_*\|^{r}_{L^q(0,T;B_0)} = 0, \qquad \lim_{n\to\infty}\hat{\mathbb E}\|\hat u_n - u_*\|^{r}_{L^q(0,T;X)} = 0, \]

the latter being correct because the embedding $E\subset X$ is continuous. On the other hand, by Proposition 2.5 we have

\[ \hat{\mathbb E}\big\|\mathcal K_n(u_0,\hat u_n,\hat\eta_n) - \mathcal K(u_0,u_*,\eta_*)\big\|^{r}_{L^{p^*}(0,T;B_0)} = \hat{\mathbb E}\big\|\mathcal K_n(u_0,\hat u_n,\eta_*) - \mathcal K(u_0,u_*,\eta_*)\big\|^{r}_{L^{p^*}(0,T;B_0)} \]
\[ \le \hat{\mathbb E}\Big\|\int_0^{\cdot}e^{-(\cdot-s)A}\big(F_n(s,\hat u_n(s)) - F(s,u_*(s))\big)\,ds\Big\|^{r}_{L^{p^*}(0,T;B_0)} + \hat{\mathbb E}\Big\|\int_0^{\cdot}\!\!\int_Z e^{-(\cdot-s)A}\big(G(s,\hat u_n(s);z) - G(s,u_*(s);z)\big)\,\tilde\eta_*(dz,ds)\Big\|^{r}_{L^{p^*}(0,T;B_0)} =: I^1_n + I^2_n, \]

where $r$ is as above and $p^* = \min(q,p)$. Arguing as in Step (VI) of the previous section we can show that $I^2_n\to0$ as $n\to\infty$. To deal with $I^1_n$ we first use Assumption 4-(iii), given on page 16, and (9.12) to derive that, $\hat{\mathbb P}$-a.s.,

\[ \|F_n(\cdot,\hat u_n) - F(\cdot,u_*)\|_{L^{p^*}(0,T;X)} \to 0 \]

as $n\to\infty$. Since, by Eqs. 9.10, 9.16 and 9.17,

\[ \sup_{n\ge1}\hat{\mathbb E}\|F_n(\cdot,\hat u_n)\|^{2}_{L^{p^*}(0,T;X)} < \infty, \]

we can apply the Lebesgue Dominated Convergence Theorem and deduce that $I^1_n\to0$ as $n\to\infty$. Therefore, as $n\to\infty$,

\[ \hat{\mathbb E}\|\hat u_n - u_*\|^{r}_{L^{p^*}(0,T;B_0)} \to 0 \quad\text{and}\quad \hat{\mathbb E}\|\mathcal K_n(u_0,\hat u_n,\hat\eta_n) - \mathcal K(u_0,u_*,\eta_*)\|^{r}_{L^{p^*}(0,T;B_0)} \to 0. \]

These two facts along with Eq. 9.15 imply that, $\hat{\mathbb P}$-a.s. and for a.e. $t\in[0,T]$,

\[ u_*(t) = \mathcal K(u_0,u_*,\eta_*)(t). \]

Since $u_*$ and $\mathcal K(u_0,u_*,\eta_*)$ are $B_0$-valued cadlag functions, the last equation holds for all $t\in[0,T]$. □

It follows now from Step (IV)-(ii), Step (V)-(ii) and Step (VI) that the system $(\hat\Omega,\hat{\mathcal F},\hat{\mathbb F},\hat{\mathbb P},\eta_*,u_*)$ is an $X$-valued martingale solution to problem (3.1) with cadlag paths in $B_0 := D(A^{\rho'-1})$ for any $\rho'\in(0,\rho)$. Since $-A$ is the infinitesimal generator of a contraction type $C_0$-semigroup on $D(A^{\rho-\frac1p})$ and $u_*\in L^q(0,T;X)$ almost surely, we easily infer from Lemma 7.3-(iii) that the paths of $u_*$ are cadlag in $D(A^{\rho-\frac1p})$. Similar calculations as those in Steps (I) and (II) (see also Remark 9.2) yield that for any $q\in(\bar q,q_{\max})$ and $r\in(1,p)$

\[ \mathbb E\|u_*\|^r_{L^q(0,T;X)} < \infty. \]

This completes the proof of Theorem 3.2. □

Acknowledgments Open access funding provided by Montanuniversitat Leoben. The research by E. Hausenblas and P. A. Razafimandimby has been funded by the FWF-Austrian Science Fund through the project P21622. The research on this paper was initiated during the visit of Hausenblas to the University of York in October 2008. She would like to thank the Mathematics Department at York for its hospitality. A major part of this paper was written when Razafimandimby was an FWF Lise Meitner fellow (project number M1487) at the Montanuniversitat Leoben. He is very grateful to the FWF and the Montanuniversitat Leoben for their support. Razafimandimby's current research is partially supported by the National Research Foundation of South Africa (Grant number 109355). The authors would like to thank Jerzy Zabczyk


for discussions related to the dual predictable projection of a Poisson random measure, and Szymon Peszat for discussions related to an example from his paper [64].

We would also like to thank the anonymous referees for their insightful comments and their help in clarifying issues from a previous version of the paper, in particular the construction of the stochastic integral with respect to a Poisson random measure (PRM) and progressively measurable integrands.

Last but not least, the authors would like to thank Carl Chalk, Pani Fernando, Ela Motyl, Markus Riedle, Akash Panda and Nimit Rana for a careful reading of the manuscript. Earlier versions of this paper can be found on arXiv:1010.5933.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Appendix A: Stochastic Appendices

A.1 Space-time Poissonian White Noise

Analogously to the space-time Gaussian white noise one can construct a space-time Levy white noise or a space-time Poissonian white noise. But before doing so, let us recall the definition of a Gaussian white noise; see, e.g., Dalang [22].

Definition A.1 Let $(\Omega,\mathcal F,\mathbb P)$ be a complete probability space and let $(S,\mathcal S,\sigma)$ be a measure space. A Gaussian white noise on $(S,\mathcal S,\sigma)$ is an $\mathcal F/\mathcal M(S)$-measurable map

\[ W : \Omega \to M(S) \]

satisfying

(i) for every $U\in\mathcal S$ such that $\sigma(U)<\infty$, $W(U) := i_U\circ W$ is a Gaussian random variable with mean $0$ and variance $\sigma(U)$;

(ii) if $U_1,U_2\in\mathcal S$ are disjoint, then the random variables $W(U_1)$ and $W(U_2)$ are independent and $W(U_1\cup U_2) = W(U_1)+W(U_2)$.

The space-time Gaussian white noise can be defined as follows. Let $\mathcal O\subset\mathbb R^d$ be a domain. Put $S = \mathcal O\times[0,\infty)$, $\mathcal S = \mathcal B(\mathcal O)\otimes\mathcal B([0,\infty))$ and let $\sigma$ be the Lebesgue measure on $S$. The space-time Gaussian white noise is an $M(\mathcal O)$-valued process $\{W_{st}(r) : r\ge0\}$ defined by

\[ W_{st}(r) = \{\mathcal B(\mathcal O)\ni U\mapsto W(U\times[0,r))\in\mathbb R\}, \quad r\ge0, \]

where $W:\Omega\to M(S)$ is a Gaussian white noise on $(S,\mathcal S,\sigma)$. Let $(\Omega,\mathcal F,\mathbb F,\mathbb P)$ be a filtered probability space. We say that the Gaussian white noise $W$ on $(S,\mathcal S,\sigma)$ is a space-time Gaussian white noise over $(\Omega,\mathcal F,\mathbb F,\mathbb P)$ if the process $\{W_{st}(r) : r\ge0\}$ is $\mathbb F$-adapted.

One can show that the $M(\mathcal O)$-valued process $\{W_{st}(r) : r\ge0\}$ generates, in a unique way, an $L^2(\mathcal O)$-cylindrical Wiener process $(W_r)_{r\ge0}$; see [12, Definition 4.1]. In particular, for any $U\in\mathcal B(\mathcal O)$ such that $\mathrm{Leb}(U)<\infty$, and any $r\ge0$, $W_r(\mathbf 1_U) = W(U\times[0,r)) = W_{st}(r,U)$. Analogously, we can define a Levy white noise and a space-time Levy white noise.

Definition A.2 Let $(\Omega,\mathcal F,\mathbb P)$ be a complete probability space, let $(S,\mathcal S,\sigma)$ be a measure space, $\gamma\in\mathbb R$, and let $\nu$ be a Levy measure on $\mathbb R$. Then a Levy white noise on $(S,\mathcal S,\sigma)$ with intensity jump size measure $\nu$ is an $\mathcal F/\mathcal M(S)$-measurable mapping

\[ L : \Omega \to M(S) \]

satisfying

(i) for all $U\in\mathcal S$ such that $\sigma(U)<\infty$, $L(U) := i_U\circ L$ is an infinitely divisible random variable satisfying, for all $\theta\in\mathbb R$,

\[ \mathbb E\, e^{i\theta L(U)} = \exp\Big(\sigma(U)\Big[i\gamma\theta + \int_{\mathbb R}\big(e^{i\theta x} - 1 - i\theta x\,\mathbf 1_{[-1,1]}(x)\big)\,\nu(dx)\Big]\Big); \]

(ii) if $U_1,U_2\in\mathcal S$ are disjoint, then the random variables $L(U_1)$ and $L(U_2)$ are independent and $L(U_1\cup U_2) = L(U_1)+L(U_2)$.

Definition A.3 Let $(\Omega,\mathcal F,\mathbb P)$ be a complete probability space. Suppose that $\mathcal O\subset\mathbb R^d$ is a domain and let $S = \mathcal O\times[0,\infty)$, $\mathcal S = \mathcal B(\mathcal O)\otimes\mathcal B([0,\infty))$ and $\sigma$ the Lebesgue measure on $S$. Let $\nu$ be a Levy measure on $\mathbb R$. If $L:\Omega\to M(S)$ is a Levy white noise on $(S,\mathcal S,\sigma)$ with intensity jump size measure $\nu$, then the $M(\mathcal O)$-valued process $\{L_{st}(r) : r\ge0\}$ defined by

\[ L_{st}(r) = \{\mathcal B(\mathcal O)\ni U\mapsto L(U\times[0,r))\in\mathbb R\}, \quad r\ge0, \]

is called a space-time Levy noise on $\mathcal O$ with jump size Levy measure $\nu$. We say that $L$ is a space-time Levy white noise over a filtered probability space $(\Omega,\mathcal F,\mathbb F,\mathbb P)$ iff the corresponding measure-valued process $\{L_{st}(r) : r\ge0\}$ is $\mathbb F$-adapted.

Remark A.4 If L is a space-time Levy white noise on (S, S, Leb), then the corresponding M(O)-valued process {Lst(t) : t > 0} is a weakly cylindrical process on L2(O), see Definition 3.2 [2].

As in the case of space-time Gaussian white noise we also introduce the following definition.

Definition A.5 Let us assume that (Ω, F, P) is a complete probability space and let ν be a Levy measure on R.

(a) A Poissonian white noise with intensity jump size measure ν on a measurable space (S, S, σ) is an F/M(M_I(S × R))-measurable mapping

η : Ω → M_I(S × R) (A.1)

satisfying

(i) for all U ∈ S ⊗ B(R) with (σ ⊗ ν)(U) < ∞, η(U) := ι_U ∘ η is a Poisson random variable with parameter (σ ⊗ ν)(U);

(ii) if the sets U_1 ∈ S ⊗ B(R) and U_2 ∈ S ⊗ B(R) are disjoint, then the random variables η(U_1) and η(U_2) are independent and η(U_1 ∪ U_2) = η(U_1) + η(U_2) almost surely.

(b) Let O ⊂ R^d be a domain. Then the map η defined in Eq. A.1 is called a space-time Poissonian white noise on O with intensity jump size measure ν iff η is a Poissonian white noise on (S, S, Leb) with intensity jump size measure ν, where S = O × [0, ∞), S = B(O) ⊗ B([0, ∞)).

The corresponding measure-valued process {η_t : t ≥ 0} defined by

η_t : B(O) × B(R) ∋ (U, C) ↦ η(U × [0, t) × C) ∈ N_0, (A.2)

is called a (homogeneous) space-time Poissonian white noise process.

(c) A space-time Poissonian white noise η on O is called a (homogeneous) space-time Poissonian white noise over a filtered probability space (Ω, F, F, P) iff the measure-valued process {η_t : t ≥ 0} defined above is F-adapted. (Compare this definition with [65, Definition 7.2].)

Theorem A.6 Let (Ω, F, P) be a complete probability space, ν a Levy measure on R and O ⊂ R^d a bounded set. Let also

η : Ω → M_I(S × R)

be a space-time Poissonian white noise on O with intensity jump size measure ν, where S = O × [0, ∞), S = B(O) ⊗ B([0, ∞)) and Leb is the Lebesgue measure on S. Let M be the random measure defined by

M : Ω → M_I(O × R × [0, ∞)),
M(ω)(U × C × D) = η(ω)(U × D × C), U ∈ B(O), C ∈ B(R), D ∈ B([0, ∞)).

Then M is a time-homogeneous Poisson random measure on O × R with the intensity measure m ⊗ Leb, where

m : B(O × R) ∋ C ↦ ∫_{O×R} 1_C(ξ, ζ) ν(dζ) dξ.

Proof The proof is very similar to the proof of [65, Proposition 7.21], so we omit it. □

Remark A.7 The compensator γ of a homogeneous space-time Poissonian white noise with jump size intensity ν is the measure on O × R × [0, ∞) defined by

B(O) × B(R) × B([0, ∞)) ∋ (U, C, J) ↦ γ(U × C × J) = Leb(U) ν(C) Leb(J).
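These objects are easy to simulate when the jump measure is finite. The sketch below (the domain O = [0, 1], the horizon T and the choice ν = 3 · Exp(1) are illustrative assumptions) samples a Poisson random measure η on O × [0, ∞) × R and checks that its box counts are Poisson with mean Leb(U) · t · ν(C), in accordance with Theorem A.6 and Remark A.7:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy data: O = [0, 1], horizon T, and a *finite* jump measure
# nu = total_mass * Exp(1) on (0, oo).  (A genuine Levy measure may have
# infinite mass near 0; the finite case suffices for this illustration.)
T, total_mass, n_runs = 2.0, 3.0, 5000

def sample_eta():
    """Atoms (xi, t, z) of one realisation of eta on O x [0, T) x R."""
    n = rng.poisson(1.0 * T * total_mass)      # Leb(O) * T * nu(R)
    xi = rng.uniform(0.0, 1.0, n)              # spatial positions in O
    t = rng.uniform(0.0, T, n)                 # jump times
    z = rng.exponential(1.0, n)                # jump sizes ~ nu / nu(R)
    return xi, t, z

# Count eta(U x [0, t0) x C) for U = [0, 1/2), t0 = 1 and C = (1, oo)
counts = np.empty(n_runs)
for k in range(n_runs):
    xi, t, z = sample_eta()
    counts[k] = np.sum((xi < 0.5) & (t < 1.0) & (z > 1.0))

# Compensator / intensity: Leb(U) * t0 * nu(C) = 0.5 * 1 * 3 * exp(-1)
mean_theory = 0.5 * 1.0 * total_mass * np.exp(-1.0)
print(counts.mean(), counts.var(), mean_theory)
```

The empirical mean and variance of the counts both match the compensator value, the hallmark of a Poisson random measure; disjoint boxes would likewise give independent counts.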

Appendix B: Besov Spaces and Their Properties

We follow the approach to the Besov spaces in Runst and Sickel [69, p. 8, Def. 2]. We are interested in the continuity of the mapping G described in Section 5. To be precise, we will prove the following result.

Proposition B.1 Let p, p* ∈ (1, ∞) be two numbers satisfying 1/p + 1/p* = 1. Then for every f ∈ S(R^d) and a ∈ R^d the tempered distribution fδ_a belongs to the Besov space B^{−d/p*}_{p,∞}(R^d) and

∫_{R^d} |fδ_a|^p_{B^{−d/p*}_{p,∞}(R^d)} da = (2π)^{−pd/2} c^p_{φ,p} |f|^p_{L^p(R^d)}, (B.1)

where c_{φ,p} is the constant appearing in (B.2) below. In particular, there exists a unique bounded linear map

Λ : L^p(R^d) → L^p(R^d; B^{−d/p*}_{p,∞}(R^d))

such that [Λ(f)](a) = fδ_a, f ∈ S(R^d), a ∈ R^d.

In what follows, for any f ∈ L^p(R^d) we denote by fδ_a the value of Λ(f) at a. Observe that ⟨fδ_a, φ⟩ = f(a)φ(a), φ ∈ S(R^d), so that fδ_a = f(a)δ_a. Hence, in order to prove (B.1) it is sufficient to prove it for f = 1, i.e.,

|δ_a|_{B^{−d/p*}_{p,∞}(R^d)} = (2π)^{−d/2} c_{φ,p}, where c_{φ,p} := max{ |F^{−1}φ_0|_{L^p(R^d)}, 2^{−d/p*} |F^{−1}φ_1|_{L^p(R^d)} }. (B.2)

Let us recall the definition of the Besov spaces as given in [69, Definition 2, pp. 7-8]. First we choose a function ψ ∈ S(R^d) such that 0 ≤ ψ(x) ≤ 1, x ∈ R^d, and

ψ(x) = 1 if |x| ≤ 1, ψ(x) = 0 if |x| ≥ 2.

Then put

φ_0(x) = ψ(x), x ∈ R^d,
φ_1(x) = ψ(x/2) − ψ(x), x ∈ R^d,
φ_j(x) = φ_1(2^{−j+1}x), x ∈ R^d, j = 2, 3, ....

We will use the definition of the Fourier transform F = F^{+1} and of its inverse F^{−1} as in [69, p. 6]. In particular, with (·, ·) being the scalar product in R^d, we put

(F^{±1}f)(ξ) := (2π)^{−d/2} ∫_{R^d} e^{∓i(x,ξ)} f(x) dx, f ∈ S(R^d), ξ ∈ R^d.

With the choice of φ = {φ_j}_{j=0}^∞ as above and F and F^{−1} being the Fourier and the inverse Fourier transformations (acting on the space S′(R^d) of Schwartz distributions) we have the following definition.

Definition B.2 Let s ∈ R, 0 < p ≤ ∞ and f ∈ S′(R^d). If 0 < q < ∞ we put

|f|_{B^s_{p,q}} = ( Σ_{j=0}^∞ 2^{sjq} |F^{−1}[φ_j F f]|^q_{L^p} )^{1/q} = | ( 2^{sj} |F^{−1}[φ_j F f]|_{L^p} )_{j≥0} |_{ℓ^q};

if q = ∞ we put

|f|_{B^s_{p,∞}} = sup_{j≥0} 2^{sj} |F^{−1}[φ_j F f]|_{L^p} = | ( 2^{sj} |F^{−1}[φ_j F f]|_{L^p} )_{j≥0} |_{ℓ^∞}.

We denote by B^s_{p,q}(R^d) the space of all f ∈ S′(R^d) for which |f|_{B^s_{p,q}} is finite.
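The system {φ_j} is a dyadic partition of unity: φ_0 + φ_1 + ··· + φ_J telescopes to ψ(2^{−J}·), which tends to 1 pointwise, and this is what makes the Littlewood-Paley decomposition f = Σ_j F^{−1}[φ_j F f] work. A one-dimensional sanity check; the piecewise-linear ψ below is an assumed stand-in for the smooth cutoff:

```python
import numpy as np

def psi(x):
    """Cutoff with psi = 1 on |x| <= 1 and psi = 0 on |x| >= 2 (piecewise
    linear stand-in for the smooth bump in S(R))."""
    return np.clip(2.0 - np.abs(x), 0.0, 1.0)

def phi(j, x):
    # phi_0 = psi; phi_j(x) = phi_1(2^{-j+1} x) = psi(2^{-j} x) - psi(2^{-j+1} x)
    if j == 0:
        return psi(x)
    return psi(2.0 ** (-j) * x) - psi(2.0 ** (-j + 1) * x)

x = np.linspace(-100.0, 100.0, 2001)
J = 10
total = sum(phi(j, x) for j in range(J + 1))

# Telescoping: sum_{j=0}^{J} phi_j = psi(2^{-J} x), which equals 1 on |x| <= 2^J
print(np.max(np.abs(total - psi(2.0 ** (-J) * x))))
```

Each φ_j for j ≥ 1 is supported in the dyadic annulus 2^{j−1} ≤ |x| ≤ 2^{j+1}, so at most two terms overlap at any frequency.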

Lemma B.3 If φ ∈ S(R^d), λ > 0 and g(x) := φ(λx), x ∈ R^d, then |F^{−1}g|_{L^p(R^d)} = λ^{d(1/p − 1)} |F^{−1}φ|_{L^p(R^d)}.

Proof The proof follows from simple calculations so it is omitted. □
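The scaling identity of Lemma B.3 can also be checked numerically. In d = 1, with the unitary convention above, the Gaussian φ(x) = e^{−x²/2} satisfies F^{−1}φ(ξ) = e^{−ξ²/2} and F^{−1}[φ(λ·)](ξ) = λ^{−1} e^{−ξ²/(2λ²)}; a quadrature check (the particular p and λ are arbitrary choices for the sketch):

```python
import numpy as np

# Numerical check of Lemma B.3 in d = 1 for phi(x) = exp(-x^2/2):
# F^{-1}phi(xi) = exp(-xi^2/2) and, for g(x) = phi(lam * x),
# F^{-1}g(xi) = lam^{-1} * exp(-xi^2 / (2 * lam^2)).  The lemma predicts
# |F^{-1}g|_{L^p} = lam^{1/p - 1} * |F^{-1}phi|_{L^p}.
xi = np.linspace(-60.0, 60.0, 400001)

def lp_norm(values, p):
    return np.trapz(np.abs(values) ** p, xi) ** (1.0 / p)

p, lam = 3.0, 0.25
inv_phi = np.exp(-xi ** 2 / 2.0)
inv_g = (1.0 / lam) * np.exp(-xi ** 2 / (2.0 * lam ** 2))

lhs = lp_norm(inv_g, p)
rhs = lam ** (1.0 / p - 1.0) * lp_norm(inv_phi, p)
print(lhs, rhs)   # agree up to quadrature error
```

Applied with λ = 2^{−j+1}, this is exactly the step used in the proof of Proposition B.1 below.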

Proof of Proposition B.1 As remarked earlier it is enough to show equality (B.2). Since F^{−1}(φu) = (2π)^{−d/2} (F^{−1}φ) * (F^{−1}u), φ ∈ S, u ∈ S′, we infer that for j ≥ 1,

|F^{−1}[φ_j F(δ_a)]|_{L^p(R^d)} = (2π)^{−d/2} |(F^{−1}φ_j) * δ_a|_{L^p(R^d)}
= (2π)^{−d/2} |F^{−1}φ_j|_{L^p(R^d)}
= (2π)^{−d/2} 2^{d(1/p−1)} 2^{−jd(1/p−1)} |F^{−1}φ_1|_{L^p(R^d)},

where the last equality follows from Lemma B.3 with λ = 2^{−j+1}. Hence δ_a belongs to the Besov space B^{−d/p*}_{p,∞}(R^d) as requested and the equality (B.2) follows immediately. □

Corollary B.4 Assume that O is a bounded and open subset of R^d with boundary ∂O of class C^∞. Let r, q ∈ (1, ∞) with q ≥ r. Then there exists a unique bounded linear map

Ψ : L^q(O) → L^q(O; B^{−d(1−1/r)}_{r,∞}(O)) (B.3)

such that [Ψ(f)](a) = fδ_a, f ∈ L^q(O), a ∈ O. In particular, there exists a constant C such that for any f ∈ L^q(O)

∫_O |fδ_a|^q_{B^{−d(1−1/r)}_{r,∞}(O)} da ≤ C |f|^q_{L^q(O)}. (B.4)

Proof It is enough to prove (B.4) for any f ∈ C^∞_c(O), as that set is dense in L^q(O). As before, we first need to show the following version of Eq. B.2:

sup_{a∈O} |δ_a|_{B^{−d(1−1/r)}_{r,∞}(O)} ≤ C(r, d), (B.5)

for a constant C(r, d) > 0 depending only on r and d. To this aim let us fix a ∈ O and recall that, according to Definition 4.2.1 from [73], |δ_a|_{B^{−d(1−1/r)}_{r,∞}(O)} is equal to the infimum of |u|_{B^{−d(1−1/r)}_{r,∞}(R^d)} over all u ∈ B^{−d(1−1/r)}_{r,∞}(R^d) such that u|_O = δ_a. Thus

|δ_a|_{B^{−d(1−1/r)}_{r,∞}(O)} ≤ |δ_a|_{B^{−d(1−1/r)}_{r,∞}(R^d)}

and the estimate (B.5) follows by applying (B.2).

Second, let Ψ be the linear map defined on L^q(O) by [Ψf](a) = fδ_a for f ∈ L^q(O) and a ∈ O. Since, by the assumption q ≥ r, L^q(O) ⊂ L^r(O), it follows from the first part of the proof that

∫_O |fδ_a|^q_{B^{−d(1−1/r)}_{r,∞}(O)} da = ∫_O |f(a)|^q |δ_a|^q_{B^{−d(1−1/r)}_{r,∞}(O)} da
≤ sup_{a∈O} |δ_a|^q_{B^{−d(1−1/r)}_{r,∞}(O)} ∫_O |f(a)|^q da
≤ C(r, d)^q |f|^q_{L^q(O)}.

The last inequality completes the proof of Corollary B.4. □

Appendix C: A generalisation of the Skorohod Representation Theorem

Within the proofs of Theorems 3.2 and 3.4 we deal with limits of pairs of random variables. For us it was important that certain properties of the pairs are preserved under the Skorohod representation. We therefore use the following theorem, a modified version of the celebrated Skorohod Representation Theorem.

Theorem C.1 Let (Ω, F, P) be a probability space and let U_1, U_2 be two separable metric spaces. Let x_n : Ω → U_1 × U_2, n ∈ N, be a family of random variables such that the sequence (Law(x_n))_{n∈N} is weakly convergent on U_1 × U_2.

For i = 1, 2 let π_i : U_1 × U_2 → U_i be the projection onto U_i, i.e.,

U_1 × U_2 ∋ x = (x_1, x_2) ↦ π_i(x) = x_i ∈ U_i.

Finally, let us assume that there exists a random variable ρ : Ω → U_1 such that Law(π_1 ∘ x_n) = Law(ρ) for all n ∈ N.

Then there exist a probability space (Ω̃, F̃, P̃), a family of U_1 × U_2-valued random variables (x̃_n)_{n∈N} on (Ω̃, F̃, P̃) and a random variable x_* : Ω̃ → U_1 × U_2 such that

(i) Law(x̃_n) = Law(x_n), ∀n ∈ N;

(ii) x̃_n → x_* in U_1 × U_2 P̃-a.s.;

(iii) π_1 ∘ x̃_n(ω) = π_1 ∘ x_*(ω) for all ω ∈ Ω̃.
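For orientation: on the real line the Skorohod representation is the classical quantile coupling on ([0, 1), Leb), which the proof below extends to the product setting while freezing the first coordinate. A minimal sketch, in which the Gaussian laws μ_n = N(1/n, 1) → N(0, 1) are a purely illustrative assumption:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(2)

# Quantile coupling: on the common space ([0, 1), Leb) the representatives
# x_n(w) = F_n^{-1}(w) of mu_n = N(1/n, 1) satisfy Law(x_n) = mu_n and
# converge pointwise to the representative of the weak limit N(0, 1).
def quantile(mean, w):
    return NormalDist(mu=mean, sigma=1.0).inv_cdf(w)

w = rng.uniform(1e-6, 1.0 - 1e-6, 1000)            # points of the new space
x_lim = np.array([quantile(0.0, wi) for wi in w])  # representative of the limit

gaps = {}
for n in (1, 10, 100):
    x_n = np.array([quantile(1.0 / n, wi) for wi in w])
    gaps[n] = float(np.max(np.abs(x_n - x_lim)))   # here the gap is exactly 1/n
    print(n, gaps[n])
```

In the quantile picture, equality of the laws π_1 ∘ x_n forces equality of the first-coordinate quantile functions, which is the one-dimensional shadow of property (iii).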

Proof of Theorem C.1 The proof is a modification of the proof of [24, Chapter 2, Theorem 2.4]. For simplicity, let us put PMU_n := Law(x_n), PMU^1_n := Law(π_1 ∘ x_n), n ∈ N, and PMU_∞ := lim_{n→∞} Law(x_n). We will generate families of partitions of U_1 and U_2. To start with, let (x_i)_{i∈N} and (y_i)_{i∈N} be dense subsets of U_1 and U_2, respectively, and let (r_i)_{i∈N} be a sequence of positive numbers converging to zero; some additional conditions on this sequence will be given below.

Now let O^1_1 := B(x_1, r_1)⁷ and O^1_k := B(x_k, r_1) \ (∪_{j=1}^{k−1} O^1_j) for k ≥ 2. Similarly, C^1_1 := B(y_1, r_1) and C^1_k := B(y_k, r_1) \ (∪_{j=1}^{k−1} C^1_j) for k ≥ 2. Inductively, we put

O_{i_1,...,i_{k−1},1} := O_{i_1,...,i_{k−1}} ∩ B(x_1, r_k),
O_{i_1,...,i_{k−1},i_k} := O_{i_1,...,i_{k−1}} ∩ B(x_{i_k}, r_k) \ (∪_{j=1}^{i_k−1} O_{i_1,...,i_{k−1},j}), i_k ≥ 2,

and similarly, with "O" replaced by "C",

C_{i_1,...,i_{k−1},1} := C_{i_1,...,i_{k−1}} ∩ B(y_1, r_k),
C_{i_1,...,i_{k−1},i_k} := C_{i_1,...,i_{k−1}} ∩ B(y_{i_k}, r_k) \ (∪_{j=1}^{i_k−1} C_{i_1,...,i_{k−1},j}), i_k ≥ 2.

For simplicity, for each k ∈ N we enumerate these families and call them (O^k_i)_{i∈N} and (C^k_i)_{i∈N}.

Let Ω̃ := [0, 1) × [0, 1) and let Leb be the Lebesgue measure on [0, 1) × [0, 1). In the first step, we will construct a family of partitions of Ω̃ consisting of rectangles.

Definition C.2 Suppose that μ is a Borel probability measure on U = U_1 × U_2 and μ^1 is the marginal of μ on U_1, i.e., μ^1(O) := μ(O × U_2), O ∈ B(U_1). Assume that (O_i)_{i∈N} and (C_i)_{i∈N} are partitions of U_1 and U_2, respectively. Define the following partition of the square [0, 1) × [0, 1): for i, j ∈ N we put

I_{i,j} := [ μ^1(∪_{a=1}^{i−1} O_a), μ^1(∪_{a=1}^{i} O_a) ) × [ μ(O_i × ∪_{a=1}^{j−1} C_a) / μ^1(O_i), μ(O_i × ∪_{a=1}^{j} C_a) / μ^1(O_i) ). (C.1)

⁷For r > 0 and a point x, B(x, r) := {y : d(x, y) < r} denotes the open ball of radius r centred at x.

Remark C.3 Obviously, if μ^1(O_i) = 0 for some i ∈ N, then I_{i,j} = ∅ for all j ∈ N.

Next, for fixed l ∈ N and n ∈ N, we will define a partition (I^{l,n}_{k,r})_{k,r∈N} of Ω̃ = [0, 1) × [0, 1) corresponding to the partitions (O^l_k)_{k∈N} and (C^l_r)_{r∈N} of the spaces U_1 and U_2, respectively.

We denote by PMU_n(A | B) the conditional probability of A given B under PMU_n. For n ∈ N we apply the construction (C.1) with μ = PMU_n and the partitions (O^l_k)_{k∈N} and (C^l_r)_{r∈N}: for k, r ∈ N we set

I^{l,n}_{k,r} := [ Σ_{m=1}^{k−1} PMU^1_n(O^l_m), Σ_{m=1}^{k} PMU^1_n(O^l_m) ) × [ Σ_{m=1}^{r−1} PMU_n(U_1 × C^l_m | O^l_k × U_2), Σ_{m=1}^{r} PMU_n(U_1 × C^l_m | O^l_k × U_2) ).

For instance,

I^{l,n}_{1,1} = [0, PMU^1_n(O^l_1)) × [0, PMU_n(U_1 × C^l_1 | O^l_1 × U_2)),
I^{l,n}_{2,1} = [PMU^1_n(O^l_1), PMU^1_n(O^l_1) + PMU^1_n(O^l_2)) × [0, PMU_n(U_1 × C^l_1 | O^l_2 × U_2)).

Let us observe that, for fixed l ∈ N, the rectangles {I^{l,n}_{k,r} : k, r ∈ N} are pairwise disjoint and form a covering of Ω̃. Indeed, for any n ∈ N ∪ {∞} we have PMU_n(U_1 × U_2) = 1 and Σ_{m∈N} PMU_n(U_1 × C^l_m | O^l_k × U_2) = 1 for every k. Consequently, for fixed l, n ∈ N the family of sets {I^{l,n}_{k,r} : k, r ∈ N} is a covering of [0, 1) × [0, 1) and consists of pairwise disjoint sets.

The next step is to construct random variables x̃_n : Ω̃ → U_1 × U_2 such that Law(x̃_n) = Law(x_n). We assume that the radii r_m are chosen in such a way that the boundaries of the sets of the coverings (O^m_j)_{j∈N} and (C^m_j)_{j∈N} have measure zero. In each nonempty set int(O^m_k) and int(C^m_r) we choose points x^m_k and y^m_r, respectively, from the dense subsets (x_i)_{i∈N} and (y_i)_{i∈N}, and define the following random variables. First, we put for m ∈ N

Z^1_{n,m}(ω) = x^m_k if ω ∈ I^{m,n}_{k,r},
Z^2_{n,m}(ω) = y^m_r if ω ∈ I^{m,n}_{k,r}, n ∈ N ∪ {∞},

and then, for n ∈ N ∪ {∞},

x̃^1_n(ω) = lim_{m→∞} Z^1_{n,m}(ω),
x̃^2_n(ω) = lim_{m→∞} Z^2_{n,m}(ω).

Due to the construction of the partition, the limits above exist. To be precise, for any n ∈ N ∪ {∞} and ω ∈ Ω̃, we have

|Z^i_{n,m}(ω) − Z^i_{n,k}(ω)| ≤ r_m, k ≥ m, i = 1, 2, (C.3)

and therefore (Z^i_{n,m}(ω))_{m≥1} is a Cauchy sequence for every ω ∈ Ω̃ = [0, 1) × [0, 1). Hence x̃^i_n(ω), i = 1, 2, is well defined. Furthermore, x̃_n is measurable, since each Z^i_{n,m} is a simple function, hence measurable.

Finally, we have to prove that the random variables x_* and x̃_n := (x̃^1_n, x̃^2_n) have the following properties:

(i) Law(x̃_n) = Law(x_n), ∀n ∈ N;

(ii) x̃_n → x_* a.s. in U_1 × U_2;

(iii) π_1 ∘ x̃_n(ω) = π_1 ∘ x_*(ω) for all ω ∈ Ω̃.

Proof of (i) The following identity holds:

Leb(I^{l,n}_{k,r}) = Leb(x̃_n ∈ O^l_k × C^l_r)
= PMU^1_n(O^l_k) · PMU_n(U_1 × C^l_r | O^l_k × U_2)
= PMU_n(O^l_k × U_2) · PMU_n(U_1 × C^l_r | O^l_k × U_2)
= PMU_n(u^1 ∈ O^l_k and u^2 ∈ C^l_r)
= PMU_n((u^1, u^2) ∈ O^l_k × C^l_r).

Using the fact that the sets O^l_k × C^l_r form a π-system generating B(U_1 × U_2) and that the laws of x̃_n and x_n coincide on this π-system by the identity above, we derive from [47, Lemma 1.17, Chapter 1] that Law(x̃_n) = Law(x_n) = PMU_n. □

Proof of (ii) We will first prove that there exists a random variable x_* = (x^1_*, x^2_*) such that x̃^1_n → x^1_* and x̃^2_n → x^2_* Leb-a.s. as n → ∞. For this purpose it is enough to show that the sequences (x̃^1_n)_{n∈N} and (x̃^2_n)_{n∈N} are Leb-a.s. Cauchy sequences. From the triangle inequality we infer that for all n, m, j ∈ N and i = 1, 2,

|x̃^i_n − x̃^i_m| ≤ |x̃^i_n − Z^i_{n,j}| + |Z^i_{n,j} − Z^i_{m,j}| + |Z^i_{m,j} − x̃^i_m|.

Let us first observe that, by Eq. C.3, for any n ∈ N the sequences (Z^1_{n,j})_{j∈N} and (Z^2_{n,j})_{j∈N} converge uniformly on Ω̃ to x̃^1_n and x̃^2_n, respectively. Hence, it suffices to show that for each fixed l ∈ N and every ε > 0 there exists a number n₀ such that (see [34, Lemma 9.2.4])

Leb({ω ∈ [0, 1) × [0, 1) : Z^i_{n,l}(ω) ≠ Z^i_{m,l}(ω) for some n, m ≥ n₀}) ≤ ε.

Since (PMU_n)_{n∈N} converges weakly, for any δ > 0 there exists a number n₀ ∈ N such that⁸ ρ(PMU_n, PMU_m) ≤ δ for all n, m ≥ n₀. Hence, for any δ > 0 we can find a number n₀ ∈ N such that

|a^{i,l,n}_{k,r} − a^{i,l,m}_{k,r}|, |b^{i,l,n}_{k,r} − b^{i,l,m}_{k,r}| ≤ δ, k, r, l ∈ N, i = 1, 2, n, m ≥ n₀, (C.4)

where

I^{l,n}_{k,r} = [a^{1,l,n}_{k,r}, b^{1,l,n}_{k,r}) × [a^{2,l,n}_{k,r}, b^{2,l,n}_{k,r}),
I^{l,m}_{k,r} = [a^{1,l,m}_{k,r}, b^{1,l,m}_{k,r}) × [a^{2,l,m}_{k,r}, b^{2,l,m}_{k,r}).

In fact, by the construction of I^{l,n}_{k,r} and I^{l,m}_{k,r} we have

a^{1,l,n}_{k,r} = PMU^1_n(∪_{j=1}^{k−1} O^l_j) and a^{1,l,m}_{k,r} = PMU^1_m(∪_{j=1}^{k−1} O^l_j), n, m ∈ N, r, k ∈ N.

Let us now fix δ > 0 and choose n₀ such that ρ(PMU_n, PMU_m) ≤ δ for n, m ≥ n₀. Then

PMU^1_n(∪_{j=1}^{k−1} O^l_j) ≤ PMU^1_m((∪_{j=1}^{k−1} O^l_j)^δ) ≤ PMU^1_m(∪_{j=1}^{k−1} O^l_j) + δ, r, k, l ∈ N.

On the other hand, by the symmetry of the Prokhorov metric,

PMU^1_m(∪_{j=1}^{k−1} O^l_j) ≤ PMU^1_n((∪_{j=1}^{k−1} O^l_j)^δ) ≤ PMU^1_n(∪_{j=1}^{k−1} O^l_j) + δ, r, k, l ∈ N.

Hence we infer that

|a^{1,l,n}_{k,r} − a^{1,l,m}_{k,r}| ≤ δ, r, k, l ∈ N, n, m ≥ n₀.

The second inequality in Eq. C.4 can be proved in a similar way.

Since the sequence (PMU_n)_{n∈N} is tight on U_1 × U_2, we can find a compact set K_1 × K_2 such that

sup_n PMU_n((U_1 × U_2) \ (K_1 × K_2)) ≤ ε/2.

Let us fix l ∈ N. Since the set K_1 × K_2 is compact, from the covering (O^l_k × C^l_r)_{k,r∈N} of U_1 × U_2 we can extract a finite covering (O^l_k × C^l_r)_{k=1,...,K, r=1,...,R} of K_1 × K_2. Next, observe that the estimate (C.4) is uniform for all n, m ≥ n₀. Therefore we can use estimate (C.4) with δ = ε/(2KR) and infer that

Σ_{k,r=1}^{K,R} Leb(I^{l,n}_{k,r} Δ I^{l,m}_{k,r}) ≤ ε/2, n, m ≥ n₀,

where Δ denotes the symmetric difference.

⁸ρ denotes the Prokhorov metric on the space of probability measures on (U, U), i.e., ρ(F, G) := inf{ε > 0 : F(O) ≤ G(O^ε) + ε for all O ∈ U}, where O^ε denotes the open ε-neighbourhood of O.

Moreover, since

Leb({ω ∈ [0, 1) × [0, 1) : Z^i_{n,l}(ω) ≠ Z^i_{m,l}(ω) for some n, m ≥ n₀})
≤ Σ_{k,r=1}^{K,R} Leb(I^{l,n}_{k,r} Δ I^{l,m}_{k,r}) + sup_n PMU_n((U_1 × U_2) \ (K_1 × K_2)),

it follows that

Leb({ω ∈ [0, 1) × [0, 1) : Z^i_{n,l}(ω) ≠ Z^i_{m,l}(ω) for some n, m ≥ n₀}) ≤ ε/2 + ε/2 = ε.

Summarizing, we proved that for any ε > 0 there exists a number n₀ ∈ N such that Leb({ω ∈ [0, 1) × [0, 1) : Z^i_{n,l}(ω) ≠ Z^i_{m,l}(ω), n, m ≥ n₀}) ≤ ε. From [34, Lemma 9.2.4] we infer (ii). □

Proof of (iii) Let us denote

J^{l,n}_k := [ Σ_{m=1}^{k−1} PMU^1_n(O^l_m), Σ_{m=1}^{k} PMU^1_n(O^l_m) ), k ∈ N. (C.6)

Since the laws of π_1 ∘ x_n and π_1 ∘ x_m, n, m ∈ N, are equal, we have J^{l,n}_k = J^{l,m}_k for all n, m ∈ N. Let us denote these (equal) sets by J^l_k. Since for each J^l_k we can find a set O^l_k satisfying (C.6) for all n ∈ N, we infer that for any m ∈ N,

Z^1_{n,m}(ω) = Z^1_{1,m}(ω), ω ∈ Ω̃, n ∈ N.

Let n ∈ N be fixed. Passing to the limit as m → ∞ in the sequence (Z^1_{n,m}(ω))_{m∈N}, and keeping in mind that (Z^1_{n,m}(ω))_{m∈N} is a Cauchy sequence, yields the assertion (iii). □

Appendix D: A Tightness Criteria in D([0, T]; Y)

Let Y be a separable and complete metric space and T > 0. The space D([0, T]; Y) denotes the space of all right-continuous functions x : [0, T] → Y with left limits. The space of continuous functions is usually equipped with the uniform topology; but since D([0, T]; Y) is complete yet not separable in the uniform topology, we equip it with the J₁-Skorohod topology, i.e., the finest among the Skorohod topologies, with which D([0, T]; Y) is both separable and complete. For more information about the Skorohod space and its topology we refer to Billingsley's book [5] or Ethier and Kurtz [36]. In this appendix we only state the following tightness criterion, which is needed for our work.

Theorem D.1 ⁹ A subset C of the space P(D([0, T]; Y)) of all Borel probability measures on D([0, T]; Y) is tight iff

a.) for any ε > 0 there exists a compact set K ⊂ Y such that for every F ∈ C, F({x ∈ D([0, T]; Y) : x(t) ∈ K for all t ∈ [0, T]}) ≥ 1 − ε;

9Compare with [5, Chapter III, Theorem 13.5, p. 142].

b.) there exist two real numbers γ > 0, c > 0 and a nondecreasing continuous function g : [0, T] → R₊ such that for all t₁ ≤ t ≤ t₂ in [0, T] and κ > 0

F({x ∈ D([0, T]; Y) : |x(t) − x(t₁)| ≥ κ, |x(t₂) − x(t)| ≥ κ}) ≤ c κ^{−γ} [g(t₂) − g(t₁)]², ∀F ∈ C.

Corollary D.2 Let {x_n : n ∈ N} be a sequence of Y-valued cadlag processes, each process defined on a probability space (Ω_n, F_n, P_n). Then the sequence of the laws of {x_n : n ∈ N} is tight on D([0, T]; Y) if

(a) for any ε > 0 there exists a compact set K_ε ⊂ Y such that

P_n(x_n(t) ∈ K_ε for all t ∈ [0, T]) ≥ 1 − ε, ∀n ∈ N;

(b) there exist two constants c > 0 and γ > 0 and a real number r > 0 such that for all θ > 0, t ∈ [0, T − θ] and n ∈ N,

E_n [ sup_{t ≤ s ≤ t+θ} |x_n(s) − x_n(t)|^r ] ≤ c θ^γ.

Proof Corollary D.2-(a) implies Theorem D.1-(a). Now fix t₁ ≤ t ≤ t₂. Then

P_n(|x_n(t) − x_n(t₁)| ≥ κ, |x_n(t₂) − x_n(t)| ≥ κ) ≤ P_n( sup_{t₁ ≤ s ≤ t₂} |x_n(s) − x_n(t₁)| ≥ κ ).

Estimating the right-hand side by the Chebyshev inequality and using Corollary D.2-(b) leads to Theorem D.1-(b). This completes the proof of the corollary. □
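As a hedged illustration of condition (b), take x_n = N, a standard Poisson process of rate λ — a cadlag real-valued process. Since N is nondecreasing, sup_{t ≤ s ≤ t+θ} |N(s) − N(t)| = N(t+θ) − N(t), whose expectation is λθ, so (b) holds with r = 1, γ = 1, c = λ. A quick simulated check; the parameters are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)

# Condition (b) for a rate-lam Poisson process N (cadlag, Y = R): since N is
# nondecreasing, sup_{t <= s <= t+theta} |N(s) - N(t)| = N(t+theta) - N(t),
# and E[N(t+theta) - N(t)] = lam * theta, i.e. (b) with r = 1, gamma = 1.
lam, theta, n_runs = 2.0, 0.3, 100000

# By stationarity and independence of increments, the sup increment over a
# window of length theta has the law Poisson(lam * theta)
sup_increments = rng.poisson(lam * theta, n_runs)
print(sup_increments.mean(), lam * theta)
```

The uniform-compact-containment condition (a) is also immediate for this example on a finite horizon, since N(T) is a.s. finite.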

Appendix E: An Inequality

Let Y be a Banach space with norm |·|, let T > 0 and let f : [0, T] → Y be a Bochner integrable function such that

∫_0^T |f(s)|^p ds < ∞.

For fixed n ∈ N let I_k = (k 2^{−n}, (k+1) 2^{−n}] and let f_n : (0, T] → Y be the function defined by f_n(s) = 0 for s ∈ I_0 and f_n(s) = 2^n ∫_{I_k} f(t) dt for s ∈ I_{k+1}, k = 0, 1, 2, .... We have the following facts.

Proposition E.1 (i) If f belongs to L^p(0, T; Y), then f_n ∈ L^p(0, T; Y) for each n. (ii) Let α ∈ (0, 1/p). Then there exists C > 0 such that for all f ∈ W^{α,p}(0, T; Y) and n ∈ N,

|f − f_n|_{L^p(0,T;Y)} ≤ C 2^{−nα} |f|_{W^{α,p}(0,T;Y)}. (E.1)

Proof Without loss of generality we take T = 1 and set t^n_j = j 2^{−n}. Since f_n is constant on each interval I_j, we have

∫_0^1 |f_n(s)|^p ds = Σ_{j=1}^{2^n−1} ∫_{I_j} |f_n(s)|^p ds = Σ_{j=1}^{2^n−1} 2^{−n} | 2^n ∫_{I_{j−1}} f(t) dt |^p.

From this last identity and the Hölder inequality we derive

∫_0^1 |f_n(s)|^p ds ≤ Σ_{j=1}^{2^n−1} 2^{−n} · 2^n ∫_{I_{j−1}} |f(t)|^p dt = Σ_{j=1}^{2^n−1} ∫_{I_{j−1}} |f(t)|^p dt ≤ ∫_0^1 |f(s)|^p ds,

which ends the proof of (i).

Next we will prove item (ii). Let s ∈ (0, 1], α ∈ (0, 1/p) and f ∈ W^{α,p}(0, T; Y). Since the intervals I_k, k = 0, 1, ..., 2^n − 1, form a partition of (0, 1], either s ∈ I_0 or s ∈ (2^{−n}, 1]. On the one hand, if s ∈ (2^{−n}, 1], then there exists k ≥ 1 such that s ∈ I_k, and we have by Hölder's inequality that

|f(s) − f_n(s)|^p = | 2^n ∫_{I_{k−1}} (f(s) − f(r)) dr |^p
≤ 2^{np} ( ∫_{I_{k−1}} (|f(s) − f(r)| / |s − r|^{1/p+α}) · |s − r|^{1/p+α} dr )^p
≤ C 2^{−npα} ∫_{I_{k−1}} |f(s) − f(r)|^p / |s − r|^{1+pα} dr,

where in the last step we used that |s − r| ≤ 2^{−n+1} for r ∈ I_{k−1}. Therefore

∫_{2^{−n}}^1 |f(s) − f_n(s)|^p ds ≤ C 2^{−npα} Σ_{k=1}^{2^n−1} ∫_{I_k} ∫_{I_{k−1}} |f(s) − f(r)|^p / |s − r|^{1+pα} dr ds
≤ C 2^{−npα} ∫_0^1 ∫_0^1 |f(s) − f(r)|^p / |s − r|^{1+pα} dr ds ≤ C 2^{−npα} |f|^p_{W^{α,p}(0,T;Y)}. (E.2)

On the other hand, since f_n = 0 on I_0, by making use of the Hölder inequality we obtain

∫_0^{2^{−n}} |f(s) − f_n(s)|^p ds = ∫_0^{2^{−n}} |f(s)|^p ds ≤ ( ∫_0^{2^{−n}} |f(s)|^q ds )^{p/q} 2^{−npα},

where q = p/(1 − pα). Since W^{α,p}(I_0) ⊂ L^r(I_0) for any r ∈ [1, p/(1 − pα)], we infer from the last inequality that there exists C > 0 such that

∫_0^{2^{−n}} |f(s) − f_n(s)|^p ds ≤ C 2^{−npα} |f|^p_{W^{α,p}(0,T;Y)}. (E.3)

Now inequality (E.1) follows from inequalities (E.2) and (E.3). This completes the proof of our proposition. □
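The dyadic averaging above is easy to exercise numerically. The sketch below uses the illustrative assumptions Y = R, T = 1, p = 2 and f(s) = √s (which belongs to W^{α,2}(0, 1) for every α < 1/2); it builds f_n on a fine grid and watches the L^p error decay, consistently with the 2^{−nα} bound of Proposition E.1:

```python
import numpy as np

# Toy check of Proposition E.1 (assumed setup: Y = R, T = 1, p = 2,
# f(s) = sqrt(s)).  f_n vanishes on I_0 and, on I_k for k >= 1, equals
# the average of f over the previous dyadic interval I_{k-1}.
p = 2.0
M = 2 ** 16
s = (np.arange(M) + 0.5) / M            # midpoints of a fine grid on (0, 1]
fs = np.sqrt(s)

def lp_error(n):
    cell = np.minimum((s * 2 ** n).astype(int), 2 ** n - 1)   # index k of I_k
    means = np.bincount(cell, weights=fs, minlength=2 ** n) \
          / np.bincount(cell, minlength=2 ** n)   # approximates 2^n * int_{I_k} f
    fn = np.where(cell >= 1, means[np.maximum(cell - 1, 0)], 0.0)
    return float(np.mean(np.abs(fs - fn) ** p) ** (1.0 / p))

errs = [lp_error(n) for n in (4, 6, 8)]
print(errs)   # decreasing, consistent with the 2^{-n*alpha} bound (alpha < 1/2)
```

For this particular f the observed decay is in fact faster than 2^{−nα}, which is expected: (E.1) is only an upper bound, and √· is Lipschitz away from the origin.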

References

1. Applebaum, D.: Levy processes and stochastic integrals in Banach spaces. Probab. Math. Statist 27(1), 77-88 (2007)

2. Applebaum, D., Riedle, M.: Cylindrical Levy Processes in Banach Spaces. Proc. Lond. Math. Soc. (3) 101, 697-726 (2010)

3. Bandini, E., Russo, F.: Special weak Dirichlet processes and BSDEs driven by a random measure. arXiv:1512.06234v2 (2015)

4. Barbu, V.: Private Communication. Innsbruck (2011)

5. Billingsley, P.: Convergence of Probability Measures, 2nd edn. Wiley Series in Probability and Statistics. Probability and Statistics. A Wiley-Interscience Publication. Wiley, New York (1999)

6. Bergh, J., Lofstrom, J.: Interpolation Spaces: An Introduction, Volume 223 of Die Grundlehren der mathematischen Wissenschaften. Springer Verlag, Berlin (1976)

7. Saint Loubert Bie, E.: Etude d'une EDPS conduite par un bruit poissonnien. (Study of a stochastic partial differential equation driven by a Poisson noise). Probab. Theory Relat. Fields 111, 287-321 (1998)

8. Brezis, H., Friedman, A.: Nonlinear parabolic equations involving measures as initial conditions. J. Math. Pures Appl. (9) 62, 173-97 (1983)

9. Brzezniak, Z.: Stochastic partial differential equations in M-type 2 Banach spaces. Potential Anal. 4, 1-45 (1995)

10. Brzezniak, Z., Gatarek, D.: Martingale solutions and invariant measures for stochastic evolution equations in Banach spaces. Stoch. Process Appl. 84, 187-225 (1999)

11. Brzezniak, Z., Goldys, B., Imkeller, P., Peszat, S., Priola, E., Zabczyk, J.: Time irregularity of generalized Ornstein-Uhlenbeck processes. C. R. Math. Acad. Sci. Paris 348(5-6), 273-276 (2010)

12. Brzezniak, Z., Peszat, S.: Stochastic two dimensional Euler equations. Ann. Probab. 29, 1796-1832 (2001)

13. Brzezniak, Z., Hausenblas, E.: Maximal regularity of stochastic convolution with Levy noise. Probab. Theory Related Fields 145, 615-637 (2009)

14. Brzezniak, Z., Hausenblas, E.: Uniqueness in Law of the Itô integral driven by Levy noise. In: Proceedings of the Workshop Stochastic Analysis, Random Fields and Applications - VI, Ascona, Switzerland, Progr. Probab., vol. 63. Birkhauser/Springer Basel AG, Basel (2011)

15. Brzezniak, Z., Hausenblas, E., Motyl, E.: Uniqueness in Law of the stochastic convolution driven by Levy noise. Electron. J. Probab. 18, 1-15 (2013)

16. Brzezniak, Z., Hausenblas, E., Razafimandimby, P.: A note on the Levy-Ito decomposition in Banach spaces (in preparation)

17. Brzezniak, Z., Hausenblas, E., Zhu, J.: 2D stochastic Navier-Stokes equations driven by jump noise. Nonlinear Anal. 79, 122-139 (2013)

18. Brzezniak, Z., Ondrejat, M.: Stochastic geometric wave equations with values in compact Riemannian homogeneous spaces. Ann. Probab. 41, 1938-1977 (2013)

19. Brzezniak, Z., Zabczyk, J.: Regularity of Ornstein-Uhlenbeck processes driven by a Levy white noise. Potential Anal. 32, 153-188 (2010)

20. Cerrai, S.: Stochastic reaction-diffusion systems with multiplicative noise and non-Lipschitz reaction term. Probab. Theory Related Fields 125(2), 271-304 (2003)

21. Chalk, C.: Nonlinear evolutionary equations in Banach spaces with fractional time derivative. PhD thesis, The University of Hull, Kingston upon Hull, UK (2006)

22. Dalang, R.: Level sets and excursions of the Brownian sheet. In: Topics in Spatial Stochastic Processes (Martina Franca, 2001), pp. 167-208, Lecture Notes in Math., vol. 1802. Springer, Berlin (2003)

23. Da Prato, G.: Applications croissantes et equations devolution dans les espaces de Banach. Academic Press, London (1976)

24. Da Prato, G., Zabczyk, J.: Stochastic Equations in Infinite Dimensions, Volume 44 of Encyclopedia of Mathematics and Its Applications. Cambridge University Press, Cambridge (1992)

25. Debussche, A., Hogele, M., Imkeller, P.: Asymptotic first exit times of the Chafee-Infante equation with small heavy-tailed Levy noise. Electron. Commun. Probab. 16, 213-225 (2011)

26. Debussche, A., Hogele, M., Imkeller, P.: The Dynamics of Nonlinear Reaction-Diffusion Equations with Small Levy noise. Lecture Notes in Mathematics, vol. 2085. Springer, Berlin (2013)

27. Dettweiler, E.: Representation of Banach space valued martingales as stochastic integrals. In: Probability in Banach Spaces, 7 (Oberwolfach, 1988) Progr. Probab., vol. 21, pp. 43-62 (1990)

28. Di Blasio, G.: Linear parabolic evolution equations in Lp-spaces. Ann. Mat. Pura Appl. (4) 138, 55-104 (1984)

29. Diestel, J., Uhl, J.J., Jr.: Vector measures, with a Foreword by B. J. Pettis Mathematical Surveys, vol. 15. American Mathematical Society, Providence (1977)

30. Dirksen, S.: Ito isomorphisms for Lp-valued Poisson stochastic integrals. Ann. Probab. 42(6), 2595-2643 (2014)

31. Dong, Z., Xu, T.G.: One-dimensional stochastic Burgers' equations driven by Levy processes. J. Funct. Anal. 243, 631-678 (2007)

32. Dong, Z., Xie, Y.: Global solutions of stochastic 2D Navier-Stokes equations with Levy noise. Sci. China Ser. A 52(7), 1497-1524 (2009)

33. Dore, G., Venni, A.: On the closedness of the sum of two closed operators. Math. Z. 196, 189-201 (1987)

34. Dudley, R.: Real Analysis and Probability. Cambridge Studies in Advanced Mathematics, vol. 74. Cambridge University Press, Cambridge (2002)

35. Engel, K.-J., Nagel, R.: One-parameter semigroups for linear evolution equations. In: Graduate Texts in Mathematics, vol. 194. Springer-Verlag, New York (2000)

36. Ethier, S., Kurtz, T.: Markov Processes, Characterization and Convergence. Wiley Series in Probability and Mathematical Statistics: Probability and Mathematical Statistics. Wiley, New York (1986)

37. Fernando, B.P.W., Sritharan, S.: Nonlinear Filtering of Stochastic Navier-Stokes Equation with Ito-Levy Noise. Stoch. Anal. Appl. 31(3), 381-426 (2013)

38. Friedman, A.: Partial Differential Equations. Holt, Rinehart and Winston, Inc., New York (1969)

39. Giga, Y., Sohr, H.: Abstract Lp estimates for the Cauchy problem with applications to the Navier-Stokes equations in exterior domains. J. Funct. Anal. 102, 72-94 (1991)

40. Flandoli, F., Gatarek, D.: Martingale and stationary solutions for stochastic Navier-Stokes equations. Probab. Theory Related Fields 102, 367-391 (1995)

41. Hausenblas, E., Giri, A.: Stochastic Burgers equation with polynomial nonlinearity driven by Levy process. Commun. Stoch. Anal. 7, 91-112 (2013)

42. Hausenblas, E.: Existence, uniqueness and regularity of parabolic SPDEs driven by poisson random measure. Electron. J. Probab. 10, 1496-1546 (2005)

43. Hausenblas, E.: SPDEs driven by Poisson Random measure with non Lipschitz coefficients. Probab. Theory Related Fields 137, 161-200 (2007)

44. Hausenblas, E., Razafimandimby, P., Sango, M.: Martingale solution to equations for differential type fluids of grade two driven by random force of Levy type. Potential Anal. 38(04), 1291-1331 (2013)

45. Ikeda, N., Watanabe, S.: Stochastic Differential Equations and Diffusion Processes, Volume 24 of North-Holland Mathematical Library, 2nd ed. North-Holland Publishing Co., Amsterdam (1989)

46. Jacod, J., Shiryaev, A.: Limit theorems for stochastic processes. In: Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. 2nd edn., vol. 288. Springer-Verlag, Berlin (2003)

47. Kallenberg, O.: Foundations of Modern Probability. Probability and its Applications (New York), 2nd edn. Springer-Verlag, New York (2002)

48. Kotelenez, P.: A submartingale type inequality with applications to stochastic evolution equations. Stochastics 8(2), 139-151 (1982/83)

49. Kahane, J.P.: Some Random Series of Functions, 2nd edn. Cambridge University Press, Cambridge (1985)

50. Kwapien, S., Woyczynski, W.A.: Random Series and Stochastic Integrals: Single and Multiple. Birkhauser, Boston (1992)

51. Linde, W.: Probability in Banach Spaces - Stable and Infinitely Divisible Distributions, 2nd edn. A Wiley-Interscience Publication (1986)

52. Lions, J.L., Magenes, E.: Non-homogeneous Boundary Value Problems and Applications. Vol. I, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], p. 181. Springer-Verlag, New York (1972)

53. Lunardi, A.: Analytic Semigroups and Optimal Regularity in Parabolic Problems. [2013 reprint of the 1995 original]. Modern Birkhauser Classics. Birkhauser/Springer, Basel (1995)

54. Mandrekar, V., Rudiger, B.: Existence and uniqueness of path wise solutions for stochastic integral equations driven by Levy noise on separable Banach spaces. Stochastics 78, 189-212 (2006)

55. Marinelli, C., Rockner, M.: Well-posedness and asymptotic behavior for stochastic reaction-diffusion equations with multiplicative Poisson noise. Electron. J. Probab. 15, 1528-1555 (2010)

56. Metivier, M.: Semimartingales: A Course on Stochastic Processes. de Gruyter, Berlin (1982)

57. Motyl, E.: Stochastic Navier-Stokes equations driven by Levy noise in unbounded 3D domains. Potential Anal. 38, 863-912 (2013)

58. Mueller, C.: The heat equation with Levy noise. Stoch. Process Appl. 74, 67-82 (1998)

59. Mueller, C., Mytnik, L., Stan, A.: The heat equation with time-independent multiplicative stable Levy noise. Stoch. Process Appl. 116, 70-100 (2006)

60. Mytnik, L.: Stochastic partial differential equation driven by stable noise. Probab. Theory Related Fields 123, 157-201 (2002)

61. Jacob, N., Potrykus, A., Wu, J.-L.: Solving a non-linear stochastic pseudo-differential equation of Burgers type. Stoch. Process Appl. 120, 2447-2467 (2010)

62. Parthasarathy, K.R.: Probability measures on metric spaces. Academic Press, New York (1967)

63. Pazy, A.: Semigroups of Linear Operators and Applications to Partial Differential Equations, Volume 44 of Applied Mathematical Sciences. Springer-Verlag, New York (1983)

64. Peszat, S.: Stochastic partial differential equations with Levy noise (a few aspects). In: Dalang, R., et al. (eds.) Stochastic Analysis: A Series of Lectures, Centre Interfacultaire Bernoulli, January-June 2012, Progress in Probability 68, pp. 333-357. Springer, Berlin (2015)

65. Peszat, S., Zabczyk, J.: Stochastic Partial Differential Equations with Levy Noise, Volume 113 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge (2007)

66. Peszat, S., Zabczyk, J.: Time regularity of solutions to linear equations with Levy noise in infinite dimensions. Stoch. Process Appl. 123, 719-751 (2013)

67. Rockner, M., Zhang, T.: Stochastic evolution equations of jump type: existence, uniqueness and large deviation principles. Potential Anal. 26, 255-279 (2007)

68. Reed, M., Simon, B.: Methods of Modern Mathematical Physics. II. Fourier Analysis, Self-adjointness. Academic Press [Harcourt Brace Jovanovich Publishers], New York (1975)

69. Runst, T., Sickel, W.: Sobolev Spaces of Fractional Order, Nemytskii Operators and Nonlinear Partial Differential Equations., Volume 3 of de Gruyter Series in Nonlinear Analysis and Applications. de Gruyter, Berlin (1996)

70. Sato, K.-I.: Levy Processes and Infinitely Divisible Distributions. Cambridge Studies in Advanced Mathematics, vol. 68. Cambridge University Press, Cambridge (2005)

71. Seeley, R.: Norms and domains of the complex powers AzB. Amer. J. Math. 93, 299-309 (1971)

72. Triebel, H.: Interpolation Theory, Function Spaces, Differential Operators, North-Holland Mathematical Library, vol. 18. North-Holland Publishing Co., Amsterdam-New York (1978)

73. Triebel, H.: Interpolation Theory, Function Spaces, Differential Operators. 2nd rev. a. enl ed. Barth, Leipzig (1995)

74. Truman, A., Wu, J.-L.: Fractal Burgers' Equation Driven by Levy Noise. In: Stochastic Partial Differential Equations and Applications—VII, Lect. Notes Pure Appl. Math., vol. 245, pp. 295-310. Chapman & Hall/CRC, Boca Raton (2006)

75. Truman, A., Wu, J.-L.: Stochastic Burgers' Equation with Levy Space-time White Noise. In: Probabilistic Methods in Fluids, pp. 298-323. World Sci. Publ., River Edge (2003)

76. Whitt, W.: Some useful functions for functional limit theorems. Math. Oper. Res. 5, 67-85 (1980)

77. Zhu, J., Brzezniak, Z., Hausenblas, E.: Maximal inequality of stochastic convolution driven by compensated Poisson random measures in Banach spaces. Ann. Inst. H. Poincare Probab. Statist. 53(2), 937-956 (2017). https://doi.org/10.1214/16-AIHP743