# Sharp weighted logarithmic bound for maximal operators


﻿Arch. Math.

Open access at Springerlink.com. doi 10.1007/s00013-016-0980-5.
Sharp weighted logarithmic bound for maximal operators

Abstract. The paper contains the study of sharp weighted logarithmic estimates for maximal operators on probability spaces equipped with a tree-like structure. These inequalities can be regarded as LlogL versions of the classical estimates of Fefferman and Stein. The proof exploits the existence of a certain special function, enjoying appropriate majorization and concavity conditions.

Mathematics Subject Classification. Primary 42B25; Secondary 46E30, 60G42.

Keywords. Maximal, Dyadic, Bellman function, Best constants.

1. Introduction. The dyadic maximal operator $\mathcal{M}$ on $\mathbb{R}^d$ is the operator acting by the formula

$$\mathcal{M}f(x) = \sup\left\{\frac{1}{|Q|}\int_Q |f(u)|\,du : x\in Q,\ Q\subseteq\mathbb{R}^d \text{ is a dyadic cube}\right\},$$

where $f$ is a locally integrable function on $\mathbb{R}^d$ and the dyadic cubes are those formed by the grids $2^{-N}\mathbb{Z}^d$, $N = 0, 1, 2, \ldots$. This operator plays a prominent role in analysis and PDEs, and in applications it is often of interest to have optimal, or at least tight, bounds for its norms. For instance, $\mathcal{M}$ satisfies the weak-type (1,1) inequality

$$\lambda\,\big|\{x\in\mathbb{R}^d : \mathcal{M}f(x) > \lambda\}\big| \le \int_{\{\mathcal{M}f>\lambda\}} |f(x)|\,dx \qquad (1.1)$$

for any $f \in L^1(\mathbb{R}^d)$ and any $\lambda > 0$. This bound is sharp: there is a non-zero $f$ for which both sides are equal. By a straightforward interpolation argument, the above fact leads to the related $L^p$ estimate

Research supported by the National Science Center, Poland, Grant DEC-2014/14/E/ ST1/00532.

Published online: 12 October 2016


$$\|\mathcal{M}f\|_{L^p(\mathbb{R}^d)} \le \frac{p}{p-1}\,\|f\|_{L^p(\mathbb{R}^d)}, \qquad 1 < p < \infty, \qquad (1.2)$$

in which the constant $p/(p-1)$ is also optimal. These two statements are absolutely classical, and form a starting point for various extensions and numerous applications. The literature on the subject is extremely large, and we will only mention here some statements which are closely related to the subject of this paper. First, both (1.1) and (1.2) hold in the setting of maximal operators $\mathcal{M}_\mathcal{T}$ associated with a tree-like structure $\mathcal{T}$. To introduce the necessary background, let $(X, \mu)$ be a nonatomic probability space. Two measurable subsets $A$, $B$ of $X$ are said to be almost disjoint if $\mu(A \cap B) = 0$.

Definition 1.1. A set $\mathcal{T}$ of measurable subsets of $X$ will be called a tree if the following conditions are satisfied:

1. $X \in \mathcal{T}$ and for every $I \in \mathcal{T}$ we have $\mu(I) > 0$.

2. For every $I \in \mathcal{T}$ there is a finite subset $C(I) \subset \mathcal{T}$ containing at least two elements such that

(a) the elements of $C(I)$ are pairwise almost disjoint subsets of $I$,

(b) $I = \bigcup C(I)$.

3. $\mathcal{T} = \bigcup_{m\ge 0} \mathcal{T}^m$, where $\mathcal{T}^0 = \{X\}$ and $\mathcal{T}^{m+1} = \bigcup_{I\in\mathcal{T}^m} C(I)$.

4. We have $\lim_{m\to\infty} \sup_{I\in\mathcal{T}^m} \mu(I) = 0$.

Any probability space equipped with a tree gives rise to the corresponding maximal operator $\mathcal{M}_\mathcal{T}$, given by

$$\mathcal{M}_\mathcal{T} f(x) = \sup\left\{\frac{1}{\mu(I)}\int_I |f(u)|\,d\mu(u) : x\in I,\ I\in\mathcal{T}\right\}.$$
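As an informal illustration (not part of the argument), $\mathcal{M}_\mathcal{T}$ for the dyadic tree on $X = [0,1)$ with Lebesgue measure can be modelled discretely: a function constant on cells of length $2^{-N}$ is a list of $2^N$ values, and the maximal function is computed by scanning the generations of the tree. The helper below is our own sketch; it also spot-checks the weak-type bound (1.1) for one sample $f$.

```python
def dyadic_maximal(f):
    """Maximal function over the dyadic tree on [0,1).

    f: list of 2**N values, f[i] being the value on [i/2**N, (i+1)/2**N).
    Returns M_T f on the same grid (the sup of averages over dyadic
    intervals of length >= 2**-N containing each cell).
    """
    n = len(f)                       # must be a power of two
    best = [abs(v) for v in f]       # generation N: the cell itself
    avgs = [abs(v) for v in f]
    while len(avgs) > 1:             # pass to coarser generations
        avgs = [(avgs[2*i] + avgs[2*i+1]) / 2 for i in range(len(avgs) // 2)]
        cells = n // len(avgs)       # grid cells per tree element
        best = [max(best[i], avgs[i // cells]) for i in range(n)]
    return best

# spot-check of the weak-type bound (1.1) for one f and one lambda
f = [1.0, 0.0, 0.0, 0.0]
mf = dyadic_maximal(f)               # [1.0, 0.5, 0.25, 0.25]
lam = 0.3
E = [i for i in range(len(f)) if mf[i] > lam]
assert lam * len(E) / len(f) <= sum(abs(f[i]) for i in E) / len(f)
```
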

In this paper, we are interested in weighted logarithmic estimates for the operator $\mathcal{M}_\mathcal{T}$. Here the word "weight" refers to a nonnegative, integrable function on $X$. It follows from the work of Fefferman and Stein [1] that there exists a finite constant $C$ such that

$$\lambda\, w(\{x\in X : \mathcal{M}_\mathcal{T} f(x) > \lambda\}) \le C \int_X |f(x)|\,\mathcal{M}_\mathcal{T} w(x)\,d\mu(x), \qquad \lambda > 0,$$

for all weights $w$ and all $f \in L^1(X, \mathcal{M}_\mathcal{T} w)$ (here we have used the standard notation $w(E) = \int_E w\,d\mu$ for any measurable subset $E$ of $X$). By interpolation, for any $p \in (1, \infty)$ there is a finite $C_p$ such that

$$\left(\int_X (\mathcal{M}_\mathcal{T} f)^p\, w\,d\mu\right)^{1/p} \le C_p \left(\int_X |f(x)|^p\, \mathcal{M}_\mathcal{T} w(x)\,d\mu(x)\right)^{1/p}.$$

The principal goal of this paper is to establish the following sharp $L\log L$ estimate, which can be regarded as a limiting version of the above $L^p$ bound as $p \to 1$.

Theorem 1.2. Let $w$ be a weight on $X$ satisfying $\int_X \mathcal{M}_\mathcal{T} w\,d\mu < \infty$. Then for any $K > 0$ and any measurable function $f : X \to \mathbb{R}$ we have

$$\int_X (\mathcal{M}_\mathcal{T} f)\,w\,d\mu \le K \int_X |f|\log|f|\,\mathcal{M}_\mathcal{T} w\,d\mu + L(K)\int_X \mathcal{M}_\mathcal{T} w\,d\mu, \qquad (1.3)$$

where

$$L(K) = \begin{cases} \infty & \text{if } K \le 1,\\[4pt] \dfrac{K^2}{(K-1)e} & \text{if } K > 1.\end{cases}$$

For each K, the constant L(K) is the best possible.

Here by the optimality of $L(K)$ we mean that for any $L < L(K)$ and any probability space $(X, \mu)$ with a tree $\mathcal{T}$, there is a weight $w$ and a function $f$ for which (1.3) does not hold. Actually, when constructing such counterexamples, one may restrict oneself to constant weights, i.e., the inequality (1.3) is already sharp in the unweighted setting. This is closely related to the results of Gilat [2] on logarithmic bounds for martingales and the Hardy-Littlewood maximal operator on the positive half-line.

The proof of (1.3) will be based on the existence of a certain special function, enjoying appropriate majorization and concavity properties. This approach, called the Bellman function technique, has gathered a lot of interest in the recent literature: see, e.g., [4,6-9,11], and the references therein. It is nice that here the Bellman function method not only establishes a logarithmic bound, but it also yields the optimal constant involved.

We have organized the rest of this paper as follows. In the following section we introduce the special function corresponding to (1.3). Section 3 contains the proof of Theorem 1.2.

2. A special function. Throughout this section, let $K > 1$ be a fixed parameter. We start by writing down an elementary estimate, which will be used several times in our further considerations. Namely, one easily verifies that for all $x > 0$ we have

$$-Kx\log\left(\frac{(K-1)x}{K}\right) \le L(K). \qquad (2.1)$$
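As a quick numerical sanity check (ours, not part of the paper's argument): elementary calculus shows that $x \mapsto -Kx\log((K-1)x/K)$ attains its maximum at $x^* = K/((K-1)e)$, where it equals $K^2/((K-1)e) = L(K)$, so $L(K)$ is the smallest constant for which (2.1) can hold. The snippet confirms this for a few values of $K$.

```python
import math

def L(K):
    # the constant of Theorem 1.2 for K > 1
    return K * K / ((K - 1) * math.e)

def lhs(K, x):
    # left-hand side of (2.1)
    return -K * x * math.log((K - 1) * x / K)

for K in (1.5, 2.0, 5.0):
    xstar = K / ((K - 1) * math.e)          # critical point of lhs(K, .)
    assert abs(lhs(K, xstar) - L(K)) < 1e-12
    # no sampled x exceeds L(K): (2.1) holds, with equality at xstar
    assert max(lhs(K, 0.01 * j) for j in range(1, 2001)) <= L(K) + 1e-12
```
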

As announced in the introductory section, the key role in the proof of the inequality (1.3) is played by a certain special function. Introduce $B : (0,\infty)^4 \to \mathbb{R}$ by the formula

$$B(x,y,w,v) = \begin{cases} yw - Kxv\log x - L(K)v & \text{if } y \ge \frac{K}{K-1}\,x,\\[4pt] yw + (K-1)yv - Kxv\log\left(\frac{K-1}{K}\,ey\right) - L(K)v & \text{if } y \le \frac{K}{K-1}\,x.\end{cases}$$

One easily checks that the function B is of class C1. Further crucial properties of this object will be studied in the two lemmas below.
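The $C^1$ claim can be seen concretely in a small numerical check (ours, under the formula for $B$ displayed above): along the gluing curve $y = Kx/(K-1)$, the two branches take the same value and both one-sided $y$-derivatives equal $w$.

```python
import math

def L(K):
    return K * K / ((K - 1) * math.e)

def B(K, x, y, w, v):
    # the special function, glued along y = K*x/(K-1)
    if y >= K * x / (K - 1):
        return y * w - K * x * v * math.log(x) - L(K) * v
    return (y * w + (K - 1) * y * v
            - K * x * v * math.log((K - 1) * math.e * y / K) - L(K) * v)

K, x, w, v = 2.0, 1.3, 0.7, 1.1
y0 = K * x / (K - 1)                     # the gluing point in y
eps = 1e-7
# continuity across the gluing curve
assert abs(B(K, x, y0 + eps, w, v) - B(K, x, y0 - eps, w, v)) < 1e-5
# both one-sided y-derivatives are close to w
d_up = (B(K, x, y0 + 2 * eps, w, v) - B(K, x, y0 + eps, w, v)) / eps
d_dn = (B(K, x, y0 - eps, w, v) - B(K, x, y0 - 2 * eps, w, v)) / eps
assert abs(d_up - w) < 1e-4 and abs(d_dn - w) < 1e-4
```
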

Lemma 2.1. 1. For any $x, w > 0$ we have

$$B(x,x,w,w) \le 0. \qquad (2.2)$$

2. For any $(x,y,w,v) \in (0,\infty)^4$ we have

$$B(x,y,w,v) \ge yw - Kxv\log x - L(K)v. \qquad (2.3)$$

Proof. To establish (2.2), observe that

$$B(x,x,w,w) = Kxw\left(1 - \log\left(\frac{K-1}{K}\,ex\right)\right) - L(K)w = w\left(-Kx\log\left(\frac{(K-1)x}{K}\right) - L(K)\right)$$

is nonpositive, due to (2.1). The proof of (2.3) is also simple. Clearly, we may assume that $y \le Kx/(K-1)$, and then the estimate is equivalent to

$$\frac{(K-1)y}{Kx} \ge \log\left(\frac{(K-1)y}{Kx}\right) + 1.$$

This is obviously true, because of the elementary bound $1 + \log s \le s$, valid for all $s > 0$. □
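Both parts of Lemma 2.1 are easy to test numerically; the following sketch (ours, with the branches of $B$ as displayed in the previous section) checks (2.2) and (2.3) at randomly sampled points.

```python
import math, random

def L(K):
    return K * K / ((K - 1) * math.e)

def B(K, x, y, w, v):
    if y >= K * x / (K - 1):
        return y * w - K * x * v * math.log(x) - L(K) * v
    return (y * w + (K - 1) * y * v
            - K * x * v * math.log((K - 1) * math.e * y / K) - L(K) * v)

random.seed(0)
K = 2.5
for _ in range(10_000):
    x, y, w, v = (math.exp(random.uniform(-3, 3)) for _ in range(4))
    # (2.2): B is nonpositive on the diagonal (x, x, w, w)
    assert B(K, x, x, w, w) <= 1e-9
    # (2.3): B dominates the integrand appearing in Step 4 of the proof
    assert B(K, x, y, w, v) >= y * w - K * x * v * math.log(x) - L(K) * v - 1e-9
```
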

We turn our attention to the main property of B. It can be regarded as a concavity-type condition.

Lemma 2.2. Fix $(x,y,w,v) \in (0,\infty)^4$ satisfying $y \ge x$ and $v \ge w$. Then for any $h \ge -x$ and any $k \ge -w$, we have the estimate

$$B(x+h,\, y\vee(x+h),\, w+k,\, v\vee(w+k)) \le B(x,y,w,v) + B_x(x,y,w,v)h + B_w(x,y,w,v)k. \qquad (2.4)$$

Proof. For the sake of convenience, we split the reasoning into four cases.

Case I. If $x+h \le y$ and $w+k \le v$, then the inequality is immediate: it suffices to note that for fixed $y$ and $v$, the function $(x,w) \mapsto B(x,y,w,v)$ is concave on $(0,y] \times (0,v]$. Hence, in what follows, we assume that $x+h > y$ or $w+k > v$.

Case II. Suppose that $x+h \ge y$ and $w+k \le v$. If $y \le Kx/(K-1)$, then (2.4) reads

$$(x+h)(w+k) + (K-1)(x+h)v - K(x+h)v\log\left(\frac{K-1}{K}\,e(x+h)\right) \le y(w+k) + (K-1)yv - K(x+h)v\log\left(\frac{K-1}{K}\,ey\right),$$

or, equivalently,

$$(s-1)(w+k) + (K-1)v(s-1) - Ksv\log s \le 0,$$

where $s = (x+h)/y \in [1,\infty)$. Denoting the left-hand side by $F(s)$, we see that $F(1) = 0$ and $F'(s) = w + k - v - Kv\log s \le 0$. So, $F$ is nonpositive on $[1,\infty)$, which is exactly the claim.

If $y \ge Kx/(K-1)$, then the left-hand side of (2.4) does not depend on $y$, while the right-hand side increases when $y$ increases. Hence the validity of (2.4) follows from the analysis of the boundary case $y = Kx/(K-1)$, just provided above.

Case III. Assume that $x+h \ge y$ and $w+k \ge v$. If $y \le Kx/(K-1)$, then (2.4) becomes

$$(w+k)\left[-K(x+h)\log\left(\frac{(K-1)(x+h)}{K}\right) - y - L(K)\right] \le (K-1)yv - K(x+h)v\log\left(\frac{K-1}{K}\,ey\right) - L(K)v.$$

By (2.1), the expression in the square brackets above is nonpositive, and hence the left-hand side is a nonincreasing function of $k$. So, it suffices to establish the bound for $k = v - w$; but this boundary case has already been considered above (Case II).

So, suppose that y > Kx/(K — 1). Then, as in Case II, the left-hand side of (2.4) does not depend on y, while the right-hand side is a nondecreasing function of y. Hence, (2.4) follows from its validity in the limit case y = Kx/(K — 1).

Case IV. It remains to consider the possibility $x+h \le y$ and $w+k \ge v$. This case is the most elaborate. Fix $x$, $y$, $w$, $v$, $h$ and consider the function

$$H(k) = B(x+h, y, w+k, w+k) - B(x,y,w,v) - B_x(x,y,w,v)h - B_w(x,y,w,v)k$$

on $[v-w, \infty)$. Our aim is to prove that $H$ is nonpositive. By Case I, we know that $H(v-w) \le 0$, and hence we will be done if we show that $H$ is nonincreasing. Assume first that $y \ge K(x+h)/(K-1)$; then we see that

$$H'(k) = -K(x+h)\log(x+h) - L(K) \le -K(x+h)\log\left(\frac{(K-1)(x+h)}{K}\right) - L(K) \le 0 \qquad (2.5)$$

(the latter estimate holds by (2.1)) and we are done. If $y \le K(x+h)/(K-1)$, then

$$H'(k) = (K-1)y - K(x+h)\log\left(\frac{K-1}{K}\,ey\right) - L(K).$$

By the above assumptions, $x+h$ belongs to $[(K-1)y/K, y]$, and since $H'(k)$ is an affine function of $x+h$, it is enough to check that $H'(k) \le 0$ for $x+h = (K-1)y/K$ and $x+h = y$ separately. The first case has already been verified above: see (2.5). If $x+h = y$, then

$$H'(k) = -(x+h) - K(x+h)\log\left(\frac{(K-1)(x+h)}{K}\right) - L(K) \le -K(x+h)\log\left(\frac{(K-1)(x+h)}{K}\right) - L(K) \le 0,$$

again by (2.1). This finishes the analysis of Case IV and hence completes the proof of the lemma. □
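The concavity-type condition (2.4) can likewise be stress-tested numerically. In the sketch below (ours, with $B$ as displayed in Section 2), the partial derivatives $B_x$, $B_w$ are approximated by central differences, which is legitimate since $B$ is of class $C^1$; admissible points and increments are sampled at random.

```python
import math, random

def L(K):
    return K * K / ((K - 1) * math.e)

def B(K, x, y, w, v):
    if y >= K * x / (K - 1):
        return y * w - K * x * v * math.log(x) - L(K) * v
    return (y * w + (K - 1) * y * v
            - K * x * v * math.log((K - 1) * math.e * y / K) - L(K) * v)

def Bx(K, x, y, w, v, eps=1e-6):
    return (B(K, x + eps, y, w, v) - B(K, x - eps, y, w, v)) / (2 * eps)

def Bw(K, x, y, w, v, eps=1e-6):
    return (B(K, x, y, w + eps, v) - B(K, x, y, w - eps, v)) / (2 * eps)

random.seed(1)
K = 2.0
for _ in range(5000):
    x, w = random.uniform(0.1, 3), random.uniform(0.1, 3)
    y, v = x * random.uniform(1, 3), w * random.uniform(1, 3)   # y >= x, v >= w
    h = random.uniform(-0.9 * x, 3)                             # h > -x
    k = random.uniform(-0.9 * w, 3)                             # k > -w
    lhs = B(K, x + h, max(y, x + h), w + k, max(v, w + k))
    rhs = B(K, x, y, w, v) + Bx(K, x, y, w, v) * h + Bw(K, x, y, w, v) * k
    assert lhs <= rhs + 1e-3          # (2.4), up to finite-difference error
```
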

3. Proof of Theorem 1.2.

3.1. Proof of (1.3). For the sake of the clarity of the exposition, we have decided to split the argumentation into a few separate parts.

Step 1: Some reductions. First, it is enough to prove the inequality for $K > 1$ only, since for $K \le 1$ it is trivial (then $L(K) = \infty$). Next, note that we may and do assume that $f$ is nonnegative: the passage from $f$ to $|f|$ does not alter the right-hand side of (1.3), while the left-hand side can only increase. Now, by a straightforward continuity argument, we may assume that $f$ and $w$ are strictly positive. Finally, we may restrict ourselves to those functions which satisfy $\int_X |f|\log|f|\,\mathcal{M}_\mathcal{T} w\,d\mu < \infty$; indeed, otherwise there is nothing to prove.

Step 2: Auxiliary sequences. Define four sequences $(x_n)_{n\ge0}$, $(y_n)_{n\ge0}$, $(w_n)_{n\ge0}$, $(v_n)_{n\ge0}$ of measurable functions on $X$ as follows. Given a nonnegative integer $n$, an element $E$ of $\mathcal{T}^n$, and a point $x \in E$, set

$$x_n(x) = \frac{1}{\mu(E)}\int_E f\,d\mu, \qquad w_n(x) = \frac{1}{\mu(E)}\int_E w\,d\mu,$$

and $y_n(x) = \max_{0\le k\le n} x_k(x)$, $v_n(x) = \max_{0\le k\le n} w_k(x)$. These objects enjoy the following structural property, which will be of crucial importance in the proof. Let $n$, $E$ be as above and let $E_1, E_2, \ldots, E_m$ be the elements of $\mathcal{T}^{n+1}$ whose union is $E$. Then we easily check that

$$\frac{1}{\mu(E)}\int_E x_n\,d\mu = \sum_{i=1}^m \frac{\mu(E_i)}{\mu(E)}\cdot\frac{1}{\mu(E_i)}\int_{E_i} x_{n+1}\,d\mu, \qquad (3.1)$$

$$\frac{1}{\mu(E)}\int_E w_n\,d\mu = \sum_{i=1}^m \frac{\mu(E_i)}{\mu(E)}\cdot\frac{1}{\mu(E_i)}\int_{E_i} w_{n+1}\,d\mu, \qquad (3.2)$$

and

$$y_{n+1}(x) = \max\{y_n(x),\,x_{n+1}(x)\}, \qquad v_{n+1}(x) = \max\{v_n(x),\,w_{n+1}(x)\}.$$

The above objects have a very nice probabilistic interpretation. For any $n \ge 0$, let $\mathcal{F}_n$ be the $\sigma$-algebra generated by $\mathcal{T}^n$. Then $(x_n)_{n\ge0}$ and $(w_n)_{n\ge0}$ are adapted martingales generated by the functions $f$ and $w$ (i.e., $x_n = \mathbb{E}(f\,|\,\mathcal{F}_n)$ and $w_n = \mathbb{E}(w\,|\,\mathcal{F}_n)$, $n = 0, 1, 2, \ldots$); furthermore, $(y_n)_{n\ge0}$ and $(v_n)_{n\ge0}$ are the maximal functions associated with these martingales.
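In the model case of the dyadic tree on $[0,1)$ with Lebesgue measure, these sequences are easy to tabulate, and both the averaging identity behind (3.1) and the recursion for $y_n$ can be verified directly. The following sketch (ours; the resolution $2^{-3}$ and the sample values are arbitrary) does exactly that.

```python
def averages(values, n):
    """x_n on the finest grid: the average of `values` over the dyadic
    interval of generation n containing each grid cell."""
    N = len(values)
    cells = N // (2 ** n)                 # grid cells per generation-n interval
    return [sum(values[(i // cells) * cells:(i // cells + 1) * cells]) / cells
            for i in range(N)]

f = [4.0, 1.0, 2.0, 1.0, 0.5, 0.5, 3.0, 0.0]     # sample f at resolution 2**-3
depth = 3
xs = [averages(f, n) for n in range(depth + 1)]  # martingale x_n = E(f | F_n)
ys = [[max(xs[k][i] for k in range(n + 1)) for i in range(len(f))]
      for n in range(depth + 1)]                 # maximal function y_n

# identity behind (3.1): x_n on E is the mu-weighted average of x_{n+1}
# over the children of E (all children have equal measure here)
for n in range(depth):
    cells = len(f) // (2 ** n)
    for p in range(2 ** n):
        block = xs[n + 1][p * cells:(p + 1) * cells]
        assert abs(xs[n][p * cells] - sum(block) / cells) < 1e-12

# recursion y_{n+1} = max(y_n, x_{n+1})
for n in range(depth):
    assert all(ys[n + 1][i] == max(ys[n][i], xs[n + 1][i]) for i in range(len(f)))
```
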

Step 3: The main argument. The purpose of this part is to show that the sequence $(\int_X B(x_n, y_n, w_n, v_n)\,d\mu)_{n\ge0}$ is nonincreasing. To accomplish this, fix an arbitrary integer $n \ge 0$, pick $E \in \mathcal{T}^n$, and let $E_1, E_2, \ldots, E_m$ be the elements of $\mathcal{T}^{n+1}$ whose union is $E$. By the very definition, the functions $x_n$, $y_n$, $w_n$, and $v_n$ are constant on $E$; denote the corresponding values by $x$, $y$, $w$, and $v$, and note that $y \ge x$, $v \ge w$. Similarly, $x_{n+1}$ and $w_{n+1}$ are constant on each of the sets $E_1, E_2, \ldots, E_m$; denote the corresponding values by $x+h_1, x+h_2, \ldots, x+h_m$ and $w+k_1, w+k_2, \ldots, w+k_m$. Then (3.1) implies that

$$x = \sum_{i=1}^m \frac{\mu(E_i)}{\mu(E)}(x+h_i), \qquad\text{or}\qquad \sum_{i=1}^m \frac{\mu(E_i)}{\mu(E)}\,h_i = 0. \qquad (3.3)$$

Similarly, (3.2) implies

$$\sum_{i=1}^m \frac{\mu(E_i)}{\mu(E)}\,k_i = 0. \qquad (3.4)$$

Now, by (2.4) applied to $x$, $y$, $w$, $v$ and $h_j$, $k_j$, we get

$$B(x+h_j,\, y\vee(x+h_j),\, w+k_j,\, v\vee(w+k_j)) \le B(x,y,w,v) + B_x(x,y,w,v)h_j + B_w(x,y,w,v)k_j.$$

Multiply both sides by $\mu(E_j)/\mu(E)$, sum the obtained inequalities over $j = 1, 2, \ldots, m$, and apply (3.3) and (3.4) to get

$$\sum_{j=1}^m \frac{\mu(E_j)}{\mu(E)}\,B(x+h_j,\, y\vee(x+h_j),\, w+k_j,\, v\vee(w+k_j)) \le B(x,y,w,v).$$

It is easy to see that this is equivalent to

$$\int_E B(x_{n+1}, y_{n+1}, w_{n+1}, v_{n+1})\,d\mu \le \int_E B(x_n, y_n, w_n, v_n)\,d\mu,$$

and summing over all $E \in \mathcal{T}^n$ yields the aforementioned monotonicity of the sequence $(\int_X B(x_n, y_n, w_n, v_n)\,d\mu)_{n\ge0}$.

Step 4: Completion of the proof. Combining Step 3 with (2.3), we obtain that for any nonnegative integer $n$,

$$\int_X \big[y_n w_n - K x_n v_n \log x_n - L(K)v_n\big]\,d\mu \le \int_X B(x_n, y_n, w_n, v_n)\,d\mu \le \int_X B(x_0, y_0, w_0, v_0)\,d\mu \le 0.$$

Here in the last passage, we have used (2.2) and the equalities $x_0 = y_0$, $w_0 = v_0$. By Jensen's inequality and the convexity of the function $u \mapsto u\log u$ on $[0,\infty)$, we see that for each $E \in \mathcal{T}^n$,

$$\int_E x_n v_n \log x_n\,d\mu \le \int_E f\,v_n \log f\,d\mu.$$

Furthermore, we have $\int_E y_n w_n\,d\mu = \int_E y_n w\,d\mu$. Thus, the preceding estimate yields

$$\int_X y_n w\,d\mu \le K\int_X f \log f\,v_n\,d\mu + L(K)\int_X v_n\,d\mu.$$

Now we apply a limiting argument. If $n \to \infty$, then $y_n = \max_{0\le k\le n} x_k \to \sup_{k\ge0} x_k = \mathcal{M}_\mathcal{T} f$ and, similarly, $v_n \to \mathcal{M}_\mathcal{T} w$ almost everywhere with respect to the measure $\mu$. Therefore, by Lebesgue's dominated convergence theorem, $\int_X f\log f\,v_n\,d\mu \to \int_X f\log f\,\mathcal{M}_\mathcal{T} w\,d\mu$ (here we use the assumptions $\int_X f\log f\,\mathcal{M}_\mathcal{T} w\,d\mu < \infty$ and $\int_X \mathcal{M}_\mathcal{T} w\,d\mu < \infty$). In addition, we have $\int_X y_n w\,d\mu \to \int_X (\mathcal{M}_\mathcal{T} f)\,w\,d\mu$ and $\int_X v_n\,d\mu \to \int_X \mathcal{M}_\mathcal{T} w\,d\mu$, by Lebesgue's monotone convergence theorem. This establishes (1.3).
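As an unofficial end-to-end check of (1.3) in the unweighted dyadic model ($w \equiv 1$, so $\mathcal{M}_\mathcal{T} w \equiv 1$), one can compare both sides on $[0,1)$ for a few piecewise-constant nonnegative $f$; the helper and the sample functions below are ours.

```python
import math

def L(K):
    return K * K / ((K - 1) * math.e)

def dyadic_maximal(f):
    # maximal function over dyadic intervals of length >= 2**-N on [0,1)
    n = len(f)
    best, avgs = list(f), list(f)
    while len(avgs) > 1:
        avgs = [(avgs[2*i] + avgs[2*i+1]) / 2 for i in range(len(avgs) // 2)]
        cells = n // len(avgs)
        best = [max(best[i], avgs[i // cells]) for i in range(n)]
    return best

K = 1.5
samples = ([8.0] + [0.1] * 7,
           [1.0] * 8,
           [5.0, 4.0, 0.5, 0.5, 2.0, 0.2, 0.2, 3.0])
for f in samples:
    n = len(f)
    lhs = sum(dyadic_maximal(f)) / n                       # int M_T f dmu
    rhs = K * sum(v * math.log(v) for v in f) / n + L(K)   # K int f log f + L(K)
    assert lhs <= rhs + 1e-12                              # inequality (1.3)
```
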

3.2. Sharpness. Let $(X, \mu)$ be a given probability space with a tree $\mathcal{T}$, and take the constant weight $w \equiv 1$. Fix an arbitrary constant $K > 1$ and a positive parameter $\delta$ (which will eventually be sent to 0). To provide the construction of an appropriate function (or rather a functional sequence), we will need the following lemma, which can be found in [3].

Lemma 3.1. For every $I \in \mathcal{T}$ and every $a \in (0,1)$ there is a subfamily $F(I) \subset \mathcal{T}$ consisting of pairwise almost disjoint subsets of $I$ such that

$$\mu\Big(\bigcup_{J\in F(I)} J\Big) = \sum_{J\in F(I)} \mu(J) = a\,\mu(I).$$

First we introduce a certain sequence $(A_n)_{n\ge0}$ of subsets of $X$. The construction is inductive, and in each step we require the corresponding subset to be a union of pairwise almost disjoint elements of $\mathcal{T}$: for each $n$, $A_n = \bigcup_{I\in F_n} I$. First, set $A_0 = X$; since $X \in \mathcal{T}^0$, we see that $F_0 = \{X\}$. Suppose that we have successfully constructed $A_n = \bigcup_{I\in F_n} I$. Pick $I \in F_n$ and apply Lemma 3.1 with $a = (1+\delta)^{-1}$. Let $F_{n+1}$ be the union of all the families $F(I)$ corresponding to all the elements $I \in F_n$, and put $A_{n+1} = \bigcup_{I\in F_{n+1}} I$.

Directly from the definition, we see that $\mu(A_n) = (1+\delta)^{-n}$, $n = 0, 1, 2, \ldots$. Consider the function

$$f = \frac{1}{e}\sum_{n=0}^\infty \left(\frac{K+\delta}{K}\right)^n \chi_{A_n\setminus A_{n+1}}.$$

Fix a nonnegative integer $n$ and let $I \in F_n$ be an "atom" of $A_n$. From the above construction, we have

$$\frac{1}{\mu(I)}\int_I f\,d\mu = \frac{1}{e}\sum_{k\ge n}\left(\frac{K+\delta}{K}\right)^k (1+\delta)^{n-k}\,\frac{\delta}{1+\delta} = \frac{K}{(K-1)e}\left(\frac{K+\delta}{K}\right)^n.$$

This implies $\mathcal{M}_\mathcal{T} f \ge \frac{K}{(K-1)e}\left(\frac{K+\delta}{K}\right)^n$ on $A_n$ and hence

$$\mathcal{M}_\mathcal{T} f \ge \frac{K}{(K-1)e}\sum_{n=0}^\infty \left(\frac{K+\delta}{K}\right)^n \chi_{A_n\setminus A_{n+1}}.$$

Consequently,

$$\int_X \mathcal{M}_\mathcal{T} f\,d\mu \ge \frac{K}{(K-1)e}\sum_{n=0}^\infty\left(\frac{K+\delta}{K}\right)^n (1+\delta)^{-n}\,\frac{\delta}{1+\delta} = \frac{K^2}{(K-1)^2 e}.$$

Next, we compute that

$$\int_X f\log f\,d\mu = \frac{1}{e}\sum_{n=0}^\infty\left(\frac{K+\delta}{K}\right)^n\left(n\log\frac{K+\delta}{K} - 1\right)(1+\delta)^{-n}\,\frac{\delta}{1+\delta} = \frac{K(K+\delta)}{(K-1)^2\delta e}\log\frac{K+\delta}{K} - \frac{K}{(K-1)e}.$$

Therefore, if $\delta \to 0$, then

$$\int_X \mathcal{M}_\mathcal{T} f\,d\mu - K\int_X f\log f\,d\mu \to \frac{K^2}{(K-1)^2 e} - \frac{K^2}{(K-1)^2 e} + \frac{K^2}{(K-1)e} = L(K),$$

and hence for each K > 1, the constant L(K) is indeed the best possible.
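The closed-form values of the two integrals computed above make the $\delta \to 0$ limit easy to confirm numerically. The sketch below (ours) evaluates the resulting lower bound for $\int \mathcal{M}_\mathcal{T} f\,d\mu - K\int f\log f\,d\mu$ at small $\delta$ and checks that it approaches $L(K) = K^2/((K-1)e)$ from below.

```python
import math

K = 2.0
L_K = K * K / ((K - 1) * math.e)

def lower_bound(delta):
    """int M_T f dmu - K * int f log f dmu for the extremal f = f_delta,
    using the closed-form values derived above."""
    int_Mf = K * K / ((K - 1) ** 2 * math.e)
    int_flogf = (K * (K + delta) * math.log((K + delta) / K)
                 / (delta * (K - 1) ** 2 * math.e) - K / ((K - 1) * math.e))
    return int_Mf - K * int_flogf

# the bound approaches L(K) as delta -> 0 (the gap is O(delta))
assert abs(lower_bound(1e-4) - L_K) < 1e-3
assert abs(lower_bound(1e-6) - L_K) < 1e-5
for d in (1e-2, 1e-3, 1e-4):
    assert lower_bound(d) < L_K      # approach is from below
```
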

It remains to show that if $K \le 1$, then no finite constant $L(K)$ works in (1.3). This is straightforward: suppose, on the contrary, that there are $K' \le 1$ and $L(K') < \infty$ for which (1.3) is satisfied. Since $x\log x + e^{-1} \ge 0$ for all $x > 0$, we see that for any $K > 1$ and any nonnegative $f$,

$$\begin{aligned} \int_X (\mathcal{M}_\mathcal{T} f)\,w\,d\mu &\le K'\int_X f\log f\,\mathcal{M}_\mathcal{T} w\,d\mu + L(K')\int_X \mathcal{M}_\mathcal{T} w\,d\mu\\ &= K'\int_X (f\log f + e^{-1})\,\mathcal{M}_\mathcal{T} w\,d\mu + (L(K') - K'e^{-1})\int_X \mathcal{M}_\mathcal{T} w\,d\mu\\ &\le K\int_X (f\log f + e^{-1})\,\mathcal{M}_\mathcal{T} w\,d\mu + (L(K') - K'e^{-1})\int_X \mathcal{M}_\mathcal{T} w\,d\mu\\ &= K\int_X f\log f\,\mathcal{M}_\mathcal{T} w\,d\mu + (L(K') + (K-K')e^{-1})\int_X \mathcal{M}_\mathcal{T} w\,d\mu. \end{aligned}$$

Therefore, (1.3) holds with the constant $L(K') + (K-K')e^{-1}$ in place of $L(K)$, and hence $L(K) \le L(K') + (K-K')e^{-1}$ for every $K > 1$. But this is a contradiction: as we already know, $L(K)$ explodes as $K \downarrow 1$. Consequently, $L(K)$ must be infinite for $K \le 1$. This completes the proof.

Acknowledgements. The author would like to thank the anonymous referee for several helpful suggestions.

References

[1] C. Fefferman and E. M. Stein, Some maximal inequalities, Amer. J. Math. 93 (1971), 107-115.

[2] D. Gilat, The best bound in the L log L inequality of Hardy and Littlewood and its martingale counterpart, Proc. Amer. Math. Soc. 97 (1986), 429-436.

[3] A. D. Melas, The Bellman functions of dyadic-like maximal operators and related inequalities, Adv. Math. 192 (2005), 310-340.

[4] F. Nazarov and S. Treil, The hunt for Bellman function: applications to estimates of singular integral operators and to other classical problems in harmonic analysis, Algebra i Analiz 8 (1997), 32-162.

[5] X. Shi, Two inequalities related to geometric mean operators, J. Zhejiang Teachers College 1 (1980), 21-25.

[6] L. Slavin, A. Stokolos, and V. Vasyunin, Monge-Ampère equations and Bellman functions: The dyadic maximal operator, C. R. Acad. Sci. Paris, Sér. I 346 (2008), 585-588.

[7] L. Slavin and V. Vasyunin, Sharp results in the integral-form John-Nirenberg inequality, Trans. Amer. Math. Soc. 363 (2011), 4135-4169.

[8] L. Slavin and A. Volberg, Bellman function and the H1-BMO duality, Harmonic analysis, partial differential equations, and related topics, 113-126, Contemp. Math., 428, Amer. Math. Soc., Providence, RI, 2007.

[9] V. Vasyunin and A. Volberg, Monge-Ampère equation and Bellman optimization of Carleson embedding theorems, Linear and complex analysis, 195-238, Amer. Math. Soc. Transl. Ser. 2, 226, Amer. Math. Soc., Providence, RI, 2009.

[10] V. Vasyunin and A. Volberg, Burkholder's function via Monge-Ampère equation, Illinois J. Math. 54 (2010), 1393-1428.

[11] J. Wittwer, Survey article: a user's guide to Bellman functions, Rocky Mountain J. Math. 41 (2011), 631-661.

[12] X. Yin and B. Muckenhoupt, Weighted inequalities for the maximal geometric mean operator, Proc. Amer. Math. Soc. 124 (1996), 75-81.

Department of Mathematics, Informatics and Mechanics, University of Warsaw, Banacha 2, 02-097 Warsaw, Poland