25TH ANNIVERSARY

CHAOS: An Interdisciplinary Journal of Nonlinear Science

Defining chaos

Brian R. Hunt and Edward Ott

Citation: Chaos 25, 097618 (2015); doi: 10.1063/1.4922973

Published by AIP Publishing.


Defining chaos

Brian R. Hunt1 and Edward Ott2

1Institute for Physical Science and Technology, University of Maryland, College Park, Maryland 20742, USA
2Institute for Research in Electronics and Applied Physics, University of Maryland, College Park, Maryland 20742, USA

(Received 17 January 2015; accepted 13 April 2015; published online 28 July 2015)

In this paper, we propose, discuss, and illustrate a computationally feasible definition of chaos which can be applied very generally to situations that are commonly encountered, including attractors, repellers, and non-periodically forced systems. This definition is based on an entropy-like quantity, which we call "expansion entropy," and we define chaos as occurring when this quantity is positive. We relate and compare expansion entropy to the well-known concept of topological entropy to which it is equivalent under appropriate conditions. We also present example illustrations, discuss computational implementations, and point out issues arising from attempts at giving definitions of chaos that are not entropy-based. © 2015 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 Unported License. [http://dx.doi.org/10.1063/1.4922973]

Toward the end of the 19th century, Poincaré demonstrated the occurrence of extremely complicated orbits in the Newtonian dynamics of three gravitationally attracting bodies. This complexity is now called chaos and has received a vast amount of attention since Poincaré's early discovery. In spite of this abundant past and current work, there is still no broadly applicable, convenient, generally accepted definition of the term chaos. In this paper, we advocate a particular entropy-based definition that appears to be very simple, while, at the same time, is readily accessible to numerical computation, and can be very generally applied to a variety of often-encountered situations, including attractors, repellers, and non-periodically forced systems. We also review and compare various previous definitions of chaos.

I. INITIAL DISCUSSION

While the word chaos is widely used in science and mathematics, there are a variety of ways of defining it. Thus, for this 25th anniversary issue of the journal CHAOS, we are motivated to review issues that arise when attempting to formulate a generally applicable definition of chaos, and to advocate a particular entropy-based definition that seems to us to be especially apt. We also relate our proposed definition to previous definitions.

Intuitively, perhaps the two most prominent (not necessarily independent) attributes of what scientists commonly think of as chaos are the presence of complex orbit structure and extreme sensitivity of orbits to small perturbations. Indeed, in the paper by Li and Yorke1 where the term chaos was introduced in its now widely accepted nonlinear dynamics context, the term was motivated by the simultaneous presence of unstable periodic orbits of all periods, as well as an uncountable infinity of non-periodic orbits. Thus, Li and Yorke's introduction of this terminology was motivated by the chaos attribute of complex orbit structure. On the other hand, Lorenz was concerned with weather forecasting and accordingly focused on the chaos attribute of temporally exponential increase of the sensitivity of orbit locations to small initial perturbations. As we will discuss, these two attributes can be viewed as "two sides of the same coin."

We think of a definition of chaos as being "good" if it conforms to common intuitive notions of chaos (such as complex orbit structure and orbit sensitivity) and, at the same time, has the following three desirable features:

• Generality: The definition should work for almost all the examples that typical readers of this journal are likely to judge as chaotic.

• Simplicity: The definition should be fairly concise and not too technical.

• Computability: The definition should allow a practical, straightforward computational implementation for discerning the existence of chaos in a model.

Considering the issue of generality, one would like a definition of chaos to be applicable not only to attractors but also to non-attracting sets, often called repellers. With respect to chaotic repellers,3-5 we note that they are central to the physically relevant topics of fractal basin boundaries,6 chaotic transients, and chaotic scattering,7 occurring, for example, in fluid dynamics,8,9 celestial mechanics,10 chemistry,11 and atomic physics.12

Furthermore, again considering the issue of generality, due to their common occurrence in applications, we desire that our definition of chaos be applicable to non-autonomous dynamical systems (i.e., systems that are externally forced by time dependent inputs), including external inputs that are temporally quasi-periodic,13 stochastic,14-16 or are themselves chaotic. Here, physical examples include quasi-periodic forcing of atmospheric jets,17 quasi-periodic forcing of stellar luminosity variation by two superposed stellar modal oscillations,18,19 advective transport in fluids with temporally and spatially irregular flow fields,20-24 and phase synchronism of chaos by noisy or chaotic drives.25 We emphasize that, when considering externally forced systems, we are interested in identifying chaos in the response of the system to a particular realization of the forcing, not in characterizing whether the forcing is chaotic.

An important point for consideration of non-periodically forced chaotic systems is that the notion of a compact invariant set, which is typically used in definitions of chaos for autonomous systems (including Poincare maps of periodically forced systems), may not be appropriate or convenient for situations with non-periodic forcing. Furthermore, in practice, it may be difficult to locate or detect an invariant set that is not an attractor. Thus, rather than defining chaos for an invariant set, we will instead consider a notion of chaos for the dynamics within any given bounded positive-volume subset S of the state space. We call such a set S a restraining region. For autonomous systems, chaos for an invariant set can be detected by taking S to be a neighborhood of the desired invariant set.

In our opinion, the currently most satisfactory way of defining chaos for autonomous systems is by the existence of positive topological entropy or metric entropy. We note, however, that the standard definitions of these entropies are quite difficult to straightforwardly implement in a numerical procedure. In addition, while generalizations to the original definitions of topological and metric entropy have been proposed, we view it as desirable to have a relatively simple definition that is applicable very broadly. However, we do not address here the question of identifying chaos in experimental data, which presents additional challenges, especially in the cases of non-attracting sets and externally forced systems.

Motivated by the considerations above, in Sec. II we introduce and discuss the definition of an alternate entropy quantity that we call "expansion entropy." The expansion entropy of an n-dimensional dynamical system on a restraining region S is the difference between two asymptotic exponential rates: first, the maximum over d < n of the rate at which the system expands d-dimensional volume within S; and second, the rate at which n-dimensional volume leaves S (this rate is 0 for an invariant set). We define chaos as the existence of positive expansion entropy on a given restraining region. Expansion entropy generalizes (to nonautonomous systems and noninvariant restraining regions) a quantity that was formulated by Sacksteder and Shub26 in the case of an autonomous system on a compact manifold. In this restricted case, by the results of Kozlovski,27 expansion entropy is equal to topological entropy for infinitely differentiable maps. In Sec. III, we present examples of the application of our definition of expansion entropy to various systems, and also provide illustrative numerical evaluations of expansion entropy for some of these examples. Section IV discusses topological entropy and previous work on computation of this quantity. Section V discusses issues that arise in previous non-entropy-based definitions of chaos.

II. EXPANSION ENTROPY

A. Definition

Our definition of expansion entropy, which we denote H0, is closely related to previous definitions of topological entropy, to which it is equivalent under appropriate conditions (see Sec. IV). Expansion entropy uses the linearization of the dynamical system and a notion of volume on its state space; thus, unlike topological entropy, it is defined only for smooth dynamical systems. On the other hand, expansion entropy does not require the identification of a compact invariant set. As we will discuss, the differences in the definitions may make the criterion H0 > 0 attractive as a general definition of chaos in smooth dynamical systems.

We assume that the state space of the dynamical system is a finite-dimensional manifold M. (For example, if the manifold M is n-dimensional Euclidean space, the state x at a given time is a vector [x1, x2, ..., xn] where each xi is a real number. If some of the coordinates are angle variables, they can be taken modulo 2π, resulting in manifolds such as a circle, cylinder, or torus.) We write μ(S) for the volume28 of a subset S of M, and write dμ(x) for integration with respect to this volume; for n-dimensional Euclidean space, one can take dμ(x) = d^n x. Given a dynamical system on M, we will use an integral to define its expansion entropy on a closed subset S (the restraining region) that has positive, finite volume. The set S need not be invariant under the system.

We consider a deterministic dynamical system to be defined by an evolution operator, by which we mean a family f of maps f_{t',t} : M → M, with the interpretation that if x and x' are the states of the system at times t and t', respectively, then x' = f_{t',t}(x). (For example, f_{t',t} could represent the solution from time t to t' of a system of differential equations.) The family f must satisfy the identities f_{t,t}(x) = x and f_{t'',t}(x) = f_{t'',t'}(f_{t',t}(x)). The maps f_{t',t} are defined for t', t being integer-valued (discrete time) or being real numbers (continuous time), with the restriction that t' ≥ t if the system is noninvertible. We allow the system to be nonautonomous, including the case where f_{t',t} is a realization of a stochastic dynamical system.29 If the system is autonomous, then f_{t',t} depends only on t' − t, and in this case we will often write f_{t',t} = f^T, where T = t' − t. Regardless, we assume that f_{t',t}(x) is a differentiable function of x.

Recall that the singular values of a matrix A are the square roots of the eigenvalues of AᵀA. Thinking of A as a linear transformation, the image of the unit ball under A is an ellipsoid, and the singular values of A are the semiaxes of this ellipsoid. Let G(A) be the product of the singular values of A that are greater than 1; if none of the singular values are greater than 1, let G(A) = 1. Then, G(A) is roughly the number of ε-balls needed to cover the image of an ε-ball under A.

If the matrix A is n × n, consider a d-dimensional plane P_d in n-dimensional Euclidean space, where d ≤ n. Let W be a d-dimensional ball in P_d, let A(W) denote the image of W under A, and let μ_d denote d-dimensional volume. Then, G(A) is the maximum over orientations of P_d and over d of μ_d(A(W))/μ_d(W). Thus, G(A) is the largest possible growth ratio of d-dimensional volumes under A. Below, we will apply G to the derivative matrix Df_{t',t}, in which case it represents a local volume growth ratio for the (typically nonlinear) map f_{t',t}.
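As an illustration of this definition, G(A) can be computed directly from the singular value decomposition. The following minimal sketch (ours, not from the paper) assumes NumPy:

```python
import numpy as np

def G(A):
    """Product of the singular values of A that exceed 1 (G(A) = 1 if none do)."""
    s = np.linalg.svd(A, compute_uv=False)
    expanding = s[s > 1.0]
    return float(np.prod(expanding)) if expanding.size else 1.0

# A stretches by 3 in one direction and contracts by 1/2 in the other,
# so only the expanding factor contributes: G(A) = 3.
A = np.diag([3.0, 0.5])
print(G(A))  # → 3.0
```

For a purely contracting matrix, G returns 1, reflecting that a single ε-ball covers the image of an ε-ball.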

Let S_{t',t} be the set of x such that f_{t'',t}(x) ∈ S for all t'' between t and t' (that is, the trajectory of x from t to t' under f never leaves S). Let

E_{t',t}(f, S) = [1/μ(S)] ∫_{S_{t',t}} G(Df_{t',t}(x)) dμ(x).   (1)

Definition of expansion entropy: We define the expansion entropy H0 to be

H0(f, S) = lim_{t'→∞} [ln E_{t',t}(f, S)]/(t' − t).   (2)

We consider H0 and other limiting quantities below to be well-defined only if the limit involved exists.30 We remark that if the system f is nonautonomous and the restraining region S is not invariant, H0(f, S) could potentially depend on the starting time t in addition to f and S. Also, it can be shown that the value of H0 is invariant under differentiable changes of coordinates that are nonsingular on S.

To help interpret the definition of H0(f, S), we now replace 1/μ(S) with [1/μ(S_{t',t})][μ(S_{t',t})/μ(S)] in Eq. (1). The definition (2) can then be expressed as

H0(f, S) = lim_{t'→∞} [ln Ê_{t',t}(f, S)]/(t' − t) − 1/τ+,   (3)

where

1/τ+ = lim_{t'→∞} ln[μ(S)/μ(S_{t',t})]/(t' − t),   (4)

and

Ê_{t',t}(f, S) = [1/μ(S_{t',t})] ∫_{S_{t',t}} G(Df_{t',t}(x)) dμ(x).

Thus, we can view H0(f, S) as the difference of two exponential rates, given by the limits in Eqs. (3) and (4), with the following interpretations. Imagine that N initial conditions are uniformly sprinkled throughout the volume of S at time t, and that N is very large (N → ∞). The second term 1/τ+ in Eq. (3) is then the exponential decay rate, as t' increases, of the number of trajectories from these initial conditions that remain in S for all times between t and t'. The quantity Ê_{t',t}(f, S) is the average over these remaining trajectories of the maximum local d-dimensional volume growth ratio along the trajectory. Thus, the first term in Eq. (3) is the exponential growth rate of this average.

It can be shown that the exponential growth rate of G(Df_{t',t}(x)) as t' → ∞ is the sum of the positive Lyapunov exponents of the trajectory starting at x at time t. Thus, the limit in Eq. (3) is, in a sense, an average of this sum of positive Lyapunov exponents over trajectories that remain in S for all (forward) time. The criterion H0 > 0, which we propose for defining chaos, requires that this exponential volume growth rate strictly exceed the exponential rate 1/τ+ at which trajectories leave S. Note that if S is forward invariant, e.g., an absorbing neighborhood of an attractor, then 1/τ+ = 0.

Some points of interest for this entropy definition are that

(i) it applies to non-autonomous systems,

(ii) it assigns an entropy value H0 to every restraining region S in the manifold M, and

(iii) it directly suggests a computational technique for numerically estimating H0 (see Sec. IIC).

Finally, notice that

H0(f, S') ≤ H0(f, S) if S' ⊂ S.   (5)

This property follows from the fact that S'_{t',t} ⊂ S_{t',t} if S' ⊂ S, and consequently E_{t',t}(f, S') ≤ [μ(S)/μ(S')] E_{t',t}(f, S); the time-independent factor μ(S)/μ(S') does not affect the limit in Eq. (2). Thus, if there is chaos according to the definition H0 > 0 with respect to a restraining region S', then there is also chaos with respect to every restraining region S that contains S'. In particular, as illustrated by the example in Sec. III B, this implies that the expansion entropy will detect chaos (H0 > 0) within a restraining region S when S also contains a nonchaotic attractor, even when the chaos exists only on a repeller.

B. Expansion entropy of the inverse system

In this section, we show that the expansion entropy of an autonomous, invertible system is the same as the expansion entropy of the inverse system. (Note that this is also true for the topological entropy; see Sec. IV.) This equality results from the following identity for all invertible systems (not necessarily autonomous):

E_{t,t'}(f, S) = E_{t',t}(f, S).

To verify this identity, notice that f_{t,t'} is the inverse of f_{t',t}. Below we use the notation x' = f_{t',t}(x), and consequently x = f_{t,t'}(x'). Then, Df_{t,t'}(x') and Df_{t',t}(x) are inverses, and hence the singular values of Df_{t,t'}(x') are the inverses of the singular values of Df_{t',t}(x). Since the product of the singular values of a square matrix is the absolute value of its determinant, if A is invertible then |det A| = G(A)/G(A⁻¹). In particular, |det Df_{t',t}(x)| = G(Df_{t',t}(x))/G(Df_{t,t'}(x')). Also, f_{t',t}(S_{t',t}) = S_{t,t'}. Writing E_{t,t'}(f, S) as an integral over x' and then making the change of variables x' = f_{t',t}(x), we obtain

E_{t,t'}(f, S) = [1/μ(S)] ∫_{S_{t,t'}} G(Df_{t,t'}(x')) dμ(x')
             = [1/μ(S)] ∫_{S_{t',t}} G(Df_{t,t'}(x')) |det Df_{t',t}(x)| dμ(x)
             = [1/μ(S)] ∫_{S_{t',t}} G(Df_{t',t}(x)) dμ(x) = E_{t',t}(f, S).
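As a quick numerical sanity check (ours, not part of the paper), the identity |det A| = G(A)/G(A⁻¹) used above can be verified for a random invertible matrix:

```python
import numpy as np

def G(A):
    # product of singular values of A exceeding 1; G(A) = 1 if none exceed 1
    s = np.linalg.svd(A, compute_uv=False)
    return float(np.prod(s[s > 1.0])) if np.any(s > 1.0) else 1.0

# a generic random matrix is invertible and has no singular value exactly 1
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
lhs = abs(np.linalg.det(A))
rhs = G(A) / G(np.linalg.inv(A))
print(abs(lhs - rhs) < 1e-10)  # → True: the two sides agree
```

The identity holds because the singular values of A⁻¹ are the reciprocals of those of A, so the ratio G(A)/G(A⁻¹) recovers the full product of singular values, i.e., |det A|.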

If f is autonomous and invertible, we write f_{t',t} = f^{t'−t} and (f^T)⁻¹ = f^{−T}. Then

H0(f⁻¹, S) = lim_{T→∞} ln[E_{T,0}(f⁻¹, S)]/T
          = lim_{T→∞} ln[E_{−T,0}(f, S)]/T
          = lim_{T→∞} ln[E_{0,−T}(f, S)]/T
          = lim_{T→∞} ln[E_{T,0}(f, S)]/T = H0(f, S).

Here, the first equality is by definition, the second equality is a change of notation, the third equality follows from the time-reversal identity for E derived above, and the fourth equality uses the fact that f is autonomous.

C. Discussion of numerical evaluation of expansion entropy

With respect to point (iii) in Sec. II A, we can imagine a computation of H0 proceeding as follows. First, randomly sprinkle a large number of initial conditions {x1, x2, ..., xN} uniformly in S. Then evolve each trajectory f_{T,0}(x_i) and the corresponding tangent map Df_{T,0}(x_i) forward in time, continuing to evolve only as long as the trajectory remains in S. At a discrete sequence of times T, compute

E_T(f, S) = N⁻¹ Σ'_i G(Df_{T,0}(x_i)),   (6)

where the prime on the summation symbol signifies that only those i values for which f_{T,0}(x_i) remains in S up to time T are included in the sum. From our definition of E in Eq. (1), we see that E_T(f, S) is an estimate of E_{T,0}(f, S). Plotting ln E_T(f, S) versus T, for sufficiently large N and T, we expect to find an approximately linear relationship. Accordingly, we can estimate H0 as the slope of a straight line fitted to such data (see also Ref. 31 for a similar approach in two dimensions).

As in other such procedures, judgment and experimentation are called for in determining reliable choices of N and the range of T over which to do the fit, and such choices will be constrained by computer resources. In practice, we find it useful to choose a number, say, 100, of different samples of size N, compute ln E_T(f, S) for each sample, and take the mean and standard deviation of these logarithms. Not only does this allow us to estimate the sampling error, but it also produces a more reliable mean estimate than computing ln E_T(f, S) for a single sample of 100N points. Example illustrations of this computational approach are given in Secs. III B-III E.
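To make the procedure concrete, here is a minimal one-dimensional sketch (ours; the helper names are our own). As a test case we use the doubling map x ↦ 2x mod 1 on S = [0, 1], for which no trajectories escape, E_T = 2^T exactly, and so the fitted slope should approach H0 = ln 2:

```python
import numpy as np

def estimate_H0(step, dstep, in_S, N=100000, T_max=20, seed=0):
    """Estimate H0 by sprinkling N points in S = [0, 1] and fitting ln E_T vs. T."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, N)
    G = np.ones(N)                 # running |Df_{T,0}| along each orbit (1D case)
    alive = np.ones(N, dtype=bool)
    log_ET = []
    for T in range(1, T_max + 1):
        G[alive] *= np.abs(dstep(x[alive]))   # accumulate the tangent map
        x[alive] = step(x[alive])
        alive &= in_S(x)                      # drop orbits that leave S
        # Eq. (6): E_T = (1/N) * sum over surviving orbits of G(Df_{T,0}),
        # where G(Df) = max(|Df|, 1) in one dimension
        log_ET.append(np.log(np.sum(np.maximum(G[alive], 1.0)) / N))
    T = np.arange(1, T_max + 1)
    return np.polyfit(T, np.array(log_ET), 1)[0]   # slope of ln E_T vs. T

# Doubling map: everything stays in S and |Df| = 2 everywhere, so H0 = ln 2.
h = estimate_H0(step=lambda x: (2.0 * x) % 1.0,
                dstep=lambda x: np.full_like(x, 2.0),
                in_S=lambda x: (x >= 0.0) & (x <= 1.0))
print(h)  # ≈ 0.693 (= ln 2)
```

The same routine applies to maps with escape; the `alive` mask then implements the primed sum of Eq. (6).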

Specializing to the case of an autonomous invertible system f, since Sec. II B shows that the expansion entropies of f and f⁻¹ are the same, one could do a numerical computation of H0 using either f or f⁻¹. The question then arises as to which of these two alternatives is preferable from the point of view of computational cost and accuracy. In the remainder of this section, we argue that it is computationally preferable to calculate H0 from f if f is volume contracting in S, while calculation from f⁻¹ is preferable if f is volume expanding in S. In order to see this, we generalize Eq. (4) to define both forward and backward exponential decay rates

1/τ± = lim_{T→∞} (1/T) ln[μ(S)/μ(S_{±T,0})],   (7)

where S_{±T,0} is, as in Sec. II A, the set of initial conditions at time 0 whose trajectories under f remain in S between times 0 and ±T, respectively. That is, in terms of the previously stated numerical procedure for calculating the expansion entropy, 1/τ+ is the exponential temporal decay rate of the number of initial conditions sprinkled uniformly throughout S at time 0 that lead to orbits that never leave S up to time T, while 1/τ− is the analogous quantity taking the initially sprinkled points backward from time 0 to time −T. Since, to estimate the integral in Eq. (1), we need to compute the average expansion rates only from those initial conditions that have not left S, statistics at any given T are improved when the number of such orbits is largest. Further, the estimate of the limit T → +∞ dictates that we make T large. These two considerations indicate that the forward (respectively, backward) calculation of H0 will be computationally more efficient if τ+ > τ− (respectively, τ− > τ+). Subtracting the definition (7) of 1/τ+ from the definition of 1/τ−, and using the fact that S_{−T,0} = f_{T,0}(S_{T,0}) for an autonomous invertible system, we obtain

(1/τ+) − (1/τ−) = lim_{T→∞} (1/T) ln[μ(f_{T,0}(S_{T,0}))/μ(S_{T,0})].

The right-hand side is positive (respectively, negative) when the map is volume expanding (respectively, contracting) in S. Thus, τ− > τ+ if the map is volume expanding, while τ+ > τ− if the map is volume contracting. In particular, if S is a neighborhood of an attractor, it is best to employ a forward time calculation. We note that the common examples of the Hénon map and the Lorenz system are uniformly volume contracting at all points in state space (implying that τ+ > τ−), while Hamiltonian systems are volume preserving (implying that τ+ = τ−).
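As a small check of the uniform-contraction claim (our illustration, using the standard form of the Hénon map written out in Sec. III E), the Hénon Jacobian has constant determinant −b, so for |b| < 1 the map contracts area by the same factor at every point:

```python
import numpy as np

# Jacobian of the Henon map x' = a - x^2 + b*y, y' = x at the point (x, y)
def henon_jacobian(x, b=0.3):
    return np.array([[-2.0 * x, b],
                     [1.0, 0.0]])

# |det J| = b everywhere, independent of x: uniform area contraction for b < 1.
for x in (-2.0, 0.0, 1.7):
    print(f"{abs(np.linalg.det(henon_jacobian(x))):.3f}")  # → 0.300 each time
```

This is why, for the Hénon map, the forward-time calculation of H0 is the efficient choice.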

D. Generalization to q-order expansion entropy

In past work on fractal dimension, the box-counting dimension has been generalized to a spectrum of dimensions often denoted Dq, where the box-counting dimension corresponds to q = 0, and the index q can be any nonnegative number.32-34 In addition, a spectrum of entropy-like quantities, again depending on an index q > 0, has been introduced by Grassberger and Procaccia,34,35 where q = 0 corresponds to the topological entropy, and q = 1 corresponds to the metric entropy. Thus, motivated by these past works, it is natural to introduce an analogous spectrum of q-order expansion entropies, Hq, and to consider whether they are useful with respect to the issue of defining chaos.

In Appendix A, we introduce and discuss a natural way of specifying Hq. In particular, the form defining Hq is specified so that it gives Eqs. (1) and (2) when q = 0, gives an expansion entropy analogue of the entropy of Grassberger and Procaccia,34,35 and also gives a correspondence for q = 1 with previous results for the metric entropy of repellers4,36 and with Pesin's formula37 for the metric entropy for attractors. However, as we will argue in Appendix A, q = 0 is special in regard to defining chaos. In particular, Appendix A will consider Hq for an example in which S contains an attracting fixed point and a chaotic repeller (see Sec. III B). For this example, it is shown that H0 > 0, while Hq for q > 0 can be zero. Thus, H0 successfully detects the chaos within S, but Hq for q > 0 may not.

III. ILLUSTRATIVE EXAMPLES

A. Attracting and repelling fixed points

Consider a one-dimensional differentiable map f with a fixed point x0; for simplicity, we assume that Df(x0) ≠ ±1. Let the restraining region S be an interval containing x0 on which |Df(x)| ≠ 1; that is, f is either uniformly expanding or uniformly contracting on S. In either case, we show below that the expansion entropy H0(f, S) is zero, i.e., that fixed points are not chaotic according to our definition.

In the case of an attracting fixed point, |Df_{t',t}(x)| < 1, and hence G(Df_{t',t}(x)) = 1, for all x ∈ S and t' > t. Also, S_{t',t} = S for t' > t. Then, from Eqs. (1) and (2) we have E_{t',t}(f, S) = 1 for t' > t and H0(f, S) = 0.

In the expanding case, |Df_{t',t}(x)| > 1 for trajectories that remain in S from time t to t'. Also, S_{t',t} is a subinterval of S whose endpoints map to the endpoints of S under f_{t',t}. Thus,

E_{t',t}(f, S) = [1/μ(S)] ∫_{S_{t',t}} |Df_{t',t}(x)| dx = μ(f_{t',t}(S_{t',t}))/μ(S) = μ(S)/μ(S) = 1.

Once again, H0(f, S) = 0.

For fixed points (or periodic orbits) of higher-dimensional systems, similar arguments can be made, though they are more complicated in the case when the fixed point has both stable and unstable directions. The essence of these calculations is that any growth in the integrand G of Eq. (1) as t' − t increases is balanced (up to a time-independent multiplicative constant) by a reduction in the volume of S_{t',t}. The conclusion remains that H0 = 0, i.e., that isolated periodic orbits are not chaotic.

B. Example: A one-dimensional map with a chaotic repeller and an attracting fixed point

Consider the one-dimensional map f shown in Fig. 1 and the two restraining regions S and S' ⊂ S, where we take S = [−1, 1.5] and S' = [0, 1]. The map is linear with derivative 3 on [0, 1/3] and linear with derivative −2 on [1/2, 1], mapping each of these intervals to [0, 1]. For this example, the fixed point x = −1/2 attracts almost all initial conditions with respect to Lebesgue measure in S.

On the other hand, S' contains an invariant Cantor set that is commonly called a chaotic repeller. We now show that

FIG. 1. A map illustrating a case where S = [−1, 1.5] contains a chaotic repeller (in S' = [0, 1]) and an attracting fixed point (x = −1/2).

according to our definition, f exhibits chaos by having positive expansion entropy H0 in both S and S'. The invariant Cantor set consists of all initial conditions in [0, 1] whose trajectories never land in the interval (1/3, 1/2). The set S'_{T,0} of initial conditions whose trajectories remain in [0, 1] from time 0 to time T consists of 2^T intervals corresponding to all possible strings of length T of the letters L and R, where L denotes an iteration in the interval [0, 1/3] and R denotes an iteration in the interval [1/2, 1]. A string with k L's and T − k R's corresponds to an interval of length 3^{−k} 2^{k−T}, on which Df^T = 3^k(−2)^{T−k}. Then, the integral of G(Df^T) = |Df^T| on each such interval is 1. Thus, E_{T,0}(f, S') = 2^T, and hence H0(f, S') = ln 2. In accordance with property (5), since S contains S', we also have H0(f, S) = ln 2.

In Appendix A, in addition to defining the quantity Hq discussed in Sec. II D, we also evaluate Hq for the map in Fig. 1. We find for the smaller restraining region S' that Hq(f, S') > 0 for all q > 0, but for the larger restraining region S there is a critical value qc < 1 such that Hq(f, S) = 0 for q > qc. For q = 1, we have H1(f, S') = (2/5)ln(5/2) + (3/5)ln(5/3) > 0, while H1(f, S) = 0. Our interpretation is that H1(f, S) is dominated by the dynamics of Lebesgue almost every initial condition, whose trajectory approaches the fixed point attractor, while H0(f, S) is dominated by the chaotic saddle in S'. We therefore conclude that H1 (and similarly Hq for q > 0) is not an appropriate tool for detecting non-attracting chaos in a restraining region.

Next, we use this example to illustrate the numerical computation of H0. Our procedure, as explained in Sec. II C, is to choose a sample size N and a range of T values and to do the following for each T. Using Eq. (6), we compute an estimate E_T of E_{T,0} for each of 100 different samples of N points each in the restraining region, and compute the mean and standard deviation of the 100 samples. The results for N = 1000 and N = 100,000 are shown in Fig. 2 for S' and in Fig. 3 for S. The estimated value of H0 is the slope of the solid curve in an appropriate scaling interval. The scaling interval for a given N can be judged by consistency of the results with a larger value of N, in addition to smallness of the error bars and straightness of the curve. Notice that the somewhat arbitrarily chosen restraining region S yields nearly as long a scaling interval as the restraining region S' that is chosen with knowledge of the invariant Cantor set.
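The computation for the Fig. 1 map on S' = [0, 1] can be sketched as follows (our illustration of the Sec. II C procedure, specialized to this map; orbits entering (1/3, 1/2) leave S' on the next iterate and are discarded from the primed sum of Eq. (6)):

```python
import numpy as np

# Map of Fig. 1 restricted to S' = [0, 1]: slope 3 on [0, 1/3], slope -2 on
# [1/2, 1]; points in (1/3, 1/2) escape S' and are dropped from the sum.
rng = np.random.default_rng(0)
N, T_max = 100000, 12
x = rng.uniform(0.0, 1.0, N)
G = np.ones(N)                       # running |Df_{T,0}| along each orbit
alive = np.ones(N, dtype=bool)
log_ET = []
for T in range(1, T_max + 1):
    left = alive & (x <= 1.0 / 3.0)
    right = alive & (x >= 0.5)
    alive = left | right             # orbits in (1/3, 1/2) escape
    G[left] *= 3.0
    G[right] *= 2.0
    x[left] *= 3.0                   # left branch: x -> 3x
    x[right] = 2.0 * (1.0 - x[right])  # right branch: x -> 2(1 - x)
    log_ET.append(np.log(np.sum(G[alive]) / N))   # Eq. (6) estimate of E_T
slope = np.polyfit(np.arange(1, T_max + 1), log_ET, 1)[0]
print(slope)  # close to ln 2 ~ 0.693, the analytic value of H0(f, S')
```

Here the fitted slope plays the role of the scaling-interval slope read off Fig. 2, and the surviving fraction decays like (5/6)^T, illustrating the escape rate 1/τ+ of Eq. (4).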

C. Example: A random one-dimensional map

Consider the one-dimensional random map

θ_{t+1} = [θ_t + α_t + K sin θ_t] mod 2π,   (8)

where K > 0 and α_0, α_1, α_2, ... are independent random variables that are uniformly distributed in the circle [0, 2π). We take the restraining region S to be the entire circle.

Notice that

|dθ_T/dθ_0| = ∏_{t=0}^{T−1} |1 + K cos θ_t|,

FIG. 2. Computation of ln E_T versus T for the map of Fig. 1 with restraining region S'. For each T, we computed ln E_T(f, S') for 100 different samples, with N = 1000 randomly chosen initial conditions in each sample (top figure) and N = 100,000 randomly chosen initial conditions in each sample (bottom figure). The solid curve shows the mean of the 100 samples, and the error bars show their standard deviation. As we discussed in Sec. II C, the slope of the solid curve should, in the limit of large N and T, approximate H0(f, S'). The dashed line has slope ln 2, which is the value we obtained analytically for H0(f, S').

and that

E_{T,0} = ⟨max(|dθ_T/dθ_0|, 1)⟩_{θ_0},

where ⟨···⟩_θ denotes an average over θ. If θ_0 is uniformly distributed, then θ_0, θ_1, θ_2, ... are independent and uniformly distributed, so that

⟨|dθ_T/dθ_0|⟩_{θ_0, θ_1, ..., θ_{T−1}} = ∏_{t=0}^{T−1} ⟨|1 + K cos θ_t|⟩_θ = ⟨|1 + K cos θ|⟩_θ^T.

This suggests that for a typical realization of the random inputs, H0 ≈ h, where

h = ln ⟨|1 + K cos θ|⟩_θ.

For 0 ≤ K ≤ 1,

h = ln ⟨1 + K cos θ⟩_θ = ln 1 = 0,

while for K > 1, |1 + K cos θ| > 1 + K cos θ on a set of θ with positive measure, so

h = ln ⟨|1 + K cos θ|⟩_θ > ln ⟨1 + K cos θ⟩_θ = ln 1 = 0.

FIG. 3. Computation of ln E_T versus T for the map of Fig. 1 with restraining region S. This is the analogue of Fig. 2 with S' replaced by S.

For 0 ≤ K ≤ 1, each map is a diffeomorphism, so dθ_T/dθ_0 > 0, and

E_{T,0} = ⟨max(dθ_T/dθ_0, 1)⟩_{θ_0} ≤ ⟨dθ_T/dθ_0 + 1⟩_{θ_0} = 2.

Thus, E_{T,0} is not exponentially increasing, so, indeed, H0 = 0 for 0 ≤ K ≤ 1. Numerical experiments agree with the argument above that H0 > 0 for K > 1, though establishing that the transition to chaos (according to our definition) occurs exactly at K = 1 would require a more definitive study. In Fig. 4, we show the computed ln E_T (see Secs. II C and III B) versus T for K = 1.5.
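The rate h = ln⟨|1 + K cos θ|⟩_θ is easy to evaluate numerically. The following small sketch (ours; the function name is our own) averages over a uniform grid of θ values:

```python
import numpy as np

def h_rate(K, n=200000):
    # h = ln <|1 + K cos theta|> over uniform theta in [0, 2*pi);
    # a simple Riemann average over an evenly spaced grid is adequate here
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return float(np.log(np.mean(np.abs(1.0 + K * np.cos(theta)))))

print(h_rate(0.5))  # ~0: for 0 <= K <= 1 the absolute value never activates
print(h_rate(1.5))  # > 0: for K > 1 the average exceeds 1, so h > 0
```

For K = 1.5, this gives the slope of the dashed line in Fig. 4; the transition of h from zero to positive occurs at K = 1, consistent with the argument above.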

D. Example: Shear map on the 2-torus

This example illustrates a case where orbits are dense and, as in chaos, typical nearby orbits separate from each other with increasing time. However, the rate of separation is linear, rather than exponential, in time. The example is the following map of the 2-torus:

φ_{t+1} = [φ_t + θ_t] mod 2π,   θ_{t+1} = [θ_t + ω] mod 2π,   (9)

where ω/(2π) is irrational.38,39 As shown in Fig. 5, the image under this map of a curve C looping around the (θ, φ)-torus once in the θ direction is a curve C' that loops once in the φ direction, as well as once in the θ direction, with the number of φ loops increasing by one on each subsequent iterate. Orbits are dense, and nearby initial conditions with different

FIG. 4. Computation of ln E_T versus T for the random circle map (8) with K = 1.5. For each T, we computed ln E_T(f, S) for 100 different samples, with N = 1,000,000 randomly chosen initial conditions in each sample. Each initial condition used a different realization of the random sequence of maps. The solid curve shows the mean of the 100 samples, and the error bars show their standard deviation. The dashed line has slope ln⟨|1 + 1.5 cos θ|⟩_θ; this estimate of the expansion entropy appears to be slightly larger than the slope of the computed data.

values of θ separate linearly with t. To evaluate H0 for this map from the definition Eq. (2), with the restraining region S being the entire torus, we note that

Df_{t+1,t} = [[1, 1], [0, 1]]   and   Df_{t',t} = [[1, t' − t], [0, 1]]

for all φ and θ. For t' − t ≫ 1, the singular values of Df_{t',t} are approximately t' − t and (t' − t)⁻¹. Thus, for large t' − t,

E_{t',t}(f, S) ≈ t' − t,

and H0 = 0 (since (t' − t)⁻¹ ln(t' − t) → 0 as t' → ∞). Hence, this example is not chaotic according to our definition.
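The linear growth of G can be checked directly from the singular values of the accumulated Jacobian (our sketch, not part of the paper):

```python
import numpy as np

# Jacobian of the shear map (9) after T steps is [[1, T], [0, 1]]; its singular
# values are approximately T and 1/T for large T, so G grows only linearly,
# and (1/T) ln G -> 0, i.e., H0 = 0.
for T in (10, 100, 1000):
    M = np.array([[1.0, float(T)], [0.0, 1.0]])
    s = np.linalg.svd(M, compute_uv=False)
    G = np.prod(s[s > 1.0])
    print(T, G / T)  # ratio tends to 1 as T grows
```

Contrast this with the horseshoe and Hénon examples below, where G grows exponentially with T.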

E. Example: Horseshoe and Henon map

Figure 6 shows the action of a horseshoe map in the plane on a unit square S, which we also take to be the restraining region. The step (a) → (b) represents a uniform horizontal compression and vertical stretching of the square. Let ρ > 2 be the ratio by which the vertical length of the square is


FIG. 6. A horseshoe map f that is linear for trajectories that remain in the restraining region S.

stretched, and assume that bending deformations in the step (b) → (c) take place only in the shaded region. The fraction of the original square that remains in the square after one iterate is 2/ρ (see Fig. 6(d)), and after t' − t iterates, the fraction is (2/ρ)^{t'−t}. Also, G(Df_{t',t}) = ρ^{t'−t}, yielding E_{t',t}(f, S) = ρ^{t'−t}(2/ρ)^{t'−t} = 2^{t'−t}, and H0 = ln 2. Thus, by our definition the horseshoe map is chaotic in S.
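The cancellation of ρ in this product is what makes H0 independent of the stretching ratio; a quick arithmetic check (illustrative values of ρ are ours) confirms it:

```python
import math

# For the horseshoe, G(Df_{t',t}) = rho**n while the fraction of the square
# remaining after n iterates is (2/rho)**n, so E_n = rho**n * (2/rho)**n = 2**n.
n = 50
rates = []
for rho in [2.5, 3.0, 10.0]:     # illustrative stretching ratios rho > 2
    lnE = n * math.log(rho) + n * math.log(2.0 / rho)  # ln[rho^n (2/rho)^n]
    rates.append(lnE / n)        # growth rate of E_n; should equal ln 2 = H0
```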

For the Henon map

x_{t+1} = a − x_t² + b y_t ,

y_{t+1} = x_t ,

with b = 0.3, the results of Devaney and Nitecki40 imply that for a > 3.4, the map has a topological horseshoe, and for a < 5.1, the nonwandering set is contained in the square −3 < x, y < 3, which we take to be the restraining region S. Fig. 7 shows the results of a numerical computation for

FIG. 5. Image C′ of a circle C given by φ = constant under the shear map (9).

FIG. 7. Computation of ln E_T versus T for the Henon map with a = 4.2 and b = 0.3. For each T, we computed ln E_T(f, S) for 100 different samples, with N = 100 000 randomly chosen initial conditions in each sample. The solid curve shows the mean of the 100 samples, and the error bars show their standard deviation. The dashed line has slope H0 = ln 2.

a = 4.2 of ln E_T (see Secs. II C and III B) versus T, which agrees well with the value H0 = ln 2.
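A minimal version of this computation can be sketched as follows (our illustration, not the authors' code): random initial conditions in S = [−3, 3]², accumulated Jacobian products for trajectories that remain in S, and G evaluated from singular values:

```python
import numpy as np

def henon_ln_E_T(a=4.2, b=0.3, T_max=10, N=100_000, seed=0):
    """Monte Carlo ln E_T for the Henon map x' = a - x^2 + b*y, y' = x,
    with restraining region S = [-3, 3]^2.  Trajectories that leave S stop
    contributing; G is the product of singular values exceeding 1."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-3, 3, N)
    y = rng.uniform(-3, 3, N)
    J = np.broadcast_to(np.eye(2), (N, 2, 2)).copy()  # accumulated Df_{T,0}
    alive = np.ones(N, dtype=bool)                    # still inside S up to time T
    out = np.empty(T_max)
    for T in range(1, T_max + 1):
        # One-step Jacobian at (x, y): [[-2x, b], [1, 0]].
        D = np.zeros((N, 2, 2))
        D[:, 0, 0] = -2.0 * x
        D[:, 0, 1] = b
        D[:, 1, 0] = 1.0
        J = D @ J
        x, y = a - x**2 + b * y, x
        alive &= (np.abs(x) <= 3) & (np.abs(y) <= 3)  # once out, stays out
        x = np.where(alive, x, 0.0)                   # park escaped points (avoids overflow)
        y = np.where(alive, y, 0.0)
        s = np.linalg.svd(J[alive], compute_uv=False)
        G = np.prod(np.maximum(s, 1.0), axis=1)
        out[T - 1] = np.log(G.sum() / N)              # (1/mu(S)) integral of G over S_{T,0}
    return out

lnE = henon_ln_E_T()
slope = (lnE[-1] - lnE[-5]) / 4   # should be near H0 = ln 2 ~ 0.693
```

A single sample with modest N is noisy; averaging many samples, as in Fig. 7, tightens the estimate.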

IV. TOPOLOGICAL ENTROPY

In this section, we define topological entropy, and discuss its relation to (and equivalence with, in appropriate circumstances) both expansion entropy and the related notion of volume growth.

The original definition of topological entropy, by Adler, Konheim, and McAndrew,41 was for a continuous map f on a compact topological space X. If X is a metric space, an equivalent definition of topological entropy due to Dinaburg42 and Bowen43 is as follows. (Equivalence to the original definition was proved by Bowen.44) For ε > 0, two points x and y in X are called (T, ε)-separated if the distance between their kth iterates satisfies d(f^k(x), f^k(y)) > ε for some 0 ≤ k < T. A finite set of points P ⊂ X is said to (T, ε)-span X if there is no point in X that is (T, ε)-separated from every point in P. Let n(T, ε) be the minimum number of points needed to (T, ε)-span X, and let N(T, ε) be the maximum number of points in X that can be pairwise (T, ε)-separated. Let

h_n(f, ε) = limsup_{T→∞} ln n(T, ε) / T ,   (10)

h_N(f, ε) = limsup_{T→∞} ln N(T, ε) / T .   (11)

It is not hard to show that n(T, ε) ≤ N(T, ε) ≤ n(T, ε/2). This implies the analogous relation between h_n and h_N, which implies that they have the same limit as ε → 0. Define the topological entropy h of f on X by

h(f, X) = lim_{ε→0} h_n(f, ε) = lim_{ε→0} h_N(f, ε) .
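The Dinaburg–Bowen counting can be made concrete with a brute-force construction. The sketch below (ours; the doubling map and the greedy set construction are illustrative, not from the paper) builds a maximal set of pairwise (T, ε)-separated points for f(x) = 2x mod 1 and recovers a growth rate near h = ln 2:

```python
import numpy as np

def n_separated(T, eps, grid=2000):
    """Greedy construction of a maximal (T, eps)-separated set for the
    doubling map f(x) = 2x mod 1 on the circle.  Two points are
    (T, eps)-separated if their orbits are > eps apart (circle metric)
    at some time 0 <= k < T."""
    xs = np.arange(grid) / grid                       # candidate points
    orbit = np.array([(xs * 2**k) % 1.0 for k in range(T)])  # orbit[k, i] = f^k(xs[i])
    chosen = [0]
    for i in range(1, grid):
        d = np.abs(orbit[:, [i]] - orbit[:, chosen])  # distances to chosen, each time k
        d = np.minimum(d, 1.0 - d)                    # circle distance
        if (d.max(axis=0) > eps).all():               # separated from every chosen point
            chosen.append(i)
    return len(chosen)

# The separated-set size grows like 2^T, so ln N(T, eps)/T -> h = ln 2.
counts = [n_separated(T, eps=0.1) for T in (2, 4, 6)]
rate = np.log(counts[2] / counts[0]) / 4.0
```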

The notions of expansion entropy H0 and topological entropy h are both well-defined in the case when f is a smooth, autonomous system on a compact manifold M and the restraining region S is all of M. In this case, Sacksteder and Shub26 defined a quantity they called h₁ that is equivalent to expansion entropy. Subsequently, Przytycki45 proved that h₁ is an upper bound on h if f is a C^{1+γ} diffeomorphism for γ > 0; this proof was extended to noninvertible maps by Newhouse.46 Though there are examples47 for which the two quantities differ, Kozlovski27 proved that h₁ = h for C^∞ maps. Thus, H0(f, M) = h(f, M) for a sufficiently smooth map f on a compact manifold M.

From our point of view, these results leave open consideration of important issues regarding nonautonomous systems and the role of restraining regions. For example, suppose now that J is a compact invariant set of an autonomous system f on a (not necessarily compact) manifold M. If J has volume zero, H0(f, J) is undefined, but we can define H0 for a neighborhood S of J that contains no other invariant sets. In this case, we conjecture that H0(f, S) = h(f, J) if f is C^∞. More generally, when the restraining region S contains multiple invariant sets, we conjecture (consistent with Eq. (5)) that H0(f, S) is the maximum topological entropy of f on an invariant subset of S.

Our notion of expansion entropy is related to the notion of volume growth defined by Yomdin48 and Newhouse.46 Yomdin defines the exponential rate v_d(f) of d-dimensional volume growth of a smooth map f on a compact manifold M, and proves that v_d(f) ≤ h(f, M) if f is C^∞. Newhouse defines the volume growth rate more generally for a neighborhood U of a compact invariant set J ⊂ M, and proves that h(f, J) is bounded above by the maximum over d of the d-dimensional volume growth rate on U. See also Ref. 49 for a discussion of these results.

Based on these results, Newhouse and Pignataro50 proposed and implemented algorithms for computing entropy of two-dimensional diffeomorphisms (including Poincaré sections of three-dimensional differential equations) by computing the exponential growth rate of the length of an iterated curve. Other algorithms51,52 compute the growth rate of the number of disconnected arcs resulting from the iteration of an initial line segment within a neighborhood of a two-dimensional chaotic saddle or repeller. Of course, these methods could be extended to higher dimensions by considering growth of surface areas, etc. Expansion entropy, by estimating volume growth locally, allows an analogous computation to be done without having to compute and measure multidimensional surfaces. It is analogous to the approach used by Jacobs et al.31 for two-dimensional maps.

Another approach to computing entropy is by symbolic dynamics: partition the state space into numbered subsets and estimate the exponential growth rate (as time increases) of the number of different sequences of subsets that can be visited by a finite-time trajectory. This approach can yield good estimates with well-chosen partitions, but inadequate partitions may lead to underestimation, and in some cases symbolic dynamics indicates positive entropy when the topological entropy is actually zero.53
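The symbolic-dynamics estimate can be sketched as follows (our illustration: the tent map with the natural two-interval partition, for which the growth rate of admissible symbol sequences is exactly ln 2):

```python
import numpy as np

def symbolic_entropy_estimate(word_len=10, N=200_000, seed=0):
    """Estimate entropy as the growth rate of the number of distinct
    length-T symbol sequences, for the tent map f(x) = 1 - |1 - 2x|
    with the partition [0, 1/2) -> 0, [1/2, 1] -> 1."""
    rng = np.random.default_rng(seed)
    x = rng.random(N)
    symbols = np.zeros((N, word_len), dtype=np.int8)
    for t in range(word_len):
        symbols[:, t] = (x >= 0.5)          # record which partition element is visited
        x = 1.0 - np.abs(1.0 - 2.0 * x)     # tent map step
    # Count distinct words of each length T = 1 .. word_len.
    counts = [len({tuple(row[:T]) for row in symbols})
              for T in range(1, word_len + 1)]
    rates = [np.log(c) / T for T, c in enumerate(counts, start=1)]
    return counts, rates

counts, rates = symbolic_entropy_estimate()
```

For the tent map every binary word is admissible, so counts doubles with each added symbol and the rate converges to ln 2; for a poorly chosen partition the same procedure can mislead, as the text cautions.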

We conclude this section with a brief discussion of the connection between the definitions of expansion entropy and topological entropy in the case of a smooth, autonomous system on a compact manifold M, with restraining region S = M. In Appendix B, we argue that for sufficiently small ε > 0, the quantity E_{T,0}(f, S) of Eq. (1) approximates Ñ(T, ε)/N(0, ε), where Ñ(T, ε) is the maximum number of trajectories that are a distance ε apart at either time 0 or at time T. Note that Ñ(T, ε) is a lower bound on N(T, ε), because the latter distinguishes between trajectories that are ε apart at some time between 0 and T; however, at least for hyperbolic systems the difference between Ñ and N should be inconsequential. Equations (10) and (11) first take a limit with respect to T and then ε. Normalizing by N(0, ε) does not change the limit

h(f, S) = lim_{ε→0} limsup_{T→∞} ln N(T, ε) / T
        = lim_{ε→0} limsup_{T→∞} [ln N(T, ε) − ln N(0, ε)] / T .   (12)

We have argued above that

E_{T,0}(f, S) ≈ Ñ(T, ε)/N(0, ε)   for small ε.

Thus, by Eq. (2), the definition of H0 differs from the definition of h primarily because it uses Ñ(T, ε) ≤ N(T, ε), and because the limits with respect to T and ε are taken in the reverse order.

V. DEFINITIONS OF CHAOS THAT DO NOT INVOLVE ENTROPY: SENSITIVE DEPENDENCE, LYAPUNOV EXPONENTS, AND CHAOTIC ATTRACTORS

A concept often associated with chaos is that of sensitive dependence, the idea that the orbits from two nearby initial conditions can move far apart in the future. In the mathematical literature, the most common definition of "sensitive dependence" is as follows. (This definition is also sometimes called "weak sensitive dependence.")

Definition: A continuous map, f: M → M, on the compact metric space M has sensitive dependence if there exists a ρ > 0 such that for each δ > 0 (no matter how small) and each x ∈ M, there is a y ∈ M that is within the distance δ of x and for which at some later time t, |f^t(x) − f^t(y)| > ρ.

That is, no matter how close together the initial conditions x and y are, if we wait long enough, the orbits from these initial conditions will separate by more than some fixed value ρ. Notice that this definition of sensitive dependence does not say anything about the rate at which these orbits diverge from each other: this rate might, for example, be exponential (e.g., as for situations with a positive Lyapunov exponent), or linear (e.g., as for the example in Sec. III D).
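The distinction between the two rates of divergence is easy to see numerically. The sketch below (ours) contrasts exponential separation for the logistic map, which has a positive Lyapunov exponent, with linear separation for the shear map of Eq. (9); the shear map is iterated in its lift to the plane so that the wraparound of the torus coordinates does not distort the distance measurement:

```python
import numpy as np

def separation(step, x0, y0, T):
    """Track the distance between two orbits of map `step` for T iterates."""
    x, y, d = x0, y0, []
    for _ in range(T):
        x, y = step(x), step(y)
        d.append(np.linalg.norm(np.atleast_1d(x) - np.atleast_1d(y)))
    return np.array(d)

logistic = lambda x: 4.0 * x * (1.0 - x)        # positive Lyapunov exponent (ln 2)
omega = 2.0 * np.pi * (np.sqrt(5.0) - 1.0) / 2  # irrational rotation increment
# Lift of the shear map (phi, theta) -> (phi + theta, theta + omega) to the plane,
# so that Euclidean distance tracks the true orbit separation without wraparound.
shear = lambda p: np.array([p[0] + p[1], p[1] + omega])

d_exp = separation(logistic, 0.2, 0.2 + 1e-12, 15)           # grows ~ e^{t ln 2}
d_lin = separation(shear, np.array([0.0, 1.0]),
                   np.array([0.0, 1.0 + 1e-3]), 200)         # grows ~ t * 1e-3
```

Doubling the elapsed time roughly doubles the separation for the shear map, while for the logistic map the separation is multiplied by an exponentially large factor.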

Another often used concept assigns sensitive dependence to the dynamics on a compact invariant set (the space M is now not necessarily compact) as follows.

Definition: A continuous map f has sensitive dependence on a compact invariant set J of a metric space M if there exists a ρ > 0 such that for every δ > 0 (no matter how small) and every point x ∈ J, there is a point y ∈ J within a distance δ of x such that, at some later time t, |f^t(x) − f^t(y)| > ρ.

The following is a definition of chaos, based on that given by Devaney.54

Definition of Devaney-chaos: A continuous map f: M → M, with M a compact metric space, is chaotic if it satisfies the following three conditions.

(i) f has sensitive dependence.

(ii) f has periodic orbits that are dense in M.

(iii) f has an orbit that is dense in M, i.e., there exists an initial condition x* such that for each y ∈ M and each δ > 0 (no matter how small), at some time t, the orbit from x* will be within the distance δ from y:

|f^t(x*) − y| < δ .

This definition can be converted to define Devaney chaos for a compact invariant set, J = f(J), by replacing M in conditions (ii) and (iii) by J, and condition (i) by "f has sensitive dependence on J."

It was pointed out by Banks et al.55 that conditions (ii) and (iii) of Devaney's definition imply his condition (i). Thus, condition (i) for Devaney-chaos can be omitted. A

serious drawback of Devaney's definition of chaos is that it excludes significant cases that are sometimes considered and that most would regard as chaotic. For example, consider a map with quasi-periodic forcing

z_{t+1} = G(z_t, θ_t) ,   θ_{t+1} = [θ_t + ω] mod 2π ,   (13)

where ω/(2π) is an irrational number. Regarding this as a dynamical system with a state x = (z, θ), we see that, because of the quasi-periodic behavior of θ, there are no periodic orbits of this system. Hence, the system (13) fails condition (ii) for Devaney chaos. Thus, according to the definition of Devaney chaos, a system like (13) can never exhibit chaos. Yet quasi-periodically forced systems are of practical interest and can have attractors with a positive Lyapunov exponent, a situation generally thought of as chaotic. Another point to make in connection with example (13) is that it presents a problem for the Devaney definition of chaos even when G is independent of θ: in that case z_{t+1} = G(z_t), on its own, might indeed satisfy the conditions for Devaney-chaos; however, by considering the state to be x = (z, θ) with θ_t quasi-periodic, the Devaney chaos condition (ii) is not satisfied, even though there is no change in the chaotic dynamics of z.

According to Banks et al., Devaney-chaos only requires satisfaction of the two conditions that there are a dense orbit and a dense set of periodic orbits. Robinson,38 on the other hand, notes that of the three conditions originally specified by Devaney, the requirement of a dense set of periodic orbits does not seem as "central to the idea of chaos" as the other two conditions (sensitive dependence and a dense orbit). Thus, he (and also, independently, Wiggins56) proposes the following definition.

Definition of Robinson-chaos: The same as Devaney-chaos except that condition (ii) is deleted.

This definition, by not requiring periodic orbits, has the benefit of potentially allowing more consistent treatment of forced systems, like (13). However, there is still, in our opinion, a drawback. This occurs, e.g., with reference to the shear map example (9) of Sec. III D, which was considered by Robinson38 (see also Ref. 39). As discussed in Sec. III D, orbits are dense and nearby points typically separate linearly with t. Thus, this example is Robinson chaotic. However, the two Lyapunov exponents are zero. While this example satisfies the Robinson-chaos definition, due to the slow, linear-in-time, separation of orbits, such dynamics has previously been classified as nonchaotic (see literature on so-called strange nonchaotic attractors13). Indeed, this linear-in-time separation of nearby orbits presents comparatively little prediction difficulty as compared to the exponential divergence emphasized by Lorenz.

Li and Yorke1 define the notion of a "scrambled set," and the presence of a scrambled set can be taken as another definition of chaos. While this works well in the original context of one-dimensional maps considered by Li and Yorke, as we will see, it is not as appropriate for higher dimensional systems.

Definition of a scrambled set: For f: M → M with M a compact metric space, an uncountably infinite subset J of M is scrambled if, for every pair x, y ∈ J with x ≠ y,

limsup_{t→∞} |f^t(x) − f^t(y)| > 0 ,   liminf_{t→∞} |f^t(x) − f^t(y)| = 0 .

Thus, by the second Li-Yorke condition, the orbits from x and y come arbitrarily close to each other an infinite number of times, while by the first Li-Yorke condition, the distance between the orbits from x and y also exceeds a fixed positive amount an infinite number of times. An attractive aspect of scrambling is that it excludes some cases that have sensitive dependence but are usually not considered chaotic. In particular, the shear map example (9) discussed above does not have a scrambled set because the θ-distance (or φ-distance, if the θ-distance is 0) between a pair of orbits remains constant, thus violating the second Li-Yorke condition for scrambling. Nevertheless, as with Robinson-chaos, the definition of chaos as having an uncountable scrambled set includes cases that are generally regarded as nonchaotic. One example is a two-dimensional flow with an attracting homoclinic orbit, considered by Robinson38 and Ott and Yorke57 (see Figure 1 in either paper); on a trajectory converging to the homoclinic orbit, a finite piece of the trajectory forms an uncountable scrambled set. Thus, the compact invariant set formed by the homoclinic orbit and its interior exhibits scrambling.

From the discussion above, we see that using notions related to the common definition of sensitive dependence presents problems when attempting to use them to give a generally applicable definition of chaos. On the other hand, another type of dynamical characterization, namely, that of Lyapunov exponents, seems better suited to defining chaos. Indeed, it can be quite useful to define a chaotic attractor using Lyapunov exponents. If one excludes certain cases of Milnor attractors (see below) and concentrates on a definition of an attractor of a map f as a bounded set A with a dense orbit such that there is an ε-neighborhood A_ε for which ∩_{t≥0} f^t(A_ε) = A, then it seems that a good definition of a chaotic attractor of the map f is simply an attractor that has a positive Lyapunov exponent.

Now, however, consider Milnor's definition58 of an attractor: A is an attractor for f: M → M if there is a positive Lebesgue measure of points x ∈ M such that A is the forward time limit set of x. Figure 8 shows an example demonstrating that the definition of a chaotic attractor as an attractor with a positive Lyapunov exponent can be problematic, if Milnor's definition of an attractor is used.

The function f(x) in Fig. 8 goes to zero at x = ±1 and remains zero for |x| > 1. There is a positive measure of initial conditions x0 that go to x = 0 and stay there (e.g., if a < x0 < b, then x1 > 1 and x2 = x3 = x4 = ⋯ = 0). Thus, the unstable fixed point x = 0 is a Milnor attractor with a positive Lyapunov exponent (because df/dx > 1 at x = 0), yet it would be, we think, unacceptable to call the set x = 0 chaotic. This example is rather special and contrived. Thus, in practice, it is still very useful to think of a chaotic attractor as one with a positive Lyapunov exponent. However, a main concern of this paper is a definition of chaos that works fairly generally, including being applicable to both attractors and repellers. In the case of repellers, basing the existence of chaos on Lyapunov exponents presents a problem, since a repelling
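A concrete stand-in for the map of Fig. 8 (hypothetical and ours, not the authors' exact f) is f(x) = 3x(1 − x²) on |x| ≤ 1 and f(x) = 0 outside, which has f'(0) = 3 > 1, vanishes at x = ±1, and sends an interval of points above 1 and hence to 0 two iterates later:

```python
def f(x):
    """Hypothetical stand-in for the map of Fig. 8: expanding fixed point at 0
    (f'(0) = 3 > 1), f(+-1) = 0, and f identically 0 for |x| > 1."""
    return 3.0 * x * (1.0 - x * x) if abs(x) <= 1.0 else 0.0

# Points near the interior maximum of f are mapped above 1, hence to 0 two steps later.
x0 = 3 ** -0.5            # argmax of f on [0, 1]; f(x0) = 2/sqrt(3) > 1
orbit = [x0]
for _ in range(4):
    orbit.append(f(orbit[-1]))
# orbit: x0 -> 2/sqrt(3) (> 1) -> 0 -> 0 -> 0.  The unstable fixed point x = 0
# thus attracts a positive-measure set while having derivative 3 > 1 there.
```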


FIG. 8. A one-dimensional map for which the unstable fixed point at x = 0 is a Milnor attractor. Trajectories that reach [a, b] map to 0 two iterates later.

fixed point with a positive Lyapunov exponent could not reasonably be considered chaotic. On the other hand, the anomaly for the fixed-point repeller example and the Milnor example of Fig. 8 is removed if we define chaos by positive expansion entropy (see, e.g., Sec. III A).

VI. CONCLUSION

In this paper, we have introduced a quantity, the expansion entropy (Eqs. (1) and (2)), and we have argued that expansion entropy provides a "good" definition of chaos in that it possesses several desirable properties. We also compare this definition with other past definitions of chaos (Secs. IV and V). In particular, the expansion entropy H0 enjoys the properties of generality, simplicity, and computability discussed in Sec. I. One important feature of H0 is that it assesses the presence of chaos in any given bounded region S in state space, rather than in an invariant set. As such, it applies naturally in cases where the invariant sets are unknown or (e.g., in both deterministically and randomly forced systems) do not exist. Section III presents examples illustrating various issues and features of expansion entropy, perhaps most importantly its numerical computation. It is our hope that our paper will lead to the use of expansion entropy in applications and to further study of its properties.

ACKNOWLEDGMENTS

The work of E. Ott was supported by the U.S. Army Research Office under Grant No. W911NF-12-1-0101. We thank S. Newhouse for pointing out earlier work related to expansion entropy, and J. Yorke and the reviewers for helpful comments.

APPENDIX A: Q-ORDER EXPANSION ENTROPY

As discussed in Sec. IID, we here generalize our definition of H0 to a definition of a q-order entropy

H_q(f, S) = lim_{t'→∞} (1/(1 − q)) (1/(t' − t)) ln [ ∫_{S_{t',t}} G(·)^{1−q} dμ / (μ(S_{t',t})^q μ(S)^{1−q}) ] ,   (A1)

where the argument of G is the same as in Eq. (1). Comparing Eqs. (1) and (2) with (A1), we see that (A1) reduces to (1) and (2) for q = 0. Furthermore, letting q → 1 and assuming that the q → 1 and t' → ∞ limits can be interchanged, we obtain

H_1(f, S) = lim_{t'→∞} (1/(t' − t)) [ ∫_{S_{t',t}} ln G(Df_{t',t}(x)) dμ(x) / μ(S_{t',t}) ] − 1/τ+ ,   (A2)

where, as in Sec. II C,

1/τ+ = − lim_{t'→∞} (1/(t' − t)) ln [ μ(S_{t',t}) / μ(S) ] .   (A3)

The quantity (A2) can be viewed as bearing a relationship to metric entropy that is analogous to the relationship between H0 and topological entropy. In the case where S contains an attractor, 1/τ+ = 0. In the case where S contains a repeller, we call τ+ the average lifetime of repeller orbits. In the case where S is a neighborhood of an invariant set with a "natural measure,"36 we can identify the first term in Eq. (A2) with the sum of the positive Lyapunov exponents λ_i, so that

H_1 = Σ_i λ_i − 1/τ+ ,

which agrees with the results for metric entropy of Kantz and Grassberger4,36 for chaotic repellers and of Pesin37 for 1/τ+ = 0.

We now obtain H_q(f, S) and H_q(f, S′) for the example in Sec. III B, where S is the large interval [−1, 1.5] containing both the attracting fixed point and the chaotic repeller, while S′ ⊂ S is the smaller restraining region [0, 1] containing only the chaotic repeller.

We begin by finding H_q(f, S′). As discussed in Sec. III B, the set S′_{T,0} of initial conditions that remain in S′ from time 0 to time T consists of 2^T intervals of varying widths 3^{−k}2^{k−T} (where k = 0, 1, …, T) on each of which G(Df_{T,0}) = 3^k 2^{T−k}. In addition, the number of such intervals with a given k is the binomial coefficient C(T, k) = T!/[k!(T − k)!]. Thus, the integral of G^{1−q} that appears in Eq. (A1) is

∫_{S′_{T,0}} G(Df_{T,0})^{1−q} dμ = Σ_{k=0}^{T} C(T, k) [3^k 2^{T−k}]^{1−q} 3^{−k} 2^{k−T}

= (3^{−q} + 2^{−q})^T .   (A4)

Furthermore, the total length of S′_{T,0} decreases by the ratio 5/6 upon increase of T by one, so that

μ(S′_{T,0}) = (5/6)^T .   (A5)

Using Eqs. (A4) and (A5) in Eq. (A1), we obtain

H_q(f, S′) = (1 − q)^{−1} ln[(2/5)^q + (3/5)^q] .   (A6)

Note that this quantity is positive for all q ≥ 0, and decreases monotonically with increasing q (dashed curve in Fig. 9). For q = 0, this agrees with our previous result of Sec. III B that H0(f, S′) = ln 2, while taking the limit q → 1 yields H_1(f, S′) = (2/5)ln(5/2) + (3/5)ln(5/3) > 0. As q → ∞, H_q(f, S′) → ln(5/3), so that H_0 = ln 2 > H_q > H_∞ = ln(5/3).
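The closed form (A6) can be checked directly against the binomial structure of (A4) and (A5) (a sketch, ours, using exact binomial coefficients; μ(S′) = 1 so the μ(S)^{1−q} factor drops out of the T limit):

```python
import math

def Hq_direct(q, T=200):
    """Evaluate (1/(1-q)) (1/T) ln[ (integral of G^{1-q}) / mu(S'_{T,0})^q ]
    using the 2^T-interval structure of Sec. III B (here mu(S') = 1)."""
    integral = sum(math.comb(T, k)
                   * (3.0**k * 2.0**(T - k)) ** (1.0 - q)   # G^{1-q} on each interval
                   * 3.0**(-k) * 2.0**(k - T)               # interval width
                   for k in range(T + 1))
    mu_ST = (5.0 / 6.0) ** T                                # Eq. (A5)
    return math.log(integral / mu_ST**q) / ((1.0 - q) * T)

def Hq_closed(q):
    # Eq. (A6): H_q(f, S') = (1-q)^{-1} ln[(2/5)^q + (3/5)^q]
    return math.log((2.0 / 5.0)**q + (3.0 / 5.0)**q) / (1.0 - q)

vals = [(Hq_direct(q), Hq_closed(q)) for q in (0.25, 0.5, 2.0)]
```

Because the binomial sum collapses exactly to (3^{−q} + 2^{−q})^T, the two evaluations agree to floating-point precision for every T, not just in the limit.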

We now turn to the evaluation of the q-order expansion entropy for the larger restraining region S = [−1, 1.5]. The main difference from S′ is that S_{T,0} = S for all T > 0, so μ(S_{T,0}) = 2.5, in contrast with Eq. (A5). To estimate the integral of G(Df_{T,0})^{1−q} over S_{T,0}, note first that by Eq. (A4)

∫_{S_{T,0}} G(Df_{T,0})^{1−q} dμ ≥ ∫_{S′_{T,0}} G(Df_{T,0})^{1−q} dμ = (3^{−q} + 2^{−q})^T .

The contribution to the integral of G(Df_{T,0})^{1−q} from initial conditions in S_{T,0} but not in S′_{T,0} can be bounded above by CT max[(3^{−q} + 2^{−q})^T, 1] for a constant C independent of T; the factor of T in the upper bound comes from considering trajectories that first leave S′ at time t, for each of the values t = 0, 1, …, T − 1. Also, since G(Df_{T,0}) = 1 for initial conditions in the interval near x = −1/2 on which |Df| < 1, such initial conditions contribute at least c > 0 to the integral, where c is the length of the contracting interval. Thus,

c + (3^{−q} + 2^{−q})^T ≤ ∫_{S_{T,0}} G(Df_{T,0})^{1−q} dμ ≤ CT max[(3^{−q} + 2^{−q})^T, 1] .

From Eq. (A1), recalling that μ(S_{T,0}) = μ(S), we conclude that for q ≠ 1

H_q(f, S) = (1 − q)^{−1} ln max(3^{−q} + 2^{−q}, 1) .   (A7)

Note that there is a critical value 0 < q_c < 1 for which 3^{−q_c} + 2^{−q_c} = 1; then H_q(f, S) = 0 for q ≥ q_c. In particular, H_1(f, S) = 0 by taking the limit q → 1.

Comparing Eqs. (A6) and (A7), we see that H0(f, S′) = H0(f, S) = ln 2 (as we argued in Sec. III C), but H_q(f, S′) > H_q(f, S) for q > 0; see Fig. 9. Note also that if the slopes 3

FIG. 9. H_q(f, S′) (dashed curve) and H_q(f, S) (solid curve) versus q. The dashed curve decreases slightly from ln 2 ≈ 0.693 at q = 0 to about 0.663 at q = 1.5.

and 2 were increased, the critical value q_c beyond which H_q(f, S) = 0 could be made arbitrarily close to 0. We conclude that H_q for q > 0 does not always detect chaos (i.e., H_q may be zero) in a restraining region containing an invariant set that is chaotic by all the definitions we reviewed in Sec. V, as well as by our definition H0 > 0.

APPENDIX B: RELATION OF EXPANSION ENTROPY INTEGRAL TO TOPOLOGICAL ENTROPY RATIO

Here, we justify the claim in Sec. IV that the integral E_{T,0}(f, S) of Eq. (1) used to define expansion entropy approximates, for small ε, the ratio Ñ(T, ε)/N(0, ε) related to the definition of topological entropy. Below, when we say that two quantities have the "same order of magnitude," we mean that their ratio is bounded above and below by positive constants that are independent of ε and T.

Assume that ε > 0 is small enough that the remainder term in the first-order Taylor expansion of f_{T,0} is much smaller than ε for points within ε of each other, i.e., that

|f_{T,0}(y) − f_{T,0}(x) − Df_{T,0}(x)(y − x)| ≪ ε

for x, y ∈ S with |y − x| ≤ ε. Cover S with a grid of N_0 boxes whose diameters are ε; then N_0 has the same order of magnitude as the maximum number N(0, ε) of ε-separated points in S. Each box B is contained in a ball of radius ε, and contains a ball whose radius has the same order of magnitude as ε. Notice also that μ(B) ≈ μ(S)/N_0 for small ε. Let x_B be the center of B, and let σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_n be the singular values of Df_{T,0}(x_B). Then, the image of B under f_{T,0} contains and is contained in ellipses whose semiaxes have the same order of magnitude as σ_1ε, σ_2ε, …, σ_nε. Let d be the largest index for which σ_d ≥ 1. Then the maximum number of ε-separated points in the image of B has the same order of magnitude as σ_1σ_2⋯σ_d = G(Df_{T,0}(x_B)). Summing over all B, the maximum number Ñ(T, ε) of trajectories that are ε-separated at either time 0 or at time T has the same order of magnitude as

Σ_B G(Df_{T,0}(x_B)) ≈ (1/μ(B)) ∫_S G(Df_{T,0}(x)) dμ(x) ≈ (N_0/μ(S)) ∫_S G(Df_{T,0}(x)) dμ(x) .

Since we assumed that S is invariant, S_{T,0} is the same as S. Comparing with Eq. (1), the discussion above constitutes an outline of a proof that Ñ(T, ε)/N(0, ε) has the same order of magnitude as E_{T,0}(f, S). (In fact, the same is true when S is not invariant, if we define Ñ to count only trajectories that remain in S between times 0 and T.)

1T. Y. Li and J. A. Yorke, "Period three implies chaos," Am. Math. Mon. 82, 985-992 (1975).

2E. Lorenz, "Deterministic nonperiodic flow," J. Atmos. Sci. 20, 130-141 (1963).

3C. Grebogi, E. Ott, and J. A. Yorke, "Crises, sudden changes in chaotic attractors and chaotic transients," Physica D 7, 181-200 (1983). 4H. Kantz and P. Grassberger, "Repellers, semi-attractors and long-lived chaotic transients," Physica D 17, 75-86 (1985).

5Y.-C. Lai and T. Tél, Transient Chaos: Complex Dynamics on Finite Time Scales, Applied Mathematical Sciences Vol. 173 (Springer, New York, 2011).

6S. W. McDonald, C. Grebogi, E. Ott, and J. A. Yorke, "Fractal basin boundaries," Physica D 17, 125-153 (1985).

7For a review see E. Ott and T. Tél, "Chaotic scattering: An introduction," Chaos 3, 417-426 (1993).

8J. D. Skufca, J. A. Yorke, and B. Eckhardt, "Edge of chaos in parallel shear flow," Phys. Rev. Lett. 96, 174101 (2006).

9J. C. Sommerer, H. C. Ku, and H. E. Gilrath, "Experimental evidence for chaotic scattering in a fluid wake," Phys. Rev. Lett. 77, 5055-5058 (1996).

10C. Jaffe, S. D. Ross, M. L. Lo, J. Marsden, D. Farrelly, and T. Uzer, "Statistical theory of asteroid escape rates," Phys. Rev. Lett. 89, 011101 (2002).

11For example, R. E. Gillian and G. S. Ezra, "Transport and turnstiles in multidimensional Hamiltonian mappings for unimolecular fragmentation: Application to van der Waals predissociation," J. Chem. Phys. 94, 2648-2668 (1991).

12For example, M. L. Du and J. Delos, "Effect of closed classical orbits on quantum spectra: Ionization of atoms in a magnetic field. I. Physical picture and calculations," Phys. Rev. A 38, 1896-1912 (1988).

13U. Feudel, S. Kuznetsov, and A. Pikovsky, Strange Nonchaotic Attractors (World Scientific, Singapore, 2006).

14F. Ledrappier and L.-S. Young, "Dimension formula for random transformations," Commun. Math. Phys. 117, 529-548 (1988).

15F. Ledrappier and L.-S. Young, "Entropy formula for random transformations," Probab. Theory Relat. Fields 80, 217-240 (1988).

16L. Yu, E. Ott, and Q. Chen, "Transition to chaos for random dynamical systems," Phys. Rev. Lett. 65, 2935-2938 (1990).

17I. I. Rypina, F. J. Beron-Vera, M. G. Brown, H. Kocak, M. J. Olascoaga, and I. A. Udovydchenkov, "On the Lagrangian dynamics of atmospheric zonal jets and the permeability of the stratospheric polar vortex," J. Atmos. Sci. 64, 3595-3610 (2007).

18J. F. Lindner, V. Kohar, B. Kia, M. Hippke, J. G. Learned, and W. L. Ditto, "Strange non-chaotic stars," Phys. Rev. Lett. 114, 054101 (2015).

19P. Moskalik, "Multimode oscillations in classical Cepheids and RR Lyrae-type stars," Proc. Int. Astron. Union 9(S301), 249-256 (2013).

20F. Varosi, T. M. Antonsen, and E. Ott, "The spectrum of fractal dimensions of passively convected scalar gradients in chaotic fluid flows," Phys. Fluids A 3, 1017-1028 (1991).

21J. C. Sommerer and E. Ott, "Particles floating on a moving fluid: A dynamically comprehensible physical fractal," Science 259, 335-339 (1993).

22G. Haller and G. Yuan, "Lagrangian coherent structures in three-dimensional fluid flows," Physica D 147, 352-370 (2000).

23G. A. Voth, G. Haller, and J. P. Gollub, "Experimental measurements of stretching fields in fluid mixing," Phys. Rev. Lett. 88, 254501 (2002).

24M. Mathur, G. Haller, T. Peacock, J. E. Ruppert-Felsot, and H. L. Swinney, "Uncovering the Lagrangian skeleton of turbulence," Phys. Rev. Lett. 98, 144502 (2007).

25For example, A. S. Pikovsky, M. G. Rosenblum, G. V. Osipov, and J. Kurths, "Phase synchronization of chaotic oscillators by external driving," Physica D 104, 219-238 (1997).

26R. Sacksteder and S. Shub, "Entropy of a differentiable map," Adv. Math. 28, 181-185 (1978).

27O. S. Kozlovski, "An integral formula for topological entropy of C∞ maps," Erg. Theory Dyn. Syst. 18, 405-424 (1998).

28If M is a Riemannian manifold, then it has a canonical volume that is equivalent to Lebesgue measure in appropriate local coordinates. Furthermore, when we treat the derivative of a map as a matrix, or use (small) distances in M, we assume the use of "normal coordinates," which exist at least for C2 Riemannian manifolds.

29Though we define expansion entropy only for a particular realization of a stochastic system, we expect that under appropriate hypotheses it has the same value for almost every realization. At a minimum, this should be the case when S is invariant for all realizations of a system forced by an IID process, because in this case the expansion entropy depends only on the tail of the IID process. A suitable setting for the rigorous study of expansion entropy in random systems would be that of Ledrappier and

Young.14,15

30If f is a C1 map on a compact manifold, and S is the entire manifold, then the limit has been proved to exist. More generally, H0 could be defined as the lim sup, as in other definitions of entropy (see Sec. IV).

31J. Jacobs, E. Ott, and B. R. Hunt, "Calculating topological entropy for transient chaos with an application to communicating with chaos," Phys. Rev. E 57, 6577-6588 (1998).

32J. Balatoni and A. Renyi, "Remarks on entropy," Pub. Math. Inst. Hung. Acad. Sci. 1, 9-37 (1956). Translated in Selected Papers of A. Renyi (Akademiai Kiado, Budapest, 1976), Vol. 1, p. 558.

33H. G. E. Hentschel and I. Procaccia, "The infinite number of generalized dimensions of fractals and strange attractors," Physica D 8, 435-444 (1983).

34P. Grassberger and I. Procaccia, "Dimensions and entropies of strange attractors from a fluctuating dynamics approach," Physica D 13, 34-54 (1984).

35P. Grassberger and I. Procaccia, "Estimation of Kolmogorov entropy from a chaotic signal," Phys. Rev. A 28, 2591-2593 (1983).

36B. R. Hunt, E. Ott, and J. A. Yorke, "Fractal dimensions of chaotic saddles of dynamical systems," Phys. Rev. E 54, 4819-4823 (1996).

37Ya. B. Pesin, "Lyapunov characteristic exponents and ergodic properties of smooth dynamical systems with an invariant measure," Dokl. Akad. Nauk SSSR 226, 774-777 (1976) [Sov. Math. Dokl. 17, 196-199 (1976)].

38C. Robinson, "What is a chaotic attractor?," Qual. Theory Dyn. Syst. 7, 227-236 (2008).

39B. R. Hunt and E. Ott, "Fractal properties of robust strange non-chaotic attractors," Phys. Rev. Lett. 87, 254101 (2001).

40R. L. Devaney and Z. Nitecki, "Shift automorphisms in the Henon mapping," Commun. Math. Phys. 67, 137-146 (1979).

41R. L. Adler, A. G. Konheim, and M. H. McAndrew, "Topological entropy," Trans. Am. Math. Soc. 114, 309-319 (1965).

42E. I. Dinaburg, "The relation between topological entropy and metric entropy," Dokl. Akad. Nauk SSSR 190, 19-22 (1970) [Sov. Math. Dokl. 11, 13-16(1970)].

43R. Bowen, "Entropy for group endomorphisms and homogeneous spaces," Trans. Am. Math. Soc. 153, 401-414 (1971).

44R. Bowen, "Periodic points and measures for axiom A diffeomorphisms," Trans. Am. Math. Soc. 154, 377-397 (1971).

45F. Przytycki, "An upper estimation for topological entropy of diffeomorphisms," Inv. Math. 59, 205-213 (1980).

46S. Newhouse, "Entropy and volume," Erg. Theory Dyn. Syst. 8, 283-299 (1988).

47M. Misiurewicz and W. Szlenk, "Entropy of piecewise monotone mappings," Stud. Math. 67, 45-63 (1980), see https://eudml.org/doc/218304.

48Y. Yomdin, "Volume growth and entropy," Isr. J. Math. 57, 285-300 (1987).

49M. Gromov, "Entropy, homology and semialgebraic geometry," Sem. Bourbaki 28, 225-240 (1985-6), see https://eudml.org/doc/110064.

50S. Newhouse and T. Pignataro, "On the estimation of topological entropy," J. Stat. Phys. 72, 1331-1351 (1993).

51Z. Kovács and T. Tél, "Thermodynamics of irregular scattering," Phys. Rev. Lett. 64, 1617-1620 (1990).

52Q. Chen, E. Ott, and L. P. Hurd, "Calculating topological entropies of chaotic dynamical systems," Phys. Lett. A 156, 48-52 (1991).

53G. Froyland, O. Junge, and G. Ochs, "Rigorous computation of topological entropy with respect to a finite partition," Physica D 154, 68-84 (2001); see in particular Remark B.2.

54R. L. Devaney, An Introduction to Chaotic Dynamical Systems (Addison-Wesley, New York and Reading, 1989).

55J. Banks, J. Brooks, G. Cairns, G. Davis, and P. Stacey, "On Devaney's definition of chaos," Am. Math. Mon. 99, 332-334 (1992).

56S. Wiggins, Chaotic Transport in Dynamical Systems, Interdisciplinary Applied Mathematics Series Vol. 2 (Springer-Verlag, Berlin, 1992).

57W. Ott and J. Yorke, "When Lyapunov exponents fail to exist," Phys. Rev. E 78, 056203 (2008).

58J. Milnor, "On the concept of an attractor," Commun. Math. Phys. 99, 177-195 (1985).