Entropy 2010, 12, 1194-1245; doi:10.3390/e12051194

OPEN ACCESS

entropy

ISSN 1099-4300

www.mdpi.com/journal/entropy

Review

Quantum Entropy and Its Applications to Quantum Communication and Statistical Physics

Masanori Ohya * and Noboru Watanabe *

Department of Information Sciences, Tokyo University of Science, Noda City, Chiba 278-8510, Japan

* Authors to whom correspondence should be addressed; E-Mails: ohya@rs.noda.tus.ac.jp (M.O.); watanabe@is.noda.tus.ac.jp (N.W.).

Received: 10 February 2010 / Accepted: 30 April 2010 / Published: 7 May 2010

Abstract: Quantum entropy is a fundamental concept for quantum information, recently developed in various directions. We review the mathematical aspects of quantum entropy (entropies) and discuss some applications to quantum communication and statistical physics. All topics taken up here are related in some way to the quantum entropy that the present authors have studied. Many other fields recently developed in quantum information theory, such as quantum algorithms, quantum teleportation and quantum cryptography, are discussed thoroughly in the book [60].

Keywords: quantum entropy; quantum information

1. Introduction

The theoretical foundation supporting today's information-oriented society is Information Theory, founded by Shannon [1] about 60 years ago. Generally, this theory treats the efficiency of information transmission by using measures of complexity, that is, entropies, in the commutative system of a signal space. Information theory is based on a mathematically formulated entropy theory. Before Shannon's work, entropy was first introduced in thermodynamics by Clausius and in statistical mechanics by Boltzmann; these entropies are criteria characterizing properties of physical systems. Shannon's construction of entropy uses discrete probability theory, based on the idea that information obtained from a system with large vagueness is highly profitable, and he introduced (1) the entropy measuring the amount of information of the state of a system, and (2) the mutual entropy (information) representing the amount of information correctly transmitted

from the initial system to the final system through a channel. This entropy theory, together with the development of probability theory due to Kolmogorov, gives a mathematical foundation of classical information theory, with the relative entropy of two states by Kullback-Leibler [2] and the mutual entropy by Gelfand-Kolmogorov-Yaglom [3,4] on continuous probability spaces. In addition, a channel of discrete systems given by a transition probability was generalized to the integral kernel theory. The channel of continuous systems is expressed as a state change on a commutative probability space by introducing the averaged operator of Umegaki, and it was extended to the quantum channel describing a state change in noncommutative systems [5]. Since present optical communication uses laser signals, it is necessary to construct a new information theory dealing with such quantum quantities in order to discuss the efficiency of information transmission in optical communication processes rigorously. This is the quantum information theory, extending the important measures such as the entropy, the relative entropy and the mutual entropy formulated by Shannon et al. to quantum systems. The study of entropy in quantum systems was begun by von Neumann [6] in 1932; the quantum relative entropy was introduced by Umegaki [7] and extended to general quantum systems by Araki [8,9], Uhlmann [10] and Donald [11]. In quantum information theory, one of the important subjects is to examine how much information is correctly carried through a channel, so that it is necessary to extend the mutual entropy of classical systems to quantum systems.

The mutual entropy in a classical system is defined by the joint probability distribution between the input and output systems. However, a joint probability does not generally exist in quantum systems (see [12]). The compound state devised in [13,14] gives a key to solving this problem. It is defined through the Schatten decomposition [15] (one-dimensional orthogonal decomposition) of the input state and the quantum channel. Ohya introduced the quantum mutual entropy based on the compound state in 1983 [13,16]. Since it satisfies Shannon's inequalities, it describes the amount of information correctly transmitted from the input system through a quantum channel. By using fundamental entropies such as the von Neumann entropy and the Ohya mutual entropy, the complete quantum version of Shannon's information theory was formulated.

The quantum entropy for a density operator was defined by von Neumann [6] about 20 years before the Shannon entropy appeared. The properties of this entropy are summarized in [17]. Main properties of the quantum relative entropy are taken from the articles [8-11,17-20]. A quantum mutual entropy was introduced by Holevo, Levitin and Ingarden [21,22] for classical input and output passing through a possibly quantum channel. The completely quantum mechanical mutual entropy was defined by Ohya [13], and its generalization to C*-algebras was done in [23]. Applications of the mutual entropy have been studied in various fields [16,24-29]. Applications of channels were given in [13,16,30-33].

Concerning quantum communication, the following studies have been done. The characterization of quantum communication or stochastic processes is discussed below, and beam splitting was rigorously studied by Fichtner, Freudenberg and Liebscher [34,35]. The transition expectation was introduced by Accardi [36] to study quantum Markov processes [37]. The noisy optical channel was discussed in [28]. In quantum optics, linear amplifiers have been discussed by several authors [38,39], and their rigorous expression given here is in [30]. The channel capacities are discussed here based on the papers [40,41]. The bound of the capacity was first studied by Holevo [42] and then by many others [39,41,43].

The entangled state is an important concept in quantum theory; it has recently been studied by several authors, and its rigorous mathematical study was given in [44,45].

Let us comment on general entropies of states in C*-dynamical systems. The C*-entropy was introduced in [23] and its properties are discussed in [25,26,46]. The relative entropy for two general states was introduced by Araki [8,9] in von Neumann algebras and by Uhlmann [10] in *-algebras. The mutual entropy in C*-algebras was introduced by Ohya [16]. Other aspects of quantum entropy are discussed thoroughly in the book [17].

The classical dynamical (or Kolmogorov-Sinai) entropy S(T) [47] for a measure-preserving transformation T was defined on a message space through finite partitions of the measurable space.

The classical coding theorems of Shannon are important tools for analysing communication processes, and they are formulated by means of the mean dynamical entropy and the mean dynamical mutual entropy. The mean dynamical entropy represents the amount of information per letter of a signal sequence sent from an input source, and the mean dynamical mutual entropy gives the amount of information per letter of the signal received in an output system.

The quantum dynamical entropy (QDE) was studied by Connes-Størmer [48], Emch [49], Connes-Narnhofer-Thirring [50], Alicki-Fannes [51], and others [52-56]. Their dynamical entropies were defined on spaces of observables. Recently, the quantum dynamical entropy and the quantum dynamical mutual entropy were studied by the present authors [16,29]: (1) a dynamical entropy defined on state spaces through the complexity of Information Dynamics [57]; (2) a definition through the quantum Markov chain (QMC) given in [58]; (3) a dynamical entropy for completely positive (CP) maps introduced in [59].

In this review paper, we give an overview of the entropy theory mentioned above. Its details are discussed in the books [17,60].

2. Setting of Quantum Systems

We first summarize the mathematical descriptions of both classical and quantum systems.

(1) Classical Systems: Let M(Ω) be the set of all real random variables on a probability measure space (Ω, F, μ) and P(Ω) be the set of all probability measures on the measurable space (Ω, F). Then f ∈ M(Ω) and μ ∈ P(Ω) represent an observable and a state of a classical system, respectively. The expectation value of the observable f ∈ M(Ω) with respect to a state μ ∈ P(Ω) is given by ∫ f dμ.

(2) Usual Quantum Systems: We denote the set of all bounded linear operators on a Hilbert space H by B(H), and the set of all density operators on H by S(H). A Hermitian operator A ∈ B(H) and ρ ∈ S(H) denote an observable and a state of a usual quantum system. The expectation value of the observable A ∈ B(H) with respect to a state ρ ∈ S(H) is obtained by tr ρA.

(3) General Quantum Systems: More generally, let A be a C*-algebra (i.e., a complex normed algebra with involution * such that ‖A‖ = ‖A*‖, ‖A*A‖ = ‖A‖² and complete w.r.t. the norm ‖·‖) and S(A) be the set of all states on A (i.e., positive continuous linear functionals φ on A such that φ(I) = 1 if the unit I is in A).

If Λ : A → B is a unital map from a C*-algebra A to a C*-algebra B, then its dual map Λ* : S(B) → S(A), defined by Λ*φ(A) = φ(Λ(A)) (∀φ ∈ S(B), ∀A ∈ A), is called a channel. Remark that the output algebra B will sometimes be denoted by Ā.

Such an algebraic approach contains both classical and quantum theories. The descriptions of a classical dynamical system (CDS), a usual quantum dynamical system (QDS) and a general quantum dynamical system (GQDS) are given in the following table:

Table 1. Descriptions of CDS, QDS and GQDS.

              CDS                       QDS                                GQDS
observable    real r.v. f ∈ M(Ω)       Hermitian operator A on H          self-adjoint element A
                                        (self-adjoint operator in B(H))    in a C*-algebra A
state         probability measure       density operator ρ on H            p.l. functional φ ∈ S(A)
              μ ∈ P(Ω)                                                     with φ(I) = 1
expectation   ∫ f dμ                    tr ρA                              φ(A)
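The parallel between the classical and usual quantum descriptions above can be sketched numerically. The following Python fragment is an illustrative sketch (the particular random variable, distribution, observable and density matrix are invented for the example, not taken from the text): the classical expectation ∫ f dμ becomes a weighted sum, and the quantum expectation is tr ρA.

```python
import numpy as np

# Classical system (CDS): a random variable f and a probability measure mu
# on a finite sample space Omega = {0, 1, 2}.
f = np.array([1.0, 2.0, 3.0])          # observable: real random variable
mu = np.array([0.5, 0.3, 0.2])         # state: probability distribution
classical_expectation = np.dot(f, mu)  # corresponds to the integral of f d(mu)

# Usual quantum system (QDS): a Hermitian operator A and a density operator rho.
A = np.array([[1.0, 0.5], [0.5, 2.0]])        # observable: Hermitian matrix
rho = np.array([[0.7, 0.1], [0.1, 0.3]])      # state: positive, trace one
quantum_expectation = np.trace(rho @ A).real  # corresponds to tr(rho A)

print(classical_expectation)  # 1.7
print(quantum_expectation)    # 1.4
```

Note that a diagonal density matrix with a diagonal observable reproduces the classical case exactly, which is the sense in which the quantum description contains the classical one.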

3. Communication Processes

We discuss the quantum communication processes in this section.

Let M be the infinite direct product of an alphabet A: M = A^Z = Π_{−∞}^{∞} A, called a message space. A coding ξ is a one-to-one map from M to some space X, called the coded space. This space X may be a classical object or a quantum object. For a quantum system, X may be a space of quantum observables on a Hilbert space H, and then the coded input system is described by (B(H), S(H)). The coded output space is denoted by X̃ and the decoded output space M̃ is made of another alphabet. A transmission (map) Γ from X to X̃ (actually its dual map, as discussed below) is called a channel; it reflects the properties of a physical device. With a decoding ξ̃, the whole information transmission process is written as

M →^ξ X →^Γ X̃ →^ξ̃ M̃

That is, a message m ∈ M is coded to ξ(m) and sent to the output system through the channel Γ; the output coded message becomes Γ ∘ ξ(m) and is decoded to ξ̃ ∘ Γ ∘ ξ(m) at a receiver.

The occurrence probability of each message in the sequence (m^(1), m^(2), ..., m^(N)) of N messages is denoted by p = {p_k}, which is a state in a classical system. If ξ is a quantum coding, then ξ(m) is a quantum object (state) such as a coherent state. Here we consider such a quantum coding, that is, ξ(m^(k)) is a quantum state, and we denote ξ(m^(k)) by σ_k. Thus the coded state for the sequence (m^(1), m^(2), ..., m^(N)) is written as

σ = Σ_k p_k σ_k

This state is transmitted through the dual map of Γ, which is called the channel in the sequel. This channel Γ* is expressed by a completely positive mapping, in the sense of Section 6, from the state space of X to that of X̃, hence the output coded quantum state σ̃ is Γ*σ. Since the information transmission process can be understood as a process of state (probability) change, when Ω and Ω̃ are classical and X and X̃ are quantum, the process is written as

P(Ω) →^{Ξ*} S(H) →^{Γ*} S(H̃) →^{Ξ̃*} P(Ω̃)

where Ξ* (resp. Ξ̃*) is the channel corresponding to the coding ξ (resp. ξ̃) and S(H) (resp. S(H̃)) is the set of all density operators (states) on H (resp. H̃).

We have to be careful when studying the objects in the above transmission process; namely, we have to make clear which object we are going to study. For instance, if we want to know the information of a quantum state through a quantum channel Γ (or Γ*), then we have to take X to describe a quantum system, like a Hilbert space, and we need to start the study from a quantum state in the quantum space X, not from a classical state associated to messages. We have a similar situation when we treat state change (computation) in a quantum computer.

4. Quantum Entropy for Density Operators

The entropy of a quantum state was introduced by von Neumann. The entropy of a state ρ is defined by

S(ρ) = −tr ρ log ρ

For a state ρ, there exists a unique spectral decomposition

ρ = Σ_k λ_k P_k

where λ_k is an eigenvalue of ρ and P_k is the associated projection for each λ_k. The projection P_k is not one-dimensional when λ_k is degenerate, so that the spectral decomposition can be further decomposed into one-dimensional projections. Such a decomposition is called a Schatten decomposition, namely,

ρ = Σ_{kj} λ_{kj} E_{kj}

where E_{kj} is the one-dimensional projection associated with λ_{kj}, and each degenerate eigenvalue repeats dim P_k times; for instance, if the eigenvalue λ₁ has degeneracy 3, then λ₁₁ = λ₁₂ = λ₁₃ = λ₁. To simplify notation we shall write the Schatten decomposition as

ρ = Σ_k λ_k E_k

where the numbers {λ_k} form a probability distribution:

Σ_k λ_k = 1,  λ_k ≥ 0

This Schatten decomposition is not unique unless every eigenvalue is non-degenerate. The entropy (von Neumann entropy) S(ρ) of a state ρ then equals the Shannon entropy of the probability distribution {λ_k}:

S(ρ) = −Σ_k λ_k log λ_k

Therefore the von Neumann entropy contains the Shannon entropy as a special case.
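The reduction of S(ρ) to the Shannon entropy of the eigenvalues can be sketched numerically. The following Python fragment is an illustrative sketch (the function name and the small-eigenvalue cutoff are choices of this sketch, not notation from the text): it diagonalizes ρ and applies the Shannon formula to the spectrum, using the convention 0 log 0 = 0.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -tr(rho log rho), computed from the eigenvalues of rho.

    By the Schatten decomposition rho = sum_k lambda_k E_k, this equals
    the Shannon entropy of the eigenvalue distribution {lambda_k}.
    """
    eigenvalues = np.linalg.eigvalsh(rho)
    eigenvalues = eigenvalues[eigenvalues > 1e-12]  # 0 log 0 = 0 convention
    return float(-np.sum(eigenvalues * np.log(eigenvalues)))

# A pure state has zero entropy; the maximally mixed qubit state has log 2.
pure = np.array([[1.0, 0.0], [0.0, 0.0]])
mixed = np.eye(2) / 2
print(von_neumann_entropy(pure))   # 0.0
print(von_neumann_entropy(mixed))  # log 2, about 0.6931
```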

Let us summarize the fundamental properties of the entropy S(ρ).

Theorem 1 For any density operator ρ ∈ S(H), the following hold:

(1) Positivity: S(ρ) ≥ 0.

(2) Symmetry: Let ρ′ = UρU* for a unitary operator U. Then

S(ρ′) = S(ρ)

(3) Concavity: S(λρ₁ + (1 − λ)ρ₂) ≥ λS(ρ₁) + (1 − λ)S(ρ₂) for any ρ₁, ρ₂ ∈ S(H) and λ ∈ [0,1].

(4) Additivity: S(ρ₁ ⊗ ρ₂) = S(ρ₁) + S(ρ₂) for any ρ_k ∈ S(H_k).

(5) Subadditivity: For the reduced states ρ₁, ρ₂ of ρ ∈ S(H₁ ⊗ H₂),

S(ρ) ≤ S(ρ₁) + S(ρ₂)

(6) Lower Semicontinuity: If ‖ρ_n − ρ‖₁ → 0 (i.e., tr|ρ_n − ρ| → 0) as n → ∞, then

S(ρ) ≤ lim inf_{n→∞} S(ρ_n)

(7) Continuity: Let ρ_n, ρ be elements in S(H) satisfying the following conditions: (i) ρ_n → ρ weakly as n → ∞, (ii) ρ_n ≤ A (∀n) for some compact operator A, and (iii) −Σ_k a_k log a_k < +∞ for the eigenvalues {a_k} of A. Then S(ρ_n) → S(ρ).

(8) Strong Subadditivity: Let H = H₁ ⊗ H₂ ⊗ H₃ and denote the reduced states tr_{H_i⊗H_j} ρ by ρ_k and tr_{H_k} ρ by ρ_{ij}. Then S(ρ) + S(ρ₂) ≤ S(ρ₁₂) + S(ρ₂₃) and S(ρ₁) + S(ρ₃) ≤ S(ρ₁₂) + S(ρ₂₃).

(9) Entropy increase: (i) Let H be a finite dimensional space. If the channel Λ* is unital, that is, the dual map Λ of Λ* satisfies Λ(I) = I, then S(Λ*ρ) ≥ S(ρ). (ii) For an arbitrary Hilbert space H, if the dual map Λ of the channel Λ* satisfies Λ(ρ) ∈ S(H), then S(Λ*ρ) ≥ S(ρ).
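Several items of the theorem admit quick numerical sanity checks. The following Python sketch (not part of the original text; the random-state generator and the partial-trace helper are constructions of this sketch) verifies concavity, additivity on product states, and subadditivity for a correlated two-qubit state.

```python
import numpy as np

def entropy(rho):
    # von Neumann entropy from the spectrum, with the 0 log 0 = 0 convention
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

def random_density(n, seed):
    # a full-rank random density matrix (Wishart-type construction)
    rng = np.random.default_rng(seed)
    g = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def partial_trace(rho, dims, keep):
    # reduced state on factor `keep` of a bipartite state on H1 (x) H2
    d1, d2 = dims
    r = rho.reshape(d1, d2, d1, d2)
    return np.einsum('ijkj->ik', r) if keep == 0 else np.einsum('ijil->jl', r)

rho1, rho2 = random_density(2, 1), random_density(2, 2)

# (3) Concavity: mixing states never decreases entropy below the average
lam = 0.3
assert entropy(lam*rho1 + (1-lam)*rho2) >= lam*entropy(rho1) + (1-lam)*entropy(rho2) - 1e-9

# (4) Additivity on product states
assert abs(entropy(np.kron(rho1, rho2)) - entropy(rho1) - entropy(rho2)) < 1e-9

# (5) Subadditivity for a correlated state on H1 (x) H2
rho = random_density(4, 3)
s1 = entropy(partial_trace(rho, (2, 2), 0))
s2 = entropy(partial_trace(rho, (2, 2), 1))
assert entropy(rho) <= s1 + s2 + 1e-9
print("concavity, additivity, subadditivity verified")
```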

In order to prove the theorem, we need the following lemma.

Lemma 2 Let f be a convex C¹ function on a proper domain and ρ, σ ∈ S(H). Then

(1) Klein's inequality: tr{f(ρ) − f(σ) − (ρ − σ)f′(σ)} ≥ 0.

(2) Peierls' inequality: Σ_k f(⟨x_k, ρx_k⟩) ≤ tr f(ρ) for any CONS {x_k} in H. (Remark: ρ = Σ_n λ_n E_n ⇒ f(ρ) = Σ_n f(λ_n) E_n.)

5. Relative Entropy for Density Operators

For two states ρ, σ ∈ S(H), the relative entropy was first defined by Umegaki as

S(ρ, σ) = tr ρ(log ρ − log σ)   (ρ ≪ σ)
S(ρ, σ) = ∞                      (otherwise)

where ρ ≪ σ means that tr σA = 0 ⇒ tr ρA = 0 for A ≥ 0 (equivalently, ran ρ ⊂ ran σ). The main properties of the relative entropy are summarized as follows:

Theorem 3 The relative entropy satisfies the following properties:

(1) Positivity: S(ρ, σ) ≥ 0, and S(ρ, σ) = 0 iff ρ = σ.

(2) Joint Convexity: S(λρ₁ + (1 − λ)ρ₂, λσ₁ + (1 − λ)σ₂) ≤ λS(ρ₁, σ₁) + (1 − λ)S(ρ₂, σ₂) for any λ ∈ [0,1].

(3) Additivity: S(ρ₁ ⊗ ρ₂, σ₁ ⊗ σ₂) = S(ρ₁, σ₁) + S(ρ₂, σ₂).

(4) Lower Semicontinuity: If lim_{n→∞} ‖ρ_n − ρ‖₁ = 0 and lim_{n→∞} ‖σ_n − σ‖₁ = 0, then S(ρ, σ) ≤ lim inf_{n→∞} S(ρ_n, σ_n). Moreover, if there exists a positive number λ satisfying ρ_n ≤ λσ_n, then

lim_{n→∞} S(ρ_n, σ_n) = S(ρ, σ)

(5) Monotonicity: For a channel Λ* from S(H) to S(H̃), S(Λ*ρ, Λ*σ) ≤ S(ρ, σ).

(6) Lower Bound: ‖ρ − σ‖₁²/2 ≤ S(ρ, σ).

(7) Invariance under unitary mappings: S(UρU*, UσU*) = S(ρ, σ), where U is a unitary operator.
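A numerical sketch of the Umegaki relative entropy for faithful (full-rank) states follows; the helper names and the particular pair ρ, σ are choices of this illustration, not from the text. It checks the positivity (1) and the lower bound (6) above.

```python
import numpy as np

def mat_log(rho):
    # logarithm of a positive Hermitian matrix via its spectral decomposition
    w, v = np.linalg.eigh(rho)
    w = np.clip(w, 1e-15, None)
    return v @ np.diag(np.log(w)) @ v.conj().T

def relative_entropy(rho, sigma):
    """Umegaki relative entropy S(rho, sigma) = tr rho (log rho - log sigma),
    valid as written for full-rank sigma (otherwise the value may be infinite)."""
    return float(np.trace(rho @ (mat_log(rho) - mat_log(sigma))).real)

rho = np.array([[0.7, 0.2], [0.2, 0.3]])
sigma = np.eye(2) / 2  # maximally mixed reference state

S_rel = relative_entropy(rho, sigma)

# (1) Positivity
assert S_rel >= 0
# (6) Lower bound: ||rho - sigma||_1^2 / 2 <= S(rho, sigma)
trace_norm = np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))
assert trace_norm**2 / 2 <= S_rel + 1e-9
print(S_rel)
```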

Let us extend the relative entropy to two positive operators instead of two states. If A and B are two positive Hermitian operators (not necessarily states, i.e., not necessarily of unit trace), then we set

S(A, B) = tr A(log A − log B)

The following Bogoliubov inequality holds [41].

Theorem 4 One has

S(A, B) ≥ tr A(log tr A − log tr B)

This inequality gives us an upper bound of the channel capacity [41].

6. Channel and Lifting

The concept of a channel plays an important role in the mathematical description of quantum communication. The attenuation channel is one of the most important models discussed in quantum optical communication [13]. Moreover, there exists a special channel named "lifting", which is useful for characterizing quantum communication or stochastic processes. Here we briefly review the definitions and fundamental properties of quantum channels and liftings [17,28,29,60].

6.1. Quantum Channels

A general quantum system containing all systems, discrete and continuous, classical and quantum, is described by a C*-algebra or a von Neumann algebra, so we discuss the channeling transformation in C*-algebraic contexts. However, it is enough for readers who are not familiar with C*-algebras to imagine a usual quantum system; for instance, regard A and S(A) below as B(H) and S(H), respectively. Let A and Ā be C*-algebras and S(A) and S(Ā) be the sets of all states on A and Ā.

A channel is a mapping from S(A) to S(Ā). There exist channels with various properties.

Definition 5 Let (A, S(A)) be an input system and (Ā, S(Ā)) be an output system. Take any φ ∈ S(A).

(1) Λ* is linear if Λ*(λφ₁ + (1 − λ)φ₂) = λΛ*φ₁ + (1 − λ)Λ*φ₂ holds for any λ ∈ [0,1].

(2) Λ* is completely positive (CP) if Λ* is linear and its dual Λ : Ā → A satisfies

Σ_{i,j=1}^n A_i* Λ(Ā_i* Ā_j) A_j ≥ 0

for any n ∈ N and any {Ā_i} ⊂ Ā, {A_i} ⊂ A.

(3) Λ* is of Schwarz type if Λ(Ā*) = Λ(Ā)* and Λ(Ā)*Λ(Ā) ≤ Λ(Ā*Ā).

(4) Λ* is stationary if Λ ∘ ᾱ_t = α_t ∘ Λ for any t ∈ R.

(Here α_t and ᾱ_t are groups of automorphisms of the algebras A and Ā, respectively.)

(5) Λ* is ergodic if Λ* is stationary and Λ*(ex I(α)) ⊂ ex I(ᾱ).

(Here ex I(α) is the set of extreme points of the set I(α) of all stationary states.)

(6) Λ* is orthogonal if any two orthogonal states φ₁, φ₂ ∈ S(A) (denoted by φ₁ ⊥ φ₂) imply Λ*φ₁ ⊥ Λ*φ₂.

(7) Λ* is deterministic if Λ* is orthogonal and bijective.

(8) For a subset S₀ of S(A), Λ* is chaotic for S₀ if Λ*φ₁ = Λ*φ₂ for any φ₁, φ₂ ∈ S₀.

(9) Λ* is chaotic if Λ* is chaotic for S(A).

(10) Stinespring-Sudarshan-Kraus representation: a completely positive channel Λ* can be represented as

Λ*ρ = Σ_i A_i ρ A_i*,  Σ_i A_i* A_i = I

where the A_i are bounded operators on H.
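The Kraus form above can be sketched concretely. The amplitude damping channel below is a standard textbook example chosen for this illustration (it is not taken from the text); the sketch checks the completeness relation Σ_i A_i*A_i = I and that states map to states.

```python
import numpy as np

# Amplitude damping channel in Stinespring-Sudarshan-Kraus form.
eta = 0.6  # survival probability of the excited level (illustrative value)
A0 = np.array([[1.0, 0.0], [0.0, np.sqrt(eta)]])
A1 = np.array([[0.0, np.sqrt(1 - eta)], [0.0, 0.0]])
kraus = [A0, A1]

# Completeness: sum_i A_i^* A_i = I (trace preservation of the channel)
completeness = sum(A.conj().T @ A for A in kraus)
assert np.allclose(completeness, np.eye(2))

def channel(rho):
    # Lambda* rho = sum_i A_i rho A_i^*
    return sum(A @ rho @ A.conj().T for A in kraus)

rho = np.array([[0.2, 0.3], [0.3, 0.8]])
out = channel(rho)
assert abs(np.trace(out) - 1.0) < 1e-12  # the output is again a state
print(out)
```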

Most channels appearing in physical processes are CP channels. Examples of such channels are the following. Take a density operator ρ ∈ S(H) as an input state.

(1) Unitary evolution: Let H be the Hamiltonian of a system.

ρ → Λ*ρ = U_t ρ U_t*

where t ∈ R, U_t = exp(−itH).

(2) Semigroup evolution: Let V_t (t ∈ R₊) be a one-parameter semigroup on H.

ρ → Λ*ρ = V_t ρ V_t*,  t ∈ R₊

(3) Quantum measurement: If a measuring apparatus is described by a positive operator valued measure {Q_n}, then the state ρ changes to a state Λ*ρ after this measurement:

ρ → Λ*ρ = Σ_n Q_n ρ Q_n

(4) Reduction: If a system S₁ interacts with an external system S₂ described by another Hilbert space K and the initial states of S₁ and S₂ are ρ and σ, respectively, then the combined state θ_t of S₁ and S₂ at time t after the interaction between the two systems is given by

θ_t = U_t(ρ ⊗ σ)U_t*

where U_t = exp(−itH) with the total Hamiltonian H of S₁ and S₂. A channel is obtained by taking the partial trace w.r.t. K:

ρ → Λ*ρ = tr_K θ_t
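The reduction channel can be sketched numerically: evolve ρ ⊗ σ by U_t = exp(−itH) and trace out K. The interaction Hamiltonian, the states and the time t below are illustrative choices of this sketch, not taken from the text.

```python
import numpy as np

def unitary(H, t):
    # U_t = exp(-itH) via the spectral decomposition of the Hermitian H
    w, v = np.linalg.eigh(H)
    return v @ np.diag(np.exp(-1j * t * w)) @ v.conj().T

def partial_trace_K(theta, dH, dK):
    # tr_K theta for a state theta on H (x) K
    return np.einsum('ijkj->ik', theta.reshape(dH, dK, dH, dK))

# Illustrative two-qubit interaction Hamiltonian (an assumption of the sketch)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
H_int = np.kron(sx, sx)

rho = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)    # system state
sigma = np.array([[0.5, 0.0], [0.0, 0.5]], dtype=complex)  # external state

U = unitary(H_int, t=0.7)
theta_t = U @ np.kron(rho, sigma) @ U.conj().T  # combined state at time t
out = partial_trace_K(theta_t, 2, 2)            # Lambda* rho = tr_K theta_t

assert abs(np.trace(out).real - 1.0) < 1e-12  # trace preserving
assert np.allclose(out, out.conj().T)         # Hermitian output
print(out)
```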

(5) Optical communication processes: A quantum communication process is described by the following scheme [13]: an input state ρ ∈ S(H₁) is sent to an output state ρ̄ = Λ*ρ ∈ S(H₂), with a noise ν ∈ S(K₁) entering and a loss in S(K₂) leaving the process, through the composition

S(H₁) →^{γ*} S(H₁ ⊗ K₁) →^{π*} S(H₂ ⊗ K₂) →^{a*} S(H₂)

The maps γ*, a* are given as

γ*(ρ) = ρ ⊗ ν,  ρ ∈ S(H₁)
a*(θ) = tr_{K₂} θ,  θ ∈ S(H₂ ⊗ K₂)

where ν is a noise coming from outside of the system. The map π* is a certain channel determined by the physical properties of the device transmitting information. Hence the channel for the above process is given as Λ*ρ = tr_{K₂} π*(ρ ⊗ ν) = (a* ∘ π* ∘ γ*)(ρ).

(6) Attenuation process: Based on the construction of the optical communication process in (5), the attenuation channel is defined as follows [13]: Take ν₀ = |0⟩⟨0| (the vacuum state) and π*(·) = V₀(·)V₀*, where V₀ is given by

V₀(|n⟩ ⊗ |0⟩) = Σ_{j=0}^n C_j^n |j⟩ ⊗ |n − j⟩,  C_j^n = √(n!/(j!(n − j)!)) α^j β^{n−j}

with |α|² + |β|² = 1.

Then the output state of the attenuation channel Λ₀* is obtained by

Λ₀*ρ = tr_{K₂} π*(ρ ⊗ ν₀) = tr_{K₂} V₀(ρ ⊗ |0⟩⟨0|)V₀*

η = |α|² (0 ≤ η ≤ 1) is called the transmission rate of the attenuation channel Λ₀*. In particular, for a coherent input state ρ = |θ⟩⟨θ|, one has

π*(|θ⟩⟨θ| ⊗ |0⟩⟨0|) = |αθ⟩⟨αθ| ⊗ |−βθ⟩⟨−βθ|

which is called a beam splitting.
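The coherent-state factorization behind the attenuation channel can be checked numerically on a truncated Fock space. The sketch below is an illustration, not from the text: the truncation level N and the parameters α, β, θ are assumptions, and the sign convention V₀|θ⟩ = |αθ⟩ ⊗ |βθ⟩ is used (conventions for the sign of β vary between sections). It expands a coherent vector in the number basis, applies the coefficients C_j^n, and compares with the product of coherent vectors.

```python
import numpy as np
from math import factorial, sqrt, exp

N = 25                   # Fock-space truncation level (assumption of this sketch)
alpha, beta = 0.8, 0.6   # satisfies |alpha|^2 + |beta|^2 = 1
theta = 1.2              # coherent amplitude of the input

def coherent(z, N):
    # number-basis coefficients of the coherent vector |z>, truncated at N
    return np.array([exp(-abs(z)**2 / 2) * z**n / sqrt(factorial(n))
                     for n in range(N)])

# Apply V0(|n> (x) |0>) = sum_j C_j^n |j> (x) |n-j> to |theta> (x) |0>
out = np.zeros((N, N))
for n, cn in enumerate(coherent(theta, N)):
    for j in range(n + 1):
        C = sqrt(factorial(n) / (factorial(j) * factorial(n - j))) \
            * alpha**j * beta**(n - j)
        out[j, n - j] += cn * C

# Compare with the claimed product |alpha theta> (x) |beta theta>
product = np.outer(coherent(alpha * theta, N), coherent(beta * theta, N))
print(np.max(np.abs(out - product)))  # ~0 up to truncation/rounding error
```

The agreement reflects the identity |α|² + |β|² = 1 in the exponential factors: the beam splitter sends one coherent beam into two coherent beams of lower intensity with the total intensity preserved.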

(7) Noisy optical channel: Based on (5), the noisy optical channel is defined as follows [28]: Take a noise state ν₁ = |m₁⟩⟨m₁|, with |m₁⟩ the m₁-photon number state of K₁, and a linear mapping V : H₁ ⊗ K₁ → H₂ ⊗ K₂ given by

V(|n₁⟩ ⊗ |m₁⟩) = Σ_{j=0}^{n₁+m₁} C_j^{n₁,m₁} |j⟩ ⊗ |n₁ + m₁ − j⟩

C_j^{n₁,m₁} = Σ_{r=L}^{K} (−1)^{n₁+j−r} (√(n₁! m₁! j! (n₁ + m₁ − j)!) / (r! (n₁ − r)! (j − r)! (m₁ − j + r)!)) α^{m₁−j+2r} β^{n₁+j−2r}

K = min{n₁, j},  L = max{j − m₁, 0}

Then the output state of the noisy optical channel Λ* is defined by

Λ*ρ = tr_{K₂} π*(ρ ⊗ ν₁) = tr_{K₂} V(ρ ⊗ |m₁⟩⟨m₁|)V*

for the input state ρ ∈ S(H₁). In particular, for a coherent input state ρ = |θ⟩⟨θ| and a coherent noise state ν = |γ⟩⟨γ|, we obtain

π*(|θ⟩⟨θ| ⊗ |γ⟩⟨γ|) = |αθ + βγ⟩⟨αθ + βγ| ⊗ |−βθ + αγ⟩⟨−βθ + αγ|

which is called a generalized beam splitting.

6.2. Liftings

There exists a special channel named "lifting", and it is a useful concept for characterizing quantum communication or stochastic processes. It can be a mathematical tool to describe a process in a quantum algorithm, so we explain its foundations here.

Definition 6 Let A₁, A₂ be C*-algebras and let A₁ ⊗ A₂ be a fixed C*-tensor product of A₁ and A₂. A lifting from A₁ to A₁ ⊗ A₂ is a weak *-continuous map

E* : S(A₁) → S(A₁ ⊗ A₂)

If E* is affine and its dual is a completely positive map, we call it a linear lifting; if it maps pure states into pure states, we call it pure.

The algebra A₂ can be that of the output, namely Ā above. Note that to every lifting from A₁ to A₁ ⊗ A₂ we can associate two channels: one from A₁ to A₁, defined by

(Λ*ρ₁)(a₁) = (E*ρ₁)(a₁ ⊗ 1),  ∀a₁ ∈ A₁

and another from A₁ to A₂, defined by

(Λ*ρ₁)(a₂) = (E*ρ₁)(1 ⊗ a₂),  ∀a₂ ∈ A₂

In general, a state ρ ∈ S(A₁ ⊗ A₂) such that

ρ|_{A₁⊗1} = ρ₁,  ρ|_{1⊗A₂} = ρ₂

is called a compound state of the states ρ₁ ∈ S(A₁) and ρ₂ ∈ S(A₂). In classical probability theory, the term coupling between ρ₁ and ρ₂ is also used.

The following problem is important in several applications: Given a state ρ₁ ∈ S(A₁) and a channel Λ* : S(A₁) → S(A₂), find a standard lifting E* : S(A₁) → S(A₁ ⊗ A₂) such that E*ρ₁ is a compound state of ρ₁ and Λ*ρ₁. Several particular solutions of this problem have been proposed by Ohya [13,14] and Cecchini and Petz [63]; however, an explicit description of all possible solutions to this problem is still missing.

Definition 7 A lifting from A₁ to A₁ ⊗ A₂ is called nondemolition for a state ρ₁ ∈ S(A₁) if ρ₁ is invariant for Λ*, i.e., if for all a₁ ∈ A₁

(E*ρ₁)(a₁ ⊗ 1) = ρ₁(a₁)

The idea of this definition is that the interaction with system 2 does not alter the state of system 1.

Definition 8 Let A₁, A₂ be C*-algebras and let A₁ ⊗ A₂ be a fixed C*-tensor product of A₁ and A₂. A transition expectation from A₁ ⊗ A₂ to A₁ is a completely positive linear map E : A₁ ⊗ A₂ → A₁ satisfying

E(1_{A₁} ⊗ 1_{A₂}) = 1_{A₁}

An input signal is transmitted and received by an apparatus which produces an output signal. Here A₁ (resp. A₂) is interpreted as the algebra of observables of the input (resp. output) signal, and E describes the interaction between the input signal and the receiver as well as the preparation of the receiver. If ρ₁ ∈ S(A₁) is the input signal, then the state Λ*ρ₁ ∈ S(A₂) is the state of the (observed) output signal. Therefore, in the reduction dynamics discussed before, the correspondence from a state ρ to the interacting state θ_t = U_t(ρ ⊗ σ)U_t* gives us a time dependent lifting.

Furthermore, another important lifting related to this signal transmission is the one due to a quantum communication process discussed above. In several important applications, the state ρ₁ of the system before the interaction (preparation, input signal) is not known, and one would like to know this state knowing only Λ*ρ₁ ∈ S(A₂), i.e., the state of the apparatus after the interaction (output signal). From a mathematical point of view this problem is not well posed, since the map Λ* is usually not invertible. The best one can do in such cases is to acquire control over the description of those input states which have the same image under Λ* and then choose among them according to some statistical criterion. In the following we rewrite some communication processes by using liftings.

Example 9 (1): Isometric lifting.

Let V : H₁ → H₁ ⊗ H₂ be an isometry,

V*V = 1_{H₁}

Then the map

E : x ∈ B(H₁) ⊗ B(H₂) → V*xV ∈ B(H₁)

is a transition expectation in the sense of Accardi, and the associated lifting maps a density matrix w₁ on H₁ into E*w₁ = Vw₁V* on H₁ ⊗ H₂. Liftings of this type are called isometric. Every isometric lifting is a pure lifting; such liftings are applied in some quantum algorithms, such as Shor's.

These extend linearly to isometries, and the resulting isometric liftings are neither of convex product type nor of nondemolition type.

Example 10 (2): The compound lifting.

Let Λ* : S(A₁) → S(A₂) be a channel. For any ω₁ ∈ S(A₁) in the closed convex hull of the extremal states, fix a decomposition of ω₁ as a convex combination of extremal states in S(A₁),

ω₁ = ∫_{S(A₁)} ω dμ

where μ is a Borel measure on S(A₁) with support in the extremal states, and define

E*ω₁ = ∫_{S(A₁)} ω ⊗ Λ*ω dμ

Then E* : S(A₁) → S(A₁ ⊗ A₂) is a lifting, nonlinear even if Λ* is linear, and it is of nondemolition type. The most general lifting mapping S(A₁) into the closed convex hull of the extremal product states on A₁ ⊗ A₂ is essentially of this type. This nonlinear nondemolition lifting was first discussed by Ohya to define the compound state and the mutual entropy, as explained before. However, the above is a bit more general because the condition that μ is concentrated on the extremal states can be weakened.

Therefore, once a channel is given, a lifting of convex product type can be constructed from it. For example, the von Neumann quantum measurement process is written, in the terminology of liftings, as follows: Having measured a compact observable A = Σ_n a_n P_n (spectral decomposition with Σ_n P_n = I) in a state ρ, the state after this measurement will be

Λ*ρ = Σ_n P_n ρ P_n

and a lifting E*, of convex product type, associated to this channel Λ* and to a fixed decomposition of ρ as ρ = Σ_n λ_n ρ_n (ρ_n ∈ S(A₁)) is given by:

E*ρ = Σ_n λ_n ρ_n ⊗ Λ*ρ_n
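The von Neumann measurement channel Λ*ρ = Σ_n P_n ρ P_n can be sketched numerically. The qubit state and the projections below are illustrative choices; the sketch checks that the channel removes off-diagonal terms in the measurement basis and never decreases the von Neumann entropy (consistent with the entropy-increase property of Theorem 1).

```python
import numpy as np

def entropy(rho):
    # von Neumann entropy from the spectrum, 0 log 0 = 0 convention
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

# Spectral projections of a non-degenerate observable on a qubit
P0 = np.array([[1.0, 0.0], [0.0, 0.0]])
P1 = np.array([[0.0, 0.0], [0.0, 1.0]])

rho = np.array([[0.6, 0.3], [0.3, 0.4]])
after = P0 @ rho @ P0 + P1 @ rho @ P1  # Lambda* rho = sum_n P_n rho P_n

assert np.allclose(after, np.diag(np.diag(rho)))  # off-diagonals removed
assert entropy(after) >= entropy(rho) - 1e-12     # entropy does not decrease
print(entropy(rho), entropy(after))
```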

Before closing this section, we reconsider the noisy channel, the attenuation channel and the amplifier process (lifting) in optical communication.

Example 11 (3): The attenuation (or beam splitting) lifting.

It is the particular isometric lifting characterized by the following properties:

H₁ = H₂ =: Γ(C) (the Fock space over C) = L²(R)

V : Γ(C) → Γ(C) ⊗ Γ(C)

is characterized by the expression

V|θ⟩ = |αθ⟩ ⊗ |βθ⟩

where |θ⟩ is the normalized coherent vector parametrized by θ ∈ C, and α, β ∈ C are such that

|α|² + |β|² = 1

Notice that this lifting maps coherent states into products of coherent states, so it maps the simplex of the so-called classical states (i.e., convex combinations of coherent vectors) into itself. Restricted to these states it is of convex product type as explained above, but it is not of convex product type on the set of all states.

Denote, for θ ∈ C, the coherent state on B(Γ(C)) by ω_θ, namely,

ω_θ(b) = ⟨θ, bθ⟩,  b ∈ B(Γ(C))

Then for any b ∈ B(Γ(C))

(E*ω_θ)(b ⊗ 1) = ω_{αθ}(b)

so that this lifting is not nondemolition. These equations mean that, by the effect of the interaction, a coherent signal (beam) |θ⟩ splits into two signals (beams) which are still coherent but of lower intensity, while the total intensity (energy) is preserved by the transformation.

Finally, we mention two important beam splittings which are used to discuss quantum gates and quantum teleportation [64,65].

(1) Superposed beam splitting:

V_s|θ⟩ = (1/√2)(|αθ⟩ ⊗ |βθ⟩ − i|βθ⟩ ⊗ |αθ⟩)

(2) Beam splitting with two inputs and two outputs: Let |θ⟩ and |γ⟩ be two input coherent vectors. Then

V_d(|θ⟩ ⊗ |γ⟩) = |αθ + βγ⟩ ⊗ |−βθ + αγ⟩

Example 12 (4): Amplifier channel. To recover the loss, we need to amplify the signal (photons). In quantum optics, a linear amplifier is usually expressed by means of annihilation operators a and b on H and K, respectively:

c = √G a ⊗ I + √(G − 1) I ⊗ b*

where G (≥ 1) is a constant and c satisfies the CCR (i.e., [c, c*] = I) on H ⊗ K. This expression is not convenient for computing quantities like entropy. The lifting expression of the amplifier is suited for such use, and it is given as follows:

Let c = μa ⊗ I + ν I ⊗ b* with |μ|² − |ν|² = 1 and let |γ⟩ be an eigenvector of c: c|γ⟩ = γ|γ⟩. For two coherent vectors |θ⟩ on H and |θ′⟩ on K, |γ⟩ can be written in the squeezing expression |γ⟩ = |θ ⊗ θ′; μ, ν⟩, and the lifting is defined by the isometry

V_{μν}|θ⟩ = |θ ⊗ θ′; μ, ν⟩

such that

E*ρ = V_{μν} ρ V_{μν}*,  ρ ∈ S(H)

The channel of the amplifier is

Λ*ρ = tr_K E*ρ

7. Quantum Mutual Entropy

Quantum relative entropy was introduced by Umegaki and generalized by Araki and Uhlmann. A quantum analogue of Shannon's mutual entropy was then considered by Levitin, Holevo and Ingarden for classical input and output passing through a possibly quantum channel, in which case, as discussed below, the Shannon theory essentially applies. Thus we call such quantum mutual entropy semi-quantum mutual entropy in the sequel. The fully quantum mutual entropy, namely for quantum input and quantum output with a quantum channel, was introduced by Ohya; it is called the quantum mutual entropy. It can be generalized to a general quantum system described by a C*-algebra.

The quantum mutual entropy clearly contains the semi-quantum mutual entropy, as shown below. We mainly discuss the quantum mutual entropy in the usual quantum system described by a Hilbert space; its generalization to C*-systems will be explained briefly for future use (e.g., relativistic quantum information) in the last section. Note that the general mutual entropy contains all other cases, including the measure theoretic definition of Gelfand and Yaglom.

Let H be a Hilbert space for an input space, and let an output space be described by another Hilbert space H̃ (often one takes H̃ = H). A channel from the input system to the output system is a mapping Λ* from S(H) to S(H̃).

An input state ρ ∈ S(H) is sent to the output system through the channel Λ*, so that the output state is written as ρ̃ = Λ*ρ. It is then important to investigate how much information of ρ is correctly sent to the output state Λ*ρ. This amount of information transmitted from input to output is expressed by the mutual entropy (or mutual information).

The quantum mutual entropy was introduced on the basis of the von Neumann entropy for purely quantum communication processes. The mutual entropy depends on an input state ρ and a channel Λ*, so it is denoted by I(ρ; Λ*), and it should satisfy the following conditions:

(1) The quantum mutual entropy is well matched to the von Neumann entropy. That is, if the channel is trivial, i.e., Λ* = identity map, then the mutual entropy equals the von Neumann entropy: I(ρ; id) = S(ρ).

(2) When the system is classical, the quantum mutual entropy reduces to the classical one.

(3) Shannon's fundamental inequality 0 ≤ I(ρ; Λ*) ≤ S(ρ) holds.

In order to define the quantum mutual entropy, we need the quantum relative entropy and the joint state (called the "compound state" in the sequel) describing the correlation between an input state ρ and the output state Λ*ρ through a channel Λ*. A finite partition of Ω in the classical case corresponds, in the quantum case, to an orthogonal decomposition {E_k} of the identity operator I of H, because the set of all orthogonal projections is considered to form an event system in a quantum system. It is known that the corresponding entropy equality holds, and that the extremum is attained when {E_k} is a Schatten decomposition of ρ, i.e., ρ = Σ_k λ_k E_k. Therefore the Schatten decomposition is used to define the compound state and the quantum mutual entropy.

The compound state (corresponding to the joint state in classical systems) of ρ and Λ*ρ was introduced by Ohya in 1983. It is given by

Φ_E = Σ_k λ_k E_k ⊗ Λ*E_k

where E stands for a Schatten decomposition {E_k} of ρ, so that the compound state depends on how we decompose the state ρ into basic states (elementary events), in other words, on how we look at the input state. It is easy to see that tr Φ_E = 1 and Φ_E ≥ 0.

Applying the relative entropy S(·, ·) to the two compound states Φ_E and Φ_0 = ρ ⊗ Λ*ρ (the former includes a certain correlation between input and output, the latter does not), we can define Ohya's quantum mutual entropy (information) as

I(ρ; Λ*) = sup{ S(Φ_E, Φ_0) ; E = {E_k} }

where the supremum is taken over all Schatten decompositions of ρ, because this decomposition is not unique unless every eigenvalue of ρ is non-degenerate. Some computations reduce it to the following form for a linear channel.

Theorem 13 We have

I(ρ; Λ*) = sup{ Σ_k λ_k S(Λ*E_k, Λ*ρ) ; E = {E_k} }

It is easy to see that the quantum mutual entropy satisfies all conditions (1)-(3) mentioned above. When the input system is classical, an input state ρ is given by a probability distribution or a probability measure. In either case, the Schatten decomposition of ρ is unique; for a probability distribution λ = {λ_k},

ρ = Σ_k λ_k δ_k

where δ_k is the delta measure, that is, δ_k({j}) = 1 if j = k and 0 otherwise. Therefore for any channel Λ*, the mutual entropy becomes

I(ρ; Λ*) = Σ_k λ_k S(Λ*δ_k, Λ*ρ)

which equals the following usual expression when one of the two terms is finite in an infinite-dimensional Hilbert space:

I(ρ; Λ*) = S(Λ*ρ) − Σ_k λ_k S(Λ*δ_k)

The above expression was taken by Levitin and Holevo (LH for short in the sequel) as the definition associated with a classical-quantum channel. Thus Ohya's quantum mutual entropy (called the quantum mutual entropy in the sequel) contains the LH quantum mutual entropy (called the semi-quantum mutual entropy in the sequel) as a special case.
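
For a classical-like (diagonal) input, the reduced expression above can be evaluated directly. The following Python sketch is our own illustration (the depolarizing-type channel and the input state are arbitrary choices, not taken from the text); it computes I(ρ; Λ*) = S(Λ*ρ) − Σ_k λ_k S(Λ*E_k) and checks Shannon's fundamental inequality:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -tr(rho log rho) (natural log; 0 log 0 := 0)."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

def channel(rho, lam=0.5):
    """Example channel: mix rho with the maximally mixed state."""
    d = rho.shape[0]
    return lam * rho + (1 - lam) * np.eye(d) / d

# Input state with non-degenerate spectrum -> its Schatten
# decomposition is unique: rho = 0.7 E1 + 0.3 E2.
probs = [0.7, 0.3]
projs = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
rho = sum(p * E for p, E in zip(probs, projs))

# I(rho; Lambda*) = S(Lambda* rho) - sum_k lambda_k S(Lambda* E_k)
S_in = von_neumann_entropy(rho)
S_out = von_neumann_entropy(channel(rho))
I = S_out - sum(p * von_neumann_entropy(channel(E))
                for p, E in zip(probs, projs))
print(I)  # lies between 0 and min{S(rho), S(Lambda* rho)}
```

In this commuting example the supremum over Schatten decompositions is attained at the unique eigen-decomposition, so the single evaluation suffices.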

Note that the definition of the quantum mutual entropy can be written as

I_F(ρ; Λ*) = sup{ Σ_k λ_k S(Λ*ρ_k, Λ*ρ) ; ρ = Σ_k λ_k ρ_k ∈ F_o(ρ) }

where F_o(ρ) is the set of all orthogonal finite decompositions of ρ. Here ρ_k orthogonal to ρ_j (denoted by ρ_k ⊥ ρ_j) means that the range of ρ_k is orthogonal to that of ρ_j. We briefly explain this equality in the next theorem.

Theorem 14 One has I(ρ; Λ*) = I_F(ρ; Λ*).

Moreover, the following fundamental inequality follows from the monotonicity of the relative entropy:

Theorem 15 (Shannon's inequality)

0 ≤ I(ρ; Λ*) ≤ min{ S(ρ), S(Λ*ρ) }

For two given channels Λ_1* and Λ_2*, one has the quantum data processing inequality:

S(ρ) ≥ I(ρ; Λ_1*) ≥ I(ρ; Λ_2* ∘ Λ_1*)

The second inequality follows from the monotonicity of the relative entropy.

This is analogous to the classical data processing inequality for a Markov process X → Y → Z:

S(X) ≥ I(X, Y) ≥ I(X, Z)

where I(X, Y) is the mutual information between the random variables X and Y.

The mutual entropy is a measure not only of information transmission but also of state change, so this quantity can be applied to several topics in quantum dynamics. It can also be applied to some topics in quantum computation, to assess the ability of information transmission.
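
The data processing inequality can be checked numerically. The following sketch is our own illustration (the two depolarizing-type channels and the diagonal input are arbitrary choices); since everything commutes here, the quantum quantities reduce to their classical counterparts:

```python
import numpy as np

def S(rho):
    """von Neumann entropy -tr(rho log rho), natural log."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

def depol(lam):
    """Channel mixing the input with the maximally mixed state."""
    return lambda rho: lam * rho + (1 - lam) * np.eye(rho.shape[0]) / rho.shape[0]

def mutual(rho, probs, projs, chan):
    """I(rho; chan) = S(chan rho) - sum_k p_k S(chan E_k) for the
    (unique) Schatten decomposition rho = sum_k p_k E_k."""
    return S(chan(rho)) - sum(p * S(chan(E)) for p, E in zip(probs, projs))

probs = [0.9, 0.1]
projs = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
rho = np.diag([0.9, 0.1])

L1, L2 = depol(0.8), depol(0.6)
I1 = mutual(rho, probs, projs, L1)                   # I(rho; L1)
I2 = mutual(rho, probs, projs, lambda r: L2(L1(r)))  # I(rho; L2 o L1)
S_rho = S(rho)
assert S_rho >= I1 >= I2 >= 0.0  # S(rho) >= I(rho;L1) >= I(rho;L2 o L1)
```

Composing a second noisy channel strictly decreases the mutual entropy, as the inequality predicts.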

8. Some Applications to Statistical Physics

8.1. Ergodic theorem

We have an ergodic-type theorem with respect to the quantum mutual entropy.

Theorem 16 Let a state ρ be given by ρ(A) = tr(ρA) with a density operator ρ.

(1) If a channel Λ* is deterministic, then I(ρ; Λ*) = S(ρ).

(2) If a channel Λ* is chaotic, then I(ρ; Λ*) = 0.

(3) If ρ is a faithful state and every eigenvalue of ρ is non-degenerate, then I(ρ; Λ*) = S(Λ*ρ). (Remark: Here ρ is said to be faithful if tr(ρA*A) = 0 implies A = 0.)

8.2. CCR and channel

We discuss the attenuation channel in the context of the Weyl algebra.

Let T be a symplectic transformation from H to H ⊕ K, i.e., σ(f, g) = σ(Tf, Tg). Then there is a homomorphism α_T : CCR(H) → CCR(H ⊕ K) such that

α_T(W(f)) = W(Tf) (1)

We may regard the Weyl algebra CCR(H ⊕ K) as CCR(H) ⊗ CCR(K), and given a state ψ on CCR(K), a channeling transformation arises as

(Λω)(A) = (ω ⊗ ψ)(α_T(A)) (2)

where the input state ω is an arbitrary state of CCR(H) and A ∈ CCR(H) (this ψ is the noise state mentioned above). To see a concrete example discussed in [13], we choose K = H, take ψ to be the Fock state, and

Tf = af ⊕ bf (3)

If |a|² + |b|² = 1 holds for the numbers a and b, this T is an isometry and a symplectic transformation, and we arrive at the channeling transformation

(Λω)(W(g)) = ω(W(ag)) e^{−||bg||²/2} (g ∈ H) (4)

In order to have an alternative description of Λ* in terms of density operators acting on the Fock space Γ(H), we introduce the linear operator V : Γ(H) → Γ(H) ⊗ Γ(H) defined by

V(π_F(A))Φ = π_F(α_T(A))(Φ ⊗ Φ)

where π_F denotes the Fock representation and Φ the vacuum vector. We have

V(π_F(W(f)))Φ = (π_F(W(af)) ⊗ π_F(W(bf)))(Φ ⊗ Φ)

VΦ_f = Φ_{af} ⊗ Φ_{bf}

for the coherent vectors Φ_f.

Theorem 17 Let ω be a state of CCR(H) which has a density operator D in the Fock representation. Then the output state Λ*ω of the attenuation channel has the density operator tr_2 V D V* (partial trace over the second factor) in the Fock representation.

This theorem says that Λ* is really the same as the noisy channel with m = 0.

We note that Λ, the dual of Λ*, is a so-called quasi-free completely positive mapping of CCR(H) given by

Λ(W(f)) = W(af) e^{−||bf||²/2} (5)

Theorem 18 If ψ is a regular state of CCR(H), that is, t ↦ ψ(W(tf)) is a continuous function on ℝ for every f ∈ H, then

(Λ*)^n(ψ) → φ

pointwise, where φ is the Fock state.

It is worth noting that the singular state

τ(W(f)) = 0 if f ≠ 0, and τ(W(f)) = 1 if f = 0 (6)

is an invariant state of CCR(H). On the other hand, the proposition applies to states possessing a density operator in the Fock representation. Therefore, we have

Corollary 19 Λ*, regarded as a channel of B(Γ(H)), has a unique invariant state, the Fock state; correspondingly, Λ* is ergodic.

Λ* is not only ergodic but also completely dissipative, in the sense that Λ(A*A) = Λ(A*)Λ(A) may happen only in the trivial case when A is a multiple of the identity; this notion was discussed by M. Fannes and A. Verbeure. In fact,

Λ = (id ⊗ ω) ∘ α_T (7)

where α_T is given by (1) and (3), and ω(W(f)) = exp(−||f||²/2) is a quasi-free state.

8.3. Irreversible processes

Irreversible phenomena can be treated by several different methods. One of them is based on the entropy change. However, it is difficult to explain the entropy change from reversible equations of motion such as the Schrödinger equation or the Liouville equation. Therefore we need some modifications to explain the irreversibility of nature:

(i) QM + "α", where α represents an effect of noise, coarse graining, etc.

Here we discuss some trials in this direction, essentially done in [16]. Let ρ be a state and Λ*, Λ_t* be channels. Then we ask:

(1) ρ → Λ*ρ ⇒ S(ρ) ≤ S(Λ*ρ)?

(2) ρ → ρ_t = Λ_t*ρ ⇒ does lim_{t→∞} S(ρ_t) exist and satisfy S(ρ) ≤ lim_{t→∞} S(ρ_t)?

(3) Consider the change of I(ρ_t; Λ*). (I(ρ_t; Λ*) should be decreasing!)

8.4. Entropy change in linear response dynamics

We first discuss the entropy change in the linear response dynamics. Let H be a lower-bounded Hamiltonian and take

U_t = exp(itH), α_t(A) = U_t A U_t*

For a KMS state φ given by a density operator ρ and a perturbation λV (V = V* ∈ 𝔄, λ ∈ [0,1]), the perturbed time evolution is defined by a Dyson series:

α_t^{λV}(A) = Σ_{n=0}^∞ (iλ)^n ∫···∫_{0≤t_1≤···≤t_n≤t} dt_1 ··· dt_n [α_{t_1}(V), [··· , [α_{t_n}(V), α_t(A)] ··· ]] (8)

and the perturbed state is

φ^{λV}(A) = φ(W*AW) / φ(W*W) (9)

W = Σ_{n=0}^∞ (−λ)^n ∫···∫_{0≤t_1≤···≤t_n≤1/2} dt_1 ··· dt_n α_{it_1}(V) ··· α_{it_n}(V)

The linear response time evolution and the linear response perturbed state are given by

α_t^{V,1}(A) = α_t(A) + iλ ∫_0^t ds [α_s(V), α_t(A)] (10)

φ^{V,1}(A) = φ(A) − λ ∫_0^1 ds φ(A α_{is}(V)) + λ φ(A) φ(V) (11)

This linear response perturbed state φ^{V,1} is written as

φ^{V,1}(A) = tr(ρ^{V,1} A) (12)

ρ^{V,1} = (I − λ ∫_0^1 α_{is}(V) ds + λ tr(ρV)) ρ

The linear response time-dependent state is

ρ^{V,1}(t) = α_t^{V,1*}(ρ) = ρ + iλ [ρ, ∫_0^t α_{−s}(V) ds] (13)

Normalizing by the absolute value, put

ε(t) = |ρ^{V,1}(t)| / tr|ρ^{V,1}(t)|, ε = |ρ^{V,1}| / tr|ρ^{V,1}| (14)

S(ρ^{V,1}(t)) = S(ε(t)), S(ρ^{V,1}) = S(ε) (15)

The change of the linear response entropy S(ρ^{V,1}(t)) is shown in the following theorem [16].

Theorem 20 If ρ^{V,1}(t) goes to ρ^{V,1} as t → ∞ and S(ρ) < +∞, then S(ρ^{V,1}(t)) → S(ρ^{V,1}) as t → ∞.

Concerning the entropy change in exact dynamics, we have the following general result [16]:

Theorem 21 Let Λ* : 𝔖(H) → 𝔖(K) be a channel satisfying

tr(Λρ) = tr(ρ) for any ρ ∈ 𝔖(H)

Then

S(ρ) ≤ S(Λ*ρ)

8.5. Time development of mutual entropy

Frigerio studied the approach to stationarity of an open system in [68]. Let an input algebra and an output algebra be the same von Neumann algebra 𝔄, and let Λ(ℝ⁺) = {Λ_t ; t ∈ ℝ⁺} be a dynamical semigroup (i.e., Λ_t (t ∈ ℝ⁺) is a weak* continuous semigroup and each Λ_t* is a normal channel) on 𝔄 having at least one faithful normal stationary state θ (i.e., Λ_t*θ = θ for any t ∈ ℝ⁺). For this Λ(ℝ⁺), put

𝔄^Λ = { A ∈ 𝔄 ; Λ_t(A) = A, t ∈ ℝ⁺ }

𝔄^C = { A ∈ 𝔄 ; Λ_t(A*A) = Λ_t(A*)Λ_t(A), t ∈ ℝ⁺ }

Then 𝔄^Λ is a von Neumann subalgebra of 𝔄. Frigerio proved the following theorem [68].

Theorem 22 (1) There exists a conditional expectation ε from 𝔄 to 𝔄^Λ.

(2) When 𝔄^Λ = 𝔄^C, for any normal state ω, Λ_t*ω converges to a stationary state in the w*-sense.

From the above theorem, we obtain [16]:

Theorem 23 For a normal channel Λ* and a normal state ρ, if a measure μ ∈ M_ρ(𝒮) is orthogonal, if 𝔄^Λ = 𝔄^C holds, and 𝔄 is of type I, then I_μ(ρ; Λ_t*) decreases in time and approaches I_μ(ρ; ε*) as t → ∞.

This theorem tells us that the mutual entropy decreases in time if the system is dissipative, so that the mutual entropy can serve as a measure of irreversibility.

9. Entropies for General Quantum States

We briefly discuss some basic facts of entropy theory for general quantum systems, which are needed to treat communication (computation) processes from a general standpoint, that is, independently of whether the system is classical or quantum.

Let (𝔄, 𝔖(𝔄)) be a C*-system. The entropy (uncertainty) of a state ρ ∈ 𝒮 seen from a reference system 𝒮, a weak*-compact convex subset of the whole state space 𝔖(𝔄) on the C*-algebra 𝔄, was introduced by Ohya [16]. This entropy contains von Neumann's entropy and classical entropy as special cases.

Every state ρ ∈ 𝒮 has a maximal measure μ pseudosupported on ex𝒮 (the set of extreme points of 𝒮) such that

ρ = ∫_𝒮 ω dμ

The measure μ giving the above decomposition is not unique unless 𝒮 is a Choquet simplex (i.e., for the cone 𝒮̃ = {λω ; ω ∈ 𝒮, λ ≥ 0} with the order φ_1 ≻ φ_2 iff φ_1 − φ_2 ∈ 𝒮̃, 𝒮 is a Choquet simplex if 𝒮̃ is a lattice for this order), so we denote the set of all such measures by M_ρ(𝒮). Take

D_ρ(𝒮) = { μ ∈ M_ρ(𝒮) ; ∃ {μ_k} ⊂ ℝ⁺ and {ρ_k} ⊂ ex𝒮 s.t. Σ_k μ_k = 1, μ = Σ_k μ_k δ(ρ_k) }

where δ(ρ) is the delta measure concentrated on {ρ}. Put

H(μ) = −Σ_k μ_k log μ_k

for a measure μ ∈ D_ρ(𝒮).

Definition 24 The entropy of a general state ρ ∈ 𝒮 w.r.t. 𝒮 is defined by

S^𝒮(ρ) = inf{ H(μ) ; μ ∈ D_ρ(𝒮) } if D_ρ(𝒮) ≠ ∅, and S^𝒮(ρ) = +∞ if D_ρ(𝒮) = ∅.

When 𝒮 is the total space 𝔖(𝔄), we simply denote S^𝒮(ρ) by S(ρ).

This entropy (mixing 𝒮-entropy) of a general state ρ satisfies the following properties.

Theorem 25 When 𝔄 = B(H) and α_t = Ad(U_t) (i.e., α_t(A) = U_t* A U_t for any A ∈ 𝔄) with a unitary operator U_t, for any state ρ given by ρ(·) = tr(ρ ·) with a density operator ρ, the following facts hold:

(1) S(ρ) = −tr ρ log ρ.

(2) If ρ is an α-invariant faithful state and every eigenvalue of ρ is non-degenerate, then S^{I(α)}(ρ) = S(ρ), where I(α) is the set of all α-invariant faithful states.

(3) If ρ ∈ K(α), then S^{K(α)}(ρ) = 0, where K(α) is the set of all KMS states.

Theorem 26 For any ρ ∈ K(α), we have

(1) S^{K(α)}(ρ) ≤ S^{I(α)}(ρ).

(2) S^{K(α)}(ρ) ≤ S(ρ).

This 𝒮 (or mixing) entropy gives a measure of the uncertainty observed from the reference system 𝒮, so it has the following merits: even if the total entropy S(ρ) is infinite, S^𝒮(ρ) is finite for some 𝒮, hence it explains a sort of symmetry breaking in 𝒮. Other properties similar to those of S(ρ) hold for S^𝒮(ρ). This entropy can be applied to characterize normal states and quantum Markov chains in von Neumann algebras.

The relative entropy for two general states was introduced by Araki and Uhlmann, and the relation between their definitions was considered by Donald and by Hiai et al.

<Araki's relative entropy> [8,9]

Let 𝒩 be a σ-finite von Neumann algebra acting on a Hilbert space H and let φ, ψ be normal states on 𝒩 given by φ(·) = (x, ·x) and ψ(·) = (y, ·y) with x, y ∈ 𝒦 (a positive natural cone) ⊂ H. The operator S_{x,y} is defined by

S_{x,y}(Ay + z) = s^𝒩(y) A* x, A ∈ 𝒩, s^{𝒩'}(y) z = 0

on the domain 𝒩y + (I − s^{𝒩'}(y))H, where s^𝒩(y) is the projection from H onto {𝒩'y}⁻, the 𝒩-support of y. Using this S_{x,y}, the relative modular operator Δ_{x,y} is defined as Δ_{x,y} = (S̄_{x,y})* S̄_{x,y}, whose spectral decomposition is denoted by ∫_0^∞ λ de_{x,y}(λ) (S̄_{x,y} is the closure of S_{x,y}). Then the Araki relative entropy is given by

Definition 27

S(ψ, φ) = ∫_0^∞ log λ d(y, e_{x,y}(λ)y) if ψ ≪ φ, and S(ψ, φ) = ∞ otherwise,

where ψ ≪ φ means that φ(A*A) = 0 implies ψ(A*A) = 0 for A ∈ 𝒩.

<Uhlmann's relative entropy> [10]

Let L be a complex linear space and let p, q be two seminorms on L. Moreover, let H(L) be the set of all positive hermitian forms α on L satisfying |α(x, y)| ≤ p(x) q(y) for all x, y ∈ L. Then the quadratical mean QM(p, q) of p and q is defined by

QM(p, q)(x) = sup{ α(x, x)^{1/2} ; α ∈ H(L) }, x ∈ L

There exists a family of seminorms p_t(x), t ∈ [0,1], for each x ∈ L satisfying the following conditions:

(1) For any x ∈ L, p_t(x) is continuous in t,

(2) p_{1/2} = QM(p, q),

(3) p_{t/2} = QM(p, p_t),

(4) p_{(t+1)/2} = QM(p_t, q).

This seminorm p_t is denoted by QI_t(p, q) and is called the quadratical interpolation from p to q. It is shown that for any positive hermitian forms α, β, there exists a unique function QF_t(α, β) of t ∈ [0,1] with values in the set H(L) such that QF_t(α, β)(x, x)^{1/2} is the quadratical interpolation from α(x, x)^{1/2} to β(x, x)^{1/2}. The relative entropy functional S(α, β)(x) of α and β is defined as

S(α, β)(x) = −lim inf_{t→0} (1/t) { QF_t(α, β)(x, x) − α(x, x) }

for x ∈ L. Let L be a *-algebra 𝔄 and let φ, ψ be positive linear functionals on 𝔄 defining two hermitian forms φ^L and ψ^R by φ^L(A, B) = φ(A*B) and ψ^R(A, B) = ψ(BA*).

Definition 28 The relative entropy of φ and ψ is defined by

S(ψ, φ) = S(ψ^R, φ^L)(I)

<Ohya's mutual entropy> [16]

Next we discuss the mutual entropy in C*-systems. For any ρ ∈ 𝒮 ⊂ 𝔖(𝔄) and a channel Λ* : 𝔖(𝔄) → 𝔖(𝔄), define the compound states by

Φ_μ^𝒮 = ∫_𝒮 ω ⊗ Λ*ω dμ

Φ_0 = ρ ⊗ Λ*ρ

The first compound state generalizes the joint probability in classical systems, and it exhibits the correlation between the initial state ρ and the final state Λ*ρ.

Definition 29 The mutual entropy w.r.t. 𝒮 and μ is

I_μ^𝒮(ρ; Λ*) = S(Φ_μ^𝒮, Φ_0)

and the mutual entropy w.r.t. 𝒮 is defined as

I^𝒮(ρ; Λ*) = lim_{ε→0} sup{ I_μ^𝒮(ρ; Λ*) ; μ ∈ F_ρ^ε(𝒮) }

where

F_ρ^ε(𝒮) = { μ ∈ D_ρ(𝒮) ; S^𝒮(ρ) ≤ H(μ) ≤ S^𝒮(ρ) + ε < +∞ }, and F_ρ^ε(𝒮) = M_ρ(𝒮) if S^𝒮(ρ) = +∞.

The following fundamental inequality is satisfied in almost all physical cases:

0 ≤ I^𝒮(ρ; Λ*) ≤ S^𝒮(ρ)

The main properties of the relative entropy and the mutual entropy are shown in the following theorems.

Theorem 30 (1) Positivity: S(ψ, φ) ≥ 0, and S(ψ, φ) = 0 iff ψ = φ.

(2) Joint Convexity: S(λψ_1 + (1 − λ)ψ_2, λφ_1 + (1 − λ)φ_2) ≤ λS(ψ_1, φ_1) + (1 − λ)S(ψ_2, φ_2) for any λ ∈ [0, 1].

(3) Additivity: S(ψ_1 ⊗ ψ_2, φ_1 ⊗ φ_2) = S(ψ_1, φ_1) + S(ψ_2, φ_2).

(4) Lower Semicontinuity: If lim_{n→∞} ||ψ_n − ψ|| = 0 and lim_{n→∞} ||φ_n − φ|| = 0, then S(ψ, φ) ≤ lim inf_{n→∞} S(ψ_n, φ_n). Moreover, if there exists a positive number λ satisfying ψ_n ≤ λφ_n, then lim_{n→∞} S(ψ_n, φ_n) = S(ψ, φ).

(5) Monotonicity: For a channel Λ*,

S(Λ*ψ, Λ*φ) ≤ S(ψ, φ)

(6) Lower Bound: ||ψ − φ||²/4 ≤ S(ψ, φ).

Remark 31 This theorem is a generalization of Theorem 3.

<Connes-Narnhofer-Thirring Entropy>

Before closing this section, we mention the dynamical entropy introduced by Connes, Narnhofer and Thirring [50].

The CNT entropy H_φ(ℳ) of a C*-subalgebra ℳ ⊂ 𝔄 is defined by

H_φ(ℳ) = sup{ Σ_j μ_j S(φ_j|ℳ, φ|ℳ) ; φ = Σ_j μ_j φ_j }

where the supremum is taken over all finite decompositions φ = Σ_j μ_j φ_j of φ, and φ|ℳ is the restriction of φ to ℳ. This entropy is the mutual entropy when the channel is the restriction to a subalgebra and the decomposition is orthogonal. There are some relations between the mixing entropy S^𝒮(φ) and the CNT entropy [26].

Theorem 32 (1) For any state φ on a unital C*-algebra 𝔄,

S(φ) = H_φ(𝔄)

(2) Let (𝔄, G, α) with a certain group G be a W*-dynamical system and let φ be a G-invariant normal state of 𝔄; then

S^{I(α)}(φ) = H_φ(𝔄^α)

where 𝔄^α is the fixed-point algebra of 𝔄 w.r.t. α.

(3) Let 𝔄 be the C*-algebra C(H) of all compact operators on a Hilbert space H, let G be a group, let α be a *-automorphic action of G, and let φ be given by a G-invariant density operator. Then

S^{I(α)}(φ) = H_φ(𝔄^α)

(4) There exists a model such that

S^{I(α)}(φ) > H_φ(𝔄^α) = 0

10. Entropy Exchange and Coherent Information

First we define the entropy exchange [43,70-73]. If a quantum operation Λ* is represented as

Λ*(ρ) = Σ_i A_i ρ A_i*, Σ_i A_i* A_i ≤ I

then the entropy exchange of the quantum operation Λ* with input state ρ is defined to be

S_e(ρ, Λ*) = S(W) = −tr(W log W)

where the matrix W has elements

W_ij = tr(A_i ρ A_j*) / tr(Λ*(ρ))

Remark that if Σ_i A_i* A_i = I holds, then the quantum operation Λ* is a channel.

Definition 33 [] The coherent information is defined by

I_c(ρ, Λ*) = S(Λ*ρ) − S_e(ρ, Λ*)

Let ρ be a quantum state and Λ_1*, Λ_2* trace-preserving quantum operations. Then

S(ρ) ≥ I_c(ρ, Λ_1*) ≥ I_c(ρ, Λ_2* ∘ Λ_1*)

which is a property similar to that of the quantum mutual entropy.

Another entropy is defined from this coherent information and the von Neumann entropy S(ρ):

I_cm(ρ, Λ*) = S(ρ) + S(Λ*ρ) − S_e(ρ, Λ*)

We call this mutual-type information the coherent mutual entropy here.
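
With a concrete Kraus representation, the matrix W and the quantities S_e, I_c, I_cm can be computed directly. The following Python sketch is our own illustration (the phase-damping Kraus pair and the input state are arbitrary choices, not taken from the text):

```python
import numpy as np

def S(rho):
    """von Neumann entropy -tr(rho log rho), natural log."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

p = 0.75
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
kraus = [np.sqrt(p) * I2, np.sqrt(1 - p) * Z]   # phase-damping example

rho = np.array([[0.5, 0.4], [0.4, 0.5]])        # input state with coherences

out = sum(A @ rho @ A.conj().T for A in kraus)  # Lambda* rho
# W_ij = tr(A_i rho A_j*); the channel is trace preserving, so tr(Lambda* rho) = 1
W = np.array([[np.trace(Ai @ rho @ Aj.conj().T) for Aj in kraus]
              for Ai in kraus])
Se = S(W)                    # entropy exchange S_e(rho, Lambda*)
Ic = S(out) - Se             # coherent information
Icm = S(rho) + S(out) - Se   # coherent mutual entropy
print(Se, Ic, Icm)
```

Note that I_cm = S(ρ) + I_c by construction, which the sketch confirms numerically.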

However, these coherent informations cannot be considered as candidates for the mutual entropy, due to a theorem of the next section.

11. Comparison of Various Quantum Mutual Type Entropies

There exist several different quantities a la mutual entropy. We compare these mutual-type entropies [60,74].

Let {x_n} be a CONS in the input Hilbert space H_1, and let a quantum channel Λ* be given by

Λ*(·) = Σ_n A_n · A_n

where A_n = |x_n⟩⟨x_n| is a one-dimensional projection satisfying

Σ_n A_n = I

Then one has the following theorem:

Theorem 34 When {A_j} is a projection-valued measure with dim(ran A_j) = 1, for an arbitrary state ρ we have (1) I(ρ, Λ*) ≤ min{S(ρ), S(Λ*ρ)}, (2) I_c(ρ, Λ*) = 0, (3) I_cm(ρ, Λ*) = S(ρ).

Proof. For any density operator ρ ∈ 𝔖(H_1) and the channel Λ* given above, one has W_ij = tr(A_i ρ A_j) = δ_ij ⟨x_i, ρ x_i⟩, so that

W = Σ_n ⟨x_n, ρ x_n⟩ A_n = diag(⟨x_1, ρ x_1⟩, …, ⟨x_n, ρ x_n⟩, …)

Then the entropy exchange of ρ with respect to the quantum channel Λ* is

S_e(ρ, Λ*) = S(W) = −Σ_n ⟨x_n, ρ x_n⟩ log ⟨x_n, ρ x_n⟩

Since

Λ*ρ = Σ_n A_n ρ A_n = Σ_n ⟨x_n, ρ x_n⟩ A_n = W

the coherent information of ρ with respect to the quantum channel Λ* is given by

I_c(ρ, Λ*) = S(Λ*ρ) − S_e(ρ, Λ*) = S(W) − S(W) = 0

for any ρ ∈ 𝔖(H_1). The Lindblad-Nielsen entropy is defined by

I_LN(ρ, Λ*) = S(ρ) + S(Λ*ρ) − S_e(ρ, Λ*) = S(ρ) + I_c(ρ, Λ*) = S(ρ)

for any ρ ∈ 𝔖(H_1). The quantum mutual entropy becomes

I(ρ, Λ*) = sup{ S(Λ*ρ) − Σ_m μ_m S(Λ*E_m) }

where the sup is taken over all Schatten decompositions ρ = Σ_m μ_m E_m, E_m = |y_m⟩⟨y_m|, ⟨y_n, y_m⟩ = δ_nm, so we obtain

I(ρ, Λ*) = S(Λ*ρ) − Σ_m μ_m S(Λ*E_m)

= S(Λ*ρ) − Σ_m μ_m Σ_k η(r_m^(k)) ≤ min{ S(ρ), S(Λ*ρ) }

where η(t) = −t log t and r_m^(k) = |⟨x_k, y_m⟩|². This means that I(ρ, Λ*) takes various values depending on the input state ρ; for instance,

(1) Σ_m μ_m Σ_k η(r_m^(k)) = 0 ⇒ I(ρ, Λ*) = S(Λ*ρ)

(2) Σ_m μ_m Σ_k η(r_m^(k)) > 0 ⇒ I(ρ, Λ*) < S(Λ*ρ) ∎
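
Theorem 34 can be verified numerically in finite dimensions. The following sketch is our own illustration (dimension d = 3 and a randomly generated input state are arbitrary choices); it checks I_c = 0 and I_cm = S(ρ) for the projective channel above:

```python
import numpy as np

def S(rho):
    """von Neumann entropy -tr(rho log rho), natural log."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

d = 3
basis = np.eye(d)
kraus = [np.outer(basis[n], basis[n]) for n in range(d)]  # A_n = |x_n><x_n|

# random input density operator
rng = np.random.default_rng(0)
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = M @ M.conj().T
rho /= np.trace(rho).real

out = sum(A @ rho @ A for A in kraus)           # A_n is a real projection
W = np.array([[np.trace(Ai @ rho @ Aj) for Aj in kraus] for Ai in kraus])
Se = S(W)                                       # entropy exchange
Ic = S(out) - Se                                # coherent information
Icm = S(rho) + S(out) - Se                      # coherent mutual entropy
assert abs(Ic) < 1e-8          # Theorem 34 (2)
assert abs(Icm - S(rho)) < 1e-8  # Theorem 34 (3)
```

Here W coincides with Λ*ρ written in the basis {x_n}, which is exactly why the coherent information vanishes.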

We can further prove that the coherent information vanishes for a general class of channels.

Theorem 35 Let a CONS {x_n} be given in the input Hilbert space and a sequence {ρ_n} of density operators in the output Hilbert space. Consider a channel Λ* given by

Λ*(ρ) = Σ_n ⟨x_n| ρ |x_n⟩ ρ_n

where ρ is any state in the input Hilbert space. (One can check that it is a trace-preserving CP map.) Then the coherent information vanishes: I_c(ρ, Λ*) = 0 for any state ρ.

Remark 36 The channel of the form Λ*(ρ) = Σ_n ⟨x_n| ρ |x_n⟩ ρ_n can be considered as a classical-quantum channel iff the classical probability distribution {p_n = ⟨x_n| ρ |x_n⟩} is given a priori.

For the attenuation channel Λ_0*, one can obtain the following theorems [74,75]:

Theorem 37 For any state ρ = Σ_n λ_n |n⟩⟨n| and the attenuation channel Λ_0* with |a|² = |b|² = 1/2, one has

1. 0 ≤ I(ρ; Λ_0*) ≤ min{ S(ρ), S(Λ_0*ρ) } (Ohya mutual entropy),

2. I_C(ρ; Λ_0*) = 0 (coherent entropy),

3. I_LN(ρ; Λ_0*) = S(ρ) (Lindblad-Nielsen entropy).

Theorem 38 For the attenuation channel Λ_0* and the input state ρ = λ|0⟩⟨0| + (1 − λ)|θ⟩⟨θ|, we have

1. 0 ≤ I(ρ; Λ_0*) ≤ min{ S(ρ), S(Λ_0*ρ) } (Ohya mutual entropy),

2. −S(ρ) ≤ I_C(ρ; Λ_0*) ≤ S(ρ) (coherent entropy),

3. 0 ≤ I_LN(ρ; Λ_0*) ≤ 2S(ρ) (Lindblad-Nielsen entropy).

The above theorem shows that the coherent entropy I_C(ρ; Λ_0*) takes negative values for |a|² < |b|², and that the Lindblad-Nielsen entropy I_LN(ρ; Λ_0*) is greater than the von Neumann entropy of the input state ρ for |a|² > |b|².

From these theorems, only the Ohya mutual entropy I(ρ; Λ*) satisfies the inequality that holds in classical systems, so the Ohya mutual entropy is the most suitable candidate for a quantum extension of the classical mutual entropy.

12. Quantum Capacity and Coding

We discuss the following topics in quantum information: (1) the channel capacity for quantum communication processes, obtained by applying the quantum mutual entropy, and (2) formulations of quantum analogues of McMillan's theorem.

As discussed above, it is important to examine the ability or efficiency of a channel. It is the channel capacity that describes this ability mathematically. Here we discuss two types of channel capacity, namely, the capacity of a quantum channel Γ* and that of a classical (classical-quantum-classical) channel Ξ̃* ∘ Γ* ∘ Ξ*.

12.1. Capacity of quantum channel

The capacity of a quantum channel is the ability of information transmission of the channel itself, so it does not depend on how a message, treated as a classical object, is coded.

As was discussed in the Introduction, the main theme of quantum information is to study the information carried by a quantum state and its change associated with a change of the quantum state due to the effect of a quantum channel, which describes a certain dynamics, in a generalized sense, of a quantum system. So the essential point of quantum communication through a quantum channel is the change of quantum states by the quantum channel, which should first be considered free from any coding of messages. A message is treated as a classical object, so information transmission that starts from messages and their quantum codings is semi-quantum and is discussed in the next subsection. This subsection treats the pure quantum case, in which the (pure) quantum capacity is discussed as a direct extension of the classical (Shannon) capacity.

Before starting the mathematical discussion, we explain a bit more what we mean by "pure quantum" transmission capacity. We have to start from an arbitrary quantum state and a channel, then compute the supremum of the mutual entropy to define the "pure" quantum capacity. One is often confused on this point: for example, one starts from the coding of a message, computes the supremum of the mutual entropy, and calls that supremum the capacity of a quantum channel; but it is not purely quantum, being rather a classical capacity through a quantum channel.

Even when the coding is a quantum coding and the coded message is sent to a receiver through a quantum channel, if one starts from a classical state, i.e., a probability distribution of messages, then the resulting capacity is not the capacity of the quantum channel itself. In that case the usual Shannon theory applies, because the conditional distribution can be computed in the usual (classical) way. That supremum is the capacity of a classical-quantum-classical channel, and it belongs to the second category, discussed in the next subsection.

The capacity of a quantum channel Γ* : 𝔖(H) → 𝔖(K) is defined as follows. Let 𝒮_0 (⊂ 𝔖(H)) be the set of all states prepared for the expression of information. Then the quantum capacity of the channel Γ* with respect to 𝒮_0 is defined by

C^{𝒮_0}(Γ*) = sup{ I(ρ; Γ*) ; ρ ∈ 𝒮_0 }

Here I(ρ; Γ*) is the mutual entropy given in Section 7 with Λ* = Γ*. When 𝒮_0 = 𝔖(H), C^{𝔖(H)}(Γ*) is denoted by C(Γ*) for simplicity. The capacity C(Γ*) is the largest amount of information that can possibly be sent through the channel Γ*.

We have

Theorem 39

0 ≤ C^{𝒮_0}(Γ*) ≤ sup{ S(ρ) ; ρ ∈ 𝒮_0 }
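
For a concrete channel, the capacity can be estimated by brute force. The following sketch is our own illustration (a depolarizing-type qubit channel is an arbitrary choice; the scan runs only over diagonal input states, whose Schatten decomposition is the computational basis); it estimates sup_ρ I(ρ; Γ*) over that family and checks the bound of Theorem 39:

```python
import numpy as np

def S(rho):
    """von Neumann entropy -tr(rho log rho), natural log."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

def chan(rho, lam=0.5):
    """Example channel: mix rho with the maximally mixed state."""
    return lam * rho + (1 - lam) * np.eye(2) / 2

def mutual(p):
    """I(rho; chan) for rho = diag(p, 1-p), decomposed into the
    computational-basis projections.  (At p = 0.5 the decomposition
    is not unique, but by symmetry of the channel the value agrees.)"""
    rho = np.diag([p, 1.0 - p])
    E = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
    return S(chan(rho)) - p * S(chan(E[0])) - (1.0 - p) * S(chan(E[1]))

# brute-force estimate of the capacity over the scanned family of states
cap = max(mutual(p) for p in np.linspace(1e-6, 1.0 - 1e-6, 1001))
assert cap <= np.log(2) + 1e-9  # Theorem 39: capacity <= sup S(rho) = log 2
```

The maximum is attained at the maximally mixed input, as one expects for this symmetric channel.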

Remark 40 We also considered the pseudo-quantum capacity C_p(Γ*) defined [76] with the pseudo-mutual entropy I_p(ρ; Γ*), where the supremum is taken over all finite decompositions instead of all orthogonal pure decompositions:

I_p(ρ; Γ*) = sup{ Σ_k λ_k S(Γ*ρ_k, Γ*ρ) ; ρ = Σ_k λ_k ρ_k, finite decomposition }

However, the pseudo-mutual entropy is not well matched to the conditions explained above, and it is difficult to compute numerically. It is easy to see that

C^{𝒮_0}(Γ*) ≤ C_p^{𝒮_0}(Γ*)

It is worth noting that in order to discuss the details of the transmission process for a sequence of n messages, we have to consider a channel on the n-tuple space and the average mutual entropy (transmission rate) per message.

12.2. Capacity of classical-quantum-classical channel

The capacity of the C-Q-C channel Λ* = Ξ̃* ∘ Γ* ∘ Ξ* is the capacity of the information transmission process starting from the coding of messages; therefore it can be considered as the capacity including a coding (and a decoding). The channel Ξ* sends a classical state to a quantum one, and the channel Ξ̃* sends a quantum state to a classical one. Note that Ξ* and Ξ̃* can be considered as the dual maps of Ξ : B(H) → C^n (A ↦ (φ_1(A), φ_2(A), …, φ_n(A))) and Ξ̃ : C^m → B(K) ((c_1, c_2, …, c_m) ↦ Σ_j c_j A_j), respectively.

The capacity of the C-Q-C channel Λ* = Ξ̃* ∘ Γ* ∘ Ξ* is

C^{P_0}(Ξ̃* ∘ Γ* ∘ Ξ*) = sup{ I(p; Ξ̃* ∘ Γ* ∘ Ξ*) ; p ∈ P_0 }

where P_0 (⊂ P(Ω)) is the set of all probability distributions prepared for input (a priori) states (distributions or probability measures, i.e., classical states). Moreover, the capacity for coding free is found by taking the supremum of the mutual entropy over all probability distributions and all codings Ξ*:

C_c^{P_0}(Ξ̃* ∘ Γ*) = sup{ I(p; Ξ̃* ∘ Γ* ∘ Ξ*) ; p ∈ P_0, Ξ* }

The last capacity is for both coding and decoding free, and it is given by

C_{cd}^{P_0}(Γ*) = sup{ I(p; Ξ̃* ∘ Γ* ∘ Ξ*) ; p ∈ P_0, Ξ*, Ξ̃* }

These capacities C_c^{P_0}, C_{cd}^{P_0} do not measure the ability of the quantum channel Γ* itself, but the ability of Γ* through the coding and decoding.

The above three capacities satisfy the following inequalities:

0 ≤ C^{P_0}(Ξ̃* ∘ Γ* ∘ Ξ*) ≤ C_c^{P_0}(Ξ̃* ∘ Γ*) ≤ C_{cd}^{P_0}(Γ*) ≤ sup{ S(p) ; p ∈ P_0 }

Here S(p) is the Shannon entropy −Σ_k p_k log p_k of the initial probability distribution {p_k} of the messages.

12.3. Bound of mutual entropy and capacity

Here we discuss the bound of mutual entropy and capacity. The discussion of this subsection is based on the papers [30,31,41,42,77].

To each input symbol x_i there corresponds a state σ_i of the quantum communication system; σ_i functions as the codeword of x_i. The coded state is a convex combination

σ = Σ_i p_i σ_i

whose coefficients are the corresponding probabilities; p_i is the probability that the letter x_i is transmitted over the channel. To each output symbol y_j there corresponds a non-negative observable, that is, a self-adjoint operator Q_j on the output Hilbert space K, such that Σ_j Q_j = I ({Q_j} is called a POVM). In terms of the quantum states, the transition probabilities are tr(Γ*σ_i Q_j), and the probability that x_i was sent and y_j is read is

P_ji = p_i tr(Γ*σ_i Q_j)

On the basis of this joint probability distribution, the classical mutual information is given by

I_cl = Σ_{i,j} P_ji log( P_ji / (p_i q_j) )

where q_j = tr(Γ*σ Q_j). The next theorem provides a fundamental bound for the mutual information in terms of the quantum von Neumann entropy; it was proved by Holevo [21] in 1973. Ohya introduced the quantum mutual entropy by means of the relative entropy in 1983, as discussed above.

Theorem 41 With the above notation,

I_cl = Σ_{i,j} P_ji log( P_ji / (p_i q_j) ) ≤ S(Γ*σ) − Σ_i p_i S(Γ*σ_i)

holds.

Holevo's upper bound can now be expressed as

S(Γ*σ) − Σ_i p_i S(Γ*σ_i) = Σ_i p_i S(Γ*σ_i, Γ*σ)
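
Holevo's bound is easy to check numerically. The following sketch is our own illustration (two non-orthogonal pure codewords, the identity channel in place of Γ*, and a computational-basis POVM are arbitrary choices):

```python
import numpy as np

def S(rho):
    """von Neumann entropy -tr(rho log rho), natural log."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

def ket(theta):
    """Pure-state density matrix |v><v| with v = (cos t, sin t)."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

p = [0.5, 0.5]
sigmas = [ket(0.0), ket(np.pi / 5)]           # non-orthogonal codewords
sigma_bar = sum(pi * si for pi, si in zip(p, sigmas))

# projective measurement in the computational basis as the POVM {Q_j}
Q = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

# joint probabilities P_ji = p_i tr(sigma_i Q_j) (identity channel)
P = np.array([[pi * np.trace(si @ Qj) for Qj in Q]
              for pi, si in zip(p, sigmas)])
q = P.sum(axis=0)                             # output distribution q_j
I_cl = sum(P[i, j] * np.log(P[i, j] / (p[i] * q[j]))
           for i in range(2) for j in range(2) if P[i, j] > 0)
holevo = S(sigma_bar) - sum(pi * S(si) for pi, si in zip(p, sigmas))
assert I_cl <= holevo + 1e-12                 # Theorem 41
print(I_cl, holevo)
```

The gap between I_cl and the Holevo quantity reflects the sub-optimality of the chosen measurement for distinguishing non-orthogonal codewords.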

For the general quantum case, we have the following inequality according to Theorem 9.

Theorem 42 When the Schatten decomposition (i.e., the one-dimensional spectral decomposition) ρ = Σ_i p_i ρ_i is unique,

I_cl ≤ I(ρ; Γ*) = Σ_i p_i S(Γ*ρ_i, Γ*ρ)

for any channel Γ*.

Going back to the general discussion, an input state ρ is the probability distribution {λ_k} of messages, and its Schatten decomposition is unique, ρ = Σ_k λ_k δ_k with delta measures δ_k, so the mutual entropy is written as

I(ρ; Ξ̃* ∘ Γ* ∘ Ξ*) = Σ_k λ_k S(Ξ̃* ∘ Γ* ∘ Ξ* δ_k, Ξ̃* ∘ Γ* ∘ Ξ* ρ)

If the coding Ξ* is a quantum coding, then Ξ*δ_k is a quantum state. Denote the coded quantum state by σ_k = Ξ*δ_k as above and put σ = Ξ*ρ = Σ_k λ_k Ξ*(δ_k) = Σ_k λ_k σ_k. Then the above mutual entropy of the (classical-quantum-classical) channel Ξ̃* ∘ Γ* ∘ Ξ* is written as

I(ρ; Ξ̃* ∘ Γ* ∘ Ξ*) = Σ_k λ_k S(Ξ̃* ∘ Γ* σ_k, Ξ̃* ∘ Γ* σ) (18)

This is the expression of the mutual entropy of the whole information transmission process starting from a coding of classical messages.

Remark that if Σ_k λ_k S(Γ*σ_k) is finite, then (18) becomes

I(ρ; Ξ̃* ∘ Γ* ∘ Ξ*) = S(Ξ̃* ∘ Γ*σ) − Σ_k λ_k S(Ξ̃* ∘ Γ*σ_k)

Further, if ρ is a probability measure having a density function f(λ), that is, ρ(A) = ∫_A f(λ)dλ, where A is an interval in ℝ, and each λ corresponds to a quantum coded state σ(λ), then

σ = ∫ f(λ) σ(λ) dλ

and

I(ρ; Ξ̃* ∘ Γ* ∘ Ξ*) = S(Ξ̃* ∘ Γ*σ) − ∫ f(λ) S(Ξ̃* ∘ Γ*σ(λ)) dλ

One can prove that this is less than

S(Γ*σ) − ∫ f(λ) S(Γ*σ(λ)) dλ

This upper bound is a special case of the following inequality

I(ρ; Ξ̃* ∘ Γ* ∘ Ξ*) ≤ I(ρ; Γ* ∘ Ξ*)

which comes from the monotonicity of the relative entropy and gives the proof of Theorem 42 above. Using the Bogoliubov inequality

S(A, B) ≥ tr A (log tr A − log tr B)

where A and B are two positive Hermitian operators, together with the monotonicity of the mutual entropy, one has the following bound of the mutual entropy I(ρ; Γ* ∘ Ξ*) [41].

Theorem 43 For a probability distribution p = {λ_k} and a quantum coded state σ = Ξ*ρ = Σ_k λ_k σ_k, λ_k ≥ 0, Σ_k λ_k = 1, one has the following inequality for any quantum channel decomposed as Γ* = Γ_1* ∘ Γ_2* such that Γ_1*σ = Σ_i A_i σ A_i*, Σ_i A_i* A_i = I:

Σ_k λ_k S(σ_k, σ) ≥ Σ_{i,k} λ_k tr(A_i Γ_2*σ_k A_i*) [ log tr(A_i Γ_2*σ_k A_i*) − log tr(A_i Γ_2*σ A_i*) ]

In the case that the channel Γ_2* is the identity, Γ_2*σ_k = σ_k, the above inequality reduces to the bound of Theorem 41:

Σ_k λ_k S(σ_k, σ) ≥ Σ_{i,k} λ_k tr(B_i σ_k) [ log tr(B_i σ_k) − log tr(B_i σ) ]

where B_i = A_i* A_i.

Note that Σ_k λ_k S(σ_k, σ) and Σ_{i,k} λ_k S(A_i Γ_2*σ_k A_i*, A_i Γ_2*σ A_i*) are the quantum mutual entropy I(ρ; Γ*) for special channels Γ*, as discussed above, and that the lower bound is equal to the classical mutual entropy, which depends on the POVM {B_i = A_i* A_i}.

Using the above upper and lower bounds of the mutual entropy, we can compute these bounds of the capacity in many different cases.

13. Computation of Capacity

Shannon's communication theory is largely of asymptotic character, the message length N is supposed to be very large. So we consider the N-fold tensor product of the input and output Hilbert spaces H and K,

Note that

$$\mathcal{H}_N = \otimes_{n=1}^N\mathcal{H},\qquad \mathcal{K}_N = \otimes_{n=1}^N\mathcal{K}$$

$$B(\mathcal{H}_N) = \otimes_{n=1}^N B(\mathcal{H}),\qquad B(\mathcal{K}_N) = \otimes_{n=1}^N B(\mathcal{K})$$

A channel $\Lambda_N^* : S(\mathcal{H}_N)\to S(\mathcal{K}_N)$ sends density operators acting on $\mathcal{H}_N$ into those acting on $\mathcal{K}_N$. In particular, we take a memoryless channel, the tensor product of the same single-site channels: $\Lambda_N^* = \Lambda^*\otimes\cdots\otimes\Lambda^*$ ($N$-fold). In this setting we compute the quantum capacity and the classical-quantum-classical capacity, denoted by $C^{q}$ and $C^{cq}$ below.

A pseudo-quantum code (of order $N$) is a probability distribution on $S(\mathcal{H}_N)$ with finite support in the set of product states. So $\{(p_i),(\rho_i)\}$ is a pseudo-quantum code if $(p_i)$ is a probability vector and the $\rho_i$ are product states of $B(\mathcal{H}_N)$. This code is nothing but a quantum code for a classical input (so a classical-quantum channel) such that $\rho = \sum_j p_j\rho_j$, as discussed in the previous chapter. Each quantum state $\rho_i$ is sent over the quantum mechanical media (e.g., an optical fiber) and yields the output quantum state $\Lambda_N^*\rho_i$. The performance of coding and transmission is measured by the quantum mutual entropy

$$I(p;\Lambda_N^*)\ \big(= I((p_i),(\rho_i);\Lambda_N^*)\big) = \sum_i p_i\,S(\Lambda_N^*\rho_i,\Lambda_N^*\rho)$$

We regard $\rho$ as the quantum state of the $N$-component quantum system during the information transmission. Taking the supremum over certain classes of pseudo-quantum codes, we obtain various capacities of the channel. The supremum is over product states when we consider memoryless channels, so the capacity is

$$C^{cq}(\Lambda_N^*) = \sup\{I((p_i),(\rho_i);\Lambda_N^*) : ((p_i),(\rho_i)) \text{ is a pseudo-quantum code}\}$$

Next we consider a subclass of the pseudo-quantum codes. A quantum code is defined by the additional requirement that $\{\rho_i\}$ is a set of pairwise orthogonal pure states. This code is purely quantum: we start from a quantum state $\rho$ and take an orthogonal extremal decomposition $\rho = \sum_i p_i\rho_i$, which is not unique. Here the coding is the choice of such an orthogonal extremal decomposition. The quantum mutual entropy is

$$I(\rho;\Lambda_N^*) = \sup\Big\{\sum_i p_i\,S(\Lambda_N^*\rho_i,\Lambda_N^*\rho) : \sum_i p_i\rho_i = \rho\Big\}$$

where the supremum is over all orthogonal extremal decompositions $\sum_i p_i\rho_i = \rho$ as defined in Section 7. Then we arrive at the capacity

$$C^{q}(\Lambda_N^*) = \sup\{I(\rho;\Lambda_N^*) : \rho\} = \sup\{I((p_i),(\rho_i);\Lambda_N^*) : ((p_i),(\rho_i)) \text{ is a quantum code}\}$$

It follows from the definition that

$$C^{q}(\Lambda_N^*) \le C^{cq}(\Lambda_N^*) \qquad (19)$$

holds for every channel.

Proposition 44 For a memoryless channel the sequences $C^{cq}(\Lambda_N^*)$ and $C^{q}(\Lambda_N^*)$ are subadditive. Therefore the following limits exist and coincide with the infima:

$$\bar C^{cq} = \lim_{N\to\infty}\frac{1}{N}C^{cq}(\Lambda_N^*),\qquad \bar C^{q} = \lim_{N\to\infty}\frac{1}{N}C^{q}(\Lambda_N^*) \qquad (20)$$

(For multiple channels with some memory effect, one may take the limsup in (20) to get a good concept of capacity per single use.)

Example 45 Let $\Lambda^*$ be a channel on the $2\times 2$ density matrices such that

$$\Lambda^* : \begin{pmatrix} a & b \\ \bar b & c\end{pmatrix} \mapsto \begin{pmatrix} a & 0 \\ 0 & c\end{pmatrix}$$

Consider the input density matrix

$$\rho_\lambda = \frac{1}{2}\begin{pmatrix} 1 & 1-2\lambda \\ 1-2\lambda & 1\end{pmatrix},\qquad (0\le\lambda\le 1)$$

For $\lambda \ne 1/2$ the orthogonal extremal decomposition is unique; in fact

$$\rho_\lambda = \lambda\cdot\frac{1}{2}\begin{pmatrix}1 & -1\\ -1 & 1\end{pmatrix} + (1-\lambda)\cdot\frac{1}{2}\begin{pmatrix}1 & 1\\ 1 & 1\end{pmatrix}$$

and we have

$$I(\rho_\lambda,\Lambda^*) = 0 \quad\text{for } \lambda\ne 1/2$$

However, $I(\rho_{1/2},\Lambda^*) = \log 2$. Since $C^{q}(\Lambda^*) \le C^{cq}(\Lambda^*) \le \log 2$, we conclude that $C^{q}(\Lambda^*) = C^{cq}(\Lambda^*) = \log 2$.
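Example 45 can be checked numerically. The sketch below (our own NumPy check, not from the paper) evaluates $\sum_i p_iS(\Lambda^*\rho_i,\Lambda^*\rho)$ for the unique decomposition at $\lambda = 0.3$ and for the $|0\rangle,|1\rangle$ decomposition of $\rho_{1/2}$:

```python
import numpy as np

def mlog(a):
    w, v = np.linalg.eigh(a)
    w = np.clip(w, 1e-12, None)
    return v @ np.diag(np.log(w)) @ v.conj().T

def rel_ent(a, b):
    # Umegaki relative entropy S(a, b) in nats
    return float(np.real(np.trace(a @ (mlog(a) - mlog(b)))))

def dephase(r):
    # the channel of Example 45: kill the off-diagonal terms
    return np.diag(np.diag(r))

def mutual_entropy(weights, comps):
    out = dephase(sum(w * c for w, c in zip(weights, comps)))
    return sum(w * rel_ent(dephase(c), out) for w, c in zip(weights, comps))

lam = 0.3
e_m = 0.5 * np.array([[1.0, -1.0], [-1.0, 1.0]])  # eigenprojections of rho_lambda
e_p = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])
I_lam = mutual_entropy([lam, 1 - lam], [e_m, e_p])   # unique decomposition

# lambda = 1/2: rho = I/2 also decomposes along |0>, |1>
I_half = mutual_entropy([0.5, 0.5], [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])])
print(I_lam, I_half)
```

The first value vanishes (both eigenprojections dephase to $I/2$), while the second equals $\log 2$, in agreement with the example.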

13.1. Divergence center

In order to estimate the quantum mutual entropy, we introduce the concept of divergence center. Let $\{\omega_i : i\in I\}$ be a family of states and $r > 0$ a constant.

Definition 46 We say that the state $\omega$ is a divergence center for a family of states $\{\omega_i : i\in I\}$ with radius $\le r$ if

$$S(\omega_i,\omega) \le r \quad\text{for every } i\in I$$

In the following discussion on the geometry of relative entropy (or divergence, as it is called in information theory), the idea of the divergence center works very well.

Lemma 47 Let $((p_i),(\rho_i))$ be a quantum code for the channel $\Lambda^*$ and $\omega$ a divergence center with radius $\le r$ for $\{\Lambda^*\rho_i\}$. Then

$$I((p_i),(\rho_i);\Lambda^*) \le r$$

Definition 48 Let $\{\omega_i : i\in I\}$ be a family of states. We say that the state $\omega$ is an exact divergence center with radius $r$ if

$$r = \inf_\rho\,\sup_{i\in I}\{S(\omega_i,\rho)\}$$

and $\omega$ is a minimizer for the right-hand side.

When $r$ is finite, there exists a minimizer, because $\rho\mapsto\sup\{S(\omega_i,\rho) : i\in I\}$ is lower semicontinuous with compact level sets (cf. Proposition 5.27 in [17]).

Lemma 49 Let $\varphi_0$, $\varphi_1$ and $\omega$ be states of $B(\mathcal{K})$, with the Hilbert space $\mathcal{K}$ finite dimensional, and set $\varphi_\lambda = (1-\lambda)\varphi_0 + \lambda\varphi_1$ $(0\le\lambda\le 1)$. If $S(\varphi_0,\omega)$, $S(\varphi_1,\omega)$ are finite and

$$S(\varphi_\lambda,\omega) \ge S(\varphi_1,\omega),\qquad (0\le\lambda\le 1)$$

then

$$S(\varphi_1,\omega) + S(\varphi_0,\varphi_1) \le S(\varphi_0,\omega)$$

Lemma 50 Let $\{\omega_i : i\in I\}$ be a finite set of states of $B(\mathcal{K})$ such that the Hilbert space $\mathcal{K}$ is finite dimensional. Then the exact divergence center is unique and it lies in the convex hull of the states $\omega_i$.

Theorem 51 Let $\Lambda^* : S(\mathcal{H})\to S(\mathcal{K})$ be a channel with finite dimensional $\mathcal{K}$. Then the capacity $C^{cq}(\Lambda^*) = C^{q}(\Lambda^*)$ is the divergence radius of the range of $\Lambda^*$.

13.2. Comparison of capacities

Up to now our discussion has concerned the capacities of coding and transmission, which are bounds for the performance of quantum coding and quantum transmission. After a measurement is performed, the quantum channel becomes classical and Shannon's theory is applied. The total capacity (or classical-quantum-classical capacity) of a quantum channel A* is

$$C^{cqc}(\Lambda^*) = \sup\{I((p_i),(\rho_i);\tilde{\Xi}^*\circ\Lambda^*)\}$$

where the supremum is taken over all pseudo-quantum codes $((p_i),(\rho_i))$ and all measurements $\tilde{\Xi}^*$. Due to the monotonicity of the mutual entropy,

Ccqc(A*) < Ccq(A*)

and similarly

$$\bar C^{cqc} = \limsup_{N\to\infty}\frac{1}{N}C^{cqc}(\Lambda_N^*) \le \bar C^{cq}$$

holds for the capacities per single use.

Example 52 Any $2\times 2$ density operator has the following standard representation:

$$\rho_x = \frac{1}{2}(I + x_1\sigma_1 + x_2\sigma_2 + x_3\sigma_3)$$

where $\sigma_1,\sigma_2,\sigma_3$ are the Pauli matrices and $x = (x_1,x_2,x_3)\in\mathbb{R}^3$ with $x_1^2+x_2^2+x_3^2 \le 1$. For a positive semi-definite $3\times 3$ matrix $A$ the application $\Gamma^* : \rho_x\mapsto\rho_{Ax}$ gives a channel when $\|A\|\le 1$. Let us compute the capacities of $\Gamma^*$. Since a unitary conjugation obviously does not change the capacity, we may assume that $A$ is diagonal with eigenvalues $1\ge\lambda_1\ge\lambda_2\ge\lambda_3\ge 0$. The range of $\Gamma^*$ is visualized as an ellipsoid with (Euclidean) diameter $2\lambda_1$. It is not difficult to see that the tracial state $\tau$ is the exact divergence center of the segment connecting the states $(I\pm\lambda_1\sigma_1)/2$ and hence $\tau$ must be the divergence center of the whole range. The divergence radius is

$$S\Big(\tfrac{1}{2}(I+\lambda_1\sigma_1),\ \tfrac{1}{2}I\Big) = \log 2 - \eta\big((1+\lambda_1)/2\big) - \eta\big((1-\lambda_1)/2\big)$$

where $\eta(t) = -t\log t$.

This gives the capacity $C^{cq}(\Gamma^*)$ according to Theorem 51. The inequality (19) tells us that the capacity $C^{q}(\Gamma^*)$ cannot exceed this value. On the other hand, $I(\tau,\Gamma^*) = \log 2 - \eta((1+\lambda_1)/2) - \eta((1-\lambda_1)/2)$, and we have $C^{cq}(\Gamma^*) = C^{q}(\Gamma^*)$.
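The divergence radius in Example 52 is easy to confirm numerically (our own check; $\lambda_1 = 0.7$ is an arbitrary choice): for the tracial state $\tau$ one has $S(\rho,\tau) = \log 2 - S(\rho)$, which matches the closed form above.

```python
import numpy as np

eta_f = lambda t: -t * np.log(t)             # eta(t) = -t log t
lam1 = 0.7                                   # largest eigenvalue lambda_1 of A
sigma1 = np.array([[0.0, 1.0], [1.0, 0.0]])  # Pauli matrix sigma_1
rho = 0.5 * (np.eye(2) + lam1 * sigma1)

w = np.linalg.eigvalsh(rho)                  # eigenvalues (1 +/- lambda_1)/2
S_rho = float(eta_f(w).sum())                # von Neumann entropy of rho
radius_direct = np.log(2) - S_rho            # S(rho, I/2) = log 2 - S(rho)
radius_formula = np.log(2) - eta_f((1 + lam1) / 2) - eta_f((1 - lam1) / 2)
print(radius_direct, radius_formula)
```

Both expressions agree, and the radius lies strictly between $0$ and $\log 2$ for $0 < \lambda_1 < 1$.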

The relations among $C^{cq}$, $C^{q}$ and $C^{cqc}$ form an important problem and are worthy of study. For a noiseless channel $C^{cqc} = \log n$ was obtained in [78], where $n$ is the dimension of the output Hilbert space (actually identical to the input one). Since the tracial state is the exact divergence center of the set of all density matrices, we have $C^{cq} = \log n$ and also $C^{q} = \log n$.

We expect that $C^{cqc} < C^{cq}$ for "truly quantum mechanical channels", but $C^{cqc} = C^{cq} = C^{q}$ must hold for a large class of memoryless channels.

One can obtain the following results for the attenuation channel which is discussed in the previous chapter.

Lemma 53 Let $\Lambda^*$ be the attenuation channel. Then

$$\sup I((p_i),(\rho_i);\Lambda^*) = \log n$$

where the supremum is over all pseudo-quantum codes $((p_i)_{i=1}^n,(\rho_{f(i)})_{i=1}^n)$ applying $n$ coherent states.

The next theorem follows directly from the previous lemma.

Theorem 54 The capacity Ccq of the attenuation channel is infinite.

Since the argument in the proof of the above lemma works for any quasi-free channel, we can conclude $C^{cq} = \infty$ also in that more general case. Another remark concerns the classical capacity $C^{cqc}$. Since the states $\rho_{f(n)}$ used in the proof of the lemma commute in the limit $\lambda\to\infty$, the classical capacity $C^{cqc}$ is infinite as well. $C^{cqc} = \infty$ follows also from the proof of the next theorem.

Theorem 55 The capacity Cq of the attenuation channel is infinite.

Let us make some comments on the previous results. The theorems mean that an arbitrarily large amount of information can go through the attenuation channel; however, the theorems do not say anything about the price for it. The expectation value of the number of particles needed in the pseudo-quantum code of Lemma 53 tends to infinity. Indeed,

$$\frac{1}{n}\sum_{i=1}^n\rho_{f(i)}(N) = \frac{1}{n}\sum_{i=1}^n\lambda\,\|f(i)\|^2 = \lambda(n+1)(2n+1)\|f\|^2/6$$

which increases rapidly with $n$. (Above, $N$ denotes the number operator.) Hence the good question is to ask the capacity of the attenuation channel when some energy constraint is imposed:

$$C(\Lambda^*,E_0) = \sup\Big\{I((p_i),(\rho_i);\Lambda^*) : \sum_i p_i\rho_i(N) \le E_0\Big\}$$

To be more precise, we have posed a bound on the average energy; a different constraint is also possible. Since

$$\Lambda(N) = \eta N$$

for the dual operator $\Lambda$ of the channel $\Lambda^*$ and the number operator $N$, with $\eta$ the transmission rate, we have

$$C(\Lambda^*,E_0) = \sup\Big\{\sum_i p_i\,S\Big(\Lambda^*\rho_i,\sum_j p_j\Lambda^*\rho_j\Big) : \sum_i p_i\rho_i(N) \le E_0\Big\} \le \sup\{S(\varphi) : \varphi(N) \le \eta E_0\}$$

The solution of this problem is the same as that of

$$\sup\{S(\varphi) : \varphi(N) = \eta E_0\}$$

and the well-known maximizer of this problem is a so-called Gibbs state. Therefore, we have

$$C(\Lambda^*,E_0) = \eta E_0\log\big(1 + (\eta E_0)^{-1}\big) + \log(\eta E_0 + 1)$$

This value can be realized as a classical capacity if the number states can be output states of the attenuation channel.
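The Gibbs maximizer here is the geometric photon-number distribution with mean $\bar n = \eta E_0$, whose entropy is $(\bar n+1)\log(\bar n+1) - \bar n\log\bar n = \bar n\log(1+1/\bar n) + \log(\bar n+1)$. A small numerical check (the values of $\eta$ and $E_0$ are arbitrary):

```python
import numpy as np

eta, E0 = 0.8, 3.0          # hypothetical transmission rate and energy bound
nbar = eta * E0             # mean particle number of the Gibbs state
n = np.arange(0, 200)
p = nbar**n / (1.0 + nbar)**(n + 1)   # geometric photon-number distribution
S = float(-(p * np.log(p)).sum())     # entropy of the (truncated) distribution
g = (nbar + 1) * np.log(nbar + 1) - nbar * np.log(nbar)
print(S, g)
```

The truncated sum reproduces the closed-form maximum entropy to high accuracy, since the tail of the geometric distribution decays exponentially.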

13.3. Numerical computation of quantum capacity

Let us consider the quantum capacity of the attenuation channel for sets of density operators consisting of two pure states under an energy constraint [31,79]. Let $S_1$ and $S_2$ be two subsets of $S(\mathcal{H}_1)$ given by

$$S_1 = \{\rho_1 = \lambda|\theta\rangle\langle\theta| + (1-\lambda)|-\theta\rangle\langle-\theta| : \lambda\in[0,1],\ \theta\in\mathbb{C}\}$$

$$S_2 = \{\rho_2 = \lambda|0\rangle\langle 0| + (1-\lambda)|\theta\rangle\langle\theta| : \lambda\in[0,1],\ \theta\in\mathbb{C}\}$$

The quantum capacities of the attenuation channel $\Lambda^*$ with respect to the above two subsets are computed under an energy constraint $|\theta|^2 \le E_0$ for any $E_0 > 0$:

$$C^{S_k}(\Lambda^*,E_0) = \sup\{I(\rho_k;\Lambda^*) : \rho_k\in S_k,\ |\theta|^2\le E_0\},\qquad k = 1,2$$

Since the Schatten decomposition is unique for the above two state subsets, by using the notations

$$\eta(t) = -t\log t\qquad (t\in[0,1])$$

$$S(\Lambda^*\rho_k) = \sum_{j=0,1}\eta\left(\frac{1}{2}\Big(1 + (-1)^j\sqrt{1 - 4\lambda(1-\lambda)\big(1-\exp(-4^{2-k}\,\bar\eta\,|\theta|^2)\big)}\Big)\right),\qquad k = 1,2$$

where $\bar\eta$ denotes the transmission rate of the attenuation channel, and the Schatten decomposition of the output state is written $\Lambda^*\rho_k = \|\rho_k\|\,\Lambda^*E_0^{(k)} + (1-\|\rho_k\|)\,\Lambda^*E_1^{(k)}$; the entropies $S(\Lambda^*E_j^{(k)})$ of the components admit analogous, though lengthier, closed-form expressions in $\lambda$, $|\theta|^2$ and $\bar\eta$, whose explicit formulas are given in [31]. With these notations we obtain the following result [31].

Theorem 56 (1) For $\rho_1 = \lambda|\theta\rangle\langle\theta| + (1-\lambda)|-\theta\rangle\langle-\theta| \in S_1$, the quantum mutual entropy $I(\rho_1;\Lambda^*)$ is calculated rigorously by

$$I(\rho_1;\Lambda^*) = S(\Lambda^*\rho_1) - \|\rho_1\|\,S(\Lambda^*E_0^{(1)}) - (1-\|\rho_1\|)\,S(\Lambda^*E_1^{(1)})$$

(2) For $\rho_2 = \lambda|0\rangle\langle 0| + (1-\lambda)|\theta\rangle\langle\theta| \in S_2$, the quantum mutual entropy $I(\rho_2;\Lambda^*)$ is computed precisely by

$$I(\rho_2;\Lambda^*) = S(\Lambda^*\rho_2) - \|\rho_2\|\,S(\Lambda^*E_0^{(2)}) - (1-\|\rho_2\|)\,S(\Lambda^*E_1^{(2)})$$

(3) For any $E_0 > 0$, we have the inequality of the two quantum capacities:

$$C^{S_1}(\Lambda^*,E_0) \ge C^{S_2}(\Lambda^*,E_0)$$

Note that $S_1$ and $S_2$ represent the state subspaces generated by means of the PSK (Phase-Shift Keying) and OOK (On-Off Keying) modulations [75].
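The output entropy in Theorem 56 comes from the two eigenvalues of a binary mixture of coherent states, which are fixed by the overlap $\langle\theta|-\theta\rangle = e^{-2|\theta|^2}$. A minimal numerical check (our own; the values of $\lambda$ and $|\theta|^2$ are arbitrary, and we take the PSK pair before attenuation, i.e., $\bar\eta = 1$; attenuation simply replaces $|\theta|^2$ by $\bar\eta|\theta|^2$): the eigenvalues of the Gram-weighted $2\times 2$ matrix match the closed form $\frac{1}{2}(1\pm\sqrt{1-4\lambda(1-\lambda)(1-e^{-4|\theta|^2})})$.

```python
import numpy as np

lam, theta2 = 0.4, 0.8            # hypothetical mixing weight and |theta|^2
s = np.exp(-2.0 * theta2)         # coherent-state overlap <theta|-theta>

# nonzero eigenvalues of lam|t><t| + (1-lam)|-t><-t| equal those of the
# Gram-weighted matrix M_ij = sqrt(l_i l_j) <psi_i|psi_j>
off = np.sqrt(lam * (1 - lam)) * s
M = np.array([[lam, off], [off, 1 - lam]])
w = np.sort(np.linalg.eigvalsh(M))

d = np.sqrt(1 - 4 * lam * (1 - lam) * (1 - s**2))
closed = np.sort([0.5 * (1 - d), 0.5 * (1 + d)])

entropy = float(-(w * np.log(w)).sum())   # entropy of the mixture, in nats
print(w, closed, entropy)
```

The same Gram-matrix trick with overlap $e^{-|\theta|^2/2}$ between $|0\rangle$ and $|\theta\rangle$ handles the OOK set $S_2$.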

14. Quantum Dynamical Entropy

Classical dynamical entropy is an important tool to analyse the efficiency of information transmission in communication processes. Quantum dynamical entropy was first studied by Connes and Størmer [48] and by Emch [49]. Since then, there have been many attempts to formulate or compute the dynamical entropy for some models [52-56,80]. Here we review four formulations, due to (a) Connes, Narnhofer and Thirring (CNT) [50], (b) Muraki and Ohya (MO, via complexity) [27,81], (c) Accardi, Ohya and Watanabe (AOW) [58], and (d) Alicki and Fannes (AF) [51]. We consider mutual relations among these formulations [80].

A dynamical entropy for not only a shift but also a completely positive (CP) map (Kossakowski, Ohya and Watanabe, KOW) [59] is defined by generalizing both the entropy defined through quantum Markov chains and the AF entropy defined by a finite operational partition.

14.1. Formulation by CNT

Let $\mathcal{A}$ be a unital C*-algebra, $\theta$ an automorphism of $\mathcal{A}$, and $\varphi$ a stationary state over $\mathcal{A}$ with respect to $\theta$: $\varphi\circ\theta = \varphi$. Let $\mathcal{B}$ be a finite dimensional C*-subalgebra of $\mathcal{A}$. The CNT entropy [50] for the subalgebra $\mathcal{B}$ is given by

$$H_\varphi(\mathcal{B}) = \sup\Big\{\sum_k\lambda_k S(\omega_k|_{\mathcal{B}},\varphi|_{\mathcal{B}}) : \varphi = \sum_k\lambda_k\omega_k \text{ finite decomposition of }\varphi\Big\}$$

where $\varphi|_{\mathcal{B}}$ is the restriction of the state $\varphi$ to $\mathcal{B}$ and $S(\cdot,\cdot)$ is the relative entropy for C*-algebras [7,8,10]. The CNT dynamical entropy with respect to $\theta$ and $\mathcal{B}$ is given by

$$H_\varphi(\theta,\mathcal{B}) = \limsup_{N\to\infty}\frac{1}{N}H_\varphi(\mathcal{B}\vee\theta\mathcal{B}\vee\cdots\vee\theta^{N-1}\mathcal{B})$$

and the dynamical entropy for $\theta$ is defined by

$$H_\varphi(\theta) = \sup_{\mathcal{B}} H_\varphi(\theta,\mathcal{B})$$

14.2. Formulation by MO

We define three complexities as follows:

$$T^S(\rho;\Lambda^*) = \sup\Big\{\int_S S(\Lambda^*\omega,\Lambda^*\rho)\,d\mu : \mu\in M_\rho(S)\Big\},\qquad C_T^S(\rho) = T^S(\rho;id)$$

$$I^S(\rho;\Lambda^*) = \sup\Big\{S\Big(\int_S\omega\otimes\Lambda^*\omega\,d\mu,\ \rho\otimes\Lambda^*\rho\Big) : \mu\in M_\rho(S)\Big\},\qquad C_I^S(\rho) = I^S(\rho;id)$$

$$J^S(\rho;\Lambda^*) = \sup\Big\{\int_S S(\Lambda^*\omega,\Lambda^*\rho)\,d\mu_f : \mu_f\in F_\rho(S)\Big\},\qquad C_J^S(\rho) = J^S(\rho;id)$$

Based on the above complexities, we explain the quantum dynamical complexity (QDC) [14].

Let $\theta$ (resp. $\bar\theta$) be a stationary automorphism of $\mathcal{A}$ (resp. $\bar{\mathcal{A}}$), $\varphi\circ\theta = \varphi$, and let $\Lambda$ (the dual map of a channel $\Lambda^*$) be a covariant CP map (i.e., $\Lambda\circ\bar\theta = \theta\circ\Lambda$) from $\bar{\mathcal{A}}$ to $\mathcal{A}$. $\mathcal{B}_k$ (resp. $\bar{\mathcal{B}}_k$) is a finite subalgebra of $\mathcal{A}$ (resp. $\bar{\mathcal{A}}$). Moreover, let $\alpha_k$ (resp. $\bar\alpha_k$) be a CP unital map from $\mathcal{B}_k$ (resp. $\bar{\mathcal{B}}_k$) to $\mathcal{A}$ (resp. $\bar{\mathcal{A}}$), and let $\alpha^M$ and $\bar\alpha^N$ be given by

$$\alpha^M = (\alpha_1,\alpha_2,\ldots,\alpha_M)$$

$$\bar\alpha^N = (\Lambda\circ\bar\alpha_1,\Lambda\circ\bar\alpha_2,\ldots,\Lambda\circ\bar\alpha_N)$$

Two compound states for $\alpha^M$ and $\bar\alpha^N$ with respect to $\mu\in M_\varphi(S)$ are defined as

$$\Phi_\mu^S(\alpha^M) = \int_S\bigotimes_{m=1}^M\alpha_m^*\omega\,d\mu$$

$$\Phi_\mu^S(\alpha^M\cup\bar\alpha^N) = \int_S\Big(\bigotimes_{m=1}^M\alpha_m^*\omega\Big)\otimes\Big(\bigotimes_{n=1}^N\bar\alpha_n^*\Lambda^*\omega\Big)\,d\mu$$

Using the above compound states, the three transmitted complexities [81] are defined by

$$T^S(\alpha^M,\bar\alpha^N) = \sup\Big\{\int_S S\Big(\big(\bigotimes_{m=1}^M\alpha_m^*\omega\big)\otimes\big(\bigotimes_{n=1}^N\bar\alpha_n^*\Lambda^*\omega\big),\ \Phi_\mu^S(\alpha^M)\otimes\Phi_\mu^S(\bar\alpha^N)\Big)d\mu : \mu\in M_\varphi(S)\Big\}$$

$$I^S(\alpha^M,\bar\alpha^N) = \sup\Big\{S\big(\Phi_\mu^S(\alpha^M\cup\bar\alpha^N),\ \Phi_\mu^S(\alpha^M)\otimes\Phi_\mu^S(\bar\alpha^N)\big) : \mu\in M_\varphi(S)\Big\}$$

$$J^S(\alpha^M,\bar\alpha^N) = \sup\Big\{\int_S S\Big(\big(\bigotimes_{m=1}^M\alpha_m^*\omega\big)\otimes\big(\bigotimes_{n=1}^N\bar\alpha_n^*\Lambda^*\omega\big),\ \Phi_{\mu_f}^S(\alpha^M)\otimes\Phi_{\mu_f}^S(\bar\alpha^N)\Big)d\mu_f : \mu_f\in F_\varphi(S)\Big\}$$

When $\mathcal{B}_k = \bar{\mathcal{B}}_k = \mathcal{B}$, $\bar{\mathcal{A}} = \mathcal{A}$, $\bar\theta = \theta$, $\bar\alpha_k = \theta^{k-1}\circ\alpha = \alpha_k$, where $\alpha$ is a unital CP map from $\mathcal{A}_0$ to $\mathcal{A}$, the mean transmitted complexities are

$$\tilde T^S(\theta,\alpha,\Lambda^*) = \limsup_{N\to\infty}\frac{1}{N}T^S(\alpha^N,\bar\alpha^N)$$

$$\tilde T^S(\theta,\Lambda^*) = \sup_\alpha\tilde T^S(\theta,\alpha,\Lambda^*)$$

and similarly for $\tilde I^S$ and $\tilde J^S$. These quantities have properties similar to those of the CNT entropy [27,81].

14.3. Formulation by AOW

A construction of dynamical entropy is due to the quantum Markov chain [58]. Let $\mathcal{A}$ be a von Neumann algebra acting on a Hilbert space $\mathcal{H}$, let $\rho$ be a state on $\mathcal{A}$, and let $\mathcal{A}_0 = M_d$ (the $d\times d$ matrix algebra). Take the transition expectation $\mathcal{E}_\gamma : \mathcal{A}_0\otimes\mathcal{A}\to\mathcal{A}$ of Accardi [36,82] such that

$$\mathcal{E}_\gamma(\hat A) = \sum_i\gamma_i^*A_{ii}\gamma_i$$

where $\hat A = \sum_{i,j}e_{ij}\otimes A_{ij}\in\mathcal{A}_0\otimes\mathcal{A}$ and $\gamma = \{\gamma_i\}$ is a finite partition of the unity $I\in\mathcal{A}$. A quantum Markov chain is defined by $\psi = \{\rho,\mathcal{E}_{\gamma,\theta}\}\in S(\otimes_1^\infty\mathcal{A}_0)$ such that

$$\psi(j_1(A_1)\cdots j_n(A_n)) = \rho\big(\mathcal{E}_{\gamma,\theta}(A_1\otimes\mathcal{E}_{\gamma,\theta}(A_2\otimes\cdots\otimes\mathcal{E}_{\gamma,\theta}(A_n\otimes I)\cdots))\big)$$

where $\mathcal{E}_{\gamma,\theta} = \theta\circ\mathcal{E}_\gamma$, $\theta\in\mathrm{Aut}(\mathcal{A})$, and $j_k$ is an embedding of $\mathcal{A}_0$ into $\otimes_1^\infty\mathcal{A}_0$ such that $j_k(A) = I\otimes\cdots\otimes I\otimes A$ ($k$-th position) $\otimes\, I\otimes\cdots$.

Suppose that for $\rho$ there exists a unique density operator $\hat\rho$ such that $\rho(A) = \mathrm{tr}\,\hat\rho A$ for any $A\in\mathcal{A}$. Let us define a state $\psi_n$ on $\otimes_1^n\mathcal{A}_0$ by

$$\psi_n(A_1\otimes\cdots\otimes A_n) = \psi(j_1(A_1)\cdots j_n(A_n))$$

The density operator $\xi_n$ for $\psi_n$ is given by

$$\xi_n = \sum_{i_1,\ldots,i_n}P_{i_1,\ldots,i_n}\,e_{i_1i_1}\otimes\cdots\otimes e_{i_ni_n}$$

$$P_{i_1,\ldots,i_n} = \mathrm{tr}_{\mathcal H}\big(\theta^{n-1}(\gamma_{i_n})\cdots\theta(\gamma_{i_2})\,\gamma_{i_1}\hat\rho\,\gamma_{i_1}^*\,\theta(\gamma_{i_2})^*\cdots\theta^{n-1}(\gamma_{i_n})^*\big)$$

The dynamical entropy through the QMC is defined by

$$\tilde S^\psi(\theta;\gamma) = \limsup_{n\to\infty}\frac{1}{n}\big(-\mathrm{tr}\,\xi_n\log\xi_n\big) = \limsup_{n\to\infty}\frac{1}{n}\Big(-\sum_{i_1,\ldots,i_n}P_{i_1,\ldots,i_n}\log P_{i_1,\ldots,i_n}\Big)$$

If $P_{i_1,\ldots,i_n}$ satisfies the Markov property, then the above quantity becomes

$$\tilde S^\psi(\theta;\gamma) = -\sum_{i_1,i_2}P(i_2|i_1)P(i_1)\log P(i_2|i_1)$$
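For a concrete classical illustration of this reduction (our own example), take a two-state Markov chain: the block entropies $-\sum P_{i_1\ldots i_n}\log P_{i_1\ldots i_n}$ grow linearly, and their increments equal the rate $-\sum P(i_2|i_1)P(i_1)\log P(i_2|i_1)$ exactly:

```python
import numpy as np
from itertools import product

P = np.array([[0.9, 0.1], [0.2, 0.8]])   # transition matrix P(i2|i1), rows sum to 1
pi = np.array([2 / 3, 1 / 3])            # stationary distribution, pi @ P = pi

# closed form: -sum_{i1,i2} P(i2|i1) pi(i1) log P(i2|i1)
h = -sum(pi[i] * P[i, j] * np.log(P[i, j]) for i in range(2) for j in range(2))

def block_entropy(n):
    # -sum over all length-n paths of P_{i1...in} log P_{i1...in}
    H = 0.0
    for path in product(range(2), repeat=n):
        p = pi[path[0]]
        for a, b in zip(path, path[1:]):
            p *= P[a, b]
        H -= p * np.log(p)
    return H

# for a Markov chain the increment of block entropies is exactly the rate
print(block_entropy(6) - block_entropy(5), h)
```

This is the classical shadow of the quantum formula: the diagonal probabilities $P_{i_1,\ldots,i_n}$ of $\xi_n$ play the role of the path probabilities here.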

The dynamical entropy through the QMC with respect to $\theta$ and a von Neumann subalgebra $\mathcal{B}$ of $\mathcal{A}$ is given by

$$\tilde S^\psi(\theta;\mathcal{B}) = \sup\{\tilde S^\psi(\theta;\gamma) : \gamma\subset\mathcal{B}\}$$

14.4. Formulation by AF

Let $\mathcal{A}$ be a C*-algebra, $\theta$ an automorphism on $\mathcal{A}$, $\rho$ a stationary state with respect to $\theta$, and $\mathcal{B}$ a unital *-subalgebra of $\mathcal{A}$. A set $\gamma = \{\gamma_1,\gamma_2,\ldots,\gamma_k\}$ of elements of $\mathcal{B}$ is called a finite operational partition of unity of size $k$ if $\gamma$ satisfies the condition

$$\sum_{i=1}^k\gamma_i^*\gamma_i = I \qquad (21)$$

The operation $\circ$ is defined by

$$\gamma\circ\xi = \{\gamma_i\xi_j : i = 1,2,\ldots,k,\ j = 1,2,\ldots,l\}$$

for any partitions $\gamma = \{\gamma_1,\gamma_2,\ldots,\gamma_k\}$ and $\xi = \{\xi_1,\xi_2,\ldots,\xi_l\}$. For any partition $\gamma$ of size $k$, a $k\times k$ density matrix $\rho[\gamma] = (\rho[\gamma]_{i,j})$ is given by

$$\rho[\gamma]_{i,j} = \rho(\gamma_j^*\gamma_i)$$

Then the dynamical entropy $H_\rho(\theta,\mathcal{B},\gamma)$ with respect to the partition $\gamma$ and the shift $\theta$ is defined through the von Neumann entropy $S(\cdot)$:

$$H_\rho(\theta,\mathcal{B},\gamma) = \limsup_{n\to\infty}\frac{1}{n}S\big(\rho[\theta^{n-1}(\gamma)\circ\cdots\circ\theta(\gamma)\circ\gamma]\big)\qquad (22)$$

The dynamical entropy $H_\rho(\theta,\mathcal{B})$ is given by taking the supremum over operational partitions of unity in $\mathcal{B}$:

$$H_\rho(\theta,\mathcal{B}) = \sup\{H_\rho(\theta,\mathcal{B},\gamma) : \gamma\subset\mathcal{B}\}\qquad (23)$$
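A minimal numerical sketch of (21)-(22) for a qubit (our own example; the state, the PVM partition and the rotation angle are arbitrary choices): we build $\rho[\gamma]$ with entries $\rho(\gamma_j^*\gamma_i)$ and the refined matrix for $\theta(\gamma)\circ\gamma$, and evaluate the finite-$n$ entropies whose limsup defines $H_\rho(\theta,\mathcal{B},\gamma)$:

```python
import numpy as np

rho = np.array([[0.7, 0.2], [0.2, 0.3]])          # a state on C^2
g = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]    # PVM partition gamma
c, s = np.cos(0.4), np.sin(0.4)
U = np.array([[c, -s], [s, c]])                   # theta = Ad_U, an automorphism
theta = lambda a: U @ a @ U.conj().T

def partition_matrix(ops, rho):
    # rho[gamma]_{i,j} = tr(rho gamma_j^dagger gamma_i)
    k = len(ops)
    m = np.empty((k, k), dtype=complex)
    for i in range(k):
        for j in range(k):
            m[i, j] = np.trace(rho @ ops[j].conj().T @ ops[i])
    return m

# refined partition theta(gamma) o gamma = {theta(g2) g1}
refined = [theta(g2) @ g1 for g2 in g for g1 in g]
m1 = partition_matrix(g, rho)
m2 = partition_matrix(refined, rho)

def vn_entropy(m):
    w = np.clip(np.linalg.eigvalsh(m).real, 1e-12, 1.0)
    return float(-(w * np.log(w)).sum())

# finite-n estimates of (1/n) S(rho[theta^{n-1}(gamma) o ... o gamma])
print(vn_entropy(m1), vn_entropy(m2) / 2)
```

Since $\sum_i\gamma_i^*\gamma_i = I$ is preserved under refinement, each $\rho[\cdot]$ has unit trace and is positive semi-definite, so the finite-$n$ entropies are well defined.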

14.5. Relation between CNT and MO

In this section we discuss relations among the above four formulations. The S-mixing entropy in GQS introduced in [16] is

$$S^S(\varphi) = \inf\{H(\mu) : \mu\in M_\varphi(S)\}$$

where $H(\mu)$ is given by

$$H(\mu) = \sup\Big\{-\sum_k\mu(A_k)\log\mu(A_k) : \{A_k\}\in P(S)\Big\}$$

and $P(S)$ is the set of all finite partitions of $S$.

The following theorem [27,81] shows the relation between the formulation by CNT and that by complexity.

Theorem 57 Under the above settings, we have the following relations:

(1) $0 \le I^S(\rho;\Lambda^*) \le T^S(\rho;\Lambda^*) \le J^S(\rho;\Lambda^*)$;

(2) $C_I^S(\rho) = C_T^S(\rho) = C_J^S(\rho) = S^S(\rho) = H_\rho(\mathcal{A})$;

(3) when $\mathcal{A} = \bar{\mathcal{A}} = B(\mathcal{H})$, for any density operator $\rho$,

$$0 \le I^S(\rho;\Lambda^*) = T^S(\rho;\Lambda^*) \le J^S(\rho;\Lambda^*)$$

Since there exists a model showing that $S^{I(\alpha)}(\varphi) > H_\varphi(\mathcal{A}^\alpha)$, $S^S(\varphi)$ distinguishes states more sharply than $H_\varphi(\mathcal{A})$, where $\mathcal{A}^\alpha = \{A\in\mathcal{A} : \alpha(A) = A\}$. Furthermore, we have the following results [83].

(1) When $\mathcal{A}_0$, $\mathcal{A}$ are abelian C*-algebras and $\alpha_k$ is an embedding map, then

$$T^S(\mu;\alpha^M) = S_\mu^{\mathrm{classical}}\Big(\bigvee_{m=1}^M\tilde A_m\Big)$$

$$I^S(\mu;\alpha^M,\bar\alpha^N) = I_\mu^{\mathrm{classical}}\Big(\bigvee_{m=1}^M\tilde A_m,\ \bigvee_{n=1}^N\tilde B_n\Big)$$

are satisfied for any finite partitions $\tilde A_m$, $\tilde B_n$ on the probability space $(\Omega = \mathrm{spec}(\mathcal{A}),\mathcal{F},\mu)$.

(2) When $\Lambda$ is the restriction of $\mathcal{A}$ to a subalgebra $\mathcal{M}$ of $\mathcal{A}$, $\Lambda = \cdot|_{\mathcal{M}}$,

$$H_\varphi(\mathcal{M}) = J^S(\varphi;\cdot|_{\mathcal{M}}) = \tilde J^S(id;\cdot|_{\mathcal{M}})$$

Moreover, when

$$\mathcal{N}\subset\mathcal{A}_0,\qquad \theta\in\mathrm{Aut}(\mathcal{A}),\qquad \alpha^N = (\alpha,\theta\circ\alpha,\ldots,\theta^{N-1}\circ\alpha)$$

with $\alpha : \mathcal{A}_0\to\mathcal{A}$ an embedding, we have

$$H_\varphi(\theta;\mathcal{N}) = \tilde J^S(\theta;\mathcal{N}) = \limsup_{N\to\infty}\frac{1}{N}J^S(\alpha^N;\cdot|_{\mathcal{N}})$$

We show the relation between the formulation by complexity and that by the QMC. Under the same settings as in Section 14.3, we define a map $\Xi^*_{(n,\gamma)}$ from $S(\mathcal{H})$, the set of all density operators on $\mathcal{H}$, to $S((\otimes_1^n\mathbb{C}^d)\otimes\mathcal{H})$ by

$$\Xi^*_{(n,\gamma)}(\rho) = \sum_{i_1,\ldots,i_n}e_{i_1i_1}\otimes\cdots\otimes e_{i_ni_n}\otimes\theta^{n-1}(\gamma_{i_n})\theta^{n-2}(\gamma_{i_{n-1}})\cdots\gamma_{i_1}\rho\,\gamma_{i_1}^*\cdots\theta^{n-2}(\gamma_{i_{n-1}})^*\theta^{n-1}(\gamma_{i_n})^*$$

for any density operator $\rho\in S(\mathcal{H})$. Let us take a map $\Xi^*_{(n)}$ from $S((\otimes_1^n\mathbb{C}^d)\otimes\mathcal{H})$ to $S(\otimes_1^n\mathbb{C}^d)$ such that

$$\Xi^*_{(n)}(\sigma) = \mathrm{tr}_{\mathcal H}\,\sigma,\qquad\forall\sigma\in S\big((\otimes_1^n\mathbb{C}^d)\otimes\mathcal{H}\big)$$

Then a map $\Gamma^*_{(n,\gamma)}$ from $S(\mathcal{H})$ to $S(\otimes_1^n\mathbb{C}^d)$ is given by

$$\Gamma^*_{(n,\gamma)}(\rho) = \Xi^*_{(n)}\big(\Xi^*_{(n,\gamma)}(\rho)\big),\qquad\forall\rho\in S(\mathcal{H})$$

so that $\Gamma^*_{(n,\gamma)}(\rho) = \xi_n$ and

$$\tilde S^\psi(\theta;\gamma) = \limsup_{n\to\infty}\frac{1}{n}S\big(\Gamma^*_{(n,\gamma)}(\rho)\big)$$

From the above theorem, we have $C_T^S(\Gamma^*_{(n,\gamma)}(\rho)) = S(\Gamma^*_{(n,\gamma)}(\rho))$. Hence

$$\tilde S^\psi(\theta;\gamma) = \limsup_{n\to\infty}\frac{1}{n}C_T^S\big(\Gamma^*_{(n,\gamma)}(\rho)\big)$$

14.6. Formulation by KOW

Let $B(\mathcal{K})$ (resp. $B(\mathcal{H})$) be the set of all bounded linear operators on a separable Hilbert space $\mathcal{K}$ (resp. $\mathcal{H}$). We denote the set of all density operators on $\mathcal{K}$ (resp. $\mathcal{H}$) by $S(\mathcal{K})$ (resp. $S(\mathcal{H})$). Let

$$\Gamma : B(\mathcal{K})\otimes B(\mathcal{H})\to B(\mathcal{K})\otimes B(\mathcal{H})\qquad (24)$$

be a normal, unital CP linear map; that is, $\Gamma$ satisfies

$$B_\alpha\uparrow B \ \Rightarrow\ \Gamma(B_\alpha)\uparrow\Gamma(B)$$

$$\Gamma(I_{\mathcal K}\otimes I_{\mathcal H}) = I_{\mathcal K}\otimes I_{\mathcal H}$$

($I_{\mathcal H}$ (resp. $I_{\mathcal K}$) is the unity in $\mathcal{H}$ (resp. $\mathcal{K}$)) for any increasing net $\{B_\alpha\}\subset B(\mathcal{K})\otimes B(\mathcal{H})$ converging to $B\in B(\mathcal{K})\otimes B(\mathcal{H})$, and

$$\sum_{i,j=1}^n B_i^*\,\Gamma(A_i^*A_j)\,B_j \ge 0$$

holds for any $n\in\mathbb{N}$ and any $A_j,B_j\in B(\mathcal{K})\otimes B(\mathcal{H})$. For a normal state $\omega$ on $B(\mathcal{K})$, there exists a density operator $\hat\omega\in S(\mathcal{K})$ associated to $\omega$ (i.e., $\omega(A) = \mathrm{tr}\,\hat\omega A$, $\forall A\in B(\mathcal{K})$). Then a map

$$\mathcal{E}^{\Gamma,\omega} : B(\mathcal{K})\otimes B(\mathcal{H})\to B(\mathcal{H})\qquad (25)$$

defined as

$$\mathcal{E}^{\Gamma,\omega}(\hat A) = \omega(\Gamma(\hat A)) = \mathrm{tr}_{\mathcal K}\,\hat\omega\,\Gamma(\hat A),\qquad\forall\hat A\in B(\mathcal{K})\otimes B(\mathcal{H})\qquad (26)$$

is a transition expectation in the sense of [84] (i.e., $\mathcal{E}^{\Gamma,\omega}$ is a linear unital CP map from $B(\mathcal{K})\otimes B(\mathcal{H})$ to $B(\mathcal{H})$), whose dual is a map

$$\mathcal{E}^{*\Gamma,\omega} : S(\mathcal{H})\to S(\mathcal{K}\otimes\mathcal{H})\qquad (27)$$

given by

$$\mathcal{E}^{*\Gamma,\omega}(\rho) = \Gamma^*(\hat\omega\otimes\rho)\qquad (28)$$

The dual map $\mathcal{E}^{*\Gamma,\omega}$ is a lifting in the sense of [84]; that is, it is a continuous map from $S(\mathcal{H})$ to $S(\mathcal{K}\otimes\mathcal{H})$.

For a normal, unital CP map $\Lambda : B(\mathcal{H})\to B(\mathcal{H})$, $id\otimes\Lambda : B(\mathcal{K})\otimes B(\mathcal{H})\to B(\mathcal{K})\otimes B(\mathcal{H})$ is a normal, unital CP map, where $id$ is the identity map on $B(\mathcal{K})$. Then one defines the transition expectation

$$\mathcal{E}_\Lambda^{\Gamma,\omega}(\hat A) = \omega\big((id\otimes\Lambda)\Gamma(\hat A)\big),\qquad\forall\hat A\in B(\mathcal{K})\otimes B(\mathcal{H})\qquad (29)$$

and the lifting

$$\mathcal{E}_\Lambda^{*\Gamma,\omega}(\rho) = \Gamma^*\big(\hat\omega\otimes\Lambda^*(\rho)\big),\qquad\forall\rho\in S(\mathcal{H})\qquad (30)$$

The above $\Lambda^*$ has been called a quantum channel [13] from $S(\mathcal{H})$ to $S(\mathcal{H})$, in which $\rho$ is regarded as an input signal state and $\omega$ as a noise state. The equality

$$\mathrm{tr}_{(\otimes_1^n\mathcal{K})\otimes\mathcal{H}}\,\Phi_{\Lambda,n}^{*\Gamma,\omega}(\rho)\,(A_1\otimes\cdots\otimes A_n\otimes B) = \mathrm{tr}_{\mathcal H}\,\rho\,\mathcal{E}_\Lambda^{\Gamma,\omega}\big(A_1\otimes\mathcal{E}_\Lambda^{\Gamma,\omega}(A_2\otimes\cdots A_{n-1}\otimes\mathcal{E}_\Lambda^{\Gamma,\omega}(A_n\otimes B)\cdots)\big)\qquad (31)$$

for all $A_1,A_2,\ldots,A_n\in B(\mathcal{K})$, $B\in B(\mathcal{H})$ and any $\rho\in S(\mathcal{H})$ defines (1) a lifting

$$\Phi_{\Lambda,n}^{*\Gamma,\omega} : S(\mathcal{H})\to S\big((\otimes_1^n\mathcal{K})\otimes\mathcal{H}\big)\qquad (32)$$

(2) marginal states

$$\rho_{\Lambda,n}^{\Gamma,\omega} = \mathrm{tr}_{\mathcal H}\,\Phi_{\Lambda,n}^{*\Gamma,\omega}(\rho)\in S\big(\otimes_1^n\mathcal{K}\big)\qquad (33)$$

$$\bar\rho_{\Lambda,n}^{\Gamma,\omega} = \mathrm{tr}_{\otimes_1^n\mathcal K}\,\Phi_{\Lambda,n}^{*\Gamma,\omega}(\rho)\in S(\mathcal{H})\qquad (34)$$

Here, the state

$$\Phi_{\Lambda,n}^{*\Gamma,\omega}(\rho)\in S\big((\otimes_1^n\mathcal{K})\otimes\mathcal{H}\big)\qquad (35)$$

is a compound state for $\rho_{\Lambda,n}^{\Gamma,\omega}$ and $\bar\rho_{\Lambda,n}^{\Gamma,\omega}$ in the sense of [13]. Note that generally $\bar\rho_{\Lambda,n}^{\Gamma,\omega}$ is not equal to $\rho$.

Definition 58 The quantum dynamical entropy with respect to $\Lambda$, $\rho$, $\Gamma$ and $\omega$ is defined by

$$\tilde S(\Lambda;\rho,\Gamma,\omega) = \limsup_{n\to\infty}\frac{1}{n}S\big(\rho_{\Lambda,n}^{\Gamma,\omega}\big)\qquad (36)$$

where $S(\cdot)$ is the von Neumann entropy [6]; that is, $S(\sigma) = -\mathrm{tr}\,\sigma\log\sigma$, $\sigma\in S(\otimes_1^n\mathcal{K})$. The dynamical entropy with respect to $\Lambda$ and $\rho$ is defined as

$$\tilde S(\Lambda;\rho) = \sup\{\tilde S(\Lambda;\rho,\Gamma,\omega) : \Gamma,\omega\}$$

14.7. Generalized AF entropy and generalized AOW entropy

In this section, we generalize both the AOW entropy and the AF entropy. Then we compare the generalized AF entropy with the generalized AOW entropy.

Let $\theta$ be an automorphism of $B(\mathcal{H})$, $\rho$ a density operator on $\mathcal{H}$, and consider the transition expectation $\mathcal{E}_\Lambda^{\Gamma,\omega}$ on $B(\mathcal{K})\otimes B(\mathcal{H})$ with $\Lambda = \theta$.

One introduces a transition expectation $\mathcal{E}_\theta^u$ from $B(\mathcal{K})\otimes B(\mathcal{H})$ to $B(\mathcal{H})$ such that

$$\mathcal{E}_\theta^u\Big(\sum_{i,j}E_{ij}\otimes A_{ij}\Big) = \sum_{k,m,p,q}\theta\big(u_{pqk}^*A_{km}u_{pqm}\big) = \sum_{k,m,p,q}\theta(u_{pqk}^*)\,\theta(A_{km})\,\theta(u_{pqm})\qquad (37)$$

The quantum Markov state $\{\rho_{\theta,n}^u\}$ on $\otimes_1^n B(\mathcal{K})$ is defined through this transition expectation $\mathcal{E}_\theta^u$ by

$$\mathrm{tr}_{\otimes_1^n\mathcal K}\big[\rho_{\theta,n}^u(A_1\otimes\cdots\otimes A_n)\big] = \mathrm{tr}_{\mathcal H}\big[\rho\,\mathcal{E}_\theta^u\big(A_1\otimes\mathcal{E}_\theta^u(A_2\otimes\cdots\otimes A_{n-1}\otimes\mathcal{E}_\theta^u(A_n\otimes I)\cdots)\big)\big]\qquad (38)$$

for all $A_1,\ldots,A_n\in B(\mathcal{K})$ and any $\rho\in S(\mathcal{H})$.

Let us consider another transition expectation $\mathcal{E}_{\theta^m}^u$ such that

$$\mathcal{E}_{\theta^m}^u\Big(\sum_{i,j}E_{ij}\otimes A_{ij}\Big) = \sum_{k,l,p,q}\theta^m(u_{pqk}^*)\,A_{kl}\,\theta^m(u_{pql})\qquad (39)$$

One can define the quantum Markov state $\{\tilde\rho_{\theta,n}^u\}$ in terms of $\mathcal{E}_{\theta^m}^u$:

$$\mathrm{tr}_{\otimes_1^n\mathcal K}\big[\tilde\rho_{\theta,n}^u(A_1\otimes\cdots\otimes A_n)\big] = \mathrm{tr}_{\mathcal H}\big[\rho\,\mathcal{E}_{\theta}^u\big(A_1\otimes\mathcal{E}_{\theta^2}^u(A_2\otimes\cdots\otimes A_{n-1}\otimes\mathcal{E}_{\theta^n}^u(A_n\otimes I)\cdots)\big)\big]\qquad (40)$$

for all $A_1,\ldots,A_n\in B(\mathcal{K})$ and any $\rho\in S(\mathcal{H})$. Then we have the following theorem.

Theorem 59 $\rho_{\theta,n}^u = \tilde\rho_{\theta,n}^u$

Let $\mathcal{B}_0$ be a subalgebra of $B(\mathcal{K})$. Taking the restriction of a transition expectation $\mathcal{E} : B(\mathcal{K})\otimes B(\mathcal{H})\to B(\mathcal{H})$ to $\mathcal{B}_0\otimes B(\mathcal{H})$, i.e., $\mathcal{E}^{(0)} = \mathcal{E}|_{\mathcal{B}_0\otimes B(\mathcal{H})}$, $\mathcal{E}^{(0)}$ is a transition expectation from $\mathcal{B}_0\otimes B(\mathcal{H})$ to $B(\mathcal{H})$. The QMC (quantum Markov chain) scheme then defines the state $\rho_{\theta,n}^{u(0)}$ on $\otimes_1^n\mathcal{B}_0$ through (40), which is $\rho_{\theta,n}^u$ restricted to $\otimes_1^n\mathcal{B}_0$.

The subalgebra $\mathcal{B}_0$ of $B(\mathcal{K})$ can be constructed as follows: let $P_1,\ldots,P_m$ be projection operators on mutually orthogonal subspaces of $\mathcal{K}$ such that $\sum_{i=1}^m P_i = I_{\mathcal K}$. Putting $\mathcal{K}_i = P_i\mathcal{K}$, the subalgebra $\mathcal{B}_0$ is generated by

$$\Big\{\sum_i P_iAP_i : A\in B(\mathcal{K})\Big\}\qquad (42)$$

One observes that in the case $n = 1$

$$\rho_{\theta,1}^{u(0)} = \rho_{\theta,1}^u\big|_{\mathcal{B}_0} = \sum_i P_i\,\rho_{\theta,1}^u\,P_i$$

and one has, for any $n\in\mathbb{N}$,

$$\rho_{\theta,n}^{u(0)} = \sum_{i_1,\ldots,i_n}(P_{i_1}\otimes\cdots\otimes P_{i_n})\,\rho_{\theta,n}^u\,(P_{i_1}\otimes\cdots\otimes P_{i_n})$$

from which the following theorem is proved (cf. [17]).

Theorem 60

$$S(\rho_{\theta,n}^u) \le S(\rho_{\theta,n}^{u(0)})$$

Taking into account the construction of the subalgebra $\mathcal{B}_0$ of $B(\mathcal{K})$, one can construct a transition expectation in the case that $B(\mathcal{K})$ is a finite subalgebra of $B(\mathcal{H})$.

Let $B(\mathcal{K})$ be the $d\times d$ matrix algebra $M_d$ ($d\le\dim\mathcal{H}$) in $B(\mathcal{H})$ and $E_{ij} = |e_i\rangle\langle e_j|$ with normalized vectors $e_i\in\mathcal{H}$ ($i = 1,2,\ldots,d$). Let $\gamma_1,\ldots,\gamma_d\in B(\mathcal{H})$ be a finite operational partition of unity, that is, $\sum_i\gamma_i^*\gamma_i = I$; then a transition expectation

$$\mathcal{E}^\gamma : M_d\otimes B(\mathcal{H})\to B(\mathcal{H})$$

is defined by

$$\mathcal{E}^\gamma\Big(\sum_{i,j=1}^d E_{ij}\otimes A_{ij}\Big) = \sum_{i,j=1}^d\gamma_i^*A_{ij}\gamma_j$$

Remark that this type of completely positive map $\mathcal{E}^\gamma$ is also discussed in [85].

Let $M_0$ be the subalgebra of $M_d$ consisting of the diagonal elements of $M_d$. Since an element of $M_0$ has the form $\sum_i b_iE_{ii}$ ($b_i\in\mathbb{C}$), one can see that the restriction $\mathcal{E}^{\gamma(0)}$ of $\mathcal{E}^\gamma$ to $M_0\otimes B(\mathcal{H})$ is given by

$$\mathcal{E}^{\gamma(0)}\Big(\sum_{i=1}^d E_{ii}\otimes A_{ii}\Big) = \sum_{i=1}^d\gamma_i^*A_{ii}\gamma_i$$

When $\Lambda : B(\mathcal{H})\to B(\mathcal{H})$ is a normal unital CP map, the transition expectations $\mathcal{E}_\Lambda^\gamma$ and $\mathcal{E}_\Lambda^{\gamma(0)}$ are defined by

$$\mathcal{E}_\Lambda^\gamma\Big(\sum_{i,j=1}^d E_{ij}\otimes A_{ij}\Big) = \sum_{i,j=1}^d\Lambda\big(\gamma_i^*A_{ij}\gamma_j\big)$$

$$\mathcal{E}_\Lambda^{\gamma(0)}\Big(\sum_{i,j=1}^d E_{ij}\otimes A_{ij}\Big) = \sum_{i=1}^d\Lambda\big(\gamma_i^*A_{ii}\gamma_i\big)$$

Then one obtains the quantum Markov states $\{\rho_{\Lambda,n}^\gamma\}$ and $\{\rho_{\Lambda,n}^{\gamma(0)}\}$:

$$\rho_{\Lambda,n}^\gamma = \sum_{i_1,\ldots,i_n=1}^d\,\sum_{j_1,\ldots,j_n=1}^d \mathrm{tr}_{\mathcal H}\Big[\rho\,\Lambda\big(W_{i_1j_1}(\Lambda(W_{i_2j_2}(\cdots\Lambda(W_{i_nj_n}(I_{\mathcal H}))\cdots)))\big)\Big]\,E_{i_1j_1}\otimes\cdots\otimes E_{i_nj_n}\qquad (48)$$

$$\rho_{\Lambda,n}^{\gamma(0)} = \sum_{i_1,\ldots,i_n=1}^d \mathrm{tr}_{\mathcal H}\Big[\rho\,\Lambda\big(W_{i_1i_1}(\Lambda(W_{i_2i_2}(\cdots\Lambda(W_{i_ni_n}(I_{\mathcal H}))\cdots)))\big)\Big]\,E_{i_1i_1}\otimes\cdots\otimes E_{i_ni_n} = \sum_{i_1,\ldots,i_n=1}^d p_{i_1,\ldots,i_n}\,E_{i_1i_1}\otimes\cdots\otimes E_{i_ni_n}\qquad (49)$$

where we put

$$W_{ij}(A) = \gamma_i^*A\gamma_j,\qquad A\in B(\mathcal{H})\qquad (50)$$

$$W_{ij}^*(\rho) = \gamma_j\rho\gamma_i^*,\qquad \rho\in S(\mathcal{H})\qquad (51)$$

$$p_{i_1,\ldots,i_n} = \mathrm{tr}_{\mathcal H}\Big[\rho\,\Lambda\big(W_{i_1i_1}(\Lambda(W_{i_2i_2}(\cdots\Lambda(W_{i_ni_n}(I_{\mathcal H}))\cdots)))\big)\Big]\qquad (52)$$

$$= \mathrm{tr}_{\mathcal H}\Big[W_{i_ni_n}^*\big(\Lambda^*(\cdots W_{i_2i_2}^*(\Lambda^*(W_{i_1i_1}^*(\Lambda^*(\rho))))\cdots)\big)\Big]\qquad (53)$$

The above $\rho_{\Lambda,n}^\gamma$ and $\rho_{\Lambda,n}^{\gamma(0)}$ become special cases of the state $\rho_{\Lambda,n}^{\Gamma,\omega}$ obtained by taking suitable $\Gamma$ and $\omega$ in (33). Therefore the dynamical entropy (36) becomes

$$\tilde S(\Lambda;\rho,\{\gamma_i\}) = \limsup_{n\to\infty}\frac{1}{n}S(\rho_{\Lambda,n}^\gamma)\qquad (54)$$

$$\tilde S^{(0)}(\Lambda;\rho,\{\gamma_i\}) = \limsup_{n\to\infty}\frac{1}{n}S(\rho_{\Lambda,n}^{\gamma(0)})\qquad (55)$$

The dynamical entropies of $\Lambda$ with respect to a finite dimensional subalgebra $\mathcal{B}\subset B(\mathcal{H})$ and the transition expectations $\mathcal{E}_\Lambda^\gamma$ and $\mathcal{E}_\Lambda^{\gamma(0)}$ are given by

$$\tilde S_{\mathcal B}(\Lambda;\rho) = \sup\{\tilde S(\Lambda;\rho,\{\gamma_i\}) : \{\gamma_i\}\subset\mathcal{B}\}\qquad (56)$$

$$\tilde S_{\mathcal B}^{(0)}(\Lambda;\rho) = \sup\{\tilde S^{(0)}(\Lambda;\rho,\{\gamma_i\}) : \{\gamma_i\}\subset\mathcal{B}\}\qquad (57)$$

We call (56) and (57) a generalized AF entropy and a generalized dynamical entropy by QMC, respectively. When $\{\gamma_i\}$ is a PVM (projection valued measure) and $\Lambda$ is an automorphism $\theta$, $\tilde S_{\mathcal B}^{(0)}(\theta;\rho)$ is equal to the AOW dynamical entropy by QMC [58]. When $\{\gamma_i^*\gamma_i\}$ is a POVM (positive operator valued measure) and $\Lambda = \theta$, $\tilde S_{\mathcal B}(\theta;\rho)$ is equal to the AF entropy [51]. From Theorem 60, one obtains the following inequality.

Theorem 61

$$\tilde S_{\mathcal B}(\Lambda;\rho) \le \tilde S_{\mathcal B}^{(0)}(\Lambda;\rho)\qquad (58)$$

That is, the generalized dynamical entropy by QMC dominates the generalized AF entropy. Moreover, the dynamical entropy $\tilde S_{\mathcal B}(\Lambda;\rho)$ is rather difficult to compute because of the off-diagonal parts in (48), whereas the dynamical entropy $\tilde S_{\mathcal B}^{(0)}(\Lambda;\rho)$ can be computed easily.

Here we note that the dynamical entropy defined in terms of $\rho_{\theta,n}^u$ on $\otimes_1^n B(\mathcal{K})$ is related to the entropy of flows by Emch [49], which was defined in terms of the conditional expectation, provided $B(\mathcal{K})$ is a subalgebra of $B(\mathcal{H})$.

15. Conclusion

As mentioned above, we have reviewed the mathematical aspects of quantum entropy and discussed several applications to quantum communication and statistical physics, all of which were studied by the present authors. Other topics in quantum information have recently been developed in various directions, such as quantum algorithms, quantum teleportation and quantum cryptography, which are discussed in [60].

References

1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379-423, 623-656.

2. Kullback, S.; Leibler, R. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79-86.

3. Gelfand, I.M.; Yaglom, A.M. Calculation of the amount of information about a random function contained in another such function. Amer. Math. Soc. Transl. 1959, 12, 199-246.

4. Kolmogorov, A.N. Theory of transmission of information. Amer. Math. Soc. Transl. 1963, 33, 291-321.

5. Ohya, M. Quantum ergodic channels in operator algebras. J. Math. Anal. Appl. 1981, 84, 318-327.

6. Von Neumann, J. Die Mathematischen Grundlagen der Quantenmechanik; Springer: Berlin, Germany, 1932.

7. Umegaki, H. Conditional expectations in an operator algebra IV (entropy and information). Kodai Math. Sem. Rep. 1962, 14, 59-85.

8. Araki, H. Relative entropy of states of von Neumann algebras. Publ. RIMS, Kyoto Univ. 1976, 11, 809-833.

9. Araki, H. Relative entropy for states of von Neumann algebras II. Publ. RIMS, Kyoto Univ. 1977, 13, 173-192.

10. Uhlmann, A. Relative entropy and the Wigner-Yanase-Dyson-Lieb concavity in interpolation theory. Commun. Math. Phys. 1977, 54, 21-32.

11. Donald, M.J. On the relative entropy. Commun. Math. Phys. 1985,105, 13-34.

12. Urbanik, K. Joint probability distribution of observables in quantum mechanics. Stud. Math. 1961, 21, 317.

13. Ohya, M. On compound state and mutual information in quantum information theory. IEEE Trans. Infor. Theo. 1983, 29, 770-777.

14. Ohya, M. Note on quantum probability. Lett. Nuovo Cimento 1983, 38, 402-404.

15. Schatten, R. Norm Ideals of Completely Continuous Operators; Springer Verlag: Berlin, Germany, 1970.

16. Ohya, M. Some aspects of quantum information theory and their applications to irreversible processes. Rep. Math. Phys. 1989, 27, 19-47.

17. Ohya, M.; Petz, D. Quantum Entropy and its Use; Springer Verlag: Berlin, Germany, 1993.

18. Hiai, F.; Ohya, M.; Tsukada, M. Sufficiency, KMS condition and relative entropy in von Neumann algebras. Pacif. J. Math. 1981, 96, 99-109.

19. Hiai, F.; Ohya, M.; Tsukada, M. Sufficiency and relative entropy in *-algebras with applications to quantum systems. Pacif. J. Math. 1983,107, 117-140.

20. Petz, D. Sufficient subalgebras and the relative entropy of states of a von Neumann algebra. Commun. Math. Phys. 1986,105, 123-131.

21. Holevo, A.S. Some estimates for the amount of information transmittable by a quantum communication channel (in Russian). Prob. Pered. Infor. 1973, 9, 3-11.

22. Ingarden, R.S. Quantum information theory. Rep. Math. Phys. 1976,10, 43-73.

23. Ohya, M. Entropy Transmission in C*-dynamical systems. J. Math. Anal. Appl. 1984, 100, 222-235.

24. Accardi, L.; Ohya, M.; Suyari, H. Computation of mutual entropy in quantum Markov chains. Open Sys. Infor. Dyn. 1994, 2, 337-354.

25. Akashi, S. Superposition representability problems of quantum information channels. Open Sys. Infor. Dyn. 1997,4, 45-52.

26. Muraki, N.; Ohya, M.; Petz, D. Entropies of general quantum systems. Open Sys. Infor. Dyn. 1992,1, 43-56.

27. Muraki, N.; Ohya, M. Entropy functionals of Kolmogorov-Sinai type and their limit theorems. Lett. Math. Phys. 1996, 36, 327-335.

28. Ohya, M.; Watanabe, N. Construction and analysis of a mathematical model in quantum communication processes. Scripta Technica, Inc., Elect. Commun. Japan 1985, 68, 29-34.

29. Ohya, M. State change and entropies in quantum dynamical systems. Springer Lect. Not. in Math. 1985,1136, 397-408.

30. Ohya, M.; Petz, D.; Watanabe, N. On capacities of quantum channels. Prob. Math. Stat. 1997, 17, 179-196.

31. Ohya, M.; Petz, D.; Watanabe, N. Numerical computation of quantum capacity. Inter. J. Theor. Phys. 1998, 38, 507-510.

32. Ohya, M.; Watanabe, N. A mathematical study of information transmission in quantum communication processes. Quant. Commun. Measur. 1995, 2, 371-378.

33. Ohya, M.; Watanabe, N. A new treatment of communication processes with Gaussian channels. Japan J. Appl. Math. 1986, 3, 197-206.

34. Fichtner, K.-H.; Freudenberg, W. Point processes and the position distribution of infinite boson systems. J. Stat. Phys. 1987, 47, 959.

35. Fichtner, K.-H.; Freudenberg, W. Characterization of states of infinite Boson systems I. On the construction of states. Commun. Math. Phys. 1991,137, 315-357.

36. Accardi, L.; Frigerio, A.; Lewis, J. Quantum stochastic processes. Publ. Res. Inst. Math. Sci. 1982, 18, 97-133.

37. Accardi, L.; Frigerio, A. Markov cocycles. Proc. R. Ir. Acad. 1983, 83A, 251-269.

38. Milburn, G.J. Quantum optical Fredkin gate. Phys. Rev. Lett. 1989, 63, 2124-2127.

39. Yuen, H.P.; Ozawa, M. Ultimate information carrying limit of quantum systems. Phys. Rev. Lett. 1993, 70, 363-366.

40. Ohya, M. Fundamentals of quantum mutual entropy and capacity. Open Sys. Infor. Dyn. 1999, 6, 69-78.

41. Ohya, M.; Volovich, I.V. On quantum entropy and its bound. Infi. Dimen. Anal., Quant. Prob. and Rela. Topics 2003, 6, 301-310.

42. Holevo, A.S. The capacity of quantum channel with general signal states. IEEE Trans. Info. Theor. 1998, 44, 269-273.

43. Schumacher, B.W. Sending entanglement through noisy quantum channels. Phys. Rev. A 1996, 54, 2614.

44. Belavkin, V.P.; Ohya, M. Quantum entropy and information in discrete entangled states. Infi. Dimen. Anal., Quant. Prob. Rela. Topics 2001, 4, 137-160.

45. Belavkin, V.P.; Ohya, M. Quantum entanglements and entangled mutual entropy. Proc. Roy. Soc. Lond. A. 2002, 458, 209-231.

46. Ingarden, R.S.; Kossakowski, A.; Ohya, M. Information Dynamics and Open Systems; Kluwer, Dordrecht, the Netherlands, 1997.

47. Kolmogorov, A.N. Dokl. Akad. Nauk SSSR 1958, 119, 861; 1959, 124, 754.

48. Connes, A.; Størmer, E. Entropy for automorphisms of II_1 von Neumann algebras. Acta Math. 1975, 134, 289-306.

49. Emch, G.G. Positivity of the K-entropy on non-abelian K-flows. Z. Wahrscheinlichkeitstheory verw. Gebiete 1974, 29, 241-252.

50. Connes, A.; Narnhofer, H.; Thirring, W. Dynamical entropy of C*-algebras and von Neumann algebras. Commun. Math. Phys. 1987, 112, 691-719.

51. Alicki, R.; Fannes, M. Defining quantum dynamical entropy. Lett. Math. Phys. 1994, 32, 75-82.

52. Benatti, F. Deterministic Chaos in Infinite Quantum Systems, Series: Trieste Notes in Physics; Springer-Verlag: Berlin, Germany, 1993.

53. Park, Y.M. Dynamical entropy of generalized quantum Markov chains. Lett. Math. Phys. 1994, 32, 63-74.

54. Hudetz, T. Topological entropy for appropriately approximated C*-algebras. J. Math. Phys. 1994, 35, 4303-4333.

55. Voiculescu, D. Dynamical approximation entropies and topological entropy in operator algebras. Commun. Math. Phys. 1995, 170, 249-281.

56. Choda, M. Entropy for extensions of Bernoulli shifts. Ergod. Theo. Dyn. Sys. 1996, 16, 1197-1206.

57. Ohya, M. Information dynamics and its application to optical communication processes. Springer Lect. Note. Phys. 1991, 378, 81.

58. Accardi, L.; Ohya, M.; Watanabe, N. Dynamical entropy through quantum Markov chain. Open Sys. Infor. Dyn. 1997, 4, 71-87.

59. Kossakowski, A.; Ohya, M.; Watanabe, N. Quantum dynamical entropy for completely positive maps. Infi. Dimen. Anal., Quant. Prob. Rela. Topics 1999, 2, 267-282.

60. Ohya, M.; Volovich, I.V. Mathematical Foundation of Quantum Information and Computation (in preparation).

61. Lindblad, G. Entropy, information and quantum measurements. Commun. Math. Phys. 1973, 33, 111-119.

62. Lindblad, G. Completely positive maps and entropy inequalities. Commun. Math. Phys. 1975, 40, 147-151.

63. Cecchini, C.; Petz, D. State extensions and a Radon-Nikodym theorem for conditional expectations on von Neumann algebras. Pacif. J. Math. 1989, 138, 9-24.

64. Fichtner, K.-H.; Ohya, M. Quantum teleportation with entangled states given by beam splittings. Commun. Math. Phys. 2001, 222, 229-247.

65. Fichtner, K.-H.; Ohya, M. Quantum teleportation and beam splitting. Commun. Math. Phys. 2002, 225, 67-89.

66. Billingsley, P. Ergodic Theory and Information; Wiley: New York, NY, USA, 1965.

67. Ohya, M.; Tsukada, M.; Umegaki, H. A formulation of noncommutative McMillan theorem. Proc. Japan Acad. 1987, 63, Ser. A, 50-53.

68. Frigerio, A. Stationary states of quantum dynamical semigroups. Commun. Math. Phys. 1978, 63, 269-276.

69. Wehrl, A. General properties of entropy. Rev. Mod. Phys. 1978, 50, 221-260.

70. Shor, P. The Quantum Channel Capacity and Coherent Information; Lecture Notes, MSRI Workshop on Quantum Computation, San Francisco, CA, USA, 21-23 October 2002 (unpublished).

71. Barnum, H.; Nielsen, M.A.; Schumacher, B.W. Information transmission through a noisy quantum channel. Phys. Rev. A 1998, 57, 4153-4175.

72. Bennett, C.H.; Shor, P.W.; Smolin, J.A.; Thapliyalz, A.V. Entanglement-assisted capacity of a quantum channel and the reverse Shannon theorem. IEEE Trans. Info. Theory 2002, 48, 2637-2655.

73. Schumacher, B.W.; Nielsen, M.A. Quantum data processing and error correction. Phys. Rev. A 1996, 54, 2629.

74. Ohya, M.; Watanabe, N. Comparison of mutual entropy-type measures. TUS preprint, 2003.

75. Watanabe, N. Efficiency of optical modulations with coherent states. Springer Lect. Note. Phys. 1991, 378, 350-360.

76. Ohya, M. Complexities and their applications to characterization of chaos. Inter. J. Theor. Phys. 1998, 37, 495-505.

77. Ohya, M.; Petz, D. Notes on quantum entropy. Stud. Scien. Math. Hungar. 1996, 31, 423-430.

78. Fujiwara, A.; Nagaoka, H. Capacity of memoryless quantum communication channels. Math. Eng. Tech. Rep., Univ. Tokyo 1994, 94, 22.

79. Ohya, M.; Watanabe, N. Quantum capacity of noisy quantum channel. Quant. Commun. Measur. 1997, 3, 213-220.

80. Accardi, L.; Ohya, M.; Watanabe, N. Note on quantum dynamical entropies. Rep. Math. Phys. 1996, 38, 457-469.

81. Ohya, M. State change, complexity and fractal in quantum systems. Quant. Commun. Measur. 1995, 2, 309-320.

82. Accardi, L. Noncommutative Markov chains. Inter. Sch. Math. Phys., Camerino 1974, 268.

83. Ohya, M.; Heyde, C.C. Foundation of entropy, complexity and fractal in quantum systems. In Probability towards 2000. Lecture Notes in Statistics; Springer-Verlag: New York, NY, USA, 1998; pp. 263-286.

84. Accardi, L.; Ohya, M. Compound channels, transition expectations, and liftings. Appl. Math. Optim. 1999, 39, 33-59.

85. Tuyls, P. Comparing quantum dynamical entropies. Banach Cent. Pub. 1998, 43, 411-420.

© 2010 by the authors; licensee MDPI, Basel, Switzerland. This article is an Open Access article distributed under the terms and conditions of the Creative Commons Attribution license http://creativecommons.org/licenses/by/3.0/.
