OPEN ACCESS
RECEIVED
18 February 2015
REVISED
5 May 2015
ACCEPTED FOR PUBLICATION
1 June 2015
PUBLISHED
29 June 2015
Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence.
Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
Operational discord measure for Gaussian states with Gaussian measurements
Saleh Rahimi-Keshari^{1,2}, Timothy C Ralph^{1} and Carlton M Caves^{2,3}
1 Centre for Quantum Computation and Communication Technology, School of Mathematics and Physics, University of Queensland, St Lucia, Queensland 4072, Australia
2 Center for Quantum Information and Control, University of New Mexico, MSC07-4220, Albuquerque, New Mexico 87131-0001, USA
3 Centre for Engineered Quantum Systems, School of Mathematics and Physics, University of Queensland, St Lucia, Queensland 4072, Australia
E-mail: s.rahimik@gmail.com
Keywords: quantum correlations, Gaussian states, Gaussian measurements
Abstract
We introduce an operational discord-type measure for quantifying nonclassical correlations in bipartite Gaussian states based on using Gaussian measurements. We refer to this measure as operational Gaussian discord (OGD). It is defined as the difference between the entropies of two conditional probability distributions associated to one subsystem, which are obtained by performing optimal local and joint Gaussian measurements. We demonstrate the operational significance of this measure in terms of a Gaussian quantum protocol for extracting maximal information about an encoded classical signal. As examples, we calculate OGD for several Gaussian states in the standard form.
1. Introduction
Characterization and quantification of correlations in quantum systems are central to implementing quantum information processing tasks that cannot be done classically. Quantum discord was proposed as a measure of nonclassical correlations, which can capture correlations beyond quantum entanglement [1-3]. This measure of correlation was shown to be useful for characterizing resources in a quantum computational model (DQC1) [4], quantum state merging [5, 6], remote state preparation [7], encoding information onto a quantum state [8], quantum phase estimation [9], and quantum key distribution [10]. It was also shown that quantum discord is linked to entanglement generated by the activation protocol [11] or by a measurement [12].
In general, measures of quantum correlations can be defined as the difference between a quantum entropic measure and a classical entropic measure that is obtained from outcome probabilities of local measurements [13]. For a bipartite system in quantum state ρ_AB, quantum discord from subsystem B to A is defined as
D(B → A) = H_L^{(min)}(A|B) − S(A|B),    (1)
where H_L^{(min)}(A|B) is the minimized classical conditional entropy, and the quantum conditional entropy is S(A|B) = S(A, B) − S(B), with S(A, B) and S(B) being the von Neumann entropies of the joint state and the marginal state ρ_B = Tr_A[ρ_AB]. The classical conditional entropy H_L^{(min)}(A|B) is obtained from outcome probability distributions of measurements on A in the eigenbasis of the conditional states ρ_{A|b} = Tr_B[ρ_AB Π_b]/p_b obtained after performing a measurement on B described by POVM elements {Π_b}, where p_b = Tr_AB[ρ_AB Π_b] is the probability for outcome b [13]. Hence, we have
H_L^{(min)}(A|B) = min_{Π_b} Σ_b p_b S(A|b),    (2)
where S (A | b) is the von Neumann entropy of pA | b and the remaining minimization is over the local measurements on B. In general, it is not clear how to perform this minimization for an arbitrary quantum state; however, it can be done for certain cases, including a large class of two-qubit states [14].
Quantum discord was generalized to quantify nonclassical correlations in continuous-variable systems, particularly Gaussian states. Interestingly, all Gaussian states except product states have nonzero quantum discord [15,16]. Gaussian quantum discord (GQD) was introduced as a measure of nonclassical correlations for Gaussian states in which the minimization in the classical conditional entropy (2) is restricted to Gaussian measurements on B [17,18]. To get to the form (2), however, already requires nonGaussian measurements on the conditional states of A; these are generally measurements in a displaced squeezed number basis. Thus, the GQD cannot really be used as a figure of merit for Gaussian quantum protocols that only involve Gaussian states, Gaussian operations, and Gaussian measurements, such as a Gaussian version of the protocol in [8].
By using the optimality of input Gaussian states for Gaussian channels [19,20], it was recently shown [21] that for a large class of Gaussian states, no nonGaussian measurements on B can further minimize the value of quantum discord, implying that GQD is equal to quantum discord. It seems to be an open question whether this is true for all Gaussian states.
In this paper, we introduce a new measure for quantifying nonclassical correlations in bipartite Gaussian states, based solely on Gaussian measurements, which has qualitatively different behavior from GQD. This measure is defined as the difference between the Gaussian version of the classical conditional entropy, H_{G,L}^{(min)}(A|B), which is obtained by minimizing over local Gaussian measurements on both subsystems, and the minimum conditional entropy that can be measured by a joint Gaussian measurement, H_{G,J}^{(min)}(A|B). We refer to this measure as operational Gaussian discord (OGD) because, firstly, it only depends on quantities that can be measured via Gaussian operations and, secondly, it has an operational significance in terms of a quantum protocol that is a Gaussian version of the protocol in [8]. In this protocol, a classical signal with a Gaussian probability distribution is encoded on one subsystem of a bipartite Gaussian state; using a local Gaussian or a joint Gaussian measurement, one tries to retrieve the signal from the noise associated with the joint state and the measurement. The optimal measurement is the one that maximizes the classical mutual information between the measurement outcome and the input signal. We show that in the limit of large variances for the signal probability distribution, the difference between the maximal classical mutual informations obtained by optimal joint and local Gaussian measurements is equal to the OGD of the bipartite Gaussian state.
This paper is structured as follows. In the following section, we review GQD. We introduce OGD in section 3 and calculate it for several Gaussian states in the standard form in section 4. We demonstrate the operational significance of our measure in section 5. We conclude the paper in the last section and pose an open problem. This paper is supplemented with one appendix.
2. Gaussian quantum discord
Quantum states that have Gaussian Wigner functions are known as Gaussian states. We gather the phase-space quadratures for a two-mode system into a vector X = (x_A, p_A, x_B, p_B). A Gaussian state ρ_AB can be fully characterized in terms of the mean quadratures ⟨X⟩ = X̄ and the covariance matrix, which has elements [σ_AB]_ij = ½⟨X_i X_j + X_j X_i⟩ − ⟨X_i⟩⟨X_j⟩. As correlations in Gaussian states are independent of local displacement operations, we can, without loss of generality, assume that X̄ = 0. Also, by applying local phase shifts and squeezing operations, any covariance matrix

σ_AB = ( A    C
         C^T  B ),    (3)

can be brought to a standard form in which A = diag(a, a), B = diag(b, b), and C = diag(c, d) [22, 23], i.e.,

σ_AB = ( a  0  c  0
         0  a  0  d
         c  0  b  0
         0  d  0  b ).    (4)
Matrices A and B are the covariance matrices of the marginal states pA = TrB [pAB ] and pB = TrA [pAB ], and the matrix C contains the quadrature correlations between the modes. A Gaussian state has zero discord if and only if C = 0 [15].
A closed-form expression for calculating the classical conditional entropy H_L^{(min)}(A|B) with the restriction to Gaussian measurements on B was given in [17]. A local Gaussian measurement is described by POVM elements that are proportional to pure, single-mode Gaussian states (i.e., rank-one Gaussian operators) with covariance matrix
V_B = ( cos θ_B  −sin θ_B ) ( L_B    0   ) (  cos θ_B   sin θ_B )
      ( sin θ_B   cos θ_B ) (  0   1/L_B ) ( −sin θ_B   cos θ_B ),    (5)
which is, in fact, the covariance matrix of a squeezed-vacuum state. The various outcomes of the measurement correspond to the points {b} = {xB, pB} in the phase plane; the corresponding POVM elements are obtained by displacing the squeezed-vacuum state to all points in the phase plane. (More generally, a single-mode Gaussian measurement can have POVM elements that are Gaussian convex combinations of the rank-one POVM elements, i.e., that are proportional to mixed, single-mode Gaussian states, but such measurements are noisier versions of the rank-one Gaussian measurements and thus are never optimal for our considerations.) Homodyne measurement has LB = 0; heterodyne measurement has LB = 1; and for measurements in between, 0 < LB < 1.
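To make this parametrization concrete, here is a minimal numerical sketch (our illustration, not code from the paper; it assumes the ℏ = 2 units used above, and the function names are ours) that builds the measurement covariance matrix of equation (5) and displays its heterodyne and near-homodyne limits.

```python
import numpy as np

def rot(theta):
    """Single-mode phase-space rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def meas_cov(theta, L):
    """Covariance matrix (5) of a rank-one Gaussian measurement:
    a rotated squeezed-vacuum ruler with squeezing parameter L."""
    return rot(theta) @ np.diag([L, 1.0 / L]) @ rot(theta).T

print(meas_cov(0.0, 1.0))    # heterodyne limit: identity (vacuum ruler)
print(meas_cov(0.0, 1e-6))   # near-homodyne: strongly squeezed along x
```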
After performing such a measurement on B with outcomes b, the conditional state pA | b has mean quadratures that depend on b, but its covariance matrix, given by
γ_A = A − C (B + V_B)^{-1} C^T,    (6)
is independent of b [25]. Thus, the eigenstates of ρ_{A|b} are, in general, displaced squeezed number states; measuring in this basis minimizes the Shannon entropy of the outcome probability distribution, making it equal to the von Neumann entropy S(A|b) of the conditional state ρ_{A|b}. This von Neumann entropy is given by S(A|b) = F(√det γ_A) [22], with

F(x) = [(x + 1)/2] ln[(x + 1)/2] − [(x − 1)/2] ln[(x − 1)/2].    (7)
The GQD is then given by
D_GQD(B → A) = H_L^{(min)}(A|B) − S(A|B),    (8)
where the classical conditional entropy
H_L^{(min)}(A|B) = min_{V_B} S(A|b) = min_{V_B} F(√det γ_A),    (9)
is now obtained by minimizing S (A | b) only over Gaussian measurements on B.
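As an illustration of how the minimization in equation (9) can be carried out in practice, the following sketch (ours, not from the paper) scans the measurement parameters θ_B and L_B for a state in the standard form (4) and evaluates F(√det γ_A); the state chosen here is the CA state with a = b = 3 and c = −d = 2.

```python
import numpy as np

def F(x):
    # von Neumann entropy of a single-mode Gaussian state with symplectic eigenvalue x; eq. (7)
    if x <= 1.0:
        return 0.0
    return (x + 1) / 2 * np.log((x + 1) / 2) - (x - 1) / 2 * np.log((x - 1) / 2)

def meas_cov(theta, L):
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return R @ np.diag([L, 1.0 / L]) @ R.T

# standard-form state (4): the CA state with a = b = 3, c = -d = 2
a, b, c, d = 3.0, 3.0, 2.0, -2.0
A, B, C = a * np.eye(2), b * np.eye(2), np.diag([c, d])

best = np.inf
for theta in np.linspace(0, np.pi, 181):
    for L in np.linspace(1e-4, 1.0, 200):
        VB = meas_cov(theta, L)
        gamma_A = A - C @ np.linalg.inv(B + VB) @ C.T      # eq. (6)
        best = min(best, F(np.sqrt(np.linalg.det(gamma_A))))

print("H_L^(min)(A|B) ~", best)
```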
For Gaussian states in the standard form (4) and having a = b = c + 1, it is interesting to note that the quantum conditional entropy of the state with d = c, referred to as the correlated-correlated (CC) state, is smaller than the quantum conditional entropy of the state with d = — c, referred to as the correlated-anticorrelated (CA) state. On the other hand, the classical conditional entropy (9) is the same for these two separable states [17]. This implies that the GQD of the CC state is larger than that of the CA state, although the marginal states of these two separable states are the same. Note that given c, the CC and CA states are the nonentangled states that have maximal correlations in their quadratures.
Recently, using a connection between the continuous (differential) Shannon entropy of the Wigner function and the Renyi-2 entropy, Gaussian Renyi-2 discord was defined as a measure of nonclassical correlations in Gaussian states [24]. In this measure, the von Neumann entropies in equation (8) are replaced by Renyi-2 entropies
D_2(B → A) = min_{V_B} S_2(A|b) − S_2(A|B),    (10)

where S_2(A|b) = −ln(Tr[ρ_{A|b}²]) = ½ ln(det γ_A) and S_2(A|B) = ½ ln(det σ_AB / det B). Notice that the conditional entropy S_2(A|b) corresponds to the continuous Shannon entropy of the Wigner function of ρ_{A|b} up to a constant [24]. There is, however, no Gaussian measurement whose outcome probability distribution is equal to the Wigner function, as the noncommuting observables x_A and p_A cannot be measured simultaneously without some noise penalty. The CC and CA states have the same Gaussian Renyi-2 discord, and Gaussian states with no correlations in one of the quadratures (d = 0) have zero Gaussian Renyi-2 discord.
Neither the GQD nor the Gaussian Renyi-2 discord satisfies the condition of nonclassical correlations [13] for Gaussian protocols, because they rely on a nonGaussian measurement on A. We turn now to formulating an operational, discord-type measure of nonclassical correlations for Gaussian states that is based purely on Gaussian measurements.
3. Operational Gaussian discord
We refer to our new measure as OGD and define it as
D_OGD(B → A) = H_{G,L}^{(min)}(A|B) − H_{G,J}^{(min)}(A|B),    (11)
where H_{G,L}^{(min)}(A|B) is the minimum conditional entropy of A after performing local Gaussian measurements on A and B, and H_{G,J}^{(min)}(A|B) is the minimum conditional entropy of the same subsystem after performing a joint Gaussian measurement on A and B. The entropies are the continuous (differential) Shannon entropies of Gaussian probability distributions, which for a single mode are given by ½ ln(det σ) + ln(2πe), with σ being the covariance matrix of the probability distribution (see appendix). In our notation, A and B denote that the entropies are calculated using outcome probability distributions of the measurements. As all the probability distributions are Gaussian, in order to calculate the OGD (11), one just needs to minimize the determinants of the covariance matrices of the conditional Gaussian probability distributions for the outcomes of local and joint Gaussian measurements. For a discussion of Gaussian measurements, conditional probability distributions, and the corresponding entropies, see the appendix.
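This entropy formula is easy to verify; the following short sketch (ours; the covariance matrix is an arbitrary example) compares ½ ln(det σ) + ln(2πe) with SciPy's Gaussian entropy for a single-mode (2×2) covariance matrix.

```python
import numpy as np
from scipy.stats import multivariate_normal

sigma = np.array([[2.0, 0.3], [0.3, 0.8]])   # an arbitrary single-mode covariance matrix
h_formula = 0.5 * np.log(np.linalg.det(sigma)) + np.log(2 * np.pi * np.e)
h_scipy = multivariate_normal(mean=np.zeros(2), cov=sigma).entropy()
print(h_formula, h_scipy)   # the two values agree
```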
In general, the POVM elements of a two-mode Gaussian measurement are proportional to two-mode Gaussian states whose covariance matrix, according to the Williamson theorem [27], can be written as
Ṽ_J = S^T (v_1 1 ⊕ v_2 1) S.    (12)

Here the 2×2 identity matrix 1 represents the single-mode vacuum state (the choice of units can be thought of as setting ℏ = 2), v_1 1 ⊕ v_2 1 corresponds to product thermal states with variances v_1 and v_2 in the two modes, and S represents a symplectic transformation [28]. For our purpose, that is, to minimize the entropies of the outcome probability distributions, we consider Gaussian measurements that have rank-one POVM elements, i.e., v_1 = v_2 = 1; thus, the POVM elements are proportional to pure Gaussian states. Measurements with mixed POVM elements will add more noise and increase the entropy. Any symplectic matrix can be expressed as S = K[s(r_1) ⊕ s(r_2)]L, where K and L represent beamsplitter transformations and s(r) represents a single-mode squeezing operation [29, 30]. Using this expression for S in equation (12) and knowing that the action of a beamsplitter on vacuum states results in vacuum states, we can write the covariance matrix of the POVM elements of a two-mode, joint Gaussian measurement in the form
Ṽ_J = ( Ṽ_{A,J}    Ṽ_{C,J}
        Ṽ_{C,J}^T  Ṽ_{B,J} ) = R^T(φ_A, φ_B) B^T(η) (V_A ⊕ V_B) B(η) R(φ_A, φ_B),    (13)

where

B(η) = (  √η       0       −√(1−η)    0
          0        √η       0        −√(1−η)
          √(1−η)   0        √η        0
          0        √(1−η)   0         √η     )    (14)

describes a beamsplitter transformation,

R(φ_A, φ_B) = ( cos φ_A  −sin φ_A   0         0
                sin φ_A   cos φ_A   0         0
                0         0         cos φ_B  −sin φ_B
                0         0         sin φ_B   cos φ_B )    (15)

describes pre-beamsplitter single-mode phase shifts, and V_B is defined as in equation (5), with V_A defined analogously. As we see from the above expression, the joint Gaussian measurement corresponding to this covariance matrix can be realized by two phase shifters and a beamsplitter followed by two local Gaussian measurements. Obviously, for φ_A = φ_B = 0 and η = 1, we obtain the covariance matrix of a local Gaussian measurement
V_L = V_A ⊕ V_B.    (16)
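For readers who want to experiment with this parametrization, the sketch below (our illustration; it assumes the beamsplitter and phase-shift conventions of equations (14) and (15)) assembles Ṽ_J from the seven parameters and checks that φ_A = φ_B = 0 and η = 1 reproduce the local form (16).

```python
import numpy as np

def rot2(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def meas_cov(theta, L):
    return rot2(theta) @ np.diag([L, 1.0 / L]) @ rot2(theta).T        # eq. (5)

def beamsplitter(eta):
    # eq. (14) in 2x2-block form
    t, r = np.sqrt(eta), np.sqrt(1.0 - eta)
    return np.block([[t * np.eye(2), -r * np.eye(2)],
                     [r * np.eye(2), t * np.eye(2)]])

def phases(phiA, phiB):
    # eq. (15): pre-beamsplitter single-mode phase shifts
    return np.block([[rot2(phiA), np.zeros((2, 2))],
                     [np.zeros((2, 2)), rot2(phiB)]])

def V_joint(phiA, phiB, eta, thA, thB, LA, LB):
    # eq. (13): covariance matrix of the rank-one POVM elements of a joint Gaussian measurement
    V = np.block([[meas_cov(thA, LA), np.zeros((2, 2))],
                  [np.zeros((2, 2)), meas_cov(thB, LB)]])
    R, Bs = phases(phiA, phiB), beamsplitter(eta)
    return R.T @ Bs.T @ V @ Bs @ R

VL = V_joint(0.0, 0.0, 1.0, 0.3, 0.7, 0.5, 0.2)
print(np.allclose(VL, np.block([[meas_cov(0.3, 0.5), np.zeros((2, 2))],
                                [np.zeros((2, 2)), meas_cov(0.7, 0.2)]])))   # True: eq. (16)
```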
As shown in the appendix, after performing a joint Gaussian measurement, the covariance matrix of the conditional probability distribution for A is given by
σ_{A,J} = Ã − C̃ B̃^{-1} C̃^T,    (17)
which is obtained from a joint Gaussian probability distribution with the covariance matrix
&ab, j — gab + № j —
' A + № A, J C + №c,J CT + №cj B + №B,J
A C CT B
After local Gaussian measurements on A and B, the covariance matrix of the conditional probability distribution for A is
σ_{A,L} = A + V_A − C (B + V_B)^{-1} C^T.    (19)
Thus, by using equations (19) and (17), the OGD measure becomes
D_OGD(B → A) = min_{V_A, V_B} ½ ln(det σ_{A,L}) − min_{Ṽ_J} ½ ln(det σ_{A,J}).    (20)
OGD is always nonnegative, because the set of all joint measurements includes all local measurements; hence, the conditional entropy minimized over all possible joint measurements, H_{G,J}^{(min)}(A|B), can never be larger than the conditional entropy minimized over all local measurements, H_{G,L}^{(min)}(A|B). In addition, the OGD of product states is zero. In this case, for a joint measurement we have σ_{A,J} = A + V'_A, with V'_A = Ṽ_{A,J} − Ṽ_{C,J}(B + Ṽ_{B,J})^{-1} Ṽ_{C,J}^T, which is equivalent to a local measurement on A with covariance matrix V'_A; hence, det σ_{A,J} cannot be smaller than det σ_{A,L}.
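The minimizations in equation (20) can also be done by brute force. The sketch below (our illustration; the parameter bounds and the random-restart optimizer are our choices, and homodyne detection L = 0 is approximated by a small lower bound) estimates the OGD of the CA state with a = b = 10 and c = −d = 9.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def rot2(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

def meas_cov(t, L):
    return rot2(t) @ np.diag([L, 1.0 / L]) @ rot2(t).T

def V_joint(phiA, phiB, eta, thA, thB, LA, LB):
    V = np.block([[meas_cov(thA, LA), np.zeros((2, 2))],
                  [np.zeros((2, 2)), meas_cov(thB, LB)]])
    t, r = np.sqrt(eta), np.sqrt(1 - eta)
    Bs = np.block([[t * np.eye(2), -r * np.eye(2)], [r * np.eye(2), t * np.eye(2)]])
    R = np.block([[rot2(phiA), np.zeros((2, 2))], [np.zeros((2, 2)), rot2(phiB)]])
    return R.T @ Bs.T @ V @ Bs @ R

# standard-form state (4): the CA state with a = b = 10, c = -d = 9
A, B, C = 10 * np.eye(2), 10 * np.eye(2), np.diag([9.0, -9.0])
sAB = np.block([[A, C], [C.T, B]])

def det_local(p):           # det sigma_{A,L}, eq. (19)
    thA, thB, LA, LB = p
    s = A + meas_cov(thA, LA) - C @ np.linalg.inv(B + meas_cov(thB, LB)) @ C.T
    return np.linalg.det(s)

def det_joint(p):           # det sigma_{A,J}, eqs. (17)-(18)
    sJ = sAB + V_joint(*p)
    return np.linalg.det(sJ[:2, :2] - sJ[:2, 2:] @ np.linalg.inv(sJ[2:, 2:]) @ sJ[2:, :2])

def best(f, bounds, tries=40):
    vals = []
    for _ in range(tries):
        x0 = [rng.uniform(lo, hi) for lo, hi in bounds]
        vals.append(minimize(f, x0, bounds=bounds).fun)
    return min(vals)

bl = [(0, np.pi), (0, np.pi), (1e-3, 1), (1e-3, 1)]
bj = [(0, np.pi), (0, np.pi), (0, 1), (0, np.pi), (0, np.pi), (1e-3, 1), (1e-3, 1)]
ogd = 0.5 * np.log(best(det_local, bl)) - 0.5 * np.log(best(det_joint, bj))
print("OGD ~", ogd)   # the closed form of section 4.1 gives ln(20/11) ~ 0.60 for this CA state
```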
4. Examples: OGD for some Gaussian states
In general, it is not clear how to obtain a closed-form expression for OGD for an arbitrary Gaussian quantum state. The conditional entropy with local Gaussian measurements must be minimized over four parameters, {θ_A, θ_B, L_A, L_B}, and the conditional entropy with joint Gaussian measurements must be minimized over seven parameters, {φ_A, φ_B, η, θ_A, θ_B, L_A, L_B}. In the following, by using analytical and numerical methods, we calculate OGD for some Gaussian states in the standard form (4).
4.1. Entangled and separable Gaussian states
Let us consider a class of Gaussian states whose covariance matrices (4) are parameterized by a and t such that a = b ≥ 1 and c = −d = t√(a² − 1) ≥ 0, where 0 ≤ t ≤ 1. When t = 1, this is a pure two-mode squeezed-vacuum state, with a = b = cosh 2r and c = −d = sinh 2r, r being the squeezing parameter. For t > √((a − 1)/(a + 1)), the state is entangled [23]; for the other values of t, the state is separable. The boundary between separability and entanglement, i.e., t = √((a − 1)/(a + 1)), is occupied by the CA state.
By minimizing det σ_{A,L} over all local Gaussian measurements we find that the optimal local measurements for all values of a and t are two heterodyne measurements, i.e., L_A = L_B = 1. This gives a symmetric covariance matrix for the conditional probability distribution:

σ_{A,L} = [1 + a + (1 − a)t²] 1.    (21)
In order to minimize det σ_{A,J}, one can guess that, as the quadratures are equally correlated but with a different sign, the covariance matrix of the POVM elements of the joint Gaussian measurement must have the same form, with the sum σ_AB + Ṽ_J = σ_{AB,J} enhancing the correlations and minimizing the determinant of the conditional covariance matrix σ_{A,J}. This means that Ṽ_J must be the covariance matrix of a two-mode squeezed state, as it is the only pure Gaussian state of that form,
Ṽ_{A,J} = Ṽ_{B,J} = ½ (1/L + L) 1,    (22)

Ṽ_{C,J} = ½ (1/L − L) ( 1   0
                         0  −1 ),    (23)

i.e., φ_A = φ_B = θ_A = 0, θ_B = π/2, η = 0.5, and L_A = L_B = L. Numerical calculations confirm that this is the optimal choice for the joint Gaussian measurement. Minimizing det σ_{A,J} over the parameter L gives
Figure 1. The solid blue line shows the operational Gaussian discord (OGD), and the dashed red line shows the Gaussian quantum discord (GQD) for a Gaussian state in the standard form (4), with a = b = 10 and c = −d = t√99. The optimal local Gaussian measurements are two heterodyne measurements for all values of t. For t ≥ √(9/11), the optimal joint Gaussian measurement is a 50:50 beamsplitter followed by two homodyne measurements. Notice that for t > √(9/11) (vertical line), the state is entangled. For other values of t, the optimal joint Gaussian measurement consists of local measurements in a displaced squeezed-vacuum basis after the beamsplitter, varying between no squeezing (heterodyne) for t = 0 and infinite squeezing (homodyne) for t = √(9/11).
L = [1 − at² − a − t² + 2t√(a² − 1)] / [1 + t² − a(1 − t²)]   for 0 ≤ t < √((a − 1)/(a + 1)),
L = 0                                                          for √((a − 1)/(a + 1)) ≤ t ≤ 1.    (24)
The corresponding covariance matrix of the conditional probability distribution for 0 ≤ t < √((a − 1)/(a + 1)) is

σ_{A,J} = (1 + a)(1 − t²) 1,    (25)
and for other values of t is
σ_{A,J} = 2(a − t√(a² − 1)) 1.    (26)
For the CA state and the entangled states, an optimal measurement is a beamsplitter followed by two homodyne measurements (the POVM elements are two-mode infinitely squeezed states). For the separable states, the local measurements after the beamsplitter are measurements in a displaced squeezed-vacuum basis, varying between heterodyne for t = 0 and homodyne for t = √((a − 1)/(a + 1)). The OGD for these states is given by
D_OGD(B → A) = ln(1 + a + t² − at²) − ln((1 + a)(1 − t²))     for 0 ≤ t < √((a − 1)/(a + 1)),
D_OGD(B → A) = ln(1 + a + t² − at²) − ln(2a − 2t√(a² − 1))    for √((a − 1)/(a + 1)) ≤ t ≤ 1.    (27)
For these states D_GQD(B → A) ≤ D_OGD(B → A), which we illustrate in a particular case in figure 1. This implies that the difference between the classical and quantum conditional entropies is less than or equal to the difference between the conditional entropies obtained by local and joint Gaussian measurements. Also, for the two-mode squeezed vacuum, we have D_OGD(B → A) = 2r; the quantum discord of this state is equal to the von Neumann entropy of the marginal state, S(B), which for large values of r scales as 2r + 1 − 2 ln 2.
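The closed-form result (27) is easy to evaluate; the following sketch (ours, not from the paper) does so and checks the two-mode squeezed-vacuum limit D_OGD = 2r quoted above.

```python
import numpy as np

def ogd_closed_form(a, t):
    """Equation (27) for the family a = b, c = -d = t*sqrt(a^2 - 1)."""
    local = np.log(1 + a + t**2 - a * t**2)
    t_sep = np.sqrt((a - 1) / (a + 1))
    if t < t_sep:
        joint = np.log((1 + a) * (1 - t**2))
    else:
        joint = np.log(2 * a - 2 * t * np.sqrt(a**2 - 1))
    return local - joint

r = 1.3                       # squeezing parameter
a = np.cosh(2 * r)            # two-mode squeezed vacuum corresponds to t = 1
print(ogd_closed_form(a, 1.0), 2 * r)   # both ~ 2.6
```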
4.2. CC and CA states
Here we consider separable Gaussian states parameterized by c and q such that a = b = c + 1 and d = qc, with c > 0 and −1 ≤ q ≤ 1. The parameter q controls the correlation in the p-quadratures; by changing q from −1 to 1, the state changes from the CA state to the CC state.
We first minimize det σ_{A,L} for the minimum conditional entropy with a local Gaussian measurement. Numerical calculations show that θ_A = θ_B = 0, as expected because the state is in the standard form. The minimizing values of L_A and L_B are

L_A = √(2/(2 + c − cq²))    for (1 + 2c)^{-1/2} < |q| ≤ 1,
L_A = √(1 + 2c)/(1 + c)     for 0 ≤ |q| ≤ (1 + 2c)^{-1/2},    (28)

L_B = [(1 + c)(q² − 1) + c|q| √(4 + 2c − 2cq²)] / [(1 + c)² − (1 + c²)q²]    for (1 + 2c)^{-1/2} < |q| ≤ 1,
L_B = 0                                                                      for 0 ≤ |q| ≤ (1 + 2c)^{-1/2},    (29)
and these give

det σ_{A,L} = [1 + L_A + c(1 + L_B)/(1 + c + L_B)] [1 + c + 1/L_A − c²q²L_B/((1 + c)L_B + 1)].    (30)
According to the above expressions, the optimal local measurement on A for all values of q is in a displaced squeezed-vacuum basis, which limits to a heterodyne measurement when |q| = 1. The optimal local measurement on B for (1 + 2c)^{-1/2} < |q| < 1 is also in a displaced squeezed-vacuum basis, but for small correlations in the p-quadratures, |q| ≤ (1 + 2c)^{-1/2}, the optimal local measurement is homodyne. For |q| = 1 the local measurements on both A and B are heterodyne measurements; in this case, we have √det σ_{A,L} = 4(1 + c)/(2 + c).
Using numerical calculations, we find that the POVM elements of the optimal joint Gaussian measurement are two-mode squeezed states, with covariance matrix Ṽ_J given by equations (22) and (23). We obtain

det σ_{A,J} = 4(1 + L)(1 + L + 2cL)(1 + L + cL − cqL)(1 + c + cq + L) / (1 + 2L + 2cL + L²)²,    (31)

which is minimized by

L = [q − 1 + √(4 + 2c − 2cq²)] / [3 + 2c(1 − q) − q].    (32)
The expression for L shows that for q = 1 we have L = 1; i.e., the optimal joint Gaussian measurement is a 50:50 beamsplitter followed by two heterodyne measurements. In this case, Ṽ_J = 1 ⊕ 1, and this measurement is equivalent to two local heterodyne measurements. Moreover, it is easy to see that for (1 + 2c)^{-1/2} ≤ q ≤ 1, the minimum conditional entropies with local and joint Gaussian measurements are the same, det σ_{A,J} = det σ_{A,L}, and thus OGD is zero. We also observe that the parameter L decreases as q decreases. For q = −1 we have L = 0 and √det σ_{A,J} = 2, which corresponds to performing two homodyne measurements, with θ_A = 0 and θ_B = π/2, after the beamsplitter.
In figure 2, we compare the GQD and OGD measures for Gaussian states with c = 9. As shown, GQD for the state with q = 1 is larger than for the state with q = −1. In contrast, according to OGD, the state with q = 1 has zero correlation, and the state with q = −1 has the maximum correlation.
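As a numerical check of these statements (an illustration, not code from the paper), one can minimize the expression (31) for det σ_{A,J} over L directly and watch the optimum move from heterodyne (L = 1) at q = 1 to homodyne (L → 0) at q = −1.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def det_sigma_AJ(L, c, q):
    # determinant of the conditional covariance matrix for the two-mode-squeezed POVM of eqs. (22)-(23)
    return (4 * (1 + L) * (1 + L + 2 * c * L) * (1 + L + c * L - c * q * L)
            * (1 + c + c * q + L)) / (1 + 2 * L + 2 * c * L + L**2) ** 2

c = 9.0
for q in (1.0, 0.0, -1.0):
    res = minimize_scalar(det_sigma_AJ, bounds=(1e-6, 1.0), method="bounded", args=(c, q))
    print(f"q = {q:+.0f}: optimal L ~ {res.x:.3f}, sqrt(det) ~ {np.sqrt(res.fun):.3f}")
```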
4.3. Asymmetric Gaussian states
Bipartite Gaussian states whose marginal states are not the same are asymmetric. To explore properties of such states, we consider separable, asymmetric Gaussian states in the standard form, which are parametrized by b, v, and s with a = b + v, c = |s|, and d = s, where b ≥ 1, v ≥ 0 and c = |s| ≤ b − 1.
Using numerical and analytical calculations, we find that the optimal local measurements that minimize det σ_{A,L} for all values of s and v are heterodyne measurements, L_A = L_B = 1, which yields a symmetric covariance matrix for the conditional probability distribution

σ_{A,L} = [1 + b + v − s²/(1 + b)] 1.    (33)
In order to minimize det σ_{A,J}, we consider the cases s > 0 (c = d) and s < 0 (c = −d) separately. Note that for s = 0 the state is a product state. For c = d, as for the CC state in the previous subsection, we find that for all values of |s| and v the optimal joint Gaussian measurement is a 50:50 beamsplitter followed by two heterodyne measurements (φ_A = φ_B = θ_A = θ_B = 0, L_A = L_B = 1, η = 1/2), which implies σ_{A,J} = σ_{A,L}. For the case c = −d, however, we find that the optimal joint Gaussian measurement is described by parameters
φ_A = φ_B = θ_A = 0, θ_B = π/2, η = 1/2, and L_A = L_B = (b − 1 + s)/(b − 1 − s); i.e., the POVM elements are two-mode squeezed states. In this case, we obtain a symmetric covariance matrix for the conditional probability distribution as
Figure 2. Operational Gaussian discord (OGD) (solid blue line) and Gaussian quantum discord (GQD) (dashed red line) for Gaussian states with a = b = c + 1 = 10 and d = 9q. The parameter q controls the correlation between the p-quadratures of the joint system. OGD monotonically decreases in the interval −1 ≤ q < 1/√19, and for 1/√19 ≤ q ≤ 1 OGD is zero; note that q = 1/√19 is the point at which the optimal local measurement on B changes to a homodyne measurement. According to GQD, the CC state, q = 1, has more nonclassical correlation than the CA state, q = −1, but the OGD measure attributes zero nonclassical correlation to the CC state and the maximal nonclassical correlation in this class to the CA state.
Figure 3. By applying a Gaussianly distributed local displacement operator, a pair of classical Gaussian random variables X_s = (x_s, p_s), both having the same variance V_s, is encoded on the x- and p-quadratures of subsystem A, which is part of a bipartite system in a Gaussian state with covariance matrix σ_AB [8]. (a) In the first strategy, one performs optimal local Gaussian measurements, whose POVM elements are described by the covariance matrix V_A ⊕ V_B. After post-processing the data, one obtains a signal estimate Y_e = (x_e, p_e) such that the mutual information I_L(X_s, Y_e) is maximized. The covariance matrix of the joint probability distribution is σ'_{AB,L} = σ'_AB + V_A ⊕ V_B. (b) In the second strategy, one performs an optimal joint Gaussian measurement such that the mutual information I_J(X_s, Y_e) is maximized. As shown in the text, the most general form of a joint Gaussian measurement consists of two phase shifters and a beamsplitter (BS) followed by two local Gaussian measurements. The covariance matrix of the probability distribution for the outcomes of this measurement is σ'_{AB,J} = σ'_AB + Ṽ_J, where Ṽ_J is the covariance matrix of the POVM elements. In the limit of maximal encoding (V_s → ∞), the difference between I_J(X_s, Y_e) and I_L(X_s, Y_e) is equal to the operational Gaussian discord (OGD) of the state ρ_AB.
σ_{A,J} = [1 + b + v − s²/(b − 1)] 1.    (34)
As a consequence, OGD for these states is given by
'0, 0 ^ s ^ b — 1.
Dogd(B ^ A) = ^
In I 1 +
(1 + b)( b2 — s2 — 1 — v + bv)
, 1 — b ^ s < 0.
Notice that the optimal local and joint Gaussian measurement strategies are independent of the value of v, and for v → ∞, OGD is zero.
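A short check of this behaviour (our illustration, using the conditional variances (33) and (34)): OGD vanishes for s ≥ 0 and decays toward zero with increasing v for s < 0.

```python
import numpy as np

def ogd_asymmetric(b, v, s):
    """OGD for the asymmetric states of section 4.3, from the conditional variances (33) and (34)."""
    var_local = 1 + b + v - s**2 / (1 + b)
    var_joint = var_local if s >= 0 else 1 + b + v - s**2 / (b - 1)
    return np.log(var_local / var_joint)

b, s = 4.0, -2.5
for v in (0.0, 5.0, 50.0, 500.0):
    print(v, ogd_asymmetric(b, v, s))   # decreases toward zero as v grows
```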
5. Operational significance
We now present the operational significance of our measure in terms of a Gaussian protocol for encoding information onto Gaussian quantum states (see figure 3). In this protocol, two independent classical random variables, x_s and p_s, represented by the vector X_s = (x_s, p_s) and described by Gaussian probability distributions
with the same variance V_s, are encoded on the x- and p-quadratures of subsystem A of a joint system in the Gaussian state ρ_AB. The encoding procedure is done by applying the displacement operator D_A(X_s) = exp[i(p_s x_A − x_s p_A)/2] according to the Gaussian distributions for x_s and p_s. The state after encoding thus becomes
ρ'_AB = ∫ dX_s [e^{−(x_s² + p_s²)/2V_s} / (2πV_s)] D_A(X_s) ρ_AB D_A†(X_s).    (36)

The state ρ'_AB is also Gaussian, with covariance matrix σ'_AB = σ_AB + V_s 1 ⊕ 0, where 0 is the 2×2 zero matrix.
The aim is to obtain an estimate of the signals, Y_e = (x_e, p_e), by using some measurement strategy that takes advantage of the correlations between the subsystems in such a way that the classical mutual information I(X_s, Y_e) is maximized. It was shown, using Holevo's theorem, that for maximal encoding, i.e., V_s → ∞, the difference between the maximum extractable information with and without restricting to local measurements is equal to the quantum discord of the state ρ_AB [8]. In order to saturate the extractable information, however, nonGaussian measurements are required, in the way we described earlier for quantum discord. Here we consider a Gaussian version of the protocol, in which there are two measurement strategies: local Gaussian measurements and joint Gaussian measurements.
Consider first the case where subsystem B is not available to us. Assuming the state pAB was in the standard form, the marginal state pA is symmetric with variances a in the x- and p-quadratures. In this case, the covariance matrix of the outcome probability distribution of a Gaussian measurement is given by
( a + L_A + V_s        0
  0          a + 1/L_A + V_s ).    (37)
By using the expression for the mutual information of two parallel Gaussian channels [31], the mutual information between Xs and an estimate of it, Ye, that is obtained after the measurement is given by
I(X_s, Y_e) = ½ ln(1 + V_s/(a + L_A)) + ½ ln(1 + V_s/(a + 1/L_A)).    (38)
This quantity is maximized for L_A = 1, i.e., by performing a heterodyne measurement. There are two sources of noise reducing the mutual information: the noise of the quantum state and the noise associated with the measurement. While the former noise is inevitable due to the uncertainty principle, the latter could be reduced if subsystem B were available to us. In that case, one could take advantage of the correlations by performing some measurement on B and post-processing the outcomes in order to effectively reduce the noise of the state, thus allowing extraction of more information about the signals.
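A quick scan of equation (38) over L_A (our illustration, with arbitrarily chosen a and V_s) confirms that the optimum is the heterodyne point L_A = 1.

```python
import numpy as np

def I_single(LA, a=10.0, Vs=100.0):
    # eq. (38): two parallel Gaussian channels, the x- and p-quadratures of subsystem A
    return 0.5 * np.log(1 + Vs / (a + LA)) + 0.5 * np.log(1 + Vs / (a + 1.0 / LA))

Ls = np.linspace(0.05, 20, 2000)
print("optimal L_A ~", Ls[np.argmax(I_single(Ls))])   # ~ 1 (heterodyne)
```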
When both subsystems are available, in the first strategy one performs local Gaussian measurements on subsystems B and A. This yields a conditional probability distribution for A, with covariance matrix σ'_{A,L} = σ_{A,L} + V_s 1, where σ_{A,L} is given by equation (19). The mutual information, given by I(X_s, Y_e) = ½[ln(1 + V_s/a_{L1}) + ln(1 + V_s/a_{L2})], where a_{L1} and a_{L2} are the eigenvalues of σ_{A,L}, should be maximized over the local measurements, which means maximizing it over the local-measurement covariance matrices V_A and V_B. Thus, the quantity of interest is

I_L(X_s, Y_e) = max_{V_A, V_B} ½ [ln(1 + V_s/a_{L1}) + ln(1 + V_s/a_{L2})].    (39)
In the second strategy, by using a joint Gaussian measurement, one obtains a conditional probability distribution with covariance matrix σ'_{A,J} = σ_{A,J} + V_s 1, where σ_{A,J} is given by equation (17). The mutual information, I(X_s, Y_e) = ½[ln(1 + V_s/a_{J1}) + ln(1 + V_s/a_{J2})], where a_{J1} and a_{J2} are the eigenvalues of σ_{A,J}, should be maximized over the covariance matrix Ṽ_J that describes the joint Gaussian measurement, so the quantity of interest is

I_J(X_s, Y_e) = max_{Ṽ_J} ½ [ln(1 + V_s/a_{J1}) + ln(1 + V_s/a_{J2})].    (40)
In the limit that the classical signals have very large power, Vs is much larger than any of the eigenvalues. In this situation, we have
I_L(X_s, Y_e) ≈ ln V_s − min_{V_A, V_B} ½ ln(a_{L1} a_{L2}) = ln V_s − min_{V_A, V_B} ½ ln(det σ_{A,L})    (41)

for the first strategy, and

I_J(X_s, Y_e) ≈ ln V_s − min_{Ṽ_J} ½ ln(a_{J1} a_{J2}) = ln V_s − min_{Ṽ_J} ½ ln(det σ_{A,J})    (42)

for the second strategy.
In the limit V_s → ∞, the difference between these two mutual informations is equal to the OGD of ρ_AB,

I_J(X_s, Y_e) − I_L(X_s, Y_e) = D_OGD(B → A).    (43)
This relation provides the operational significance for our measure.
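Relation (43) can be probed numerically. The sketch below (our illustration) hard-codes the measurements found optimal in section 4.1 for the two-mode squeezed-vacuum state, namely two heterodyne detections for the local strategy and strongly squeezed two-mode-squeezed POVM elements, equations (22) and (23), for the joint strategy, and shows I_J − I_L approaching D_OGD = 2r as V_s grows.

```python
import numpy as np

def mutual_info(noise_cov, Vs):
    # eqs. (39)/(40): two parallel Gaussian channels with signal variance Vs
    return 0.5 * sum(np.log(1 + Vs / lam) for lam in np.linalg.eigvalsh(noise_cov))

r = 1.0                                    # squeezing parameter; D_OGD = 2r for this state
a, c = np.cosh(2 * r), np.sinh(2 * r)
A, B, C = a * np.eye(2), a * np.eye(2), c * np.diag([1.0, -1.0])
sAB = np.block([[A, C], [C.T, B]])

# local strategy: two heterodyne measurements, eq. (19) with V_A = V_B = 1
sAL = A + np.eye(2) - C @ np.linalg.inv(B + np.eye(2)) @ C.T

# joint strategy: strongly squeezed two-mode-squeezed POVM elements, eqs. (22)-(23)
L = 1e-6
w, z = 0.5 * (1 / L + L), 0.5 * (1 / L - L)
VJ = np.block([[w * np.eye(2), z * np.diag([1.0, -1.0])],
               [z * np.diag([1.0, -1.0]), w * np.eye(2)]])
sJ = sAB + VJ                                                             # eq. (18)
sAJ = sJ[:2, :2] - sJ[:2, 2:] @ np.linalg.inv(sJ[2:, 2:]) @ sJ[2:, :2]    # eq. (17)

for Vs in (1e1, 1e3, 1e5):
    print(Vs, mutual_info(sAJ, Vs) - mutual_info(sAL, Vs), "-> D_OGD =", 2 * r)
```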
For some Gaussian states the local and joint Gaussian measurements used to minimize the conditional entropies for the OGD of ρ'_AB are independent of the value of V_s, as shown for the states considered in section 4.3, whose conditional probability distributions are symmetric Gaussian functions. In this case, one can easily see, for example, by considering equations (33) and (34), that
I_J(X_s, Y_e) − I_L(X_s, Y_e) = D_OGD(B → A) − D'_OGD(B → A),    (44)

where D'_OGD(B → A), the OGD for the state ρ'_AB after encoding, is zero for maximal encoding. According to this relation, the difference between the mutual informations obtained by the above strategies is equal to the amount of nonclassical correlation, in terms of OGD, consumed by encoding the signal. This implies that, for any value of V_s, there is no difference between the two strategies for the CC state, I_L(X_s, Y_e) = I_J(X_s, Y_e); however, for the CA state the joint-Gaussian-measurement strategy is always advantageous with respect to the local-Gaussian-measurement strategy, I_L(X_s, Y_e) < I_J(X_s, Y_e).
6. Conclusion
We have introduced OGD, a new discord-type measure of nonclassical correlations for Gaussian states, which can be experimentally measured by using local and joint Gaussian measurements. We have demonstrated an operational significance for this measure in terms of a Gaussian quantum protocol for extracting information about a classical signal encoded on one subsystem; for maximal encoding, OGD is the additional accessible information that becomes available when an experimentalist throws off the shackles of local Gaussian measurements and starts using joint Gaussian measurements. This measure might also be useful for quantifying nonclassical correlations in resources of other Gaussian protocols that involve Gaussian states, Gaussian operations, and Gaussian measurements.
An interesting open question is how to define a similar measure for discrete-variable systems. This measure can be defined as the difference between two conditional entropies of one subsystem minimized by local and joint measurements. Such a measure might have an operational significance in terms of the discrete-variable version ofthe protocol we considered in this paper and other quantum protocols.
Acknowledgments
This work was supported in part by the Australian Research Council Centre of Excellence for Quantum Computation and Communication Technology (Project No. CE110001027) and by US National Science Foundation Grant Nos. PHY-1314763 and PHY-1212445.
Appendix. Gaussian measurements and entropy of Gaussian probability distributions
In quantum mechanics, the uncertainty principle only allows noisy simultaneous measurements of the phase-space quadratures [32]. In the general formalism for phase-space measurements, the POVM elements associated with the outcomes (x, p) of a single-mode measurement are given by

Π(x, p) = (1/4π) D(x, p) Π_0 D†(x, p),    (A.1)

where D(x, p) is the displacement operator, ∫dx dp Π(x, p) = 1, and the quantum state Π_0, which can be assumed to have zero first-order moments, is a characteristic of the measurement device, sometimes called a quantum filter or ruler [33, 34]. The phase-space measurements for which Π_0 is a Gaussian state are called Gaussian measurements [28]; we show in the following that the outcome probability distributions are Gaussian. It was shown, in general, that Gaussian operations can be implemented using linear-optical elements and homodyne measurements [25, 26]. Hence, Gaussian measurements are equivalently defined as the set of measurements that can be implemented using Gaussian ancilla states, Gaussian unitary operations, and homodyne measurements [35, 36]. As discussed in section 2, we are interested in rank-one Gaussian measurements, i.e., Π_0 is a squeezed-vacuum state, as these states satisfy the minimum uncertainty relation, and the measurement is as accurate as possible.
Note that Gaussian measurements can be put in a slightly more general framework that uses measurements with noncovariant POVMs Π(x, p, y) = p(y) D(x, p) Π_0(y) D†(x, p)/4π, where the Gaussian states Π_0(y) are labeled by an additional parameter y, and p(y) is the probability of outcome y, since ∫dx dp Π(x, p, y) = p(y) 1 [18]. These measurements can be thought of as performing one of a set of random Gaussian measurements by flipping a coin, governed by the probability p(y), to choose which Gaussian measurement. Since we optimize over all Gaussian measurements, a random choice of Gaussian measurements would not be optimal. Formally, including the parameter y in the minimization of the entropy would pick out the value of y for the optimal measurement and lead to the same minimum entropy.
As discussed in the main text, a two-mode Gaussian measurement can be implemented using linear-optical elements and single-mode Gaussian measurements; the POVM elements are
Π(X̄) = (1/16π²) D(ξ) Π_0 D†(ξ).    (A.2)

Here Π_0 is a Gaussian state with zero mean quadratures and the covariance matrix Ṽ_J of equation (13); X̄ = ⟨X⟩ = (x̄_A, p̄_A, x̄_B, p̄_B) is the vector of mean quadratures of the state Π(X̄) and represents the outcomes of the measurement; ξ = (ξ_1, ξ_2, ξ_3, ξ_4) ∈ ℝ⁴ and ξ = −X̄J/2, with
J = J_A ⊕ J_B = (  0   1   0   0
                  −1   0   0   0
                   0   0   0   1
                   0   0  −1   0 )    (A.3)

being the fundamental symplectic matrix; and
D(ξ) = e^{iξX^T} = e^{−iX̄JX^T/2} = e^{i(p̄_A x_A − x̄_A p_A + p̄_B x_B − x̄_B p_B)/2} = D(X̄)    (A.4)
is the two-mode displacement operator.
By using the POVM elements (A.2), the probability distribution of the outcomes of the Gaussian measurement performed on a bipartite quantum system in state ρ_AB is given by

P(X̄) = Tr[ρ_AB Π(X̄)].    (A.5)
By using the characteristic function of the state, φ_AB(ξ) = Tr[ρ_AB D(ξ)], and the characteristic function of the POVM-element states, φ_{Π(X̄)}(ξ) = exp(−½ ξṼ_Jξ^T + iξX̄^T)/(16π²), this distribution can be written as

P(X̄) = ∫ d⁴ξ φ_AB(ξ) φ_{Π(X̄)}(−ξ).    (A.6)
For a zero-mean Gaussian state ρ_AB with covariance matrix σ_AB, the characteristic function becomes φ_AB(ξ) = exp(−½ ξσ_ABξ^T), so we have
P(X̄) = (1/16π⁴) ∫ d⁴ξ e^{−ξ(σ_AB + Ṽ_J)ξ^T/2} e^{−iξX̄^T} = e^{−X̄ σ_{AB,J}^{-1} X̄^T/2} / (4π² √det σ_{AB,J}).    (A.7)
Thus the probability distribution is a Gaussian function with covariance matrix σ_{AB,J} = σ_AB + Ṽ_J, as in equation (18).
Using the continuous Shannon (differential) entropy, the entropy of the joint probability distribution can be found as

H(A, B) = −∫ d⁴X̄ P(X̄) ln(P(X̄)) = ½ ln(det σ_{AB,J}) + 2 ln(2πe).    (A.8)
The constant 2 ln(2πe) does not have an absolute significance; the continuous entropy is only defined up to an additive constant. The difference between two such entropies does, however, have an absolute significance.
When the covariance matrix σ_{AB,J} is written in terms of the block matrices Ã, B̃, and C̃ of equation (18), the inverse is given by [37]

σ_{AB,J}^{-1} = ( σ_{A,J}^{-1}               −Ã^{-1} C̃ σ_{B,J}^{-1}
                  −B̃^{-1} C̃^T σ_{A,J}^{-1}    σ_{B,J}^{-1}           ),    (A.9)

where

σ_{A,J} = Ã − C̃ B̃^{-1} C̃^T,    σ_{B,J} = B̃ − C̃^T Ã^{-1} C̃.    (A.10)
By using this expression and the probability distribution (A.7), one can easily find the conditional probability distribution for A, given measurement results on B, as

P(x̄_A, p̄_A | x̄_B, p̄_B) = P(X̄_A | X̄_B) = e^{−½ R σ_{A,J}^{-1} R^T} / (2π √det σ_{A,J}),    (A.11)

where R = X̄_A − X̄_B B̃^{-1} C̃^T. The covariance matrix of this distribution, σ_{A,J}, is independent of the outcomes x̄_B and p̄_B. Hence, the continuous Shannon entropy of the conditional probability distribution is given by
H(A|B) = ½ ln(det σ_{A,J}) + ln(2πe).    (A.12)
The conditional entropy (A.12) can also be written as

H(A|B) = ½ ln(det σ_{AB,J} / det B̃) + ln(2πe),    (A.13)
since for joint classical probability distributions we have H(A|B) = H(A, B) − H(B). This relation can be regarded as a consequence of the identities

det σ_{AB,J} / det B̃ = det Ã [1 − Tr(C̃ B̃^{-1} C̃^T Ã^{-1}) + (det C̃)² / (det Ã det B̃)] = det σ_{A,J}.    (A.14)
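The identities (A.13) and (A.14) amount to the Schur-complement determinant formula and are easy to confirm numerically; the sketch below (our illustration, using a generic positive-definite matrix in place of σ_{AB,J}) checks that ½ ln(det σ_{A,J}) equals ½ ln(det σ_{AB,J}/det B̃).

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
sigma = M @ M.T + 4 * np.eye(4)      # stands in for the positive-definite matrix sigma_AB + V_J
At, Ct, Bt = sigma[:2, :2], sigma[:2, 2:], sigma[2:, 2:]
sigma_AJ = At - Ct @ np.linalg.inv(Bt) @ Ct.T                   # eq. (A.10)
lhs = 0.5 * np.log(np.linalg.det(sigma_AJ))
rhs = 0.5 * np.log(np.linalg.det(sigma) / np.linalg.det(Bt))    # eqs. (A.13)/(A.14)
print(lhs, rhs)   # equal up to rounding error
```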
We note that by setting Ṽ_J = 0, i.e., by removing the tildes on all quantities so that σ_{AB,J} = σ_AB, the probability distribution P(X̄) of equation (A.7) becomes the Wigner function of the joint state ρ_AB, with equation (A.8) giving the continuous Shannon entropy of the Wigner function. Moreover, we can model a local measurement on B by removing the tildes from à and C̃, i.e., by setting Ṽ_{A,J} = 0 = Ṽ_{C,J} so that σ_{A,J} = γ_A (see equation (6)); in this case the conditional distribution (A.11) becomes the conditional Wigner function of A, with equation (A.12) giving the corresponding continuous Shannon entropy.
References
[1] Ollivier H and Zurek W H 2001 Phys. Rev. Lett. 88 017901
[2] Zurek W H 2000 Ann. Phys., Lpz. 9 855
[3] Henderson L and Vedral V 2001 J. Phys. A: Math. Gen. 34 6899
[4] Datta A, Shaji A and Caves C M 2008 Phys. Rev. Lett. 100 050502
[5] Madhok V and Datta A 2011 Phys. Rev. A 83 032323
[6] Cavalcanti D, Aolita L, Boixo S, Modi K, Piani M and Winter A 2011 Phys. Rev. A 83 032324
[7] Dakic B, Lipp Y O, Ma X, Ringbauer M, Kropatschek S, Barz S, Paterek T, Vedral V, Zeilinger A, Brukner C and Walther P 2012 Nat. Phys. 8 666
[8] Gu M, Chrzanowski H M, Assad S M, Symul T, Modi K, Ralph T C, Vedral V and Lam P K 2012 Nat. Phys. 8 671
[9] Girolami D, Souza A M, Giovannetti V, Tufarelli T, Filgueiras J G, Sarthour R S, Soares-Pinto D O, Oliveira I S and Adesso G 2014 Phys. Rev. Lett. 112 210401
[10] Pirandola S 2014 Sci. Rep. 4 6956
[11] Piani M, Gharibian S, Adesso G, Calsamiglia J, Horodecki P and Winter A 2011 Phys. Rev. Lett. 106 220403
[12] Streltsov A, Kampermann H and Bruß D 2011 Phys. Rev. Lett. 106 160401
[13] Lang M D, Caves C M and Shaji A 2011 Int. J. Quantum Inf. 9 1553
[14] Luo S 2008 Phys. Rev. A 77 042303
[15] Rahimi-Keshari S, Caves C M and Ralph T C 2013 Phys. Rev. A 87 012119
[16] Hosseini S, Rahimi-Keshari S, Haw J Y, Assad S M, Chrzanowski H M, Janousek J, Symul T, Ralph T C and Lam P K 2014 J. Phys. B: At. Mol. Opt. Phys. 47 025503
[17] Adesso G and Datta A 2010 Phys. Rev. Lett. 105 030501
[18] Giorda P and Paris M G A 2010 Phys. Rev. Lett. 105 020503
[19] Giovannetti V, Garcia-Patron R, Cerf N J and Holevo A S 2014 Nat. Photonics 8 796
[20] Mari A, Giovannetti V and Holevo A S 2014 Nat. Commun. 5 3826
[21] Pirandola S, Spedalieri G, Braunstein S L, Cerf N J and Lloyd S 2014 Phys. Rev. Lett. 113 140405
[22] Adesso G and Illuminati F 2007 J. Phys. A: Math. Theor. 40 7821
[23] Simon R 2000 Phys. Rev. Lett. 84 2726
[24] Adesso G, Girolami D and Serafini A 2012 Phys. Rev. Lett. 109 190502
[25] Giedke G and Cirac J I 2002 Phys. Rev. A 66 032316
[26] Eisert J and Plenio M B 2003 Int. J. Quantum Inf. 1 479
[27] Williamson J 1936 Am. J. Math. 58 141
[28] Weedbrook C, Pirandola S, Garcia-Patron R, Cerf N J, Ralph T C, Shapiro J H and Lloyd S 2012 Rev. Mod. Phys. 84 621
[29] Arvind, Dutta B, Mukunda N and Simon R 1995 Pramana 45 471
[30] Braunstein S L 2005 Phys. Rev. A 71 055801
[31] Cover T M and Thomas J A 1991 Elements of Information Theory (New York: Wiley)
[32] Leonhardt U 1997 Measuring the Quantum State of Light (Cambridge: Cambridge University Press)
[33] Wodkiewicz K 1984 Phys. Rev. Lett. 52 1064
[34] Buzek V, Keitel C H and Knight P L 1995 Phys. Rev. A 51 2575
[35] Fiurasek J and Mista L 2007 Phys. Rev. A 75 060302
[36] Takeoka M and Sasaki M 2008 Phys. Rev. A 78 022320
[37] Bernstein D S 2009 Matrix Mathematics: Theory, Facts, and Formulas (Princeton: Princeton University Press)