
Hindawi Publishing Corporation, EURASIP Journal on Advances in Signal Processing, Volume 2009, Article ID 354107, 6 pages, doi:10.1155/2009/354107

Research Article

Random Bit Flipping and EXIT Charts for Nonuniform Binary Sources and Joint Source-Channel Turbo Systems

Xavier Jaspar1 and Luc Vandendorpe2

1 Laborelec, Rodestraat 125, B-1630 Linkebeek, Belgium

2 Communications and Remote Sensing Laboratory, Université catholique de Louvain, Place du Levant 2, B-1348 Louvain-la-Neuve, Belgium

Correspondence should be addressed to Xavier Jaspar, xavier.jaspar@uclouvain.be

Received 29 January 2009; Revised 7 June 2009; Accepted 7 August 2009

Recommended by Athanasios Rontogiannis

Joint source-channel turbo techniques have recently been explored extensively in the literature as one promising way to lower the end-to-end distortion, with fixed length codes, variable length codes, and (quasi) arithmetic codes. Still, many issues remain to be clarified before production use. This short contribution concisely clarifies several issues that arise with EXIT charts and nonuniform binary sources (a nonuniform binary source can be the result of a nonbinary source followed by a binary source code). We propose two histogram-based methods to estimate the charts and discuss their equivalence. The first one is a mathematical generalization of the original EXIT charts to nonuniform bits. The second one uses a random bit flipping to make the bits virtually uniform and has two interesting advantages: (1) it handles straightforwardly outer codes with an entropy varying with the bit position, and (2) it provides a chart for the inner code that is independent of the outer code.

Copyright © 2009 X. Jaspar and L. Vandendorpe. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. Introduction

Consider the generic system in Figure 1. It involves a serial concatenation at the transmitter and a joint source-channel serial turbo decoder [1, 2] at the receiver and is sufficiently general to describe several issues that arise with EXIT charts [3] when the bits $U_:$ are not uniform.

Let us consider that the outer component in Figure 1 is a discrete source of symbols followed by a source code that produces a sequence of $N$ biased or nonuniform bits $U_{1:N}$ or $U_:$. After random interleaving by $\pi$, the inner component is a channel code that produces a sequence of coded bits $R_:$ which are sent across the channel. We assume that the channel code is linear and the channel is binary, symmetric, memoryless, and time invariant. At the receiver, an iterative decoder is used, based on two decoders, one for each code. They exchange log-likelihood ratios (LLRs) iteratively, in a typical joint source-channel serial configuration [1, 2]. In the following, let $L_{O,:}^s$ be the output LLRs of the (outer) source decoder, let $L_{O,:}^c$ be the deinterleaved output LLRs of the (inner) channel decoder, and let $L_{I,:}^c = L_{O,:}^s$ and $L_{I,:}^s = L_{O,:}^c$ be the corresponding input LLRs.
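As a structural illustration only, the following minimal Python sketch mirrors this exchange of LLRs. The decoder callables `channel_decode` and `source_decode` are hypothetical placeholders standing in for soft-in soft-out decoders of the two codes; the paper does not prescribe this interface.

```python
# Minimal sketch of the LLR exchange in Figure 1. The two decoder callables
# are hypothetical placeholders: any soft-in soft-out decoders returning LLRs,
# e.g. BCJR-based ones, would fit.
import numpy as np

def turbo_decode(y, channel_decode, source_decode, perm, n_iter=10):
    """Serial joint source-channel turbo loop exchanging LLRs through pi."""
    inv_perm = np.argsort(perm)            # pi^{-1}
    L_I_c = np.zeros(len(perm))            # a priori LLRs of the channel decoder
    L_O_s = np.zeros(len(perm))
    for _ in range(n_iter):
        # Inner (channel) decoder: only it sees the channel values y.
        L_O_c = channel_decode(y, L_I_c)   # output LLRs on the interleaved bits
        L_I_s = L_O_c[inv_perm]            # deinterleave -> source decoder input
        # Outer (source) decoder: only it knows the source statistics.
        L_O_s = source_decode(L_I_s)       # output LLRs on U_{1:N}, bias included
        L_I_c = L_O_s[perm]                # interleave -> channel decoder a priori
    return L_O_s
```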

To assess the convergence of this iterative decoder, the EXIT charts introduced in [3] can be used when the bits are uniform. Unfortunately, when the bits are not uniform, a naive application of the EXIT charts that would neglect the bias might lead to inaccuracy issues. A few contributions already paid attention to some of these issues, notably [2, 4, 5]. This paper attempts to clarify them all.

This short paper concisely presents, in Section 2, two simple histogram-based techniques to estimate the EXIT charts when the bits are not uniform and discusses, in Section 3, the (dis)advantages of each. The first technique is based on the system in Figure 1 and is a generalization of [5]; the second one is based on the system in Figure 2, where a random bit flipping is introduced. These techniques are shown to lead to different but equivalent charts under some assumptions. At last, we show that the well-known technique in [6], though historically developed for uniform bits, provides "correct" EXIT charts with nonuniform bits, which is an interesting conclusion for readers familiar with this technique. Please note that this paper relates only to histogram-based computations of EXIT charts, but the presented ideas and concepts are compatible with the analytical computation proposed in [7].

Figure 1: Generic joint source-channel turbo system (source coder, interleaver $\pi$, channel coder, and channel at the transmitter; channel decoder and source decoder exchanging LLRs through $\pi$ and $\pi^{-1}$ at the receiver).

Figure 2: Equivalent system with a random bit flipping (flipped source (de)coder), $U_k' = U_k F_k \in \{+1, -1\}$. As in Figure 1, we define $L_{I,:}^{\prime c} = L_{O,:}^{\prime s}$ and $L_{I,:}^{\prime s} = L_{O,:}^{\prime c}$.

In the remainder, random variables are written with capital letters and realizations with small letters. $P(z)$ is the abbreviation of the probability $P(Z = z)$. The subsequence $(Z_m, Z_{m+1}, \ldots, Z_n)$ is written $Z_{m:n}$ or $Z_:$ when $m, n$ can be omitted. $\mathbb{I}\{a\}$ is the indicator function, that is, $\mathbb{I}\{a\}$ equals 1 if $a$ is true, 0 otherwise. $E\{Z\}$ is the expectation of $Z$. $I(Y; Z)$ is the mutual information between $Y$ and $Z$. $H(Z) = I(Z; Z)$ is the entropy of $Z$. $H_b(p)$ is the binary entropy function, that is, $H_b(p) = -p\log_2(p) - (1 - p)\log_2(1 - p)$.

2. Computation of the EXIT Charts

The main results of this section are summarized in Table 1. For clarity, the first method of computation, which is based on biased bits, is given the name BEXIT while the second one, which is based on flipped bits, is called FEXIT. For the sake of conciseness, some familiarity with [3] is assumed.

2.1. Assumptions, Notations, and Consistency. We assume that the channel code is linear and the channel is binary, symmetric, memoryless, and time invariant. Besides, we assume that the channel and source decoders, taken apart, are optimal. Specifically, let $R_k^c = (Y_:, L_{I,1:k-1}^c, L_{I,k+1:N}^c)$ and $R_k^s = (L_{I,1:k-1}^s, L_{I,k+1:N}^s)$, and assume that the elements in $R_k^c$ and in $R_k^s$ are independent; then the output LLRs on $U_k$ of the channel and source decoders in Figure 1 are considered to be, respectively,

$L_{O,k}^c = \log \frac{p(R_k^c \mid U_k = +1)}{p(R_k^c \mid U_k = -1)}$, (1)

$L_{O,k}^s = \log \frac{p(R_k^s, U_k = +1)}{p(R_k^s, U_k = -1)} = \log \frac{p(R_k^s \mid U_k = +1)}{p(R_k^s \mid U_k = -1)} + L_{U,k}$, (2)

where the source bias $L_{U,k}$ is defined as

$L_{U,k} = \log \frac{P(U_k = +1)}{P(U_k = -1)}$, with $0 < P(U_k = +1) < 1$. (3)

Note that in the flipped case in Section 2.3, $L_{U,k}' = 0$. Note also that generalizing the results below to $P(U_k = +1) \in \{0, 1\}$ (e.g., in the case of pilot bits) is straightforward by taking the limit of $|L_{U,k}|$ toward infinity where appropriate.
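For the reader who prefers numbers, here is a small check, with arbitrary illustrative likelihood values, of the decomposition in (2)-(3): the joint-probability LLR of the source decoder equals the likelihood LLR plus the source bias.

```python
# Toy numerical check of (2)-(3). The likelihood values p(R | U = +-1)
# below are arbitrary illustrative numbers, not from the paper.
import math

P_plus = 0.741                              # P(U_k = +1); value used in Section 3.2
L_U = math.log(P_plus / (1.0 - P_plus))     # source bias, equation (3)

p_R_plus, p_R_minus = 0.02, 0.05            # hypothetical p(R_k^s | U_k = +-1)

lhs = math.log((p_R_plus * P_plus) / (p_R_minus * (1.0 - P_plus)))
rhs = math.log(p_R_plus / p_R_minus) + L_U
print(lhs, rhs)                             # identical up to rounding error
```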

To avoid any confusion, here are some further considerations. Firstly, we prefer not to use the term "extrinsic" for the LLRs $L_{O,k}^c$ and $L_{O,k}^s$ because this term is used differently by authors in the literature. Some authors consider these $L_{O,k}^c$ and $L_{O,k}^s$ in (1)-(2) as extrinsic; others consider $L_{O,k}^c$ (with $Y_k$ excluded in the case of a systematic channel code) and $L_{O,k}^s - L_{U,k}$ as extrinsic. Secondly, we consider a typical serial concatenation where only the (inner) channel decoder has access to the channel values. It must therefore share this piece of information with the source decoder through $L_{O,k}^c$. This is why $L_{O,k}^c$ in (1) depends on all channel values $Y_:$ through $R_k^c$ (even if the channel code is systematic). Similarly, only the (outer) source decoder "knows" the source a priori probabilities. This is why, to share it with the channel decoder, the source bias $L_{U,k}$ is included in $L_{O,k}^s$ in (2).

Definition 1 (P-consistency and L-consistency). $L^P$ is posterior-consistent or P-consistent with $U$ if

$p(L^P = l, U = +1) = e^l\, p(L^P = l, U = -1)$. (4)

$L^L$ is likelihood-consistent or L-consistent with $U$ if

$p(L^L = l \mid U = +1) = e^l\, p(L^L = l \mid U = -1)$. (5)

Note that if $L^L$ is L-consistent, then $L^L + L_U$ is P-consistent.

Proposition 1. The output LLR of the channel decoder, $L_{O,k}^c$ in (1), is L-consistent with $U_k$. The output LLR of the source decoder, $L_{O,k}^s$ in (2), is P-consistent with $U_k$.

Proof. It follows from (1)-(2) and integration of $p(R_k^c \mid U_k)$ and $p(R_k^s \mid U_k)$. □

Note that when the symmetry condition $p(L = -l \mid U = +1) = p(L = l \mid U = -1)$ is satisfied, the consistency in [4, 6] and the L-consistency (5) are equivalent.
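The Gaussian LLR model used in Table 1 below provides a concrete instance of these definitions; a minimal numerical check, with arbitrary test values of $\mu$ and $l$, is sketched here.

```python
# Check that L = mu*U + N with N ~ N(0, 2*mu) is L-consistent (5), and that
# adding L_U makes it P-consistent (4), as noted after Definition 1.
import math

def gauss(x, mean, var):
    """Gaussian density with the given mean and variance."""
    return math.exp(-(x - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

mu = 1.3                                   # arbitrary; L = mu*U + N, N ~ N(0, 2*mu)
P_plus = 0.741
L_U = math.log(P_plus / (1.0 - P_plus))

for l in (-2.0, 0.5, 3.1):
    # L-consistency (5): p(L = l | U = +1) / p(L = l | U = -1) = e^l.
    print(gauss(l, +mu, 2 * mu) / gauss(l, -mu, 2 * mu), math.exp(l))
    # P-consistency (4) of L + L_U: joint ratio p(., U=+1) / p(., U=-1) = e^l.
    num = P_plus * gauss(l - L_U, +mu, 2 * mu)
    den = (1.0 - P_plus) * gauss(l - L_U, -mu, 2 * mu)
    print(num / den, math.exp(l))
```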

Table 1: Summary of the two Monte-Carlo methods.

BEXIT charts: with biased bits, Figure 1.
Generation of the input LLRs: given a value of $I_I$, set $\mu_L = J_{L_U}^{-1}(I_I)$ and $N_L \sim \mathcal{N}(0, 2\mu_L)$, then
(I) $L_I^s = \mu_L U + N_L$,
(II) $L_I^c = \mu_L U + N_L + L_U$,
where $L_I^s$ is L-consistent and $L_I^c$ is P-consistent.
Measurement of the output information for consistent LLRs: if $L_O^s$ is P-consistent and $L_O^c$ is L-consistent, we have
(IV) $I_O^s = H_U - E\{\log_2(1 + e^{-U L_O^s})\}$,
(V) $I_O^s = H_U - E\{H_b(1/(1 + e^{L_O^s}))\}$,
(VI) $I_O^c = H_U - E\{\log_2(1 + e^{-U(L_U + L_O^c)})\}$,
(VII) $I_O^c = H_U - E\{H_b(1/(1 + e^{L_U + L_O^c}))\}$.
Under some ergodic assumption, these expectations can be approached by time averages as in [6].

FEXIT charts: with flipped bits, Figure 2.
Generation of the input LLRs, for both channel and source codes: given a value of $I_I'$, set $\mu_L' = J_0^{-1}(I_I')$ and $N_L \sim \mathcal{N}(0, 2\mu_L')$, then
(III) $L_I' = \mu_L' U' + N_L$. Note, $L_U' = 0$.
Measurement of the output information for consistent LLRs: if $L_O'$ is consistent, we have for both channel and source codes
(VIII) $I_O' = 1 - E\{\log_2(1 + e^{-U' L_O'})\}$,
(IX) $I_O' = 1 - E\{H_b(1/(1 + e^{L_O'}))\}$.
Under some ergodic assumption, these expectations can be approached by time averages as in [6]. Note, $L_U' = 0$ and $H_U' = 1$.

Link between BEXIT charts and FEXIT charts (recall that $I_I^s = I_O^c$ and $I_I^c = I_O^s$, and similarly for the primed quantities):
(X) $I_O^c \approx J_{L_U}(J_0^{-1}(I_O^{\prime c}))$, equivalently $I_I^s \approx J_{L_U}(J_0^{-1}(I_I^{\prime s}))$;
(XI) $I_O^{\prime s} = I_O^s + (1 - H_U)$, equivalently $I_I^{\prime c} = I_I^c + (1 - H_U)$.

Let us assume that $L_U = L_{U,k}$ is independent of $k$; that is, the bits $U_k$ have the same entropy $H_U = H_{U,k} = H_b(P(U_k = +1))$ independently of $k$. Let us then measure the input and output levels of information as

$I = \lim_{N \to \infty} \frac{1}{N} \sum_{k=1}^{N} I(U_k; L_k) \in [0, H_U]$, (6)

where $(I, L_k)$ is $(I_I^c, L_{I,k}^c)$, $(I_I^s, L_{I,k}^s)$, $(I_O^c, L_{O,k}^c)$, or $(I_O^s, L_{O,k}^s)$. As in [3], we will compute the charts $I_O^c = T^c(I_I^c)$ and $I_O^s = T^s(I_I^s)$ by feeding the decoders with input LLRs and by measuring the level of output information.

Remark 1. The channel (resp., source) decoder will be fed with input LLRs that are independent, P-consistent (resp., L-consistent, in agreement with Proposition 1), and Gaussian.

Proposition 2. Let $L_U^\star$ be equal to 0 if $L$ is P-consistent with $U$, and let it be equal to $L_U$ if $L$ is L-consistent. Then

$P(U = u \mid L = l) = \frac{1}{1 + e^{-u(L_U^\star + l)}}$. (7)

Proof. It follows from (4)-(5) and from $P(U = +1 \mid L = l) + P(U = -1 \mid L = l) = 1$. □
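A minimal sketch of the posterior (7), with an arbitrary test value of $l$; the name `L_U_star` stands for $L_U^\star$ above.

```python
# Posterior of Proposition 2, equation (7). L_U_star is 0 for a P-consistent
# LLR and equals L_U for an L-consistent one.
import math

def posterior(u, l, L_U_star=0.0):
    """P(U = u | L = l) for a consistent LLR, equation (7)."""
    return 1.0 / (1.0 + math.exp(-u * (L_U_star + l)))

l = 1.7
print(posterior(+1, l) + posterior(-1, l))   # sums to 1, as used in the proof
```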

2.2. BEXIT Charts, Figure 1: Biased Bits. We can compute the BEXIT charts of the system in Figure 1 by analytical generalization of the original EXIT charts [3].

(1) Generating the input LLRs. Let us consider a sequence of bits $U_:$ generated by the source coder and let us focus on one of these bits, namely, $U$. Given a value $I_I = I_I^s$, the input LLR $L_I^s$ on $U$ is generated as in (I) in Table 1, where $N_L$ is a centered Gaussian random variable of variance $2\mu_L$ and the invertible function $J_{L_U}(\cdot)$ is given by

$J_{L_U}(\mu) = H_U - \sum_{u \in \{+1,-1\}} P(U = u) \int_{-\infty}^{+\infty} \frac{e^{-(l - \mu u)^2/(4\mu)}}{\sqrt{4\pi\mu}} \log_2\!\left(1 + e^{-u(L_U + l)}\right) dl$. (8)

This $L_I^s$ in (I), Table 1, is L-consistent, in agreement with Remark 1.

For the channel decoder, given a value $I_I = I_I^c$, the input LLR $L_I^c$ on $U$ is generated as in (II), Table 1. Compared to (I), Table 1, the constant term $L_U$ is necessary to make $L_I^c$ P-consistent.
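To make the recipe concrete, here is a minimal numerical sketch of $J_{L_U}(\cdot)$ in (8) and of the generators (I)-(II) of Table 1. The Riemann-sum integration, the bisection inverse, and the parameter values are implementation choices of ours, not prescriptions of the paper.

```python
# Numerical sketch of equation (8) and of the input-LLR generation (I)-(II).
import numpy as np

def Hb(p):
    """Binary entropy function H_b(p)."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def J_Lu(mu, L_U, n=20001):
    """J_{L_U}(mu): information carried by an L-consistent Gaussian LLR, eq. (8)."""
    if mu <= 0.0:
        return 0.0
    P_plus = 1.0 / (1.0 + np.exp(-L_U))
    H_U = Hb(P_plus)
    sigma = np.sqrt(2.0 * mu)
    l = np.linspace(-mu - 10 * sigma, mu + 10 * sigma, n)
    dl = l[1] - l[0]
    out = H_U
    for u, Pu in ((+1, P_plus), (-1, 1.0 - P_plus)):
        pdf = np.exp(-(l - mu * u) ** 2 / (4.0 * mu)) / np.sqrt(4.0 * np.pi * mu)
        # log2(1 + e^x) computed stably as logaddexp(0, x) / log(2)
        out -= Pu * np.sum(pdf * np.logaddexp(0.0, -u * (L_U + l))) / np.log(2) * dl
    return out

def J_Lu_inv(I, L_U, lo=1e-6, hi=100.0, iters=60):
    """Invert J_{L_U} by bisection (J_{L_U} is increasing in mu)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if J_Lu(mid, L_U) < I else (lo, mid)
    return 0.5 * (lo + hi)

# Generate a block of biased bits and the two kinds of input LLRs of Table 1.
rng = np.random.default_rng(0)
P_plus, N = 0.741, 10000
L_U = np.log(P_plus / (1.0 - P_plus))
U = np.where(rng.random(N) < P_plus, +1, -1)    # biased bits
mu_L = J_Lu_inv(0.5, L_U)                       # target input information I_I = 0.5
N_L = rng.normal(0.0, np.sqrt(2.0 * mu_L), N)
L_I_s = mu_L * U + N_L                          # (I): L-consistent, source decoder
L_I_c = mu_L * U + N_L + L_U                    # (II): P-consistent, channel decoder
```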

(2) Measuring the output information. Let us consider the whole sequence of bits $U_:$ and the corresponding output LLRs $L_{O,:}$. For both decoders, we can measure the output information by (6) with

$I(U_k; L_{O,k}) = H_U - E\left\{\log_2 \frac{1}{P(U_k \mid L_{O,k})}\right\}$. (9)

This expression can be evaluated by approaching $P(U_k \mid L_{O,k})$ with histogram measurements as in [3].

Assuming consistent LLRs makes things simpler. We can indeed simplify (6) and (9) into (IV) and (VI), Table 1, by Proposition 2 and by (9). In addition, we can simplify (IV) into (V) since

$E\{\log_2(1 + e^{-U L_O^s}) \mid L_O^s\} = \sum_{u \in \{+1,-1\}} P(U = u \mid L_O^s) \log_2(1 + e^{-u L_O^s}) = \frac{\log_2(1 + e^{L_O^s})}{1 + e^{L_O^s}} + \frac{\log_2(1 + e^{-L_O^s})}{1 + e^{-L_O^s}} = H_b\!\left(\frac{1}{1 + e^{L_O^s}}\right)$, (10)

where $P(U = u \mid L_O^s)$ is given in (7). Similarly, we can simplify (VI) into (VII), Table 1. Note that (IV), Table 1, is equivalent to [5, equation (4)] and is an extension of [6, equation (4)].
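A minimal sketch of these time-average measurements (IV)-(VII) follows; for the sake of a self-contained example, synthetic consistent Gaussian LLRs stand in for actual decoder outputs, and the block length and $\mu$ are arbitrary.

```python
# Time-average estimates of the output information, formulas (IV)-(VII) of Table 1.
import numpy as np

def Hb(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def info_out_source(L_O_s, U, H_U):
    """(IV)/(V): output information from P-consistent source-decoder LLRs."""
    iv = H_U - np.mean(np.logaddexp(0.0, -U * L_O_s)) / np.log(2)
    v = H_U - np.mean(Hb(0.5 * (1.0 - np.tanh(L_O_s / 2.0))))  # 1/(1+e^L), stably
    return iv, v

def info_out_channel(L_O_c, U, L_U, H_U):
    """(VI)/(VII): output information from L-consistent channel-decoder LLRs."""
    vi = H_U - np.mean(np.logaddexp(0.0, -U * (L_U + L_O_c))) / np.log(2)
    vii = H_U - np.mean(Hb(0.5 * (1.0 - np.tanh((L_U + L_O_c) / 2.0))))
    return vi, vii

# Quick self-check with synthetic L-consistent LLRs generated as in (I).
rng = np.random.default_rng(1)
P_plus, N, mu = 0.741, 200000, 2.0
L_U = np.log(P_plus / (1.0 - P_plus))
H_U = Hb(P_plus)
U = np.where(rng.random(N) < P_plus, +1, -1)
L = mu * U + rng.normal(0.0, np.sqrt(2.0 * mu), N)      # L-consistent LLRs
print(info_out_channel(L, U, L_U, H_U))                  # two close values in [0, H_U]
```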

2.3. FEXIT Charts, Figure 2: Flipped Bits. Let us now consider the system in Figure 2. To make the bit stream uniform, we have introduced a random bit flipping before the interleaver $\pi$, that is, $U_k' = U_k F_k$ for all $k$, where the $F_k \in \{+1, -1\}$ are independent and uniformly distributed. At the receiver, the corresponding LLRs are flipped accordingly. By linearity of the channel code and symmetry of the channel, the flipped system in Figure 2 is equivalent to the original system in Figure 1. Consequently, the EXIT charts of the flipped system, namely, FEXIT charts, can be used to characterize the original system. For clarity, all symbols related to the flipped system use a prime (') notation.

With FEXIT charts, we are interested in the exchange of information about $U_:'$ between the channel decoder and the flipped source decoder in Figure 2. Since the bits $U_k'$ are uniform, we can use the results obtained so far with $L_U' = 0$ and $H_U' = 1$ (see Table 1). This is equivalent to [3]; in particular, the function $J_0(\cdot) = J_{L_U = 0}(\cdot)$ is related to the function $J(\cdot)$ in [3] with $J_0(\mu') = J(\sqrt{2\mu'})$.
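The flipping itself is a one-line operation; the sketch below illustrates it together with the FEXIT measurement (VIII), again with synthetic consistent LLRs in place of real decoder outputs.

```python
# Random flipping makes the bits uniform, so (VIII) applies with H'_U = 1.
import numpy as np

rng = np.random.default_rng(2)
P_plus, N, mu = 0.741, 200000, 2.0
U = np.where(rng.random(N) < P_plus, +1, -1)     # biased bits of Figure 1
F = np.where(rng.random(N) < 0.5, +1, -1)        # random flipping
U_f = U * F                                      # U'_k = U_k F_k, now uniform

L_O_f = mu * U_f + rng.normal(0.0, np.sqrt(2.0 * mu), N)   # consistent LLRs on U'

I_O_f = 1.0 - np.mean(np.logaddexp(0.0, -U_f * L_O_f)) / np.log(2)   # (VIII)
print(I_O_f)   # lies in [0, 1]; compare with the BEXIT measurement above
```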

3. Transformations, Equivalence, and Discussion

3.1. Transformations and Equivalence. The BEXIT and FEXIT charts are equivalent under the assumptions of Section 2.1, up to the approximation (X), Table 1. Indeed, under these assumptions, transformations to obtain the FEXIT chart from the BEXIT chart, and vice versa, are given in (X) and (XI), Table 1. To prove (X), Table 1, let us consider $L_I^s$ given in (I), Table 1, in the biased case. If we apply the flipping $F$ on $L_I^s$, we get $L_I^{\prime s} = L_I^s F = \mu_L U F + N_L F = \mu_L U' + N_L F$, and it is self-evident that this $L_I^{\prime s}$ is equivalent to (III), Table 1, if $\mu_L = \mu_L'$, that is, if $J_{L_U}^{-1}(I_I^s) = J_0^{-1}(I_I^{\prime s})$, which proves (X), Table 1, for consistent Gaussian LLRs. For consistent non-Gaussian LLRs, we can invoke the empirical robustness of EXIT charts with respect to the statistical model of the LLRs and assume that (X), Table 1, is a sufficient approximation, hence the approximation symbol "$\approx$". To prove (XI), Table 1, we use the following equalities:

$I_O^s - H_U = -E\{\log_2(1 + e^{-U L_O^s})\}$ by (IV), (11)

$= -E\{\log_2(1 + e^{-U' L_O^{\prime s}})\}$, (12)

$= I_O^{\prime s} - 1$ by (VIII), (13)

where (12) comes from $U' L_O^{\prime s} = (U F)(L_O^s F) = U L_O^s$.
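As a sketch only, the conversions could be applied to whole charts as follows; the pairing of the transformations with the two chart axes follows our reading of Table 1, and `J_Lu`/`J_Lu_inv` are assumed to be the numerical routines sketched after equation (8).

```python
# Applying transformations (X)-(XI) of Table 1 to convert between chart coordinates.
import numpy as np

def fexit_to_bexit_channel(I_I_f, I_O_f, L_U, H_U, J_Lu, J_Lu_inv):
    """Channel-code chart, FEXIT -> BEXIT: (XI)-type shift on the input axis, (X) on the output axis."""
    I_I_b = np.asarray(I_I_f) - (1.0 - H_U)
    I_O_b = np.array([J_Lu(J_Lu_inv(i, 0.0), L_U) for i in np.atleast_1d(I_O_f)])
    return I_I_b, I_O_b

def bexit_to_fexit_source(I_I_b, I_O_b, L_U, H_U, J_Lu, J_Lu_inv):
    """Source-code chart, BEXIT -> FEXIT: (X)-type map on the input axis, (XI) shift on the output axis."""
    I_I_f = np.array([J_Lu(J_Lu_inv(i, L_U), 0.0) for i in np.atleast_1d(I_I_b)])
    I_O_f = np.asarray(I_O_b) + (1.0 - H_U)
    return I_I_f, I_O_f
```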

3.2. Simulation Results. To illustrate the equivalence, let us compute the charts of the following (nonoptimized) system. The outer component in Figure 1 is a memoryless source of 3 symbols with probabilities 0.85, 0.14, and 0.01, transcoded, respectively, into the variable length codewords (+1), (-1, -1), (-1, +1, -1), leading to $P(U = +1) = 0.741$ and $H_U = 0.825$. The channel code is a rate-1/2 recursive systematic convolutional code with forward generator $35_8$ and feedback generator $23_8$ (in octal). The channel is an additive white Gaussian noise channel with binary phase-shift keying and $E_b/N_0 = 1.4$ dB, where $E_b$ is the energy per bit of entropy and $N_0/2$ the double-sided noise spectral density. Note that the source decoder is based on the BCJR algorithm [8] applied to Balakirsky's bit-trellis [9].
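The quoted source statistics can be checked directly from the codeword table; a short sketch:

```python
# Check of the values P(U = +1) = 0.741 and H_U = 0.825 quoted in Section 3.2.
import math

symbols = [(0.85, [+1]), (0.14, [-1, -1]), (0.01, [-1, +1, -1])]
mean_len = sum(p * len(cw) for p, cw in symbols)            # mean codeword length
mean_plus = sum(p * cw.count(+1) for p, cw in symbols)      # mean number of +1 bits
P_plus = mean_plus / mean_len
H_U = -P_plus * math.log2(P_plus) - (1 - P_plus) * math.log2(1 - P_plus)
print(round(P_plus, 3), round(H_U, 3))                      # 0.741 and 0.825
```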

The BEXIT and FEXIT charts of the system are given in Figures 3 and 4. The solid lines show the charts obtained with the methods described in Sections 2.2 and 2.3. The data points show the BEXIT and FEXIT charts obtained by applying the transformations (X)-(XI), Table 1, respectively, on the FEXIT and BEXIT charts given in solid lines. The good match between the data points and the solid lines illustrates the equivalence between FEXIT and BEXIT charts. All system configurations we have tested confirm this equivalence, at least in the tested range $P(U = +1) \in [0.05, 0.95]$.

Finally, when we neglect the bias and blindly apply the original method of [3] (generating the input LLRs with [3, equation (9)] and [3, equation (18)] and measuring the output information with [3, equation (19)]), we actually obtain, for the channel decoder, its FEXIT chart and, for the source decoder, the dashed line in Figure 4. As we can see, these charts intersect each other and give a prediction of convergence that is too pessimistic.

Figure 3: BEXIT charts of the system described in Section 3.2 (solid lines: BEXIT charts; points: FEXIT charts after transformation (X)).

Figure 4: FEXIT charts of the system described in Section 3.2 (solid lines: FEXIT charts; points: BEXIT charts after transformation (XI); dashed line: EXIT chart of the VLC decoder with the bias neglected).


3.3. Discussion. In terms of (dis)advantages of each method, FEXIT charts, unlike BEXIT charts, are limited to linear channel codes and symmetric channels. Nevertheless, when the channel code is linear and the channel symmetric, FEXIT charts have at least two benefits. Firstly, the FEXIT chart of the (inner) channel code is independent of the (outer) source code, so we do not need to recompute it when the source code changes. By contrast, we need to recompute the BEXIT chart of the channel code when $L_U$ changes since it depends on $L_U$ (see (II), (VI), and (VII) in Table 1). Secondly, FEXIT charts can handle very easily (outer) source codes with an entropy $H_{U,k}$ that depends on $k$ (recall that we have assumed that $L_{U,k}$, therefore $H_{U,k}$, is independent of $k$ in Section 2.1), simply because the random bit flipping makes the entropy equal to 1 for all $k$. On the contrary, no method to handle them with BEXIT charts is known to the authors. Note that a varying $H_{U,k}$ is not uncommon in practice: fixed length codes, variable length codes with periodic bit-clock trellises, mixtures of such codes, and so forth.

At last, among related contributions in the literature, the well-known technique in [6] is of particular interest. Though historically developed for uniform bits, this technique gives, without bit flipping, a correct prediction of convergence when the channel code is linear and the channel is symmetric. Indeed, it computes the output information [6, equation (5)] as (with some mathematical rewriting)

$1 - E\{\log_2(1 + e^{-U L_O})\}$.

Since $U' L_O' = (U F)(L_O F) = U L_O$, it is equivalent to (VIII), Table 1, and thus to the FEXIT charts presented in this paper.

4. Conclusion

Two methods have been presented to handle nonuniform bits in the computation of EXIT charts. Though proved to be equivalent for the prediction of convergence under certain assumptions, they have different pros and cons. For example, the FEXIT method is restricted to linear inner codes and symmetric channels while the BEXIT method is not. But the FEXIT method handles very easily a mixture of bits having different entropies and offers a chart for the inner channel decoder (of a serial concatenation) that is independent of the outer source code, unlike the BEXIT method; this independence greatly simplifies subsequent optimizations of the concatenated code. In practice, both methods are therefore complementary and help to analyze joint source-channel turbo systems via EXIT charts.

Acknowledgment

The authors greatly thank the reviewers for their constructive comments. The work of X. Jaspar is supported by the F.R.S.-FNRS, Belgium.

References

[1] A. Guyader, E. Fabre, C. Guillemot, and M. Robert, "Joint source-channel turbo decoding of entropy-coded sources," IEEE Journal on Selected Areas in Communications, vol. 19, no. 9, pp. 1680-1696, 2001.

[2] M. Adrat and P. Vary, "Iterative source-channel decoding: Improved system design using EXIT charts," EURASIP Journal on Applied Signal Processing, vol. 2005, no. 6, pp. 928-947, 2005.

[3] S. ten Brink, "Convergence behavior of iteratively decoded parallel concatenated codes," IEEE Transactions on Communications, vol. 49, no. 10, pp. 1727-1737, 2001.

[4] J. Hagenauer, "The EXIT chart—introduction to extrinsic information transfer in iterative processing," in Proceedings of the 12th European Signal Processing Conference (EUSIPCO '04), Vienna, Austria, September 2004.

[5] N. Dütsch, "Code optimisation for lossless compression of binary memoryless sources based on FEC codes," European Transactions on Telecommunications, vol. 17, no. 2, pp. 219-229, 2006.

[6] M. Tüchler and J. Hagenauer, "EXIT charts of irregular codes," in Proceedings of the 36th Annual Conference on Information Sciences and Systems (CISS '02), Princeton, NJ, USA, March 2002.


[7] M. Adrat, J. Brauers, T. Clevorn, and P. Vary, "The EXIT-characteristic of softbit-source decoders," IEEE Communications Letters, vol. 9, no. 6, pp. 540-542, 2005.

[8] L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Transactions on Information Theory, vol. 20, no. 2, pp. 284-287, 1974.

[9] V. B. Balakirsky, "Joint source-channel coding with variable length codes," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '97), p. 419, Ulm, Germany, July 1997.