
EURASIP Journal on Applied Signal Processing 2005:6, 928-941 © 2005 M. Adrat and P. Vary

Iterative Source-Channel Decoding: Improved System Design Using EXIT Charts

Marc Adrat

Institute of Communication Systems and Data Processing, Aachen University of Technology (RWTH), 52056 Aachen, Germany Email: adrat@ind.rwth-aachen.de

Peter Vary

Institute of Communication Systems and Data Processing, Aachen University of Technology (RWTH), 52056 Aachen, Germany Email: vary@ind.rwth-aachen.de

Received 1 October 2003; Revised 5 April 2004

The error robustness of digital communication systems using source and channel coding can be improved by iterative source-channel decoding (ISCD). The turbo-like evaluation of natural residual source redundancy and of artificial channel coding redundancy makes step-wise quality gains possible over several iterations. The maximum number of profitable iterations is predictable by an EXIT chart analysis. In this contribution, we exploit the EXIT chart representation to improve the error correcting/concealing capabilities of ISCD schemes. We propose new design guidelines to select appropriate bit mappings and to design the channel coding component. A parametric source coding scheme with some residual redundancy is assumed. Applying both innovations, the new EXIT-optimized index assignment as well as the appropriately designed recursive nonsystematic convolutional (RNSC) code make it possible to outperform known approaches to ISCD by far in the most relevant channel conditions.

Keywords and phrases: iterative source-channel decoding, turbo principle, soft-input/soft-output decoding, softbit source decoding, extrinsic information, EXIT charts.

1. INTRODUCTION

The design and development guidelines for today's digital communication systems are inspired by the information theoretic considerations of C. E. Shannon. His fundamental statements indicate that, in order to find the most error resistant realization of a communication system, the transmit and receive operations are in principle separable into source coding and channel coding. However, achieving the global optimum with this two-stage process may require impractical computational complexity, unlimited signal delay, and stationary source signals. Taking the realistic constraints of real-world communication systems into account, a separate treatment of source and channel coding usually inflicts a loss of optimality. Joint source-channel coding makes it possible to narrow the gap to the global optimum.

The present contribution addresses a novel concept for joint source-channel coding. A new method is proposed to

This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

improve the error robustness of existing or emerging digital mobile communication systems like GSM (global system for mobile communications) or UMTS (universal mobile telecommunications system), or the digital audio/video broadcasting systems (DAB/DVB). In these systems the source coding part extracts characteristic parameters from the original speech, audio, or video signal. Usually, these source codec parameters exhibit considerable natural residual redundancy such as a nonuniform parameter distribution or correlation. The utilization of this residual redundancy at the receiver helps to cope with transmission errors.

Besides several other concepts utilizing residual redundancy at the receiver to enhance the error robustness, two outstanding examples are known as source-controlled channel decoding (SCCD) [1,2,3,4] and as softbit source decoding (SBSD) [5]. On the one hand, SCCD exploits the natural residual redundancy during channel decoding for improved error correction. On the other hand, softbit source decoding performs error concealment. SBSD can reduce the annoying effect of residual bit errors remaining after channel decoding.

The error concealing capabilities of SBSD can be improved if artificial redundancy is added by channel coding. In practice, however, the optimal utilization of both,

Figure 1: Transmitter for iterative source-channel decoding (source encoder with quantizer & bit mapping, bit interleaver Φ, channel encoder).

the artificial channel coding redundancy and the natural residual source redundancy, is not feasible due to the significantly increased complexity demands. Therefore, a low-complexity approximation has recently been proposed in terms of iterative source-channel decoding (ISCD) [3, 6, 7, 8, 9, 10, 11].

In an ISCD scheme a soft-input/soft-output (SISO) channel decoder and a (derivative of a) softbit source decoder are concatenated. The first decoder exploits the artificial redundancy which has explicitly been introduced by channel encoding, and the second one mainly utilizes the natural mutual dependencies of the source codec parameters due to their residual redundancy. The reliability gains due to both terms of redundancy are exchanged iteratively in a turbo-like process [12, 13, 14]. In the literature, the reliability gains are also referred to as extrinsic information. This information can usually be extracted from the soft-output values provided by any SISO decoder.

In order to evaluate the number of iterations allowing noteworthy improvements of error robustness, a powerful analysis tool has recently been proposed in terms of extrinsic information transfer (EXIT) charts [15, 16]. This method has already been applied to an ISCD scheme in [10, 11]. However, the EXIT chart representation of ISCD schemes also reveals some new design and development guidelines which are the topic of this paper.

This paper is organized as follows. In Section 2, we give a comprehensive review of iterative source-channel decoding (ISCD). Next, we define a new, clear classification of ISCD approaches into serially and parallel concatenated schemes. Afterwards, we apply the EXIT chart analysis to ISCD in Section 3. Based on this EXIT chart analysis we develop new design guidelines for ISCD schemes in Section 4, which provide higher error correcting/concealing capabilities. Finally, the improved error robustness of these schemes is demonstrated by simulation.

2. ITERATIVE SOURCE-CHANNEL DECODING

2.1. System overview—transmitter site

At time instant τ a source encoder extracts a set u_τ of M scalar source codec parameters u_μ,τ from a short segment of the original speech, audio, or video signal (see Figure 1). The index μ = 1, ..., M denotes the position within the set u_τ = (u_1,τ, ..., u_M,τ). For instance, in GSM speech communication the set u_τ comprises the coefficients of a linear filter describing the spectral envelope of a 20 millisecond segment of a speech signal as well as some parameters representing the excitation of this filter. Each value u_μ,τ, which is continuous in magnitude but discrete in time, is individually quantized by 2^K_μ reproduction levels ū_μ^(i) with i = 0, ..., (2^K_μ − 1). The reproduction levels are invariant with respect to τ and the whole quantizer code-book is given by Ū_μ = {ū_μ^(0), ..., ū_μ^(2^K_μ − 1)}. To each index i of a quantizer reproduction level ū_μ^(i) specified at time instant τ, a unique bit pattern x_μ,τ of length K_μ is assigned. The complete frame of M bit patterns x_μ,τ specified at time instant τ is denoted as x_τ = (x_1,τ, ..., x_M,τ). A particular data bit of the bit pattern x_μ,τ is addressed by an additional index k written in parentheses, that is, x_μ,τ(k) with k = 1, ..., K_μ. For convenience, in the following we assume that the code-books Ū_μ are the same for all parameters u_μ,τ in the set u_τ, that is, Ū_μ = Ū and K_μ = K for all μ = 1, ..., M.

A (source-related) bit interleaver Φ scrambles the set of data bits x_τ to x̃_τ using a deterministic mapping. In GSM, for example, the data bits are rearranged according to their individual importance with respect to the subjective speech quality. The reordering performs some kind of classification for unequal error protection. This helps to cope with annoying artifacts in the reconstructed speech signal if residual bit errors remain after channel decoding.

If there is no danger of confusion, the following notation will always refer to the deinterleaved domain. That means, even though bit interleaving changes the actual position of x_μ,τ(k) in the sequence of data bits x_τ, we keep the notation. The interleaver might be sized such that T + 1 consecutive sets x_τ with τ = λ − T, ..., λ are rearranged in common. To simplify notation, such time series of sequences x_τ are also denoted by the compact expression x_{λ−T}^{λ}. The deterministic mapping Φ of the bit interleaver has to be designed in such a way that (at the receiver site) the reliability gains resulting from softbit source decoding and from channel decoding can be considered as independent. Thus, the (source-related) bit interleaver plays a new key role in ISCD schemes (namely providing independent reliability gains) as compared to its original purpose of unequal error protection as in GSM.
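As a purely illustrative example of such a deterministic mapping (this is our own toy construction, not the interleaver actually used in the paper), a block interleaver can read the M · K data bits out bit-position-first, so that neighboring bits entering the channel encoder stem from different bit patterns:

```python
def block_interleave(frame_bits, n_params, bits_per_param):
    """Illustrative deterministic bit interleaver (hypothetical): write the
    M*K data bits parameter-by-parameter, read them out bit-position-first,
    so that adjacent output bits belong to different, (more or less)
    mutually independent bit patterns."""
    assert len(frame_bits) == n_params * bits_per_param
    return [frame_bits[m * bits_per_param + k]
            for k in range(bits_per_param)
            for m in range(n_params)]
```

For M = 3 parameters with K = 2 bits each, the bits `[0, 1, 2, 3, 4, 5]` (labeled by original position) are reordered to `[0, 2, 4, 1, 3, 5]`, i.e., no two adjacent output bits stem from the same parameter.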

As the reliability gain of source coding is due to the residual redundancy of the source codec parameters u_μ,τ, independence will be ensured if channel encoding is performed over bits x_μ,τ(k) of (more or less) mutually independent bit patterns x_μ,τ, for example, across different positions μ. Such channel encoding may be realized either on a single set x_τ at time τ, or, with respect to the interleaver Φ, on multiple x_τ with τ = λ − T, ..., λ. Channel codes of code rate r expand the sequences x_{λ−T}^{λ} of bit patterns to a sequence of code bits y(ζ) with ζ = 1, ..., (1/r) · (T + 1) · M · K. Note, if terminated convolutional codes of rate r and memory J are applied, there exist (1/r) · J additional code bits. If channel encoding of the systematic form is assumed, the individual data bits x_μ,τ(k) of x_τ are present in the code sequence y_{λ−T}^{λ}.

In real-world communication systems a second (channel-related) interleaver is placed after channel encoding to cope with burst errors on the transmission link. This kind of

Figure 2: Receiver for iterative source-channel decoding (Φ: interleaver, Φ⁻¹: deinterleaver). The SISO channel decoder with inputs L(z(ζ) | y(ζ)) (or L(z_μ,τ(k) | x_μ,τ(k)) if channel coding is of systematic form) exchanges the extrinsic L-values L_CD^[ext](x_μ,τ(k)), L_SBSD^[ext](x_μ,τ(k)) with the softbit source decoder, which utilizes the source statistics and performs the parameter estimation; the final output is L(x_μ,τ(k) | z_1^λ).

interleaver is assumed to be sized sufficiently large so that the equivalent transmission channel can be considered as memoryless and AWGN (additive white Gaussian noise).

2.2. Receiver site

2.2.1. Transmission model for binary phase shift keying

At the receiver, reliability information about the single data bits x_μ,τ(k) is generated from the possibly noisy received sequence z_{λ−T}^{λ} corresponding to y_{λ−T}^{λ}. In this respect, it is most convenient to express reliability information in terms of log-likelihood ratios, or short L-values, see, for example, [13]. For instance, if the transmission channel is considered to be AWGN, the channel-related L-value is given by [13]

L(z(ζ) | y(ζ)) = 4 · (E_s/N_0) · z(ζ)    (1)

for all y(ζ). The term E_s denotes the energy per transmitted BPSK-modulated (binary phase shift keying) code bit y(ζ) and N_0/2 the double-sided power spectral density of the equivalent AWGN channel. The possibly noisy received value z(ζ) ∈ ℝ denotes the real-valued counterpart to the originally transmitted BPSK-modulated code bit y(ζ) ∈ {−1, +1}.

Time-variant signal fading can easily be considered as well. For this purpose, a factor α has to be introduced on the right-hand side of (1). The specific probability distribution (e.g., Rayleigh or Rice distribution) of the random process α represents the characteristics of the signal fading. However, in the following we neglect signal fading, that is, α = 1 constantly.
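The channel-related L-values of (1) follow directly from the received values. The following Python sketch (function and variable names are ours) simulates a few BPSK code bits over the AWGN channel and computes the corresponding soft inputs:

```python
import numpy as np

def channel_llrs(z, es_n0_db):
    """Channel-related L-values of eq. (1): L(z|y) = 4 * (Es/N0) * z for
    BPSK-modulated code bits over the memoryless AWGN channel."""
    es_n0 = 10.0 ** (es_n0_db / 10.0)
    return 4.0 * es_n0 * np.asarray(z, dtype=float)

# simulate a few BPSK code bits over AWGN at Es/N0 = 0 dB (Es = 1)
rng = np.random.default_rng(0)
y = rng.choice([-1.0, +1.0], size=8)            # transmitted code bits
sigma_n = np.sqrt(0.5 / 10.0 ** (0.0 / 10.0))   # noise std, sigma_n^2 = N0/2
z = y + sigma_n * rng.normal(size=y.shape)      # received values
L = channel_llrs(z, 0.0)                        # soft inputs for the decoder
```

The sign of each L-value is the hard decision; its magnitude grows with the channel quality E_s/N_0 and with the distance of z from the decision threshold.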

2.2.2. Receiver model

The aim of the iterative source-channel decoding algorithm is to jointly exploit the channel-related L-values of (1), the artificial channel coding redundancy as well as the natural residual source redundancy. The combination yields a posteriori L-values L(x_μ,τ(k) | z_1^λ) for single data bits x_μ,τ(k) given the (entire history of) received sequences z_t with t = 1, ..., λ (see Figure 2). This a posteriori L-value can be separated according to Bayes' theorem into four additive terms, if a memoryless transmission channel (and channel encoding of the systematic form, see below) is assumed:

L(x_μ,τ(k) | z_1^λ) = L(z_μ,τ(k) | x_μ,τ(k)) + L(x_μ,τ(k)) + L_CD^[ext](x_μ,τ(k)) + L_SBSD^[ext](x_μ,τ(k)).    (2)

The first term in (2) represents the channel-related L-value of the specific data bit x_μ,τ(k) under test. Of course, this term is only available if channel encoding is of the systematic form. In this case, the data bit x_μ,τ(k) corresponds to a particular code bit y(ζ) and thus, the channel-related L-value L(z_μ,τ(k) | x_μ,τ(k)) is identical to one of the L-values determined in (1). Note, with respect to the correspondence of x_μ,τ(k) and y(ζ), we used two different notations for the same received value, that is, z_μ,τ(k) = z(ζ). If channel encoding is of the nonsystematic form, the term L(z_μ,τ(k) | x_μ,τ(k)) cannot be separated from L(x_μ,τ(k) | z_1^λ). In this case it can be considered to be L(z_μ,τ(k) | x_μ,τ(k)) = 0 in (2) constantly.¹

The second term in (2) represents the a priori knowledge about bit x_μ,τ(k). Note, this a priori knowledge comprises natural residual source redundancy on bit-level. Both terms in the first line mark intrinsic information about x_μ,τ(k).

In contrast to these intrinsic terms, the two terms in the second line of (2) gain information about x_μ,τ(k) from received values other than z_μ,τ(k). These terms denote so-called extrinsic L-values, which result from the evaluation of one of the two particular terms of redundancy. In the following, whenever the magnitude of these extrinsic L-values increases over the iterations, we refer to this as reliability gain.
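The additive decomposition (2) is what makes the extrinsic terms separable in practice: each constituent decoder obtains its own extrinsic L-value by subtracting the intrinsic terms and the other decoder's extrinsic contribution from its a posteriori output. A minimal numeric sketch (toy values, our own function name):

```python
def extrinsic_from_posterior(l_post, l_channel, l_apriori, l_other_ext):
    """Per eq. (2): the extrinsic L-value of one decoder is the a posteriori
    L-value minus the intrinsic terms (channel-related and a priori) and
    minus the extrinsic term fed back from the other decoder."""
    return l_post - l_channel - l_apriori - l_other_ext

# hypothetical toy numbers for one data bit:
l_sbsd_ext = extrinsic_from_posterior(3.2, 1.0, 0.5, 0.9)  # 0.8
```

Only this extrinsic part is exchanged between the decoders; feeding back the full a posteriori value would re-circulate intrinsic information and bias the iterations.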

2.3. Determination of extrinsic information

2.3.1. Soft-input/soft-output channel decoder

The SISO channel decoder (CD) in Figure 2 determines extrinsic information L_CD^[ext](x_μ,τ(k)) mainly from the artificial redundancy which has explicitly been introduced by channel encoding. For this purpose, the SISO decoder combines the channel-related soft-input values L(z(ζ) | y(ζ)) for the code bits y(ζ) with a priori information L(x_μ,τ(k)) about the data bits x_μ,τ(k). The valid combinations are precisely described by the channel encoding rule. The a priori information

¹ This notation does not imply that channel-related knowledge remains unexploited on the right-hand side of (2). The received sequence z_1^λ will still be utilized during the evaluation of the extrinsic L-values L_CD^[ext](x_μ,τ(k)).

L(x_μ,τ(k)) can be improved by additional a priori information which is provided by the other constituent decoder in terms of its extrinsic information L_SBSD^[ext](x_μ,τ(k)) (feedback line in Figure 2). These L_SBSD^[ext](x_μ,τ(k)) are usually initialized with zero in the first iteration step. As the determination rules for the extrinsic L-values L_CD^[ext](x_μ,τ(k)) of channel decoding are already well known, for example, in terms of the log-MAP algorithm [13, 17], we refer the reader to the literature.

2.3.2. Softbit source decoder

The second decoder in the ISCD scheme is a (derivative of a) softbit source decoder (SBSD) [5]. The softbit source decoder determines extrinsic information mainly from the natural residual source redundancy which typically remains in the bit patterns x_μ,τ after source encoding. Such residual redundancy appears on parameter-level, for example, in terms of a nonuniform distribution P(x_μ,τ), in terms of correlation, or in any other possible mutual dependency in time² τ. The latter terms of residual redundancy are usually approximated by a first-order Markov chain, that is, by the conditional probability distribution P(x_μ,τ | x_μ,τ−1). These source statistics can usually be measured once in advance for a representative signal data base.
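Measuring these source statistics offline amounts to counting index occurrences and index transitions over a representative training stream. A possible Python sketch (our own helper, not code from the paper):

```python
import numpy as np

def markov_stats(index_stream, n_levels):
    """Estimate the parameter distribution P(x_tau) and the first-order
    Markov transition probabilities P(x_tau | x_tau-1) by counting over a
    training stream of quantizer indices (measured once in advance)."""
    counts = np.zeros(n_levels)
    trans = np.zeros((n_levels, n_levels))
    for i_prev, i_cur in zip(index_stream[:-1], index_stream[1:]):
        counts[i_cur] += 1.0
        trans[i_prev, i_cur] += 1.0
    p0 = counts / counts.sum()
    # normalize each row; guard rows of unvisited states against div-by-zero
    p_trans = trans / np.maximum(trans.sum(axis=1, keepdims=True), 1.0)
    return p0, p_trans

# tiny toy stream of 1-bit quantizer indices (illustration only)
p0, p_trans = markov_stats([0, 1, 1, 0, 1], 2)
```

For a real codec, the stream would contain the quantizer indices of one parameter position μ over a long signal data base, yielding one table P(x_μ,τ | x_μ,τ−1) per position.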

The technique of how to combine this a priori knowledge on parameter-level with the soft-input values L_CD^[ext](x_μ,τ(k)), L(x_μ,τ(k)) on bit-level, and (if channel encoding is of the systematic form) with L(z_μ,τ(k) | x_μ,τ(k)) is not yet widely known. However, the algorithm for computing the extrinsic L-value L_SBSD^[ext](x_μ,τ(k)) of SBSD has been derived in, for example, [8, 9, 10, 11]. It is briefly reviewed in Appendix B.

After several iterative refinements of L_CD^[ext](x_μ,τ(k)) and L_SBSD^[ext](x_μ,τ(k)), the bit-level a posteriori L-values of (2) are utilized for the estimation of the parameters u_μ,τ. For this purpose, at first parameter-oriented a posteriori knowledge is determined, which is secondly combined with the quantizer reproduction levels to provide the parameter estimates û_μ,τ. Parameter-oriented a posteriori knowledge like P(x_μ,τ | z_1^λ) can easily be obtained either from the bit-wise a posteriori L-values of (2) or from the intermediate results of (B.5) (see Appendix B), for example, by

P(x_μ,τ | z_1^λ) = C · β_λ(x_μ,τ) · Θ(x_μ,τ) · Σ_{x_μ,τ−1} P(x_μ,τ | x_μ,τ−1) · α_{τ−1}(x_μ,τ−1).    (3)

The term C denotes a constant factor which ensures that the total probability theorem is fulfilled. Thus, if the minimum mean squared error (MMSE) serves as fidelity criterion, the individual estimates are given by [5]

û_μ,τ = Σ_{i=0}^{2^K−1} ū^(i) · P(x_μ,τ = x^(i) | z_1^λ).    (4)

² For convenience, we neglect any possibly available mutual dependency in position μ, like cross-correlation of adjacent parameters u_μ,τ and u_μ+1,τ. However, it is straightforward to extend the following formulas such that mutual dependencies in position μ can be exploited by ISCD as well [11].

If a delay is acceptable, that is, T + 1 > 1, (4) performs an interpolation of the source codec parameters due to the look-ahead of λ − τ parameters. Otherwise, if T + 1 = 1 and λ = τ, (4) performs parameter extrapolation.
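Equation (4) is a plain expectation over the quantizer code-book under the parameter-wise a posteriori probabilities. A minimal sketch with a hypothetical 2-bit code-book (the levels and probabilities below are invented for illustration):

```python
import numpy as np

def mmse_estimate(codebook, posterior):
    """MMSE estimate of eq. (4): expectation of the quantizer reproduction
    levels under the a posteriori probabilities P(x_mu,tau = x_i | z)."""
    posterior = np.asarray(posterior, dtype=float)
    # the constant C in (3) ensures the total probability theorem holds:
    assert np.isclose(posterior.sum(), 1.0)
    return float(np.dot(codebook, posterior))

# hypothetical 2-bit code-book and a posteriori probabilities:
u_hat = mmse_estimate([-1.5, -0.5, 0.5, 1.5], [0.1, 0.2, 0.4, 0.3])  # 0.4
```

Note that the estimate is generally not one of the reproduction levels; this soft weighting is exactly what gives softbit source decoding its graceful error concealment.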

2.4. Realization schemes

In connection with the turbo principle, typically two different realization schemes have to be distinguished. If two constituent encoders operate on the same set of bit patterns (either directly on x_τ or on the interleaved sequence x̃_τ), this kind of turbo scheme is commonly called a parallel code concatenation. A parallel code concatenation implies that at the receiver site, channel-related knowledge is available about all code bits of both decoders. In contrast to this, in a serially concatenated turbo scheme the inner encoder operates on the code words provided by the outer one. If the inner code is of the nonsystematic form, no channel-related information is available to the outer decoder.

In ISCD schemes, the constituent coders are the source and the channel encoder while the respective decoders are the channel decoder and the "utilization of source statistics" block (see Figure 2). With respect to the above considerations the amount of channel-related information which is available at both SISO decoders allows a classification into parallel, respectively, serially concatenated ISCD schemes.

(i) Parallel concatenated ISCD scheme. We define an ISCD scheme to be parallel-concatenated if channel-related information is available about all code bits to both constituent decoders. This is the case if channel encoding is of the systematic form.

(ii) Serially concatenated ISCD scheme. If channel encoding is of the nonsystematic form, channel-related knowledge is only available to the inner decoder. The outer decoding step, that is, the utilization of residual redundancy, strongly depends on the reliability information provided by the inner channel decoder.

From the above definition it follows that all formerly known approaches to ISCD, for example, [6, 7, 8, 9, 10], have to be classified as parallel concatenated, as in these contributions channel codes of the systematic form are used. However, in contrast to our definition, the denotation serial concatenation has sometimes been used simply because a source and a channel encoder are arranged in a cascade.

3. CONVERGENCE BEHAVIOR

In order to predict the convergence behavior of iterative processes, in [15, 16] a so-called EXIT chart analysis has been proposed. By using the powerful EXIT chart analysis, the mutual information measure is applied to the input/output relations of the individual constituent SISO decoders. Figure 3 shows a generalization of the input/output

Figure 3: Generalized soft-input/soft-output decoder using L-values (inputs L(z_μ,τ(k) | x_μ,τ(k)), L(x_μ,τ(k)), and L_in^[ext](x_μ,τ(k)); output L_out^[ext](x_μ,τ(k))).

relations of decoders in case of a parallel ISCD scheme (compare to "channel decoder" and "utilization of source statistics" in Figure 2).

On the one hand, the information exhibited by the overall a priori L-value and, on the other hand, the information comprised in the extrinsic L-values after soft-output decoding are closely related to the information content of the originally transmitted data bits x_μ,τ(k). For convenience, we define the simplified notations:

(i) I^[apri] quantifies the mutual information between the data bit x_μ,τ(k) and the overall a priori L-value L(x_μ,τ(k)) + L_in^[ext](x_μ,τ(k)),

(ii) I^[ext] denotes the mutual information between x_μ,τ(k) and the extrinsic information L_out^[ext](x_μ,τ(k)).

If needed, an additional subscript "CD", respectively, "SBSD" will be added to differentiate between channel decoding and softbit source decoding. The upper limit for both measures is constrained by the entropy H(X) (the data bit x_μ,τ(k) is considered to be a realization of the random process X). Note, the entropy H(X), respectively, the mutual information measures I^[apri], I^[ext] depend on the bit position k. To simplify matters, in the following we consider only the respective mean measures which are averaged over all bit positions k = 1, ..., K.

3.1. Extrinsic information transfer characteristics

The mutual information measure I^[ext] at the output of the decoder depends on the input configuration. The channel-related input value L(z_μ,τ(k) | x_μ,τ(k)) is mainly determined by the E_s/N_0 value (compare to (1)). For the overall a priori input value L(x_μ,τ(k)) + L_in^[ext](x_μ,τ(k)) it has been observed by simulation [15, 16] that this input can be modeled by a Gaussian distributed random variable with variance σ_L² = 4/σ_n² (with σ_n² = N_0/2) and mean μ_L = σ_L²/2 · x_μ,τ(k). As both terms depend on the single parameter σ_L, the a priori relation I^[apri] can directly be evaluated for arbitrary σ_L by numerical integration. Thus, the EXIT characteristics T of SISO decoders are defined as [15, 16]

I^[ext] = T(I^[apri], E_s/N_0).    (5)

If specific settings for I^[apri], respectively, σ_L and for E_s/N_0 are given, I^[ext] is quantifiable by means of Monte-Carlo simulation. Note, in case of a serially concatenated ISCD scheme, the EXIT characteristic (5) of the outer SBSD becomes (more or less) independent of the E_s/N_0 value because L(z_μ,τ(k) | x_μ,τ(k)) = 0 in (B.1) (see Appendix B) constantly.

While the EXIT characteristics of various channel codes have already been extensively discussed, for example, in [15, 16], in the following we extend our investigation here to the EXIT characteristics of SBSD [10, 11].

3.2. EXIT characteristics of softbit source decoding

Figure 4 depicts EXIT characteristics of SBSD if either the nonuniform distribution of the source codec parameters u_μ,τ alone or additionally their correlation is exploited. The u_μ,τ are modeled by a first-order Gauss-Markov process with correlation ρ = 0.0 or ρ = 0.9 and quantized by a Lloyd-Max quantizer using K = 3 (Figures 4a and 4d), 4 (Figures 4b and 4e), or 5 bits/parameter (Figures 4c and 4f). As index assignment serves natural binary (Figures 4a-4c), respectively, an EXIT-optimized mapping (Figures 4d-4f) as proposed in Section 4.

Each subplot shows 16 simulation results for the case where SBSD is applied in a parallel-concatenated ISCD scheme. The lower subset of 8 EXIT characteristics is determined for an uncorrelated and nonuniformly distributed parameter, that is, ρ = 0.0. The upper subset of 8 EXIT characteristics results if in addition correlation is utilized, for example, ρ = 0.9. Due to correlation, more information about x_μ,τ(k) is available and thus, the mutual information I_SBSD^[ext] increases. The single curves of each set represent different channel conditions (from bottom to top E_s/N_0 = {−100, −10, −3, −1, 0, 1, 3, 10} dB).

If in a parallel-concatenated ISCD scheme the channel quality decreases utterly, then the channel-related L-values become negligibly small, that is, L(z_μ,τ(k) | x_μ,τ(k)) ≈ 0. This resembles a serially concatenated ISCD scheme where the outer softbit source decoder is (more or less) independent of the E_s/N_0 value. Thus, the dashed curves in the different subplots are valid for both situations: for a very bad channel condition like E_s/N_0 = −100 dB in case of a parallel ISCD scheme as well as for all channel conditions E_s/N_0 in a serially concatenated scheme.

The simulation results depicted in all subplots reveal the same two apparent properties. Firstly, for a fixed but arbitrary parameter configuration, all curves merge in a single point if I_SBSD^[apri] → 1 bit. Secondly, in contrast to sophisticated SISO channel decoding, none of the curves reaches the entropy I_SBSD^[ext] = H(X) ≈ 1 bit even if the information at the a priori input can be considered as error free, that is, I_SBSD^[apri] ≈ 1 bit. Thus, perfect reconstruction of the data bit x_μ,τ(k) by solely studying the extrinsic output L-value (B.5) of SBSD is impossible.

Moreover, in case of the natural binary index assignment, it can be stated for all EXIT characteristics that the mutual information at the output increases approximately linearly with the mutual information at the input. Thereby, the slope is usually rather flat. The EXIT characteristics for the EXIT-optimized bit mapping are discussed in more detail in Section 4.1.

Figure 4: EXIT characteristics of SBSD for various index assignments ((a), (b), and (c) natural binary mapping; (d), (e), and (f) EXIT-optimized mapping), quantizer code-book sizes 2^K ((a), (d) K = 3 bits/parameter; (b), (e) K = 4 bits/parameter; and (c), (f) K = 5 bits/parameter), correlation (in each subplot upper subset: ρ = 0.9, lower subset: ρ = 0.0), and channel conditions (for each configuration from bottom to top E_s/N_0 = {−100, −10, −3, −1, 0, 1, 3, 10} dB). The measures I_SBSD^[apri], I_SBSD^[ext] are averaged over all k = 1, ..., K.

3.3. Theoretical upper bound on I_SBSD^[ext]

For every configuration of index assignment, correlation ρ, quantizer code-book size 2^K, and look-ahead λ − τ, the maximum mutual information value I_SBSD,max^[ext] can also be quantified by means of analytical considerations [10, 11]. Whenever the a priori input I_SBSD^[apri] increases to H(X) (or the channel quality is higher than E_s/N_0 ≈ 10 dB), the terms Θ(x_μ,τ), α_{τ−1}(x_μ,τ−1), and β_λ(x_μ,τ) of (B.5) (see Appendix B) are generally valued such that all summations in the numerator and denominator degenerate to single elements. In consequence, the theoretically attainable L_SBSD^[ext](x_μ,τ(k)) are given for all possible combinations of x_{μ,τ+1}^{λ}, x_μ,τ, x_{μ,λ−T}^{τ−1} by

L_SBSD^[ext](x_μ,τ(k)) = log [ P(x_μ,τ | x_μ,τ−1, x_μ,τ(k) = +1) · P(x_μ,λ−T−1) · Π_{t=λ−T, t≠τ}^{λ} P(x_μ,t | x_μ,t−1) ] / [ P(x_μ,τ | x_μ,τ−1, x_μ,τ(k) = −1) · P(x_μ,λ−T−1) · Π_{t=λ−T, t≠τ}^{λ} P(x_μ,t | x_μ,t−1) ].    (6)

After the discrete probability distribution of all attainable values L_SBSD^[ext](x_μ,τ(k)) is quantified, the evaluation of the mutual information between L_SBSD^[ext](x_μ,τ(k)) and x_μ,τ(k) provides the upper bound for I_SBSD,max^[ext] (averaged over all k = 1, ..., K).

Table 1: Theoretical bounds on I_SBSD,max^[ext] for autocorrelation ρ (FS: full search, BSA: binary switching algorithm).

K        Index assignment    ρ = 0.0   ρ = 0.7   ρ = 0.8   ρ = 0.9
3 bits   Natural binary       0.123     0.330     0.429     0.577
         Folded binary        0.036     0.213     0.293     0.430
         Gray-encoded         0.054     0.226     0.299     0.415
         SNR opt. [18]        0.111     0.465     0.588     0.732
         EXIT opt. (FS)       0.163     0.487     0.622     0.796
         EXIT opt. (BSA)      0.123     0.472     0.607     0.791
4 bits   Natural binary       0.127     0.298     0.380     0.507
         Folded binary        0.043     0.190     0.260     0.388
         Gray-encoded         0.068     0.208     0.270     0.374
         SNR opt. [18]        0.201     0.529     0.649     0.785
         EXIT opt. (BSA)      0.221     0.566     0.706     0.882
5 bits   Natural binary       0.118     0.259     0.326     0.430
         Folded binary        0.044     0.165     0.225     0.335
         Gray-encoded         0.069     0.183     0.234     0.323
         SNR opt. [18]        0.207     0.574     0.691     0.808
         EXIT opt. (BSA)      0.257     0.613     0.758     0.905

Table 1 summarizes the upper bounds for the example situations with K = 3, 4, 5 bits/parameter and some frequently used index assignments: natural binary, folded binary, and Gray-encoded bit mapping. To simplify matters, softbit source decoding is restricted to parameter extrapolation, that is, T + 1 = 1 and λ = τ. Thus, the evaluation of L_SBSD^[ext](x_μ,τ(k)) of (6) reduces to an evaluation of all combinations of x_μ,τ, x_μ,τ−1.

The theoretical upper bounds for natural binary confirm the corresponding simulation results of Figure 4. Compared to folded binary and Gray-encoded, the natural binary bit mapping provides a higher I_SBSD,max^[ext] for all configurations of quantizer code-book size and correlation.

Recently, an advanced bit mapping for ISCD has been proposed by Hagenauer and Görtz [18]. By considering simplified constraints like single-bit errors and by neglecting parameter correlation, the optimization is realized such that the best possible parameter signal-to-noise ratio (SNR) between the original codec parameter u_μ,τ and its reconstruction û_μ,τ is reached. If the theoretical upper bound I_SBSD,max^[ext] is evaluated for this SNR-optimized mapping, further substantial gains of I_SBSD,max^[ext] can be observed for most configurations (see Table 1).

The theoretical upper bounds I_SBSD,max^[ext] for the EXIT-optimized bit mapping are discussed in more detail in Section 4.1.

3.4. EXIT chart of iterative source-channel decoding

The combination of the two EXIT characteristics of both soft-output decoders in a single diagram is referred to as EXIT chart [16]. The main contribution of EXIT charts is that an analysis of the convergence behavior of a concatenated scheme is realizable by solely studying the EXIT characteristics of the single components. Both EXIT characteristics are plotted into the EXIT chart with swapped axes because the extrinsic output of the one constituent decoder serves as additional a priori input for the other one and vice versa (see Figure 2).

Figure 5 shows an exemplary EXIT chart of a parallel approach to iterative source-channel decoding for a channel condition of E_s/N_0 = −3 dB. The source codec parameters u_μ,τ are assumed to exhibit a correlation of ρ = 0.9. The parameters are quantized by a Lloyd-Max quantizer using K = 4 bits/parameter each, and natural binary serves as index assignment. Thus, the EXIT characteristic of SBSD is taken from Figure 4b. For channel encoding, a rate r = 1/2, memory J = 3 recursive systematic convolutional (RSC) code with generator polynomial G = (1, (1 + D² + D³)/(1 + D + D³)) is used.
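For reference, this recursive systematic encoder with G = (1, (1 + D² + D³)/(1 + D + D³)) can be sketched in a few lines of bit-level Python (our own implementation; trellis termination is omitted for brevity):

```python
def rsc_encode(data_bits):
    """Rate r = 1/2, memory J = 3 recursive systematic convolutional (RSC)
    encoder with G = (1, (1 + D^2 + D^3)/(1 + D + D^3)): feedback polynomial
    1 + D + D^3, feedforward polynomial 1 + D^2 + D^3."""
    s = [0, 0, 0]              # shift register: [a(t-1), a(t-2), a(t-3)]
    out = []
    for x in data_bits:
        a = x ^ s[0] ^ s[2]    # recursion: a(t) = x(t) + a(t-1) + a(t-3)
        p = a ^ s[1] ^ s[2]    # parity:    p(t) = a(t) + a(t-2) + a(t-3)
        out.append((x, p))     # systematic bit first, parity bit second
        s = [a, s[0], s[1]]
    return out
```

The first output of each pair is the data bit itself (systematic form), so the channel-related L-value L(z_μ,τ(k) | x_μ,τ(k)) of (2) is directly available at the receiver; for the RNSC codes proposed later, the systematic branch would be dropped.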

Usually, the best possible error correcting/concealing capabilities of an iterative source-channel decoding process are limited by an intersection of both EXIT characteristics [10].

4. DESIGN OF IMPROVED ISCD SCHEMES

The primary objective of iterative turbo-algorithms is to gain as much information from the refinements of the extrinsic L-values L_SBSD^[ext](x_μ,τ(k)) and L_CD^[ext](x_μ,τ(k)) as possible. This goal implies that the intersection of the EXIT characteristics of the constituent decoders is located at the highest possible (I_CD^[ext], I_SBSD^[ext]) pair³ in the EXIT chart.

³ In the two-dimensional (I_CD^[ext], I_SBSD^[ext]) space, that specific intersection of EXIT characteristics is considered to provide the "highest possible pair" which maximizes (Ψ⁻¹(I_CD^[ext]))² + (Ψ⁻¹(I_SBSD^[ext]))². In this sum, the term Ψ⁻¹(·) denotes the inverse function to Ψ(·), which is an approximation I^[apri] = Ψ(σ_L) for the numerical integration mentioned in Section 3.1 [16].

Figure 5: Exemplary EXIT chart of iterative source-channel decoding.

Thus, an ISCD scheme with improved error correcting/concealing capabilities is obtained if the (I_CD^[ext], I_SBSD^[ext]) pair at the intersection is maximized. Next, this maximization will be realized in a two-stage process. Firstly, we propose a new concept for designing an optimal index assignment. For this purpose, the highest possible I_SBSD^[ext],max value serves as optimality criterion. Secondly, we search for an appropriate channel coding component which ensures that the EXIT characteristic of CD crosses that of SBSD at the highest possible I_CD^[ext].
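The limiting role of the intersection can be illustrated numerically: iterating the extrinsic exchange between the two decoders converges to the fixed point where the characteristics cross. A minimal sketch of ours, using synthetic linear characteristics invented here purely for illustration (measured EXIT curves are generally nonlinear):

```python
def decoding_trajectory(T_cd, T_sbsd, n_iter=100):
    """Iterate the extrinsic-information exchange between channel decoder (CD)
    and softbit source decoder (SBSD) until it settles at the intersection."""
    I_cd = I_sbsd = 0.0
    for _ in range(n_iter):
        I_cd = min(1.0, T_cd(I_sbsd))      # CD uses SBSD extrinsic as a priori
        I_sbsd = min(1.0, T_sbsd(I_cd))    # and vice versa
    return I_cd, I_sbsd

# Synthetic characteristics (illustrative only, not measured curves)
T_cd = lambda i: 0.45 + 0.40 * i
T_sbsd = lambda i: 0.37 + 0.30 * i

I_cd, I_sbsd = decoding_trajectory(T_cd, T_sbsd)
```

With these toy curves, the trajectory settles at the analytic fixed point of the two linear maps, mimicking how the decoding trajectory of Figure 5 stalls at the intersection.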

4.1. Optimization of the index assignment

In a first straightforward approach, the theoretical upper limit I_SBSD^[ext],max has to be evaluated for all 2^K! possible assignments of the 2^K bit patterns x_μ,τ to the valid quantizer reproduction levels ū^(i) with i = 0, ..., (2^K − 1) of the quantizer codebook U. That specific realization of all examined assignments which provides the maximum value for I_SBSD^[ext],max marks the optimal mapping. Of course, such a full search (FS) is only manageable if the size of the quantizer codebook U is reasonably small, that is, K ≤ 3 bits/parameter. Otherwise, if K ≥ 4 bits/parameter, a full search is almost impossible because there exist 2^K! ≥ 16! ≈ 2.09 · 10^13 different assignments.

For the optimization of the index assignment with K ≥ 4 bits/parameter, we propose a low-complexity approximation which resembles⁴ the binary switching algorithm (BSA) [19]. Starting from an initial index assignment (e.g., the natural binary mapping), that bit pattern which is assigned to ū^(i) with i = 0 is exchanged on a trial basis with every other

⁴In contrast to [19], the BSA proposed here does not pay attention to the individual contributions of each index to an overall cost function.

bit pattern for the indices j = 0, ..., (2^K − 1) (including the unmodified arrangement i = j). From the 2^K possible arrangements, that combination is selected for further examination which provides the maximum I_SBSD^[ext],max. Afterwards, this kind of binary switching is repeated for the other indices i = 1, ..., (2^K − 1). Whenever a rearrangement provides a higher I_SBSD^[ext],max value, the iterative search algorithm is restarted with i = 0, that is, the last-determined rearrangement serves as the new initial index assignment. Usually, a steady state is reached after several iterative refinements. The finally selected arrangement serves as the EXIT-optimized index assignment. Some examples are listed in Table 2 in Appendix A.
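The search loop above can be sketched compactly (our code). The cost callback stands in for measuring I_SBSD^[ext],max, which in the paper requires an EXIT analysis per candidate mapping; here a toy Hamming-distance cost is used only to exercise the loop.

```python
def binary_switching(cost, K, init=None):
    """Greedy BSA variant: swap the pattern of index i with every j on a trial
    basis, keep the best swap, and restart from i = 0 whenever cost improves."""
    n = 1 << K
    mapping = list(init) if init else list(range(n))   # natural binary start
    best = cost(mapping)
    i = 0
    while i < n:
        best_j = i
        for j in range(n):                   # try all 2^K exchanges for index i
            mapping[i], mapping[j] = mapping[j], mapping[i]
            c = cost(mapping)
            if c > best:
                best, best_j = c, j
            mapping[i], mapping[j] = mapping[j], mapping[i]   # undo trial swap
        if best_j != i:                      # accept best swap, restart at i = 0
            mapping[i], mapping[best_j] = mapping[best_j], mapping[i]
            i = 0
        else:
            i += 1
    return mapping, best

# Toy stand-in cost: prefer Gray-like mappings (adjacent levels differ in few bits)
def toy_cost(m):
    return -sum(bin(m[i] ^ m[i + 1]).count("1") for i in range(len(m) - 1))
```

Replacing `toy_cost` by a routine that measures I_SBSD^[ext],max for the candidate mapping yields the EXIT-optimized search described above.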

The highest I_SBSD^[ext],max values for the EXIT-optimized mappings are also listed in Table 1. Compared to classical index assignments like natural binary, folded binary, and Gray encoding, the extrinsic mutual information at the output of the softbit source decoder has been increased significantly by the optimization. Notice that the EXIT-optimized mapping found by the BSA approximation may only be considered a local optimum. As shown for K = 3 bits/parameter, the global optimum obtained by the full search is usually more powerful.

In addition, the theoretical analysis also reveals substantial gains in I_SBSD^[ext],max over the SNR-optimized mapping [18]. The key advantage over this approach is that correlation of the source codec parameters u_μ,τ can easily be taken into account during the optimization process. As a consequence, the gap in I_SBSD^[ext],max between the SNR-optimized mapping and the EXIT-optimized mapping increases with higher correlation. The major drawback is that the instrumental quality measure parameter SNR is not explicitly included in the optimization. Moreover, the bounds I_SBSD^[ext],max do not comprise information about the adverse effects of different mappings on instrumental quality measures like the parameter SNR. In certain situations, a higher parameter SNR might be available even if I_SBSD^[ext],max is smaller. Thus, it has to be confirmed by simulation whether the EXIT-optimized bit mapping is able to provide a noteworthy gain in error robustness (see Section 5).

4.2. Optimization of the channel coding component of ISCD

So far, all known approaches to iterative source-channel decoding, for example, [6, 7, 8, 9, 10], consider channel codes of systematic form, and therefore these ISCD schemes are concatenated in the parallel way. It is most common to use recursive systematic convolutional (RSC) codes of code rate r = 1/2. Due to the systematic form, one of the generator polynomials of the matrix G = (1, F(D)/H(D)) is fixed to 1, and due to the recursive structure, the second generator polynomial consists of a feed-forward part F(D) and a feed-back part H(D). The term D denotes the one-tap delay operator, and the maximum delay, that is, the maximum power J of D^J in F(D), respectively, H(D), determines the constraint length J + 1 of the code. There exist 2^(J+1) possible realizations for the feed-forward part F(D) and 2^J possible realizations

Table 2: EXIT-optimized index assignments for correlation ρ = 0.9. Columns: natural binary versus EXIT-optimized mapping for K = 3 bits (FS and BSA) and K = 4 bits (BSA); individual index entries omitted.

for the feed-back part H(D). The number of possible realizations of H(D) is lower than that of F(D) because the present feed-back value is usually directly applied to the undelayed input value, that is, the term D⁰ = 1 is always present in H(D). Thus, F(D) and H(D) offer (at most) 2^(J+1) × 2^J combinatorial possibilities to design the EXIT characteristic of a rate r = 1/2, memory J RSC code. The effective number of reasonable combinations is even smaller, because in some cases F(D) and H(D) exhibit a common divisor and thus the memory of the RSC encoder is not fully exploited.

We expect improved error correcting/concealing capabilities from ISCD schemes if the RSC code is replaced by a recursive nonsystematic convolutional (RNSC) code. These ISCD schemes are serially concatenated. At the same code rate r and constraint length J + 1, such RNSC codes offer higher degrees of combinatorial freedom. As the matrix G(D) = (F₁(D)/H(D), F₂(D)/H(D)) exhibits two feed-forward parts F₁(D) and F₂(D) and one feed-back part H(D), there exist (fewer than) 2^(J+1) × 2^(J+1) × 2^J reasonable combinations. The RNSC code degenerates to an RSC code if either F₁(D) or F₂(D) is identical to H(D).

Hence, in our two-stage optimization process for improved ISCD schemes we have to find the most appropriate

Table 2: Continued (K = 5 bits; natural binary versus EXIT-optimized (BSA); individual index entries omitted).

combination of F₁(D), F₂(D), and H(D). The EXIT characteristic of the RNSC code with this specific combination will guarantee that the intersection with the EXIT characteristic of SBSD is located at the highest possible (I_CD^[ext], I_SBSD^[ext]) pair. Remember, in the first step of this process I_SBSD^[ext],max had been maximized by an optimization of the index assignment.

However, even if in a real-world system the constraint length J + 1 is limited to a reasonably small number, for example, due to computational complexity requirements, the search for the globally optimal combination of F₁(D), F₂(D), and H(D) might grow into an impractically complex task. For instance, if the constraint length is limited to J + 1 = 4 (as done for the simulation results in Section 5), there are (at most) 2048 combinatorial possibilities and thus 2048 EXIT characteristics would need to be measured. To lower these demands, we propose to carry out a presearch by finding some of the best possible RSC codes, that is, we alter F₂(D) and H(D) and fix F₁(D) = H(D). This requires (at most) only 128 measurements. Moreover, the effective number can even be reduced to a few tens. After having found some of the best possible RSC codes, for each of these combinations of F₂(D) and H(D) the formerly fixed F₁(D) is altered. In total, a few hundred EXIT characteristics need to be measured to find the (at least) locally optimal RNSC code.
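The candidate bookkeeping of this presearch can be sketched as follows (our code, not from the paper): polynomials are bit masks with bit i holding the coefficient of D^i, and a GF(2) polynomial gcd filters out pairs with a common divisor. Measuring the EXIT characteristic of each shortlisted candidate remains the expensive step described above.

```python
def gf2_mod(a, b):
    """Remainder of polynomial division over GF(2); b must be nonzero."""
    while a and a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

def gf2_gcd(a, b):
    """Euclidean algorithm for polynomials over GF(2)."""
    while b:
        a, b = b, gf2_mod(a, b)
    return a

J = 3                                    # memory, constraint length J + 1 = 4
polys = range(1, 1 << (J + 1))           # nonzero polynomials up to degree J

# Presearch stage: RSC candidates (F1 = H fixed); H must contain D^0 = 1,
# and F2, H without a common divisor so the memory is fully exploited
rsc = [(f2, h) for h in polys if h & 1
       for f2 in polys if gf2_gcd(f2, h) == 1]

# Second stage: for each shortlisted (F2, H), vary F1; F1 = H would
# degenerate the RNSC code back to an RSC code
rnsc = [(f1, f2, h) for (f2, h) in rsc for f1 in polys if f1 != h]
```

For J = 3 the RSC shortlist stays within the 128 combinations quoted above; in practice only the few best-measuring pairs would be carried into the second stage.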

In the next section, it will be demonstrated by simulation that due to the higher degrees of combinatorial freedom, the usage of RNSC codes instead of RSC codes reveals remarkable benefits for the error correcting/concealing capabilities of iterative source-channel decoding.

Finally, we have to remark that due to the nonsystematic form of RNSC codes, there is no channel-related reliability information available about the data bits. The additional information which is gained in the extrinsic L-values L_SBSD^[ext](x_μ,τ(κ)) and L_CD^[ext](x_μ,τ(κ)) due to the higher intersection in the EXIT chart must (at least) exceed the information content of the channel-related L-values L(z_μ,τ(κ) | x_μ,τ(κ)) that is lost for the data bits.

5. SIMULATION RESULTS

The error correcting/concealing capabilities and the convergence behavior of the conventional parallel approach to ISCD and the new improved serial approach, using the EXIT-optimized index assignment as well as channel codes of nonsystematic form, will be compared by simulation. Instead of using any specific real-world speech, audio, or video encoder, we consider a generic model for the source codec parameter set u_τ. For this purpose, M components u_μ,τ are individually modeled by first-order Gauss-Markov processes with correlation ρ = 0.9. The parameters u_μ,τ are individually quantized by a scalar 16-level Lloyd-Max quantizer using K = 4 bits/parameter each.
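This generic source model can be sketched in a few lines (our code, variable names ours): an AR(1) Gauss-Markov process with ρ = 0.9, and a Lloyd quantizer trained on samples. The paper uses the analytic Lloyd-Max quantizer for a Gaussian density; training on a large sample set approximates it.

```python
import bisect
import math
import random

def gauss_markov(n, rho, seed=0):
    """First-order Gauss-Markov (AR(1)) process with unit variance."""
    rng = random.Random(seed)
    u = [rng.gauss(0.0, 1.0)]
    innovation = math.sqrt(1.0 - rho * rho)   # keeps the variance at 1
    for _ in range(n - 1):
        u.append(rho * u[-1] + innovation * rng.gauss(0.0, 1.0))
    return u

def lloyd_quantizer(samples, n_levels=16, iters=100):
    """Approximate Lloyd-Max reproduction levels and decision thresholds."""
    s = sorted(samples)
    # initialize reproduction levels at sample quantiles
    reps = [s[(2 * i + 1) * len(s) // (2 * n_levels)] for i in range(n_levels)]
    th = []
    for _ in range(iters):
        th = [(reps[i] + reps[i + 1]) / 2 for i in range(n_levels - 1)]
        cells = [[] for _ in range(n_levels)]
        for x in samples:
            cells[bisect.bisect(th, x)].append(x)   # nearest-neighbor cell
        reps = [sum(c) / len(c) if c else reps[i]   # centroid condition
                for i, c in enumerate(cells)]
    return reps, th
```

With K = 4 bits/parameter, the 16 reproduction levels are then labeled by the chosen index assignment.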

After the natural binary⁵ index assignment (parallel ISCD scheme), respectively, the EXIT-optimized index assignment (serial ISCD scheme), a pseudorandom, sufficiently large bit interleaver Π of size K × M × (T+1) = 2000 serves for spreading of the data bits. For convenience, with respect to K = 4 bits/parameter, we set M = 500 and T+1 = 1. In practice, a smaller M might be sufficient if bit interleaving Π is either realized jointly over several consecutive parameter sets or if an appropriately designed (nonrandom) bit interleaver is applied. Here, pseudorandom bit interleaving is realized according to the so-called S-random design guideline [14]. A random mapping is generated in such a way that adjacent input bits are spread by at least S positions. To simplify matters, the S-constraint is set to S = 4 positions.
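A sketch of the greedy S-random construction (ours; [14] describes the design guideline, the concrete routine below is our own): candidate values are drawn at random and accepted only if they differ by at least S from the last S accepted values, and a dead end triggers a fresh attempt.

```python
import random

def s_random_interleaver(n, S, seed=0, max_attempts=100):
    """Greedy S-random permutation: any two outputs within S positions
    of each other differ by at least S in value."""
    rng = random.Random(seed)
    for _ in range(max_attempts):
        pool = list(range(n))
        rng.shuffle(pool)
        perm = []
        while pool:
            for idx, cand in enumerate(pool):
                # accept cand if far enough from the S previously placed values
                if all(abs(cand - p) >= S for p in perm[-S:]):
                    perm.append(pool.pop(idx))
                    break
            else:
                break                    # dead end; reshuffle and retry
        if len(perm) == n:
            return perm
    raise ValueError("no S-random permutation found; reduce S")
```

For n = 2000 and S = 4, the constraint is mild and the greedy pass typically succeeds on the first attempt.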

For channel encoding, terminated memory J = 3 recursive (non)systematic convolutional codes are used. In case of the parallel ISCD scheme, it turns out that the RSC code with G = (1, (1+D²+D³)/(1+D+D³)) is best suited. Notice that the same channel code has been standardized for turbo channel

⁵We use the natural binary index assignment as reference instead of folded binary or Gray encoding because, in line with our optimization criterion in Section 4, natural binary reveals the highest I_SBSD^[ext],max values (see Table 1).

decoding in UMTS. In case of the new serial ISCD scheme, an RNSC code with the same constraint length and with G = ((1+D²+D³)/(1+D+D²+D³), (1+D+D³)/(1+D+D²+D³)) provides the best results. For termination, J = 3 tail bits are appended to each block of 2000 data bits, which force the encoder back to the zero state. The overall code rate of both ISCD schemes amounts to r = 2000/4006. A log-MAP decoder which takes the recursive structure of RSC, respectively, RNSC codes into account [12, 13] serves as component decoder for the channel code.

5.1. Convergence behavior—EXIT charts

Figures 6a-6d show the EXIT charts of the different approaches to ISCD either with or without the innovations proposed in Section 4. Each EXIT chart is measured for a particular channel condition Es/N0.

In the remainder, that specific approach to parallel ISCD using the natural binary index assignment and the RSC channel code is referred to as the reference approach (Figure 6a). The EXIT characteristic of SBSD is taken from Figure 4b, but with swapped axes. Both EXIT characteristics specify an envelope for the so-called decoding trajectory [10, 11, 15, 16]. The decoding trajectory denotes the step curve, and it visualizes the increase in both terms of extrinsic mutual information I_CD^[ext], respectively, I_SBSD^[ext], being available in each iteration step.

Decoding starts with the log-MAP channel decoder while the a priori knowledge amounts to I_CD^[apri] = 0 bit. Due to the reliability gain of SISO decoding, the decoder is able to provide I_CD^[ext] = 0.45 bit. This information serves as a priori knowledge for SBSD, that is, I_SBSD^[apri] = I_CD^[ext], and thus the extrinsic mutual information of SBSD reads I_SBSD^[ext] = 0.37 bit. Iteratively executing both SISO decoders allows the information content to be increased step by step. No further information is gainable when the intersection in the enveloping EXIT characteristics is reached. In ISCD schemes, intersections typically appear due to the upper bound I_SBSD^[ext],max.

Using the reference approach, 3 iterations are required to achieve the highest possible (I_CD^[ext], I_SBSD^[ext]) = (0.78, 0.45) at a channel condition of Es/N0 = −3 dB.

If the natural binary index assignment is exchanged for the EXIT-optimized mapping as proposed in Section 4.1, then the EXIT characteristic of SBSD has to be replaced by the corresponding curve of Figure 4e. Due to the higher I_SBSD^[ext],max, the intersection of the EXIT characteristics is located at a remarkably higher (I_CD^[ext], I_SBSD^[ext]) = (0.96, 0.85). This intersection can be reached quite closely by the decoding trajectory after 6 iterations.

In a third approach to ISCD (Figure 6c), the RSC channel code of the reference approach is substituted by an RNSC code of the same code rate r and constraint length J + 1 as motivated in Section 4.2. As the new channel coder is of the nonsystematic form, the EXIT characteristic of SBSD has to be replaced too because channel-related reliability information will not be available for the outer softbit source decoder

Figure 6: EXIT chart representation of the various approaches to iterative source-channel decoding: (a) natural binary, RSC, Es/N0 = −3 dB; (b) EXIT-optimized, RSC, Es/N0 = −3 dB; (c) natural binary, RNSC, Es/N0 = −3 dB; (d) EXIT-optimized, RNSC, Es/N0 = −4 dB; (e) improvements in parameter SNR over Es/N0 (curves: EXIT-optimized/RNSC, 10 iterations; natural binary/RNSC, 4 iterations; SNR-optimized/RNSC, 10 iterations; natural binary/RSC, 3 iterations; EXIT-optimized/RSC, 7 iterations).

anymore. Once again, compared to the reference, a higher (I_CD^[ext], I_SBSD^[ext]) = (0.91, 0.47) can be reached by the decoding trajectory after 3 iterations.

Finally, both innovations are introduced to the reference at the same time (Figure 6d). In order to illuminate the particular features of this approach, the channel condition is reduced to Es/N0 = −4 dB. It can be seen that the EXIT characteristic of the RNSC channel code matches very well to the EXIT characteristic of SBSD. Both characteristics span a small tunnel through which the decoding trajectory can pass. Up to 10 iterations reveal gains in both terms of extrinsic mutual information. The highest possible (I_CD^[ext], I_SBSD^[ext]) pair (0.97, 0.85) is higher than for all the other approaches mentioned heretofore. This is even true although the channel quality has been decreased by ΔEs/N0 = 1 dB (Es/N0 = −4 dB instead of −3 dB).

In certain situations, the decoding trajectory exceeds the EXIT characteristic of SISO channel decoding. The reason is that the distribution of the extrinsic L-values L_SBSD^[ext](x_μ,τ(κ)) of SBSD is usually non-Gaussian, in particular if no channel-related reliability information is given. Thus, the model which was used to determine the EXIT characteristics of SISO channel decoding (see Section 3.1) no longer holds strictly. However, even if the precise number of required iterations cannot be predicted from the EXIT chart, the intersection of the EXIT characteristics still remains the limiting constraint for the iterative process.

5.2. Error robustness—parameter signal-to-noise ratio

The simulation results in Figure 6e depict the parameter signal-to-noise ratio (SNR) between the originally determined source codec parameters u_μ,τ and the corresponding estimates û_μ,τ as a function of the channel quality Es/N0. For the first basic considerations, we use the same system configuration as for the reference approach introduced before, that is, we apply the natural binary index assignment and the RSC channel code. For every approach to ISCD, the number of iterations is chosen such that the best possible error robustness is reached in the entire range of Es/N0 = [−5, 0] dB. A higher number of iterations does not yield any noteworthy increase/decrease in the parameter SNR.
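For reference, the parameter SNR used as quality measure here can be computed by a small helper of ours (the energy-ratio definition is the usual one for this measure):

```python
import math

def parameter_snr_db(u, u_hat):
    """Parameter SNR in dB: signal energy over reconstruction error energy."""
    signal = sum(x * x for x in u)
    error = sum((x - y) ** 2 for x, y in zip(u, u_hat))
    return 10.0 * math.log10(signal / error)
```

A perfect reconstruction drives the error energy to zero (infinite SNR), so in practice the measure is evaluated over long parameter sequences with nonzero residual error.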

The lowest curve shows the error robustness of a conventional noniterative receiver using SISO channel decoding and classical source decoding by hard decision and table lookup. If this hard-decision (HD) source decoder is replaced by a conventional softbit source decoder [5], the utilization of residual redundancy permits the classical approach to be outperformed significantly. The maximum gain in parameter SNR amounts to ΔSNR = 8.76 dB at a channel condition of Es/N0 = −2.5 dB. Notice that the latter approach resembles an ISCD scheme without any iteration.

A turbo-like refinement of the extrinsic information of both SISO decoders makes further substantial quality improvements possible. Mainly one additional iteration reveals remarkable quality gains in terms of the parameter SNR of up to ΔSNR = 3.96 dB at Es/N0 = −2.5 dB. No noteworthy larger improvements in error robustness are achievable by higher numbers of iterations, as can be confirmed by the EXIT chart analysis (see, e.g., Figure 6a with Es/N0 = −3 dB). However, in the entire range of channel conditions, the reference approach to iterative source-channel decoding is superior to (or at least on a par with) the noniterative schemes marked dash-dotted.

As proposed in Section 4, the EXIT chart representation can be used to optimize the index assignment and/or the channel coding component in view of the iterative evaluation. If either of the two innovations (each optimized for Es/N0 = −3.0 dB) is introduced, further remarkable quality improvements can be realized in the most interesting range of moderate channel conditions. Compared to the reference approach, additional gains in parameter SNR of ΔSNR = 4.54 dB are determinable at Es/N0 = −3.0 dB if the natural binary index assignment is replaced by the EXIT-optimized mapping. The gain amounts to ΔSNR = 1.43 dB at Es/N0 = −3.0 dB if the RSC code is substituted by the RNSC code. A quality degradation has to be accepted in case of heavily disturbed transmission channels.

If both innovations are introduced at the same time, almost perfect reconstruction of the source codec parameters becomes possible down to channel conditions of Es/N0 = −3.8 dB. If the channel condition becomes worse, the parameter SNR drops in a waterfall-like manner. The reason for this waterfall-like behavior can be found by the EXIT chart analysis (see Figure 6d). As long as the channel condition is better than Es/N0 = −4.5 dB, there exists a tunnel through which the decoding trajectory can pass to a relatively high (I_CD^[ext], I_SBSD^[ext]) pair. If the channel becomes worse, the tunnel disappears and the best possible (I_CD^[ext], I_SBSD^[ext]) pair takes relatively small values. In view of an implementation in a real-world cellular network like the GSM or UMTS system, the Es/N0 of the waterfall region might be a new design criterion which has to be guaranteed at the cell boundaries. Here, a handover might take place, and the loss of parameter SNR in channel qualities of Es/N0 < −4.5 dB is not relevant anymore.

Finally, it has to be mentioned that the combination of the SNR-optimized mapping [18] with an RNSC code into a serially concatenated ISCD scheme also reveals remarkable improvements in error robustness. However, the EXIT-optimized mapping remains more powerful, as the correlation of the source codec parameters can be included in the optimization process.

6. CONCLUSIONS

In this contribution, the error robustness of iterative source-channel decoding has significantly been improved. After a new classification of ISCD into parallel and serially concatenated schemes has been defined, EXIT charts are introduced for a convergence analysis. Based on the EXIT chart representation, novel concepts are proposed on how to determine a powerful index assignment and on how to find an appropriate channel coding component. It has been demonstrated by example that both innovations, the EXIT-optimized index assignment as well as the RNSC channel code, allow

substantial quality gains in terms of the parameter SNR in the most interesting range of channel conditions. Formerly known parallel approaches to ISCD are outperformed by far by the new serial arrangement.

APPENDICES

A. EXIT-OPTIMIZED BIT MAPPINGS

Table 2 summarizes the EXIT-optimized bit mappings for various quantizer codebook sizes 2^K and correlation ρ = 0.9.

B. EXTRINSIC L-VALUE OF SBSD

The determination rules for the extrinsic L-value L_SBSD^[ext](x_μ,τ(κ)) of SBSD have been derived in [8, 9, 10, 11]. They will briefly be reviewed next. At the end, a slight modification is proposed which allows a quality loss due to an approximation to be omitted.

(1) Merge the bit-wise soft inputs L(z_μ,t(κ) | x_μ,t(κ)), L(x_μ,t(κ)), and L_CD^[ext](x_μ,t(κ)) of the single data bits x_μ,t(κ) to parameter-oriented soft-input information Θ(x_μ,t) about the bit patterns x_μ,t. For this purpose, determine for all 2^K possible permutations of each bit pattern x_μ,t at a specific time instant t = λ−T, ..., λ and position μ = 1, ..., M, excluding the index pair (μ, t) = (μ, τ) (see below), the term [10, 11]

Θ(x_μ,t) = exp( Σ_{κ=1}^{K} (x_μ,t(κ)/2) · [L_CD^[ext](x_μ,t(κ)) + L(x_μ,t(κ)) + L(z_μ,t(κ) | x_μ,t(κ))] ).   (B.1)

The summation runs over the bit index κ = 1, ..., K.

In case of the index pair (μ, t) = (μ, τ), the bit index κ = k of the desired extrinsic L-value L_SBSD^[ext](x_μ,τ(k)) has to be excluded from the summation. Thus, in this case the terms Θ(x_μ,τ^[ext]) have to be computed for all 2^(K−1) possible permutations of the bit pattern x_μ,τ^[ext] by summation over all κ = 1, ..., K with κ ≠ k. For convenience, x_μ,τ^[ext] denotes that specific part of the pattern x_μ,τ without x_μ,τ(k). Thus, x_μ,τ can also be separated into (x_μ,τ^[ext], x_μ,τ(k)).

(2) Combine this parameter-oriented soft-input information with the a priori knowledge about the source codec parameters. If the parameters u_μ,τ, respectively, the corresponding bit patterns x_μ,τ exhibit a first-order Markov property P(x_μ,τ | x_μ,τ−1) in time, past and (possibly given) future bit patterns x_μ,t with t = λ−T, ..., λ, t ≠ τ, can efficiently be evaluated by a forward-backward algorithm. Both recursive formulas are [10, 11]

α_{τ−1}(x_μ,τ−1) = Θ(x_μ,τ−1) · Σ_{x_μ,τ−2} P(x_μ,τ−1 | x_μ,τ−2) · α_{τ−2}(x_μ,τ−2),   (B.2)

β_τ(x_μ,τ) = Σ_{x_μ,τ+1} P(x_μ,τ+1 | x_μ,τ) · Θ(x_μ,τ+1) · β_{τ+1}(x_μ,τ+1).   (B.3)

The summation of the forward recursion (B.2), respectively, the backward recursion (B.3) is realized over all 2^K permutations of x_μ,τ−2, respectively, x_μ,τ+1. For initialization serve α_0(x_μ,0) = P(x_μ,0) and β_λ(x_μ,λ) = 1.

With respect to the defined size of the interleaver Π, throughout the refinement of the bit-wise log-likelihood values, T + 1 consecutive bit patterns x_μ,t (with t = λ−T, ..., λ) of a specific codec parameter are regarded in common. In consequence, the forward recursion does not need to be recalculated from the very beginning α_0(x_μ,0) in each iteration. All terms up to α_{λ−T−1}(x_μ,λ−T−1), which are scheduled before the first interleaved bit pattern x_μ,λ−T, will not be updated during the iterative feedback of extrinsic information and can be computed once in advance.

(3) Finally, the intermediate results of (B.1), (B.2), and (B.3) have to be combined as shown in (B.5):

L_SBSD^[ext](x_μ,τ(k)) = log [ Σ_{x_μ,τ^[ext]} β_τ(x_μ,τ^[ext], x_μ,τ(k) = +1) · Θ(x_μ,τ^[ext]) · Σ_{x_μ,τ−1} P(x_μ,τ^[ext] | x_μ,τ−1, x_μ,τ(k) = +1) · α_{τ−1}(x_μ,τ−1) ]
− log [ Σ_{x_μ,τ^[ext]} β_τ(x_μ,τ^[ext], x_μ,τ(k) = −1) · Θ(x_μ,τ^[ext]) · Σ_{x_μ,τ−1} P(x_μ,τ^[ext] | x_μ,τ−1, x_μ,τ(k) = −1) · α_{τ−1}(x_μ,τ−1) ].   (B.5)

The inner summation of (B.5) has to be evaluated for all 2^K permutations of x_μ,τ−1 and the outer summation for the 2^(K−1) permutations of x_μ,τ^[ext]. With respect to the 2^K permutations of x_μ,τ, the set of backward recursions β_τ(x_μ,τ) of (B.3) as well as the set of parameter a priori knowledge values P(x_μ,τ | x_μ,τ−1) are separated into two subsets of equal size. In the numerator, only those β_τ(x_μ,τ^[ext], x_μ,τ(k)), respectively, P(x_μ,τ^[ext], x_μ,τ(k) | x_μ,τ−1) are considered where the desired data bit takes the value x_μ,τ(k) = +1, and in the denominator x_μ,τ(k) = −1, respectively. Moreover, in order to extract the bit-wise a priori L-value L(x_μ,τ(k)) of (2) from the parameter-oriented a priori knowledge, we use the approximation

P(x_μ,τ^[ext], x_μ,τ(k) | x_μ,τ−1) = P(x_μ,τ^[ext] | x_μ,τ−1, x_μ,τ(k)) · P(x_μ,τ(k) | x_μ,τ−1) ≈ P(x_μ,τ^[ext] | x_μ,τ−1, x_μ,τ(k)) · P(x_μ,τ(k)).   (B.4)

This approximation can be omitted if the bit-wise a priori L-value and the extrinsic information of SBSD are not treated separately as in (2), but jointly by their sum L(x_μ,τ(k)) + L_SBSD^[ext](x_μ,τ(k)).
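The recursions (B.1)-(B.3) and the combination (B.5) can be sketched numerically for a single parameter track (our code, names ours). For simplicity, the per-bit L-values are assumed already summed as in (B.1), the extrinsic bit is taken at the last time instant so that the backward recursion is trivial (β ≡ 1), and the full transition probability P(x_τ | x_τ−1) is used directly, so the returned value is the joint term L(x_μ,τ(k)) + L_SBSD^[ext](x_μ,τ(k)) mentioned at the end of this appendix rather than the (B.4)-approximated extrinsic term alone.

```python
import itertools
import math

def theta(llrs, skip=None):
    """Θ of (B.1): map each ±1 bit pattern to exp(Σ x(κ)·L(κ)/2),
    optionally skipping one bit index (for the extrinsic term)."""
    K = len(llrs)
    idx = [j for j in range(K) if j != skip]
    return {x: math.exp(sum(x[j] * llrs[j] / 2.0 for j in idx))
            for x in itertools.product((+1, -1), repeat=K)}

def sbsd_joint_llr(L, P0, P, k):
    """Forward recursion (B.2) plus combination (B.5) for bit k at the last
    time instant (so β ≡ 1). L: per-time lists of K combined bit L-values;
    P0: dict pattern -> prior; P: dict (prev, cur) -> transition probability."""
    patterns = list(itertools.product((+1, -1), repeat=len(L[0])))
    th = theta(L[0])
    alpha = {x: th[x] * P0[x] for x in patterns}              # α_0
    for t in range(1, len(L) - 1):                            # (B.2)
        th = theta(L[t])
        alpha = {x: th[x] * sum(P[(xp, x)] * alpha[xp] for xp in patterns)
                 for x in patterns}
    th_ext = theta(L[-1], skip=k)                             # Θ without bit k
    num = den = 0.0
    for x in patterns:                                        # (B.5), β_τ = 1
        s = th_ext[x] * sum(P[(xp, x)] * alpha[xp] for xp in patterns)
        if x[k] == +1:
            num += s
        else:
            den += s
    return math.log(num / den)
```

As a sanity check, a strongly reliable bit at the previous time instant combined with a persistent Markov model yields a positive joint L-value for the same bit at the current instant, while the bit's own current L-value has no influence on the result.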

ACKNOWLEDGMENTS

The authors would like to acknowledge T. Clevorn and U. von Agris for fruitful comments and inspiring discussions and N. Görtz for providing the SNR-optimized mappings [18]. Furthermore, we would like to thank the anonymous reviewers for their suggestions for potential improvements.

REFERENCES

[1] J. Hagenauer, "Source-controlled channel decoding," IEEE Trans. Commun., vol. 43, no. 9, pp. 2449-2457, 1995.

[2] T. Hindelang, S. Heinen, P. Vary, and J. Hagenauer, "Two approaches to combined source-channel coding: a scientific competition in estimating correlated parameters," Int. J. Electron. Commun. (AEU), vol. 54, no. 6, pp. 364-378, 2000.

[3] T. Hindelang, Source-controlled channel encoding and decoding for mobile communications, Ph.D. thesis, Institute of Communication Engineering, Munich University of Technology, München, Germany, 2001.

[4] T. Fingscheidt, T. Hindelang, R. V. Cox, and N. Seshadri, "Joint source-channel (de-)coding for mobile communications," IEEE Trans. Commun., vol. 50, no. 2, pp. 200-212, 2002.

[5] T. Fingscheidt and P. Vary, "Softbit speech decoding: a new approach to error concealment," IEEE Trans. Speech Audio Processing, vol. 9, no. 3, pp. 240-251, 2001.

[6] N. Görtz, "Iterative source-channel decoding using soft-in/soft-out decoders," in Proc. IEEE International Symposium on Information Theory (ISIT), p. 173, Sorrento, Italy, June 2000.

[7] T. Hindelang, T. Fingscheidt, N. Seshadri, and R. V. Cox, "Combined source/channel (de-)coding: can a priori information be used twice?" in Proc. IEEE International Symposium on Information Theory (ISIT), p. 266, Sorrento, Italy, June 2000.

[8] M. Adrat, P. Vary, and J. Spittka, "Iterative source-channel decoder using extrinsic information from softbit-source decoding," in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '01), vol. 4, pp. 2653-2656, Salt Lake City, Utah, USA, May 2001.

[9] N. Görtz, "On the iterative approximation of optimal joint source-channel decoding," IEEE J. Select. Areas Commun., vol. 19, no. 9, pp. 1662-1670, 2001.

[10] M. Adrat, U. von Agris, and P. Vary, "Convergence behavior of iterative source-channel decoding," in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '03), vol. 4, pp. 269-272, Hong Kong, China, April 2003.

[11] M. Adrat, Iterative Source-Channel Decoding for Digital Mobile Communications, vol. 16 of ABDN, Druck & Verlagshaus Mainz GMBH Aachen, Aachen, Germany, 2003, Ph.D. thesis.

[12] C. Berrou and A. Glavieux, "Near optimum error correcting coding and decoding: turbo-codes," IEEE Trans. Commun., vol. 44, no. 10, pp. 1261-1271, 1996.

[13] J. Hagenauer, E. Offer, and L. Papke, "Iterative decoding of binary block and convolutional codes," IEEE Trans. Inform. Theory, vol. 42, no. 2, pp. 429-445, 1996.

[14] C. Heegard and S. B. Wicker, Turbo Coding, vol. 476 of the Kluwer International Series in Engineering and Computer Science, Kluwer Academic Publishers, Boston, Mass, USA, 1999.

[15] S. ten Brink, "Convergence of iterative decoding," Electronics Letters, vol. 35, no. 10, pp. 806-808, 1999.

[16] S. ten Brink, "Convergence behavior of iteratively decoded parallel concatenated codes," IEEE Trans. Commun., vol. 49, no. 10, pp. 1727-1737, 2001.

[17] P. Robertson, P. Höher, and E. Villebrun, "Optimal and suboptimal maximum a posteriori algorithms suitable for turbo decoding," European Trans. Telecommun., vol. 8, no. 2, pp. 119-125, 1997.

[18] J. Hagenauer and N. Görtz, "The turbo principle in joint source-channel coding," in Proc. IEEE Information Theory Workshop (ITW), pp. 275-278, Paris, France, 2003.

[19] K. Zeger and A. Gersho, "Pseudo-gray coding," IEEE Trans. Commun., vol. 38, no. 12, pp. 2147-2158, 1990.

Marc Adrat received the Dipl.-Ing. degree in electrical engineering and the Dr.-Ing. degree from Aachen University of Technology (RWTH), Germany. His dissertation was entitled "Iterative source-channel decoding for digital mobile communications." Since 1998, he has been with the Institute of Communication Systems and Data Processing, Aachen University of Technology. His work is on combined/joint source and channel (de)coding for wireless communication systems. The main focus is on iterative turbo-like decoding algorithms for error concealment of speech and audio signals. Further research interests are in concepts of mobile radio systems.

Peter Vary received the Dipl.-Ing. degree in electrical engineering in 1972 from the University of Darmstadt, Germany. In 1978, he received the Ph.D. degree from the University of Erlangen-Nuremberg, Germany. In 1980, he joined Philips Communication Industries (PKI), Nuremberg, where he became Head of the Digital Signal Processing Group. Since 1988, he has been Professor at Aachen University of Technology, Aachen, Germany, and Head of the Institute of Communication Systems and Data Processing. His main research interests are speech coding, channel coding, error concealment, adaptive filtering for acoustic echo cancellation and noise reduction, and concepts of mobile radio transmission.