Hindawi Publishing Corporation

EURASIP Journal on Wireless Communications and Networking, Volume 2007, Article ID 74890, 10 pages. doi:10.1155/2007/74890

Research Article

Characterization and Optimization of LDPC Codes for the 2-User Gaussian Multiple Access Channel

Aline Roumy1 and David Declercq2

1 Unité de recherche INRIA Rennes, Irisa, Campus universitaire de Beaulieu, 35042 Rennes Cedex, France

2 ETIS/ENSEA, University of Cergy-Pontoise/CNRS, 6 Avenue du Ponceau, 95014 Cergy-Pontoise, France

Received 25 October 2006; Revised 6 March 2007; Accepted 10 May 2007

Recommended by Tongtong Li

We address the problem of designing good LDPC codes for the Gaussian multiple access channel (MAC). The framework we choose is to design multiuser LDPC codes with joint belief propagation decoding on the joint graph of the 2-user case. Our main result compared to existing work is to express analytically the EXIT functions of the multiuser decoder under two different approximations of the density evolution. This allows us to propose a very simple linear programming optimization for the complicated problem of LDPC code design with joint multiuser decoding. The stability condition for our case is derived and used in the optimization constraints. The codes that we obtain for the 2-user case are quite good for various rates, especially considering the very simple optimization procedure.

Copyright © 2007 A. Roumy and D. Declercq. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. INTRODUCTION

In this paper, we address the problem of designing good LDPC codes for the Gaussian multiple access channel (MAC). The corner points of the capacity region have long been known to be achievable by single-user decoding. This idea was also used to achieve any point of the capacity region by means of rate splitting [1]. Here we focus on the design of multiuser codes, since the key idea for achieving any point in the capacity region of the Gaussian MAC is random coding and optimal joint decoding [2, 3]. A suboptimal but practical approach consists in using irregular low-density parity-check (LDPC) codes decoded with belief propagation (BP) [4-6]. In this paper, we aim at proposing low-complexity LDPC code design methods for the 2-user MAC, where joint decoding is performed with a BP decoder.

Here as in [5], we tackle the difficult and important problem where all users have the same power constraint and the same rate in order to show that the designed multiuser codes can get close to any point of the boundary in the capacity region of the Gaussian MAC. We propose two optimization approaches based on two different approximations of density evolution (DE) in the 2-user MAC factor graph: the first is the Gaussian approximation (GA) of the messages,

and the second is an erasure channel (EC) approximation of the messages. These two approximations, together with constraints specific to the multiuser case, lead to very simple LDPC optimization problems, solved by linear programming. The paper is organized as follows: in Section 2, we present the MAC factor graph and the notation used for the LDPC optimization. In Section 3, we describe our approximations of the mutual information evolution through the central function node, which we call the state-check node. A practical optimization algorithm is presented in Section 4, and finally, in Section 5, we report the thresholds of the optimized codes computed with density evolution and plot some finite-length performance curves.

2. 2-USER MAC FACTOR GRAPH AND DECODING ALGORITHM

In a 2-user Gaussian MAC, we consider 2 independent users x^[1] and x^[2] transmitting to a single receiver. Each user is encoded by a different irregular LDPC code (the LDPC codes could, however, belong to the same code ensemble) with codeword length N, and the received power of user k will be denoted by α_k². The codewords are BPSK modulated, and the

Figure 1: Factor graph of the 2-user synchronous MAC channel: zoom around the state-check node neighborhood.

synchronous discrete model of the transmission at time n is given by, for all 0 ≤ n ≤ N − 1,

    y_n = α_1 x_n^[1] + α_2 x_n^[2] + w_n = [α_1  α_2] z_n + w_n.    (1)

Throughout the paper, neither flat fading nor multipath fading effects are taken into account. More precisely, we will consider the equal-rate/equal-power 2-user MAC channel, that is, R_1 = R_2 = R and α_1² = α_2² = 1. The equal receive power channel can be encountered in practice, for example, if power allocation is performed at the transmitter side. In (1), z_n = [x_n^[1], x_n^[2]]^T is the state vector of the multiuser channel, and w_n is a zero-mean additive white Gaussian noise with variance σ²; its probability density function (pdf) is denoted by N(0, σ²).

In order to jointly decode the two users, we will consider the factor graph [7] of the whole multiuser system and run several iterations of BP [8]. The factor graph of the 2-user LDPC-MAC is composed of the 2 LDPC graphs,1 which are connected through function nodes representing the link between the state vector z_n and the coded symbols x_n^[1] and x_n^[2] of each user. We will call this node the state-check node. Figure 1 shows the state-check node neighborhood and the messages on the edges that are updated during a decoding iteration.

In the following, the nodes of each individual LDPC graph are referred to as variable nodes and check nodes. Let m_ab^[k] denote the message from node a to node b for user k, where (a, b) can either be v for variable node, c for check node, or s for state-check node.

From now on, and as indicated in Figure 1, we will drop the time index n in the equations. All messages in the graph are given in log-density ratio form, log(p(· | x^[k] = +1) / p(· | x^[k] = −1)), except for the probability message P coming from the channel observation y. P is a vector composed of

1 An LDPC graph denotes the Tanner graph [9] that represents an LDPC code.

four probability messages given by

    P = [ p(y | z = [+1 +1]^T), p(y | z = [+1 −1]^T), p(y | z = [−1 +1]^T), p(y | z = [−1 −1]^T) ]^T.    (2)

Since, for the equal power case, p(y | z = [−1 +1]^T) = P_10 = P_01 = p(y | z = [+1 −1]^T), where P_ab denotes the entry of P associated with z = [(−1)^a (−1)^b]^T, the likelihood message P is completely defined by only three values.

At initialization, the log-likelihoods are computed from the channel observations y. The message update rules for the messages m_cv, m_vc, and m_vs follow from usual LDPC BP decoding [7, 10]. We still need to give the update rule through the state-check node to complete the description of the decoding algorithm. The message m_sv^[j] at the output of the state-check node is computed from m_vs^[i], for (i, j) ∈ {(1, 2), (2, 1)}, and from P:

    m_sv^[j] = log( (P_00 e^{m_vs^[i]} + P_01) / (P_10 e^{m_vs^[i]} + P_11) ).    (3)

The channel noise is Gaussian N(0, σ²), and (3) can be rewritten for the equal power case as

    m_sv^[j] = log( (e^{(2y−2)/σ²} e^{m_vs^[i]} + 1) / (e^{m_vs^[i]} + e^{−(2y+2)/σ²}) ),    (4)

where the distribution of y is the Gaussian mixture y ~ (1/4)N(2, σ²) + (1/2)N(0, σ²) + (1/4)N(−2, σ²), since the channel conditional distributions are

    y | (+1, +1) ~ N(2, σ²),
    y | (+1, −1) ~ N(0, σ²),
    y | (−1, +1) ~ N(0, σ²),
    y | (−1, −1) ~ N(−2, σ²).    (5)

Now that we have stated all the message update rules within the whole graph, we need to indicate in which order the message computations are performed. We will consider in this work the following two different schedulings.

(i) Serial scheduling. A decoding iteration for a given user (or "round" [10]) consists in activating all the variable nodes, which send information to the check nodes; then activating all the check nodes, and all the variable nodes again, which now send information to the state-check nodes; and finally activating all the state-check nodes, which send information to the next user. Once this iteration for one user is completed, a new iteration can be performed for the second user. In a serial scheduling, a decoding round for user two is not performed until a decoding round for user one is completed.

(ii) Parallel scheduling. In a parallel scheduling, the decoding rounds (for the two users) are activated simultaneously (in parallel).

3. MUTUAL INFORMATION EVOLUTION THROUGH

THE STATE-CHECK NODE

The DE is a general tool that aims to predict the asymptotic and average behavior of LDPC codes, or more general graphs, decoded with BP. However, DE is computationally intensive, and in order to reduce the computational burden of LDPC code optimization, faster techniques have been proposed, based on approximations of DE by a one-dimensional dynamical system (see [11, 12] and references therein). This is equivalent to considering that the true density of the messages is mapped onto a single parameter, and tracking the evolution of this parameter along the decoding iterations.

It is also known that an accurate single parameter is the mutual information between the variables associated with the variable nodes and their messages [11, 12]. The mutual information evolution describes each computation node in BP-decoding by a mutual information transfer function, which is usually referred to as the EXtrinsic mutual information transfer (EXIT) function. For parity-check codes with binary variables only (as for LDPCs or irregular repeat accumulate codes), the EXIT charts can be expressed analytically [12], leading to very fast and powerful optimization algorithms.

In this section, we will express analytically the EXIT chart of the state-check node update, based on two different approximations. First, we will state a symmetry property for the state-check node; then we will present a Gaussian approximation (GA) of the message densities; and finally we will consider that the messages are the output of an erasure channel (EC).

Similarly to the definition of the messages (see Section 2), we will denote by x_ab the mutual information from node a to node b, where (a, b) can either be v for variable node, c for check node, or s for state-check node.

3.1. Symmetry property

First of all, let us present one of the main differences between the single-user case and the 2-user case. For the single-user, memoryless, binary-input, symmetric-output channel, the transmission of the all-one BPSK sequence is assumed in the DE. The generalization of this property to nonsymmetric channels is not trivial, and some authors have recently addressed this question [13, 14].

In the 2-user case, the channel seen by each user is not symmetric, since it depends on the other user's decoding. However, the asymmetry of the 2-user MAC channel is very specific and much simpler to deal with than the general case. We proceed as explained below.

Let us denote by Ψ_s(y, m) the state-check node map of the BP decoder, that is, the function that takes an input message m from one user and the observation y, and computes the output message that is sent to the second user. The symmetry condition of a state-check node map is defined as follows.

Definition 1 (state-check node symmetry condition). The state-check node update rule is said to be symmetric if sign inversion invariance holds, that is,

    Ψ_s(−y, −m) = −Ψ_s(y, m).    (6)

Note that the update rule defined in (4) is symmetric.

In order to state a symmetry property for the state-check node, we further need to define symmetry conditions for the channel and for the messages passed in the BP decoder.

Definition 2 (symmetry condition for the channel observation). A 2-user MAC is output symmetric if its observation y verifies

    p(y_t | x_t^[1], x_t^[2]) = p(−y_t | −x_t^[1], −x_t^[2]),    (7)

where y_t is the observation at time index t and x_t^[k] is the t-th element of the codeword sent by user k. Note that this condition holds for the 2-user Gaussian MAC.

Definition 3 (symmetry condition for messages). A message is symmetric if

    p(m_t | x_t) = p(−m_t | −x_t),    (8)

where m_t is a message at time index t and x_t is the variable that is estimated by message m_t.

Proposition 1. Consider a state-check node and assume a symmetric channel observation. The entire average behavior of the state-check node can be predicted from its behavior assuming transmission of the all-one BPSK sequence for the output user and of a sequence with half of the symbols fixed at "+1" and half at "−1" for the input user.

Proof. See Appendix B. □

3.2. Gaussian approximation of the state-check messages (GA)

The first approximation of the DE through the state-check node considered in this work assumes that the input message m_vs is Gaussian with density N(μ_vs, 2μ_vs), and that the output message m_sv is a mixture of two Gaussian densities with means μ_sv|(+1,+1) and μ_sv|(+1,−1), and variances equal to twice the means. The state-check node update rule is symmetric, and thus we omit the user index in the notations.

Hence, by noticing that m_sv in (4) can be rewritten as the sum of three functions of Gaussian distributed random variables,

    m_sv = −m_vs + log(1 + e^{m_vs + (2y−2)/σ²}) − log(1 + e^{−m_vs − (2y+2)/σ²}),    (9)

we get the output means

    μ_sv|(+1,+1) = F_{+1,+1}(μ_vs, σ²),
    μ_sv|(+1,−1) = F_{+1,−1}(μ_vs, σ²),    (10)

where

    F_{+1,+1}(μ, σ²) = (1/√π) ∫_{−∞}^{+∞} e^{−z²} log( (1 + e^{2√(μ+2/σ²) z + μ + 2/σ²}) / (1 + e^{−2√(μ+2/σ²) z − μ − 6/σ²}) ) dz − μ,

    F_{+1,−1}(μ, σ²) = (1/√π) ∫_{−∞}^{+∞} e^{−z²} log( (1 + e^{2√(μ+2/σ²) z − μ − 2/σ²}) / (1 + e^{−2√(μ+2/σ²) z + μ − 2/σ²}) ) dz + μ.    (11)

The detailed computation of these functions is reported in Appendix A. Note that these expressions need to be accurately implemented with functional approximations in order to be used efficiently in an optimization procedure.

As mentioned earlier, it is desirable to follow the evolution of the mutual information as a single parameter, so we make use of the usual function that relates the mean to the mutual information: for a message m with conditional pdfs m | x = +1 ~ N(μ, 2μ) and m | x = −1 ~ N(−μ, 2μ), the mutual information is I(x; m) = J(μ), where

    J(μ) = 1 − (1/√π) ∫_{−∞}^{+∞} e^{−z²} log₂(1 + e^{−2√μ z − μ}) dz.    (12)

Note that J(μ) is the capacity of a binary-antipodal input additive white Gaussian noise channel (BIAWGNC) with noise variance 2/μ.
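For numerical work, J and its inverse are typically evaluated by quadrature and root finding; a sketch under the convention of (12) (assuming Python with NumPy and SciPy; the quadrature order and the root-finding bracket are heuristic choices of ours):

```python
import numpy as np
from scipy.optimize import brentq

# Gauss-Hermite rule: sum(w * f(z)) approximates the integral of exp(-z^2) f(z)
_Z, _W = np.polynomial.hermite.hermgauss(60)

def J(mu):
    """Capacity of a BIAWGNC whose LLR is distributed as N(mu, 2 mu), eq. (12)."""
    if mu <= 0:
        return 0.0
    v = np.log2(1.0 + np.exp(-2.0 * np.sqrt(mu) * _Z - mu))
    return 1.0 - np.dot(_W, v) / np.sqrt(np.pi)

def J_inv(x):
    """Numerical inverse of J on (0, 1) by root finding; bracket is heuristic."""
    x = min(max(x, 1e-9), 1.0 - 1e-9)
    return brentq(lambda mu: J(mu) - x, 1e-9, 1e4)
```

J is strictly increasing in μ, which is what makes the bracketed root finding in `J_inv` well posed.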

Now that we have expressed the evolution of the mean of the messages when they are assumed Gaussian, we make use of the function J(μ) in (12) in order to give the evolution of the mutual information through the state-check node under the Gaussian approximation. This corresponds exactly to the EXIT chart [11] of the state-check node update:

    x_sv|(+1,+1) = J(F_{+1,+1}(J^{−1}(x_vs), σ²)),
    x_sv|(+1,−1) = J(F_{+1,−1}(J^{−1}(x_vs), σ²)).    (13)

It follows that

    x_sv = (1/2) x_sv|(+1,+1) + (1/2) x_sv|(+1,−1)
         = (1/2) J(F_{+1,+1}(J^{−1}(x_vs), σ²)) + (1/2) J(F_{+1,−1}(J^{−1}(x_vs), σ²)).    (14)
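Equation (11) is amenable to Gauss-Hermite quadrature; a sketch (assuming Python with NumPy; the function names and the quadrature order are ours). A useful sanity check is F_{+1,−1}(0, σ²) = 0: with no a priori information from the other user, the output mean under the (+1,−1) hypothesis vanishes by symmetry of the integrand.

```python
import numpy as np

_Z, _W = np.polynomial.hermite.hermgauss(60)  # Gauss-Hermite nodes/weights

def F_pp(mu, sigma2):
    """F_{+1,+1}(mu, sigma^2) of eq. (11), by Gauss-Hermite quadrature."""
    s = 2.0 * np.sqrt(mu + 2.0 / sigma2)
    num = np.logaddexp(0.0, s * _Z + mu + 2.0 / sigma2)   # log of the numerator
    den = np.logaddexp(0.0, -s * _Z - mu - 6.0 / sigma2)  # log of the denominator
    return np.dot(_W, num - den) / np.sqrt(np.pi) - mu

def F_pm(mu, sigma2):
    """F_{+1,-1}(mu, sigma^2) of eq. (11)."""
    s = 2.0 * np.sqrt(mu + 2.0 / sigma2)
    num = np.logaddexp(0.0, s * _Z - mu - 2.0 / sigma2)
    den = np.logaddexp(0.0, -s * _Z + mu - 2.0 / sigma2)
    return np.dot(_W, num - den) / np.sqrt(np.pi) + mu
```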

3.3. Erasure channel approximation of the state-check messages (EC)

This approximation assumes that the distribution of the messages at the state-check node input (m_vs, see Figure 1) is the output of a binary erasure channel (BEC). Thus, when the symbol +1 is sent, the LLR distribution consists of two mass points, one at zero and the other at +∞. Let us denote by δ_x a mass point at x. It follows that the LLR distribution when the symbol +1 is sent is

    E_+(e) = e δ_0 + (1 − e) δ_{+∞}.    (15)

Similarly, when −1 is sent, the LLR distribution is E_−(e) = e δ_0 + (1 − e) δ_{−∞}. The mutual information associated with these distributions is the capacity of a BEC:

    x = 1 − e.    (16)

The distribution of the channel observation y is not consistent with the approximation presented here, since y is the output of a ternary-input additive white Gaussian noise channel (TIAWGNC) with input distribution (1/4)δ_{−2} + (1/2)δ_0 + (1/4)δ_2 (because of the symmetry property, see Section 3.1) and noise variance σ². The capacity of such a channel is

    C_TIAWGNC(μ) = 3/2 − (1/(2√π)) ∫_{−∞}^{+∞} e^{−z²} log₂(1 + 2 e^{2√μ z − μ} + e^{4√μ z − 4μ}) dz
                       − (1/(2√π)) ∫_{−∞}^{+∞} e^{−z²} log₂(1 + (1/2) e^{2√μ z − μ} + (1/2) e^{−2√μ z − μ}) dz,    (17)

with μ = 2/σ².

In order to use coherent hypotheses in the erasure approximation of the state-check node, the true channel is mapped onto an erasure channel with the same capacity. The ternary erasure channel (TEC) used for the approximation has input distribution (1/4)δ_{−2} + (1/2)δ_0 + (1/4)δ_2 and erasure probability p. The capacity of such a TEC is

    C_TEC = (3/2)(1 − p).    (18)

Therefore, the true channel with capacity C_TIAWGNC will be approximated by a TEC with erasure probability p = 1 − (2/3) C_TIAWGNC.
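The channel-to-TEC mapping can be checked numerically; a sketch (assuming Python with NumPy; the capacity is computed here by brute-force integration of I(z; y) = h(y) − h(w) rather than through the quadrature form (17), and the grid parameters are heuristic choices of ours):

```python
import numpy as np

def C_tiawgnc(sigma2):
    """Capacity (in bits) of the ternary-input AWGN channel with inputs
    {-2, 0, +2}, priors {1/4, 1/2, 1/4}, and noise variance sigma2."""
    y = np.linspace(-2 - 8 * np.sqrt(sigma2), 2 + 8 * np.sqrt(sigma2), 20001)
    def gauss(m):  # pdf of N(m, sigma2) on the grid
        return np.exp(-(y - m) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
    p_y = 0.25 * gauss(-2.0) + 0.5 * gauss(0.0) + 0.25 * gauss(2.0)
    h_y = -np.sum(p_y * np.log2(p_y)) * (y[1] - y[0])  # differential entropy h(y)
    h_w = 0.5 * np.log2(2 * np.pi * np.e * sigma2)     # h(w) of the Gaussian noise
    return h_y - h_w

def tec_erasure_prob(sigma2):
    """Erasure probability of the capacity-matched TEC: p = 1 - (2/3) C."""
    return 1.0 - (2.0 / 3.0) * C_tiawgnc(sigma2)
```

At high SNR the capacity approaches the input entropy H(1/4, 1/2, 1/4) = 3/2 bits, so p approaches 0, as expected.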

Because of the symmetry property (see Section 3.1), we consider only two cases.

(i) Under the (+1, +1)-hypothesis and by definition of the erasure channel, the observation y is either an erasure with probability (w.p.) p, or y = 2 w.p. (1 − p). The input message corresponds to the symbol +1 and its distribution is E_+(e). The output message corresponds to the symbol +1 and, by applying (3), we obtain the output distribution m_sv|(+1,+1) ~ E_+(p).

(ii) Under the (+1, −1)-hypothesis, the observation of the erasure channel y is either an erasure w.p. p, or y = 0 w.p. (1 − p). The input message corresponds to the symbol −1 and its distribution is E_−(e). The output message corresponds to the symbol +1 and, by applying (3), we obtain the output distribution m_sv|(+1,−1) ~ E_+(1 − (1 − p)(1 − e)).

By applying (16), (18), and the assumption C_TIAWGNC = C_TEC, the mutual information transfer function through the state-check node is thus

    x_sv|(+1,+1) = (2/3) C_TIAWGNC,
    x_sv|(+1,−1) = (2/3) x_vs C_TIAWGNC.    (19)

Figure 2: Mutual information evolution at the state-check node, for E_b/N_0 = 0, 3, and 5 dB. Comparison of the approximation methods with the exact mutual information at the state-check node. The solid lines represent the GA approximation, the broken lines the EC approximation, and plus signs show Monte Carlo simulations.

It follows that

    x_sv = (1/2) x_sv|(+1,+1) + (1/2) x_sv|(+1,−1) = (1/3)(1 + x_vs) C_TIAWGNC.    (20)
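Under the EC model, the state-check EXIT function (19)-(20) reduces to two lines of arithmetic; a sketch (assuming Python; the function name is ours, p is the TEC erasure probability, and x_vs = 1 − e is the incoming mutual information):

```python
def exit_ec(x_vs, p):
    """EC-approximated state-check EXIT function, eqs. (19)-(20).

    Uses 1 - p = (2/3) C_TIAWGNC, so the return value equals
    (1/3) (1 + x_vs) C_TIAWGNC."""
    x_pp = 1.0 - p            # (+1,+1): output known iff y is not erased
    x_pm = (1.0 - p) * x_vs   # (+1,-1): needs both y and the input message
    return 0.5 * x_pp + 0.5 * x_pm
```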

In Figure 2, we compare the two approximations of the state node EXIT function, given by (14) and (20), for three different signal-to-noise ratios. The solid lines show the GA approximation, whereas the broken lines show the EC approximation. We have also indicated with plus signs the mutual information obtained with Monte Carlo simulations. Our numerical results show that the Gaussian (GA) approximation is more attractive, since the mutual information computed under this assumption has the smallest gap to the exact mutual information (Monte Carlo simulation without any approximation).

4. LDPC CODE OPTIMIZATION

Using the EXIT charts for the LDPC codes [12, 15] and for the state-check node under the two considered approximations (14), (20), we are now able to give the evolution of the mutual information x along a whole 2-user decoding iteration. The irregularity of the LDPC code is defined as usual by the degree sequences ({λ_i}_{i≥2}, {ρ_j}_{j≥2}) that represent the fractions of edges connected to variable nodes (resp., check nodes) of degree i (resp., j). As in the single-user case, we wish to have an optimization algorithm that can be solved quickly and efficiently using linear programming. In order to do so, we must make additional assumptions, which are mandatory to ensure that the evolution of the mutual information is linear in the parameters {λ_i}:

{H0} hypothesis of equal LDPC codes. Under this hypothesis, we assume that the 2 LDPC codes belong to the same ensemble ({λ_i}_{i≥2}, {ρ_j}_{j≥2});

{H1} hypothesis without interleaver. Under this hypothesis, each and every state-check node is connected to two variable nodes (one in each LDPC code) having exactly the same degree.

Proposition 2. Under hypotheses H0 and H1, the evolution of the mutual information x_vc at the l-th iteration under the parallel scheduling described in Section 2 is linear in the parameters {λ_i}.

Proof. See Appendix C. □

From Proposition 2, we can now write the evolution of the mutual information for the entire graph. More precisely, by using (12), (14), and (20), we finally obtain (21) for the Gaussian approximation and (22) for the erasure channel approximation:

    x_vc^(l) = Σ_{i≥2} λ_i J( J^{−1}[ (1/2) J(F_{+1,+1}(i J^{−1}(p(x_vc^(l−1))), σ²)) + (1/2) J(F_{+1,−1}(i J^{−1}(p(x_vc^(l−1))), σ²)) ] + (i − 1) J^{−1}(p(x_vc^(l−1))) )
             = F_GA({λ_i}, x_vc^(l−1), σ²),    (21)

    x_vc^(l) = Σ_{i≥2} λ_i J( J^{−1}[ (1/3) C_TIAWGNC (1 + J(i J^{−1}(p(x_vc^(l−1))))) ] + (i − 1) J^{−1}(p(x_vc^(l−1))) )
             = F_EC({λ_i}, x_vc^(l−1), σ²),    (22)

where

    p(x_vc^(l−1)) = 1 − Σ_j ρ_j J( (j − 1) J^{−1}(1 − x_vc^(l−1)) ).    (23)

It is interesting to note that, in (21) and (22), the evolution of the mutual information is indeed linear in the parameters {λ_i} when the {ρ_j} are fixed.

As often presented in the literature, we will only optimize the variable node parameters {λ_i}, for a fixed (carefully chosen) check node degree distribution {ρ_j}. The optimization criterion is to maximize the rate R subject to a vanishing bit error rate. The optimization problem can be written, for a given σ² and a given ρ(x), as follows:

    maximize Σ_i λ_i / i subject to

    (C1) Σ_i λ_i = 1                                [mixing constraint],
    (C2) λ_i ∈ [0, 1]                               [proportion constraint],
    (C3) λ_2 ≤ exp(1/(2σ²)) / Σ_{j≥2} (j − 1) ρ_j   [stability constraint],    (24)
    (C4) F({λ_i}, x, σ²) > x,  ∀x ∈ [0, 1[          [convergence constraint],

where (C3) is the condition for the fixed point to be stable (see Proposition 3), and (C4) corresponds to the convergence to the stable fixed point x = 1, which corresponds to the zero error rate constraint.

Solution to the optimization problem

For a given σ² and a given ρ(x), the cost function and the constraints (C1), (C2), and (C3) are linear in the parameters {λ_i}. The function used in constraint (C4) is either (21) or (22), which are both linear in the parameters {λ_i}. The optimization problem can then be solved for a given ρ(x) by linear programming. We would like to emphasize that the hypotheses H0 and H1 are necessary to obtain a linear problem, which is the key feature of quick and efficient LDPC optimization.

These remarks allow us to propose an algorithm that solves the optimization problem (24) in the class of functions ρ(x) of the type ρ(x) = x^n, for all n > 0.

(i) First, we fix a target SNR (or equivalently a2).

(ii) Then, for each n > 0, we set ρ(x) = x^n and perform linear programming in order to find a set of parameters {λ_i} that maximizes the rate under the constraints (C1) to (C4) of (24). In order to integrate the (C4) constraint in the algorithm, we quantize x; each quantized value of x turns the inequality in (C4) into an additional linear constraint. Hence, for each n, we get a rate.

(iii) Finally, we choose n that maximizes the rate (over all n).

In practice, the search over all possible n is performed up to a maximal value. This is to ensure that the graph remains sparse.
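The three steps above can be sketched end-to-end for the EC approximation (22): for each quantized x, the coefficients multiplying the λ_i are precomputed, and (24) becomes a standard linear program (assuming Python with NumPy and SciPy; the function names, degree set, x-grid, quadrature order, and tolerances are heuristic choices of ours, not the paper's):

```python
import numpy as np
from scipy.optimize import brentq, linprog

# Gauss-Hermite rule for integrals weighted by exp(-z^2)
_Z, _W = np.polynomial.hermite.hermgauss(60)

def J(mu):
    """Mutual information of a consistent Gaussian LLR N(mu, 2 mu), eq. (12)."""
    if mu <= 0:
        return 0.0
    v = np.log2(1.0 + np.exp(-2.0 * np.sqrt(mu) * _Z - mu))
    return 1.0 - np.dot(_W, v) / np.sqrt(np.pi)

def J_inv(x):
    """Numerical inverse of J on (0, 1); the bracket is a heuristic."""
    x = min(max(x, 1e-6), 1.0 - 1e-6)
    return brentq(lambda mu: J(mu) - x, 1e-9, 1e4)

def C_tiawgnc(sigma2):
    """Ternary-input AWGN capacity, by direct integration of h(y) - h(w)."""
    y = np.linspace(-2 - 8 * np.sqrt(sigma2), 2 + 8 * np.sqrt(sigma2), 20001)
    g = lambda m: np.exp(-(y - m) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
    p = 0.25 * g(-2.0) + 0.5 * g(0.0) + 0.25 * g(2.0)
    h_y = -np.sum(p * np.log2(p)) * (y[1] - y[0])
    return h_y - 0.5 * np.log2(2 * np.pi * np.e * sigma2)

def optimize_lambda(sigma2, n, degs=tuple(range(2, 31)) + (100,),
                    grid=np.linspace(0.01, 0.99, 50)):
    """Linear program (24) with rho(x) = x^n, under the EC approximation (22)."""
    C = C_tiawgnc(sigma2)
    A_ub, b_ub = [], []
    for x in grid:                               # one (C4) row per quantized x
        p = 1.0 - J(n * J_inv(1.0 - x))          # eq. (23), check-node side
        mu0 = J_inv(p)
        row = [-J(J_inv((C / 3.0) * (1.0 + J(i * mu0))) + (i - 1) * mu0)
               for i in degs]                    # minus the coefficient of lambda_i
        A_ub.append(row)
        b_ub.append(-(x + 1e-6))
    A_ub.append([1.0 if i == 2 else 0.0 for i in degs])             # (C3)
    b_ub.append(min(1.0, np.exp(1.0 / (2.0 * sigma2)) / n))
    res = linprog([-1.0 / i for i in degs], A_ub=A_ub, b_ub=b_ub,
                  A_eq=[[1.0] * len(degs)], b_eq=[1.0], bounds=(0.0, 1.0))
    rate = None
    if res.success:
        rate = 1.0 - (1.0 / (n + 1)) / sum(l / i for l, i in zip(res.x, degs))
    return res, rate
```

For example, `optimize_lambda(0.5, 9)` searches the ensemble with ρ(x) = x^9 at σ² = 0.5; n would then be swept as in step (iii).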

Stability of the solution

Finally, the stability condition of the fixed point for the 2-user MAC channel is given in the following proposition.

Proposition 3. The local stability condition of the DE for the 2-user Gaussian MAC is the same as that of the single-user case:

    λ_2 < exp(1/(2σ²)) / Σ_{j≥2} (j − 1) ρ_j.    (25)

The proof is given in Appendix D.

5. RESULTS

In this section we present results for codes designed according to the two methods presented in Section 3, for rates from 0.3 to 0.6, and we compare the methods on the basis of the true thresholds obtained by DE and finite length simulations.

Table 1 shows the performance of LDPC codes optimized with the Gaussian approximation, and Table 2 shows the performance of LDPC codes designed according to the erasure channel approximation. In both tables, the code rate, the check node degree distribution ρ(x) = Σ_{j≥2} ρ_j x^{j−1}, the optimized parameters {λ_i}, and the gap to the 2-user Gaussian MAC Shannon limit are indicated.

We can see that the LDPC codes optimized for the 2-user MAC channel are indeed very good and have decoding thresholds very close to the capacity. Our numerical results show that the Gaussian a priori approximation is more attractive, since the codes designed under this assumption have the smallest gap to the Shannon limit.

An interesting result is that the codes obtained for R = 0.3 and R = 0.6 are worse than the one obtained for R = 0.5. In our opinion, the two cases do not stem from the same cause. For small rates (R = 0.3), the multiuser problem is easy to solve because the system load (sum rate) is lower than 1, but the approximations of DE become less and less accurate as the rate decreases; R = 0.3 thus gives worse codes than R = 0.5 because of the LDPC part of the multiuser graph. For larger rates (R = 0.6), the DE approximations are fairly accurate, but the multiuser problem we address is more difficult, as the system load is larger than 1 (equal to 1.2); R = 0.6 thus gives worse codes than R = 0.5 because of the multiuser part of the graph (the state-check node).

In order to verify the asymptotic results obtained with DE, we have made extensive simulations at the finite length N = 50000. The codes have been built with an efficient parity-check matrix construction: since the progressive edge growth algorithm [16] tends to be inefficient at very large code lengths, we used the ACE algorithm proposed in [17], which helps to prevent the appearance of small cycles involving degree-two bit nodes. The ACE algorithm generally greatly lowers the error floor of very irregular LDPC codes (like the ones in Tables 1 and 2).

Figure 3 shows the simulation results for three rates, R ∈ {0.3, 0.5, 0.6}, and for the two different approximations of the state-check node EXIT function presented in this paper, GA and EC. The curves are in accordance with the threshold computations, except for the fact that codes optimized with the EC approximation tend to be better than the GA codes at rate R = 0.3. We also confirm the behavior discussed above, in that the codes with R = 0.5 are closer to the Shannon limit than the codes with R = 0.3 and R = 0.6.

Table 1: Optimized LDPC codes for the 2-user Gaussian channel obtained with the Gaussian approximation of the state-check node. The distance between the (E_b/N_0) threshold δ (evaluated with true DE) and the Shannon limit δ_l is given in dB.

Rate 0.3, ρ(x) = x^7:
  λ_2 = 2.749809e-01, λ_3 = 2.040936e-01, λ_4 = 5.708851e-03, λ_5 = 1.817382e-02, λ_6 = 1.891399e-02, λ_7 = 2.682255e-02, λ_8 = 7.317063e-02, λ_13 = 1.130643e-01, λ_100 = 2.650713e-01; δ − δ_l = 0.22

Rate 0.4, ρ(x) = x^8:
  λ_2 = 2.786702e-01, λ_3 = 2.306721e-01, λ_9 = 5.059420e-02, λ_10 = 4.229097e-04, λ_12 = 1.608676e-01, λ_100 = 2.787730e-01; δ − δ_l = 0.15

Rate 0.5, ρ(x) = x^9:
  λ_2 = 3.170178e-01, λ_3 = 2.312804e-01, λ_17 = 4.241393e-02, λ_18 = 1.714436e-01, λ_100 = 2.378443e-01; δ − δ_l = 0.19

Rate 0.6, ρ(x) = x^10:
  λ_2 = 4.393437e-01, λ_3 = 1.305465e-01, λ_20 = 2.508237e-02, λ_21 = 2.462773e-01, λ_100 = 1.587501e-01; δ − δ_l = 0.52

Table 2: Optimized LDPC codes for the 2-user Gaussian channel obtained with the erasure channel approximation of the state-check node. The distance between the (E_b/N_0) threshold δ (evaluated with true DE) and the Shannon limit δ_l is given in dB.

Rate 0.3, ρ(x) = x^7:
  λ_2 = 2.762791e-01, λ_3 = 2.321906e-01, λ_9 = 7.870900e-02, λ_10 = 1.077795e-01, λ_100 = 3.050418e-01; δ − δ_l = 0.38

Rate 0.4, ρ(x) = x^8:
  λ_2 = 2.792405e-01, λ_3 = 2.456371e-01, λ_13 = 1.020663e-01, λ_14 = 8.130383e-02, λ_100 = 2.917522e-01; δ − δ_l = 0.26

Rate 0.5, ρ(x) = x^9:
  λ_2 = 3.165084e-01, λ_3 = 2.339989e-01, λ_18 = 4.285469e-02, λ_19 = 1.713483e-01, λ_100 = 2.352897e-01; δ − δ_l = 0.21

Rate 0.6, ρ(x) = x^10:
  λ_2 = 4.388191e-01, λ_3 = 1.303074e-01, λ_20 = 1.649224e-01, λ_21 = 1.093493e-01, λ_100 = 1.566018e-01; δ − δ_l = 0.59

6. CONCLUSION

This paper has tackled the optimization of LDPC codes for the 2-user Gaussian MAC and has shown that it is possible to design good irregular LDPC codes with very simple techniques, the optimization problem being solved by linear programming. We have proposed 2 different analytical approximations of the state-check node update: one based on a Gaussian approximation and a very simple one based on an erasure channel approach. The codes obtained have decoding thresholds as close as 0.15 dB to the Shannon limit, and can be used as initial codes for more complex optimization techniques based on true density evolution. Future work will deal with the generalization of our approach to more than two users and/or users with different powers.

APPENDICES

A. COMPUTATION OF FUNCTIONS F+1,+1 AND F+1,-1

We proceed to compute the state-check node update rule for the mean of the messages.

Let us first consider the hypothesis z = [+1, +1]^T. Under the Gaussian assumption, the conditional input distributions are

    y | (+1,+1) ~ N(2, σ²),
    m_vs | (+1,+1) ~ N(μ_vs, 2μ_vs).    (A.1)

Therefore

    m_vs + (2y − 2)/σ² | (+1,+1) ~ N(μ_vs + 2/σ², 2μ_vs + 4/σ²),
    −m_vs − (2y + 2)/σ² | (+1,+1) ~ N(−μ_vs − 6/σ², 2μ_vs + 4/σ²).    (A.2)

Since, for a Gaussian random variable x ~ N(ν + a, 2ν + b), where a and b are real-valued constants,

    E[log(1 + e^{±x})] = (1/√π) ∫_{−∞}^{+∞} e^{−z²} log(1 + e^{±(√(4ν+2b) z + ν + a)}) dz,    (A.3)

Figure 3: Simulation results (BER versus E_b/N_0 in dB) for the optimized LDPC codes given in Tables 1 and 2, for the rates R ∈ {0.3, 0.5, 0.6} (circles: EC approximation; triangles: GA approximation). The codeword length is N = 50000. The maximum number of iterations is set to 200. For comparison, the Shannon limits δ_l for the three considered rates are indicated.

and by using (9), we get

    E[m_sv | z = [+1,+1]^T]
    = −μ_vs + (1/√π) ∫_{−∞}^{+∞} e^{−z²} log( (1 + e^{2√(μ_vs+2/σ²) z + μ_vs + 2/σ²}) / (1 + e^{−2√(μ_vs+2/σ²) z − μ_vs − 6/σ²}) ) dz
    = F_{+1,+1}(μ_vs, σ²).    (A.4)

Similarly, we get F_{+1,−1}(μ_vs, σ²).

B. PROOF OF PROPOSITION 1

To prove Proposition 1, we first need to show the following lemmas.

Lemma 1. Consider a state-check node. Assume a symmetric input message and a symmetric channel observation. The output message is symmetric.

Proof of Lemma 1. We consider a state-check node that verifies the symmetry condition (see Definition 1). Without loss of generality, we can assume k to be the output user and j the input user.

Let y (resp., z) denote the observation vector when the codewords x^[k], x^[j] (resp., −x^[k], −x^[j]) are sent. Now note that a symmetric-output 2-user MAC can be modeled as follows (see [10, Lemma 1]):

    y_t = −z_t,    (B.1)

since p(y_t | x_t^[k], x_t^[j]) = p(−y_t | −x_t^[k], −x_t^[j]) and since we are interested in the performance of the BP algorithm, that is, in the densities of the messages.

Similarly, we denote by m_t^[j], m_t^[k] (resp., r_t^[j], r_t^[k]) the input and output messages of the state-check node at position t when the codewords x^[k], x^[j] (resp., −x^[k], −x^[j]) are sent.

Let us assume a symmetric input message, that is, p(m_t^[j] | x_t^[j]) = p(−m_t^[j] | −x_t^[j]). Here again, we can model this input message as

    m_t^[j] = −r_t^[j].    (B.2)

The state-check node update rule is denoted by Ψ_s(y_t, m_t^[j]).

The output message verifies

    m_t^[k] = Ψ_s(y_t, m_t^[j]) = Ψ_s(−z_t, −r_t^[j]) = −Ψ_s(z_t, r_t^[j]) = −r_t^[k],    (B.3)

where the second equality is due to the symmetry conditions of the channel and the input message, and the third equality follows from the symmetry condition of the state-check node map.

This can be rewritten as

    p(m_t^[k] | x_t^[k], x_t^[j]) = p(−m_t^[k] | −x_t^[k], −x_t^[j]),    (B.4)

and therefore

    p(m_t^[k] | x_t^[k]) = p(−m_t^[k] | −x_t^[k]),    (B.5)

by marginalizing the probability with respect to x_t^[j] using (B.4).

Equation (B.5) implies that, with a symmetric observation and a symmetric input message, the message at the state-check node output is also symmetric: the symmetry is conserved through the state-check node, which completes the proof of Lemma 1. □

Lemma 2. Consider a state-check node and assume a symmetric channel observation. At any iteration, the input and output messages of the state-check node are symmetric.

Proof of Lemma 2. Lemma 1 shows that the state-check node conserves the symmetry condition, and [10, Lemma 1] shows the conservation of the symmetry condition of the messages through the variable and check nodes. At initialization, the channel observation is symmetric; therefore, a proof by induction shows the conservation of the symmetry property at any iteration of the BP decoder. □

Proof of Proposition 1. A consequence of Lemma 1 is that the number of cases that need to be considered to determine the entire average behavior of the state-check node can be divided by a factor of 2. We can assume that the all-one sequence is sent for the output user. However, all the sequences of the input user need to be considered, and therefore, on average, we can assume an input sequence with half of the symbols fixed at "+1" and half at "−1". □

C. PROOF OF PROPOSITION 2

Lemma 3. Under the parallel scheduling assumption described in Section 2 and by using hypothesis H0 (see Section 4), the entire behavior of the BP decoder can be predicted with one decoding iteration (i.e., half of a round).

Proof of Lemma 3. Under the parallel scheduling assumption described in Section 2, two decoding iterations (one for each user) are completed simultaneously. Hence by using hypothesis H0 (same code family for both users), the two decoding iterations are equivalent in the sense that they provide messages with the same distribution. This can be easily shown by induction. It follows that a whole round is entirely determined by only one decoding iteration (i.e., half of a round). □

Therefore in the following we omit the user index.

Proof of Proposition 2. We now proceed to compute the evolution of the mutual information through all nodes of the graph. By assuming that the distributions at any iteration are Gaussian, we obtain, similarly to method 1 in [12], the mutual information evolution equations

$$
\begin{aligned}
x_{vc}^{(\ell)} &= \sum_{i} \lambda_i \, J\Big( J^{-1}\big(x_{sv}^{(\ell-1)}\big) + (i-1)\, J^{-1}\big(x_{cv}^{(\ell-1)}\big) \Big), \\
x_{cv}^{(\ell)} &= 1 - \sum_{j} \rho_j \, J\Big( (j-1)\, J^{-1}\big(1 - x_{vc}^{(\ell)}\big) \Big), \\
x_{vs}^{(\ell)} &= \sum_{i} \tilde{\lambda}_i \, J\Big( i \, J^{-1}\big(x_{cv}^{(\ell)}\big) \Big), \\
x_{sv}^{(\ell)} &= f\big(x_{vs}^{(\ell)}, \sigma^2\big),
\end{aligned}
\tag{C.1}
$$

where $\tilde{\lambda}_i$ denotes the fraction of variable nodes of degree $i$, $\tilde{\lambda}_i = (\lambda_i/i)/\big(\sum_j \lambda_j/j\big)$, and where

$$
f\big(x_{vs}, \sigma^2\big) = \tfrac{1}{2}\, x_{sv}\big|_{(+1,+1)} + \tfrac{1}{2}\, x_{sv}\big|_{(+1,-1)}
$$

with xsv defined either in (14) or (20), depending on the approach used.

First notice that this system is not linear in the parameters $\{\lambda_i\}$. But under hypothesis H1, the input message $m_{sv}$ of a variable node of degree $i$ results from a variable node of the same degree. It follows that the third equation in (C.1) reduces to

$$
x_{vs}^{(\ell)} = J\big( i \, J^{-1}\big(x_{cv}^{(\ell)}\big) \big).
$$

Finally, the global recursion in the form (21)-(22) is obtained by combining all four equations, and this global recursion is linear in the parameters $\{\lambda_i\}$. □
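The recursion above is built entirely from the $J$ function (the mutual information between a BPSK symbol and a consistent Gaussian LLR $L \sim \mathcal{N}(m, 2m)$) and its inverse. As an illustrative sketch, not taken from the paper, here is a minimal numerical implementation of $J$, $J^{-1}$, and one check-node update from (C.1); the grid size, bisection bounds, and the single check degree $j = 6$ are arbitrary choices.

```python
import numpy as np

# J(m): mutual information carried by a consistent Gaussian LLR L ~ N(m, 2m),
# evaluated by numerical integration on a fixed grid.
def J(m, n=2001):
    if m <= 0:
        return 0.0
    s = np.sqrt(2.0 * m)
    l = np.linspace(m - 10.0 * s, m + 10.0 * s, n)
    pdf = np.exp(-((l - m) ** 2) / (4.0 * m)) / np.sqrt(4.0 * np.pi * m)
    dl = l[1] - l[0]
    return 1.0 - np.sum(pdf * np.log2(1.0 + np.exp(-l))) * dl

# J is increasing in m, so J^{-1} can be computed by bisection.
def J_inv(x, lo=1e-9, hi=500.0):
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if J(mid) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# One check-node update from (C.1), specialized to a single check degree j:
# x_cv = 1 - J((j - 1) * J_inv(1 - x_vc))
x_vc, j = 0.5, 6
x_cv = 1.0 - J((j - 1) * J_inv(1.0 - x_vc))
```

Because the arguments of $J^{-1}$ add (means of consistent Gaussians add under convolution), the variable-node updates in (C.1) reduce to sums inside a single call to $J$.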

D. PROOF OF PROPOSITION 3

Similarly to the definition of the messages (see Section 2) and of the mutual information (see Section 3), we denote by $P_{ab}^{(\ell)}$ the distribution of the messages from node $a$ to node $b$ at iteration $\ell$, where $a$ and $b$ can be $v$ (variable node), $c$ (check node), or $s$ (state-check node).

We follow in the footsteps of [18] and analyze the local stability of the zero-error-rate fixed point by a small perturbation approach. Let us denote by $\Delta_0$ the Dirac at 0, that is, the distribution with 0.5 BER, and by $\Delta_\infty$ the Dirac at $+\infty$, that is, the distribution with zero BER when the symbol "+1" is sent.

From Lemma 3 (see Appendix C) we know that only half of a complete round needs to be performed in order to get the entire behavior of the BP decoder. All distributions of the DE are conditional densities of the messages given that the symbol sent is +1. From the symmetry property of the variable and check nodes, the transformation of the distributions can be performed under the assumption that the all-one sequence is sent. However, for the state-check node, different cases will be considered as detailed below.

We consider the DE recursion with state variable $P_{vc}^{(\ell)}$. In order to study the local stability of the fixed point $\Delta_\infty$, we initialize the DE recursion at the point

$$
P_{vc}^{(0)} = (1 - 2\epsilon)\Delta_\infty + 2\epsilon\Delta_0
\tag{D.1}
$$

for some small $\epsilon > 0$, and we apply one iteration of the DE recursion. Following [18] (and also [12]), the distribution $P_{cv}^{(0)}$ can be computed, which leads to $P_{vs}^{(0)}$ as

$$
P_{vs}^{(0)} = \Delta_\infty + O(\epsilon^2).
\tag{D.2}
$$

For the sake of brevity, we omit the now-well-known step-by-step derivation and focus on the transformation at the state-check node. Note that (D.2) holds both with and without hypothesis H1 (without interleaver), since it follows from the fact that an $i$-fold convolution of the distribution $P_{cv}^{(0)}$ is performed with $i \geq 2$ in both cases.

From the symmetry property of the state-check node (see Proposition 1), the entire behavior at a state-check node can be predicted under the two hypotheses called (+1, +1) and (+1, −1), that is, when the output symbol is +1 and the input symbol is either +1 or −1, each with probability 1/2. In the following, we seek the output distribution $P_{sv}^{(0)}$ for a given input distribution $P_{vs}^{(0)}$ (the conditional distribution given that the input symbol is +1) and a given channel distribution.

Hypothesis (+1, +1) w.p. 1/2. From (D.2) and (5) we get

$$
m^{(0)} \sim P_{vs}^{(0)} = \Delta_\infty + O(\epsilon^2), \qquad y \sim \mathcal{N}(2, \sigma^2).
\tag{D.3}
$$

Hence, by applying (4), we have

$$
m_{sv}^{(0)} = \frac{2y - 2}{\sigma^2} \sim \mathcal{N}\!\left(\frac{2}{\sigma^2}, \frac{4}{\sigma^2}\right).
\tag{D.4}
$$

Hypothesis (+1, −1) w.p. 1/2. From (D.2) and from the symmetry property of the input message at the state-check node, we have

$$
m^{(0)} \sim P_{vs}^{(0)}(-z) = \Delta_{-\infty} + O(\epsilon^2),
\tag{D.5}
$$

and from (5) we get

$$
y \sim \mathcal{N}(0, \sigma^2).
\tag{D.6}
$$

Hence, by applying (4), we have

$$
m_{sv}^{(0)} = \frac{2y + 2}{\sigma^2} \sim \mathcal{N}\!\left(\frac{2}{\sigma^2}, \frac{4}{\sigma^2}\right).
\tag{D.7}
$$

Combining (D.4) and (D.7), we obtain

$$
P_{sv}^{(0)} = \mathcal{N}\!\left(\frac{2}{\sigma^2}, \frac{4}{\sigma^2}\right) + O(\epsilon^2).
\tag{D.8}
$$
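This limiting distribution can be verified numerically with the standard BP update at the 2-user Gaussian MAC node, whose limits for an input message $m \to \pm\infty$ are exactly (D.4) and (D.7). The sketch below is illustrative only: the names, the finite stand-in `BIG` for an infinite LLR, and the choice $\sigma^2 = 1$ are assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2, n = 1.0, 200_000
BIG = 30.0  # finite stand-in for a +/- infinite input LLR

x2 = rng.choice([-1.0, 1.0], size=n)  # input user's symbol, +1 or -1 w.p. 1/2
y = 1.0 + x2 + rng.normal(0.0, np.sqrt(sigma2), size=n)  # output user sends +1
m_in = BIG * x2  # perfect input message (Delta at +/- infinity)

def loglik(y, x1, x2):
    # log p(y | x1, x2), up to a constant, for y = x1 + x2 + Gaussian noise
    return -((y - x1 - x2) ** 2) / (2.0 * sigma2)

# BP message from the state-check node to the output user:
# log [ sum_{x2} p(y|+1,x2) q(x2) / sum_{x2} p(y|-1,x2) q(x2) ],  q(+1)/q(-1) = e^{m_in}
num = np.logaddexp(m_in + loglik(y, +1.0, +1.0), loglik(y, +1.0, -1.0))
den = np.logaddexp(m_in + loglik(y, -1.0, +1.0), loglik(y, -1.0, -1.0))
m_out = num - den

print(m_out.mean(), m_out.var())  # close to 2/sigma2 and 4/sigma2
```

The empirical mean and variance of `m_out` match the $\mathcal{N}(2/\sigma^2, 4/\sigma^2)$ law in (D.8), mixing the two hypotheses with probability 1/2 each.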

It follows that at convergence, the channel seen by one user is $P_{sv}^{(0)}$, which is exactly the LLR distribution of a BIAWGNC with noise variance $\sigma^2$. Hence at convergence the DE recursion is equivalent to the single-user case, and the stability condition is therefore [18]

$$
\lambda_2 \sum_{j} (j-1)\rho_j < \exp\!\left(\frac{1}{2\sigma^2}\right).
\tag{D.9}
$$
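The stability condition is cheap to evaluate for a candidate degree distribution and can be used directly as a linear constraint in the optimization. A minimal sketch, with a hypothetical helper name and illustrative degree profiles:

```python
import math

def is_stable(lambda2, rho, sigma2):
    """Check lambda_2 * rho'(1) < exp(1 / (2 * sigma^2)), where
    rho'(1) = sum_j (j - 1) * rho_j over the check-degree edge fractions."""
    rho_prime_1 = sum((j - 1) * rho_j for j, rho_j in rho.items())
    return lambda2 * rho_prime_1 < math.exp(1.0 / (2.0 * sigma2))

print(is_stable(0.3, {6: 1.0}, 1.0))  # True:  0.3 * 5 = 1.5 < e^0.5 ~ 1.649
print(is_stable(0.5, {6: 1.0}, 1.0))  # False: 0.5 * 5 = 2.5 > e^0.5
```

Since the condition is linear in $\lambda_2$ for a fixed check profile, it fits naturally among the linear programming constraints used for the code design.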

REFERENCES

[1] B. Rimoldi and R. Urbanke, "A rate-splitting approach to the Gaussian multiple-access channel," IEEE Transactions on Information Theory, vol. 42, no. 2, pp. 364-375, 1996.

[2] R. Ahlswede, "Multi-way communication channels," in Proceedings of the 2nd IEEE International Symposium on Information Theory (ISIT '71), pp. 23-52, Tsahkadsor, Armenian SSR, USSR, 1971.

[3] H. Liao, Multiple access channels, Ph.D. thesis, University of Hawaii, Honolulu, Hawaii, USA, 1972.

[4] R. Palanki, A. Khandekar, and R. McEliece, "Graph based codes for synchronous multiple access channels," in Proceedings of the 39th Annual Allerton Conference on Communication, Control, and Computing, Monticello, Ill, USA, October 2001.

[5] A. Amraoui, S. Dusad, and R. Urbanke, "Achieving general points in the 2-user Gaussian MAC without time-sharing or rate-splitting by means of iterative coding," in Proceedings of IEEE International Symposium on Information Theory (ISIT '02), p. 334, Lausanne, Switzerland, June-July 2002.

[6] A. De Baynast and D. Declercq, "Gallager codes for multiple user applications," in Proceedings of IEEE International Symposium on Information Theory (ISIT '02), p. 335, Lausanne, Switzerland, June-July 2002.

[7] F. R. Kschischang, B. J. Frey, and H.-A. Loeliger, "Factor graphs and the sum-product algorithm," IEEE Transactions on Information Theory, vol. 47, no. 2, pp. 498-519, 2001.

[8] J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, Morgan Kaufmann, San Mateo, Calif, USA, 1988.

[9] R. M. Tanner, "A recursive approach to low complexity codes," IEEE Transactions on Information Theory, vol. 27, no. 5, pp. 533-547, 1981.

[10] T. J. Richardson and R. Urbanke, "The capacity of low-density parity-check codes under message-passing decoding," IEEE Transactions on Information Theory, vol. 47, no. 2, pp. 599-618, 2001.

[11] S. Ten Brink, "Designing iterative decoding schemes with the extrinsic information transfer chart," International Journal of Electronics and Communications, vol. 54, no. 6, pp. 389-398, 2000.

[12] A. Roumy, S. Guemghar, G. Caire, and S. Verdu, "Design methods for irregular repeat-accumulate codes," IEEE Transactions on Information Theory, vol. 50, no. 8, pp. 1711-1727, 2004.

[13] A. Bennatan and D. Burshtein, "On the application of LDPC codes to arbitrary discrete-memoryless channels," IEEE Transactions on Information Theory, vol. 50, no. 3, pp. 417-438, 2004.

[14] C.-C. Wang, S. R. Kulkarni, and H. V. Poor, "Density evolution for asymmetric memoryless channels," IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4216-4236, 2005.

[15] S.-Y. Chung, T. J. Richardson, and R. Urbanke, "Analysis of sum-product decoding of low-density parity-check codes using a Gaussian approximation," IEEE Transactions on Information Theory, vol. 47, no. 2, pp. 657-670, 2001.

[16] X.-Y. Hu, E. Eleftheriou, and D.-M. Arnold, "Progressive edge-growth Tanner graphs," in Proceedings of IEEE Global Telecommunications Conference (GLOBECOM '01), vol. 2, pp. 995-1001, San Antonio, Tex, USA, November 2001.

[17] T. Tian, C. Jones, J. D. Villasenor, and R. D. Wesel, "Construction of irregular LDPC codes with low error floors," in Proceedings of IEEE International Conference on Communications (ICC '03), vol. 5, pp. 3125-3129, Anchorage, Alaska, USA, May 2003.

[18] T. J. Richardson, M. A. Shokrollahi, and R. Urbanke, "Design of capacity-approaching irregular low-density parity-check codes," IEEE Transactions on Information Theory, vol. 47, no. 2, pp. 619-637, 2001.
