
EURASIP Journal on Advances in Signal Processing (a SpringerOpen journal). Research, Open Access.

A regularized matrix factorization approach to induce structured sparse-low-rank solutions in the EEG inverse problem

Jair Montoya-Martínez1*, Antonio Artés-Rodríguez1, Massimiliano Pontil2 and Lars Kai Hansen3

Abstract

We consider the estimation of the Brain Electrical Sources (BES) matrix from noisy electroencephalographic (EEG) measurements, commonly known as the EEG inverse problem. We propose a new method to induce neurophysiologically meaningful solutions, which takes into account the smoothness, structured sparsity, and low rank of the BES matrix. The method is based on the factorization of the BES matrix as a product of a sparse coding matrix and a dense latent source matrix. This structure is enforced by minimizing a regularized functional that includes the $\ell_{21}$-norm of the coding matrix and the squared Frobenius norm of the latent source matrix. We develop an alternating optimization algorithm to solve the resulting nonsmooth-nonconvex minimization problem. We analyze the convergence of the optimization procedure, and we compare, under different synthetic scenarios, the performance of our method with respect to the Group Lasso and Trace Norm regularizers when they are applied directly to the target matrix.

Keywords: Structured sparsity; Low rank; Matrix factorization; Regularization; Nonsmooth-nonconvex optimization

1 Introduction

The solution of the electroencephalographic (EEG) inverse problem to obtain functional brain images is of high value for neurological research and medical diagnosis. It involves the estimation of the Brain Electrical Sources (BES) distribution from noisy EEG measurements, whose relation is modeled according to the linear model

Y = AS + E, (1)

where $Y \in \mathbb{R}^{M\times T}$ and $A \in \mathbb{R}^{M\times N}$ are known and represent, respectively, the EEG measurements matrix and the forward operator (a.k.a. lead field matrix), $S \in \mathbb{R}^{N\times T}$ denotes the BES matrix, and $E \in \mathbb{R}^{M\times T}$ is a noise matrix. $M$ denotes the number of EEG electrodes, $N$ is the number of brain electrical sources, and $T$ is the number of time instants.
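To make the dimensions concrete, here is a minimal numpy sketch of model (1), using the sizes from the experiments in Section 7; the random lead field and source waveforms are illustrative stand-ins, not the head model or evoked data used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, T = 128, 413, 161              # electrodes, sources, time instants

A = rng.standard_normal((M, N))      # stand-in lead field (a real A comes from a head model)
S = np.zeros((N, T))                 # BES matrix: row-sparse ground truth
S[:12] = rng.standard_normal((1, T)) * rng.uniform(0.5, 1.0, (12, 1))  # 12 active, linearly related rows

E = 0.1 * rng.standard_normal((M, T))   # additive noise
Y = A @ S + E                           # EEG measurements, Eq. (1)
print(Y.shape, np.linalg.matrix_rank(S))  # (128, 161), rank 1 here
```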

*Correspondence: jmontoya@tsc.uc3m.es. 1Department of Signal Processing and Communications, Universidad Carlos III de Madrid, Avda. Universidad 30, Leganés, Madrid 28911, Spain. Full list of author information is available at the end of the article.

This estimation problem is very challenging: $N \gg M$, and the existence of silent BES (BES that produce non-measurable fields on the scalp surface) implies that the EEG inverse problem has infinitely many solutions: a silent BES can always be added to a solution of the inverse problem without affecting the EEG measurements. For all these reasons, the EEG inverse problem is an underdetermined, ill-posed problem [1-4].

A classical approach to solve an ill-posed problem is to use regularization theory, which involves the replacement of the original ill-posed problem with a 'nearby' well-posed problem whose solution approximates the required solution [5]. Solutions developed by this theory are stated in terms of a regularization function, which helps us to select, among the infinitely many solutions, the one that best fulfills a prescribed constraint (e.g., smoothness, sparsity, or low rank). To define the constraint, we can use mathematical restrictions (minimum norm estimates) or anatomical, physiological, and functional prior information. Some examples of useful neurophysiological information are [1,6]: the irrotational character of the brain current sources, the (smooth) dynamics of the neural signals, the clusters formed by neighboring or functionally


related BES, and the smoothness and focality of the electromagnetic fields generated and propagated within the volume conductor media (brain cortex, skull, and scalp).

Several regularization functions have been proposed in the EEG community: Hämäläinen and Ilmoniemi in [7] proposed a squared Frobenius norm penalty ($\|S\|_F^2$), which they named Minimum Norm Estimate (MNE). This regularization function usually induces solutions that spread over a considerable part of the brain. Uutela et al. in [8] proposed an $\ell_1$-norm penalty ($\|S\|_1$). They named their approach Minimum Current Estimate (MCE). This penalty function promotes solutions that tend to be scattered around the true sources. Mixed $\ell_1\ell_2$-norm penalties have also been proposed in the framework of time basis, time-frequency dictionary, and spatial basis decompositions. These mixed norm approaches induce structured sparse solutions and depend on decomposing the BES signals as linear combinations of multiple basis functions, e.g., Ou et al. in [9] proposed the use of temporal basis functions obtained with singular value decomposition (SVD), Gramfort et al. in [10,11] proposed the use of time-frequency Gabor dictionaries, and Haufe et al. in [12] proposed the use of spatial Gaussian basis functions. For a more detailed overview of inverse methods for EEG, see [2,3,13] and references therein. For a more detailed overview of regularization functions applied to structured sparsity problems, see [14-16] and references therein.

All of these regularizers try to induce neurophysiologically meaningful solutions, which take into account the smoothness and structured sparsity of the BES matrix: during a particular cognitive task, only the BES related to the brain area involved in such a task will be active, and their corresponding time evolution will vary smoothly; that is, the BES matrix will have few nonzero rows, and in addition, its columns will vary smoothly. In this paper, we propose a regularizer that takes into account not only the smoothness and structured sparsity of the BES matrix but also its low rank, capturing in this way the linear relation between the active sources and their corresponding neighbors. In order to do so, we propose a new method based on matrix factorization and regularization, with the aim of recovering the latent structure of the BES matrix. In the factorization, the first matrix, which acts as a coding matrix, is penalized using the $\ell_{21}$-norm, and the second one, which acts as a dense, full rank latent source matrix, is penalized using the squared Frobenius norm.

In our approach, the resulting optimization problem is nonsmooth and nonconvex. A standard approach to deal with the nonsmoothness introduced by the nonsmooth regularizers mentioned above is to reformulate the regularization problem as a second-order cone programming (SOCP) problem [12] and use interior point-based solvers. However, interior point-based methods cannot handle large scale problems, which is the case of large EEG inverse problems involving thousands of brain sources. Another approach is to try to solve the nonsmooth problem directly, using general nonsmooth optimization methods, for instance, the subgradient method [17]. This method can be used if a subgradient of the objective function can be computed efficiently [14]. However, its convergence rate is, in practice, slow ($O(1/\sqrt{k})$, where $k$ is the iteration counter). In this paper, in order to tackle the nonsmoothness of the optimization problem, we depart from these optimization methods and use instead efficient first-order nonsmooth optimization methods [5,18,19]: forward-backward splitting methods. These methods are also called proximal splitting methods because the nonsmooth function is involved via its proximity operator. Forward-backward splitting methods were first introduced in the EEG inverse problem by Gramfort et al. [10,11,13], where they used them to solve nonsmooth optimization problems resulting from the use of mixed-norm penalty functions. These methods have drawn increasing attention in the EEG, machine learning, and signal processing communities, especially because of their convergence rates and their ability to deal with large problems [19-21].

On the other hand, in order to handle the nonconvexity of the optimization problem, we use an iterative alternating minimization approach: minimizing over the coding matrix while keeping the latent source matrix fixed, and vice versa. Both of these optimization problems are convex: the first one can be solved using proximal splitting methods, while the second one can be solved directly in terms of a matrix inversion.

The rest of the paper is organized as follows. In Section 2, we give an overview of the EEG inverse problem. In Section 3, we present the mathematical background related to proximal splitting methods. The resulting nonsmooth and nonconvex optimization problem is formally described in Section 4. In Section 5, we propose an alternating minimization algorithm, and its convergence analysis is presented in Section 6. Section 7 is devoted to the numerical evaluation of the algorithm and its comparison with the Group Lasso and Trace Norm regularizers, which only partially consider the characteristics of the matrix $S$: its structured sparsity by using the $\ell_{21}$-norm and its low rank by using the trace norm, respectively. The advantages of considering both characteristics in a single method, like the proposed one, become clear in comparison with the independent use of the Group Lasso and Trace Norm regularizers. Finally, conclusions are presented in Section 8.

2 EEG inverse problem background

The EEG signals represent the electrical activity of one or several assemblies of neurons [22]. The area of a neuron assembly is small compared to the distance to the observation point (the EEG sensors). Therefore, the electromagnetic fields produced by an active neuron assembly at the sensor level are very similar to the field produced by a current dipole [23]. This simplified model is known as the equivalent current dipole (ECD). These ECDs are also known by other names, such as BES and current sources. Due to the uniform spatial organization of their dendrites (perpendicular to the brain cortex), the pyramidal neurons are the only neurons that can generate a net current dipole over a piece of cortical surface whose field is detectable on the scalp [3]. According to [24], it is necessary to add the fields of $\sim 10^4$ pyramidal neurons in order to produce a voltage that is detectable on the scalp. These voltages can be recorded by using different types of electrodes [22], such as disposable (gel-less and pre-gelled types), reusable disc electrodes (gold, silver, stainless steel, or tin), headbands and electrode caps, saline-based electrodes, and needle electrodes.

Under the quasi-static approximation of Maxwell's equations, we can express the general model for the observed EEG signals y(t) at time t as linear functions of the BES s(t) [9]:

$$y(t) = As(t) + e(t), \qquad (2)$$

where $y(t) \in \mathbb{R}^{M\times 1}$ is the EEG measurements vector, $s(t) \in \mathbb{R}^{N\times 1}$ is the BES vector, $e(t) \in \mathbb{R}^{M\times 1}$ is the noise vector, and $A \in \mathbb{R}^{M\times N}$ is the lead field matrix. In a typical experimental setup, the number of electrodes ($M$) is $\sim 10^2$, and the number of BES ($N$) is $\sim 10^3$ to $10^4$. We can express the former model for all time instants $\{t_1, t_2, \ldots, t_T\}$ (corresponding to some observation time window) by using the matrix formulation (1), where $Y = [y(t_1), y(t_2), \ldots, y(t_T)] \in \mathbb{R}^{M\times T}$, $S = [s(t_1), s(t_2), \ldots, s(t_T)] \in \mathbb{R}^{N\times T}$, and $E = [e(t_1), e(t_2), \ldots, e(t_T)] \in \mathbb{R}^{M\times T}$. The $i$th row of the matrix $Y$ represents the electrical activity recorded by the $i$th EEG electrode during the observation time window. In the BES matrix $S$, each row represents the time evolution of one brain electrical source, and each column represents the activity of all the sources at a particular time instant. Finally, the forward operator $A$ summarizes the geometric and electric properties of the conducting media (brain, skull, and scalp) and establishes the link between the current sources and the EEG sensors ($A_{ij}$ tells us how the $j$th BES influences the measurement obtained by the $i$th electrode). Following this notation, the EEG inverse problem can be stated as follows: Given a set of EEG signals ($Y$) and a forward model ($A$), estimate the current sources within the brain ($S$) that produce these signals.

3 Mathematical background

3.1 Proximity operator

The proximity operator [19,25] corresponding to a convex function $f$ is a mapping from $\mathbb{R}^n$ to itself, defined as follows:

$$\mathrm{prox}_f(z) = \operatorname*{argmin}_{x\in\mathbb{R}^n} \left\{ f(x) + \frac{1}{2}\|x - z\|^2 \right\}, \qquad (3)$$

where $\|\cdot\|$ denotes the Euclidean norm. Note that the proximity operator is well defined, because the above minimum exists and is unique (the objective function is strongly convex).

3.2 Subdifferential-proximity operator relationship

If $f$ is a convex function on $\mathbb{R}^n$ and $y \in \mathbb{R}^n$, then [26]

$$x \in \partial f(y) \iff y = \mathrm{prox}_f(x + y), \qquad (4)$$

where $\partial f(y)$ denotes the subdifferential of $f$ at $y$.

3.3 Principles of proximal splitting methods

Proximal splitting methods are specifically tailored to solve an optimization problem of the form

$$\operatorname*{minimize}_{S} \; f(S) + r(S), \qquad (5)$$

where $f(S)$ is a smooth convex function, and $r(S)$ is also convex, but nonsmooth. From convex analysis [17], we know that $S$ is a minimizer of (5) if and only if $0 \in \partial(f + r)(S)$. This implies the following [18]:

$$0 \in \partial(f + r)(S) \iff 0 \in \nabla f(S) + \partial r(S) \iff -\nabla f(S) \in \partial r(S) \iff -\gamma\nabla f(S) \in \gamma\,\partial r(S) \iff \big(S - \gamma\nabla f(S)\big) - S \in \partial\,\gamma r(S)$$

Using (4) in the former expression, we get

$$S = \mathrm{prox}_{\gamma r}\big(S - \gamma\nabla f(S)\big) \qquad (6)$$

Equation 6 suggests that we can solve (5) using a fixed point iteration:

$$S^{k+1} = \mathrm{prox}_{\gamma r}\big(S^k - \gamma\nabla f(S^k)\big) \qquad (7)$$

In optimization, (7) is known as a forward-backward splitting process [19]. It consists of two steps: first, it performs a forward gradient descent step $S^k_+ = S^k - \gamma\nabla f(S^k)$, and then it performs a backward step $S^{k+1} = \mathrm{prox}_{\gamma r}(S^k_+)$.

From (7), we can see the importance of the proximity operator (associated with $\gamma r(S)$) in forward-backward splitting methods, since computing it is their main step. If we have a closed-form expression for this proximity operator, or if we can approximate it efficiently (with the approximation errors decreasing at appropriate rates [27]), then we can solve (7) efficiently. Furthermore, when $f$ has a Lipschitz continuous gradient, there are fast algorithms to solve (7).

For instance, the Iterative Soft Thresholding Algorithm (ISTA) has a convergence rate of $O(1/k)$, and the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) has a convergence rate of $O(1/k^2)$ [5].
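As an illustration of the forward-backward scheme (7), here is a minimal FISTA sketch for the composite problem with $f(S) = \frac12\|AS - Y\|_F^2$ and $r(S) = \lambda\|S\|_{2,1}$; the row-wise soft-thresholding prox anticipates the closed form given later in Section 5.1.1, and the step size $1/L$ uses $L = |||A^TA|||_2$:

```python
import numpy as np

def prox_l21(X, thr):
    """Row-wise soft thresholding: proximity operator of thr * ||.||_{2,1}."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return np.maximum(1.0 - thr / np.maximum(norms, 1e-12), 0.0) * X

def fista(A, Y, lam, n_iter=200):
    """FISTA sketch for 0.5*||A S - Y||_F^2 + lam*||S||_{2,1}."""
    N, T = A.shape[1], Y.shape[1]
    L = np.linalg.norm(A.T @ A, 2)               # Lipschitz constant of the gradient
    S = np.zeros((N, T)); Z = S.copy(); t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ Z - Y)                 # forward (gradient) step at the extrapolated point
        S_new = prox_l21(Z - grad / L, lam / L)  # backward (proximal) step
        t_new = (1 + np.sqrt(1 + 4 * t**2)) / 2
        Z = S_new + ((t - 1) / t_new) * (S_new - S)  # Nesterov extrapolation
        S, t = S_new, t_new
    return S
```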

4 Problem formulation

The regularized EEG inverse problem can be stated as follows:

$$\widehat S = \operatorname*{argmin}_{S}\left\{ \frac12\|AS - Y\|_F^2 + \lambda\,\Omega(S)\right\},\quad \lambda > 0, \qquad (8)$$

where $\frac12\|AS - Y\|_F^2$ is the square loss function ($\|\cdot\|_F$ denotes the Frobenius norm), and $\lambda\Omega(S)$ is a nonsmooth penalty term that is used to encode the prior knowledge about the structure of the target matrix $S$.

In order to induce structured sparse-low-rank solutions, we propose to reformulate (1) using a matrix factorization approach, which involves expressing the matrix S as the product of two matrices, S = BC, obtaining the following nonlinear estimation model:

$$Y = ABC + E, \qquad (9)$$

where $B$ and $C$ are penalized using the $\ell_{21}$-norm and the squared Frobenius norm, respectively. The resulting optimization problem can be stated as follows:

$$\widehat B, \widehat C = \operatorname*{argmin}_{B,C}\left\{\frac12\|A(BC) - Y\|_F^2 + \kappa\sum_{i=1}^{N}\|B(i,:)\|_2 + \frac{\rho}{2}\sum_{i=1}^{K}\|C(i,:)\|_2^2\right\}$$
$$= \operatorname*{argmin}_{B,C}\left\{\frac12\|A(BC) - Y\|_F^2 + \kappa\|B\|_{2,1} + \frac{\rho}{2}\|C\|_F^2\right\}, \qquad (10)$$

where $\kappa > 0$, $\rho > 0$, $B \in \mathbb{R}^{N\times K}$, $C \in \mathbb{R}^{K\times T}$, and $B(i,:)$, $C(i,:)$ denote the $i$th row of $B$ and $C$, respectively. $K \ll \min\{N, T\}$; $K$, $\kappa$, and $\rho$ are parameters of the model that must be adjusted.

In this formulation, which we denote as the matrix factorization approach, the $\ell_{21}$-norm and the squared Frobenius norm induce structured sparsity and smoothness in the rows of $B$ and $C$, respectively, and therefore also in the rows of $S$. Finally, the parameter $K$ controls the low rank of $S$:

$$\mathrm{rank}(B) \leq \min\{N, K\} \Rightarrow \mathrm{rank}(B) \leq K$$
$$\mathrm{rank}(C) \leq \min\{K, T\} \Rightarrow \mathrm{rank}(C) \leq K$$
$$\mathrm{rank}(BC) \leq \min\{\mathrm{rank}(B), \mathrm{rank}(C)\} \leq K \Rightarrow \mathrm{rank}(S) \leq K$$

Hence, the proposed regularization framework takes into account all the prior knowledge about the structure of the target matrix S.
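A small numpy check of this structure (illustrative sizes only): a row-sparse coding matrix $B$ and a dense latent matrix $C$ yield $S = BC$ that is simultaneously row-sparse and of rank at most $K$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, T = 413, 4, 161

B = np.zeros((N, K))
active = rng.choice(N, size=12, replace=False)
B[active] = rng.standard_normal((12, K))   # 12 nonzero rows in the coding matrix
C = rng.standard_normal((K, T))            # dense latent source matrix

S = B @ C
print(np.count_nonzero(np.linalg.norm(S, axis=1) > 0))  # 12 nonzero rows (structured sparsity)
print(np.linalg.matrix_rank(S))                         # <= K = 4 (low rank)
```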

5 Optimization algorithm

5.1 Matrix factorization approach

In this section, we address the issue of implementing the learning method (10) numerically. We propose the following reparameterization of (10):

$$\widetilde B = \frac{1}{\sqrt{\rho}}\,B, \quad \widetilde C = \sqrt{\rho}\,C \;\Longrightarrow\; \widetilde B\widetilde C = \left(\frac{1}{\sqrt{\rho}}\,B\right)\left(\sqrt{\rho}\,C\right) = BC \qquad (11)$$

Using (11) in the objective function of (10), we get

$$\frac12\|A(BC) - Y\|_F^2 + \kappa\|B\|_{2,1} + \frac{\rho}{2}\|C\|_F^2 \;\Longrightarrow\; \frac12\|A(\widetilde B\widetilde C) - Y\|_F^2 + \kappa\sqrt{\rho}\,\|\widetilde B\|_{2,1} + \frac12\|\widetilde C\|_F^2,$$

where $\lambda = \kappa\sqrt{\rho}$, and therefore (dropping the tildes) we get an optimization problem with only one regularization parameter:

$$\widehat B, \widehat C = \operatorname*{argmin}_{B,C}\left\{\frac12\|A(BC) - Y\|_F^2 + \lambda\|B\|_{2,1} + \frac12\|C\|_F^2\right\},\quad \lambda > 0 \qquad (12)$$

The optimization problem (12) is a simultaneous minimization over matrices B and C. For a fixed C, the minimum over B can be obtained using FISTA. On the other hand, for a fixed B, the minimum over C can be solved directly in terms of a matrix inversion. These observations suggest an alternating minimization algorithm [15,28]:

Algorithm 1 Alternating minimization algorithm to solve the matrix factorization-regularization-based approach

Require: $Y \in \mathbb{R}^{M\times T}$, $A \in \mathbb{R}^{M\times N}$, $C_0 \in \mathbb{R}^{K\times T}$, $K$, $\lambda$
repeat
$\quad B_t = \operatorname*{argmin}_B\left\{\frac12\|A(BC_{t-1}) - Y\|_F^2 + \lambda\|B\|_{2,1} + \frac12\|C_{t-1}\|_F^2\right\}$ (13)
$\quad C_t = \operatorname*{argmin}_C\left\{\frac12\|A(B_tC) - Y\|_F^2 + \lambda\|B_t\|_{2,1} + \frac12\|C\|_F^2\right\}$ (14)
until stopping condition is met

In order to obtain the initialization matrix $C_0$, we use an approach based on the singular value decomposition of $Y$. Without loss of generality, let us work with (9) in the noiseless case:

$$Y = ABC \qquad (15)$$

From (15), we can see that $\{Y_1, Y_2, \ldots, Y_M\} \subset \mathrm{Row\ Space}(C)$, where $Y_i$ denotes the $i$th row of $Y$.

Now, let us obtain a rank-$K$ approximation of $Y$ by using a truncated SVD (truncated at the singular value $\sigma_K$):

$$Y \approx U_{M\times K}\,\Sigma_{K\times K}\,V^T_{K\times T} \qquad (16)$$

From SVD theory [29], we know that $\{Y_1, Y_2, \ldots, Y_M\} \subset \mathrm{Row\ Space}(V^T)$; therefore, we can choose $C_0 = V^T$. Then, given $C_0$, we can start iterating using (13) and (14).
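Putting the pieces together, here is a sketch of Algorithm 1 under these choices: SVD-based initialization (16), a FISTA inner loop for the B update (13) using the gradient, Lipschitz bound, and proximity operator derived below in Section 5.1.1, and the closed-form C update (22). It is a sketch under stated assumptions (fixed iteration counts stand in for a stopping condition):

```python
import numpy as np

def prox_l21(X, thr):
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return np.maximum(1.0 - thr / np.maximum(norms, 1e-12), 0.0) * X

def alternating_mf(A, Y, K, lam, n_outer=50, n_inner=100):
    """Sketch of Algorithm 1: FISTA for B, Eq. (13); closed-form C, Eq. (22)."""
    N = A.shape[1]
    _, _, Vt = np.linalg.svd(Y, full_matrices=False)
    C = Vt[:K]                                   # C0 = V^T from the truncated SVD of Y, Eq. (16)
    B = np.zeros((N, K))
    AtA, AtY = A.T @ A, A.T @ Y
    spec_AtA = np.linalg.norm(AtA, 2)
    for _ in range(n_outer):
        # B update, Eq. (13): FISTA with step 1/L, L = |||A^T A|||_2 * ||C C^T||_F
        CCt = C @ C.T
        G = AtY @ C.T                            # constant part of the gradient
        L = spec_AtA * np.linalg.norm(CCt)
        Z, B_prev, t = B.copy(), B.copy(), 1.0
        for _ in range(n_inner):
            grad = AtA @ Z @ CCt - G             # gradient of F_B at the extrapolated point
            B_new = prox_l21(Z - grad / L, lam / L)
            t_new = (1 + np.sqrt(1 + 4 * t**2)) / 2
            Z = B_new + ((t - 1) / t_new) * (B_new - B_prev)
            B_prev, t = B_new, t_new
        B = B_prev
        # C update, Eq. (22): small K x K linear system
        C = np.linalg.solve(B.T @ AtA @ B + np.eye(K), B.T @ AtY)
    return B, C
```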

5.1.1 Minimization over B (fixed C)

The minimization over B can be stated as follows:

$$B_t = \operatorname*{argmin}_B \left\{F_B(B) + \lambda\|B\|_{2,1}\right\},\quad \lambda > 0, \qquad (17)$$

where $F_B(B) = \frac12\|A(BC_{t-1}) - Y\|_F^2 + \frac12\|C_{t-1}\|_F^2$. This is a composite convex optimization problem involving the sum of a smooth function ($F_B(B)$) and a nonsmooth function ($\lambda\|B\|_{2,1}$). As we have seen in Section 3, this kind of problem can be efficiently handled using proximal splitting methods (e.g., FISTA). In order to apply FISTA to solve (17), we first need to compute the following:

1. The gradient of the smooth function $F_B(B)$:

$$\nabla F_B(B) = \frac{\partial F_B(B)}{\partial B} = A^T\big(A(BC_{t-1}) - Y\big)\,C^T_{t-1},$$

where $A^T$ denotes the transpose of the matrix $A$.

2. An upper bound on the Lipschitz constant ($L$) of $\nabla F_B(B)$ (it can also be estimated using a backtracking search routine [5]):

$$\|\nabla F_B(B_1) - \nabla F_B(B_2)\|_F = \|A^TA\,B_1C_{t-1}C^T_{t-1} - A^TA\,B_2C_{t-1}C^T_{t-1}\|_F = \|A^TA(B_1 - B_2)\,C_{t-1}C^T_{t-1}\|_F \qquad (18)$$

Writing the right-hand side column by column, where $(C_{t-1}C^T_{t-1})_{:j}$ denotes the $j$th column of the matrix $C_{t-1}C^T_{t-1}$, and taking into account that $\|Qx\|_2 \leq |||Q|||_2\,\|x\|_2$, $\forall x \in \mathbb{R}^N$, $\forall Q \in \mathbb{R}^{M\times N}$ [29], where $|||\cdot|||_2$ denotes the spectral norm, we get

$$\|A^TA(B_1 - B_2)\,C_{t-1}C^T_{t-1}\|_F^2 = \sum_{j=1}^{K}\big\|A^TA(B_1 - B_2)\,(C_{t-1}C^T_{t-1})_{:j}\big\|_2^2 \leq |||A^TA(B_1 - B_2)|||_2^2\,\sum_{j=1}^{K}\big\|(C_{t-1}C^T_{t-1})_{:j}\big\|_2^2 = |||A^TA(B_1 - B_2)|||_2^2\,\|C_{t-1}C^T_{t-1}\|_F^2$$

From (18), taking into account that the spectral norm is submultiplicative ($|||PQ|||_2 \leq |||P|||_2\,|||Q|||_2$, $\forall P \in \mathbb{R}^{M\times N}$, $\forall Q \in \mathbb{R}^{N\times T}$) and using the fact that $|||P|||_2 \leq \|P\|_F$, $\forall P \in \mathbb{R}^{M\times N}$, we obtain:

$$\|\nabla F_B(B_1) - \nabla F_B(B_2)\|_F \leq |||A^TA|||_2\,\|B_1 - B_2\|_F\,\|C_{t-1}C^T_{t-1}\|_F \leq L\,\|B_1 - B_2\|_F,$$

where $L = |||A^TA|||_2\,\|C_{t-1}C^T_{t-1}\|_F$.

3. The proximity operator associated with the nonsmooth function $\lambda\|\cdot\|_{2,1}$:

$$\mathrm{prox}_{\lambda\|\cdot\|_{2,1}}(B) = \operatorname*{argmin}_{X}\left\{\lambda\|X\|_{2,1} + \frac12\|X - B\|_F^2\right\} \qquad (19)$$

$$\big(\mathrm{prox}_{\lambda\|\cdot\|_{2,1}}(B)\big)(i,:) = \frac{B(i,:)}{\|B(i,:)\|_2}\,\big(\|B(i,:)\|_2 - \lambda\big)_+,\quad i = 1, \ldots, N,$$

where $(\cdot)_+ = \max(\cdot, 0)$, and by convention $\frac{0}{0} = 0$.

5.1.2 Minimization over C (fixed B)

The minimization over $C$ can be stated as follows:

$$C_t = \operatorname*{argmin}_C \{F_C(C)\}, \qquad (20)$$

where $F_C(C) = \frac12\|A(B_tC) - Y\|_F^2 + \lambda\|B_t\|_{2,1} + \frac12\|C\|_F^2$ is a smooth function of $C$. In what follows, we show how the minimum over $C$ can be obtained directly in terms of a matrix inversion:

$$\nabla F_C(C) = \frac{\partial F_C(C)}{\partial C} = B_t^TA^T\big(A(B_tC) - Y\big) + C \qquad (21)$$

$$\nabla F_C(C) = 0 \iff B_t^TA^T\big(A(B_tC_t) - Y\big) + C_t = 0 \iff C_t = \big[B_t^TA^TAB_t + I_K\big]^{-1}B_t^TA^TY \qquad (22)$$

The matrix $\big[B_t^TA^TAB_t + I_K\big] \in \mathbb{R}^{K\times K}$, and $K$ is supposed to be small; therefore, calculating its corresponding inverse matrix is quite cheap.
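A direct transcription of (22) (a sketch; solving the $K\times K$ system is preferable to forming the inverse explicitly):

```python
import numpy as np

def update_C(A, Y, B):
    """Closed-form C update, Eq. (22): C = (B^T A^T A B + I_K)^{-1} B^T A^T Y."""
    K = B.shape[1]
    AB = A @ B                           # M x K
    G = AB.T @ AB + np.eye(K)            # B^T A^T A B + I_K, a small K x K matrix
    return np.linalg.solve(G, AB.T @ Y)  # solve the K x K system instead of inverting
```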

6 Convergence analysis

We are going to analyze the convergence behavior of Algorithm 1 by using the global convergence theory of iterative algorithms developed by Zangwill [30]. Note that in this theory, the term 'global convergence' does not imply convergence to a global optimum for all initial points. The property of global convergence expresses, in a sense, the certainty that the algorithm converges to the solution set. Formally, an iterative algorithm $\Phi$, on the set $X$, is said to be globally convergent provided that, for any starting point $x_0 \in X$, the sequence $\{x_n\}$ generated by $\Phi$ has a limit point [31].

In order to use the global convergence theory of iterative algorithms, we need a formal definition of an iterative algorithm, as well as the definition of a set-valued mapping (a.k.a. point-to-set mapping) [30]:

Definition 6.1. Set-valued mapping. Given two sets, $X$ and $Y$, a set-valued mapping defined on $X$, with range in the power set of $Y$, $P(Y)$, is a map, $\Phi$, which assigns to each $x \in X$ a subset $\Phi(x) \in P(Y)$,

$$\Phi : X \to P(Y)$$

Definition 6.2. Iterative algorithm. Let $X$ be a set and $x_0 \in X$ a given point. Then, an iterative algorithm $\Phi$, with initial point $x_0$, is a set-valued mapping

$$\Phi : X \to P(X)$$

which generates a sequence $\{x_n\}_{n=1}^{\infty}$ via the rule $x_{n+1} \in \Phi(x_n)$, $n = 0, 1, \ldots$

Now that we know the main building blocks of the global convergence theory of iterative algorithms, we are in a position to state the convergence theorem related to Algorithm 1:

Theorem 6.1. Let $\Phi$ denote the iterative Algorithm 1, and suppose that, given $Y \in \mathbb{R}^{M\times T}$, $A \in \mathbb{R}^{M\times N}$, $B_0 \in \mathbb{R}^{N\times K}$, $C_0 \in \mathbb{R}^{K\times T}$, $K$, and $\lambda$, the sequence $\{B_t, C_t\}_{t=1}^{\infty}$ is generated and satisfies $\{B_{t+1}, C_{t+1}\} \in \Phi(B_t, C_t)$. Also, let $\Omega_B$ and $\Omega_C$ denote the solution sets of (13) and (14), respectively:

$$\Omega_B = \left\{ B \in \mathbb{R}^{N\times K} : 0 \in \partial\left(\tfrac12\|A(BC_{t-1}) - Y\|_F^2 + \lambda\|B\|_{2,1} + \tfrac12\|C_{t-1}\|_F^2\right)\right\}$$

$$\Omega_C = \left\{ C \in \mathbb{R}^{K\times T} : \nabla\left(\tfrac12\|A(B_tC) - Y\|_F^2 + \lambda\|B_t\|_{2,1} + \tfrac12\|C\|_F^2\right) = 0\right\}$$

Then, the limit of any convergent subsequence of $\{B_t, C_t\}_{t=1}^{\infty}$ is in $\Omega_B$ and $\Omega_C$.

This convergence theorem is a direct application of Zangwill's global convergence theorem [30]. Before proving this assertion, let us present some definitions and theorems used in the proof.

Definition 6.3. Compact set. A set $X$ is said to be compact if any sequence (or subsequence) in $X$ contains a convergent subsequence whose limit is in $X$. More explicitly, given a sequence $\{x_n\}_{n\in\mathbb{N}}$ in $X$, there exists $\mathbb{N}_1 \subset \mathbb{N}$ such that

$$x_n \to x_\infty, \quad n \in \mathbb{N}_1,$$

with $x_\infty \in X$ (we write convergence of subsequences as $x_n \to x_\infty$, which is equivalent to $\lim x_n = x_\infty$).

Definition 6.4. Composite map. Let $\Pi_A : X \to P(Y)$ and $\Pi_B : Y \to P(Z)$ be two set-valued mappings. The composite map $\Pi_C = \Pi_B \circ \Pi_A$, which takes points $x \in X$ to sets $\Pi_C(x) \subset Z$, is defined by

$$\Pi_C(x) := \bigcup_{y \in \Pi_A(x)} \Pi_B(y)$$

Definition 6.5. Closed map. A set-valued mapping $\Pi : X \to P(Y)$ is closed at $x_0 \in X$ provided

1. $x_n \to x_0$ as $n \to \infty$, $x_n \in X$
2. $y_n \to y_0$ as $n \to \infty$, $y_n, y_0 \in Y$
3. $y_n \in \Pi(x_n)$

implies $y_0 \in \Pi(x_0)$. The map $\Pi$ is called closed on $S \subset X$ provided it is closed at each $x \in S$.

Theorem 6.2. Composition of closed maps. Let $\Pi_A : X \to P(Y)$ and $\Pi_B : Y \to P(Z)$ be two set-valued mappings. Suppose

1. $\Pi_A$ is closed at $x_0$
2. $\Pi_B$ is closed on $\Pi_A(x_0)$
3. if $x_n \to x_0$ and $y_n \in \Pi_A(x_n)$, then there exists $y_0 \in Y$ such that, for some subsequence $\{y_{n_j}\}$, $y_{n_j} \to y_0$ as $j \to \infty$.

Then, the composite map $\Pi_C = \Pi_B \circ \Pi_A$ is closed at $x_0$.

Lemma 6.1. [32] Given a real-valued function $h$ defined on $X \times Y$, define the set-valued mapping $\Psi : X \to P(Y)$ by

$$\Psi(x) = \operatorname*{argmin}_{y \in Y} h(x, y);$$

then, $\Psi$ is closed at $x$ if $\Psi(x)$ is nonempty.

Theorem 6.3. Weierstrass theorem. If $f$ is a real continuous function on a compact set $S \subset \mathbb{R}^n$, then the problem

$$\operatorname*{argmin}_x \{f(x) : x \in S\}$$

has an optimal solution $x^* \in S$.

Theorem 6.4. [30] Zangwill's global convergence theorem. Let the set-valued mapping $M_x(x) : X \to P(X)$ determine an algorithm that, given a point $x_0$, generates a sequence $\{x_n\}_{n=0}^{\infty}$ through the iteration $x_{n+1} \in M_x(x_n)$. Also, let a solution set $\Gamma$ be given. Suppose

1. All points $x_n$ are in a compact set $S \subset X$.
2. There is a continuous function $\alpha : X \to \mathbb{R}$ such that

(a) if $x \notin \Gamma$, then $\alpha(x') < \alpha(x)$ $\forall x' \in M_x(x)$;
(b) if $x \in \Gamma$, then $\alpha(x') \leq \alpha(x)$ $\forall x' \in M_x(x)$.

3. The map $M_x(x)$ is closed at $x$ if $x \notin \Gamma$.

Then, the limit of any convergent subsequence of $\{x_n\}_{n=0}^{\infty}$ is in $\Gamma$. That is, accumulation points $x^*$ of the sequence $x_n$ lie in $\Gamma$. Furthermore, $\alpha(x_n)$ converges to $\alpha^*$, and $\alpha(x^*) = \alpha^*$ for all accumulation points $x^*$.

Proof. Theorem 6.1. The iterative algorithm $\Phi$ can be decomposed into two well-defined iterative algorithms $\Phi_B$ and $\Phi_C$:

$$\Phi_B(C_{t-1}) = B_t = \operatorname*{argmin}_B\left\{\tfrac12\|A(BC_{t-1}) - Y\|_F^2 + \lambda\|B\|_{2,1} + \tfrac12\|C_{t-1}\|_F^2\right\} \qquad (23)$$

$$\Phi_C(B_t) = C_t = \operatorname*{argmin}_C\left\{\tfrac12\|A(B_tC) - Y\|_F^2 + \lambda\|B_t\|_{2,1} + \tfrac12\|C\|_F^2\right\} \qquad (24)$$

As we can see from (23) and (24), at iteration $t$, the result of $\Phi_B$ becomes the input of $\Phi_C$, and at iteration $t+1$, the result of $\Phi_C$ becomes the input of $\Phi_B$; therefore, we can express $\Phi$ as the composition of $\Phi_C$ and $\Phi_B$, that is, $\Phi(C_{t-1}) = \Phi_C(\Phi_B(C_{t-1}))$:

$$\Phi_C(\Phi_B(C_{t-1})) = C_t = \operatorname*{argmin}_C\left\{\tfrac12\|A(B_tC) - Y\|_F^2 + \lambda\|B_t\|_{2,1} + \tfrac12\|C\|_F^2\right\} \qquad (25)$$

Let $\Gamma$ be the solution set of $\Phi$:

$$\Gamma = \left\{ C \in \mathbb{R}^{K\times T} : \frac{\partial Z(C, t)}{\partial C} = 0 \right\},$$

where $Z(C, t) = \tfrac12\|A(B_tC) - Y\|_F^2 + \lambda\|B_t\|_{2,1} + \tfrac12\|C\|_F^2$.

To prove this theorem by using Zangwill's global convergence theorem, we need to prove that all its corresponding assumptions are fulfilled. In order to prove assumption 1, let us analyze the sequences $\{B_t\}_{t=1}^{\infty}$ and $\{C_t\}_{t=1}^{\infty}$. The sequence $\{B_t\}_{t=1}^{\infty}$ is generated by using FISTA, which is a convergent algorithm ($B_t \to B_\infty$) that guarantees that $B_t \in \Omega_B$ [5,18]. Hence, using Definition 6.3, we can see that the sequence $\{B_t\}_{t=1}^{\infty}$ generated by (23) lies in a compact set. On the other hand, the sequence $\{C_t\}_{t=1}^{\infty}$ is generated by (22), which guarantees that $C_t \in \Omega_C$. This sequence always converges to a point inside $\Omega_C$, which implies that it also lies in a compact set. This concludes the proof of assumption 1.

To prove assumption 2, let us use $Z(C, t)$ as the function $\alpha(\cdot)$; thus, in order to verify the fulfillment of assumption 2, we need to prove that

(a) if $C_t \notin \Gamma$, then $Z(C_{t+1}, t+1) < Z(C_t, t)$ $\forall C_{t+1} \in \Phi(C_t)$;
(b) if $C_t \in \Gamma$, then $Z(C_{t+1}, t+1) \leq Z(C_t, t)$ $\forall C_{t+1} \in \Phi(C_t)$.

From (25), we can see that the sequence $\{C_t\}_{t=1}^{\infty}$ will always lie in $\Gamma$ (because $C_t$ is generated by (22)); therefore, we only need to prove (b).

Let $C_{t+1}$ be the solution of (25) at iteration $t+1$; this implies

$$\tfrac12\|A(B_{t+1}C_{t+1}) - Y\|_F^2 + \lambda\|B_{t+1}\|_{2,1} + \tfrac12\|C_{t+1}\|_F^2 \leq \tfrac12\|A(B_{t+1}C) - Y\|_F^2 + \lambda\|B_{t+1}\|_{2,1} + \tfrac12\|C\|_F^2,\quad \forall C \in \mathbb{R}^{K\times T}$$
$$\leq \tfrac12\|A(B_{t+1}C_t) - Y\|_F^2 + \lambda\|B_{t+1}\|_{2,1} + \tfrac12\|C_t\|_F^2 \qquad (26)$$

On the other hand, if $B_{t+1}$ is the solution of (23) at iteration $t+1$, this implies

$$\tfrac12\|A(B_{t+1}C_t) - Y\|_F^2 + \lambda\|B_{t+1}\|_{2,1} + \tfrac12\|C_t\|_F^2 \leq \tfrac12\|A(BC_t) - Y\|_F^2 + \lambda\|B\|_{2,1} + \tfrac12\|C_t\|_F^2,\quad \forall B \in \mathbb{R}^{N\times K}$$
$$\leq \tfrac12\|A(B_tC_t) - Y\|_F^2 + \lambda\|B_t\|_{2,1} + \tfrac12\|C_t\|_F^2 \qquad (27)$$

and from (26) and (27), we can prove assumption 2(b):

$$\tfrac12\|A(B_{t+1}C_{t+1}) - Y\|_F^2 + \lambda\|B_{t+1}\|_{2,1} + \tfrac12\|C_{t+1}\|_F^2 \leq \tfrac12\|A(B_tC_t) - Y\|_F^2 + \lambda\|B_t\|_{2,1} + \tfrac12\|C_t\|_F^2$$
$$\Longrightarrow Z(C_{t+1}, t+1) \leq Z(C_t, t)$$

In order to prove assumption 3, we need to prove that $\Phi$ is closed at $C$ if $C \in \Gamma$. To do so, we are going to use Theorem 6.2; therefore, we need to prove that $\Phi_B$ and $\Phi_C$ are both closed maps: from (23) and (24), we can see that their corresponding objective functions are both continuous, $\forall B \in \mathbb{R}^{N\times K}$ and $\forall C \in \mathbb{R}^{K\times T}$, respectively; hence, by using the Weierstrass theorem and Lemma 6.1, we can conclude that $\Phi_B$ and $\Phi_C$ are both closed maps for any $C_{t-1}$ and $B_t$, respectively, and by using Theorem 6.2, we can conclude that $\Phi$ is closed on any $C_{t-1}$.

Finally, from all the previous proofs and Zangwill's global convergence theorem, it follows that the limit of any convergent subsequence of $\{B_t, C_t\}_{t=1}^{\infty}$ is in $\Omega_B$ and $\Omega_C$.

7 Numerical experiments

In this section, we evaluate the performance of the matrix factorization approach and compare it with the Group Lasso regularizer:

$$\widehat S = \operatorname*{argmin}_S\left\{\frac12\|AS - Y\|_F^2 + \lambda\sum_{i=1}^{N}\|S(i,:)\|_2\right\},\quad \lambda > 0, \qquad (28)$$

and the Trace Norm regularizer:

$$\widehat S = \operatorname*{argmin}_S\left\{\frac12\|AS - Y\|_F^2 + \lambda\sum_{i=1}^{q}\sigma_i(S)\right\},\quad \lambda > 0, \qquad (29)$$

where $q = \min\{N, T\}$ and $\sigma_i(S)$ denotes the $i$th singular value of $S$. Both problems (28) and (29) were solved using the FISTA implementation of the SPArse Modeling Software (SPAMS) [33,34].

In order to have a reproducible comparison of the different regularization approaches, we generated two synthetic scenarios:

• M = 128 EEG electrodes, T = 161 time instants, N = 413 current sources within the brain, but only 12 of them active: 4 main active sources, each with its 2 nearest neighbor sources also active. The other 401 sources are not active (zero electrical activity). Therefore, in this scenario, the synthetic matrix S is a structured sparse matrix with only 12 nonzero rows (the rows associated with the active sources).

• M = 128 EEG electrodes, T = 161 time instants, N = 2,052 current sources within the brain, but only 40 of them active: 4 main active sources, each with its 9 nearest neighbor sources also active. The other 2,012 sources are not active (zero electrical activity). Therefore, in this scenario, the synthetic matrix S is a structured sparse matrix with only 40 nonzero rows (the rows associated with the active sources).

In both scenarios, the simulated electrical activity (simulated waveforms) associated with the four Main Active Sources (MAS) was obtained from a face perception evoked potential study [35,36]. To obtain the simulated electrical activity associated with each of the active neighbor sources, we simply set it to a scaled version of the electrical activity of its corresponding nearest MAS (with a scaling factor equal to 0.5). Hence, there is a linear relation between the four MAS and their corresponding nearest neighbor sources; therefore, in both scenarios, the rank of the synthetic matrix $S$ is equal to 4.

As the forward model ($A$), we used a three-shell concentric spherical head model. In this model, the inner sphere represents the brain, the intermediate layer represents the skull, and the outer layer represents the scalp [37]. To obtain the values of each of the components of the matrix $A$, we need to solve the EEG forward problem [38]: Given the electrical activity of the current sources within the brain and a model for the geometry of the conducting media (brain, skull, and scalp, with their corresponding electric properties), compute the resulting EEG signals. This problem was solved by using the SPM software [39]. Taking into account the comments in Section 2, the $N$ simulated current sources were positioned on a mesh located on the brain cortex, with their orientation fixed perpendicular to it.

Finally, the simulated EEG signals were generated according to (1), where $E$ is Gaussian noise $G(0, \sigma^2 I)$ whose variance was set to satisfy $\mathrm{SNR} = 20\log_{10}\big(\|AS\|_F/\|E\|_F\big) = 10$ dB. Summarizing, our synthetic problems can be stated as follows: Given matrices $Y \in \mathbb{R}^{128\times 161}$ and $A \in \mathbb{R}^{128\times N}$, recover the synthetic BES matrix $S \in \mathbb{R}^{N\times 161}$. Accordingly, in both scenarios, we want to estimate a BES matrix which is structured sparse and low rank, with rank equal to the number of MAS simulated. The activity of the four MAS, the synthetic EEG measurements, as well as the sparsity pattern of the synthetic BES matrix are shown in Figures 1 and 2 (Ground Truth).

We have used cross-validation to select the regularization parameter $\lambda$ associated with the Group Lasso and Trace Norm regularizers, as well as the parameters $\lambda$ and $K$ in the case of the Matrix Factorization approach ($K \in \{1, 2, 3, \ldots, 10\}$, $\lambda \in \{10^{-3}, 10^{-2}, 10^{-1}, \ldots, 10^{3}\}$): the rows of $Y$ are randomly partitioned into three groups of approximately equal size. Each union of two groups forms a train set (TrS), while the remaining group forms a test set (TS). This procedure is carried out three times, each time selecting a different test group. Inverse reconstructions are carried out based on the training sets, obtaining different regression matrices $\widehat S_i$. We then evaluate the root mean square error (RMSE) using the test sets and the regression matrices $\widehat S_i$:

$$\mathrm{RMSE} = \frac13\sum_{i=1}^{3}\frac{\|Y_{\mathrm{TS}_i} - A_{\mathrm{TS}_i}\widehat S_i\|_F}{\sqrt{m_{\mathrm{TS}_i}\,T}},$$

where $Y_{\mathrm{TS}_i} \in \mathbb{R}^{m_{\mathrm{TS}_i}\times T}$ and $A_{\mathrm{TS}_i} \in \mathbb{R}^{m_{\mathrm{TS}_i}\times N}$ ($\mathrm{TS}_i$ denotes the index set, of size $m_{\mathrm{TS}_i}$, of the rows that belong to the $i$th test set). Once the estimated matrix $\widehat S$ has been found, we apply a threshold to remove spurious sources with almost zero activity. We have set this threshold equal to 1% of the mean energy of all the sources.
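A schematic of this row-wise threefold procedure (a sketch; the hypothetical estimate_S callback stands for any of the three solvers at a given parameter setting):

```python
import numpy as np

def cv_rmse(A, Y, estimate_S, n_folds=3, seed=0):
    """Row-wise cross-validation: fit on two folds of electrodes, test on the third."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(Y.shape[0]), n_folds)
    errs = []
    for i in range(n_folds):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != i])
        S_hat = estimate_S(A[train], Y[train])   # inverse reconstruction on the train rows
        resid = Y[test] - A[test] @ S_hat        # prediction error on held-out electrodes
        errs.append(np.linalg.norm(resid) / np.sqrt(resid.size))
    return float(np.mean(errs))
```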

7.1 Performance evaluation

In order to evaluate the performance of the regularizers, we compare the waveform and localization of the four MAS present in the synthetic BES matrix against the four MAS estimated by each of the regularizers.

[Figure 1: panels show the MAS waveforms over time (ms) for the Ground Truth, Matrix Factorization, Group Lasso, and Trace Norm; the noisy EEG (SNR = 10 dB) and the EEG estimated by each method; and the sparsity pattern and rank of each estimated BES matrix.]

Figure 1 Simulation results: waveforms of the MAS, EEG estimated, and sparsity pattern of the estimated BES matrix. Experiment setup: 413 sources, 128 EEG electrodes, 161 time instants, 4 main active sources with their corresponding 2 nearest neighbor sources also active.

[Figure 2: panels show the MAS waveforms over time (ms) for the Ground Truth, Matrix Factorization, Group Lasso, and Trace Norm; the noisy EEG (SNR = 10 dB) and the EEG estimated by each method; and the sparsity pattern and rank of each estimated BES matrix.]

Figure 2 Simulation results: waveforms of the MAS, EEG estimated, and sparsity pattern of the estimated BES matrix. Experiment setup: 2,052 sources, 128 EEG electrodes, 161 time instants, 4 main active sources with their corresponding 9 nearest neighbor sources also active.

We also compare the sparsity pattern of the estimated BES matrix S against the sparsity pattern of the synthetic BES matrix S, as well as the synthetic and predicted EEG measurements.

As we can see from Figures 1 and 2, the Group Lasso and Trace Norm regularizers do not reveal the correct number of linearly independent sources, while the Matrix Factorization does: it finds four linearly independent sources in both scenarios. To select these four linearly independent MAS, we find a basis for the Column Space($\bar S^T$) (using a QR factorization), where $\bar S$ is a matrix whose rows are a sorted version of the rows of $\widehat S$ (sorted in descending order of their corresponding energy value). To get the four linearly independent MAS estimated by the Group Lasso and Trace Norm regularizers, we followed the same procedure described before and retained the first four components of the basis of the Column Space($\bar S^T$).
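A sketch of this selection step, assuming a column-pivoted QR (scipy.linalg.qr with pivoting=True) as the concrete way to expose linearly independent rows:

```python
import numpy as np
from scipy.linalg import qr

def select_mas(S_hat, n_mas=4):
    """Pick linearly independent rows of S_hat, strongest energy first (sketch)."""
    energy = np.sum(S_hat**2, axis=1)
    order = np.argsort(energy)[::-1]      # rows sorted by descending energy
    S_sorted = S_hat[order]
    # Pivoted QR on S_sorted^T exposes a basis of Column Space(S_sorted^T),
    # i.e., a set of linearly independent rows of S_sorted.
    _, _, piv = qr(S_sorted.T, pivoting=True)
    return order[piv[:n_mas]]             # indices of the selected MAS in S_hat
```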

According to Figures 1 and 2, the Matrix Factorization approach is able to estimate a BES matrix with the correct rank and whose sparsity pattern closely follows the sparsity pattern of the true BES matrix; that is, both matrices have a similar structure, which implies that the proposed approach is able to induce the desired solution: a row-structured sparse matrix whose nonzero rows encode the linear relation between the active sources and their corresponding nearest neighbor sources. Using the estimated BES matrix, the Matrix Factorization approach is also able to predict a smooth version of the noisy EEG, and the waveforms of the estimated MAS closely follow the waveforms of the true MAS.

As we can see from Figures 1 and 2, Group Lasso is able to estimate a BES matrix with a row-sparsity pattern similar to that of the true BES matrix, but it does not take into account the linear relation between the nonzero rows, which can be seen from the rank of the estimated BES matrix. The waveforms of the estimated MAS are very similar to the true MAS, but they are not as smooth as the ones estimated by the Matrix Factorization approach.

As we can see from Figures 1 and 2, the Trace Norm regularizer takes into account the linear relation of the active sources by inducing solutions which are low rank, but, on the other hand, it does not take into account the structured sparsity pattern of the BES matrix. All of this implies that the Trace Norm tends to induce low rank dense solutions, which are not biologically plausible.

According to Figures 3 and 4, the position of the MAS obtained from the BES matrix estimated by the Matrix Factorization approach, the Group Lasso, and the Trace Norm regularizers closely follows the position of the true MAS. Nevertheless, it is worth highlighting that before selecting the MAS, we first need an accurate estimation of their number, and the Group Lasso and Trace Norm regularizers were not able to get a precise estimate of it; only the Matrix Factorization approach was able to.

From these results, we can see that the proposed Matrix Factorization approach outperforms both the Group Lasso and Trace Norm regularizers. The main reason is that it combines their two main features, the structured sparsity (from Group Lasso) and the low rank (from Trace Norm), in one unified framework, which implies that it is able to induce structured sparse-low-rank solutions that are biologically plausible: few active sources, with linear relations between them.

8 Conclusions

We have presented a novel approach to solve the EEG inverse problem, which is based on matrix factorization and regularization. Our method combines the ideas behind the Group Lasso (structured sparsity) and Trace Norm (low rank) regularizers in one unified framework. We have also developed, and analyzed the convergence of, an alternating minimization algorithm to solve the resulting nonsmooth-nonconvex regularization problem. Finally, using simulation studies, we have compared our method with the Group Lasso and Trace Norm regularizers when they are applied directly to the target matrix, and we have shown the gain in performance obtained by our method, demonstrating the effectiveness and efficiency of the proposed algorithm.

Competing interests

The authors declare that they have no competing interests.

Acknowledgements

The authors would like to thank the anonymous reviewers for their valuable comments and suggestions to improve the quality of the paper. They are also grateful to Dr. Carsten Stahlhut, from DTU Informatics, for valuable discussions about EEG brain imaging, and to Dr. Alexandre Gramfort, from Telecom ParisTech, for his advice related to EEG signal processing and nonsmooth convex programming theory applied to the EEG inverse problem. This work has been partly supported by the Ministerio de Economía of Spain (projects 'DEIPRO' (id. TEC2009-14504-C02-01), 'COMONSENS' (id. CSD2008-00010), 'ALCIT' (id. TEC2012-38800-C03-01), and 'COMPREHENSION' (id. TEC2012-38883-C02-01)). Authors LKH and MP were funded by Banco Santander and Universidad Carlos III de Madrid's Excellence Chair programme.

Author details

1 Department of Signal Processing and Communications, Universidad Carlos III de Madrid, Avda. Universidad 30, Leganés, Madrid 28911, Spain. 2Department of Computer Science, University College London, Gower Street, London WC1E 6BT, UK. 3DTU Informatics, Cognitive Systems Section, Technical University of Denmark, Kongens Lyngby 2800, Denmark.

Received: 27 March 2014 Accepted: 16 June 2014 Published: 24 June 2014

References

1. M Hämäläinen, R Hari, RJ Ilmoniemi, J Knuutila, OV Lounasmaa, Magnetoencephalography—theory, instrumentation, and applications to noninvasive studies of the working human brain. Rev. Mod. Phys. 65(2), 413 (1993)

2. RD Pascual-Marqui, Review of methods for solving the EEG inverse problem. Int. J. Bioelectromagnetism 1 (1), 75-86 (1999)

3. S Baillet, JC Mosher, RM Leahy, Electromagnetic brain mapping. IEEE Signal Process. Mag. 18(6), 14-30 (2001)

4. R Grech, T Cassar, J Muscat, KP Camilleri, SG Fabri, M Zervakis, P Xanthopoulos, V Sakkalis, B Vanrumste, Review on solving the inverse problem in EEG source analysis. J. Neuroeng. Rehabil. 5(1), 25 (2008)

5. A Beck, M Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183-202 (2009)

6. RGdP Menendez, MM Murray, CM Michel, R Martuzzi, SLG Andino, Electrical neuroimaging based on biophysical constraints. NeuroImage 21 (2), 527-539 (2004)

7. MS Hämäläinen, R Ilmoniemi, Interpreting magnetic fields of the brain: minimum norm estimates. Med. Biol. Eng. Comput. 32(1), 35-42 (1994)

8. K Uutela, M Hämäläinen, E Somersalo, Visualization of magnetoencephalographic data using minimum current estimates. NeuroImage 10(2), 173-180 (1999)

9. W Ou, MS Hämäläinen, P Golland, A distributed spatio-temporal EEG/MEG inverse solver. NeuroImage 44(3), 932-946 (2009)

10. A Gramfort, D Strohmeier, J Haueisen, M Hämäläinen, M Kowalski, Functional brain imaging with M/EEG using structured sparsity in time-frequency dictionaries, in Information Processing in Medical Imaging, Lecture Notes in Computer Science, ed. by G Székely, HK Hahn, vol. 6801 (Springer, Berlin, 2011), pp. 600-611

11. A Gramfort, D Strohmeier, J Haueisen, M Hämäläinen, M Kowalski, Time-frequency mixed-norm estimates: sparse M/EEG imaging with non-stationary source activations. NeuroImage 70, 410-422 (2013)

12. S Haufe, R Tomioka, T Dickhaus, C Sannelli, B Blankertz, G Nolte, KR Müller, Large-scale EEG/MEG source localization with spatial flexibility. NeuroImage 54(2), 851-859 (2011)

13. A Gramfort, M Kowalski, M Hämäläinen, Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods. Phys. Med. Biol. 57(7), 1937 (2012)

14. F Bach, R Jenatton, J Mairal, G Obozinski, Optimization with sparsity-inducing penalties. Foundations and Trends in Machine Learning 4(1), 1-106 (2011)

15. CA Micchelli, JM Morales, M Pontil, Regularizers for structured sparsity. Adv. Comput. Math. 38(3), 455-489 (2013)

16. S Sra, S Nowozin, SJ Wright, Optimization for Machine Learning (MIT Press, Cambridge, 2012)

17. DP Bertsekas, Nonlinear Programming (Athena Scientific, Belmont, 1999)

18. PL Combettes, VR Wajs, Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 4(4), 1168-1200 (2005)

19. PL Combettes, J-C Pesquet, Proximal splitting methods in signal processing, in Fixed-Point Algorithms for Inverse Problems in Science and Engineering, Springer Optimization and Its Applications, ed. by HH Bauschke, RS Burachik, PL Combettes, V Elser, DR Luke, and H Wolkowicz, vol. 49 (Springer New York, 2011), pp. 185-212

20. Y Nesterov, Gradient methods for minimizing composite objective function. CORE Discussion Papers 2007076, Center for Operations Research and Econometrics (CORE), Université Catholique de Louvain (2007)

21. SJ Wright, RD Nowak, MA Figueiredo, Sparse reconstruction by separable approximation. IEEE Trans. Signal Process. 57(7), 2479-2493 (2009)

22. S Sanei, JA Chambers, EEG Signal Processing. (Wiley, West Sussex, 2008)

23. J Malmivuo, R Plonsey, Bioelectromagnetism: Principles and Applications of Bioelectric and Biomagnetic Fields (Oxford University Press, Oxford, 1995)

24. S Murakami, Y Okada, Contributions of principal neocortical neurons to magnetoencephalography and electroencephalography signals. J. Physiol. 575(3), 925-936 (2006)

25. JJ Moreau, Proximité et dualité dans un espace hilbertien. Bull. Soc. Math. France 93(2), 273-299 (1965)

26. CA Micchelli, L Shen, Y Xu, Proximity algorithms for image models: denoising. Inverse Probl. 27, 045009 (2011)

27. M Schmidt, NL Roux, F Bach, Convergence rates of inexact proximal-gradient methods for convex optimization. Adv. Neural Inform. Process. Syst. 24, 1458-1466 (2011)

28. A Argyriou, T Evgeniou, M Pontil, Convex multi-task feature learning. Mach. Learn. 73(3), 243-272 (2008)

29. RA Horn, CR Johnson, Matrix Analysis (Cambridge University Press, Cambridge, 1990)

30. WI Zangwill, Nonlinear Programming: A Unified Approach (Prentice-Hall, Englewood Cliffs, 1969)

31. B Sriperumbudur, G Lanckriet, On the convergence of the concave-convex procedure. Adv. Neural Inform. Process. Syst. 22, 1759-1767 (2009)

32. A Gunawardana, W Byrne, Convergence theorems for generalized alternating minimization procedures. J. Mach. Learn. Res. 6, 2049-2073 (2005)

33. R Jenatton, J Mairal, G Obozinski, F Bach, Proximal methods for sparse hierarchical dictionary learning, in Proceedings of the International Conference on Machine Learning (ICML) (Haifa, 21-24 June 2010)

34. J Mairal, R Jenatton, FR Bach, GR Obozinski, Network flow algorithms for structured sparsity. Adv. Neural Inform. Process. Syst. 23, 1558-1566 (2010)

35. K Friston, L Harrison, J Daunizeau, S Kiebel, C Phillips, N Trujillo-Barreto, R Henson, G Flandin, J Mattout, Multiple sparse priors for the M/EEG inverse problem. NeuroImage 39(3), 1104-1120 (2008)

36. R Henson, Y Goshen-Gottstein, T Ganel, L Otten, A Quayle, M Rugg, Electrophysiological and haemodynamic correlates of face perception, recognition and priming. Cereb. Cortex 13(7), 793-805 (2003)

37. H Hallez, B Vanrumste, R Grech, J Muscat, W De Clercq, A Vergult, Y D'Asseler, KP Camilleri, SG Fabri, S Van Huffel, I Lemahieu, Review on solving the forward problem in EEG source analysis. J. Neuroeng. Rehabil. 4(1), 46 (2007)

38. JC Mosher, RM Leahy, PS Lewis, EEG and MEG: forward solutions for inverse methods. IEEE Trans. Biomed. Eng. 46(3), 245-259 (1999)

39. V Litvak, J Mattout, S Kiebel, C Phillips, R Henson, J Kilner, G Barnes, R Oostenveld, J Daunizeau, G Flandin, W Penny, K Friston, EEG and MEG data analysis in SPM8. Comput. Intell. Neurosci (2011). doi:10.1155/2011/852961

doi:10.1186/1687-6180-2014-97

Cite this article as: Montoya-Martínez et al.: A regularized matrix factorization approach to induce structured sparse-low-rank solutions in the EEG inverse problem. EURASIP Journal on Advances in Signal Processing 2014 2014:97.
