
Hindawi Publishing Corporation Computational Intelligence and Neuroscience Volume 2010, Article ID 469658, 13 pages doi:10.1155/2010/469658

Research Article

Consistent Recovery of Sensory Stimuli Encoded with MIMO Neural Circuits

Aurel A. Lazar and Eftychios A. Pnevmatikakis

Department of Electrical Engineering, Columbia University, New York, NY 10027, USA Correspondence should be addressed to Aurel A. Lazar, aurel@ee.columbia.edu Received 1 March 2009; Accepted 24 June 2009 Academic Editor: Zhe (Sage) Chen

We consider the problem of reconstructing finite energy stimuli encoded with a population of spiking leaky integrate-and-fire neurons. The reconstructed signal satisfies a consistency condition: when passed through the same neuron, it triggers the same spike train as the original stimulus. The recovered stimulus must also minimize a quadratic smoothness optimality criterion. We formulate the reconstruction as a spline interpolation problem for scalar as well as vector-valued stimuli and show that the recovery has a unique solution. We provide explicit reconstruction algorithms for stimuli encoded with a single neuron as well as with a population of integrate-and-fire neurons. We demonstrate how our reconstruction algorithms can be applied to stimuli encoded with ON-OFF neural circuits with feedback. Finally, we extend the formalism to multi-input multi-output neural circuits and demonstrate that vector-valued finite energy signals can be efficiently encoded by a neural population provided that its size is beyond a threshold value. Examples are given that demonstrate the potential applications of our methodology to systems neuroscience and neuromorphic engineering.

Copyright © 2010 A. A. Lazar and E. A. Pnevmatikakis. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. Introduction

Formal spiking neuron models, such as integrate-and-fire (IAF) neurons, encode information in the time domain [1]. Assuming that the input is bandlimited with a known bandwidth, a perfect recovery of the stimulus from the train of spikes is possible provided that the spike density is above the Nyquist rate [2]. Using results from frame theory [3] and statistics [4], these findings were extended to (i) bandlimited stimuli encoded with a population of IAF neurons with receptive fields modeled as linear filterbanks [5], (ii) multivariate (e.g., space-time) bandlimited stimuli encoded with a population of IAF neurons with Gabor spatiotemporal receptive fields [6], and (iii) sensory stimuli encoded with a population of leaky integrate-and-fire (LIF) neurons with random thresholds [7].

These results are based on the key insight that neural encoding of a stimulus with a population of LIF neurons is akin to taking a set of measurements on the stimulus. These measurements or encodings can be represented as projections (inner products) of the stimulus on a set of sampling functions. Stimulus recovery therefore calls for the reconstruction of the encoded stimuli from these inner products. These findings have shown that sensory information can be faithfully encoded into the spike trains of a neural ensemble and can serve as a theoretical basis for the modeling of sensory systems (e.g., audition, vision) [8].

In this paper we investigate the problem of reconstructing scalar and vector stimuli from a population of spike trains on a finite time horizon. The encoding circuits considered are either single-input multi-output or multi-input multi-output (MIMO). The increasing availability of multi-neuron population recordings has led to a paradigm shift towards population-centric approaches to neural coding and processing. Examples of MIMO models in systems neuroscience include extensive investigations of spike train transformations between neuron populations [9] as well as the analysis of the causal relationships between neurons in a population [10]. In neuromorphic engineering, MIMO models have been used for brain-machine interfaces [11], as well as silicon retinas and related hardware applications [12].

The stimuli considered in this paper have finite energy and are defined on a finite time horizon. Even though restricted to finite time intervals, finite energy signals have infinite degrees of freedom. Consequently, the formal stimulus recovery is ill-defined. We cast the stimulus reconstruction problem in the abstract spline theory [13] and recover the stimulus as the unique solution to an interpolation spline problem. Splines serve as a valuable mathematical tool for interpolation problems, and their applications arise in many areas such as data smoothing in statistics [4], computer graphics [14], and digital signal processing [15].

Through the formulation of the interpolation spline problem, the reconstructed signal will give the same measurements as the original one. We show that this leads to a signal recovery that is consistent in the sense that the reconstructed signal triggers exactly the same spike train when passed through the same neuron as the original stimulus. The reconstructed signal is also required to achieve a maximum degree of smoothness gauged by a quadratic criterion. This condition ensures that the problem has a unique optimal solution.

A preliminary version of some of the ideas presented here appears in [16]. That analysis was based on results arising in generalized sampling [17]. Here the theory is presented in a more general setting using the spline theoretic framework, and all proofs are included. We apply our theoretical results to stimuli encoded with a number of spiking neural circuits of interest. These include populations of integrate-and-fire neurons with linear receptive fields that arise in hearing, ON-OFF neural circuits with feedback that arise in vision, and multi-input multi-output (MIMO) neural circuits that arise in olfaction.

Formally, MIMO neural circuits encode M-dimensional vector-valued finite energy stimuli into the spike trains of a population of N neurons. Their architecture consists of an N × M linear, time-invariant filtering kernel that feeds into an ensemble of N neurons. For this novel neural circuit we formulate and solve the problem of optimal consistent recovery and also discuss some of the key conditions that the filtering kernel has to satisfy in order to obtain a good reconstruction.

The paper is organized as follows. Section 2 formulates the problem of consistent reconstruction on a finite time horizon as a spline interpolation problem and presents its general solution. In Section 3 the reconstruction problem is addressed for stimuli encoded with a population of LIF neurons. Section 4 presents general MIMO neural encoding circuits and the corresponding optimal consistent stimulus reconstruction. A neuroscience-inspired example is presented where the filtering kernel applies arbitrary (but known) delays and scalings to the input stimuli, akin to simple synaptic models. Finally, Section 5 concludes our work.

2. Encoding with LIF Neurons and Consistent Stimulus Recovery

In this section we formulate and solve the problem of optimal consistent reconstruction for the simple case of a stimulus encoded with a single LIF neuron. We show how the spiking of an LIF neuron can be associated with a series of projections in the general L² space. We impose intuitive constraints on the desired reconstructed signal and show that the reconstruction algorithm can be reduced to a spline interpolation problem.

2.1. Neural Encoding with Single LIF Neurons. Let u = u(t), t ∈ [0, T], be a signal (or stimulus) of finite length and energy, that is, u ∈ L²([0, T]). In what follows we assume that the stimulus u is the input to a leaky integrate-and-fire (LIF) neuron. Throughout this paper (t_k), k = 1,2,...,n, denotes the set of recorded spike times. As in the case of bandlimited signals [5], neural encoding can be associated with the projection (measurement) of the stimulus on a set of functions. By applying the t-transform [2], we can determine both the sampling functions and the projections of the stimulus on these functions using only knowledge of the spike times.

Assume that the encoder is an LIF neuron with threshold δ, capacitance C, resistance R, and constant bias b. The membrane potential of the LIF neuron is governed by the differential equation

dV(t)/dt = −V(t)/(RC) + (u(t) + b)/C, (1)

with the initial condition V(0) = 0 and resetting conditions

V(t_k) = δ, lim_{t↓t_k} V(t) = 0, (2)

for all t ∈ [0, T] and k = 1,2,...,n. By solving the differential equation in each interspike interval, the t-transform of the LIF neuron is given by

∫_{t_k}^{t_{k+1}} (u(s) + b) exp(−(t_{k+1} − s)/(RC)) ds = Cδ, (3)

for all k = 1,2,...,n − 1. The t-transform can be rewritten as

L_k u = q_k, (4)

where L_k : L²([0, T]) → R is a linear functional given by

L_k u = ∫_{t_k}^{t_{k+1}} u(s) exp(−(t_{k+1} − s)/(RC)) ds,
q_k = Cδ − bRC(1 − exp(−(t_{k+1} − t_k)/(RC))), (5)

for all k = 1,2,...,n − 1. Therefore, we have the following result.

Lemma 1. The t-transform of the LIF neuron can be written in inner-product form as

⟨u, φ_k⟩ = q_k, (6)

where

φ_k(t) = exp(−(t_{k+1} − t)/(RC)) 1_{[t_k, t_{k+1}]}(t), (7)

and ⟨·,·⟩ : L²([0, T]) × L²([0, T]) → R is the standard L² inner product restricted to the domain [0, T], for all k = 1,2,...,n − 1.
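Lemma 1 can be checked numerically: simulate the LIF dynamics on a fine grid, record the spike times, and verify that each interspike integral in (3) evaluates to Cδ. The sketch below is illustrative only (Euler integration, an assumed test stimulus, and parameter values borrowed from the example in Section 2.3); the residuals are nonzero because of the time discretization.

```python
import numpy as np

def lif_encode(u, dt, R=50.0, C=0.01, delta=0.8, b=3.0):
    """Euler simulation of dV/dt = -V/(RC) + (u + b)/C, with reset to 0 at delta."""
    V, spikes = 0.0, []
    for i, ui in enumerate(u):
        V += dt * (-V / (R * C) + (ui + b) / C)
        if V >= delta:
            spikes.append((i + 1) * dt)
            V = 0.0
    return np.array(spikes)

def t_transform_residuals(u, dt, tk, R=50.0, C=0.01, delta=0.8, b=3.0):
    """Residuals of int_{t_k}^{t_{k+1}} (u(s)+b) exp(-(t_{k+1}-s)/(RC)) ds - C*delta."""
    t = np.arange(len(u)) * dt
    out = []
    for k in range(len(tk) - 1):
        m = (t >= tk[k]) & (t < tk[k + 1])
        integral = np.sum((u[m] + b) * np.exp(-(tk[k + 1] - t[m]) / (R * C))) * dt
        out.append(integral - C * delta)
    return np.array(out)

dt = 1e-5
t = np.arange(0.0, 0.2, dt)
u = np.sin(2 * np.pi * 30 * t)      # assumed smooth test stimulus
tk = lif_encode(u, dt)
res = t_transform_residuals(u, dt, tk)
print(len(tk), np.max(np.abs(res)))
```

The residuals shrink as dt is decreased, consistent with the t-transform being an exact identity for the continuous-time dynamics.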

Remark 2. The inner products or projections ⟨u, φ_k⟩, k = 1,2,...,n − 1, represent a set of measurements or encodings of the signal u on [0, T]. Since the sampling functions (φ_k) and the values (q_k), k = 1,2,...,n − 1, can be readily derived from knowledge of the spike times and the neuron parameters, the signal encodings are available to an observer reading the spike times (t_k), k = 1,2,...,n.

2.2. Consistent Stimulus Recovery. The problem of stimulus reconstruction calls for estimating the signal u given the set of spikes (t_k), k = 1,2,...,n. This problem is, for the class of stimuli u ∈ L²([0, T]), ill-defined. (Signals that lie in an L² space have, in general, infinite degrees of freedom and thus cannot be perfectly recovered from a finite number of observations.) A remedy is provided by introducing a set of constraints on the recovery. The first constraint considered here requires the reconstructed signal û to generate the same spikes as the original stimulus. The second constraint requires choosing among the consistent reconstructions the one with the maximum degree of smoothness. The latter is formulated as an optimization criterion.

Definition 3. A reconstruction u of u based on the spike times (tk), k = 1,2,..., n, is said to be consistent if u triggers exactly the same spike train as the original stimulus u.

Remark 4. As before, assume that at time 0 the membrane potential of the LIF neuron is set to the resting potential 0. Then the consistency condition above is equivalent to

⟨û, φ_k⟩ = ⟨u, φ_k⟩, (8)

for all k = 1,2,...,n − 1.

Definition 5. A consistent reconstruction û that minimizes the quadratic criterion

‖d²û/dt²‖², (9)

is called the optimal consistent reconstruction of u.

Remark 6. ‖d²û/dt²‖ is the L²-norm of the second derivative of the reconstructed stimulus.

Lemma 7. The optimal consistent reconstruction û solves the spline interpolation problem

û = argmin_{u: ⟨u, φ_k⟩ = q_k, k = 1,...,n−1} ‖d²u/dt²‖², (10)

where ‖·‖ is the standard L²-norm restricted to the interval [0, T].

Proof. It follows directly from Definitions 3 and 5. □

Remark 8. An introduction to splines and the general solution to spline interpolation problems is presented in Appendix A.

Figure 1: Encoding and reconstruction with a single LIF neuron.

Theorem 9. The optimal consistent reconstruction is unique and is given by

û(t) = d₀ + d₁t + Σ_{k=1}^{n−1} c_k ψ_k(t), (12)

where

ψ_k(t) = (φ_k * |·|³)(t) = ∫_{t_k}^{t_{k+1}} |t − s|³ exp(−(t_{k+1} − s)/(RC)) ds, (13)

* denotes convolution, and |·| denotes the absolute value. With c = [c₁, c₂, ..., c_{n−1}]ᵀ, d = [d₀, d₁]ᵀ, and q = [q₁, q₂, ..., q_{n−1}]ᵀ, the coefficients c and d satisfy the matrix equations

[ G  p  r ] [ c  ]   [ q ]
[ pᵀ 0  0 ] [ d₀ ] = [ 0 ], (14)
[ rᵀ 0  0 ] [ d₁ ]   [ 0 ]

where G is an (n − 1) × (n − 1) matrix, and p and r are column vectors with entries given by

[p]_k = ⟨φ_k, 1⟩, [r]_k = ⟨φ_k, t⟩, [G]_{kl} = ⟨φ_k, ψ_l⟩, (15)

where all the inner products are restricted to the interval [0, T].

Proof. The proof follows from Theorem 4 in Appendix A. Note that the function |·|³ is, up to a multiplicative constant, the Green's function of the second-order iterated Laplacian (see Lemma 5 in Appendix B). □

The representation functions (13) can be explicitly given in analytical form as

ψ_k(t) = (RC)⁴ ×
  { f((t_{k+1} − t)/(RC)) − f((t_k − t)/(RC)) exp(−(t_{k+1} − t_k)/(RC)),  t ≤ t_k,
    f((t_{k+1} − t)/(RC)) + f((t_k − t)/(RC)) exp(−(t_{k+1} − t_k)/(RC)) + 12 exp(−(t_{k+1} − t)/(RC)),  t_k < t < t_{k+1},
    f((t_k − t)/(RC)) exp(−(t_{k+1} − t_k)/(RC)) − f((t_{k+1} − t)/(RC)),  t ≥ t_{k+1}, (16)

where f(x) = x³ − 3x² + 6x − 6. The entries of the matrix G are given by

[G]_{kl} = (RC)⁵ ×
  { g((t_{l+1} − t_{k+1})/(RC)) − g((t_{l+1} − t_k)/(RC)) exp(−(t_{k+1} − t_k)/(RC))
      − g((t_l − t_{k+1})/(RC)) exp(−(t_{l+1} − t_l)/(RC))
      + g((t_l − t_k)/(RC)) exp(−((t_{k+1} − t_k) + (t_{l+1} − t_l))/(RC)),  k < l,
    6(1 − exp(−2(t_{k+1} − t_k)/(RC))) − 2 g((t_{k+1} − t_k)/(RC)) exp(−(t_{k+1} − t_k)/(RC)),  k = l,
    g((t_{k+1} − t_{l+1})/(RC)) − g((t_{k+1} − t_l)/(RC)) exp(−(t_{l+1} − t_l)/(RC))
      − g((t_k − t_{l+1})/(RC)) exp(−(t_{k+1} − t_k)/(RC))
      + g((t_k − t_l)/(RC)) exp(−((t_{l+1} − t_l) + (t_{k+1} − t_k))/(RC)),  k > l, (17)

with g(x) = x³ + 6x. Finally,

[p]_k = RC(1 − exp(−(t_{k+1} − t_k)/(RC))),
[r]_k = RC t_{k+1}(1 − exp(−(t_{k+1} − t_k)/(RC))) − (RC)²(1 − (1 + (t_{k+1} − t_k)/(RC)) exp(−(t_{k+1} − t_k)/(RC))). (18) □

Remark 10. By letting R → ∞, one obtains the representation of the optimal consistent reconstruction for stimuli encoded with the ideal IAF neuron. The parameters and representation functions take a simple form:

lim_{R→∞} φ_k(t) = 1_{[t_k, t_{k+1}]}(t),
lim_{R→∞} q_k = Cδ − b(t_{k+1} − t_k), (19)

lim_{R→∞} ψ_k(t) = ∫_{t_k}^{t_{k+1}} |t − s|³ ds =
  { 0.25[(t − t_{k+1})⁴ − (t − t_k)⁴],  t ≤ t_k,
    0.25[(t − t_{k+1})⁴ + (t − t_k)⁴],  t_k < t < t_{k+1},
    0.25[(t − t_k)⁴ − (t − t_{k+1})⁴],  t ≥ t_{k+1}, (20)

lim_{R→∞} [p]_k = t_{k+1} − t_k,
lim_{R→∞} [r]_k = ((t_{k+1})² − (t_k)²)/2,
lim_{R→∞} [G]_{kl} =
  { 0.05[(t_{k+1} − t_{l+1})⁵ + (t_k − t_l)⁵ − (t_k − t_{l+1})⁵ − (t_{k+1} − t_l)⁵],  k < l,
    0.1 (t_{k+1} − t_k)⁵,  k = l,
    0.05[(t_{l+1} − t_{k+1})⁵ + (t_l − t_k)⁵ − (t_l − t_{k+1})⁵ − (t_{l+1} − t_k)⁵],  k > l. (21)
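The closed-form limits above make the ideal IAF case easy to prototype. The sketch below encodes a smooth test signal with an ideal IAF neuron and solves the matrix equation of Theorem 9, using the limit expressions for q, ψ_k, p, and r. For clarity the inner products in G are evaluated by quadrature on a dense grid rather than with the closed-form fifth-order expressions, and all parameter values are assumptions chosen for illustration.

```python
import numpy as np

dt = 1e-4
t = np.arange(0.0, 1.0, dt)
u = 0.3 * np.sin(2 * np.pi * 3 * t) + 0.2 * np.cos(2 * np.pi * 5 * t)

b, delta, C = 1.5, 0.05, 1.0                 # assumed neuron parameters
V, tk = 0.0, []
for i, ui in enumerate(u):                   # ideal IAF: dV/dt = (u + b)/C
    V += dt * (ui + b) / C
    if V >= delta:
        tk.append((i + 1) * dt)
        V = 0.0
tk = np.array(tk)
n = len(tk)

q = C * delta - b * np.diff(tk)              # lim q_k = C*delta - b*(t_{k+1} - t_k)

# psi_k(t) = int_{t_k}^{t_{k+1}} |t - s|^3 ds via its antiderivative in s
A = lambda s: 0.25 * np.sign(s - t) * (s - t) ** 4
Psi = np.array([A(tk[k + 1]) - A(tk[k]) for k in range(n - 1)])
Ind = np.array([((t >= tk[k]) & (t < tk[k + 1])) for k in range(n - 1)], dtype=float)

G = Ind @ Psi.T * dt                         # [G]_{kl} = <phi_k, psi_l> by quadrature
p = np.diff(tk)                              # lim [p]_k = t_{k+1} - t_k
r = 0.5 * (tk[1:] ** 2 - tk[:-1] ** 2)       # lim [r]_k = ((t_{k+1})^2 - (t_k)^2)/2

M = np.zeros((n + 1, n + 1))                 # block system of eq. (14)
M[:n - 1, :n - 1] = G
M[:n - 1, n - 1], M[:n - 1, n] = p, r
M[n - 1, :n - 1], M[n, :n - 1] = p, r
sol = np.linalg.lstsq(M, np.concatenate([q, [0.0, 0.0]]), rcond=None)[0]
c, d0, d1 = sol[:n - 1], sol[n - 1], sol[n]

u_hat = d0 + d1 * t + c @ Psi                # eq. (12)
snr = 10 * np.log10(np.sum(u ** 2) / np.sum((u - u_hat) ** 2))
print(n, round(snr, 2))
```

The reported SNR uses the definition given in the example of Section 2.3; with a spike density comfortably above the bandwidth of the test signal, the recovery error is small.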

2.3. Example. The input to an LIF neuron is a bandlimited signal with bandwidth of 100 Hz. The neuron encodes the stimulus during the time interval [0,0.2] second. A bias equal to b = 3 is also added to the input. The parameters of the LIF neuron are S = 0.8, C = 0.01, and R = 50. Under these conditions the neuron generated 78 spikes. The recovered signal is shown in Figure 1. In order to quantify the quality of the recovery, we used the signal-to-noise ratio (SNR) defined by

SNR = 10 log₁₀ (‖u‖² / ‖u − û‖²). (22)

In the above SNR definition the noise corresponds to the error between the original and reconstructed signal. The SNR was equal to 47.53 dB.

3. Single-Input Multi-Output Encoding and Consistent Stimulus Recovery

In this section we consider the reconstruction of a stimulus encoded with a population of LIF neurons. We demonstrate that the consistent recovery can be again formulated as a spline interpolation problem and provide the reconstruction algorithm. We also show how the methodology developed in this section can be applied to a simple encoding circuit consisting of two-coupled ON-OFF neurons with feedback.

3.1. Encoding with a Population of LIF Neurons. In what follows we consider a neural encoding circuit consisting of N leaky integrate-and-fire (LIF) neurons. Neuron j has threshold δ^j, bias b^j, resistance R^j, and capacitance C^j, for all j = 1,2,...,N. After each spike every neuron resets its membrane potential to 0. Let t_k^j denote the kth spike of neuron j, with k = 1,2,...,n^j, where n^j is the number of spikes that neuron j generates, j = 1,2,...,N.

The t-transform of the population of N LIF neurons is given by

∫_{t_k^j}^{t_{k+1}^j} (u(s) + b^j) exp(−(t_{k+1}^j − s)/(R^j C^j)) ds = C^j δ^j, (23)

for all k = 1,2,...,n^j − 1 and j = 1,2,...,N. As in the previous section, we have the following.

Lemma 11. The t-transform of the population of LIF neurons can be written in inner-product form as

⟨u, φ_k^j⟩ = q_k^j, (24)

where

φ_k^j(t) = exp(−(t_{k+1}^j − t)/(R^j C^j)) 1_{[t_k^j, t_{k+1}^j]}(t),
q_k^j = C^j δ^j − b^j R^j C^j (1 − exp(−(t_{k+1}^j − t_k^j)/(R^j C^j))), (25)

and ⟨·,·⟩ : L²([0, T]) × L²([0, T]) → R is the standard L² inner product restricted to the domain [0, T], for all k = 1,2,...,n^j − 1 and j = 1,2,...,N.

3.2. Consistent Stimulus Recovery. Let q be a column vector defined as q = [q¹; q²; ...; q^N] with q^j = [q_1^j, q_2^j, ..., q_{n^j−1}^j]ᵀ, j = 1,2,...,N. The vectors p, r, and c have the same dimension and are similarly defined. The matrix G is a block square matrix defined as G = [G^{ij}], i, j = 1,2,...,N, with G^{ij} ∈ R^{(n^i−1)×(n^j−1)}.

The following theorem first appeared in [16]; its proof is presented here for the first time.

Theorem 12. Assume that at time 0 the membrane potential of all neurons is at the rest value 0. The optimal consistent reconstruction û is unique and can be written as

û(t) = d₀ + d₁t + Σ_{j=1}^{N} Σ_{k=1}^{n^j−1} c_k^j ψ_k^j(t), (26)

where

ψ_k^j(t) = (φ_k^j * |·|³)(t) = ∫_{t_k^j}^{t_{k+1}^j} |t − s|³ exp(−(t_{k+1}^j − s)/(R^j C^j)) ds. (27)

The reconstruction coefficients are given in matrix form by

[ c  ]   [ G  p  r ]⁺ [ q ]
[ d₀ ] = [ pᵀ 0  0 ]  [ 0 ], (28)
[ d₁ ]   [ rᵀ 0  0 ]  [ 0 ]

where [·]⁺ denotes the pseudoinverse and

[p^j]_k = R^j C^j (1 − exp(−(t_{k+1}^j − t_k^j)/(R^j C^j))),
[r^j]_k = R^j C^j t_{k+1}^j (1 − exp(−(t_{k+1}^j − t_k^j)/(R^j C^j))) − (R^j C^j)² (1 − (1 + (t_{k+1}^j − t_k^j)/(R^j C^j)) exp(−(t_{k+1}^j − t_k^j)/(R^j C^j))),
[G^{ij}]_{kl} = ⟨φ_k^i, ψ_l^j⟩, (29)

for all k = 1,2,...,n^j − 1 and j = 1,2,...,N.

Proof. The proof is notationally more complex but closely follows the proof of Theorem 9. The representation functions (27) can be computed analytically as

ψ_k^j(t) = (R^j C^j)⁴ ×
  { f((t_{k+1}^j − t)/(R^j C^j)) − f((t_k^j − t)/(R^j C^j)) exp(−(t_{k+1}^j − t_k^j)/(R^j C^j)),  t ≤ t_k^j,
    f((t_{k+1}^j − t)/(R^j C^j)) + f((t_k^j − t)/(R^j C^j)) exp(−(t_{k+1}^j − t_k^j)/(R^j C^j)) + 12 exp(−(t_{k+1}^j − t)/(R^j C^j)),  t_k^j < t < t_{k+1}^j,
    f((t_k^j − t)/(R^j C^j)) exp(−(t_{k+1}^j − t_k^j)/(R^j C^j)) − f((t_{k+1}^j − t)/(R^j C^j)),  t ≥ t_{k+1}^j, (30)

where f(x) = x³ − 3x² + 6x − 6. □
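A minimal population version of the recovery can be sketched by stacking the per-neuron measurement blocks and applying a pseudoinverse, as in Theorem 12. For simplicity the sketch uses ideal IAF neurons (the R → ∞ limit of Remark 10), inner products are computed by quadrature, and all neuron parameters are illustrative assumptions.

```python
import numpy as np

dt = 1e-4
t = np.arange(0.0, 1.0, dt)
u = 0.3 * np.sin(2 * np.pi * 3 * t) + 0.2 * np.cos(2 * np.pi * 5 * t)

def iaf_encode(b, delta, C=1.0):
    """Ideal IAF encoding of the global test stimulus u."""
    V, tk = 0.0, []
    for i, ui in enumerate(u):
        V += dt * (ui + b) / C
        if V >= delta:
            tk.append((i + 1) * dt)
            V = 0.0
    return np.array(tk), b, delta, C

def recover(neurons):
    """Stack per-neuron blocks and solve the system of Theorem 12 by pseudoinverse."""
    Psi, Ind, q, p, r = [], [], [], [], []
    for tk, b, delta, C in neurons:
        for k in range(len(tk) - 1):
            a, e = tk[k], tk[k + 1]
            Psi.append(0.25 * np.sign(e - t) * (e - t) ** 4
                       - 0.25 * np.sign(a - t) * (a - t) ** 4)
            Ind.append(((t >= a) & (t < e)).astype(float))
            q.append(C * delta - b * (e - a))
            p.append(e - a)
            r.append(0.5 * (e ** 2 - a ** 2))
    Psi, Ind, q = np.array(Psi), np.array(Ind), np.array(q)
    n = len(q)
    M = np.zeros((n + 2, n + 2))
    M[:n, :n] = Ind @ Psi.T * dt            # block Gram matrix G
    M[:n, n], M[:n, n + 1] = p, r
    M[n, :n], M[n + 1, :n] = p, r
    sol = np.linalg.pinv(M) @ np.concatenate([q, [0.0, 0.0]])
    return sol[n] + sol[n + 1] * t + sol[:n] @ Psi

snr = lambda uh: 10 * np.log10(np.sum(u ** 2) / np.sum((u - uh) ** 2))
neu1 = iaf_encode(b=1.5, delta=0.08)
neu2 = iaf_encode(b=1.2, delta=0.07)
s1 = snr(recover([neu1]))
s12 = snr(recover([neu1, neu2]))
print(round(s1, 2), round(s12, 2))
```

Adding the second neuron's measurements enlarges the constraint set of the interpolation problem, which typically improves the recovery, mirroring the population results reported below.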

Figure 2: Coupled ON-OFF integrate-and-fire neurons.

3.3. Example: Encoding with an ON-OFF Neuron Pair. We consider an encoding circuit consisting of two interconnected integrate-and-fire neurons with feedback. For simplicity we assume that the IAF neurons are ideal, that is, R¹, R² → ∞. Figure 2 depicts the circuit. Whenever a spike is generated, the firing neuron is reset and feedback is added to the membrane potential. In addition, the firing of each spike is communicated to the other neuron through cross-feedback. The two neurons in Figure 2 arise as models of ON and OFF bipolar cells in the retina and their connections through the nonspiking horizontal cells [18].

Let t_k^j denote the kth spike of the jth neuron, k = 1,...,n^j and j = 1,2. The t-transform of the neural circuit amounts to

∫_{t_k^1}^{t_{k+1}^1} u(s) ds = κ¹δ¹ − b¹(t_{k+1}^1 − t_k^1) + Σ_{l≤k} ∫_{t_k^1}^{t_{k+1}^1} h¹¹(s − t_l^1) ds − Σ_l ∫_{t_k^1}^{t_{k+1}^1} h²¹(s − t_l^2) ds · 1_{{t_l^2 < t_{k+1}^1}},

∫_{t_k^2}^{t_{k+1}^2} u(s) ds = κ²δ² − b²(t_{k+1}^2 − t_k^2) − Σ_{l≤k} ∫_{t_k^2}^{t_{k+1}^2} h²²(s − t_l^2) ds + Σ_l ∫_{t_k^2}^{t_{k+1}^2} h¹²(s − t_l^1) ds · 1_{{t_l^1 < t_{k+1}^2}}, (31)

and can be written in inner product form as

⟨u, φ_k^j⟩ = q_k^j, (32)

with q_k^j the right-hand side of (31) and φ_k^j = 1_{[t_k^j, t_{k+1}^j]}, for all k = 1,2,...,n^j − 1 and j = 1,2. With the t-transform in inner product form, the optimal consistent reconstruction is given by Theorem 12.

A simple example consisting of two symmetric neurons with parameters δ¹ = −δ² = δ, κ¹ = κ² = κ, and b¹ = −b² = b is considered here. The cross-feedback is of the form

h¹²(t) = h²¹(t) = c exp(−αt) [(αt)⁵/5! − (αt)⁷/7!] 1_{[t>0]}.

No other feedback is present, that is, h¹¹ = h²² = 0. The neuron parameters are δ = 0.75, κ = 0.01, and b = 3. In addition, α = 1/0.015 s⁻¹ and c = 1/3. Note that the impulse response of the filter has mean value zero. If the mean value is nonzero, the spike density of the ideal IAF neurons can be driven to zero or infinity.

The input was chosen to be the temporal contrast of an artificial (positive) input photocurrent. With v denoting the input photocurrent, the temporal contrast u is defined as

u(t) = d log(v(t))/dt = (1/v(t)) (dv/dt)(t).

Clearly, even when the input bandwidth of the photocurrent v is known, the effective bandwidth of the actual neuron input u cannot be analytically estimated. The input photocurrent was bandlimited with bandwidth 100 Hz and had duration 200 milliseconds. Each neuron generated 75 spikes. The result of the recovery is shown in Figure 3. The SNR is equal to 28.75 dB.

Figure 3: Recovery of temporal contrast from an ON-OFF IAF neural pair.
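The temporal contrast definition above is straightforward to compute numerically. The sketch below uses an assumed analytic photocurrent v, chosen so that u = v′/v can be checked against its exact expression.

```python
import numpy as np

dt = 1e-4
t = np.arange(0.0, 0.2, dt)
v = np.exp(0.5 * np.sin(2 * np.pi * 10 * t))   # positive photocurrent by construction
u = np.gradient(v, dt) / v                      # temporal contrast u = v'/v
# since log(v) = 0.5 sin(2*pi*10 t), the exact contrast is its derivative:
u_exact = 0.5 * (2 * np.pi * 10) * np.cos(2 * np.pi * 10 * t)
print(np.max(np.abs(u - u_exact)))
```

The finite-difference error is dominated by the one-sided stencils at the boundaries and shrinks with dt.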

4. Multi-Input Multi-Output Encoding and Consistent Stimulus Recovery

In this section we present our model of consistent information representation of M-dimensional vector signals using an N × M-dimensional filtering kernel and an ensemble of N integrate-and-fire neurons (see Figure 4). We assume without loss of generality that the neurons are ideal (nonleaky). Each component filter of the kernel receives input from one of the M component inputs, and its output is additively coupled into a single neuron. Finally, we describe an algorithm for stimulus reconstruction that is based on spline interpolation.

Figure 4: Multiple-Input Multiple-Output time encoding machine.

4.1. MIMO Model for Neural Encoding. Let L²_M = (L²([0, T]))^M denote the space of M-dimensional, vector-valued functions of finite energy over the domain [0, T]. The general element of this space is u = [u¹, u², ..., u^M]ᵀ, with u^i = u^i(t), t ∈ [0, T], modeling the ith component of the input signal and u^i ∈ L²([0, T]) for all i = 1,2,...,M. The space L²_M endowed with the inner product and norm defined by

⟨u, v⟩_{L²_M} = Σ_{i=1}^{M} ⟨u^i, v^i⟩,  ‖u‖_{L²_M} = (Σ_{i=1}^{M} ‖u^i‖²)^{1/2}, (35)

respectively, is a Hilbert space. Let H : R → R^{N×M} be a filtering kernel defined as

H(t) = [ h¹¹(t)  h¹²(t)  ···  h¹M(t)
         h²¹(t)  h²²(t)  ···  h²M(t)
           ⋮        ⋮            ⋮
         h^{N1}(t)  h^{N2}(t)  ···  h^{NM}(t) ]. (36)

Filtering the signal u with the kernel H leads to an N-dimensional vector-valued signal v defined by

v = H * u = [ h¹¹ * u¹ + h¹² * u² + ··· + h¹M * u^M
              h²¹ * u¹ + h²² * u² + ··· + h²M * u^M
                ⋮
              h^{N1} * u¹ + h^{N2} * u² + ··· + h^{NM} * u^M ]. (37)

Equation (37) can also be written in vector notation as

v^j = (h^j)ᵀ * u, j = 1,2,...,N, (38)

where h^j = [h^{j1}, h^{j2}, ..., h^{jM}]ᵀ is the filtering vector of neuron j, j = 1,2,...,N. A bias b^j is added to the component v^j of the signal v, and the sum is passed through an integrate-and-fire neuron with integration constant (capacitance) κ^j and threshold δ^j, for all j = 1,2,...,N (see Figure 4). For simplicity we assume that the IAF neurons are ideal, that is, R^j → ∞. Let t_k^j denote the kth spike of neuron j, with k = 1,2,...,n^j, where n^j is the number of spikes generated by neuron j, j = 1,2,...,N. The Time Encoding Machine in Figure 4 maps, therefore, the input vector u into the vector time sequence (t_k^j), j = 1,2,...,N, k = 1,2,...,n^j.

The t-transform for the jth neuron can be written as

∫_{t_k^j}^{t_{k+1}^j} (v^j(s) + b^j) ds = κ^j δ^j, (39)

or, equivalently, as

Σ_{i=1}^{M} ∫_{t_k^j}^{t_{k+1}^j} (h^{ji} * u^i)(s) ds = q_k^j, (40)

where q_k^j = κ^j δ^j − b^j (t_{k+1}^j − t_k^j), for all k = 1,2,...,n^j − 1 and all j = 1,2,...,N. Note that, without any loss of generality, after firing all neurons are reset to the zero state. The t-transform (40) can be written in inner product form as

⟨u, φ_k^j⟩_{L²_M} = q_k^j, (41)

where φ_k^j = [φ_k^{j1}, φ_k^{j2}, ..., φ_k^{jM}]ᵀ with components

φ_k^{ji} = h̃^{ji} * 1_{[t_k^j, t_{k+1}^j]}, (42)

for all i = 1,2,...,M, j = 1,2,...,N, and k = 1,2,...,n^j − 1, and h̃ denotes the involution (time reversal) of h, that is, h̃(t) = h(−t). Note that the impulse response of the filtering kernel H is not restricted to the interval [0, T] and can possibly have infinite support.

4.2. Consistent Stimulus Recovery. The optimal consistent reconstruction is given by the solution of the following spline interpolation problem:

û = argmin_{u: ⟨u, φ_k^j⟩_{L²_M} = q_k^j} ‖d²u/dt²‖²_{L²_M}. (43)

We have the following result.

Theorem 14. Assume that at time 0 the membrane potential of all neurons is at the rest value 0. The optimal consistent reconstruction û is unique and can be written as

û(t) = d₀ + d₁t + Σ_{j=1}^{N} Σ_{k=1}^{n^j−1} c_k^j ψ_k^j(t), (44)

where d₀, d₁ ∈ R^M and

ψ_k^j(t) = [ ∫_{t_k^j}^{t_{k+1}^j} (h̃^{j1} * |·|³)(t − s) ds
             ∫_{t_k^j}^{t_{k+1}^j} (h̃^{j2} * |·|³)(t − s) ds
               ⋮
             ∫_{t_k^j}^{t_{k+1}^j} (h̃^{jM} * |·|³)(t − s) ds ]. (45)

The reconstruction coefficients are given in matrix form by

[ c  ]   [ G  p  r ]⁺ [ q ]
[ d₀ ] = [ pᵀ 0  0 ]  [ 0 ]. (46)
[ d₁ ]   [ rᵀ 0  0 ]  [ 0 ]

With p = [p¹; p²; ...; p^N], p^j ∈ R^{(n^j−1)×M}, and r similarly defined, the entries of p, r, and G are given by

[p^j]_{ki} = ⟨1, φ_k^{ji}⟩, [r^j]_{ki} = ⟨t, φ_k^{ji}⟩, (47)
[G^{ij}]_{kl} = ⟨φ_k^i, ψ_l^j⟩_{L²_M}, (48)

where the inner products are restricted to the interval [0, T].

Proof. The proof is based on Theorem 4. □

Remark 15. Note that since the signal reconstruction is set up as a spline interpolation problem, the algorithm presented in Theorem 14 above will produce a solution that depends on both the filtering kernel H and the spiking mechanism of the population of neurons. We briefly mention here conditions for no information loss due to filtering. If F denotes the Fourier transform, we have

(Fv)(ω) = (FH)(ω) · (Fu)(ω).

The requirement of no information loss implies that FH, the filtering kernel in the Fourier domain, has rank M for all frequencies of interest (here for all ω ∈ R). A trivial necessary condition that follows from the rank condition is that N ≥ M; that is, the number of neurons that encode the stimulus must be at least equal to the number of its components. This intuitive argument has important ramifications in experimental neuroscience, as it shows that, in general, multivariate stimuli (e.g., video sequences) cannot be efficiently represented by the spike train of a single neuron or a small neural population. Rather, the spike trains from a larger population of neurons that encode the same stimulus need to be used.
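The rank condition can be probed numerically. For the delay kernel considered in the next subsection, h^{ji}(t) = w^{ji} δ(t − a^{ji}), so (FH)(ω) has entries w^{ji} exp(−iω a^{ji}). The sketch below draws random weights and delays (an assumption for illustration) and checks that a 9 × 3 kernel generically has rank M = 3 across a band of frequencies, while a 2 × 3 kernel cannot.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 9, 3
W = rng.uniform(0.5, 1.5, (N, M))        # assumed random weights w^{ji}
Av = rng.exponential(0.003, (N, M))      # assumed random delays a^{ji} (seconds)

# (FH)(w)_{ji} = w^{ji} * exp(-1j * w * a^{ji}) for the pure-delay kernel
FH = lambda w: W * np.exp(-1j * w * Av)
omegas = 2 * np.pi * np.linspace(-100.0, 100.0, 201)
ranks = [np.linalg.matrix_rank(FH(w)) for w in omegas]
print(min(ranks), np.linalg.matrix_rank(FH(10.0)[:2]))
```

A kernel with N < M rows has rank at most N < M at every frequency, which is the necessary condition N ≥ M stated above.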

4.3. Example: Delay Filter Bank. We present the realization of the recovery algorithm for a filtering kernel that induces arbitrary, but known, delays and weights on the stimulus. The kernel models dendritic tree latencies in sensory neurons (motor, olfactory) [8] or, in general, delays and synaptic weights between groups of pre- and postsynaptic neurons. In order to incorporate these delays, we assume that the stimuli are defined on a time window larger than [0, T]. The inner product, however, is restricted to the time interval [0, T].

Each filter h^{ji} delays the stimulus in time by an arbitrary positive amount a^{ji} and scales it by an arbitrary real number w^{ji}, for all j = 1,2,...,N and all i = 1,2,...,M. Consequently, h^{ji}(t) = w^{ji} δ(t − a^{ji}) and h̃^{ji}(t) = w^{ji} δ(t + a^{ji}). From now on let τ_k^{ji} = t_k^j + a^{ji} for all i = 1,2,...,M, all j = 1,2,...,N, and all k = 1,2,...,n^j − 1.

The representation functions ψ_k^j, j = 1,2,...,N, k = 1,2,...,n^j − 1, of (45) are given by

ψ_k^j(t) = [ w^{j1} ∫_{τ_k^{j1}}^{τ_{k+1}^{j1}} |t − s|³ ds
             w^{j2} ∫_{τ_k^{j2}}^{τ_{k+1}^{j2}} |t − s|³ ds
               ⋮
             w^{jM} ∫_{τ_k^{jM}}^{τ_{k+1}^{jM}} |t − s|³ ds ], (49)

for all t ∈ [0, T]. Note that the general term of (49) can be expressed analytically, similarly to the expression for lim_{R→∞} ψ_k in Remark 10.

The entries of (46) can be computed from (47) as

[p^j]_{ki} = { w^{ji}(τ_{k+1}^{ji} − τ_k^{ji}),  τ_{k+1}^{ji} ≤ T,
               w^{ji}(T − τ_k^{ji}),  τ_k^{ji} < T < τ_{k+1}^{ji},
               0,  T ≤ τ_k^{ji},

[r^j]_{ki} = { 0.5 w^{ji}((τ_{k+1}^{ji})² − (τ_k^{ji})²),  τ_{k+1}^{ji} ≤ T,
               0.5 w^{ji}(T² − (τ_k^{ji})²),  τ_k^{ji} < T < τ_{k+1}^{ji},
               0,  T ≤ τ_k^{ji}, (50)

[G^{ij}]_{kl} = ⟨φ_k^i, ψ_l^j⟩_{L²_M} = Σ_{m=1}^{M} ⟨φ_k^{im}, ψ_l^{jm}⟩ = Σ_{m=1}^{M} w^{im} w^{jm} ∫_{τ_k^{im}}^{τ_{k+1}^{im}} ∫_{τ_l^{jm}}^{τ_{l+1}^{jm}} |t − s|³ ds dt,

with the outer integration truncated to the interval [0, T].

Note that the entries of G can be computed analytically; the resulting expressions are fifth-order polynomials in the shifted spike times, with the form of each summand determined by the relative ordering of the intervals [τ_k^{im}, τ_{k+1}^{im}] and [τ_l^{jm}, τ_{l+1}^{jm}] (disjoint, nested, or partially overlapping). For example, when τ_{l+1}^{jm} ≤ τ_k^{im}, the mth summand of [G^{ij}]_{kl} equals

0.05 w^{im} w^{jm} [(τ_{k+1}^{im} − τ_l^{jm})⁵ − (τ_k^{im} − τ_l^{jm})⁵ − (τ_{k+1}^{im} − τ_{l+1}^{jm})⁵ + (τ_k^{im} − τ_{l+1}^{jm})⁵],

with analogous expressions for the remaining orderings of the two intervals.

The vector-valued signal u(t) = [u¹(t), u²(t), u³(t)]ᵀ has three bandlimited components (M = 3), each with the same bandwidth Ω = 2π · 100 rad/s and time length T = 100 milliseconds. In total, 9 IAF neurons were used to recover the signal (N = 9). The delays were drawn randomly from an exponential distribution with mean 3 milliseconds. The biases b^j and thresholds δ^j, j = 1,2,...,9, were drawn from uniform distributions on the intervals [2.3, 3.3] and [0.5, 1.5], respectively. Finally, κ^j = 0.01 for all j = 1,2,...,9.

The recovered stimuli using the spikes from 3, 6, and 9 neurons, respectively, are depicted from top to bottom in Figure 5. For each component, the recovered signal converges to the original one.

Figure 5: Recovery of the 3-dimensional input vector-valued signal. In each row the original (blue) and recovered (green) signals are shown for the indicated number of neurons used for recovery. The columns correspond to each component of the input signal.

Figure 6 shows the SNR corresponding to the recovery of each stimulus component when 3, 4, ..., 9 neurons are used. Figure 6 demonstrates that overall, as more neurons are added to the representation of the stimulus, the SNR of all stimulus components increases. An exception is observed in the recovery of the component u³; the addition of a neuron from three to four leads to a decrease of the SNR. Note, however, that the SNR for the recovery of the entire vector-valued stimulus u increases with the addition of the fourth neuron from 7.71 dB to 12.23 dB (data not shown).

5. Discussion

The methodology of interpolating splines presented here applies to the deterministic case where the input stimulus and the LIF neurons have low noise levels. It ties in naturally with theoretical results that show that neural encoding of bandlimited signals leads to perfect signal reconstruction if Nyquist-type rate conditions are satisfied [5].

In neuromorphic engineering applications the noise levels can be kept low. Neuronal spike trains, however, often exhibit strong variability in response to identical inputs due to various noise sources. For stimuli encoded with neural circuits the problem of optimal reconstruction can be formulated as a smoothing spline problem [4]. This case is presented analytically in [7] for a slightly less general setup, where the signals belong to a Reproducing Kernel Hilbert Space [19]. A reconstruction of stimuli encoded with LIF neurons using both smoothing and interpolating splines offers an additional alternative. Thus, the methodology of spline theory provides a general framework for the optimal reconstruction of signals on a finite time horizon.

Figure 6: SNR as a function of the number of neurons.

The methodology presented here can be applied to the reconstruction of stimuli encoded with neurons that belong to other model classes of interest. An example is provided by neuron models with level crossing spiking mechanisms and feedback that have been investigated in [16]. More generally, the t-transform of any neuron model with piecewise linear dynamics can be described by a set of linear projections. Neurons with linear dynamics have been shown to express complex spiking behaviors [20-22].

The MIMO architecture presented here consists of a linear, time-invariant filtering kernel that is separated from the neural spiking mechanism. By relaxing the time-invariance property and embedding spike-triggered resetting mechanisms at the level of the filtering kernel, more complex transformations can be modeled. Consequently, dendritic trees incorporating compartmental neuron models and spike backpropagation [23] can be analyzed with the methodology advanced in this paper. The aforementioned architectures will be the subject of future research.

Appendices

A. Interpolation Splines in Hilbert Spaces

We assume throughout that stimuli u belong to the space of functions of finite energy over a domain T, that is, u ∈ L²(T). The information available to a decoder is a set of measurements

⟨φ_k, u⟩ = q_k, (A.1)

where φ_k ∈ L²(T) are known functions and k = 1,2,...,n. The inner products can be written in operator form as

Lu = q, (A.2)

where L : L²(T) → Rⁿ is the bounded linear operator with (Lu)_k = ⟨φ_k, u⟩ and q = [q₁, q₂, ..., q_n]ᵀ. Finding u by inverting (A.2) is, in general, an ill-posed problem. Additional "smoothness" conditions are needed. We introduce these by requiring that the reconstructed signal minimize a quadratic criterion ‖Ju‖, where J : L²(T) → Y is a bounded linear operator, Y is the range of J, and ‖·‖ denotes the standard L²-norm over T.

Definition 1. The solution to the interpolation problem

$$\hat{u} = \arg\min_{u \,:\, Lu = q} \|Ju\|^2 \quad (A.4)$$

is called an interpolation spline corresponding to the initial data $q$, the measurement operator $L$, and the energy operator $J$.

We restrict ourselves to the case where the operator $J$ has a finite-dimensional kernel of dimension $m$. The following standard theorem establishes sufficient conditions for the existence and uniqueness of the interpolation spline. For a proof see [13].

Theorem 2. If $\ker(L) \cap \ker(J) = \{0\}$ and the range of $J$ is closed, then there exists a unique interpolation spline corresponding to the data $q$, the measurement operator $L$, and the energy operator $J$.
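The uniqueness condition of Theorem 2 can be probed numerically. The sketch below is an illustration under assumptions not stated in the paper: it uses hypothetical point-sample measurements and the classical cubic-spline energy operator ($m = 2$, so that $\ker(J)$ is spanned by $\{1, t\}$). With a single sample there is a nonzero affine function vanishing at the sample point, so $\ker(L) \cap \ker(J) \neq \{0\}$ and the interpolation system (A.10) is singular; with two distinct samples the condition of Theorem 2 holds and the system is invertible.

```python
import numpy as np

# Illustration (hypothetical setup): point-sample measurements u(t_k) = q_k
# with the cubic-spline energy operator (m = 2), so ker(J) = span{1, t}.
def block_matrix(t):
    """Assemble the block matrix of (A.10) for point samples at times t."""
    n = len(t)
    G = np.abs(t[:, None] - t[None, :]) ** 3 / 12.0  # [G]_kj = psi_j(t_k)
    F = np.column_stack([np.ones(n), t])             # [F]_kl = chi_l(t_k)
    return np.block([[G, F], [F.T, np.zeros((2, 2))]])

# One sample: a nonzero affine function can vanish at t_1, so ker(L) and
# ker(J) intersect nontrivially and the system is singular.
A1 = block_matrix(np.array([1.0]))
print(np.linalg.matrix_rank(A1))   # 2 < 3: rank-deficient

# Two distinct samples: only the zero affine function vanishes at both
# points, so Theorem 2 applies and the system has full rank.
A2 = block_matrix(np.array([0.0, 1.0]))
print(np.linalg.matrix_rank(A2))   # 4: full rank
```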

In order to derive the general form of the interpolation spline, we introduce the notion of reproducing kernel for a Hilbert space with respect to the energy operator J. This notion generalizes reproducing kernels associated with Reproducing Kernel Hilbert Spaces [19].

Definition 3. The function $K : \mathbb{T} \times \mathbb{T} \to \mathbb{R}$ is called the reproducing kernel of the space $L^2(\mathbb{T})$ with respect to the energy operator $J$ if

(1) for any bounded linear functional $L : L^2(\mathbb{T}) \to \mathbb{R}$, the function $f(s) = LK(s, \cdot)$ lies in $L^2(\mathbb{T})$;

(2) any bounded linear functional $L : L^2(\mathbb{T}) \to \mathbb{R}$ that vanishes on the kernel of $J$, that is,

$$Lu = 0, \quad \forall u \in \ker(J), \quad (A.5)$$

can be represented as

$$Lu = B(LK(s, \cdot), u), \quad \forall u \in L^2(\mathbb{T}). \quad (A.6)$$

Here $B : L^2(\mathbb{T}) \times L^2(\mathbb{T}) \to \mathbb{R}$ is the bilinear form defined by

$$B(u, v) = \tfrac{1}{4}\left(\|J(u + v)\|^2 - \|J(u - v)\|^2\right). \quad (A.7)$$

Theorem 4. The solution to the spline interpolation problem is given by

$$\hat{u} = \sum_{i=1}^{m} d_i \chi_i + \sum_{k=1}^{n} c_k \psi_k, \quad (A.8)$$

where the set of functions $\chi_i$, $i = 1, 2, \ldots, m$, forms an orthogonal basis for $\ker(J)$ and

$$\psi_k(s) = \langle \phi_k, K(s, \cdot) \rangle. \quad (A.9)$$

With $d = [d_1, d_2, \ldots, d_m]^T$, $c = [c_1, c_2, \ldots, c_n]^T$, and $q = [q_1, q_2, \ldots, q_n]^T$, the coefficients $c$ and $d$ satisfy the matrix equation

$$\begin{bmatrix} G & F \\ F^T & 0 \end{bmatrix} \begin{bmatrix} c \\ d \end{bmatrix} = \begin{bmatrix} q \\ 0 \end{bmatrix}. \quad (A.10)$$

Moreover, $G$ is an $n \times n$ matrix and $F$ is an $n \times m$ matrix with entries given by

$$[G]_{kj} = \langle \phi_k, \psi_j \rangle, \qquad [F]_{kl} = \langle \phi_k, \chi_l \rangle, \quad (A.11)$$

for all $k = 1, 2, \ldots, n$, $j = 1, 2, \ldots, n$, and $l = 1, 2, \ldots, m$.

Proof. For the representation result of (A.8) see [13]. By substituting (A.8) into (A.2), we obtain

$$Gc + Fd = q. \quad (A.12)$$

For the rest of the equations define the Lagrangian

$$J^*(u, c) = \|Ju\|^2 - 2\sum_{k=1}^{n} c_k\left(\langle \phi_k, u \rangle - q_k\right), \quad (A.13)$$

where the entries of the vector $c$ are the Lagrange multipliers. If $\hat{u}$ is the optimal consistent reconstruction and $v \in L^2(\mathbb{T})$, then

$$\left.\frac{d}{d\alpha} J^*(\hat{u} + \alpha v, c)\right|_{\alpha = 0} = 0 \;\Longrightarrow\; B(\hat{u}, v) = \sum_{k=1}^{n} c_k \langle \phi_k, v \rangle. \quad (A.14)$$

From the Cauchy-Schwarz inequality, for $v \in \ker(J)$ we have

$$B(\hat{u}, v) = 0. \quad (A.15)$$

Therefore, for each of the basis functions $\chi_1, \ldots, \chi_m$ of $\ker(J)$,

$$\sum_{k=1}^{n} c_k \langle \phi_k, \chi_l \rangle = 0, \quad l = 1, 2, \ldots, m, \quad (A.16)$$

which in matrix form can be written as

$$F^T c = 0, \quad (A.17)$$

with $F$ defined as in (A.11). Combining (A.12) with (A.17) we obtain (A.10). For more information see [13]. □
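As a concrete check of Theorem 4, the sketch below solves the block system (A.10) in the cubic-spline setting $m = 2$. The setup is a simplifying assumption of ours, not the paper's encoding: measurements are hypothetical point samples $u(t_k) = q_k$ (rather than the inner-product measurements produced by a LIF t-transform), so that $\psi_k(s) = |s - t_k|^3/12$ and $\ker(J) = \mathrm{span}\{1, t\}$. The resulting spline reproduces the data (consistency) and its coefficients satisfy $F^T c = 0$, as in (A.17).

```python
import numpy as np

def cubic_spline_fit(t, q):
    """Solve the block system (A.10) for m = 2 with point samples
    u(t_k) = q_k (an illustrative, hypothetical measurement model)."""
    n = len(t)
    G = np.abs(t[:, None] - t[None, :]) ** 3 / 12.0  # [G]_kj = psi_j(t_k)
    F = np.column_stack([np.ones(n), t])             # [F]_kl = chi_l(t_k)
    A = np.block([[G, F], [F.T, np.zeros((2, 2))]])
    rhs = np.concatenate([q, np.zeros(2)])
    sol = np.linalg.solve(A, rhs)
    return sol[:n], sol[n:]                          # coefficients c, d

def cubic_spline_eval(s, t, c, d):
    """Evaluate u(s) = sum_k c_k |s - t_k|^3/12 + d_1 + d_2 s, cf. (A.8)."""
    return (np.abs(s[:, None] - t[None, :]) ** 3 / 12.0) @ c + d[0] + d[1] * s

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
q = np.sin(t)
c, d = cubic_spline_fit(t, q)
# consistency: the spline reproduces the measurements up to solver precision
print(np.max(np.abs(cubic_spline_eval(t, t, c, d) - q)))
```

The two zero rows of (A.10) enforce $F^T c = 0$ exactly, which is what keeps the kernel expansion orthogonal to the affine part of the solution.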

B. Reproducing Kernels for MIMO Signal Reconstruction

Let $L^2_M = (L^2(\mathbb{T}))^M$ be the space of $M$-dimensional vector-valued functions defined over the domain $\mathbb{T}$. The space equipped with the inner product and norm given by (35) is a Hilbert space. Suppose that we seek a consistent reconstruction that also minimizes the energy criterion

$$\|J_M u\|^2 = \sum_{i=1}^{M} \left\| \frac{d^m u_i}{dt^m} \right\|^2. \quad (B.1)$$

Lemma 5. The reproducing kernel for the Hilbert space $L^2_M$ with respect to the energy operator $J_M$ is given by

$$K(t, s) = \frac{|t - s|^{2m-1}}{2(2m-1)!} \cdot [1, 1, \ldots, 1]^T. \quad (B.2)$$

Proof. It can be shown that the reproducing kernel can be written in the form

$$K(t, s) = E(t - s), \quad (B.3)$$

where $E(\cdot)$ is the fundamental solution of the $m$th-iterated Laplacian

$$\Delta^m E = \delta. \quad (B.4)$$

For univariate functions the $m$th-iterated Laplacian is equal to the $2m$th-order derivative of each component, and the result follows. A complete proof in a general setting can be found in [24]. □

Note that for $m = 2$ the kernel becomes

$$K(t, s) = \frac{1}{12}|t - s|^3 \cdot [1, 1, \ldots, 1]^T. \quad (B.5)$$

In the general representation result (A.8), the scalar factor can be absorbed into the coefficients.
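The kernel of Lemma 5 can be evaluated directly; a minimal sketch (the function name is ours) confirming that $m = 2$ reduces to the per-component scalar factor $|t - s|^3/12$ of (B.5):

```python
import math
import numpy as np

def mimo_kernel(t, s, m, M):
    """Evaluate (B.2): K(t,s) = |t-s|^(2m-1) / (2(2m-1)!) * [1,...,1]^T."""
    scalar = abs(t - s) ** (2 * m - 1) / (2 * math.factorial(2 * m - 1))
    return scalar * np.ones(M)

# For m = 2 the scalar factor is |t-s|^3 / 12, as in (B.5):
print(mimo_kernel(1.0, 3.0, m=2, M=3))   # each component equals 8/12
```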

Acknowledgments

This work was supported by NIH Grant R01 DC008701-01 and NSF Grant CCF-06-35252. E. A. Pnevmatikakis was also supported by the Onassis Public Benefit Foundation. The authors would like to thank the reviewers for their suggestions for improving the presentation of this paper.

References

[1] A. A. Lazar, "Time encoding with an integrate-and-fire neuron with a refractory period," Neurocomputing, vol. 58-60, pp. 53-58, 2004.

[2] A. A. Lazar and L. T. Toth, "Perfect recovery and sensitivity analysis of time encoded bandlimited signals," IEEE Transactions on Circuits and Systems I, vol. 51, no. 10, pp. 2060-2073, 2004.

[3] O. Christensen, An Introduction to Frames and Riesz Bases, Applied and Numerical Harmonic Analysis, Birkhauser, Boston, Mass, USA, 2003.

[4] G. Wahba, Spline Models for Observational Data, Society for Industrial Mathematics, 1990.

[5] A. A. Lazar and E. A. Pnevmatikakis, "Faithful representation of stimuli with a population of integrate-and-fire neurons," Neural Computation, vol. 20, no. 11, pp. 2715-2744, 2008.

[6] A. A. Lazar and E. A. Pnevmatikakis, "A video time encoding machine," in Proceedings of the IEEE International Conference on Image Processing, San Diego, Calif, USA, October 2008.

[7] A. A. Lazar and E. A. Pnevmatikakis, "Reconstruction of sensory stimuli encoded with integrate-and-fire neurons with random thresholds," EURASIP Journal on Advances in Signal Processing, vol. 2009, Article ID 682930, 14 pages, 2009.

[8] G. L. Fain, Sensory Transduction, Sinauer Associates, Inc., 2003.

[9] D. Song, R. H. Chan, V. Z. Marmarelis, R. E. Hampson, S. A. Deadwyler, and T. W. Berger, "Nonlinear dynamic modeling of spike train transformations for hippocampal-cortical prostheses," IEEE Transactions on Biomedical Engineering, vol. 54, no. 6, pp. 1053-1066, 2007.

[10] T. P. Zanos, S. H. Courellis, T. W. Berger, R. E. Hampson, S. A. Deadwyler, and V. Z. Marmarelis, "Nonlinear modeling of causal interrelationships in neuronal ensembles," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 16, no. 4, pp. 336-352, 2008.

[11] S.-P. Kim, J. C. Sanchez, Y. N. Rao, et al., "A comparison of optimal MIMO linear and nonlinear models for brain-machine interfaces," Journal of Neural Engineering, vol. 3, no. 2, pp. 145-161, 2006.

[12] R. Serrano-Gotarredona, T. Serrano-Gotarredona, A. Acosta-Jimenez, et al., "On real-time AER 2-D convolutions hardware for neuromorphic spike-based cortical processing," IEEE Transactions on Neural Networks, vol. 19, no. 7, pp. 1196-1219, 2008.

[13] A. I. Bezhaev and V. A. Vasilenko, Variational Theory of Splines, Kluwer Academic/Plenum Publishers, New York, NY, USA, 2001.

[14] R. H. Bartels, J. C. Beatty, and B. A. Barsky, An Introduction to Splines for Use in Computer Graphics & Geometric Modeling, Morgan Kaufmann, San Francisco, Calif, USA, 1987.

[15] M. Unser, A. Aldroubi, and M. Eden, "B-spline signal processing—part I: theory," IEEE Transactions on Signal Processing, vol. 41, no. 2, pp. 821-833, 1993.

[16] A. A. Lazar and E. A. Pnevmatikakis, "Consistent recovery of stimuli encoded with a neural ensemble," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3497-3500, Taipei, Taiwan, April 2009.

[17] J. Kybic, T. Blu, and M. Unser, "Generalized sampling: a variational approach. I: theory," IEEE Transactions on Signal Processing, vol. 50, no. 8, pp. 1965-1976, 2002.

[18] R. H. Masland, "The fundamental plan of the retina," Nature Neuroscience, vol. 4, no. 9, pp. 877-886, 2001.

[19] A. Berlinet and C. Thomas-Agnan, Reproducing Kernel Hilbert Spaces in Probability and Statistics, Kluwer Academic Publishers, Boston, Mass, USA, 2004.

[20] E. M. Izhikevich, "Resonate-and-fire neurons," Neural Networks, vol. 14, no. 6-7, pp. 883-894, 2001.

[21] S. Mihalas and E. Niebur, "A generalized linear integrate-and-fire neural model produces diverse spiking behaviors," Neural Computation, vol. 21, no. 3, pp. 704-718, 2009.

[22] R. Jolivet, R. Kobayashi, A. Rauch, R. Naud, S. Shinomoto, and W. Gerstner, "A benchmark test for a quantitative assessment of simple neuron models," Journal of Neuroscience Methods, vol. 169, no. 2, pp. 417-424, 2008.

[23] P. C. Bressloff and J. G. Taylor, "Dynamics of compartmental model neurons," Neural Networks, vol. 7, no. 6-7, pp. 1153-1165, 1994.

[24] J. Duchon, "Splines minimizing rotation-invariant semi-norms in Sobolev spaces," in Constructive Theory of Functions of Several Variables, W. Schempp and K. Zeller, Eds., Springer, New York, NY, USA, 1977.
