
Advances in Difference Equations, a SpringerOpen Journal

RESEARCH Open Access

Dynamic evolution evoked by external inputs in memristor-based wavelet neural networks with different memductance functions

Ailong Wu1,2,3*, Zhigang Zeng3 and Jian Xiao3

Correspondence: alequ@126.com. 1College of Mathematics and Statistics, Hubei Normal University, Huangshi, 435002, China. 2Institute for Information and System Science, Xi'an Jiaotong University, Xi'an, 710049, China. Full list of author information is available at the end of the article.

Abstract

In this paper, we present a preliminary study concerning the dynamic flows in memristor-based wavelet neural networks with continuous feedback functions and discontinuous feedback functions in the presence of different memductance functions. The theoretical studies as well as the computer simulations confirm our claim. The analysis can characterize the fundamental electrical properties of memristor devices and provide convenience for applications.

Keywords: memristor; wavelet neural networks; dynamics

1 Introduction

In recent years, numerous studies have focused on the use of the memristor as a discrete circuit element to model phenomena or to implement novel functions. Recent advances in memristors have led to the realization of large-scale artificial neural systems subserving perception, cognition, and learning [1-9]. A memristor acts as a modulating synaptic interconnection between neurons; plasticity is accomplished by adjusting the memristance via current spikes, based on the relative timing of pre-synaptic and post-synaptic neuron spikes. By using memristors as synapses in artificial neural systems, variable memristance offers great potential for creating neuromorphic computing hardware.

As is well known, memristor-based neural networks may constitute a real breakthrough in electronic and circuit design [2-5]. The dynamic evolution of electronic circuits and systems is extremely important in systems analysis and integration. For this reason, it is important to study what dynamics arise in memristive systems and how they could be used for meaningful tasks. One issue is that a neural network with memristor bridge synapses exhibits a plethora of complex nonlinear behaviors [5-9]. It is hard to predict when the dynamic flows of a specific memristor-based neural network might become detrimental to performance, so a detailed analytical study of the dynamic evolution is necessary.

Consider the memristive neurodynamic system governed by the following equations

$$\dot{x}_i(t) = -x_i(t) + \sum_{j=1, j\neq i}^{n} w_{ij}\big(x_i(t)\big) f_j\big(x_j(t)\big) + u_i, \quad i = 1, 2, \ldots, n, \tag{1}$$


where x_i(t) is the voltage of the capacitor C_i, u_i denotes the external input, f_j(·) is a wavelet feedback function, w_ij(x_i(t)) represents the memristor-based weight, and

$$w_{ij}\big(x_i(t)\big) = W_{ij} \times \operatorname{sgin}_{ij}, \qquad \operatorname{sgin}_{ij} = \begin{cases} 1, & i \neq j, \\ -1, & i = j, \end{cases}$$

in which W_ij denotes the memductance of the memristor R_ij, and R_ij represents the memristor between the feedback function f_j(x_j(t)) and x_i(t). Combining this with the physical structure of a memristor device, one can see that

$$W_{ij} = \frac{dq_{ij}(\sigma_{ij})}{d\sigma_{ij}},$$

where $q_{ij}$ and $\sigma_{ij}$ denote the charge and the magnetic flux corresponding to memristor R_ij, respectively.

Research shows that pinched hysteresis loops are the fingerprint of memristive devices [6, 9]. Under different pinched hysteresis loops, the evolutionary tendency or process of memristive systems evolves into different forms. It is generally known that the pinched hysteresis loop is due to the nonlinearity of the memductance function. In this paper, we discuss the following two typical memductance functions.

Case 1: The memductance function W_ij is given by

$$W_{ij} = \begin{cases} a_{ij}, & |\sigma_{ij}| < \ell_{ij}, \\ b_{ij}, & |\sigma_{ij}| > \ell_{ij}, \end{cases}$$

where a_ij, b_ij and $\ell_{ij} > 0$ are constants, i, j = 1, 2, ..., n.

Case 2: The memductance function W_ij is given by

$$W_{ij} = c_{ij} + 3 d_{ij}\, \sigma_{ij}^{2},$$

where c_ij and d_ij are constants, i, j = 1, 2, ..., n.
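For numerical experimentation, the two memductance function types above can be transcribed directly as functions of the flux. The short Python sketch below does this; the constants a, b, l, c, d are illustrative placeholders rather than values taken from the paper.

```python
# Sketch of the two memductance function types of Case 1 and Case 2, written as
# functions of the flux sigma. The constants below are illustrative placeholders.

def memductance_case1(sigma, a=0.1, b=0.4, l=1.0):
    """Case 1: piecewise constant memductance, a for |sigma| < l and b for |sigma| > l."""
    return a if abs(sigma) < l else b

def memductance_case2(sigma, c=0.1, d=0.05):
    """Case 2: smooth memductance c + 3*d*sigma**2, i.e. dq/dsigma with q = c*sigma + d*sigma**3."""
    return c + 3.0 * d * sigma ** 2

if __name__ == "__main__":
    for s in (-2.0, -0.5, 0.0, 0.5, 2.0):
        print(s, memductance_case1(s), round(memductance_case2(s), 3))
```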

According to the features of the memristor given in case 1 and case 2, the following two cases can happen.

Case 1': In case 1,

$$w_{ij}\big(x_i(t)\big) = \begin{cases} \hat{W}_{ij}, & \operatorname{sgin}_{ij}\, \dfrac{df_j(x_j(t))}{dt} - \dfrac{dx_i(t)}{dt} \le 0, \\[2mm] \check{W}_{ij}, & \operatorname{sgin}_{ij}\, \dfrac{df_j(x_j(t))}{dt} - \dfrac{dx_i(t)}{dt} > 0, \end{cases}$$

for i, j = 1, 2, ..., n, where $\hat{W}_{ij}$ and $\check{W}_{ij}$ are constants.

Case 2': In case 2, w_ij(x_i(t)) is a continuous function, and

$$\hat{A}_{ij} \le w_{ij}\big(x_i(t)\big) \le \check{A}_{ij} \tag{6}$$

for i, j = 1, 2, ..., n, where $\hat{A}_{ij}$ and $\check{A}_{ij}$ are constants.

Clearly, the memristive neural network (1) with different memductance functions is either a state-dependent switched system or a state-dependent continuous system, which generalizes conventional neural network models.

Several novel research results on conventional nonlinear neural networks have been reported; see [10-25]. For memristor-based neural networks, however, the classical approach of nonlinear system theory is not directly applicable to the study of dynamic flows, since such a network consists of too many subsystems. It is therefore important to develop effective methods for these issues alongside the development of applications, so that memristor-based neural networks can readily be used as alternatives to traditional techniques or as components of integrated systems.

In this paper, the main purpose is to study the dynamic flows of a class of memristor-based wavelet neural networks with continuous feedback functions and discontinuous feedback functions in the presence of different memductance functions. Meanwhile, the theoretical investigation will help to design efficient memristor-based neuromorphic circuits and to study other memristor-based complex systems. Note that the structure of wavelet neural networks is totally different from that of many traditional neural networks. Hence, the existing results cannot be directly applied to wavelet neural networks. In addition, we give some sufficient conditions on dynamic evolution, all of which are easy to verify.

Throughout this paper, solutions of all the systems considered below are intended in Filippov's sense. $[\cdot, \cdot]$ denotes an interval. $\operatorname{co}\{\hat{\delta}, \check{\delta}\}$ denotes the closure of the convex hull of $\mathbb{R}$ generated by real numbers $\hat{\delta}$ and $\check{\delta}$. Let $\overline{w}_{ij} = \max\{\hat{W}_{ij}, \check{W}_{ij}\}$, $\underline{w}_{ij} = \min\{\hat{W}_{ij}, \check{W}_{ij}\}$, $\tilde{w}_{ij} = \max\{|\hat{W}_{ij}|, |\check{W}_{ij}|\}$, $\tilde{A}_{ij} = \max\{|\hat{A}_{ij}|, |\check{A}_{ij}|\}$, for i, j = 1, 2, ..., n.

The remaining part of this paper is organized as follows. The main results are stated in Sections 2 and 3. In Section 4, two illustrative examples are provided with simulation results. Finally, concluding remarks are given in Section 5.

2 Memristor-based wavelet neural networks (1) in case 1'

In this section, we discuss the memristor-based wavelet neural networks (1) with continuous feedback functions and discontinuous feedback functions in case 1'.

Obviously, the memristor-based wavelet neural network (1) in case 1' is a state-dependent switched system, which has nonsmooth dynamics.

2.1 Mexican-hat-type feedback functions

As the most typical representative of continuous feedback functions, Mexican-hat-type feedback functions possess a unique wavelet structure [18].

A solution x(t) = (x_1(t), x_2(t), ..., x_n(t))^T (in the sense of Filippov) of system (1) with initial condition x(0) = x_0 is absolutely continuous on any compact interval of [0, +∞), and

$$\dot{x}_i(t) \in -x_i(t) + \sum_{j=1, j\neq i}^{n} \operatorname{co}\{\hat{W}_{ij}, \check{W}_{ij}\}\, f_j\big(x_j(t)\big) + u_i, \quad i = 1, 2, \ldots, n. \tag{7}$$

In fact, it is easy to find that in case 1', for i = 1, 2, ..., n,

$$\operatorname{co}\Big\{-x_i(t) + \sum_{j=1, j\neq i}^{n} w_{ij}\big(x_i(t)\big) f_j\big(x_j(t)\big) + u_i\Big\} = -x_i(t) + \sum_{j=1, j\neq i}^{n} \operatorname{co}\{\hat{W}_{ij}, \check{W}_{ij}\}\, f_j\big(x_j(t)\big) + u_i.$$

Figure 1 Mexican-hat-type feedback function (8).

Obviously, for i, j = 1, 2, ..., n,

$$\operatorname{co}\{\hat{W}_{ij}, \check{W}_{ij}\} = [\underline{w}_{ij}, \overline{w}_{ij}].$$

Consider system (1) with a class of Mexican-hat-type feedback functions defined as

$$f(r) = \begin{cases} -1, & -\infty < r < -1, \\ r, & -1 \le r \le 1, \\ -r + 2, & 1 < r \le 3, \\ -1, & 3 < r < +\infty. \end{cases} \tag{8}$$
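For later reference, the Mexican-hat-type feedback function (8) can be transcribed directly into code; the vectorized Python sketch below is one possible implementation, used here only to reproduce the profile of Figure 1.

```python
import numpy as np

# One possible vectorized transcription of the Mexican-hat-type feedback function (8).

def mexican_hat(r):
    r = np.asarray(r, dtype=float)
    return np.piecewise(
        r,
        [r < -1, (r >= -1) & (r <= 1), (r > 1) & (r <= 3), r > 3],
        [-1.0, lambda s: s, lambda s: -s + 2.0, -1.0],
    )

if __name__ == "__main__":
    print(mexican_hat([-2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0]))
    # expected: [-1. -1.  0.  1.  0. -1. -1.]
```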

Figure 1 shows the configuration of (8). Define three index subsets as follows:

$$\begin{aligned} N_1 &= \bigg\{ i : u_i < -1 - \sum_{j=1, j\neq i}^{n} \tilde{w}_{ij} \bigg\}, \\ N_2 &= \bigg\{ i : 1 + \sum_{j=1, j\neq i}^{n} \tilde{w}_{ij} < u_i < 3 - \sum_{j=1, j\neq i}^{n} \tilde{w}_{ij} \bigg\}, \\ N_3 &= \bigg\{ i : u_i > 3 + \sum_{j=1, j\neq i}^{n} \tilde{w}_{ij} \bigg\}. \end{aligned}$$
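As an illustration of how these index subsets can be checked numerically, the sketch below classifies the components of the input vector u, given a matrix containing the quantities $\tilde{w}_{ij}$; the function name and data layout are our own choices, and the example data correspond to Example 1 of Section 4.

```python
import numpy as np

# Classify the components of u into the index subsets N1, N2, N3 defined above,
# given a matrix w_tilde whose (i, j) entry is w~_ij. Components that satisfy none
# of the three strict inequalities are left unclassified, mirroring the fact that
# the theorems only cover indices in N1, N2 or N3.

def index_subsets(w_tilde, u):
    w_tilde = np.asarray(w_tilde, dtype=float)
    u = np.asarray(u, dtype=float)
    s = w_tilde.sum(axis=1) - np.diag(w_tilde)  # row sums over j != i
    N1 = [i for i in range(len(u)) if u[i] < -1 - s[i]]
    N2 = [i for i in range(len(u)) if 1 + s[i] < u[i] < 3 - s[i]]
    N3 = [i for i in range(len(u)) if u[i] > 3 + s[i]]
    return N1, N2, N3

if __name__ == "__main__":
    # data of Example 1 in Section 4: w~_12 = w~_21 = max{0.8, 0.5} = 0.8, u = (-2, 4)
    w_tilde = np.array([[0.0, 0.8],
                        [0.8, 0.0]])
    print(index_subsets(w_tilde, [-2.0, 4.0]))  # expected: ([0], [], [1])
```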

Theorem 1 All the state components x_i(t) of system (1) with Mexican-hat-type feedback function (8) in case 1', i ∈ N_1, will flow to the interval (-∞, -1] when t → +∞.

Proof We deliver it in the following two cases due to the different locations of x_i(0).

Case A: x_i(0) ∈ (-∞, -1].

In this case, if there exists some $\hat{t} > 0$ such that $x_i(\hat{t}) = -1$, while x_i(t) < -1 for $t < \hat{t}$, then from (7),

$$\frac{dx_i(\hat{t})}{dt} \le 1 + \sum_{j=1, j\neq i}^{n} \tilde{w}_{ij} + u_i < 0.$$

Thus, x_i(t) would never get out of (-∞, -1]. Similarly, we can also get that once x_i(T) ∈ (-∞, -1] for some T > 0, then x_i(t) would stay in (-∞, -1] for all t > T.

Case B: x_i(0) ∈ (-1, +∞).

In this case, we claim that x_i(t) would monotonically decrease until it reaches the interval (-∞, -1] in some finite time $\hat{t} > 0$, i.e., $x_i(\hat{t}) \le -1$. As a matter of fact, when x_i(t) ∈ (3, +∞), from (7),

$$\dot{x}_i(t) < -3 + \sum_{j=1, j\neq i}^{n} \tilde{w}_{ij} + u_i < 1 + \sum_{j=1, j\neq i}^{n} \tilde{w}_{ij} + u_i < 0;$$

when x_i(t) ∈ (1, 3], from (7),

$$\dot{x}_i(t) < -1 + \sum_{j=1, j\neq i}^{n} \tilde{w}_{ij} + u_i < 1 + \sum_{j=1, j\neq i}^{n} \tilde{w}_{ij} + u_i < 0;$$

when x_i(t) ∈ (-1, 1], from (7),

$$\dot{x}_i(t) < 1 + \sum_{j=1, j\neq i}^{n} \tilde{w}_{ij} + u_i < 0.$$

To sum up, wherever the initial state x_i(0) is located, x_i(t) would flow to and enter the interval (-∞, -1]. Combining this with Case A, x_i(t) would eventually stay in the interval (-∞, -1]. □

Theorem 2 All the state components x_i(t) of system (1) with Mexican-hat-type feedback function (8) in case 1', i ∈ N_2, will flow to the interval [1, 3] when t → +∞.

Proof According to the different locations of x_i(0), we deduce it in three cases.

Case A: x_i(0) ∈ [1, 3].

In this case, if there exists some $\hat{t} > 0$ such that $x_i(\hat{t}) = 1$, while 1 < x_i(t) < 3 for $t < \hat{t}$, then from (7),

$$\frac{dx_i(\hat{t})}{dt} \ge -1 - \sum_{j=1, j\neq i}^{n} \tilde{w}_{ij} + u_i > 0.$$

Analogously, if there exists some $\hat{t} > 0$ such that $x_i(\hat{t}) = 3$, while 1 < x_i(t) < 3 for $t < \hat{t}$, then from (7),

$$\frac{dx_i(\hat{t})}{dt} \le -3 + \sum_{j=1, j\neq i}^{n} \tilde{w}_{ij} + u_i < 0.$$

Thus, x_i(t) would never get out of [1, 3]. Similarly, we can also get that once x_i(T) ∈ [1, 3] for some T > 0, then x_i(t) would stay in [1, 3] for all t > T.

Case B: x_i(0) ∈ (-∞, 1).

When x_i(t) ∈ (-∞, -1], from (7),

$$\dot{x}_i(t) \ge 1 - \sum_{j=1, j\neq i}^{n} \tilde{w}_{ij} + u_i > 1 - \sum_{j=1, j\neq i}^{n} \tilde{w}_{ij} + 1 + \sum_{j=1, j\neq i}^{n} \tilde{w}_{ij} = 2 > 0;$$

when x_i(t) ∈ (-1, 1), from (7),

$$\dot{x}_i(t) > -1 - \sum_{j=1, j\neq i}^{n} \tilde{w}_{ij} + u_i > 0.$$

Thus, in this case, x_i(t) would monotonically increase until it reaches [1, 3].

Case C: x_i(0) ∈ (3, +∞).

When x_i(t) ∈ (3, +∞), from (7),

$$\dot{x}_i(t) < -3 + \sum_{j=1, j\neq i}^{n} \tilde{w}_{ij} + u_i < 0.$$

Therefore, x_i(t) would monotonically decrease until it enters the interval [1, 3].

To sum up, wherever the initial state x_i(0) is located, x_i(t) would flow to and enter the interval [1, 3]. □

Theorem 3 All the state components x_i(t) of system (1) with Mexican-hat-type feedback function (8) in case 1', i ∈ N_3, will flow to the interval [3, +∞) when t → +∞.

Proof We deliver it in the following two cases.

Case A: x_i(0) ∈ [3, +∞).

In this case, if there exists some $\hat{t} > 0$ such that $x_i(\hat{t}) = 3$, while x_i(t) > 3 for $t < \hat{t}$, then from (7),

$$\frac{dx_i(\hat{t})}{dt} \ge -3 - \sum_{j=1, j\neq i}^{n} \tilde{w}_{ij} + u_i > 0.$$

So, x_i(t) would never get out of [3, +∞). Similarly, we can also get that once x_i(T) ∈ [3, +∞) for some T > 0, then x_i(t) would stay in [3, +∞) for all t > T.

Case B: x_i(0) ∈ (-∞, 3).

In this case, we claim that x_i(t) would monotonically increase until it reaches the interval [3, +∞).

As a matter of fact, when x_i(t) ∈ (-∞, -1], from (7),

$$\dot{x}_i(t) \ge 1 - \sum_{j=1, j\neq i}^{n} \tilde{w}_{ij} + u_i > 1 - \sum_{j=1, j\neq i}^{n} \tilde{w}_{ij} + 3 + \sum_{j=1, j\neq i}^{n} \tilde{w}_{ij} = 4 > 0;$$

when x_i(t) ∈ (-1, 1], from (7),

$$\dot{x}_i(t) \ge -1 - \sum_{j=1, j\neq i}^{n} \tilde{w}_{ij} + u_i > -1 - \sum_{j=1, j\neq i}^{n} \tilde{w}_{ij} + 3 + \sum_{j=1, j\neq i}^{n} \tilde{w}_{ij} = 2 > 0;$$

when x_i(t) ∈ (1, 3), from (7),

$$\dot{x}_i(t) > -3 - \sum_{j=1, j\neq i}^{n} \tilde{w}_{ij} + u_i > 0.$$

Therefore, in this case, x_i(t) would monotonically increase until it reaches [3, +∞).

In summary, wherever the initial state x_i(0) is located, x_i(t) would flow to and enter the interval [3, +∞). □

Remark 1 In Theorems 1-3, a core idea is to employ nonsmooth analysis within the mathematical framework of Filippov solutions. Generally speaking, nonsmooth analysis is suitable for analyzing the nonsmooth dynamics of hybrid systems. Meanwhile, it is worth observing that the memristor-based wavelet neural network model in case 1' is a state-dependent nonlinear switching dynamical system, which extends many existing neural network models. Therefore, the obtained results in this paper can be applied in a wider scope.

2.2 Piecewise constant feedback functions

As a representative of discontinuous feedback functions, piecewise constant feedback functions have an important position among typical wavelet neural networks [24, 25]. Consider system (1) with a class of piecewise constant feedback functions defined as

$$f(r) = \begin{cases} -1, & -\infty < r < -1, \\ 0, & -1 \le r \le 1, \\ 1, & 1 < r \le 3, \\ -1, & 3 < r < +\infty. \end{cases} \tag{9}$$
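The piecewise constant feedback function (9) can be transcribed in the same way as (8); in the sketch below, the values assigned at the jump points r = -1, 1, 3 are one possible convention and are immaterial for the Filippov solutions considered here.

```python
import numpy as np

# One possible transcription of the piecewise constant feedback function (9).

def piecewise_constant(r):
    r = np.asarray(r, dtype=float)
    return np.piecewise(
        r,
        [r < -1, (r >= -1) & (r <= 1), (r > 1) & (r <= 3), r > 3],
        [-1.0, 0.0, 1.0, -1.0],
    )

if __name__ == "__main__":
    print(piecewise_constant([-2.0, 0.0, 2.0, 4.0]))  # expected: [-1.  0.  1. -1.]
```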

Figure 2 shows the configuration of (9).

Corollary 1 All the state components x_i(t) of system (1) with piecewise constant feedback function (9) in case 1', i ∈ N_1, will flow to the interval (-∞, -1] when t → +∞.

Corollary 2 All the state components x_i(t) of system (1) with piecewise constant feedback function (9) in case 1', i ∈ N_2, will flow to the interval [1, 3] when t → +∞.

Corollary 3 All the state components x_i(t) of system (1) with piecewise constant feedback function (9) in case 1', i ∈ N_3, will flow to the interval [3, +∞) when t → +∞.

Corollaries 1-3 can be proved by the same standard arguments as Theorems 1-3.


Figure 2 Piecewise constant feedback function (9).

Remark 2 Theorems 1-3 and Corollaries 1-3 are obtained for the Mexican-hat-type feedback function (8) and the piecewise constant feedback function (9). In fact, even if memristive neurodynamic system (1) is equipped with other types of Mexican-hat-type feedback functions and piecewise constant feedback functions, the main results in this paper can still be extended in a parallel way.

3 Memristor-based wavelet neural networks (1) in case 2'

In this section, we investigate the memristor-based wavelet neural networks (1) with continuous feedback functions and discontinuous feedback functions in case 2'.

Obviously, the memristor-based wavelet neural network (1) in case 2' is a state-dependent continuous system. By (1), it is easy to know that for i = 1,2,..., n,

$$-x_i(t) - \sum_{j=1, j\neq i}^{n} \tilde{A}_{ij} \big|f_j\big(x_j(t)\big)\big| + u_i \le \dot{x}_i(t) \le -x_i(t) + \sum_{j=1, j\neq i}^{n} \tilde{A}_{ij} \big|f_j\big(x_j(t)\big)\big| + u_i. \tag{10}$$

Define three index subsets as follows:

$$\begin{aligned} \mathcal{N}_1 &= \bigg\{ i : u_i < -1 - \sum_{j=1, j\neq i}^{n} \tilde{A}_{ij} \bigg\}, \\ \mathcal{N}_2 &= \bigg\{ i : 1 + \sum_{j=1, j\neq i}^{n} \tilde{A}_{ij} < u_i < 3 - \sum_{j=1, j\neq i}^{n} \tilde{A}_{ij} \bigg\}, \\ \mathcal{N}_3 &= \bigg\{ i : u_i > 3 + \sum_{j=1, j\neq i}^{n} \tilde{A}_{ij} \bigg\}. \end{aligned}$$

Theorem 4 All the state components x_i(t) of system (1) with Mexican-hat-type feedback function (8) in case 2', i ∈ $\mathcal{N}_1$, will flow to the interval (-∞, -1] when t → +∞.

Proof We deliver it in the following two cases due to the different locations of x_i(0).

Case A: x_i(0) ∈ (-∞, -1].

In this case, if there exists some $\hat{t} > 0$ such that $x_i(\hat{t}) = -1$, while x_i(t) < -1 for $t < \hat{t}$, then from (10),

$$\frac{dx_i(\hat{t})}{dt} \le 1 + \sum_{j=1, j\neq i}^{n} \tilde{A}_{ij} + u_i < 0.$$

Thus, x_i(t) would never get out of (-∞, -1]. Similarly, we can also get that once x_i(T) ∈ (-∞, -1] for some T > 0, then x_i(t) would stay in (-∞, -1] for all t > T.

Case B: x_i(0) ∈ (-1, +∞).

In this case, we claim that x_i(t) would monotonically decrease until it reaches the interval (-∞, -1] in some finite time $\hat{t} > 0$, i.e., $x_i(\hat{t}) \le -1$. As a matter of fact, when x_i(t) ∈ (3, +∞), from (10),

$$\dot{x}_i(t) < -3 + \sum_{j=1, j\neq i}^{n} \tilde{A}_{ij} + u_i < 1 + \sum_{j=1, j\neq i}^{n} \tilde{A}_{ij} + u_i < 0;$$

when x_i(t) ∈ (1, 3], from (10),

$$\dot{x}_i(t) < -1 + \sum_{j=1, j\neq i}^{n} \tilde{A}_{ij} + u_i < 1 + \sum_{j=1, j\neq i}^{n} \tilde{A}_{ij} + u_i < 0;$$

when x_i(t) ∈ (-1, 1], from (10),

$$\dot{x}_i(t) < 1 + \sum_{j=1, j\neq i}^{n} \tilde{A}_{ij} + u_i < 0.$$

To sum up, wherever the initial state x_i(0) is located, x_i(t) would flow to and enter the interval (-∞, -1]. Combining this with Case A, x_i(t) would eventually stay in the interval (-∞, -1]. □

Theorem 5 All the state components x_i(t) of system (1) with Mexican-hat-type feedback function (8) in case 2', i ∈ $\mathcal{N}_2$, will flow to the interval [1, 3] when t → +∞.

Proof According to the different locations of x_i(0), we deduce it in three cases.

Case A: x_i(0) ∈ [1, 3].

In this case, if there exists some $\hat{t} > 0$ such that $x_i(\hat{t}) = 1$, while 1 < x_i(t) < 3 for $t < \hat{t}$, then from (10),

$$\frac{dx_i(\hat{t})}{dt} \ge -1 - \sum_{j=1, j\neq i}^{n} \tilde{A}_{ij} + u_i > 0.$$

Analogously, if there exists some $\hat{t} > 0$ such that $x_i(\hat{t}) = 3$, while 1 < x_i(t) < 3 for $t < \hat{t}$, then from (10),

$$\frac{dx_i(\hat{t})}{dt} \le -3 + \sum_{j=1, j\neq i}^{n} \tilde{A}_{ij} + u_i < 0.$$

Thus, x_i(t) would never get out of [1, 3]. Similarly, we can also get that once x_i(T) ∈ [1, 3] for some T > 0, then x_i(t) would stay in [1, 3] for all t > T.

Case B: x_i(0) ∈ (-∞, 1).

When x_i(t) ∈ (-∞, -1], from (10),

$$\dot{x}_i(t) \ge 1 - \sum_{j=1, j\neq i}^{n} \tilde{A}_{ij} + u_i > 1 - \sum_{j=1, j\neq i}^{n} \tilde{A}_{ij} + 1 + \sum_{j=1, j\neq i}^{n} \tilde{A}_{ij} = 2 > 0;$$

when x_i(t) ∈ (-1, 1), from (10),

$$\dot{x}_i(t) > -1 - \sum_{j=1, j\neq i}^{n} \tilde{A}_{ij} + u_i > 0.$$

Thus, in this case, x_i(t) would monotonically increase until it reaches [1, 3].

Case C: x_i(0) ∈ (3, +∞).

When x_i(t) ∈ (3, +∞), from (10),

$$\dot{x}_i(t) < -3 + \sum_{j=1, j\neq i}^{n} \tilde{A}_{ij} + u_i < 0.$$

Therefore, x_i(t) would monotonically decrease until it enters the interval [1, 3].

To sum up, wherever the initial state x_i(0) is located, x_i(t) would flow to and enter the interval [1, 3]. □

Theorem 6 All the state components x_i(t) of system (1) with Mexican-hat-type feedback function (8) in case 2', i ∈ $\mathcal{N}_3$, will flow to the interval [3, +∞) when t → +∞.

Proof We deliver it in the following two cases.

Case A: x_i(0) ∈ [3, +∞).

In this case, if there exists some $\hat{t} > 0$ such that $x_i(\hat{t}) = 3$, while x_i(t) > 3 for $t < \hat{t}$, then from (10),

$$\frac{dx_i(\hat{t})}{dt} \ge -3 - \sum_{j=1, j\neq i}^{n} \tilde{A}_{ij} + u_i > 0.$$

So x_i(t) would never get out of [3, +∞). Similarly, we can also get that once x_i(T) ∈ [3, +∞) for some T > 0, then x_i(t) would stay in [3, +∞) for all t > T.

Case B: x_i(0) ∈ (-∞, 3).

In this case, we claim that x_i(t) would monotonically increase until it reaches the interval [3, +∞).

As a matter of fact, when x_i(t) ∈ (-∞, -1], from (10),

$$\dot{x}_i(t) \ge 1 - \sum_{j=1, j\neq i}^{n} \tilde{A}_{ij} + u_i > 1 - \sum_{j=1, j\neq i}^{n} \tilde{A}_{ij} + 3 + \sum_{j=1, j\neq i}^{n} \tilde{A}_{ij} = 4 > 0;$$

when x_i(t) ∈ (-1, 1], from (10),

$$\dot{x}_i(t) \ge -1 - \sum_{j=1, j\neq i}^{n} \tilde{A}_{ij} + u_i > -1 - \sum_{j=1, j\neq i}^{n} \tilde{A}_{ij} + 3 + \sum_{j=1, j\neq i}^{n} \tilde{A}_{ij} = 2 > 0;$$

when x_i(t) ∈ (1, 3), from (10),

$$\dot{x}_i(t) > -3 - \sum_{j=1, j\neq i}^{n} \tilde{A}_{ij} + u_i > 0.$$

Therefore, in this case, x_i(t) would monotonically increase until it reaches [3, +∞).

In summary, wherever the initial state x_i(0) is located, x_i(t) would flow to and enter the interval [3, +∞). □

Corollary 4 All the state components x_i(t) of system (1) with piecewise constant feedback function (9) in case 2', i ∈ $\mathcal{N}_1$, will flow to the interval (-∞, -1] when t → +∞.

Corollary 5 All the state components x_i(t) of system (1) with piecewise constant feedback function (9) in case 2', i ∈ $\mathcal{N}_2$, will flow to the interval [1, 3] when t → +∞.

Corollary 6 All the state components x_i(t) of system (1) with piecewise constant feedback function (9) in case 2', i ∈ $\mathcal{N}_3$, will flow to the interval [3, +∞) when t → +∞.

Corollaries 4-6 can be proved by the same standard arguments as Theorems 4-6.

Remark 3 It is worth noting that memristive neural networks may display different types of dynamic features in the presence of different memductance functions, namely as a state-dependent switched system or as a state-dependent continuous system. Although the analysis rests on two different theoretical architectures, the proposed criteria are very similar. The unified form of the criteria makes them easy to apply in different situations.

4 Illustrative examples

In this section, two examples are given to illustrate our results. Simulation results show that the obtained conclusions are valid.

Example 1 Consider the two-dimensional memristive neurodynamic system as follows:

$$\begin{aligned} \dot{x}_1(t) &= -x_1(t) + w_{12}\big(x_1(t)\big) f_2\big(x_2(t)\big) + u_1, \\ \dot{x}_2(t) &= -x_2(t) + w_{21}\big(x_2(t)\big) f_1\big(x_1(t)\big) + u_2, \end{aligned} \tag{11}$$

with

$$w_{12}\big(x_1(t)\big) = \begin{cases} 0.8, & \dfrac{df_2(x_2(t))}{dt} - \dfrac{dx_1(t)}{dt} \le 0, \\[2mm] 0.5, & \dfrac{df_2(x_2(t))}{dt} - \dfrac{dx_1(t)}{dt} > 0, \end{cases} \qquad w_{21}\big(x_2(t)\big) = \begin{cases} 0.8, & \dfrac{df_1(x_1(t))}{dt} - \dfrac{dx_2(t)}{dt} \le 0, \\[2mm] 0.5, & \dfrac{df_1(x_1(t))}{dt} - \dfrac{dx_2(t)}{dt} > 0. \end{cases}$$

Simulation results are described in Figures 3 and 4 for u_1 = -2, u_2 = 4, where the trajectories of system (11) with Mexican-hat-type feedback function (8) and piecewise constant feedback function (9) under different initial values are depicted. According to Theorems 1 and 3 and Corollaries 1 and 3, the theoretical calculations are consistent with the experiments.
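The reported behavior of system (11) can be reproduced qualitatively with the following Python sketch; it uses explicit Euler integration and approximates the switching condition by finite differences from the previous step, so it is only a rough numerical illustration of Theorems 1 and 3 and Corollaries 1 and 3, not the exact Filippov dynamics.

```python
import numpy as np

# Rough Euler simulation of Example 1 (system (11)) with the Mexican-hat-type
# feedback function (8). The switching condition df_j/dt - dx_i/dt <= 0 is
# approximated with the derivatives computed at the previous step.

def mexican_hat(r):
    if r < -1:
        return -1.0
    if r <= 1:
        return r
    if r <= 3:
        return -r + 2.0
    return -1.0

def simulate11(x0, u=(-2.0, 4.0), h=1e-3, T=8.0):
    x = np.array(x0, dtype=float)
    dx_prev = np.zeros(2)
    df_prev = np.zeros(2)
    for _ in range(int(T / h)):
        # state-dependent weights: 0.8 if the switching expression is <= 0, else 0.5
        w12 = 0.8 if df_prev[1] - dx_prev[0] <= 0 else 0.5
        w21 = 0.8 if df_prev[0] - dx_prev[1] <= 0 else 0.5
        f = np.array([mexican_hat(x[0]), mexican_hat(x[1])])
        dx = np.array([-x[0] + w12 * f[1] + u[0],
                       -x[1] + w21 * f[0] + u[1]])
        x = x + h * dx
        df_prev = (np.array([mexican_hat(x[0]), mexican_hat(x[1])]) - f) / h
        dx_prev = dx
    return x

if __name__ == "__main__":
    for x0 in [(-3.0, -2.0), (0.0, 0.5), (2.0, 4.0)]:
        # expect x1(T) <= -1 and x2(T) >= 3 for u1 = -2, u2 = 4
        print(x0, "->", np.round(simulate11(x0), 3))
```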

Example 2 Consider a two-dimensional memristive neurodynamic system described by

$$\begin{aligned} \dot{x}_1(t) &= -x_1(t) + 0.6\sin\big(x_1(t)\big) f_2\big(x_2(t)\big) + u_1, \\ \dot{x}_2(t) &= -x_2(t) + 0.8\cos\big(x_2(t)\big) f_1\big(x_1(t)\big) + u_2. \end{aligned} \tag{12}$$

Simulation results are described in Figures 5 and 6 for u_1 = -2, u_2 = 4, where the trajectories of system (12) with Mexican-hat-type feedback function (8) and piecewise constant feedback function (9) under different initial values are shown. According to Theorem 6 and Corollary 6, the experiments match the theoretical results.
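A corresponding sketch for system (12), whose memristor-based weights 0.6 sin(x_1(t)) and 0.8 cos(x_2(t)) vary continuously with the state (case 2'), is even simpler since no switching rule is involved; the integrator and step size are again our own choices.

```python
import numpy as np

# Rough Euler simulation of Example 2 (system (12)) with the Mexican-hat-type
# feedback function (8). Here |0.6 sin(x1)| <= 0.6 and |0.8 cos(x2)| <= 0.8 play
# the role of the bounds A~_12 and A~_21 used in Section 3.

def mexican_hat(r):
    return -1.0 if r < -1 else (r if r <= 1 else (-r + 2.0 if r <= 3 else -1.0))

def simulate12(x0, u=(-2.0, 4.0), h=1e-3, T=8.0):
    x = np.array(x0, dtype=float)
    for _ in range(int(T / h)):
        dx1 = -x[0] + 0.6 * np.sin(x[0]) * mexican_hat(x[1]) + u[0]
        dx2 = -x[1] + 0.8 * np.cos(x[1]) * mexican_hat(x[0]) + u[1]
        x = x + h * np.array([dx1, dx2])
    return x

if __name__ == "__main__":
    for x0 in [(-4.0, 0.0), (1.0, 2.0), (4.0, 5.0)]:
        print(x0, "->", np.round(simulate12(x0), 3))  # expect x2(T) >= 3 (Theorem 6)
```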


Figure 3 Transient behaviors of system (11) with Mexican-hat-type feedback function (8) under different initial values.


Figure 4 Transient behaviors of system (11) with piecewise constant feedback function (9) under different initial values.

5 Concluding remarks

Rhythmicity represents one of the most striking manifestations of dynamic behavior in biological systems. Memristor-based neural networks have been shown to be capable of aiding the understanding of neural processes using memory devices. In this article, we give conditions under which a dynamic orbit of memristor-based wavelet neural networks is located in a designated region. The theoretical results are supplemented by simulation results in two illustrative examples.


Figure 5 Transient behaviors of system (12) with Mexican-hat-type feedback function (8) under different initial values.


Figure 6 Transient behaviors of system (12) with piecewise constant feedback function (9) under different initial values.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

AW carried out the main results of this article and drafted the manuscript. ZZ directed the study and helped with the inspection. JX proposed the understanding of memristive neural networks, which helped in improving the paper quality. All authors read and approved the final manuscript.

Author details

1College of Mathematics and Statistics, Hubei Normal University, Huangshi, 435002, China. 2Institute for Information and System Science, Xi'an Jiaotong University, Xi'an, 710049, China. 3School of Automation, Huazhong University of Science and Technology, Wuhan, 430074, China.

Acknowledgements

The work is supported by the Natural Science Foundation of China under Grant 61304057 and the 973 Program of China under Grant 2011CB710606. The work of AW was done at the School of Automation, Huazhong University of Science and Technology, Wuhan, China.

Received: 7 June 2013  Accepted: 1 August 2013  Published: 22 August 2013

References

1. Cantley, KD, Subramaniam, A, Stiegler, HJ, Chapman, RA, Vogel, EM: Hebbian learning in spiking neural networks with nanocrystalline silicon TFTs and memristive synapses. IEEE Trans. Nanotechnol. 10(5), 1066-1073 (2011)
2. Cantley, KD, Subramaniam, A, Stiegler, HJ, Chapman, RA, Vogel, EM: Neural learning circuits utilizing nano-crystalline silicon transistors and memristors. IEEE Trans. Neural Netw. Learn. Syst. 23(4), 565-573 (2012)
3. Itoh, M, Chua, LO: Memristor cellular automata and memristor discrete-time cellular neural networks. Int. J. Bifurc. Chaos 19(11), 3605-3656 (2009)
4. Kim, H, Sah, MP, Yang, CJ, Roska, T, Chua, LO: Neural synaptic weighting with a pulse-based memristor circuit. IEEE Trans. Circuits Syst. I, Regul. Pap. 59(1), 148-158 (2012)
5. Pershin, YV, Di Ventra, M: Experimental demonstration of associative memory with memristive neural networks. Neural Netw. 23(7), 881-886 (2010)
6. Wen, SP, Zeng, ZG: Dynamics analysis of a class of memristor-based recurrent networks with time-varying delays in the presence of strong external stimuli. Neural Process. Lett. 35(1), 47-59 (2012)
7. Wu, AL, Zeng, ZG: Exponential stabilization of memristive neural networks with time delays. IEEE Trans. Neural Netw. Learn. Syst. 23(12), 1919-1929 (2012)
8. Wu, AL, Zeng, ZG: Dynamic behaviors of memristor-based recurrent neural networks with time-varying delays. Neural Netw. 36, 1-10 (2012)
9. Wu, AL, Zeng, ZG: Anti-synchronization control of a class of memristive recurrent neural networks. Commun. Nonlinear Sci. Numer. Simul. 18(2), 373-385 (2013)
10. Huang, TW: Robust stability of delayed fuzzy Cohen-Grossberg neural networks. Comput. Math. Appl. 61(8), 2247-2250 (2011)
11. Huang, TW, Li, CD, Duan, SK, Starzyk, JA: Robust exponential stability of uncertain delayed neural networks with stochastic perturbation and impulse effects. IEEE Trans. Neural Netw. Learn. Syst. 23(6), 866-875 (2012)
12. Jiang, F, Shen, Y: Stability in the numerical simulation of stochastic delayed Hopfield neural networks. Neural Comput. Appl. 22(7-8), 1493-1498 (2013)
13. Jiang, F, Yang, H, Shen, Y: On the robustness of global exponential stability for hybrid neural networks with noise and delay perturbations. Neural Comput. Appl. (2013). doi:10.1007/s00521-013-1374-2
14. Shen, Y, Wang, J: Almost sure exponential stability of recurrent neural networks with Markovian switching. IEEE Trans. Neural Netw. 20(5), 840-855 (2009)
15. Shen, Y, Wang, J: Robustness analysis of global exponential stability of recurrent neural networks in the presence of time delays and random disturbances. IEEE Trans. Neural Netw. Learn. Syst. 23(1), 87-96 (2012)
16. Song, XG, Gao, HB, Ding, L, Liu, DY, Hao, MH: The globally asymptotic stability analysis for a class of recurrent neural networks with delays. Neural Comput. Appl. 22(3-4), 587-595 (2013)
17. Gao, HB, Song, XG, Ding, L, Liu, DY, Hao, MH: New conditions for global exponential stability of continuous-time neural networks with delays. Neural Comput. Appl. 22(1), 41-48 (2013)
18. Wang, LL, Chen, TP: Multistability of neural networks with Mexican-hat-type activation functions. IEEE Trans. Neural Netw. Learn. Syst. 23(11), 1816-1826 (2012)
19. Yu, W, Francisco, PC, Li, XO: Two-stage neural sliding mode control of magnetic levitation in minimal invasive surgery. Neural Comput. Appl. 20(8), 1141-1147 (2011)
20. Yu, W, Li, XO: Automated nonlinear system modeling with multiple fuzzy neural networks and kernel smoothing. Int. J. Neural Syst. 20(5), 429-435 (2010)
21. Zhang, HG, Liu, JH, Ma, DZ, Wang, ZS: Data-core-based fuzzy min-max neural network for pattern classification. IEEE Trans. Neural Netw. 22(12), 2339-2352 (2011)
22. Zhang, HG, Ma, TD, Huang, GB, Wang, ZL: Robust global exponential synchronization of uncertain chaotic delayed neural networks via dual-stage impulsive control. IEEE Trans. Syst. Man Cybern., Part B, Cybern. 40(3), 831-844 (2010)
23. Zhang, HG, Wang, YC: Stability analysis of Markovian jumping stochastic Cohen-Grossberg neural networks with mixed time delays. IEEE Trans. Neural Netw. 19(2), 366-370 (2008)
24. Bao, G, Zeng, ZG: Analysis and design of associative memories based on recurrent neural network with discontinuous activation functions. Neurocomputing 77(1), 101-107 (2012)
25. Huang, YJ, Zhang, HG, Wang, ZS: Multistability and multiperiodicity of delayed bidirectional associative memory neural networks with discontinuous activation functions. Appl. Math. Comput. 219(3), 899-910 (2012)

doi:10.1186/1687-1847-2013-258

Cite this article as: Wu et al.: Dynamic evolution evoked by external inputs in memristor-based wavelet neural networks with different memductance functions. Advances in Difference Equations 2013 2013:258.