Comput Math Organ Theory (2011) 17: 152-178 DOI 10.1007/s10588-011-9085-7

The influence of random interactions and decision heuristics on norm evolution in social networks

Declan Mungovan • Enda Howley • Jim Duggan

Published online: 3 March 2011 © Springer Science+Business Media, LLC 2011

Abstract In this paper we explore the effect that random social interactions have on the emergence and evolution of social norms in a simulated population of agents. In our model agents observe the behaviour of others and update their norms based on these observations. An agent's norm is influenced by both its own fixed social network and a second random network composed of a subset of the remaining population. Random interactions are based on a weighted selection algorithm that uses an individual's path distance on the network to determine their chance of meeting a stranger. This means that friends-of-friends are more likely to randomly interact with one another than agents with a higher degree of separation. We then contrast the cases where agents make highest-utility rational decisions about which norm to adopt versus using a Markov Decision Process that associates a weight with the best choice. Finally we examine the effect that these random interactions have on the evolution of a more complex social norm as it propagates throughout the population. We discover that increasing the frequency and weighting of random interactions results in higher levels of norm convergence, achieved in less time, when agents have the choice between two competing alternatives. This can be attributed to more information passing through the population, thereby allowing for quicker convergence. When the norm is allowed to evolve we observe both global consensus formation and group splintering depending on the cognitive agent model used.

Keywords Social networks · Norms · Agent based modeling · Random dynamic interactions

D. Mungovan (✉) • E. Howley • J. Duggan
I.T. Department, National University of Ireland, Galway, Ireland

1 Introduction

Social norms, or normative behaviours, are one mechanism that allows large groups of self-interested humans to cooperate and coordinate actions (Lopez y Lopez et al. 2006), thereby providing a solution to the problem of social order (Horne 2007). Norms can be defined as a set of conventions or behavioural expectations that people in a population abide by. Essentially norms inform an agent, or individual, on how to behave. Ignoring social norms can lead to negative repercussions for individuals, including being excluded from a group. Social norms present a balance between individual freedom on the one hand and the goals of the society on the other (Walker and Woolridge 1995). Indeed the agency versus structure debate has been identified as an important question in social science (Shilling 1999). It aims to understand the intrinsic motivations of how humans make decisions. Is it through the structural institutions we encounter or our own intrinsic internal decision making mechanisms? The theory of structuration proposed by Giddens (1984) reconciles both ideas: human actions are performed within social structures, including norms, but these social norms themselves are transient and prone to evolution. There are two types of social norm conventions: top-down and bottom-up. Top-down norms represent laws that are enforced on the population (Goldfarb and Henrekson 2003). Bottom-up conventions, such as shaking hands when introducing oneself, represent emergent behaviour from within the group. In this scenario agents, acting in their own self-interest, choose which action to take based upon their interactions with others in the population. This type of "conformist transmission" is a tendency to adopt the most popular behaviour in the group and aids in the convergence of social norms (Henrick and Boyd 2001). It can be seen as a form of herding (Barrat 2008) and is the type of social norm that we investigate in this paper.
Norms share many characteristics in common with epidemic diseases which spread both horizontally and obliquely (Klein 1999). Agents use locally available information to determine their selection of social norms. Social evolution in turn can be described as changes in the non-genetic information stored in societies (Ehrlich and Levin 2005). In other words, the characteristics of a norm that describe its behaviour can change over time.

The model we present in this paper investigates the effects of random social contacts on norm convergence and evolution. In the real world individuals are unlikely to change their immediate social network of acquaintances very much, but will have a number of once-off interactions with random members of the population. An individual will, generally speaking, have the same wife, boss and friends etc. from one day to the next. But it is the random meetings with the general public that we are interested in. For instance, we interact with complete strangers on buses, in shops and at parties. These interactions are outside of one's social network. The interactions do not necessarily need to be verbal or significant. Norms have the capacity to influence people's behaviour through observation alone. We recognise, however, that because our daily activities are tied to our social network, we are more likely to randomly meet some people over others. Individuals tend to have a high degree of regularity to their movements, with a significant probability of returning to a few highly frequented locations, interspersed with occasional irregular movements (González et al. 2008; Brockmann et al. 2006; Eagle and Pentland 2009). To account for this we bias random interactions based on the social distance that separates agents in the network. We

contrast two separate mechanisms of how agents make decisions. Firstly agents act in a perfectly rational manner based on the utility they perceive for each norm. Secondly we use a Markov Decision Process where an agent's choice is weighted toward making a particular choice given their observations on the network. We then expand the model from a simple binary decision between two separate norms to incorporate a continuum that describes the norm as a series of n bits. This examines how the characteristic of the norm evolves over time given that agents are randomly interacting with one another. Our goal is to discover at what point random interactions will influence the emergence of a global convention and how these random interactions affect the evolution of the norm. Specifically, we aim to:

1. Design an algorithm that selects a random individual based on their social distance in the network.

2. Test the effects of random interactions on both perfectly rational agents and agents that use a Markov Decision Process that selects between two competing norms.

3. Test the effect of random interactions on an evolving norm that can change over time.

The formal model we describe, and the decision making algorithms we use, have been abstracted from real human behaviour into a multi-agent based simulation. Social life has many more complications but it can be useful to consider simpler settings (Horne 2007). Agent Based Modelling (ABM)1 often has the function of identifying new questions as highlighted by Epstein (2008). One advantage of our reduction in sophistication or veridicality is that it allows for an increased population of agents. As Carley (2009) points out, models are often developed for the purpose of telling a story or making a point and are not always meant for developing policy or guiding decisions. In this spirit our work is not accompanied by a real world statistical comparison. Nevertheless, our approach offers novel insight into the process of norm convergence and evolution. The rest of the paper is structured as follows: Section 2 presents an introduction to previous work in the area of norm convergence, evolution and social networks. Section 3 gives a description of the formal model used to define the agent based simulator and an explanation of how the simulator was designed and implemented. Section 4 presents the experimental results. Finally, in Sect. 5 we outline our conclusions and possible future work.

2 Related research

We now explore the existing literature as it relates to our research. First we introduce the domain of theoretical normative systems. We then expand this to describe ABMs of simple norm convergence. This is followed by a review of cultural evolution and opinion dynamics in ABMs that have had an influence on our own model design. We follow this with an analysis of various social networks and their metrics. Networks have been shown to play an important role in dynamic social processes. We conclude

1 In this paper ABM stands for both Agent Based Modelling and Agent Based Model depending on the context of its use.

the section with a brief overview of dynamic networks and some insights into what we mean by randomness in ABMs.

2.1 Normative behaviour and social dynamics

Recent studies by Liefbroer et al. (2009) have demonstrated the importance of social norms in societies where the individualisation process is fairly advanced. They also observe the formation and emergence of new norms that replace older ones. Bikhchandani et al. (1992, 1998) describe a decision making model where individuals update their own belief system based on the observation of others. They note the formation of what they describe as an "information cascade" whereby a particular piece of information is adopted by everyone regardless of its intrinsic merit.

Agent-based Modelling has been used in recent years as a method of studying these social norms (Mukherjee et al. 2008; Lopez y Lopez et al. 2006; Conte et al. 1999; Villatoro et al. 2009). Conte et al. (1999) and Walker et al. (1995) describe a framework on integrating concepts of Multi Agent Systems with normative behaviour and how both disciplines interact. Using ABM Lopez-Pintado et al. (2008) describe social influence in the context of norms that make a binary decision.2 They describe an influence response function that assigns a weighted number to the alternatives. Others such as Bikhchandani et al. (1998, 1992), Centola et al. (2005) and Watts et al. (2007) have also taken this approach of defining the adoption or diffusion of norms within a population as a choice between two competing alternatives. In these models an agent observes the choices of others and is influenced by their decisions.

But what if we want to describe the norm as something more complex than a choice between two competing alternatives? Norms themselves can change due to alterations made by the participants. Individuals can modify different aspects of the norm to suit their own needs (Ostrom 2000). There are a number of models in the domains of opinion dynamics and the evolution of cultural features, closely related to norm convergence and evolution, that have a bearing on our research. Axelrod's (1997) model on the dissemination of culture on a lattice network investigates how cultural traits can be influenced by interacting agents. The state of an agent is defined by its cultural traits which take the form of F components and q different values. A cultural component is a unique cultural characteristic such as music preference which can have any of q different values. We can see then that the total spectrum of possible unique cultural states of an agent equals q^F. Agents will interact with one another if there is enough cultural overlap. Others such as Gonzalez-Avella et al. (2005), Kuperman (2006) and Centola et al. (2007) have extended this idea to incorporate other features. In these papers, as in the Axelrod (1997) model, agents interact with a probability based on their cultural overlap. The higher the overlap, the greater the chance of influence.

Next we describe a model of consensus formation which also influences our model design of norm evolution. Deffuant et al. (2001), Kozma et al. (2008) and McKeown et al. (2006) describe a consensus formation model where agents can take on an opinion in the range of between 0 and 1. The opinion of interacting agents is then

2When we say binary decision we mean the choice between two competing alternatives.

influenced by their neighbours on an Erdos and Renyi random graph. As in the models described above (Axelrod 1997; Gonzalez-Avella et al. 2005; Kuperman 2006), if the opinion between two agents is too far away then there is no influence. However the notion of a bounded confidence is described by a tolerance parameter, d, that can be tuned to lead to different states of polarisation. Once there is sufficient cultural overlap then the agent will always interact. In these cases a position halfway between both of their states is adopted.

In our models described in Sects. 3.4 and 3.5 we use this concept of similarity influence but relate it to social norms rather than the dissemination of culture. In other words, if an agent observes a norm that is too distant from their own then they will not be influenced by it. Conversely, if an agent observes a social norm that is close enough to the one they currently employ they will alter their own n-bit norm to become more closely related. A lot of social science research supports the idea of homophily when describing human cultural traits (McPherson et al. 2001; Ruef et al. 2003). Individuals will change their views, norms etc. to become more like those they interact with. We have opened this literature review by discussing general issues relating to normative behaviour, binary decision models, evolving cultural traits and dynamic opinion paradigms. These are concepts we shall use later on when describing our own framework. In the next section we will look at social networks and how they affect the dissemination of norms and information.

2.2 Social networks

A considerable amount of the literature has studied the effects of norm emergence in populations that are fully connected or interact in a random fashion (Shoham and Tennenholtz 1995; Walker and Woolridge 1995). The network that agents interact on, however, has been shown to play a significant role in the dynamics of diffusion (Kittock 1995; Villatoro et al. 2009; Savarimuthu et al. 2007; Rahmandad and Sterman 2008). Most of this work has dealt with static networks that are generated at initialisation time and do not change for the duration of the simulation. There have, however, been attempts to frame research within the bounds of dynamic networks (Savarimuthu et al. 2007). Networks can be envisioned as a series of nodes,3 N, that each have k links to other nodes. Depending on the network type, k > 0. We now summarise two different types of networks that are important to our research. The first of these is one of the most widely studied graphs in network theory (Erdos and Renyi 1961), the random network.

2.2.1 Random network

The Random Graph of Erdos and Renyi first gained prominence in the 1950s. However, random graphs differ from real world structures in that they lack clustering or transitivity (Newman 2002). A Random Network consists of N nodes randomly connected to one another. The probability of a connection between two nodes is

3 Keeping with the nomenclature used in the literature we use the terms Node and Vertex interchangeably.

Fig. 1 (a) Random network; (b) Small world network

fixed where: pc = k/(N - 1) and k = average number of links per person. Figure 1(a) shows an example of a simple random network where k = 1.5, N = 8 and pc = 0.21. Notice that it is possible for some nodes on the network to be completely disconnected from the rest, which is an unrealistic feature when modelling social networks. However the Small World (SW) Network has been shown to more closely represent human social networks that are found in the real world.
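As an illustration of this edge-creation rule, a random network can be generated as follows. This is our own sketch (the paper provides no code); the adjacency-list representation and function names are assumptions.

```python
import random

def random_network(n, k_avg, seed=None):
    """Generate an Erdos-Renyi style random graph as an adjacency list.

    Each possible edge is created independently with fixed probability
    p_c = k_avg / (n - 1), so the expected degree of a node is k_avg.
    """
    rng = random.Random(seed)
    p_c = k_avg / (n - 1)
    neighbours = {i: set() for i in range(n)}
    for i in range(n):
        for h in range(i + 1, n):
            if rng.random() < p_c:
                # Undirected edge: record it on both endpoints.
                neighbours[i].add(h)
                neighbours[h].add(i)
    return neighbours
```

Note that, as the text observes, nothing prevents a node from ending up with no neighbours at all.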

2.2.2 Small world social networks

The idea of Small World Networks first gained widespread popularity with Stanley Milgram's (1967) small-world study of large and sparse networks. Watts et al. (1998) later describe these networks as being formed by rewiring the edges of regular lattices with probability pw. SW Networks are highly clustered, yet have length scaling properties equivalent to the expectations of randomly assembled graphs (Watts 1999a). Notice in Fig. 1(b) that the link with the dashed line has been re-wired to another part of the network. This creates an instant shortcut to distant nodes. SW graphs span the gap between ordered lattices and random graphs. Note that when pw = 1, then all links are randomly assigned and the network becomes a random network. Lee et al. (2006) investigated the effect that changing the value of pw has on the emergence of a winner take all outcome in product adoption. They discovered that as pw is increased the chance of a winner take all outcome becomes more likely. This is because as the value of pw gets closer to one the network starts to become more like a random network. This prevents localised cliques of products from existing. An analysis of a number of real world human networks (Watts 1999b; Verspagen and Duysters 2004; Davis et al. 2003; Baum et al. 2003) have shown that they form SW networks.
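The rewiring procedure described by Watts et al. (1998) can be sketched as follows; this is an illustrative implementation of our own, starting from a regular ring lattice and rewiring each lattice edge with probability pw.

```python
import random

def small_world_network(n, k, p_w, seed=None):
    """Watts-Strogatz style small-world graph as an adjacency list.

    Start from a ring lattice where each node links to its k nearest
    neighbours (k even), then rewire each edge with probability p_w.
    """
    rng = random.Random(seed)
    neighbours = {i: set() for i in range(n)}
    # Regular ring lattice: connect each node to k/2 neighbours on each side.
    for i in range(n):
        for j in range(1, k // 2 + 1):
            h = (i + j) % n
            neighbours[i].add(h)
            neighbours[h].add(i)
    # Rewire each lattice edge (i, i+j) with probability p_w.
    for i in range(n):
        for j in range(1, k // 2 + 1):
            if rng.random() < p_w:
                old = (i + j) % n
                if old not in neighbours[i]:
                    continue
                candidates = [h for h in range(n)
                              if h != i and h not in neighbours[i]]
                if not candidates:
                    continue
                new = rng.choice(candidates)
                neighbours[i].discard(old)
                neighbours[old].discard(i)
                neighbours[i].add(new)
                neighbours[new].add(i)
    return neighbours
```

With p_w = 0 the graph remains a regular lattice; with p_w = 1 every edge is rewired and the graph approaches a random network, matching the behaviour described above.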

2.2.3 Clustering coefficient

The clustering coefficient of a vertex in a network is a measure of the proportion of neighbours of that vertex that are neighbours of one another (Watts 1999a). The more sparse and random a network is the lower the clustering. For node i the value of the clustering coefficient, ci, is calculated as described in (1) where si is the total number of edges between the neighbours of node i and ki is the total number of neighbours of node i.

c_i = 2 s_i / (k_i (k_i - 1))    (1)
Next we describe the path length which is another important network property used in analysing social networks.

2.2.4 Path length

The Maximum Path Length (MPL) is the maximum number of steps required to get to the furthest node, or nodes, on the network. Dijkstra's algorithm (Zhan and Noon 1998) uses a breadth first search to traverse the network and discover the shortest path to each agent. The average shortest path length is the average number of steps along the shortest paths for all possible pairs of network nodes. It is a measure of how quickly information etc. can diffuse through the network. Studies have shown that random networks and SW networks have a low average path length (Fronczak et al. 2002; Watts 1999a). So far we have discussed norms in agent based models along with network theory and some of the important metrics used to analyse them. The SW and Random networks of those studies have involved static configurations that don't change. We will now mention research that utilises dynamic networks where links or nodes are changing.
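On an unweighted network, the shortest path lengths underlying these metrics can be obtained with a breadth-first search; the sketch below is our own illustration.

```python
from collections import deque

def shortest_path_lengths(neighbours, source):
    """Breadth-first search returning the shortest path length (in hops)
    from `source` to every reachable node of an adjacency-list graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for h in neighbours[node]:
            if h not in dist:
                dist[h] = dist[node] + 1
                queue.append(h)
    return dist
```

The Maximum Path Length of a node is then simply the largest value in the returned mapping, and averaging over all pairs gives the average shortest path length.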

2.2.5 Dynamic networks

Fenner et al. (2007) describe a stochastic model for a social network. Individuals may join the network, existing actors may become inactive and, at a later stage, reactivate themselves. The model captures the dynamic nature of the network as it evolves over time. Actors attain new relations according to a preferential attachment rule that weights different agents according to their degree.4 Savarimuthu et al. (2007) generated a network of agents whose topology changes dynamically. Agents initially randomly collide on a 2D grid and then proceed to form social ties. Kozma et al. (2008) investigate the effects of consensus formation using a dynamical network topology that allows agents to rewire links. Jackson et al. (2007) present a network formation model where links are created through a combination of uniformly random encounters and local searching using friends of friends. In this model connections are created between strangers in addition to those between mutual acquaintances on the assumption that we are more likely to meet friends of friends. These papers on dynamic networks demonstrate a departure from a static structure with a fixed social network. They explore networks that exhibit an active form of agent interactions that change over time. Next we define what we mean by randomness when we speak of agents in a simulated computer environment.

2.3 Randomness in agent based modelling

Agent based modelling has a number of advantages over classical game theory approaches. Firstly, ABMs are capable of implementing Monte Carlo5 type stochastic iterations of a complex system. Izquierdo et al. (2009) highlight the fact

4The degree of an agent is the number of acquaintances it has in its social network.

5A Monte Carlo algorithm relies on repeated random sampling to compute its results.

that any computer model is in fact, due to its very nature, deterministic. However, we can use pseudo-random number generators to simulate random variables within the model, producing an artificial Monte Carlo process. This property allows us to simulate randomness that is present in real world systems. In this fashion, an agent based simulation that provides the same input variables but implements a level of randomness can produce significantly different outcomes. A key challenge of analysing an ABM is to identify an appropriate set of state variables. A Markov decision process is a technique in which decision making is partly stochastic and partly under the control of a decision maker (Papadimitriou and Tsitsiklis 1987; White and White 1989). A number of multi agent simulations have adopted this approach to modelling an agent's decision process (Boutilier 1999; Becker et al. 2003). This technique introduces randomness and uncertainty into an agent's decision. It diverges from the utilitarian approach of perfect agent rationality used in some of the models described above (Centola et al. 2005; Bikhchandani et al. 1998). We know from real life that human beings often behave in a manner that is anything but rational (Denes-Raj and Epstein 1987; Ellickson 1989).

2.4 Random social interactions

Granovetter's (1973) seminal paper on social networks defines three different kinds of ties that exist between individuals: strong, weak and absent. Strong ties are those that exist between close friends and display a high degree of clustering. Weak ties, by their definition, are links that exist between acquaintances and were shown to play an important role in the diffusion of influence and information. Absent ties exist between those without substantial significance, such as a "nodding" relationship or a vendor from whom one customarily buys a morning newspaper. It is the significance of these latter ties that we analyse in this paper. That is, ties that are not part of one's direct social network yet can exert an influence on individuals. In the example given by Granovetter (1973), absent ties can be as innocuous as a nod. Nodding at someone is an example of a social norm: an observable action that signifies a form of social behaviour. It is behaviours such as this that motivate our analysis of random interactions. Are we more likely to randomly meet some people over others in the population? Tracking US bank notes, Brockmann et al. (2006) discovered that the distribution of travelling distances decays as a power law. The probability of staying in a small spatially confined region was defined by algebraically long tails. González et al. (2008) studied the paths of 100,000 anonymised mobile phone users whose position was tracked for a six-month period. They discovered that human trajectories show a high degree of regularity. Each individual displayed a significant probability to return to a few highly frequented locations. They conclude that this propensity for regularity could impact all phenomena including agent based modelling. Eagle et al. (2009) have been able to predict the daily behaviour of volunteers based on the routine nature of human interactions.
These research findings confirm why we need to frame the question of random social interactions in the context of our existing social network. The regularity of our encounters results in an increased likelihood of randomly meeting some people over others.

In summary, we discussed differing models of norm dissemination and the associated discipline of cultural evolution. We also described how norm emergence is heavily influenced by the individuals that an agent meets in the network. Real world interactions are dynamic; this is a feature we aim to capture in this paper. We have also gained an understanding of some of the key metrics that are used for analysing network models. We have also seen that there is great regularity in our daily activities which implies that random interactions with members of the population will be influenced by our social network. We aim to use this understanding to formulate a model which we describe in the next section.

3 Model design

In the following section we first describe our weighted random interaction algorithm and then formally define the decision making rules agents use when updating their normative beliefs. Table 1 shows the set of experiments that were conducted. They are divided into simulations where agents act completely rationally and simulations where their decision is determined probabilistically using a Markov Decision Process. Furthermore we examine the cases where agents simply have to choose between one of two norms and where an n-bit norm propagates through the population.

3.1 Weighted random interactions

Agents interact with random members of the population using what we describe as a Weighted Random Interaction (WRI) algorithm based on their distance from others on the network. As discussed in the last section there is a great amount of regularity to our daily activities. To account for this we bias random interactions to make it more likely that agents interact with friends of friends than with a completely random member of the population. We use a modified version of Zipf's law, (2), to calculate a node's weight. The probability of agent i, with Maximum Path Distance (MPD) of M, randomly interacting with agent h having a path distance of d from i is equal to:

P_ih(d) = (1 / d^k) / Σ_{m=2}^{M} (1 / m^k),   d ≥ 2    (2)

where k is the exponent that characterises the distribution. For simplicity, in the experiments carried out in this paper we set k = 1. Note the condition that d ≥ 2, as a node interacts with the members of its own social network (d = 1) through its direct links rather than through random interactions. It can be seen from Fig. 2 that the distribution is normalised and the frequencies sum to 1 as

Table 1 Experiments conducted

              Rational agents   Markov agents

Binary norm   Case 1            Case 2
n-bit norm    Case 3            Case 4

expressed in (3).

Σ_{d=2}^{M} P_ih(d) = 1    (3)

Figure 2 shows the distance probability distribution of three different nodes with Maximum Path Distance (MPD) values ranging from 5 to 10. We can see from the diagram that agents at a lower path distance are more likely to interact than ones at a higher path distance.
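The weighted selection of Eq. (2) can be sketched as follows. This is an illustrative implementation of our own; in particular, the precomputed `agents_at_distance` structure (mapping each path distance d ≥ 2 to the agents found at that distance, e.g. from a breadth-first search) is an assumption, not part of the paper's formalism.

```python
import random

def wri_distance_probabilities(M, k=1.0):
    """Probability of a random interaction at each path distance
    d = 2..M, following the Zipf-style weighting P(d) proportional
    to 1 / d^k, normalised so the probabilities sum to 1."""
    weights = {d: 1.0 / d ** k for d in range(2, M + 1)}
    total = sum(weights.values())
    return {d: w / total for d, w in weights.items()}

def pick_random_partner(agents_at_distance, k=1.0, rng=random):
    """Pick a stranger, biased towards friends-of-friends: first draw
    a path distance with the Zipf weighting, then pick uniformly among
    the agents at that distance."""
    M = max(agents_at_distance)
    probs = wri_distance_probabilities(M, k)
    distances = sorted(agents_at_distance)
    d = rng.choices(distances, weights=[probs[x] for x in distances])[0]
    return rng.choice(agents_at_distance[d])
```

As in the figure, lower distances carry strictly higher probability, so friends-of-friends (d = 2) are the most likely random partners.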

3.2 Case 1—rational agents, binary norm

An agent perceives a utility from observing the norms that have been adopted by other individuals it encounters. Agent i interacts with k_i members of its social network and r_i randomly selected agents. Initially nodes are set to having adopted either social convention m or n. In all the models described here, from Sect. 3.2 to Sect. 3.5, nodes interact with a period drawn randomly from an exponential distribution with mean duration e_i = 3. This models the fact that agents do not all update their norm selection simultaneously. An agent, i, will choose to adopt norm m if the utility it observes from adopting this norm is greater than the utility it perceives from adopting convention n, as defined in (4).

u^m_{i,t} > u^n_{i,t}    (4)

The utility that agents perceive from each norm is defined in (5). This is divided into the utility communicated from an agent's direct neighbours, D^m_{i,(t-1)}, plus the perceived utility from the random interactions it makes, R^m_{i,(t-1)}:

u^m_{i,t} = α D^m_{i,(t-1)} + β R^m_{i,(t-1)}    (5)

where α is the weighting placed on an agent interacting with the members of its own social network and β is the weighting of interactions taking place with random members of the population. The higher the β value, the more importance agents place on random interactions. The direct network effects are defined in (6), where n is the total number of nodes on the network and σ^m_{h,(t-1)} = 1 if agent h has adopted social convention m (and 0 otherwise):

D^m_{i,(t-1)} = Σ_{h=1}^{n} s_ih σ^m_{h,(t-1)},  where s_ih = 1 if i is an acquaintance of h, and 0 otherwise    (6)

Similarly we define the random network effects in (7), where again σ^m_{h,(t-1)} = 1 if agent h has adopted social convention m:

R^m_{i,(t-1)} = Σ_{h=1}^{n} r_ih σ^m_{h,(t-1)},  where r_ih = 1 if i has a random interaction with h, and 0 otherwise    (7)
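The Case 1 decision rule can be sketched as follows. This is our own illustration: the counts of direct and random observations of each norm are assumed to have been gathered already, and the tie-breaking rule (keeping the current norm when utilities are equal) is an assumption the paper does not specify.

```python
def perceived_utility(direct_count, random_count, alpha, beta):
    """Utility of a norm: u = alpha * D + beta * R, where D and R count
    the direct neighbours and random contacts observed using that norm."""
    return alpha * direct_count + beta * random_count

def rational_choice(current, d_m, r_m, d_n, r_n, alpha, beta):
    """Adopt norm m iff its perceived utility strictly exceeds norm n's,
    and vice versa; keep the current norm on a tie (our assumption)."""
    u_m = perceived_utility(d_m, r_m, alpha, beta)
    u_n = perceived_utility(d_n, r_n, alpha, beta)
    if u_m > u_n:
        return "m"
    if u_n > u_m:
        return "n"
    return current
```

For example, with alpha = 1 and beta = 0.5, observing norm m at 3 neighbours and 2 random contacts gives u_m = 4.0, while norm n at 4 neighbours and 1 random contact gives u_n = 4.5, so a rational agent adopts n.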

3.3 Case 2—Markov decision process agents, binary norm

In this case agents calculate their perceived utility as in (5). But instead of choosing the norm with the highest utility they assign a weighting to each norm and select one using a Markov Decision Process. The more observations an agent has of a norm the greater the chance it will be adopted as expressed in (8).

P(i_m) = u^m_{i,t} / (u^m_{i,t} + u^n_{i,t})    (8)

where P(i_m) is the probability of agent i choosing norm m at time t, and u^m_{i,t} and u^n_{i,t} are the utilities that agent i perceives from norms m and n respectively. Case 1 assumed that all agents used perfect rationality when calculating the most efficient norm to adopt, whereas this decision making process models the uncertainty and irrationality that an individual agent may have when deciding which norm to pick.
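Equation (8) amounts to a utility-weighted coin flip; a minimal sketch of our own:

```python
import random

def markov_choice(u_m, u_n, rng=random):
    """Pick norm m with probability u_m / (u_m + u_n); otherwise pick
    norm n. The more observations of a norm, the more likely it is
    to be adopted, but the choice remains stochastic."""
    p_m = u_m / (u_m + u_n)
    return "m" if rng.random() < p_m else "n"
```

Unlike the rational rule of Case 1, an agent may occasionally adopt the norm with the lower perceived utility.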

3.4 Case 3—rational agents, n-bit norm

Next we define the norm as being a set of n binary bits that describe its characteristics. The total number of unique states that the norm can therefore have is 2^n. We assume that if the characteristics of a norm of node i are too different from those of node h then there will be no alteration to node i's norm. We calculate the level of similarity between the norms of i and h from (9) as:

σ_ih = (1/n) Σ_{p=1}^{n} κ^p_{ih},  where κ^p_{ih} = 1 if i and h share the same characteristic at bit position p, and 0 otherwise    (9)

If σ_ih > d then choose a random bit position, p, such that i_p ≠ h_p, and set i_p = h_p, where d is the bounded confidence tolerance parameter and i_p and h_p are the values of the bit at position p of the norm for agents i and h respectively. We can set d in the range [0 : 1]. The higher the value of d, the more similarity there needs to be between norms before they will have an effect on one another. For example, if agent i has a norm that is characterised by the 4-bit sequence 0000 and agent h has

Fig. 3 Norm influence on agent i by agent h

a norm defined by the sequence 0101, then the similarity between the two norms is σ_ih = 0.5, as shown in Fig. 3. If d = 0.25 then agent h's norm will influence agent i's and a single bit will be changed to bring agent i's norm closer to agent h's. At each decision time step agents interact with one member from either their social network or the random acquaintances they meet. As in Cases 1 and 2 above, agents interact with a period drawn randomly from an exponential distribution with mean duration e_i = 3. An agent's choice of who to interact with is a function of the strength and number of both social and random contacts. Therefore, the probability that an agent will interact with a member of its own social network, P(i_s), is defined in (10).

$$P(i_s) = \frac{\alpha k_i}{\alpha k_i + \beta r_i} \qquad (10)$$

where ki is the size of agent i's social network and ri is the number of random contacts that agent i has. Similarly, the probability that agent i will interact with a random agent not in its social network is P(ir) = 1 − P(is). We can see that the more random contacts an agent has, the greater its chance of interacting with, and being influenced by, these random contacts. Note that the randomness in the model stems from which bit position to change rather than whether to change in the first place. In this regard an agent takes a utilitarian, completely rational approach each time it decides whether to change its norm.
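The Case 3 mechanics above can be sketched in a few lines: the similarity score of (9), the single-bit update rule, and the partner-choice probability of (10). This is a minimal illustration, not the authors' code; norms are represented as bit strings, the function names are ours, and the αki/(αki + βri) form of (10) is as reconstructed above.

```python
import random

def similarity(a, b):
    """S_ih from (9): fraction of bit positions where norms a and b agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def influence(i_norm, h_norm, d):
    """Case 3 update: if S_ih > d, copy one randomly chosen differing bit
    of h's norm into i's norm; otherwise leave i's norm unchanged."""
    if similarity(i_norm, h_norm) > d:
        diff = [p for p in range(len(i_norm)) if i_norm[p] != h_norm[p]]
        if diff:
            p = random.choice(diff)
            i_norm = i_norm[:p] + h_norm[p] + i_norm[p + 1:]
    return i_norm

def p_social(alpha, k_i, beta, r_i):
    """P(i_s) from (10): chance of picking a social-network partner
    rather than one of the r_i random contacts."""
    return (alpha * k_i) / (alpha * k_i + beta * r_i)
```

With the 4-bit example from Fig. 3, similarity("0000", "0101") gives 0.5, so with d = 0.25 exactly one differing bit is copied; and with α = 1, β = 1 and ki = ri = 10, an agent is equally likely to pick a friend or a stranger.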

3.5 Case 4—Markov decision process agents, n-bit norm

In Case 4 agents decide whether to alter their n-bit binary norm probabilistically, based on the similarity of the norms, as described by Axelrod (1997), Kuperman (2006) and Centola et al. (2007). At each decision time step an agent chooses a member of either its social network or its random contacts to interact with, as defined in (10). From (9) we can see that S_ih specifies the level of similarity between the two norms. Agent i will update its own norm with probability P(i) = S_ih. The more similar two agents' norms are, the more likely they are to influence one another. We can also see from this approach that an agent may be influenced to alter its norm even if it observes a norm that is very different from its own.
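A sketch of this update follows. The probability gate P(i) = S_ih is from the text; re-using Case 3's copy-one-bit step is our assumption about what "update its own norm" means here.

```python
import random

def similarity(a, b):
    """S_ih: fraction of bit positions where the two norms agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def case4_update(i_norm, h_norm, rng=random.random):
    """Case 4: adopt one differing bit of h's norm with probability
    P(i) = S_ih; otherwise keep i's norm as it is."""
    if rng() < similarity(i_norm, h_norm):
        diff = [p for p in range(len(i_norm)) if i_norm[p] != h_norm[p]]
        if diff:
            p = random.choice(diff)
            i_norm = i_norm[:p] + h_norm[p] + i_norm[p + 1:]
    return i_norm
```

Note that completely opposite norms such as 0101 and 1010 give S_ih = 0, so they can never influence one another under this rule.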

4 Results

In this section we outline the results obtained from the models described in Sect. 3. In Sect. 4.1 we examine the effects on norm convergence of simply changing the rewiring probability of a SW network. We define convergence to have occurred when all agents have adopted the same norm. For the remaining experiments we generate a fixed SW network with a population of 1000 agents, an average degree of ki = 10 and pw = 0.05. We then add random interactions to the simulation using the four different scenarios described in Table 1. All results shown are the average of 100 simulations unless otherwise stated. A new SW network was generated for each simulation.
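The fixed network underlying these experiments can be reproduced with a standard Watts–Strogatz construction. The sketch below uses the usual single-endpoint rewiring convention; the authors' exact generator may differ in such details.

```python
import random

def watts_strogatz(n, k, pw, rng):
    """Ring lattice of n nodes, each tied to its k nearest neighbours,
    with each lattice edge rewired with probability pw."""
    adj = {i: set() for i in range(n)}
    for i in range(n):                      # build the regular ring lattice
        for j in range(1, k // 2 + 1):
            a, b = i, (i + j) % n
            adj[a].add(b)
            adj[b].add(a)
    for j in range(1, k // 2 + 1):          # rewire edge (i, i+j) w.p. pw
        for i in range(n):
            b = (i + j) % n
            if rng.random() < pw:
                new = rng.randrange(n)
                while new == i or new in adj[i]:
                    new = rng.randrange(n)  # avoid self-loops and duplicates
                adj[i].discard(b)
                adj[b].discard(i)
                adj[i].add(new)
                adj[new].add(i)
    return adj

# the experimental setup: watts_strogatz(1000, 10, 0.05, random.Random(1))
```

Rewiring moves edges but never adds or removes them, so the average degree stays at k regardless of pw.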

4.1 Varying rewiring probability

Figure 4 shows the effect on norm convergence when the rewiring probability is changed. We observe that the probability of all agents converging on a common norm increases as pw increases. This conforms to the findings of Lee et al. (2006) in the domain of product adoption mentioned earlier. While increasing pw results in increased norm convergence, it reduces the level of clustering in the network. Real-world human networks have high levels of clustering, so increasing pw is unrealistic. In this section we have demonstrated:

• Norm convergence won't occur on small world networks with low pw.

• Exceeding a threshold level of pw results in convergence in all cases.

• Increasing pw reduces the level of clustering on the network.

In the next experiments we maintain a core, highly clustered, small world network but introduce ad hoc random interactions that the agents have with others in the population.

4.2 Random interactions

In the following experiments, Sects. 4.2.1-4.2.4, a SW network is created with a rewiring probability of pw = 0.05 and α = 1, as shown in Table 2. As we have seen in Fig. 4, norm convergence will not happen when pw is at this level.

Initially each agent on the network maps its social distance from every other agent. We used Dijkstra's algorithm (Johnson 1973) to calculate the MPL for each node as described in Sect. 3.1. Every time an agent interacts it generates a new set of ad hoc random interactions based on the WRI algorithm described and discards the old ones.
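On an unweighted network, breadth-first search yields the same distances as Dijkstra's algorithm, so the distance map can be computed as below. The stranger-sampling step is then sketched with an inverse-distance weighting; this weighting is our illustrative assumption, standing in for the paper's actual WRI weighting, which favours friends-of-friends in the same spirit.

```python
import random
from collections import deque

def bfs_distances(adj, src):
    """Social (shortest-path) distance from src to every reachable node."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def sample_strangers(adj, i, r_i, rng=None):
    """Draw r_i ad hoc contacts for agent i, weighted toward low social
    distance (weight 1/distance; direct friends, distance 1, are excluded)."""
    rng = rng or random.Random()
    dist = bfs_distances(adj, i)
    pool = [v for v, d in dist.items() if d >= 2]   # strangers only
    weights = [1.0 / dist[v] for v in pool]
    return rng.choices(pool, weights=weights, k=r_i)
```

Because the weights decay with distance, friends-of-friends (distance 2) dominate the sample, matching the intent described in the model.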

4.2.1 Case 1

In this model agents act in a perfectly rational fashion and choose between two competing norms as outlined in Sect. 3.2. In Fig. 5 we vary both the value agents place on random interactions, β, and the number of random interactions that they have, ri. The number of random interactions starts at 0 and is increased by a

Table 2 Simulation experiment variables

N      α     pw     ki    λi    Number of norms
1000   1.0   0.05   10    3     2

Fig. 4 Probability of norm convergence and clustering coefficient

value of 2 until it reaches 10. The strength of random interactions, β, is increased from 0 to 1. When β = 1 and ri = 10 an agent has the same number of random interactions as members of its social network and places the same weight on these random interactions. β is therefore a measure of how much importance agents place on encounters with individuals who are not in their social network. A low β could represent an agent passively observing a norm, whereas a high value could represent an active conversation.

Figure 5 shows a snapshot of the level of norm convergence at a series of different time steps. In Fig. 5(a), when t = 100, the level of norm convergence is high when both β and the number of random interactions are high. Norm convergence fails to occur when both the strength and quantity of random interactions are too low. Indeed, when agents are having up to 8 random interactions but those occurrences only carry a weight of β = 0.1, norm emergence will not occur. Increasing the number of random interactions or increasing their strength results in norm emergence. From Fig. 5 we can see that if β > 0.5 and the number of random interactions ri > 6 then norm convergence is guaranteed. If agents have the same number of random interactions as members of their social network, i.e. ki = ri = 10, then β only needs to be 0.2 to guarantee norm convergence. Interestingly, allowing agents to interact beyond ~300 time steps has little impact on the level of norm convergence. This experiment shows that random interactions with agents outside a node's social network play an important role in norm convergence.

Figure 5 showed the end result of an average of 100 simulations. Next, in Fig. 6, we look at a sample run with varying levels of β and ri. We can see from Fig. 6(a) that when agents interact only with their own social network they quickly settle into two large groups of norms, each comprising approximately half the population. Increasing both the level and strength of random interactions results in a larger portion of the population adopting the same norm, as in Fig. 6(b), but the random interactions are not yet enough to overcome the local norm biases. Further increasing the strength and number of random interactions results in norm convergence in quicker times, as Figs. 6(c) and 6(d) show.

Fig. 5 Level of norm convergence over time (t = 100, t = 200, t = 500)

Fig. 6 Sample run over time, agents acting rationally: (a) β = 0.0, random degree = 0; (b) β = 0.2, random degree = 2; (c) β = 0.4, random degree = 6; (d) β = 1.0, random degree = 10

Fig. 7 Level of norm convergence over time varying random degree

In Fig. 7 we set β = 0.4 and increment the number of random interactions. We can see that there is no norm convergence when the number of random interactions is low. Once members of the population have enough random interactions there is a steady increase in the number of simulations resulting in convergence. The graph appears terraced because the random degree is increased in steps of 2.

In Fig. 8 we set the level of random interactions to 4 and incremented β. We can see that there are several jumps in norm convergence as β increases. Specifically, when β < 0.2 none of the simulations converge to a common norm, whereas when β > 0.5 all of them do. We can also see that when 0.2 < β < 0.3 some norm emergence does occur, but at a much slower rate.

In all the simulations carried out in this section we can see that unless the number and strength of random interactions are sufficient the population fails to converge to a single norm. This is because agents create local groups of reinforcing norms; it takes random encounters from outside these groups to break their local biases. In summary, from the Case 1 experiments outlined in this section we find that:

• Norm convergence won't occur if random interactions are insufficient.

• An increased number of random interactions results in convergence at a faster rate.

• When agents act with perfect rationality then equilibrium states can exist with multiple norms present.

4.2.2 Case 2

Next, agents decide which of two competing norms to adopt based on a Markov Decision Process as defined in Sect. 3.3. In this case the agents converge on a single norm in every simulation. This is because the stochastic behaviour of the agents results in local biases being broken within the population. Figure 9 shows a sample of one run for varying levels of β and ri. The volatile nature of the norms is clear from this graph: when agents appear to be converging on a global consensus there can be a sudden shift in the opposite direction. These shifts are due to the sometimes irrational decision making that we have now introduced into agent behaviour. Figure 10 shows the level of norm convergence if agents were blindly following a random walk; in contrast, Fig. 9 displays a general trend toward norm convergence.

Figure 9 clearly shows the potential of a norm's popularity to change over time if we assume that humans don't behave as perfectly rational utilitarian decision makers. Figure 11 shows the average amount of time it takes agents to reach a consensus on which norm to adopt. The rough surface of the graph highlights how randomness in the system affects the outcome. We can clearly see, however, a general decline in the time required for norm convergence on the network as β and ri are increased.

Fig. 9 Sample run over time, agents acting probabilistically: (a) β = 0.0, random degree = 0; (b) β = 0.2, random degree = 2; (c) β = 0.4, random degree = 6; (d) β = 1.0, random degree = 10

Fig. 10 Norm levels under random walk

The greatest reduction is when β and ri are both small, which suggests that there are diminishing returns on the network when these variables are continually incremented.

This section has highlighted a number of aspects of norm convergence, specifically we find that:

• Agents behaving with non-perfect rationality result in norm convergence in every simulation.

Fig. 11 Time to reach norm convergence

• Increasing the number of random interactions that agents have results in a reduction in the time required for norm convergence.

• This reduction in time is more prominent when β and ri are small, with diminishing returns thereafter.

In the next section we look at the effect of norm evolution when agents are interacting with random members of the population.

4.2.3 Case 3

Next we depart from the simple binary choice model. If agents observe a norm that is close enough to their own they alter their own n-bit norm to become closer to the one they observe, as described in Sect. 3.4. Once the observed norm's similarity exceeds a certain threshold an agent will be influenced by its observation. In both Case 3 and Case 4 below we set n = 4; there can therefore be 2^4 = 16 unique representations of the norm. In many of the papers discussed in Sect. 2.1 on cultural evolution the value of n is altered; as we are primarily concerned with the effect of random interactions here, we leave this for future work. As described in Sect. 3.4, the tolerance parameter d defines how similar norms must be in order to influence one another. Figure 12 shows a sample of four runs with increasing levels of both β and ri when the tolerance parameter is set to d = 0.25. Each line on the graphs represents the level of a single n-bit norm. In this case agent i only needs to share a single bit in common with agent h to be influenced by it. We find that when d is this low agents converge to a single norm in every simulation. The cognitive model we have constructed implies that agents are easily influenced by others in the population even if the norm they observe has more differences than similarities. Notice the pattern that emerges from each of the four graphs displayed in Fig. 12: in each case two very similar norms spread throughout the population and then diverge. In Fig. 12(b) we can see that at around time t = 5500 two norms start to be generally adopted by the population. These two norms, represented by the bit strings 0001 and 1001, differ by only one bit. Once the population has decided that this is the general form the norm will take, all other versions of the norm die out. Then, finally, at around time t = 7000 the population diverges and settles on a final norm. In Fig. 12(d) the two norms 0010 and 1010 dominate much of the simulation.

Fig. 12 Sample run over time, agents acting rationally, d = 0.25

Fig. 13 Time to reach norm convergence, d = 0.25


Once other representations of the norm have been eliminated then the population settles on a final social norm.

Looking at the amount of time required for agents to converge on a norm, we see in Fig. 13 that there is an initial reduction in convergence time as the values of β and ri are increased, but this levels off once the values are incremented further. As agents only interact with one member of either their social network or their random contacts, an increase in ri, while increasing the likelihood of interacting with a random contact, only changes the

Fig. 14 Sample run over time, agents acting rationally, d = 0.75

spread of possible random contacts. And since random contacts are weighted towards agents with lower social distance on the network, an increase in ri increases the chance of an agent interacting with an individual with very high social distance.

Next we increased the tolerance parameter to d = 0.75. In this case agents need to share three out of four bits of their norm in order to be influenced by their contacts. We can see from Fig. 14 that a form of polarisation occurs whereby agents fall into segregated groups of norms that are not influenced by others they meet. As agents in this scenario require a large amount of similarity before they will be influenced by others, they are unwilling to change their norm. Some of the groups can be quite large and represent a large fraction of the total population, as in Fig. 14(a). Other times the population splinters into a series of smaller norms all co-existing on the network. We discovered that as β and ri are increased agents settle on a final norm choice faster. Next we look at the effect that changing β and ri had on the final size of norms in the population. To that end we measured the size of the largest group in the population once the agents had all settled on a specific norm. Figure 15 shows the size of the largest component on the network. Notice how there is a slight increase in the size of the largest component when we initially increase β and ri. The rough surface of Fig. 15 demonstrates that even though these results are the averages of 100 simulations there is still much variability in the final outcome.
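The group-size measurement can be sketched as a one-liner over the agents' final norms. This is a hedged illustration in which "largest component" is read as the largest set of agents sharing an identical norm.

```python
from collections import Counter

def largest_norm_group(norms):
    """Size of the largest set of agents holding an identical norm,
    given a mapping of agent id -> n-bit norm string."""
    return max(Counter(norms.values()).values())
```

For example, if two of three agents hold norm 0010 and one holds 1010, the largest group has size 2.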

These results indicate that increasing β and ri has a less pronounced effect on the development and dissipation of the norm when the norms themselves are

Fig. 15 Size of largest component when d = 0.75

considered transient constructs that are capable of changing under constrained agent interactions. In this section we have found that:

• Agents will converge to a common norm provided the tolerance parameter is low enough.

• If the tolerance parameter is too high then polarisation into different norm subgroups will occur.

• Increased levels of random interactions fail to have a significant effect on the size of these subgroups.

4.2.4 Case 4

The following section shows the effect on norm evolution when the propensity of an agent to alter its norm is proportional to the similarity of the norms. Unlike Case 3 above, agents in this model may not act in the same fashion each time given the same information. This adds an element of cognitive agent randomness into the decision making process (see footnote 6), as in Case 2 above. We can see from Fig. 16 that the general behaviour of agents in the population is similar to Case 3 above when d = 0.25: two similar versions of the norm rise above the rest and then finally diverge. This indicates that the population decides on the general form of the norm before finally converging on the exact 4-bit characteristic adopted by the whole population.

As every simulation results in convergence to a single norm, we are interested in how long it takes the population to converge when β and ri are increased. Figure 17 shows the fraction of simulations that have converged to a single norm at different time steps. Figure 17(a) shows that there is initially an increase in the level of norm convergence as β and ri are increased. Beyond this the increase is less prominent, and given enough time all of the simulations converge to a single norm, as in Figs. 17(b) and 17(c).

In this section, when agents act in a non-rational manner, our experiments indicate that:

6 If two norms are completely opposite to one another, such as 0101 and 1010, then their possibility of affecting one another is zero, as S_ih = 0.

Fig. 16 Sample run over time, agents acting probabilistically

• Random interactions have less influence on norm evolution than on binary norm dissemination.

• Agents will modify their norm to suit others rather than completely abandoning it as in the binary case.

• Increasing the number of random interactions doesn't significantly affect the speed at which a consensus is formed.

Section 4 has shown us the importance of clearly defining both the cognitive framework of the agents and our definition of what a norm is and how it spreads. In the first scenario agents choose between two competing binary alternatives; we observed a clear increase in norm convergence, in a quicker time, when the number of random interactions is increased. When the norm can change over time based on agents' influence we don't see the same dramatic convergence effect from random interactions. Rather, the propensity an agent has to be influenced by its interactions plays an important role.

5 Conclusions

The aim of this paper was to construct a more realistic network of agent interactions that can generate insight into the emergence of norms in society. We defined an algorithm that uses a node's social distance on the network to calculate its chance of

Fig. 17 Level of norm convergence over time. Agents acting probabilistically

interacting with a random member of the population. We believe that this method of weighting random ad hoc interactions based on an agent's social distance on the network represents a novel way of modelling real human interactions. We then investigated norm dissemination in populations of agents interacting on small world networks with random contacts. We separated our experimentation into categories that defined both the cognitive behaviour of the agents and the characteristics of the norm under investigation. We demonstrated the importance of random interactions in populations where agents had to choose between two competing norms. If agents behaved in a perfectly utilitarian manner then random interactions were vital in breaking local biases and allowing the population to converge to a single norm. When an agent's cognitive behaviour was defined as a Markov Decision process, random interactions resulted in a reduction in the amount of time required for norm convergence. When we defined the norm as an n-bit sequence that characterised its behaviour, we discovered that increased random interactions had a less pronounced effect on the population. This is because increased random interactions resulted in the norm itself changing over time, thus negating some of the benefits of accessing agents outside one's own social network. Increasing the amount of similarity between norms required for influence resulted in a splintering effect where different groups formed, each adopting a separate norm.

This research has highlighted the importance of clearly defining both the cognitive decision making process of agents and the norm representation under investigation. Our simulations produced a very different set of outcomes under alteration of these conditions.

There are several areas of this research that will be explored in the future. As we were mainly focusing on the effect of random interactions we only tested two values of the tolerance parameter d. We also hope to run additional sensitivity analysis on other variables involved in the simulation. We hope to expand this work by giving agents a memory of previous interactions: how would this affect the dissemination of norms? We could also address the issue of norm violation and punishment, which we hope to pursue going forward.


References

Axelrod R (1997) The dissemination of culture: a model with local convergence and global polarization. J Confl Resolut 41(2):203-226
Kozma B, Barrat A (2008) Consensus formation on adaptive networks. Phys Rev E 77(1):016102
Baum JAC, Shipilov AV, Rowley TJ (2003) Where do small worlds come from? Ind Corp Change 12(4):697-725
Becker R, Zilberstein S, Lesser V, Goldman CV (2003) Transition-independent decentralized Markov decision processes. In: AAMAS '03: proceedings of the second international joint conference on autonomous agents and multiagent systems. ACM, New York, pp 41-48
Bikhchandani S, Hirshleifer D, Welch I (1992) A theory of fads, fashion, custom, and cultural change as informational cascades. J Polit Econ 100(5):992-1026
Bikhchandani S, Hirshleifer D, Welch I (1998) Learning from the behavior of others: conformity, fads, and informational cascades. J Econ Perspect 12(3):151-170
Boutilier C (1999) Sequential optimality and coordination in multiagent systems. In: IJCAI '99: proceedings of the sixteenth international joint conference on artificial intelligence. Morgan Kaufmann, San Francisco, pp 478-485
Brockmann D, Hufnagel L, Geisel T (2006) The scaling laws of human travel. Nature 439(7075):462-465
Carley KM (2009) Computational modeling for reasoning about the social behavior of humans. Comput Math Organ Theory 15(1):47-59
Centola D, Willer R, Macy M (2005) The emperor's dilemma: a computational model of self-enforcing norms. Am J Sociol 110(4):1009-1040
Centola D, González-Avella JC, Eguíluz VM, San Miguel M (2007) Homophily, cultural drift, and the co-evolution of cultural groups. J Confl Resolut 51(6):905-929
Conte R, Falcone R, Sartor G (1999) Introduction: agents and norms: how to fill the gap? Artif Intell Law 7(1):1-15
Davis GF, Yoo M, Baker WE (2003) The small world of the American corporate elite, 1982-2001. Strateg Organ 1(3):301-326
Deffuant G, Neau D, Amblard F, Weisbuch G (2001) Mixing beliefs among interacting agents. Adv Complex Syst 3:87-98
Denes-Raj V, Epstein S (1994) Conflict between intuitive and rational processing: when people behave against their better judgment. J Pers Soc Psychol 66(5):819-829
Eagle N, Pentland A (2009) Eigenbehaviors: identifying structure in routine. Behav Ecol Sociobiol 63:1057-1066. doi:10.1007/s00265-009-0739-0
Ehrlich PR, Levin SA (2005) The evolution of norms. PLoS Biol 3(6):e194
Ellickson R (1989) Bringing culture and human frailty to rational actors: a critique of classical law and economics. Chi-Kent Law Rev 65:23
Epstein JM (2008) Why model? J Artif Soc Soc Simul 11(4):12
Erdős P, Rényi A (1961) On the evolution of random graphs. Publ Math Inst Hung Acad Sci 5:17-61
Fenner T, Levene M, Loizou G, Roussos G (2007) A stochastic evolutionary growth model for social networks. Comput Netw 51(16):4586-4595
Fronczak A, Fronczak P, Hołyst JA (2002) Average path length in random networks
Giddens A (1984) The constitution of society: outline of the theory of structuration. Polity, Cambridge
Goldfarb B, Henrekson M (2003) Bottom-up versus top-down policies towards the commercialization of university intellectual property. Res Policy 32(4):639-658
González MC, Hidalgo CA, Barabási A-L (2008) Understanding individual human mobility patterns. Nature 453(7196):779-782
González-Avella JC, Cosenza MG, Tucci K (2005) Nonequilibrium transition induced by mass media in a model for social influence
Granovetter MS (1973) The strength of weak ties. Am J Sociol 78(6):1360-1380
Henrich J, Boyd R (2001) Why people punish defectors: weak conformist transmission can stabilize costly enforcement of norms in cooperative dilemmas. J Theor Biol 208(1):79-89
Horne C (2007) Explaining norm enforcement. Ration Soc 19(2):139-170
Izquierdo LR, Izquierdo SS, Galán JM, Santos JI (2009) Techniques to understand computer simulations: Markov chain analysis. J Artif Soc Soc Simul 12(1):6
Jackson MO, Rogers BW (2007) Meeting strangers and friends of friends: how random are social networks? Am Econ Rev 97(3):890-915
Johnson DB (1973) A note on Dijkstra's shortest path algorithm. J ACM 20(3):385-388
Kittock J (1995) Emergent conventions and the structure of multi-agent systems. In: Lectures in complex systems: the proceedings of the 1993 complex systems summer school. Santa Fe Institute studies in the sciences of complexity, vol VI. Addison-Wesley, Reading, pp 507-521
Klein RG (1999) The human career: human biological and cultural origins. University of Chicago Press, Chicago
Kuperman MN (2006) Cultural propagation on social networks. Phys Rev E 73(4):046139
Lee E, Lee J, Lee J (2006) Reconsideration of the winner-take-all hypothesis: complex networks and local bias. Manag Sci 52(12):1838-1848
Liefbroer AC, Billari FC (2009) Bringing norms back in: a theoretical and empirical discussion of their importance for understanding demographic behaviour. Popul Space Place 16:287-305
López-Pintado D, Watts DJ (2008) Social influence, binary decisions and collective dynamics. Ration Soc 20(4):399-443
López y López F, Luck M, d'Inverno M (2006) A normative framework for agent-based systems. Comput Math Organ Theory 12(2):227-250
Mckeown G, Sheehy N (2006) Mass media and polarisation processes in the bounded confidence model of opinion dynamics. J Artif Soc Soc Simul 9
McPherson M, Smith-Lovin L, Cook J (2001) Birds of a feather: homophily in social networks. Annu Rev Sociol 27:415-444
Milgram S (1967) The small world. Psychol Today 2:60-67
Mukherjee P, Sen S, Airiau S (2008) Norm emergence under constrained interactions in diverse societies. In: AAMAS '08: proceedings of the 7th international joint conference on autonomous agents and multiagent systems. International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, pp 779-786
Newman MEJ (2002) Random graphs as models of networks
Ostrom E (2000) Collective action and the evolution of social norms. J Econ Perspect 14(3):137-158
Papadimitriou CH, Tsitsiklis JN (1987) The complexity of Markov decision processes. Math Oper Res 12(3):441-450
Rahmandad H, Sterman J (2008) Heterogeneity and network structure in the dynamics of diffusion: comparing agent-based and differential equation models. Manag Sci 54(5):998-1014
Ruef M, Aldrich HE, Carter NM (2003) The structure of founding teams: homophily, strong ties, and isolation among US entrepreneurs. Am Sociol Rev 68(2):195-222
Savarimuthu BTR, Cranefield S, Purvis M, Purvis M (2007) Norm emergence in agent societies formed by dynamically changing networks. In: IAT '07: proceedings of the 2007 IEEE/WIC/ACM international conference on intelligent agent technology. IEEE Computer Society, Washington, pp 464-470
Shilling C (1999) Towards an embodied understanding of the structure/agency relationship. Br J Sociol 50(4):543-562
Shoham Y, Tennenholtz M (1995) On social laws for artificial agent societies: off-line design. Artif Intell 73:231-252
Verspagen B, Duysters G (2004) The small worlds of strategic technology alliances. Technovation 24(7):563-571
Villatoro D, Malone N, Sen S (2009) Effects of interaction history and network topology on rate of convention emergence. In: Proceedings of the 3rd international workshop on emergent intelligence on networked agents
Walker A, Wooldridge M (1995) Understanding the emergence of conventions in multi-agent systems. In: Proceedings of the first international conference on multi-agent systems (ICMAS '95), vol 1, pp 384-389
Watts D (1999a) Small worlds: the dynamics of networks between order and randomness. Princeton University Press, Princeton
Watts DJ (1999b) Networks, dynamics, and the small-world phenomenon. Am J Sociol 105(2):493-527
Watts DJ, Dodds PS (2007) Influentials, networks, and public opinion formation. J Consum Res 34(4):441-458
Watts DJ, Strogatz SH (1998) Collective dynamics of 'small-world' networks. Nature 393:440-442
White CC, White DJ (1989) Markov decision processes. Eur J Oper Res 39(1):1-16
Zhan FB, Noon CE (1998) Shortest path algorithms: an evaluation using real road networks. Transp Sci 32(1):65-73

Declan Mungovan is a Ph.D. researcher at the College of Engineering and Informatics, National University of Ireland, Galway. He obtained a Bachelor of Engineering (Electronic and Computer) in 2003. He worked for several years in the R&D department of Ericsson and other companies before returning to academic research in 2008. Since then he has published a number of papers relating to network theory and normative behaviour. Currently, he is a member of the System Dynamics Research Group at NUI, Galway. The group's research focuses on computational approaches for modelling complex social systems.

Enda Howley received a first class honours degree in Information Technology from the National University of Ireland, Galway in 2004. Subsequently, in 2004 he was awarded the prestigious Embark Research Scholarship from the Irish Research Council to undertake his Ph.D. studies. He completed his Ph.D. thesis in 2009 in the areas of evolutionary game theory and multi-agent systems. In 2008 he was appointed a SFI Postdoctoral Researcher with the System Dynamics Research Group in NUI Galway. To date, Enda has authored or co-authored 20 peer reviewed papers and presented at a number of highly regarded worldwide and European conferences. Enda has also actively engaged with other members of the research community through programme committees and many informal community events. His main research interests are in the areas of multi-agent systems, game theory and evolutionary computation. He is currently a member of the IEEE, ACM and Engineers Ireland.

Jim Duggan is a Senior Lecturer at the College of Engineering and Informatics, National University of Ireland, Galway. He obtained his Ph.D. in 1990, in the area of Industrial Engineering and Production Control Systems, and is a Fellow of the Institution of Engineers of Ireland. In the early 1990s, he worked as a Senior Software Engineer at Digital Equipment Corporation's Artificial Intelligence Laboratory, where his work focused on the development of scenario planning systems to support managerial decision making. Currently, he leads an interdisciplinary team as part of the System Dynamics Research Group at NUI, Galway, which is researching computational approaches for modelling complex social systems. The research group specialises in agent-based simulation, social network analysis, game theory and system dynamics. Dr. Duggan is a reviewer for a number of international journals, including the System Dynamics Review, Systems Research and Behavioral Science and Mathematical and Computer Modelling of Dynamical Systems.