
Am J Community Psychol (2017) 0:1-17 DOI 10.1002/ajcp.12158


Network Analysis in Community Psychology: Looking Back, Looking Forward

Zachary P. Neal and Jennifer Watling Neal


• Network analysis is ideally suited for community psychology research because it focuses on context.

• Use of network analysis in community psychology is growing.

• Network analysis in community psychology has employed some potentially problematic practices.

• Recommended practices are identified to improve network analysis in community psychology.

© 2017 The Authors. American Journal of Community Psychology published by Wiley Periodicals, Inc. on behalf of Society for Community Research and Action

This is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License, which permits use and distribution in any medium, provided the original work is properly cited, the use is non-commercial and no modifications or adaptations are made.

Abstract Network analysis holds promise for community psychology given the field's aim to understand the interplay between individuals and their social contexts. Indeed, because network analysis focuses explicitly on patterns of relationships between actors, its theories and methods are inherently extra-individual in nature and particularly well suited to characterizing social contexts. But, to what extent has community psychology taken advantage of network analysis as a tool for capturing context? To answer this question, this study provides a review of the use of network analysis in articles published in American Journal of Community Psychology. Looking back, we describe and summarize the ways that network analysis has been employed in community psychology research to understand the range of ways community psychologists have found the technique helpful. Looking forward, and paying particular attention to analytic issues identified in past applications, we provide some recommendations drawn from the network analysis literature to facilitate future applications of network analysis in community psychology.

Keywords Network • Relational • Review • Best practice • Methodology

Zachary P. Neal, Michigan State University, East Lansing, MI, USA


Network analysis has been employed in numerous disciplines, including anthropology (Wolfe, 1979), business and organization studies (Borgatti & Foster, 2003), physics (Barabasi, 2012), public health and epidemiology (Luke & Harris, 2007), sociology (Marin & Wellman, 2011), and urban studies (Neal, 2013). However, network analysis also has a long history in psychology, tracing initially to the 1930s when the psychiatrist Jacob Moreno (1934) first introduced the sociogram, a visual display of a network. Beginning in 1945, significant advances in network methodology were made at MIT's Research Center for Group Dynamics by director Kurt Lewin's students and colleagues, including Alex Bavelas, Dorwin Cartwright, and Leon Festinger. For example, many years before developing his theory of cognitive dissonance, Festinger explored how spatial proximity and neighborhood layouts shape the formation of friendships among community members (Festinger, Schachter & Back, 1950). At about the same time, and long before proposing his ecological systems theory, Bronfenbrenner (1943) developed methods for measuring social networks. Two decades later, and often overshadowed by his more widely known research on obedience, Milgram (1967) also made important contributions to network analysis in his small world experiment, which demonstrated that, on average, two strangers are connected by only six degrees of separation. Far from a recent innovation or one that was imported from another discipline, network analysis

has its roots in psychology, with the methodological groundwork laid by some of community psychology's forerunners like Lewin and Bronfenbrenner.

Network analysis holds particular promise for community psychology given the field's aim to understand the interplay between individuals and their social contexts (Bennett et al., 1966; Kornbluh & Neal, 2015; Seidman, 1988; Trickett, 2009). Indeed, because network analysis focuses explicitly on patterns of relationships between actors, its theories and methods are inherently extra-individual in nature and particularly well suited to characterizing social contexts. Likewise, many theories and frameworks used in community psychology—e.g., empowerment (Zimmerman, 2000), social regularities (Tseng & Seidman, 2007), and ecological systems theory (Neal & Neal, 2013a)—can be explored using network analysis. This has led to multiple calls for the (increased) use of network analysis in community psychology. For instance, Luke (2005) identified network analysis as one of several "useful tools to address contextual questions in community science" (p. 185). Likewise, Neal and Christens (2014) argued that network analysis allows community psychologists to assess research questions that span multiple levels of analysis. In addition, multiple researchers have advocated the use of network analysis for understanding setting-level interventions (Hawe, Shiell & Riley, 2009; Tseng & Seidman, 2007).

But, to what extent has community psychology taken advantage of network analysis as a tool for capturing context? To answer this question, this study provides a review of the use of network analysis in articles published in American Journal of Community Psychology. This review has two purposes, which serve to organize the study. First, looking back, we describe and summarize the ways that network analysis has been employed in community psychology research, with the goal of understanding how community psychologists have found the technique helpful. Second, looking forward and paying particular attention to analytic issues identified in past applications, we provide some recommendations drawn from the network analysis literature to facilitate future applications of network analysis in community psychology.

Looking Back

Literature Search Methodology

Community psychology is an interdisciplinary field, and thus community psychology research is published in many different venues. For our review, and following the lead of earlier empirical reviews (e.g., Espino & Trickett, 2008; Luke, 2005), we restricted our scope to peer-reviewed articles published in American Journal of Community Psychology (AJCP), the official journal of the Society for Community Research and Action (SCRA, APA Division 27), which provides a good window into the practices of the discipline. We recognize, however, that this scope omits other important community psychology research using network analysis (e.g., a recent special issue in Psychosocial Intervention; Maya-Jariego & Holgado, 2015).

To locate research using network analysis in AJCP, we used a multistep search process (see Fig. 1). First, using the Wiley web portal for the journal, we located every article published since the journal's first issue in 1973 that included either the phrase "social network" or "network analysis" (N = 673). Second, articles in this initial pool were retained if their title or abstract contained one or more of the following network-related keywords: network, networks, centrality, broker, density, homophil*, relational, structur*, exchange, interorg* (N = 213).1 Because this keyword criterion resulted in the exclusion of many articles (N = 460), we manually checked a random subset of excluded articles to verify that they did not use network analysis. In most cases, these excluded articles either mentioned the phrase "social network" in passing, or this phrase appeared in the title of another article listed in the bibliography.

1 The * represents a wildcard, which allowed for the inclusion of multiple words sharing the same stem. For example, the keyword structur* ensured that our results would include articles containing any of the following words: structure, structural, structuralism, structured, etc.

[Fig. 1 Literature search]

Third, working from the reduced pool of 213 articles, both authors skimmed the papers and independently coded them into five categories based on how network analysis was employed. First, the whole network category included articles that collected and/or analyzed data on one or more types of relationships among all actors in one or more settings; these types of networks

are typically useful for understanding setting-level processes. Second, the ego network category included articles that collected and/or analyzed data on the personal social networks of a sample of focal individuals (i.e., Egos); these types of networks are typically useful for examining how an individual is impacted by their immediate social environment. Third, the name list category included articles that collected and/or analyzed the responses of a sample of individuals to one or more name generator questions only (e.g., Who do you collaborate with? Who do you talk to?). This approach generates a list of names of associates for each respondent. However, without additional information (e.g., Do the people you talk to also talk to each other?), these data cannot be used to identify the structure of the network and therefore cannot be examined using network analytic techniques. Finally, articles were categorized as "extended discussion" if they included a detailed discussion of network concepts or theories, and otherwise as "brief mention."

Finally, the authors met to compare their article categorizations and resolved any discrepancies by reading the full article and discussing. Articles coded as brief mention (N = 110), extended discussion (N = 17), and name list (N = 40) were excluded from further consideration because they did not involve analysis of network data. Articles coded as ego network (N = 17) or whole network (N = 29) constituted the final sample for this review. Both authors read all 46 articles and independently coded them in three domains—data (e.g., setting, size, response rate), analysis (e.g., measures used), and potential problems or challenges (e.g., in data collection, in analysis)—then met to resolve any discrepancies in their codes.

How have Community Psychologists Used Network Analysis?

Table 1 presents selected details of ego network studies published in AJCP, while Table 2 presents details of whole network studies. As a set, both tables summarize how community psychologists have used network analysis. Before turning to the studies' specific details, a long-term trend is observable in the type of network analyses published in AJCP (see Fig. 2). From the journal's founding in 1973 through the 1990s, ego network analyses were most common, while whole network analyses were comparatively rare. However, beginning in the mid-2000s, the frequency of whole network analysis grew rapidly and largely replaced ego network analysis. While this mirrors trends in other disciplines like sociology, it is a promising sign for network analysis in community psychology for two

reasons. First, because a whole network is essentially a collection of overlapping ego networks of related actors,2 it can exhibit greater structural variation and can be used to answer more complex research questions (Marsden, 1990). Second, while ego networks are focused on individual actors, whole networks are focused on entire settings and systems, suggesting that community psychology is moving closer to its goal of extra-individual explanation.
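The asymmetry between whole and ego networks can be made concrete with a short sketch. This is a minimal illustration with hypothetical actors (not drawn from any reviewed study), assuming networks are stored as adjacency dictionaries:

```python
# A hypothetical whole network of four actors, as an undirected adjacency dict.
whole = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B"},
    "D": {"B"},
}

def ego_network(whole, ego):
    """Extract one ego network: the ego, its alters, and the ties among them."""
    members = {ego} | whole[ego]
    return {actor: whole[actor] & members for actor in members}

# A whole network of N actors yields N overlapping ego networks.
egos = {actor: ego_network(whole, actor) for actor in whole}
print(sorted(egos["A"]))   # ['A', 'B', 'C']
```

The reverse does not hold: ego networks collected from an independently sampled set of Egos generally cannot be merged into one whole network, because nothing identifies whether one respondent's alters are the same people as another respondent's alters.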

Who (is in the Network)

Networks are flexible because they allow researchers to study any type and number of actors. The ego network studies that have been published in AJCP have focused exclusively on people as actors. However, there has been substantial variation in both sample size and population. Some studies examine the ego networks of just a few dozen individuals (e.g., Birkel & Reppucci, 1983; Hirsch & David, 1983), while others present data on hundreds of individuals' ego networks (e.g., Boessen et al., 2014; Domínguez & Maya-Jariego, 2008; Dworkin, Pittenger & Allen, 2016). Similarly, there is variation in the size of these individuals' ego networks. For example, Tausig (1992) found that caregivers of chronically mentally ill adults in Ohio discuss important matters with just 2.88 others on average, while Boessen et al. (2014) found that residents of Southern California discuss such matters with 9.17 others on average. As this example illustrates, the samples of individuals examined in ego network studies have been drawn from a range of populations, including marginalized groups (e.g., Cohen, Teresi & Holmes, 1986; Defour & Hirsch, 1990), college undergraduates (e.g., Hirsch, 1979; Perl & Trickett, 1988; Stokes & Wilson, 1984), and service providers (e.g., Domínguez & Maya-Jariego, 2008; Hirsch & David, 1983).

Even greater variation can be seen in the whole network studies, which include examples of people as actors (e.g., Long, Harre & Atkinson, 2014), but also organizations (e.g., Gillespie & Murty, 1994) and stakeholder groups (e.g., Nowell, 2009). They also display a greater range of sizes, from networks composed of just four actors (e.g., Neal & Neal, 2011) to those composed of 500 or more (e.g., Long et al., 2014; Stivala, Robins, Kashima & Kirley, 2016). Moreover, because the technique allows the researcher to examine the pattern of relationships in an entire setting, community psychologists have examined whole networks in an impressive variety of

2 This feature of whole networks means that a whole network of N actors can also be broken into and analyzed as N separate ego networks. In contrast, N separate ego networks cannot necessarily be merged into a single whole network of N actors.

Table 1 Ego network studies in AJCP, 1973-2016


Citation | Egos | Relation | Mean size | Fixed choice | Metrics (Degree / Total density / Boundary density) | Other
Hirsch (1979) | 32 students in psychology classes | Interaction | 7.8 | 15 | X X |
Hirsch (1980) | 34 widows and mature women attending college | Contact | 13.9 | 20 | X X X |
Birkel and Reppucci (1983) | 31 women in a low-income parent education program | Seen frequently | NR | 8 | X X X |
Birkel and Reppucci (1983) | 26 women in a low-income food program | Seen frequently | NR | 8 | X X X |
Hirsch and David (1983) | 21 hospital nurse managers | NR | NR | NR | X |
Stokes (1983) | 82 people | Contact | 15.07 | 20 | X X X |
Kazak and Wilcox (1984) | 109 families | Contact | 8.9 | 20 | X X X |
Stokes and Wilson (1984) | 179 intro psychology students | Contact | 10.87 | 20 | X X |
Vaux and Harrison (1985) | 98 non-traditional women students | Provide support | NR | 10 | X X |
Cauce (1986) | 98 low-SES 7th graders | Friends | NR | 13 | X X |
Cohen et al. (1986) | 133 SRO residents over age 60 | Interaction | NR | U | X X | Subgroups, Alter degree
Jennings, Stagg, and Pallay | 66 mothers of preschool children | Important | 19.72 | U | X X |
Perl and Trickett (1988) | 92 college freshmen | Friends | NR | U | X X | Reciprocity
Defour and Hirsch (1990) | 89 Black graduate & law students | Important | 10.56 | 16 | X X X |
Tausig (1992) | 83 caregivers of chronically mentally ill | Discuss important matters | 2.88 | U | X X |
Domínguez and Maya-Jariego (2008) | 200 foreigners living in Spain | Provide support | 17 | U | X | Closeness, betweenness, eigenvector, alter degree
Domínguez and Maya-Jariego (2008) | 10 European-American human service providers | NR | NR | NR | |
Boessen et al. (2014) | 274 residents of Southern California | Discuss important matters | 9.17 | U | X | Triangle degree
Dworkin et al. (2016) | 173 people who were sexually assaulted since age 14 | Discuss important matters | NR | 10 | | Alter degree, Subgroups

NR, Not Reported; U, Unlimited; X, metric reported (alignment of X marks to individual metric columns could not be fully recovered from the source layout).

Table 2 Whole network studies in AJCP, 1973-2016



[Table rows could not be recovered from the source extraction. Columns: Citation; Setting; Actors; Response rate; Relation; Fixed choice; Centrality; Density; Subgroup; Other metrics & techniques. Studies included: Tausig (1987); Henry, Chertok, Keys and Jegerski (1991); Luke, Rappaport and Seidman (1991); Gillespie and Murty (1994); Foster-Fishman et al. (2001); Langhout (2003); Hawe et al. (2009); Nowell (2009); Freedman and Bess (2011); Haines et al. (2011); Neal and Neal (2011); Neal et al. (2011); Cappella et al. (2013); Cardazone, Sy, Chik and Corlew (2014); Evans, Rosen, Kesten and Moore (2014); Jason et al. (2014); Langhout et al. (2014); Long et al. (2014); Neal (2014a); Neal (2014b); Neal and Neal (2014); Bess (2015); Jackson et al. (2015); Neal et al. (2015); Neal (2015); Kornbluh et al. (2016); Lawlor and Neal (2016); Stivala et al. (2016).]

NR, Not reported; n/a, Not applicable; U, Unlimited; D, Degree; C, Closeness; B, Betweenness; O, Other centrality metric (e.g., Eigenvector, Power, Alter-Based, Gamma).

Fig. 2 Network studies published in AJCP, by year and type

settings including service delivery systems (e.g., Tausig, 1987), recovery homes (e.g., Jason, Light, Stevens & Beers, 2014), collaboratives and coalitions (e.g., Freedman & Bess, 2011), schools (e.g., Jackson, Cappella, & Neal, 2015; Langhout, Collins, & Ellison, 2014), online platforms (e.g., Kornbluh, Neal, & Ozer, 2016), and simulated neighborhoods (e.g., Neal & Neal, 2014).

In ego network studies, because each ego network represents an independent observation from an independently sampled focal individual (i.e., Ego), strategies for drawing a sample of Egos and response rates within the sample are subject to the same caveats as sampling and response rates in any type of survey research. For this reason, we do not examine response rates in ego network studies here. However, in whole network studies, because researchers must define the setting (e.g., an entire school, a coalition) and then collect data on all of the relationships within that setting, the response rate of the actors within the setting is of particular importance (Marin & Wellman, 2011; Wasserman & Faust, 1994). This represents a primary challenge in collecting whole network data: the goal is to obtain a census of the setting, not merely a representative sample. Whole network studies published in AJCP reported response rates ranging from 100% (i.e., all actors in the setting provided data) to 44%. As we discuss in greater detail later, low response rates are problematic in whole network studies, but some of the articles employed innovative strategies to address the issue (e.g., Cappella, Kim, Neal, & Jackson, 2013; Jackson, Cappella, & Neal, 2015; Neal, 2014a).

How (are they Related to Each Other)

Although it can be intuitive to focus on the actors, the most critical part of a network—the data that make network analysis distinctive, and which give the network its structure—is the set of relationships, or "edges," that exist

between the actors. Again, networks are flexible and can be used to represent many different kinds of relationships, depending on the research question and the type of nodes involved. For example, a researcher interested in social support among people might examine friendship relationships, while a researcher interested in service delivery by organizations might examine collaboration relationships. Moreover, these relationships can be measured in a number of ways. They can be measured as binary (e.g., two organizations either do or do not collaborate) or valued (e.g., two organizations collaborate daily, weekly, monthly, etc.). Likewise, some types of relationships can be measured as directed (e.g., social support: one person provides social support to, but may or may not receive social support from, another), while others are undirected (e.g., kinship: two people are related to each other).
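These measurement options can be sketched as simple data structures. The organizations and people below are hypothetical, invented for illustration:

```python
# Binary, undirected: each unordered pair either collaborates or does not
# (pairs are stored with endpoints in a fixed order so each tie appears once).
collab_binary = {("OrgA", "OrgB"), ("OrgB", "OrgC")}

# Valued, undirected: the same ties, weighted by contacts per month.
collab_valued = {("OrgA", "OrgB"): 4, ("OrgB", "OrgC"): 1}

# Binary, directed: ordered pairs, from support giver to support receiver.
support_directed = {("Ann", "Bea"), ("Bea", "Ann"), ("Ann", "Cal")}

# Direction matters: a directed tie need not be reciprocated.
print(("Ann", "Cal") in support_directed)   # True: Ann supports Cal
print(("Cal", "Ann") in support_directed)   # False: Cal does not support Ann
```

The choice among these representations should follow the substance of the relationship, as the examples in the text suggest: contact is necessarily undirected, while support provision is not.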

Reflecting their narrow focus on people in social settings, ego network studies have examined two broad types of relationships: affective relationships such as friendship (e.g., Cauce, 1986) and support provision (e.g., Vaux & Harrison, 1985), and interactive relationships such as communication (e.g., Tausig, 1992) and contact (e.g., Stokes, 1983). Ego networks of affect and interaction can be useful for understanding the social and emotional environments within which people are embedded. These types of relationships were measured as binary and undirected in more than three quarters of all ego network studies published in AJCP. In part this is a period effect: as the simplest form of relationship measurement, this approach was common in the early days of social network analysis, when the majority of these studies were conducted. However, this measurement is also driven in part by the nature of the relationships under consideration. For example, the relation of "has contact with" is necessarily undirected: if I have contact with you, then you also have contact with me.

The types of relationships captured by whole network studies are much broader. In addition to the affective and interactive relationships seen in ego network studies, these studies also examine exchange relationships. Exchange relationships are most often used to understand patterns between organizations (e.g., Foster-Fishman, Salem, Allen & Fahrbach, 2001; Neal, 2014b), but also are used to understand patterns between people, for example, in the form of information or advice exchanges (e.g., Neal, Neal, Atkins, Henry, & Frazier, 2011; Neal, Neal, Kornbluh, Mills, & Lawlor, 2015). A distinctive feature of exchange relationships is the critical importance of their value and directionality; a large exchange is different from a small exchange, and a reciprocal exchange is different from a non-reciprocal exchange. As a result, relationships have been measured as valued or directed in more than two-thirds of all whole network studies. In addition to

measuring relationships in a more detailed way by capturing their value and direction, some whole network studies have also measured multiple different relationships simultaneously (e.g., Freedman & Bess, 2011; Haines, Godley, & Hawe, 2011; Nowell, 2009). Each type of relationship defines a different whole network with a potentially unique pattern. Thus, for example, Nowell (2009) obtained data not just on a single network of stakeholders, but instead on five different networks among the same set of stakeholders.

What (is Worth Noticing)

Once network data have been collected, the next step is typically to compute one or more metrics that describe features of the network's structure. Among ego network studies published in AJCP, attention has focused almost exclusively on two metrics: degree and density. Degree (sometimes also called "degree centrality") counts the number of relationships the focal individual or "ego" has, and thus captures the size of the individual's ego network. This can be useful as a summary measure, but is of particular substantive importance in ego networks of friendship and support because it directly measures an individual's number of sources of social or emotional resources. Density measures the extent to which the others in an individual's ego network have relationships with one another (e.g., to what extent are my friends friends with each other?). In these ego network studies, density has been measured in two forms: total density includes all the people in an individual's ego network, while boundary density focuses specifically on relationships that bridge different types of people in an individual's ego network (e.g., to what extent are my family members friends with my co-workers?). Density can also be a useful summary measure, but is of particular substantive importance as a measure of an ego network's capacity to provide cohesion and reinforcement because it captures the extent to which all the people in the ego network maintain relationships with each other.
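As a minimal sketch, these metrics can be computed on a small hypothetical ego network (the alters, their ties, and the family/work grouping are all invented for illustration):

```python
# Ego's four alters, the ties among the alters, and each alter's social domain.
alters = {"Mom", "Sis", "CoW1", "CoW2"}
alter_ties = {frozenset(t) for t in [("Mom", "Sis"), ("CoW1", "CoW2")]}
group = {"Mom": "family", "Sis": "family", "CoW1": "work", "CoW2": "work"}

# Degree: the size of the ego network.
degree = len(alters)                                  # 4

# Total density: observed alter-alter ties over all possible alter-alter ties.
possible = degree * (degree - 1) / 2                  # 6 possible ties
total_density = len(alter_ties) / possible            # 2/6

# Boundary density: ties bridging different groups, over possible bridging ties.
cross_pairs = [(a, b) for a in alters for b in alters
               if a < b and group[a] != group[b]]     # 4 family-work pairs
bridging = [t for t in alter_ties
            if len({group[x] for x in t}) == 2]
boundary_density = len(bridging) / len(cross_pairs)   # 0.0: the domains don't mix
```

In this invented example, total density is moderate, but boundary density is zero: Ego's family and work worlds are entirely separate, exactly the kind of distinction the two density forms are designed to capture.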

Whole networks can have substantially more complex structures than ego networks, and thus more metrics are available to summarize their features, which is reflected in Table 2. Many whole network studies also compute the network's density, but two other types of metrics are also common: centrality and subgroups. Centrality is not a single metric, but rather a category of metrics each designed to capture the extent to which an actor is "important," with different centrality metrics adopting different conceptions of importance. Degree is the simplest centrality measure, while closeness (i.e., the extent to which an actor can reach other actors in the network in a short number of steps) and betweenness (i.e., the extent to which an actor

serves as a liaison between other actors in the network) are also commonly used. However, many other variants of centrality exist. More than half of all whole network studies compute one or more centrality metrics. Subgroups are not a metric per se, but rather the notion that actors can sometimes be placed into groups based on their positions in the network.3 For example, a set of organizations that all refer clients to one another might constitute a type of subgroup known as a clique because they are bound together by their within-group connections, while a set of people who all give advice but never take advice might constitute a type of subgroup known as a role because they all play the same structurally defined role in the system. When subgroups exist in a network, the researcher might focus on the number of such subgroups that exist, the number of subgroups to which each actor belongs, or the composition of the subgroups.
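Degree and closeness, the two simplest of these conceptions, can be sketched on a small hypothetical whole network (betweenness and the many other variants follow the same logic but require more machinery; subgroup detection is likewise omitted here):

```python
from collections import deque

# A small hypothetical whole network (undirected), as an adjacency dict.
net = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"B", "C", "E"},
    "E": {"D"},
}

def degree_centrality(net, actor):
    """Importance as activity: the number of direct ties."""
    return len(net[actor])

def closeness_centrality(net, actor):
    """Importance as reach: inverse of the mean shortest-path distance
    to every other actor (assumes the network is connected)."""
    dist = {actor: 0}
    queue = deque([actor])
    while queue:                      # breadth-first search for shortest paths
        cur = queue.popleft()
        for nbr in net[cur]:
            if nbr not in dist:
                dist[nbr] = dist[cur] + 1
                queue.append(nbr)
    return (len(net) - 1) / sum(dist[a] for a in net if a != actor)

# B and C are central by both conceptions; E sits on the periphery.
print(degree_centrality(net, "B"))     # 3
print(closeness_centrality(net, "E"))  # 0.5
```

The two metrics need not agree: an actor with few ties can still have high closeness if those ties lead quickly to the rest of the network, which is why the choice of centrality metric should follow the conception of importance at stake.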

In addition to these three widely used metrics—density, centrality, and subgroups—the whole network studies also make use of a number of other metrics as needed to address their particular research questions. For example, several recent studies have adopted the clustering coefficient as a metric of cohesion, and an indirect measure of sense of community (Neal, 2015; Neal & Neal, 2014; Stivala et al., 2016). Similarly, studies that measure and examine networks that change over time often use a suite of measures designed for stochastic actor-oriented models (Bess, 2015; Jason et al., 2014).
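The clustering coefficient mentioned above can be sketched as follows, using a small invented network: locally, it is the share of an actor's contact pairs that are themselves directly tied.

```python
# Hypothetical network: A, B, C form a closed triangle; D hangs off of A.
net = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B"},
    "D": {"A"},
}

def local_clustering(net, actor):
    """Share of an actor's contact pairs that are themselves directly tied."""
    nbrs = list(net[actor])
    k = len(nbrs)
    if k < 2:
        return 0.0                    # undefined with fewer than two contacts
    ties = sum(1 for i in range(k) for j in range(i + 1, k)
               if nbrs[j] in net[nbrs[i]])
    return ties / (k * (k - 1) / 2)

def average_clustering(net):
    """Network-level cohesion: the mean of the local coefficients."""
    return sum(local_clustering(net, a) for a in net) / len(net)

# Only one of the three pairs among A's contacts (B-C) is itself tied.
print(round(local_clustering(net, "A"), 3))   # 0.333
```

High average clustering indicates a network knit together by closed triangles, which is why it has been used as an indirect, structural indicator of cohesion.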

Looking Forward

It is promising to see network analysis employed so broadly—in terms of both methods and research contexts—in community psychology research. At the same time, our review also identified some network analysis practices that can introduce error and bias, and thus could limit the potential of this approach for future community psychology research. These practices are potentially problematic, which means that they could lead a researcher to erroneous conclusions, but not necessarily that they did lead to erroneous conclusions in a specific study. That is, they are practices that increase the risk of reaching erroneous conclusions. Thus, our goal is not to critique these past studies, but to learn from them with the goal of advancing the practice of network analysis in community psychology.

3 Importantly, network subgroup membership is based on actors' position in the network, and not on their attributes. For example, a set of people that are all interested in social justice are not a subgroup simply because they have a shared interest. However, if (and perhaps because they have this shared interest) this set of people all talk with each other, they may constitute a subgroup in a communication network.

Thus, looking forward with the aim of facilitating the rigorous application of network analysis in community psychology research, in this section we highlight some of the most commonly observed practices that are potentially problematic. Because the reasons that these practices can be problematic are often masked by the size and complexity of real-world networks, in each case, we offer an illustration using a simple example. These examples are not drawn from the studies we reviewed, but are purposefully constructed to clearly highlight some of the challenges that might arise when studies use these commonly observed but potentially problematic practices. In addition to considering the challenges, this section also discusses some recommended practices drawn from the network analysis methods literature. These recommended practices are summarized in the form of short questions that researchers and peer reviewers can ask themselves when evaluating their own methods, and when reading others' work (see Table 3).

Data Collection: Could There be More?

Both whole and ego network data are typically collected using a name generator question, which asks the respondent to identify others with whom he/she/it has a specific type of relation (e.g., Who do you collaborate with?). When crafting a data collection design using name generator questions, there are two key design decisions. First, these data can be collected via roster in which the respondent is asked to select responses from a list of actors in the setting (e.g., Bess, 2015; Foster-Fishman et al., 2001), or they can be collected via free recall in which the respondent is allowed to name anyone in response (e.g., Boessen et al., 2014; Neal et al., 2011). A roster design

can be useful in smaller settings that contain a known set of actors because it limits the risk of accidental omissions, while free recall is more practical in large settings or settings where not all of the actors are known in advance. Second, whether network data are collected via roster or free recall design, the number of actors who can be named in response to a name generator question can be either unlimited (e.g., Identify all the people with whom you collaborate) or limited (e.g., Identify up to three people with whom you collaborate). Designs that limit the number of choices are often called fixed-choice because the maximum number of choices has a fixed upper limit (Wasserman & Faust, 1994). In our review of articles published in AJCP, we found that 64.7% (N = 11) of ego network studies and 10.3% (N = 3) of whole network studies used a fixed-choice design, with limits ranging from 5 (Tausig, 1987) to 20 (Hirsch, 1980; Kazak & Wilcox, 1984; Stokes, 1983; Stokes & Wilson, 1984).
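A small simulation shows how a fixed-choice limit censors the resulting data. The nomination lists below are hypothetical, and confirmed ties require mutual nomination:

```python
# Each person's true friends, in the order they would name them on a survey.
# A has three friends; a limit of two forces A to leave one unnamed.
true_nominations = {
    "A": ["B", "C", "D"],
    "B": ["A", "C"],
    "C": ["A", "B"],
    "D": ["A"],
}

def confirmed_edges(nominations, limit=None):
    """Mutually reported ties, with nominations optionally cut off at `limit`."""
    named = {p: set(ns if limit is None else ns[:limit])
             for p, ns in nominations.items()}
    return {frozenset((p, q)) for p in named for q in named[p]
            if p in named[q]}

unlimited = confirmed_edges(true_nominations)           # A-B, A-C, A-D, B-C
fixed_two = confirmed_edges(true_nominations, limit=2)  # A can no longer name D
print(len(unlimited), len(fixed_two))                   # 4 3
```

Note that the censoring is not random: it falls on the ties of the highest-degree actor, and here it isolates D entirely, which is exactly the kind of non-random missingness and structural distortion described in the literature cited below.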

Fixed-choice designs are known to be problematic for both ego and whole network analysis because they artificially censor the data, thereby introducing measurement error. For example, Kossinets (2006) noted that "Fixed choice designs can easily lead to a non-random missing data pattern" (p. 12), while Holland and Leinhardt (1973) demonstrated "the ability of the fixed-choice sociometric group to distort the underlying group structure" (p. 103). Fig. 3 provides a simple hypothetical example to illustrate the potential problems of a fixed-choice design. The two figures are both friendship networks among the same six people; the edges represent confirmed friendships (e.g., A reports being friends with B, and B also reports being friends with A). The network on the left was collected using an unlimited choice design, while the network on the right shows what might be observed if a fixed-choice design had been used that limited each person to identifying up to three friends. The unlimited choice design reveals that the setting is characterized by a cohesive, dense (60%) group of friends, each of whom has on average multiple friendships (mean degree = 3). Had a fixed-choice design been employed, however, a very different characterization would have emerged: a relatively fragmented group of strangers (density = 33%), some of whom are totally isolated and have no friendships (mean degree = 1.66). This mischaracterization is the result of artificially limiting, or fixing, the maximum number of friends each person could identify.

Table 3 Questions to ask when designing or evaluating network research

Could there be more?
  Potential problem: Limiting the maximum number of contacts identified by each respondent (i.e., a fixed-choice design) can distort a network.
  Recommended solution: Always allow respondents to identify as many contacts as they wish (i.e., an unlimited choice design). If necessary, narrow name generator questions by time or domain.

How much is enough?
  Potential problem: Even small amounts of missing data can distort a network and subsequent network statistics.
  Recommended solution: Use traditional approaches (e.g., follow-ups, incentives) to achieve a response rate as close to 100% as possible. If a high response rate is not feasible, consider alternative strategies like cognitive social structures or projection.

Why this one?
  Potential problem: Use of an inappropriate network metric can be confusing and lead to erroneous conclusions.
  Recommended solution: Each network metric used should be explicitly justified as an operationalization of a specific theoretical construct.

What are the assumptions?
  Potential problem: Because network data typically violate the independence assumption of many parametric statistical tests, such tests yield incorrect results.
  Recommended solution: Ensure that the data meet the assumption(s) of the statistical test(s) being used. In the case of network data, non-parametric or other special-purpose models (e.g., SIENA, ERGM) may be necessary.

Could it have been otherwise?
  Potential problem: Decisions about how the data were collected, transformed, or analyzed can increase the likelihood of reaching certain conclusions.
  Recommended solution: Reflect on the impact that methodological decisions may have on the types of conclusions that could be reached.

Fig. 3 Impact of fixed-choice data collection design. Left, unlimited choice: density = 60%, mean degree = 3. Right, fixed choice (limit of 3): density = 33%, mean degree = 1.66.
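The censoring mechanism behind this kind of example can be sketched in a few lines of Python with the networkx library. The hypothetical friendship network below is not the exact network in Fig. 3, and a limit of 2 (rather than 3) is assumed so that the censoring visibly bites:

```python
import networkx as nx

# Hypothetical friendship network among six people (not the exact network
# in Fig. 3): everyone has exactly three confirmed friends.
true_net = nx.Graph([("A", "B"), ("A", "C"), ("A", "D"),
                     ("B", "C"), ("B", "E"), ("C", "F"),
                     ("D", "E"), ("D", "F"), ("E", "F")])
print(nx.density(true_net))  # 0.6

# Simulate a fixed-choice design with an upper limit of 2: each respondent
# names only their first two contacts (alphabetically, for illustration),
# and an edge is kept only when both parties name each other.
limit = 2
named = {v: sorted(true_net.neighbors(v))[:limit] for v in true_net}
fixed_net = nx.Graph()
fixed_net.add_nodes_from(true_net)
fixed_net.add_edges_from((u, v) for u in named for v in named[u]
                         if u in named[v])

print(nx.density(fixed_net))         # ~0.27: the setting now looks sparse
print(set(nx.isolates(fixed_net)))   # {'F'}: F appears to have no friends
```

Under this assumed design the same cohesive setting appears fragmented, and one actor is spuriously recorded as an isolate, mirroring the distortion described above.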

One way to evaluate the potential risk of such an error in a fixed-choice design is to look at the number of respondents who reach the upper limit, and thus who might actually have wished to identify even more contacts. Of the fixed-choice studies we reviewed, only two reported this type of information, and in both cases one or more respondents reached the upper limit (Gillespie & Murty, 1994; Kazak & Wilcox, 1984). In several additional studies, the mean number of contacts reported by respondents was near the fixed limit, suggesting that one or more respondents may have reached the upper limit (e.g., Long et al., 2014; Stokes, 1983). As the hypothetical example in Fig. 3 illustrates, even when only some respondents (here, 2) reach the upper limit, severe biases can be introduced by a fixed-choice design. Therefore, it is likely that the type of bias illustrated in this figure occurs in many studies adopting a fixed-choice design.

Confronting this issue requires researchers to ask themselves, could there be more? Could a person have reported more friends, or could an organization have reported more collaborators, if they had been provided the opportunity? Unfortunately, it is usually impossible to know the answer ahead of time; it is an empirical question. For this reason, to avoid the type of distortion shown in Fig. 3, the recommendation in the network analysis literature is unambiguous: do not use fixed-choice designs (Holland & Leinhardt, 1973; Kossinets, 2006; Wasserman & Faust, 1994). Whether collecting data using a roster or free recall, respondents should be permitted to name as many actors as they wish. This recommendation may lead some researchers to worry that it will introduce undue burden on respondents, or will lead respondents to be unduly liberal in their responses. For example, Long et al. (2014) justified their use of fixed choice noting "time restrictions and the large number of potential names to nominate" (p. 465). Although these are reasonable concerns, fixed-choice design is not the answer because, for reasons noted above, it can yield biased data. Frequently these concerns can be addressed without risking bias by using an unlimited choice design, and narrowing the name generator question. Name generator questions can sometimes be narrowed by specifying a time interval. For example, rather than asking "Who have you collaborated with?" a researcher might more narrowly ask "Who have you collaborated with in the past 6 months?" Similarly, name generator questions can sometimes also be narrowed by domain. For example, rather than asking "Who provides you social support?" a researcher might more narrowly ask "Who provides you social support when you are ill?" In these cases, the appropriate timeframe or domain will be driven by the specific research question and context, but such restrictions can simultaneously reduce response burden and yield more nuanced data, while avoiding fixed-choice bias.

Response Rates: How Much is Enough?

For all types of research methods that are intended to reach conclusions about a population, response rates and sample representativeness are chief concerns. As noted above, because ego network studies examine the ego networks of multiple randomly sampled focal individuals (i.e., Egos), issues of the representativeness and response rates of these individuals are the same as in traditional survey research. Whole network studies, however, do not aim to study a sample of individuals as is common in many analytic techniques, but instead aim to examine the structure of relationships in a setting by collecting data from all actors in that setting. This setting-level rather than individual-level orientation is what makes whole network analysis powerful, but it also means that response rates in whole network studies must be held to a different standard. In our review of articles published in AJCP, we found that 17.2% (N = 5) of the whole network studies did not report the response rate, while an additional 27.6% achieved a response rate less than 100% (N = 8).

The expectation that whole network studies achieve a 100% response rate might seem unreasonable, particularly given the challenges of collecting network data in community-based settings. However, Fig. 4 offers a simple example that illustrates how even very small amounts of missingness can lead to severe bias (Borgatti, Everett, & Johnson, 2013; Kossinets, 2006; Stork & Richards, 1992). In this example, six actors are linked by eight communication relationships (i.e., they talk to each other). Understanding which actors are particularly important in spreading information through this setting might involve examining each actor's betweenness centrality, which captures the extent to which an actor is necessary for linking two other actors. Here, four of the actors (B, C, D, and E) have a betweenness centrality of 2, while two actors (A & F) have a betweenness centrality of 0. This flat distribution of betweenness scores suggests that no single person is an especially critical broker in this setting. However, suppose that these data are missing one observation: the communication relationship between C and D (dashed line). The omission of just one relationship yields dramatically different betweenness centralities, and in turn suggests a radically different conclusion: B and E are absolutely critical for information sharing in this setting.

Fig. 4 Impact of missing dyad data: each actor's betweenness centrality with and without the C-D relationship.
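This sensitivity is easy to reproduce with networkx. The graph below assumes a plausible structure for a six-actor, eight-tie network of this kind (two triangles, A-B-C and D-E-F, joined by the B-E and C-D ties); the exact layout of Fig. 4 is not recoverable from the text:

```python
import networkx as nx

# Assumed structure: triangles A-B-C and D-E-F, bridged by B-E and C-D.
G = nx.Graph([("A", "B"), ("A", "C"), ("B", "C"), ("B", "E"),
              ("C", "D"), ("D", "E"), ("D", "F"), ("E", "F")])

full = nx.betweenness_centrality(G, normalized=False)
# B, C, D, and E each score 2; A and F score 0: no single critical broker.

# Now suppose the single C-D observation is missing.
G.remove_edge("C", "D")
missing = nx.betweenness_centrality(G, normalized=False)
# B and E jump to 6 while C and D drop to 0: one missing dyad yields a
# radically different picture of who brokers information.
```

Under this assumed structure, dropping one dyad converts a flat betweenness distribution into one where two actors appear indispensable, just as the example describes.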

This example highlights two unique features of data missingness in whole network studies. First, small amounts of missing data have the potential to dramatically change conclusions. The precise extent to which whole network analyses are impacted by missing data is a complicated question that depends in part on the specific type of analysis. For example, metrics whose computation relies on the whole network's structure are more susceptible to missing data biases (e.g., betweenness centrality; Foster-Fishman et al., 2001), while metrics whose computation is more narrowly focused on the direct relationships of individual actors (e.g., degree; Kornbluh et al., 2016) may be more robust to small amounts of missingness. Second, a whole network can suffer from large amounts of missing data even when the response rate is reasonably high. This is possible because in whole network analysis, unlike most other types of analysis, the fundamental data unit is not the actor but the relationship. The example in Fig. 4 illustrates a case where all actors are present, but nonetheless data on one relationship is missing. Consider how much more data would have been missing in this example if actor B did not participate: the omission of this actor risks omitting not just one data unit, but potentially three (the AB relation, the BC relation, and the BE relation; Stork & Richards, 1992).

Confronting this issue requires researchers to first ask themselves, how much is enough? Given the particular types of analyses I plan to conduct, how much missing network data are tolerable? However, because less missing data are always better, it may be more fruitful for researchers to also ask, how can I limit the amount of missing data? Strategies for boosting response rates in more traditional survey research, including follow-up with non-respondents and offering participation incentives, are often helpful. In contexts where these strategies are impractical or likely to be ineffective, other strategies exist for coping with low response rates. First, cognitive social structures (CSS) is a technique that triangulates multiple reporters' perceptions of the entire network not only to obtain a more accurate picture of a setting's network but also to fill in gaps that would otherwise have been left by missing actors (Neal, 2008; Neal & Kornbluh, 2016). Two articles included in our review used cognitive social structures to deal with the low response rates expected when collecting network data in classroom settings (Jackson et al., 2015; Neal, 2014a). Second, a technique known generally as projection (Breiger, 1974), and in developmental psychology as social cognitive mapping (Cairns, Perrin, & Cairns, 1985), involves constructing whole networks from data about actors' co-behaviors. For example, a projection approach might view two people as having a relationship if they are both members (i.e., are co-members) of the same group. One article included in our review used social cognitive mapping to deal with low response rates, also in a classroom setting (Cappella et al., 2013). Although analyzing projected networks can require special care, they also provide a way to measure real-world networks in settings where traditional network data collection would be impossible (Neal, 2014c; Neal & Neal, 2013b).
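As a concrete sketch of the projection approach, networkx's bipartite tools can convert two-mode co-membership data into a whole network. The people and groups below are invented for illustration:

```python
import networkx as nx
from networkx.algorithms import bipartite

# Hypothetical two-mode (person-by-group) data: an edge means the person
# is a member of that group.
B = nx.Graph([("Ana", "book club"), ("Ana", "choir"),
              ("Ben", "choir"), ("Cam", "book club"),
              ("Cam", "soccer"), ("Dee", "soccer")])
people = ["Ana", "Ben", "Cam", "Dee"]

# Projection: two people are tied if they share at least one group;
# the edge weight counts how many groups they share.
P = bipartite.weighted_projected_graph(B, people)
# Resulting ties: Ana-Ben (choir), Ana-Cam (book club), Cam-Dee (soccer).
```

The projected graph P is an ordinary one-mode network, so standard metrics can be computed on it, though, as noted above, projected networks require special care in analysis.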

Metric Selection: Why this One?

Although perhaps an oversimplification, most quantitative research begins by identifying constructs of interest, which are then operationalized and measured using specific indices, scales, and other instruments. For example, a researcher might be interested in the construct of depression, which is operationalized and measured using the Beck Depression Inventory (Beck, Steer, & Brown, 1996). In such cases, the researcher must provide some justification for the specific measure used (e.g., We used the Beck Depression Inventory because we wanted to assess the severity of participants' depression). The same construct-to-operationalization process applies to network analysis as well, but 26.1% (N = 12) of the AJCP studies we reviewed offered no conceptual rationale for the specific metric(s) they used. For example, Cohen et al. (1986) computed a dozen different network-related metrics including size and density in a desire "to include as many independent [network] variables as possible" (p. 88), but did not explain how each of these dozen metrics was designed to capture a construct relevant to their research question. Stokes (1983) adopted a different, but related, approach, computing 13 network metrics, which were then factor analyzed to identify underlying dimensions.

The list of network metrics that can be computed is vast, and each one is designed to measure a specific feature of a network's structure. Fig. 5 illustrates how problems can arise when a metric is selected without an explicit link to the construct it is intended to measure. Among the most commonly computed network metrics we observed in the studies we reviewed were degree, closeness, and betweenness centrality. As noted above, each of these centrality metrics is associated with a specific conception of what makes a node "important." In addition, in this network, each of these centrality metrics picks out a different node as most central: E is most degree central, D and K are most closeness central, and F is most betweenness central. Thus, selecting the most appropriate centrality metric requires the researcher to first consider what construct of "importance" they wish to measure (Borgatti & Everett, 2006). For example, if the construct of interest involves identifying actors that are important because they can facilitate or block flows of resources through the network, then betweenness is well justified because this is precisely what it measures. If the researcher had instead selected degree or closeness, the wrong node(s) would have been identified as "important."

Fig. 5 Selecting the right metric: the most central node differs when importance is measured using degree centrality, closeness centrality, or betweenness centrality.
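The point that different centrality metrics crown different nodes can be reproduced with a standard textbook network, Krackhardt's "kite" graph, which is bundled with networkx (this is not the network in Fig. 5):

```python
import networkx as nx

# Krackhardt's kite graph: a classic example in which degree, closeness,
# and betweenness centrality each identify a different "most important" node.
G = nx.krackhardt_kite_graph()

deg = nx.degree_centrality(G)
clo = nx.closeness_centrality(G)
bet = nx.betweenness_centrality(G)

print(max(deg, key=deg.get))  # 3: has the most direct connections
print(max(clo, key=clo.get))  # 5: shortest average distance (exact tie with 6)
print(max(bet, key=bet.get))  # 7: lies on the most shortest paths
```

Each metric operationalizes a different construct of importance, so which node "wins" depends entirely on which construct the researcher intends to measure.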

Confronting this issue requires researchers to ask, for each network metric they compute, why this one? For example, if betweenness centrality is computed, why was this metric selected rather than another? Is there a strong linkage between the construct to be measured and the structural property this metric is designed to capture? All of these questions are aimed at avoiding a phenomenon Wellman (1988) observed, noting that "others have bolted on variables ... as they would a turbocharger in order to boost explained variance" (p. 19). Thus, for each network metric that is computed, the researcher should explain which construct of interest it is intended to measure and why. Ideally this process should begin with a clear articulation of the theoretical construct of interest, then proceed to the selection of a corresponding network metric from the vast range of metrics available. As is the case with non-network studies, the mere fact that a network metric can be computed and used as a variable is not a sufficient rationale that it should be used.

Statistical Inference: What are the Assumptions?

Once one or more network metrics have been computed, they are often used as variables in a statistical model designed to draw inferences about a population. Statistical models can be powerful tools for making inferences from empirical data, but there is a trade-off for this power: they require making some assumptions about the data being analyzed. When these assumptions are not met, the inferences yielded by these models cannot always be trusted. In our review of AJCP studies, we found that 21.7% (N = 10) used statistical models for which one or more assumptions was violated. The violations took two primary forms: the use of a parametric statistical test on a non-probability convenience sample, and the use of a parametric statistical test on data in which observations were not independent.4

The problems associated with drawing inferences from convenience samples are well known and not directly related to network analysis (Hahn & Meeker, 1993), so here we focus on the problems associated with non-independence. The majority of techniques researchers commonly use to draw statistical inferences (e.g., ANOVA, regression) are rooted in the central limit theorem, which assumes among other things that each observation is independent. When observations are drawn from a random or other type of probability sample, independence is assured or violations can be corrected (e.g., mixed models when observations are clustered). However, the observations (i.e., actors) in whole network analysis are not sampled, but instead include all the actors in a given setting. Moreover, those actors are assumed not to be independent, which is typically the motivation for undertaking network analysis in the first place. In concrete terms, the lack of independence in network data means, for example, that one actor's centrality depends at least in part on the centralities of the other actors in the setting.

4 Several studies used a mixed, or hierarchical linear, model to adjust for non-independence due to clustering (e.g., students clustered in classrooms). This is appropriate, but does not adjust for the kind of non-independence that arises in network analyses (e.g., actors cross-nested in dyads, triads, and larger structures).

Fig. 6 Probability distributions and non-independence. (a) When observations are independent: null probability distribution of the Pearson correlation between wealth and seats on the city council; (b) When observations are not independent: null probability distribution of the Pearson correlation between closeness and betweenness centrality.

To illustrate why this can be a problem, Fig. 6 uses data from the classic Florentine families dataset described by Padgett and Ansell (1993) and distributed as a sample dataset with the popular network analysis software UCINET (Borgatti, Everett, & Freeman, 2002). For many common statistical tests (e.g., t-test, ANOVA, regression, correlation), determining whether a result is statistically significant involves examining (or having statistical software examine) the probability distribution of the test statistic. When certain assumptions are met, the central limit theorem ensures that the probability distribution of the test statistic is normal, and in turn that conventional p-values are valid. Fig. 6a shows the probability distribution for the Pearson correlation between families' wealth (in Lira) and the number of seats they hold on the city council, under the null hypothesis that wealth and council seats are unrelated. The probability distribution is normal because these data satisfy the assumptions of the central limit theorem (e.g., the observations are independent: one family's wealth is not a function of the other families' wealth), and therefore conventional p-values indicating the statistical significance of the Pearson correlation are valid.

In contrast, Fig. 6b shows the probability distribution for the Pearson correlation between families' closeness centrality and betweenness centrality in a network of intermarriages, under the null hypothesis that closeness and betweenness are unrelated. The probability distribution is not normal in part because these data violate the assumptions of the central limit theorem (e.g., the observations are not independent: one family's closeness depends in part on another family's closeness), and therefore conventional p-values cannot be used to determine whether the Pearson correlation between these two variables is statistically significant (Good, 2000). In this case, pretending that the assumptions were satisfied and using a conventional p-value would have led to the decision that the Pearson correlation between closeness and betweenness (r = -.513) is statistically significant (conventional p < .05), when actually it is not (actual p = .064).

Confronting this issue requires researchers to ask, what are the assumptions? That is, what are the assumptions of the statistical test, and do my data meet these assumptions? While this is a good practice for any statistical modeling, it takes on particular importance in network analysis because network data are often not independent, and therefore violate the assumptions of many common statistical tests. For this reason, a range of alternative approaches have been developed that are specifically designed for network data and do not require the assumption of independence. Indeed, several of the articles included in our review employed some of these approaches, including bootstrap tests for comparing network metrics (Freedman & Bess, 2011; Haines et al., 2011), stochastic actor-based models for dynamic networks (Bess, 2015; Jason et al., 2014), and non-parametric permutation tests for regressions involving network metrics (Kornbluh et al., 2016). Which of these modeling techniques is appropriate will depend on the specific research questions and characteristics of the data, but as with any quantitative analysis, careful consideration of the model is important in network analysis.

5 This p-value was computed using a non-parametric random permutation test, which does not require the assumption that the observations are independent (1,000,000 permutations; Good, 2000).
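The random permutation test mentioned above can be sketched with numpy. This is a minimal illustration, not the procedure used in any reviewed study: the function name is invented, and the synthetic vectors stand in for two node-level metrics:

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_pvalue(x, y, n_perm=2000):
    """Two-sided permutation test for a Pearson correlation.

    Instead of relying on a normal-theory p-value, build the null
    distribution empirically by shuffling y; this does not require the
    independence assumption of the parametric test.
    """
    observed = np.corrcoef(x, y)[0, 1]
    null = np.array([np.corrcoef(x, rng.permutation(y))[0, 1]
                     for _ in range(n_perm)])
    return np.mean(np.abs(null) >= abs(observed))

# Synthetic, strongly related "metrics" for 30 actors.
x = rng.normal(size=30)
y = x + 0.1 * rng.normal(size=30)
p = permutation_pvalue(x, y)  # near zero: the association is clearly non-null
```

For genuinely dyadic network data, specialized variants such as QAP regression or ERGMs are more appropriate, but the shuffling logic above conveys the core idea of building the null distribution empirically.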

Interpretation: Could it have been Otherwise?

As is the case for any other kind of analysis, the final step in a network analysis involves interpreting the results to draw conclusions about the research question. As the sections above illustrate, network analysis involves making decisions about data collection and analysis that can each impact how results should be interpreted. When interpretation focuses solely on the final results, without taking into account these preceding steps, it can sometimes lead to erroneous conclusions. Two brief examples from our review serve to illustrate how this can happen, even in studies that otherwise are methodologically quite strong.

Gillespie and Murty (1994) sought to identify "cracks" in the service delivery network following a natural disaster, where cracks refer to cases where service providers were not as tightly coupled as they could be, thereby limiting their ability to coordinate. Examining a set of 84 disaster response organizations in a Midwestern urban region, they identified a few cracks and offered recommendations for improving the system. However, a close look at their data collection procedure suggests they may have underestimated the severity of the cracks. The organizational network they investigated included organizations that "were listed by two or more of the [other organizations, and excluded] organizations that were not listed at least two times by [other organizations]" (p. 645). These inclusion and exclusion criteria guaranteed that every organization in the network coordinated with a minimum of two other organizations (i.e., had a minimum degree centrality of 2), and therefore that the opportunity for cracks to be observed was restricted. Their finding that the system exhibited relatively few cracks was driven by a methodological decision, and could not have been otherwise. They briefly comment on this, noting that "this limits the opportunity to identify isolates ... and [provides] a conservative basis for the identification of cracks in the network" (p. 645). However, it offers an example of a seemingly minor decision about data transformation that substantially alters a study's ability to draw conclusions about its research question.

Haines et al. (2011) sought to examine the development of an interdisciplinary collaboration among scholars focused on complex interventions. Over 18 months, they observed increases in the density of the collaborative network, interpreting this as evidence that the collaboration initiative was successful. Here, it is important to closely consider how the networks were measured at T0 (baseline) and T1 (18 months post). As the authors explain,

"All relationships listed at Baseline are also counted in our Time 1 analyses; in other words, the Time 1 networks consist of the Baseline networks plus all additional ties made during the first year of ICCI" (p. 4). The authors may have adopted this strategy to reduce the data collection burden in T1 by asking only about newly formed relationships, and do observe as a future direction that "We can also expand our analysis to explore the possibility of network contraction" (p. 10). However, the assumption that all baseline relationships remained at T1 and none dissolved made it mathematically impossible for network density to decline. Thus, the finding that network density increased was at least in part a consequence of the data collection strategy and could not have been otherwise.
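The circularity in such a design can be demonstrated in a few lines: if the Time 1 network is defined as the baseline ties plus any newly reported ties over a fixed roster, density can only stay the same or rise, by construction. The actors and ties below are invented for illustration:

```python
import networkx as nx

actors = ["A", "B", "C", "D"]

baseline = nx.Graph()
baseline.add_nodes_from(actors)
baseline.add_edges_from([("A", "B"), ("B", "C")])

# Time 1 is measured as baseline ties plus newly reported ties; ties that
# actually dissolved (say, A-B) are never removed under this design.
t1 = baseline.copy()
t1.add_edges_from([("C", "D")])

print(nx.density(baseline))  # 0.333...
print(nx.density(t1))        # 0.5; density cannot decline by construction
```

Because t1's edge set is a superset of baseline's over the same roster, density(t1) >= density(baseline) always holds, so an observed "increase" cannot, on its own, distinguish genuine growth from the measurement artifact.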

Confronting this issue requires researchers to ask, could it have been otherwise? That is, given the methodological choices made about data collection and analysis, was it possible to have reached an alternate conclusion, or have the methodological choices increased the likelihood of reaching a particular conclusion? Researchers using qualitative methods are often called on to be reflexive when making interpretations, where "reflexivity is an attitude of attending systematically to the context of knowledge construction ... at every step of the research process" (Cohen & Crabtree, 2006). This is good advice for quantitative network researchers too. While reflecting on the researcher's own status and bias is important in network research, another reflexive question can help avoid problems in interpretation. Asking "could it have been otherwise?" forces the researcher to reflect on whether the findings are genuine, or are simply the byproduct of the data collection and analysis decisions made earlier in the process. As our discussion has illustrated, many different decisions may occur during the course of a network analysis, so the introduction of a logical circularity can be easy, and easily missed.


Conclusion
Since its first issue in 1974, the American Journal of Community Psychology has published many studies that directly use network analysis, and many more that make reference to it. In the past several years, the frequency of these studies in community psychology has been increasing. The adoption and rapid growth of network analysis in community psychology should not be surprising, as it is a technique designed to explicitly recognize and understand the interdependent relationship between actors and their contexts. As our review has illustrated, the network research published in AJCP has explored a wide range of populations, settings, and issues of central concern in the discipline. These studies often exhibit methodological strengths such as the measurement of multiple relationships (e.g., Freedman & Bess, 2011; Haines et al., 2011; Nowell, 2009) and the use of longitudinal modeling techniques (e.g., Bess, 2015; Jason et al., 2014).

As is the case with the adoption of any new technique, a learning curve is to be expected, and our review also illustrates this. In some cases, researchers publishing their work in AJCP have implemented some practices that increase the risk of reaching erroneous conclusions. In many ways, these practices mirror those in research using more conventional analytic approaches, including low response rates and a lack of clarity around measurement selection. However, these practices can have unique implications when applied to network data. Recognizing these practices offers an opportunity to reflect on how network analysis has been applied in community psychology, and on how community psychology (within the practical realities of community-based work) can take advantage of these techniques in even more powerful ways, drawing on recommendations from the network analysis methods literature. We hope these recommendations can serve as a guide not only to community psychologists seeking to incorporate network analysis into their own work but also to peer reviewers, readers, colleagues, and community members seeking to understand and critically engage with research adopting this approach.


References

(*indicates a study included in the literature review)

Barabási, A.-L. (2012). The network takeover. Nature Physics, 8, 14-16.

Beck, A.T., Steer, R.A., & Brown, G.K. (1996). Manual for the beck depression inventory-II. San Antonio, TX: Psychological Corporation.

Bennett, C.C., Anderson, L.S., Cooper, S., Hassol, L., Klein, D.C., & Rosenblum, G. (1966). Community psychology: A report of the Boston Conference on the education of psychologists for community mental health. Boston: Boston University.

*Bess, K.D. (2015). Reframing coalitions as systems interventions: A network study exploring the contribution of a youth violence prevention coalition to broader system capacity. American Journal of Community Psychology, 55, 381-395.

*Birkel, R.C., & Reppucci, N.D. (1983). Social networks, information-seeking, and the utilization of services. American Journal of Community Psychology, 11, 185-205.

*Boessen, A., Hipp, J.R., Smith, E.J., Butts, C.T., Nagle, N.N., & Almquist, Z. (2014). Networks, space, and residents' perceptions of cohesion. American Journal of Community Psychology, 53, 447-461.

Borgatti, S.P., & Everett, M.G. (2006). A graph-theoretic perspective on centrality. Social Networks, 28, 466-484.

Borgatti, S.P., Everett, M.G., & Freeman, L.C. (2002). Ucinet for Windows: Software for social network analysis. Harvard, MA: Analytic Technologies.

Borgatti, S.P., Everett, M.G., & Johnson, J.C. (2013). Analyzing social networks. London: Sage.

Borgatti, S.P., & Foster, P.C. (2003). The network paradigm in organizational research: A review and typology. Journal of Management, 29, 991-1013.

Breiger, R.L. (1974). The duality of persons and groups. Social Forces, 53, 181-190.

Bronfenbrenner, U. (1943). A constant frame of reference for sociometric research. Sociometry, 6, 363-397.

*Burgess, J.H. (1974). Mental health services systems: Approaches to evaluation. American Journal of Community Psychology, 2, 87-93.

Cairns, R.B., Perrin, J.E., & Cairns, B.D. (1985). Social structure and social cognition in early adolescence: Affiliative patterns. Journal of Early Adolescence, 5, 339-355.

*Cappella, E., Kim, H.Y., Neal, J.W., & Jackson, D.R. (2013). Classroom peer relationships and behavioral engagement in elementary school: The role of social network equity. American Journal of Community Psychology, 52, 367-379.

*Cardazone, G., Sy, A.U., Chik, I., & Corlew, L.K. (2014). Mapping one strong 'Ohana: Using network analysis and GIS to enhance the effectiveness of a statewide coalition to prevent child abuse and neglect. American Journal of Community Psychology, 53, 346-356.

*Cauce, A.M. (1986). Social networks and social competence: Exploring the effects of early adolescent friendships. American Journal of Community Psychology, 14, 607-628.

Cohen, D., & Crabtree, B. (2006). Qualitative research guidelines project. Robert Wood Johnson Foundation. Available from: [last accessed July 24 2017].

*Cohen, C.I., Teresi, J., & Holmes, D. (1986). Assessment of stress-buffering effects of social networks on psychological symptoms in an inner-city elderly population. American Journal of Community Psychology, 14, 75-91.

*Defour, D.C., & Hirsch, B.J. (1990). The adaptation of black graduate students: A social network approach. American Journal of Community Psychology, 18, 487-503.

*Domínguez, S., & Maya-Jariego, I. (2008). Acculturation of host individuals: Immigrants and personal networks. American Journal of Community Psychology, 42, 309-327.

*Dworkin, E.R., Pittenger, S.L., & Allen, N.E. (2016). Disclosing sexual assault within social networks: A mixed-method investigation. American Journal of Community Psychology, 57, 216-228.

Espino, S.L.R., & Trickett, E.J. (2008). The spirit of ecological inquiry and intervention research reports: A heuristic elaboration. American Journal of Community Psychology, 42, 60-78.

*Evans, S.D., Rosen, A.D., Kesten, S.M., & Moore, W. (2014). Miami Thrives: Weaving a poverty reduction coalition. American Journal of Community Psychology, 53, 357-368.

Festinger, L., Schachter, S., & Back, K. (1950). Social pressures in informal groups: A study of human factors in housing. New York: Harper & Brothers.

*Foster-Fishman, P.G., Salem, D.A., Allen, N.A., & Fahrbach, K. (2001). Facilitating interorganizational collaboration: The contributions of interorganizational alliances. American Journal of Community Psychology, 29, 875-905.

*Freedman, D.A., & Bess, K.D. (2011). Food systems change and the environment: Local and global connections. American Journal of Community Psychology, 47, 397-409.

*Gillespie, D.F., & Murty, S.A. (1994). Cracks in a postdisaster service delivery network. American Journal of Community Psychology, 22, 639-660.

Good, P. (2000). Permutation tests: A practical guide to resampling methods for testing hypotheses. New York: Springer-Verlag.

Hahn, G.J., & Meeker, W.Q. (1993). Assumptions for statistical inference. American Statistician, 47, 1-11.

*Haines, V.A., Godley, J., & Hawe, P. (2011). Understanding interdisciplinary collaborations as social networks. American Journal of Community Psychology, 47, 1-11.

*Hawe, P., Shiell, A., & Riley, T. (2009). Theorising interventions as events in systems. American Journal of Community Psychology, 43, 267-276.

*Henry, D., Chertok, F., Keys, C., & Jegerski, J. (1991). Organizational and family systems factors in stress among ministers. American Journal of Community Psychology, 19, 931-952.

*Hirsch, B.J. (1979). Psychological dimensions of social networks: A multimethod analysis. American Journal of Community Psychology, 7, 263-277.

*Hirsch, B.J. (1980). Natural support systems and coping with major life changes. American Journal of Community Psychology, 8, 159-172.

*Hirsch, B.J., & David, T.G. (1983). Social networks and work/non-work life: Action-research with nurse managers. American Journal of Community Psychology, 11, 493-507.

Holland, P.W., & Leinhardt, S. (1973). The structural implications of measurement error in sociometry. Journal of Mathematical Sociology, 3, 85-111.

*Jackson, D.R., Cappella, E., & Neal, J.W. (2015). Aggressive norms in the classroom social network: Contexts of aggressive behavior and social preference in middle childhood. American Journal of Community Psychology, 56, 293-306.

*Jason, L.A., Light, J.M., Stevens, E.B., & Beers, K. (2014). Dynamic social networks in recovery homes. American Journal of Community Psychology, 53, 324-334.

*Jennings, K.D., Stagg, V., & Pallay, A. (1988). Assessing support networks: Stability and evidence for convergent and divergent validity. American Journal of Community Psychology, 16, 793-809.

*Kazak, A.E., & Wilcox, B.L. (1984). The structure and function of social support networks in families with handicapped children. American Journal of Community Psychology, 12, 645-661.

Kornbluh, M., & Neal, J.W. (2015). Social network analysis. In L.A. Jason & D.S. Glenwick (Eds.), Handbook of methodological approaches to community-based research (pp. 207-218). New York: Oxford University Press.

*Kornbluh, M., Neal, J.W., & Ozer, E. (2016). Scaling-up youth-led social justice efforts through an online school-based social network. American Journal of Community Psychology, 57, 266-279.

Kossinets, G. (2006). Effects of missing data in social networks. Social Networks, 28, 247-268.

*Langhout, R.D. (2003). Reconceptualizing quantitative and qualitative methods: A case study dealing with place as an exemplar. American Journal of Community Psychology, 32, 229-244.

*Langhout, R.D., Collins, C., & Ellison, E.R. (2014). Examining relational empowerment for elementary school students in a yPAR program. American Journal of Community Psychology, 53, 369-381.

*Lawlor, J.A., & Neal, Z.P. (2016). Networked community change: Understanding community systems change through the lens of social network analysis. American Journal of Community Psychology, 57, 426-436.

*Long, J., Harré, N., & Atkinson, Q.D. (2014). Understanding change in recycling and littering behavior across a school social network. American Journal of Community Psychology, 53, 462-474.

Luke, D.A. (2005). Getting the big picture in community science: Methods that capture context. American Journal of Community Psychology, 35, 185-200.

Luke, D.A., & Harris, J.K. (2007). Network analysis in public health: History, methods, and applications. Annual Review of Public Health, 28, 69-93.

*Luke, D.A., Rappaport, J., & Seidman, E. (1991). Setting phenotypes in a mutual help organization: Expanding behavior settings theory. American Journal of Community Psychology, 19, 147-167.

Marin, A., & Wellman, B. (2011). Social network analysis: An introduction. In J. Scott & P.J. Carrington (Eds.), The SAGE handbook of social network analysis (pp. 11-25). London: Sage.

Marsden, P.V. (1990). Network data and measurement. Annual Review of Sociology, 16, 435-463.

Maya-Jariego, I., & Holgado, D. (2015). Network analysis for social and community interventions. Psychosocial Interventions, 24, 121-124.

Milgram, S. (1967). The small-world problem. Psychology Today, 1, 61-67.

Moreno, J. (1934). Who shall survive? A new approach to the problem of human interrelations. Washington, DC: Nervous and Mental Disease Publishing Company.

Neal, J.W. (2008). "Kracking" the missing data problem: Applying Krackhardt's cognitive social structures to school-based social networks. Sociology of Education, 81, 140-162.

*Neal, J.W. (2014a). Exploring empowerment in settings: Mapping distributions of network power. American Journal of Community Psychology, 53, 394-406.

Neal, J.W., & Christens, B.D. (2014). Linking the levels: Network and relational perspectives for community psychology. American Journal of Community Psychology, 53, 314-323.

Neal, J.W., & Kornbluh, M. (2016). Using cognitive social structures to understand peer relations in childhood and adolescence. In Z.P. Neal (Ed.), Handbook of systems science (pp. 147-163). New York: Routledge.

*Neal, J.W., & Neal, Z.P. (2011). Power as a structural phenomenon. American Journal of Community Psychology, 48, 157-167.

Neal, J.W., & Neal, Z.P. (2013a). Nested or networked? Future directions for ecological systems theory. Social Development, 22, 722-737.

*Neal, J.W., Neal, Z.P., Atkins, M.S., Henry, D.B., & Frazier, S.L. (2011). Channels of change: Contrasting network mechanisms in the use of interventions. American Journal of Community Psychology, 47, 277-286.

*Neal, J.W., Neal, Z.P., Kornbluh, M., Mills, K.J., & Lawlor, J.A. (2015). Brokering the research-practice gap: A typology. American Journal of Community Psychology, 56, 422-435.

Neal, Z.P. (2013). The connected city: How networks are shaping the modern metropolis. New York: Routledge.

*Neal, Z.P. (2014b). A network perspective on the processes of empowered organizations. American Journal of Community Psychology, 53, 407-418.

Neal, Z.P. (2014c). The backbone of bipartite projections: Inferring relationships from co-authorship, co-sponsorship, co-attendance and other co-behaviors. Social Networks, 39, 84-97.

*Neal, Z.P. (2015). Making big communities small: Using network science to understand the ecological and behavioral requirements for community social capital. American Journal of Community Psychology, 55, 369-380.

Neal, Z.P., & Neal, J.W. (2013b). Opening the black box of social cognitive mapping. Social Development, 22, 604-608.

*Neal, Z.P., & Neal, J.W. (2014). The (in)compatibility of diversity and sense of community. American Journal of Community Psychology, 53, 1-12.

*Nowell, B. (2009). Profiling capacity for coordination and systems change: The relative contribution of stakeholder relationships in interorganizational collaboratives. American Journal of Community Psychology, 44, 196-212.

Padgett, J.F., & Ansell, C.K. (1993). Robust action and the rise of the Medici, 1400-1434. American Journal of Sociology, 98, 1259-1319.

*Perl, H.I., & Trickett, E.J. (1988). Social network formation of college freshman: Personal and environmental determinants. American Journal of Community Psychology, 16, 207-224.

Seidman, E. (1988). Back to the future, community psychology: Unfolding a theory of social intervention. American Journal of Community Psychology, 16, 3-24.

*Stivala, A., Robins, G., Kashima, Y., & Kirley, M. (2016). Diversity and community can coexist. American Journal of Community Psychology, 57, 243-254.

*Stokes, J.P. (1983). Predicting satisfaction with social support from social network structure. American Journal of Community Psychology, 11, 141-152.

*Stokes, J.P., & Wilson, D.G. (1984). The inventory of socially supportive behaviors: Dimensionality, prediction, and gender differences. American Journal of Community Psychology, 12, 53-69.

Stork, D., & Richards, W.D. (1992). Nonrespondents in communication network studies: Problems and possibilities. Group Organization and Management, 17, 193-209.

*Tausig, M. (1987). Detecting "cracks" in mental health service systems: Application of network analytic techniques. American Journal of Community Psychology, 15, 337-351.

*Tausig, M. (1992). Caregiver network structure, support, and caregiver distress. American Journal of Community Psychology, 20, 81-96.

Trickett, E. (2009). Community psychology: Individuals and interventions in community context. Annual Review of Psychology, 60, 395-419.

Tseng, V., & Seidman, E. (2007). A systems framework for understanding social settings. American Journal of Community Psychology, 39, 217-228.

*Vaux, A., & Harrison, D. (1985). Support network characteristics associated with support satisfaction and perceived support. American Journal of Community Psychology, 13, 245-268.

Wasserman, S., & Faust, K. (1994). Social network analysis: Methods and applications. Cambridge, UK: Cambridge University Press.

Wellman, B. (1988). Structural analysis: From method and metaphor to theory and substance. In B. Wellman & S.D. Berkowitz (Eds.), Social structures: A network approach (pp. 19-61). Cambridge, UK: Cambridge University Press.

Wolfe, A.W. (1979). The rise of network thinking in anthropology. Social Networks, 1, 53-64.

Zimmerman, M.A. (2000). Empowerment theory: Psychological, organizational, and community levels of analysis. In J. Rappaport & E. Seidman (Eds.), Handbook of community psychology (pp. 43-63). New York: Plenum.