
SI: Culture Digitally


Histories of Hating

Tamara Shepherd1, Alison Harvey2, Tim Jordan3, Sam Srauy4 and Kate Miltner5

Abstract

This roundtable discussion presents a dialogue between digital culture scholars on the seemingly increased presence of hating and hate speech online. Revolving primarily around the recent #GamerGate campaign of intensely misogynistic discourse aimed at women in video games, the discussion suggests that the current moment for hate online needs to be situated historically. From the perspective of intersecting cultural histories of hate speech, discrimination, and networked communication, we interrogate the ontological specificity of online hating before going on to explore potential responses to the harmful consequences of hateful speech. Finally, a research agenda for furthering the historical understandings of contemporary online hating is suggested in order to address the urgent need for scholarly interventions into the exclusionary cultures of networked media.

Social Media + Society, July-December 2015: 1-10. © The Author(s) 2015. DOI: 10.1177/2056305115603997. sms.sagepub.com


Keywords

digital culture, hate speech, regulation, affordances, trolling, #GamerGate

Introduction: A Moment for Hate

Given the context of vitriolic online misogyny so starkly illustrated in the ongoing #GamerGate campaign, it seems that iterations of hate speech have become endemic to much online discourse. Earlier, more optimistic pronouncements of the Internet's ability to offer spaces for productive and democratic interactivity seem at best naive in what can be framed as the current ascendancy of online hate. This is reflected in the string of prosecutions for online hate speech in the United Kingdom since 2012, which points to distinct legislative frameworks being developed for online as opposed to offline speech (Rustin, 2014). While we may be experiencing something of a "moment" for hate speech online, taking a more historical perspective on networked communication shows that this kind of online hate, encompassing a wide range of behaviors from flaming to trolling to cyberbullying, may also be characteristic and perhaps constitutive of online culture (Jane, 2014b).

In looking for precedents of online hate speech in earlier Internet spaces as well as mediated communication more generally, we point toward longstanding issues of exclusion and inequality in public speech. Tracing the shifting borders of inclusion and exclusion that subtend the ebbs and flows of particular moments for hate discourses in specific political contexts offers an inroad into thinking historically about the affordances of various online spaces as platforms (Gillespie, 2010). Here, special emphasis is placed on the ontological status of social media as the primary mode through which hate is currently expressed. For example, viewing online hate as a kind of performance invites analysis of how such performative acts interface with the temporal and spatial affordances of social media. Part of this entails historical interrogation of the "social" in social media, where the community norms of different sites interact with more top-down modes of governance—as in laws that are being formulated differently in specific national jurisdictions and in individual website terms and conditions. Governance of hate thus happens in the intersections between policy and community, and these intersections themselves develop over time and according to specific spatial arrangements. The processes underlying online hate point toward a set of complex issues at the center of any normative discussion of regulation and intervention, including the boundaries of free speech, asymmetries between more powerful and more marginalized actors, the meanings and implications of visibility, and the dynamics between online and offline hate.

1University of Calgary, Canada

2University of Leicester, UK

3University of Sussex, UK

4Oakland University, USA

5USC Annenberg School of Communication and Journalism, USA

Corresponding Author:

Tamara Shepherd, University of Calgary, 2500 University Dr. NW, Calgary, AB, Canada, T2N 1N4.

Email: tamara.shepherd@ucalgary.ca

Creative Commons CC-BY-NC: This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 3.0 License (http://www.creativecommons.org/licenses/by-nc/3.0/) which permits non-commercial use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access pages (https://us.sagepub.com/en-us/nam/open-access-at-sage).


This roundtable-style discussion article examines such topics through a series of prompting questions, answers to which are organized dialogically to put ideas into conversation with each other and build up a collaborative analytical approach to a complex problem. Growing from previous face-to-face discussions at Culture Digitally's London meeting in June 2013, the article allows for an extended consideration of online hate as historical process, with a view toward setting up a conceptual framework for future research. To that end, throughout the discussion, we engage with epistemological questions of where to position ourselves as researchers in contentious spaces: how we should define online hate; how we should document a sprawling and uneven set of histories, contextually differing practices, speech acts, and norms; how to develop methodologies and theoretical orientations that connect micro, meso, and macro levels of analysis; how to articulate normative and interventionist concerns; and how best to work with communities and within spaces where hate speech pervades.

Histories and Ontologies of Hate Online

The conversation began by establishing a foundation on which to theorize online hate, articulating the historical precedents for the communicative practices characterizing #GamerGate, and contextualizing hate and harassment within the earliest computer-mediated communication (CMC) controversies.

Tim Jordan

Is hate more prevalent in Internet- and computer-mediated communication in more recent times? Or are there increasing numbers of people online and so more who experience hate online, making such hate more obvious? This question is not meant to trivialize hate online but instead to situate it in order to better understand it. For me, the question is posed partly because one of the first collections of insight from a cultural and sociological perspective into online communication was a collection called Flamewars, first published in 1994, when the World Wide Web was new (Dery, 1994). And while "flaming" may not be quite the same as hating or trolling, it often involved both of these and thus poses the analytic question of where we might draw lines between them. If the exemplar of Dery's collection is not enough to alert us to the history of online hate, trolling, and flaming, then I will reinforce it with a very brief, personal, and impressionistic set of instances and online places in which such practices were identified.

In 1978, the proponent of the now well-known "finger" command (which would allow users to identify who else was on a particular computer system) was vigorously attacked by other online users for proposing something that they felt transgressed the open nature of the Internet. Also in 1978, off what we now know as the Internet but still via networked CMC through bulletin board systems, someone published a "guide to flaming on BBS." Similarly off-Internet but through computer networks, the trolling group the "Meowers," who frequented Usenet, helped invent the practice of "crap-flooding" (posting so much irrelevant material to a forum that users are unable to use the forum for its stated topic). By the 2000s, diverse inflammatory groups began to emerge as the Internet spread as a mass medium, from sites like SomethingAwful to groups with campaigns like the Gay Niggers Association of America (which included some members, such as Weev, who became famous as trolls). From here, a progression can be constructed that leads to the rise of sites like 4chan and its infamous /b/ (random) board, then to some of the trolling and "lulz" roots of Anonymous, and finally to the present day, in which hate online appears in episodes of misogyny like #GamerGate or the trolling of memorial sites in the United Kingdom that has led to arrests and the jailing of trolls.

The point of suggesting that hate, trolling, and flaming online have been present for as long as we have had CMC is not to diminish these practices or to dismiss attention to them. It is rather to ask: how should we best approach hate online, and what is specific to online hating? I offer two brief suggestions for starting to answer such questions in relation to evidence and online communication.

First, it is important to develop detailed examinations of how hate works online in order to grasp it, or we miss such things as the meaning of "lulz" in online political activism. An exemplar here is Whitney Phillips' work on trolls who target online memorial sites—on the face of it, some of the most offensive online insults imaginable, aimed at those mourning a death. Phillips suggests, however, that such practices primarily target "grief tourists," who have no connection to whoever is being mourned and who, like tourists, simply travel from mourning site to mourning site. This picture is more complex than the simple idea of memorial-site trolling being directed at families and those grieving: trolling "grief tourists" is instead an action focused on those seen to be already abusing memorial sites with insincere grief (Phillips, 2011).

Second, even the earliest experiments in CMC found a number of characteristics of such discussions that diverged from offline communication. These experiments were conducted pre-Internet to investigate the process of collective decision-making using computer networks. Researchers found four effects in groups making decisions through CMC: people contributed more than when face to face; people spoke more often to those further up a hierarchy; decisions were very difficult to come to; and people were much ruder than when face to face (Sproull & Kiesler, 1993). A psychological explanation of this has been described as the "Online Disinhibition Effect" (Suler, 2004), but in a more communication studies vein, I think we should also explore the relationship between communicative practices online and offline. In particular, I feel it is important to pay attention to how an identity as a communicative subject is created: online, a communicative subject must be "heard" before they become able to communicate, whereas offline it is the emitter or author of a message who creates the possibility of communicating. If we are concerned with how meaning can be sent and received, we need to understand the cultures and technologies that constitute a presence in which authors and receivers of messages can become stable subject-positions. Online markers of identity—because they are inherently unstable, unlike the body or timbre of a voice—have to be stabilized by being heard consistently; the style of a communicant must be recognizable for communication to be possible. In this sense, one must be heard online before one can speak.

If this is the case, then perhaps we can start to differentiate online hate speech as an ongoing intensification of online communicative practice that seeks to create an identity by being heard as a hater. One source of the intensity of online hate is this struggle "to be," to exist as a communicating subject online, leading to extreme statements to draw the attention required to exist. However, the subject who comes into existence because their style is to flame, troll, and hate will then be caught by that identity as they are caught by the style they are heard (or read) through. A troll who no longer trolls is likely to be considered a different online subject and only able to be recognized by others as themselves when they resurrect the style of communication that allowed them to be heard and so to exist online.

In addition to indicating some of the continuities and discontinuities of hating with trolling, flaming, and other online practices, the struggle to be through being heard highlights the need to consider the ways in which the technologies and cultures of social media interpellate particular subject-positions, normalizing behaviors that would seem inappropriate in other contexts. This points to the challenge of defining hate online, given that it is highly contingent on a range of factors, as the next contribution makes clear.

Tamara Shepherd

I think Tim's provocation about hating as reflecting an intensification of certain transgressive subject-positions offers an important background for thinking about how hate practices are understood more broadly. In previous discussions among this group, for example, we have struggled with what to call online hate. How do different labels for hate speech online implicate different modes of affect, violence, and social exclusion? Looking at popular reactions to online harassment campaigns is one place to start thinking through this question. To take #GamerGate as an illustrative case, popular coverage of the movement tended to oscillate between dismay at the "horrendous, upsetting and unjustifiable [ . . . ] reams of appalling threats and abuse" (Stuart, 2014), and contention that "online harassment is as old as the internet itself" (Associated Press, 2014). The abuse expressed through #GamerGate was in fact often dismissed as "trolling," or "gendertrolling" in the case of misogynist threats (Mantilla, 2013).

What does the trolling label enact or perform in the context of sexist discourse? Most obviously, it diverts attention away from systemic sexism, racism, homophobia, and so on to instead dismiss gendered abuse as the practice of a few, socially marginal individuals. Part of this diversion has to do with the initial use of trolling in the early 1990s to describe inflammatory, trickster humor on bulletin board systems (Bishop, 2014; Herring, Job-Sluder, Scheckler, & Barab, 2002). Since that time, however, trolling as a more widespread phenomenon (reflecting broader Internet adoption) seems to have become a proxy for hate speech. Here is a case where offline forms of social exclusion get amplified when combined with more aggressive strands of Internet culture. This combination is particularly salient to a discussion of online misogyny, as in the case of #GamerGate, since trolls are gendered: the origin of the term troll can be traced to US military aerial dogfighting in the 1960s (Jansen & James, 2002). Today, the transposition of the term within Internet communication—perhaps not coincidentally also born of the 1960s US military—serves to legitimize sexist and abusive behavior, based on amplifying existing social exclusion in ways that are not necessarily permitted by the rules of civility in similar offline spaces (Filipovic, 2007; Hardaker, 2010).

So when reflecting on the continuities and discontinuities that online hate embodies in relation to its earlier, less explicitly mediated forms, my instinct gravitates toward the argument that online sociality masks hate with appeals to play. The openness of networked communication infrastructure—heralded for its support of identity play as well as political resistance—also opens up the libertarian opportunity space for less progressive "counter publics," as evident in online hate cultures and far-Right movements (Taylor, 2014). Going back to #GamerGate as an example, the same hashtag infrastructure credited with enabling citizen protest in oppressive regimes can itself be used oppressively in campaigns of misogynist harassment. The form of the hashtag, moreover, serves to normalize hate and feed into what has been termed an "online misogyny epidemic" (Penny, 2013). In this way, trolling as a label not only legitimizes hate speech but also conceals the structural supports for gendered exclusion through a label that evokes individualized antisocial behaviors like bullying, as opposed to culturally endemic expressions of misogyny and sexism.

The power of the term "hate" for discussing the wide range of practices that fall under the broader umbrella of incivility online serves to highlight the ways in which what appears to be an ontologically unique set of activities still serves to marginalize and oppress those least privileged historically and in offline spaces—including women, people of color, trans people, and lesbian, gay, bisexual, and transgender (LGBT) people. But as the next example indicates, a totalizing label might fail to account for the diversity within this wide range of practices.

Kate Miltner

As Tamara mentioned, one of the major problems we have when thinking about or engaging with "hate" online is the slipperiness of the term and the lack of consistency in the way in which it is used.

When dealing with online "hate," a variety of terms are brought into the mix: being a "hater" (haters, after all, gonna hate), bullying, trolling, harassment, antagonism, hateblogging—the list goes on. The problem is that all of these terms represent different phenomena, behaviors, and underlying motivations. Being a hater is definitely not the same thing as being a troll, and while bullies and trolls are frequently collapsed into the same category, they have different definitions and the behaviors are largely carried out by different groups of people. While trolling might end up falling under the legal definition of harassment, it is often motivated not by hatred and vitriol, but by a sort of nihilistic superiority complex. True hatred—the wish to do someone harm—and "lulz" are very different concepts, even though they may look similar to the uninitiated.

Another problem frequently encountered by those examining and distinguishing between these different forms of online hate is that the ways in which these boundaries are drawn largely depend on one's positionality. Behaviors that may seem like "hate" to one group of people may seem like valid criticism to another.

Take, for example, the "hateblog" or "internet hate site" (Orsini, 2012) Get Off My Internets (GOMI). GOMI is a blog aimed at critiquing "egobloggers," people who have built up a large following by chronicling the minutiae of their daily lives. GOMI itself has a large following; in 2013, Forbes named it a Top 100 Blog for Women.

GOMI's editor feels that her site offers "a necessary service to bloggers who've completely fallen out of touch with reality" (Orsini, 2012). Many of the posts focus on behaviors that the community finds grating or unethical: blatant consumerism, entitlement, and "shady" business dealings. However, for the blog's targets and their supporters, it comes across as vicious vigilantism: in the words of mommyblogger Morgan Shanahan (2013), "While plenty of stories published to GOMI roll off the backs of their subjects, others have contributed to legitimate damage on the lives and livelihoods of those they seek to mock."

The case of GOMI and its subjects is simply a case study of a larger conflict that is taking place on the Internet in general, and that is a shift in normative values. A 2012 Atlantic Wire article bemoaned the encroaching "niceness" of the Internet, and pined for the days "when the internet was snarky, vicious, and brutal, a place for people to say things without fear of retribution, cloaked beneath the crude cloak of anonymity" (Doll, 2012). The fact that the author, Jen Doll, thinks those days of the Internet are past is another matter entirely, but her words betray the libertarian roots of the early web: the belief that all speech, no matter how vile or offensive, is not only protected, but an essential part of what makes the Internet what it is. As the Declaration of the Independence of Cyberspace asserted, "we cannot separate the air that chokes from the air upon which wings beat" (Barlow, 1996).

This position, clearly one of privilege, can be seen echoing through the #GamerGate controversy. Historically, both the gaming world and the Internet were the provinces of a particular type of geek masculinity that sprang from the male-dominated, rational-scientific environment of early technocultures (Kendall, 2002; Turner, 2010). As women, people of color, and people of varying levels of technical expertise assert their rights to participate and engage in these spaces on their own terms, we witness backlash from those most deeply entrenched in these communities.

When we consider the cyberlibertarian origins of the Internet as well as its military-educational history, the embedded culture underpinning the rise of sociality online would seem to indicate a sedimented dynamic of exclusionary practices. It is equally enlightening to consider how the hate within a controversy such as #GamerGate functions beyond the platforms and spaces on which it operates.

Sam Srauy

At first glance, hate seems to arise from differences (Brewer, 1999; Butler, 1990; Haythornthwaite, 2007; Kaynan, 2008; Torfing, 2003). When demarcated "ingroups" (e.g., gamergaters) and "outgroups" (e.g., women and "social justice warriors") mix with aggression online, the lack of face-to-face social constraints allows for hate (Sproull & Kiesler, 1985). However, differences do not necessarily precipitate hate (Brewer, 1999; Kaynan, 2008). The vitriol hurled overwhelmingly at women throughout the #GamerGate ordeal seems to me to evince that hate may be power politics (O'Donnell, 2014), dependent on othering outgroups and believing in a zero-sum struggle—justified as a moral struggle—against (perceived) threats. Brewer (1999) and Allport (as cited in Brewer, 1999) reasoned that demarcation is not enough; boundaries need something else to turn into hateful acts. In the #GamerGate ordeal, it was the belief that "the future of games" was a zero-sum contest (Hudson, 2014). It seems that what "social justice warriors" were saying was perceived as a threat through a zero-sum lens.

Zero-sum beliefs are necessary for hate (Brewer, 1999). Through this lens, the outgroup is seen as a threat (Kaynan, 2008): "Whether actual or imagined, the perception that an outgroup constitutes a threat to ingroup interests [ . . . ] is directly associated with fear and hostility toward the [ . . . ] outgroup" (Brewer, 1999, pp. 435-436). In other words, the perception of a threat is enough to march toward hate. Although what "social justice warriors" were saying would never threaten GamerGaters' hegemonic dominance, the perception was enough to spark outrage.

GamerGaters met this perceived threat with vitriol justified as moral superiority (Hudson, 2014). GamerGaters would not have us believe that women questioning their marginalization in video games elicited the hate. Instead, an appeal to "ethics" in games journalism was offered—including a debunked story about a female game developer (Quinn, 2014; Totilo, 2014). By asserting moral authority, GamerGaters could claim that hateful behavior toward outgroups was reasonable, disguising hate as a moral campaign against a perceived immoral threat (Brewer, 1999). In fact, undergirding all this hate is power politics. Strip away the claim of "journalistic ethics," and what is left is a cultural project that pushes back against the positive developments of recent years (O'Donnell, 2014; Omi & Winant, 1994). It is, in effect, the ingroup's reaction to its perceived loss of power.

These responses engage with the complexities of staking out a conceptual framework for examining hate online, but also provide the grounding in the interplay between power, subjectivity, culture, space, technology, and history needed to begin any analysis of hating as a networked communicative practice. The next set of contributions builds on this unstable and complex foundation to address action, interventions, and the regulation of hate online.

Affordances for Regulating Online Hate

Alison Harvey

How do we regulate hate online? So far, research summarizing the legal approaches to harassment, hate speech, and defamation enacted on the Internet demonstrates that contemporary legal systems are ill-equipped to deal with these cases (Citron, 2014; Franks, 2012; Marwick & Miller, 2014). Responses to the harassment of visible targets within the #GamerGate campaign indicate that law enforcement is increasingly reactive to these online threats, though the challenge remains that the content of hate-filled messages must include some indication of a plan to enact violence. This means that the threat of a school shooting at Utah State University if Anita Sarkeesian went forward with giving a talk on women in games was quickly addressed, as have been increasing instances of "swatting" #GamerGate targets with false calls to law enforcement. But only the most extreme instantiations of harassment, with bodily harm explicitly threatened or with the violation of other established felonies such as fabricated emergency calls, can be handled by the processes and protections of the legal system. The barrage of sexist, misogynistic, racist, anti-Semitic, homophobic, and transphobic hatred remains in unenforceable territory. This is further complicated by the ways in which hate online operates across borders, making the application of legal jurisdictions difficult if not impossible.

Hence the significance of the structures and regulations implemented at the level of the platform itself, which brings us back to the historical and cultural contexts of the social media sites on which hate online operates. Targets of the hashtag campaign #GamerGate, including game designer Brianna Wu, loudly lobbied Twitter to address the time-consuming, complicated, and ultimately ineffectual tools at hand for reporting abuse and blocking accounts, particularly given the ease of creating new or multiple accounts even after others have been removed (Brustein, 2014). Some of the reasons postulated for this failure to address the flow of hate speech have included the rationale that the "straight white men of tech" simply do not understand the nature of this harassment, as well as the standard free speech arguments about the openness and neutrality of the platform. Twitter's representatives have responded in the form of promises to work with organizations such as Women, Action, and the Media to tackle online harassment (Epstein, 2014), and, in the case of the company's CEO, apologies for self-admittedly inadequate action in light of persistent abuse (Tiku & Newton, 2015).

But are such responses anything other than cheap gestures toward corporate social responsibility from a social media company whose commodity is engagement and for whom campaigns of abuse and harassment can be understood as the creation of value? As Ben Kuchera (2014) observed in relation to the intimidation and harassment faced by marginalized users of the service before #GamerGate even began: "Twitter could fight this, of course, but the service won't. The company is enjoying high revenues and a soaring stock price, but it has yet to own up to the fact that harassment is part of the product being offered." The neutrality of this platform is thus only that which can be mobilized in service of capital. Hate—like sex—sells, and that is not a problem when those who are hurt are those who have always suffered in capitalism. When the intensification and expansion of hate online, as with #GamerGate, is revealed to be not a problem but a profitable development for those who manage the software and services designed to police online vitriol in light of legal limitations, we realize the extent to which alternative interventions are required. Thus far, though, the only interventions planned to mitigate this hatred appear to have emerged from those who have been targeted, in the form of harassment support networks and advocacy plans organized by Zoe Quinn (Hudson, 2015) and Anita Sarkeesian (Dredge, 2015). Can we expect only the marginalized to create safe spaces free of hatred online?

Raising the question of responsibility for regulating online hate—of who should be responsible and how interventions might take shape—evokes the counterpoint to thinking about rights online. Rights, to freedom of expression for example, also implicate responsibilities. Thinking further on the quality of these rights as incarnated in online versus offline communication offers some ways into the complex problem of applying normative ethical frameworks developed in one context to another, as the next two posts indicate.

Tim Jordan

Our previous discussion suggests both that hate, flaming, trolling, and other such speech online have a long history and that this hate is always part of politics. #GamerGate is clearly related to patriarchy and to ways of degrading women to enforce exclusions. Remedies relate to issues of freedom of speech, to where boundaries are drawn, and to the possible imposition of rigid understandings of practices, like trolling, that are in fact more complex. The complexity and the endemic nature of the kinds of exchanges that make up online hating and its history need addressing if we are to understand regulation.

Earlier I suggested an understanding of hating based on the idea that online communicative practices have a different (but not separate) form from offline ones, and that this online form affects "existing" online. One has to be heard online because the style of someone's online communication takes priority over the marker of identity attached to that communication—handle, email address, Twitter name, and so on. I tentatively suggested that the intensity of online hating may stem from the fact that it also invokes the problem of simply existing online. If someone exists by being heard as a hater, then they exist through their style of hating, and this underpins the intensity of hating. A related consequence is that if communicating online is part of existing online, then the erasure of online existence is one of the potential consequences of hating—seen in the women driven to stop using Twitter (e.g., in the United Kingdom, for running a campaign to have Jane Austen on a banknote) or to stop gaming, and in both cases therefore to no longer be heard. A possible way to understand this connection between hating, communication, and online existence is by seeking inspiration from Haraway's (2008) approach to the question of killing. This is not to diminish the powerful difference between killing in flesh and blood and nonexistence through not being heard online, but it is to take inspiration and to develop a question that we might apply to all domains in which we seek to ensure that hating online does not contribute to the ongoing creation of oppression and exploitation.

Haraway (2008) suggests that the absolute of "thou shalt not kill" does not reflect a world in which the intersection of bodies will, in the end, require killing of some sorts. She suggests it may be a mistake "to pretend to live outside killing." Instead, the issue is how to be responsible in relation to the question "what makes a being killable?" Haraway is not inciting killing but inciting responsibility in relation to a part of living that involves dying, and doing so by drawing attention to the interwoven relations of beings, in which making a being killable must be understood and tied to broader social responsibility. In a similar way, perhaps we have to shift the debate about hating online from hating or not (particularly the dichotomy of free speech or not) to recognizing that within such communicative spaces disagreement and abuse are inevitable, but that this should come with the ethical responsibility for not allowing some beings to be made abusable to the point of not being heard.

This is only a starting point for a set of larger ethical dilemmas. It leaves open the different answers to what a responsible approach to online discourse would be and which points of not being heard are fundamental. But it at least shifts the debate from hate or not hate; it offers an approach that recognizes nuances, and that these need understanding in their material settings and in their actual relations, while also asserting that we should not make anyone online un-hearable through abuse. Alison's previous post poses, for me, a complex question—how to think about regulating online hate without making three mistakes: devolving responsibility to those being abused (just ignore it, do not go there, you do not have to read it); relying on companies and technologies (better filters, simpler appeal mechanisms); or resorting to government intervention that also controls freedom of speech (2 years in jail for online abuse is now the penalty in the United Kingdom). Perhaps all three of these registers of dealing with online hate could be explored by asking, what in each of them makes someone abusable to the point of exclusion and so of nonexistence?

Sam Srauy

Tim's response to Alison's post, I believe, is at the heart of how we should think about online hate. Specifically, the "three mistakes" that Tim wants us to avoid force us to ground our intervention in culture. If hate in general and events like #GamerGate specifically point to a counter movement (O'Donnell, 2014; Omi & Winant, 1994) against women's rights, then the needed interventions must take shape at the level of culture. Of course, these events happen in online spaces. To that extent, I do not want to disregard the technological affordances of these spaces. But it does not seem to me that the hate emerged from these spaces. Rather, it seems more appropriate to view this hate as structured in its presentation by online spaces.

How do we prevent victim blaming for online hate? We must examine the role of culture and power. On one hand, it is a structural issue. Online spaces afford an easy and cheap platform for speech regardless of whether or not that speech is malicious. It is often quite easy for us to identify the victims. Despite claims that #GamerGate was about journalistic ethics, clearly Zoe Quinn, Brianna Wu, and Anita Sarkeesian were among the central targets of the hate (Hudson, 2014; Quinn, 2014; Sampat, 2014). Searches for the #GamerGate hashtag yield many veiled and not-so-veiled references to these women. On the other hand, it is a cultural issue. As the participation of women in the video game world becomes increasingly normalized, there seems to be a backlash against their presence (O'Donnell, 2014). If we take #GamerGaters at their word, then #GamerGate was about ethics. However, examine the effects and targets of #GamerGate, and it seems hard to conclude that it was anything other than a campaign of hate (Chess et al., 2014; Hudson, 2014). That is what was more difficult to see—that the campaign was a struggle over power and male privilege. Only by seeing it through the discourse of male privilege does it become clear why "journalistic ethics" could possibly mean doxxing or threatening the lives of Quinn, Wu, and Sarkeesian (see Porter, 2015). It seems that the perceived zero-sum contest concerned the centrality of men as "true" gamers. With the increasingly normalized presence of women in the games world, those who enjoyed privilege, it seems, felt threatened (see McIntosh, 2014). And, as Brewer (1999) argues, a belief in a zero-sum struggle is necessary for hate.

Culture is not the sole site of this contestation. No matter what, people have to interface with technology in the online world. The online world, after all, is a technological world. But should we rely on technological, corporate, or legislative solutions? I am not sure. Clearly, the claim I have made here—that "fixing" online culture will do more to mitigate online hate—faces the criticism that it is too naive and, even if such a fix is possible, too slow to make any real difference in victims' lives today. I would agree with those criticisms. It would seem that some combination of technological, corporate policy, and legislative solutions is necessary. Nevertheless, I believe that any of those solutions would be bandages unless we address the underlying culture of online spaces.

Clearly, pre-existing cultural norms play a constitutive role in the way that hating gets expressed online, as is evident in the case of #GamerGate, which has served as the primary case study for this discussion. But what about online hating beyond this particular example—hating that expresses not only misogyny but racism, homophobia, ableism, and so on?

Tamara Shepherd

Building from Sam's articulation of the cultural substrates for online hating, and Tim's point about the space of online expression as intimately bound up with questions of voice and being heard, I want to shift our attention toward more pervasive everyday incarnations of hating online. These tend overwhelmingly to take place in the comments sections of websites, which were initially heralded as providing the space for interactivity that marked the web as somehow more democratic than older forms of mass media (Jarrett, 2008). The affordances of comments as potentially anonymous spaces to speak back to power of course also contained the possibility of replicating existing structures of power in even more vitriolic forms that in fact served to shut down debate and deny people's ability to be heard.

For example, the anonymous comments sections of major global newspapers, opened up in the mid-2000s, have mostly since been restricted in response to an overwhelming array of racist, sexist, and otherwise hateful comments, especially on contentious political topics but even on relatively innocuous stories (Santana, 2014). In terms of regulation, newspapers struggled with how to deal with such comments—turning them off at a certain point, not archiving them, doing away with anonymity, or strictly moderating them—pointing to the difficulty of enacting top-down regulation on an online culture of commenting that manifests the supposed neutrality and openness of the web in the form of systemic exclusion, a kind of "ghost in the machine" (Hughey & Daniels, 2013). In this sense, the quandary for regulation has less to do with discouraging uncivil discourse and more to do with facing expressions of hate—or what Emma Jane (2014a, p. 559) has called "e-bile" in the context of online misogyny—"in its unexpurgated entirety because euphemisms and generic descriptors such as 'offensive' or 'sexually explicit' simply cannot convey the hostile and hyperbolic misogyny which gives gendered e-bile the distinctive semiotic flavour."

The act of facing the semiotic power of online hating seems to be the crucial first challenge in approaching any kind of regulatory framework, in advance of making connections between online hate and offline legal understandings of harassment, stalking, abuse, defamation, and so on (e.g., Citron, 2014). The ways in which expressions of hate make meaning in online space seem to emanate from the cultural acceptance of hateful epithets as themselves constitutive of online interactivity. Kylie Jarrett's (2008) contention that "interactivity is evil"—because, as a liberal value, it contributes to capitalist inequality by producing governable, nonresistant neoliberal subjectivities—also implicates similar inequalities at the level of cultural values and the ability to make oneself heard.

This all leads to the political-economic argument Alison alluded to about the potential profitability of online hating, which supports supposedly unregulated online spaces for freedom of expression. For example, in discussing the expectations on Reddit's incoming CEO Ellen Pao to potentially "clean up" the site, a Guardian article maps out the interconnections between the "good" and "bad" Reddits in arguing that "cleaning up Reddit may only be possible as a side effect of cleaning up the world itself" (Hern & Bengtsson, 2015). And in fact, the site's attempt in June 2015 to tighten up its regulation of hate speech sparked intense backlash from members of particularly inflammatory subreddits, revealing an intense vilification of Pao herself that precipitated her departure from the company after only 8 months. Yet, in the absence of legal or other kinds of regulatory recourse, private filtering and reputation management companies profit from the individualization of risk entailed in protecting oneself from online hating (Bartow, 2009). So the focus must shift away from whether regulation makes sense and toward how the discourse questioning regulation might open up space for the further encroachment of individualized risk that comes from ignoring the violence enacted by online hating. In this respect, the issue of regulating hate online needs to take into account its ontological specificity within a particular moment of hegemonic Western liberal culture.

Conclusion

As the exchanges within this roundtable discussion indicate, discussions of online hating are riven with conceptual complexity. Attempts to define and historicize hate online, while also suggesting regulatory interventions, face issues not only of clarity but of implicating a broader value system that in some ways challenges liberal ideals that are so integral to popular investments in networked technologies and cultures. The moral underpinnings of such a challenge imply the need for new approaches to conceiving of liberty through culture, in order to move beyond utterances themselves to address structural inequalities. As Phillips (2015) concludes in her study of trolling, attempts to bring justice to Internet-mediated hating subcultures tend to "mistake the symptom for the disease," losing sight of the complex interplays between power and language that carry a profound moral ambiguity within both hating and regulated speech.

Amartya Sen's (2009) work might be instructive here as a corrective to John Rawls' notion of justice (which relies heavily on liberty), where Sen argues that justice ought to be thought of as degrees of fairness to all constituent stakeholders. By taking this tack, Sen sees justice as a pragmatic endeavor that ought to shy away both from abstract, idealist conceptions of justice and from an institutionalist view of justice that relies on purely technological interventions to produce a just environment. Following this argument, fairness ought to be thought of in terms of degrees of fairness to participants (Sen, 2009). In other words, Sen draws us away from the zero-sum thinking that provides such fertile ground for hate.

Of course, it should not be assumed that such a position enables an evasion of institutional change. Rather, what Sen's view does is allow us to make space for a cultural view of justice (fairness) that can be articulated in online interventions that may still respect liberal ideas of speech. What constitutes a "good" intervention in this model can be seen in how closely that intervention adheres to the Sanskrit concept of nyaya—justice (or fairness, or goodness) seen in its positive effects on society (Sen, 2009). Fair interventions would be based on more than abstract liberal concepts (such as free speech). Rather, fair interventions would be judged on how well those abstract concepts serve all people.

Focusing on what it means to make interventions in hateful spaces belies the pressing need to map hating's ontological messiness in order to get down to the urgent business of pragmatically addressing the ways it works to exclude marginalized subjects. Because as we debate what online hate might be, where we might locate its precursors and linkages to discourse, culture, and technology, and how to engage with its elusive slipperiness in light of established forms of regulation, real harm is being done. To return to the #GamerGate example, Brianna Wu pulled out of the Penny Arcade Expo (PAX) East game convention, and Anita Sarkeesian withdrew from a public talk at Utah State University, because of threats they received online, indicating the continuities between offline and online hate through embodied forms of violence. It would seem to be time to move from descriptive and explanatory accounts of hate online to a historically informed research agenda oriented toward concrete action for intervening in the increasingly unsafe spaces of social media.

Such a research agenda, as we envision it, would require at least three thematic anchors—inclusion/exclusion, material cultures, and governance—all examined with an attunement to historical lineages of online hating. The first theme of inclusion and exclusion as moving boundaries requires a research methodology capable of mapping the spaces and moments where and when being heard (or not being heard) online intersects with particular political struggles. Being heard, as constitutive of entering online discourse as an agentic subject, might be positioned in this way as a boundary-making activity and should be connected to how such boundaries interface with longstanding struggles associated with different forms of marginality. The second theme of material cultures is also implicated in this mapping, specifically as the work of delineating the conditions of possibility for expressions of hate, within which online hating can serve to crystallize certain cultural norms. Invoking materiality in this way is not only about particular spaces and moments, but also about the online-offline relationship and the embodied implications of online hating. These embodied implications are taken seriously in order to build up a case for the normative, interventionist element of the research agenda, the third theme of governance or regulation. As discussed, purely legal formulations are important but not sufficient to account for the complex investments in online hating. As such, this component of future study needs to think more broadly about the incarnations of governance from both top-down angles, such as site moderation, and bottom-up perspectives, such as shifts in community norms.

While this proposed research agenda presents its own set of practical and conceptual challenges stemming from its breadth of scope, we feel that such a diffuse approach is necessary given the field of inquiry. It can be particularly difficult to understand hateful expressions as forms of "emotional terrorism" (Wu, 2015), given that hate wears several different cloaks online, including those of humor, play, and principled critique—cloaks that have ample cause to be themselves protected. To return to the example of #GamerGate, its discussion threads on sites like Reddit, 4chan, and 8chan often contain as many articulate critiques as rape threats. Evidently, the legitimization of certain design features, practices, norms, and behaviors within the Internet's institutional origins and cultures of interaction affords a certain libertarian value system that enables hate. At the same time, these affordances, in tandem with the enduring masculinist ethos of the Internet, allow for not only instances of hate speech but coordinated, collective movements of hate-driven harassment, frequently against those who have been oppressed, subordinated, and silenced offline as well as in networked cultures. In the context of such very real social threats from online hating, at a moment of rising right-wing sentiments at least in Western culture, a historically grounded, ethically oriented examination of online hating needs to inform the development of new strategies of intervention into hateful spaces.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.

References

Associated Press. (2014, October 22). Survey: Harassment a common part of online life. The New York Times. Retrieved from http://www.nytimes.com/aponline/2014/10/22/technology/ap-us-tec-pew-online-harassment-.html

Barlow, J. P. (1996). The declaration of the independence of cyberspace. Retrieved from https://projects.eff.org/~barlow/Declaration-Final.html

Bartow, A. (2009). Internet defamation as profit center: The monetization of online harassment. Harvard Journal of Law and Gender, 32, 101-147.

Bishop, J. (2014). Dealing with internet trolling in political online communities: Towards the this is why we can't have nice things scale. International Journal of e-Politics, 5(4), 1-19.

Brewer, M. B. (1999). The psychology of prejudice: Ingroup love or outgroup hate? Journal of Social Issues, 55, 429-444.

Brustein, J. (2014, October 14). A #GamerGate target wants Twitter to make harassment harder. Bloomberg Business. Retrieved from http://www.bloomberg.com/bw/articles/2014-10-14/a-no-gamergate-target-wants-twitter-to-make-harrassment-harder#r=hpt-ls

Butler, J. (1990). Gender trouble: Feminism and the subversion of identity. New York, NY: Routledge.

Chess, S., Consalvo, M., Huntemann, N., Shaw, A., Stabile, C., & Stromer-Galley, J. (2014). GamerGate and Academia. ICA Newsletter, 42(9). Retrieved from http://www.icahdq.org/membersnewsletter/NOV14_ART0009.asp

Citron, D. K. (2014). Hate crimes in cyberspace. Cambridge, MA: Harvard University Press.

Dery, M. (Ed.). (1994). Flamewars: The discourse of cyberculture. South Atlantic Quarterly, 92(4), 559-568.

Doll, J. (2012, January 26). Shiny, happy tweeple: Has the internet gotten too nice? The Atlantic Wire. Retrieved from http://www.thewire.com/entertainment/2012/06/shiny-happy-tweeple-has-internet-gotten-too-nice/53939/

Dredge, S. (2015, January 27). Anita Sarkeesian launching new video series focused on masculinity in games. The Guardian. Retrieved from http://www.theguardian.com/technology/2015/jan/27/anita-sarkeesian-video-series-masculinity-gamergate

Epstein, K. (2014, November 8). Twitter teams up with advocacy group to fight online harassment of women. The Guardian. Retrieved from http://www.theguardian.com/technology/2014/nov/08/twitter-harassment-women-wam

Filipovic, J. (2007). Blogging while female: How internet misogyny parallels real-world harassment. Yale Journal of Law & Feminism, 19, 295-303.

Franks, M. A. (2012). Sexual Harassment 2.0. Maryland Law Review, 71, 655.

Gillespie, T. (2010). The politics of "platforms." New Media & Society, 12, 347-364.

Haraway, D. (2008). When species meet. Minneapolis: University of Minnesota Press.

Hardaker, C. (2010). Trolling in asynchronous computer-mediated communication: From user discussions to theoretical concepts. Journal of Politeness Research, 6, 215-242.

Haythornthwaite, C. (2007). Social networks and online community. In A. Joinson, K. McKenna, T. Postmes, & U.-D. Reips (Eds.), The Oxford handbook of Internet psychology (pp. 121-137). Oxford, UK: Oxford University Press.

Hern, A., & Bengtsson, H. (2015, March 12). Reddit: Can anyone clean up the mess behind "the front page of the internet?" The Guardian. Retrieved from http://www.theguardian.com/technology/2015/mar/12/reddit-can-ceo-ellen-pao-clean-up-the-mess?CMP=fb_gu

Herring, S., Job-Sluder, K., Scheckler, R., & Barab, S. (2002). Searching for safety online: Managing "trolling" in a feminist forum. The Information Society, 18, 371-384.

Hudson, L. (2014, October 21). Gamergate goons can scream all they want, but they can't stop progress. Wired. Retrieved from http://www.wired.com/2014/10/the-secret-about-gamergate-is-that-it-cant-stop-progress/

Hudson, L. (2015, January 20). Gamergate target Zoe Quinn launches anti-harassment support network. Wired. Retrieved from http://www.wired.com/2015/01/gamergate-anti-harassment-network/

Hughey, M. W., & Daniels, J. (2013). Racist comments at online news sites: A methodological dilemma for discourse analysis. Media, Culture & Society, 35, 332-347.

Jane, E. A. (2014a). "Back to the kitchen, cunt": Speaking the unspeakable about online misogyny. Continuum, 28, 558-570.

Jane, E. A. (2014b). "You're a ugly, whorish, slut": Understanding e-bile. Feminist Media Studies, 14, 531-546.

Jansen, E., & James, V. (Eds.). (2002). NetLingo: The internet dictionary. Ojai, CA: NetLingo Inc.

Jarrett, K. (2008). Interactivity is Evil! A critical investigation of Web 2.0. First Monday, 13(3). Retrieved from http://firstmonday.org/article/view/2140/1947

Kaynan, Y. (2008). Influences on the nature and functioning of online groups. In A. Barak (Ed.), Psychological aspects of cyberspace: Theory, research, applications (pp. 228-242). New York, NY: Cambridge University Press.

Kendall, L. (2002). Hanging out in the virtual pub: Masculinities and relationships online. Berkeley: University of California Press.

Kuchera, B. (2014, July 30). Twitter can fix its harassment problem, but why mess with success? Polygon. Retrieved from http://www.polygon.com/2014/7/30/5952135/twitter-harassment-problems

Mantilla, K. (2013). Gendertrolling: Misogyny adapts to new media. Feminist Studies, 39, 563-570.

Marwick, A., & Miller, R. (2014). Online harassment, defamation, and hateful speech: A primer of the legal landscape (Fordham Center on Law and Information Policy Report No. 2). Retrieved from http://ssrn.com/abstract=2447904

McIntosh, J. (2014, April 23). Playing with privilege: The invisible benefits of gaming while male. Polygon. Retrieved from http://www.polygon.com/2014/4/23/5640678/playing-with-privilege-the-invisible-benefits-of-gaming-while-male

O'Donnell, C. (2014, September 4). A 4-Front war. Culture Digitally. Retrieved from http://culturedigitally.org/2014/09/a-4-front-war/

Omi, M., & Winant, H. (1994). Racial formation in the United States: From the 1960s to the 1990s. London, England: Routledge.

Orsini, L. (2012). Get off her internets: Blogger Alice Wright bites back. The Daily Dot. Retrieved from http://www.dailydot.com/society/get-off-my-internets-alice-wright-interview/

Penny, L. (2013). Cybersexism: Sex, gender and power on the internet. London, England: A&C Black.

Phillips, W. (2011). LOLing at tragedy: Facebook trolls, memorial pages and resistance to grief online. First Monday, 16(12). Retrieved from http://firstmonday.org/ojs/index.php/fm/article/view/3168/3115

Phillips, W. (2015). This is why we can't have nice things: Mapping the relationship between online trolling and mainstream culture. Cambridge, MA: MIT Press.

Porter, T. (2015, January 24). 4,000 demand Adam Baldwin ban from comic convention after Gamergate scandal. International Business Times. Retrieved from http://www.ibtimes.co.uk/4000-demand-adam-baldwin-banned-comic-convention-supporting-gamergate-1484991

Quinn, Z. (2014). 5 things I learned as the internet's most hated person. Retrieved from http://www.cracked.com/blog/5-things-i-learned-as-internets-most-hated-person/

Rustin, S. (2014, June 13). Is it right to jail someone for being offensive on Facebook or Twitter? The Guardian. Retrieved from http://www.theguardian.com/law/2014/jun/13/jail-someone-for-being-offensive-twitter-facebook

Sampat, E. (2014). The truth about Zoe Quinn. Retrieved from http://elizabethsampat.com/the-truth-about-zoe-quinn/

Santana, A. D. (2014). Virtuous or vitriolic: The effect of anonymity on civility in online newspaper reader comment boards. Journalism Practice, 5(1), 18-33.

Sen, A. (2009). The idea of justice. Cambridge, MA: Belknap Press.

Shanahan, M. (2013, August 21). No, you get off my internet. The 818. Retrieved from http://the818.com/2013/08/no-you-get-off-my-internet/#sthash.xzq5jxND.cRV6hCd9.dpbs

Sproull, L., & Kiesler, S. (1985). Reducing social context cues: Electronic mail in organizational communication. Management Science, 11, 1492-1512.

Sproull, L., & Kiesler, S. (1993). Computers, networks and work. In L. Harasim (Ed.), Global networks (pp. 105-120). Cambridge, MA: MIT Press.

Stuart, K. (2014, September 3). Gamergate: The community is eating itself but there should be room for all. The Guardian. Retrieved from http://www.theguardian.com/technology/2014/sep/03/gamergate-corruption-games-anita-sarkeesian-zoe-quinn

Suler, J. (2004). The online disinhibition effect. Cyberpsychology & Behavior, 7(3), 321-326.

Taylor, A. (2014). The people's platform: Taking back power and culture in the digital age. New York, NY: Metropolitan Books.

Tiku, N., & Newton, C. (2015). Twitter CEO: "We suck at dealing with abuse." The Verge. Retrieved from http://www.theverge.com/2015/2/4/7982099/twitter-ceo-sent-memo-taking-personal-responsibility-for-the

Torfing, J. (2003). New theories of discourse: Laclau, Mouffe and Žižek. Oxford, UK: Oxford University Press.

Totilo, S. (2014). In recent days I've been asked several times. Kotaku. Retrieved from http://kotaku.com/in-recent-days-ive-been-asked-several-times-about-a-pos-1624707346

Turner, F. (2010). From counterculture to cyberculture: Stewart Brand, the Whole Earth Network, and the rise of digital utopianism. Chicago, IL: University of Chicago Press.

Wu, B. (2015, March 4). Brianna Wu on why gamergate trolls won't win. The Boston Globe. Retrieved from http://www.bostonglobe.com/magazine/2015/03/04/brianna-why-gamergate-trolls-won-win/l2V0PjfDRSf4Fm6F40i9YM/story.html

Author Biographies

Tamara Shepherd (PhD, Concordia University) is an Assistant Professor in Communication, Media and Film at the University of Calgary. Her research interests stem from the feminist political economy of digital culture, looking at labor, policy, and literacy in social media, mobile technologies, and digital games.

Alison Harvey (PhD, York University) is a Lecturer in Media and Communication at the University of Leicester. Her research interests revolve around issues of inclusivity and accessibility in digital culture with a focus on video game play, design, culture, and production.

Tim Jordan (PhD, Edinburgh) is Professor of Digital Cultures and Head of the School of Media, Film and Music at the University of Sussex. His research interests include information politics, hacking and hacktivism, and being in the zone.

Sam Srauy (PhD, Temple University) is an Assistant Professor of Communication at Oakland University. His research interests include the political economy of video game industries and the roles that race and economics play in identity construction in various forms of new media.

Kate Miltner (MSc, London School of Economics and Political Science) is a Doctoral Student at the Annenberg School of Communication and Journalism at the University of Southern California. Her research focuses on the impact of social power structures and frameworks on online cultural participation.