Episode 1: Hate and the Digital World (Transcript)
To listen to the episode, please visit the episode page here. What follows is a transcript of the episode, lightly edited for clarity.
Hate now, in modern times, is different from hate pre-internet in the sense that you can now be victimised 24/7. You know, it's hard to escape it. If you're being bullied in the playground, you now get home and you're being bullied at home, because you're on Snapchat and you're being attacked there too, you know. And so there is something about the technology that makes everything worse, I think we have to acknowledge that.
Hello, and welcome to Hidden Hate, a brand new podcast series from the Centre for Hate Studies at the University of Leicester. I’m Neil Chakraborti.
And I’m Amy Clarke.
And we'll be shining a light on some of the biggest challenges of our time, challenges which destroy lives, challenges which have escalated during these difficult times, and challenges which all too often slip under the radar.
To help us unpick these challenges, we'll be joined by some fabulous guests who will be sharing insights from their research, their activism and their own lived experiences. In today's episode, we'll be exploring hate and the digital world. At a time when online hate feels so prevalent, we'll be asking how the digital landscape has shaped and facilitated hate crime, and how law enforcement, legislators, tech companies and the public should respond.
We are honoured and delighted to be joined by the wonderful Ashton Kingdon and Matthew Williams today. Thank you both so much for joining us. You do amazing, exciting, innovative, hugely important work, and it's a privilege to be talking to you today. So our first guest is Ashton Kingdon. She's based at the University of Southampton, and her groundbreaking research draws from criminology, history and computer science to explore the ways in which extremists utilise technology for recruitment and radicalisation. Ashton, it's brilliant to see you. How are you doing?
Yes, thanks so much for having me. So excited to be on the podcast.
Oh, we are honoured. Thank you, Ashton. And our second guest, Matthew Williams, is a professor of criminology at Cardiff University, and director of HateLab, a global hub for data and insight into online and offline hate crime. Matt, brilliant to have you here today, I should say. How are you?
Oh, it's an absolute pleasure and honour to be here. Thanks for inviting me.
Oh, thank you, Matt. We’ll get chatting to you in a second.
So first, we’d like to learn a little bit more about the both of you and what led you towards the work that you do. Because, honestly, as Neil said, it is fascinating and I think everybody should know more about what you do. So Ashton, can we begin with you? So your research interests, I love this, are described on your web page in five words.
I like to be dramatic, Amy.
So the five words are terrorism, extremism, radicalisation, propaganda and technology. These are all huge issues, and so relevant to the times that we’re in. So can you tell us a bit about how your research tackles these issues?
Yeah. So it's actually been updated now to add climate change. That's very exciting. So with my research, I essentially spend my life undercover online, looking at how technology and imagery act as accelerators of radicalisation. I look at the weaponisation of history as well, predominantly through imagery. In relation to terrorism, I look at the intersection between terrorism and climate change. And then in terms of radicalisation, how technology and imagery are accelerating that, and how online extremist subcultures help the dissemination of particular ideas through imagery, in particular memes, video games, things like that.
It’s really, really interesting. And obviously, you said that you study the role of technology in radicalization. So in what ways are extremist groups making use of platforms like social media to aid recruitment? And have these strategies changed over time?
Yeah, so I spent four years undercover in different far-right networks, and prior to that, I spent eight months in Islamic State networks online. So I've looked a lot at the way that different types of extremist groups use what we would call web 2.0, so your classic social media: Facebook, Twitter, YouTube, things like that. The way that they use it for propaganda, because obviously they have a front-end communication channel where they can put things out and attract users, and then they have the kind of behind-the-scenes private chats as well, and the ability to advertise, on the mainstream platforms, their links to more secretive or encrypted platforms. So I've looked a lot at that. I also look a lot at web 1.0. In my PhD research, what I did was trace 23 Klan websites, with temporal coverage ranging from 1993 to 2001: the ways in which traditional white supremacists utilised web 1.0 technologies in the past, how that's evolved in terms of adding multimedia to aid their recruitment strategies, and then also how different types of what I would call fringe groups, so neo-Nazi groups, are using web 1.0 now through platforms like Fascist Forge, where I spent about eight months, and Siege Culture, really looking at the way that they act as incubators of radicalisation and the different ideologies on them. Basically, I argue that there's been this huge move towards looking at web 2.0, and web 1.0 is often neglected. But my research definitely found that the most extreme individuals, the ones who have held these views for the majority of their lives, the generational ones, are operating on the older technologies, because they're not necessarily looking to attract recruits. They're looking for a space to network. And then obviously I look at the more undercover side, Telegram, the dark web, things like that. So yeah, it's definitely something to be concerned about.
And more recently, I've turned to emerging technologies. So I've done a lot of work on AI and the quantum internet, and the potential for different types of extremists to capitalise on that. And obviously, we saw with the elections how powerful AI can be in spreading misinformation and disinformation, and why we need to take it more seriously as a potential accelerator of these particular issues.
Yeah, absolutely. And is this risky research, Ashton? Because although you're online, you are undercover in these networks. So what is the level of risk here to you as the researcher?
I mean, my ethics boards have never been particularly happy with me in terms of my research projects. I mean, everything is risky; this is more risky, but I've been doing it for years, so I think I know exactly how to protect myself, my institution, and the people whose data I'm using. So in terms of the risks, the things I talk about the most are mental health and cybersecurity, they're the two big ones for me, and then also the potential for networked harassment, particularly when you're publishing your research findings about these particular groups. And I look at the more extreme end in terms of the ideologies, and actually I think that makes you a bit safer. I think the people that look at the manosphere, for example, male supremacists, the alt-right, they're more open to the trolling, because the people that I wrote about and researched are not going to be on Twitter looking at my analysis of Klan forums, say. So I think that kind of protected me in a way; looking at the more extreme was actually more beneficial from that angle. In terms of ethics, they were much more strict when I was doing the ISIS research.
Yeah, really, really hot on everything, in comparison to the far right, which wasn't seen as being as much of a risk.
Matt, is that something you’ve ever experienced in terms of the ethics around this research?
Yeah, sure. I mean, like Ashton, we obviously engage with various kinds of content, but most of our focus is on social media, as Neil said at the beginning, so that essentially means that we're not looking at the more extreme ends of the spectrum, and more at, maybe it's the wrong choice of words, but at the lighter end of bigotry, if you like, as it is expressed on Twitter and other platforms. We do look a bit at Telegram, which is slightly more underground than, say, something like Twitter. But ethically speaking, it is a bit of a nightmare trying to get the approval, certainly from the university. It's really hard when you want to actually do qualitative work, I think, because when you want to quote maybe some of this stuff in, say, a publication or in a press release or something like that, then there's a lot of anxiety over the identification of the individuals that you're claiming are bigots or racists or whatever. And some argue that, you know, this is free speech, this is not racism; well, it does depend on your perspective, I suppose. But you know, the ethics committees get quite jumpy about it, and so we've had some trouble in terms of getting that approval. In terms of safety, yeah, I've been exposed to more hate since I've been researching this topic. It just comes with the territory, doesn't it, I suppose. And if you put yourself out there on Twitter and other open platforms like that, you're going to attract that kind of attention if you take a particular stance on this. And yeah, I've had lots of stuff on Twitter trying to discredit me. It is more about discrediting my expertise than attacking me for being gay. Sometimes I don't think people realise I'm gay; even though I've got the rainbow flag on my Twitter profile, they might not pick that up.
But they attack my expertise instead of another aspect of my identity, if you like, and it's "how can you call yourself a professor when you're peddling nonsense like this", you know. I just block them and move on; it's just what happens when you do this kind of research. And I daresay that you've all experienced similar in your daily working lives, too.
I think, and listeners won't be able to pick up on this, but there's lots of nodding from all of us here. And it does feel like a difficult space to work in at times. And that's why I think it's really refreshing to hear you both talking not just about the breadth and depth of the work that you do, but reflecting honestly on risk and managing risk, and navigating your way through the ethics process, and also finding coping strategies to deal with the torrent of abuse that we can all receive when working in this space. And I do agree with you, Matt. I'm disappointed to say this, but there is a degree of inevitability when it comes to receiving abuse when working in this space. And it's really good to learn from each other, I think, and I think we provide a sense of solidarity to one another and a source of inspiration. Certainly you do to us, both of you. And Matt, I wanted to just rewind a little bit and ask you about your own personal motivations for working within the field of hate crime and hate speech. I know this features prominently within your most recent book, The Science of Hate. I've actually got a copy on screen there, and it's got stickies in it and everything, which shows it's been read a lot, and I've learned a lot from it. But can you say a little bit about what drew you to this field?
So as I say in the book, and this was quite a weird thing to do, and my editor was really keen to tease this out of me, because as academics, we're told to keep the personal out, usually, in terms of trying to understand the topic in an objective sense. You know, it depends what tradition you're coming from, but for the most part, the personal story usually should be left at home, and it shouldn't really interfere with your study. I mean, that's certainly a perspective that I've been taught in my career. So yeah, it was just quite a strange thing to actually do in the book. But I was a victim of a hate crime back in the late 1990s. It was just after I finished my degree in sociology, and I went to London to celebrate with a group of friends. And it was a sunny day; we were in Regent's Park, having a nice old time, went to a pub, had some lunch, had a few too many for a lunchtime, maybe. The party carried on, and we went to a gay bar on Tottenham Court Road, quite a well-known gay bar. And it was just one of those great days, you know, one of those days I thought I would remember forever for all the good things that were happening that morning and early afternoon, but I ended up remembering it for all the wrong reasons. Unfortunately, I happened to step out of that bar on Tottenham Court Road just to have a cigarette. I don't smoke anymore. Kids, don't smoke. But at that time, you know, in my youth, I was a casual smoker, and I just stepped outside for a cigarette. And I saw this guy in the distance sort of saunter over, and my eyes focused on him, and I was like, okay, how are you? You know, and he was like, oh, do you have a light, mate? And I was like, yeah, sure, sure. So I got the Zippo out and flicked the flint, and before I knew it, I was on the floor, looking around, dazed and confused, with this metallic taste in my mouth.
I had a split lip and I was bleeding, and I just didn't know what had happened to me. I was trying to figure out… Did I fall? Did something hit me from above? And then all of a sudden, my eyes focused on these three guys. So one turned into three, you know; he had his accomplices with him, and then they spat out a homophobic slur. It became incredibly apparent at that point that I'd been a victim of a homophobic hate crime, and that they'd been waiting patiently outside that bar for a victim. Not specifically for me, but someone of my kind, if you like, someone who was gay or they thought was gay. And that was my first experience of hate crime. And it was funny because at the time, I wanted to become a journalist; I had finished a degree in sociology and wanted to move on to do an MA in journalism at Cardiff University. And the experience itself actually reshaped me, not only personally but professionally, and I had these nagging questions about, you know, why did these guys do this? Do they actually hate me? Is it something innate in them that they were born with that created this desire to harm certain kinds of outgroups? Is it something they learned? Was it about defending their territory, maybe? People like me weren't supposed to be in their city or their town, or that part of the town, or what? I just had all these burning questions, and I couldn't get rid of them, you know. I couldn't answer them, which meant I couldn't address them, and they just kind of filled my thought process until I couldn't really think of much else. And that led me to switch from journalism to criminology. I thought maybe criminology is the place I need to go to find these answers, you know. So I did the MSc, and then didn't get the answers from that. Nothing wrong with the MSc in Cardiff.
But you know, we didn't have a module on hate crime then, actually, which is when I thought, right, I need to do something about this, let's get this hate crime stuff going. And back in the late 1990s, there was some stuff being written about hate crime, as you know, but there wasn't much. There's a lot more now, which is fantastic. But it just led to my career in this space. And actually, what happened to me not long after the physical attack was a virtual attack. I ended up working in a cyber cafe. Hands up who remembers cyber cafes? Internet cafes, remember them? Young listeners probably have no idea, but look them up on Wikipedia. They were awesome. We used to go to cafes to use the internet, kids. I used to work in one of these part time, trying to get some money to pay for my studies, and no one came to this cyber cafe in Cardiff. It was in the worst part of the city and no one turned up. So I was literally on the computer constantly, on my own, for eight hours a day, surfing the net and finding all these weird and wonderful dark spaces on the net. I ended up in chat rooms, American chat rooms mainly, because that's where most of the activity was taking place. And yeah, one day I just got attacked by a bunch of homophobes. I was just chatting away in a chat room, and these people descended, and my screen just filled with homophobic slurs, and all sorts of nasty things were being said. It was my first experience of hate on the internet, you know; I'd never really heard of or thought about it. And that was just a lightning bolt moment, and I was like, wow, okay, so I've been physically attacked, and now I'm being virtually attacked. And that's really what shaped my decision to focus my PhD on virtual crime, virtual hate.
Thanks for sharing that, Matt. I think it's fascinating, and difficult for us to hear. But I think it also highlights why the personal is entirely relevant. And that advice that you received right at the start of that description, to keep the personal out of it, is ridiculous.
It is. Now I've done it. And it was quite an uncomfortable and challenging thing to write about, because I'd never written about it before. It was the first time I'd done that, and really therapeutic, you know. I mean, there's a lot of trauma still there. I haven't held my husband's hand in public, ever, and it's partly because of fear from that; I assume I can trace it back to that. I don't do PDAs. Not that I try to pass as a heterosexual person when I'm out. I don't try to hide my homosexuality, or my sexual orientation, in any way. But there is this element of anxiety, I think, when I'm in a situation, and I think it stems from that moment. Being attacked like that will leave its mark on you, I think. So writing about it was really therapeutic.
Yeah. And reading about it, I think it draws you in. You talk about that within the introductory chapter of your book, and it draws you in as a reader. Within your book, and obviously within your wider work, you've done a tremendous amount of research looking into online abuse. And I think, for our listeners and for Amy and me, it'd be really interesting to get your take on why people resort to online abuse, and who you feel are the most common targets of online abuse.
So I tend to start from the basics, if you like, when trying to understand, you know, if the internet has changed things in terms of hate. I'm always asked, is there more hate now because of the internet and social media? Is it this kind of force amplifier, is it this accelerator of hate? And there are lots of things about the internet and technology. You know, Ashton has been really eloquent in explaining the extreme side of it, and it was transformative, I think, for these groups. Back in the 90s, Stormfront is a classic example of one of the first groups to take advantage of the internet and start to connect people who were otherwise disparate groups, connecting in much more arcane and clunky ways before the internet came along. So I think the internet has accelerated and connected communities of thought, whatever those communities are, in ways that we couldn't possibly imagine in the 90s, so it has accelerated in that sense. But for me, I always start at the beginning and think about, you know, the notion of prejudice, and the idea that we're all prejudiced. You know, even though in this culture we don't like to think that we're prejudiced, even those with prejudices would baulk at the idea that they are prejudiced. Racist offenders are the last to admit that they are racist in their offending; even the ones in prison are reluctant to say they're racist, even though they've been charged with an offence. And it's interesting, there's a degree of shame about being called something like racist or homophobic in society. And I think that's partly because of this civilising process that I talk about in the book, this notion that, you know, we've gone through the civil rights movement, the women's lib movement, the gay rights movement, and since the 50s, 60s, 70s, 80s, society has changed a lot.
And now we routinely suppress the prejudices that we might have; we're constantly suppressing negative stereotypes that might pop up in that split moment when we see something, or hear something, or read something. And we're lying to ourselves if we say that we're not prejudiced in some way. We might have different kinds of prejudices, and some may be more extreme than others, and it might be better to call them biases, actually; some of them might be unconscious biases, and some of them may be conscious biases. But we have them. I learned this when I was growing up in the valleys in the 1980s. I was definitely homophobic in terms of my thought process. I was a gay man, but I had what is now commonly termed internalised homophobia, and it took me a lot of internal work to get over that homophobia. People will ask, why were you like that? You know, number one, the fear of coming out, I guess, is a big part of it. And number two, I grew up in the 80s in a mining community in Wales, where, you know, ultimately I was being bombarded with homophobic messages from my friends and peers. I wouldn't say my family were particularly homophobic, but they weren't necessarily very inclusive in that sense; it was never really talked about. In fact, it was avoided in terms of conversation. There was Section 28, which essentially meant that no one could speak about it in schools, you know, teachers were forbidden to talk about it. And we had the AIDS epidemic. So there was a lot going on there that shaped my perception of gay people. And there was stuff going on that shaped my perception of black and brown people, that shaped my perception of disabled people. I grew up in a culture that was really biased, like most of us on this podcast today. You know, we know we still live in a biased world, and we cannot stop all that information entering our brains as kids; our brains are like sponges, we just absorb it all.
It's only in our adulthood that we start challenging those perceptions and those attitudes that we've been exposed to, and then we get the wherewithal and, I guess, the psychological maturity to start saying, "Well, I actually don't agree with that stereotype". But ultimately, you know, what that is, is suppression. We're constantly suppressing the old baggage that we've carried with us over the years, years of this kind of inculcation, if you like. But what we have on the other side of that suppression dynamic is justification. And these are justification forces. What justification forces do is reinforce old stereotypes. So while we're constantly suppressing our prejudices, there are things that happen around us, like, for example, a political speech by a politician who has something to gain from demonising an outgroup. When that happens, all of a sudden, your old prejudices, your old stereotypes that you had growing up as a kid, are being justified. And for some people, if that justification force outweighs the suppression force, that notion that it's not cool to be racist, they might say something to a friend in confidence about what they think about immigrants, maybe, whereas normally they wouldn't say it. They might be galvanised or bolstered by the speech of the politician, that justification force. They may also go on the internet and say something in the immediate aftermath of that speech; in that sense of deep frustration, having been riled up by the speech, they take to the internet to say something vaguely racist or homophobic. But it may not be enough for them to go to the streets and commit a hate crime. So the internet is like a gateway, in a sense.
And this justification and suppression dynamic is important in understanding at what point someone might take to the internet to send something that's vaguely racist, homophobic, disablist or whatever.
Yeah. Thanks, Matt, I think that's fascinating. And I wonder whether those kinds of themes that you've just referred to are amplified by particular trigger events. I know you've looked into some examples of trigger events, terrorist attacks, the referendum, high-profile instances involving black footballers, to name just a few examples. I wonder if you could say something about those kinds of events? Well, the consequences, and what kinds of impacts they have in terms of online…
Yeah, well, I mentioned the political speech; we could class that as a trigger event, for example. Court cases might be a trigger event, you know. I don't want to mention his name, but Tommy Robinson, in his weaponisation of certain court cases up north to stir up division and hatred. Some of these trigger events we can predict, or they're scheduled. We knew, for example, that the Brexit vote was going to happen, and we had a deep suspicion that there was going to be a lot of division around that, because we knew the kinds of people that were pushing for Brexit and the kinds of vehicles that were employed to get that vote to pass. But some of them aren't, like terror attacks, and, you know, Ashton described her work on ISIS. They don't tell us when they're going to happen, unlike the IRA, who used to tell people to get out of the way; most of the time, ISIS don't do that. Those are non-scheduled trigger events. But regardless of whether they're scheduled or not, or whether we know about them, the important point I think our research has found is that hate tends to follow these trigger events, both online and offline. Trigger events essentially act as releases of prejudice. They're a justification force that allows individuals to release the prejudice that they already have; they don't create prejudice in a person as such, they only release it. That's a really important point about Trump as well, in the US: the election of Trump was like a trigger event. Trump's election didn't make America more racist, it just allowed people to express the racism they already had. And these trigger events are important to understand because they allow us to mobilise resources around them, before and after them.
So if we know there's a scheduled event, we need to mobilise and put in place effective mechanisms to counter that hate speech when it erupts. And there's things that… We can talk about this later, but there's lots of things that can be done by the police, but also the social media companies and the government, and charities, and people like us too; you know, academics have a role as well, of course. But ultimately, there are things that can be done and put in place to minimise the impact of that hateful rhetoric around those trigger events.
Thanks, Matt. I think we will definitely come back to that point a little bit later, actually. Ashton, earlier Matt raised the point that we develop, or we can develop, a lot of our prejudice when we're children, throughout our childhood and our experiences, based on where we grow up. And I think there is a lot of concern at the moment that our young people obviously spend a lot of time online. I hear this term, digital literacy, said quite a lot, and I think it relates to education. Could you explain what digital literacy actually means and why it's so important to engage our young people in this type of education?
So digital literacy is this awareness, attitude and ability for people to appropriately use digital tools and facilities. In terms of digital literacy, I think it is important that we think about young people because, statistically, they are the people that are more likely to be drawn into extremist views; they're the target for many groups. However, I also think it is important, if we're talking about digital literacy to try and prevent disinformation and misinformation, that we also focus on older populations. A lot of digital literacy literature comes from Prensky's ideas of digital natives and digital immigrants. So I was born in the 90s; I grew up with technology, right? It's always been in my life. I have never seen an internet cafe… I feel like I did miss out on that. The idea is that digital immigrants are people that were born before the advent of these technologies in their lives, so they can't necessarily use them as competently as people that were born in the digital age. So this is all wrapped up in digital literacy. But it's important to remember that it's not just about knowing how to use technology, right, oh, I can use this platform, I can use Google, I can book my holiday. It's about navigating and communicating through the many digital environments that you will come across. So when we're thinking about extremism online, it's essential that users of the technology are aware of the precautions that they might need to take when they are interacting every day with these kinds of technological resources. Some of the things that I think need to be drawn upon are the potential for artificial intelligence to show you content that hasn't been recommended by a human, right, and making sure that people are aware that they live in these echo chambers that are created by the people that they interact with every day. So this idea of homophily: birds of a feather flock together.
To be aware of the fact that your views are going to be amplified in these chambers. I think social media companies need to be way more transparent and accountable in telling people the way that their AI works. And the fact is that, actually, quite a lot of the time, they don't know. They don't know the content that's coming out; they can't tell you how their AI works. They don't know. And it's also about informing people. Yes, we tailor it towards young people quite often. So for example, when I was in school, you'd do media sources and you'd ask, well, where did this source come from? What are the political leanings of the newspaper? Who is the audience? Can we find another article from a different type of newspaper that shows a different opinion? It's about teaching young people to analyse these sources in this fashion: where did this source come from? Is it reputable? Can we trust it? What might be another opinion? And it's getting people to realise that so that they can potentially recognise disinformation and misinformation before they become attuned to it as a kind of absolute fact. So Cynthia Miller-Idriss has just released, it's called the DUCC website, her students actually made it, but this is resources for teachers to use, lesson plans and things, to teach digital literacy, to help students try and combat and learn about disinformation and misinformation and potential extremist content online. And I think it's important that people are aware that this content is in the mainstream. I hear so many people say, "Oh, you'd have to go on the dark web to find that"… No, it's on the mainstream platforms. People just assume that it wouldn't be nefarious, I think, quite a lot of the time; the ordinary public wouldn't expect to see extremism on a mainstream platform, because they associate racism, extremism, Islamophobia with being something for the fringe, and not an institutional problem that has crept into the mainstream.
And I think that as we move into these worlds where we live our lives online, and use all of these technologies daily, it’s about teaching people this new way of dealing with this information so that we can potentially help people before they go down those kinds of pathways.
This, this is amazing. And actually, amidst all the darkness here, I think there is something quite hopeful about what you’re saying in terms of steps that we can take, so just hold those thoughts, because we’re going to come back to them in a little while. And I want you to hold on to those thoughts as well about the points you made in relation to social media companies and the responsibility that they have. I know both of you have quite strong thoughts there, and it’d be really good to hear those thoughts. I just want to interject for a moment, and when thinking about next steps, I want to think from a victim’s vantage point. So for victims of offline hate, we’ve spent a lot of time within the policy world, within the academic world and within the campaigning space, thinking about reporting structures and how to communicate more effectively in terms of how to report and where to report. How does that operate within the online world? And this is for both of you really… Do victims know how to report and where to report online incidents?
I’ll start by saying I see from the interviews we’ve done, and I think you’ve said the same, Neil, with your great project in Leicester on hate crime, that hate crime seems to be more like a process than a discrete event. For most victims, or for a large majority of victims, hate crime is a thing that they experience multiple times throughout their lives. It’s rarely something that happens once. I’ve experienced it a few times, and I’ve experienced it online a lot, since I do put myself in harm’s way sometimes. But ultimately, you know, hate crime is a process, I think, and what we see increasingly in modern times is that this stuff can start online and then migrate offline. So you get the kind of online hate, and then it can become physical. Say that online hate is being spread by kids in a school, and you know, you’re attending the same school or a neighbouring school, and there are threats, and racist or homophobic or disablist comments begin online, and they can migrate offline. Or it can go the other way: you can have a physical confrontation that can then migrate online. So you’ve got this kind of relationship between what happens online and what happens offline. So increasingly, for victims, this is a process of victimisation that tends to go between the two mediums, and it’s important to understand that in terms of the effects of hate. I’m often asked, you know, is online hate as bad as offline hate? Well, a person would usually experience both, you know, it’s not something that very often is discrete for a particular person. The research does suggest that online hate can have as much of an impact as offline hate. It depends what we’re talking about, of course, the nature of the crime offline. Obviously, online hate is less physical. 
It can have physical effects, though, you know, it obviously can create trauma, which can have physiological effects on the brain and body… Heightened cortisol levels have been recorded in people who recount experiencing hate speech, for example; there’s documented evidence of this. So there are real effects to online hate, even if it was just that that you were suffering from. But nonetheless, when we take it into consideration, you know, hate now, in modern times, is different from hate pre-internet in the sense that you can now be victimised 24/7, you know, it’s hard to escape it. If you’re being bullied in the playground, you now get home and you’re being bullied at home, because you’re on Snapchat, and you’re being attacked there, too. And so there is something about the technology that makes everything worse, I think. I think we have to acknowledge that. In terms of resources for reporting, there are more resources now than ever before, and I think that’s a good thing, you know, the ability to inform the police; there are more of those mechanisms. You can also inform social media companies or app companies of harassment; they will have policies. But you’re right, the question is, do people know about them? And are they willing to report? A lot of people aren’t willing to report, for the age-old reasons, you know, some people just don’t think anything’s gonna happen. I mean, if you read the press, and you hear about footballers being attacked, you know that no one’s being brought to justice for attacking Rashford, so why would you even consider reporting your victimisation? If Rashford can’t be helped, how can I be helped, kind of thing? There’s this notion that, you know, if it happens online, then the chances of any kind of justice being served are minimal. So then you think, well, I’m not going to bother going to the police and starting that whole process anyway. 
So I think there is a lot of work to be done there. In terms of, I guess, going back to education again, you know, it’s important to report not just so you can get justice, but so we actually know that this has happened to you. The more that we know about hate crime, the more we know about the volume of hate, both online and offline, the more attention it will receive from politicians in terms of how we address the problem. So reporting isn’t just about getting justice for yourself. It’s about helping the community at large as well, I think.
Thanks very much, Matt. Ashton, did you have anything you wanted to add about reporting online hate?
I think a lot of the problem now is that hate speech in the mainstream is coded and cloaked. So my thesis really works on unmasking that. I do a lot of work around the kind of antisemitic emojis that people use that don’t get flagged by AI content moderation. The fact that “European civilization” has become the new code word, just as “crime” began to stand in for race. Now it’s “European civilization”, right? These kinds of things do not flag up as hate speech, and I think that is the problem… You encourage people to report, they report. And then the social media company says “This isn’t hate speech, this is just a picture of a castle or a templar, this isn’t hate speech”. And I think that’s part of the problem, because it’s not as overt, necessarily, because AI has got better, mainly because of groups like ISIS and events like Christchurch. But it’s still out there. It’s just less obvious. I think that’s the problem. We need to catch up with that. And I know we’re going to talk about this a bit later, so we won’t say much more on it now. But I think social media companies have much more that they could do.
I think we should talk about that now.
Well, I’ll ask now, and we’ll see what your views are. Because actually, we would really like to capture your views on the Online Harms Bill, which some of our listeners may have heard about, because the Online Harms Bill has the potential, maybe, to address some of the themes that we’ve discussed today, and perhaps some of that more hidden language, or the vehicles through which hidden prejudice online is being moved around. So what are both of your thoughts? Ashton, we’ll come to you first. What are your thoughts on the bill, and its likely impact on some of the issues that we’ve covered today?
If the listeners could just see Ashton’s face right now, I think that would capture all of her emotions.
Where to start? I actually posted a few of my main concerns, we’ll call them, in a separate document. The bill will “strengthen people’s rights to express themselves freely online and ensure social media companies are not removing legal free speech”. Freedom of speech, what could possibly go wrong when we’re trying to tackle extremist content? So that’s, I mean, that is another conversation, right? What is freedom of speech? What country is this happening in? How can you even prove people are in these countries? Everyone knows this: trying to police cyberspace is really difficult. I mean, let’s take me for example. I’m currently in about seven different countries, right? With all different types of identities. I mean, if anyone was monitoring me from a security point of view, they’d be like, what the hell. First she’s ISIS. Now she’s a neo-Nazi. Now she’s in the Klan. Like, I’m in multiple different countries. So if you had a law that said you really can’t say this in England, I would put my VPN to Kazakhstan… Happy days, right? How can they track this? That’s one thing. So VPNs, and you can use other technologies to get access to content. I mean, the dark web exists, that’s another issue. And then there’s the fact that they’re going after the large platforms, right, with the widest reach, when we all know that the vast majority of extremist communications happen on alt-tech platforms, which are decentralised social media networks where the users own the content. So again, if you’re gonna come to someone, say on Gab, and say, I don’t like that post, can you remove it, please? They’re gonna say no. Yeah, there are so many things. I mean, I do think that it has positive stuff in that they’re trying to do something. But I think that these things are often geared more towards other online harms, which it will work better for, than extremism. Matt, have you got thoughts on this, too?
Do I? Yeah. Just a quick note for your listeners, actually: they changed the name of it to the Online Safety Bill, not so long ago, actually. I’m not sure why they changed it. Maybe it’s softer. I’m not sure. Maybe there’s a story behind that. But it’s an important shift. But yeah, I share many of Ashton’s concerns, of course. I mean, everything she said resonates with everything I’ve been thinking and writing about in terms of the bill. I mean, to be fair, not that I want to be fair to the politicians right now, but to be fair, you know, it’s an incredibly difficult thing to legislate for. We’ve been doing this for 20 years now, you know, and many, many different countries across the world have tried to legislate for the internet. It’s incredibly difficult to do. And we’re talking about hate and extremism on this podcast today, and that’s probably the most difficult of all the online harms to legislate for. And I guess the question I would ask is, you know, what’s the bill for? Who is it supposed to be protecting, and who is it supposed to be targeting in terms of the perpetrators? And I do feel that they’ve not really thought much about the hate element and the extremism element, and for that reason, it’s going to be completely ineffective in that regard. They have lots of exclusions in it. For example, as Ashton said, it only includes the big social media companies. So it won’t include things like Telegram, 4chan, Gab; they’re too small, so they won’t even touch them. They have to have a headquarters in the UK for them to be in any way kind of liable for this stuff. They exclude things that might be described as journalistic, or content that might have democratic importance. So for example, if you’ve got actors like Tommy Robinson spouting off on a platform, he could easily call that stuff journalistic content. 
You know, he claims he’s a journalist of sorts, you know, so all of a sudden his content is protected. And so there are lots of bits in the bill that will result in a very sort of ineffective tool for the policing of this kind of content, unfortunately, unless it is changed. There is a chance it could be changed; it’s not signed and delivered yet. But ultimately, yeah, I can’t see this really changing much at all in terms of the regulation of this kind of content online.
Can I just add as well, one of the things is, if they do kind of slap restrictions on the big platforms, we know what happens, right? Everyone moves on over to the fringe, where you’re then merging with all of these disparate communities. Like, we’ve seen this happen before. And I think that that is also a problem, when you’re only targeting the mainstream, like the big tech platforms, and like you said, it’s not looking at the cesspools where we actually need restrictions, which I think is one of the biggest problems.
A lot of food for thought there, I think. Thank you both very much for your thoughts on that. We are going to move to our quick-fire round of questions. So, first question, and we’ll come to you first, Ashton. In relation to online hate specifically, if you could change one thing about society, the justice system, the law, social media, or anything of that ilk, what would it be?
Social media companies need to tell the masses the truth about their technologies, how they work, and how they don’t know how they work, and get some ethics. That’s all I’m gonna say, because it’s a quick-fire round and we could go on. They need to be more ethical, they need to be more transparent and accountable. That is one of my major gripes.
Thank you, Ashton. Matt. What one thing would you change?
I’d share that. But I’d also say better and bigger moderation teams. They say they’ve got great moderation teams, but most of them are English-speaking, and in much of the developing world a lot of this stuff, a lot of hate speech, is getting missed because they don’t have people that speak the language.
Really good point. Thank you. We’re also, with this podcast, hoping that we can encourage our listeners to be upstanding citizens, and we anticipate that a lot of them will want to do more and will want advice on what they could be doing in these spaces. So, Ashton, what can listeners do to help make our online communities less hostile?
Yeah, this is just one piece of advice I would give. If you see something that you think is offensive, call it out. The more people that do it, the better. I’ve heard so many people say, “Oh, I saw this thing the other day online” or “I saw this”. Did you report it? No. No one wants to report anything. No one wants to rock the boat. As soon as you start calling things out… I used to be like really… I don’t really like confrontation. So I was one of those people. I wouldn’t really be that vocal about things. But I remember a few years ago, when I first started calling people out, I was on a bus. And it was really early in the morning and it was boiling hot. You know what it’s like when you haven’t had a coffee yet, and it’s like rush hour. These poor Muslim women, just minding their own business, got on the bus, and this drunk man, at like seven o’clock in the morning, started spouting Islamophobic nonsense, and I just turned around and I was like, “Do you know what? No one wants to hear your Islamophobia”. And I said to the bus driver, “You’re gonna let this man stay on the bus?”, and I actually got off the bus because I was so enraged. But I think once you start calling people out, you will keep doing it. And now that’s the advice I would give. If you see something online that you think is offensive, call it out, call out the people that are sharing it, and call it out to the social media companies, that it is offensive. And then the more people that start doing it, the less of this content potentially we could have online.
I’d share that. I think it’s so important that I’ll just say the same thing. I mean, it really is the most important thing. We use the internet, we make up the internet. The people that log in every day, that go on to Twitter and Facebook, we are Twitter, we are Facebook. So we have a responsibility to police it as well, this notion of responsibilisation, David Garland, for any criminologists listening, right… This notion that we have to take responsibility for this stuff is so important. And we’ve done some empirical work on it in the lab. And we found that when individuals are faced with sort of counter-narratives, counter-speech, some of them are susceptible to it. You know, we do a triage thing: we work out those who are incorrigible, who just aren’t susceptible because they’re too far gone, the kind of people that Ashton looks at in her research. But there’s also a sort of middle group that might be accelerating towards the more extreme or decelerating away from the more extreme, and they are more vulnerable. They’re usually younger, they’re usually more impressionable, and they will usually listen to people like themselves if they tell them that their opinion is wrong in some way. And counter-speech is effective; we’ve already seen it in the science. So the more of us that do it, it’s like a net-widening, or like a wisdom-of-the-crowd kind of moment. If enough people engage in counter-speech online, stand up and speak out, we will see a reduction in hate speech.
So there we have it, folks: call it out, it really can make a difference. And it’s very similar to the advice that we give to bystanders offline as well: if you are safe and able to, call it out, because it makes a real big difference, especially to those who are actually being targeted. Okay, my final quick-fire question for you both, then, is again for our listeners. If you could recommend a podcast, a website, a book, or anything to anyone wanting to find out more about these issues, what would it be?
I would recommend something I’ve read recently, which has really, really kind of moved me, which is The Good Ally by Nova Reid. It’s a great book, written by a person that’s been involved in the anti-racism space for a long time now. It’s very personal, but also really informative. It draws on science, but also her work in the anti-racism classes that she puts on across the country. So it’s a great book, I’d highly recommend it.
I’m gonna do a book and a podcast, so I’m just going to be that extra. And my favourite book of recent times was Reactionary Democracy. And just Aaron Winter more generally, because I love Aaron. And he’s just done some fantastic podcasts and videos about how to understand extremism and white supremacy following the Buffalo attack that happened recently. So yeah. And then podcasts, I’ve got to rep my friends. So Maria Norris’ Enemies of the People. And then if you’re interested in the far right, more specifically, Yeah Nah Pasaran! That’s an Australian podcast, they have loads of good stuff on there.
Thank you guys. I think what we’ll be doing is putting these links and suggestions and resources up on the companion website, so listeners can access them directly there and find out more. I think the point that you made earlier, in response to the second quick-fire question, was so important, about intervening but intervening safely, and there is guidance available on that, which we’ll put on the website as well, because I know that’s something that cropped up time and time again in the conversations that we have: we want to stand up, we want to stand in solidarity and against hostile behaviour, but how do we do it safely? So yeah, we will do all of that. So listeners, please do take a look at the website, and we’ll have all sorts of interesting information out there for you. I’d like to end our questioning, maybe, on a positive note. I guess our listeners will be feeling troubled by a lot of what they’ve heard. I hope they are, because this is a difficult, dark space. But I’m wondering, given that we’ve gone through difficult times, and we’re all looking for a reason to feel optimistic during these increasingly hostile times, what gives you guys a sense of optimism going forward? If I could start with you, Matt.
I think, you know, through our empirical observations of hate online around trigger events, we always see a lot more counter-speech than we do hate speech. So while we do pick up hate speech, say after the Brexit vote or after a terror attack, there’s definitely hate speech, and we see an uptick in it, quite a dramatic uptick, it is dwarfed by the amount of counter-hate speech out there. So ultimately, this gives me faith that most people are pretty decent, even those on Twitter, for example, which is a bit of a cesspit in itself. So ultimately, I know that people are genuinely, for the most part, well-intending and well-meaning, and it’s only a minority that actually express their prejudices in this really kind of violent way, either online or offline. So essentially, people are good, which is a nice way to think. You have to think like that, I think; if you do what we do, it’s too depressing to think that the world is probably as bad as we all fear it is. But maybe it’s an illusion. I don’t know. But that’s how I keep myself sane.
Absolutely, Ashton, what about you? Oh, no. Again, listeners, if you could see Ashton’s face you might be filled with a sense of trepidation like I am.
I try to find some… I mean, I’m quite pessimistic when it comes to this, because I work so much on the countering side, on social media, and, you know, what can be done and what can’t. But one thing that really pleases me, and warms my little black heart, is that there’s so much more network support now for people that are researching this space, and so many more interdisciplinary projects, so many more collaborations. And I think that that’s important for tackling extremism. So as you said at the beginning, my work is always interdisciplinary, and I don’t think you can stay in your own silo and necessarily solve it. So I think it is positive to see so many initiatives, and so many young people getting involved in these online initiatives, you know, anti-racism and all of these kinds of movements online towards combating this. I think that is a good thing as well.
So true, and yeah, I am totally on board with all of that. And see, that was optimistic Ashton.
I managed to dig deep.
It’s alright, we’ll edit out all the pauses, you know, so like, it will be straight-up optimism. Thank you both, honestly, for your time, for your wisdom, for your insights. You’ve been amazing. Thanks so much.
Thanks so much for having me. Can’t wait to hear the rest of the series.
And a big thank you to you, our listeners, for joining us today. If you enjoy our series, please subscribe wherever you get your podcasts, and be sure to follow the Centre for Hate Studies on Twitter so you can keep up to date with our work and future events. Be sure to come back next time, when we’ll be discussing hate and disability.