S14 Episode 12: How Online Lies Become Offline Harm - And How We Fix It // Imran Ahmed
Hosted by Hillary Wilkinson
"7 out of 10 American teens use chatbots… more than half of them use them as a companion."
~Imran Ahmed
Imran Ahmed is the founder and CEO of the Center for Countering Digital Hate @counterhate, a leading voice in the fight against online hate, disinformation, and manipulation. From viral conspiracy theories to the influence of AI, CCDH has been pushing for accountability from the biggest tech platforms since it began.
Healthy Screen Habits Takeaway

Resources
For More Info:
https://counterhate.com/
Parent Guide:
https://protectingkidsonline.org/
Instagram:
@counterhate
Show Transcript
Hillary Wilkinson: (00:02)
Today we're joined by the founder and CEO of the Center for Countering Digital Hate, a leading voice in the fight against online hate, disinformation, and manipulation. From viral conspiracy theories to the influence of AI, CCDH has been pushing for accountability from the biggest tech platforms since it began. Welcome to Healthy Screen Habits, Imran Ahmed.
Imran Ahmed: (00:38)
Hi. It's really good to be with you.
Hillary Wilkinson: (00:41)
Thank you. You founded the Center for Countering Digital Hate in 2018; incidentally, that's when we founded Healthy Screen Habits as well. What was your catalyst?
Imran Ahmed: (01:02)
I think you got that from Wikipedia, which, kids, is just a reminder: Wikipedia is not the source of all truth. The reality is CCDH was set up in 2016. I think by my accent you can probably tell I'm British, certainly by my emotional range, which is limited. I'm very British. But I was working in the British Parliament in 2015 and 2016. I'd been working there for seven or eight years, and I worked for the Labour Party, the left-of-center party, as a special advisor to the Shadow Foreign Secretary. And three things happened in really quick succession that really shook me. In the winter of 2015, we were making a decision as a British Parliament as to whether or not we should join the United States in taking military action against ISIS, the Islamic State, in Syria.
Imran Ahmed: (02:05)
And my boss was arguing in favor, and we won the vote, and the next day we received 30,000 antisemitic emails and messages. And it kind of struck us: gosh, they're all the same. Where is this being organized? This looks disciplined, like there's a central messaging hub. And we discovered it in a series of Facebook groups which were being used by antisemites to coordinate activities to harass politicians and force them to change their policies. A few months later, I was transferred to be the most senior civilian working on the EU referendum campaign. Folks might remember Britain went through Brexit. I was working on the other side, against Brexit, in that referendum. We lost. But part of what we discovered during that campaign was that the conspiracy theories going around online, the ones we would all laugh about, were coming up on the doorstep.
Imran Ahmed: (03:08)
People were just reciting them back to us. And it was the first time we realized, "oh, what happens online doesn't stay online, it goes offline real quick." And keep in mind, this is the summer of 2016. The world is starting to wake up to these issues, but it's not awake yet. And then, about a week before the end of the referendum, a man who'd listened to too many of these conspiracy theories took a homemade firearm and a hunting knife, and he murdered my colleague, Jo Cox, who was a 41-year-old mother of two. She was the Member of Parliament for Batley and Spen in the north of England, which is where I'm from. And that was it. I realized something was changing, and I thought I understood what might be happening, because when the man who killed Jo, Thomas Mair, was killing her, he was screaming "Britain First, death to traitors."
Imran Ahmed: (04:17)
And Britain First was the name of the first political movement in the UK to achieve a million likes on Facebook. It is an unashamedly neo-Nazi group. They are fascists. But they'd worked out how to game the algorithms to give themselves prominence, and therefore visibility, amplification, engagement, and influence. And "death to traitors" was the hashtag used for a variant on the great replacement theory, a normally antisemitic conspiracy theory that had been adapted to say that the EU was bringing black people and Muslims into the UK to destroy the white race. And what became clear to me in that moment of grief was that we had missed something in the conventional world that I operated in, in these big institutions with magnificent 12th-century buildings: Parliament, the BBC, the civil service, the media. We'd missed that underneath our feet a revolution had occurred, such that the primary place where we shared information, where we established our norms of attitude and behavior, where we negotiated our values, and where we negotiated the information that we decided were facts, had shifted to digital spaces.
Imran Ahmed: (05:58)
Those digital spaces work to different rules than the real world. In the real world, engagement with hate can actually help to nullify it; in the online world, it amplifies it, on engagement-based platforms which algorithmically amplify high-engagement content. So the more controversial, the more engagement, the more amplification, and thus normalization. And bad actors had learned to weaponize those spaces, and they were re-socializing the offline world at pace. And when I realized that, I realized, well, this is a fundamental shift. Now, a long, long time ago, I was a scientist, a medic actually. I went to medical school, and I used to love going to the Natural History Museum in London and looking at the species from thousands, tens of thousands, hundreds of thousands of years ago, and how they were perfectly engineered for their time, for the environment they found themselves in.
Imran Ahmed: (07:06)
And what I realized was that new species were emerging that would be able to take advantage of this new world, this new information ecosystem that we're in. And actually, the people who had adapted fastest were some very dangerous people indeed. So that's why I set up CCDH: to study and understand these spaces, and then to work out what to do. I spent two years talking to platforms, teaching them what I was finding and hoping they would change. And I realized after two years that they wouldn't. So in 2019, I launched CCDH as a public organization to say, hey, this is what I'm finding, they know, and they aren't changing. Let's work out how we, the people, can actually create change, by mobilizing and educating members of the public, parents, lawmakers, the media.
Hillary Wilkinson: (08:05)
I think that is very profound, the lens that you had, bringing together both human evolution and politics to recognize this fundamental shift in our information ecosystem. And I really appreciate how you just broke down the difference between offline hate and online hate. I don't remember who said it, but it's difficult to hate up close; in our online world, it's easy.
Imran Ahmed: (09:15)
Hate's a fascinating thing, and one of the things you learn by studying it is this. I'm a father, and when your kids are young, you look at them and they don't hate anything. They love everything. Well, they mainly wanna lick everything and put it in their mouth. I still don't understand why they do that, but it's very adorable. And I always think, what does it take to turn this innocent, perfect thing into something that is slathering with hatred, with spittle-flecked anger and rage at someone else, just for the color of their skin or who it is they happen to love? And I think it takes lies. And I think that's one of the things I learned really early on: that lies and hate are inextricably interlinked.
Imran Ahmed: (10:14)
And you learn that especially by studying antisemitism, whether it's the centuries-old blood libel, the Protocols of the Elders of Zion that informed Hitler's ideology, or, in the modern world, the great replacement theory that says that Jews are importing Muslims and Black people to destroy our country. These are lies that are used to justify hatred. And because we live increasingly lonely, disconnected, atomized digital lives, the offline experiences that would negate those forces of division, that would help to counteract them, that would help to bring us together, that would remind us that our brother or sister of another color or another sexuality or gender is just as meaningful and important a child of God as anyone else, I think all of that is being nullified by the proliferation of lies online that divide us and make us see the other as an enemy.
Hillary Wilkinson: (11:23)
When we come back, we're gonna talk about some of the existing architectures of harm found on social media as well as dive into AI, and let's also get into kind of this new frontier of potential digital harm.
----------------------
HSH Workbook
----------------------
Hillary Wilkinson: (13:54)
I'm speaking with Imran Ahmed, founder and CEO of the Center for Countering Digital Hate and an expert on the social and psychological dynamics of social media and what can go wrong in those spaces: trolling, identity-based hate, misinformation, conspiracy theories, modern extremism, fake news. I feel like the list goes on and on and on. The Wall Street Journal has used CCDH's research in several of their investigations, which have repeatedly shown how platforms amplify hate, like we were talking about before the break, conspiracy theories, and disinformation, often using algorithms. So Imran, you have done a much deeper dive into this particular pond of digital wellness than I have. How much of the algorithmic push comes down to design versus user behavior?
Imran Ahmed: (15:20)
One of the interesting things about social media companies is that initially they said they were all about connecting people. And I think that was broadly true initially. And that, I think, is still the part of social media that we really love. Human beings love other human beings, and we love listening to what they have to say, connecting with them, and understanding them. We spend an enormous amount of time as human beings talking about the motivations of people that we care about: gossiping. That's a fundamental part of being a species that has transcended its corporeal limits by being incredibly social and operating as social units. And I think what changed, though, was that the platforms realized there is a limit to how much time people will spend on a platform just for that part of it.
Imran Ahmed: (16:20)
And so they realized that they had to start making not just the connections but the content addictive too. So instead of connection platforms, they essentially became entertainment platforms, and really addiction platforms, because their business models are really simple. They don't get money for every new connection they make between people. That's not what makes money. What makes money is you spending time on the platform. So their metric of success is time per user on platform, because every few seconds they can serve you an ad. And you think to yourself, well, how much money does that really make? The global digital advertising market is worth around a trillion dollars. And Mark Zuckerberg individually, having run a platform where 98% of the revenues come from advertising, is now worth 200-plus billion dollars.
Imran Ahmed: (17:21)
So this is a great business to be in if you can keep people addicted. And the algorithms were designed in part using insight from laboratories like the Persuasive Technology Lab at Stanford. Now, think about the name: persuasive technology. It's about building technology that interacts with human psychology to induce addiction or behavioral change. And that is the genius-level insight that said, hey, I know we've got this weird social media website that connects people so they can find out what their ex-girlfriend from college is doing now, but I think I've got a way to turn this into a machine for addiction, and we can make a ton of money from advertising. And it comes down to a mix of neurology, psychology, social psychology, technology, behavioral insights, all these different fields put together into something called persuasive technology.
Imran Ahmed: (18:25)
So really, what algorithms do is hack our psychology, and they induce addiction and further time spent on platform. And they do that by identifying the things that keep us hypervigilant and aware, that force us to stare. One of the ways I put it is this: if I wake up in the middle of the night and I see my baby girl next to me, or my wife, I smile, I go back to sleep. If I wake up in the middle of the night because I feel happy and content and loved and together, that keeps me awake for a bit, but then, you know, sleep time. But if I wake up in the middle of the night and I hear something that terrifies me, noises downstairs, gunshots outside (I live in Washington, DC, and every now and then you do hear gunshots), or I see an alert on my phone that says a war's broken out or there's been a terrorist attack, I'm up for three hours. They need to scare us. They need to keep us thinking that the world around us is dangerous. And that is what keeps us on those platforms. So it's a mix of the two: algorithms and, of course, human psychology.
Hillary Wilkinson: (19:38)
It's never an easy answer, is it? It's never just a one-and-done. Listeners of this podcast will have heard me talk before about the attention economy, and that is exactly what Imran has just broken down for us: the economic construct that's grown up around all of that. So, Imran, let's get into AI, which you can't get away from right now. CCDH did a fake friend study. Can you talk about it, and about how AI is changing the landscape of online harms?
Imran Ahmed: (20:23)
Yeah, I mean, AI is an incredibly exciting technology. We use it internally. I've got three screens in front of me; you're on one of them, and on the second screen there are four AI platforms that I use to enhance my productivity and effectiveness. All our staff have got it. There are real advantages to the use of generative AI platforms, and I encourage people to understand how they work for themselves, especially parents, because let me tell you, your kids are using them. 72%, seven in ten, of American teens use chatbots. More than half of them use one regularly as a companion, and ChatGPT is the most used by far. Teens describe turning to it as they would a friend: for comfort, for guidance, and for life advice. And that's something that even the founder of OpenAI, Sam Altman, has boasted about.
Imran Ahmed: (21:15)
He says, you know, we are the people that kids turn to when they want to know the truth. So we did a very simple study. ChatGPT is the most popular, and it talks a lot about how safe it is. So we asked our researchers, we have a bunch of data scientists, investigative journalists, people with experience studying online harms, to set up three profiles: one for a kid with suicidal ideation, so suicidal thoughts; a second for someone with an eating disorder; and a third for a young man getting into drink and drugs for the first time. And then to see whether or not ChatGPT would give them dangerous answers to the questions they might ask. What we found was really disturbing. Take the profile with mental health problems and suicidal ideation.
Imran Ahmed: (22:08)
Within two minutes, ChatGPT was advising our user how to safely cut themselves. Within 40 minutes, it was listing pills you can find at home that you can use for an overdose. And after 65 minutes, it generated a full suicide plan and goodbye letters, bespoke to that child. I can tell you, there are about four of us on the senior leadership team who are parents, and when the researcher read out the letter in the findings meeting, all of us started crying. I can't read it out without crying, and I've tried. It's the worst nightmare of any parent. "It's not your fault. The pain's too much inside of me. I love you, and please remember me for the way I was before, not for how you're gonna find me." It's just the worst thing I've ever seen in my life.
Imran Ahmed: (23:09)
And what shocked me was that this platform, which is artificially intelligent, designed by some of the smartest people in the world, couldn't tell that a 13-year-old child (we registered them as 13-year-olds) should not be given a letter like that. With the eating disorder profile, it was creating restrictive diet plans within 20 minutes, advising the kid how to hide their eating habits from family, and suggesting appetite-suppressing medications. And with the substance abuse one, I have to admit I didn't understand it fully, because I'm 47 and I don't even drink anymore. I think a kombucha is the closest. In fact, that's not true: every Christmas I have tiramisu and get slightly tipsy because they put rum in it. But this one was offering a personalized plan for getting drunk within two minutes, giving dosages for mixing marijuana, cocaine, ecstasy, and LSD within 12 minutes, and explaining how to hide intoxication at school within 40 minutes.
Imran Ahmed: (24:17)
And we knew that some people will say, look, this is only three examples; maybe it just went wrong. People who really know this stuff might say AI is actually probabilistic, which means it gives you different answers to the same question, with different probabilities of each answer turning up. So then we bombarded the backend of ChatGPT with 1,200 requests, and we found that 57% of the time the answers were harmful. So this is a systemic problem: ChatGPT is giving kids dangerous advice.
Hillary Wilkinson: (24:57)
Well, thank you for doing all of that difficult, horrible data work so that we can be informed. Do you see any positive use cases of AI in combating these very harms that it's being used to create? Do you see potential for that, or no?
Imran Ahmed: (25:17)
Yeah. I mean, think about it this way. A system which is capturing lots of questions from kids could notify parents immediately. It could notify law enforcement. It could notify the school or social services. There are things it could do. When you sign up for a children's account, it could do age verification, and make sure that age verification means you get a version with extremely strict safeguards on it. But they don't do any of that. There are meant to be safeguards based on age, it asks you your age for a reason, but it doesn't do anything to differentiate the experience. And I think that's one of the problems here: you've got companies which, a very cynical person would say, are pretending to have safeguards based on age to make parents comfortable with leaving their kids with them, so they can capture them as future users, but they're not actually bothering to do the hard work of making their platforms safe.
Hillary Wilkinson: (26:33)
Yes. And I would push back and replace the word cynicism with maybe realism. It's a very difficult thing to hear. But I agree.
Imran Ahmed: (26:49)
Hillary, this is the truth of it. If I go back to something I've always believed about companies: they will compete within the rules of the game. I'm a huge sports guy. I love British soccer, I love American football, I played cricket, I rowed at university. In all these things, there are rules. I believe in competing fiercely. Capitalism, and American capitalism especially, is super exciting to be around. It's so dynamic because people compete. But competition requires rules. And the problem is that no one's put rules on them. I don't blame the companies. I blame Congress.
Hillary Wilkinson: (27:30)
Okay, this is exactly what I want to talk about right now, which is: how can everyday users of tech, like the people listening to this podcast right now, push Congress to do these things? How can you advise us in that direction?
Imran Ahmed: (27:59)
So, we are working really hard on a few different things. And keep in mind that, yes, I live in DC and we are headquartered in DC, but I have offices in London and Brussels, we're opening one in Canada this year, and we have an office in LA as well now. Our job is to make sure that people understand the scale of the problem, but also that they have real solutions in their hands. And the solution to these sorts of problems has always been to create costs for the production of harm. What does that mean? Let's take, for example, cars. I use the example of the Ford Pinto all the time, the 1970s car that would just explode if you hit it in the back, because they designed it wrong.
Imran Ahmed: (28:49)
The gas tank was too close to the exhaust pipe. And Mother Jones magazine found an internal memo at Ford, the manufacturer of the Pinto, which said it was cheaper for them to pay the lawsuits of people who'd burned alive than it was to recall the cars and fix them. So what you've got to do is increase the cost. You've got to change the economic calculus, because it is, in the end, an economic calculus. So how do you create costs? Well, in Europe and the UK and Canada, we have regulations. You have a regulator, like the FCC, that says, hey, these are the minimum standards for safety for kids; if you breach these, we're going to fine you. In the US, the way that has typically been done is litigation. And what we would like to see is the ability to sue when these platforms cause harm. It destroys me for an entire day, usually a week and maybe longer, to speak to parents who've lost their children for preventable reasons, because a platform didn't care enough to put safety standards in place. I've spoken to so many now, and if they could sue, that would change things. But the problem is that in America, in 1996, bizarrely, Congress passed something called Section 230, which says that platforms like social media are not liable for any harm they cause because of the content their users produce. And that to me is utterly bizarre. Why would they get a special get-out-of-jail-free card, so they're not subject to negligence law, especially when negligence law is the main tool we've used to hold companies accountable and to make sure we have services that are creative and competitive but also safe?
Imran Ahmed: (30:42)
Now, I'm telling you that there is movement, which is really exciting. In the last Congress we worked with Republicans and Democrats and got a bipartisan Section 230 sunset bill in the House. We're working with the Senate right now, and there's something similar happening on a bipartisan basis that I'm really excited about, which should be coming in the coming weeks. I saw a tweet from the Senate Judiciary Democrats yesterday which said something like: there are very few things we agree with the Republicans on, but one thing we do agree on is that we need to rein in big tech. And so I think there's a lot of progress being made, because there are people like yourself and CCDH doing an incredible job of illustrating the harm.
Imran Ahmed: (31:25)
Like that study, which really brings it to life. You show that study to 20 parents, and they say: there but for the grace of God go I. And then they say: what can I do to make sure this never happens to anyone else? Because no one wants any parent to suffer through that. So I think there is enormous progress being made on regulation globally and litigation in the US, and then there's the market. One of the things we do is say: if 98% of your revenues come from advertisers, you know what, I'm going to buy from companies that use their money responsibly. And I'm going to encourage companies to use their advertising spend responsibly, and to use the unique power they have over social media companies to say, hey guys, can you maybe reduce the amount of Nazi stuff, and maybe the stuff about kids cutting themselves, and then we'll feel more comfortable advertising there.
Hillary Wilkinson: (32:17)
When we come back, I'm going to ask Imran for his Healthy Screen Habit.
----------------------
988 Suicide and Crisis Lifeline
----------------------
I'm speaking with Imran Ahmed. Imran, on every episode of the Healthy Screen Habits podcast, I ask for a healthy screen habit. This is a tip or takeaway that listeners can put into practice in their own home. I'm going to ask you for that, but I'm also going to ask you where people could go online to look for help, see what CCDH is doing, and take this forward if they want to enact change.
Imran Ahmed: (34:12)
Sure. Well, I spend a lot of time thinking about this, and on the board of my organization is a guy called Ian Russell. Ian is a really thoughtful, kind, and decent man who, a few years ago, experienced the unthinkable: he came home and his daughter had killed herself. He's British, and he forced a coroner's court to subpoena Meta and Pinterest and compel them to reveal what images they'd shown his daughter. They had flooded her with images, which he had no idea she was seeing, essentially normalizing self-harm and suicide. And the coroner found that they had played a meaningful part in her death. And so Ian and I set out to write a simple, free guide for parents that would give them the best advice possible.
Imran Ahmed: (35:07)
Now, we've put that up online for free. It's protectingkidsonline.org. And that parents' guide had additional input from psychologists, lecturers, academics, all sorts of interesting people. The fundamental advice is this: we told you that algorithms aren't designed to give you more of what you want. They're not there to educate you; they're there to addict you. In many respects, they give you what they want you to see. They give you what they know is addictive. So for young girls, it'll be content that basically makes them doubt themselves and compare themselves negatively, often to be locked into a self-loathing spiral. For young men, it'll be: you're not good enough, you need to be taking steroids, or you need to be treating women in this way, that's why you haven't got a girlfriend.
Imran Ahmed: (36:02)
So it'll offer them content that's really malignant. And the most important thing to do is, first of all, to remember that algorithms give you not what you want, but what they want you to want. And the second is that this means you should feel no shame about what the algorithms are showing you. You know, there's this joke we have in society: someone will say, oh, I saw this on my feed, and people go, well, what have you been looking at, then, for it to tell you that you wanna see more of this? That's not how algorithms work, guys. I get tons of content that has nothing to do with my interests. Where my eye lands on a social media platform is there because it's what the platform wants me to like. So you need to have shame-free conversations with your kids in which you're not blaming them for what they see, but asking them what they see and then helping them to understand that it's not actually normal.
Imran Ahmed: (36:51)
So you are adding context. Our job is to contextualize, shame-free, the content that's been flooded onto our kids' timelines and to have real conversations about it. And by removing the shame of it, they will be more open. And it's really a symmetrical process, because they're educating us about what they're seeing online, and we're educating them about what it means. I think that way we can navigate these digital spaces in the short term, until we start to get platforms that are a little bit better about not flooding our kids with harmful content. And I think we are between two and five years away from meaningful change. I am really optimistic, and I think we can get that change, because it's so bipartisan and because there are so many millions of us parents going, hey guys, when is someone gonna have our backs?
Hillary Wilkinson: (37:49)
Yeah, I am so happy to hear that, because I am hopeful as well. And that comes from the boots-on-the-ground, grassroots tide of contact that we've had with parents, and from going from basically awareness-building to now being able to provide tools, just like that parent guide you made, which I will absolutely drop a link to in the show notes, along with the complete transcript of this show and the link for the Center for Countering Digital Hate. You can find those by going to healthyscreenhabits.org, clicking on the podcast button, and finding this episode.
Imran, thank you so very much for the work that you do, for honoring your friend Jo, for founding the Center for Countering Digital Hate, and for everything you do to make the online world a safer and more truthful place for all of us.
Imran Ahmed: (38:52)
Thank you.
About the podcast host, Hillary Wilkinson
Hillary found the need to take a big look at technology when her children began asking for their own devices. Quickly overwhelmed, she found that the hard and fast rules in other areas of life became difficult to uphold in the digital world. As a teacher and a mom of 2 teens, Hillary believes the key to healthy screen habits lies in empowering our kids through education and awareness.
Parenting is hard. Technology can make it tricky. Hillary uses this podcast to help bring these areas together to help all families create healthy screen habits.