Faculty of Social & Historical Sciences

Interview: Dr Jeffrey Howard on 'Dangerous Speech'

29 January 2021

Congratulations to Dr Jeffrey Howard (UCL Political Science) who has recently been awarded the Fred Berger Memorial Prize for his article on 'Dangerous Speech'. We catch up with Jeff to hear more about his research on speech and ask him if he thinks Twitter was right to ban Donald Trump permanently.

Hi Jeff, congratulations on being awarded the Fred Berger Memorial Prize for your article on ‘dangerous speech’. Can you summarise the article for us?

Thanks so much! The article intervenes in a debate about the limits of free speech, and in particular about how to think about speech that risks inspiring violence. In most liberal democracies, including Britain, it is a crime to advocate terrorism or to incite racial or religious hatred. But the United States is an exception, as the US Supreme Court has held for decades that the state is not allowed to suppress the expression of views it finds dangerous (except in certain emergency cases). So the laws we have in Britain would be swiftly struck down as unconstitutional in America. Who is right? Political philosophers working on free speech have tended to support the American position, arguing that citizens have a right to express (and hear) all viewpoints, even those that can inspire serious harm. My article argues that this position is misguided: our right to free speech is properly limited by a moral obligation not to endanger others. Accordingly, laws like the ones we have in Britain are fully compatible with respect for free speech. Whether they are a good idea all things considered, which requires factoring in concerns like effectiveness and counterproductivity, is a crucial further question, but it isn’t a question about our fundamental right to free speech.

We understand that this article is a part of a bigger piece of work on the subject of dangerous speech. Can you tell us a bit more about your work?

I’m currently writing a book manuscript that explores this topic of dangerous speech in much more detail. When, exactly, should the state restrict citizens’ speech on the grounds that it is dangerous? No one thinks that the answer is “never”; if I offer someone a million pounds to kill a person I hate, no one doubts that I should be held criminally responsible for soliciting the offence. But suppose I simply persuade him to murder through convincing arguments. Or suppose I don’t target my speech at anyone in particular, but put it in an online video (making arguments about why it is important to attack the particular group of people I detest). Or suppose I don’t explicitly advocate harm, but I simply argue that the people in question are morally inferior or dangerous. Where, exactly, should we draw the line? This is just one of many questions about how to specify the limits of free speech that I explore in the book, with particular attention to questions about terrorist propaganda and hate speech, offline and online. 

What are your future research plans?

I aim to develop and defend a philosophical account of the moral responsibilities of social media companies. Social media platforms have an enormous amount of power over public discourse, yet political philosophers have said remarkably little about the moral principles that ought to govern their decisions. My central concern is with the ethics of content moderation: the processes by which social media companies regulate the speech of their users. I seek to understand exactly why social media companies have a duty to regulate their users’ harmful speech, to pinpoint the exact categories of harmful speech that social media companies should regulate, and to explain the kinds of regulations that are appropriate.

You have recently written an op-ed piece for The Washington Post on Twitter’s permanent suspension of Trump. What are your thoughts on this?

I think Twitter is right to have rules against speech that glorifies and promotes violence—it would be acting irresponsibly if it didn’t have these rules. And it was fully appropriate to enforce these rules against Trump by removing his posts and suspending him. While there is a public interest in knowing what the American president has to say, there is also a public interest in preventing the incitement of violence, and it is a mistake to grant the former interest absolute priority over the latter.

That said, I am unsettled by the prospect of anyone facing a permanent ban from social media. Social media platforms constitute the new public square, and we should be uncomfortable about banishing people from public discourse forever. Even people in prison, I would argue, have a minimal right to participate in public discourse. Now, it’s within Twitter’s legal rights to ban anyone it wants; my point here is simply that it is typically morally wrong to do so. Of course, if the platforms were much smaller and more numerous, any one platform’s decision to ban people wouldn’t matter nearly as much. In this way, social media companies have become victims of their own success: by becoming the new infrastructure of civic discourse, they have acquired distinctive public responsibilities they otherwise would not have had.

You can read the full article in The Washington Post.

