Docs & Dialogue I: AI Killer Robots
At the opening session of the Docs & Dialogue series, experts on AI convened to watch Immoral Code with the film’s director and an expert panel. We bring you highlights from the conversations…
Immoral Code is a documentary that contemplates the impact of killer robots in an increasingly automated world – one where machines make decisions over who to kill or what to destroy. The film examines whether there are situations where it’s morally and socially acceptable to take life, and importantly – would a computer know the difference?
A Dangerous Shift from Moral Discourse to Security Rhetoric
In recent years, conversations about AI have narrowed from big-picture questions – such as socio-technical concerns about equity – to ethics, safety and economic growth; and now, increasingly, to security, framed by elite groups. The result is even greater constraint on public discussion of the direction of travel, and the values, shaping AI development.
This rhetorical shift is happening in a time of multipolarity, where “AI nationalism” has become the dominant assumption. The language of securitisation prevails, and the outcome? The “democratising of violence”, where the need for national security in an increasingly unstable world is becoming the justification for further and expedited development of AI weaponry.
The language of patriotism and militarisation is increasingly positioned as the solution. At the same time, companies are quietly dropping their commitments not to use AI in military contexts. International conversations have produced little beyond toothless consensus and voluntary contributions. Nothing is binding—and realpolitik raises hard questions: How can you trust that your adversary isn’t developing these systems in secret?
Our literacy and numeracy skills are gradually declining – a deeply dangerous outcome
The Organisation for Economic Co-operation and Development (OECD) has found that adult literacy and numeracy levels have stagnated over the past 15 years (OECD, 2024), undermining our ability to critically assess whether these systems are of value or of detriment to us. These effects – on social cognition, moral reasoning, and critical faculties – don’t happen overnight: they are a “slow violence.” As these core skills erode, so does our capacity to question and interrogate AI systems.
Without critical thinking skills, we are being taken advantage of. We are, to put it bluntly, getting easier to manipulate and govern (Lee et al., 2025).
This shift is not neutral. It’s a product of decades of neoliberal systems – market logics that reward acceleration, competition, and scale. The public is told we must act quickly or fall behind.
Who benefits from accelerating this arms race?
Such an environment is fertile ground for powerful industry rhetoric about the need for speed. Ultimately, this is a play on words – on media narratives about security threats from an unstable world – used to publicly justify the development of AI weapons. At the same time, governments have shifted from conversations about social impact (bias, automation of jobs, etc.) to AI nationalism, resilience, and securitisation. The focus is no longer on the potential of technology for public good – such as health – but on who gets there first, and how to protect against others. This aligns with the accelerationist movement.
There are clear overlaps between the development of current AI systems and of automated weapons systems. Developers and companies are hyping their results, showcasing the impressive whilst often masking error-prone, harmful realities.
International forums are riddled with claims about accuracy and reliability, of the kind used to justify previous weapons systems despite evidence of high civilian casualties in practice (Stop Killer Robots, 2021). Especially when those forums include non-specialists, it is easy to be dazzled by the promise of the tech. But we need to remember that data from real-world deployment tells a different story – one that has historically been vital in building momentum for regulation.
Audience example: complexity in practice vs. principle
Real-world examples parallel the moral dilemmas posed by the use of autonomous weapons systems. Wars are imposed, not chosen. Modern warfare relies heavily on drones, but these are vulnerable to jamming. While fibre optic cables can solve this issue, they are often impractical due to their length and fragility. As a result, one solution being explored and implemented, for example, in Ukraine (Pultarova, 2025) is the use of autonomous drones that can kill based on pre-programmed criteria – for example, “if a person is seen in this zone, they are classified as an enemy.” But a person dressed as a soldier and carrying a weapon is not necessarily an enemy; attacking them may mean killing one of one’s own soldiers, or a surrendering prisoner. Such edge cases show how complex the problem is, underlining the need for rules and legally binding agreements.
It is important to place such examples in context. There is a distinction between what is happening in practice, under pressure (descriptive), and what is morally or normatively acceptable (prescriptive).
A participant in Immoral Code stated, “I’d kill in self-defence”. This was not judged as morally wrong – it was understood as an attempt to survive. But that does not justify the development or deployment of autonomous killer drones. The focus, therefore, must lie in addressing the root causes, such as escalation and conflict drivers, in order to prevent these edge cases from arising in the first place.
Human Control
There must exist firm rules to ensure meaningful human control over weapons systems. This is crucial to avoid crossing a moral red-line where machines make life-and-death decisions without human judgment. As the Red Cross argues, a person’s status on the battlefield can shift quickly if they surrender or become injured, but a fully autonomous weapon might still kill them, breaching international law (International Committee of the Red Cross, 2022). In Gaza, AI-powered systems have reportedly been used to generate target lists at speed, with minimal human review – sometimes just 10 seconds (Abraham, 2024). Targeting decisions were allegedly based on crude criteria such as gender, raising serious concerns about the absence of meaningful human oversight. This rushed approach risks extreme digital dehumanisation and violates human dignity, underscoring the urgent need for clear, enforceable rules to govern AI use in warfare.
Unlike nuclear weapons, which are difficult to build due to the need for rare materials and highly specialised infrastructure – and comparatively easier to monitor through inspections – autonomous weapons are easier to develop and harder to detect. The barriers to entry are much lower, as they rely on widely available technologies like AI and drones. This reversal makes global regulation far more difficult: trust is weaker, and the risk of covert development is higher. As a result, the strategic self-interest of states may ultimately block effective regulation.
But how do we slow the pace?
Shaping the public conversation
The core issue is how “intelligence” is defined in AI. In many technical circles, it means reward maximisation—a narrow, inadequate notion that ignores decades of research showing human intelligence is social, embodied, emotionally rich, and morally situated.
LLMs and similar systems are “computational Frankensteins”—stitched-together architectures of planning modules, memory systems, and language generators. They are not coherent cognitive agents. They are text-output machines, not embodied, not reflexive, and certainly not socially or morally aware. They don’t grow, learn contextually, or develop like humans. They lack moral cognition—no sense of fairness, empathy, or aversion to harm. Instead, they optimise reward functions: a reductive goal.
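To make the reductiveness concrete, here is a minimal, purely illustrative sketch in Python (the actions and reward values are invented for this example, not drawn from any real system) of what “intelligence as reward maximisation” amounts to: an agent that simply picks the highest-scoring action, with fairness, empathy, or aversion to harm playing no role unless someone has already converted them into numbers.

```python
# Illustrative sketch only: "intelligence" reduced to reward maximisation.
# The actions and reward values are hypothetical, chosen to show that morally
# salient distinctions are invisible to a pure optimiser unless a human has
# already encoded them as numbers.

def reward(action: str) -> float:
    # Stand-in reward function; in real systems this is learned or engineered.
    return {"help": 1.0, "ignore": 0.0, "deceive": 1.5}.get(action, 0.0)

def choose_action(actions: list[str]) -> str:
    # Pure maximisation: pick whichever action scores highest, nothing more.
    return max(actions, key=reward)

print(choose_action(["help", "ignore", "deceive"]))  # prints "deceive"
```

On this definition, the “intelligent” choice is simply whatever the reward function happens to rank highest, which is exactly the narrowness the panel objected to.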
When we talk about “aligning” AI with human values, we must first ask: what vision of intelligence are we aligning it with? If it’s just an optimiser, we are not even in the same moral universe.
AI is best suited as an augmenter, not a replacer
When the public is consulted, for example, in healthcare, people do not want AI to replace human professionals. They value human involvement and see AI as a supportive tool—not as a sole decision-maker (Thornton et al., 2024; Varnosfaderani and Forouzanfar, 2024; Frost et al., 2025; Looi, 2025). There is a broader public desire, therefore, for technology that enhances, rather than overrides, human judgment. If public trust grows in one area, such as AI use in healthcare, it may inadvertently legitimise riskier applications, such as autonomous weapons. The reverse also holds: if riskier applications like autonomous weapons are legitimised or rejected, this will shape public trust in other sectors—such as healthcare, social care, education, criminal justice, welfare, and housing.
Hype-busting and making the argument for (in)equity
Building trust requires actively democratising AI development through inclusive design, public consent, and governance that centres the public interest. Marginalised communities often distrust AI because technologies have historically been imposed on them, not designed with them in mind. It is rational for marginalised and underserved communities to be more sceptical about AI that affects their lives. The answer is not persuasion but inclusion: inclusion through co-development, monitoring, and oversight of the actions taken on the basis of AI outputs – nothing about us without us (Jürgens, 2008). We have much to learn from other fields here, including disability justice, public health and data ethics.
“The hype is driven by big companies showing off flashy demos, not solving real problems. Some of these systems are clumsy and error-prone – like digital landmines. A landmine ‘makes a decision’ to explode based on a trigger, and we have international laws banning them. So why not treat some of these AI systems similarly – not as intelligent agents, but as dangerous, unreliable machines? Wouldn’t that shift the conversation from awe to accountability? If we treat some of these AI systems as simply clumsy and error-prone, it becomes much easier to convince people that they’re just not very good.”
Collective Deliberation
Lastly, in order to counteract the speed-driven, accelerationist approach to AI, the focus should shift to collective deliberation, fostering pluralistic, empathetic, and inclusive decision-making processes. Scale also needs to be interrogated, specifically the obsession with building ever-larger models, as this pursuit often fuels urgency and bypasses critical reflection.
There was a question about whether we can identify shared concerns across different domains of AI development: for example, in medical, healthcare, or social care applications.
This is where multidisciplinary spaces come in. The first Docs & Dialogue series is not about technology. It’s about capacity building. About slowing down the story. About creating room for critique, not just within disciplines, but between them. We need spaces where those outside of tech can speak with confidence, and where those inside can be held accountable.
And that’s where I want to end: with HUMBLE. Not just as a framework, but as a method of doing this work: human-centred, uncertain, modest in its claims, and committed to collaboration. This isn’t just a technical debate. It’s a political, ethical, and epistemic one. And we all have a role to play in naming what’s happening, and challenging it.
What can we do as academics, funding bodies and citizens?
Debunking AI – What It Is and Isn’t
Artificial Intelligence is not intelligent. Machine learning is not learning.
I asked ChatGPT for its opinion (see how bounded we are by the language of anthropomorphising?). A very primitive prompt: “How some people argue that Artificial Intelligence shouldn’t be called intelligence at all. What do they suggest?”
Some critics argue that “Artificial Intelligence” is a misleading term because it suggests human-like reasoning, understanding, or consciousness—which current systems do not possess. Instead, they propose alternatives such as:
- Automated Statistical Systems – to emphasise that AI relies on data-driven pattern recognition, not cognition.
- Computational Pattern Recognition – highlighting that AI detects correlations, not meaning.
- Machine-Based Inference Tools – to clarify that these are tools built by humans to automate specific tasks.
- Algorithmic Systems – a more neutral term that strips away the metaphor of “intelligence.”
The core argument is that using anthropomorphic terms like “intelligence” or “learning” creates false expectations, fuels hype, and obscures the very human labour, bias, and judgment embedded in these systems. (ChatGPT Version 1.2025.133)
These are metaphors that benefit the industry, not the public. And here lies a responsibility—especially for data scientists and those working at the intersection of computation and society:
- To repackage, re-explain, and de-hype the technology
- To reveal its scaffolding (earth, human resources and expertise)
- To challenge the dominant narrative: that bigger, faster, and “smarter” is always better.
Instead, we should focus on AI’s best and most meaningful uses—not to replace, but to augment human capabilities. AI can serve as a powerful pattern recogniser, helping us see what we cannot – from administrative automation of clinical notes to detecting abnormalities in diagnostic medical imaging.
Moreover, by training on historical data, AI systems often expose the biases embedded in past decisions—such as unequal treatment of patients. When algorithms repeat those patterns, they don’t invent the bias—they reveal it. This can be a powerful and undeniable intervention: AI can hold up a mirror, making visible uncomfortable truths we may prefer to ignore, but now have the opportunity to change.
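As a rough illustration of this mirror effect, the sketch below uses synthetic data (the scenario, variable names and numbers are all invented for the example; NumPy and scikit-learn are assumed to be available). A simple model is trained on hypothetical historical referral decisions in which one patient group was referred less often at the same level of clinical need; a quick audit of its predictions then makes the inherited disparity visible.

```python
# Illustrative sketch with synthetic data: a model trained on biased historical
# decisions reproduces the disparity, and a simple audit makes it visible.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)          # two hypothetical patient groups, 0 and 1
severity = rng.normal(0.0, 1.0, n)     # clinical need, same distribution in both groups

# Hypothetical historical decisions: group 1 was referred less often at equal severity.
referred = (severity - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

# Train a model to imitate those historical decisions.
X = np.column_stack([group, severity])
model = LogisticRegression().fit(X, referred)

# Audit: predicted referral probability for each group at identical (average) severity.
probe = np.array([[0, 0.0], [1, 0.0]])
print(model.predict_proba(probe)[:, 1])  # a visible gap between the two groups
```

The point is not the model itself but the audit step: interrogating the system’s outputs against a fairness question surfaces a pattern that was already present in the historical decisions.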
We need critical discourse and public engagement – not hype and inevitability
Post-event reflections from our participants touched on themes of AI’s efficacy and accuracy, politicisation, and collective responsibility.
We need a stronger critical discourse to challenge the accelerationist push in AI—driven by scale, speed, and the myth of inevitable Artificial General Intelligence. These ideas often collapse under scrutiny, relying on vague metaphors that benefit power, not understanding.
Since late 2022 and the rise of ChatGPT, AI has entered an industrial phase. The stakes have become real, and more disciplines—like linguistics, psychology, and biology—are finally engaging.
We’re now seeing robust critiques of core myths like “general intelligence” and the brain–machine analogy, which is increasingly exposed as a flawed metaphor.
Accelerationism isn’t neutral—it’s a neoliberal, techno-political project about consolidating power and optimising profit. And it does not work: think fast food, fast fashion, on-demand everything. From convenience to conflict, the logic remains the same: scale fast, reflect later.
Deflating its language and assumptions is essential to reclaiming space for democratic, public-interest technology.
- Paulina Bondaronek
- Tristan Anderson
- Elio Yagüe Raguz
- Mel Ramasawmy
- Tris Papakonstantinou
- Becky McCall
- David Leslie, Director of Ethics and Responsible Innovation research at the Turing Institute
- Isadora Cruxen, Lecturer and Researcher at Queen Mary’s, Data Against Feminicide Collaboration Research Project
- Elizabeth Minor, Adviser at Article 36, Policy Manager at Stop Killer Robots
Abraham, Y. (2024). ‘Lavender’: the AI Machine Directing Israel’s Bombing Spree in Gaza. [online] +972 Magazine. Available at: https://www.972mag.com/lavender-ai-israeli-army-gaza/ [Accessed 5 Sep. 2025].
Frost, E.K., Aquino, Y.S.J., Braunack‐Mayer, A. and Carter, S.M. (2025). Understanding Public Judgements on Artificial Intelligence in Healthcare: Dialogue Group Findings From Australia. Health Expectations, 28(2). doi:https://doi.org/10.1111/hex.70185.
International Committee of the Red Cross (2022). What you need to know about autonomous weapons. [online] Available at: https://www.icrc.org/en/document/what-you-need-know-about-autonomous-weapons.
Jürgens, R. (2008). ‘Nothing About Us Without Us’—Greater, Meaningful Involvement of People Who Use Illegal Drugs: A Public Health, Ethical, and Human Rights Imperative.
Lee, H., Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R. and Wilson, N. (2025). The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects from a Survey of Knowledge Workers. [online] doi:https://doi.org/10.1145/3706598.3713778.
Looi, M.-K. (2025). What does your patient think about AI in the NHS? BMJ, [online] 389, p.r391. doi:https://doi.org/10.1136/bmj.r391.
OECD (2024). Do Adults Have the Skills They Need to Thrive in a Changing World?: Survey of Adult Skills 2023. [online] Paris: OECD Publishing. Available at: https://doi.org/10.1787/b263dc5d-en [Accessed 5 Sep. 2025].
Pultarova, T. (2025). How Ukraine’s Killer Drones Are Beating Russian Jamming. [online] IEEE Spectrum. Available at: https://spectrum.ieee.org/ukraine-killer-drones.
Stop Killer Robots (2021). Problems with autonomous weapons. [online] Stop Killer Robots. Available at: https://www.stopkillerrobots.org/stop-killer-robots/facts-about-autonomous-weapons/.
Thornton, N., Binesmael, A., Horton, T. and Hardie, T. (2024). AI in Health care: What Do the Public and NHS Staff think? [online] The Health Foundation. Available at: https://www.health.org.uk/reports-and-analysis/analysis/ai-in-health-care-what-do-the-public-and-nhs-staff-think.
Varnosfaderani, S.M. and Forouzanfar, M. (2024). The Role of AI in Hospitals and Clinics: Transforming Healthcare in the 21st Century. Bioengineering, [online] 11(4), pp.1–38. Available at: https://www.mdpi.com/2306-5354/11/4/337.
Docs & Dialogue II: AI, Love and Loneliness
Following the success of our first event, we were delighted to host the second instalment of the Docs and Dialogue series.
Artificial intelligence is present not only in our work but also in our personal lives. From companionship and therapeutic support to even love, it raises questions about how we connect with one another, and what it means to seek comfort through machines. In this session, we will explore these themes through film and discussion, considering the human–AI relationship as saviour, companion, entanglement, and downfall.
