Democracy, technocracy and AI safety
The Centre for Responsible Innovation welcomes Lou Lennad and Connor McGlynn from Harvard University.
Join the Centre for Responsible Innovation for talks from Connor McGlynn and Lou Lennad. Connor’s PhD research examines imaginaries of AI risk, investigating how the futurity of AI came to be problematized in policy settings, while Lou’s research interests span the governance of science and technology, especially human genetics and neurotechnology, and the science and technology of governance.
Panel discussion chaired by Lucy Maun.
Unbundling TESCREAL: Towards a critical genealogy of AI safety
Michel Foucault (1975) described his project of critical genealogy as an attempt to write a “history of the present”. His aim was to understand how we come to know the world through particular grids, for example medicine, psychiatry, criminality or sexuality, by tracing how these conceptual architectures arose and took hold of the collective imagination. It is in the spirit of this approach that critical scholars Timnit Gebru and Émile Torres (2024) have proposed the “TESCREAL bundle” as an analytic for the genealogy of our current ways of understanding AI, and of the “ideologies… driving the race to attempt to build AGI” in particular. Standing for “Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism,” the TESCREAL bundle is meant to shed light on how the AI safety movement has built a framing that simultaneously cautions against AI’s existential risks and enables reckless AI development. The bundle has gained prominence since the original article’s publication in 2024, acquiring its own Wikipedia page and being cited by prominent figures in the AI community.
In this paper, I argue that while the “TESCREAL bundle” attempts to capture an important and understudied epistemic community, it represents a flawed approach to critique. Drawing on historical research and ethnographic fieldwork with the AI safety community, I interrogate the TESCREAL bundle from two angles. First, I show that it fails to trace a convincing intellectual history of AI safety, fixating on the context in which these ideas first appeared rather than examining how they came to achieve uptake among wider publics. As a result, rather than providing a critical genealogy in the mode of Foucault, the TESCREAL account commits what philosophers have called the “genetic fallacy” (Srinivasan 2019), treating the origins of ideas as a determinant of their truth-value. Second, I argue that the TESCREAL bundle epitomizes a “hermeneutic of suspicion” (Ricoeur 1965) approach to critique, that is, one characterized by the exposure of lies to uncover the hidden truth beneath. This is exactly the approach that STS scholar Bruno Latour (2004) saw as “running out of steam”: by undermining the legitimacy of the critique in the eyes of the actors, it eliminates the possibility of generating a shared object of concern. These pitfalls, I suggest, are avoided by interpretivist approaches to critique that have been used in the STS literature and that come closer to Foucault’s aim of writing a history of the present.
A science of the people? Behavioral expertise at the European Commission
Over the past decade, the European Commission has institutionalized the production and use of “behavioral insights” to inform, frame, and assess EU policies. The Commission’s Joint Research Centre, whose mission is to supplement the policy process with robust scientific evidence, now includes two behavioral units: the “EU Policy Lab,” which produces behavioral knowledge for specific policy areas, and the “Science for Democracy and Evidence-informed Policymaking” group, which uses behavioral knowledge to rethink the process of policymaking itself.
How does this behavioral paradigm relate to the Commission’s long-standing quest for legitimacy?
Drawing on six months of fieldwork at the Joint Research Centre, I document the history and practice of behavioral expertise in the EU and reflect on how it resonates with the challenges and aspirations of the EU executive branch. I show that the concern with what constitutes good representation, in both its scientific and political senses, is central to the discourse of the behavioral units as much as to that of their critics, all of whom situate their arguments within the Commission’s overarching vision of “Better Regulation”. This shared preoccupation points to a broader question: how does a technocratic structure adapt to a growing democratic responsibility, one that places distinctive constraints on the production of policy-relevant expertise?
The democracy/technocracy tension out of which behavioral science developed at the European Commission relates, I argue, to a distinctively European political culture. In the United States, by contrast, the relationship between expertise and policy has been problematized around the oxymoronic goal of ‘governing free people’. This comparative insight contributes to a wider conceptual discussion in Science and Technology Studies (STS) on the co-production of scientific paradigms and normative philosophies.
Lou Lennad
Lou is a PhD student in Public Policy at Harvard Kennedy School and a Fellow in the Program on Science, Technology, and Society. She holds an MSc in STS from UCL and bachelor’s degrees in Political Science, Genetics, and Law. She has also worked at the ethics committee of INSERM, France’s national institute for health and medical research, and at the OECD, and is an affiliate of the Minda de Gunzburg Center for European Studies and the Weatherhead Center for International Affairs.
Connor McGlynn
Connor McGlynn is a PhD student in Public Policy (Science, Technology and Policy Studies) at Harvard University. During the 2025-26 academic year he has been an AI Policy Fellow at the Institute for AI Policy and Strategy (IAPS) in Washington, D.C., and a Winter Fellow at the Centre for the Governance of AI (GovAI) in London. His PhD research examines imaginaries of AI risk, investigating how the futurity of AI came to be problematized in policy settings.