The HUMBLE Lab
Human-centred Method for Bias-reducing Algorithms with Natural Language Processing and Qualitative Evaluation for Improved Health Outcomes of Underserved and Marginalised Populations.
HUMBLE is an interdisciplinary initiative developing and evaluating machine-assisted methods for critical, inclusive, and context-sensitive analysis of text data in public health and beyond. We focus on transparency, bias mitigation, and meaningful collaboration between human insight and computational tools.
We combine perspectives from behavioural science, public health, qualitative methods, data science, and critical AI studies, with a commitment to challenging techno-solutionism and centring human and societal values in the loop.
About HUMBLE
HUMBLE integrates natural language processing, qualitative analysis, and behavioural science to analyse large-scale free-text datasets, such as user feedback and lived-experience accounts. The work focuses on three layers:
- Identifying meaning in unstructured data
- Surfacing perspectives from underserved or marginalised groups
- Critically evaluating bias embedded in both datasets and AI models
At its core, HUMBLE reflects a foundational belief: that technology should support an understanding of human experience, and that it can also act as a mirror, helping us uncover the assumptions, blind spots, and power dynamics embedded in our systems.
Why HUMBLE?
Free-text data holds rich and underutilised insight into health systems and inequalities. But its potential is often overlooked because qualitative analysis is so time-intensive. Manual methods can’t keep up with scale; machine learning (ML) and natural language processing (NLP) approaches, while powerful, are constrained by algorithmic bias – bias that risks exacerbating disparities in healthcare access and outcomes.
While health psychology and the behavioural sciences have long addressed health inequities, their tools are rarely applied to understanding or mitigating algorithmic harms. HUMBLE helps fill this gap.
We are developing a novel AI–human collaboration framework that combines NLP, qualitative methods, and behavioural science tools to tackle this challenge directly. Our aim is not just more efficient analysis, but more equitable insight into how healthcare is experienced by those most likely to be excluded or misrepresented by dominant systems.
By minimising bias in both data and models, HUMBLE contributes to building health democracy, centred on the voices, needs, and realities of underserved populations.
Values and Vision
The following values underpin HUMBLE’s work on:
- Building machine-assisted analysis tools that amplify, rather than flatten, the voices of marginalised and overlooked groups
- Interrogating and minimising algorithmic bias in the messy, politicised realities of public health and socio-technical systems
- Engaging with communities, practitioners, and public bodies in the design and use of these tools
- Promoting transparency and frugality – doing more with less – in how AI systems are built
- Holding a critical line on AI’s role in a speed-obsessed, techno-solutionist climate that prizes efficiency and productivity above all