What is the difference between data and evidence?
Draft of 22nd July 2013
This is an apparently trivial distinction, but one that has assumed great importance in the current climate of evidence-based policy. Put crudely, most practices in evidence-based policy (EBP) are based on those arising from evidence-based medicine (EBM). In this parent discipline, evidence and data are (and absolutely must be) distinguished from one another, because EBM proponents claim that only evidence (and not, say, data) should be used to ground healthcare decisions. And, fortunately for them, a very simple distinction is used and accepted in their domain of practice. Information about individuals is data. But once aggregated via appropriate statistical work, and reported as the result of a trial, it is evidence. Put another way, a defining characteristic of evidence in EBM is that it is compiled from many individuals. So data is token, evidence type. Easy!
But this distinction is rather hard to translate into EBP. Part of the difficulty is that EBP has a much greater scope in terms of the kinds of interventions it is intended to investigate. While EBM deals with relatively few kinds of intervention (medicines, surgery, lifestyle modifications such as exercise, spoken interventions such as cognitive-behavioural therapy), EBP must be capable of dealing with a much greater range of systems. One very awkward issue thrown up immediately by this broadening of scope is that EBP may be applied to processes that can only be measured at a group level (economic interventions, for instance). Here, the data/evidence distinction used in EBM is clearly not suitable. And yet data and evidence are frequently distinguished in EBP contexts (some examples: the Data Unity Network; the South Downs National Park Authority).
This was the original starting point for my work on evidence. Given that evidential practices are generally local - like the data/evidence distinction, which has an EBM-local solution but no EBP solution - I wondered whether some work in the philosophy of evidence might assist with these kinds of translation issues. At the outset, I thought that this might well be possible, mainly because of the recent interest in evidence as a core philosophical issue (see, for instance, the Evidence project that ran at UCL in the 2000s). Another piece of evidence in favour of this (rather optimistic) idea was that many philosophers of evidence seem quite happy to make the distinction, suggesting that there might be some generally applicable grounds for telling the two apart.
Two good examples of the data/evidence distinction come from the philosophy of probability. The first comes from Deborah Mayo’s error-statistical philosophy of evidence, which is part of the frequentist Neyman-Pearson tradition (for general background, see Gillies 2000: chapter 5). She claims that data and evidence can be distinguished as follows:
“data x are evidence for a hypothesis H to the extent that H passes a severe test with x.”
(Mayo 2004: 79)
Here, then, evidence is just a special kind of data. Data becomes evidence when it stands in a particular testing relationship with an hypothesis.
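Mayo's severity requirement can be given a concrete, toy form. The sketch below is my own illustration, not from Mayo's text, under a standard textbook setup: an observed sample mean from normally distributed data with known standard deviation, where the severity with which the inference "mu > mu1" passes is the probability that the test would have produced a worse fit (a smaller sample mean) were mu exactly mu1.

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def severity(x_bar, mu1, sigma, n):
    """Severity with which 'mu > mu1' passes, given observed sample
    mean x_bar from n draws with known sigma: the probability of a
    worse fit (a smaller sample mean) were mu exactly mu1."""
    z = (x_bar - mu1) / (sigma / sqrt(n))
    return normal_cdf(z)

# Observed mean 0.4 from 100 draws with sigma = 1:
print(round(severity(0.4, 0.2, 1.0, 100), 3))  # 'mu > 0.2' passes severely: 0.977
print(round(severity(0.4, 0.4, 1.0, 100), 3))  # 'mu > 0.4' barely passes: 0.5
```

The point of the toy case is that, on this picture, the same data x count as good evidence for some hypotheses and poor evidence for others, depending on how severely each was tested.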
The second distinction from the philosophy of probability comes from subjective Bayesianism (see Gillies 2000: chapter 4). Here, our longed-for distinction between data and evidence turns out to be very easy to draw. The bad news is that it is easy because conflating the two would be the grossest kind of category error. In this interpretation of probability, the primitive concept is the acceptance of an evidential statement - not how that statement was arrived at, or what constitutes it. The grounds for the distinction are thus that evidence is an intelligible concept within the theory, while data is not:
“The Bayesian theory of support is a theory of how the acceptance as true of some evidential statement affects your belief in some hypothesis. How you came to accept the truth of the evidence, and whether you are correct in accepting it as true, are matters that, from the point of view of the theory, are simply irrelevant.”
(Howson and Urbach 1993: 419)
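The contrast can be made vivid with one line of arithmetic. The sketch below is my own toy illustration (the numbers are invented): the theory takes acceptance of the evidential statement E as primitive, and simply redistributes belief over the hypothesis H via Bayes' theorem.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: updated belief in H once the evidential
    statement E is accepted as true."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

# Toy numbers: prior belief in H of 0.1, and E nine times
# likelier under H than under not-H.
print(round(posterior(0.1, 0.9, 0.1), 3))  # belief in H rises to 0.5
```

Nothing in the calculation records how E was gathered, measured, or aggregated - which is exactly Howson and Urbach's point.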
Between the two cases, the following becomes apparent: while both versions offer us clear distinctions, neither looks useful in the EBP context. This means that my original plan - to take bits of the philosophy of evidence and use them to make sense of practical issues concerning the translation of evidential practices - won't work. The causes of this failure are harder to pin down. My suspicion is that it arises from some rather sloppy assumptions about the modularity and flexibility of pieces of philosophical work. Because philosophy of science is typically abstracted away from scientific practice, it is tempting to forget the familiar lessons about ideas being theory-laden, and to assume, falsely, that philosophical ideas are more-or-less generalisable.
But there is a valuable lesson here: the philosophy of evidence is no exception to the rule that our general conceptual frameworks influence the details of our work. The way that we model evidence shapes our account of evidence. And, as the two data/evidence distinctions above show, most so-called accounts of evidence are nothing of the kind: they are instead fragments of much more general philosophical models.
In this case, the recycling of ideas about evidence outside their original context is one of the reasons that our most basic questions about evidence have no answer. We've seen this in some detail here, but there are many other non-answers in the philosophy of evidence. My conclusion is a really simple one: philosophy is not modular. We can't just pick out the bits we like and thoughtlessly apply them to other contexts. Here, most philosophical work about evidence fails to help the practitioner precisely because this work does not really concern evidence at all. And this means that we cannot expect to say anything very useful about evidence if our work is really about something else altogether.
Gillies, D. 2000. Philosophical Theories of Probability. Routledge.
Howson, C. and Urbach, P. 1993. Scientific Reasoning: The Bayesian Approach. Open Court.
Mayo, D. 2004. "An Error-Statistical Philosophy of Evidence," in Taper and Lele (eds.) The Nature of Scientific Evidence: Statistical, Philosophical and Empirical Considerations: 79-118.
Sackett, D. et al. 1996. "Evidence based medicine: what it is and what it isn't." British Medical Journal 312: 71.
Page last modified on 23 Jul 13 15:56 by Brendan Clarke