# Guide to Publications of Professor Donald A. Gillies

*In this guide, I have selected some papers (articles and chapters of books) and arranged them by topic. Under each heading I have given a brief account of the content of the papers on that topic, so that the reader can judge whether a particular paper is likely to be of interest. The papers are referred to by date of publication and title, e.g. [1991c] Intersubjective Probability and Confirmation Theory, or [1996] Chs. 4 & 5 of Artificial Intelligence and Scientific Method. Full details of the place of publication are given in the complete list of publications. If a paper is included, a downloadable version will usually be obtainable by clicking on the paper’s title. For articles from 2001 onwards, this version is normally the one in the UCL Eprints repository (**http://eprints.ucl.ac.uk**). For copyright reasons, this repository accepts the version which the author sent to the publisher rather than the printed version. So the content of the UCL Eprint should be the same as that of the printed version, but the formatting and pagination may be different.*

**Contents**

**1. Philosophy of Probability**

**1.1 Propensity Theory of Probability**

**1.2 Intersubjective Probability**

**1.3 Critique of Bayesianism**

**1.4 Non-Bayesian Confirmation Theory**

**1.5 Historical Aspects of Bayesianism**

**2. Philosophy of Artificial Intelligence (AI)**

**3. General Questions in Philosophy of Science**

**3.1 The Problem of Demarcation**

**3.2 The Problem of Induction**

**4. Philosophy of Logic and Language**

**5. Philosophy of Mathematics**

**5.1 Intuitionism, Constructivism, and Reductionism**

**5.2 Empiricist Philosophy of Mathematics**

**5.3 Mathematical Growth and Heuristics**

**5.4 Historical Development of Philosophy of Mathematics**

**6. Philosophy of Economics**

**7. Philosophy of Science for Medicine**

**8. Causality and Bayesian Networks**

**9. Research Assessment and Organisation**

---

**1. Philosophy of Probability**

**1.1 Propensity Theory of Probability**

I started research in 1966 when I joined Sir Karl Popper’s department at the London School of Economics to do a PhD with Imre Lakatos as my supervisor. At that time Lakatos was working in the general area of the philosophy of probability on a paper which was published in 1968 as ‘Changes in the Problem of Inductive Logic’, while Popper had recently (1957 & 1959) introduced the Propensity Theory of Probability. Popper had earlier supported an objective theory of probability along the lines of von Mises’s frequency theory. In the 1950s, however, while continuing to support an objective approach to probability, he decided that von Mises’s frequency theory needed to be changed into a propensity approach. Following Popper here, I noted that von Mises’s frequency theory was based on an operationalist approach to scientific concepts. I therefore criticized this approach ([1972a] Operationalism), and suggested instead that the connection between probability and frequency be established by adopting methodological falsificationism ([1971] A Falsifying Rule for Probability Statements). This account was developed in detail in my book [1973] An Objective Theory of Probability. The theory fitted well with Kolmogorov’s axiomatic approach to probability and with classical statistics. However, it differed in some points from Popper’s propensity theory, so I did not call it a propensity theory. Later on, however, it seemed to me that the term ‘propensity’ was not being used exclusively for Popper’s theory but rather to describe a family of theories of probability to which my own theory belonged. I gave an overview of some of these theories in [2000b] Varieties of Propensity. In [2000] Ch. 7 of Philosophical Theories of Probability, I gave a short account of my original objective theory of probability, now re-described as a long-run propensity theory.
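As background for the idea of a falsifying rule, here is a minimal sketch (my own illustrative formulation and function name, not the exact rule of the 1971 paper): treat the hypothesis that independent trials have success probability p as practically falsified when the observed relative frequency lies more than a few standard deviations away from p.

```python
import math

def practically_falsified(successes, n, p, k=3.0):
    """Illustrative falsifying rule: reject the hypothesis that the
    probability of success is p if the observed relative frequency
    lies more than k standard deviations from p (normal approximation)."""
    freq = successes / n
    sigma = math.sqrt(p * (1 - p) / n)  # std dev of the relative frequency
    return abs(freq - p) > k * sigma

# 10,000 tosses of a hypothetically fair coin (p = 0.5):
print(practically_falsified(6000, 10000, 0.5))  # frequency 0.60: True
print(practically_falsified(5050, 10000, 0.5))  # frequency 0.505: False
```

With 10,000 tosses, the three-sigma band around 0.5 has half-width 0.015, so a relative frequency of 0.60 counts as a practical falsification while 0.505 does not.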

**1.2 Intersubjective Probability**

The other interpretation of probability which I have developed is an extension of the subjective theory of probability of Ramsey and De Finetti. Both Ramsey and De Finetti interpret probability as the degree of belief of a particular individual. The idea of the intersubjective interpretation is to consider the degree of belief not of an individual but of a social group which has reached a consensus. Ramsey and De Finetti used the Dutch Book Argument to found their conception of probability, and this argument can be extended from individuals to social groups. In [1991c] Intersubjective Probability and Confirmation Theory, it is argued that intersubjective probability is the appropriate interpretation of probability for confirmation theory. [1991b] Intersubjective Probability and Economics considers applications of intersubjective probability in economics; it was written with my wife, Grazia Ietto-Gillies, who is an economist.
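For readers unfamiliar with it, the core of the Dutch Book Argument can be put in a few lines (a standard textbook form, not a quotation from the papers cited). Suppose an agent’s betting quotients on an event and its negation fail to sum to one:

```latex
% Betting quotients violating the additivity axiom:
q(A) + q(\neg A) \;=\; 1 - \varepsilon, \qquad \varepsilon > 0.
% An opponent buys from the agent a unit bet on A and a unit bet on \neg A,
% paying 1 - \varepsilon in total. Exactly one of the two events occurs,
% so the agent must pay out exactly 1, for a guaranteed loss of
1 - \bigl(q(A) + q(\neg A)\bigr) \;=\; \varepsilon \;>\; 0.
```

Coherence therefore forces betting quotients to obey the probability axioms. The intersubjective extension takes q to be the agreed betting quotient of a consensus group, which can be Dutch Booked collectively if its common quotients are incoherent.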

**1.3 Critique of Bayesianism**

Popper was always a strong critic of Bayesianism, and indeed his falsificationist position agrees very well with classical statistics but not at all with Bayesian statistics. As a student of Popper’s, I began as a critic of Bayesianism, and, in [1990b] Bayesian versus Falsificationism, I defended falsificationism against the Bayesianism for which Howson and Urbach had argued in their 1989 book. Later, however, the success of the theory of Bayesian Networks in AI led me to become more sympathetic to Bayesianism. This issue is considered in [1998c] Debates on Bayesianism and the Theory of Bayesian Networks. I came to the conclusion that Bayesianism might be valid, but only under certain specific conditions. [2001e] Bayesianism and the Fixity of the Theoretical Framework discusses one of these conditions.

**1.4 Non-Bayesian Confirmation Theory**

A critic of Bayesianism should not just be negative, but has an obligation to try to develop an alternative, non-Bayesian theory of confirmation. My approach to this task was to try to simplify and extend Popper’s theory of corroboration. I introduced a new principle for confirmation theory in [1989b] Non-Bayesian Confirmation Theory and the Principle of Explanatory Surplus. I learnt from a very helpful correspondence with I. J. Good about the Turing-Good weight of evidence function, and showed that it was closely related to Popper’s measure of the severity of a test in [1990a] The Turing-Good Weight of Evidence Function and Popper’s Measure of the Severity of a Test. These strands of thought were synthesised in [1998b] Confirmation Theory, which develops a theory of confirmation and shows that it can be applied successfully in artificial intelligence.
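For orientation, the Turing-Good weight of evidence in favour of a hypothesis H provided by evidence E is standardly defined as the logarithm of the likelihood ratio, W(H:E) = log[P(E|H)/P(E|¬H)], with Good favouring base-10 logarithms (units of "bans"). The sketch below simply computes this standard quantity; it is not code from the paper.

```python
import math

def weight_of_evidence(p_e_given_h, p_e_given_not_h, base=10.0):
    """Turing-Good weight of evidence W(H:E) = log[P(E|H) / P(E|not-H)].
    Positive values favour H, negative values favour not-H."""
    return math.log(p_e_given_h / p_e_given_not_h, base)

# Evidence four times as probable under H as under not-H:
w = weight_of_evidence(0.8, 0.2)
print(round(w, 3))  # prints 0.602, i.e. log10(4) bans

# Evidence equally probable either way carries zero weight:
print(weight_of_evidence(0.5, 0.5))  # prints 0.0
```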

**1.5 Historical Aspects of Bayesianism**

The history of Bayesianism is full of interest. In [1987b] Was Bayes a Bayesian? I argue that Bayes and Price in their classic paper of 1763 were consciously trying to solve Hume’s problem of induction. Price, and probably Bayes as well, attempted to rebut Hume’s argument concerning miracles. The relationship between Bayesianism and Hume’s argument about miracles is examined in two papers: [1989a] (with Philip Dawid) A Bayesian Analysis of Hume’s Argument Concerning Miracles, and [1991a] A Bayesian Proof of a Humean Principle.

**2. Philosophy of Artificial Intelligence (AI)**

Since 1990 I have been interested in the interaction between philosophy and the developing field of AI. Here I was lucky to be able to acquire hands-on experience of AI, by collaborating in joint projects with researchers in the field. One such project was with my brother Duncan Gillies and Enrique Sucar at Imperial College. They were at the time trying to construct an expert system for colon endoscopy. The three of us worked on trying to add probability to a previously non-probabilistic system. Our approach was to adopt an objective interpretation of probability and a corresponding testing methodology. The results, which were very satisfactory, are set out in [1993a] (with L.E. Sucar and D.F. Gillies) Objective Probabilities in Expert Systems. I also participated (1990-3) in a SERC-funded project called the Rule-Based Systems Project, with Dov Gabbay (then at Imperial College) and Stephen Muggleton (then at the Turing Institute in Glasgow) as partners. The project studied issues to do with logic, probability, and induction in the AI context. In addition to this I benefited from several discussions with Bob Kowalski, one of the founders of Logic Programming, and some of his students. The results of this research are set out in detail in my book [1996] Artificial Intelligence and Scientific Method. However, a shorter account of some of the main points is to be found in [1995a] Inaugural Lecture: Could Computers become Superior to Human Beings? The role of logic in computer science is discussed in [2002b] Logicism and the Development of Computer Science. Problems of handling uncertainty in AI and the role of probability theory in this context are discussed in [2004e] Handling Uncertainty in Artificial Intelligence, and the Bayesian Controversy.

**3. General Questions in Philosophy of Science**

Popper held the view that the two fundamental problems of the theory of knowledge were the problem of demarcation, and the problem of induction. I have worked on both of these problems.

**3.1 The Problem of Demarcation**

Popper is famous for having proposed the falsifiability criterion as the criterion of demarcation between science and metaphysics. Equally well-known, however, is the objection that this criterion is inadequate because of the Duhem-Quine thesis. My own view is that this standard objection is indeed valid and that the falsifiability criterion is inadequate. All the same, the concept of falsifiability is a useful one, and it seems to me important to preserve its valuable features. I suggest that this can be done by replacing Popper’s 3-level model (Observation Statements, Scientific Theories, Metaphysics) by a 4-level model in which Scientific Theories are split into a lower level, consisting of theories which are falsifiable, and a higher level, consisting of theories which are not falsifiable but are still scientific because they are confirmable. This makes confirmability, not falsifiability, the criterion of demarcation. Of course this means that a theory of confirmation needs to be developed, but on this see **1.4**. An exposition of this viewpoint is to be found in [1993] Ch. 10 of Philosophy of Science in the Twentieth Century. The 4-level model can also be applied in Philosophy of Mathematics (see **5.2**). These general ideas about demarcation are used in a discussion of the problem of evaluating alternative medicine in [2004f] The Problem of Demarcation and Alternative Medicine.

**3.2 The Problem of Induction**

Popper’s approach to the problem of induction is based on the claim that induction is a myth. Yet induction’s allegedly mythical status has been called into question by machine learning programs in AI which are capable of inducing generalisations from data. This objection to Popper is developed in [2004g] The Problem of Induction and Artificial Intelligence. In [2009a] Problem-Solving and the Problem of Induction, Popper’s theory of problem-solving is applied to Popper’s own treatment of the problem of induction. The result, so it is claimed, is that the traditional problem of induction is transformed into a new problem, namely that of choosing a confirmation function. A solution to this new problem is proposed which uses Neurath’s principle, applied, however, to methods rather than to theories. This approach once again gives central importance to confirmation theory (see **1.4**).

**4. Philosophy of Logic and Language**

I have always regarded Wittgenstein’s analysis of meaning in his later philosophy as basically correct, and this led me to criticize Chomsky in [1983] Chomsky’s Approach to Linguistics. I have also always accepted Quine’s denial that a distinction can be drawn between the analytic and the synthetic, and in [1985a] The Analytic/Synthetic Problem, I propose two arguments which are different from Quine’s but lead to the same conclusion. The first of these arguments is based on Wittgenstein’s later theory of meaning, while the second uses Tarski’s theory of truth. My research in AI in the 1990s brought to light a whole series of important issues in the philosophy of logic which had been created, or made more prominent, by developments in AI. These include the significance of non-monotonic logics, the view of logic as inference + control, the question of whether there can be an inductive logic, and the empiricist view of logic. Discussions of these are to be found in [1994c] A Rapprochement between Deductive and Inductive Logic, and in [1996] Chs. 4 & 5 of Artificial Intelligence and Scientific Method.

**5. Philosophy of Mathematics**

**5.1 Intuitionism, Constructivism, and Reductionism**

When I started to study the problems of intuitionism and constructivism, one thing that struck me was the curious similarities and differences between the positions of Brouwer and the later Wittgenstein. Both Brouwer and Wittgenstein criticized the law of the excluded middle, yet their underlying philosophies were completely different. Brouwer was an extreme subjectivist who thought that mathematical entities were the subjective languageless constructions of individual mathematicians, preferably working in solitude. Wittgenstein thought of mathematics as a social linguistic activity carried out by a group of mathematicians, a language game in his sense. Yet the two of them, starting from these different philosophical standpoints, seemed to agree in criticizing the law of the excluded middle. To make matters worse, the later Wittgenstein appears to contradict himself. On the one hand he is famous for maintaining that “Philosophy … leaves everything as it is. It also leaves mathematics as it is.” On the other hand he appears to reject the law of excluded middle, and this would certainly change mathematics. By contrast Brouwer was always quite clear about his wish to change mathematics. This range of problems is tackled in the following two review articles: [1980a] Brouwer’s Philosophy of Mathematics, and [1982b] Wittgenstein and Revisionism. My solution is to claim that the rejection of the law of excluded middle really is justified by Brouwer’s philosophical position, but that his philosophical position is unacceptable; while, conversely, Wittgenstein’s later philosophical position justifies an acceptance rather than a rejection of the law of excluded middle. I went on to develop an Aristotelian view of the social construction of mathematical concepts which combines Popper’s Theory of World 3 with Wittgenstein’s later theory of meaning. This is set out in [1990c] Intuitionism versus Platonism: a 20th Century Controversy concerning the Nature of Numbers.
Another view in this area is reductionism which holds that apparent references to mathematical entities can be eliminated so that we do not need to postulate the existence of mathematical objects. I discuss Chihara’s version of this position in the review article: [1992c] Do we need Mathematical Objects? My own answer to the question is that we do need mathematical objects, and that they cannot be eliminated.

**5.2 Empiricist Philosophy of Mathematics**

John Stuart Mill is famous for advocating an empiricist philosophy of mathematics. However, Frege’s incisive criticisms of Mill’s views contributed to this approach to the philosophy of mathematics being abandoned for many decades. My own interest in the empiricist approach began when, on re-studying Frege’s critique of Mill, I reached the conclusion that Frege’s arguments could be answered. This is discussed in [1982] Chs. 3 & 4 of Frege, Dedekind, and Peano on the Foundations of Arithmetic. Of course this does not mean that I want to defend Mill’s precise version of an empiricist philosophy of mathematics, which involves, for example, the idea that arithmetical laws are inductive generalisations from observed facts. My own version assimilates many mathematical laws to high-level scientific theories within the 4-level model (see **3.1**). Another important feature of the account is the claim that the demarcation between science and metaphysics runs through mathematics. Then again, while traditional empiricists have tended to favour a reductionist view of mathematical entities, I favour an Aristotelian account. An empiricist philosophy of mathematics with these features is set out in [2000a] An Empiricist Philosophy of Mathematics and its Implications for the History of Mathematics.

**5.3 Mathematical Growth and Heuristics**

Lakatos’s (1963-4) Proofs and Refutations introduced the historical approach to the philosophy of mathematics, or, to put it another way, started the discipline of history and philosophy of mathematics. The historical approach raises questions which were not considered by earlier philosophy of mathematics, for example the question of how mathematics grows and develops. One debate which arose in this area was whether there are revolutions in mathematics analogous to scientific revolutions such as the Copernican revolution, the chemical revolution, or the Einsteinian revolution. I edited a collection on this topic which was published in 1992. My introduction to the collection and my own contribution are to be found in [1992] Introduction and Ch. 14 of Revolutions in Mathematics. Another important question is about what heuristics have proved fruitful in facilitating mathematical discovery. The analysis of an important recent example of mathematical discovery (the discovery of Bayesian networks in the 1980s and early 1990s) led me to suggest three heuristics, namely: (a) the use of philosophical ideas, (b) new practical problems, and (c) domain interaction. These views are to be found in [2005b] Heuristics and Mathematical Discovery: the case of Bayesian Networks.

**5.4 Historical Development of Philosophy of Mathematics**

Developments in Germany are discussed in [1999c] German Philosophy of Mathematics from Gauss to Hilbert. This paper stresses the importance of empiricism for the work of Riemann. [2001a] (with Yuxin Zheng) Dynamic Interactions with the Philosophy of Mathematics is a paper which I wrote with Yuxin Zheng of Nanjing University. It introduces the concept of dynamic interaction which is seen as a general pattern in the growth of knowledge. This concept has roots in ancient Chinese philosophy (yin and yang), and also relates to some more recent ideas of Emily Grosholz (domain interaction). Two dynamic interactions with the philosophy of mathematics are considered. These are with (a) the philosophy of science, and (b) with computer science.

**6. Philosophy of Economics**

My work in this field was stimulated by contact with the Post-Keynesians. This school began in the 1980s and its members devoted a great deal of study to the development of Keynes’s ideas. They thought that the standard textbook version of Keynesianism was inadequate because it ignored the role of uncertainty and probability in Keynes’s own economic thinking. I had been studying Keynes’s views on probability as part of the philosophy of probability, and so an interaction with the Post-Keynesians was very natural. My own views about the role of probability and uncertainty in Keynes’s mature economics are contained in [2003c] Probability and Uncertainty in Keynes’s The General Theory. Keynes, though originally trained as a mathematician, always held that economics is not a suitable field for the application of mathematics. I defend this Keynesian thesis in [2004d] Can Mathematics be used successfully in Economics? This topic should also be of interest to philosophers of mathematics, who ponder the problem, formulated by Wigner in 1960, of ‘the unreasonable effectiveness of mathematics in the natural sciences’. Mathematics is indeed very effective in physics, but, if we want to understand why, we should compare physics to an area in which mathematics is not effective. Such an area is, so I would claim, economics.

**7. Philosophy of Science for Medicine**

During most of the 1990s my research was mainly focussed on AI. However, around 2000, I began to take an interest in medicine. This partly arose from my previous interests, since one of the main applications of AI consisted of attempts to automate medical diagnosis. However, there was a practical reason as well. I had begun to teach philosophy of science to medical students, and I discovered that they, naturally enough, preferred to have the philosophical ideas illustrated by examples from medicine. Unfortunately not many such examples were available in the philosophy of science literature, since medicine had been largely ignored by the majority of philosophers of science in the twentieth century, who studied mainly physics and chemistry. It occurred to me that many of the theories in philosophy of science would need to be modified and developed in order to apply to medicine, so that the study of medicine might lead to new results in the philosophy of science. One problem in the philosophy of science which could benefit from this study is the analysis of causality, since causality is a central concept in medicine, where it plays a more important role than it does in physics. For this, see **8**. I have also considered how Kuhn’s ideas might be modified and developed by analysing examples from medicine. In [2005a] Hempelian and Kuhnian Approaches in the Philosophy of Medicine: the Semmelweis Case, I try to use some of Kuhn’s ideas to explain why Semmelweis found such difficulty in getting his antiseptic precautions accepted. In [2006c] Kuhn on Discovery and the Case of Penicillin, I discuss how Kuhn’s theory of scientific discovery might need to be modified to deal with the discovery of penicillin. For an application of ideas about demarcation to alternative medicine, see **3.1**.

**8. Causality and Bayesian Networks**

Among the various analyses of causality, my preference is for the agency or manipulability approach, and I have developed a particular version of this, which I called an action-related theory of causality. An account of this is to be found in [2005c] An Action-Related Theory of Causality. This paper deals mainly with determinate causality, but I also accept indeterminate causality which raises the question of relating causality to probability. This, I think, should be attempted in the context of Bayesian networks. However, Pearl, in his development of the theory of Bayesian networks, has always interpreted the probabilities involved in the subjective, degree of belief, sense. My own view is that these probabilities are in many cases better interpreted as propensities. Causality cannot, in my opinion, be defined in terms of propensities because of Humphreys’ paradox, but there is still an interesting relationship between causality and propensity. These themes are explored in [2002c] Causality, Propensity, and Bayesian Networks.
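As background on Bayesian networks themselves: a network encodes a joint distribution as a product of local conditional probabilities, one per node given its parents, and this factorisation holds whichever interpretation (subjective or propensity) the probabilities are given. Below is a minimal sketch for a three-node chain, with hypothetical numbers chosen only to make the arithmetic concrete.

```python
from itertools import product

# Chain network: Cause -> Effect1 -> Effect2, each variable True/False.
# All numbers are illustrative, not drawn from any of the papers.
p_cause = {True: 0.3, False: 0.7}
p_e1_given_cause = {True: {True: 0.9, False: 0.1},
                    False: {True: 0.2, False: 0.8}}
p_e2_given_e1 = {True: {True: 0.7, False: 0.3},
                 False: {True: 0.1, False: 0.9}}

def joint(c, e1, e2):
    """Factorised joint: P(c, e1, e2) = P(c) * P(e1 | c) * P(e2 | e1)."""
    return p_cause[c] * p_e1_given_cause[c][e1] * p_e2_given_e1[e1][e2]

# The factorisation defines a genuine distribution: the eight joint
# probabilities sum to 1 (up to floating point rounding).
total = sum(joint(c, e1, e2) for c, e1, e2 in product([True, False], repeat=3))

# Marginal of the final effect, obtained by summing out the other variables:
p_e2_true = sum(joint(c, e1, True) for c, e1 in product([True, False], repeat=2))
print(round(p_e2_true, 3))  # prints 0.346
```

With these numbers, P(Effect1 = True) = 0.3 × 0.9 + 0.7 × 0.2 = 0.41, and hence P(Effect2 = True) = 0.41 × 0.7 + 0.59 × 0.1 = 0.346.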

**9. Research Assessment and Organisation**

The Research Assessment Exercise (or RAE) was introduced into the UK by Thatcher in 1986. Having observed it in operation for many years, I became increasingly sceptical about whether it was doing any good. However, I did not think of writing a criticism of the RAE until around 2004, when it occurred to me that many of the ideas which I was studying in the history and philosophy of science were highly relevant to an evaluation of the RAE. Perhaps in retrospect this is rather obvious. The history of science gives an abundance of examples of situations in which research flourishes, and of other situations in which it languishes. Moreover the philosophy of science provides some theoretical explanation of why these variations occur. Applying some results from the history and philosophy of science (where science is taken to include computer science, medicine and mathematics as well as the natural sciences), I reached the conclusion that the net effect of the RAE is likely to be to reduce the quality of the research produced in the UK. The arguments for this conclusion are set out in [2007a] Lessons from the History and Philosophy of Science regarding the Research Assessment Exercise. If the RAE is no good, however, the question naturally arises of what alternative system should be put in its place. I give an outline of an alternative system, which I argue would be much cheaper and produce better results, in [2009b] How Should Research be Organised? An Alternative to the UK Research Assessment Exercise. These two papers have been developed, with some additional material, into a short book: *How Should Research be Organised?* College Publications, 2008. This was published to coincide with the announcement of the results of the RAE held in the UK in 2008.