Theoretical Virtues in Science: Uncovering Reality Through Theory
This book is under contract with Cambridge University Press (since April 18, 2017).
This book offers an in-depth discussion of the good-making features of scientific theories and draws consequences for the realism debate. Theoretical virtues, as these features are also called, include testability, empirical accuracy, consistency, unifying power, simplicity, and fertility. Theoretical virtues play an important role in theory-choice, as they guide scientists in their decisions to adopt certain theories rather than others. Theoretical virtues are also important for what we choose to believe: only if the theories we possess are good ones can we be confident that our theories’ claims about nature are actually correct. By means of historical case studies, this book challenges parts of the received view of theoretical virtues and, on the basis of a reconsidered view, advances arguments for the belief that science successfully uncovers reality through theory.
Table of contents:
1. Theoretical virtues, truth, and the argument from simplicity
2. Pessimism, base rates, and the no-virtue-coincidence argument
3. Novel success and predictivism
4. Theoretical fertility without novel success
5. Ad hoc hypotheses and the argument from coherence
6. Theoretical virtues as confidence boosters and the argument from choice
7. Philosophy of science by historical means
Epilogue: the demarcation problem
Chapter 1: Theoretical virtues, truth, and the argument from simplicity
This chapter reviews the standard virtues of scientific theories and introduces the scientific realism debate with a particular focus on simplicity. The chapter argues that the multi-facetedness and context-dependence of simplicity, contrary to the received view, do not undermine the potential of simplicity considerations to arbitrate theory-choice. The chapter furthermore proposes that simplicity considerations can be justified via the ‘evidential-explanatory rationale’, which compels us to choose those theories whose postulated entities or principles are empirically supported in the explanation of the target phenomena. It is argued that the evidential-explanatory rationale for simplicity offers an argument for realism: since simpler theories are better supported empirically than complex ones (explaining the same evidence), simplicity, contrary to what the antirealist holds, is an epistemic concern. This is the ‘argument from simplicity’.
Chapter 2: Pessimism, base rates, and the no-virtue-coincidence argument
This chapter assesses the strength of two prominent arguments against realism, namely the Pessimistic Meta Induction and the (related) Problem of Unconceived Alternatives. The chapter concludes that the latter does not pose a threat that is significantly distinct from the former. The chapter then argues that the more fundamental concern for realists and antirealists alike must be the so-called base-rate fallacy, highlighted in particular by Magnus and Callender. On the basis of the Kuhnian framework of theory-choice and an epistemological insight by Earman, the chapter advances a new argument for realism. This ‘no-virtue-coincidence argument’ for realism shows that a theory that possesses all of the standard virtues and is embraced by numerous scientists is likely to be true.
Chapter 3: Novel success and predictivism
Many philosophers of science believe that a theory’s successful prediction of the phenomena, also known as novel success, is a more impressive kind of empirical success than a theory’s accommodation of already known phenomena. This chapter argues that none of the proposed rationales for predictivism, as this view has also been called, is convincing. More specifically, standard notions of temporal novelty, use-novelty, novelty as parameter fixing, comparative novelty, and others are all rejected as problematic. Given that realists usually base their commitment on whether or not a theory manages to produce novel success, it is argued, standard realist responses to the Pessimistic Meta Induction, such as a reduction of Laudan’s list and the realist’s divide et impera move, are at risk of being undermined.
Chapter 4: Theoretical fertility without novel success
This chapter considers a form of theoretical fertility that has not received much attention from philosophers: a theory’s capacity to accommodate anomalies in a non-ad hoc fashion. Contrary to what has been claimed in the literature, this kind of fertility—referred to in this chapter as M-fertility—cannot be reduced to novel success; indeed, it is incompatible with it. On the basis of a detailed discussion of Bohr’s model of the atom, this chapter also argues that M-fertility should not be construed as originating from a de-idealisation of the theory in question—contrary to McMullin, the main proponent of M-fertility. Whatever the causes of M-fertility might be, however, it is clearly virtuous: it is desirable for a theory to be able to accommodate the phenomena in a non-ad hoc fashion.
Chapter 5: Ad hoc hypotheses and the argument from coherence
It is widely agreed that ad hoc hypotheses are methodologically undesirable; they decrease the degree of confirmation of the theory for which they are invoked. But how are ad hoc hypotheses to be understood? This chapter offers an epistemological analysis of the notion of ad hocness. Although the notion seems straightforward, a descriptively adequate account of ad hocness, which goes beyond stating the motivation for introducing ad hoc hypotheses, is not easy to come by. Accounts that spell out ad hocness as the lack of testability, as the lack of independent support, as the lack of unifiedness, or as mere subjective projection are all unsatisfactory. Instead, this chapter proposes that ad hocness has to do with the lack of coherence between the hypothesis in question and (i) the theory which the hypothesis is supposed to save from refutation or (ii) the background theories at the time. This ‘coherentist conception of ad hocness’ offers another argument for realism, namely the ‘argument from coherence’.
Chapter 6: Theoretical virtues as confidence boosters and the argument from choice
Scientific experiments often produce conflicting data. How do scientists deal with data conflicts when they want to arbitrate between theories on the basis of those data? On the basis of a number of case studies, it is shown in this chapter that theoretical virtues can boost scientists’ confidence in viewing data as reliable or unreliable. The chapter argues that such cases are evidence against the empiricist’s Negative View, according to which theoretical virtues are not epistemic, but only pragmatic criteria in theory choice. More specifically, the chapter argues that a rational rendering of scientists’ theory-choices requires theoretical virtues to be epistemic criteria of theory choice. This constitutes the ‘argument from choice’ for scientific realism.
Chapter 7: Philosophy of science by historical means
Although the methodological approach of using the history of science in philosophical theorizing about science has a long tradition, a central issue has hitherto not been addressed in a satisfying fashion: how can facts about science ground philosophical norms about science? This chapter criticizes two solutions proposed by Laudan. It then goes on to argue that in one viable approach within the history of science, namely the “Kuhnian mode of HPS”, historical facts may motivate philosophical norms about science without having to serve as justifiers for them. The chapter also advocates Lakatos’ idea of maximizing the number of historical facts that can be explained in a rational way (without distorting them). Finally, the chapter rejects the view that philosophical norms must always be categorical and suggests that ceteris paribus norms may be more appropriate in many contexts. The second part of the chapter argues that another fruitful role for the history and philosophy of science is concept clarification: although descriptive, such clarification is also normative, as explicating conditions which are normally left implicit can help us arbitrate between correct and incorrect uses of a concept.
Epilogue: the demarcation problem
The epilogue of this book focuses on the demarcation problem, viz. the problem of distinguishing science from non-science. It is argued that popular solutions and resolutions are unsatisfying. In particular, (i) Popper’s falsifiability is both too strong and too weak to serve as a successful demarcation criterion, (ii) Laudan’s deflationary approach inherits the problems of Popper’s account and additionally lacks normative force, and (iii) applications of the Wittgensteinian idea of ‘family resemblance’ to the usage of the term ‘science’ have substantial weaknesses. Instead, it is argued that science is to be delineated from non-science via a paradigm or ‘basic predicate’. Although there are properties that are necessary and sufficient for being a science on this account, the solution nevertheless satisfies the Wittgensteinian sentiment that the sciences are diverse and that there may not be many features that all sciences share.