ARTICLES AND BOOK CHAPTERS
Causation and the Time-Asymmetry of Knowledge
Australasian Journal of Philosophy, forthcoming.
Abstract: This paper argues that the knowledge asymmetry (the fact that we know more about the past than the future) can be explained as a consequence of the causal Markov condition. The causal Markov condition implies that causes of a common effect are generally statistically independent, whereas effects of a common cause are generally correlated. I show that together with certain facts about the physics of our world, the statistical independence of causes severely limits our ability to predict the future, whereas correlations between joint effects make it so that no such limitation holds in the reverse temporal direction. Insofar as the fact that our world conforms to the causal Markov condition can itself be explained in terms of the initial conditions of the universe, my view is compatible with Albert’s well-known account of the origins of temporal asymmetries, but also provides a more illuminating way to derive the knowledge asymmetry from those initial conditions.
Specificity of Association in Epidemiology
Synthese, forthcoming.
Abstract: The epidemiologist Bradford Hill famously argued that in epidemiology, specificity of association (roughly, the fact that an environmental or behavioral risk factor is associated with just one or at most a few medical outcomes) is strong evidence of causation. Prominent epidemiologists have dismissed Hill’s claim on the ground that it relies on a dubious ‘one cause, one effect’ model of disease causation. The paper examines this methodological controversy and argues that specificity considerations do have a useful role to play in causal inference in epidemiology. More precisely, I argue that specificity considerations help solve a pervasive inferential problem in contemporary epidemiology: the problem of determining whether an exposure-outcome correlation might be due to confounding by a social factor. This examination of specificity has interesting consequences for our understanding of the methodology of epidemiology. It highlights how the methodology of epidemiology relies on local tools designed to address specific inference problems peculiar to the discipline, and shows that observational causal inference in epidemiology can proceed with little prior knowledge of the causal structure of the phenomenon investigated. I also argue that specificity of association cannot (despite claims to the contrary) be entirely explained in terms of Woodward’s well-known concept of “one-to-one” causal specificity. This is because specificity as understood by epidemiologists depends on whether an exposure (or outcome) is associated with a ‘heterogeneous’ set of variables. This dimension of heterogeneity is not captured in Woodward’s notion, but is crucial for understanding the evidential import of specificity of association.
Host Specificity in Biological Control
British Journal for the Philosophy of Science, forthcoming.
Abstract: In recent years the notion of biological specificity has attracted significant philosophical attention. This paper focuses on host specificity, a kind of biological specificity that has not yet been discussed by philosophers, and which concerns the extent to which a species is selective in the range of other species it exploits for feeding and/or reproduction. Host specificity is an important notion in ecology, where it plays a variety of theoretical roles. Here I focus on the role of host specificity in biological control, a field of applied ecology that deals with the suppression of pests through the use of living organisms. Examining host specificity and its role in biological control yields several valuable contributions to our understanding of biological specificity. In particular, I argue that host specificity cannot be fully understood in terms of Woodward’s well-known account of causal specificity. To adequately account for host specificity, we need a notion of causal specificity that takes into consideration the extent to which a variable’s effects are similar to one another – a dimension not captured in Woodward’s account. In addition, the literature on host specificity in biological control highlights certain aspects in which causally specific relationships can be practically valuable that have not yet been addressed in philosophical discussions of specificity. That literature also reveals that in certain contexts specificity can hinder rather than foster effective control, thus leading to a nuanced assessment of the practical value of specific causes.
Experiments on Causal Exclusion (with Dylan Murray and Tania Lombrozo)
Mind and Language, 37 (5), pp. 1067-1089, 2022.
Abstract: Intuitions play an important role in the debate on the causal status of high-level properties. For instance, Kim has claimed that his “exclusion argument” relies on “a perfectly intuitive … understanding of the causal relation”. We report the results of three experiments examining whether laypeople really have the relevant intuitions. We find little support for Kim’s view and the principles on which it relies. Instead, we find that laypeople are willing to count both a multiply realized property and its realizers as causes, and regard the systematic overdetermination implied by this view as unproblematic.
Explanatory Abstraction and the Goldilocks Problem: Interventionism Gets Things Just Right
British Journal for the Philosophy of Science, 71 (2), pp. 633-663, 2020.
Abstract: Theories of explanation need to account for a puzzling feature of our explanatory practices: the fact that we prefer explanations that are relatively abstract but only moderately so. I argue (against Franklin-Hall) that the interventionist account of explanation provides a natural and elegant explanation of this fact. By striking the right balance between specificity and generality, moderately abstract explanations optimally subserve what interventionists regard as the goal of explanation, namely identifying possible interventions that would have changed the explanandum.
Bayesianism and Explanatory Unification: A Compatibilist Account
Philosophy of Science, 85(4), pp. 682-703, 2018.
Abstract: Proponents of inference to the best explanation (IBE) claim that the ability of a hypothesis to explain a range of phenomena in a unifying way contributes to the hypothesis’s credibility in light of these phenomena. I propose a Bayesian justification of this claim that reveals a hitherto unnoticed role for explanatory unification in evaluating the plausibility of a hypothesis: considerations of explanatory unification enter into the determination of a hypothesis’s prior by affecting its ‘explanatory coherence’, that is, the extent to which the hypothesis offers mutually cohesive explanations of various phenomena.
Stability, Breadth and Guidance (with Nadya Vasilyeva and Tania Lombrozo)
Philosophical Studies, 175(9), pp. 2263-83, 2018.
Abstract: Much recent work on explanation in the interventionist tradition emphasizes the explanatory value of stable causal generalizations – causal generalizations that remain true in a wide range of background circumstances. We argue that two separate explanatory virtues are lumped together under the heading of ‘stability’. We call these two virtues breadth and guidance, respectively. In our view, these two virtues are importantly distinct, but this fact is neglected or at least under-appreciated in the literature on stability. We argue that an adequate theory of explanatory goodness should recognize breadth and guidance as distinct virtues, as breadth and guidance track different ideals of explanation, satisfy different cognitive and pragmatic ends, and play different theoretical roles in (for example) helping us understand the explanatory value of mechanisms. Thus keeping track of the distinction between these two forms of stability yields a more accurate and perspicuous picture of the role that stability considerations play in explanation.
Stable Causal Relationships Are Better Causal Relationships (with Nadya Vasilyeva and Tania Lombrozo)
Cognitive Science, 42(4), pp. 1265-98, 2018.
Abstract: We report three experiments investigating whether people's judgments about causal relationships are sensitive to the robustness or stability of such relationships across a range of background circumstances. In Experiment 1, we demonstrate that people are more willing to endorse causal and explanatory claims based on stable (as opposed to unstable) relationships, even when the overall causal strength of the relationship is held constant. In Experiment 2, we show that this effect is not driven by a causal generalization's actual scope of application. In Experiment 3, we offer evidence that stable causal relationships may be seen as better guides to action. Collectively, these experiments document a previously underappreciated factor that shapes people's causal reasoning: the stability of the causal relationship.
Bayesian Occam's Razor is a Razor of the People (with Tania Lombrozo and Shaun Nichols)
Cognitive Science, 42(4), pp. 1345-59, 2018.
Abstract: Occam's razor—the idea that all else being equal, we should pick the simpler hypothesis—plays a prominent role in ordinary and scientific inference. But why are simpler hypotheses better? One attractive hypothesis known as Bayesian Occam's razor (BOR) is that more complex hypotheses tend to be more flexible—they can accommodate a wider range of possible data—and that flexibility is automatically penalized by Bayesian inference. In two experiments, we provide evidence that people's intuitive probabilistic and explanatory judgments follow the prescriptions of BOR. In particular, people's judgments are consistent with the two most distinctive characteristics of BOR: They penalize hypotheses not only as a function of their number of free parameters but also as a function of the size of their parameter space, and they penalize those hypotheses even when their parameters can be “tuned” to fit the data better than comparatively simpler hypotheses.
Cause without Default (with Jonathan Schaffer)
In Beebee, H., Hitchcock, C., & Price, H. (Eds.) Making a Difference. Oxford University Press, pp. 175-214, 2017.
Abstract: Menzies (2004, 2007), Hitchcock (2007), Hall (2007), and Halpern (2008) have argued that standard causal models must be supplemented with a distinction between default and deviant events. We critically evaluate this proposal. We grant that the notions of ‘default’ and ‘deviant’ influence causal judgement, but we claim that this influence is best understood as arising through a general cognitive bias concerning the availability of alternatives. We also argue that arguments for incorporating a default-deviant distinction in causal models reveal that more attention is needed concerning what counts as an apt causal model.
Physics and Causation
Philosophy Compass, 11, pp. 256-66, 2016.
Abstract: More than a century ago, Russell launched a forceful attack on causation, arguing not only that modern physics has no need for causal notions but also that our belief in causation is a relic of a pre‐scientific view of the world. He thereby initiated a debate about the relations between physics and causation that remains very much alive today. While virtually everybody nowadays rejects Russell's causal eliminativism, many philosophers (although by no means all) have been convinced by Russell that the fundamental physical structure of our world doesn't contain causal relations. This raises the question of how to reconcile the central role of causal concepts in the special sciences and in common sense with the putative absence of causation in fundamental physics.
Default Knowledge, Time Pressure, and the Theory-Theory of Concepts (Commentary on Edouard Machery's Doing without Concepts)
Behavioral and Brain Sciences, 33(2-3), pp. 206-7, 2010.
Abstract: I raise two issues for Machery's discussion and interpretation of the theory-theory. First, I raise an objection against Machery's claim that theory-theorists take theories to be default bodies of knowledge. Second, I argue that theory-theorists' experimental results do not support Machery's contention that default bodies of knowledge include theories used in their own proprietary kind of categorization process.
OTHER PUBLICATIONS
La causalité (Causation)
Encyclopédie Philosophique, 2018.
Review of Jenann Ismael, How Physics Makes Us Free (Oxford University Press).
Journal of Philosophy, 114(3), pp. 60-4, 2017.
Review of Matthias Frisch, Causal Reasoning in Physics (Cambridge University Press).
Notre Dame Philosophical Reviews, 2015.
Review of Douglas Kutach, Causation and its Basis in Fundamental Physics (Oxford University Press).
Philosophy of Science, 82(2), pp. 330-333, 2015.
Bibliography on Social Epistemology (with Alvin Goldman)
Oxford Online Bibliographies, 2012.